In physics, sound is a vibration that typically propagates as an audible wave of pressure through a transmission medium such as a gas, liquid or solid.
In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Humans can hear sound waves with frequencies between about 20 Hz and 20 kHz. Sound above 20 kHz is ultrasound and below 20 Hz is infrasound. Other animals have different hearing ranges.
Acoustics is the interdisciplinary science that deals with the study of mechanical waves in gases, liquids, and solids including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustical engineering may be called an acoustical engineer. An audio engineer, on the other hand, is concerned with the recording, manipulation, mixing, and reproduction of sound.
Applications of acoustics are found in almost all aspects of modern society. Subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electro-acoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration.
Sound is defined as "(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a)."
Perception of sound
A distinct use of the term sound from its use in physics is that in physiology and psychology, where the term refers to the subject of perception by the brain. The field of psychoacoustics is dedicated to such studies. Historically the word "sound" referred exclusively to an effect in the mind. Webster's 1947 dictionary defined sound as: "that which is heard; the effect which is produced by the vibration of a body affecting the ear." This meant (at least in 1947) the correct response to the question: "if a tree falls in the forest with no one to hear it fall, does it make a sound?" was "no". However, owing to contemporary usage, definitions of sound as a physical effect are prevalent in most dictionaries. Consequently, the answer to the same question (see above) is now "yes, a tree falling in the forest with no one to hear it fall does make a sound".
The physical reception of sound in any hearing organism is limited to a range of frequencies. Humans normally hear sound frequencies between approximately 20 Hz and 20,000 Hz (20 kHz); the upper limit decreases with age. Sometimes sound refers only to those vibrations with frequencies within the hearing range of humans, or sometimes of a particular animal. Other species have different ranges of hearing. For example, dogs can perceive vibrations higher than 20 kHz, but are deaf below 40 Hz.
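The frequency boundaries described above can be captured in a small sketch (the thresholds are the approximate human limits from the text; the function name is illustrative):

```python
def classify_frequency(hz):
    """Classify a frequency relative to the approximate human hearing range."""
    if hz < 20:
        return "infrasound"   # below the ~20 Hz lower limit
    if hz > 20_000:
        return "ultrasound"   # above the ~20 kHz upper limit
    return "audible"          # within the nominal human range

# A ~35 kHz dog whistle is ultrasound to humans but audible to dogs.
print(classify_frequency(440))     # audible (concert pitch A4)
print(classify_frequency(35_000))  # ultrasound
```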
As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound.
Digital audio is a technology that can be used for sound recording and reproduction using audio signals that have been encoded in digital form. Following significant advances in digital audio technology during the 1970s, it gradually replaced analog audio technology in many areas of audio engineering and telecommunications in the 1990s and 2000s.
In a digital audio system, a microphone converts sound to an analog electrical signal, then an analog-to-digital converter (ADC)—typically using pulse-code modulation (PCM)—converts the analog signal into a digital signal. This digital signal can then be recorded, edited, and modified using digital audio tools. When the sound engineer wishes to listen to the recording on headphones or loudspeakers (or when a consumer wishes to listen to a digital sound file of a song), a digital-to-analog converter (DAC) performs the reverse process, converting the digital signal back into an analog signal, which is then fed through an audio power amplifier and sent to a loudspeaker.
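The ADC stage can be illustrated by sampling and quantizing a waveform into PCM values. This is a minimal sketch with arbitrary illustrative parameters, not a model of any particular converter:

```python
import math

SAMPLE_RATE = 8_000   # samples per second (chosen for illustration)
BIT_DEPTH = 16        # bits per sample, as in CD-quality PCM

def adc_sample(signal, n_samples, rate=SAMPLE_RATE, bits=BIT_DEPTH):
    """Sample a continuous signal (a function of time in seconds) and
    quantize each sample to a signed integer, as a PCM ADC does."""
    max_code = 2 ** (bits - 1) - 1
    return [round(signal(n / rate) * max_code) for n in range(n_samples)]

# A 1 kHz sine tone at half amplitude, captured for a few samples.
tone = lambda t: 0.5 * math.sin(2 * math.pi * 1000 * t)
pcm = adc_sample(tone, 8)
print(pcm)  # a short list of signed 16-bit sample values
```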
Digital audio systems may include compression, storage, processing, and transmission components. Conversion to a digital format allows convenient manipulation, storage, transmission, and retrieval of an audio signal. Unlike analog audio, in which making copies of a recording results in generation loss (a degradation of the signal quality), digital audio allows an unlimited number of copies to be made without any degradation of signal quality.
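The absence of generation loss can be demonstrated directly: copying a sequence of sample values is exact, whereas each analog generation adds noise. The Gaussian noise below is a simplified illustrative model, not a physical one:

```python
import random

samples = [random.randint(-32768, 32767) for _ in range(1000)]  # a "recording"

# Digital copying: every generation is bit-for-bit identical.
digital_copy = list(samples)
for _ in range(100):                 # 100 generations of copying
    digital_copy = list(digital_copy)
print(digital_copy == samples)       # True: no generation loss

# Analog copying (simplified model): each generation adds a little noise.
analog_copy = samples[:]
for _ in range(100):
    analog_copy = [s + random.gauss(0, 50) for s in analog_copy]
print(analog_copy == samples)        # False (noise has accumulated)
```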
Audio editing software is software that allows editing and generating audio data. It can be implemented completely or partly as a library, a desktop application, a web application, or a loadable kernel module. Wave editors are digital audio editors, and many software packages are available to perform this function. Most can edit music, apply effects and filters, and adjust stereo channels.
A digital audio workstation (DAW) is largely software-based and is usually composed of many distinct software components, made accessible through a unified graphical user interface built with GTK+, Qt, or another GUI widget library.
In digital recording, audio signals picked up by a microphone or other transducer or video signals picked up by a camera or similar device are converted into a stream of discrete numbers, representing the changes over time in air pressure for audio, and chroma and luminance values for video, then recorded to a storage device. To play back a digital sound recording, the numbers are retrieved and converted back into their original analog waveforms so that they can be heard through a loudspeaker. To play back a digital video recording, the numbers are retrieved and converted back into their original analog waveforms so that they can be viewed on a video monitor, television or other display.
A software synthesizer, also known as a softsynth, is a computer program, or plug-in that generates digital audio, usually for music. Computer software that can create sounds or music is not new, but advances in processing speed are allowing softsynths to accomplish the same tasks that previously required dedicated hardware. Softsynths are usually cheaper and more portable than dedicated hardware, and easier to interface with other music software such as music sequencers.
Types of software synthesizers
Softsynths can cover a range of synthesis methods, including subtractive synthesis (including analog modeling, a subtype), FM synthesis (including the similar phase distortion synthesis), physical modelling synthesis, additive synthesis (including the related resynthesis), and sample-based synthesis.
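Of the methods listed, additive synthesis is the simplest to sketch: a tone is built by summing sine-wave partials. The partial amplitudes below are arbitrary illustrative choices:

```python
import math

def additive_tone(freq, partials, t, rate=44_100):
    """One sample of an additively synthesized tone at sample index t.

    `partials` maps harmonic number -> amplitude; the output is the sum
    of the corresponding sine waves, the core idea of additive synthesis."""
    time = t / rate
    return sum(amp * math.sin(2 * math.pi * freq * n * time)
               for n, amp in partials.items())

# An organ-like tone: strong fundamental plus weaker 2nd and 3rd harmonics.
partials = {1: 1.0, 2: 0.5, 3: 0.25}
wave_samples = [additive_tone(220, partials, t) for t in range(64)]
print(len(wave_samples))  # 64
```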
Many popular hardware synthesizers are no longer manufactured, but have been emulated in software. The emulation can even extend to having graphics that model the exact placements of the original hardware controls. Some simulators can even import the original sound patches with accuracy that is nearly indistinguishable from the original synthesizer. Popular synthesizers such as the Minimoog, Yamaha DX7, Korg M1, Prophet-5, Oberheim OB-X, Roland Jupiter 8, ARP 2600 and dozens of other classics have been recreated in software.
Some softsynths are heavily sample-based, and frequently have more capability than hardware units, since computers have fewer restrictions on memory than dedicated hardware synthesizers. Some of these sample-based synthesizers come with sample libraries many gigabytes in size. Some are specifically designed to mimic real-world instruments such as pianos. Many sample libraries are available in common formats such as WAV or SoundFont (.sf2), and can be used with almost any sample-based softsynth.
The major downside of softsynths is often increased latency, the delay between playing a note and hearing the corresponding sound. Decreasing latency increases the demand on the computer's processor. When the soft synthesizer is running as a plug-in for a host sequencer, both the soft synth and the sequencer compete for processor time. Multi-processor computers can handle this better than single-processor computers. As the processor becomes overloaded, sonic artifacts such as "clicks" and "pops" can be heard during performance or playback. When the processor becomes completely overloaded, the host sequencer or computer can lock up or crash. Increasing the buffer size helps, but also increases latency. However, modern professional audio interfaces can frequently operate with very low latency, so in recent years this has become much less of a problem than in the early days of computer music.
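The buffer-size trade-off is a simple calculation: the minimum latency added by one buffer is its length in samples divided by the sample rate.

```python
def buffer_latency_ms(buffer_samples, sample_rate=44_100):
    """Minimum added latency, in milliseconds, for one audio buffer."""
    return 1000 * buffer_samples / sample_rate

# Enlarging the buffer reduces the risk of underruns ("clicks" and "pops")
# but lengthens the delay between playing a note and hearing it.
for size in (64, 256, 1024):
    print(size, round(buffer_latency_ms(size), 2), "ms")
```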
It is also possible to generate sound files offline, meaning sound generation does not have to be in real time, or live. For example, the input could be a MIDI file and the output could be a WAV file or an MP3 file. Playing a WAV or MP3 file simply means playing a precalculated waveform. The advantage of offline synthesis is that the software can spend as much time as it needs to generate the resulting sounds, potentially increasing sound quality. It could take 30 seconds of computing time to generate 1 second of real-time sound, for example. The disadvantage is that changes to the music specifications cannot be heard immediately.
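Offline rendering can be sketched with Python's standard-library `wave` module: the program computes every sample first, taking as long as it needs, then writes a WAV file holding the precalculated waveform. The tone, amplitude, and duration here are arbitrary illustrative choices:

```python
import math
import struct
import wave

RATE = 44_100  # samples per second

def render_tone_wav(path, freq=440.0, seconds=1.0):
    """Render a sine tone offline to a 16-bit mono WAV file."""
    n = int(RATE * seconds)
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * freq * t / RATE)))
        for t in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(frames)

render_tone_wav("tone.wav")  # the file now holds a precalculated waveform
```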
Often a composer or virtual conductor will want a "draft mode" for initial score editing, and then use the "production mode" to generate high-quality sound as one gets closer to the final version. The draft mode allows for quicker turn-around, perhaps in real time, but will not have the full quality of the production mode. The draft render is roughly analogous to a wire-frame or "big polygon" animation when creating 3D animation or CGI. Both are based on the trade-off between quality and turn-around time for reviewing drafts and changes.
Sound recording and reproduction is an electrical, mechanical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording. Prior to the development of sound recording, there were mechanical systems for encoding and reproducing instrumental music, such as wind-up music boxes and, later, player pianos.
Acoustic analog recording is achieved by a microphone diaphragm that can detect and sense the changes in atmospheric pressure caused by acoustic sound waves and record them as a mechanical representation of the sound waves on a medium such as a phonograph record (in which a stylus cuts grooves on a record). In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is then converted to a varying magnetic field by an electromagnet, which makes a representation of the sound as magnetized areas on a plastic tape with a magnetic coating on it. Analog sound reproduction is the reverse process, with a bigger loudspeaker diaphragm causing changes to atmospheric pressure to form acoustic sound waves. Oscillations may also be recorded directly from devices such as an electric guitar pickup or a synthesizer, without the use of acoustics in the recording process, other than the need for musicians to hear how well they are playing during recording sessions via headphones.
Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of digitization. This lets the audio data be stored and transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers (zeros and ones) representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. Digital recordings are considered higher quality than analog recordings not necessarily because they have higher fidelity (wider frequency response or dynamic range), but because the digital format can prevent much loss of quality found in analog recording due to noise and electromagnetic interference in playback and mechanical deterioration or damage to the storage medium. Whereas successive copies of an analog recording tend to degrade in quality, as more noise is added, a digital audio recording can be reproduced endlessly with no degradation in sound quality. A digital audio signal must be reconverted to analog form during playback before it is amplified and connected to a loudspeaker to produce sound.
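The "sample rate high enough" condition above is the Nyquist criterion: the sampling rate must be at least twice the highest frequency to be conveyed. A one-line sketch:

```python
def min_sample_rate(max_freq_hz):
    """Nyquist criterion: sampling must run at at least twice the highest
    frequency present, or that frequency cannot be captured."""
    return 2 * max_freq_hz

# To convey the full ~20 kHz human hearing range, a recording needs at
# least a 40 kHz sample rate; CD audio uses 44.1 kHz for some headroom.
print(min_sample_rate(20_000))  # 40000
```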
This page was last updated January 6th, 2018 by kim