How does digital audio editing work?

Digital audio editing applies the techniques of digital signal processing. Using the Fast Fourier Transform (FFT), the audio signal can be transformed section by section from the time domain into the frequency domain (the spectrum). In the frequency domain, various operations can then be carried out, such as filtering, removing noise, or removing crackle. Finally, the signal is transformed back into the time domain, again section by section. To many readers, the terms "frequency domain" and "spectrum" may be confusing at first. However, every music fan already thinks "in spectral terms" whenever they talk about bass and treble or compare the frequency response of different hi-fi components. Frequency response in particular is a regular topic of discussion, since it is a fundamental quality criterion in analog audio technology, even though hardly anyone has ever actually looked at a frequency response curve or a spectrum. WavePurity now makes it visible for you, as the spectrum shown below demonstrates.
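
Before looking at that example, here is a rough sketch of the section-by-section round trip just described. This is a minimal Python/NumPy illustration, not WavePurity's actual code; the block size of 4096 samples and the lack of overlapping windows are simplifying assumptions:

```python
import numpy as np

def process_blocks(samples, block_size=4096, modify=lambda spectrum: spectrum):
    """Transform each block to the frequency domain, modify it, transform back."""
    out = np.array(samples, dtype=float, copy=True)
    for start in range(0, len(out) - block_size + 1, block_size):
        block = out[start:start + block_size]
        spectrum = np.fft.rfft(block)        # time domain -> frequency domain
        spectrum = modify(spectrum)          # filtering, de-noising, de-clicking ...
        out[start:start + block_size] = np.fft.irfft(spectrum, n=block_size)  # back to time domain
    return out
```

A real editor would additionally process overlapping, windowed blocks so that no audible seams appear at the block boundaries.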

This spectrum, taken from a music track broadcast on FM radio, shows the limitation of the frequency response to about 15 kHz. The stereo pilot tone at 19 kHz is also easy to identify. (The spurious line at 17.5 kHz is caused by my sound card.) This pilot tone is what makes your tuner switch on its stereo indicator. Strictly speaking, the pilot tone no longer belongs in the audio signal at this point, but in analog technology it is very difficult to build a filter that shows no attenuation at 16 kHz yet full attenuation at 19 kHz.
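
If you want to produce such a spectrum yourself, a minimal Python/NumPy sketch could look like the following; the 44.1 kHz sample rate is only an assumption. Plotting magnitude_db over freqs makes the roll-off above 15 kHz and the 19 kHz pilot tone visible:

```python
import numpy as np

def magnitude_spectrum_db(block, sample_rate=44100):
    """Return frequencies and magnitudes (in dB) for one block of samples."""
    window = np.hanning(len(block))                          # reduce spectral leakage
    spectrum = np.fft.rfft(block * window)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
    magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)   # avoid log(0)
    return freqs, magnitude_db
```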

In digital signal processing, the unwanted line is simply removed from the spectrum, and the signal is then transformed back into the time domain. In the same way, individual lines can be deleted from the spectrum to eliminate mains hum, for example. Noise, too, can be identified and removed more easily in the frequency domain (the spectrum). You can see this by watching the FFT display (spectrum) while a music track is playing: the spectrum "dances" to the rhythm of the music, and in the quiet passages you can sometimes make out the underlying noise. This is a seemingly banal observation with far-reaching consequences.
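
As a hedged sketch of this idea (plain NumPy, not WavePurity's own filter; the bandwidth and sample rate are arbitrary assumptions), deleting a line from the spectrum can be as simple as zeroing the corresponding FFT bins before transforming back:

```python
import numpy as np

def remove_line(block, target_hz, width_hz=50.0, sample_rate=44100):
    """Zero out the FFT bins around one unwanted spectral line, then transform back."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
    kill = np.abs(freqs - target_hz) <= width_hz / 2.0     # bins around the unwanted line
    spectrum[kill] = 0.0                                    # delete the line from the spectrum
    return np.fft.irfft(spectrum, n=len(block))

# e.g. remove_line(block, target_hz=19000) for the stereo pilot tone,
#      remove_line(block, target_hz=50, width_hz=4) for 50 Hz mains hum
```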

To remove noise, WavePurity looks for these quiet passages and determines the profile of the underlying noise. This noise profile is then subtracted from the spectrum. That is the basic principle; WavePurity uses a number of additional tricks to make sure that no other interfering background noises remain after noise reduction and that the character of the music is not altered. By its nature, a noise profile should remain constant and be independent of the frequency. If it does depend on frequency, this very likely indicates that the audio source (for example the tape deck) does not have a linear frequency response. In this way, the noise profile reveals linear distortions of the audio source, and on the basis of these data WavePurity allows you to apply an automatic frequency response correction. My tests with this feature have yielded astonishing results: about 10 years ago, I went to great lengths to linearize my tape deck. The image above shows how WavePurity evaluates recordings made before that modification.
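
Returning to the noise reduction itself: in simplified form, the basic principle described above corresponds to spectral subtraction. The following sketch illustrates it (it is not WavePurity's exact algorithm, which adds the extra tricks mentioned); the quiet block and the block being cleaned are assumed to have the same length:

```python
import numpy as np

def noise_profile(quiet_block):
    """Magnitude spectrum of a quiet passage, used as the noise profile."""
    return np.abs(np.fft.rfft(quiet_block))

def denoise_block(block, profile, amount=1.0):
    """Subtract the noise profile from the magnitude spectrum, keep the phase."""
    spectrum = np.fft.rfft(block)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    cleaned = np.maximum(magnitude - amount * profile, 0.0)  # basic spectral subtraction
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(block))
```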

Recordings made later, after that modification, produce results like the one in the image above. The limitation of the frequency response to 16 kHz was set manually for both analyses. The yellow curve up to 16 kHz shows the inverse frequency response of the audio source, which, after the modification made 10 years ago, is now obviously sufficiently linearized. Meaning of the colours: red/green = noise profile of the right and left channel respectively; yellow = frequency response correction.
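
Conceptually, such a correction amounts to applying the inverse frequency response (the yellow curve) as a per-bin gain in the frequency domain. A minimal sketch, assuming the correction curve has already been sampled at the FFT bin frequencies:

```python
import numpy as np

def correct_response(block, correction_gain):
    """Apply a per-bin gain curve (e.g. the inverse frequency response) to one block."""
    spectrum = np.fft.rfft(block)      # correction_gain: one linear gain factor per rfft bin
    return np.fft.irfft(spectrum * correction_gain, n=len(block))
```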
