The HAMMOND ORGAN

North Suburban HAMMOND ORGAN Service

Digital signal processing can be summarized very briefly as follows. You take the ordinary analog audio signal from an electronic organ or other musical instrument, convert it to a bunch of numbers, manipulate the numbers somehow, and then turn the numbers back into an analog signal and send it off to a speaker so you can hear the results of that manipulation. Sounds pretty simple, right? Unfortunately, the implementation is anything but simple. The idea is that it's a lot easier to manipulate a bunch of numbers than it is to do the same thing with an analog signal. In the studios of the major recording companies, most of what we now do with simple, small digital signal processors was indeed done by analog means for many years before the advent of digital signal processing. The big plus of digital signal processing is that in spite of the complexity of the process, the hardware that performs it is small, not too expensive, and easier to use than it used to be.
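To make that chain concrete, here is a minimal Python sketch of the idea, using numpy; this is purely my own illustration, not anything from a real organ or effects unit, and the "manipulation" is nothing fancier than cutting the volume in half.

    import numpy as np

    SAMPLE_RATE = 44_100   # samples per second, the standard audio rate

    # The "analog" input: a one-second 440 Hz test tone stands in for
    # the signal coming from the organ.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    analog_in = np.sin(2 * np.pi * 440.0 * t)

    # 1. ADC: convert the signal into a bunch of numbers (16-bit integers).
    digital = np.round(analog_in * 32767).astype(np.int16)

    # 2. DSP: manipulate the numbers somehow -- here, halve the volume.
    processed = (digital // 2).astype(np.int16)

    # 3. DAC: turn the numbers back into a signal for the speaker.
    analog_out = processed / 32767.0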

Years ago, one of the best ways to get really natural sounding reverberation was to use a real echo chamber: a reasonably large room with hard reflecting surfaces for walls, floor and ceiling. In one part of the room there was a very high quality speaker, and at the other end a high quality microphone. The signal from the studio, the electronic organ, or whatever was being played went to the speaker and was converted to sound. The sound bounced around in this special room, creating very good reverberation. The microphone then picked up the reverberated sound, and it went back to the recording control room to be mixed in with the actual recording of the musical instrument.

Today, via DSP, you need only a small unit, perhaps 18 inches long, 5 inches deep and 2 inches high, or some special software installed in a computer, and you will end up with really high quality reverberation.

If the early echo chamber created a reverb that was too long, the recording engineer would move large baffles covered with acoustical material into the room to absorb some of the sound and make the reverb decay time shorter. With a modern digital signal processor, you need only tweak a few controls or enter a command on a computer keyboard to shorten the reverb time, as the sketch below illustrates.
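Here is a minimal Python sketch of the simplest digital reverb building block, a feedback comb filter; this is my own illustration, not the circuit inside any particular effects unit. The rt60_s parameter is the decay time, and turning it down is the digital equivalent of wheeling those absorptive baffles into the echo chamber.

    import numpy as np

    def comb_reverb(x, sample_rate=44_100, delay_s=0.045, rt60_s=2.0):
        # Feedback comb filter: every delay_s seconds the signal is fed
        # back on itself, like sound bouncing between hard walls.
        # x is a numpy array of audio samples.
        delay = int(delay_s * sample_rate)
        # Pick the feedback gain so the echoes have fallen by 60 dB (the
        # usual definition of reverb decay time) after rt60_s seconds.
        gain = 10.0 ** (-3.0 * delay_s / rt60_s)
        y = x.astype(np.float64).copy()
        for n in range(delay, len(y)):
            y[n] += gain * y[n - delay]
        return y

    # Shortening the reverb is one tweak: comb_reverb(x, rt60_s=0.8)

Here, in a brief summary with a few simple diagrams, is the basic outline of the process.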

It all starts out in the ADC, or analog-to-digital converter. Here, an audio AC waveform gets sampled thousands of times per second, and the value of the signal at each sampling instant gets converted to a specific numerical value (see Figure 1).

[Image: digitally sampled waveform, graphical representation]

Figure 1. Small sampled sections of two AC audio waveforms: the left and right channels of a stereo signal. As you look at them, you can see the overall shape of the waveforms, but you can also see the individual "steps" or samples; this is a greatly magnified view. The sampling frequency is 44,100 Hz, which is much higher than even the highest audio frequency that might exist. See the "Nyquist theory" explanation below.
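For the curious, here is a small Python sketch of the sampling step that Figure 1 illustrates; the 440 Hz test tone is my own arbitrary choice for the example.

    import numpy as np

    SAMPLE_RATE = 44_100          # samples per second
    FREQ = 440.0                  # concert A, well inside the audio band

    # Sampling instants: 0, 1/44100, 2/44100, ... for a 10 ms snippet.
    t = np.arange(int(0.010 * SAMPLE_RATE)) / SAMPLE_RATE

    # The value of the waveform at each sampling instant.
    samples = np.sin(2 * np.pi * FREQ * t)

    # 44,100 samples per second of a 440 Hz tone works out to roughly
    # 100 samples per cycle -- the little "steps" visible in Figure 1.
    print(len(samples), "samples,", SAMPLE_RATE / FREQ, "samples per cycle")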

As Figure 1 shows, it's easy to see the general shape of the waveform, and yet you can also see the individual samples or steps. The audio frequency band consists of frequencies from 20 Hz to 20,000 Hz, the typical range of pitches that people can hear, from the lowest to the highest. Above 20,000 Hz [20 kHz] it's not necessary to sample, because we can't hear it anyhow! In fact, most adults' hearing begins to cut out around 15 kHz, and for some people the cutoff frequency is much lower than that.

The sampling frequency must, however, be higher than the highest frequency that we want to reproduce accurately. The Nyquist theorem states that the sampling frequency must be at least twice as high as the highest frequency we wish to convert for accurate reproduction. In most digital music applications the sampling frequency is therefore 44.1 kHz, a little higher than twice the 20 kHz that marks the top limit we need to be concerned with. 20 kHz is the upper limit for audio CDs, and we all know how good they sound. Unless we are making music for bats, there is no need to worry about audio frequencies above 20 kHz, because that's the top limit of human hearing. It is possible, however, for some musical instrument sounds to contain frequencies higher than 22.05 kHz, which is half the sampling rate. High notes on a loudly played trumpet, for example, will produce harmonics whose frequencies exceed 20 kHz. The same is true of harmonics of high violin notes and even some of the sibilance noises in speech.

If these ultrasonic frequencies were to enter the analog-to-digital converter, they would be higher than the Nyquist limit. The converter would then get loused up and generate so-called "alias" frequencies, which lie in the audible range and result in severe distortion in the final result. Therefore, the very first thing that happens when a musical signal enters a digital signal processor is that it runs through a low-pass filter that eliminates the ultrasonic frequencies above 20 kHz.
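You can even predict where an alias will land: a tone above half the sampling rate "folds back" by the amount it exceeds it. Here is a small numpy sketch, again my own illustration, of what happens when a 25 kHz tone is sampled at 44.1 kHz without any filter in front:

    import numpy as np

    SAMPLE_RATE = 44_100
    ULTRASONIC = 25_000.0   # above the 22,050 Hz Nyquist limit

    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    sampled = np.sin(2 * np.pi * ULTRASONIC * t)

    # Find the strongest frequency actually present in the sampled data.
    spectrum = np.abs(np.fft.rfft(sampled))
    freqs = np.fft.rfftfreq(len(sampled), d=1.0 / SAMPLE_RATE)
    print("strongest component:", freqs[np.argmax(spectrum)], "Hz")
    # Prints 19100.0 Hz: the 25 kHz tone has folded back to
    # 44,100 - 25,000 = 19,100 Hz, squarely in the audible range.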

In reality, filters such as these are not that sudden; they don't cut off 100% immediately above the selected cutoff value. A filter designed to cut out frequencies above 20 kHz will therefore still let a little bit through that is slightly over 20 kHz. This is why designers of digital audio equipment standardized on 44.1 kHz as a sampling frequency: it leaves a transition band between 20 kHz and 22.05 kHz [half the sampling rate]. The filter can roll off gradually through that band, as long as it is cutting the signal out completely by the time the frequency reaches 22.05 kHz.
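Here is one way such a filter might be designed in Python with scipy; this is only an illustration of the idea, not the actual circuit in any converter. One detail of the sketch: the design routine needs the stopband edge strictly below half the sampling rate, so I place it at 22 kHz, just under the 22.05 kHz limit.

    from scipy import signal

    SAMPLE_RATE = 44_100

    # Pass everything up to 20 kHz almost untouched (under 1 dB of loss)
    # and be at least 60 dB down by 22 kHz, just below the Nyquist limit.
    order, cutoff = signal.buttord(wp=20_000, ws=22_000,
                                   gpass=1, gstop=60, fs=SAMPLE_RATE)
    sos = signal.butter(order, cutoff, btype='low', output='sos',
                        fs=SAMPLE_RATE)
    print("Butterworth filter order needed:", order)

    # filtered = signal.sosfilt(sos, samples)   # apply to the audio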

So the first step is to low-pass filter the signal to eliminate anything over 20 kHz, then sample the result at a much higher rate, typically 44,100 times a second, and obtain a discrete value for each sample. At this point the values are binary numbers, that is, all zeros and ones. From here on the process becomes considerably more complicated. There is a great deal on the Internet about the subsequent aspects of the process, so rather than fill pages and pages with really technical material that you can easily look up if you wish, I will summarize briefly on the following pages what happens in a typical digital signal processor [often called an effects processor] and then follow up with a few pictures and sound clips that demonstrate particular effects.
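To see what "all zeros and ones" means, here is a tiny Python illustration of a single 16-bit sample, the word size used on audio CDs; the half-scale value is my own arbitrary example.

    # A sample at half of full scale, stored as a 16-bit integer
    # (16-bit samples can range from -32768 to +32767).
    sample = int(0.5 * 32767)               # 16383
    bits = format(sample & 0xFFFF, '016b')  # the bit pattern the DSP sees
    print(sample, "->", bits)               # 16383 -> 0011111111111111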

[Image: analog waveform]

Figure 2. The end of the process: here the wave has been filtered to remove the 44.1 kHz "steps".

It remains to be said that at the other end of the process, after the DSP has done its magic on the digitally sampled signal, the result has to be converted back to an analog signal so that it can be amplified and ultimately sent to a speaker. Essentially the waveform is reconstructed, and then another filtering circuit eliminates the 44.1 kHz sampling component from the resulting waveform. As you look at the picture above, it's very easy to see the shape of the waveform; we just need to filter out the individual little steps so that the result looks like this second picture.
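Here is a Python sketch of that final smoothing step, once more my own illustration using scipy rather than any real converter's circuitry: a tone is held as flat "stair steps", the way it comes out of a raw digital-to-analog converter, and a low-pass filter strips the step edges away.

    import numpy as np
    from scipy import signal

    SAMPLE_RATE = 44_100
    STEPS = 8                      # how finely we draw each "stair step"

    # A 1 kHz tone held as stair steps: each sample repeated STEPS times
    # mimics the raw, unfiltered output of a digital-to-analog converter.
    t = np.arange(2_000) / SAMPLE_RATE
    samples = np.sin(2 * np.pi * 1_000.0 * t)
    staircase = np.repeat(samples, STEPS)

    # The reconstruction filter: a low-pass that keeps the audio band but
    # removes the sharp step edges, leaving the smooth wave of Figure 2.
    sos = signal.butter(8, 20_000, btype='low', output='sos',
                        fs=STEPS * SAMPLE_RATE)
    smooth = signal.sosfilt(sos, staircase)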
