ELEC 241 Lab

Experiment 6.2

Listening to Digital Audio Signals

Part 1: Earphone driver amp

In previous labs, we've listened directly to the microphone output. This week we will insert the Lab PC between the microphone signal and the signal we listen to, so we will need separate amplifiers for the microphone and the earphone.


Step 1:

If you have not already done so, connect a bypass capacitor between the positive power supply bus and ground. A 10 $\mu$F electrolytic capacitor should do nicely.

Warning
Electrolytic capacitors are polarized, i.e. they have a + terminal and a - terminal. Be sure to connect the + terminal to the positive supply and the - terminal to ground. If connected backwards, an electrolytic capacitor stops being a capacitor and conducts a large DC current. This current can heat up the capacitor to the point where it may EXPLODE.

Unfortunately, it's not immediately obvious from looking at most electrolytic capacitors which terminal is positive and which is negative. Most are marked only on the negative terminal, and not with a simple minus sign but with a weird "minus inside an oval" symbol ( \includegraphics[scale=1.000000]{electro_minus.ps} ).



Step 2:

Wire the following circuit. Remember to connect the power supply voltages (not shown) to the op amp. Be sure to disconnect the earphone from the output of the mixer amp before connecting it to this circuit.
\includegraphics[scale=0.650000]{headphone_amp.ps}


Question 4:

Find the expression for $v_{out}$ as a function of $v_{in}$ for this circuit.

Step 3:

To reduce the chance of feedback, try to route the wire from the op amp to the earphone away from other wires.

Step 4:

Connect D/A output 0 (pin 51) to $v_{in}$ of the earphone driver amplifier and to CH2 of the scope.

Step 5:

Load and start the Labview program "process1". Here's what you should have:
\includegraphics[scale=0.500000]{ckt6.3.ps}


Step 6:

Set the function generator to produce a 300 Hz sine wave and adjust the AMPLITUDE control for a comfortable sound level in the earphone.

Remark:

One consequence of reading, processing, and writing (as opposed to reading and processing, or processing and writing, as we've done up to now) is that our Labview program now has strict real-time constraints: it must read a block of input samples, process them, and write the output samples before the next block of samples arrives, or data will be lost.

Since Windows is a multitasking operating system with no provisions for supporting real-time processes, it is possible for another process to interfere with this requirement. When that happens, Labview displays an error message (e.g. "Error -10846 occurred at AI Buffer Read") and stops. This is likely to happen if you try to bring another application (e.g. Mozilla) to the front. If it does, dismiss the error message and restart the program.
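
For reference, the read-process-write structure can be sketched in Python (a sketch only; the real program is written in Labview, and the block size used here is an assumption):

\begin{verbatim}
import numpy as np

sample_rate = 8000                        # samples per second
block_size = 1024                         # samples per block (assumed)
block_period = block_size / sample_rate   # 0.128 s to read, process, and write

t = 0.0
for _ in range(5):                        # the real loop runs until stopped
    n = np.arange(block_size)
    # stand-in for the A/D read: the next block of a 300 Hz sine
    block = np.sin(2 * np.pi * 300 * (t + n / sample_rate))
    processed = block                     # processing (quantization, etc.) goes here
    # a stand-in for the D/A write would go here; everything in this loop body
    # must finish within block_period seconds, or input samples will be lost
    t += block_period
\end{verbatim}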

Step 7:

Vary the frequency or amplitude slightly. Note the delay between the change in the input and the change in the output. This delay is caused by the buffering of the input and output samples.
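
A rough estimate of this delay, assuming one block of buffering on the input side and one on the output side (the actual Labview buffer sizes may differ):

\begin{verbatim}
sample_rate = 8000                      # samples per second (the rate set in Step 10)
block_size = 1024                       # samples per buffer (assumed)
delay = 2 * block_size / sample_rate    # one input buffer plus one output buffer
print(f"delay is roughly {delay * 1000:.0f} ms")   # ~256 ms for these numbers
\end{verbatim}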

Step 8:

Set the number of quantization levels to 100. Note the effect on the sound. Can you see any change in the scope display?
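
Quantization replaces each sample with the nearest of a fixed set of allowed values; the difference between the original and the quantized sample is heard as noise or distortion. An illustrative Python sketch (not the Labview implementation; the full-scale range here is an assumption):

\begin{verbatim}
import numpy as np

sample_rate = 8000                        # samples per second
levels = 100                              # number of quantization levels (Step 8)
max_level = 1.0                           # full-scale amplitude (assumed)

n = np.arange(sample_rate)                # one second of samples
x = np.sin(2 * np.pi * 300 * n / sample_rate)

step = 2 * max_level / levels             # spacing between adjacent levels
xq = np.clip(np.round(x / step) * step, -max_level, max_level)

error = xq - x                            # quantization error
print(f"step size = {step:.4f}")
print(f"rms quantization error = {np.sqrt(np.mean(error ** 2)):.4f}")
\end{verbatim}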

Step 9:

Try different numbers of quantization levels. What is the smallest number you can use without producing a noticeable degradation in the sound?

Step 10:

Stop the program, set the sample rate to 8000, the quantization levels to 4096, and restart. (The sample rate is only read when the program starts, so changing it while running has no effect.)

Step 11:

Increase the frequency of the function generator towards 4 kHz. What happens to the sound as you reach and pass 4 kHz?

Step 12:

Continue increasing the frequency through 8 kHz. Note what happens to the sound.
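
What you hear in Steps 11 and 12 can be predicted from the way frequencies above half the sample rate fold ("alias") back into the 0 to 4 kHz band. A small sketch, assuming an ideal sampler at 8000 samples per second:

\begin{verbatim}
sample_rate = 8000

def alias(f):
    """Frequency to which an input sine of frequency f folds after sampling."""
    f = f % sample_rate                   # aliases repeat every sample_rate Hz
    return min(f, sample_rate - f)        # fold into the 0 .. sample_rate/2 band

for f in (3000, 3900, 4100, 5000, 7000, 7900, 8100):
    print(f"{f:5d} Hz in  ->  {alias(f):4.0f} Hz out")
\end{verbatim}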

Step 13:

Set the function generator to triangle wave. Vary the frequency from a few hundred to a few thousand Hz. Listen for "birdies", faint tones that rise and fall as the frequency of the main tone changes.

Step 14:

Switch the function generator to square wave and vary the frequency. The birdies should now be louder and more numerous.

Question 5:

Explain the source of the "birdie" tones.

Step 15:

Disconnect the function generator from A/D input 4 and connect the output of the microphone mixer in its place. Here's what you should have:
\includegraphics[scale=0.500000]{ckt6.5.ps}


Step 16:

While holding the handset to your ear, speak into the mouthpiece. Note the delay between the microphone and the earphone. What is the source of this delay?

Step 17:

With the scope, look at the output of the mixer while speaking into the microphone. Set the Max level control to about 125% of the peak (not peak-to-peak) value of this signal.

Step 18:

Try various numbers of quantization levels and note the effect on the sound. What is the smallest number of levels at which you can still understand what is being said? What is the largest number at which quantization effects are audible?

Question 6:

Define the data rate of the digitized signal to be $\log_2({\rm no.\; of\; levels}) \times ({\rm sampling\; frequency})$ . Based on your observations, what is the lowest data rate which will give an intelligible speech signal? What is the lowest rate that produces "acceptable" quality?
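
As a worked instance of this formula (not an answer to the question), a signal quantized to 4096 levels and sampled at 8000 samples per second has a data rate of $\log_2(4096) \times 8000 = 12 \times 8000 = 96\,000$ bits per second.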