Music 62 notes


Oct 11

Power Amplifiers: matching to speakers, impedance (= resistance at audio frequencies, in ohms),
damping factor: ratio of speaker impedance to source (amplifier output) impedance. It describes how well the amp controls the speaker's mechanical resonances: a high damping factor acts as a "brake" on the cone; a low damping factor lets it ring. So you want output impedance low (typically well under 1Ω), speaker impedance high (8Ω down to 2Ω).
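The damping-factor arithmetic is simple enough to sketch (the impedance values below are hypothetical, just for illustration):

```python
def damping_factor(speaker_impedance_ohms, output_impedance_ohms):
    # Damping factor = speaker impedance / amplifier output (source) impedance
    return speaker_impedance_ohms / output_impedance_ohms

# An 8-ohm speaker on an amp with 0.05-ohm output impedance (hypothetical values):
print(damping_factor(8, 0.05))    # 160.0
# Dropping to a 4-ohm speaker halves the damping factor:
print(damping_factor(4, 0.05))    # 80.0
```

This is why lower-impedance speakers give the amp less of a "grip" on the cone.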
Many amplifier manufacturers state power levels going into a low-impedance load, which makes them look more powerful.

Digital recording
Technical overview:
Take a sample of the signal voltage and write it down as a number.
Issues: how often (sample rate), how accurate is the number (word length), how accurate is the sample clock (jitter).
A-D converter does this.
Nyquist theorem: highest frequency sampleable is 1/2 the sampling rate. If the input frequency goes above that, you get aliasing. Nyquist frequency = sampling rate/2
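A minimal sketch of the folding behavior (assuming an ideal sampler with no anti-alias filter in front of it):

```python
def alias_frequency(f_in, sample_rate):
    """Frequency actually captured when a tone at f_in is sampled at sample_rate.

    Anything above the Nyquist frequency (sample_rate / 2) folds back down."""
    f = f_in % sample_rate            # sampling can't tell f from f + n * sample_rate
    if f > sample_rate / 2:
        f = sample_rate - f           # fold across the Nyquist frequency
    return f

print(alias_frequency(10_000, 44_100))   # 10000: below Nyquist, captured correctly
print(alias_frequency(30_000, 44_100))   # 14100: above Nyquist, aliases down
```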
Word length: the number is binary, so the number of bits determines the range. With 10 bits, you get 0-1023. With 16, 0-65535. The difference between the analog input and the digitized signal is called quantization noise.
Dynamic range = highest possible level / quantization noise level = (6.02 × number of bits + 1.76) dB
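Both formulas can be checked directly (a sketch of the ideal-converter math; real converters fall a little short of these numbers):

```python
def quantization_levels(bits):
    # An n-bit word can represent 2**n distinct levels
    return 2 ** bits

def dynamic_range_db(bits):
    # Ideal-converter dynamic range: 6.02 * bits + 1.76 dB
    return 6.02 * bits + 1.76

print(quantization_levels(10))         # 1024 levels (0-1023)
print(quantization_levels(16))         # 65536 levels (0-65535)
print(round(dynamic_range_db(16), 2))  # 98.08 dB for 16-bit audio
```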

D-A converter: creates signal voltages from samples; uses sharp reconstruction filters to round off the edges of the waveforms.

Digital Recording formats:
Sony PCM-F1, PCM-1610, JVC, etc.: used video tape, either 1/2" or 1/4", so could only edit on frame boundaries, i.e., 33.3 msec resolution.
DAT: was killed in the consumer market by RIAA lobbying for law requiring SCMS chip in consumer units, so no one made any.

Multitrack: Mitsubishi, Studer, Tascam, Sony PCM-3324 and -3348. Most of them long gone. Replaced by ADAT, and to some extent by Tascam DA-88 (didn't do as well: price point was higher and introduction was a few months later).

ADATs could easily be combined, and controlled by a single controller which acted as if it was a 32-track deck.

Digital disc vs. digital tape
Tape is sequential, disc is random access.
With tape there is a direct correlation between the number of physical inputs, the number of tracks, and the number of physical outputs.
With disc, there is no correlation: inputs and outputs are determined by the audio interfaces, and track counts can be much higher, determined by the speed of the CPU and the throughput of the disc.
Disc systems can do processing on the fly, non-destructive editing
In early days when space was scarce, sometimes used destructive editing. Now "constructive": do a file edit, and it creates a new file.


Oct 4

Using speakers in the lab: not when anyone else is in the lab!
Using Audio-technica headphones: Get key from keysafe for tall cabinet. Take a short cable from the box on the second shelf--the two ends are different sizes, and the smaller one goes into the headphones. Make sure you return them, lock the cabinet, and scramble the keysafe combination when you're done.

Speakers:
Damage: causing woofer cone to go too far can tear it or pull it off its mount. Sending high-frequency distortion products to tweeter can damage it.
How to use speakers in practical situations? Get used to them! Listen to music that you know on them, so your ear can make comparisons.

Headphones: open (foam), closed (Koss), semi-closed (lighter plastic), noise-cancelling (Bose), bass heavy (Beats).
Can be more accurate, move much less air so elements are lighter, no room effects.
Usually one element for entire frequency range. Problem: interaural bleed is gone, so stereo image is very different from speakers. Processors beginning to appear that simulate speakers in headphones.

Ear buds: Getting better! Watch out for exaggerated LF response. Watch SPL!!

In-ear monitors: isolated; the advantage is less sound on stage getting into the FOH system. For bass players and drummers, often combined with speakers or throne drivers. Expensive but worth it for professionals.

“Buttkicker” for drummers so monitor levels don't have to be so high on stage.



Oct 2

Speakers:
Of all the components in an audio system, these have by far the worst frequency response and distortion. Physics of moving air is difficult. The perfect speaker would weigh nothing and have infinite rigidity. The spider which holds the cone against the magnet would weigh nothing and have infinite flexibility. The space inside the cabinet would be infinite so that nothing impedes the movement of the cone.

Break up the spectrum into components that work best over a limited range: woofers, tweeters, midrange, sub-woofers.

Directivity: low frequencies spread out more, high frequencies are localized, “beamed”.

Crossovers: filters to divide the spectrum between the elements.

Distortion: harmonic, intermodulation

Time-aligned: tweeter is delayed or set back to compensate for depth of woofer cone. Theory says this preserves transients, prevents phase interference between drivers at overlapping frequencies. Concentric drivers sometimes used for time/space alignment (e.g., Tannoys).

Passive vs. active speakers: Crossover goes after amp, or before amps. Bi-/tri-amplification.

Sensitivity: output SPL at a given distance, per 1 watt input.

Other specs: freq response, THD, maximum power; often misleading.

Near-field: small speakers up close to minimize room effects.

In a studio, use multiple speakers to monitor recording and especially mix: high-end and low-end. Auratones, Yamaha NS-10s popular for simulating home hi-fi, television, car. NS-10s very bright, engineers often used tissue paper to calm them down.


Sept 27

ProTools: After recording automation, use drop-down in Edit window to show movements, edit. Select, delete, scale up or down, or pencil edit.

Grouping stereo or multiple tracks for editing and/or mixing. (Using pencil editor on automation on one track in a group will not change the others.)

Bouncing to AIFF/WAV: File>Bounce to Disk. 16-bit, 44.1 kHz, interleaved. Mix will be stored in Bounces folder of session unless you specify otherwise.

In the studio: four different types of rooms
Control room—very tight, flat
Live room—may have different areas with different acoustics
Drum room—also may be variable
Iso booth—for voices, amps, usually dead

Three studio paradigms:
Analog tape
Digital tape
Digital hard-disk ("in the box")

Studio wiring: input panels, monitor outputs, patch bays, cue systems


Sept 25

1st Projects Due Oct 4: 2 mics to Pro Tools.

On the cart:
MOTU Stage B16 is the mixer for inputs, outputs, and cue mixes; it is also the audio interface and A/D-D/A converter.

Mac Mini has external drive. Operate with wireless keyboard and trackpad or wired keyboard and mouse.

Speakers for monitoring, also headphones from B16 output.

Remove speakers and snake. Use snake for mic inputs.

Open mixer controls in Safari only! Look for MOTU icon in upper right. Mic trims, panning--for this project, hard left and right. Use phantom power if necessary. Get a "green" signal without a red light. Level should top out between -12 and -18 dB. Fader positions don’t matter.

ProTools 12 on the recording cart:
When you launch ProTools, double-click “Stereo Template” from the Template Group window, and then name your session and save it on the External hard drive. Now all the files should be in the right place inside your session folder.

Recording into ProTools: 44.1 kHz, WAV, 16-bit (or 24-bit).

After recording, use a flash or portable drive to move entire folder onto lab computer to edit and/or mix or use network access to the server from the desktop.

Two Rode multipattern and two Electro-Voice n/d267a dynamic cardioids with stands and cables, plus two headphones, AC extension cord, are in the milk carton on the cart.

Use pop filter on all vocals!

Cable wrapping! https://www.youtube.com/watch?v=ktI0mLAoSTc Please use velcro or ties to secure your cables after you wrap them!

Choosing your mic pattern: use the results of our experiment. How much “center” do you need? Is tonal balance or spatial placement more important? If recording instruments of very different volumes, it's not necessary to stick with a stereo pattern; just make it sound good in stereo.

If you use M-S, when you are playing back, duplicate the Side channel onto another track. Phase-reverse it using the Trim plug-in, and then group both Side faders. The level of the Side faders determines the width of the stereo image.

Edit if you need to. Goal is to make something that sounds realistic, and good. Try different instrument and mic positions.

Smart tool for trimming, selecting region (command-E), fading in or out, cross-fading between adjacent regions.

Crossfades

Editing modes: Slip: move freely. Grid: move in quantized intervals. Shuffle: move a region and other regions jump around to fill in.

To automate fader movements: put track in auto “write”, not Record! To play back, set to “Read”.
Grouping stereo or multiple tracks for editing and/or mixing: select tracks, command-G


Sept 20

First project teams:

Zev Hattis
B. Berke Imren
Fifi Wong*
Andre LaPan
Tim Holt*
Lauren Hassi
Ari Brown
Yekwon Park
Amanda Lillie*
Scott Einsidler
Nathan Stocking
Ella McDonald*
Erick Orozco
John Morgan Keane*
Jacob Jaffe
*booth access

PRIORITIES IN A RECORDING SESSION/STUDIO (in descending order, according to Prof. Lehrman)
Performer
  Instrument
    Mic placement
      Room
        Microphone
          Monitors
            Control room
              A/D converter — Mic preamp
                Analog mixer/channel strip
                  Master clock (except when you have multiple A/D converters)
                    Plug-ins / Outboard (unless you have very specialized needs)
                      DAW
                        Computer
                          Cables


Sept 18

Microphone techniques: respect the historical use of instruments!
• What's the instrument?
• What's the performance style?
• Is the room sound good? Is it quiet?
• Are there other instruments playing at the same time?
• How much room sound do you want?
• What mics do you have?
• Do you want stereo or mono? How much stereo?
Good positioning is always better than trying to eq later. Good positioning means phasing is favorable: hard to fix with eq!

Mics need to be closer than our ears, since we don't have the visual cues to tell us what to look for, and mics can't distinguish between direct and reflected sound--we always want more direct sound in the recording. Can add reflections (echo/reverb) later, but impossible to remove them!

Listening to the instruments in the space: finding the right spot to record. Get the room balance in your ear, then take two steps forward and put the mic there.

3-to-1 rule: when using multiple microphones, mics need to be at least three times as far away from each other as they are from their individual sources.
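A rough sketch of why the rule works, assuming simple free-field inverse-square falloff (real rooms add reflections, so this is only the first-order picture):

```python
import math

def level_drop_db(distance_ratio):
    # Free-field inverse-square falloff: 20 * log10(distance ratio) dB
    return 20 * math.log10(distance_ratio)

# A source that is 3x farther from the "wrong" mic than from its own mic
# arrives about 9.5 dB down, quiet enough that comb-filtering between the
# two mics' signals stays mild:
print(round(level_drop_db(3), 1))   # 9.5
print(round(level_drop_db(2), 1))   # 6.0 (the familiar 6 dB per doubling of distance)
```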

Winds & Strings: not on top of the bridge. Too close, loses resonance and high frequencies (data from Michigan Tech, using DPA 4011 cardioid).
At least 3 ft away from source, if possible, except when it would violate 3-to-1 rule! String sections: mic in stereo as an ensemble, not meant to be a bunch of soloists. Horn sections, can go either way: mic individually or if there is enough isolation from other instruments, as section.

Guitar: exception since we are used to hearing close-miked guitars. But there is no one good spot on the guitar, since sound comes from all over the instrument: soundhole (too boomy by itself), body, top, neck, headstock. Best to use 2 mics, or if room is quiet, from a distance.

Vocals: Always use pop filters

Piano: exception since pianists like the sound of the instrument close up--doesn’t really need the room to expand. Different philosophies for pop and classical. 3:1 rule on soundboard, or even better, 5:1 since reflections are very loud and phase relationships very complex. Can use spaced cardioids, spaced omnis, or coincident cardioids, in which case you want to reposition them for the best balance within the instrument (bass/treble). Stereo image? Performer or audience perspective?

Drums: first of all, make them sound good! Tune them, dampen rattles, dampen heads so they don’t ring as much (blanket in kick drum).
Three philosophies--Choice will depend on spill, room sound, and how much power and immediacy you want in the drums.
1) stereo pair overhead (cardioid or omni); good for jazz, if you don’t mind some spill, or if they’re in a good-sounding isolation room.
2) add kick (dynamic or high-level condenser) and snare mics for extra punch and flexibility
3) add mics to everything. Complicates things because of spill, may have to add noise gates later.

Glyn Johns technique --
https://www.youtube.com/watch?v=jqHQRr2Oaxs


Sept 11

Transducer = converts one type of energy to another
Microphone = converts sound waves in air to alternating current (AC) voltages. A dynamic microphone has a coil of wire attached to the diaphragm, suspended in the field of a magnet. The diaphragm vibrates with the sound waves, moving the coil and inducing a current that is an analog (stress the term!) of the sound wave. This travels down a wire as an alternating current: positive voltage with compression, negative voltage with rarefaction.

Dynamic/moving coil (pressure-gradient mic)

Condenser/capacitor = charged plate and uncharged plate acting as a capacitor: one plate moves, the capacitance changes.
Charge comes from a battery, or a permanently-charged plate (electret), or a dedicated power supply (old tube mics), or phantom power: 48V DC provided by the mixer (doesn’t get into the signal, because an input transformer or blocking capacitor removes it).

Ribbon (velocity mic)
Metal ribbon is suspended between strong magnets, as it vibrates it generates a small current. High sensitivity, good freq response, a little delicate, figure-8 pattern.

Boundary (pressure zone)
Owned by Crown. Mic element is very close to wall. Hemispherical pickup, reflections off of wall are very short, essentially non-existent, prevents comb-filtering caused by usual reflections, even frequency response. Not good for singing, but good for grand piano (against soundboard), conference rooms, theatrical (put on the stage, pad against foot noises).

Polar patterns/phase relationships.
Standard configurations. Pickup pattern design. Off-axis response, proximity effect. Pop filters on vocals and flute.

Cables: Balanced vs. Unbalanced:
Balanced = two conductors and a surrounding shield or ground. The two conductors are in electrical opposition to each other: when one has positive voltage, the other has negative. At the receiving end, one leg is flipped in polarity (also called phase) and the two are added. If noise is introduced along the cable, it affects each conductor the same, and if you flip any signal and add it to itself, the result is zero; so because one leg is flipped at the receiving end, the noise cancels out while the audio signal doubles. This means there is little noise over long lengths of cable. Best for microphones, which have low signal levels, but also for long lengths of line level.
Unbalanced = single conductor and shield. Cheaper and easier to wire, but open to noise as well as signal loss over long length, particularly high frequencies due to capacitance (of interest to EEs only). Okay for line-level signals over short distances (like hi-fi rigs or electronic instruments), or microphones over very short distances (cheap recorders and PA systems).
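The balanced-line cancellation described above can be sketched numerically (a toy model with made-up sample values, not real interface code):

```python
# Toy model of common-mode rejection on a balanced line (made-up sample values).
signal = [0.5, -0.2, 0.8, -0.6]      # the audio, as sent
noise = [0.1, 0.1, -0.3, 0.05]       # interference, hitting both legs equally

hot = [s + n for s, n in zip(signal, noise)]     # leg 1: +signal + noise
cold = [-s + n for s, n in zip(signal, noise)]   # leg 2: -signal + noise

# Receiving end flips the cold leg and adds it to the hot leg:
# (s + n) - (-s + n) = 2s -- the noise cancels and the signal doubles.
recovered = [h - c for h, c in zip(hot, cold)]
print([round(x, 6) for x in recovered])   # [1.0, -0.4, 1.6, -1.2]
```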
Connectors: Balanced: XLR (as on microphone cable), 1/4” tip-ring-sleeve.
Unbalanced: RCA (“phono”), 1/4” (“phone”), mini (cassette deck or computer).
Mini comes in stereo version also (tip-ring-sleeve), for computers and Walkman headphones (both channels share a common ground). 1/4” TRS is also used as a stereo cable for headphones = two unbalanced channels with a common ground.


Sept 6

Waveforms = simple and complex (show)
Simple waveform is a sine wave, has just the fundamental frequency. Other forms have harmonics, which are integer multiples of the fundamental. Fourier analysis theory says that any complex waveform can be broken down into a series of sine waves.
Saw: every harmonic, at level 1/n. Square: only odd harmonics, at 1/n. Triangle: only odd harmonics, at 1/n².
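Those harmonic recipes can be sketched as a little additive-synthesis table (a simplified model: it ignores the sign alternation of the triangle's partials):

```python
import math

def partial_amps(wave, n_harmonics):
    """Amplitude of each harmonic for the classic waveforms:
    saw: every harmonic at 1/n; square: odd harmonics at 1/n;
    triangle: odd harmonics at 1/n**2 (sign alternation ignored here)."""
    amps = {}
    for n in range(1, n_harmonics + 1):
        if wave == "saw":
            amps[n] = 1 / n
        elif wave == "square" and n % 2 == 1:
            amps[n] = 1 / n
        elif wave == "triangle" and n % 2 == 1:
            amps[n] = 1 / n ** 2
    return amps

def sample(wave, freq, t, n_harmonics=20):
    # Fourier synthesis: sum the sine partials at time t
    return sum(a * math.sin(2 * math.pi * n * freq * t)
               for n, a in partial_amps(wave, n_harmonics).items())

print(sorted(partial_amps("square", 7)))   # [1, 3, 5, 7]: odd harmonics only
```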
If there are lots of non-harmonic components, we hear it as noise.
White noise: equal energy per Hz (linear scale)
Pink noise: equal energy per octave (logarithmic scale, more suited to the ear)

Timbre = complexity of waveform, number and strength of harmonics. We can change timbre with filters or equalizers.

Stereo = since we have two ears. Simplest and best high-fidelity system is walking around with two mics clipped to your ears, and then listening over headphones: this is called binaural. Binaural recordings are commercially available: they use a dummy head with microphones in the earholes.
Systems with speakers are an approximation of stereo. The stereo field is the area between the speakers, and the “image” is what appears between the two speakers. If you sit too far from the center, you won’t hear a stereo image.
Multi-channel surround can do more to simulate "real" environments. Quad, 5.1 (.1=LFE since low frequencies are heard less directionally), 7.1, 10.1, etc. Will do a little with it in this course.
Position in the stereo or surround field = L/R, F/B, U/D. Determined by relative amplitude, arrival time, and phase.
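Relative amplitude is the easiest of those cues to sketch. This is the common sin/cos "constant-power" pan law (one of several pan laws in use; the notes don't specify one, so treat it as an illustrative assumption):

```python
import math

def constant_power_pan(pan):
    """Left/right gains for pan in [-1.0, +1.0] (-1 = hard left, +1 = hard right),
    using the common sin/cos constant-power pan law."""
    angle = (pan + 1) * math.pi / 4        # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = constant_power_pan(0.0)      # dead center
print(round(left, 4), round(right, 4))     # 0.7071 0.7071: -3 dB in each speaker
```

Constant power means left² + right² stays at 1 everywhere across the arc, so the image doesn't get louder or softer as it moves.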

Fidelity: what is it and what can get in the way? What goes in = what goes out.
Ideal amplifier=A straight wire with gain (signal is louder)

Coloration: frequency response is limited, and the frequency response curve is not flat.

Distortion is introduced, certain extra harmonics are produced, either even or odd.

• Distortion caused by clipping or non-linearity: adds odd harmonics, particularly nasty (show in Reason)=harmonic distortion
Crossover distortion= certain types of amplifiers, where different power supplies work on the negative and positive parts of the signal (“push-pull”). If they’re not balanced perfectly, you get a glitch when the signal swings from + to - and vice versa.
Intermodulation distortion=frequencies interacting with each other.

Aliasing, a by-product of digital conversion.
Noise, hum, extraneous signals, electromagnetic interference (static, RFI)

Frequency sensitivity changes at different loudness levels: at low levels, we hear low frequencies poorly, and high frequencies too, although the effect isn’t as dramatic. Fletcher-Munson curve: ear is more sensitive to midrange frequencies at low levels, less sensitive to lows and extreme highs. In other words, the frequency response of the ear changes depending on the volume or intensity of the sound. When you monitor a recording loud, it sounds different (better?) than when soft.

Loudness sensitivity: Just Noticeable Difference (JND)--about 1 dB--changes with frequency and loudness level. We can often hear much smaller differences under some conditions, and not hear larger ones under different conditions.
Also, JND changes with duration--short sounds (<a few tenths of a second) seem softer than long sounds of the same intensity


Sept 4

Basic audio principles:
Nature of Sound waves = pressure waves through a medium = compression (more molecules per cubic inch) and rarefaction (fewer molecules per cubic inch) of air. A vibrating object sets the waves in motion, your ear decodes them. Sound also travels through other media, like water and metal. No sound in a vacuum, because there’s nothing to carry it.
Speed of sound in air: about 1100 feet per second. That’s why you count seconds after a lightning strike to estimate how far away the lightning is: 5 seconds = one mile. Conversely, 1 millisecond = about 1 foot.
Sound travels a little faster in warmer air, about 0.1% per degree F, and in a more solid medium: in water, 4000-5000+ fps, in metal, 9500-16000 fps.
When we turn sound into electricity, the electrical waveform represents the pressure wave in the form of alternating current. The electrical waveform is therefore an analog of the sound wave. Electricity travels at close to the speed of light, much faster than sound, so transmission of audio in electrical form is effectively instantaneous.

Characteristics of a sound:
Frequency = pitch: http://www.psbspeakers.com/Images/Audiotopics/fChart.gif
How many vibrations or changes in pressure per second.
Expressed in cycles per second, or Hertz (Hz).
The mathematical basis of the musical scale: go up an octave = 2x the frequency.
Each half-step is the twelfth root of 2 (approx. 1.0595) times the frequency of the one below it.
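A quick sketch of the equal-temperament math (A4 = 440 Hz is assumed here as the reference pitch; the notes don't fix one):

```python
SEMITONE = 2 ** (1 / 12)     # ~1.0595, the ratio between adjacent half-steps

def note_freq(semitones_from_a4):
    # Frequency of a note a given number of half-steps above (or below) A4
    return 440.0 * SEMITONE ** semitones_from_a4

print(round(SEMITONE, 4))         # 1.0595
print(round(note_freq(12), 1))    # 880.0: twelve half-steps = one octave = 2x
print(round(note_freq(-12), 1))   # 220.0: an octave down = half
```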
The limits of human hearing = approximately 20 Hz to 20,000 Hz or 20 k(ilo)Hz.
Fundamentals vs. harmonics = fundamental pitch is predominant pitch, harmonics are multiples (sometimes not exactly even) of the fundamental, that give the sound character, or timbre.
Period = 1/frequency
Wavelength = velocity of sound in units per second/frequency
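Both formulas in a quick sketch, using the approximate 1100 ft/sec figure from above:

```python
SPEED_OF_SOUND_FT_PER_S = 1100    # approximate speed of sound in air

def period_s(freq_hz):
    # Period = 1 / frequency
    return 1 / freq_hz

def wavelength_ft(freq_hz):
    # Wavelength = velocity of sound / frequency
    return SPEED_OF_SOUND_FT_PER_S / freq_hz

print(period_s(1000))                         # 0.001: a 1 kHz wave repeats every 1 ms
print(wavelength_ft(20))                      # 55.0: a 20 Hz wave is 55 feet long
print(round(wavelength_ft(20_000) * 12, 2))   # 0.66: a 20 kHz wave is under an inch
```

The huge spread of wavelengths (55 feet down to a fraction of an inch) is behind a lot of speaker and room-acoustics behavior, like the directivity differences noted earlier.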

Loudness (volume, amplitude) = how much air is displaced by the pressure wave. Measured in decibels (dB) above the threshold of audibility (look at chart). The decibel is actually a ratio, not an absolute, and when you use it to state an absolute value, you need a reference. “dB SPL” (as in chart in course pack) is referenced to the perception threshold of human hearing. That threshold is obviously subjective, so it is set at 0.0002 dyne/cm², or 0.00002 newtons/m². That is called 0 dB SPL. By contrast, atmospheric pressure is about 100,000 newtons/m².
dB often used to denote a change in level. A minimum perceptible change in loudness (Just Noticeable Difference) is about 1 dB. Something we hear as being twice as loud is about 10 dB louder. So we talk about “3 dB higher level on the drums” in a mix, or a “96 dB signal-to-noise ratio” as being the difference between the highest volume a system is capable of and the residual noise it generates.
“dBV” is referenced to a specific electrical voltage, so it is an absolute measurement: “0 dBV” means a signal of 1 volt. “0 dBu” is referenced to 0.775 volts, and it also specifies an impedance of 600 ohms. We’ll deal with impedance later. Common signal levels in audio are referenced to these: -10 dBV (consumer gear), +4 dBu (pro gear)
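The reference levels work out like this (a sketch of the standard 20·log10 voltage-ratio math):

```python
import math

def level_db(voltage, reference_voltage):
    # Voltage level in dB relative to a reference: 20 * log10(V / Vref)
    return 20 * math.log10(voltage / reference_voltage)

def voltage_from_db(db, reference_voltage):
    # Invert the dB formula to recover the voltage
    return reference_voltage * 10 ** (db / 20)

print(round(voltage_from_db(4, 0.775), 2))    # 1.23: +4 dBu "pro" level in volts
print(round(voltage_from_db(-10, 1.0), 2))    # 0.32: -10 dBV "consumer" level
print(round(level_db(1.0, 0.775), 2))         # 2.21: 0 dBV expressed in dBu
```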
The threshold of pain is about 130 dB SPL, so the total volume or “dynamic” range of human hearing is about 130 dB.
http://harada-sound.com/sound/handbook/soundspl.gif

Characteristics of the ear as transducer.
Ear converts sound waves to nerve impulses.
Each hair or cilium responds to a certain frequency, like a tuning fork. Frequencies in between get interpolated. As we get older, hairs stiffen, break off, and high-frequency sensitivity goes down. Also can be broken by prolonged or repeated exposure to loud sound.

