Music 64 section 2 • Spring 2018 • Lecture notes
Audio in DP
Audio can be dragged into a DP session from the desktop if it's AIF, WAV, or MP3 (including tracks from a CD). Stereo files can only go in stereo tracks; mono files can only go in mono tracks! The file is copied (and converted if necessary) into the Audio Files folder. Sources: USB, flash drive, CD, SD card, etc.
Effects in DP: Plug-ins come in various formats: Apple Audio Units, MOTU Audio System (MAS), VST, Rewire. Can be effects or virtual instruments (like Purity or Reason). Formats we can't use: RTAS, TDM, DirectX.
Don't use plug-in presets and combinations! They are never right for what you're doing and are NOT good starting points!
Automating effects. Works on audio, Aux, and instrument tracks (not MIDI!). Put track in automation record. Insert effect plug-in in mixing console. Move a control in the effect. Now that parameter appears in the drop-down for that track in the Sequence window. It can be edited there using arrow or reshape tool.
Using aux buses in DP to route multiple tracks through fx. Use sends (post-fader) to an aux bus (1-2), set up an Aux track with the fx and its input set to bus 1-2. Set the effect's mix to 100%. (If you don't want any dry signal at all, set the send to "pre-fader".)
Final projects: Can be an original song with vocals and/or instruments; an orchestration or arrangement of a pop, jazz, or classical tune; a score for a short video; an audio collage with MIDI elements; pretty much anything as long as it uses many of the tools we covered. Look on the server for projects from other years.
Analog tape saturation: Push the particles too hard and they resist, creating distortion (primarily odd-order) and volume compression. Can be useful in rock recordings, making them sound more aggressive. Not desirable with classical music.
Digital recording: why?
Fidelity: not dependent on physical medium
Copyability: each copy is a perfect clone (as long as you don’t compress it)
Longevity: medium doesn't wear out quickly, and can be cloned before it does
What it does: Measures ("samples") the level of the waveform at a particular instant and records it as a number.
How often the sample is taken=sampling rate
What the possible range of numbers is=word length or bit length or resolution
The more bits, the better the approximation of the signal. The difference between the input signal and the converted signal is heard as noise, and is called quantization noise. The quantization noise is the noise floor of the medium. The range in dB from the noise floor to the highest level before clipping is the dynamic range. Dynamic range = number of bits x 6 dB (approx).
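As a rough sketch of that arithmetic (the precise figure is about 6.02 dB per bit):

```python
# Approximate dynamic range of a linear PCM medium: ~6.02 dB per bit.
def dynamic_range_db(bits):
    return 6.02 * bits

print(dynamic_range_db(16))  # CD audio: ~96 dB from noise floor to clipping
print(dynamic_range_db(24))  # pro audio: ~144 dB
```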
The higher the sampling rate, the more high frequencies can be recorded. Sampling rate must be at least twice the highest frequency desired=Nyquist theorem. Signals higher than 1/2 the sampling rate (the Nyquist frequency) will result in aliasing. Filters are used before the conversion process to make sure no frequencies higher than the Nyquist are converted.
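A quick sketch of where an unfiltered tone lands after sampling (folding around the Nyquist frequency):

```python
def alias_frequency(f, fs):
    # A tone at f Hz, sampled at fs Hz with no input filter, is heard at
    # the frequency folded back into the 0..fs/2 (Nyquist) range.
    f = f % fs
    return min(f, fs - f)

print(alias_frequency(30000, 44100))  # a 30 kHz tone folds down to 14100 Hz
print(alias_frequency(1000, 44100))   # below Nyquist: comes back unchanged, 1000 Hz
```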
Standard format for CD digital audio: 44.1 kHz, 16 bits, stereo
Higher sampling rates and resolutions are used in pro audio, but we can't hear the difference.
Convertors handle this sampling and un-sampling. We need them because the world is analog, and our ears respond to analog.
A/D and D/A convertors are built into the Mbox2Pro. The Mac also has convertors (16/44), but it's hard to get high quality in such an electrically noisy environment.
Signal cannot go above zero: hard clipping, sounds terrible (unlike analog clipping, which can be interesting)
Signal cannot go below noise floor—last bit.
You can use low-level white noise ("dither") to create "false" noise floor, mask quantization noise so first bit is never reached; white noise sounds more natural. Modern noise-shaping uses filtered noise that is almost inaudible but does the same thing.
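A minimal sketch of why dither helps, using a made-up quantizer (not DP's): a signal quieter than half a step vanishes when quantized plainly, but survives as signal-plus-noise with triangular (TPDF) dither.

```python
import math
import random

def quantize(x, bits, dither=False):
    # x is in -1.0..1.0; scale to the integer steps of the given word length
    scale = 2 ** (bits - 1) - 1
    if dither:
        # TPDF dither: two uniform randoms summed, peak amplitude of 1 LSB
        x += (random.random() - random.random()) / scale
    return round(x * scale) / scale

# A sine far below the last bit of an 8-bit quantizer:
sine = [0.002 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(100)]
plain = [quantize(s, 8) for s in sine]                   # every sample rounds to 0
dithered = [quantize(s, 8, dither=True) for s in sine]   # noisy, but the signal survives
```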
Linear recording: preserves all data. "PCM" encoding most popular, but others exist.
Lossy recording (or compression): compromises quality, we'll talk about it at end of semester.
Recording audio into DP:
Create a stereo or mono audio track. Set input and output to Scarlett. Always make output stereo, even if it's a mono track, so you can pan it.
Record-enable the track. Set level on Scarlett and use Pad if necessary. Peak light should come on rarely. Check level in DP Mixer or Audio Monitor window, adjust input if necessary.
After recording, turn off Record-enable to play back.
Can move audio events around.
Use Edit>Split to break up recording into different regions. Shorten or lengthen regions with brackets at beginning and end. All edits are non-destructive and entire file can always be recovered!
Scrub in and out points with "speaker" on.
Non-linear editing: Any piece of any sound can be played from any track in any order.
Cross fading regions by clicking in red dot at the top left or right edge.
Reason: Kong. Drum pads/machine. Each pad assignable sound, volume, output, effects, etc. (notes not changeable). Like NN-19, can sample right into it.
Analog recording: some history
Vinyl: Original records used wax. Later went to plastic/vinyl. A physical model of the waveform is carved into a pliant surface using a "cutter head": a diamond stylus connected to two electromagnets, which make it wiggle according to the waveform. In monaural records, the stylus moves horizontally (side to side). In stereo records, the two channels are encoded at a 90° angle to each other (45° from vertical), so the stylus moves both horizontally and vertically. The mono (sum) signal is horizontal and the stereo (difference) signal is vertical. To play back, a needle tracks the groove in the surface and creates changing magnetic fields in the coils of the cartridge; the resulting voltage changes are then amplified.
Frequency response: Needle has to track very fast, so the vinyl surface resolution has to be very fine. Friction causes the surface to heat up and get softer. Every time you play a record, some of the high frequencies get lopped off.
Dynamic range: If you push the needle too far it jumps out of the groove. Too much bass in one channel will push needle sideways.
Speed variation: wow (slow) and flutter (fast).
Noise: dust collects on the surface, and after playing, it is ground in so noise becomes permanent. Surface scratches cause pops and crackle.
Turntable rumble: low-frequency noise very hard to avoid even in best turntables.
Analog Magnetic Tape: embeds waveforms as magnetic patterns on a magnetized surface on plastic ribbon.
Problems: Frequency response: requires very fine particles and faster tape speeds to capture small waveforms/high frequencies accurately.
Noise: Medium has inherent noise due to random (Brownian) motion of molecules.
Dynamic range: Magnetic orientation of particles can be changed a limited amount. Push them too hard and they resist.
Copying: Each copy adds "generation" noise.
Speed variation: Wow (low-frequency variation) and flutter (high) caused by imperfections in mechanical system, tape stretching.
Longevity: Many tapes of the '70s and '80s are now unplayable because of binder failure.
Editing analog tape: destructive, linear. Have to physically cut tape, put into proper sequence.
Things you can do with audio in the analog realm:
splice/trim/concatenate, reverse, equalize/filter, delay/reverb, change speed, pan, loop, layer (overdub)
Examples played today: Ilhan Mimaroglu - Six Preludes For Magnetic Tape
Effects in Reason: Very efficient! Effects can be patched in after each module or using aux sends/buses. When using effects "in line" (between source module and mixer), use Wet/Dry to balance the amount of effect. When using effects in Aux buses, set Wet to 100%, use sends and returns to balance the amount of effect. 6:2 mixer has only one Aux bus--if you need more, use 14:2 mixer.
Reverb: multiple delays simulating sound bouncing off the walls, floor, and ceiling.
Size of room, distance of walls from source, materials on surfaces will determine reverb size and frequency characteristics. Pre-delay (distance from closest wall); early reflections (size and shape of room, sonic characteristics of walls); tail (size of room, liveness of surfaces); damping (liveness of room)
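As a rough sketch of the pre-delay/wall-distance relationship (assuming sound travels at ~343 m/s and the first reflection path is about twice the wall distance; real rooms vary):

```python
SPEED_OF_SOUND = 343.0  # meters per second, at room temperature

def predelay_ms(wall_distance_m):
    # The first reflection travels out to the wall and back: ~2x the distance.
    return 2 * wall_distance_m / SPEED_OF_SOUND * 1000

print(round(predelay_ms(5), 1))    # a wall 5 m away: ~29 ms before the first reflection
print(round(predelay_ms(0.5), 1))  # a small booth: ~3 ms
```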
Equalization: emphasize or reduce specific frequencies. Control of frequency, gain (+ or -) and bandwidth.
Delay: discrete echoes, can be timed to tempo or fixed; feedback control for number of echoes. Usually used with a module "in line", and not in an aux bus.
Flanger/phaser/chorus: very short delays that cause comb filtering: multiple sharp filters. If you move the delay time with an LFO, the filters move, resulting in the “jet plane” effect.
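The comb-filter notches can be sketched from the delay time alone (for the common feed-forward case where the delayed copy is added to the dry signal):

```python
def comb_notches(delay_ms, max_hz=20000):
    # Adding a signal to a copy delayed by d seconds cancels frequencies
    # at odd multiples of 1/(2d): that's the "comb".
    d = delay_ms / 1000.0
    first = 1 / (2 * d)
    notches = []
    k = 1
    while k * first <= max_hz:
        notches.append(k * first)
        k += 2  # odd multiples only
    return notches

print(comb_notches(1.0)[:4])  # 1 ms delay: notches at 500, 1500, 2500, 3500 Hz
```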
Freezing tracks in Digital Performer (a/k/a rendering, printing). MIDI tracks are not audio, and cannot be mixed down or put on a CD or MP3. Freezing creates an audio track from a MIDI track. Different ways to do it in Reason, others.
in MSI and SoundCanvas: select all MIDI tracks for each instrument and their corresponding instrument track, from beginning to end. Make sure both MIDI and Instrument tracks are set to Play. From the Audio menu, choose "Freeze selected tracks." The sequence will play through and a single new audio track will be created, which will mix all the instrument’s tracks together. Any changes in volume or pan, controller changes, or any changes in the mixers will be preserved on the audio track. It will stop automatically at the end of the sequence.
in Reason: Put Reason's Audio track into record (and nothing else). Set Reason MIDI tracks to play. Mute everything else. Press Record and run the sequence. Reason’s output will be saved as an audio track. (Good idea to use countoff without metronome to make sure the first note is recorded.) Effects and mixer settings in Reason will be preserved on the audio track. It will not stop automatically!
After freezing tracks: You can mute the MIDI and instrument tracks (but don’t delete them in case you want to modify them later!) and just play the audio tracks. If you have other MIDI tracks you haven't frozen, you can play them at the same time and they will sync.
Final two-track mixdown for CD or export: Select multiple audio tracks (not MIDI!) and from the Audio menu select "Bounce to disk." Specify AIFF interleaved stereo, 16-bit (you can always export as MP3 later). A name is automatically created, and the file is stored in the project's "Bounces" folder, unless you want it somewhere else. Process is much faster than real time.
Big mixer in Reason—14 inputs: many modules!
All knobs, sliders, and buttons in a module have a MIDI controller assignment. You can control them and record changes in real time from the MIDI keyboard.
Each module has a chart in the "Reason controllers " folder showing the controller numbers for that module. Assign a slider or knob on the keyboard to send the MIDI controller of the knob/slider you want to control on the module. You can record sliders with notes, or afterwards in Overdub mode.
When the track plays back, you can see the knobs move on the module.
You can also move controls in the Reason module itself while recording (or overdubbing) and the movements will get recorded.
Edit the controllers in the Sequence window--they will appear under "Notes" in the dropdown list at the left side of the track after you've recorded them. Use the arrow or reshape tool to change them. (Automation Play only needed if you have volume and pan on the track—not necessary for any other controllers.)
Setting up a controllers console in DP:
Project>New Console. From pop-up, choose slider or knob or something else. Drag into Console window. In Control Assignment window, Source is irrelevant. Set Target to Track, and type in Controller #. Set Min/Max (usually 0-127). Take console out of edit mode (pencil at bottom). Put track in record and move knob/slider. Can also be used in overdub mode.
Can add as many controllers as you wish, each independent. Use labels!
Loopback problem: When Reason is running under Digital Performer, moving a control on a Reason module will move the same-numbered control on the Reason module that is record-enabled in DP. This can badly screw up your Reason rack!
Put DP in Setup>Multirecord, set input as "Impulse Impulse-any" on all tracks. This filters out data coming from everything but the keyboard. But when you are recording, make sure you are recording on only one track at a time!
Initial template: 6-input mixer, NN19 module, reverb.
Sample playback using AIFF/WAV/MP3 files, from local files, modified files, or your own sounds.
Three uses of NN19:
1) to play factory patches, of which there is a large library on Local disk.
2) to make your own instruments, using multisamples that you create yourself or import from somewhere else.
3) to make banks of sound effects, loops, beats, etc. that can be triggered from MIDI.
Standalone mode—use this for creating patches and sampling into Reason:
Keyboards and Control surfaces OFF/Delete
Advanced MIDI Tool on front panel: assign channels to modules
Look at wiring: Tab key to flip rack. Mixer output goes to hardware 1-2 input. Ignore “Master Section.”
To create a patch from scratch: Edit>Reset Device, load Sample in by clicking on the folder icon above the keyzone map. Samples can be taken from within NN19 or NNXT folder, or anywhere else, or dragged in from the Finder (select a keyzone first). More Samples in Documents>Sounds. Sound FX Libraries, Earshot FX library, SampleCell FX bank, SampleCell instrument bank, Optimum drums, other (weird) banks. Bring in your own on flash drive or download from anywhere. AIFF, WAV, or MP3 will work.
To create a new keyzone for another sample, Edit>Split Key Zone. To do realistic instruments you need multiple samples. Transposing a sound too far shifts formants and causes munchkinization: the sound becomes unnatural.
Set root note to determine relative pitch on keyboard. Tune samples to get a consistent scale up the entire keyboard.
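The pitch math behind the root note is just equal temperament; a sketch:

```python
def playback_ratio(midi_note, root_note):
    # Each semitone away from the root multiplies playback speed by 2^(1/12).
    return 2 ** ((midi_note - root_note) / 12)

print(playback_ratio(72, 60))  # an octave above the root: double speed (2.0)
print(playback_ratio(59, 60))  # one semitone below: ~0.944x speed
```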
Loops: can be turned on and off for each sample. Forwards or Forwards/Backwards. Sample must be designed for looping, or else you'll hear a glitch or a space.
To add another module, first highlight the mixer and create a new NN19. New module will automatically be wired into mixer.
Modifying the patches: LFO speed and amount, filter, filter envelope, amp envelope, pitchbend range, mod wheel assignment. All controls outside of keyzone area affect all samples.
Sampling into NN19—must be done in standalone mode, not through DP!
Get the microphone from the cabinet (key is in the lower keysafe—make sure to scramble the combination when you’re done!) Or guitar or other audio source. Set the interface input level in the middle; lower it if the Peak light is on a lot. Configure the Reason Audio I/O module for mono or stereo: flip the panel around, disconnect one channel if mono. Set the Big Meter for input and adjust the level. Start sampling, play the sound. 30-second limit.
Software will automatically cut off leading silence, load sample into instrument. You can do multiple samples, and they will all appear when you turn the Sample Select knob. Open Toolbox window to rename or edit or loop samples. Assign them to different regions, tune them. Turn off "keyboard track" if you want to play everything at fixed pitch.
We do not use the sequencer in Reason!
Saving a Reason rack or "Song": If you have brought in or recorded your own samples, go to File>"Song Self-Contain Settings" and "check all" before saving—or Reason will not know where your samples are!
Rewire mode: Using Reason in Digital Performer: Always launch DP first, then Reason (QUIT it if it’s running!). At end of session, always Quit Reason first.
DP and Reason communicate using “Rewire” protocol. After launching Reason, you must set up two paths to Reason from Digital Performer: audio, and MIDI.
In Reason Prefs: turn off keyboard in Sync. De-assign MIDI channels in Advanced MIDI tool. MIDI channels will be handled in DP. Audio assignment is automatic through DP.
In Digital Performer: Create new Stereo Audio track. Set output to Scarlett 1-2. Set input to Reason Mix L1-R2. Turn on "Mon" (input monitor) on that track. Reason can have more than two audio outputs, but we’ll just use two for now. Don't put audio track in "Record".
Now create a new MIDI Track, and assign it to "Bus6:Reason NN19 1" (or whatever the default instrument is). Set the MIDI track to record. With multiple modules, each one gets its own MIDI assignment within DP. Name Reason modules! Names will carry into DP.
You have to save your Reason rack separately from your DP file!
Quantize: swing. For triplet swing use 66%, for dotted-eighth/sixteenth swing use 75%. Smaller values more "laid back"; larger ones more "tense".
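In ticks, at a hypothetical resolution of 480 PPQ (pulses per quarter note), those swing percentages move the off-beat eighth like this:

```python
PPQ = 480  # a common sequencer resolution (hypothetical here), ticks per quarter note

def swung_offbeat_ticks(swing_pct, ppq=PPQ):
    # The second eighth of each pair moves from 50% of the beat to swing_pct of it.
    return round(ppq * swing_pct / 100)

print(swung_offbeat_ticks(50))  # 240 ticks: straight eighths
print(swung_offbeat_ticks(66))  # 317 ticks: very close to a triplet (320 = exact 2/3)
print(swung_offbeat_ticks(75))  # 360 ticks: dotted-eighth/sixteenth
```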
MOTU Symphonic Instrument (MSI):
Use within shell: UVI Workstation
Create instrument track: UVI. Open MSI and double-click to select instrument.
Multiple racks, each one with 16 channels (only 4 in default state).
Excellent acoustic samples of strings, winds, brass, harp, guitar, keyboards, etc. Samples are "streamed"--they live on disc, but the initial few milliseconds of each is loaded into RAM.
Can adjust envelopes, filters, LFO, reverb.
Saved with sequence--don't need to save a separate file.
Munchkinization if you use pitchbend or glide on a sampled instrument and change the pitch far from the original sampled pitch, because you are transposing the formants along with the fundamental. It distorts the harmonic content of the sound. So samplers use "Multisamples" with different samples for each note (or small range); therefore samples don't need to be transposed very far, if at all.
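A toy sketch of how a multisample avoids big transpositions (the zone table and file names are made up):

```python
# Hypothetical keyzone map: (low_note, high_note, root_note, sample_file)
ZONES = [
    (0, 47, 43, "cello_G2.wav"),
    (48, 59, 55, "cello_G3.wav"),
    (60, 127, 67, "cello_G4.wav"),
]

def pick_zone(midi_note):
    for low, high, root, sample in ZONES:
        if low <= midi_note <= high:
            # Transposition stays within the zone, so formants barely shift.
            return sample, midi_note - root
    raise ValueError("note out of range")

print(pick_zone(50))  # middle zone, transposed down 5 semitones
```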
Reason: NN19 sampler.
Set up Stereo audio track with Reason input. Launch Reason, set up MIDI track with Reason "bus 6" output.
To load a patch: click on the folder icon next to the patch name. Patch libraries are in:
Factory Sound Bank>NN19 Sampler Patches> various families
Patch contains one or more samples. Samples are arranged into keyzones within a patch. Big difference between loading a patch (.smp) and loading a sample (.aif or .wav)!
Insert measures: puts blank space on all tracks. To insert blank space on individual tracks, use Shift command.
Snip=cut & close up gap
Splice=paste & push to the right
Merge=paste without deleting what you're pasting into
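A hypothetical model of those three paste behaviors, with events as (time, data) pairs (just an illustration, not how DP stores anything):

```python
def snip(events, t0, t1):
    # Cut everything in [t0, t1) and close up the gap.
    d = t1 - t0
    return [(t if t < t0 else t - d, x) for t, x in events if not (t0 <= t < t1)]

def splice(events, at, clip, dur):
    # Push events at/after `at` to the right by `dur`, then paste the clip.
    shifted = [(t if t < at else t + dur, x) for t, x in events]
    return sorted(shifted + [(at + t, x) for t, x in clip])

def merge(events, at, clip):
    # Paste without deleting what's already there.
    return sorted(events + [(at + t, x) for t, x in clip])

track = [(0, "kick"), (1, "snare"), (2, "kick")]
print(snip(track, 1, 2))  # [(0, 'kick'), (1, 'kick')]
```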
Setup>Chasing. Turn it on for pitchbend and controllers; it will take care of resetting everything at the beginning of a sequence. Usually don’t turn it on for notes.
But Make Sure to: Initialize all controllers that you’re using, especially volume and pitchbend, at the beginning of the sequence, or you may be left with hanging controllers, meaning tracks that are out of tune or that have been faded out!
Transpose by key or scale interval, harmonize=transpose and keep original
Change velocity and duration (add, subtract, scale, set, limit).
Looping: can have different loops on different tracks.
Select a range or other characteristic (duration, velocity, top or bottom note of chord) and move to another track. You can cut the notes from the existing track, or copy to the clipboard.
View Filter: Global (all including graphic windows) or Event List (only). View and edit only certain types of data. Good for finding glitches or unusual events in a track, or for eliminating all of the pitchbend or controller info.
Event list: notes, controllers, etc. Good for analyzing a track closely, inserting or changing discrete events. Use View Filter to focus on types of events
Step time entry (command-8). Change duration. Step=rest, held notes will become a chord until you release one of them.
Define a rhythmic grid, and bring the notes to it.
Strength: how close to the grid to make it. Less than 100% can keep some of the original rhythmic feel.
Offset: place quantized notes fixed distance before or after the grid.
Randomize: Places notes randomly away from grid after quantizing.
Humanize (under Region menu): randomize without quantizing.
Techniques for composing electronically:
Not restricted to what you can put on paper, or dealing with a specific instrument or group of instruments. A lot of freedom! Where do you start?
• a sound. Play with a sound on the keyboard and see where it takes you
• a beat or a groove. Play a 4-bar phrase and then jam over it
• a melody
• a chord progression. Blues, falling fifths, rising thirds, modal, V of V of V, etc.
MIDI window can show one or several tracks at the same time. Use show/hide pane to set these up. To differentiate between tracks, reset the colors for the tracks in the Tracks window. Track with pencil icon can be added to.
Changing velocities in Continuous Data area. For individual note velocity change, use arrow. Group select and change, use marquee and arrow (same tool).
Left tool: what to show and what will be affected by selection
Right tool: pencil and reshape tools will affect this data.
Show only: hides data that you haven't selected.
Toolbox at right side of transport window. (Shift-O if it's not visible.) Default tool is arrow. Use pencil for adding events. Use S-curve to reshape events (like velocity). When reshaping velocity, notes must first be selected!
Add notes or controllers with pencil. edit with arrow, or group edit with marquee and arrow. Reshape controllers or velocities with reshape ("S”) tool.
Draw a curve, or use reshape, or set to free.
Reshape tool can be used with other curves, and is periodic. Period depends on grid setting.
Sequence window: one or more tracks in separate lines. Dropdowns for volume, pan, controllers. Only controllers that are already on the track will show up. Change size.
Tempo slider (manually variable) vs. Tempo map (Conductor, fixed with automatic changes)
Create a tempo map: in Tracks window, double-click on conductor. Can magnify vertically by left edge dragging (magnify tool). OR open right-side pane using Shift-], select Event List, and Conductor. Can do time signature changes in Event List too.
Controllers in Sound Canvas. Volume, pan, sustain, a few others are universal. Otherwise unique to each instrument; Sound Canvas uses only these:
To set up controllers on the Novation keyboards: (Defaults are on the whiteboard)
Press the Controls button: display says “Move Control”. Move the control you want to program. Press “+”. Use the Data knob to dial in the number. Push the knob to save the assignment. Go back to “MIDI Chan” so you don’t disturb it. Remember your assignments, since they may not be there the next time.
Wait for note: with countoff (“infinite”) or without.
Changing controller data in Continuous Data area.
Right tool: pencil and reshape tools will affect this data.
Left tool: what to show and what will be affected by selection. “Show only” hides everything except what is selected in right tool.
Toolbox at right side of transport window. Default tool is arrow. Use pencil for adding events. Use S-curve to reshape events.
Hanging controllers: make sure that you turn off sustain pedal, mod wheel, etc. and zero pitch bend before you stop recording, or those controllers will stay in effect forever!
Mixing console automation: Changing volume and pan, recording motion. Put track into “auto record”, but do not click on Record in Transport window! Click on play. (Whether track is record-enabled or not doesn’t matter.) All movements of fader and pan knob will be recorded, can then be edited in sequence or MIDI window.
To play back volume or pan, Automation Play must be enabled!
General MIDI (GM): a common configuration for MIDI program changes. Used for multimedia and file exchange, like .txt.
Sound set (128 programs), arranged in families of 8. Percussion (47 sounds) on ch 10.
Fixed controller meanings: volume, pan, expression, mod wheel, PB range
Most computers and mobile devices now include a GM synth.
Sound Canvas is a General MIDI virtual instrument, has 16 channels, each one can have any of 128 sounds, except drums on Channel 10. Follows GM protocol and has additional sounds in each family.
Configuring the keyboard (press "MIDI chan" two times until it says "Music64 1")
Use Pads for Drums on channel 10 as much as possible!!
Editing in Digital Performer: moving and changing notes. Grid on and off for selecting and moving events. Sets cursor movement, not (necessarily) note placement. Temporarily defeat it with Apple key.
Copy/paste or option-drag to duplicate events. Cut, paste, to replace
Tracks vs. channels. More than one track can be assigned to a channel. All tracks going to the same channel have to have the same patch/sound and changing volume on one of those tracks will change it on all of them!
Volume vs. velocity
Important to note the difference between the velocity byte in a note-on command (which affects the onset of the note only) and the volume controller (#7) (which can affect the sound continuously). Velocity = how loud the instrument is played. Volume = how high the fader is. Velocity affects one note at a time; Volume affects all notes on the channel.
Sign on to server: open Music64 storage, username: Music 64; password same as on computer. Create a folder with your name on it for your stuff.
NEVER RUN A FILE FROM THE SERVER.
Always copy it to the Documents folder on the local computer first. When you are done, move your folder to the server for safekeeping.
Input devices for electronic music
Keyboards, drum pads, guitars, wind controllers, pitch convertors, marimbas, maracas, positional indicators, ribbons, game controllers
Because the physical characteristics of the device are not linked with acoustic characteristics, you have total freedom.
The software MIDI studio: Virtual MIDI cables: connecting applications (instruments and effects) using different protocols like Rewire, VST, Apple AU, TDM (ProTools), DSMidi (wireless), OSC (Wired or Bluetooth), and IAC. In the software studio, MIDI speed limit doesn't have to be adhered to. We use AU, VST, Rewire, and MOTU Audio System instruments and effects.
Note on: Command byte (144-159 depending on channel) followed by two data bytes (0-127): note number, key velocity=how fast the key moves from top to bottom
Note off: Command byte (128-143) + note number + velocity. Why note-off velocity?
Continuous Controllers: (176-191) + controller number (mod wheel, volume, pan, sustain) + value
128 possible controllers (numbered 0-127) per channel.
Many controllers defined, some as transmitters (mod wheel=1), some as receivers (volume=7), some as both (sustain pedal=64).
Others that are defined: Stereo pan=10, Foot pedal=4, Data slider=6
Many others loosely or not defined.
Program change: (192-207) (Cn) + single data byte=value. Program change numbers are 0-127, often but not always called 1-128. Calls up a register in the synth's memory.
Pitchbend: (224-239) + two data bytes: Least Significant Byte (LSB), then Most Significant Byte (MSB). Designers wanted “double precision” so that when the pitch wheel was moved you didn’t hear discrete pitches. So possible values are 0 to (128*128)-1=16,383. Turns out it wasn’t necessary: the LSB is almost always ignored. But it must be there anyway.
"Zero pitch bend" is actually a value of 64 (MSB). Many sequencing programs describe pitchbend as +/-64, but in reality the values are 0-127.
Channel pressure/aftertouch: (208-223) + single data byte=value. Amount of pressure on keys after the note is played. Used for vibrato, filters, pitch change, etc.
Key pressure/polyphonic aftertouch: (160-175) + note number + value. Individual pressure values for each key. Quite rarely used: expensive to implement, complex to program, uses a lot of bandwidth
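The byte layouts above can be sketched directly (channel is 0-15 internally, shown to users as 1-16; note that pitchbend sends the LSB before the MSB on the wire):

```python
def note_on(channel, note, velocity):
    # 0x90 (decimal 144) plus the channel in the low four bits, then two data bytes
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, value):
    # value is 0..16383; 8192 means "no bend". Split into two 7-bit data bytes.
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

print(list(note_on(0, 60, 100)))  # [144, 60, 100]: middle C, channel 1
print(list(pitch_bend(0, 8192)))  # [224, 0, 64]: centered wheel (MSB = 64)
```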
Recording MIDI into a sequencer (Digital Performer)
Open DP: New, create a new folder with your name in Documents folder.
Name the project
Turn on click (metronome)
Turn on Wait for Note
Choose instrument in SoundCanvas
Record a track
Drums are on channel 10. You can overdub onto any track with Overdub switch enabled.
Types of synthesis
Additive or Fourier: http://www.falstad.com/fourier/ (choose mag/phase view, log view)
Building up from individual harmonics, with separate levels and envelopes for each. Impossible to do in analog, hard to do digitally because it's hard to make interactive: so many computations per second. Used in Kawai synths, the Kurzweil 150, some experimental synths. The Hammond organ.
Subtractive (Reason): Start with complex waveforms like noise, sawtooth, square, and filter out harmonics — high, low, or bandpass. Filter envelopes much easier to deal with than individual harmonic envelopes. Analog synthesis is subtractive, also can be done digitally, digital simulations of analog synths now very popular. Also digitally controlled analog synths. “Real analog” synthesis has drift problems.
FM (Proton): uses 2-6 sine waves or more complex waves ("operators") modulating each other, each with an envelope. Modulator, instead of filter, determines harmonic content, which can be very complex. Lends itself well to real-time control, not hard to compute. However, programming is not at all intuitive. Describe Sound Blaster chip: awful 2-op FM.
Physical modeling: (Kong) digital models of instrument parts exist in software, interact in real time. Also called "waveguide". Might have excitation of a flute, resonance of a saxophone, bell of a trumpet. Can include elements like breath, embouchure pressure, tonguing, change in tube resonance as you cover "holes". Very powerful, difficult to do -- lots of computation. Yamaha mostly. Instruments can generally play only one or two voices at a time.
Granular (Reason): breaks up files (like samples) into tiny pieces, plays them back and reassembles them at different speeds and pitches, in real time. Adds processing.
RAM: make recordings in a digital audio program, load them in. Also called (incorrectly) "wavetable". Samples are looped for long sustains.
Sample + Filters + DSP: Digital samples stored in ROM or RAM, played back under control of MIDI.
Multisampling prevents munchkinization. Formants—spectral areas that remain constant despite pitch; if you transpose them, you change the characteristic of the sound.
Adding DSP to samples: envelopers, filters, LFO
Disc streaming: can use longer samples, no loops; headers of samples are in RAM, the rest streams from disc. Very resource heavy! Limitations on how many notes can sound at once. Some composers use multiple computers in their studio for large orchestrations.
Real-time Control over synth parameters:
Key number = note/pitch
Key Velocity: volume, timbre, envelope depth or speed
Pitch bend: variable range
Mod wheel: LFO depth or filter cutoff
MIDI: What is it?
MIDI is not music, not audio, but it is a representation of a musical performance, like a score, or a player piano roll. Every performance nuance is communicated, without the actual music. Notes, sliders, knobs, pedals, patch changes, other parameters.
Cannot be stored on tape, must be stored digitally — a sequencer: a list of instructions (commands) with their timings. Sequencer can be a computer, and the sequence is a computer file. Stored on disk, you can move it around between studios, or over a network. Usually quite small.
On stage, one keyboard could act as master and play all the others. In studio, a central controlling sequencer could control an entire orchestra of synths.
Central controller has performance data, while the actual sound is produced by the remote devices.
Initially this was all done using discrete hardware modules connected by MIDI cables. Now we can do it all in software.
Since performance data is broken down into gesture parameters, can isolate individual performance parameters: change key velocity, or note number, or pitchbend setting, or instrument, or rhythm without changing other performance parameters
Prepare an entire performance, change any parameter, singly or globally, at any time.
Who owns it
Public domain. A specification, not a standard, no legal or official standing. The industry agreed to support it, market forces keep it in line. If someone strays, users and reviewers will report the violation.
A living language: many holes in the spec for future development. Participation from all corners of the industry: hardware mfrs, software mfrs, systems designers
A compromise: performance vs. cost, initial hardware cost c.$10/unit. Seems slow by today's standards, but is still effective for its purpose.
Electrical and digital protocol
Originally used cables carrying binary on/off 5-volt DC signals. Each MIDI byte or word consists of eight data bits plus two "framing" bits. Speed is 31,250 bits per second, or 3,125 bytes/sec (8 data bits + 2 framing bits = 10 bits/byte). Now mostly virtual.
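The bit-rate arithmetic above can be sketched in a few lines of Python (a minimal illustration of the numbers in the notes, nothing more):

```python
# Each byte on a classic MIDI cable is 8 data bits + 2 framing bits = 10 bits.
BAUD = 31_250          # bits per second
BITS_PER_BYTE = 10     # 8 data bits + framing bits

bytes_per_second = BAUD // BITS_PER_BYTE
print(bytes_per_second)       # 3125 bytes/sec

# A Note On message is 3 bytes (status + note number + velocity),
# so it takes just under a millisecond to transmit:
note_on_ms = 3 * BITS_PER_BYTE / BAUD * 1000
print(round(note_on_ms, 2))   # 0.96 ms
```

That per-message delay is why "slow by today's standards" still works: even dense musical passages rarely need more than a few hundred messages per second.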
Travels in one direction only: from MIDI Out jack to MIDI In jack. If you want bi-directional, you need two cables.
A universal standard: if an instrument has MIDI, it must be able to talk to any other MIDI instrument without restrictions.
MIDI over USB, etc.
Now MIDI can be piggybacked on other cables, like USB, FireWire, and Ethernet. But there are no industry standards for these, so manufacturers have to write software drivers, installed on the computer, so that it can understand what's coming in. However, Apple and Microsoft publish specs for MIDI over USB that instrument and interface manufacturers can choose to follow; devices that do are called "class compliant" and don't need special drivers. This includes our keyboards.
Advantage: these connections can run much faster than original MIDI speed!
Disadvantage: instruments can't plug directly into each other; you need a computer in the middle.
MIDI Message set
An eight-bit number has a decimal range of 0 (00000000) to 255 (11111111)
MIDI has two types of bytes:
Status or Command byte (>127, first bit is 1) is instruction
Data byte (≤127, first bit is zero) is value.
Some commands are defined as having 2 data bytes, some have 1 data byte.
Receiving device knows what to expect. Incomplete command is usually ignored.
First four bits of command byte is the type of instruction. Second four bits is the channel number. Early MIDI devices only read one channel at a time, ignored data on other channels (some, like drum synths, still do). Means you can use different devices on the same MIDI cable.
Modern synths, called "multitimbral", sort out data by channel, assign to different sounds in the instrument.
Channel number = 0000 (zero) to 1111 (15). But we call them 1 to 16.
Software synths are not limited in the number of channels they can play, but are usually limited to 16 or else organized in banks of 16.
Note on: Command byte (144-159, depending on channel) followed by two data bytes, each 0-127: note number, then key velocity (how fast the key moves from top to bottom).
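The status/data byte rules above can be demonstrated with a short sketch that builds and decodes a Note On message (the byte values follow the message format described in the notes):

```python
# Build a MIDI Note On message: status byte 0x90 + channel (0-15),
# then two data bytes: note number and key velocity (each 0-127).
def note_on(channel, note, velocity):
    status = 0x90 | (channel & 0x0F)   # 144-159 depending on channel
    return bytes([status, note & 0x7F, velocity & 0x7F])

msg = note_on(0, 60, 100)    # middle C on channel 1, played moderately hard
print(list(msg))             # [144, 60, 100]

# Decoding: the first (high) bit distinguishes status from data bytes.
status, note, vel = msg
assert status > 127          # status byte: high bit is 1
command = status >> 4        # first four bits = message type (9 = Note On)
channel = status & 0x0F      # second four bits = channel, 0-15
print(command, channel + 1)  # 9 1  (channel 0 is what we call channel 1)
```

A multitimbral synth does exactly this channel extraction on every incoming status byte to route the message to the right sound.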
Other musical factors:
Noise = random frequencies, sometimes filtered (colored)
Envelope = Change over time, applicable to volume, pitch, timbre, location, LFO
Vibrato = Low Frequency Oscillator (LFO) = periodic change in any of the above. Vibrato itself can have an envelope, both intensity and speed.
Location = L/R, F/B, U/D
Audio electronics principles and components:
Transducer = converts one type of energy to another
Microphone = converts sound waves in air to Alternating Current (AC) voltages. Microphone has a magnetic metal diaphragm mounted inside a coil of wire. Diaphragm vibrates with sound waves, induces current into coil, which is analog of sound wave. This travels down a wire as an alternating current: positive voltage with compression, negative voltage with rarefaction.
Dynamic vs. condenser/capacitor mics. Condenser mics can use phantom power, a battery, or be permanently ("electret") charged.
Pickup patterns: Omni, figure-8, cardioid, hypercardioid, shotgun
Speaker, headphones = converts AC voltages to sound waves in air. Speaker has a wire coil that receives alternating current from amplifier, paper cone is attached to a magnet inside the coil. As the current alternates, the magnet moves in and out, and makes the paper cone move in and out, producing compression and rarefaction.
Human Ear converts sound waves to nerve impulses. Each hair or cilium responds to a certain frequency. The brain interpolates sounds between those frequencies. As we get older, hairs stiffen, break off, and high-frequency sensitivity goes down. Also can be broken by prolonged or repeated exposure to loud sound.
Cables and Connectors:
Balanced: XLR, ¼-inch TRS. Less noise (differential amplifier cancels noise picked up on line), better frequency response, longer distance.
Unbalanced: ¼-inch TS, RCA, mini. More prone to interference, high-frequency roll-off.
Stereo unbalanced: ¼-inch TRS, mini TRS.
DI box: for balancing unbalanced lines, like guitar cables on long runs.
Electronic Instruments—The Parts of a Synthesizer
Oscillator: simple or complex waveform
Filter/Equalizer: static or dynamic
Envelopes: volume, filter, pitch. Attack/Decay/Sustain/Release: approximation of natural envelopes, invented by Moog. Invertable for filter use.
LFO: volume, filter, pitch. Variable depth and rate, selectable waveform. random segments/random levels (sample+hold)
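The Attack/Decay/Sustain/Release shape named above can be sketched as code. This is a minimal linear-segment ADSR, assuming times in seconds and levels from 0 to 1 (real synth envelopes often use exponential curves instead):

```python
# Minimal linear ADSR envelope: returns a list of per-step amplitudes.
def adsr(attack, decay, sustain, release, held, rate=1000):
    env = []
    n = int(attack * rate)                              # Attack: ramp 0 -> 1
    env += [i / n for i in range(n)]
    n = int(decay * rate)                               # Decay: ramp 1 -> sustain
    env += [1 - (1 - sustain) * i / n for i in range(n)]
    env += [sustain] * int(held * rate)                 # Sustain: hold while key is down
    n = int(release * rate)                             # Release: ramp sustain -> 0
    env += [sustain * (1 - i / n) for i in range(n)]
    return env

shape = adsr(attack=0.01, decay=0.05, sustain=0.7, release=0.2, held=0.5)
print(max(shape), shape[-1])   # peaks at 1.0, ends near 0.0
```

The same shape can drive volume, a filter cutoff, or pitch; inverting it (1 minus each value) gives the "invertable for filter use" behavior mentioned above.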
Characteristics of a sound:
Sound is vibration of a medium, such as air. Travels in waves: compression, rarefaction = changes in air pressure = volume.
Frequency = pitch = Number of changes in pressure that go past your ear per unit time.
Expressed in cycles per second, or Hertz (Hz).
The mathematical basis of the musical scale: go up an octave = 2x the frequency.
Each half-step is the twelfth root of 2 (approx. 1.059) higher than the one below it.
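A quick check of the scale math, taking A = 440 Hz as the reference (a standard assumption, not stated in the notes): twelve half-steps of the twelfth root of 2 must land exactly an octave (2x) higher.

```python
import math

# Each half-step multiplies frequency by the twelfth root of 2.
SEMITONE = 2 ** (1 / 12)
print(round(SEMITONE, 3))      # 1.059

# Twelve half-steps up from A = 440 Hz gives one octave: 880 Hz.
a4 = 440.0
a5 = a4 * SEMITONE ** 12
print(round(a5, 6))            # 880.0
```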
The limits of human hearing = approximately 20 Hz to 20,000 Hz or 20 k(ilo)Hz.
Fundamentals vs. harmonics = the fundamental is the predominant pitch; harmonics are multiples (sometimes not exactly even) of the fundamental that give the sound its character, or timbre.
Loudness (volume, amplitude) Difference between maximum and minimum pressure
measured in decibels (dB). The decibel is actually a ratio, not an absolute.
A minimum perceptible change in loudness is about 1 dB. Something we hear as being twice as loud is about 10 dB. So we talk about “3 dB higher level on the drums” in a mix, or a “96 dB signal-to noise-ratio” as being the difference between the highest volume a system is capable of and the residual noise it generates.
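Since the decibel is a ratio, the numbers above can be computed directly. The standard formula (assumed here, not given in the notes) is dB = 20 x log10 of the amplitude ratio; note that a doubling of *amplitude* is about +6 dB, while the roughly +10 dB figure above is for *perceived* loudness doubling:

```python
import math

# A decibel expresses a ratio: dB = 20 * log10(amplitude ratio).
def db(ratio):
    return 20 * math.log10(ratio)

# Doubling the signal amplitude is about +6 dB:
print(round(db(2), 1))       # 6.0

# A 16-bit system's 65,536:1 amplitude range works out to roughly
# the "96 dB signal-to-noise ratio" mentioned above:
print(round(db(65536), 1))   # 96.3
```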
“dB SPL” is referenced to a fixed value: 0 dB SPL (Sound Pressure Level) = the threshold of human hearing. Obviously subjective, so it's set at 0.0002 dyne/cm².
The total volume or “dynamic” range of human hearing, from the threshold of perception to the threshold of pain, is about 130 dB, so the threshold of pain is about 130 dB SPL.
Timbre = complexity of waveform, number and strength of harmonics
Harmonic series = break down waveforms into harmonics or partials. Fourier transform/analysis.
Show sine, saw, triangle, square, noise
Show harmonic breakdown of sine, saw, triangle, and square waves. Saw: every harmonic at level 1/n. Square: only odd harmonics, at 1/n. Triangle: only odd harmonics, at 1/n².
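The harmonic recipes above translate directly into additive synthesis: sum sine waves at the listed levels and the classic waveforms emerge. A minimal sketch (nine harmonics is an arbitrary choice for illustration):

```python
import math

# Harmonic levels per the recipes above: saw = every harmonic at 1/n,
# square = odd harmonics at 1/n, triangle = odd harmonics at 1/n^2.
def partial_levels(wave, n_harmonics=9):
    levels = {}
    for n in range(1, n_harmonics + 1):
        if wave == "saw":
            levels[n] = 1 / n
        elif wave == "square" and n % 2 == 1:
            levels[n] = 1 / n
        elif wave == "triangle" and n % 2 == 1:
            levels[n] = 1 / n ** 2
    return levels

def sample(wave, t, freq=1.0):
    """One output sample at time t (seconds): sum the sine partials."""
    return sum(level * math.sin(2 * math.pi * n * freq * t)
               for n, level in partial_levels(wave).items())

print(sorted(partial_levels("square")))   # [1, 3, 5, 7, 9]
```

This is the Fourier idea in reverse: analysis breaks a waveform into these partials; adding them back up reconstructs (an approximation of) the waveform.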
In an electronic system, the ability to reproduce those high harmonic frequencies is called frequency response or fidelity.
Using filters or equalizers to change timbral characteristics: highpass, lowpass, bandpass.
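As an illustration of the lowpass idea, here is a one-pole smoothing filter, the simplest digital lowpass (the coefficient formula is a common textbook approximation, assumed here rather than taken from the notes):

```python
import math

# One-pole lowpass: each output sample moves a fraction of the way
# toward the input, so fast (high-frequency) changes are attenuated.
def lowpass(samples, cutoff_hz, rate=44100):
    a = 1 - math.exp(-2 * math.pi * cutoff_hz / rate)  # smoothing coefficient
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)      # smooth toward the input
        out.append(y)
    return out

# A step input: the sharp (high-frequency) edge is rounded off,
# then the output settles toward 1.0.
step = [0.0] * 10 + [1.0] * 200
smoothed = lowpass(step, cutoff_hz=1000)
print(round(smoothed[-1], 3))   # settles at 1.0
```

A highpass response can be had from the same building block by subtracting the lowpassed signal from the original; bandpass combines the two.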
©2018 Paul D. Lehrman, all rights reserved