|Music 64 • Fall 2017 • Lecture notes|
Audio (data) compression
Linear PCM (or any linear coding of digital audio) is inefficient: the data rate is the same whether or not there's any signal, and regardless of the frequency range or dynamic range. Is there a way to compress it so we can throw away the "unimportant" parts and keep the "important" ones?
Lossy compression, using psychoacoustic masking effect. Softer sounds, sounds close in frequency to others, sounds close in time to others, can be masked. Redundant data in the two channels eliminated. Stereo separation can be reduced, especially in the lower frequencies. Variable rate encoding uses a slower bit rate when data is less complex.
Different from dynamic range compression: it doesn't change the dynamic range. And unlike Zip or StuffIt compression, the discarded data is not recoverable.
Codec: Coder/decoder, or compressor/decompressor.
MP3, short for MPEG-1 Audio Layer III: MPEG-1 was originally designed for compressing video; this is the audio part of the spec. Reasonable quality; compression ratio about 10:1, depending on bit rate. Bit rate is not sample rate!!
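The compression ratio follows directly from the bit rates. A quick sketch (the 128 kbps MP3 rate and 4-minute song length are assumed, typical values, not part of the spec):

```python
# Data rate of uncompressed CD audio vs. a typical MP3 (illustrative numbers).
cd_rate_bps = 44_100 * 16 * 2        # sample rate x bits x channels = 1,411,200 bits/s
mp3_rate_bps = 128_000               # a common MP3 bit rate (assumed example)

ratio = cd_rate_bps / mp3_rate_bps   # about 11:1
minutes = 4
cd_megabytes = cd_rate_bps * 60 * minutes / 8 / 1_000_000    # ~42 MB for 4 minutes
mp3_megabytes = mp3_rate_bps * 60 * minutes / 8 / 1_000_000  # ~3.8 MB

print(round(ratio, 1), round(cd_megabytes), round(mp3_megabytes, 1))
```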
The algorithm for making MP3 files is not necessarily free: it needs to be purchased in software whose maker has paid a license to the owner (the Fraunhofer Institute, a German firm). Open-source encoder: LAME ("LAME Ain't an MP3 Encoder").
Apple iTunes uses AAC (Advanced Audio Coding), better algorithm, more efficient, can do multichannel. Part of MPEG-4 spec.
Apple Lossless (ALAC) gives about a 2:1 size advantage and is truly lossless. Similar: FLAC (Free Lossless Audio Codec--open source, 30-50% size reduction), Meridian Lossless Packing, Shorten (SHN).
John Monforte's experiment:
Convert an AIFF file to a compressed format (MP3, AAC, etc.). Then convert it back to AIFF. Flip the polarity (or phase) on the re-converted file. Mix the flipped/re-converted file with the original. If the files were identical, you would end up with an empty file, and with lossless compression that's what happens. With lossy compression, what you end up with is the difference between the two files, i.e., what the compression process removes.
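The experiment can be sketched numerically, with arrays standing in for the decoded files and coarse re-quantization standing in for a lossy codec (real codecs also add delay, so the actual files would need to be time-aligned first):

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.uniform(-1, 1, 1000)              # stands in for the original AIFF

lossless_copy = original.copy()                  # perfect round trip
lossy_copy = np.round(original * 8) / 8          # stand-in for a lossy codec:
                                                 # coarse quantization discards detail

# Flip the polarity of each copy and mix (sum) it with the original.
null_lossless = original + (-lossless_copy)      # identical files: all zeros
residue_lossy = original + (-lossy_copy)         # what the "codec" threw away

print(np.max(np.abs(null_lossless)))             # 0.0
print(np.max(np.abs(residue_lossy)) > 0)         # True: nonzero difference signal
```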
Today's PowerPoint: Careers in pro and consumer audio and music
Turning Scans and PDFs into Sibelius scores
Open Sibelius first; when it's done loading, open PhotoScore Lite.
Open PDFs in PhotoScore, by dragging to upper panel. They should group themselves at the bottom of the left column. "Read" all pages. Adjust staff lines if necessary. Correct time signatures and key signatures if necessary. When all pages are open, Send to>Sibelius. Specify your own instruments.
Arrange in Sibelius: automatic orchestrator.
Can take one (or two) staff and spread it out on many, e.g., piano to string quartet; or reduce many to one (or two), e.g. piano reduction of ensemble.
After file is loaded, create the staves you want to arrange to. Choose the material you want to arrange and Copy it. Select the destination staves, Notes>Arrange. Choose an algorithm: "Explode" is standard.
Best to use Arrange on only a few bars at a time; it creates the cleanest results.
"Graintable" (Reason's Malström synth): uses tiny pieces of samples, plays them in different ways and at different rates, interpolates between the "grains". (Can't load your own samples in.)
Index: where the sample starts when it receives a note-on.
Motion: how fast to move through table (unrelated to pitch). May be forward or forward+backward, defined in table. All the way to the left: static, plays the same grain over and over.
Shift: formant shift
Pitch: pitch shift
Modulators, Filters, and shapers can be used on one or both oscillators, different parameters.
LFOs have complex waveforms, can be used in "one-shot" mode.
Spread: separates two oscillators in stereo output.
Combinator ("combi"): lets you control multiple modules with one MIDI channel, and send multiple controller messages from one control.
Insert modules within combi. Can have mixer inside combi or mix externally.
Open "Programmer", select the module you want to assign a controller to, insert source (mod wheel, breath control, expression, etc.), destination, and scaling in the table. A source can be assigned to one or more parameters, with scaling and polarity, in each module. (Mod wheel gets through as mod wheel regardless of whether it's assigned.)
"Rotaries" are controllers 71-74.
"Buttons" are binary, controllers 75-78.
Analog: simple waveform.
Wavetable: sample, just a few cycles, move through it using position.
Phase modulation: two waves in series; the second one modulates the phase of the first.
FM pair: two sine waves, relative pitch creates sidebands. FM control determines level of sidebands.
Multi: multiple osc same wave, detunable.
Noise: white, colored, band-limited
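The FM pair's sidebands can be computed directly: they appear at the carrier frequency plus and minus whole-number multiples of the modulator frequency (their levels, set by the FM control, follow Bessel functions, which this sketch ignores):

```python
def fm_sidebands(carrier_hz, modulator_hz, pairs=3):
    """Frequencies (Hz) of the carrier and its first `pairs` sideband pairs.
    Sidebands sit at carrier +/- k * modulator; negative ones fold back."""
    freqs = {carrier_hz}
    for k in range(1, pairs + 1):
        freqs.add(abs(carrier_hz - k * modulator_hz))   # lower sideband (folded)
        freqs.add(carrier_hz + k * modulator_hz)        # upper sideband
    return sorted(freqs)

# 1:1 carrier/modulator ratio -> harmonic spectrum:
print(fm_sidebands(440, 440))   # [0, 440, 880, 1320, 1760]
# Non-integer ratio -> inharmonic, bell-like partials:
print(fm_sidebands(440, 113))   # [101, 214, 327, 440, 553, 666, 779]
```

This is why the relative pitch of the two operators determines whether the result sounds harmonic or clangorous.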
LP (Moog-style), Comb (many teeth), Formant (x-y, harmonic-related peaks, vowel sounds), state variable (notch or peak sweepable between high-pass and low-pass)
LFOs have delays, assignable in matrix.
Global section: affects everything. Envelope assigns to filter, or somewhere else.
LFO 2 is free-floating, must be assigned.
Libraries of MIDI files:
Libraries of scanned classical pieces:
The Petrucci Music Library (IMSLP): http://imslp.org/wiki/IMSLP:Library_Portal
U of Wisconsin Music Library (links to others): http://music.library.wisc.edu/resources/scores.html#online-scores
MIDI editing vs. notation editing. Why?
MIDI: more accurate, reproducible, contains non-note information (e.g., vibrato, timbral change), greater levels of timing and dynamic resolution.
Notation: interpretable, readable by human musicians, large body of work in that format.
Set up score: use template, or blank score and add your own instruments (I). Move up and down in Instrument window after adding.
Input methods: mouse, keypad, Mac keyboard, MIDI keyboard. Use esc key to get out of entry mode!
Keyboard: "Flexi-time" (command shift-F) records to a click track, makes approximations. Flexi-time options lets you set the approximations.
"Live Playback" button in Playback pane preserves original performance; otherwise it follows score literally.
If you put in a note that's out of range, it will show up as red.
Loading sounds: Play> Configuration: Sibelius 6 Essentials
View>Panorama: straight across.
Zoom in and out from menu or with slider on lower right.
Can’t edit while playing!
Click in bar selects bar, double-click selects staff, triple-click selects entire part. If one staff is selected, only it will play. De-select to play all.
To simplify after the fact: Notes>Plug-ins>Simplify Notation>Renotate Performance/Overwrite selected passage.
Copy in place: select an item or bar or staff, hold down the option key, and click where you'd like to copy it to. It will take whatever you had selected and clone it. Be careful where you option-click: it takes the position very literally!
Drums do not follow the GM drum map!
Putting audio into a video soundtrack (dialogue, sound effects, ambience): Lock audio track to SMPTE with Lock column. Otherwise audio start times will change when you change tempo. Change duration in Spectral Effects if necessary. To bring audio that's already in the movie into DP, use drop-down in movie: import audio into sequence. It will create a new audio track with the soundtrack on it.
Native Instruments Komplete 8
Huge library: piano, drums, synths, orchestral. Streaming samples: actually on disk, but headers are loaded into RAM.
Instantiate with a single instrument track: Kontakt 5 (stereo). Drag instruments into the rack and assign MIDI channels (you can assign multiple instruments to a single MIDI channel). Set up a MIDI track for each instrument. Each rack has only stereo outputs, but you can use multiple channels on the internal mixer. You can also insert effects on the mixer.
All instrument setups are also saved with the sequence--don't need to save a separate file.
Battery: giant percussion instrument. Many drum sets; import or record your own samples, tune and change parameters for each one.
FM8: FM synthesis, very flexible. Hundreds of interesting patches.
Video in North America and Japan: Actual speed is 29.97 fps. To keep SMPTE time and "wall clock" time in sync (they differ by 3.6 seconds/hour), sometimes use "drop-frame" timecode, in which some frame numbers are skipped. Digital Performer allows this, usually not an issue.
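The 3.6-seconds-per-hour drift works out to 108 frame numbers per hour, which is exactly what drop-frame removes: frame numbers 00 and 01 at the start of every minute, except each tenth minute (2 frames x 54 minutes = 108). A sketch of the standard renumbering, assuming the usual SMPTE drop-frame rules:

```python
def dropframe_timecode(frame_count):
    """Frame count at 29.97 fps -> SMPTE drop-frame timecode string.
    Frame numbers 00 and 01 are skipped at the start of every minute
    except minutes 00, 10, 20, 30, 40, 50."""
    d, m = divmod(frame_count, 17_982)     # 17,982 real frames per 10 minutes
    dropped = 18 * d + (2 * ((m - 2) // 1_798) if m >= 2 else 0)
    fc = frame_count + dropped             # renumber at a nominal 30 fps
    return "{:02d}:{:02d}:{:02d};{:02d}".format(
        fc // 108_000, fc // 1_800 % 60, fc // 30 % 60, fc % 30)

print(dropframe_timecode(1_800))      # one real minute in: 00:01:00;02
print(dropframe_timecode(107_892))    # one hour of real frames: 01:00:00;00
```

Note no frames are actually dropped, only frame *numbers*, which is why the semicolon separator is used to flag drop-frame addresses.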
Playing with video for use in DP
You can edit a QuickTime file in QT Player 7 (not 10!). Set in (I) and out (O) points, and Trim to Selection. Save as... and select "Self-Contained Movie." You can also splice QT files together using Copy and Paste.
DP with video
To use video in DP, must be QuickTime format. Shift-V opens Movie window. Control-click-release in Movie window and select "Set Movie" to load.
If movie has a soundtrack, you can turn it on with Assign soundtrack, or you can import it into the sequence as a new audio track.
You must know 1) start time of film: it usually defaults to zero, you can change it: Control-click-release in movie window, select Set Start Time on menu; best to use 01:00:00:00 in case you want music to start before picture.
2) start time of sequence: Open the Chunks list in the sidebar, select the chunk, and use the upper-right dropdown to set the Chunk Start Time. It can start before the film for count-ins, etc., or after.
You must use the Conductor Track (not the Tempo Slider) and put in an initial tempo. You can switch back to slider to slow things down to record MIDI stuff, but picture will be out of sync! That's okay--just make sure you put it back to Conductor track when you're done.
Set Transport counter to show timecode ("Frames" NOT "real time") and bars/beats. Under Time Formats: Event Information, check Measures and Frames.
Spotting: Find your hit points: places in the film where something in the music changes. Open Markers window in sidebar, set markers at hit points with dropdown. Lock (to timecode). Now when you change tempo, marker stays with picture.
To find tempo for first section: Select how many measures you want to be in there from beginning to first marker (make sure you select conductor track along with all other tracks.) Look at marker frame number. Region>Scale tempos: Set end time to marker position. Tempos will change. Repeat for all subsequent sections, scaling tempo from one marker to the next.
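The Scale Tempos step is just arithmetic: the tempo is however many beats you want before the marker, divided by the time to the marker. A hypothetical helper (DP does this calculation for you):

```python
def tempo_for_hit(hit_seconds, measures, beats_per_measure=4):
    """Tempo (BPM) at which `measures` full measures end exactly at the hit."""
    return measures * beats_per_measure * 60.0 / hit_seconds

# A hit point 20 seconds in, to be reached in 8 measures of 4/4:
print(tempo_for_hit(20.0, 8))   # 96.0 BPM
```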
To follow action on the screen, like walking or bouncing, mute MIDI tracks, turn on DP metronome, set tempo control to Tempo Slider, adjust slider until it matches action, write down the tempo, put that number into the conductor track at the beginning or the place where you want to use it.
Music for picture
Films: Music must follow picture: change with scenes, accent events.
Directors love to recut scenes, music has to change.
In old days either cut music on tape (couldn't always find good edit points) or re-record it (very expensive!)
MIDI is great for this because it's easy to change tempos, edit music at the performance level, not the recording level.
Most important issue: how to keep picture and music in sync.
To be in sync: must know when to start and stop, where you are, how fast you are going.
Old style: Video on tape. Special audio track containing SMPTE timecode, an analog signal conveying digital information: Frame number (hours:minutes:seconds:frames; 30 frames=1 second), and speed. Similar track on audio tape deck. Synchronizer reads both codes, adjusts speed and location of audio recorder to match video position.
Now we can do it all on the same platform.
Using aux buses in DP to route multiple tracks through effects: Use sends (post-fader) to an aux bus (1-2), set up an Aux track with the effect and bus 1-2 as its input. Set the effect's mix to 100%. (If you don't want any dry signal at all, set the send to "pre-fader".)
Spectral effects (constructive): separate pitch, tempo, formants
Reason: Kong. Drum pads/machine. Each pad assignable sound, volume, output, effects, etc. (notes not changeable). Like NN-19, can sample right into it.
Subtractor: uses static oscillators, one or two at a time. Two filters, two LFOs. Amplitude and filter envelopes similar to NN19. Modulation envelope allows LFO or other changes over time. Keyboard tracking on the filter determines whether the filter character changes as you go up. Noise: adds noise with variable "color". FM: adds extra harmonics, often not integral multiples.
Give each instrument its own space: use time, panning, and EQ to accomplish this.
Start with foundation: drums, bass.
To make kick stand out more, use EQ to boost at around 2kHz, not bottom.
To take tubbiness out of bass, reduce at 250-400 Hz. Low frequencies build up when mixing, so be careful to avoid that by eq'ing out that range on other instruments as well (e.g., acoustic guitar, piano)
To take "tone" out of snare drum, increase EQ sharply and adjust frequency until you hear it exaggerated, then reduce that frequency.
To make space around an instrument without using reverb (which can make things muddy), use a single short delay: 40-80ms
Kick and bass usually in the center. We don't hear bass signals directionally, this spreads the energy out. Drums can be in stereo.
Find complementary instruments and spread them left and right to widen image.
To add reverb to the whole mix, use an Aux track: Create a send in one of the channels to "Bus1-2", Aux track will appear with "Bus1-2" as input. Insert a reverb, mix at 100%. Use individual sends to adjust reverb on each instrument.
Recording audio into DP:
Create a mono or stereo audio track depending on whether you're using one mic or two. Set input and output to the Scarlett. (Always make the output of the track stereo, even if it's a mono track, so you can pan it.)
Record-enable the track. Set the level on the Scarlett. The Peak light should come on rarely. Check the level in the DP Mixer or Tracks window, and adjust the input if necessary.
After recording, turn off Record-enable to play back.
Can move audio events around in Sequence window.
Cursor in top half for moving; in bottom half for selecting and trimming.
Use Edit>Split (Command-Y) to break up recording into different regions. All edits are non-destructive and entire file can always be recovered!
Scrub with Audible "speaker" on.
Non-linear editing: Any piece of any sound can be played from any track in any order.
Fade a region by clicking in red dot at the top left or right edge. Crossfade adjacent regions by clicking in one of the red dots and moving across the other.
Audio can be dragged into a DP session from the desktop if it's AIFF, WAV, or MP3--including from a CD, USB flash drive, SD card, etc. Stereo files can only go in stereo tracks; mono files can only go in mono tracks! The file is copied (and converted if necessary) into the Audio Files folder.
Effects in DP: Plug-ins come in various formats: Apple Audio Units, MOTU Audio System (MAS), VST, Rewire. Can be effects or virtual instruments (like SoundCanvas or Reason). Formats we can't use: RTAS, TDM, DirectX.
Don't use plug-in presets and combinations! They are never right for what you're doing and are NOT good starting points!
Automating effects. Works on audio, Aux, and instrument tracks (not MIDI!). Put track in automation record. Insert effect plug-in in mixing console. Move a control in the effect. Now that parameter appears in the drop-down for that track in the Sequence window. It can be edited there using arrow or reshape tool.
MIDI automation of effects. Click the Learn Controller icon at the bottom of the plug-in window. Click the desired plug-in parameter. Move, turn, or press the desired control on your MIDI device. That control is now assigned to that parameter permanently (within the sequence).
To cancel the operation before completing it, click the icon again at any time during the process.
Analog tape problems: Frequency response: requires very fine particles and faster tape speeds to capture small waveforms/high frequencies accurately.
Noise: Medium has inherent noise due to random (Brownian) motion of molecules.
Dynamic range: Magnetic orientation of particles can be changed a limited amount. Push them too hard and they resist.
Copying: Each copy adds "generation" noise.
Speed variation: Wow (low-frequency variation) and flutter (high) caused by imperfections in mechanical system, tape stretching.
Longevity: Many tapes of the '70s and '80s are now unplayable because of binder failure.
Editing analog: destructive, linear. Have to physically cut tape, put into proper sequence.
Things you can do with audio in the analog realm:
splice/trim/concatenate, reverse, equalize/filter, delay/ reverb, change speed, pan, loop, layer (overdub)
Played in class: Electronic Studies by Ilhan Mimaroglu
Fidelity: not dependent on physical medium
Copyability: each copy is a perfect clone (as long as you don’t compress it)
Longevity: medium doesn't wear out quickly, and can be cloned before it does
What it does: Measures ("samples") the level of the waveform at a particular instant and records it as a number.
How often the sample is taken=sampling rate
What the possible range of numbers is=word length or bit length or resolution
The more bits, the better the approximation of the signal. The difference between the input signal and the converted signal is heard as noise, and is called quantization noise. The quantization noise is the noise floor of the medium. The range in dB from the noise floor to the highest level before clipping is the dynamic range. Dynamic range = number of bits x 6 dB (approx).
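The 6-dB-per-bit rule can be verified numerically by quantizing a full-scale sine and measuring the quantization noise. A sketch (the 997 Hz test frequency is an arbitrary choice):

```python
import numpy as np

def quantization_snr_db(bits, n=100_000):
    """Quantize a full-scale sine to `bits` bits and measure the SNR in dB."""
    t = np.arange(n)
    x = np.sin(2 * np.pi * 997 * t / 44_100)     # full-scale test tone
    levels = 2 ** (bits - 1) - 1                  # largest positive code
    q = np.round(x * levels) / levels             # uniform (midtread) quantizer
    noise = x - q                                 # quantization error
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

for bits in (8, 16):
    print(bits, round(quantization_snr_db(bits), 1))
# close to the 6-dB-per-bit rule (theory for a sine: 6.02 * bits + 1.76 dB)
```

So 16 bits yields roughly 96-98 dB of dynamic range, which is the CD figure usually quoted.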
The higher the sampling rate, the more high frequencies can be recorded. Sampling rate must be at least twice the highest frequency desired=Nyquist theorem. Signals higher than 1/2 the sampling rate (the Nyquist frequency) will result in aliasing. Filters are used before the conversion process to make sure no frequencies higher than the Nyquist are converted.
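Frequencies above the Nyquist fold back down into the audible band. A sketch of where an unfiltered tone would land, assuming the 44.1 kHz CD rate:

```python
def alias_frequency(f, sample_rate=44_100):
    """Frequency actually produced when a tone at f Hz is sampled with no
    anti-aliasing filter: the spectrum repeats every sample_rate, then
    reflects around the Nyquist frequency (sample_rate / 2)."""
    f = f % sample_rate
    return min(f, sample_rate - f)

print(alias_frequency(10_000))   # below Nyquist: passes through as 10000
print(alias_frequency(30_000))   # above Nyquist: folds down to 14100
```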
Standard format for CD digital audio: 44.1 kHz, 16 bits, stereo
Higher sampling rates and resolutions are used in pro audio, but we can't hear the difference.
Convertors handle this sampling and un-sampling. We need them because the world is analog, and our ears respond to analog.
A/D and D/A convertors are built into the Mbox2Pro. The Mac also has convertors, 16/44, but it's hard to get high quality in such an electrically noisy environment.
Signal cannot go above zero (0 dBFS): hard clipping, sounds terrible (unlike analog clipping, which can be interesting)
Signal cannot go below noise floor—last bit.
You can use low-level white noise ("dither") to create a "false" noise floor and mask quantization noise so the last bit is never exposed; white noise sounds more natural than quantization distortion. Modern noise shaping uses filtered noise that is almost inaudible but does the same thing.
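A sketch of what dither buys you: a tone whose amplitude is less than half of one 16-bit step quantizes to pure silence, but with triangular (TPDF) dither added first it survives, encoded in the noise. The amplitudes and random seed here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
lsb = 2 / 2 ** 16                                 # one 16-bit step over a +/-1.0 range
sine = np.sin(2 * np.pi * 440 * np.arange(n) / 44_100)
x = 0.4 * lsb * sine                              # tone quieter than the last bit

def quantize(sig):
    return np.round(sig / lsb) * lsb

plain = quantize(x)                               # rounds to all zeros: tone is gone
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)  # triangular PDF dither
dithered = quantize(x + tpdf * lsb)               # tone survives inside the noise

print(np.all(plain == 0))                         # True: undithered tone vanished
print(abs(np.mean(dithered * sine)) > 1e-6)       # True: tone still correlates
```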
Linear recording: preserves all data. "PCM" encoding most popular, but others exist.
Lossy recording (or compression): compromises quality, we'll talk about it at end of semester.
Flanger/phaser/chorus: very short delays that cause comb filtering: multiple sharp notches in the spectrum. If you move the delay time with an LFO, the notches move, resulting in a "jet plane" or motion effect. Chorus also creates small pitch changes that modulate.
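The comb in question: mixing a signal with a single delayed copy cancels the frequencies whose half-period equals the delay, i.e. odd multiples of 1/(2 x delay). A sketch:

```python
def comb_notches(delay_ms, max_hz=20_000):
    """Audible notch frequencies of y[n] = x[n] + delayed copy: the copy
    arrives exactly out of phase at odd multiples of 1 / (2 * delay)."""
    f0 = 1000.0 / (2.0 * delay_ms)    # delay in ms -> first notch in Hz
    notches = []
    k = 1
    while k * f0 <= max_hz:
        notches.append(k * f0)
        k += 2                        # odd multiples only
    return notches

print(comb_notches(1.0))   # 1 ms delay: notches at 500, 1500, 2500, ... Hz
```

At flange/chorus delays of a few ms the notches sit well inside the audible band and read as coloration; at longer delays the comb is so dense it reads as space or echo instead.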
Unison: Doubling the sound with small pitch changes and delays.
Controlling Reason effects: use a separate MIDI track for each module. Consult controller charts.
Freezing tracks in Digital Performer (a/k/a rendering, printing). MIDI tracks are not audio, and cannot be mixed down or put on a CD or MP3. Freezing creates an audio track from a MIDI track. Different ways to do it in Reason, and SoundCanvas/UVI-MSI.
in MSI and SoundCanvas (can't do both at the same time): select all of the MIDI tracks for the synthesizer and the corresponding Instrument track, from beginning to end. Make sure both MIDI and Instrument tracks are set to Play. From Audio menu, choose "Freeze selected tracks."
Sequence will play through and a single new (stereo) audio track will be created, which will mix all the instrument’s tracks together. Any changes in volume or pan, or controller changes, or other settings will be preserved on the audio track.
in Reason: Put Reason's Audio track into record (and nothing else). Set Reason's MIDI tracks to play. Mute everything else. Press Record and run the sequence. Reason’s output will be saved as an audio track. (It can be a good idea to use countoff without the metronome to make sure the first note is recorded.) Effects and mixer settings in Reason will be preserved on the audio track.
After freezing tracks: You can mute the MIDI and instrument tracks (but don’t delete them in case you want to modify them later!) and just play the audio tracks. If you have other MIDI tracks you haven't frozen, you can play them at the same time and they will sync.
Final two-track mixdown for CD or export: Select multiple audio tracks (not MIDI!) and from the Audio menu select "Bounce to disk." Specify AIFF interleaved stereo, 16-bit (you can always export as MP3 later). Name is automatically created, and the file is stored in the project's "Bounces" folder, unless you want it somewhere else. Process is much faster than real time.
Analog recording: some history
Vinyl: Original records used wax. Later went to plastic/vinyl. A physical model of the waveform is carved into a pliant surface using a "cutter head": a diamond stylus connected to two electromagnets, which make it wiggle according to the waveform. In monaural records, the stylus moves horizontally. In stereo records, the channels are encoded at a 90° angle to each other (45° from vertical), and the stylus moves horizontally and vertically. The mono (sum) signal is horizontal and the stereo (difference) signal is vertical. To play back, a needle tracks the groove in the surface and induces voltages in the coils of the cartridge; these voltages are then amplified.
Frequency response: The needle has to track very fast, so the vinyl surface resolution has to be very fine. Friction causes the surface to heat up and get softer. Every time you play a record, some of the high frequencies get lopped off.
Dynamic range: If you push the needle too far it jumps out of the groove. Too much bass in one channel will push needle sideways.
Speed variation: wow (slow) and flutter (fast) from warped record, off-center hole, uneven stretching of drive belt.
Noise: dust collects on the surface, and after playing, it is ground in so noise becomes permanent. Surface scratches cause pops and crackle.
Turntable rumble: low-frequency noise very hard to avoid even in best turntables.
Tape: embeds waveforms as magnetic patterns on a magnetized (iron oxide) surface on plastic ribbon.
Effects in Reason. Can be used on Aux bus (for all modules connected to a mixer) or "in-line" with just one unit.
Reverb: multiple delays simulating sound bouncing off the walls, floor, and ceiling.
Size of room, distance of walls from source, materials on surfaces will determine reverb size and frequency characteristics. Pre-delay (distance from closest wall); early reflections (size and shape of room, sonic characteristics of walls); tail (size of room, liveness of surfaces); damping (liveness of room)
Equalization: emphasize or reduce specific frequencies. Usually used with a module "in line", and not in an aux bus.
Delay: discrete echoes, can be timed to tempo or fixed; feedback control for number of echoes.
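Timing a delay to tempo is simple division: one quarter note lasts 60,000/BPM milliseconds. A hypothetical helper:

```python
def delay_ms(bpm, note=0.25):
    """Delay time in milliseconds for a note value at a given tempo.
    `note` is a fraction of a whole note: 0.25 = quarter, 0.125 = eighth."""
    return 60_000.0 / bpm * (note / 0.25)

print(delay_ms(120))                 # quarter note at 120 BPM: 500.0 ms
print(delay_ms(120, 0.125))          # eighth note: 250.0 ms
print(delay_ms(120, 0.125 * 1.5))    # dotted eighth: 375.0 ms
```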
Big (audio) mixer and small (synth) mixer in Reason—just use the small one! There's a 6:2 and a 14:2 if you need more inputs. Connect the mixer outputs directly to the Audio I/O panel.
Velocity assignment: amplitude envelope, filter envelope, sample start
When assembling a drum or sound effects patch, when you don't want pitches to change, turn off "Keyboard Track."
To make realistic instruments you need multiple samples. Transposing a sound too far shifts formants and causes munchkinization: the sound becomes unnatural.
Drum loops and samples are in Redrum, Kong, and Dr. Rex folders, other samples in NN19 and NN-XT folders.
You can Load multiple samples, and they will all appear on Sample Select knob. Assign them to different regions, tune them.
Real-time controls in Reason:
All knobs, sliders, and buttons in a module have a MIDI controller assignment.
Each module has a chart in the "ReasonCC" folder (in Documents) showing the controller numbers for that module. Assign a slider or knob on the keyboard to send the MIDI controller of the knob/slider you want to control on the module. You can record sliders with notes, or afterwards in Overdub mode.
When the track plays back, you can see the knobs move on the module.
Loopback problem: When Reason is running under Digital Performer, moving a control on a Reason module will move the same-numbered control on whichever Reason module is record-enabled in DP. This can badly screw up your Reason rack!
The Solution: Put DP in Setup>Multirecord, set input as "Impulse Impulse-any" on all tracks. This filters out data coming from everything but the keyboard. But when you are recording, make sure you are recording on only one track at a time!
Effects in Reason: Very efficient! Effects can be patched in after each module or using aux sends. When using effects "in line" (between source module and mixer), use Wet/Dry to balance the amount of effect. When using effects in Aux buses, set Wet to 100%, use sends and returns to balance the amount of effect. 6:2 mixer has only one Aux bus--if you need more, use 14:2 mixer.
Reason: intro and NN19 sampler.
Sample playback using AIFF/WAV/MP3 files, from local files, modified files, or your own sounds
Three uses of NN19:
1) to play factory patches, of which there is a large library on Local disk>Applications>Music Applications>Reason>Factory bank.
2) to make your own instruments, using multisamples that you create yourself or import from somewhere else.
3) to make banks of sound effects, loops, beats, etc. that can be triggered from MIDI.
Advanced MIDI Tool on front panel: assign channels to modules, change keyboard channels as needed
Default template: 6-input mixer, NN19 module, reverb.
Look at wiring: Tab key to flip rack. Mixer output goes to hardware 1-2 input. Ignore “Master Section.”
To load a patch: click on the folder icon next to the patch name. Patch libraries are in:
Factory Sound Bank>NN19 Sampler Patches>various families
Patch contains one or more samples. Samples are arranged into keyzones within a patch. Big difference between loading a patch (.smp) and loading a sample (.aif or .wav)!
To create a patch from scratch: Edit>Reset Device, load Sample in by clicking on the folder icon above the keyzone map or drag it in from the Finder.
Samples can be taken from within NN19 or NNXT folder, or anywhere else.
To create a new keyzone for another sample, Edit>Split Key Zone.
Set root note to determine relative pitch on keyboard.
Tune samples to get a consistent scale up the entire keyboard.
Loops: can be turned on and off for each sample. Forwards or Forwards/Backwards. Sample must be designed for looping, or else you'll hear a glitch or a space.
To add another module, highlight the mixer and create a new NN19. New module will automatically be wired into mixer.
Modifying the patches: LFO speed and amount, filter, filter envelope, amp envelope, pitchbend range, mod wheel assignment. All controls outside of keyzone area affect all samples.
More Samples in "Sounds" folder on local disks. Sound FX Libraries, Earshot FX library, SampleCell FX bank, SampleCell instrument bank, Optimum drums, other (weird) banks.
Also can bring in AIFF or WAV files (no MP3!) You can drag sound files from the desktop into the keyzone area: select a keyzone, and the sample will be assigned to it!
Sampling in NN19—must be done in standalone mode, not through DP!
Use microphone through XLR cable at the station, or guitar or other audio source. Set interface input level in the middle, lower if Peak light is on a lot. Configure Reason Audio I/O module for mono or stereo: flip panel around, disconnect one channel if mono. Set Big Meter for input and adjust level. Start sampling, play sound. 30-second limit.
Software will automatically cut off leading silence, load sample into instrument.
Saving a Reason "Song": If you have brought in or recorded your own samples, go to File>"Song Self-Contain Settings" and "check all" before saving—or Reason will not know where your samples are!
We do not use the sequencer in Reason!
Rewire mode: Using Reason in Digital Performer: Always launch DP first, then Reason. At the end of the session, always quit Reason first.
DP and Reason communicate using “Rewire” protocol. After launching Reason, you must set up two paths to Reason from Digital Performer: audio, and MIDI.
In Reason Prefs: turn off keyboard in Advanced MIDI and de-assign MIDI channels in Interface. MIDI channels will be handled in DP. Audio assignment is automatic through DP.
In Digital Performer: Create a new Stereo Audio track. Set output to the Scarlett. Set input to Reason Mix L1-R2. Turn on "Mon" (input monitor) on that track. Reason can have more than two audio outputs, but we'll just use two for now.
Then create a new MIDI Track, and assign it to "Bus6:Reason NN19 1." Set it to record. With multiple modules, each one gets its own MIDI assignment within DP. Name Reason modules! Names will carry into DP.
Save your Reason rack separately from your DP file!
Region menu: globally change velocity and duration (add, subtract, scale, set, limit).
Select a range or other characteristic (duration, velocity, top or bottom note of chord) and move to another track. You can cut the notes from the existing track, or copy to the clipboard.
Event list: notes, controllers, etc. Good for analyzing a track closely, inserting discrete events. Use View Filter to focus on types of events.
Time signatures in event list.
Techniques for composing electronically:
Not restricted to what you can put on paper, or dealing with a specific instrument or group of instruments. A lot of freedom! Where do you start?
• a sound. Play with a sound on the keyboard and see where it takes you
• a beat or a groove. Play a 4-bar phrase and then jam over it
• a melody
• a chord progression. Blues, falling fifths, rising thirds, modal, V of V of V, etc.
MOTU Symphonic Instrument (MSI): Open from within the "UVI Workstation" shell. Default is four instruments, but you can add more, up to 16. Use multiple instantiations if you need more than that!
Excellent acoustic samples of strings, winds, brass, harp, guitar, keyboards, etc.
Can adjust envelopes, filters, LFO, reverb.
Saved with sequence--don't need to save a separate file.
Setup>Chasing. Turn it on for pitchbend and controllers; it will take care of resetting everything at the beginning of a sequence. Usually don't turn it on for notes.
Transpose by key or scale interval, harmonize
Drum track: looping and copying
Step time entry (command-8). Change duration, turn on Auto Step. Step=rest, chords.
Define a rhythmic grid, and bring the notes to it.
Swing: changes every other grid interval. 100%= no swing. 66%=triplet swing, 75%=dotted note
Strength: how close to the grid to make it. Less than 100% can keep some of the original rhythmic feel.
Offset: place quantized notes fixed distance before or after the grid.
Randomize: Places notes randomly away from grid after quantizing.
Humanize: randomize without quantizing.
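The swing percentages above can be pictured as the position of the off-beat within each pair of grid intervals: 50% of the way through is straight, 2/3 is triplet swing, 75% is the dotted feel. (DP's displayed swing values use its own scale, so treat this as a generic sketch, not DP's exact math.)

```python
def swung_offbeat_tick(grid_ticks, position=2/3):
    """Tick at which the off-beat lands within a pair of grid intervals.
    position 0.5 = straight, 2/3 = triplet swing, 0.75 = dotted feel."""
    return round(2 * grid_ticks * position)

eighth = 240                              # eighth-note grid at 480 ticks/quarter
print(swung_offbeat_tick(eighth, 0.5))    # straight: 240
print(swung_offbeat_tick(eighth, 2/3))    # triplet swing: 320
print(swung_offbeat_tick(eighth, 0.75))   # dotted: 360
```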
MIDI window can show one or several tracks at the same time. Use show/hide pane to set these up. To differentiate between tracks, reset the colors for the tracks in the Tracks window. Track with pencil icon can be added to.
Sequence window: one or more tracks in separate lines. Dropdowns for volume, pan, controllers. Change size independently or all together.
Controller parameter numbers in Sound Canvas:
To set up controllers on the Novation keyboards: (See whiteboard for default slider settings)
• Press Controls: display says “Move Control”.
• Move the slider or knob you want to program.
• Press “+”.
• Use knob to dial in the number.
• Push the knob to save the assignment.
• Go back to “MIDI Chan” so you don’t disturb it.
Remember your assignments, since they may not be there the next time.
Controllers in DP: Draw a curve, or use reshape, or set to free.
Reshape tool can be used with other curves, and is periodic. Period depends on grid setting.
Insert measures: puts blank space on all tracks. To insert blank space on individual tracks, use Shift command.
Snip=cut & close up gap
Splice=paste & push to the right
Merge=paste without deleting what you're pasting into
Pan and volume: Make sure automation play is turned on in mixer window or they won't work!
(Other controllers don't need to have it turned on.)
Tempo editing in event list: From main window, open right-side pane using Shift-], select Event List. Or click on E (or shift-E), opens event list, select Conductor. You can now enter or edit tempo changes and time signature changes. Remember they won't take effect unless tempo control in transport window is set to "Conductor Track."
Digital Performer: Replace/overdub mode.
Changing velocities in Continuous Data area. For individual note velocity change, use arrow. Group select and change, use marquee and arrow (same tool).
Left tool: what to show and what will be affected by selection
Right tool: pencil and reshape tools will affect this data.
Toolbox at right side of transport window. Default tool is arrow. Use pencil for adding events. Use S-curve to reshape events (like velocity). When reshaping velocity, notes must first be selected!
Add notes or controllers with pencil. edit with arrow, or group edit with marquee and arrow. Reshape controllers or velocities with reshape ("S”) tool.
Mixer window: Set volume and pan for each track, statically. Is saved with sequence.
Tempo slider vs. Tempo map (Conductor)
Create a tempo map: in Tracks window, double-click on conductor. Can magnify vertically by left edge dragging (magnify tool).
Mixing console automation: Changing volume and pan, recording motion. Put track into “auto record”, but do not click on Record in Transport window! Click on play. (Whether track is record-enabled or not doesn’t matter.) All movements of fader and pan knob will be recorded, can then be edited in sequence or MIDI window.
To play back volume or pan, Automation Play must be enabled!
General MIDI (GM): a common configuration for MIDI program changes. Used for multimedia and file exchange, like .txt for text.
Sound set (128 programs), arranged in families of 8. Percussion (47 sounds) on ch 10.
Fixed controller meanings: volume, pan, expression=volume, mod wheel, PB range
Most computers and mobile devices now include a GM synth.
Sound Canvas, a virtual instrument, has 16 channels, each one can have any of 128 sounds, except drums on Channel 10. Follows GM protocol, with extended sound sets.
Configuring the keyboard (press MIDI chan 2x until it says Music64)
Name your tracks! option-click on name. Enter to continue down the list.
Use Pads for Drums on channel 10 as much as possible!!
Editing: moving and changing notes. Grid on and off for selecting and moving events. Sets cursor movement, not (necessarily) note placement. Temporarily defeat it with Apple key.
Copy, paste to duplicate events. Cut, paste, to replace
Tracks vs. channels. More than one track can be assigned to a channel. All tracks going to the same channel have to have the same patch/sound!
Volume vs. velocity
Note the difference between velocity byte in note-on command (which affects onset of note only, each note has its own) and volume controller (#7) (can affect sound continuously, affects all notes on the channel).
Velocity=how loud the instrument is played. Volume=how high the fader is.
Sign on to server: open Music64 storage, Music 64, f***
Create a folder with your name on it for your stuff.
NEVER RUN A FILE FROM THE SERVER. Always copy it to the local computer first. Copy it back to the server when you're done.
Input devices for electronic music
Keyboards, drum pads, guitars, wind controllers, pitch convertors, marimbas, maracas, positional indicators, ribbons, game controllers
Because the physical characteristics of the device are not linked to its acoustic characteristics, you have total freedom.
An eight-bit number has a decimal range of 0 (00000000) to 255 (11111111)
MIDI has two types of bytes:
Status or Command byte (>127, first bit is 1) is instruction
Data byte (≤127, first bit is zero) is value.
Command set - some commands are defined as having 2 data bytes, some have 1 data byte.
Receiving device knows what to expect. Incomplete command is usually ignored.
The first four bits of a command byte are the type of instruction; the second four bits are the channel number. Early MIDI devices read only one channel at a time and ignored data on other channels (some, like drum synths, still do). This means you can use different devices on the same MIDI cable.
Modern synths, called "multitimbral", sort out data by channel, assign to different sounds in the instrument.
Channel number = 0000 (zero) to 1111 (15). But we call them 1 to 16.
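The byte layout above can be sketched in Python. The helper name is just for illustration; the bit arithmetic is the point:

```python
def parse_midi_byte(b):
    """Classify a single MIDI byte: status (first bit 1) or data (first bit 0)."""
    if b > 127:                      # 0b1xxxxxxx: status/command byte
        command = b >> 4             # top four bits: type of instruction
        channel = b & 0x0F           # bottom four bits: channel 0-15
        return ("status", command, channel + 1)   # humans call the channels 1-16
    return ("data", b)               # 0b0xxxxxxx: a value, 0-127

# A note-on on channel 1 (decimal 144) followed by its two data bytes:
print(parse_midi_byte(144))   # ('status', 9, 1)
print(parse_midi_byte(60))    # ('data', 60)  -- middle C
print(parse_midi_byte(100))   # ('data', 100) -- velocity
```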
Note on: Command byte (144-159) followed by two data bytes (0-127: note number, key velocity=how fast the key moves from top to bottom)
Note off: Command byte (128-143) + note number + velocity. Why note-off velocity?
Continuous Controllers: (176-191) + controller number (mod wheel, volume, pan, sustain) + value. 128 possible controllers (0-127) per channel.
Many controllers defined, some as transmitters (mod wheel=1), some as receivers (volume=7), some as both (sustain pedal=64).
Others that are defined: Stereo pan=10, Foot pedal=4, Data slider=6
Many others loosely or not defined.
Program change: (192-207) (Cn) + single data byte=value. Program change numbers are 0-127, often but not always called 1-128. Calls up a register in the synth's memory.
Pitchbend: (224-239) + two data bytes: Least Significant Byte (LSB) followed by Most Significant Byte (MSB). Designers wanted “double precision” so that when the pitch wheel was moved you didn’t hear discrete pitches. So possible values are 0 to (128*128)-1=16,383. Turns out it wasn’t necessary: the LSB is almost always ignored. But it must be there anyway.
"Zero pitch bend" is actually a value of 64 (MSB). Many sequencing programs describe pitchbend as +/-64, but in reality the values are 0-127.
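Combining the two 7-bit data bytes into the 14-bit pitchbend value, as a quick sketch:

```python
def pitchbend_value(msb, lsb=0):
    """Combine the two 7-bit pitchbend data bytes into one 14-bit value."""
    return (msb << 7) | lsb          # range 0 to 16,383

# "Zero" (centered) pitchbend: MSB = 64, LSB = 0
print(pitchbend_value(64))           # 8192, the midpoint of 0-16,383
print(pitchbend_value(127, 127))     # 16383, maximum upward bend
print(pitchbend_value(0))            # 0, maximum downward bend
```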
Channel pressure/aftertouch: (208-223) + single data byte=value. Amount of pressure on keys after the note is played. Used for vibrato, filters, pitch change, etc.
Key pressure/polyphonic aftertouch: (160-175) + note number + value. Individual pressure values for each key. Quite rarely used: expensive to implement, complex to program, uses a lot of bandwidth
Recording MIDI into a sequencer (Digital Performer)
Open DP: New, save in Documents folder.
Setting up the audio output in DP
Instrument track: SoundCanvas. Output is automatically analog 1-2, inputs will show up on MIDI tracks.
Setting metronome: use audio, Count-in
Tempo setting: set dropdown to “Tempo slider”, move slider.
Arming a track: routes keyboard input to that track. Selecting a patch in the SoundCanvas window.
Analog synthesis is subtractive; it can also be done digitally, and digital simulations of analog synths are now very popular. There are also digitally controlled analog synths (Dave Smith Instruments). “Real analog” synthesis has drift problems.
A real-world Additive synth=Hammond organ
FM (Proton): uses 2-6 sine waves or more complex waves ("operators") modulating each other, each with an envelope. Modulator, instead of filter, determines harmonic content, which can be very complex. Lends itself well to real-time control, not hard to compute. However, programming is not at all intuitive. Describe Sound Blaster chip: awful 2-op FM.
Physical modeling: (Kong) digital models of instrument parts exist in software, interact in real time. Might have excitation of a flute, resonance of a saxophone, bell of a trumpet. Can include elements like breath, embouchure pressure, tonguing, change in tube resonance as you cover "holes". Very powerful, difficult to do -- lots of computation. Yamaha mostly. Instruments can generally play only one or two voices at a time. Will show next class. Kong is simple example: uses physical model of drum.
Granular (Reason): breaks up files (like samples) into tiny pieces, plays them back and reassembles them at different speeds and pitches, in real time. Adds processing.
Samplers (NN19) make recordings in a digital audio program, load them in. Also called (incorrectly) "wavetable". Samples are looped for long sustains.
Digital samples stored in ROM or RAM, played back under control of MIDI.
Multisampling prevents munchkinization. Formants: spectral areas that remain constant despite pitch; if you transpose them, you change the character of the sound. Adding DSP to samples: envelopes, filters, LFO.
Real-time Control over synth parameters:
Key number = note/pitch
Key Velocity: volume, timbre, envelope depth or speed
Pitch bend: variable range
Mod wheel: LFO depth or filter cutoff
MIDI: What is it?
MIDI is not music, not audio, but it is a representation of a musical performance, like a score, or a player piano roll. Every performance nuance is communicated, without the actual music. Notes, sliders, knobs, pedals, patch changes, other parameters.
Cannot be stored on tape, must be stored digitally — a sequencer: a list of instructions (commands) with their timings. Sequencer can be a computer, and the sequence is a computer file. Stored on disk, you can move it around between studios, or over a network. Usually quite small.
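The idea of a sequence as a time-ordered list of commands can be sketched like this. The data layout is hypothetical, just to show the concept, not an actual sequencer file format:

```python
# A sequence: a time-ordered list of MIDI commands.
# Times are in beats; each entry is (time, command, data1, data2).
sequence = [
    (0.0, "note_on",  60, 100),   # middle C, velocity 100
    (1.0, "note_off", 60, 0),
    (1.0, "note_on",  64, 90),    # E above it
    (2.0, "note_off", 64, 0),
]

def events_at(seq, start, end):
    """Return the commands whose timestamps fall in [start, end)."""
    return [e for e in seq if start <= e[0] < end]

print(events_at(sequence, 1.0, 2.0))   # the two events at beat 1.0
```

Because the data is just numbers with timestamps, any single parameter (velocity, note number, timing) can be edited without touching the others, which is the point made above.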
On stage, one keyboard could act as master and play all the others. In studio, a central controlling sequencer could control an entire orchestra of synths.
Central controller has performance data, while the actual sound is produced by the remote devices.
Initially this was all done using discrete hardware modules connected by MIDI cables. Now we can do it all in software.
Since performance data is broken down into gesture parameters, can isolate individual performance parameters: change key velocity, or note number, or pitchbend setting, or instrument, or rhythm without changing other performance parameters
Prepare an entire performance, change any parameter, singly or globally, at any time.
Who owns it
Public domain. A specification, not a standard, no legal or official standing. The industry agreed to support it, market forces keep it in line. If someone strays, users and reviewers will report the violation.
A living language: many holes in the spec for future development. Participation from all corners of the industry: hardware mfrs, software mfrs, systems designers
Electrical and digital protocol
Originally used cables carrying a binary on/off 5-volt DC signal. Each MIDI byte or word consists of eight data bits plus two “framing” bits. Speed is 31,250 bits per second, or 3,125 bytes/sec (8 data bits + 2 framing bits = 10 bits/byte). Now mostly virtual.
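The arithmetic behind those numbers, as a quick check:

```python
# MIDI wire speed: 31,250 bits/sec, 10 bits per byte on the wire
bits_per_second = 31_250
wire_bits_per_byte = 8 + 2           # 8 data bits + 2 framing bits

bytes_per_second = bits_per_second // wire_bits_per_byte
print(bytes_per_second)              # 3125

# A note-on message is 3 bytes, so one cable carries at most:
print(bytes_per_second // 3)         # 1041 note-ons per second
```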
Travels in one direction only: from MIDI Out jack to MIDI In jack. If you want bi-directional, you need two cables.
A universal standard: if an instrument has MIDI, it must be able to talk to any other MIDI instrument without restrictions.
MIDI over USB, etc.
Now MIDI can be piggybacked on other cables, like USB, FireWire, and Ethernet. But there are no industry standards for these, so manufacturers have to make software drivers, installed on computers, so that the computer can understand what's coming in. However, Apple and Microsoft publish specs for MIDI over USB that instrument and interface manufacturers can choose to follow; then they are called "class compliant" and don't need special drivers. This includes our new keyboards.
Advantage: can run faster than MIDI speeds! Disadvantage: you can't plug instruments directly into each other; you need a computer in the middle.
Other musical parameters:
Noise = random frequencies, sometimes filtered (colored)
Envelope = Change over time, applicable to any of the above
Vibrato = Low Frequency Oscillator = periodic change in any of the above. Vibrato itself can have an envelope, both intensity and speed.
Location = L/R, F/B, U/D
Audio electronics principles and components:
Transducer = converts one type of energy to another
Microphone = converts sound waves in air to Alternating Current (AC) voltages. Microphone has a magnetic metal diaphragm mounted inside a coil of wire. Diaphragm vibrates with sound waves, induces current into coil, which is analog of sound wave. This travels down a wire as an alternating current: positive voltage with compression, negative voltage with rarefaction.
Dynamic vs. condenser/capacitor mics. Condenser mics can use phantom power, a battery, or be “permanently” (electret) charged.
Pickup patterns: Omni, figure-8, cardioid, hypercardioid, boundary
Speaker, headphones = transducer, converts AC voltages to sound waves in air. Speaker has a wire coil that receives alternating current from amplifier, paper cone is attached to a magnet inside the coil. As the current alternates, the magnet moves in and out, and makes the paper cone move in and out, producing compression and rarefaction.
Human Ear: converts sound waves to nerve impulses. Each hair or cilium responds to a certain frequency. The brain interpolates sounds between those frequencies. As we get older, hairs stiffen, break off, and high-frequency sensitivity goes down. Also can be broken by prolonged or repeated exposure to loud sound.
Cables and Connectors:
Balanced: XLR, ¼-inch TRS. Less noise (differential amplifier cancels noise picked up on line), better frequency response, longer distance.
Unbalanced: ¼-inch TS, RCA, mini. More prone to interference, high-frequency roll-off.
Stereo unbalanced: ¼-inch TRS, mini TRS.
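The noise-cancelling trick behind balanced wiring can be shown numerically. This is a simplified sketch with made-up sample values; real circuits work on continuous voltages, but the subtraction is the same idea:

```python
# Balanced wiring sends the signal twice, once inverted. Noise picked up
# along the cable hits both conductors equally; subtracting at the
# receiving end (the differential amplifier) cancels the noise and
# doubles the signal.
signal = [0.5, -0.25, 0.75]          # the audio (arbitrary sample values)
noise  = [0.125, 0.125, -0.25]       # interference induced along the run

hot  = [s + n for s, n in zip(signal, noise)]    # +signal + noise
cold = [-s + n for s, n in zip(signal, noise)]   # -signal + noise

recovered = [h - c for h, c in zip(hot, cold)]   # hot minus cold
print(recovered)                     # [1.0, -0.5, 1.5] = 2 x signal, no noise
```

An unbalanced cable has no inverted copy to subtract, so the noise rides along with the signal, which is why those cables are more prone to interference.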
Synthesizers: their parts
Oscillator: simple or complex waveform
Filter/Equalizer: static or dynamic
Envelopes: volume, filter, pitch. Attack/Decay/Sustain/Release: approximation of natural envelopes, invented by Moog. Invertable for filter use.
LFO: volume, filter, pitch. Variable depth and rate, selectable waveform. random segments/random levels (sample+hold)
Types of synthesis
Additive or Fourier: http://www.falstad.com/fourier/
Building up a sound from individual harmonics, with separate levels and envelopes for each. Impossible to do in analog, hard to do digitally because it's hard to make interactive: so many computations per second. Used in Kawai synths, the Kurzweil 150, and some experimental synths.
Subtractive: Start with complex waveforms like noise, sawtooth, square, and filter out harmonics — high, low, or bandpass. Filter envelopes much easier to deal with than individual harmonic envelopes.
Characteristics of a sound:
Sound is vibration of a medium, such as air. Travels in waves: compression, rarefaction = changes in air pressure = volume.
Frequency = pitch = Number of changes in pressure that go past your ear per unit time.
Expressed in cycles per second, or Hertz (Hz).
The mathematical basis of the musical scale: go up an octave = 2x the frequency.
Each half-step is the twelfth root of 2 higher than the one below it. = approx. 1.0595
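The scale math, checked in Python:

```python
# Each semitone multiplies the frequency by the twelfth root of 2.
semitone = 2 ** (1 / 12)             # about 1.0595
print(round(semitone, 4))            # 1.0595

# Twelve semitones up = one octave = exactly double the frequency:
a4 = 440.0                           # concert A
a5 = a4 * semitone ** 12
print(round(a5, 6))                  # 880.0
```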
The limits of human hearing = approximately 20 Hz to 20,000 Hz or 20 k(ilo)Hz.
Fundamentals vs. harmonics = fundamental pitch is predominant pitch, harmonics are multiples (sometimes not exactly even) of the fundamental, that give the sound character, or timbre.
Loudness (volume, amplitude) = difference between maximum and minimum pressure, measured in decibels (dB). The decibel is actually a ratio, not an absolute.
A minimum perceptible change in loudness is about 1 dB. Something we hear as being twice as loud is about 10 dB higher. So we talk about “3 dB higher level on the drums” in a mix, or a “96 dB signal-to-noise ratio” as the difference between the highest volume a system is capable of and the residual noise it generates.
“dB SPL” is referenced to something: 0 dB SPL (Sound Pressure Level) = the perception threshold of human hearing. Obviously subjective, so set at 0.0002 dyne/cm²
The total volume or “dynamic” range of human hearing, from the threshold of perception to the threshold of pain, is about 130 dB, so the threshold of pain is about 130 dB SPL.
Timbre = complexity of waveform, number and strength of harmonics
Harmonic series = break down waveforms into harmonics or partials. Fourier transform/analysis.
Show sine, saw, triangle, square, noise
Show harmonic breakdown of sine, saw, triangle, and square waves. Saw: each harmonic at level 1/n. Square: only odd harmonics, at 1/n. Triangle: odd harmonics, at 1/n².
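Those harmonic recipes can be written out as a small sketch (function name is just for illustration):

```python
def partial_amplitudes(wave, n_harmonics=7):
    """Relative harmonic levels for the classic waveforms described above."""
    amps = []
    for n in range(1, n_harmonics + 1):
        if wave == "saw":
            amps.append(1 / n)                       # every harmonic at 1/n
        elif wave == "square":
            amps.append(1 / n if n % 2 else 0.0)     # odd harmonics only, at 1/n
        elif wave == "triangle":
            amps.append(1 / n**2 if n % 2 else 0.0)  # odd harmonics only, at 1/n^2
    return amps

print([round(a, 3) for a in partial_amplitudes("square", 5)])
# [1.0, 0.0, 0.333, 0.0, 0.2]
```

A sine wave, for comparison, is just the fundamental alone: one partial at full level and nothing above it.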
In an electronic system, the ability to reproduce those high harmonic frequencies is called frequency response or fidelity.
Using filters or equalizers to change timbral characteristics. Hipass, lowpass, bandpass
©2017 Paul D. Lehrman, all rights reserved