Music 64 section 1 • Spring 2018 • Lecture notes
course home

April 30

Audio compression
Linear PCM (or any linear coding of digital audio) is inefficient: the data rate is the same whether or not there's any signal, and regardless of frequency range or dynamic range. Is there a way to compress it so we can throw away the "unimportant" parts and keep the "important" ones?

Lossy compression uses the psychoacoustic masking effect: softer sounds, sounds close in frequency to others, and sounds close in time to others can be masked, so they can be discarded. Redundant data in the two channels is eliminated. Stereo separation can be reduced, especially in the lower frequencies. Variable-rate encoding uses a lower bit rate when the data is less complex.
Different from dynamic compression: it doesn't change dynamic range. Unlike StuffIt or ZIP compression, the discarded data is not recoverable.

Codec = coder/decoder, or compressor/decompressor.

MP3, short for MPEG-1 Layer 3: MPEG-1 was originally designed for compressing video; this is the audio part of the spec. Reasonable quality; compression ratio about 10:1, depending on bit rate. Bit rate is not sample rate!!
The algorithm for making files is not necessarily free: it needs to be purchased in software whose maker has paid a license to the owner (the Fraunhofer Institute, in Germany). Open-source encoder: LAME ("LAME Ain't an MP3 Encoder"), used by Audio Hijack and others.

Apple iTunes uses AAC (Advanced Audio Coding), better algorithm, more efficient, can do multichannel. Part of MPEG-4 spec.
Apple Lossless (ALAC) has about a 2-to-1 size advantage and is truly lossless. Same idea as FLAC (Free Lossless Audio Codec, open source, 30-50% size reduction), Meridian Lossless Packing, Shorten (SHN).
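The "truly lossless" claim is easy to demonstrate with any general-purpose lossless compressor; here is a toy sketch using Python's zlib (an illustration only, not the algorithm ALAC or FLAC actually use):

```python
import zlib

# A highly redundant toy "signal": lossless coders exploit redundancy.
data = b"\x00\x01\x02\x03" * 4096

packed = zlib.compress(data)
assert zlib.decompress(packed) == data   # bit-for-bit identical on decode
print(len(packed), "<", len(data))       # redundant data compresses well
```

Real audio is less redundant than this, which is why lossless audio codecs typically only reach 30-50% reduction.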

John Monforte's experiment:
Convert an AIFF file to a compressed format (MP3, AAC, etc.). Then convert it back to AIFF. Flip the polarity (or phase) on the re-converted file. Mix the flipped/re-converted file with the original. If the files were identical, you would end up with an empty file, and with lossless compression that's what happens. With lossy compression, what you end up with is the difference between the two files, i.e., what the compression process removes.
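The null test can be sketched numerically (a toy illustration with made-up sample values standing in for real file conversion):

```python
# Toy sample values standing in for the original AIFF audio.
original = [0.0, 0.5, -0.25, 0.8, -0.6]

# A lossless round-trip returns identical samples; lossy would not.
round_trip = list(original)

flipped = [-s for s in round_trip]                  # polarity inversion
mixed = [a + b for a, b in zip(original, flipped)]  # sum with the original

print(mixed)  # all zeros: the files were identical, nothing was removed
```

With a lossy round-trip, `mixed` would instead contain exactly what the codec threw away.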

If you want to know more about data compression:

http://www.emusician.com/news/0766/cramped-quarters/145338
http://en.wikipedia.org/wiki/MP3


 

April 25

More on the Ballet mécanique:
The Léger/Murphy film with the Lehrman realization and edit of the Antheil score
How the music was adapted to fit the film

Things to do with what you learned in this class:

Composition
recording
film/TV/Web
commercials
libraries
Games: loops, layers, transitions

Sound design
film/TV/Web
Theatrical

Orchestrating/arranging
Music direction and playing for artists, theater
Assisting producers, artists in the studio
Assisting composers, esp. film/TV

Audio engineering
Studio
Books on tape/Podcasts/voiceovers
Audio for visuals
Broadcast production
Mastering: the last set of ears. Different criteria for CD, downloads, streaming, broadcast

Music programming
Ringtones
Formatting and conforming audio for Web
Games: translation of music>MIDI and vice versa (Guitar Hero)

System design and installation
Studios, project/home studios
Advertising agencies
Web developers
Game developers
Broadcasters
Theatrical, Schools, houses of worship
Industrial (PA, background music)

Product design
Software synths
Plug-ins
Mobile and tablet apps
Sequencers and performance programs
Hardware controllers
Pro and consumer audio hardware
Toys

Tech support
Concert crew
Theater
Radio/TV/cable/Webcast
Software companies (sequencers, instruments, plug-ins)
Hardware (instruments, audio components, computers)
Tech writing, documentation

Sales & marketing
Inside
Retail
advertising
Repping

Education
K-12
Private music schools
Colleges, trade schools
Studios give classes
Online lessons
Video Tutorials


April 23

Reason: Thor
Main panel
Modulation routings
Sequencer/arpeggiator

Oscillators:
analog: simple waveform.
Wavetable: sample, just a few cycles, move through it using position.
Phase modulation: combine two waves in series, and second one modulates first.
FM pair: two sine waves, relative pitch creates sidebands. FM control determines level of sidebands.
Multi: multiple osc same wave, detunable.
Noise: white, colored, band-limited

Filters:
LP (Moog-style)
State variable (notch or peak, sweepable between high-pass and low-pass)
Comb (many teeth)
Formant (x-y, harmonic-related peaks, vowel sounds)

LFOs have delays, assignable in matrix.

Shaper: distortion
Chorus, delay

Global section: affects everything. Envelope assigns to filter, or somewhere else. LFO 2 is free-floating, must be assigned.


April 18

Sibelius
To simplify a passage or track after you've played it in: Note Input>Plug-ins>Simplify Notation. Use Live Playback to keep the original MIDI data.

Scans and PDFs
Open Sibelius first, when it's done open PhotoScore Lite.
Open PDFs in PhotoScore, by dragging to upper panel. They should group themselves at the bottom of the left column. "Read" all pages. Adjust staff lines if necessary. Correct time signatures and key signatures if necessary. When all pages are open, Send to>Sibelius. Specify your own instruments.

Large libraries of scanned classical pieces can be found at:
The Petrucci Project: http://imslp.org/wiki/IMSLP:Library_Portal
U of Wisconsin Music Library (links to others) https://www.library.wisc.edu/music/research-help/sheet-music-scores-online/

Arrange in Sibelius: automatic orchestrator.
Can take one (or two) staff and spread it out on many, e.g., piano to string quartet; or reduce many to one (or two), e.g. piano reduction of ensemble. After file is loaded, Create the staves you want to arrange to. Choose the material you want to arrange and Copy it. Select the destination staves, Note Input>Arrange. Choose an algorithm: "Explode" is standard.
Best to use Arrange for only a few bars at a time, creates cleanest work.


April 11

Sibelius

Click in bar selects bar, double-click selects staff, triple-click selects entire part. If one staff is selected, only it will play. De-select to play all.

To simplify after the fact: Notes>Plug-ins>simplify notation>renotate performance/Overwrite selected passage.

Copy in place: select an item or bar or staff, hold down the option key, and click where you'd like to copy it to. It will take whatever you had selected and clone it. Be careful where you option-click: it takes the position very literally!

Drums not GM map!

MIDI files: an early form of file-sharing; small size, good for low bandwidth. Hundreds of thousands still available. Will play on Macs automatically through QuickTime Player, using the built-in QuickTime Musical Instruments.

Exchange format for all music programs, like .txt. Every sequencer and notation program can load and save .mid. Not necessary for .mid to follow GM spec, but almost all files for general distribution do.

Unless file is very simple, use Digital Performer as a filter before going to Sibelius.
In DP: check to see if the tempo map is adhered to, that is, if the barlines are where they are supposed to be. (Some MIDI file creators ignore the barlines; these files will be hard to work with!) If the barlines are not correct, either find another version of the file or use "Scale Time" to adjust the note lengths, although that requires a bit of arithmetic.

Then quantize attacks and releases, filter out all controllers, remove tempo changes, make sure tracks are labelled correctly, but simply. (Sibelius will assign instruments to the tracks based on their names.) Delete empty tracks. Save as Standard MIDI File, Format 1.

Sources of MIDI files:
http://www.midiworld.com/files/A/all/
http://www.classicalmidiresource.com/archives1.html
http://www.piano-midi.de/

Arrange in Sibelius: automatic orchestrator. Can take one (or two) staff and spread it out on many, e.g., piano to string quartet; or reduce many to one (or two), e.g. piano reduction of ensemble. After file is loaded, Create the staves you want to arrange to. Choose the material you want to arrange and Copy it. Select the destination staves, Note Input>Arrange. Choose an algorithm: "Explode" is standard.
Best to use Arrange for only a few bars at a time, creates cleanest work.


April 9

Kontakt Factory–many more cool sounds.

Sibelius
MIDI editing vs. notation editing. Why?
MIDI: more accurate, reproducible, contains non-note information (e.g., vibrato, timbral change), greater levels of timing and dynamic resolution.
Notation: interpretable, readable by human musicians, large body of work in that format.

Set up score: use template, or blank score and add your own instruments (I). Move up and down in Instrument window after adding.
Input methods: mouse, keypad, Mac keyboard, MIDI keyboard. Use esc key to get out of entry mode!
Keyboard: "Flexi-time" (command shift-F) records to a click track, makes approximations. Flexi-time options lets you set the approximations.

"Live Playback" button in Playback pane preserves original performance; otherwise it follows score literally.

If you put in a note that's out of range, it will show up as red.

Loading sounds: Play> Configuration: Sibelius 6 Essentials

View>Panorama: straight across.

Zoom in and out from menu or with slider on lower right.

Can’t edit while playing.


April 4

Reason:
Subtractor
uses static oscillators, one or two at a time. Two filters, two LFOs. Amplitude and Filter envelopes similar to NN19. Modulation envelope allows LFO or other changes over time. Keyboard tracking on filter determines whether filter character changes as you go up. Noise: adds noise with variable "color". FM: adds extra harmonics, often not integral multiples.

Malstrom
"Graintable," uses tiny pieces of samples, plays them in different ways and at different rates, extrapolates between the "grains". (Can't load your own samples in.)

Index: where the sample starts when it receives a note-on.

Motion: how fast to move through table (unrelated to pitch). May be forward or forward+backward, defined in table. All the way to the left: static, plays the same grain over and over.

Shift: formant shift

Pitch: pitch shift

Modulators, Filters, and shapers can be used on one or both oscillators, different parameters.
LFOs have complex waveforms, can be used in "one-shot" mode.
Spread: separates two oscillators in stereo output.

Combinator
Lets you control multiple modules with one MIDI channel, send multiple controller messages from one control.
Insert modules within combi. Can have mixer inside combi or mix externally.

Open "Programmer", select the module you want to assign a controller to, and insert the source (mod wheel, breath control, expression, etc.), destination, and scaling in the table. A source can be assigned to one or more parameters, with scaling and polarity, in each module. (The mod wheel gets through as the mod wheel regardless of whether it's assigned.) Rotaries are controllers 71-74; buttons (binary) are controllers 75-78.

Native Instruments Komplete 8
Huge library: piano, drums, synths, orchestral. Streaming samples: actually on disk, but headers are loaded into RAM.
Instantiate with a single instrument track: Kontakt 5 (stereo). Drag instruments into the rack and assign MIDI channels (you can assign multiple instruments to a single MIDI channel). Set up a MIDI track for each instrument. Each rack has only stereo outputs, but you can use multiple channels on the internal mixer. You can also insert effects on the mixer.
All instrument setups are also saved with the sequence—don't need to save a separate file.


April 2

Putting audio (dialogue, sound effects, ambience) into a video soundtrack: drag into the sequence (make sure that if the audio is mono, the track is mono!) or into the Soundbites window (where you can see whether it's mono or stereo). Lock audio tracks to SMPTE with the Lock column in the Tracks window; otherwise audio start times will change when you change tempo. Change duration in Spectral Effects if necessary.


March 28

Actual video speed in US (& Canada, Japan) is 29.97 fps. To keep SMPTE time and "clock" time in sync (they differ by 3.6 seconds/hour), sometimes use "drop-frame" timecode, in which some frame numbers are skipped. Digital Performer allows this, usually not an issue.
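The numbers behind drop-frame are simple arithmetic:

```python
# NTSC video runs at 30000/1001 fps, not a round 30.
actual_fps = 30000 / 1001            # ~29.97 fps

# One hour of non-drop timecode counts 30 * 3600 = 108000 frames,
# but at the real frame rate those frames take longer than an hour:
frames_per_tc_hour = 30 * 3600
real_seconds = frames_per_tc_hour / actual_fps
drift = real_seconds - 3600
print(round(drift, 1))               # 3.6 seconds per hour

# Drop-frame compensates by skipping 2 frame numbers every minute,
# except minutes 00, 10, 20, 30, 40, 50:
dropped_per_hour = 2 * (60 - 6)      # 108 frame numbers per hour
```

No frames of video are actually dropped; only the numbering skips ahead.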

Playing with video for use in DP
You can edit a QuickTime file in QT Player 7 (not 10!). Set in and out points, and Trim to Selection. Save as... and select "Self-Contained Movie." You can also splice QT files together using Copy and Paste.

DP with video
To use video in DP, must be QuickTime format. Shift-V opens Movie window. Control-click in Movie window and select "Set Movie" to load.
If movie has a soundtrack, you can turn it on with Assign soundtrack, or you can import it into the sequence as a new audio track.

You must know start time of film: it usually defaults to zero, you can change it: Control-click-release in screen, select Set Start Time on menu; best to use 01:00:00:00. And you need to know when you want sequence to start: Open the Chunks list in the sidebar, select the chunk, and use the drop down to set the Chunk Start Time. It can start before the film for count-ins, etc., or after.

You must use the Conductor Track (not the Tempo Slider) and put in an initial tempo. You can switch back to slider to slow things down to record MIDI stuff, but picture will be out of sync! That's okay--just make sure you put it back to Conductor track when you're done.

Set Transport counter to show timecode ("Frames" NOT "real time") and bars/beats.
Under Time Formats: Event Information: check Measures and Frames

Spotting: Open markers window in sidebar, set markers at hit points with drop-down. Lock (to timecode). Now when you change tempo, marker stays with picture.

To find tempo for first section: Select how many measures you want to be in there from beginning to first marker (make sure you select conductor track along with all other tracks.) Look at marker frame number. Region>Scale tempos: Set end time to marker position. Tempos will change. Repeat for any other sections.
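The arithmetic Scale Tempos performs can be checked by hand; hypothetical numbers here (a hit point 30 seconds in, to be filled by 16 bars of 4/4):

```python
marker_seconds = 30.0        # the marker's frame position, converted to seconds
bars = 16                    # how many measures you want before the marker
beats_per_bar = 4            # 4/4 time

# Tempo in BPM = beats to fit / minutes available.
tempo_bpm = (bars * beats_per_bar) / (marker_seconds / 60)
print(tempo_bpm)             # 128.0 BPM lands bar 17 exactly on the hit
```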

To follow action on the screen, like walking or bouncing, mute MIDI tracks, turn on DP metronome, set tempo control to Tempo Slider, adjust slider until it matches action, write down the tempo, put that number into the conductor track at the right place.


March 26

Mixing down
Give each instrument its own space: use delays, reverb, panning, and EQ to accomplish this.
Start with foundation: drums, bass.
To make kick stand out more, use EQ to boost at around 2kHz, not bottom.
To take tubbiness out of bass, reduce at 250-400 Hz. Low frequencies build up when mixing, so be careful to avoid that by eq'ing out that range on other instruments as well (e.g., acoustic guitar, piano)
To take "tone" out of snare drum, increase EQ sharply and adjust frequency until you hear it exaggerated, then reduce that frequency.
To make space around an instrument without using reverb (which can make things muddy), use a single short delay: 40-80ms
Kick and bass usually in the center. We don't hear bass signals directionally, this spreads the energy out. Drums can be in stereo.
Find complementary instruments and spread them left and right to widen image.
To add reverb to the whole mix, use an Aux track: Create a send in one of the channels to "Bus1-2", Aux track will appear with "Bus1-2" as input. Insert a reverb, mix at 100%. Use individual sends to adjust reverb on each instrument. Or use a Master Fader track and insert a reverb (less flexible!)
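The single-short-delay trick above is simple enough to sketch in a few lines (a hypothetical helper, not an actual DP plug-in):

```python
def slap_delay(signal, sr=44100, delay_ms=60, mix=0.5):
    """Mix one discrete delayed copy with the dry signal (no feedback)."""
    offset = int(sr * delay_ms / 1000)     # 60 ms -> 2646 samples at 44.1 kHz
    out = list(signal) + [0.0] * offset
    for i, s in enumerate(signal):
        out[i + offset] += s * mix         # one echo, no reverb tail
    return out

dry = [1.0] + [0.0] * 4999                 # a single click
wet = slap_delay(dry)
print(wet[0], wet[2646])                   # 1.0, then a half-level echo
```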

Three types of audio processing:
Destructive (changes the file, now obsolete!)
Real-time/Non-destructive (changes the way sound is played, without changing the file, e.g., using a plug-in, takes a lot of CPU),
Constructive (changes the file but creates a new version of it)

Spectral Effects is a constructive process: separate control of pitch, tempo, and formants.

Music for picture
Films: Music must follow picture: change with scenes, accent events.
Directors love to recut scenes, music has to change.
In old days either cut music on tape (couldn't always find good edit points) or re-record it (very expensive!)
MIDI is great for this because it's easy to change tempos, edit music at the performance level, not the recording level.
Most important issue: how to keep picture and music in sync.
To be in sync: must know when to start and stop, where you are, how fast you are going.
Old style: Video on tape. A special audio track contains SMPTE timecode, an analog signal conveying digital information: frame number (hours:minutes:seconds:frames; nominally 30 frames = 1 second) and speed. A similar track goes on the audio tape deck. A synchronizer reads both codes and adjusts the speed and location of the audio recorder to match the video position.
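Converting between frame counts and hh:mm:ss:ff positions is simple division (non-drop, at the nominal 30 fps):

```python
def to_timecode(frame, fps=30):
    """Absolute frame count -> non-drop hours:minutes:seconds:frames."""
    seconds, frames = divmod(frame, fps)
    minutes, seconds = divmod(seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(to_timecode(108000))   # one hour of 30 fps video -> "01:00:00:00"
```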
Now we can do it all on the same platform.


March 14

Audio in DP

Audio can be dragged into a DP session from the desktop (or from a USB drive, flash drive, CD, SD card, etc.) if it's AIFF, WAV, or MP3, including from an audio CD. Stereo files can only go in stereo tracks; mono files can only go in mono tracks! The file is copied (and converted if necessary) into the Audio Files folder.

Effects in DP: Plug-ins come in various formats: Apple Audio Units, MOTU Audio System (MAS), VST, Rewire. Can be effects or virtual instruments (like Purity or Reason). Formats we can't use: RTAS, TDM, DirectX.
Don't use plug-in presets and combinations! They are never right for what you're doing and are NOT good starting points!

Automating effects. Works on audio, Aux, and instrument tracks (not MIDI!). Put track in automation record. Insert effect plug-in in mixing console. Move a control in the effect. Now that parameter appears in the drop-down for that track in the Sequence window. It can be edited there using arrow or reshape tool.

Using aux buses in DP to route multiple tracks through effects: use sends (post-fader) to an aux bus (1-2), then set up an Aux track with the effects and its input set to bus 1-2. Set the effects mix to 100%. (If you don't want any dry signal at all, set the send to "pre-fader".)

Final projects: can be an original song with vocals and/or instruments; an orchestration or arrangement of a pop, jazz, or classical tune; a score for a short video; an audio collage with MIDI elements; pretty much anything, as long as it uses many of the tools we covered. Look on the server for projects from other years.


March 7

Digital recording: why?
Fidelity: not dependent on physical medium
Copyability: each copy is a perfect clone (as long as you don’t compress it)
Longevity: medium doesn't wear out quickly, and can be cloned before it does

What it does: Measures ("samples") the level of the waveform at a particular instant and records it as a number.
How often the sample is taken=sampling rate
What the possible range of numbers is=word length or bit length or resolution

The more bits, the better the approximation of the signal. The difference between the input signal and the converted signal is heard as noise, and is called quantization noise. The quantization noise is the noise floor of the medium. The range in dB from the noise floor to the highest level before clipping is the dynamic range. Dynamic range = number of bits x 6 dB (approx).
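The 6 dB-per-bit rule of thumb follows from each added bit doubling the number of quantization levels; a quick check:

```python
import math

def dynamic_range_db(bits):
    # Each bit doubles the level count; 20*log10(2) ~ 6.02 dB per doubling.
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB for 16-bit CD audio
print(round(dynamic_range_db(24), 1))  # ~144.5 dB for 24-bit pro audio
```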

The higher the sampling rate, the more high frequencies can be recorded. Sampling rate must be at least twice the highest frequency desired=Nyquist theorem. Signals higher than 1/2 the sampling rate (the Nyquist frequency) will result in aliasing. Filters are used before the conversion process to make sure no frequencies higher than the Nyquist are converted.
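Aliasing is easy to demonstrate numerically. With a hypothetical 1 kHz sample rate (Nyquist = 500 Hz), a 600 Hz tone produces exactly the same samples as a 400 Hz tone:

```python
import math

sr = 1000        # hypothetical sample rate; Nyquist frequency is 500 Hz
above = [math.cos(2 * math.pi * 600 * n / sr) for n in range(16)]  # 600 Hz
alias = [math.cos(2 * math.pi * 400 * n / sr) for n in range(16)]  # 400 Hz

# 600 Hz folds around Nyquist to 500 - (600 - 500) = 400 Hz:
print(all(abs(a - b) < 1e-9 for a, b in zip(above, alias)))  # True
```

Once sampled, the two tones are indistinguishable, which is why the filtering has to happen before conversion.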

Standard format for CD digital audio: 44.1 kHz, 16 bits, stereo
Higher sampling rates and resolutions are used in pro audio, but we can't hear them.

Convertors handle this sampling and un-sampling. We need them because the world is analog, and our ears respond to analog.
A/D and D/A convertors are built into the Mbox2Pro. The Mac also has convertors (16/44), but it's hard to get high quality in such an electrically noisy environment.

Signal cannot go above zero: hard clipping, sounds terrible (unlike analog clipping, which can be interesting)
Signal cannot go below noise floor—last bit.
You can use low-level white noise ("dither") to create "false" noise floor, mask quantization noise so first bit is never reached; white noise sounds more natural. Modern noise-shaping uses filtered noise that is almost inaudible but does the same thing.
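Why dither works can be sketched with a toy quantizer (a deliberately coarse step, nowhere near real 16-bit resolution):

```python
import random
random.seed(42)

STEP = 0.1                               # coarse quantization step

def quantize(x):
    return round(x / STEP) * STEP

signal = 0.03                            # a level below half a step

plain = quantize(signal)                 # rounds to 0.0: signal vanishes
dithered = [quantize(signal + random.uniform(-STEP / 2, STEP / 2))
            for _ in range(20000)]
average = sum(dithered) / len(dithered)

print(plain)                  # 0.0 -- undithered, the signal is simply lost
print(round(average, 2))      # ~0.03 -- with dither it survives, as noise
```

The low-level detail is preserved as the average of a noisy signal instead of being truncated away, which the ear hears as a more natural noise floor.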

Linear recording: preserves all data. "PCM" encoding most popular, but others exist.
Lossy recording (or compression): compromises quality, we'll talk about it at end of semester.

Recording audio into DP:
Create a stereo or mono audio track. Set input and output to Scarlett. Always make output stereo, even if it's a mono track, so you can pan it.

Record-enable the track. Set level on Scarlett and use Pad if necessary. Peak light should come on rarely. Check level in DP Mixer or Audio Monitor window, adjust input if necessary.

After recording, turn off Record-enable to play back.
Can move audio events around.
Use Edit>Split to break up recording into different regions. Shorten or lengthen regions with brackets at beginning and end. All edits are non-destructive and entire file can always be recovered!
Scrub in and out points with "speaker" on.

Non-linear editing: Any piece of any sound can be played from any track in any order.
Cross fading regions by clicking in red dot at the top left or right edge.


March 5

Analog recording: some history
Vinyl: Original records used wax; later went to plastic/vinyl. A physical model of the waveform is carved into a pliant surface using a "cutter head": a diamond stylus connected to two electromagnets, which make it wiggle according to the waveform. In monaural records, the stylus moves vertically. In stereo records, the channels are encoded at a 90° angle to each other (45° from vertical), and the stylus moves both horizontally and vertically. The mono (sum) signal is vertical and the stereo (difference) signal is horizontal. To play back, a needle tracks the groove in the surface and creates magnetic fields in the coils in the cartridge; the resulting voltage changes are then amplified.
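The sum/difference encoding is the same arithmetic as mid/side processing; a quick sketch with made-up channel levels:

```python
left, right = 0.75, 0.25       # hypothetical channel levels

mid = (left + right) / 2       # vertical motion: the mono (sum) signal
side = (left - right) / 2      # horizontal motion: the stereo (difference)

# Decoding recovers the original channels exactly:
print(mid + side, mid - side)  # 0.75 0.25
```

A mono cartridge reading a stereo record picks up only the vertical (sum) component, which is what makes the scheme backward-compatible.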

Problems:
Frequency response: Needle has to track very fast, vinyl surface resolution has to be very fine. Friction causes the surface to heat up and get softer. Every time you play a record, some of the high frequencies get lopped off.
Dynamic range: If you push the needle too far it jumps out of the groove. Too much bass in one channel will push needle sideways.
Speed variation: wow (slow) and flutter (fast).
Noise: dust collects on the surface, and after playing, it is ground in so noise becomes permanent. Surface scratches cause pops and crackle.
Turntable rumble: low-frequency noise very hard to avoid even in best turntables.

Analog Magnetic Tape: embeds waveforms as magnetic patterns in a magnetizable coating on a plastic ribbon.

Problems: Frequency response: requires very fine particles, faster tape speeds in order to get accurate small waveforms/high frequencies.
Noise: Medium has inherent noise due to random (Brownian) motion of molecules.
Dynamic range: Magnetic orientation of particles can be changed a limited amount. Push them too hard and they resist.
Copying: Each copy adds "generation" noise.
Speed variation: Wow (low-frequency variation) and flutter (high) caused by imperfections in mechanical system, tape stretching.
Longevity: Many tapes of the '70s and '80s are now unplayable because of binder failure.

Editing analog tape: destructive, linear. Have to physically cut tape, put into proper sequence.

Things you can do with audio in the analog realm:
splice/trim/concatenate, reverse, equalize/filter, delay/reverb, change speed, pan, loop, layer (overdub)

Examples played today: Ilhan Mimaroglu - Six Preludes For Magnetic Tape


Feb 28

Effects in Reason: Effects can be patched in after each module or using aux sends. When using effects "in line" (between source module and mixer), use Wet/Dry to balance the amount of effect. When using effects in Aux buses, set Wet to 100%, use sends and returns to balance the amount of effect. 6:2 mixer has only one Aux bus--if you need more, use 14:2 mixer.

Reverb: multiple delays simulating sound bouncing off the walls, floor, and ceiling.
Size of room, distance of walls from source, materials on surfaces will determine reverb size and frequency characteristics. Pre-delay (distance from closest wall); early reflections (size and shape of room, sonic characteristics of walls); tail (size of room, liveness of surfaces); damping (liveness of room)
Equalization: emphasize or reduce specific frequencies.
Delay: discrete echoes, can be timed to tempo or fixed; feedback control for number of echoes. Usually used with a module "in line", and not in an aux bus.
Flanger/phaser/chorus: very short delays that cause comb filtering: multiple sharp filters. If you move the delay time with an LFO, the filters move, resulting in a "jet plane" effect.
Distortion: Scream.
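Two quick calculations behind the effects above, with made-up numbers: the notch spacing that turns a short delay into a comb filter, and a reverb pre-delay derived from the distance to the nearest wall:

```python
# Comb filtering: dry + delayed copy cancels where the delay is half a cycle.
delay_ms = 1.0                                # a 1 ms flanger-range delay
notches = [(2 * k + 1) * 1000 / (2 * delay_ms) for k in range(4)]
print(notches)                                # [500.0, 1500.0, 2500.0, 3500.0]

# Reverb pre-delay: round trip to the nearest wall at the speed of sound.
speed_of_sound = 343.0                        # m/s at room temperature
wall_distance = 5.0                           # hypothetical: wall 5 m away
pre_delay_ms = 2 * wall_distance / speed_of_sound * 1000
print(round(pre_delay_ms, 1))                 # ~29.2 ms
```

Sweeping `delay_ms` with an LFO slides all the notches at once, which is the "jet plane" sound.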

Reason: Kong. Drum pads/machine. Each pad has assignable sound, volume, output, effects, etc. (notes not changeable). Like NN-19, can sample right into it.

Freezing tracks in Digital Performer (a/k/a rendering, printing). MIDI tracks are not audio, and cannot be mixed down or put on a CD or MP3. Freezing creates an audio track from a MIDI track. Different ways to do it in Reason, others.

In MSI and SoundCanvas: select all the MIDI tracks for each instrument and their corresponding instrument track, from beginning to end. Make sure both MIDI and Instrument tracks are set to Play. From the Audio menu, choose "Freeze selected tracks." The sequence will play through and a single new audio track will be created, mixing all the instrument's tracks together. Any changes in volume or pan, controller changes, or any changes in the mixers will be preserved on the audio track.

in Reason: Put Reason's Audio track into record (and nothing else). Set Reason MIDI tracks to play. Mute everything else. Press Record and run the sequence. Reason’s output will be saved as an audio track. (It's a good idea to use countoff without metronome to make sure the first note is recorded.) Effects and mixer settings in Reason will be preserved on the audio track.

After freezing tracks: You can mute the MIDI and instrument tracks (but don’t delete them in case you want to modify them later!) and just play the audio tracks. If you have other MIDI tracks you haven't frozen, you can play them at the same time and they will sync.

Final two-track mixdown for CD or export: Select all audio tracks (not MIDI!) and from the Audio menu select "Bounce to disk." Specify AIFF interleaved stereo, 16-bit (you can always export as MP3 later). Name is automatically created, and the file is stored in the project's "Bounces" folder, unless you want it somewhere else. Process is much faster than real time.


Feb 26

Big mixer in Reason—14 inputs: many modules!

Programming NN19:
Velocity assignment: amplitude envelope, filter envelope, sample start

Real-time controls:
All knobs, sliders, and buttons in a module have a MIDI controller assignment. You can control them and record changes in real time from the MIDI keyboard.

Each module has a chart in the "Reason controllers " folder showing the controller numbers for that module. Assign a slider or knob on the keyboard to send the MIDI controller of the knob/slider you want to control on the module. You can record sliders with notes, or afterwards in Overdub mode.
When the track plays back, you can see the knobs move on the module.

You can also move controls in the Reason module itself while recording (or overdubbing) and the movements will get recorded.

Edit the controllers in the Sequence window--they will appear under "Notes" in the dropdown list at the left side of the track after you've recorded them. Use the arrow or reshape tool to change them. (Automation Play only needed if you have volume and pan on the track—not necessary for any other controllers.)

Setting up a controllers console in DP:
Project>New Console. From pop-up, choose slider or knob or something else. Drag into Console window. In Control Assignment window, Source is irrelevant. Set Target to Track, and type in Controller #. Set Min/Max (usually 0-127). Take console out of edit mode (pencil at bottom). Put track in record and move knob/slider. Can also be used in overdub mode.
Can add as many controllers as you wish, each independent. Use labels!

Loopback problem: When Reason is running under Digital Performer, moving a control on a Reason module will move the same-numbered control on the Reason module that is record-enabled in DP. This can badly screw up your Reason rack!

The Solution:
Put DP in Setup>Multirecord, set input as "Impulse Impulse-any" on all tracks. This filters out data coming from everything but the keyboard. But when you are recording, make sure you are recording on only one track at a time!


Feb 22

Reason: NN19 sampler.
Sample playback using AIFF/WAV/MP3 files, from local files, files in "Sounds" folder, or your own sounds.
Three uses of NN19:
1) to play factory patches, of which there is a large library on Local disk.
2) to make your own instruments, using multisamples that you create yourself or import from somewhere else.
3) to make banks of sound effects, loops, beats, etc. that can be triggered from MIDI.

Standalone mode:
Preferences>Audio:Scarlett
Sync:Novation Impulse.
Keyboards and Control surfaces OFF/Delete

Advanced MIDI Tool on front panel: assign channels to modules

Look at wiring: Tab key to flip rack. Mixer output goes to hardware 1-2 input. Ignore “Master Section.”

To create a patch from scratch: Edit>Reset Device.

Samples can be taken from within NN19 or NNXT folder, or anywhere else, or dragged in from the Finder (select a keyzone first). More Samples in Documents>Sounds. Sound FX Libraries, Earshot FX library, SampleCell FX bank, SampleCell instrument bank, Optimum drums, other (weird) banks. Bring in your own on flash drive or download from anywhere. AIFF, WAV, or MP3 will work.

To create a new keyzone for another sample: Edit>Split Key Zone. For realistic instruments you need multiple samples: transposing a sound too far shifts its formants and causes "munchkinization": the sound becomes unnatural.

Set root note to determine relative pitch on keyboard. Tune samples to get a consistent scale up the entire keyboard.

Loops: can be turned on and off for each sample. Forwards or Forwards/Backwards. Sample must be designed for looping, or else you'll hear a glitch or a space.

To add another module, first highlight the mixer and create a new NN19. New module will automatically be wired into mixer.

Modifying the patches: LFO speed and amount, filter, filter envelope, amp envelope, pitchbend range, mod wheel assignment. All controls outside of keyzone area affect all samples.

Sampling into NN19—must be done in standalone mode, not through DP!
Get microphone from cabinet (key is in the lower keysafe—make sure to scramble the combination when you’re done!) Or guitar or other audio source. Set interface input level in the middle, lower if Peak light is on a lot. Configure Reason Audio I/O module for mono or stereo: flip panel around, disconnect one channel if mono. Set Big Meter for input and adjust level using gain control on interface. Start sampling, play sound. 30-second limit.
Software will automatically cut off leading silence, load sample into instrument. You can do multiple samples, and they will all appear on Sample Select knob. Open Toolbox window to rename or edit or loop samples. Assign them to different regions, tune them. Turn off "keyboard track" if you want to play everything at fixed pitch.

We do not use the sequencer in Reason!

Saving a Reason rack or "Song": If you have brought in or recorded your own samples, go to File>"Song Self-Contain Settings" and "check all" before saving—or Reason will not know where your samples are!

Rewire mode: Using Reason in Digital Performer: Always launch DP first, then Reason (QUIT it if it’s running!). At end of session, always Quit Reason first.

DP and Reason communicate using “Rewire” protocol. After launching Reason, you must set up two paths to Reason from Digital Performer: audio, and MIDI.

In Reason Prefs: turn off keyboard in Sync and de-assign MIDI channels in Advanced MIDI. MIDI channels will be handled in DP. Audio assignment is automatic through DP.

In Digital Performer: Create new Stereo Audio track. Set output to Scarlett 1-2. Set input to Reason Mix L1-R2. Turn on "Mon" (input monitor) on that track, but don’t turn on Record on the audio track. Reason can have more than two audio outputs, but we’ll just use two for now.
Then create a new MIDI Track, and assign it to "Bus6:Reason ACGUITAR" or whichever module you want. Set it to record. With multiple modules, each one gets its own MIDI assignment within DP.
Name Reason modules! Names will carry into DP.

Save your Reason rack separately from your DP file!


Feb 21

MOTU Symphonic Instrument (MSI):
Use within shell: UVI Workstation
Multiple racks, each one with 16 channels (only 4 in default state).
Excellent acoustic samples of strings, winds, brass, harp, guitar, keyboards, etc.
Can adjust envelopes, filters, LFO, reverb, portamento.
Saved with sequence--don't need to save a separate file.

Reason intro and NN19 sampler
Default rack has NN19 sampler, mixer, and reverb.
NN19: Sample playback using AIFF/WAV/MP3 files, from local files, modified files, or your own sounds.
Open DP before Reason! In DP, create a new Stereo Audio track. Assign its input (using dropdown "New Stereo Bundle") to Reason Mix L1-R2. Click on "Mon(itor)" icon so it's blue.
Now open Reason. Assign a MIDI track to the Reason NN19 module ("ACGUITAR"). Put that track in Record (but not the audio track) and you can play the module.
To change sound, click on folder icon next to sound name. Browser window appears at left. Use up arrow to go to higher folder ("NN19 Sampler Patches").

Warning! You must save the Reason rack (or "song") by itself, separate from the DP session. DP does not save it automatically!


Feb 14

Setup>Chasing. Turn it on for pitchbend and controllers; it will take care of resetting everything at the beginning of a sequence. Usually don’t turn it on for notes.

But even with it on, make sure to initialize all the controllers that you’re using, especially volume and pitchbend, at the beginning of the sequence, or you may be left with hanging controllers--meaning tracks that are out of tune or that have been faded out!

Region menu: Transpose, Transpose by key or scale interval, harmonize=transpose but keep original notes.

Globally change velocity and duration (add, subtract, scale, set, limit) in Region menu.

Looping, drum tracks or others. Can have different loops of different lengths on different tracks.

Split notes
Select a range or other characteristic (duration, velocity, top or bottom notes of chord) and move to another track. You can cut the notes from the existing track, or copy to the clipboard, and paste anywhere you want.

View Filter: Two modes: Global (all including graphic windows) or Event List (only). View and edit only certain types of data. Good for finding glitches or unusual events in a track, or for eliminating all of the pitchbend or controller info.

Event list: notes, controllers, etc. Good for analyzing a track closely, inserting or changing discrete events. Use View Filter to focus on types of events (notes, pitchbend, controllers)

Step time entry (command-8). Preset duration, turn on Auto Step. Track does not need to be in record! Step=rest. Roll chords until you release a note, then it will advance. Make sure you are on the right track, and starting at the right place--it does not always follow the cursor.

Quantizing
Define a rhythmic grid, and bring the notes to it.
Swing: changes every other grid interval. 100%=no swing. 66%=triplet swing, 75%=dotted note
Strength: how close to the grid to make it. Less than 100% can keep some of the original rhythmic feel.
Offset: place quantized notes fixed distance before or after the grid.
Randomize: Places notes randomly away from grid after quantizing.
Humanize (under Region menu): randomize without quantizing.
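The grid parameters above can be sketched as a small function. This is a hypothetical Python illustration, not DP's actual algorithm, and `quantize` is an invented name. Note that swing here is expressed as the upbeat's fractional position within a two-step pair (0.5 = straight, 2/3 ≈ triplet, 0.75 = dotted), which is a different scale from DP's percentages.

```python
import random

def quantize(times, grid=0.25, swing=0.5, strength=1.0, offset=0.0, rand=0.0):
    """Pull note-start times (in beats) toward a swung grid.

    swing:    upbeat position in a two-step pair (0.5 straight, 2/3 triplet)
    strength: 1.0 = snap exactly to grid, 0.5 = move only halfway
    offset:   shift quantized notes before (-) or after (+) the grid
    rand:     scatter notes up to +/- rand beats after quantizing
    """
    out = []
    pair = 2 * grid                      # a swing cycle spans two grid steps
    for t in times:
        base = (t // pair) * pair        # start of this note's swing pair
        # candidate grid points: downbeat, swung upbeat, next downbeat
        points = [base, base + swing * pair, base + pair]
        target = min(points, key=lambda p: abs(p - t))
        q = t + strength * (target - t) + offset
        if rand:
            q += random.uniform(-rand, rand)  # "randomize" after quantizing
        out.append(q)
    return out
```

With strength below 1.0 the note only moves partway to the grid, which is how some of the original rhythmic feel is preserved.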

Techniques for composing electronically:
Not restricted to what you can put on paper, or dealing with a specific instrument or group of instruments. A lot of freedom! Where do you start?
• a sound. Play with a sound on the keyboard and see where it takes you
• a beat or a groove. Play a 4-bar phrase and then jam over it
• a melody
• a chord progression. Blues, falling fifths, rising thirds, modal, V of V of V, etc.


Feb 12

Controllers. Volume, pan, sustain, a few others are universal. Otherwise unique to each instrument; Sound Canvas uses these:

To set up controllers on the Novation keyboards: (See board for defaults) Press Controls: display says “Move Control”. Move the control you want to program. Press “+”. Use knob to dial in the number. Push the knob to save the assignment. Go back to “MIDI Chan” so you don’t disturb it. Remember your assignments, since they may not be there the next time.

Viewing and editing in "Continuous Data area" of track.
Serial nature of controllers: not really continuous, discrete events.
Points and Bars view shows this. Lines view leaves out intermediate events if transitions are smooth. Repeating a controller event at the same value usually does nothing.
Left tool: what to show and what will be affected by selection
Right tool: pencil and reshape tools will affect this data.
Quick Filter: hides data that you haven't selected.

Draw a curve, or use reshape, or set to free.
Reshape tool can be used with other curves, and is periodic. Period depends on grid setting.

Insert measures: puts blank space on all tracks. To insert blank space on individual tracks, use Shift command.


Feb 7

Wait for note: if countoff is on, it will run forever until you play a note.

MIDI window can show one or several tracks at the same time. Use show/hide pane to set these up. To differentiate between tracks, reset the colors for the tracks in the Tracks window. Track with pencil icon can be added to.

Changing velocities and controller data in Continuous Data area. For individual note velocity change, use arrow. Group select and change, use marquee and arrow (same tool).
Toolbox at right side of transport window. Default tool is arrow. Use pencil for adding events. Use S-curve to reshape events (like velocity). When reshaping velocity, notes must first be selected!

Add notes or controllers with pencil. edit with arrow, or group edit with marquee and arrow. Reshape controllers or velocities with reshape ("S”) tool.
Use triangle/sine/square "periodic" modes with reshape tool to do rhythmic controllers, pitchbend. Grid setting determines the length of the period (regardless of whether grid is on).

Sequence window: one or more tracks in separate lines. Dropdowns for volume, pan, controllers. Change size.

Mixer window: Set volume and pan for each track, statically. Is saved with sequence.
Mixing automation: Put track into “auto record”, but do not click on Record in Transport window! Click on play. (Whether track is record-enabled or not doesn’t matter.) All movements of fader and pan knob will be recorded, can then be edited in sequence or MIDI window.
To play back volume or pan, Automation Play must be enabled!

Tempo slider (manually variable) vs. Tempo map (Conductor, fixed with automatic changes)
Create a tempo map: in Tracks window, double-click on conductor. Can magnify vertically by left edge dragging (magnify tool). OR open right-side pane using Shift-], select Event List, Conductor, insert tempo event. Can do time signature changes in Event List too.


Feb 5

The software MIDI studio: Virtual MIDI cables: connecting applications (instruments and effects) using different protocols like:
(we use) Rewire, VST, Apple AU, MOTU Audio Systems
(we don’t use) TDM (ProTools), DSMidi (wireless), OSC (wired or Bluetooth), DXi.

In the software studio, the MIDI speed limit doesn’t have to be adhered to.

General MIDI
Common configuration for MIDI program changes. Used for multimedia, file exchange, like .txt
Sound set (128 programs), arranged in families of 8. Percussion (47 sounds) on ch 10.
https://www.midi.org/specifications/item/gm-level-1-sound-set
Fixed controller meanings: volume, pan, expression=volume, mod wheel, PB range
Most computers and mobile devices now include a GM synth.

Sound Canvas, a virtual instrument, has 16 channels; each one can have any of 128 sounds, except drums on Channel 10. Follows the GM protocol with extended sound sets.

Configuring the keyboard (press MIDI chan 2x until it says Music64)
Rename a track: option-click on name. Enter to continue down the list.

Use Pads for Drums on channel 10 as much as possible!!

Editing: moving and changing notes. Grid on and off for selecting and moving events. Sets cursor movement, not (necessarily) note placement. Temporarily defeat it with Apple key.

Copy and paste to duplicate events. Cut and paste to replace.

Tracks vs. channels. More than one track can be assigned to a channel. All tracks going to the same channel have to have the same patch/sound!

Volume vs. velocity
Important to note the difference between velocity byte in note-on command
(which affects onset of note only) and volume controller (#7) (can affect sound continuously). Velocity=how loud the instrument is played. Volume=how high
the fader is.

Save often!

Sign on to server: Create a folder with your name on it for your stuff. NEVER RUN A FILE FROM THE SERVER. Always copy it to the local computer first.


Jan 31

Input devices for electronic music
Keyboards, drum pads, guitars, wind controllers, pitch convertors, marimbas, maracas, positional indicators, ribbons, game controllers
Because physical characteristics of the device are not linked with acoustic characteristics, you have total freedom.

MIDI Message set
An eight-bit number has a decimal range of 0 (00000000) to 255 (11111111)
MIDI has two types of bytes:
Status or Command byte (>127, first bit is 1) is instruction
Data byte (≤127, first bit is zero) is value.

Command set - some commands are defined as having 2 data bytes, some have 1 data byte.
Receiving device knows what to expect. Incomplete command is usually ignored.

Channel messages
First four bits of command byte is the type of instruction. Second four bits is the channel number. Early MIDI devices only read one channel at a time, ignored data on other channels (some, like drum synths, still do). Means you can use different devices on the same MIDI cable.
Modern synths, called "multitimbral", sort out data by channel, assign to different sounds in the instrument.
Channel number = 0000 (zero) to 1111 (15). But we call them 1 to 16.
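The four-bit split can be shown in a few lines of Python (a sketch; `decode_status` is an illustrative name, not part of any MIDI library):

```python
def decode_status(byte):
    """Split a channel-message status byte: top four bits are the
    message type, bottom four are the channel (0-15, spoken of as 1-16)."""
    assert byte >= 128, "data bytes (<128) carry values, not instructions"
    msg_type = byte >> 4          # e.g. 9 = note on, 8 = note off
    channel = (byte & 0x0F) + 1   # 0-15 internally, 1-16 to humans
    return msg_type, channel
```

For example, status byte 0x99 (decimal 153) decodes as a note-on (type 9) on channel 10, the drum channel.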

Note on: Command byte (144-159) followed by two data bytes (0-127: note number, key velocity=how fast the key moves from top to bottom)

Note off: Command byte (128-143) + note number + velocity. Why note-off velocity?
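The three-byte messages above can be assembled like this in Python (a sketch with illustrative function names):

```python
def note_on(channel, note, velocity):
    """3-byte note-on: status byte 144-159 (0x90 | channel-1), then
    note number and key velocity, both 0-127."""
    return bytes([0x90 | (channel - 1), note, velocity])

def note_off(channel, note, velocity=64):
    """3-byte note-off: status byte 128-143. Release velocity is
    rarely used, so a neutral 64 is a common default."""
    return bytes([0x80 | (channel - 1), note, velocity])
```

For instance, `note_on(10, 36, 100)` is a GM kick drum (note 36) on the percussion channel.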

Continuous Controllers: (176-191) + controller number (mod wheel, volume, pan, sustain) + value
128 possible controllers (numbered 0-127) per channel.
Many controllers defined, some as transmitters (mod wheel=1), some as receivers (volume=7), some as both (sustain pedal=64).
Others that are defined: Stereo pan=10, Foot pedal=4, Data slider=6
Many others loosely or not defined.


Program change: (192-207) (Cn) + single data byte=value. Program change numbers are 0-127, often but not always called 1-128. Calls up a register in the synth's memory.

Pitchbend: (224-239) + two data bytes: Least Significant Byte (LSB), then Most Significant Byte (MSB). Designers wanted “double precision” so that when the pitch wheel was moved you didn’t hear discrete pitches. So possible values are 0 to (128*128)-1 = 16,383. Turns out it wasn’t necessary: the LSB is almost always ignored. But it must be there anyway.

"Zero pitch bend" is actually a value of 64 (MSB). Many sequencing programs describe pitchbend as +/-64, but in reality the values are 0-127.
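A Python sketch of the encoding (illustrative name; note that the LSB data byte travels before the MSB):

```python
def pitchbend(channel, value):
    """Pitchbend message: status 224-239, then LSB and MSB data bytes.
    value is the 14-bit amount, 0-16383; 8192 (MSB 64, LSB 0) = no bend."""
    lsb = value & 0x7F            # low 7 bits -- almost always ignored
    msb = (value >> 7) & 0x7F     # high 7 bits -- the part synths use
    return bytes([0xE0 | (channel - 1), lsb, msb])
```

Center position, `pitchbend(1, 8192)`, yields data bytes 0 and 64 — which is why sequencers display "zero bend" as MSB 64.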

Channel pressure/aftertouch: (208-223) + single data byte=value. Amount of pressure on keys after the note is played. Used for vibrato, filters, pitch change, etc.

Key pressure/polyphonic aftertouch: (160-175) + note number + value. Individual pressure values for each key. Quite rarely used: expensive to implement, complex to program, uses a lot of bandwidth

Logging onto computers. Use Music account.

Recording MIDI into a sequencer (DP)
Open DP: New, create a new folder with your name in Documents folder.
Name the project
Turn on click (metronome)
Turn on Wait for Note
Set tempo
Choose instrument in SoundCanvas
Record a track
Drums are on channel 10. You can overdub onto any track with Overdub switch enabled.


Jan 29

Types of synthesis
Additive or Fourier:
http://www.falstad.com/fourier/ Select: mag/phase view, log view

Building up from individual harmonics, with separate levels and envelopes for each. Impossible to do in analog, hard to do digitally because it’s hard to make interactive: so many computations per second. Used in Kawai synths, Kurzweil 150, some experimental synths. Hammond organ.

Subtractive: Start with complex waveforms like noise, sawtooth, square, and filter out harmonics — high, low, or bandpass. Filter envelopes much easier to deal with than individual harmonic envelopes. Analog synthesis is subtractive, also can be done digitally, digital simulations of analog synths now very popular. Also digitally controlled analog synths. “Real analog” synthesis has drift problems.

FM: uses 2-6 sine waves or more complex waves ("operators") modulating each other, each with an envelope. Modulator, instead of filter, determines harmonic content, which can be very complex. Lends itself well to real-time control, not hard to compute. However, programming is not at all intuitive. Describe Sound Blaster chip: awful 2-op FM.
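The basic two-operator idea — one sine wave modulating the phase of another — can be sketched in Python (parameter names are illustrative):

```python
import math

def fm_tone(fc=440.0, fm=440.0, index=2.0, sr=44100, dur=0.01):
    """Simple 2-operator FM: y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t)).
    The carrier:modulator frequency ratio determines which harmonics
    appear; the modulation index determines how strong they are, taking
    the role a filter plays in subtractive synthesis."""
    n = int(sr * dur)
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]
```

Putting an envelope on the index (rather than on a filter cutoff) is what gives FM patches their evolving brightness.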

Physical modeling: digital models of instrument parts exist in software, interact in real time. Also called "waveguide". Might have excitation of a flute, resonance of a saxophone, bell of a trumpet. Can include elements like breath, embouchure pressure, tonguing, change in tube resonance as you cover "holes". Very powerful, difficult to do -- lots of computation. Yamaha mostly. Instruments can generally play only one or two voices at a time.

Granular: breaks up files (like samples) into tiny pieces, plays them back and reassembles them at different speeds and pitches, in real time. Adds processing.

Samplers (NN19)
RAM: make recordings in a digital audio program, load them in. Also called (incorrectly) "wavetable". Samples are looped for long sustains.
Sample + Filters + DSP: Digital samples stored in ROM or RAM, played back under control of MIDI. Envelopes, filters, LFO
Multisampling prevents munchkinization. Formants—spectral areas that remain constant despite pitch; if you transpose them, you change the characteristic of the sound.

Real-time Control over synth parameters:
Key number = note/pitch
Key Velocity: volume, timbre, envelope depth or speed
Pitch bend: variable range
Mod wheel: LFO depth or filter cutoff

MIDI: What is it?
MIDI is not music, not audio, but it is a representation of a musical performance, like a score, or a player piano roll. Every performance nuance is communicated, without the actual music. Notes, sliders, knobs, pedals, patch changes, other parameters.
Cannot be stored on tape, must be stored digitally — a sequencer: a list of instructions (commands) with their timings. Sequencer can be a computer, and the sequence is a computer file. Stored on disk, you can move it around between studios, or over a network. Usually quite small.

On stage, one keyboard could act as master and play all the others. In studio, a central controlling sequencer could control an entire orchestra of synths.
Central controller has performance data, while the actual sound is produced by the remote devices.

Initially this was all done using discrete hardware modules connected by MIDI cables. Now we can do it all in software.

Since performance data is broken down into gesture parameters, can isolate individual performance parameters: change key velocity, or note number, or pitchbend setting, or instrument, or rhythm without changing other performance parameters

Prepare an entire performance, change any parameter, singly or globally, at any time.

Who owns it
Public domain. A specification, not a standard, no legal or official standing. The industry agreed to support it, market forces keep it in line. If someone strays, users and reviewers will report the violation.
A living language: many holes in the spec for future development. Participation from all corners of the industry: hardware mfrs, software mfrs, systems designers
A compromise: performance vs. cost, initial hardware cost c.$10/unit. Seems slow by today's standards, but is still effective for its purpose.

Electrical and digital protocol
Originally used cables, binary on/off 5-volt DC. Each MIDI byte or word consists of eight data bits, plus two “framing” bits. Speed is 31,250 bits per second, or 3,125 bytes/sec (8 data bits + 2 framing bits = 10 bits/byte). Now mostly virtual.
Travels in one direction only: from MIDI Out jack to MIDI In jack. If you want bi-directional, you need two cables.
A universal standard: if an instrument has MIDI, it must be able to talk to any other MIDI instrument without restrictions.
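The arithmetic behind "seems slow by today's standards," worked out as a Python sketch:

```python
# 31,250 bits/sec over the wire; each byte is 8 data bits + 2 framing bits
BITS_PER_SECOND = 31_250
BITS_PER_BYTE = 10

bytes_per_second = BITS_PER_SECOND / BITS_PER_BYTE      # 3,125 bytes/sec
note_on_bytes = 3                                       # status + note + velocity
messages_per_second = bytes_per_second / note_on_bytes  # about 1,042 note-ons/sec
ms_per_message = 1000 / messages_per_second             # just under 1 ms each

print(round(messages_per_second), round(ms_per_message, 2))
```

Roughly a thousand note-ons per second: slow next to USB, but far denser than any human performance.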

MIDI over USB, etc.
Now MIDI can be piggybacked on other cables, like USB, Firewire, and Ethernet. But there are no industry standards for these, so manufacturers have to make software
drivers which are installed on computers so that they can understand what's coming in. However, Apple and Microsoft publish specs for MIDI over USB that instrument and interface manufacturers can choose to follow--then they are called "class compliant" and don't need special drivers. This includes our keyboards.
Advantage: can specify faster-than-MIDI speeds! Disadvantage: can’t plug instruments directly into each other; you need a computer in the middle.


Jan 24

Audio electronics principles and components:
Transducer = converts one type of energy to another

Microphone = converts sound waves in air to Alternating Current (AC) voltages. A microphone has a magnetic metal diaphragm mounted inside a coil of wire. The diaphragm vibrates with sound waves, inducing a current into the coil, which is an analog of the sound wave. This travels down a wire as an alternating current: positive voltage with compression, negative voltage with rarefaction.
Dynamic vs. condenser/capacitor mics. Condenser mics can use phantom power, battery, or be “permanently” (electret) charged.

Pickup patterns: Omni, figure-8, cardioid, hypercardioid, boundary
Speaker, headphones = transducer, converts AC voltages to sound waves in air. Speaker has a wire coil that receives alternating current from amplifier, paper cone is attached to a magnet inside the coil. As the current alternates, the magnet moves in and out, and makes the paper cone move in and out, producing compression and rarefaction.
Human Ear:
converts sound waves to nerve impulses. Each hair or cilium responds to a certain frequency. The brain interpolates sounds between those frequencies. As we get older, hairs stiffen, break off, and high-frequency sensitivity goes down. Also can be broken by prolonged or repeated exposure to loud sound.

Cables and Connectors:
Balanced: XLR, ¼-inch TRS. Less noise (differential amplifier cancels noise picked up on line), better frequency response, longer distance.
Unbalanced: ¼-inch TS, RCA, mini. More prone to interference, high-frequency roll-off.
Stereo unbalanced: ¼-inch TRS, mini TRS.
DI box: for balancing unbalanced lines, like guitar cables on long runs.

Other musical factors:
Noise = random frequencies, sometimes filtered (colored)
Envelope = Change over time, applicable to any of the above
Vibrato = Low Frequency Oscillator = periodic change in any of the above. Vibrato itself can have an envelope, both intensity and speed.
Location = L/R, F/B, U/D

Synthesizers and their Parts
Oscillator: simple or complex waveform
Filter/Equalizer: static or dynamic
Envelopes: volume, filter, pitch. Attack/Decay/Sustain/Release: approximation of natural envelopes, invented by Moog. Invertible for filter use.
LFO: volume, filter, pitch. Variable depth and rate, selectable waveform. random segments/random levels (sample+hold)


Jan 22

Characteristics of a sound:
Sound is vibration of a medium, such as air. Travels in waves: compression, rarefaction = changes in air pressure = volume.
Frequency = pitch = Number of changes in pressure that go past your ear per unit time.
Expressed in cycles per second, or Hertz (Hz).
The mathematical basis of the musical scale: go up an octave = 2x the frequency.
Each half-step is the twelfth root of 2 higher than the one below it. = approx. 1.0595
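That octave/half-step math as a Python sketch, using the MIDI convention that note 69 = A4 = 440 Hz (the function name is illustrative):

```python
def note_frequency(midi_note, a4=440.0):
    """Equal-tempered frequency: each half-step multiplies the frequency
    by 2**(1/12), about 1.0595, so 12 half-steps doubles it (one octave).
    MIDI note 69 is A4."""
    return a4 * 2 ** ((midi_note - 69) / 12)
```

So note 81 (A5, an octave up) comes out at exactly 880 Hz, and note 60 (middle C) at about 261.63 Hz.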

The limits of human hearing = approximately 20 Hz to 20,000 Hz or 20 k(ilo)Hz.

Fundamentals vs. harmonics = the fundamental pitch is the predominant pitch; harmonics are multiples (sometimes not exactly even) of the fundamental that give the sound its character, or timbre.

Loudness (volume, amplitude) Difference between maximum and minimum pressure
measured in decibels (dB). The decibel is actually a ratio, not an absolute.
A minimum perceptible change in loudness is about 1 dB. Something we hear as being twice as loud is about 10 dB. So we talk about “3 dB higher level on the drums” in a mix, or a “96 dB signal-to-noise ratio” as being the difference between the highest volume a system is capable of and the residual noise it generates.

“dB SPL” is referenced to something: 0 dB SPL (Sound Pressure Level) = the perception threshold of human hearing. Obviously subjective, so set at 0.0002 dyne/cm²
The total volume or “dynamic” range of human hearing, from the threshold of perception to the threshold of pain, is about 130 dB, so the threshold of pain is about 130 dB SPL.
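The ratio arithmetic as a Python sketch (20·log10 is the form for pressure ratios; function names are illustrative):

```python
import math

def db_ratio(p1, p0):
    """Decibels between two sound pressures: 20 * log10(p1 / p0).
    (For power ratios the formula is 10 * log10.) Doubling the
    pressure is about +6 dB; a 10x pressure ratio is +20 dB."""
    return 20 * math.log10(p1 / p0)

def db_spl(pressure):
    """dB SPL, referenced to the hearing threshold of 0.0002 dyne/cm^2."""
    return db_ratio(pressure, 0.0002)
```

The reference pressure makes the scale absolute: the threshold itself reads 0 dB SPL, and larger pressures read positive.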

Timbre = complexity of waveform, number and strength of harmonics
Harmonic series = break down waveforms into harmonics or partials. Fourier transform/analysis.
Show sine, saw, triangle, square, noise
Show harmonic breakdown of sine, saw, triangle, and square waves. Saw: each harmonic at level 1/n. Square: only odd harmonics, at 1/n. Triangle: odd harmonics at 1/n².
In an electronic system, the ability to reproduce those high harmonic frequencies is called frequency response or fidelity.
Using filters or equalizers to change timbral characteristics. Hipass, lowpass, bandpass
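The harmonic recipes listed above, as a Python sketch (illustrative only, first few harmonics, amplitudes relative to the fundamental):

```python
def partial_levels(wave, n_harmonics=8):
    """Relative levels of the first n harmonics for the classic waveforms:
    saw = every harmonic at 1/n, square = odd harmonics only at 1/n,
    triangle = odd harmonics only at 1/n**2."""
    levels = []
    for n in range(1, n_harmonics + 1):
        if wave == "saw":
            levels.append(1 / n)
        elif wave == "square":
            levels.append(1 / n if n % 2 else 0.0)
        elif wave == "triangle":
            levels.append(1 / n ** 2 if n % 2 else 0.0)
    return levels
```

The triangle's 1/n² rolloff is why it sounds so much mellower than a square wave despite sharing the same odd-harmonics-only recipe.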


©2018 Paul D. Lehrman, all rights reserved