Frequently Asked Questions

Q: What's the difference between a Limiter, Compressor and Normalizer?

A: They are three flavors of “automatic volume adjuster”.

Audio normalization applies a constant amount of gain to bring the volume up to a target level (the norm). Because the same amount of gain is applied across the entire recording, the signal-to-noise ratio and relative dynamics are unchanged.

Audio compression reduces the volume of loud sounds or amplifies quiet ones, compressing an audio signal's dynamic range. This was used to abusive levels on CDs to ensure the music would always sound loud on the radio and sell more product.

An audio limiter allows signals below a specified input level to pass unaffected while attenuating (lowering) the peaks of stronger signals that exceed a threshold. Limiting is a type of compression intended to avoid clipping in a digital system (trying to go louder than a finite number of bits will allow).

POP QUIZ: Which squashes the dynamics to a narrower range?

Which makes everything louder?

Which prevents the worst form of distortion and sometimes hardware damage (i.e. speakers get blown out)?
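Answer key, in code: here is a minimal Python sketch of all three on a list of float samples in the -1.0..1.0 range. The function names and curve shapes are mine, simplified for illustration (a real compressor has attack/release; a real limiter isn't a hard clamp):

```python
def normalize(samples, target_peak=1.0):
    """Apply one constant gain so the loudest peak hits target_peak.
    Relative dynamics and signal-to-noise ratio are unchanged."""
    peak = max(abs(s) for s in samples)
    gain = target_peak / peak if peak else 1.0
    return [s * gain for s in samples]

def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce level above the threshold by the given ratio,
    squashing the dynamic range (a static curve, no attack/release)."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def limit(samples, ceiling=0.9):
    """Pass anything below the ceiling untouched; clamp peaks above it
    (a brick-wall limiter, the blunt end of compression)."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

quiet_and_loud = [0.1, 0.4, 0.95, -0.2]
print(normalize(quiet_and_loud))  # every sample scaled by the same gain
print(compress(quiet_and_loud))   # only the 0.95 peak is squashed
print(limit(quiet_and_loud))      # only the 0.95 peak is clamped
```

So: compress squashes, normalize makes everything louder, and limit prevents clipping.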

Q: Why are iOS DAWs so fragile? Everything crashes around me when I make my projects. Who tests these apps we pay so much for? Apple? The developer? No. It's me!

A: Think of an App as an appliance like these analogous devices:

A hairdryer (1000 Watts / 10 Amps needed)
A drill (100 Watts / 1 Amp needed)

Most home circuits in the US allow up to 15 Amps to be pulled from an outlet (power strips can have 8-10 ports) before the line draws too much current and the breaker must be thrown to prevent the wiring from melting due to heat.

Two hairdryers can't share the same power strip, but you could run 14 drills (chaining 2 power strips into a power strip for 19 free ports).

So, a thought experiment using apps:

1. iSymphony Strings: 10 Amps
2. Perfect Piano: 7 Amps
3. Tiny Pianos: 0.3 Amps

Try using iSymphony Strings and Perfect Piano at the same time: they crash the DAW (10 + 7 = 17 Amps. BREAKER).

Use two Perfect Pianos without issue, but probably little else, at 14 Amps.

Switch to 30 Tiny Pianos… 9 Amps.
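The whole thought experiment fits in a few lines of Python. The app names and "amp" costs come from the analogy above; the 15-amp breaker stands in for the device's real RAM/CPU ceiling:

```python
# Add up each app's "amps" and see whether a loadout trips the breaker.
BREAKER_AMPS = 15

APP_AMPS = {
    "iSymphony Strings": 10,
    "Perfect Piano": 7,
    "Tiny Pianos": 0.3,
}

def trips_breaker(loadout):
    """loadout is a list of (app_name, instance_count) pairs.
    Returns (total_amps, tripped?)."""
    total = sum(APP_AMPS[name] * count for name, count in loadout)
    return round(total, 2), total > BREAKER_AMPS

print(trips_breaker([("iSymphony Strings", 1), ("Perfect Piano", 1)]))
# (17, True)  -- BREAKER: the DAW crashes
print(trips_breaker([("Tiny Pianos", 30)]))
# (9.0, False) -- 30 tiny pianos fit comfortably
```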

“Now why does the Perfect Piano I paid $50 for crash in my DAW while the $5 Tiny Piano works? Doesn't Apple test this crap? I want my money back.”

Apple tests the breakers for AUs at 340 MB; they try to help. I'm only considering one resource in my analogy. Most DAWs show CPU use, but few show RAM, and the interplay between RAM and CPU is exactly what the DAW is managing for you as you create.

So, most DAW complaints on iOS are due to resource limits. More RAM and faster CPUs provide more resources, and Apple appears to be raising the AU resource limits for newer iPads since they have 4 GB. The big dog has 6 GB for $2000.

It's not a problem with iOS. It's just a boundary condition on a mobile device. Seek a compromise with the apps you love (understand their needs) and try not to throw the breakers and ruin your work with outages. It is a challenge worth the effort.

There are many rules to live by but these are my top 3:

1. Freeze heavy RAM/CPU apps to audio early (free up RAM/CPU).
2. Don't expect a project to support 10-15 AU apps (if it does, they're “Tiny Pianos”).
3. Apply FX carefully to avoid wasting resources when they could be grouped or applied in post-production.

NOTE: Many apps are synth/FX/sequencer combos (I'm looking at you, Aparillo). They are definitely hairdryers. More and more we're being seduced by hairdryers, so things will only get worse without knowledge of this limitation.

Q: What is Granular Synthesis?

A: (from a post by @aplourde re:SpaceCraft Granular)

SpaceCraft is a bit of a departure from “classic” granular synthesis, but considering classic granular tends toward the experimental or academic, more traditional tonalities aren't a bad thing, just different.

Classic granular deals with clouds of micro-sounds, typically 1 - 50 ms, the idea being that the individual grains are not distinguishable, but from the cloud the aggregate character emerges. Typically, those grains are sampled stochastically from a source, again adding to the indistinct tendencies.
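That classic recipe can be sketched in a few lines of Python. This is a toy illustration of the technique just described, not any particular app's engine: short (~20 ms) grains picked stochastically from a source buffer, Hann-windowed so their edges don't click, and overlap-added into a cloud.

```python
import math
import random

def granular_cloud(source, sr=44100, grain_ms=20, n_grains=200, out_secs=2.0):
    """Scatter n_grains windowed micro-snippets of `source` at random
    positions in a fresh output buffer, summing where they overlap."""
    grain_len = int(sr * grain_ms / 1000)
    out = [0.0] * int(sr * out_secs)
    # Hann window: fades each grain in and out to avoid clicks
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
              for i in range(grain_len)]
    for _ in range(n_grains):
        src_pos = random.randrange(len(source) - grain_len)  # stochastic pick
        out_pos = random.randrange(len(out) - grain_len)
        for i in range(grain_len):
            out[out_pos + i] += source[src_pos + i] * window[i]
    return out

# e.g. granulate one second of a 220 Hz sine "source"
sr = 44100
source = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]
cloud = granular_cloud(source, sr)
```

With a sine source every grain sounds alike; feed it speech or a melody and the aggregate character the post describes emerges from the overlapping snippets.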

SpaceCraft differs in that a good portion of the grain control window has grain lengths that are clearly discernible snippets of the source audio. Also, the grains are individually played instead of “smeared” in a cloud of grains. Finally, the sampling is deterministic, with regular LFOs oscillating from the selected point instead of random selection.

This is not to say SpaceCraft is wrong, it’s a different approach that yields more controlled and traditionally musical results. Contrast this with Borderlands Granular which has a more “classic” granular approach (micro-sounds, layering of grains, random selection of grains from within a bounding box). Amazing and beautiful instrument, but it does tend towards more abstract sounds.

Even the sequencers of both emphasize their different approaches: SpaceCraft has an arpeggiator that plays traditional pitches in series. Borderlands has a motion recorder that captures your movements of the grain clouds traversing the samples.

How would you differentiate the granular and its uses from other types of synthesis?

Because you’re sampling another sound source, it’s very much dependent on what the sample is. That said, because of the discontinuities and/or layering, it will tend towards a more harmonically dense sound. Also, by virtue of the grains / clouds, it’s a bit harder to get sharp, clearly defined sounds (when you do get this with SpaceCraft it’s typically because you’ve stretched the grain out to a larger, defined snippet of the source). As such, granular synthesis is typically used for more pad-type sounds whether sustained or pointillistic.

Instruments like Quanta, Tardigrain, iPulsaret use granular synthesis as the oscillators of more traditional synthesizer engines. Here, you can get whatever you want, as the granular cloud is pitched and further shaped with filters, envelopes and other modulation. Still, it will tend to more harmonically rich sounds than traditional subtractive synthesis that starts with a sawtooth wave.

How have you used it in your tracks?

For SpaceCraft, as a background pad sound. One tip: use a sample of the song you’re working on as the source for the granulations, especially the main melodic line. This retains a harmonic echo of your track, but re-contextualized. It also helps keep the sound from overwhelming the rest of your track.

I’ll often do this live with Borderlands as it can continuously sample a defined-length buffer. So the granulations are a real-time processing of another source, providing a fractured “echo”.

How would you characterize the sound? It seems capable of a kind of primeval grandeur not found elsewhere. I often search for pads that give an epic feeling. Seems more possible in granular for some reason… at least to me. Kind of like looking through a magnifying glass makes an ant terrifying… By George… that's what the darn thing is!

Granular is the sound of swarms, of aggregate actions, like a wave crashing on a shore. Yes, it can be epic! But, depending on how you use it, it can also be indistinct, a fog in the background that makes all the colors bleed together.

Q: Do I need to dither my music before changing the bit depth or uploading to a streaming site?

A: When you convert audio to a lower bit depth, the quantization can generate artifacts: small, signal-correlated injections of noise. Dithering masks these artifacts behind a smooth “noise floor”. That sounds like something you'd want, and here's some solid information provided by @Tarekith in a dither-related forum thread:

Dither noise is almost always way, WAY below the noise floor of most pro audio equipment. I'd bet 99.99% of professional audio engineers can't even hear dither unless it's in very specific cases meant to highlight it, and at volumes much louder than normal playback on high-end audio systems. While it serves a useful purpose, its effect on audio production is about as minimal as it gets.

The only reason musicians these days even know about it is because early on some manufacturers were trying to oversell their mastering plug-ins and gave people an option of dither types. Otherwise it's something that ideally should just be done when needed by the software, without any knowledge or input from musicians in the first place.

You should definitely be using dither if you have the option to do so when converting to a lower bit depth. My point about the software doing it automatically for you is more of a “in a perfect world musicians wouldn't have to worry about it and it would just be done for you when appropriate” kind of statement.

Sadly, these days in reality it's still something that is left up to us to choose to use or not. And it serves a very valid and useful purpose, even if the effect is incredibly minuscule in the long run.

Always dither when going from higher to lower bit depths. Any type of dither is better than no dither. If you're not sure which kind to use and your software gives you a choice of dither types, triangular dithering is a good all-around option. Hope that clears things up, and I'd appreciate it if this were added to the FAQ, since otherwise it sounds like I'm advocating that people not use dither.
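For the curious, here is what triangular (TPDF) dither looks like in a toy Python sketch of 24-bit-to-16-bit conversion. The function name is mine; the key idea is adding the sum of two uniform random values (a triangular distribution spanning about one LSB) before truncating:

```python
import random

def to_16_bit(sample, dither=True):
    """Quantize a float sample in -1.0..1.0 to a 16-bit integer.
    TPDF dither = sum of two uniform random values, decorrelating
    the quantization error from the signal."""
    scaled = sample * 32767
    if dither:
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    q = int(round(scaled))
    return max(-32768, min(32767, q))  # clamp to the 16-bit range

samples = [0.0, 0.5, -0.25, 1.0]
print([to_16_bit(s) for s in samples])
```

Without the dither line, every sample at the same level quantizes to the same error, which the ear reads as distortion; with it, the error becomes the benign hiss described above.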

Q: How can you split MIDI input notes [chords] across multiple MIDI channels?

A: You can use the Poly To NxM script by setting it up in a 4×1 configuration; for an easy start, just adapt one of the supplied 5×1 sample sessions.
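The idea behind that kind of splitter can be sketched in Python (this is my illustration of the general round-robin technique, not the actual Poly To NxM script): each incoming note-on is assigned to the next of N MIDI channels, and the matching note-off goes out on whatever channel its note-on used.

```python
class PolySplitter:
    """Round-robin incoming notes across a fixed set of MIDI channels."""

    def __init__(self, channels=(1, 2, 3, 4)):   # a "4 by 1" setup
        self.channels = channels
        self.next_idx = 0
        self.active = {}                         # note number -> channel

    def note_on(self, note):
        ch = self.channels[self.next_idx]
        self.next_idx = (self.next_idx + 1) % len(self.channels)
        self.active[note] = ch
        return ch

    def note_off(self, note):
        # Release on the same channel the note-on used
        return self.active.pop(note, None)

split = PolySplitter()
chord = [60, 64, 67]                             # C major triad
print([(n, split.note_on(n)) for n in chord])    # each note on its own channel
```

This is how a chord played into one channel can drive four mono synths, one note apiece.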

  • playground/frequently_asked_questions.txt
  • Last modified: 2019/05/15 14:38
  • by McD