A forum prophet said it best when he posted, “What is it with composers who write great stuff but can’t mix for sh*t.” The question, sans question mark, reads less like a question and more like an observation.

Well, there are two reasons for this carnal condition among composers.

The first reason, and the one that explains why so many composers never buy sequencing programs and audio plug-ins, is, put simply, that they don’t want to. What they want is to key in their work and hear it played back. And that’s it. For that there are already several notation programs, starting with the Notion 4 app on the iPad for under $20.

And let me tell you, this is a big marketplace, literally in the millions globally when you factor in “degreed” composers over the past 40 years, plus un-degreed composers, including thousands of jazz and concert band composers. Then there are all the arrangers and composers in the military (Samuel Adler and Sammy Nestico fit here)! So this marketplace is huge, and it isn’t reached by any single media publication or web site.

And their reasoning is sound (sorry about the pun). MIDI is a time sucker that drags them away from keeping the main thing the main thing – the writing of new music, along with studying music and happily listening to it. As John Williams pointed out in a recent interview, he still hasn’t heard all 106 Haydn symphonies. Ah yes. Listening does take time. Even if it’s just a once-through.

The second reason is found with those composers who realized that what music technology did for the songwriter was now available to them. With orchestral sounds to work with on demand, they could pick up the mantle of becoming a film composer, an artist marketing their own original works, or a combination of the two.

So here you have a group that knows their music, but is now confronted with the time commitment required to learn, literally, two new crafts (at minimum). The first is the craft of sequencing and doing MIDI edits. The second is learning how to record and mix with virtual orchestral libraries.

It’s here we see the great struggle for many composers. Composers are music people who learn best through patterned, step-wise instruction, which is exactly how music technology, overall, is not taught.

So now it’s not just a decision to learn, but a journey as well, one that can take years and may never quite be mastered, because it’s not the composer’s passion.

This described me until recently: writes well, but mixes with the skill of a Texas longhorn.

Consequently, I framed the problem this way.

I am recording with material that has already been recorded and that, excluding libraries recorded in an anechoic chamber, has the sound of the room baked in to some degree. All the rooms are different. Therefore, my objective is to create a template that, as best as possible, gets everyone into the same room, sounding like one orchestra.

Eventually I arrived, but it took a year of dedicated struggle and questioning to get there, including the realization that audio plug-in makers don’t really want to train you on their products so much as they want you to experiment and discover their wonderfulness on your own, with manuals less comprehensive than the calculus book I used in college.

My breakthrough came when Vienna sent me their Vienna Suite for review and Ernest Cholakis of Numerical Sound got me his FORTI/SERTI program. Add to this Verb Session from Ircam and these became my points of entry.

My mentor along the way has been Ernest, because he was always so willing to answer my questions. While the FORTI/SERTI package (only available from VSL) has a lot of stuff in it, for basic mixing it offers impulse responses (IRs) for the three major components of a professional software reverb program: early reflections, reverb tail (decay), and TILT filters, which in layman’s language are akin to EQ. I’ll keep it this simple.
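
To make those three components concrete, here’s a minimal sketch in Python (NumPy/SciPy) of one way a convolution chain like this can be wired up: the dry signal is convolved with an early-reflection IR, then a touch of tail is blended in. This is not FORTI/SERTI itself, and every file name and mix level below is a placeholder.

```python
# A minimal sketch, not FORTI/SERTI itself: convolving a dry signal with an
# early-reflection IR, then blending in a reverb-tail IR on top.
# All file names and mix levels are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def load_mono(path):
    """Read a WAV file as a normalized mono float signal (assumes non-silent audio)."""
    sr, data = wavfile.read(path)
    data = data.astype(np.float64)
    if data.ndim == 2:                      # fold stereo to mono to keep the sketch short
        data = data.mean(axis=1)
    return sr, data / np.max(np.abs(data))

sr, dry   = load_mono("violins_dry_phrase.wav")   # placeholder dry recording
_,  er_ir = load_mono("er_medium_room.wav")       # placeholder early-reflection IR
_,  tail  = load_mono("tail_2s.wav")              # placeholder reverb-tail IR

with_er   = fftconvolve(dry, er_ir)               # put the instrument "in the room"
with_tail = fftconvolve(with_er, tail)            # let the room ring out

mix = np.zeros(len(with_tail))
mix[:len(with_er)] += 0.8 * with_er               # mostly the ER signal...
mix += 0.2 * with_tail                            # ...plus a modest amount of tail

mix /= np.max(np.abs(mix))
wavfile.write("violins_wet.wav", sr, (mix * 32767).astype(np.int16))
```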

TILT Filters
The TILT Filters are pretty astonishing. They’re organized in two groups, bright and dark, and you apply them based on the lowest pitch of the instrument. So for the violins the TILT Filter is a C3 (since G3 is their lowest pitch), while for the flute it’s a C4. Once you get that part, you select bright or dark as needed. There are two ways to apply TILT Filters, but the one I use broadens and brings out the sound. The other approach makes it a little more silky and distant.
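
The FORTI/SERTI TILT Filters are delivered as impulse responses, so the real thing is more refined than this, but as a rough illustration of what a spectral tilt does, here’s a sketch that raises one side of the spectrum and lowers the other around a pivot note. The pivot frequencies and the 3 dB/octave slope are my own illustrative assumptions, not the actual curves.

```python
# Rough illustration only: a spectral "tilt" boosts one side of the spectrum
# and cuts the other around a pivot frequency. The pivot notes and the
# 3 dB/octave slope are illustrative assumptions, not FORTI/SERTI's curves.
import numpy as np

NOTE_HZ = {"C3": 130.81, "C4": 261.63}    # equal-tempered pitches, A4 = 440 Hz

def tilt(signal, sample_rate, pivot_note="C3", db_per_octave=3.0, bright=True):
    """Apply a gentle spectral tilt around the given pivot note."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    octaves = np.log2(np.maximum(freqs, 1.0) / NOTE_HZ[pivot_note])   # distance from the pivot
    slope = db_per_octave if bright else -db_per_octave
    spectrum *= 10.0 ** (slope * octaves / 20.0)   # bright: lift above the pivot, ease below; dark: the reverse
    return np.fft.irfft(spectrum, n=len(signal))

# Example: a bright tilt pivoting on C3 for the violins, a dark one on C4 for the flute.
# violins_bright = tilt(violins_dry, 44100, "C3", bright=True)
# flute_dark     = tilt(flute_dry,   44100, "C4", bright=False)
```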

At any rate I start here first. The results with LASS and Vienna are really startling in a positive way.

Early Reflections
This is what really threw me in FORTI/SERTI – the lack of explanation of how to know what you’re selecting and why. Seeing that I wasn’t getting it, Ernest graciously sent me the FORTI/SERTI info in a series of spreadsheets showing the relationship between room size, early reflections, and reverb tail.

Starting with the Vienna Orchestral strings, Violins 1, I went through every single ER in FORTI/SERTI and discovered that certain early reflections/room sizes (and types) worked better with Vienna than others.

I then tested this with LASS and found the same principle to be true.

Compared to working with a standard reverb, you need only three tools to work out which ERs are best for any given library: your ears, a pencil, and a checklist.

This is the beauty of the IR approach. You use your ears to literally pick out an ER, or series of ERs, that works best with each library in your collection.
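
To make that listening pass less tedious, a short script can render the same dry phrase through every ER impulse response in a folder, so you can audition them back to back and mark up the checklist. This is an assumed workflow sketch; the folder and file names are placeholders.

```python
# A helper for the listening pass (an assumed workflow, with placeholder paths):
# render one dry phrase through every ER impulse response in a folder so the
# results can be auditioned back to back and ticked off on the checklist.
from pathlib import Path
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def read_norm(path):
    """Read a WAV file as normalized mono floats."""
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim == 2:
        x = x.mean(axis=1)
    return sr, x / np.max(np.abs(x))

sr, phrase = read_norm("violins1_dry_phrase.wav")          # placeholder dry phrase

for ir_path in sorted(Path("er_impulses").glob("*.wav")):  # placeholder folder of ER IRs
    _, ir = read_norm(ir_path)
    wet = fftconvolve(phrase, ir)
    wet /= np.max(np.abs(wet))
    out_name = f"audition_{ir_path.stem}.wav"
    wavfile.write(out_name, sr, (wet * 32767).astype(np.int16))
    print("rendered", out_name)                            # listen, then note it on the checklist
```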

Reverb Tails
We went through multiple spreadsheets until Ernest worked out solution sheets showing which ERs worked best with the reverb tails presented in FORTI/SERTI.

Around this time, we went to the next step of identifying the RT60s of rooms where film scores and sample libraries have been recorded.

And then, as a point of curiosity, we went one step further, with Ernest running tests to determine the RT60s of the finished orchestral sample libraries, which oftentimes were different from the RT60s of the rooms they were recorded in. Sometimes the library RT60 was actually less than that of the room, while in a few situations it was the same or greater.
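
For anyone new to the term, RT60 is the time it takes the reverberation to decay by 60 dB. I don’t know exactly how Ernest ran his tests, but a common way to estimate RT60 from an impulse response is Schroeder backward integration, fitting a slope to part of the decay curve and extrapolating to 60 dB. A sketch:

```python
# A common way to estimate RT60 from an impulse response (not necessarily how
# the tests described above were done): Schroeder backward integration, fitting
# the -5 dB to -25 dB span of the decay curve and extrapolating to -60 dB.
import numpy as np

def estimate_rt60(ir, sample_rate):
    """Return an RT60 estimate in seconds from a mono impulse response."""
    energy = ir.astype(np.float64) ** 2
    # Schroeder curve: energy remaining from each point onward, expressed in dB.
    remaining = np.cumsum(energy[::-1])[::-1]
    decay_db = 10.0 * np.log10(remaining / remaining[0] + 1e-12)

    t = np.arange(len(ir)) / sample_rate
    # Fit the -5 dB to -25 dB region to dodge the direct sound and the noise floor.
    region = (decay_db <= -5.0) & (decay_db >= -25.0)
    slope, _ = np.polyfit(t[region], decay_db[region], 1)   # dB per second (negative)

    return -60.0 / slope   # time to fall 60 dB at the fitted decay rate

# Example (hypothetical file):
# from scipy.io import wavfile
# sr, ir = wavfile.read("stage_ir.wav")
# print(round(estimate_rt60(ir, sr), 2), "seconds")
```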

At this point, we had a beginning methodology for working out and selecting those early reflections which would aid in getting all the libraries into the same room before applying the reverb tail. And here, there are several strategies to be deployed.

Verb Session
Composers do not mix by IRs alone! What is often called the Hollywood sound, or better, a film sound, is the sound of a Lexicon 960, an algorithmic reverb that seems to glue the entire recording together.

This is where Verb Session came in, ahead of all the other reverbs, since its approach really runs parallel in concept to Ernest’s. The graphic below shows Verb Session. If you know the room size and the RT60 (decay), you can create a natural-sounding algorithmic reverb, literally, in seconds using the default template.

[Image: Verb Session default template]
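
Room size and RT60 really are joined at the hip. The classic relationship from acoustics is Sabine’s formula, RT60 ≈ 0.161 × V / A, where V is the room volume in cubic metres and A is the total absorption in square-metre sabins. This sketch has nothing to do with Verb Session’s internals; the room dimensions and absorption figure are made up for illustration.

```python
# Not Verb Session's internals -- just the classic acoustics relationship
# between room size and decay: Sabine's formula, RT60 ~= 0.161 * V / A,
# where V is the volume (m^3) and A the total absorption (m^2 sabins).
# The dimensions and absorption coefficient below are made up.
def sabine_rt60(length_m, width_m, height_m, avg_absorption):
    """Estimate RT60 in seconds for a rectangular room with an average absorption coefficient."""
    volume = length_m * width_m * height_m
    surface = 2 * (length_m * width_m + length_m * height_m + width_m * height_m)
    return 0.161 * volume / (surface * avg_absorption)

# A hypothetical scoring-stage-sized room with fairly absorptive surfaces:
print(round(sabine_rt60(30.0, 20.0, 12.0, 0.30), 2), "seconds")   # ~1.61 s
```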

Victory
With these two tools plus Virtual Sound Stage (or Spat if you’re more flush with cash), I was able to get my libs into the same room far more easily and end up with a clear mix, in contrast to the muddier mixes I often hear, which are bathed in algorithmic reverb.

Designing Visual Orchestration 3: Doing The Basic Virtual Orchestral Mix
So, at long last, a system that worked quickly, produced a great sound, and liberated me, at least for now, from having to put in the extended time required to garner the skills of a recording engineer.

So it was time to teach this as a course that would let others in.

I prevailed upon Ernest, and he graciously created, for Visual Orchestration 3, 17 original impulse responses that come with the course, including 3 TILT Filters (covering most of the orchestral instruments), 5 sets of ERs (reflecting five different rooms similar to those where film scores and orchestral sample libraries have been recorded), and 4 reverb tails.

Since most sequencing/digital audio programs come with convolution reverbs, the included IRs enable everyone to learn, hands-on, using the exact same tools.

So this is where you start learning, in a step-by-step manner, how to do a basic virtual orchestral mix with your existing libs. You then learn how to carry the same approach over to an algorithmic reverb (using Verb Session as the model), and from there to the algorithmic reverbs that come with your sequencing program.

So check out Visual Orchestration 3: Doing the Basic Virtual Orchestral Mix, where you’ll also find a free sample lesson.
