AUS230 – MAJOR PROJECT – POST MORTEM REVIEW

I will begin by identifying, in chronological order, several events and issues that took place over the course of the project, along with their causes and the solutions used to mitigate the negative events and facilitate the positive ones:

Illness (2 colds and 1 diabetic complication) (-)

I was afflicted at both the beginning and end of flu season (illness has adverse effects on insulin absorption).

Preventative remedies and more efficient management/monitoring

Recording session with David and Adam (+)

A time-efficient, positive and productive session covering the required dialogue overdubs.

Maintaining a friendly, enjoyable yet productive atmosphere is key. Having the director present meant the vocal takes were exactly to his liking (no back and forth/additional recording sessions required)

Updates with Andrew Hill (+)

Andrew often validated my choices, steered me onto different avenues and always had great suggestions/solutions to attempt/implement while moving forward.

The delivery of a rough/fine cut (-)

This happened ridiculously late in the schedule and didn't allow for much breathing room when it came to modifications, updates and lock-offs. There were several modification updates which, even when seemingly small, had a large effect, particularly on integrated LUFS metering.

Adhering to the schedule, and being more assertive and insistent on deadlines that leave appropriate working time, is essential and a lesson well learned, albeit the hard way this time.

Mix review with Andrew Hill (+)

I was reminded of one of the last stages in producing sound assets: LUFS metering. Factoring this stage into the production schedule would probably have allowed for a more efficient mix review.

Attempts at improving Black Dog sibilance (-)

In an attempt to close the gap between "coolness" and audibility, I made several attempts at processing the recorded dialogue in various ways to get the most out of it, including various EQ approaches, toning down plug-in effects, and recording plosives to accentuate the existing dialogue. A recommendation to EQ in an approximate 80 Hz boost, rather than pitch shifting the audio clip, was taken on board and tested, but the desired results were not reached in a quick enough timeframe to warrant usage.

Re-recording of Black Dog vocals for sibilance improvements (+)

The final solution was to re-record the dialogue with exaggerated enunciation and pitch shift it by a smaller factor than before. I then time-stretched the audio back in line with the dialogue placement from the rough cut, as per the director's request.
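To make the process concrete, here is a minimal sketch of that kind of pitch-shift/time-stretch pass, assuming Python with the librosa and soundfile libraries rather than the tools actually used on the project; the file names and the shift/stretch amounts are placeholders:

```python
import librosa
import soundfile as sf

# Load the re-recorded, over-enunciated dialogue take (placeholder path).
y, sr = librosa.load("black_dog_take.wav", sr=None)

# Pitch shift down by a smaller factor than the earlier, heavier shift
# (-3 semitones here is purely illustrative).
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)

# Time-stretch the processed take back in line with the rough-cut placement.
# rate > 1 shortens the clip, rate < 1 lengthens it; 0.95 is a placeholder.
aligned = librosa.effects.time_stretch(shifted, rate=0.95)

sf.write("black_dog_take_processed.wav", aligned, sr)
```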

Fine cut LUFS (-)

Tuning the LUFS metering was a fine and exhaustive process, and trying to level out before the final lock-off proved more of a hindrance than a help. Recognising the importance and time requirements of this section of post-production is essential for effective scheduling and time management.
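As a rough illustration of how integrated LUFS can be checked (and nudged towards a target) outside the DAW, here is a sketch assuming Python with the soundfile and pyloudnorm libraries rather than the metering plug-ins used on the project; the file name and the -23 LUFS target are placeholders:

```python
import soundfile as sf
import pyloudnorm as pyln

# Load the current mix bounce (placeholder path).
data, rate = sf.read("fine_cut_mix.wav")

# ITU-R BS.1770 meter; integrated loudness over the whole programme.
meter = pyln.Meter(rate)
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Gain-normalise towards a placeholder delivery target of -23 LUFS.
target_lufs = -23.0
normalised = pyln.normalize.loudness(data, loudness, target_lufs)
sf.write("fine_cut_mix_normalised.wav", normalised, rate)
```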


My understanding of workflow was verified by the delivery timing of full-scale assets, which occurred in the latter portion of the project. Google Drive facilitated file transfer, review and progress updates. I feel that although both David and I would have had schedules, these were not adhered to or updated throughout the project. There were a few necessary back-and-forth moments, particularly regarding dialogue segments that required additional length for dialogue animation and for facial expression alterations.

On reflection, my contributions to a productive atmosphere towards the middle of the project were lacking. Aside from the demonstration analogue-synthesised music score, made to confirm pacing and gauge any aversion to synthesis, my contribution was minimal until the desired/required full-length rough cuts were made available. I rationalised this as an equal mix of time management considerations and issues of illness. David took an active role in elements of the audio construction, particularly the war 'scene' at the beginning of the animation. Having provided the various samples for David to select from, I was able to recreate his desired warscape and build upon it. The main issue raised at the final pre-showcase review was Black Dog's dialogue clarity. Andy pointed to an excessive pitch shift and suggested that a boost around 80 Hz in the original or less pitch-shifted samples could work. I made some attempts, but went on with my original plan of overdubbing over-enunciated vocal takes and using a less heavily pitch-shifted clip, plus a subtle, heavily pitch-shifted clip low in the mix to retain the sub/bass feel of the dialogue.

During all recording and mixing sessions in the studios, I included and consulted with David on all finalising decisions before moving on through the required tasks. I believe that the productive atmosphere created allowed David to speak his mind on any readings which didn't capture the feeling of what he was after. It was great to have the director/creator on board, as he was able to directly communicate HOW he wanted his dialogue to be read, and I was able to direct Adam on how to deliver that dialogue into the microphone. During my final session, David was unhappy with a scream, and I was able to immediately facilitate David's scream overdubs.

The intent of Dark Imaginings is to continue the discourse about depression, particularly as it relates to military casualties. The included act of suicide adds to the dark and serious nature of the animation. The first section of score was a set of polyphonic rising chords on tremolo strings using a blur of major/minor tonalities. My main theme consisted of ||: V7 V7 | i i | V7 V7 | i VI7 :|| and used the theremin-style synth lead that I created for the demo to add to the otherworldly feel of the escape sequence.
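As a worked example of that progression, here it is spelled out in a hypothetical key (the actual key isn't stated above, so D minor is assumed purely for illustration):

```python
# ||: V7 V7 | i i | V7 V7 | i VI7 :|| realised in D minor (illustrative key only).
chords_in_d_minor = {
    "i":   ["D", "F", "A"],        # D minor triad (tonic)
    "V7":  ["A", "C#", "E", "G"],  # A dominant 7th
    "VI7": ["Bb", "D", "F", "Ab"], # Bb dominant 7th
}

# Two chord symbols per bar, as written in the progression above.
progression = ["V7", "V7", "i", "i", "V7", "V7", "i", "VI7"]

for bar, (first, second) in enumerate(zip(progression[::2], progression[1::2]), start=1):
    print(f"bar {bar}: {first} {chords_in_d_minor[first]} | {second} {chords_in_d_minor[second]}")
```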

To reflect the intensity and action of the internal thought sequence, I used an element I kept from the original synthesised demo: the tempo changes before and after the internal dream sequence. Initially represented in arpeggiation, the quarter-note drive of the strings leading in, and an accidental ostinato (created during MIDI sequencing), represented the quickening and slowing of the heart while avoiding the conventional thump-thump of sound design. There was clipping in the final section of score, so I replaced the theremin synth lead with a heavily reverberant piano lead to evoke the reflective, affirmative, hope-filled conclusion. Consideration was given to reflecting the yin/yang aspects of dog and rabbit by having pitched-down, bassier duplicates for Black Dog's dialogue and blended, higher pitch-shifted duplicates for the Jack Rabbit character. The sound design remained as realistic as possible for all diegetic sounds. In our final session, David requested heavy breathing during the initial scene, which we overdubbed on the spot.


My main influence for a synthesised score for dark, imaginative projects has been Kyle Dixon and Michael Stein's STRANGER THINGS soundtrack, which draws comparisons to film composer John Carpenter. From a production standpoint, it's obvious that Dixon and Stein are not only composers, but also craftsmen with a deep enough knowledge of analogue synthesizers, like the Prophet 6 and ARP 2600, to be able to manipulate them into producing the exact sentiment that a given scene calls for, from feel-good warmth and melancholia to the darker, more mysterious nature of the story. (N. Yoo, 2016) One of the director's guidelines was that they didn't want the score to be 'too synth'. "There's a part of a synthesizer called "resonance" and it makes everything sound kinda (makes laser noises) laser-y. As long as we're not doing too much of that, I think we're in good shape. I think that's what it means, but you never know" – Dixon (J. Maerz, 2016) There was concern that a synthesised score was leaning towards a more video game sound, so I reverted to string scoring instead and postponed an in-depth exploration of analogue synthesis, for now…

REFERENCES

Yoo, Noah (16 Aug 2016) Inside the Spellbinding Sound of Stranger Things

Retrieved from: http://pitchfork.com/thepitch/1266-inside-the-spellbinding-sound-of-stranger-things/

Maerz, Jennifer (24 Jul 2016) Obsessed with Stranger Things?

Retrieved from: http://www.salon.com/2016/07/23/obsessed_with_stranger_things_meet_the_band_behind_the_shows_spine_chilling_theme_and_synth_score/

 

DEMON SEED: AUDIO REVIEW

Demon Seed (1977) was directed by Donald Cammell and stars Julie Christie as the wife of a scientist (Fritz Weaver) who has invented the Proteus IV supercomputer. However, Proteus soon develops the need to procreate—and uses Christie as the means to that end, trapping her in her house and terrorizing her. Jerry Fielding's avant garde score was a high-water mark in the composer's experimentation, featuring eerie suspense and violence as Proteus and Christie engage in a battle of wills.
Jerry Fielding conceived and recorded several pieces electronically, using the musique concrète sound world of Pierre Schaeffer; some of this music he later reworked symphonically. Film Score Monthly's premiere release of the Demon Seed score features the entire orchestral score in stereo, as well as the unused electronic experiments performed by Ian Underwood. (FSM, 2017)

Musique concrète (French: "concrete music"), experimental technique of musical composition using recorded sounds as raw material. A precursor to the use of electronically generated sound, musique concrète was among the earliest uses of electronic means to extend the composer's sound resources. The finished composition thus represents the combination of varied auditory experiences into an artistic unity.
The technique was developed about 1948 by the French composer Pierre Schaeffer and his associates at the Studio d'Essai ("Experimental Studio") of the French radio system (Wiki, 2017)

Compositions in musique concrète include Symphonie pour un homme seul (1950; Symphony for One Man Only) by Schaeffer and Pierre Henry and Déserts (1954; for tape and instruments) and Poème électronique (performed by 400 loudspeakers at the 1958 Brussels World’s Fair), both by the French-American composer Edgard Varèse. (EEC, 2017)

The development of musique concrète was facilitated by the emergence of new music technology in post-war Europe, namely access to microphones, phonographs, and later magnetic tape recorders (created in 1939).
In 1948, a typical radio studio consisted of a series of shellac record players, a shellac record recorder, a mixing desk with rotating potentiometers, mechanical reverberation, filters, and microphones.

The combination of orchestral and concrète scores suits the mood and narrative of the film, particularly in relation to the fusion of the real/classical with the technological/mechanical. In regards to sound design, all the dialogue is mixed appropriately. The effected Proteus dialogue, which involves slightly pitch-raised recordings, various delays and chorus, with phaser/flanger on its fluctuating reverb, effectively adds tension and unease when applied during Proteus's moments of heightened anger. The vocal effect paved the way for future synthesised voices, particularly the synthetic android character Ash from the original Alien (1979).
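As a rough modern-day sketch of a comparable "processed computer voice" chain (not the technique Fielding's team actually used), assuming Python with Spotify's pedalboard library; every file name and parameter value below is an invented placeholder:

```python
import soundfile as sf
from pedalboard import Pedalboard, PitchShift, Chorus, Delay, Phaser, Reverb

# Load a dry dialogue recording (placeholder path).
voice, sr = sf.read("proteus_dialogue_dry.wav")

# Slight upward pitch shift, then modulation, delay and a washy reverb,
# loosely mirroring the layered treatment described above.
board = Pedalboard([
    PitchShift(semitones=2),                            # slight raise in pitch
    Chorus(rate_hz=0.8, depth=0.4),                     # thickening / movement
    Delay(delay_seconds=0.18, feedback=0.35, mix=0.3),  # short repeating delays
    Phaser(rate_hz=0.5),                                # swirl on top of the tail
    Reverb(room_size=0.7, wet_level=0.4),               # fluctuating, washy space
])

effected = board(voice, sr)
sf.write("proteus_dialogue_fx.wav", effected, sr)
```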

The score and soundscapes during the many "dream" sequences reflect their uneasy and otherworldly nature. The foley is always disconcerting and appropriate, especially the electronic hum mixed with mechanical clicks for Proteus's "arm chair" (a wheelchair with a mechanical arm attached) and the large array of tones and weird bubbly noises for Proteus's bizarre geometric manifestation.


REFERENCES
Editors of Encyclopedia Britannica (2017) Musique Concrète (Musical Composition Technique)
Retrieved from: https://www.britannica.com/art/musique-concrete

Film Score Monthly (2017) Soylent Green/Demon Seed
Retrieved from: https://www.filmscoremonthly.com/cds/detail.cfm/cdID/264/

Wikipedia (2017) Musique concrète
Retrieved from: https://en.wikipedia.org/wiki/Musique_concr%C3%A8te

ALOG (AUDIO BLOG) ON AUDIO

Adobe “sound-shop”, cool drum hacks and an ocarina, OH MY!

In the podcast to end all podcasts, Corey, Stuart and I let loose, pondering life, the universe and everything related to audio. Mostly everything…

Thanks go out to Corey and Stuart for contributing their time and voices to this LO ticking adventure. All hail pre-dawn where anything is possible, everything is permitted and things sound a lot funnier than they actually are…

DRUM REC TECHS

THE GLYN JOHNS TECHNIQUE

A British musician, engineer and producer who made a name for himself with his technique for capturing Led Zeppelin’s drum sounds with only 4 mics! Born just outside London in 1942, Glyn Johns was sixteen years old at the dawn of rock and roll. His big break as a producer came on the Steve Miller Band’s debut album, Children of the Future, and he went on to engineer or produce iconic albums for the best in the business: Abbey Road with the Beatles, Led Zeppelin’s and the Eagles’ debuts, Who’s Next by the Who, and many others.(Penguin Random House 2017)

“Specifically all you need for this method are 2 overhead mics (ideally large diaphragm condensers), one kick mic (dynamic or condenser), and one snare mic (usually a dynamic). The big picture is that the sound comes from the overheads while the kick and snare mics act as “spot” mics to fatten up those two huge elements of the kit and give you a bit more to mix with.” (G. Cochrane, 2011)

[Image: Glyn Johns mic placement]

  1. Overhead mic, 3-4 feet directly above the snare; record and listen for a complete balance, repositioning the mic if required.
  2. 2nd mic equidistant from the snare, to the right of the toms, pointing at the hi-hat.
  3. Kick and snare mics acting as spot mics to enhance those two drum elements, as otherwise there will be a lack of low-end punch (kick) and snare fatness.
  4. The cherry on top appears in the mixing stage: the above OH is panned halfway to the right, and the side OH is panned far left. This gives balance, depth and a stereo image to the kit (with kick and snare mics centred).

The man himself shows off his technique in this video:

Ryan Earnhardt from CREATIVE SOUND LAB describes the 'DANGER ZONE', the cymbals' null zone. He suggests moving the 2nd "OH" mic higher and further back from the kit, remaining equidistant (almost over the shoulder, similar to the RecorderMan technique). Alternatively, Ryan suggests a "Reverse Glyn Johns": starting with the side "OH", finding a suitable position which captures the toms and avoids cymbal fluctuations, and then working equidistant from that to find the above-OH position, as can be seen in the video below:

An addition to consider is taken from the RecorderMan drum technique, where the distance to the kick beater is also taken into consideration when placing the OH mics. This allows for fewer phase issues concerning the kick drum. RecorderMan is a close-OH technique (requiring only 2 mics) that captures a balanced and phase-accurate stereo image of the entire drum kit. (NOTE: extreme panning of RecorderMan overhead tracks can leave a hole in the middle of the stereo field, which is where the additional mics, as per Glyn Johns, would come in handy.) (D. Rochman, 2011)
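As a quick back-of-envelope illustration of why that equidistant placement matters, here is a small sketch (plain Python; the distances and sample rate are assumed values, not measurements from any real setup):

```python
# A path-length mismatch between two mics turns into a time offset between the
# two recorded tracks, which is what causes phase/comb-filtering problems.
SPEED_OF_SOUND = 343.0  # metres per second, roughly, at room temperature

def arrival_offset(dist_a_m, dist_b_m, sample_rate=48000):
    """Return the (milliseconds, samples) offset caused by two different mic distances."""
    delta_s = abs(dist_a_m - dist_b_m) / SPEED_OF_SOUND
    return delta_s * 1000.0, delta_s * sample_rate

# Placeholder distances: one overhead 1.00 m from the snare, the other drifting to 1.15 m.
ms, samples = arrival_offset(1.00, 1.15)
print(f"15 cm mismatch is roughly {ms:.2f} ms, or {samples:.0f} samples at 48 kHz")

# Keeping both mics equidistant (and accounting for the kick beater, as per RecorderMan)
# drives this offset, and the comb filtering it causes, towards zero.
```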

Eric Sarafin is a gold and platinum award-winning record producer and mixer. He has worked with many nationally known acts, including The Pharcyde, Tone Loc, Ben Harper, Lifehouse, Nine Days, Barenaked Ladies, Amy Grant, and Foreigner, just to name a few. Eric is also a published author and has written several books under the pen name of MixerMan about recording, producing, and mixing. (E. Sarafin, 2014)

[Image: RecorderMan stereo drum miking]

Charlie Waymire and his mohawk demonstrate the RecorderMan technique:

REFERENCES

Penguin Random House (2017) Book; Sound Man  
Retrieved from: http://glynjohns.com/books/book

Cochrane, Graham (10 Jan 2011), the Glyn Johns Drum Recording Method
Retrieved from: https://www.recordingrevolution.com/the-glyn-johns-drum-recording-method/

Rochman, David (23 Jun 2011) Five techniques For Stereo Miking Drums 
Retrieved from: http://blog.shure.com/five-techniques-for-stereo-miking-drums/

Sarafin, Eric (2014), About; MixerMan
Retrieved from: https://mixerman.net/about/

Images retrieved from

Glyn Johns – https://isaacwebb.wordpress.com/2015/11/23/glyn-johns-method/

RecorderMan – http://blog.shure.com/five-techniques-for-stereo-miking-drums/

REVIEW: KORG MINILOGUE

POWERFUL SOUND CREATION AND RICH VARIETY ARE THE TRUE HALLMARKS OF AN ANALOGUE SYNTHESIZER

The structure consists of (see the signal-chain sketch after this list):
2 VCO (Voltage Controlled Oscillator)
1 VCF (Voltage Controlled Filter)
2 EG (Envelope Generator)
1 VCA (Voltage Controlled Amplifier), and
1 LFO (Low Frequency Oscillator)
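As a very rough sketch of that signal flow in code (one voice only, Python/NumPy with SciPy; this illustrates a generic VCO -> VCF -> EG/VCA chain with an LFO on top, not the Minilogue's actual circuitry, and every value is a placeholder):

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 48000
t = np.arange(SR) / SR                       # one second of audio

# 2 VCOs: two slightly detuned sawtooth oscillators, summed.
def saw(freq):
    return 2.0 * ((t * freq) % 1.0) - 1.0

vco = 0.5 * saw(110.0) + 0.5 * saw(110.7)    # detune for width

# 1 VCF: a simple low-pass filter standing in for the voltage-controlled filter.
b, a = butter(2, 1200 / (SR / 2), btype="low")
vcf = lfilter(b, a, vco)

# EG driving the VCA: a linear attack/release envelope scaling the amplitude
# (the real synth's two EGs are collapsed into one here for brevity).
attack, release = int(0.01 * SR), int(0.4 * SR)
env = np.ones_like(t)
env[:attack] = np.linspace(0.0, 1.0, attack)
env[-release:] = np.linspace(1.0, 0.0, release)
voice = vcf * env

# 1 LFO: a slow sine used here as a gentle tremolo on the output level.
lfo = 1.0 + 0.1 * np.sin(2 * np.pi * 5.0 * t)
out = (voice * lfo).astype(np.float32)
```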

8 VOICE MODES

The Minilogue’s unique wave shape capability lets you fine tune the oscillators’ harmonics, creating the most divine sounds and compositions. The Minilogue is also equipped with a variety of powerful types of modulation (cross, ring and oscillator sync) as well as a delay with a high-pass filter. (KORG Inc. 2017)
"The delay line itself offers just delay time and feedback gain controls, so it's equivalent to the simplest stomp-boxes and single-head tape delays. Nonetheless, it's capable of a range of quasi-reverberant effects as well as standard echoes and delays. What's more, with a maximum feedback gain of a tad greater than unity, you can generate all manner of '50s sci-fi effects." (G. Reid, 2016)
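A minimal sketch of such a delay line, purely to illustrate the "delay time plus feedback gain" idea from the quote (Python/NumPy; this is not a model of the Minilogue's delay, and the parameter values are placeholders; feedback at or above 1.0 runs away exactly as described):

```python
import numpy as np

def feedback_delay(x, sr, delay_s=0.35, feedback=0.6, mix=0.5):
    """Single-tap delay with feedback: y[n] = x[n] + feedback * y[n - D]."""
    d = int(delay_s * sr)
    y = x.astype(np.float64).copy()
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]      # each echo feeds back into the line
    return (1.0 - mix) * x + mix * y     # blend dry and wet signals

# Placeholder input: a single click, so the echoes are easy to see/hear.
sr = 44100
x = np.zeros(2 * sr)
x[0] = 1.0
wet = feedback_delay(x, sr)              # feedback > 1.0 makes the echoes grow instead of decay
```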

The example above is a demo soundtrack pacing map for an animation in its early stages. The foundation of this demo is built on the [insert one name and number] arpeggiator preset.
I used the tempo pot to increase tension and pacing, initially simulating a heartbeat feel. Throughout each of the recorded layers, the cutoff knob played an important role in opening up the sound at the escalation of the main melodic theme (using theremin-styled tone # [insert here]).
Conversely, the cutoff was used to remove the mid/high content for a brief underwater perspective. Each of the descending diminished chords has varied cutoff tails, while the tempo pot was reduced on the other arpeggiated tracks, connoting an end to urgency and leading into a relaxed, reflective conclusion.

As an introductory aid to analogue synthesis, I find the Minilogue to be exceptional. The oscilloscope display adds a visual element to synthesis exploration/manipulation. This is just the tip of the Minilogue iceberg; there is a range of features and functions (e.g. the 16-step sequencer) that I am yet to delve into, but I am thoroughly looking forward to the experience, process and learning this will facilitate.

The accompanying website includes, among tech specs, a sound librarian for managing program data and factory and bonus sound packs distributed by KORG.
http://www.korg.com/au/products/synthesizers/minilogue/librarian_contents.ph

REFERENCES

Goldman, Dan ‘JD73’  (15 Jan 2016) KORG Minilogue Review
Retrieved from: http://www.musicradar.com/reviews/tech/korg-minilogue-633098

KORG Inc. (2017) Minilogue, Polyphonic Analogue Synthesizer
Retrieved from: http://www.korg.com/au/products/synthesizers/minilogue/

Reid, Gordon (Mar 2016) KORG Minilogue Review
Retrieved from: http://www.soundonsound.com/reviews/korg-minilogue

MAJOR PROJECT POST MORTEM (AUD220) FOR ART’S SAKE

FOR ARTS SAKE is "A poetic documentary that takes you on an artistic journey through the thoughts and experiences of talented artists, in order to explore the therapeutic ways art can express emotion." In regards to the audio components she required, project creator Talisha Zarb stipulated: "Audio will play a large role in creating emotion. I want to incorporate a mood that is dark, mind altering and mysterious. As we will be having ADR of the artists which will be used over the artistic visual images, each artist will answer the questions given. Music and sounds, I want to include synth, echoes, drums, reverbs etc. The audio will play along with the three act structure of the documentary, which follows the artist's journey into the mind." (T. Zarb, 2017)

Our bounce and publication of the film
Both John and I acted as team players and worked successfully together during this project. We always kept clear and consistent communication and weren't afraid to ask each other questions. Self-confidence in our abilities ensured our recording session with 2 of the 3 interviewed artists, on the C41 at SAE, ran smoothly and professionally. The film students commented positively on this at the completion of the session; they had no issues and were very positive about the final deliverable we provided for the film. During the final mixing stages, our effective communication skills aided us in delivering a quality product given the time constraints applied to us, thanks to audio's position in the creator's workflow.

We utilised our time management skills and flexibility/adaptability, and worked well under pressure when a short time frame was assigned for audio. We were given the weekend prior to delivery to produce our mix; unfortunately, we had been provided with audio files in which the unprocessed dialogue and backing music were already mixed together. Using my problem-solving skills, I was able to insist that the project creator supply us with separated audio elements. Then, using the previous Pro Tools session as a template, I realigned the new separated audio clips to match their combined predecessors and set out to mix and fix the dialogue tracks (there were many plosives that needed to be reduced and ends of words faded out in the film students' dialogue edits). The music provided unfortunately had a lot of clip pops where, one assumes, the film students were unaware of the "top and tailing" processes which an audio student takes for granted. I used my flexibility and adaptability to creatively address and fix these issues.
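For illustration, here is a tiny sketch of the "top and tail" idea, i.e. short fades at clip boundaries so edits don't land on a non-zero sample and pop (Python/NumPy standing in for what was actually done with clip fades in Pro Tools; the 5 ms fade length is a placeholder):

```python
import numpy as np

def top_and_tail(clip, sr, fade_ms=5.0):
    """Apply short linear fade-in/fade-out ramps to a mono clip so it starts and ends at silence."""
    n = min(int(sr * fade_ms / 1000.0), len(clip) // 2)
    out = clip.astype(np.float64).copy()
    ramp = np.linspace(0.0, 1.0, n)
    out[:n] *= ramp            # fade in: removes the click at the head of the clip
    out[-n:] *= ramp[::-1]     # fade out: removes the click at the tail
    return out

# Usage: run each edited music clip through top_and_tail() before butting clips together.
```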

The "no time, end of the chain" workflow was certainly not unexpected. What was unexpected, though, was the lack of audio experience that this project actually entailed. The interviews were regarded as narrative elements, so we were not required to record any live sound on location, as the shots were being filmed Motor Only Sync (MOS). "The term "MOS" is used, on a slate, when a scene is filmed without sync sound (or any sound).
What MOS originally stood for is still up for discussion. MOS may stand for…
Minus optical signal, Minus optical sound, Minus optical stripe, Muted on screen, Mute on sound, Mic off stage, Music on side, Motor only shot or Motor only sync." (filmsound.org)

We never had our ability to accept and learn from criticism tested as none was encountered.

There was a suggestion of creating our own backup music elements after it was revealed that our compositional services would not be required. Instead, John and I acted as team players, used our effective communication skills to reiterate our strong work ethic with the film students, and agreed that we would be given stems so that the actual mixing of the musical and narrative audio elements would be under our control and discretion. This was unfortunately not the case at first, but it was rectified shortly after. Whether there was an error in initial communications as to exactly what was expected of us in the scope document, or a snap decision to take music composition out of our hands, we will never know. I often felt a sense of irony that the two students with a direct focus on audio for film ended up with the project with the fewest film audio tasks to take part in. Luckily, both of us have experience with these elements, so it was not a total loss. Our strong work ethic and positive attitude carried us through these unexpected happenings that were completely out of our control.

"This paragraph's, and to a greater extent, this sentence's main priority and function, sole purpose, or raison d'être (which Dictionary.com defines as: a reason or justification for being or existence, from the French 'reason for being') is to generate word content, approximately 40-50 words' worth, towards increasing the total word count of this literary submission; 70 additional words to be specific, far beyond the amount of words previously approximated. This following sentence serves as part of the same function, with similar intent, but to a far lesser extent." (M. Wearing, 2017)

A Gantt chart created and utilised by the creator and all participants may have facilitated greater communication and time management. "A Gantt chart, commonly used in project management, is one of the most popular and useful ways of showing activities (tasks or events) displayed against time. On the left of the chart is a list of the activities and along the top is a suitable time scale. Each activity is represented by a bar; the position and length of the bar reflects the start date, duration and end date of the activity. This allows you to see at a glance:
  1. What the various activities are
  2. When each activity begins and ends
  3. How long each activity is scheduled to last
  4. Where activities overlap with other activities, and by how much
  5. The start and end date of the whole project.
To summarize, a Gantt chart shows you what has to be done (the activities) and when (the schedule)." (Gantt.com, 2017)
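A trivial sketch of the idea in code, purely illustrative (Python; the activities, start weeks and durations below are invented placeholders, not the project's actual schedule):

```python
# Print a minimal text Gantt chart: each activity becomes a bar placed by start week and duration.
tasks = [
    ("Dialogue recording", 1, 2),   # (activity, start week, duration in weeks)
    ("Score demo",         2, 3),
    ("Foley & SFX",        4, 4),
    ("Mix & LUFS pass",    7, 3),
]

total_weeks = 10
header = "Activity             " + "".join(f"W{w:<3}" for w in range(1, total_weeks + 1))
print(header)
for name, start, length in tasks:
    bar = "    " * (start - 1) + "####" * length
    print(f"{name:<21}{bar}")
```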

John and I didn't do anything out of the ordinary as such, so continuing as we are will allow us to perpetuate the things that went right and avoid the things that went wrong. It was good to demonstrate our professional and positive attitudes to group work and the timely production of high-quality audio deliverables, and to solidify our practice in the professional realm.

REFERENCES
Filmsound.org (March 23, 1997) MOS
Retrieved from: http://filmsound.org/terminology/mos.htm

Gantt.com (2017) What is a Gantt chart?
Retrieved from: www.Gantt.com

Wearing, Mark (May 5, 2017) AUS 220 ‘Major Project’ Post Mortem
Retrieved from: https://wordpress.com/post/mremannblog.wordpress.com/1557

Zarb, Talisha (March 2017) FOR ARTS SAKE
Retrieved from: FOR ART’S SAKE proposal document

‘LIVE SOUND’ REFLECTIVE EVALUATION

After 7 weeks of “live sound” training from Ben, it is time to reflect upon our learning so we can address issues and challenges before the upcoming big gig at the Boston.

Generally speaking, our class has been on top of the live sound learning process. Our first few weeks were spent absorbing as much information as possible, which allowed us to aid each other in times of uncertainty or forgetfulness. Any issues that we came across were resolved and remembered in the following week, which shows we are all learning from our own and each other's mistakes. We all worked cohesively as a group and there were very few moments where nobody knew what to do. We often approached roles in pairs so that everyone had another person to ask questions of and to confirm decisions and processes with. We effortlessly took on differing roles to ensure each setup ran as smoothly as possible.

Our group certainly wasn't perfect; there was at least one thing per class that we missed, plugged in incorrectly or, in one case, not at all! I had one very confusing session where there was a massive XLR patching mix-up at the stage box end, which resulted in issues getting signal to the correct speakers and subs. This was due to a miscommunication where I had been given the wrong information about what was plugged in where at the front of the multicore. Ben was on hand to problem-solve and walk us through correcting our slip-ups. In general, our setup speed was rather slow; the week before our practical examination we were still running about an hour behind our expected progress. I think this was due to low confidence in our roles and also a very linear perspective. We would often be waiting for other members to finish their tasks when many tasks belonging to separate roles could have been completed at the same time, or with more of an overlapping flow. This resulted in a lot of time where people were milling about waiting for the next step or unsure about helping others to complete their tasks. Personally, my biggest issues during my senior tech role were some assumptions about I/Os from Pro Tools (to the digital desk via optical cables). Whilst I thought I had problem-solved enough to get signal through, some of the output paths were incorrect (I had the stereo mix going to the wrong pair of channels, what am I like?), so whilst there was signal going to the channels it wasn't the correct signal.

There was not a lot in the realm of unexpected events. The dying multicore was a perpetual issue at both ends, including broken pins and a tangled mess of vaguely labelled XLR connector ends; every week something new and unexpected died and we would have to work around it. We had some major feedback issues during our monitor setup class, due to an error with the mic/line inputs into the monitor speakers. It is safe to say that, after some mild ear damage, that lesson was learnt.

Thanks to our rotating of roles over the final weeks, we were all able to keep each other in check and help out when any issues, questions or concerns arose. Some people continued in the same roles throughout the final weeks, and this was very helpful, not only for their own learning but in aiding others to understand the flow and process. I learnt a lot about tuning the room from watching Lucy repeatedly, and then confidently, go through the PA room-tuning process. Having peers about to confer with and help problem-solve was invaluable and allowed us to keep doing the right things and avoid the wrong things from week to week. Any time a mistake was made, not only did the person(s) in the role learn from it, but the rest of the group did too.

In the end, I would have liked more time and more opportunities to investigate some of the different roles more thoroughly. I didn't get much of a chance to use the digital desk, especially when it came to setting up mix layers, setting up effects, and applying compressors, gates etc. to different channels. Most of my "learning" was over the shoulders of my classmates. Having said that, as far as my participation goes, I tried to step back and take a back seat, of sorts, for a number of reasons:

  1. I have had previous experience in live sound (I have performed live sound duties for a couple of my bands where a PA has not been provided by venues, ranging from a 4-piece gypsy jazz band to an 8-piece ska band) and so wanted others to be able to take full advantage of the hands-on experience in class.
  2. Live sound is not my field of educational focus (my focus and reason for studying at SAE being post-production) and I wanted to ensure I wasn't preventing those who were focusing on live sound from getting the most out of these classes.

It was a great chance to validate my pre-existing knowledge and to build upon it, filling in any gaps that I had. My previous experience was on the simpler side (minimal equipment and no dedicated operating duties during performances: set and forget, making adjustments whilst performing), so things like crossovers, PA/room tuning and EQ were welcome knowledge. I am definitely looking forward to the next few weeks when we go to the Boston and put our skills to the test in a 'real' context.

FILM SCORE GENRES (AUS220)

CLASSICAL & NOISE

BATMAN (1989)

Before Hans Zimmer set his masterful compositional mind on the Dark Knight (having been a synthesist on a Batman animated film in 1993!), Danny Elfman helped director Tim Burton realise his dark, stylised vision for the caped crusader, which set the tone for many movies, animations and games that followed. Turning away from the Zock! Whack! Kapow! of the Adam West incarnation from the '60s, Danny Elfman's powerful, heroically dark, classical scoring (performed by the Sinfonia of London, with up to 50 musicians) enhances Burton's characterisation.
In particular I will be looking at Batman to the Rescue:

The first full-blown action cue in the movie. The track starts with a bang as Vicki and Batman flee from the Joker’s goons. There’s some cool timpani, percussion and piano interludes between the orchestral moments. Then Danny brings out all sorts of rare percussion instruments as Batman takes on the goons one at a time.
(I.Davis, 1996)

A flurry of horns and flutes pumps this section of score with adrenaline and urgency; this is then handed to the string section as the horns emit four notes of the iconic five-note main theme melody (played during Vicki Vale's first encounter with the Batmobile) while the Joker's goons scramble from the building. The ensuing car chase scene flits back and forth between the main theme and woodwind ostinatos symbolising the sirens/danger of the Joker's goons' pursuit cars. Internal cockpit shots are accented by held brass notes before the action continues with woodwind flurries. Every time the Batmobile is the main visual focus, the main theme is reprised. A gadget-assisted turn is matched with string flurries. During the car crash/pile-up, timpani underscore the chaos, which allows the SFX to take the spotlight without losing any impact. As the Batmobile screeches to a halt, so does the music. The tumultuous percussion, particularly the snare blasts, reflects a hectic and frantic cityscape, accompanied by piano/double bass playing fast-paced ostinatos that drive the action throughout. Strings and cymbals represent the fantastical morphing nature of the shield plates. The main theme is reprised for Batman's grappling hook antics, including a xylophone ostinato that provides a clock-like countdown urgency. A pipe organ, which plays a large part in the film score, lends overwhelming power to the main theme, and there is a drop in dynamic intensity during the brief dialogue. Strings and cymbals accentuate some of Batman's punches, while pizzicato strings portray the cowardly escape of the last standing Joker henchman.
The instrumentation of the score is quite diverse. Danny uses a bass marimba, pipe organ, celeste, bongos (his trademark), piano and electric piano, and tons of rare percussion instruments. The most intriguing element in the score for Batman is easily the hyperactive percussion section. For his henchmen, Elfman conjures an array of wildly percussive rhythms that accompany their chaotic activities, and the mix of the drums specifically creates an outstanding soundscape, especially for moments of rowdy Joker behavior and the resulting havoc. (Filmtracks, Aug 97)

SOUND OF NOISE (2010)

Doctor Doctor Gimme Gas (in my ass) is the first of 4 renegade performances by an anarchic guerrilla percussionist group. The film's soundtrack, composed by Fred Avril, Magnus Borjesson and the 'Six Drummers' (imdb.com), is predominantly noise music, which is derived from electronic/industrial/house music. Noise is an experimental genre that strays away from conventional musical structure and consists primarily of noise. Noise can be generated with virtually anything, including acoustic and traditional instruments, non-musical objects and machines, as well as electronic equipment and extreme vocal techniques. The origins of noise in music date back to the early 20th century and the Futurism movement. Luigi Russolo is credited as being one of the first artists to consciously use noise as a backbone of music composition. In his 1913 manifesto, L'arte dei rumori ("The Art of Noises"), he stated that artists should not limit themselves to traditional instruments, because there is an infinite number of different noises that can be used to enlarge and enrich the domain of musical sounds. (Sonemic, 2017)

This soundtrack is a unique example of a diegetic soundtrack. Diegetic sound is any sound whose source is visible on the screen or whose source is implied to be present by the action of the film; in this case, more specifically, source music: music represented as coming from instruments in the story space (Sven E. Carlsson, 1999). This is unlike regular musicals, which comprise diegetic vocals with non-diegetic musical accompaniment.
These sources include: a heart rate monitor (rhythm), bed lift mechanism (chordal harmony), air compressor, various medical machine beeps (rhythm and melody), extendable pipe (bass), oxygen tanks (different volumes producing differing pitches), a syringe on a kidney bowl (hi-hat), vocal "notification" samples, and the patient himself (chest compressions for kick drum, hand claps as hand claps, and a belly-slap solo)!

There are 2 sound and foley advisers and 2 foley artists (from Europa Foley) credited. Also, Matin Hennel is credited for "re-recording music sequences", which suggests to me that the found sounds would have been recorded on location and then replaced in post, much like ADR: re-recorded in a treated space, or by borrowing the machines etc. and recording them in the studio. Johannes Stjärne Nilsson (director) elaborated: "A big inspiration for the film was John Cage and chance music. The best music for Cage was to open his windows and hear the sound of traffic – the kind of noise that most of us go out of our way to shut out." The filmmakers said that the entire project took about five years to create, starting with a year of simply gathering and recording individual sounds, drawn from junkyards, hospitals and other settings. A digital library of about 23,000 individual sound files was assembled. (B. Wise, 2012)

This piece is in common time (4/4) at 120 (heart) bpm – the outro at 180 (heart) bpm

REFERENCES

Carlsson, Sven E. (1999) – (taken from excerpts from Bordwell & Thompson, Film Art, and Reisz & Millar, The Technique of Film Editing)
Retrieved from: filmsound.org/terminology/diegetic.htm

Davis, Ian (1996) – Music for a Darkened Knight
Retrieved from: http://www.bluntinstrument.org.uk/elfman/comment/dark_knight/

Filmtracks Publications (29/08/97) – Editorial Review
Retrieved from: http://www.filmtracks.com/titles/batman.html

IMDB.com – Sound of Noise
Retrieved from: http://www.imdb.com/title/tt1278449/fullcredits?mode=desktop&ref_=m_ft_dsk

Sonemic, Inc. (2000 – 2017) – Genres>Noise
Retrieved from: https://rateyourmusic.com/genre/Noise/

Wise, Brian (13 Mar 2012) – 'Sound of Noise' Film Portrays Musical Crime Spree
Retrieved from: http://www.wqxr.org/#!/story/191723-sound-noise-film-portrays-musical-crime-spree/

the FINAL eighteen hours

PERFORMANCE-WISE

LITTLE PEDRO @ the CIVIC (Fri 9th DEC)

VOUDOU ZAZOU @ the SWALLOW bar (Sun 11th DEC)

PROJECT – WISE

IT (was) CRUNCH TIME!

This week T-REX (Bec, Damien, Josh & I) used our final studio session to finalize our 2 masterpieces: TASTE MY INSTRUMENT and INDECENT.

T.M.I. is a lighthearted and bouncy hip hop track with a gospel-styled breakdown, while Indecent is a jazz/funk-infused hip hop track reminiscent of the Red Hot Chili Peppers' 'If You Want Me to Stay' from 1985 (which I reference with my subtle vocal backing).

For T.M.I. I contributed the chord progression for the verse/chorus, the tenor sax 'bass' line and the melodica riff, and played the organ synth in the breakdown. For INDECENT I created sax samples and intended to play the RHCP reference on organ synth, but ended up recording vocals due to time constraints. Unfortunately we couldn't execute our intended 'live jam'-styled recording, but because we were prepared with our audio samples (drums, sax, guitar, vox) we were able to whip the track up during our final studio session using the bass synth line as the foundation.

While preparing for INDECENT I found out, the hard way, that certain of the available trigger pads work only with ABLETON, which impaired their functional usage with Pro Tools, and that MIDI-mapping an Oxygen 25 keyboard is not as straightforward as it seems (unless you are using ABLETON).

Using ABLETON has been a big focus for my solo major project: the 8-bit rescoring of a selection from The Fifth Element. I restricted myself based on the limitations of the NES sound chip, which only had 5 channels for audio. My 8 channels consisted of 2 pulse waves (melodic), a triangle wave (bass) and a noise wave (drums); 2 noise channels and a pulse wave (for SFX/foley); and one channel for vocoded vocals/dialogue. I used the Operator synthesiser and was able to make a few differing sounds, and through the clip warp/transposition functions I was able to create a wide variety of tones for a large number of visual action cues. For example, one sample could be stretched out and transposed quite low for an explosion sound; the same sample could be time-stretched shorter and pitched up an octave for a crunchy punch sound; and the same sample with a low-pass EQ, triggered in succession, makes for good footsteps. Thanks to my horrible time management skills, my vocoding efforts were short and a tad underwhelming. I believe my choice of modulation signal contributed to a distortion that arose and was difficult to reduce.
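For illustration, here is a small sketch of the three basic chip-style waveforms described above (pulse, triangle and noise), generated in Python/NumPy as a stand-in for what was actually built with Ableton's Operator; the frequencies and durations are placeholders:

```python
import numpy as np

SR = 44100

def pulse(freq, secs, duty=0.5):
    """Pulse/square wave: the melodic channels."""
    t = np.arange(int(SR * secs)) / SR
    return np.where((t * freq) % 1.0 < duty, 1.0, -1.0)

def triangle(freq, secs):
    """Triangle wave: the bass channel."""
    t = np.arange(int(SR * secs)) / SR
    return 2.0 * np.abs(2.0 * ((t * freq) % 1.0) - 1.0) - 1.0

def noise(secs):
    """White noise burst: drums and many of the SFX/foley hits."""
    return np.random.uniform(-1.0, 1.0, int(SR * secs))

# e.g. a lead note, a bass note and a snare-ish hit (placeholder pitches and lengths):
lead  = pulse(880.0, 0.25, duty=0.25)
bass  = triangle(110.0, 0.5)
snare = noise(0.12) * np.linspace(1.0, 0.0, int(SR * 0.12))
```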

the Disciples of Steve contributed to LUCID STUDIO’s first EP release featuring differing audio interpretations of SPRINGVIBES. D.o.S. submitted an Indie Funk offering which embodies the fresh developing nature of Spring.

My biggest issue, upon self-analysis and reflection, is (still) time management. There was very much a ‘last minute’ nature across the board and this is something I must overcome in the immediate future heading into the second year of audio studies at SAE.

EIGHTEEN HOURS – WEEK TWELVE

This week I completed the musical elements for my solo 8-bit re-scoring project. I wrote the 3rd and final piece using a number of differing techniques, which include:

Rhythmic Bass Ostinato – establishing and maintaining a marching element and drive

Descending Chromatic Octave Split Lead – complementing the bass ostinato with clockwork drive and digital-alarm apprehension.

Ascending Lead – which helped to establish the chord progression (beginning in Dm: D E F G Ab implies a Bb chord (Ab being the dominant 7th); F G A B C implies an Ab chord (C being the major 3rd); etc.)

Triplet-y Feel Established by the Hi Hat – introduces a playful twist to this ominous tune containing tri-tone chord changes during the progression and in the outro flurry

Lead Arpeggiation – for suggesting chords (single-note arpeggiation, then 2, 3 and finally 4 notes).

Musical Alliteration – the octave chromatic rise and fall coincides with the visual rise and fall of a thrown chest (you’ll have to take my word on that one, until next week that is!)

 

I was initially hesitant about using Ableton's arrangement window, but it has proved to be tolerable and not overly complicated (the zooming/navigation functions are still a bit maddening!). I have persisted, and this week I also made strong headway through my foley elements. I re-approached my usage of my 3 designated foley tracks (consisting of 2 noise channels and one pulse (square) wave): instead of only 3-channel layered sounds, I'm using each channel for particular sounds (currently: whooshes, impacts/explosions and lasers!) to allow for overlapping in the arrangement/timeline. Whilst making my way down from the iceberg's tip, it's fascinating to see how many different variations are possible just from the use of the warp function and transpose pot. I've been able to shortcut my way through making numerous similar but differing sounds, which has sped up the arrangement process immensely! (A track of foley sound FX will be made available shortly.)

In this final project week I aim to finish the foley tracks ASAP and then get cracking on vocoding. Currently I am feeling confident and on track for doing so!

Woot!