• Adi Keltsh

Updated: Jun 11

Hello! Today I will be talking a bit about the first draft of my most recent sound re-design. It has been quite a challenge due to the complex elements I needed to create and design. The entire re-design was done in Reaper. I have used the program quite often before for audio processing and mixing music, so I was blown away by how stable it remained (140+ tracks) and how easy it was to work with a media source.

In terms of ambience, I took another trip to the forest near my house and recorded some A-format Ambisonic tracks that I was able to downmix to stereo using the SoundField plugin by RØDE. As a sound designer, nothing excites me more than trying to get movements and specifics to sound as organic as possible. For both human characters I used a series of belts in addition to some leather coats, although I did add an extra fur texture for Aloy (as well as some other specifics). For Aloy's impact sounds, I used my trusty bag of potatoes layered with some kickboxing impacts I had previously recorded. Here in the Netherlands, we have trash/recycling points that are basically huge, very deep metal boxes set into the ground. As the sound is somewhat dampened by this, I chose to record some impacts by slamming the doors of the container (I got some funny looks). As I want to keep this blog to the point, I will now move on to the hardest part of the re-design.

The most challenging aspect of this re-design was creating the voice for the Watcher (the machine in the video). My approach was to listen to the original audio around ten to twenty times and note down the elements I heard, along with my initial thoughts on how I would go about designing the sound. I also looked online and listened to everything I could find about the development of the audio for the game (I have attached some links below). Once I had done this, I was on my own, and whatever I created would be my interpretation. Of course, I am fully aware it will never come close to the amazing work of Pinar Temiz, but it was a great learning experience and motivation to continue learning and researching new sound design techniques.

While designing voices for my previous projects, I would frequently rely on Zynaptiq Morph to interweave various audio layers. I set myself a goal not to use this plugin for this re-design and to use other techniques instead. After a pleasant talk with Bjørn Jacobsen (Cujo Sound), I learned that I could use custom impulse responses (IRs) to process audio and give it various textures. So, I used this method for gluing the various organic and synthetic elements together. I recorded several custom IRs using the setup shown below!

I would play a sweep (it can be fairly short, as you are not after a long reverb tail!) through a tiny Bluetooth speaker and record it with a microphone. Normally, when capturing acoustic information like this, I would refrain from using Bluetooth speakers, as the signal is compressed, and I would use a better speaker in general; but for the purpose of this exercise it could all be considered artistic choice, I suppose (it's also all I had). Using Voxengo Deconvolver (you can also use Altiverb), you can deconvolve your recorded audio file to create an IR that can then be used with a convolution reverb VST. I am pleased with the result I got and look forward to using this technique in the future.
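If you are curious what a tool like Voxengo Deconvolver is doing under the hood, here is a minimal sketch in Python using the exponential-sweep (Farina) method: you generate the sweep, play and record it in the space, then convolve the recording with the sweep's inverse filter to recover the IR. This is an illustrative sketch, not the plugin's actual implementation; here the dry sweep stands in for your microphone recording just to show the pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def exp_sweep(f0, f1, duration, sr):
    """Exponential sine sweep plus its inverse filter (Farina method)."""
    t = np.arange(int(duration * sr)) / sr
    R = np.log(f1 / f0)
    sweep = np.sin(2 * np.pi * f0 * duration / R * (np.exp(t * R / duration) - 1))
    # Inverse filter: time-reversed sweep with exponential amplitude correction
    inv = sweep[::-1] * np.exp(-t * R / duration)
    return sweep, inv

def deconvolve_ir(recording, inv_filter):
    """Convolve the room recording with the inverse filter to recover the IR."""
    ir = fftconvolve(recording, inv_filter, mode="full")
    return ir / np.max(np.abs(ir))  # peak-normalize

sr = 48000
sweep, inv = exp_sweep(20.0, 20000.0, 2.0, sr)
# 'sweep' here stands in for your mic capture of the sweep played in the space
ir = deconvolve_ir(sweep, inv)
```

In practice you would load your recorded WAV in place of the dry sweep; the recovered IR then drops straight into any convolution reverb.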

As I had been planning this re-design for a while, I managed to get a hold of some contact and coil microphones. I am incredibly pleased with them and I recommend them if you are hoping to expand your arsenal and are on a tight budget. The ones I purchased can be found here:

I mainly used these microphones for designing the movement, additional SFX and voice of the Watcher within the re-design. I went around the house and started placing the coil microphones on everything connected to the mains. I got an interesting sound when I powered up the coffee machine, and it ended up being the sound for the Watcher's red eye. Play the video below to hear the raw audio!

I used a variety of winged-animal voices to create the organic aspect of the Watcher's voice (vultures, parrots, owls, turkeys, emu throat noises, cranes and crows). Probably too many layers, but what's important is that I had fun messing around with all these different sounds. I further layered these with several iterations of a Serum preset I had designed, which used the various vowel oscillators to give the Watcher some synthetic vocalizations. The custom IRs were used across elements to glue them together. At the moment, I am pleased with the result but am fully aware there are improvements to be made! For the movement of the Watcher, I used a combination of the coil and contact microphones on a printer. I also recorded some drills and sanders. As I did not have much experience using contact microphones (or recording printers), it took some time before I captured audio I was happy with. Here is a photo of me attacking the printer while exploring the scanner area:
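The "glue" step above is just convolution: running each layer through the same captured IR imprints a shared acoustic fingerprint on all of them. A minimal sketch of applying an IR with a wet/dry mix (the noise and toy IR here are stand-ins for a real vocal layer and a real captured IR):

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_ir(dry, ir, wet_mix=1.0):
    """Convolve a dry signal with an impulse response and blend wet/dry."""
    wet = fftconvolve(dry, ir)[: len(dry)]  # truncate the tail to keep length
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet / peak  # simple peak normalization before mixing
    return (1.0 - wet_mix) * dry + wet_mix * wet

sr = 48000
dry = np.random.randn(sr)  # stand-in for one second of a vocal layer
# Toy IR: decaying noise burst standing in for a real captured response
ir = np.exp(-np.linspace(0.0, 8.0, sr // 4)) * np.random.randn(sr // 4)
processed = apply_ir(dry, ir, wet_mix=0.5)
```

Running every organic and synthetic layer through the same `apply_ir` call is what makes them sit together as one source.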

I could easily write ten more pages about the work that went into designing these twenty seconds of audio, although I think I've at least touched on the more interesting/technical aspects. I hope you learned something, and feel free to contact me if you want to share your experience(s) capturing and using your own IRs for sound design.

Here are some of the sources I explored while working on this re-design: GDC Slides by Pinar:

GDC talk by Anton:

Cujo Sound: Marshall Mcgee:

Some other interesting articles:

  • Adi Keltsh

Updated: May 3

Update One:

I have recently started exploring collisions in Unity, and naturally wanted to simulate how combined game objects could be detached from one another, and how I could use audio to enhance this effect. Similar mechanics can be experienced in games such as Mordhau (released in 2019). I began by constructing a body and attaching joints that could be broken. Once a joint breaks, a particle effect simulates blood and an audio event is triggered. Additionally, once a body part collides with the ground, another audio event is triggered to play a "splat" sound. I will continue to research and investigate this topic to improve the simulation and add additional mechanics. My overall aim is to allow the player to interact with and dismember the ragdoll. I will also implement binaural audio (of course, because why not, it is incredible) and give the ragdoll a voice that is occluded when its mouth is colliding with the ground. Although this simulation is quite NSFW, I feel that by trying to replicate mechanics from other games, I will improve my scripting skills and my understanding of game engine mechanics beyond audio. Here is a clip of my work so far! Obviously, a lot of work is still to be done, but it is a start.
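The triggering logic is simple threshold checks. The actual project is Unity C# (joints expose a break force, and collision callbacks report impact velocity), but the idea can be sketched in Python; the names and threshold values here are hypothetical, not taken from my scripts:

```python
# Conceptual sketch of the event logic; BREAK_FORCE, SPLAT_SPEED and the
# event names are hypothetical stand-ins for the Unity/FMOD setup.
BREAK_FORCE = 250.0   # joint detaches above this force (assumed units)
SPLAT_SPEED = 2.0     # minimum ground-impact speed that triggers a "splat"

def on_joint_force(force, events):
    """Called each physics step with the force acting on a joint."""
    if force > BREAK_FORCE:
        events.append("detach")  # spawn blood particles + detach sound

def on_ground_collision(impact_speed, events):
    """Called when a body part touches the ground."""
    if impact_speed > SPLAT_SPEED:
        events.append("splat")   # play the "splat" audio event

events = []
on_joint_force(300.0, events)        # strong pull: joint breaks
on_ground_collision(3.5, events)     # fast landing: splat fires
# events is now ["detach", "splat"]
```

In Unity the same checks live in `OnJointBreak` and `OnCollisionEnter` callbacks rather than being polled like this.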

Update Two:

So... Resonance Audio has been implemented, I have some Ambisonic ambience to create the illusion of space, and I have added a voice. Moreover, once the ragdoll's head has detached from the body, the dialogue changes to something a bit more fitting. I also implemented some RTPCs (real-time parameter controls), so when the mouth is colliding with the floor, the dialogue is audibly occluded/obstructed. Here is the second update; things are getting weird! Although there is no blood, one could argue that this clip is NSFW.
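Under the hood, that kind of occlusion RTPC usually just drives a low-pass filter's cutoff: mouth against the floor means more occlusion, which means a lower cutoff and a muffled voice. A rough one-pole sketch (the 20 kHz to 500 Hz cutoff mapping is my own assumption, not the middleware's):

```python
import numpy as np

def occlusion_filter(signal, occlusion, sr=48000):
    """One-pole low-pass: occlusion 0.0 = open, 1.0 = fully muffled."""
    # Assumed mapping: cutoff slides from 20 kHz (open) down to 500 Hz (buried)
    cutoff = 20000.0 * (1.0 - occlusion) + 500.0 * occlusion
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)  # classic one-pole smoothing step
        out[i] = y
    return out

voice = np.random.randn(4800)                    # stand-in for 0.1 s of dialogue
muffled = occlusion_filter(voice, occlusion=0.9)  # mouth pressed to the floor
```

In FMOD or Wwise you would not write the filter yourself; you would map the collision state to a parameter that automates a built-in low-pass on the voice bus, which is what the demo does.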

Update Three:

I have had some time to continue working on this demo, and so have added a first-person controller, footstep (walk/sprint/jump/land) audio, the ability to pick up and throw objects, and the ability to highlight whatever object the player is looking at. When items are dragged or rolled across the terrain, an audio event is triggered. The player is also able to rotate objects across all axes while pressing Control, so that the camera rotation and object rotation do not interfere with one another. I also fixed the occlusion effect on the voice and attempted to make the particle system look more realistic. There is still much more to do, and I need to apply audio events, particle systems and other parameters to all the ragdoll body game objects, but I was eager to show an update, so here it is:

  • Adi Keltsh

So, I recently completed the FMOD and Unity Essentials course available at:! I was eager to add all the audio to the game and to use Resonance Audio once again to make everything binaural. I took this as a chance to showcase my current capabilities with scripting, Unity and FMOD. Some of the audio in my demo is mine, although, as I said, I wanted to focus on developing my scripting skills and learning FMOD's API in more depth. I've made a video (attached to this blog) where I chat about some of the features I implemented into the game to spice things up a little. I had a load of fun working on this project and even got to explore the woods and capture some great recordings that I used for the demo! My next demonstration will showcase Wwise, where I will be creating all the sound effects and implementing them into Unity. I am currently on a Unity roll, as I've also been involved in some game jams that use the engine, although I hope to upload some Unreal (with Wwise) demos in the near future! I hope you enjoy the chat and manage to learn something new. If you are looking for new courses, check out the Scott Game Sounds website, and make sure to join the Discord for loads of support and fun times. I even made a short lesson that's available on the course!

©2020 by Adi Keltsh.