Convolution Reverb Part 3: Fast Convolution

  Fourier transforms can be a big headache when learning about DSP, but if you can learn the practical details surrounding one type of Fourier transform, the Discrete Fourier Transform (DFT), and the efficient algorithm for computing it, the Fast Fourier Transform (FFT), you will find many important uses for them when designing audio programs. Fourier transforms have a reputation for being difficult beasts, and you will likely need to cross-reference many other articles and play around in MATLAB to deeply understand them -- this article is meant to tie together some of the more complex topics in the context of implementing our reverb.
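
As a taste of where this is headed: the heart of fast convolution is the convolution theorem -- convolving two signals in the time domain is equivalent to multiplying their spectra bin by bin. The sketch below illustrates the idea with a textbook recursive radix-2 FFT; the function names are mine, and a real-time reverb would use an optimized FFT library and a partitioned scheme rather than transforming whole signals at once.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

using Complex = std::complex<double>;

// In-place recursive radix-2 Cooley-Tukey FFT; `invert` selects the
// inverse transform. Assumes a.size() is a power of two.
void Fft(std::vector<Complex>& a, bool invert) {
    const std::size_t n = a.size();
    if (n == 1) return;

    std::vector<Complex> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    Fft(even, invert);
    Fft(odd, invert);

    const double ang = 2.0 * 3.141592653589793 * (invert ? -1.0 : 1.0) / n;
    for (std::size_t i = 0; i < n / 2; ++i) {
        const Complex w = std::polar(1.0, ang * static_cast<double>(i));
        a[i]         = even[i] + w * odd[i];
        a[i + n / 2] = even[i] - w * odd[i];
        if (invert) {            // dividing by 2 at every level totals 1/n
            a[i] /= 2.0;
            a[i + n / 2] /= 2.0;
        }
    }
}

// Linear convolution of x and h via the convolution theorem:
// FFT both, multiply bin by bin, inverse FFT. Assumes non-empty inputs.
std::vector<double> FastConvolve(const std::vector<double>& x,
                                 const std::vector<double>& h) {
    std::size_t n = 1;
    while (n < x.size() + h.size() - 1) n <<= 1;   // zero-pad: no wraparound

    std::vector<Complex> X(x.begin(), x.end());
    std::vector<Complex> H(h.begin(), h.end());
    X.resize(n);
    H.resize(n);

    Fft(X, false);
    Fft(H, false);
    for (std::size_t i = 0; i < n; ++i) X[i] *= H[i];
    Fft(X, true);

    std::vector<double> y(x.size() + h.size() - 1);
    for (std::size_t i = 0; i < y.size(); ++i) y[i] = X[i].real();
    return y;
}
```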

Convolution Reverb Part 1: Overview

    This series is meant to outline some important digital signal processing topics as well as showcase one possible implementation of a real-time convolution reverb. We will discuss digital systems, FIR filtering, FFT algorithms, and how to perform convolution in C++ efficiently and without significant input/output latency. But before that, it is worth describing convolution reverb itself.

C++ Audio Engine Part 4: Hashing Objects

    All the articles in this series so far have been about components of software architecture and design; this one is a little more nitty-gritty -- I want to talk about data hashing, a technique that is useful in a variety of contexts.

    A few examples:

  • Allow users to enter strings on a high-level API, but represent them internally with their associated hashes
  • Hash an entire loaded resource like an audio file while loading, to make sure you have not already loaded the same file under another name
  • Hash metadata about an audio file's format to quickly find a Voice Factory that outputs the correct type of voices

    Unsigned ints make great keys, and a hashing algorithm such as MD5 (a cryptographic hash function, not encryption) keeps your hashes consistent across runs and effectively unique -- collisions are astronomically unlikely in practice. The algorithm in that link can be distilled into a single function, and whenever you need to hash data, just include your Hash Function.
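
To make the single-function idea concrete, here is a minimal sketch using 32-bit FNV-1a, a simpler non-cryptographic hash, in place of a distilled MD5 -- the function name and usage are illustrative, not the code from the link:

```cpp
#include <cstdint>
#include <string_view>

// Illustrative single-function hash in the spirit described above, using
// 32-bit FNV-1a rather than MD5 for brevity; the same "one self-contained
// Hash Function" pattern applies to a distilled MD5.
constexpr std::uint32_t HashData(std::string_view data) {
    std::uint32_t hash = 2166136261u;        // FNV offset basis
    for (unsigned char byte : data) {
        hash ^= byte;                        // xor in each byte...
        hash *= 16777619u;                   // ...then multiply by the FNV prime
    }
    return hash;
}

// Usage: map a user-facing event name to its internal key.
// const std::uint32_t key = HashData("Play_Footstep_Grass");
```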

C++ Audio Engine Part 3: Timeline-Driven Architecture

Since most of the audio signals we are dealing with in our engine are time-based (save for when we want to process them in the frequency domain), it makes sense for the functionality of our engine to be built on a linear, reliable, event-based timeline. This not only allows us to coordinate the timestamps of audio commands in scripts, but also to queue up any other internal events we want, such as commands sent from sound calls on the game thread.

BASIC TIMELINE

    I set up my timeline using a command pattern, a time library for a time reference, and an STL multimap of time values to AudioCommands. When the Timeline is started, the initial time is grabbed. Then a Process method is called once per frame in the main audio loop (via an Update method in the SoundManager). On each loop, the Timeline:

  • Grabs the current time
  • Checks it against the entries in the multimap
  • Fires off and removes every AudioCommand mapped to a time value less than the current time

The registration comes in the form of an AudioCommand* and a relative time value in milliseconds; the offset from the absolute start time should be hidden in the registration method. This basic timeline works well for most tasks, but you can squeeze out more advanced functionality by baking it into surrounding systems.
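
Here is a minimal sketch of that loop, assuming std::chrono as the time library; the class and method names are illustrative rather than my engine's exact API:

```cpp
#include <chrono>
#include <map>

// Command-pattern interface for anything the timeline can fire.
struct AudioCommand {
    virtual ~AudioCommand() = default;
    virtual void Execute() = 0;
};

class Timeline {
    using Clock = std::chrono::steady_clock;
public:
    void Start() { m_start = Clock::now(); }

    // Takes a relative time in ms; the offset from the absolute
    // start time is hidden inside the registration method.
    void Register(AudioCommand* cmd, double delayMs) {
        m_commands.emplace(ElapsedMs() + delayMs, cmd);
    }

    // Called once per frame from the main audio loop.
    void Process() {
        const double now = ElapsedMs();
        auto it = m_commands.begin();
        while (it != m_commands.end() && it->first <= now) {
            it->second->Execute();
            it = m_commands.erase(it);   // fire and remove
        }
    }

private:
    double ElapsedMs() const {
        return std::chrono::duration<double, std::milli>(
            Clock::now() - m_start).count();
    }

    Clock::time_point m_start;
    std::multimap<double, AudioCommand*> m_commands;  // time -> command
};
```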


Sample code and information about tweening after the jump!

Unity Scripting: Physics Parameters and Decibels

This video is a little old at this point, but I wanted to showcase a little Unity scripting demo I did about a year ago. It shows the use of snapshots in Unity's AudioMixer system, triggered by raycast calculations between the player and the ground. There is a series of rolling-sound snapshots for this little Fisher-Price egg guy (one of which is silence, delineated by little vocalizations as he bounces), while the dB level of the master fader is controlled by a normalized ratio of current velocity to maximum velocity. Though this game was pretty heavily reworked shortly after, and this rides the line between sound design and audio programming, I felt the demo was worth sharing!
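
For reference, one common way to map a normalized 0..1 ratio like that onto a fader level in decibels is a log taper -- a sketch of the general idea, not necessarily the exact curve this demo used:

```cpp
#include <algorithm>
#include <cmath>

// Maps a normalized ratio (e.g. current velocity / maximum velocity)
// to a dB fader level: 1.0 -> 0 dB, 0.0001 -> -80 dB (treated as silence).
float RatioToDecibels(float ratio) {
    const float clamped = std::clamp(ratio, 0.0001f, 1.0f);  // avoid log(0)
    return 20.0f * std::log10(clamped);
}
```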

(Video after the jump)

C++ Audio Engine Part 2: Factories and Pools

In any large-scale system, especially one driven by a variety of commands and resources, memory allocations can become serious performance bottlenecks as well as memory-management liabilities. One effective way to mitigate these issues is to replace your raw allocations with factories that pool your resources for you. You can potentially avoid hundreds of allocations when creating timeline commands, playlists, and voices by only making a new allocation when your pool of inactive objects is empty.
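
As a rough illustration, here is one minimal shape such a pooling factory can take -- a sketch assuming a default-constructible pooled type with a Reset() hook, not my engine's exact implementation:

```cpp
#include <memory>
#include <vector>

// Minimal pooling factory: recycles inactive objects and only allocates
// when the pool is empty. Assumes T is default-constructible and
// exposes a Reset() method.
template <typename T>
class Pool {
public:
    // Hands out a recycled object when one is available; allocates
    // a fresh one only when the inactive pool is empty.
    std::unique_ptr<T> Acquire() {
        if (m_inactive.empty())
            return std::make_unique<T>();
        std::unique_ptr<T> obj = std::move(m_inactive.back());
        m_inactive.pop_back();
        return obj;
    }

    // Resets the object so stale state never leaks into its next use,
    // then parks it for recycling.
    void Release(std::unique_ptr<T> obj) {
        obj->Reset();
        m_inactive.push_back(std::move(obj));
    }

private:
    std::vector<std::unique_ptr<T>> m_inactive;
};
```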


    The nice thing about this approach is that it can become almost formulaic, with only situational tweaks where you need to properly reset and reconfigure recycled resources. Read more to see sample code and more info.

C++ Audio Engine Part 1: Multithreaded Architecture

This is the first in a short series of posts about designing a C++ audio engine, and I want to discuss overall architecture as well as the mechanics of multithreading in a real-time environment (especially one where you could potentially blow out someone's speakers!).

There is a lot of gunk that goes on internally to make a game programmer feel like a superhero, and most of the engine's workload clearly falls on the audio thread. Communication between the threads is an important issue that should be addressed early, since it affects the overall layout of your engine and lets you add more expensive calculations down the road, such as convolution and FFTs, without worrying about lowering the FPS of the game.
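
One common shape for that communication is a single-producer/single-consumer queue, so the game thread can post commands without ever blocking the audio callback. The sketch below is a generic illustration of that idea, with illustrative names, not my engine's exact code:

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Lock-free single-producer/single-consumer ring buffer for passing
// messages from the game thread to the audio thread. Message and
// Capacity are placeholders; Capacity must be at least 2.
template <typename Message, std::size_t Capacity>
class SpscQueue {
public:
    // Called from the game thread only.
    bool Push(const Message& msg) {
        const std::size_t head = m_head.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % Capacity;
        if (next == m_tail.load(std::memory_order_acquire))
            return false;  // full -- caller can retry next frame
        m_buffer[head] = msg;
        m_head.store(next, std::memory_order_release);
        return true;
    }

    // Called from the audio thread only; never blocks.
    std::optional<Message> Pop() {
        const std::size_t tail = m_tail.load(std::memory_order_relaxed);
        if (tail == m_head.load(std::memory_order_acquire))
            return std::nullopt;  // empty
        Message msg = m_buffer[tail];
        m_tail.store((tail + 1) % Capacity, std::memory_order_release);
        return msg;
    }

private:
    std::array<Message, Capacity> m_buffer{};
    std::atomic<std::size_t> m_head{0};  // next write slot
    std::atomic<std::size_t> m_tail{0};  // next read slot
};
```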


Read more to see diagrams, sample code, and architecture for a multithreaded audio engine.