Auburn Sounds
  December 14, 2016 — We are in Computer Music!

We are in Computer Music!

We're happy to report Panagement CM has been shipped with Computer Music Magazine #238!

Computer Music Issue 238

It feels great to be introduced to this readership. Thanks a lot to the good people at Computer Music who made this possible!

What is Panagement CM?

Panagement CM is like Panagement EE but without the LFO and the mini-game. In other words it's an intermediate between the Free Edition and the Enterprise Edition.

It happens that Computer Music explains the various features of Panagement better than we did. Maybe you'll want to watch their video:

Check out the Panagement CM video here.

  November 10, 2016 — Running D without its runtime

Running D without its runtime

In this article, we'll disable the D programming language runtime. Expected audience: software developers interested in D. Reading time = 8 min.

Our products Panagement and Graillon now run with the D language runtime disabled. This post is both a post-mortem and tutorial on how to live without the D runtime.

1. What does the D language runtime do?

Upon entry into a normal D program, druntime is initialized.

What does druntime do?

  • Running global constructors,
  • Allocating space for thread-local variables (TLS),
  • Enabling the Garbage Collector (GC) to function properly.

A lot of the runtime machinery exists for the sole use of the GC. druntime maintains a list of registered threads, whose stack segments have to be scanned by the GC, in case they hold pointers to managed memory blocks.

2. The D runtime is optional

As a systems language, D is able to operate without its runtime. A program without druntime will instead rely only on the C runtime. This doesn't come without effort.

There are two different solutions here:

  • "Runtime-less": Not linking with the runtime. This has the benefit of turning every runtime use into a linking error. Some language features depend on data structures provided by the runtime source code. Hence the need to rewrite a minimal druntime. This is involved and more fit for writing an OS than for making consumer software.

  • "Runtime-free": Linking with the runtime, and then not enabling it. Runtime.initialize() is just not called. GC allocations, global constructors/destructors, and thread-local variables have to be avoided. And that's about it.

We went with "runtime-free" because it's easier.
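To make the idea concrete, here is a minimal sketch of what "runtime-free" means in code. The entry point name is hypothetical; the point is simply that Runtime.initialize() is never called:

```d
// Sketch: druntime is linked in, but deliberately never started.
// Everything reachable from this entry point must therefore avoid
// the GC, thread-local variables and global constructors.
extern(C) export int myPluginEntryPoint()
{
    // Note the absence of Runtime.initialize() / Runtime.terminate().
    return 0;
}
```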

3. Why did we disable druntime for audio plugins?

This was more of a logical next step than an absolute necessity.


Our reasons:

  • Supporting the unloading of D shared libraries on macOS with existing D compilers. This was previously working with a hack, but that hack broke with macOS Sierra.

  • Avoiding constant registration and unregistration of threads. When called from an audio host, the only clean way is to register the incoming thread, and unregister it as it goes. "This causes some overhead", we thought.

  • Most of our code was already avoiding the GC and TLS.

  • We were expecting performance improvements.

The drawbacks:

  • Disabling the runtime includes (but is not limited to) disabling the GC.

  • Parts of the standard library become unusable.

  • Most of the library ecosystem becomes unusable.

So we set ourselves the task of removing all outstanding GC allocations. Fortunately, D has an attribute called @nogc.

4. Going fully @nogc

@nogc ensures zero GC allocation in functions it annotates. @nogc is essential to reach runtime freedom.
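For instance, a @nogc function may use the C heap freely, while any GC allocation inside it is rejected at compile time. A minimal sketch:

```d
import core.stdc.stdlib : malloc, free;

// Fine inside @nogc: C heap allocation doesn't involve the GC.
float* makeBuffer(int n) @nogc nothrow
{
    return cast(float*) malloc(n * float.sizeof);
    // By contrast, `new float[n]` here would be a compile-time error.
}
```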

Step 1. Assuming @nogc with casting

Ironically, one of the first things we need is an escape hatch from nothrow @nogc (these attributes often travel together in practice, so let's group them).

import std.traits;

/// Assumes a function to be nothrow and @nogc
auto assumeNothrowNoGC(T)(T t) if (isFunctionPointer!T || isDelegate!T)
{
    enum attrs = functionAttributes!T
               | FunctionAttribute.nogc
               | FunctionAttribute.nothrow_;
    return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) t;
}

This breaks the type system and lets one conveniently shoot oneself in the foot. So why do this?

The reason is: Object.~this(), base destructor of every class object, is virtual and neither nothrow nor @nogc. Because of covariance, every class object's destructor is assumed to throw exceptions and use the GC by default.

So assumeNothrowNoGC!T is an important building block to call carefully chosen class destructors in nothrow @nogc contexts. Let's see how.

Step 2. An optimistic object.destroy()-like

We can use our brand new foot-shooting device in the following way:

// For classes
void destroyNoGC(T)(T x) nothrow @nogc
    if (is(T == class) || is(T == interface))
{
    assumeNothrowNoGC(
        (T x)
        {
            return destroy(x);
        })(x);
}

// For structs
void destroyNoGC(T)(ref T obj) nothrow @nogc
    if (is(T == struct))
{
    assumeNothrowNoGC(
        (ref T x)
        {
            return destroy(x);
        })(obj);
}

For our purpose, object.destroy() has the fatal flaw of not being nothrow @nogc (because it may call Object.~this()). assumeNothrowNoGC makes object.destroy() usable from nothrow @nogc code.

Step 3. Objects on the malloc heap

Going forward with our trail of casts and unsafety, let's make a template function like C++'s new:

import core.stdc.stdlib : malloc;
import core.exception : onOutOfMemoryError;
import std.conv : emplace;

/// Allocates and constructs a class or struct object.
/// Returns: Newly allocated object.
auto mallocEmplace(T, Args...)(Args args)
{
    static if (is(T == class))
        immutable size_t allocSize = __traits(classInstanceSize, T);
    else
        immutable size_t allocSize = T.sizeof;

    void* rawMemory = malloc(allocSize);
    if (!rawMemory)
        onOutOfMemoryError();

    static if (is(T == class))
    {
        T obj = emplace!T(rawMemory[0 .. allocSize], args);
    }
    else
    {
        T* obj = cast(T*)rawMemory;
        emplace!T(obj, args);
    }

    return obj;
}

Then a function like C++'s delete:

import core.stdc.stdlib : free;

/// Destroys and frees an object created with `mallocEmplace`.
void destroyFree(T)(T p) if (is(T == class) || is(T == interface))
{
    if (p !is null)
    {
        static if (is(T == class))
        {
            destroyNoGC(p);
            free(cast(void*)p);
        }
        else
        {
            // A bit different with interfaces,
            // because they don't point to the object itself
            void* here = cast(void*)(cast(Object)p);
            destroyNoGC(p);
            free(here);
        }
    }
}

/// Destroys and frees a non-class object created with `mallocEmplace`.
void destroyFree(T)(T* p) if (!is(T == class) && !is(T == interface))
{
    if (p !is null)
    {
        destroyNoGC(*p);
        free(cast(void*)p);
    }
}

(Note: If the GC were enabled, one would maintain GC roots for the allocated memory chunk.)

It turns out this mallocEmplace / destroyFree duo is an adequate replacement for class objects allocated with new, including exceptions.
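Usage then looks like the following sketch (Biquad is a made-up struct, and the snippet assumes the mallocEmplace / destroyFree definitions above):

```d
struct Biquad
{
    float a0, a1;
}

// Construct on the malloc heap instead of the GC heap...
Biquad* bq = mallocEmplace!Biquad(1.0f, 0.5f);
assert(bq.a0 == 1.0f);

// ...and release it deterministically.
bq.destroyFree();
```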

Step 4. Throwing exceptions despite @nogc

How do we throw and catch exceptions in @nogc code?

One can construct an Exception with manual memory management and throw it:

// Instead of:
//    throw new Exception("Message")
throw mallocEmplace!Exception("Message");

At the call site, such manual exceptions would have to be released when caught:

try
{
    // ... code that may throw a manual exception ...
    return true;
}
catch(Exception e)
{
    e.destroyFree(); // release e manually
    return false;
}

Using exceptions in @nogc code can be easy, provided both caller and callee agree on manual exceptions.
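Putting it together, a caller and callee might cooperate like this (a sketch building on the mallocEmplace and destroyFree definitions above; the function names are made up):

```d
// Callee: throws a manually allocated exception; @nogc but not nothrow.
void validate(int x) @nogc
{
    if (x < 0)
        throw mallocEmplace!Exception("negative input");
}

// Caller: catches and releases the exception, and can thus be nothrow @nogc.
bool isValid(int x) nothrow @nogc
{
    try
    {
        validate(x);
        return true;
    }
    catch(Exception e)
    {
        e.destroyFree(); // release e manually
        return false;
    }
}
```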

5. Results

Long story short, we brute-forced our way into having fully @nogc programs, with the runtime left uninitialized.

As expected this fixed macOS Sierra compatibility. As a bonus Panagement and Graillon are now using 2x less memory, which is nice, but hardly life-changing.

We found no magical speed enhancement. Speed-wise nothing changed. Not registering threads in callbacks did not bring any meaningful gain. GC pauses were already never happening so disabling the GC did not help.

In conclusion, it is still our opinion that outside of niche requirements, there aren't enough reasons to depart from the D runtime and its GC.

Learn more about Dplug, our audio plugin framework...

  November 7, 2016 — Panagement and Graillon 1.2 release

What's new in this 1.2 free update?

This version is a maintenance update with bug fixes only. Thanks to everyone who reported the crash in Sierra and helped test the beta version!

If you are a customer, you can find the new downloads through the emails sent by Gumroad.

Bringing macOS Sierra compatibility

Panagement and Graillon 1.2 will now load correctly when instantiated on macOS Sierra. We are deeply sorry that our plug-ins were among the few that wouldn't work in the new Apple OS. The reason is highly intricate and will be discussed in a future blog post, which is why the fix took so long.

50% less RAM used

Both Panagement and Graillon will now use 50% less memory than in 1.1. This is actually a byproduct of the Sierra fix and wasn't intended.

Find the new free downloads here for Panagement and Graillon.

  September 16, 2016 — PBR for Audio Software Interfaces

PBR for Audio Software Interfaces

This article describes our rendering system. Reading time = 5 min.

The User Interface (UI) of the latest Auburn Sounds audio plug-ins is fully rendered. This rendering is heavily inspired by Physically Based Rendering (PBR), as used in today's video games.

1. Why PBR?


Quite unsurprisingly, audio plug-ins are primarily about audio processing.

Yet all things being equal, it is still valuable to have a good-looking user interface. More press coverage, users liking the sound more, users sharing more on social networks: the benefits are seemingly endless.

What PBR does is take average graphics as input and give back more aesthetically pleasing images, in a systematic way.

2. Other approaches

Audio plug-in UIs are expected to be — much like video games — pretty, unique and identical across platforms. How is it usually done?

Option 1: Pre-rendering widget states

A common way to render widgets in plug-in UIs is to use pre-rendered widgets in every possible state. For example, this was a potentiometer knob texture used for one of our former plug-ins:


Here the widget graphics need to be stored in memory and on disk 100 times. For this very reason, plug-in installations are often found over 100MB in size.

The primary goal of using PBR was to reduce installation size. With PBR, widgets can use an order-of-magnitude less memory and disk space, because only the current state gets rendered.

While users rarely complain about large binaries, beta-testing, hosting and downloading all get easier with small file sizes.

Option 2: OpenGL

OpenGL logo

An alternative to pre-rendering widget states is to redraw everything with an accelerated graphics API like OpenGL. This technology enables the largest real-time updates on screen and potentially the nicest graphics.

However, OpenGL exposes developers to graphics drivers bugs. The “bug surface” of applications becomes a lot larger, while some users are inevitably left behind because of inadequate drivers.

3. Input channels

In order to begin compositing, our renderer requires 9 channel inputs to be filled with pixel data:

  • Depth
  • Base Color (red, green and blue)
  • Physical
  • Roughness
  • Metalness
  • Specular
  • Emissive

Panagement will be used throughout the rest of the article as an example.


Depth map

The Depth channel describes the elevation: the whiter, the higher. Originally Depth was stored in an 8-bit buffer, but this caused quantization issues with normals. It is now stored in a 16-bit buffer.

Editing Depth is akin to adding or removing matter.

Base Color

Base color

Arguably this input requires the most work. The Base Color of the material is also known as “albedo”. This is the appropriate channel for painting labels and markers.

The two darker rectangles are the pre-computed areas. They are manual copies of the same interface parts at a later stage of rendering, fed back into the inputs to gain speed.

Editing Base Color is akin to adding paint.


Physical map

The Physical channel signals those pre-computed areas. While rendering, they are blitted into the final color buffer with no lighting computation. This saves a lot of processing in the case of continuously updated widgets, where 60 FPS is desirable.


Roughness map

The Roughness channel separates rough and soft materials: the whiter, the softer.

Increasing Roughness from left to right.


Metalness map

The Metalness channel separates metallic from dielectric materials: the whiter, the more metallic.

Increasing Metalness from left to right.


Specular map

The Specular channel tells whether the material is shiny: the whiter, the shinier.

You may notice that there is no black in this channel. Rendering practitioners have noticed that Everything is Shiny.

Increasing Specular from left to right.


Emissive map

The Emissive channel identifies the areas that emit light by themselves. As a simplification, the emitted light takes the Base Color as its own color.


Skybox

A skybox is used to fake environment reflections. It isn't mandatory to take an actual sky picture for this purpose, but that's the case in our example.

All the aforementioned 9 channels are mipmapped for fast access during lighting computations, and organized into a set of 3 different textures (which is helpful for interpolation and look-ups).

4. Lighting computations in our PBR renderer

We'll now describe the 8 steps in which the final color is computed, for each pixel.

Step 1: Getting normals from Depth


First a buffer of normals is computed using a filtered version of Depth.
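As an illustration, normals can be derived from the height field with central differences. This is a sketch, not the exact filter our renderer uses; `scale` is an assumed vertical exaggeration factor:

```d
import std.math : sqrt;

/// Normal at (x, y) of a height field, via central differences.
float[3] normalFromDepth(const float[][] depth, int x, int y, float scale)
{
    // Horizontal and vertical slopes around (x, y).
    float dx = depth[y][x + 1] - depth[y][x - 1];
    float dy = depth[y + 1][x] - depth[y - 1][x];

    // Unnormalized normal, then normalize it.
    float[3] n = [-dx * scale, -dy * scale, 1.0f];
    float len = sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    n[0] /= len; n[1] /= len; n[2] /= len;
    return n;
}
```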

Step 2: Ambient component

Ambient light

Light contributions start with a weak, almost black ambient component.

Note that the pre-computed areas do not partake in this summing, being already shaded.

Step 3: First light

First light

A hard light is cast to make short-scale shadows appear across the entire UI.

Step 4: Second light

Second light

Then another light is cast, which is more of a diffuse one.

Step 5: Adding specular highlights

Specular lighting

At this moment, specular highlights are added from a third virtual light source, which is only specular.

Step 6: Adding skybox reflections


Skybox reflections get added, which differentiates the metallic materials from the others.

Step 7: Adding light emitted by neighbours

Neighbour blur

With mipmapping we can add the light contributions of neighbouring pixels efficiently. At this stage we can also copy pixels back into the inputs, as another pre-computed area.

The balance is still unsatisfactory because the available color gamut isn't completely used. We need one last step of color correction.

Step 8: Correcting colors

Color correction

Colors are finally corrected with interactively selected Lift-Gamma-Gain-Contrast curves. This step feels a bit like Audio Mastering in that checking with different users and trying on different screens is helpful.

This is the final output users see.

How fast is this?

This renderer has been optimized thoroughly. Widget drawing, mipmapping and compositing were all made parallel to leverage multiple cores.

The first opening of Panagement takes about 160 ms and subsequent updates are under 16 ms. When a widget is touched, only an area around it is composited again.

5. Conclusion

PBR comes with natural perks like small file sizes and global lighting control.

It is also complex, with a lot of parameters to tune. While the UI becomes flexible, creating it gets more work-intensive.

This renderer is available in the Dplug audio plug-in framework.

  September 8, 2016 — Panagement and Graillon 1.1 release

What's new in this 1.1 free update?

We fixed all the bugs we know of in Panagement and Graillon. Thanks to everyone reporting bugs and testing pre-release versions!

If you are a customer, you can find the new downloads through the original purchase email sent by Gumroad.

No more crashes

All crashes should be gone in Ableton Live, Apple Logic, Steinberg Cubase, Steinberg Nuendo, Adobe Audition, Audacity, and Digital Performer. If you still experience a crash within your DAW, please report it at [email protected].

Note that to fix the crashes, upgrading is not enough: the host program must also be restarted.

Fix pass-through of key presses to the host

Unhandled key presses are now correctly forwarded to the host. You can use the SPACE bar in our plugins like with every other plugin.

No more stuck LFO in Panagement

The Panagement LFO is now always moving, even when the host sequencer is stopped. Thanks go to David Mondrup for testing intermediate versions for us.

Also, the LFO wasn't working in Audio Unit. Fixed now!

Snappier UI

The UI of both plugins will load and show faster.

Memory savings

Both Panagement and Graillon now use about 30% less memory. You probably don't care about this, but this means you can stack more plugins in 32-bit sequencers.

Graillon becomes Freemium

Graillon now comes with a Free Edition which gets 2 out of 4 shift faders. It is also available in Audio Unit like Panagement.

Haven't tested Graillon on your voice yet? Try it here.

  August 22, 2016 — Why AAX is not supported right now

Why don't you support Pro Tools?

As of 2016, the AAX plugin format is not supported in Auburn Sounds plugins. Therefore, you can't use them in Pro Tools without a wrapper. The reasons are simple: time and cost.


With the Panagement launch, Audio Unit support, and the race towards a 1.1 release that would fix most remaining bugs, there is simply no time to support another plugin format right now. Especially with Auburn Sounds being one person. I'm guessing adding AAX support would be three man-months of work, which is half a plugin.

Why doesn't Auburn Sounds rely on a ready-made framework with AAX built-in, then? There are pros and cons, of course. The problem with this strategy appears when a difficult bug occurs: you realize you rely on a large amount of code you don't really understand. Things do not look so good at that point.

Making our own framework allows us to understand completely what's going on. In our case, this enables the particular look of the plugins and the small size of the binaries. This is a reward in differentiation and control, one that many audio companies have sought and paid for.

N plugins x M formats

When you have only two products people can buy, it isn't economically sound to add new formats before extending the product line. It makes more sense to create a new sellable artifact that will most probably add 50% to sales, than to add a new format that would maybe add 30% tops.

There is also something specific to AAX: it mandates PACE signing, and PACE signing has a yearly license cost. As a plugin business, one must wait until sales pass some threshold to be able to afford AAX development.

This all compounds to make implementing AAX a costly choice to make for this particular software shop.

  June 22, 2016 — Introducing Panagement

We are very proud to introduce Panagement!

Panagement is a spatialization multi-effect that is at its core a binaural panner combined with a distance panner. Then we went to the next level of obsessiveness and added features to make it an easy, inspiring and pristine-sounding stereo processor.


The inception

Binaural panning was already a topic at Auburn Sounds (formerly GFM), with Psypan 4 years ago. Psypan got a tiny underground following and users sent us emails asking for more. We started paying attention.

So we enhanced the basics: the inter-aural time delay (ITD) and power difference are better tuned. We added lower-latency delay interpolation. Finally, we added inter-aural spectral differences, which enhance realism and externalization further. The result is that Panagement blows Psypan completely out of the water, on its own turf.

And that's before you turn on the new stuff.

Going further

We thought it would be useful to add a Tilt Filter, Mono-to-Stereo, and most of all a Distance fader that would put things further away. With all the talk about "Depth" lately, what could create depth more directly, and in one click?

Our precious beta-testers then asked for an LFO. That turned out to be a very good idea. Panagement in its current form was born.

While we went a bit mad with the features, the hope is that it all adds up nicely and so far the reviews are very positive.

Available in Audio Unit

We've implemented Audio Unit support in our new framework.

In addition to VST 2.4, Panagement works as a 32-bit and 64-bit Audio Unit, in both Cocoa and Carbon hosts.

Panagement can be used for free with the Free Edition, or with all features with the Enterprise Edition. Try Panagement now!

  February 8, 2016 — Making a Windows VST plugin with D

Making a Windows VST plugin with D

In this tutorial, you'll learn how to make a VST plugin without UI using the D programming language, for Windows. The Mac OS X version would bring a bit more complexity and will be investigated in another blog post.

Introducing the dplug library

dplug is a library that wraps plugin formats and manages the UI if needed. Its source code is available on Github.

dplug logo

It is most similar to IPlug and JUCE, two C++ alternatives you should absolutely consider when making plugins. Only a subset of JUCE and IPlug features is supported. If you need VST3 or AAX support, use them instead. Audio Unit support isn't there yet either, but will probably happen this year.

dplug also offers a way to render your plugin UI with a depth map and fancy lighting, but this is out-of-scope for this tutorial. We'll focus on getting something on the table as quickly as possible.

Setting up the environment

  • This tutorial assumes git is installed and ready-to-run. Get it here otherwise.

  • For 64-bit support, it is recommended to install Visual Studio before the D compiler (for example Visual Studio 2013 Community Edition). This is necessary because the DMD compiler uses Microsoft's linker when building 64-bit binaries. You can skip this step if you don't want 64-bit support.

  • Install DMD: go to the D compiler downloads. The easiest way is to download and execute the installer. If you choose so, VisualD will also be installed. It allows editing and debugging D code from within Visual Studio. DMD should be in your PATH environment variable afterwards. Type dmd --version in a command prompt to check for a correct setup.

  • Install DUB: go to the D package manager downloads. You will find releases there. DUB must be in your PATH environment variable. Type dub help in a command prompt to check for correct installation.

Build the M/S Encoder example

For the sake of brevity, the effect we'll create is a simple M/S encoder plugin.

You can find the full source code here. I recommend you copy this example to start creating your own plugins.

  • Checkout dplug: git clone

  • Go to the M/S encoder directory: cd dplug\examples\ms-encode

  • Build the plugin by typing: dub

This will create a DLL which can be used in a host as a VST2 plugin. Now let's get into details and see what files were necessary.

What is the file module.def for?

See its content here.

This file is passed to the linker so that the VST entry point VSTPluginMain is created. This is the function the host will call when instantiating the plugin.

What is the file dub.json for?

See its content here.

DUB needs a project description file to work its magic.

DUB logo

Let's explain all of the JSON keys:

  • name is necessary for every DUB project. In some cases it is even the only mandatory key.
  • importPaths: this list of paths is passed to the D compiler with the -I switch, so that you can import from them.
  • sourcePaths: this list of paths is scanned for .d files to pass to the compiler. The D compilation model is similar to the C++ compilation model: there is a distinction between source files and import paths.
  • targetType must be set to dynamicLibrary.
  • sourceFiles-windows will provide module.def to the linker, when on Windows. Without that exported symbol, the VST host wouldn't be able to load your plugin.
  • dependencies lists all dependencies needed by this project. Only dplug:vst is needed here.
  • CFBundleIdentifierPrefix will only be useful for the Mac version.
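Putting those keys together, a minimal dub.json could look like the sketch below. The version constraint and bundle prefix are placeholders, not the exact values from the example:

```json
{
    "name": "ms-encode",
    "importPaths": [ "." ],
    "sourcePaths": [ "." ],
    "targetType": "dynamicLibrary",
    "sourceFiles-windows": [ "module.def" ],
    "dependencies": {
        "dplug:vst": "~>1.0"
    },
    "CFBundleIdentifierPrefix": "com.example"
}
```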

What is the file msencode.d for?

See its content here.

This is the main source file for our M/S encoder. Like in JUCE or IPlug, it is a matter of subclassing a plugin client class and overloading some functions.

  • Audio processing happens in the processAudio() overload. It is pretty straightforward to understand: you get a number of input pointers, a number of output pointers, and a number of samples. The interesting things happen here!

  • The reset() overload is called at initialization time or whenever the sampling rate changes. Since our M/S encoder has no state, this is left empty.

  • buildParams() is where you define plugin parameters. We have only one boolean parameter here, "On/Off". In processAudio() this parameter is read with readBoolParamValue(paramOnOff).

  • The buildLegalIO() overload is there to define which combination of input and output channels are allowed. In this example, stereo to stereo is the only legal combination.

  • Finally, the buildPluginInfo() overload lets you define the plugin identity and some options.
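The heart of processAudio() is just the M/S arithmetic. Here is a sketch of the inner loop, with the surrounding dplug overload signature omitted:

```d
// Encode left/right into mid/side. When the "On/Off" parameter is
// off, the plugin would instead copy inputs to outputs unchanged.
void encodeMS(const(float)* left, const(float)* right,
              float* mid, float* side, int frames) @nogc nothrow
{
    foreach (i; 0 .. frames)
    {
        mid[i]  = 0.5f * (left[i] + right[i]);
        side[i] = 0.5f * (left[i] - right[i]);
    }
}
```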

How do I debug it?

If you have Visual Studio and VisualD installed, you can generate an IDE project using the command: dub generate visuald. This will create a solution able to build your project, and suitable for debugging (much like CMake or premake do).

Getting an optimized build

To build our M/S encoder with optimizations, you can do:

dub -b release-nobounds -f --combined

Or, for a 64-bit plugin:

dub -b release-nobounds -f --combined -a x86_64

Speed-wise, this plugin should then run at about 2500x real-time, which is expected since it doesn't do much in the first place.

Why the D programming language?

Indeed. Why use D over the obvious alternative: C++?

dplug logo

This is a touchy topic that already has filled entire blog posts. Virtually everyone in real-time audio is using C++ and it's probably still the sanest choice to make.

We are a handful of people using D though, building on prior work with VST and D.

I worked with both languages for years and feel qualified enough for the inevitable bullet-point comparison. The most enabling thing is the D ecosystem and package management through DUB, which makes messing with dependencies basically a solved problem. Development seems to "flow" way more, and I tend to like the end result better, in a way that is undoubtedly personal.

Isn't Garbage Collection at odds with real-time audio?

This will be counter-intuitive to many programmers, but the D GC isn't even given a chance to be a problem. The ways to avoid the dreaded GC pauses are well known within the community.

In our plugins the GC is used in the UI but not in audio processing. No collection happens after UI initialization. If there were some, the audio thread wouldn't get stopped, thanks to being unregistered from the runtime.

The mere presence of a GC doesn't prevent you from doing real-time audio, provided you are given the means to control it and avoid it as needed.


Making VST plugins with D isn't terribly involved. I hope you find the process enjoyable and most importantly, easy.

  February 4, 2016 — Interested in shaping our future plugins?

Looking for beta-testers

We are looking for beta-testers for our next VST/AU plugin, which will be an evolution of Psypan.

What you'll do:

  • Help us test unreleased plugins on your system and DAW,
  • Provide feedback about features through email exchanges,
  • Suggest things that seem obvious to everyone except the developer :)

That's about it. It's not a big time involvement at all.

What you'll get:

  • Our next plugin for free
  • A mention in the user manual (yes, some people actually read it)

If you are interested send an e-mail to [email protected]. Mention your OS version, DAW(s) of choice, and provide a link to your music.

  November 26, 2015 — Graillon 1.0 released

It's a steal!

We're relieved to announce that Graillon can now be purchased through the Gumroad service.

Check out the Graillon page here.

What is Graillon again?

The general idea was to make an effect singers and producers could enjoy.

Graillon is technically a pitch-tracking poly-Bode-shifter VST. It creates distorted, growl-like vocals from clean vocals and keeps consonants untouched. Its most distinctive feature is its low latency.

How to get the best results with Graillon?

Pitch and voicedness detection can get confused by already-processed signals. The ideal input is a mono singing voice, recorded before any processing, with no reverb, chorus or delay, and minimal background noise.

I'm a journalist / blogger, any Press Releases to share?

We added a Press area for you. Do not hesitate to ask for more images or stories.

Check out the Press area.

I'm a developer, anything open-source?

dplug logo

Yes. The open-source part of Graillon is called dplug.

Discuss it on KVR:

I'm not subscribed to the Auburn Sounds newsletter yet.

Please reconsider. This mailing list is very low-traffic, with only meaningful stuff. Subscribing is the logical step forward.

  November 17, 2015 — First plugin Graillon in open beta!

This is it!

We are happy to introduce our first new-style plugin, Graillon.


After 8 months of intensive work, fear and joy, it has reached a near-completion state.

This plugin is now in open beta.

Available as a VST 2.x for Windows XP+ and Mac OS X 10.6+.

This version features a periodic noise every 30 seconds of processing. This artifact will be removed in the final 1.0 release, available Soon™.

Want to know when it's released? Keep in touch by subscribing to our newsletter (if you aren't already, like all reasonable people).

Read more about it here...

  April 7, 2015 — Auburn Sounds website is now live!

This is the beginning of a long journey!

Auburn Sounds is an audio plugins company. In the past, free VST/AU plugins were made under the name GFM. Auburn Sounds is the logical next step, with a completely different framework and a focus on quality.

What should I expect from Auburn Sounds?

  • See our manifesto.
  • The next plugin will focus on vocals.

In the meantime, stay in touch by subscribing to our newsletter!