
Where to start on iOS audio synth?


This is a really good question. I sometimes ask myself the same thing, and I always end up using the MoMu Toolkit from the guys at Stanford. This library provides a nice callback function that connects to AudioUnits/AudioToolbox (I'm not sure which), so that all you need to do is set the sampling rate, the buffer size, and the bit depth of the audio samples, and then you can synthesize/process anything you like inside the callback function.
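
Whatever library you use, the callback you end up writing usually has the same shape: fill a buffer with the next block of samples, keeping a bit of state (such as an oscillator phase) between calls. Here's a rough Swift sketch of that pattern; the function name and signature are made up for illustration and are not MoMu's actual API:

```swift
import Foundation

// Illustrative only: the exact signature depends on the library
// (MoMu, STK, RemoteIO, ...). The phase lives outside the callback so the
// waveform continues smoothly from one buffer to the next.
var phase = 0.0

func renderSine(into buffer: UnsafeMutablePointer<Float>,
                frameCount: Int,
                sampleRate: Double,
                frequency: Double) {
    let increment = 2.0 * Double.pi * frequency / sampleRate
    for frame in 0..<frameCount {
        buffer[frame] = Float(sin(phase))   // one mono sample in -1...1
        phase += increment
        if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
    }
}
```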

I also recommend the Synthesis ToolKit (STK) for iOS, which was also released by Ge Wang at Stanford. Really cool stuff for synthesizing/processing audio.

Every time Apple releases a new iOS version I check the new documentation, hoping to find a better (or simpler) way to synthesize audio, but so far with no luck.

EDIT: I want to add a link to the AudioGraph source code: https://github.com/tkzic/audiograph. This is a really interesting app by Tom Zicarelli that shows the potential of AudioUnits. The code is easy to follow, and it's a great way to learn about this (some would say convoluted) process of dealing with low-level audio in iOS.


Swift & Objective-C

There's a great open source project that is well documented with videos and tutorials for both Objective-C & Swift.

AudioKit.io
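
To give an idea of how little code a basic patch takes, here is roughly what a "hello world" oscillator looked like in the AudioKit 4 era. The API changes between major versions, so treat this as a sketch and check audiokit.io for the current syntax:

```swift
import AudioKit

// AudioKit 4-era sketch: one oscillator routed straight to the output.
// Newer AudioKit versions use a different API (AudioEngine, etc.).
let oscillator = AKOscillator()
oscillator.frequency = 440
oscillator.amplitude = 0.5

AudioKit.output = oscillator
try AudioKit.start()
oscillator.start()
```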


The lowest-level way to get buffers to the sound card is through the Audio Unit API, and in particular the RemoteIO audio unit. It's a bunch of gibberish at first, but there are a few examples scattered around the web. http://atastypixel.com/blog/using-remoteio-audio-unit/ is one.
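
For a feel of what that looks like from Swift, here is a rough sketch of the RemoteIO setup: instantiate the unit, set a float stream format, attach a render callback, and start it. The 440 Hz sine is just a placeholder, error handling is omitted, and in a real app you would also configure an AVAudioSession, so treat it as an outline rather than production code:

```swift
import AudioToolbox
import Foundation

// Sketch only: a RemoteIO unit driven by a render callback.
var phase = 0.0

let renderCallback: AURenderCallback = { _, _, _, _, frameCount, ioData in
    guard let ioData = ioData else { return noErr }
    let increment = 2.0 * Double.pi * 440.0 / 44100.0        // placeholder 440 Hz sine
    let buffers = UnsafeMutableAudioBufferListPointer(ioData)
    for frame in 0..<Int(frameCount) {
        let sample = Float32(sin(phase))
        phase += increment
        for buffer in buffers {                              // non-interleaved: one buffer per channel
            buffer.mData?.assumingMemoryBound(to: Float32.self)[frame] = sample
        }
    }
    return noErr
}

// Find and instantiate the RemoteIO output unit.
var desc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                     componentSubType: kAudioUnitSubType_RemoteIO,
                                     componentManufacturer: kAudioUnitManufacturer_Apple,
                                     componentFlags: 0,
                                     componentFlagsMask: 0)
var unit: AudioUnit?
AudioComponentInstanceNew(AudioComponentFindNext(nil, &desc)!, &unit)

// Ask for 32-bit float, non-interleaved stereo at 44.1 kHz on the output element.
var format = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved,
    mBytesPerPacket: 4, mFramesPerPacket: 1, mBytesPerFrame: 4,
    mChannelsPerFrame: 2, mBitsPerChannel: 32, mReserved: 0)
AudioUnitSetProperty(unit!, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0,
                     &format, UInt32(MemoryLayout<AudioStreamBasicDescription>.size))

// Hook up the callback and start rendering.
var callback = AURenderCallbackStruct(inputProc: renderCallback, inputProcRefCon: nil)
AudioUnitSetProperty(unit!, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0,
                     &callback, UInt32(MemoryLayout<AURenderCallbackStruct>.size))
AudioUnitInitialize(unit!)
AudioOutputUnitStart(unit!)
```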

I imagine there are other ways to fill buffers, for example using the AVFoundation framework, but I have never tried them.
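
For what it's worth, here is a rough sketch of one AVFoundation route that appeared later (AVAudioSourceNode, iOS 13+), where AVAudioEngine pulls samples from a render block much like the RemoteIO callback above:

```swift
import AVFoundation

// Sketch only: AVAudioEngine pulls samples from the source node's render block.
let engine = AVAudioEngine()
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
var phase = 0.0

let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let increment = 2.0 * Double.pi * 440.0 / sampleRate    // placeholder 440 Hz sine
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += increment
        for buffer in buffers {
            buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = sample
        }
    }
    return noErr
}

engine.attach(sourceNode)
engine.connect(sourceNode, to: engine.mainMixerNode, format: nil)
try engine.start()
```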

The other way to do it is to use openFrameworks for all of your audio stuff, but that also assumes you want to do your drawing in OpenGL. Tearing out the audio unit implementation shouldn't be too much of an issue, though, if you want to do your drawing some other way. This particular implementation is nice because it casts everything to floats in -1..1 for you to fill up.

Finally, if you want a jump start on a bunch of oscillators/filters/delay lines that you can hook into the openFrameworks audio system (or any system that uses arrays of floats in -1..1), you might want to check out http://www.maximilian.strangeloop.co.uk.