
CoreAudio AudioQueue callback function never called, no errors reported


My first answer was not good enough, so I compiled a minimal example that will play a 2-channel, 16-bit wave file.

The main difference from your code is that I added a property listener that listens for play start and stop events.

As for your code, it seems legit at first glance. Two things I will point out, though:

1. It seems you are allocating buffers with too small a buffer size. I have noticed that AudioQueues won't play if the buffers are too small, which seems to fit your problem.
2. Have you verified the properties returned? (See the sketch below.)
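As a quick sanity check, every Audio Toolbox call returns an OSStatus; here is a minimal sketch of how you could log them so a silently failing call does not go unnoticed (the helper name checkStatus is mine, purely illustrative):

static void checkStatus(OSStatus status, const char *operation) {
    // Log any non-zero status together with the call that produced it.
    if (status != noErr) {
        fprintf(stderr, "%s failed with OSStatus %d\n", operation, (int)status);
    }
}

// Example usage:
// checkStatus(AudioQueueStart(audioQueue, NULL), "AudioQueueStart");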

Back to my code example:

Everything is hard coded, so it is not exactly good coding practice, but it shows how you can do it.

AudioStreamTest.h

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

uint32_t bufferSizeInSamples;
AudioFileID file;
UInt32 currentPacket;
AudioQueueRef audioQueue;
AudioQueueBufferRef buffer[3];
AudioStreamBasicDescription audioStreamBasicDescription;

@interface AudioStreamTest : NSObject

- (void)start;
- (void)stop;

@end

AudioStreamTest.m

#import "AudioStreamTest.h"@implementation AudioStreamTest- (id)init{    self = [super init];    if (self) {        bufferSizeInSamples = 441;        file = NULL;        currentPacket = 0;        audioStreamBasicDescription.mBitsPerChannel = 16;        audioStreamBasicDescription.mBytesPerFrame = 4;        audioStreamBasicDescription.mBytesPerPacket = 4;        audioStreamBasicDescription.mChannelsPerFrame = 2;        audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;        audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;        audioStreamBasicDescription.mFramesPerPacket = 1;        audioStreamBasicDescription.mReserved = 0;        audioStreamBasicDescription.mSampleRate = 44100;    }    return self;}- (void)start {    AudioQueueNewOutput(&audioStreamBasicDescription, AudioEngineOutputBufferCallback, (__bridge void *)(self), NULL, NULL, 0, &audioQueue);    AudioQueueAddPropertyListener(audioQueue, kAudioQueueProperty_IsRunning, AudioEnginePropertyListenerProc, NULL);    AudioQueueStart(audioQueue, NULL);}- (void)stop {    AudioQueueStop(audioQueue, YES);    AudioQueueRemovePropertyListener(audioQueue, kAudioQueueProperty_IsRunning, AudioEnginePropertyListenerProc, NULL);}void AudioEngineOutputBufferCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {    if (file == NULL) return;    UInt32 bytesRead = bufferSizeInSamples * 4;    UInt32 packetsRead = bufferSizeInSamples;    AudioFileReadPacketData(file, false, &bytesRead, NULL, currentPacket, &packetsRead, inBuffer->mAudioData);    inBuffer->mAudioDataByteSize = bytesRead;    currentPacket += packetsRead;    if (bytesRead == 0) {        AudioQueueStop(inAQ, false);    }    else {        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);    }}void AudioEnginePropertyListenerProc (void *inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID) {    //We are only interested in the property kAudioQueueProperty_IsRunning    if (inID != kAudioQueueProperty_IsRunning) return;    //Get the status of the property    UInt32 isRunning = false;    UInt32 size = sizeof(isRunning);    AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &isRunning, &size);    if (isRunning) {        currentPacket = 0;        NSString *fileName = @"/Users/roy/Documents/XCodeProjectsData/FUZZ/03.wav";        NSURL *fileURL = [[NSURL alloc] initFileURLWithPath: fileName];        AudioFileOpenURL((__bridge CFURLRef) fileURL, kAudioFileReadPermission, 0, &file);        for (int i = 0; i < 3; i++){            AudioQueueAllocateBuffer(audioQueue, bufferSizeInSamples * 4, &buffer[i]);            UInt32 bytesRead = bufferSizeInSamples * 4;            UInt32 packetsRead = bufferSizeInSamples;            AudioFileReadPacketData(file, false, &bytesRead, NULL, currentPacket, &packetsRead, buffer[i]->mAudioData);            buffer[i]->mAudioDataByteSize = bytesRead;            currentPacket += packetsRead;            AudioQueueEnqueueBuffer(audioQueue, buffer[i], 0, NULL);        }    }    else {        if (file != NULL) {            AudioFileClose(file);            file = NULL;            for (int i = 0; i < 3; i++) {                AudioQueueFreeBuffer(audioQueue, buffer[i]);                buffer[i] = NULL;            }        }    }}-(void)dealloc {    [super dealloc];    AudioQueueDispose(audioQueue, true);    audioQueue = NULL;}@end

Lastly, I want to include some research I have done today to test the robustness of AudioQueues.

I have noticed that if you make the AudioQueue buffers too small, it won't play at all. That made me play around a bit to see why it is not playing.

If I try a buffer size that can hold only 150 samples, I get no sound at all.

If I try a buffer size that can hold 175 samples, it plays the whole song through, but with a lot of distortion. 175 samples amounts to a tad less than 4 ms of audio at 44.1 kHz (175 / 44100 ≈ 3.97 ms).

The AudioQueue keeps asking for new buffers as long as you keep supplying them. That is regardless of whether the AudioQueue is actually playing your buffers or not.

If you supply a buffer with size 0, the buffer will be lost and an error kAudioQueueErr_BufferEmpty is returned for that enqueue request. You will never see the AudioQueue ask you to fill that buffer again. If this happens for the last buffer you have enqueued, the AudioQueue will stop asking you to fill any more buffers, and you will not hear any more audio for that session.
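In other words, the callback should never enqueue an empty buffer. A sketch of the guard, mirroring what the callback in the example above already does (same variable names, not new API):

if (bytesRead == 0) {
    // Nothing left to read: stop the queue rather than enqueue an empty buffer,
    // which would fail with kAudioQueueErr_BufferEmpty and drop the buffer from rotation.
    AudioQueueStop(inAQ, false);
} else {
    inBuffer->mAudioDataByteSize = bytesRead;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}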

To see why the AudioQueue is not playing anything with smaller buffer sizes, I made a test to see if my callback is called at all even when there is no sound. The answer is that the callback gets called all the time, as long as the AudioQueue is running and needs data.

So if you keep feeding buffers to the queue, no buffer is ever lost. It doesn't happen. Unless there is an error, of course.

So why is no sound playing?

I tested to see if 'AudioQueueEnqueueBuffer()' returned any errors. It did not. There were no other errors within my play routine either, and the data returned from reading the file is also good.

Everything is normal, buffers are good, data re-enqueued is good, there is just no sound.

So my last test was to slowly increase the buffer size until I could hear anything. I finally heard faint and sporadic distortion.

Then it came to me...

It seems that the problem is that the system tries to keep the stream in sync with time. If you enqueue audio and the time at which it should have played has already passed, the queue simply skips that part of the buffer. If the buffer size is too small, more and more data is dropped or skipped until the audio system is back in sync, which never happens if the buffer size stays too small. (You can hear this as distortion if you choose a buffer size that is barely large enough to support continuous play.)
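In practice that means buffers should be sized by playback duration rather than by a handful of samples. A minimal sketch of that calculation, assuming interleaved linear PCM (the helper name and the half-second example are mine, not from the code above):

#include <math.h>
#include <AudioToolbox/AudioToolbox.h>

// Hypothetical helper: bytes needed to hold 'seconds' worth of interleaved PCM audio.
static UInt32 bufferSizeForDuration(const AudioStreamBasicDescription *asbd, Float64 seconds) {
    UInt32 frames = (UInt32)ceil(asbd->mSampleRate * seconds);
    return frames * asbd->mBytesPerFrame;
}

// e.g. 0.5 s of 16-bit stereo at 44.1 kHz: 22050 frames * 4 bytes = 88200 bytes per buffer.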

If you think about it, it is the only way the audio queue can work, but it is a good realisation when you are clueless like me and "discover" how it really works.


I decided to take a look at this again and was able to solve it by making the buffers larger. I've accepted the answer by @RoyGal since it was their suggestion, but I wanted to provide the actual code that works, since I guess others are having the same problem (the question has a few favorites that aren't me at the moment).

One thing I tried was making the packet size larger:

aData->aDescription.mFramesPerPacket = 512; // or some other number
aData->aDescription.mBytesPerPacket = (
    aData->aDescription.mFramesPerPacket * aData->aDescription.mBytesPerFrame
);

This does NOT work: it causes AudioQueuePrime to fail with an 'AudioConverterNew returned -50' message. I guess it wants mFramesPerPacket to be 1 for PCM.
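For reference, a minimal sketch of an ASBD for interleaved 16-bit stereo PCM at 44.1 kHz with mFramesPerPacket left at 1 (this mirrors the format in the accepted answer; it is an illustrative sketch, not taken verbatim from my code):

AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100.0;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mChannelsPerFrame = 2;
asbd.mBitsPerChannel   = 16;
asbd.mBytesPerFrame    = 4;   // 2 channels * 2 bytes per sample
asbd.mFramesPerPacket  = 1;   // uncompressed PCM: one frame per packet
asbd.mBytesPerPacket   = 4;   // equals mBytesPerFrame when mFramesPerPacket is 1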

(I also tried setting the kAudioQueueProperty_DecodeBufferSizeFrames property which didn't seem to do anything. Not sure what it's for.)

The solution seems to be to only allocate the buffer(s) with the specified size:

AudioQueueAllocateBuffer(
    aData->aQueue,
    aData->aDescription.mBytesPerPacket * N_BUFFER_PACKETS / N_BUFFERS,
    &aData->aBuffer[i]
);

And the size has to be sufficiently large. I found the magic number is:

mBytesPerPacket * 1024 / N_BUFFERS

(Where N_BUFFERS is the number of buffers and should be > 1 or playback is choppy.)
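For example, with the 16-bit stereo format above (mBytesPerPacket = 4) and N_BUFFERS = 2 as in the MCVE below, that works out to 4 * 1024 / 2 = 2048 bytes per buffer, i.e. 512 frames, or roughly 11.6 ms of audio per buffer at 44.1 kHz.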

Here is an MCVE demonstrating the issue and solution:

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AudioToolbox/AudioQueue.h>
#import <AudioToolbox/AudioFile.h>

#define N_BUFFERS 2
#define N_BUFFER_PACKETS 1024

typedef struct AStreamData {
    AudioFileID aFile;
    AudioQueueRef aQueue;
    AudioQueueBufferRef aBuffer[N_BUFFERS];
    AudioStreamBasicDescription aDescription;
    SInt64 pOffset;
    volatile BOOL isRunning;
} AStreamData;

void printASBD(AudioStreamBasicDescription* desc) {
    printf("mSampleRate = %d\n", (int)desc->mSampleRate);
    printf("mBytesPerPacket = %d\n", desc->mBytesPerPacket);
    printf("mFramesPerPacket = %d\n", desc->mFramesPerPacket);
    printf("mBytesPerFrame = %d\n", desc->mBytesPerFrame);
    printf("mChannelsPerFrame = %d\n", desc->mChannelsPerFrame);
    printf("mBitsPerChannel = %d\n", desc->mBitsPerChannel);
}

void bufferCallback(
    void *vData, AudioQueueRef aQueue, AudioQueueBufferRef aBuffer
) {
    AStreamData* aData = (AStreamData*)vData;

    UInt32 bRead = 0;
    UInt32 pRead = (
        aBuffer->mAudioDataBytesCapacity / aData->aDescription.mBytesPerPacket
    );

    OSStatus stat;
    stat = AudioFileReadPackets(
        aData->aFile, false, &bRead, NULL, aData->pOffset, &pRead, aBuffer->mAudioData
    );
    if(stat != 0) {
        printf("AudioFileReadPackets returned %d\n", stat);
    }

    if(pRead == 0) {
        aData->isRunning = NO;
        return;
    }

    aBuffer->mAudioDataByteSize = bRead;

    stat = AudioQueueEnqueueBuffer(aQueue, aBuffer, 0, NULL);
    if(stat != 0) {
        printf("AudioQueueEnqueueBuffer returned %d\n", stat);
    }

    aData->pOffset += pRead;
}

AStreamData* beginPlayback(NSURL* path) {
    static AStreamData* aData;
    aData = malloc(sizeof(AStreamData));

    OSStatus stat;

    stat = AudioFileOpenURL(
        (CFURLRef)path, kAudioFileReadPermission, 0, &aData->aFile
    );
    printf("AudioFileOpenURL returned %d\n", stat);

    UInt32 dSize = 0;
    stat = AudioFileGetPropertyInfo(
        aData->aFile, kAudioFilePropertyDataFormat, &dSize, 0
    );
    printf("AudioFileGetPropertyInfo returned %d\n", stat);

    stat = AudioFileGetProperty(
        aData->aFile, kAudioFilePropertyDataFormat, &dSize, &aData->aDescription
    );
    printf("AudioFileGetProperty returned %d\n", stat);

    printASBD(&aData->aDescription);

    stat = AudioQueueNewOutput(
        &aData->aDescription, bufferCallback, aData, NULL, NULL, 0, &aData->aQueue
    );
    printf("AudioQueueNewOutput returned %d\n", stat);

    aData->pOffset = 0;

    for(int i = 0; i < N_BUFFERS; i++) {
        // change YES to NO for stale playback
        if(YES) {
            stat = AudioQueueAllocateBuffer(
                aData->aQueue,
                aData->aDescription.mBytesPerPacket * N_BUFFER_PACKETS / N_BUFFERS,
                &aData->aBuffer[i]
            );
        } else {
            stat = AudioQueueAllocateBuffer(
                aData->aQueue,
                aData->aDescription.mBytesPerPacket,
                &aData->aBuffer[i]
            );
        }
        printf(
            "AudioQueueAllocateBuffer returned %d for aBuffer[%d] with capacity %d\n",
            stat, i, aData->aBuffer[i]->mAudioDataBytesCapacity
        );

        bufferCallback(aData, aData->aQueue, aData->aBuffer[i]);
    }

    UInt32 numFramesPrepared = 0;
    stat = AudioQueuePrime(aData->aQueue, 0, &numFramesPrepared);
    printf("AudioQueuePrime returned %d with %d frames prepared\n", stat, numFramesPrepared);

    stat = AudioQueueStart(aData->aQueue, NULL);
    printf("AudioQueueStart returned %d\n", stat);

    UInt32 pSize = sizeof(UInt32);
    UInt32 isRunning;
    stat = AudioQueueGetProperty(
        aData->aQueue, kAudioQueueProperty_IsRunning, &isRunning, &pSize
    );
    printf("AudioQueueGetProperty returned %d\n", stat);

    aData->isRunning = !!isRunning;
    return aData;
}

void endPlayback(AStreamData* aData) {
    OSStatus stat = AudioQueueStop(aData->aQueue, NO);
    printf("AudioQueueStop returned %d\n", stat);
}

NSString* getPath() {
    // change NO to YES and enter path to hard code
    if(NO) {
        return @"";
    }
    char input[512];
    printf("Enter file path: ");
    scanf("%[^\n]", input);
    return [[NSString alloc] initWithCString:input encoding:NSASCIIStringEncoding];
}

int main(int argc, const char* argv[]) {
    NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];

    NSURL* path = [NSURL fileURLWithPath:getPath()];
    AStreamData* aData = beginPlayback(path);

    if(aData->isRunning) {
        do {
            printf("Queue is running...\n");
            [NSThread sleepForTimeInterval:1.0];
        } while(aData->isRunning);
        endPlayback(aData);
    } else {
        printf("Playback did not start\n");
    }

    [pool drain];
    return 0;
}