Connecting AVAudioSourceNode to AVAudioSinkNode does not work

Submitted on 2021-02-08 08:26:19

Question


Context

I am writing a signal interpreter using AVAudioEngine that will analyse microphone input. During development, I want to use a generated default input signal so I don't have to make noise into the microphone to test my changes. I am developing using Catalyst.

Problem

I am using AVAudioSinkNode to receive the audio buffers (its performance is allegedly better than .installTap's). I am using (a subclass of) AVAudioSourceNode to generate a sine wave. When I connect these two together, I expect the sink node's callback to be called, but it never is. Neither is the source node's render block called.

import AVFoundation

let engine = AVAudioEngine()

let output = engine.outputNode
let outputFormat = output.inputFormat(forBus: 0)
let sampleRate = Float(outputFormat.sampleRate)

let sineNode440 = AVSineWaveSourceNode(
    frequency: 440,
    amplitude: 1,
    sampleRate: sampleRate
)

let sink = AVAudioSinkNode { _, frameCount, audioBufferList -> OSStatus in
    print("[SINK] + \(frameCount) \(Date().timeIntervalSince1970)")
    return noErr
}

engine.attach(sineNode440)
engine.attach(sink)
engine.connect(sineNode440, to: sink, format: nil)

try engine.start()

Additional tests

If I connect engine.inputNode to the sink (i.e., engine.connect(engine.inputNode, to: sink, format: nil)), the sink callback is called as expected.

When I connect sineNode440 to engine.outputNode, I can hear the sound and the render block is called as expected. So both the source and the sink work individually when connected to device input/output, but not together.

AVSineWaveSourceNode

Not important to the question but relevant: AVSineWaveSourceNode is based on Apple sample code. This node produces the correct sound when connected to engine.outputNode.

class AVSineWaveSourceNode: AVAudioSourceNode {

    /// We need this separate class to be able to inject the state in the render block.
    class State {
        let amplitude: Float
        let phaseIncrement: Float
        var phase: Float = 0

        init(frequency: Float, amplitude: Float, sampleRate: Float) {
            self.amplitude = amplitude
            phaseIncrement = (2 * .pi / sampleRate) * frequency
        }
    }

    let state: State

    init(frequency: Float, amplitude: Float, sampleRate: Float) {
        let state = State(
            frequency: frequency,
            amplitude: amplitude,
            sampleRate: sampleRate
        )
        self.state = state

        let format = AVAudioFormat(standardFormatWithSampleRate: Double(sampleRate), channels: 1)!

        super.init(format: format, renderBlock: { isSilence, _, frameCount, audioBufferList -> OSStatus in
            print("[SINE GENERATION \(frequency) - \(frameCount)]")
            let tau = 2 * Float.pi
            let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for frame in 0..<Int(frameCount) {
                // Get signal value for this frame at time.
                let value = sin(state.phase) * amplitude
                // Advance the phase for the next frame.
                state.phase += state.phaseIncrement
                if state.phase >= tau {
                    state.phase -= tau
                }
                if state.phase < 0.0 {
                    state.phase += tau
                }
                // Set the same value on all channels (our format has only one channel, so there is just one buffer).
                for buffer in ablPointer {
                    let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
                    buf[frame] = value
                }
            }

            return noErr
        })

        for i in 0..<self.numberOfInputs {
            print("[SINEWAVE \(frequency)] BUS \(i) input format: \(self.inputFormat(forBus: i))")
        }

        for i in 0..<self.numberOfOutputs {
            print("[SINEWAVE \(frequency)] BUS \(i) output format: \(self.outputFormat(forBus: i))")
        }
    }
}
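
For reference, a minimal sketch of the working playback configuration described above (the sine node connected straight to outputNode, mirroring the setup from the question):

import AVFoundation

let engine = AVAudioEngine()
let sampleRate = Float(engine.outputNode.inputFormat(forBus: 0).sampleRate)

let sine = AVSineWaveSourceNode(frequency: 440, amplitude: 1, sampleRate: sampleRate)
engine.attach(sine)
engine.connect(sine, to: engine.outputNode, format: nil)

try engine.start() // the render block fires and a 440 Hz tone is audible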

Answer 1:


outputNode drives the audio processing graph when AVAudioEngine is configured normally ("online"). outputNode pulls audio from the node connected to its input bus, which in turn pulls from its own input node(s), and so on up the chain. When you connect sineNode and sink to each other without making any connection to outputNode, no path leads from either node to an input bus of outputNode, so when the hardware asks outputNode for audio it has nowhere to pull it from. The sink's callback only runs as part of such a pull, which is why it never fires.

If I understand correctly, I think you can accomplish what you'd like to do by getting rid of sink, connecting sineNode to outputNode, and running AVAudioEngine in manual rendering mode. In that mode the engine is disconnected from the device's audio hardware, and you drive the graph yourself by calling renderOffline(_:to:), receiving the rendered audio in a buffer you supply (which plays a role similar to AVAudioSinkNode's callback).
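
A minimal sketch of that approach, assuming the AVSineWaveSourceNode from the question (the sample rate, maximum frame count, and iteration count are arbitrary illustration values):

import AVFoundation

let engine = AVAudioEngine()
let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)!

// Put the engine in offline manual rendering mode before wiring the graph
// and starting the engine.
try engine.enableManualRenderingMode(
    .offline,
    format: format,
    maximumFrameCount: 4096
)

let sineNode440 = AVSineWaveSourceNode(
    frequency: 440,
    amplitude: 1,
    sampleRate: Float(format.sampleRate)
)
engine.attach(sineNode440)
engine.connect(sineNode440, to: engine.outputNode, format: format)

try engine.start()

let buffer = AVAudioPCMBuffer(
    pcmFormat: engine.manualRenderingFormat,
    frameCapacity: engine.manualRenderingMaximumFrameCount
)!

// Drive the graph manually: each call pulls audio through the graph,
// so the source node's render block runs here.
for _ in 0..<10 {
    let status = try engine.renderOffline(buffer.frameCapacity, to: buffer)
    if status == .success {
        // Analyse `buffer` here, as the sink callback would have done.
    }
}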



Source: https://stackoverflow.com/questions/61768696/connecting-avaudiosourcenode-to-avaudiosinknode-does-not-work
