Android AudioRecord questions?

瘦欲 @ submitted on 2019-12-18 17:04:44

Question


I have been messing around with the AudioRecord feature of the Android API and have found some strange behavior.

Background info: My phone is an HTC Incredible. I am using the Eclipse plugin for Android development with the emulator. The targeted platform/OS is 2.2, since that is what my phone runs.

Some code:

bufferSize = AudioRecord.getMinBufferSize(FREQUENCY, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);

This is the code I use to set up the AudioRecord API. For the emulator, FREQUENCY has to be set to 8000 for it to work, and getMinBufferSize comes back with a buffer size of 640. For the phone I use 44100. One issue is that the resulting PCM data looks like an eight-bit signed wave: I get values from -127 to 128. I thought AudioFormat.ENCODING_PCM_16BIT would produce something different.

I process the audio with a thread:

public void run() {
  while(isRecording) {
    audioRecord.startRecording();
    byte[] data = new byte[bufferSize];
    audioRecord.read(data, 0, bufferSize);
    listener.setData(data);
    handleData(data);
  }
  audioRecord.release();
}

I have a way to graphically display the corresponding wave in real time using a SurfaceView. There seems to be a lot of noise coming from the mic, on both the emulator and the phone. Do I need to run the data through some sort of filter? I would like to use this data to calculate an FFT and other fun stuff just to play around with the wave, but I need to reduce the noise somehow.

Has anyone else experienced this as well? Does anyone have a solution?

I appreciate your time and responses. Thanks, dk


Answer 1:


When you read bytes from an AudioFormat.ENCODING_PCM_16BIT stream, it actually gives you the lower and upper bytes of each sample as two sequential bytes. This will seem extremely noisy if you take each byte to be a sample (instead of the half sample it actually is), plus the sign will be wrong for the first byte of each pair (the data is in little-endian order, so the low byte comes first).

To get meaningful data out of the stream, read it as shorts instead, e.g.:

public void run() {
  audioRecord.startRecording();
  while (isRecording) {
    // bufferSize is in bytes, so bufferSize/2 shorts hold the same amount of 16-bit audio
    short[] data = new short[bufferSize / 2];
    audioRecord.read(data, 0, bufferSize / 2);
    listener.setData(data);
    handleData(data);
  }
  audioRecord.stop();
  audioRecord.release();
}
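If you already have the data as a byte[] (for example because your listener expects bytes), here is a minimal sketch of reassembling the 16-bit samples yourself; it assumes the little-endian layout described above, and bytesToSamples is just an illustrative helper name:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: convert a raw PCM 16-bit little-endian byte buffer into 16-bit samples.
// Assumes 'data' came from AudioRecord.read(byte[], ...) with ENCODING_PCM_16BIT.
static short[] bytesToSamples(byte[] data, int bytesRead) {
    short[] samples = new short[bytesRead / 2];
    ByteBuffer.wrap(data, 0, bytesRead)
              .order(ByteOrder.LITTLE_ENDIAN)
              .asShortBuffer()
              .get(samples);
    return samples;
}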



Answer 2:


It's been a while since this question was asked, so I don't know if answering it is relevant any more.

The reason you're getting values from -127 to 128 is that you're reading into an array of bytes, each of which holds a signed 8-bit number. For 16-bit audio, read into an array of shorts instead.

I'm afraid I can't help with the noise issue.




Answer 3:


I might be able to help a tiny bit. However, I am in the same situation as you, so I can only tell you about my own experience.

I believe you can get your device's preferred sampleRate like so:

int sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_SYSTEM);
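Since that call reports the playback rate, and not every device records at every rate, one hedged alternative (my assumption, not part of the original answer) is to probe a few common rates with getMinBufferSize, which returns a negative error code for unsupported configurations:

// Sketch: find a sample rate this device's AudioRecord actually supports.
// The candidate list and method name are illustrative.
static int findSupportedSampleRate() {
    int[] candidates = {44100, 22050, 16000, 11025, 8000};
    for (int rate : candidates) {
        int size = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        if (size > 0) {
            return rate;   // negative values mean the configuration is not supported
        }
    }
    return -1;  // nothing worked
}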

The 16-bit encoding mode is the only one that works with AudioRecord, so I am unsure why you are getting 8-bit-like output (right?). Maybe someone knows.

I am also unsure how you pull the amplitude from the retrieved bytes; did you do this?
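For reference, a minimal sketch of pulling an amplitude figure out of the samples (assuming the short[]-based read from Answer 1; the rmsAmplitude name is just illustrative):

// Sketch: RMS amplitude of one buffer of 16-bit PCM samples (roughly 0 .. 32767).
static double rmsAmplitude(short[] samples, int count) {
    long sumSquares = 0;
    for (int i = 0; i < count; i++) {
        sumSquares += (long) samples[i] * samples[i];
    }
    return Math.sqrt((double) sumSquares / count);
}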

Finally, I believe you need to use the periodic listener functions, like so:

int sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_SYSTEM);
int channelMode = AudioFormat.CHANNEL_IN_MONO;
int encodingMode = AudioFormat.ENCODING_PCM_16BIT;
int bufferSize = AudioRecord.getMinBufferSize(sampleRate, channelMode, encodingMode);

AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate, channelMode, encodingMode, bufferSize);

recorder.setPositionNotificationPeriod(intervalFrames);
recorder.setRecordPositionUpdateListener(recordListener);
recorder.startRecording();

private RecordListener recordListener = new RecordListener();
private class RecordListener implements AudioRecord.OnRecordPositionUpdateListener {
    public void onMarkerReached(AudioRecord recorder) {
        Log.v("MicInfoService", "onMarkedReached CALL");
    }

    public void onPeriodicNotification(AudioRecord recorder) {
        Log.v("MicInfoService", "onPeriodicNotification CALL");
    }
}

There are two big questions here which I cannot answer and would like to know the answers to myself:

  1. What should be set as intervalFrames? (One possible choice is sketched below.)
  2. Why are the listener methods never called?
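On the first question, a hedged sketch (my assumption, not from the original answer): setPositionNotificationPeriod expects its period in frames, so deriving it from the sample rate gives a fixed time interval. The 100 ms figure below is just an illustrative choice:

// Sketch: fire onPeriodicNotification roughly every 100 ms of recorded audio.
// The period is expressed in frames (for mono 16-bit PCM, one frame = one sample).
int intervalFrames = sampleRate / 10;          // e.g. 4410 frames at 44100 Hz
recorder.setPositionNotificationPeriod(intervalFrames);
// On the second question: the callbacks appear to be driven by the record position,
// so something still has to keep calling recorder.read(...) for them to fire.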

I wish I could help more.




Answer 4:


This open source project has a good number of helper classes to help you acquire the audio data and experiment with analyzing it:

https://github.com/gast-lib/gast-lib

It has an AsyncTask

It has something that controls AudioRecorder

It even has a simple frequency estimation algorithm
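To get a feel for what a simple frequency estimate can look like, here is a generic zero-crossing sketch (my own illustration, not code from gast-lib):

// Sketch: crude pitch estimate by counting zero crossings in one buffer of 16-bit samples.
// Only reasonable for clean, single-pitched signals.
static double estimateFrequency(short[] samples, int count, int sampleRate) {
    int crossings = 0;
    for (int i = 1; i < count; i++) {
        if ((samples[i - 1] < 0) != (samples[i] < 0)) {
            crossings++;
        }
    }
    double seconds = (double) count / sampleRate;
    return crossings / (2.0 * seconds);  // two zero crossings per cycle
}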



Source: https://stackoverflow.com/questions/4707994/android-audiorecord-questions
