audio-recording

Record Sounds from AudioContext (Web Audio API)

丶灬走出姿态 submitted on 2019-12-04 06:52:25
Is there a way to record the audio data that's being sent to webkitAudioContext.destination? The data the nodes send there is played by the browser, so there should be some way to store that data in a (.wav) file. Currently there's no native way to do that, but as Max said in the comment above, Recorderjs does essentially this (it doesn't chain onto the destination; it's a ScriptProcessorNode you can connect other nodes to and have its input recorded). I built on Recorderjs to make a simple audio file recorder: https://github.com/cwilso/AudioRecorder
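A minimal sketch of the ScriptProcessorNode approach described above (the buffer size, wiring, and function names are illustrative, not taken from Recorderjs itself):

    var audioCtx = new AudioContext();
    var chunks = [];

    function recordFrom(sourceNode) {
      // 4096-frame buffer, mono in / mono out
      var processor = audioCtx.createScriptProcessor(4096, 1, 1);
      processor.onaudioprocess = function (e) {
        // Copy the samples out; the underlying buffer is reused between callbacks.
        chunks.push(new Float32Array(e.inputBuffer.getChannelData(0)));
      };
      sourceNode.connect(processor);            // record whatever feeds this node
      processor.connect(audioCtx.destination);  // keep the node pulling data (and audible)
      return processor;
    }

The collected Float32Array chunks can then be interleaved and wrapped in a WAV header to produce the .wav file, which is essentially what Recorderjs does.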

Recording audio works, playback way too fast

两盒软妹~` submitted on 2019-12-04 04:42:58
Question: I am recording audio input from the microphone using the following function: private function handleSampleData(p_sampleEvent:SampleDataEvent):void { var data:Number; while (p_sampleEvent.data.bytesAvailable) { data = p_sampleEvent.data.readFloat(); _buffer.writeFloat(data); _buffer.writeFloat(data); } } This seems to work. After I have finished recording, I copy the recorded data to another buffer like this: _buffer.position = 0; _lastRecord = new ByteArray(); while (_buffer.bytesAvailable) { …
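The excerpt cuts off before the playback code, but one common cause of sped-up playback with this pattern is a sample-rate mismatch: Sound's SAMPLE_DATA playback always runs at 44.1 kHz stereo, while Microphone defaults to a lower capture rate. A hedged sketch of the capture setup (not taken from the original post):

    var mic:Microphone = Microphone.getMicrophone();
    mic.rate = 44;           // request 44.1 kHz capture so it matches the playback rate
    mic.setSilenceLevel(0);  // don't gate out quiet passages
    mic.addEventListener(SampleDataEvent.SAMPLE_DATA, handleSampleData);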

What is the simplest way to continuously sample from the line-in using C#?

不想你离开。 submitted on 2019-12-04 03:25:22
I want to continuously sample from my PC's audio line-in using C# (and then process that data). What is the best way to do the sampling? You can do some (basic) audio capture using the open-source NAudio .NET Audio Library. Have a look at the NAudioDemo project to see a simple example of recording to a WAV file using the WaveIn functions. NAudio also now includes the ability to capture audio using WASAPI (Windows Vista and above) and ASIO (if your soundcard has an ASIO driver). There is the Alvas Audio library as well; it's not free and has a nag screen if you don't pay, but it works beautifully. And the …
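A minimal sketch of that WaveIn-style capture with a recent NAudio (the file name, format, and class name are illustrative; the NAudioDemo project shows the full pattern):

    using System;
    using NAudio.Wave;

    class LineInCapture
    {
        static void Main()
        {
            // WaveInEvent raises DataAvailable on a background thread,
            // so it also works in a console app with no message pump.
            var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 1) };
            var writer = new WaveFileWriter("capture.wav", waveIn.WaveFormat);

            waveIn.DataAvailable += (s, e) =>
            {
                // e.Buffer holds the PCM bytes just captured; process or store them here.
                writer.Write(e.Buffer, 0, e.BytesRecorded);
            };

            waveIn.StartRecording();
            Console.WriteLine("Recording... press Enter to stop.");
            Console.ReadLine();

            waveIn.StopRecording();
            writer.Dispose();
            waveIn.Dispose();
        }
    }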

How to get & modify metadata of supported audio files on Android?

泄露秘密 submitted on 2019-12-04 01:56:12
Question: Background: Android supports encoding and decoding of various audio file formats. I record audio into an audio file using the android.media.MediaRecorder class, but I also wish to show information about the files I've recorded (not standard data, just text, maybe even configurable by the user), and I think it's best to store this information within the files themselves. Examples of possible data to store: when it was recorded, where it was recorded, notes by the user... The problem: The MediaRecorder class doesn't …
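For the reading half, the framework's MediaMetadataRetriever can pull existing tags out of a recorded file; a hedged sketch follows (class and method names are mine — writing tags back is the part the framework has no ready-made API for):

    import android.media.MediaMetadataRetriever;

    public final class AudioMetadataReader {
        public static String describe(String filePath) {
            MediaMetadataRetriever retriever = new MediaMetadataRetriever();
            try {
                retriever.setDataSource(filePath);
                String title = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_TITLE);
                String date  = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DATE);
                return title + " / " + date;  // either value may be null if the tag is absent
            } finally {
                retriever.release();
            }
        }
    }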

How to play audio if we pass NSData?

℡╲_俬逩灬. submitted on 2019-12-04 01:24:49
Question: I have this path (file://localhost/var/mobile/Applications/8F81BA4C-7C6F-4496-BDA7-30C45478D758/Documents/sound.wav), which points to a recorded audio file, and I am converting the file at that path to NSData. For example, the NSData looks like this: <00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 64617461 1cf50200 32003200 e2ffe2ff 3cff3cff 08fe08fe 44fe44fe 04fe04fe e6fde6fd 95fd95fd 96fe96fe …
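If the NSData really holds a complete .wav file, header included (the 64617461 bytes above are the ASCII "data" chunk marker, which suggests it does), AVAudioPlayer can play it straight from memory. A hedged Swift sketch (the function name is mine):

    import AVFoundation

    func play(wavData: Data) throws -> AVAudioPlayer {
        let player = try AVAudioPlayer(data: wavData)
        player.prepareToPlay()
        player.play()
        return player  // keep a strong reference, or playback stops almost immediately
    }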

iOS Swift read PCM Buffer

女生的网名这么多〃 submitted on 2019-12-03 22:15:37
I have an Android project that reads a short[] array of PCM data from the microphone buffer for live analysis. I need to port this functionality to iOS in Swift. In Android it is very simple and looks like this: import android.media.AudioFormat; import android.media.AudioRecord; ... AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, someSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, AudioRecord.getMinBufferSize(...)); recorder.startRecording(); Later I read the buffer with recorder.read(data, offset, length); // data is short[] (That's what I'm …
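The closest Swift/iOS equivalent of that AudioRecord loop is an AVAudioEngine tap on the input node, converting each AVAudioPCMBuffer to an Int16 array for analysis. A hedged sketch (buffer size and names are mine):

    import AVFoundation

    let engine = AVAudioEngine()

    func startCapture() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)  // hardware sample rate, Float32 samples

        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            guard let channel = buffer.floatChannelData?[0] else { return }
            let frames = Int(buffer.frameLength)
            // Convert Float32 samples in -1...1 to 16-bit PCM, like Android's short[].
            var samples = [Int16](repeating: 0, count: frames)
            for i in 0..<frames {
                samples[i] = Int16(max(-1, min(1, channel[i])) * Float(Int16.max))
            }
            // samples is now ready for the live analysis step
        }
        try engine.start()
    }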

iPhone trim audio recording

萝らか妹 submitted on 2019-12-03 20:48:32
I have a voice memo component in my app, and I want to allow the user to trim the audio, similar to how QuickTime X on Mac OS X 10.6 handles it, or like the Voice Memos app on the iPhone. Here's an example of both: Any help is appreciated. I am not a UI programmer by any means. This was a test I wrote to see how to write custom controls. This code may or may not work; I have not touched it in some time. Header: @interface SUIMaxSlider : UIControl { @private float_t minimumValue; float_t maximumValue; float_t value; CGPoint trackPoint; } @property (nonatomic, assign) float_t minimumValue, …
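The custom slider above is only the UI half; for the trim itself, an AVAssetExportSession restricted to a timeRange is one straightforward route. A hedged Swift sketch (all names, the preset, and the output format are mine):

    import AVFoundation

    func trim(asset: AVAsset, from start: Double, to end: Double,
              outputURL: URL, completion: @escaping (Bool) -> Void) {
        guard let export = AVAssetExportSession(asset: asset,
                                                presetName: AVAssetExportPresetAppleM4A) else {
            completion(false)
            return
        }
        export.outputURL = outputURL
        export.outputFileType = .m4a
        // Export only the user-selected slice of the recording.
        export.timeRange = CMTimeRange(start: CMTime(seconds: start, preferredTimescale: 600),
                                       end: CMTime(seconds: end, preferredTimescale: 600))
        export.exportAsynchronously {
            completion(export.status == .completed)
        }
    }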

iOS: How to trim silence from start and end of .aif audio recording?

£可爱£侵袭症+ submitted on 2019-12-03 16:52:47
My app includes the ability for the user to record a brief message; I'd like to trim off any silence (or, to be more precise, any audio whose volume falls below a given threshold) from the beginning and end of the recording. I'm recording the audio with an AVAudioRecorder and saving it to an .aif file. I've seen some mention elsewhere of methods by which I could have it wait to start recording until the audio level reaches a threshold; that'd get me halfway there, but won't help with trimming silence off the end. If there's a simple way to do this, I'll be eternally grateful! Thanks. This …
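One way to find the non-silent range on a modern SDK is to load the .aif into an AVAudioPCMBuffer and scan for the first and last frame whose amplitude exceeds the threshold; that range can then be exported (for example with AVAssetExportSession and a timeRange, as in the trimming sketch above). A hedged Swift sketch, names and threshold mine:

    import AVFoundation

    func nonSilentRange(url: URL, threshold: Float = 0.02) throws
        -> (start: AVAudioFramePosition, end: AVAudioFramePosition)? {
        let file = try AVAudioFile(forReading: url)
        let frameCount = AVAudioFrameCount(file.length)
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: frameCount) else { return nil }
        try file.read(into: buffer)
        guard let samples = buffer.floatChannelData?[0] else { return nil }

        let n = Int(buffer.frameLength)
        var first = -1, last = -1
        for i in 0..<n where abs(samples[i]) > threshold {
            if first < 0 { first = i }
            last = i
        }
        guard first >= 0 else { return nil }  // the whole file is below the threshold
        return (AVAudioFramePosition(first), AVAudioFramePosition(last))
    }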

Recording with NAudio using C#

喜你入骨 submitted on 2019-12-03 13:41:02
Question: I am trying to record audio in C# using NAudio. After looking at the NAudio Chat Demo, I used some code from there to record. Here is the code: using System; using NAudio.Wave; public class FOO { static WaveIn s_WaveIn; static void Main(string[] args) { init(); while (true) /* Yeah, this is bad, but just for testing.... */ System.Threading.Thread.Sleep(3000); } public static void init() { s_WaveIn = new WaveIn(); s_WaveIn.WaveFormat = new WaveFormat(44100, 2); s_WaveIn.BufferMilliseconds = …
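The excerpt stops in the middle of init(), but one common gotcha with this exact pattern (hedged — the original post may have hit a different problem) is that new WaveIn() delivers DataAvailable through a Windows message pump, which a console Main that just sleeps never services. A sketch of an init() that avoids this, with the field retyped as IWaveIn and an illustrative BufferMilliseconds value since the original one is cut off:

    static IWaveIn s_WaveIn;

    public static void init()
    {
        var waveIn = new WaveInEvent();   // background-thread callbacks, no message pump needed
        waveIn.WaveFormat = new WaveFormat(44100, 2);
        waveIn.BufferMilliseconds = 100;  // illustrative value only
        waveIn.DataAvailable += (sender, e) =>
            Console.WriteLine("Captured " + e.BytesRecorded + " bytes");
        s_WaveIn = waveIn;
        waveIn.StartRecording();
    }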

How to set timestamp of CMSampleBuffer for AVWriter writing

一笑奈何 submitted on 2019-12-03 13:04:33
Question: I'm working with AVFoundation to capture and record audio. There are some issues I don't quite understand. Basically I want to capture audio from AVCaptureSession and write it using AVWriter, but I need to shift the timestamps of the CMSampleBuffers I get from AVCaptureSession. Reading the CMSampleBuffer documentation, I see two different timestamp terms: 'presentation timestamp' and 'output presentation timestamp'. What is the difference between the two? Say I get a CMSampleBuffer …
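Roughly, the output presentation timestamp is the presentation timestamp plus whatever output offset has been set on the buffer (via CMSampleBufferSetOutputPresentationTimeStamp); until one is set, the two are the same. For shifting a buffer before handing it to the writer, the usual tool is CMSampleBufferCreateCopyWithNewTiming. A hedged Swift sketch (function and variable names are mine):

    import CoreMedia

    func shifted(_ sampleBuffer: CMSampleBuffer, by offset: CMTime) -> CMSampleBuffer? {
        // Pull out the existing timing entries.
        var count: CMItemCount = 0
        CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: 0,
                                               arrayToFill: nil, entriesNeededOut: &count)
        var timing = [CMSampleTimingInfo](repeating: CMSampleTimingInfo(), count: count)
        CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: count,
                                               arrayToFill: &timing, entriesNeededOut: &count)

        // Shift every entry's presentation (and, if present, decode) timestamp.
        for i in 0..<count {
            timing[i].presentationTimeStamp = CMTimeAdd(timing[i].presentationTimeStamp, offset)
            if timing[i].decodeTimeStamp.isValid {
                timing[i].decodeTimeStamp = CMTimeAdd(timing[i].decodeTimeStamp, offset)
            }
        }

        // Copy the buffer with the new timing attached.
        var result: CMSampleBuffer?
        CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                              sampleBuffer: sampleBuffer,
                                              sampleTimingEntryCount: count,
                                              sampleTimingArray: &timing,
                                              sampleBufferOut: &result)
        return result
    }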