audio-recording

Plotting Android recording samples' amplitude on a moving graph in real time

穿精又带淫゛_ submitted on 2019-12-24 08:57:41
Question: I'm developing a recording app, for which I am trying to plot the amplitude of audio recording samples against time in an Android View, similar to the image below. The graph should move with time towards the left side of the screen. I tried using the LineChart of the MPChart library, but the result is still not as expected. The MPChart homepage has a screenshot of such a graph, but I couldn't figure out how to implement it. Please help. Thank you. Answer 1: You can use LineChart for that; here is the official RealtimeLineChartActivity

AVAudioRecorder - Continue recording to a file after the user stops recording by leaving the application and then re-opening it

你离开我真会死。 submitted on 2019-12-24 07:45:52
Question: Can this be done? And if not, how far down towards Core Audio do I need to go (what method of recording should I be using instead)? I've noticed that the behavior of AVAudioRecorder is to overwrite a file if it finds one at the provided path when you ask it to record again, so I know that's not going to work. I'm also curious about file format restrictions with this idea. Can you effectively resume an AAC or IMA4 encoding (the length of the files I want to record makes WAV and probably even

Is it possible to get the current sample amplitude via MediaRecorder or another class?

我只是一个虾纸丫 submitted on 2019-12-24 04:34:07
Question: I have a media recorder and want to record media from the mic and get its amplitude samples. I want to get the correct, current amplitude instantaneously when calling some API. But there is just one API in MediaRecorder for getting amplitude: getMaxAmplitude, and it returns the maximum absolute amplitude measured since the last call. Is it possible to get the current sample amplitude instantaneously via MediaRecorder or another class from the mic? Thanks, Best regards, Chen Answer 1: The

Create an audio file from amplitude samples

随声附和 submitted on 2019-12-24 04:32:42
Question: If I have a text file of sample amplitudes (0-26522), how can I create a playable audio file from them? I have a vague recollection of tinkering with .pcm files and 8-bit samples way back in the nineties. Is there any software to automatically create an audio file (PCM or another format) from my samples? I found SoX, but even after looking at the documentation I can't figure out if it can do what I want, and if so how... Answer 1: There is a GUI audio workstation called Audacity that lets you do this: File ->
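
The answer above is cut off; as an illustrative sketch (not the method from the truncated answer), raw amplitude values like these can also be written into a playable WAV file with Python's standard wave module, assuming one value per line in a hypothetical samples.txt and a chosen sample rate of 8000 Hz:

import wave
import struct

SAMPLE_RATE = 8000          # assumed playback rate; pick whatever matches your data
INPUT_FILE = "samples.txt"  # hypothetical text file, one amplitude (0-26522) per line

# Read the amplitudes and centre them around zero so they fit signed 16-bit PCM.
with open(INPUT_FILE) as f:
    samples = [int(line) for line in f if line.strip()]
centred = [s - 13261 for s in samples]   # 26522 / 2, keeps values within -32768..32767

with wave.open("output.wav", "wb") as wav:
    wav.setnchannels(1)                  # mono
    wav.setsampwidth(2)                  # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack("<%dh" % len(centred), *centred))

Opening the resulting output.wav in a media player or in Audacity is a quick way to sanity-check that the samples were interpreted at a sensible rate.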

Unwanted silence parts in NAudio recording

為{幸葍}努か submitted on 2019-12-24 03:25:54
Question: I am trying to write an application that records the sound from the microphone and sends it directly to the speakers. For testing I use a headset to avoid acoustic feedback. I found this tutorial: https://markheath.net/post/how-to-record-and-play-audio-at-same . Since I had a problem with this in my final application, I created a small test app to make sure that the cause of my problem isn't some side effect. I created a small test program with 2 buttons (start & stop) to test it. But for some reason

PyAudio: AttributeError: 'module' object has no attribute 'PyAudio'

╄→гoц情女王★ submitted on 2019-12-24 02:44:18
Question: Today I installed PyAudio following the instructions at http://people.csail.mit.edu/hubert/pyaudio/ and tried to run some examples like this one.

import pyaudio
import wave

CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 44100
RECORD_SECONDS = 5
WAVE_OUTPUT_FILENAME = "output.wav"

p = pyaudio.PyAudio()
stream = p.open(format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input=True,
                frames_per_buffer=CHUNK)

print("* recording")

frames = []
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
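
The entry above is cut off before any answer; as a general diagnostic (an assumption on my part, not taken from the original answer), this particular AttributeError often means that a local file or folder named pyaudio is shadowing the installed package, which a quick check can reveal:

import pyaudio

# If this path points into your own project instead of site-packages,
# a local pyaudio.py (or pyaudio/ directory) is shadowing the real library;
# rename or remove it and delete any stale pyaudio.pyc files.
print(pyaudio.__file__)
print(hasattr(pyaudio, "PyAudio"))  # should print True for a correct install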

Record, modify pitch and play back audio in real time on iOS

♀尐吖头ヾ submitted on 2019-12-23 19:23:10
Question: I am just wondering if anyone can explain how I would go about recording audio, modifying it (pitch?) and playing it back to the user on the iPhone. I'm not asking for someone to do this for me, just a few hints on how I should go about it. Looking through the docs, it seems like I should be using AVAudioSession, AVAudioRecorder and AVAudioPlayer (AVFoundation.Framework) for the recording and playing parts. Or should I be using the CoreAudio.Framework? And then there is the question regarding

PyAudio recorder script IOError: [Errno Input overflowed] -9981

五迷三道 submitted on 2019-12-23 12:27:40
Question: The code below is what I use to record audio until the "Enter" key is pressed; it raises an exception:

import pyaudio
import wave
import curses
from time import gmtime, strftime
import sys, select, os

# Name of sub-directory where WAVE files are placed
current_experiment_path = os.path.dirname(os.path.realpath(__file__))
subdir_recording = '/recording/'
# print current_experiment_path + subdir_recording

# Variables for Pyaudio
chunk = 1024
format = pyaudio.paInt16
channels = 2
rate = 48000
#
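
The script and its answer are truncated above; as a general workaround (an assumption, not the original answer), the -9981 "Input overflowed" error is raised by stream.read() when the input buffer is not drained fast enough, and it can often be avoided by using a larger buffer and/or telling PyAudio not to raise on overflow:

import pyaudio

CHUNK = 4096  # a larger buffer than 1024 gives the read loop more headroom
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,
                channels=2,
                rate=48000,
                input=True,
                frames_per_buffer=CHUNK)

# exception_on_overflow=False makes read() drop overflowed frames instead of
# raising IOError(-9981); a few lost samples are usually acceptable for a
# simple recorder, but keep the loop between reads as tight as possible anyway.
data = stream.read(CHUNK, exception_on_overflow=False)

stream.stop_stream()
stream.close()
p.terminate()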

How to set up codec, sample rate and bitrate on an audio blob in JavaScript?

拥有回忆 submitted on 2019-12-23 08:57:21
Question: I just created a blob: const audioBlob = new Blob(audioChunks, { 'type' : 'audio/wav; codecs=0' }); and sent it to the backend in base64 format. I saved this into a file named "test.wav" using the following code: await writeFile('./temp/test.wav', Buffer.from(filename.replace('data:audio/wav; codecs=0;base64,', ''), 'base64'), 'base64'); In the output "test.wav" file, I get the codec as opus, bitrate=N/A and sample rate=48000. I want to change these values to codec=wav, bitrate=256kbps and

Best practice for C++ audio capture API under Linux?

我的梦境 submitted on 2019-12-23 07:53:35
Question: I need to create a C++ application with simple record-from-microphone functionality. I can't say that there aren't enough audio APIs to do this! Pulse, ALSA, /dev/dsp, OpenAL, etc. My question is: what is the current "best practice" API? Pulse seems to be supported by most modern distros, but seems almost devoid of documentation. Will OpenAL be supported across different distros, or is it too obscure? Have I missed any? Is there not a simple answer? Answer 1: Lennart Poettering has a guide here