wav

Can ffmpeg convert audio from raw PCM to WAV?

≯℡__Kan透↙ submitted on 2019-11-28 03:57:59
I can convert a WAV file to PCM with ffmpeg -i file.wav -f s16le -acodec pcm_s16le file.pcm. How can I reverse this operation? The WAV container just adds a simple header to the raw PCM data. The header includes the format, sample rate, and number of channels. Since the raw PCM data does not include this information, you will need to specify it on the command line. Options apply to the file that follows them, so options placed before the input file describe the format of the input, and options placed after the input file and before the output file specify the desired format of the output.
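Following that advice, the reverse command would look something like `ffmpeg -f s16le -ar 44100 -ac 1 -i file.pcm file.wav` (the 44100 Hz / mono values here are assumptions; they must match how the PCM was originally produced). The same "just add a header" step can also be sketched with Python's standard wave module:

```python
import struct
import wave

# One second of silent 16-bit mono PCM at 44100 Hz, standing in for an
# existing file.pcm (the rate and channel count are assumptions).
sample_rate = 44100
pcm_bytes = struct.pack("<%dh" % sample_rate, *([0] * sample_rate))

with open("file.pcm", "wb") as f:
    f.write(pcm_bytes)

# Wrap the raw samples in a WAV header -- the reverse of the ffmpeg command.
with open("file.pcm", "rb") as f, wave.open("file.wav", "wb") as wav:
    wav.setnchannels(1)          # mono; must match the original capture
    wav.setsampwidth(2)          # 2 bytes per sample for s16le
    wav.setframerate(sample_rate)
    wav.writeframes(f.read())

with wave.open("file.wav", "rb") as wav:
    print(wav.getnchannels(), wav.getframerate(), wav.getnframes())
```

The sample data round-trips unchanged; only the 44-byte header is added.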

Saving audio input of Android Stock speech recognition engine

我的梦境 submitted on 2019-11-28 03:56:19
I am trying to save to a file the audio data heard by Android's speech recognition service. I implement RecognitionListener as explained here: Speech to Text on Android; save the data into a buffer as illustrated here: Capturing audio sent to Google's speech recognition server; and write the buffer to a WAV file, as shown here: Android Record raw bytes into WAVE file for Http Streaming. My problem is how to get the appropriate audio settings to save in the WAV file's headers. In fact, when I play the WAV file I hear only strange noise with these parameters: short nChannels=2;// audio channels
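The header fields that have to agree with the recorder's actual settings can be sketched in Python, since the layout is language-independent. The block below builds the standard 44-byte PCM header; the concrete values (mono, 16 kHz, 16-bit) are assumptions for illustration, not the poster's actual settings, and a mismatch in any derived field is a typical cause of the "strange noise" symptom:

```python
import struct

def wav_header(n_channels, sample_rate, bits_per_sample, n_data_bytes):
    """Build a canonical 44-byte PCM WAV header. All multi-byte fields
    are little-endian; byte_rate and block_align are derived and must be
    consistent with the other three parameters."""
    byte_rate = sample_rate * n_channels * bits_per_sample // 8
    block_align = n_channels * bits_per_sample // 8
    return struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + n_data_bytes, b"WAVE",
        b"fmt ", 16,            # fmt chunk size for plain PCM
        1,                      # audio format 1 = uncompressed PCM
        n_channels, sample_rate, byte_rate, block_align, bits_per_sample,
        b"data", n_data_bytes)

# These must match what the recognizer actually delivered (assumed values):
header = wav_header(n_channels=1, sample_rate=16000, bits_per_sample=16,
                    n_data_bytes=32000)
print(len(header))  # 44
```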

What do the bytes in a .wav file represent?

时光总嘲笑我的痴心妄想 submitted on 2019-11-28 03:20:38
When I store the data in a .wav file into a byte array, what do these values mean? I've read that they are two-byte representations, but what exactly is contained in these two-byte values? kratenko: You will have heard that audio signals are represented by some kind of wave. If you have ever seen one of those wave diagrams with a line going up and down -- that's basically what's inside those files. Take a look at this picture from http://en.wikipedia.org/wiki/Sampling_rate You see your audio wave (the gray line). The current value of that wave is repeatedly measured and given as a number. That
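As a minimal sketch of what those two-byte values are, here is Python's struct module unpacking a few 16-bit little-endian signed samples, exactly as they would sit in the data chunk of a PCM WAV file (the sample values are made up for illustration):

```python
import struct

# Four 16-bit little-endian signed samples, as they'd appear in the
# data chunk of a 16-bit PCM WAV file.
raw = struct.pack("<4h", 0, 16384, -16384, 32767)

# Each pair of bytes is one signed sample: the measured height of the
# wave at one instant, ranging from -32768 (lowest) to 32767 (highest).
samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
print(samples)  # (0, 16384, -16384, 32767)
```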

NSURL returns Invalid Summary when merging WAV files

陌路散爱 submitted on 2019-11-28 02:12:37
Question: I'm trying to merge 2 .wav files inside an Objective-C project. The idea is that I want it to give this output: file1 + (file2 - header). In that case the first file's header has to be changed to reflect the new size. If the first file is empty (so only the first time), I want the method to return the second file as a whole, but at the file1 URL. Right now I have: +(NSURL *)mergeFile1:(NSURL *)file1 withFile2:(NSURL *)file2 { if(file1 == nil) { return [file2 copy]; } NSData * wav1Data =
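The file1 + (file2 - header) idea can be sketched language-agnostically with Python's stdlib wave module: reading frames strips the second file's header, and closing the writer rewrites the first file's header with the new total size (file names and sample contents below are made up for illustration):

```python
import struct
import wave

def make_wav(path, value, n=100):
    # Tiny mono 16-bit test file whose samples are all `value`.
    with wave.open(path, "wb") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(8000)
        w.writeframes(struct.pack("<%dh" % n, *([value] * n)))

make_wav("a.wav", 1)
make_wav("b.wav", 2)

# readframes() returns only sample data (file2 minus its header).
with wave.open("b.wav", "rb") as src:
    extra = src.readframes(src.getnframes())
with wave.open("a.wav", "rb") as dst:
    params = dst.getparams()
    existing = dst.readframes(dst.getnframes())

# On close, the writer patches the header with the new frame count --
# the "reflect the new size" step from the question.
with wave.open("a.wav", "wb") as out:
    out.setparams(params)
    out.writeframes(existing + extra)

with wave.open("a.wav", "rb") as w:
    print(w.getnframes())  # 200
```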

java wav player adding pause and continue

青春壹個敷衍的年華 submitted on 2019-11-28 00:36:33
I have this code which plays wav files from a folder with play and stop. How can I add pause and continue? import java.awt.*; import java.awt.event.*; import javax.swing.*; import javax.swing.event.*; import java.applet.AudioClip; import java.net.URL; public class JukeBox extends JFrame { private JComboBox musicCombo; private JButton stopButton, playButton; private AudioClip[] music; private AudioClip current; public JukeBox(String title) { super(title); getContentPane().add(new JukeBoxControls()); } public static void main(String [] args) { JukeBox myf = new JukeBox ("WAV COMBOBOX"); myf

How to write stereo wav files in Python?

独自空忆成欢 submitted on 2019-11-27 22:20:10
The following code writes a simple sine at frequency 440Hz to a mono WAV file. How should this code be changed in order to produce a stereo WAV file? The second channel should be at a different frequency. import math import wave import struct freq = 440.0 data_size = 40000 fname = "WaveTest.wav" frate = 11025.0 # framerate as a float amp = 64000.0 # multiplier for amplitude sine_list_x = [] for x in range(data_size): sine_list_x.append(math.sin(2*math.pi*freq*(x/frate))) wav_file = wave.open(fname, "w") nchannels = 1 sampwidth = 2 framerate = int(frate) nframes = data_size comptype = "NONE"
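A minimal sketch of the stereo version, keeping the question's stdlib wave approach: set nchannels to 2 and interleave one left and one right sample per frame. Note that the amplitude must stay at or below 32767 to fit a signed 16-bit sample (the question's amp = 64000.0 would overflow). Frequencies, file name, and sizes below are illustrative choices:

```python
import math
import struct
import wave

frate = 11025
data_size = 4000
freq_left, freq_right = 440.0, 550.0   # second channel at a different pitch
amp = 32000.0                          # must fit in signed 16-bit range

with wave.open("StereoTest.wav", "wb") as w:
    w.setnchannels(2)      # stereo: each frame holds two interleaved samples
    w.setsampwidth(2)
    w.setframerate(frate)
    frames = bytearray()
    for x in range(data_size):
        left = int(amp * math.sin(2 * math.pi * freq_left * x / frate))
        right = int(amp * math.sin(2 * math.pi * freq_right * x / frate))
        frames += struct.pack("<hh", left, right)   # left sample, then right
    w.writeframes(bytes(frames))
```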

Is the endianness of format params guaranteed in RIFF WAV files?

孤人 submitted on 2019-11-27 21:36:11
Is the endianness of format params guaranteed in RIFF WAV files? I have heard conflicting answers to this including references to a RIFX file format. Yes. If the file starts with RIFF, then it's little endian. If it starts with FFIR or RIFX, then it's probably not. Generally, supporting the WAV format means supporting RIFF files, although adding RIFX support should not prove difficult. The AES31 specification for BWF (Broadcast Wave Format) references this specification for RIFF: http://www.tactilemedia.com/info/MCI_Control_Info.html From this: RIFF has a counterpart, RIFX, that is used to
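A small Python sketch of the check this answer describes, treating RIFX (and the reversed FFIR magic the answer mentions) as big-endian; the helper name is made up:

```python
import struct

def riff_endianness(header_bytes):
    """Return the struct byte-order prefix for the format params,
    based on the container's magic number."""
    magic = header_bytes[:4]
    if magic == b"RIFF":
        return "<"           # little-endian: the ordinary WAV case
    if magic in (b"RIFX", b"FFIR"):
        return ">"           # big-endian variants, per the answer above
    raise ValueError("not a RIFF/RIFX container")

print(riff_endianness(b"RIFF\x24\x00\x00\x00WAVE"))  # <
```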

How to edit raw PCM audio data without an audio library?

人走茶凉 submitted on 2019-11-27 21:22:20
Question: I'm interested in precisely extracting portions of a PCM WAV file, down to the sample level. Most audio modules seem to rely on platform-specific audio libraries. I want to make this cross-platform, and speed is not an issue. Are there any native Python audio modules that can do this? If not, I'll have to interpret the PCM binary. While I'm sure I can dig up the PCM specs fairly easily, and raw formats are easy enough to walk, I've never actually dealt with binary data in Python before. Are
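Python's stdlib wave module can already do sample-accurate extraction with no third-party audio library, using setpos() to seek to a frame and readframes() to read an exact count. A minimal sketch (file name and sample values are made up):

```python
import struct
import wave

# Build a small mono 16-bit test file, standing in for the input WAV.
with wave.open("clip.wav", "wb") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(8000)
    w.writeframes(struct.pack("<8h", 10, 20, 30, 40, 50, 60, 70, 80))

# Extract frames 2..4 (sample-accurate), pure standard library.
with wave.open("clip.wav", "rb") as w:
    w.setpos(2)               # seek to frame index 2
    chunk = w.readframes(3)   # read exactly 3 frames
samples = struct.unpack("<3h", chunk)
print(samples)  # (30, 40, 50)
```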

How can I draw sound data from my wav file?

情到浓时终转凉″ submitted on 2019-11-27 18:38:22
First off, this is for homework or... project. I'm having trouble understanding how to draw the sound data waves onto a graph in Java for a project. I have to make this assignment entirely from scratch, with a UI and everything, so it is basically a .wav file editor. The main issue I'm having is getting the sound data into the graph to be drawn; currently I just have a randomly generated array of values being drawn. So far I have a mini-program running that validates the wav file, to check it actually is a wav file. I'm reading it in with a FileInputStream and validating:
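One common waveform-drawing approach, sketched in Python because the bucketing logic is language-independent: group the decoded samples into one bucket per pixel column, keep each bucket's min and max, and draw a vertical line from min to max in that column (the drawing itself stays in the UI code; all names and sizes below are assumptions):

```python
import math
import struct
import wave

# Synthesize a short mono 16-bit file so the sketch is self-contained.
frate, n = 8000, 800
with wave.open("draw.wav", "wb") as w:
    w.setnchannels(1); w.setsampwidth(2); w.setframerate(frate)
    w.writeframes(struct.pack("<%dh" % n,
        *(int(30000 * math.sin(2 * math.pi * 440 * i / frate))
          for i in range(n))))

# Decode all samples, then reduce them to one (min, max) pair per column.
with wave.open("draw.wav", "rb") as w:
    data = struct.unpack("<%dh" % w.getnframes(),
                         w.readframes(w.getnframes()))

width = 80                       # pixel columns in the graph (assumption)
per_px = len(data) // width
columns = [(min(data[i*per_px:(i+1)*per_px]),
            max(data[i*per_px:(i+1)*per_px])) for i in range(width)]
print(len(columns))  # 80
```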

What is a channel in a .wav file format? Do all channels play simultaneously when a wav file is played?

本小妞迷上赌 submitted on 2019-11-27 17:45:57
Question: I read about the .wav file format by googling; all I could figure out was that frames are made of samples (of some defined bit depth) and a stereo wav file has a multiple of something called channels... The confusion is whether a channel is made up of frames. Do all channels play together when I play an audio file? If a channel is made up of frames, are all channels equal in length (bit-wise)? Please answer if someone can; I have to display each channel separately when playing a wav file in waveform. Answer 1:
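A small Python sketch of the frame/channel relationship (file name and sample values are made up): a frame holds one sample per channel, interleaved (L0 R0 L1 R1 ...), so every channel has exactly as many samples as there are frames, and a player renders all channels of a frame at the same instant, which is why they play simultaneously:

```python
import struct
import wave

# Write 3 stereo frames: each frame is a (left, right) pair, interleaved.
with wave.open("ch.wav", "wb") as w:
    w.setnchannels(2); w.setsampwidth(2); w.setframerate(8000)
    w.writeframes(struct.pack("<6h", 1, -1, 2, -2, 3, -3))

with wave.open("ch.wav", "rb") as w:
    n = w.getnframes()                        # 3 frames, not 6 samples
    flat = struct.unpack("<%dh" % (n * 2), w.readframes(n))

left = flat[0::2]     # de-interleave: channels are equal length in frames
right = flat[1::2]
print(n, left, right)  # 3 (1, 2, 3) (-1, -2, -3)
```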