audio-streaming

ffmpeg: flushing output file every chunk

僤鯓⒐⒋嵵緔 submitted on 2020-05-13 02:28:12

Question: I'm using ffmpeg to generate a sine tone in real time for 10 seconds. Unfortunately, ffmpeg seems to flush the output file only rarely, every few seconds. I'd like it to flush every 2048 bytes (= 2-byte sample width × 1024 samples, my custom chunk size). The output of the following script:

import os
import time
import subprocess

cmd = 'ffmpeg -y -re -f lavfi -i "sine=frequency=440:duration=10" -blocksize 2048 test.wav'
subprocess.Popen(cmd, shell=True)
time.sleep(0.1)
while True:
    print(os.path
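For reference, the chunk arithmetic behind the question can be sketched in Python. The sample rate below is an assumption matching the default of ffmpeg's sine source (44100 Hz, 16-bit, mono); adjust if your output format differs.

```python
# Sketch: how many full 2048-byte chunks a 10-second 16-bit mono stream yields.
# Rate/width/channel values are assumptions (ffmpeg sine source defaults).

CHUNK = 2 * 1024  # 2-byte sample width * 1024 samples = 2048 bytes

def pcm_bytes(duration_s: int, rate: int = 44100, width: int = 2, channels: int = 1) -> int:
    """Raw PCM byte count for the given duration."""
    return duration_s * rate * width * channels

total = pcm_bytes(10)          # raw PCM bytes in 10 seconds
full_chunks = total // CHUNK   # complete 2048-byte flushes one could expect

print(total, full_chunks)
```

If ffmpeg honored the desired flush granularity, a poller watching the file size should see it grow in steps of roughly this chunk size rather than in multi-second bursts.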

node.js live streaming ffmpeg stdout to res

让人想犯罪 __ submitted on 2020-05-12 10:55:33

Question: I want node.js to convert an extremely long audio file to mp3, and the moment data becomes available on stdout, node.js should send it to the client for them to play. I've written the following, and while it works, the HTML5 audio/video tag waits until ffmpeg is 100% done transcoding, whereas I want to start playing the video while ffmpeg is doing its thing.

var ffmpeg = childProcess.spawn('ffmpeg', [
    '-i', params.location, // location of the specified media file
    '-f', 'mp3',
    'pipe:1'
]);
res
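The core of streaming-while-transcoding is forwarding each chunk downstream as soon as it arrives instead of buffering the whole output. A minimal Python sketch of that relay loop (over any file-like stdout; `send` stands in for whatever writes to the client):

```python
import io

def relay(stdout, send, chunk_size=4096):
    """Forward transcoder output downstream chunk-by-chunk as it arrives,
    instead of waiting for the whole file. `stdout` is any file-like object
    (e.g. a subprocess's stdout); `send` is whatever writes to the client."""
    while True:
        chunk = stdout.read(chunk_size)
        if not chunk:          # EOF: transcoder finished
            break
        send(chunk)            # hand each chunk downstream immediately

# Usage with an in-memory stand-in for ffmpeg's stdout:
out = []
relay(io.BytesIO(b"x" * 10000), out.append, chunk_size=4096)
```

In Node itself the idiomatic equivalent is `ffmpeg.stdout.pipe(res)` with a suitable `Content-Type` header and no `Content-Length`, so the response is sent chunked as data arrives.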

Objective c: Send audio data in rtp packet via socket

自古美人都是妖i submitted on 2020-04-30 07:39:27

Question: In my app, I have to capture the microphone and send the audio data in RTP packets. But I only find examples of receiving RTP data, like "iOS RTP live audio receiving", or unanswered questions. I used the following code with AsyncUdpSocket to send audio data, but it wasn't wrapped in an RTP packet. Is there any library to wrap my audio data into RTP packets? Initial AsyncUdpSocket:

udpSender = [[GCDAsyncUdpSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
NSError *error;
[udpSender connectToHost:@"192.168.1.29
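An RTP packet is just a fixed 12-byte header (RFC 3550) in front of the payload, so it can also be built by hand without a library. A Python sketch of the packing (field values in the usage line are illustrative, not from the question):

```python
import struct

def rtp_packet(payload: bytes, payload_type: int, seq: int,
               timestamp: int, ssrc: int, marker: bool = False) -> bytes:
    """Prepend the fixed 12-byte RTP header (RFC 3550) to a payload.
    Fixed fields here: version=2, no padding, no extension, no CSRCs."""
    byte0 = 2 << 6                                    # V=2, P=0, X=0, CC=0
    byte1 = (0x80 if marker else 0) | (payload_type & 0x7F)
    header = struct.pack('!BBHII', byte0, byte1,
                         seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload

# e.g. one 20 ms G.711 mu-law frame (payload type 0, 160 samples at 8 kHz):
pkt = rtp_packet(b'\xff' * 160, payload_type=0, seq=1, timestamp=160, ssrc=0x1234)
```

The sequence number increments per packet and the timestamp advances by the number of samples per packet; the resulting bytes are what would be handed to the UDP socket's send call.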

Convert 8kHz mulaw to 16KHz PCM in real time

最后都变了- submitted on 2020-04-11 04:36:23

Question: In my POC I'm receiving a conversation streamed from Twilio as 8 kHz mu-law, and I want to transcribe it using Amazon Transcribe, which needs the audio as 16 kHz PCM. I found here how to convert a file, but failed to do this for a stream... The code for a file is:

File sourceFile = new File("<Source_Path>.wav");
File targetFile = new File("<Destination_Path>.wav");
AudioInputStream sourceAudioInputStream = AudioSystem.getAudioInputStream(sourceFile);
AudioInputStream
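Mu-law expansion itself is stateless, so it works chunk-by-chunk on a live stream with no file needed. A Python sketch of the G.711 mu-law-to-PCM step (the 8 kHz to 16 kHz doubling would still need interpolation or a resampler on top of this):

```python
def ulaw_to_pcm16(sample: int) -> int:
    """Expand one 8-bit G.711 mu-law byte to a signed 16-bit PCM value."""
    sample = ~sample & 0xFF            # mu-law bytes are stored inverted
    sign = sample & 0x80
    exponent = (sample >> 4) & 0x07
    mantissa = sample & 0x0F
    pcm = (((mantissa << 3) + 0x84) << exponent) - 0x84
    return -pcm if sign else pcm

def decode_chunk(ulaw_bytes: bytes) -> list:
    """Decode a chunk of mu-law bytes; stateless, so each incoming
    stream chunk can be converted independently as it arrives."""
    return [ulaw_to_pcm16(b) for b in ulaw_bytes]
```

The expansion is per-byte, which is why the streaming case is no harder than the file case once the conversion is done in code rather than through a file-based API.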

Is it possible to splice advertisements or messages dynamically into an MP3 file via a standard GET request?

随声附和 submitted on 2020-03-26 05:21:24

Question: Say you have an MP3 file that's 60,000,000 bytes, and you also have an MP3 advertisement that's 500,000 bytes, both encoded at the same bit rate. Would it be possible, using an nginx or Apache module, to change the MP3 "Content-Length" header value to 60,500,000 and then handle incoming "Range" requests so the first 500,000 bytes return the advertisement audio, and any range request beyond 500,000 returns the regular audio file with a 500,000-byte offset? Or is it
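The byte arithmetic behind this scheme is easy to express. A sketch mapping an offset in the virtual (ad + content) resource back to the underlying file; sizes come from the question, and the file names are hypothetical placeholders:

```python
AD_SIZE = 500_000          # advertisement MP3, served first
CONTENT_SIZE = 60_000_000  # original MP3
VIRTUAL_SIZE = AD_SIZE + CONTENT_SIZE  # advertised Content-Length: 60,500,000

def map_offset(virtual_offset: int):
    """Map a byte offset in the spliced virtual resource to
    (source_file, offset_within_that_file). File names are illustrative."""
    if virtual_offset < AD_SIZE:
        return ("ad.mp3", virtual_offset)
    return ("content.mp3", virtual_offset - AD_SIZE)
```

A range request spanning the 500,000-byte boundary would have to be split into two reads, one from each file, concatenated in the response body.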

Streaming server or http server

这一生的挚爱 submitted on 2020-02-25 22:36:31

Question: We're considering using a media server to build our on-premise media service. We're focusing only on the video- and audio-on-demand use case; live streaming is out of scope for now. That is, we need to serve pre-recorded videos and audios with good performance. We've played with the Ant Media community server, but we have some issues we haven't quite figured out. We've tested two scenarios: to serve a video hosted on a plain HTTP server (httpd), and to serve a video behind