audio-streaming

Web Audio API Stream: why isn't dataArray changing?

Submitted by ♀尐吖头ヾ on 2019-12-10 12:08:20
Question: EDIT 2: solved. See answer below. EDIT 1: I changed my code a little, added a gain node, and moved a function. I also found that IF I use the microphone, it works. It still doesn't work with the USB audio input. Any ideas? This is my current code:

    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    window.onload = function(){
        var audioContext = new AudioContext();
        var analyser = audioContext.createAnalyser();
        var gainNode = audioContext.createGain();
        navigator.mediaDevices
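A common first step when debugging a "dataArray never changes" symptom is to check whether the analyser is actually receiving signal: with getByteTimeDomainData, a silent or disconnected input yields a buffer where every sample sits at the midpoint value 128. A minimal sketch of that check (the helper name isSilent is hypothetical):

```javascript
// Returns true when a byte time-domain buffer contains no signal:
// getByteTimeDomainData centres silence at 128, so a flat-128 buffer
// means the analyser is not receiving audio (e.g. the source node was
// never connected, or the input device is delivering nothing).
function isSilent(dataArray) {
  return dataArray.every((sample) => sample === 128);
}

// In the browser you would fill the buffer each animation frame:
//   analyser.getByteTimeDomainData(dataArray);
//   if (isSilent(dataArray)) console.log("no signal reaching analyser");
```

If the buffer stays flat at 128, the problem is upstream of the analyser (device selection or node wiring), not in the drawing code.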

Content-Range working in Safari but not in Chrome

Submitted by 依然范特西╮ on 2019-12-10 11:40:04
Question: I'm streaming audio files from a Node.js Express server with Content-Range headers plus no-caching headers. This works fine in the latest Safari, but not in Chrome. When streaming the full audio file with HTTP 200, my headers were { 'Content-Length': 4724126, 'Content-Type': 'audio/mpeg', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'POST, GET, OPTIONS', 'Access-Control-Allow-Headers': 'POST, GET, OPTIONS', Expires: 0, Pragma: 'no-cache', 'Cache-Control
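Chrome is stricter than Safari about partial content: when the client sends a Range header, the server should answer 206 Partial Content with a well-formed Content-Range and Accept-Ranges: bytes. A minimal sketch of the byte-range arithmetic such a handler needs (the function name parseRange is hypothetical; the Express plumbing around it is omitted):

```javascript
// Parses a "bytes=start-end" Range header against a known file size and
// returns the slice plus the headers a 206 response needs.
// Returns null for absent or unsatisfiable ranges (serve a plain 200 instead).
function parseRange(rangeHeader, fileSize) {
  const match = /^bytes=(\d*)-(\d*)$/.exec(rangeHeader || "");
  if (!match || (match[1] === "" && match[2] === "")) return null;
  let start, end;
  if (match[1] === "") {
    // Suffix range "bytes=-500" means the last 500 bytes.
    start = Math.max(fileSize - Number(match[2]), 0);
    end = fileSize - 1;
  } else {
    start = Number(match[1]);
    // An open or oversized end is clamped to the last byte.
    end = match[2] === "" ? fileSize - 1 : Math.min(Number(match[2]), fileSize - 1);
  }
  if (start >= fileSize || start > end) return null;
  return {
    start,
    end,
    headers: {
      "Content-Range": `bytes ${start}-${end}/${fileSize}`,
      "Accept-Ranges": "bytes",
      "Content-Length": end - start + 1,
    },
  };
}
```

With this shape, the handler streams only bytes start..end and sets status 206; Content-Length must be the slice length, not the full file size.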

Streaming audio and video

Submitted by 懵懂的女人 on 2019-12-10 10:34:43
Question: I've been trying for a while but struggling. I have two projects: stream audio to a server for distribution over the web, and stream audio and video from a webcam to a server for distribution over the web. I have thus far tried ffmpeg and ffserver, PulseAudio, mjpegstreamer (I got this working, but with no audio) and IceCast, all with little luck. While I'm sure this is likely my fault, I was wondering if there are any more options? I've spent a while experimenting with Linux options and was also wondering
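One more option worth trying on Linux: ffmpeg alone can capture the webcam and microphone and segment the result as HLS, which any plain web server can then distribute as static files, with no ffserver or IceCast needed. A command sketch, assuming a v4l2 webcam and ALSA capture; the device names and output path are illustrative only:

```shell
# Capture webcam video (v4l2) and microphone audio (ALSA), encode with
# x264 + AAC, and write rolling HLS segments under a web server's docroot.
ffmpeg -f v4l2 -i /dev/video0 -f alsa -i default \
  -c:v libx264 -preset veryfast -tune zerolatency \
  -c:a aac -b:a 128k \
  -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments \
  /var/www/stream/live.m3u8
```

Browsers then play /stream/live.m3u8 directly (Safari) or via an HLS JavaScript player; the trade-off versus RTMP/IceCast is a few seconds of segment latency.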

Sound card detection for web

Submitted by 北城余情 on 2019-12-09 07:28:42
Question: We need a heads-up for a hobby web project. At this stage we want to detect the client's sound card and direct whatever comes from the sound card to the server for audio processing. Low latency is an important issue for us. So we need your suggestions on language, library, etc. If you can give us some information about the big picture, we can study it on our own. Answer 1: Capturing Audio Client-Side You can use the Web Audio API along with getUserMedia (generally considered part of the WebRTC
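Detecting the client's sound cards from a web page is done with navigator.mediaDevices.enumerateDevices(); the filtering step itself is plain array logic. A minimal sketch (the helper name audioInputs is hypothetical):

```javascript
// Keeps only audio capture devices from an enumerateDevices() result.
// Note: device labels come back empty until the user has granted
// microphone permission via getUserMedia.
function audioInputs(devices) {
  return devices.filter((device) => device.kind === "audioinput");
}

// Browser usage:
//   const devices = await navigator.mediaDevices.enumerateDevices();
//   console.log(audioInputs(devices).map((d) => d.label));
```

Once a deviceId is chosen, pass it to getUserMedia's audio constraints to capture from that specific sound card.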

Use Audio Unit I/O to create audio on the fly?

Submitted by ↘锁芯ラ on 2019-12-09 00:55:03
Question: I am doing a POC in which I need to create an app that captures input from the iPhone mic and routes the output to a Bluetooth headset/speaker. I referred to the following code: http://www.stefanpopp.de/2011/capture-iphone-microphone/ The code works flawlessly, but it produces the output via the in-call speaker. Can anyone suggest where I should edit the code to re-route the output to Bluetooth speakers? Source: https://stackoverflow.com/questions/20393249/use-audio-unit-i-o-to-create-audio-on-the-fly

Choppy/inaudible playback with chunked audio through Web Audio API

Submitted by 穿精又带淫゛_ on 2019-12-08 23:17:52
Question: I brought this up in my last post, but since it was off topic from the original question I'm posting it separately. I'm having trouble getting my transmitted audio to play back through Web Audio the way it would sound in a media player. I have tried two different transmission protocols, binaryjs and socketio, and neither makes a difference when playing through Web Audio. To rule out the transport of the audio data as the issue, I created an example that sends the data back
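Choppy chunked playback usually comes from starting each decoded chunk at the context's current time instead of scheduling it back-to-back with the previous one; any gap between chunks is audible as stutter. The scheduling arithmetic can be isolated in a tiny helper (the name nextStartTime is hypothetical; currentTime is passed in so the logic can be tested outside a browser):

```javascript
// Returns when the next chunk should start so playback is gapless:
// never earlier than "now", and never before the previous chunk ends.
function nextStartTime(currentTime, scheduledEnd) {
  return Math.max(currentTime, scheduledEnd);
}

// Browser usage with Web Audio (sketch):
//   const when = nextStartTime(ctx.currentTime, scheduledEnd);
//   source.start(when);
//   scheduledEnd = when + audioBuffer.duration;
```

Keeping a running scheduledEnd and always starting the next AudioBufferSourceNode at that time is what makes chunked playback sound like a single continuous stream.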

Android MediaPlayer takes long time to prepare and buffer

Submitted by 倖福魔咒の on 2019-12-08 22:46:27
Question: My application takes a long time to prepare and buffer an audio stream. I have read this question: Why does it take so long for Android's MediaPlayer to prepare some live streams for playback? However, it only says that others have experienced the issue; it does not state how to improve it. I am experiencing this on all versions of Android, tested from 2.2 to 4.1.2. The streams are at a bit-rate suitable for mobile and 3G connections. The same stream takes less than a second to start

Getting stuttering during rendering of my DirectShow filter despite output file being “smooth”

Submitted by 大城市里の小女人 on 2019-12-08 19:28:31
I have a DirectShow application written in Delphi 6 using the DSPACK component library. I have two filter graphs that cooperate with each other. The primary filter graph has this structure: a Capture Filter with a 100 ms buffer size, connected to a Sample Grabber Filter. The "secondary" filter graph has this structure: a custom Push Source Filter that accepts audio directly into an audio buffer store it manages, connected to a Render Filter. The Push Source Filter uses an Event to control delivery of audio: its FillBuffer() call waits on the Event, and the Event is signaled when new audio data

How to Convert audio .mp3 file to String and vice versa?

Submitted by 流过昼夜 on 2019-12-08 17:00:45
Is it possible to convert an audio .mp3 file to string data to send to the server? The server will return string data to my app; I want to convert that data back to an .mp3 file and play the audio. I am using this code to convert the mp3 file to string data:

    public static String readFileAsString(String filePath) throws java.io.IOException {
        BufferedReader reader = new BufferedReader(new FileReader(filePath));
        String line, results = "";
        while( ( line = reader.readLine() ) != null) {
            results += line;
        }
        reader.close();
        return results;
    }

I don't know how to get my mp3 file back from the converted string data.

Send recorded audio as microphone input

Submitted by 删除回忆录丶 on 2019-12-08 17:00:18
Question: What must I write to send a recorded audio file as the microphone input on Android programmatically? Example: the user records "hello world". He can then play the recording in a call. Answer 1: Once you have a recorded file, you can open it as an InputStream, or in any other way. BUT if you are specifically looking for something like injecting the audio into a running call, that is not possible; it is protected at the OS level, unless you are dealing with custom ROMs and modified kernels, which is not official