audio-streaming

In .NET, how do I access an audio input stream?

Submitted by て烟熏妆下的殇ゞ on 2019-12-13 15:27:29
Question: Using VB.NET, I want to analyze audio input streams obtained from web radio stations (e.g. "Flower Power Radio" via TuneIn). However, I am struggling to find a suitable starting point. Obviously, when a web address (as in the example above) is entered into a web browser, a stream starts flowing and is interpreted by, in this case, said web browser. But my planned experiments do not involve a browser; and since I do not want to replay the received audio …
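The question is about VB.NET, but the underlying idea is language-neutral: open an HTTP connection to the station URL and read raw byte chunks from the response instead of letting a browser consume them. A minimal Python sketch of that loop (the station URL is an assumption; here an in-memory buffer stands in for the network response):

```python
import io
import urllib.request

def read_stream_chunks(stream, chunk_size=4096):
    """Yield fixed-size byte chunks from a file-like stream until EOF."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# With a live station you would pass the result of
# urllib.request.urlopen(station_url) as `stream`; for illustration,
# an in-memory buffer stands in for the HTTP response body.
fake_response = io.BytesIO(b"\x00" * 10000)
chunks = list(read_stream_chunks(fake_response))
```

Each chunk can then be fed to an analysis routine rather than a playback device, which matches the "no replay" requirement.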

How to implement a music player using AVPlayer in Swift

Submitted by 自闭症网瘾萝莉.ら on 2019-12-13 09:44:58
Question: I need to implement a streaming music player using AVPlayer in Swift, where the music files are stored on a server. Please help me: how do I implement this? Answer 1: According to this: http://www.techotopia.com/index.php/IOS_8_Video_Playback_using_AVPlayer_and_AVPlayerViewController you could add the following code to your ViewController, in viewDidLoad or viewWillAppear, depending on whether the view will be shown repeatedly. If yes, add it in viewWillAppear: let player = AVPlayer(URL: url …

Python Pyaudio — How to play a file streamed via HTTP

Submitted by ﹥>﹥吖頭↗ on 2019-12-13 05:35:50
Question: I am trying to figure out how to play an MP3 that exists on my server, served over HTTP. I tried pyglet, but there were too many issues with AVBin to make that work (something about dividing by zero in the source code). So I decided to try PyAudio, but I can't figure out how to stream an MP3 source over HTTP with it. All the examples use WAV files, and I need examples rather than the docs, or I'm afraid I'll have to figure out the particulars of how audio works at the lowest …
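The usual sticking point here is that PyAudio only plays raw PCM, so the MP3 must be decoded first. One common approach (a sketch, assuming ffmpeg is installed and the URL is a placeholder) is to have ffmpeg decode the HTTP stream to s16le PCM on stdout and write that to a PyAudio output stream:

```python
import subprocess  # used only in the commented playback sketch below

def mp3_decode_args(url, rate=44100, channels=2):
    """Build an ffmpeg argv that decodes an HTTP MP3 stream to raw
    signed 16-bit little-endian PCM on stdout (what PyAudio can play)."""
    return ["ffmpeg", "-i", url,
            "-f", "s16le", "-acodec", "pcm_s16le",
            "-ar", str(rate), "-ac", str(channels),
            "pipe:1"]

cmd = mp3_decode_args("http://example.com/song.mp3")  # placeholder URL

# Playback sketch (requires ffmpeg on PATH, `pip install pyaudio`,
# and a reachable URL):
# proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
# pa = pyaudio.PyAudio()
# out = pa.open(format=pyaudio.paInt16, channels=2, rate=44100, output=True)
# while True:
#     data = proc.stdout.read(4096)
#     if not data:
#         break
#     out.write(data)
```

This sidesteps decoding MP3 in pure Python entirely; PyAudio never sees anything but PCM.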

Applying -filter_complex fails with something related to audio

Submitted by 柔情痞子 on 2019-12-13 04:43:35
Question: I finally managed to build ffmpeg as detailed here: https://enoent.fr/blog/2014/06/20/compile-ffmpeg-for-android/ and in the end I have an ffmpeg library which accepts command arguments. I am trying to apply a watermark image over a video, so I am preparing this ffmpeg command: ffmpeg -i input.avi -i logo.png -filter_complex 'overlay=10:main_h-overlay_h-10' output.avi I first tried it on Windows using ffmpeg.exe and the result was as expected. I then tried it on Android …
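A frequent cause of "works in a shell, fails on Android" with this exact command is quoting: the single quotes around the overlay filter are consumed by the shell on Windows, but when arguments are passed as a list (Android's exec, or Python's subprocess, used here for illustration), literal quote characters reach ffmpeg and break filter parsing. A sketch of the argv without shell quoting:

```python
def watermark_args(video_in, logo, video_out):
    """ffmpeg argv overlaying `logo` 10 px from the bottom-left corner.
    Note: no quote characters around the filter expression. In a shell
    the single quotes are stripped before ffmpeg sees them; in an argv
    list they would be passed through literally and break the filter."""
    return ["ffmpeg", "-i", video_in, "-i", logo,
            "-filter_complex", "overlay=10:main_h-overlay_h-10",
            video_out]

args = watermark_args("input.avi", "logo.png", "output.avi")
```

Whether this is the asker's actual failure is an assumption, but it is worth ruling out before suspecting the audio streams themselves.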

Stream audio from Android to Android

Submitted by 喜夏-厌秋 on 2019-12-13 04:42:28
Question: I want to stream audio from my Android phone via Wi-Fi, and all the other Android phones running my app on the same network should receive the audio. Is there a way to send buffers from Android? My code, which I copied from some link, is not working:

public class MainActivity extends Activity {
    private Button startButton, stopButton, listenButton;
    public byte[] buffer;
    public static DatagramSocket socket;
    AudioRecord recorder;
    private int sampleRate = 8000;
    private int channelConfig = …
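The transport layer the snippet is reaching for is plain UDP datagrams: the recorder fills a byte buffer and sends it to the listeners' port. A language-neutral sketch of that send/receive pair in Python (loopback stands in for the Wi-Fi network; on Android the fake data would come from AudioRecord.read()):

```python
import socket

# Receiver side (each listening phone runs the equivalent of this):
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # 0 = let the OS pick a free port
port = receiver.getsockname()[1]

# Sender side: on Android, `fake_audio` would be the byte buffer filled
# by AudioRecord.read(); here it is stand-in data.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
fake_audio = bytes(range(256)) * 4
sender.sendto(fake_audio, ("127.0.0.1", port))

data, addr = receiver.recvfrom(4096)
sender.close()
receiver.close()
```

To reach every phone on the network rather than one peer, the sender would either unicast to each known address or enable SO_BROADCAST and send to the subnet's broadcast address; keeping each datagram at or below the AudioRecord read size avoids fragmentation.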

node: serving unique audio streams causing possible EventEmitter memory leak errors

Submitted by 让人想犯罪 __ on 2019-12-13 03:37:11
Question: In the following example, what I'm trying to do is initialize a unique stream for each client that connects to the server and serve that stream to the client through HTML5's audio tag. Because I want a live experience, I'm attempting to write the latest buffer from my transcoding to the output of the audio stream. For testing purposes I'm grabbing NPR's public stream, transcoding it via transcode.sh to Speex, and serving it to my clients. The issue is that when I connect, the transcoded stream …

WaveSurfer JS cannot generate a graph in Firefox for a specific MP3 audio file

Submitted by ∥☆過路亽.° on 2019-12-13 03:35:07
Question: We are having trouble drawing the audio visualization (graph) with WaveSurfer JS in Firefox for some specific formats of MP3 file. It always gives us an error like: "The buffer passed to decodeAudioData contains an unknown content type." But the same file runs in Chrome without any problem. After investigating, we found that decodeAudioData() is used in WaveSurfer JS, and it is what generates the error while decoding the audio file data contained in an ArrayBuffer. Since we don't have an …

NodeJs: How to pipe two streams into one spawned process stdin (i.e. ffmpeg) resulting in a single output

Submitted by 偶尔善良 on 2019-12-12 19:16:45
Question: In order to convert PCM audio to MP3, I'm using the following:

function spawnFfmpeg() {
    var args = ['-f', 's16le', '-ar', '48000', '-ac', '1', '-i', 'pipe:0',
                '-acodec', 'libmp3lame', '-f', 'mp3', 'pipe:1'];
    var ffmpeg = spawn('ffmpeg', args);
    console.log('Spawning ffmpeg ' + args.join(' '));
    ffmpeg.on('exit', function (code) {
        console.log('FFMPEG child process exited with code ' + code);
    });
    ffmpeg.stderr.on('data', function (data) {
        console.log('Incoming data: ' + data);
    });
    return ffmpeg; …
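The same spawn can be sketched in Python with subprocess (shown for illustration; ffmpeg on PATH is an assumption). The key constraint the question runs into is that `pipe:0` is a single input: two source streams cannot be piped into one stdin concurrently; they must either be written sequentially or mixed into one PCM stream before ffmpeg sees them.

```python
import subprocess

def ffmpeg_pcm_to_mp3_args(rate=48000, channels=1):
    """Same argv as the Node spawn: raw s16le PCM on stdin, MP3 on stdout."""
    return ["ffmpeg", "-f", "s16le", "-ar", str(rate), "-ac", str(channels),
            "-i", "pipe:0", "-acodec", "libmp3lame", "-f", "mp3", "pipe:1"]

args = ffmpeg_pcm_to_mp3_args()

# Spawning sketch (requires ffmpeg on PATH; pcm_chunk_a/b are hypothetical):
# proc = subprocess.Popen(args, stdin=subprocess.PIPE,
#                         stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
# proc.stdin.write(pcm_chunk_a)   # writes to one stdin must be sequential,
# proc.stdin.write(pcm_chunk_b)   # or the sources mixed beforehand
```

Mixing two live PCM sources before the pipe (e.g. summing samples) is the usual answer when "two streams, one output" means simultaneous audio rather than concatenation.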

Very Basic JS Coding, and SoundManager2 or not?

Submitted by 青春壹個敷衍的年華 on 2019-12-12 18:47:05
Question: I would like to include an audio (and possibly video) player on my website with the following attributes: must be placeable via a <div>; styled via CSS; can read all ID3 info; can pull the file from a database (probably GoDaddy's Easy Database); no Flash; transferable to smartphones, etc. I have been herded toward SoundManager2, which appears to fit the bill, but I seem to be having real trouble just making a clickable image begin playing my MP3. I have zero JS skills, so I am going from silly basic …

How to append .wav files to each other

Submitted by 一世执手 on 2019-12-12 15:42:33
Question: I am studying C# on my own by reading books and watching tutorials, and I decided to build a small project at the same time to get more experience and harden my knowledge. I am trying to create a text-to-speech program in Georgian (my language). I have written the same program in Java and want to port it to C#, but I couldn't understand how to append different sounds to each other. For example, when my program wants to say the word "general", it divides the word into parts like "ge" …
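Whatever the language, appending WAVs comes down to one rule: copy the PCM frames of each file, in order, into one output that shares the same channel count, sample width, and rate; you cannot just concatenate the files byte-for-byte, because each has its own header. A sketch with Python's standard wave module (the inputs are generated in memory for the demo):

```python
import io
import wave

def make_tone(n_frames=1000):
    """Create an in-memory mono 16-bit 8 kHz WAV holding n_frames of silence."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(8000)
        w.writeframes(b"\x00\x00" * n_frames)
    buf.seek(0)
    return buf

def append_wavs(sources, dest):
    """Concatenate WAV sources (file paths or file objects) into dest.
    All inputs must share the same channel count, width, and rate."""
    with wave.open(dest, "wb") as out:
        params_set = False
        for src in sources:
            with wave.open(src, "rb") as w:
                if not params_set:
                    out.setparams(w.getparams())
                    params_set = True
                # Copy only the audio frames; headers are regenerated.
                out.writeframes(w.readframes(w.getnframes()))

combined = io.BytesIO()
append_wavs([make_tone(1000), make_tone(500)], combined)
combined.seek(0)
with wave.open(combined, "rb") as w:
    total = w.getnframes()
```

The C# equivalent would read each input past its 44-byte header and write the data chunks after a single recalculated header; the frame-copying logic is identical.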