audio-streaming

Getting metadata from an audio stream

可紊 submitted on 2019-12-03 03:20:01
Question: I would like to get the file name and, if possible, the album image from a streaming URL in an AVPlayerItem that I am playing with AVQueuePlayer, but I don't know how to go about doing this. Also, if it turns out that my streaming URL doesn't have any metadata, can I put metadata in my NSURL* before passing it to the AVPlayerItem? Thanks. Answer 1: Well, I am surprised no one has answered this question. In fact no one has answered any of my other questions. Makes me wonder how much knowledge people in here
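For Shoutcast-style radio streams, the track title is often delivered in-band as ICY metadata rather than attached to the URL: when the client sends the header "Icy-MetaData: 1", the server interleaves a padded metadata block (e.g. "StreamTitle='Artist - Song';") into the audio every "icy-metaint" bytes. A minimal, platform-neutral Python sketch of parsing such a block (the helper name and the synthetic block are illustrative, not from the original question):

```python
def parse_icy_metadata(block: bytes) -> dict:
    """Parse a raw, NUL-padded ICY metadata block into key/value pairs."""
    text = block.rstrip(b"\x00").decode("utf-8", errors="replace")
    fields = {}
    for part in text.split(";"):
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key.strip()] = value.strip().strip("'")
    return fields

# Example with a synthetic block, padded to a multiple of 16 bytes
# (ICY metadata length is always a multiple of 16):
raw = b"StreamTitle='Some Artist - Some Song';".ljust(48, b"\x00")
print(parse_icy_metadata(raw)["StreamTitle"])  # Some Artist - Some Song
```

On iOS itself, observing an AVPlayerItem's timed metadata (as in a later entry on this page) is the usual route; the sketch above only illustrates what the in-band metadata looks like on the wire.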

How to customize MPVolumeView?

橙三吉。 submitted on 2019-12-03 02:48:04
Question: I have tried many methods to implement a regular UISlider and control the device volume, but they all rely on native C functions, which results in many untraceable bugs. I tried MPVolumeView and it works like a charm; it even controls the device volume after you close the app, just like the iPod app. My question is: is there any way to customize the MPVolumeView with specific colors and images, just like a UISlider? NOTE: I want a legal method without using private undocumented APIs. UPDATE As per

Bluetooth audio streaming between android devices

别说谁变了你拦得住时间么 submitted on 2019-12-03 02:25:50
Question: I did some research on the same topic and found that Android devices are A2DP sources, and audio can be streamed only from an A2DP source to an A2DP sink. An A2DP sink can be a Bluetooth headset or a Bluetooth speaker. But my question is: how does the Android app named "Bluetooth Music Player" work? It allows streaming from one mobile to another, so in this case the listening mobile device must act as a sink. How is this possible? Are they using some other profile instead of A2DP? Ok, that may

understanding getByteTimeDomainData and getByteFrequencyData in web audio

青春壹個敷衍的年華 submitted on 2019-12-03 02:17:00
The documentation for both of these methods is very generic wherever I look. I would like to know what exactly I'm looking at in the returned arrays I'm getting from each method. For getByteTimeDomainData, what time period is covered with each pass? I believe most oscilloscopes cover a 32-millisecond span for each pass. Is that what is covered here as well? For the actual element values themselves, the range seems to be 0-255. Is this equivalent to -1 to +1 volts? For getByteFrequencyData the frequencies covered are based on the sampling rate, so each index is an actual frequency, but what
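The relationships asked about can be worked out numerically. Per the Web Audio spec, getByteTimeDomainData fills fftSize bytes with waveform samples mapped from [-1, 1] to [0, 255] (128 is the zero line; the values are normalized amplitude, not volts), the window covers fftSize / sampleRate seconds (so not a fixed 32 ms), and each of the frequencyBinCount bins spans (sampleRate / 2) / frequencyBinCount Hz. A short Python sketch of the arithmetic (the concrete fftSize and sample rate are just typical values):

```python
fft_size = 2048          # a common AnalyserNode fftSize
sample_rate = 44100.0    # typical AudioContext sample rate

# Time span covered by one getByteTimeDomainData pass:
window_ms = fft_size / sample_rate * 1000
print(f"{window_ms:.1f} ms per pass")   # ~46.4 ms, not a fixed 32 ms

# Byte value -> normalized sample in [-1, 1) (128 maps to silence):
def byte_to_sample(v: int) -> float:
    return (v - 128) / 128.0

print(byte_to_sample(128))  # 0.0
print(byte_to_sample(255))  # 0.9921875

# Frequency width of each getByteFrequencyData bin:
bin_count = fft_size // 2             # frequencyBinCount == fftSize / 2
hz_per_bin = (sample_rate / 2) / bin_count
print(f"{hz_per_bin:.2f} Hz per bin")  # 21.53 Hz
```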

Playing audio from a continuous stream of data (iOS)

。_饼干妹妹 submitted on 2019-12-02 21:10:33
I've been banging my head against this problem all morning. I have set up a connection to a data source which returns audio data (it is a recording device, so there is no set length on the data; the data just streams in, as if you had opened a stream to a radio station) and I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data that is coming in, so I do not want to queue a few minutes or anything; I want to use the data I am receiving at that exact moment and play it. Now, I have been searching all morning and found different examples, but none were

how to speed up Google Cloud Speech

元气小坏坏 submitted on 2019-12-02 18:49:47
Question: I am using a microphone which records sound through a browser, converts it into a file, and sends the file to a Java server. Then my Java server sends the file to the Cloud Speech API and gives me the transcription. The problem is that the transcription is super long (around 3.7 s for 2 s of dialog), so I would like to speed it up. The first thing to do is to stream the data (if I start the transcription at the beginning of the record. The problem is that I don't really
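Cloud Speech-to-Text does offer a streaming mode (StreamingRecognize) that accepts audio in small chunks, so transcription can begin while recording is still in progress instead of after a finished file is uploaded. A hedged Python sketch of the client-side chunking (the chunk size and helper name are assumptions; the actual client call is shown only in a comment):

```python
def audio_chunks(audio: bytes, sample_rate: int = 16000,
                 bytes_per_sample: int = 2, chunk_ms: int = 100):
    """Yield successive chunks of roughly chunk_ms milliseconds of audio."""
    chunk_size = sample_rate * bytes_per_sample * chunk_ms // 1000
    for offset in range(0, len(audio), chunk_size):
        yield audio[offset:offset + chunk_size]

# With the official Python client, such a generator would feed
# SpeechClient().streaming_recognize(...), wrapping each chunk in a
# StreamingRecognizeRequest(audio_content=chunk); in a real app each
# chunk would come from the microphone as it is captured.

one_second = bytes(16000 * 2)          # 1 s of silent 16-bit mono audio
chunks = list(audio_chunks(one_second))
print(len(chunks), len(chunks[0]))     # 10 chunks of 3200 bytes each
```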

Python: realtime audio streaming with PyAudio (or something else)?

爷，独闯天下 submitted on 2019-12-02 18:30:25
Currently I'm using NumPy to generate the WAV file from a NumPy array. I wonder if it's possible to play the NumPy array in real time before it's actually written to the hard drive. All examples I found using PyAudio rely on writing the NumPy array to a WAV file first, but I'd like to have a preview function that just spits the NumPy array out to the audio output. It should be cross-platform, too. I'm using Python 3 (Anaconda distribution). This has worked, thanks for the help:

def generate_sample(self, ob, preview):
    print("* Generating sample...")
    tone_out = array(ob, dtype=int16)
    if preview:
        print("
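The snippet above cuts off mid-line, but the approach it points at is writing int16 samples straight to an output stream. A self-contained sketch that generates such samples with only the standard library; the PyAudio playback itself (assumed installed) is the standard blocking-write pattern and is shown in comments so the sketch runs anywhere:

```python
import math
import struct

SAMPLE_RATE = 44100

def tone_bytes(freq: float, duration: float, volume: float = 0.5) -> bytes:
    """Generate a sine tone as raw little-endian int16 mono PCM bytes."""
    n = int(SAMPLE_RATE * duration)
    amp = int(volume * 32767)
    samples = (int(amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
               for i in range(n))
    return struct.pack(f"<{n}h", *samples)

data = tone_bytes(440.0, 0.25)
print(len(data))  # 0.25 s of mono int16 audio = 22050 bytes

# Previewing without writing a WAV file first (PyAudio, assumed installed):
#   import pyaudio
#   p = pyaudio.PyAudio()
#   stream = p.open(format=pyaudio.paInt16, channels=1,
#                   rate=SAMPLE_RATE, output=True)
#   stream.write(data)   # blocks until the buffer has been played
#   stream.stop_stream(); stream.close(); p.terminate()
```

A NumPy int16 array can be passed the same way via `arr.tobytes()`, which avoids the intermediate WAV file entirely.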

is it possible to read metadata using HTTP live streaming in the iPhone SDK

非 Y 不嫁゛ submitted on 2019-12-02 17:45:23
When playing a live stream using the HTTP Live Streaming method, is it possible to read the current metadata (e.g. title and artist)? This is for an iPhone radio app. m8labs: Not sure whether this question is still relevant for its author, but maybe it will help someone. After two days of pain I found out that it's quite simple. Here is the code that works for me:

AVPlayerItem* playerItem = [AVPlayerItem playerItemWithURL:[NSURL URLWithString:<here your http stream url>]];
[playerItem addObserver:self forKeyPath:@"timedMetadata" options:NSKeyValueObservingOptionNew context:nil];
AVPlayer* player = [

How to get duration of Audio Stream and continue audio streaming from any point

廉价感情. submitted on 2019-12-02 17:45:22
Description: I have the following code for an audio player. I can continue audio playback from any position by clicking on the progress bar (between 0 and mediaPlayer.getDuration()). It works perfectly for local audio playback. Problem with audio streaming: when I stream an audio file from an internet server (say an S3 bucket), it starts streaming correctly, but mediaPlayer.getDuration() and mediaPlayer.getCurrentPosition() return wrong values. At the beginning of streaming mediaPlayer.getCurrentPosition() returns 5 hours. Due to this I am not able to continue audio streaming from a specified duration of

Android: Incoming call auto answer, play an audio file

让人想犯罪 __ submitted on 2019-12-02 17:12:56
In Android, at the time of an incoming call, I want to answer it and then, from my app, automatically play an audio file during the call so that the other party hears it. Is this possible? What you are talking about is not exactly possible with Android: Android has no access to the in-call audio stream. Though I can give you a bit of an idea about how to do part of it. First, to intercept the incoming call, you need to register a broadcast receiver, which is invoked whenever a call is received:

public void onReceive(final Context context, Intent intent) {
    TelephonyManager telephonyManager = null;