audio

Browser denying JavaScript play()

Submitted by 谁都会走 on 2020-12-29 08:51:48
Question: I have a page with an input field for scanning products. When a barcode is scanned or a SKU is typed into the field, an AJAX request is made and the application plays either a success or an error sound, depending on the response, using HTMLMediaElement.play(): sounds.error.play(); This was working fine a while ago, but now I get this error: ⚠ Autoplay is only allowed when approved by the user, the site is activated by the user, or media is muted. Followed by: NotAllowedError: The play method is …
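In modern browsers play() returns a Promise, and the autoplay block surfaces as a rejection that can be caught instead of spilling into the console. A minimal sketch (playWithFallback and the onBlocked handler are hypothetical names, standing in for whatever the page uses, e.g. a "click to enable sound" prompt):

```javascript
// Hypothetical helper: wraps HTMLMediaElement.play(), which returns a
// Promise in modern browsers. If autoplay is blocked (NotAllowedError),
// invoke a fallback instead of letting the rejection go unhandled.
function playWithFallback(audio, onBlocked) {
  const result = audio.play();
  if (result === undefined) return Promise.resolve(); // very old browsers
  return result.catch((err) => {
    if (err.name === 'NotAllowedError') {
      onBlocked(err); // e.g. wait for a user gesture, then retry audio.play()
    } else {
      throw err; // some other failure (decode error, etc.)
    }
  });
}
```

Calling play() again from inside a click or keydown handler counts as user activation, so a retry wired to a button or to the scan field's first keypress typically satisfies the policy.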

Java Record / Mix two audio streams

Submitted by 两盒软妹~` on 2020-12-29 07:25:07
Question: I have a Java application that records audio from a mixer and stores it in a byte array, or saves it to a file. What I need is to get audio from two mixers simultaneously and save it to an audio file (I am trying with .wav). The thing is that I can get the two byte arrays, but I don't know how to merge them (by "merge" I don't mean concatenate). To be specific, it is an application that handles conversations over a USB modem, and I need to record them (the streams are the voices for each talking …
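Merging in this sense usually means summing the two waveforms sample by sample. A sketch, under the assumption that both byte arrays hold 16-bit signed little-endian PCM with the same sample rate and channel count (which must hold for the sum to be meaningful):

```java
// Sketch: mix two 16-bit little-endian PCM byte arrays by summing the
// samples and clipping to the signed 16-bit range, so loud passages
// saturate instead of wrapping around.
public class PcmMixer {
    public static byte[] mix(byte[] a, byte[] b) {
        byte[] out = new byte[Math.min(a.length, b.length)];
        for (int i = 0; i + 1 < out.length; i += 2) {
            // Reassemble each little-endian 16-bit sample as a signed value.
            int s1 = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
            int s2 = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
            int sum = s1 + s2;
            // Clip rather than overflow.
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (byte) (sum & 0xFF);
            out[i + 1] = (byte) ((sum >> 8) & 0xFF);
        }
        return out;
    }
}
```

Some implementations average (divide the sum by 2) instead of clipping; that avoids saturation at the cost of halving the level of each voice.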

Create a wav file from blob audio django

Submitted by 寵の児 on 2020-12-27 06:54:19
Question: On the client side, I am sending a blob audio (WAV) file. On the server side, I am trying to convert the blob file to an audio WAV file. I did the following: blob = request.FILES['file'] name = "TEST.wav" audio = wave.open(name, 'wb') audio.setnchannels(1) audio.writeframes(blob.read()) I thought that converting the blob would be similar to converting a blob image to a JPEG file, but I was very incorrect in that assumption. That didn't work; I get an error: "Error: sample width not specified." …
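The error comes from the wave module itself: a Wave_write object refuses writeframes() until the channel count, sample width, and frame rate have all been set. A sketch, assuming the blob holds raw 16-bit mono PCM (the parameter defaults are assumptions and must match how the audio was recorded); if the blob is already a complete WAV file, header included, it can simply be written to disk unchanged instead:

```python
import wave

def save_pcm_as_wav(pcm_bytes, path, channels=1, sampwidth=2, framerate=44100):
    """Hypothetical helper: write raw PCM bytes as a .wav file.

    wave requires nchannels, sampwidth and framerate before
    writeframes(); omitting sampwidth raises the
    "sample width not specified" error from the question.
    """
    with wave.open(path, 'wb') as out:
        out.setnchannels(channels)
        out.setsampwidth(sampwidth)   # 2 bytes per sample = 16-bit audio
        out.setframerate(framerate)
        out.writeframes(pcm_bytes)
```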

Save 16 bit signed PCM audio file on Android from BGX device

Submitted by ▼魔方 西西 on 2020-12-27 05:56:34
Question: I am working on a mobile application which should be able to read some audio data from a device via Bluetooth. The device has a BGX Bluetooth Low Energy module. There is a very nicely documented framework for BGX on the manufacturer's website, and I successfully managed to connect the device and read the audio data. The problem is that the BGXService provides me the data in String format, which is an array of chars, and char is a 16-bit unsigned type in Java. However, the audio data that I want to …
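If each char in the received String really carries one 16-bit sample, the conversion is to reinterpret the unsigned char bits as a signed short and emit little-endian bytes, which is the layout a 16-bit signed PCM WAV body expects. A sketch (whether one char maps to one sample is an assumption about the BGX framing, and the class name is illustrative):

```java
// Sketch: reinterpret a Java String of 16-bit chars as signed 16-bit
// little-endian PCM bytes. The cast to short keeps the bit pattern and
// only changes the interpretation from unsigned to signed.
public class BgxPcm {
    public static byte[] charsToPcm16le(String data) {
        byte[] out = new byte[data.length() * 2];
        for (int i = 0; i < data.length(); i++) {
            short sample = (short) data.charAt(i); // same bits, now signed
            out[2 * i] = (byte) (sample & 0xFF);          // low byte first
            out[2 * i + 1] = (byte) ((sample >> 8) & 0xFF); // then high byte
        }
        return out;
    }
}
```

The resulting byte array can then be wrapped with a standard 44-byte WAV header (or written via AudioTrack) as 16-bit signed PCM.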

How can I determine if a codec / container combination is compatible with FFmpeg?

Submitted by 我的梦境 on 2020-12-26 04:04:48
Question: I'm looking at re-muxing some containers holding audio and video such that I extract the best, first audio stream and store it in a new container where, e.g., only the audio stream is present. The output context for FFmpeg is created like so: AVFormatContext* output_context = NULL; avformat_alloc_output_context2( &output_context, NULL, "mp4", NULL ); I have a shortlist of acceptable outputs, e.g. MP4, M4A, etc., essentially those that are readable by Apple's Audio File Services: …

How to combine video and audio through API or JS? [closed]

Submitted by 不问归期 on 2020-12-25 02:08:21
Question: I am working to design a system which does the following: the user uploads a video, and JS code finds the length of the video. It performs HTTP calls to an already-existing service to retrieve an audio track of the same length, then synchronizes and combines the audio and video …
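The combining step itself is commonly done server-side with the ffmpeg CLI rather than in JS. A minimal sketch, assuming ffmpeg is installed; the helper name and file names are placeholders:

```shell
# Hypothetical helper: mux an uploaded video with a separately retrieved
# audio track. -c:v copy leaves the video stream untouched (fast, no
# re-encode); -shortest trims to the shorter input so durations match.
mux_av() {
  video="$1"; audio="$2"; out="$3"
  ffmpeg -loglevel error -y -i "$video" -i "$audio" \
         -c:v copy -c:a aac -shortest "$out"
}
```

The "JS finds the length" step maps to reading HTMLVideoElement.duration after the loadedmetadata event fires in the browser, before the upload or audio request is made.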

How is the mixer thread created in AudioFlinger, and how is the mapping done between the app and the AudioFlinger mixer?

Submitted by 笑着哭i on 2020-12-15 06:43:25
Question: I am trying to map this architecture to the source code. As I checked in the source code, there are multiple threads created by AudioFlinger, such as the direct output thread, the mixer thread, the offload thread, etc. But I cannot find where I should change the configuration to create multiple mixer threads, or how to map the data between the app and a mixer thread as shown in the diagram. Just to mention, the source of this architecture is https://source.android.com/devices/automotive/audio Source: https://stackoverflow.com
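For context, AudioFlinger creates one playback thread (a MixerThread for mixed outputs) per output stream it opens, and which outputs exist is driven by the audio policy configuration rather than by AudioFlinger itself; app tracks are then routed to an output by the audio policy engine based on their audio attributes/usage. An illustrative, not authoritative, fragment of an automotive-style audio_policy_configuration.xml in which two mixPorts routed to two bus devices would give AudioFlinger two mixer threads; every name below is an example only:

```xml
<!-- Illustrative fragment: each additional output mixPort, with a route
     to a sink devicePort, yields another output stream, and AudioFlinger
     creates one mixer thread per opened mixed output. -->
<module name="primary" halVersion="3.0">
  <mixPorts>
    <mixPort name="mixport_bus0_media" role="source">
      <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
               samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
    </mixPort>
    <mixPort name="mixport_bus1_navigation" role="source">
      <profile name="" format="AUDIO_FORMAT_PCM_16_BIT"
               samplingRates="48000" channelMasks="AUDIO_CHANNEL_OUT_STEREO"/>
    </mixPort>
  </mixPorts>
  <devicePorts>
    <devicePort tagName="bus0_media_out" type="AUDIO_DEVICE_OUT_BUS"
                role="sink" address="bus0_media_out"/>
    <devicePort tagName="bus1_navigation_out" type="AUDIO_DEVICE_OUT_BUS"
                role="sink" address="bus1_navigation_out"/>
  </devicePorts>
  <routes>
    <route type="mix" sink="bus0_media_out" sources="mixport_bus0_media"/>
    <route type="mix" sink="bus1_navigation_out" sources="mixport_bus1_navigation"/>
  </routes>
</module>
```

The app-to-thread mapping is then defined separately (in automotive builds, by the car audio service's usage-to-bus routing policy), not by AudioFlinger code changes.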