web-audio-api

Isolate frequencies of audio context using the Web Audio API

Submitted by ⅰ亾dé卋堺 on 2021-01-26 23:19:06
Question: I'm experimenting with the Web Audio API and am attempting to build an analyser the user can interact with, ultimately turning different frequencies within the music on and off to isolate different beats within the track (e.g. bass, kick). I'm visualising the frequency data on a canvas, and would like the user to be able to highlight parts of the visualisation, in turn muting those frequencies. By default the visualisation would look like this, and the user would hear all frequencies.
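One common approach to this (a hypothetical sketch, not the asker's code): map the highlighted canvas region back to a frequency range, then attenuate that band with a BiquadFilterNode. The helper names and parameter values below are assumptions:

```javascript
// Pure helper: which AnalyserNode FFT bin does a frequency in Hz fall into?
// Each bin covers sampleRate / fftSize Hz.
function freqToBin(freqHz, sampleRate, fftSize) {
  return Math.round(freqHz / (sampleRate / fftSize));
}

// Browser-only sketch: approximately "mute" the highlighted band by carving
// it out with a peaking filter set to a deep negative gain.
function muteBand(audioCtx, sourceNode, centerHz, widthHz) {
  const filter = audioCtx.createBiquadFilter();
  filter.type = 'peaking';
  filter.frequency.value = centerHz;
  filter.Q.value = centerHz / widthHz;  // narrower band -> higher Q
  filter.gain.value = -40;              // heavy attenuation within the band
  sourceNode.connect(filter);
  filter.connect(audioCtx.destination);
  return filter;                        // keep a reference to undo the mute later
}
```

Several such filters can be chained in series, one per highlighted region, and removed again by reconnecting the graph around them.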

How can I reduce the noise of a microphone input with the Web Audio API?

Submitted by 落爺英雄遲暮 on 2021-01-21 04:26:18
Question: I've been playing around with the Web Audio API, using my laptop's microphone as an input source. I can hear a lot of white noise when I listen to the input, though; how can I create a filter to reduce the noise so that the sound is clearer? Are there any libraries that provide a pre-written noise filter for this situation? Answer 1: I'm working on a POC and reduced my laptop's noise with a BiquadFilter. I also used a compressor, but you don't have to. (function(){ var filter,
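The answer's code is cut off above. A self-contained sketch of the idea it describes, chaining a BiquadFilter and a DynamicsCompressorNode; all parameter values here are guesses, not taken from the answer:

```javascript
// Sketch: reduce low-frequency microphone noise with a highpass BiquadFilter,
// then even out the level with a compressor. micSource would typically come
// from audioCtx.createMediaStreamSource(stream) after getUserMedia().
function buildNoiseChain(audioCtx, micSource) {
  const filter = audioCtx.createBiquadFilter();
  filter.type = 'highpass';
  filter.frequency.value = 120;   // cut rumble below ~120 Hz (assumption)

  const compressor = audioCtx.createDynamicsCompressor();
  compressor.threshold.value = -30;  // dB above which compression kicks in
  compressor.ratio.value = 4;        // 4:1 compression (assumption)

  micSource.connect(filter);
  filter.connect(compressor);
  compressor.connect(audioCtx.destination);
  return { filter, compressor };     // keep references to tweak parameters live
}
```

A highpass filter only removes low-frequency hum; for broadband white noise, a lowpass or bandpass stage tuned to the voice range can help, at the cost of some fidelity.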

Web Audio API Memory Leaks on Mobile Platforms

Submitted by 半城伤御伤魂 on 2021-01-17 04:07:06
Question: I am working on an application that will use audio quite heavily, and I am in the research stage of deciding whether to use the Web Audio API on devices that can support it. I have put together a very simple test bed that loads an MP3 sprite file (~600 kB), has play and pause buttons, and also a destroy button, which should in theory allow the GC to reclaim the memory used by the Web Audio API implementation. However, after loading and destroying ~5 times, iOS crashes due to an out of memory
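For reference, a hedged sketch of what a "destroy" step generally needs to do before the GC can reclaim Web Audio memory; the `state` object shape here is hypothetical, not the asker's code:

```javascript
// Hypothetical teardown: the audio graph keeps the decoded AudioBuffer alive,
// so every reference must be dropped, and audioCtx.close() releases the
// platform audio resources (important on iOS, which caps audio memory).
async function destroySprite(state) {
  if (state.sourceNode) {
    try { state.sourceNode.stop(); } catch (e) { /* already stopped */ }
    state.sourceNode.disconnect();
    state.sourceNode = null;
  }
  state.audioBuffer = null;          // the decoded PCM is the big allocation
  if (state.audioCtx) {
    await state.audioCtx.close();    // resolves once resources are released
    state.audioCtx = null;
  }
}
```

Note that decoded PCM is much larger than the compressed MP3 (a ~600 kB MP3 can decode to tens of megabytes), which is one plausible reason a few load/destroy cycles exhaust memory if anything still references the buffer.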

Adding audio to an incoming stream during video call to record voice of both parties in a call

Submitted by 别来无恙 on 2021-01-07 06:57:12
Question: I have created an app using PeerJS to initiate video calls. I am using the MediaRecorder API to record the incoming stream from the caller. However, I need the recording to include the audio of both the caller and the receiver, while the video should be only the caller's (the incoming stream). I have tried https://github.com/muaz-khan/MultiStreamsMixer; however, the recording produces a file that VLC cannot read. I have also tried adding the local audio track to the recording stream, but that doesn't
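One alternative to MultiStreamsMixer (a sketch, not a confirmed fix): mix the two audio streams down to a single track with the Web Audio API, then hand MediaRecorder a stream containing the caller's video track plus the mixed audio track:

```javascript
// Sketch: mix caller + local audio into one track via a
// MediaStreamAudioDestinationNode, keep only the caller's video.
function buildRecordingStream(audioCtx, callerStream, localStream) {
  const mixDest = audioCtx.createMediaStreamDestination();
  audioCtx.createMediaStreamSource(callerStream).connect(mixDest);
  audioCtx.createMediaStreamSource(localStream).connect(mixDest);

  // Caller video + mixed audio in a single stream for MediaRecorder.
  return new MediaStream([
    ...callerStream.getVideoTracks(),
    ...mixDest.stream.getAudioTracks(),
  ]);
}
```

Recording a single, already-mixed audio track sidesteps multi-track container issues, which is one plausible reason the MultiStreamsMixer output was unreadable.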

base64ToArrayBuffer Error: Failed To Execute 'atob' on 'Window'. (Web Audio API)

Submitted by 霸气de小男生 on 2021-01-07 06:33:25
Question: So I'm pretty new to the Web Audio API, having only heard about it 4 days ago (though since then I've put in probably about 50 hours of research and experimentation with it). I'm also more of a novice at JavaScript. Situation: I'm trying to develop a script that takes Google's TTS response from the API (which is encoded as a base64 string) and transfers it to an ArrayBuffer for use in the Web Audio API, so that I can send it through some of the nodes. I've already got the return from the
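A frequent cause of this `atob` error is a data-URL prefix, URL-safe base64 characters (`-` and `_`), or stray whitespace in the string. A sketch of a defensive conversion; the normalization steps are assumptions about the input, not details from the question:

```javascript
// Normalize a base64 string, then decode it to an ArrayBuffer suitable for
// audioCtx.decodeAudioData().
function base64ToArrayBuffer(base64) {
  const cleaned = base64
    .replace(/^data:audio\/\w+;base64,/, '')  // strip any data-URL prefix
    .replace(/\s/g, '')                       // strip whitespace/newlines
    .replace(/-/g, '+')                       // URL-safe -> standard alphabet
    .replace(/_/g, '/');
  const binary = atob(cleaned);               // throws if still malformed
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return bytes.buffer;
}
```

If `atob` still throws after normalization, logging the first and last few characters of the string usually reveals whether the API response was wrapped in JSON or otherwise not raw base64.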

How do I use the WebAudio API channel splitter for adjusting the Left or Right gain on an audio track?

Submitted by 不打扰是莪最后的温柔 on 2021-01-07 06:31:24
Question:

<body>
  <audio id="myAudio" src="audio/Squirrel Nut Zippers - Trou Macacq.mp3"></audio>
</body>
<script>
  var myAudio = document.getElementById('myAudio');
  var context = new AudioContext();
  var audioSource = context.createMediaElementSource(myAudio);
  var splitter = context.createChannelSplitter(2);
  audioSource.connect(splitter);
  var merger = context.createChannelMerger(2);
  // REDUCE VOLUME OF LEFT CHANNEL ONLY
  var gainNode = context.createGain();
  gainNode.gain.setValueAtTime(0.5, context
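The excerpt is cut off mid-statement. A completed version of the same wiring, written as a function so the routing is explicit, might look like this; the channel indices are assumptions based on the "reduce left channel only" comment:

```javascript
// Sketch: split a stereo source, run the left channel (output 0) through a
// gain node, and merge both channels back before the destination.
function wireLeftChannelGain(context, audioSource, leftGain) {
  const splitter = context.createChannelSplitter(2);
  const merger = context.createChannelMerger(2);
  const gainNode = context.createGain();

  gainNode.gain.setValueAtTime(leftGain, context.currentTime);

  audioSource.connect(splitter);
  splitter.connect(gainNode, 0);     // left channel -> gain
  gainNode.connect(merger, 0, 0);    // attenuated left -> merger input 0
  splitter.connect(merger, 1, 1);    // right channel passes through unchanged
  merger.connect(context.destination);
  return gainNode;                   // adjust gainNode.gain later for live control
}
```

The second and third arguments to `connect` are the output index on the source node and the input index on the destination node, which is how the splitter's two outputs are routed to the merger's two inputs.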