audio

webkitAudioContext createMediaElementSource on iOS Safari not working

北战南征 submitted on 2020-05-10 04:07:07
Question: I want to do live sound analysis on the iPhone. For this I use the webkitAudioContext analyser:

```
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var audioGoodmorning = new Audio('assets/sounds/greeting.m4a');
var audioSrc = ctx.createMediaElementSource(audioGoodmorning);
var analyser = ctx.createAnalyser();
analyser.fftSize = 32;
audioSrc.connect(analyser);
audioSrc.connect(ctx.destination);
var frequencyData = new Uint8Array(analyser.fftSize);
analyser.getByteFrequencyData(frequencyData);
```
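A likely culprit on iOS Safari (an assumption, not stated in the question) is that the AudioContext is created in the "suspended" state and can only be resumed from inside a user-gesture handler; until then the analyser reads only silence. A minimal sketch of that workaround, reusing the variables above and a hypothetical start button:

```
// iOS Safari suspends a freshly created AudioContext; resume() only
// succeeds when called from a user-initiated event such as a tap.
document.getElementById('startButton').addEventListener('touchend', function () {
  ctx.resume().then(function () {
    audioGoodmorning.play();
  });
});
```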

Possible to analyze streaming audio from Icecast using Web Audio API and createMediaElementSource?

旧城冷巷雨未停 submitted on 2020-05-09 19:50:20
Question: Using the Web Audio API and the createMediaElementSource method, you can use a typed array to get frequency data from audio playing in an <audio> element, and it works in most browsers as long as the source URL is local (not streaming). See this Codepen: http://codepen.io/soulwire/pen/Dscga Actual code:

```
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var audioElement = new Audio('http://crossorigin.me/http://87.230.103.9:80/top100station.mp3'); // example stream
audioElement
```
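For frequency data to come through from a cross-origin stream, the media element has to be fetched with CORS and the server has to allow it; otherwise the analyser returns only zeros. A short sketch of that setup (using the stream URL from the question):

```
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var audioElement = new Audio();

// Request the stream with CORS; this only works if the Icecast server
// sends an Access-Control-Allow-Origin header for this origin.
audioElement.crossOrigin = 'anonymous';
audioElement.src = 'http://87.230.103.9:80/top100station.mp3';

var source = audioCtx.createMediaElementSource(audioElement);
var analyser = audioCtx.createAnalyser();
source.connect(analyser);
analyser.connect(audioCtx.destination);
```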

How to read realtime microphone audio volume in python and ffmpeg or similar

試著忘記壹切 submitted on 2020-05-09 19:14:23
Question: I'm trying to read, in near-realtime, the volume coming from the audio of a USB microphone in Python. I have the pieces, but can't figure out how to put them together. If I already have a .wav file, I can read it quite simply using wavefile:

```
import numpy as np
from wavefile import WaveReader

with WaveReader("/Users/rmartin/audio.wav") as r:
    for data in r.read_iter(size=512):
        left_channel = data[0]
        volume = np.linalg.norm(left_channel)
        print volume
```

This works great, but I want to process the audio from the

Custom progress bar for <audio> and <progress> HTML5 elements

假如想象 submitted on 2020-04-27 17:24:44
Question: I am mind-boggled working out how to create a custom seekbar for an audio player using the <audio> tag and simple JavaScript. Current code:

```
<script>
function play() {
  document.getElementById('player').play();
}
function pause() {
  document.getElementById('player').pause();
}
</script>

<audio src="sample.mp3" id="player"></audio>
<button onclick="play()">Play</button>
<button onclick="pause()">Pause</button>
<progress id="seekbar"></progress>
```

Would it be possible to link the
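The question is cut off above, but linking the two elements is usually done with the audio element's timeupdate event; a minimal sketch using the ids from the snippet:

```
var player = document.getElementById('player');
var seekbar = document.getElementById('seekbar');

// Once metadata is loaded the duration is known, so the progress
// element's range can match the track length in seconds.
player.addEventListener('loadedmetadata', function () {
  seekbar.max = player.duration;
});

// timeupdate fires several times per second while playing.
player.addEventListener('timeupdate', function () {
  seekbar.value = player.currentTime;
});
```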

Record audio and send to Symfony

天涯浪子 submitted on 2020-04-18 07:18:19
Question: I simply want to record audio and send the file to my controller. I'm using https://addpipe.com/simple-recorderjs-demo/ to record my audio, and this works just fine. But when I try to actually send it to anything in my Symfony 4 project, the project never receives anything. So I figured I'd manually rebuild a form with the audio blob inserted, but that doesn't work out either. Any ideas how to make this happen? Any leads in the right direction would be great! I've tried simply inserting the audio
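A common way to get a recorded blob to a Symfony controller is a multipart POST via FormData; a sketch, assuming `blob` holds the recorder's output and with /upload-audio as a hypothetical route:

```
function sendRecording(blob) {
  var formData = new FormData();
  // The filename argument makes Symfony treat the blob as a normal
  // uploaded file, available via $request->files->get('audio').
  formData.append('audio', blob, 'recording.wav');

  fetch('/upload-audio', {
    method: 'POST',
    body: formData
  }).then(function (response) {
    console.log('Upload finished with status', response.status);
  });
}
```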

Error: ENOENT: no such file or directory expo-av

岁酱吖の submitted on 2020-04-18 06:10:34
Question: I'm building an application using react-native with expo-cli, and I'm building the APK using the command expo build:android, but I get the error:

Error: ENOENT: no such file or directory, open 'C:\Users\Ahmed Hassan\Desktop\app-name\assets\sounds\alert.mp3'

It refers to the asset I'm importing in a component with the following code:

```
import { Audio } from 'expo-av';

async function sound() {
  const soundObject = new Audio.Sound();
  try {
    await soundObject.loadAsync(require('../assets/sounds/alert.mp3'));
```
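Since the snippet above is truncated, for reference this is the usual expo-av load-and-play pattern it appears to follow (a sketch, not the asker's full component):

```
import { Audio } from 'expo-av';

// Load the bundled asset and play it; loadAsync rejects if the
// bundler cannot resolve the file, so the catch logs that case.
async function playAlert() {
  const soundObject = new Audio.Sound();
  try {
    await soundObject.loadAsync(require('../assets/sounds/alert.mp3'));
    await soundObject.playAsync();
  } catch (error) {
    console.log('Could not load or play alert.mp3:', error);
  }
}
```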

Is there any workaround (or hack) to make the audio autoplay on Chrome, Firefox and Safari?

ε祈祈猫儿з submitted on 2020-04-18 05:48:06
Question: I have an animation project that needs to autoplay its audio on all major browsers, including mobile ones. We also need to control the audio so it can continue playing after being paused and played again. That means we can't use an iframe, because it would restart the audio every time. Plus, I just found out that iframes can't autoplay in Chrome now... Is there any workaround for this problem?

Answer 1: What all these browsers ask is that your user provide a gesture to confirm they actually want your
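The answer is cut off above, but the gesture-based pattern it describes generally looks like this (a sketch; the audio file name is hypothetical):

```
var audio = new Audio('soundtrack.mp3');

// Autoplay policies unlock playback after a genuine user gesture,
// so start the audio inside the first such handler, then detach it.
document.addEventListener('click', function unlock() {
  audio.play().catch(function (err) {
    console.log('Playback still blocked:', err);
  });
  document.removeEventListener('click', unlock);
});
```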

On Fedora using Qt 5.9.4, I'm unable to simultaneously record and play audio at the same time

瘦欲@ submitted on 2020-04-17 22:53:31
Question: I'm trying to write a program in Qt that records audio from a microphone and plays it back at the same time. I'm using Qt 5.9.4 on Fedora 29 (I can't update to a newer version because our production environment is Fedora 29; I've already asked my boss). I have some barebones code written, as you can see below, but every time I run the program I get the following error message:

using null output device, none available
using null input device, none available

I've