audio

Playing a simple sound with web audio api

Submitted by 空扰寡人 on 2020-08-27 07:07:09

Question: I've been trying to follow the steps in some tutorials for playing back a simple, encoded local WAV or MP3 file with the Web Audio API, triggered by a button. My code is the following (testAudioAPI.js): window.AudioContext = window.AudioContext || window.webkitAudioContext; var context = new AudioContext(); var myBuffer; clickme = document.getElementById('clickme'); clickme.addEventListener('click', clickHandler); var request = new XMLHttpRequest(); request.open('GET', 'WoodeBlock_SMan_B.wav', true);
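The snippet above cuts off before the decode-and-play half of the flow. A hedged sketch of how such a loader is typically completed, using fetch in place of XMLHttpRequest (the file name comes from the question; createPlayer is an illustrative name, not from any tutorial):

```javascript
// Sketch: fetch the file once, decode it with decodeAudioData, and
// start a fresh BufferSource on each play (sources are one-shot).
function createPlayer(url) {
  const context = new (window.AudioContext || window.webkitAudioContext)();
  let buffer = null;

  // Fetch and decode up front, so clicks only need to start a source.
  const ready = fetch(url)
    .then((res) => res.arrayBuffer())
    .then((data) => context.decodeAudioData(data))
    .then((decoded) => { buffer = decoded; });

  function play() {
    return ready.then(() => {
      // Autoplay policies suspend fresh contexts until a user gesture.
      if (context.state === 'suspended') context.resume();
      const source = context.createBufferSource();
      source.buffer = buffer;
      source.connect(context.destination);
      source.start(0); // play immediately
    });
  }
  return { play };
}
```

Wired to the question's button it would be `clickme.addEventListener('click', () => player.play())`; calling play from the click handler also satisfies the user-gesture requirement most browsers impose on audio contexts.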

How to compare / match two non-identical sound clips

Submitted by 风格不统一 on 2020-08-26 08:48:26

Question: I need to take short sound samples every 5 seconds and then upload them to our cloud server. I then need a way to check whether each sample is part of a longer, full audio file. The samples will be recorded from a phone's microphone, so they will not be exact matches. I know this topic can get quite technical and complex, but I am sure there must be libraries or online services that can assist with this kind of audio matching/pairing. One idea was to use an audio-to-text
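Before reaching for a fingerprinting service, it helps to see the core operation: sliding the short sample along the long recording and scoring each alignment. A hedged numpy sketch of that idea (plain cross-correlation only works on clean excerpts; real microphone captures need spectral fingerprints, e.g. Chromaprint/AcoustID or the Dejavu library):

```python
import numpy as np

def find_offset(full, sample):
    """Return the index in `full` where `sample` aligns best.

    Both signals are standardized globally (not per window), so this is
    a plain cross-correlation sketch, not true normalized matching.
    """
    full = (full - full.mean()) / (full.std() + 1e-9)
    sample = (sample - sample.mean()) / (sample.std() + 1e-9)
    # 'valid' mode scores every alignment of sample inside full.
    corr = np.correlate(full, sample, mode="valid")
    return int(np.argmax(corr))
```

Fingerprinting libraries apply the same "best alignment" idea, but over hashed spectrogram peaks instead of raw samples, which is what makes them robust to microphone noise.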

Calling MusicDeviceMIDIEvent from the audio unit's render thread

Submitted by 余生长醉 on 2020-08-26 05:55:27

Question: There's one thing I don't understand about MusicDeviceMIDIEvent. In every single example I've ever seen (I searched GitHub and Apple's examples), it was always used from the main thread. Now, in order to use the sample-offset parameter, the documentation states: inOffsetSampleFrame: If you are scheduling the MIDI Event from the audio unit's render thread, then you can supply a sample offset that the audio unit may apply when applying that event in its next audio unit render. This allows you to
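The documentation's point reduces to a bit of arithmetic: inside the render callback, inTimeStamp->mSampleTime tells you where the current buffer starts, so an event wanted at a given absolute sample time maps to an offset within the buffer. A hedged, plain-C sketch of just that mapping (the function name and the clamping policy are illustrative, not from AudioToolbox; the real call would then be MusicDeviceMIDIEvent made from the render thread with this offset):

```c
#include <stdint.h>

/* Map an event's absolute sample time to inOffsetSampleFrame for the
 * buffer currently being rendered. bufferStartSampleTime would come
 * from inTimeStamp->mSampleTime in a real render callback. */
uint32_t offset_for_event(double bufferStartSampleTime,
                          uint32_t inNumberFrames,
                          double eventSampleTime) {
    double offset = eventSampleTime - bufferStartSampleTime;
    if (offset < 0)
        return 0;                        /* event is late: fire now */
    if (offset >= (double)inNumberFrames)
        return inNumberFrames - 1;       /* clamp; real code would defer
                                            the event to a later cycle */
    return (uint32_t)offset;
}
```

This is also why the parameter only makes sense on the render thread: only there do you know the sample time of the buffer about to be rendered.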

What's causing this slow/delayed audio playback in Safari?

Submitted by 末鹿安然 on 2020-08-25 04:07:09

Question: var audio = new Audio('data:audio/wav;base64,UklGRoABAABXQVZFZm10IBAAAAABAAEAiBUAAIgVAAABAAgAZGF0YVwBAACHlqa1xNLg7vv/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////Tk1LSklHRkVEQ0JBQD8+Pj08PDs6Ojk5OTg4ODg3Nzc3Nzc3Nzc3Nzc3Nzg4ODg5OTk6Ojs7Ozw8PT4+P0BAQUJCQ0RFRUZHSElJSktMTU5OT1BRUlNUVVVWV1hZWltcXV1eX2BhYmNkZGVmZ2hpaWprbG1ubm9wcXFyc3R0dXZ3d3h5eXp6e3x8fX1+f3
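One frequently suggested explanation is that building a fresh media element from a data URI forces Safari to re-parse and re-decode the audio on every play. A hedged sketch of the usual workaround: decode the data URI once via the Web Audio API and replay the resulting AudioBuffer (dataUriToArrayBuffer and makeDataUriPlayer are illustrative names):

```javascript
// Pure helper: turn "data:audio/wav;base64,..." into an ArrayBuffer.
function dataUriToArrayBuffer(dataUri) {
  const base64 = dataUri.split(',')[1]; // strip the "data:...;base64," prefix
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return bytes.buffer;
}

// Browser-only: decode once, then replay cheaply on every call.
function makeDataUriPlayer(dataUri) {
  const context = new (window.AudioContext || window.webkitAudioContext)();
  const decoded = context.decodeAudioData(dataUriToArrayBuffer(dataUri));
  return () => decoded.then((buffer) => {
    const src = context.createBufferSource();
    src.buffer = buffer;
    src.connect(context.destination);
    src.start(0);
  });
}
```

Because the decode happens once up front, subsequent plays only create a lightweight BufferSource, which sidesteps the per-play decode latency.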

How to make a short beep in javascript that can be called *repeatedly* on a page?

Submitted by 假装没事ソ on 2020-08-22 05:11:10

Question: This is like the question at: Sound effects in JavaScript / HTML5, but I'm just seeking a specific answer to the issue of repeatability. That question and other similar ones have helpful answers suggesting the following JavaScript: function beep1() { var snd = new Audio("file.wav"); // buffers automatically when created snd.play(); } or, even more self-contained, you can now include a wav inline, such as: function beep2() { var snd = new Audio("data:audio/wav;base64,/
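For strict repeatability, a commonly suggested alternative is to skip media elements entirely and synthesize the beep with the Web Audio API: an OscillatorNode can be created and fired as often as you like, with no file buffering at all. A browser-only sketch (the frequency and duration defaults are illustrative):

```javascript
// Synthesize a short beep on an existing AudioContext. Each call makes
// a fresh oscillator, so overlapping and rapid repeats both work.
function beep(context, freq = 880, duration = 0.1) {
  const osc = context.createOscillator();
  const gain = context.createGain();
  osc.frequency.value = freq;
  osc.connect(gain).connect(context.destination);
  // Ramp the gain down to avoid a click at the end of the beep.
  gain.gain.setValueAtTime(1, context.currentTime);
  gain.gain.exponentialRampToValueAtTime(1e-3, context.currentTime + duration);
  osc.start();
  osc.stop(context.currentTime + duration);
}
```

A single long-lived AudioContext (created on the first user gesture) can service every beep on the page, which avoids both the element-reuse problem and repeated buffering.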

How can I improve the look of scipy's spectrograms?

Submitted by ♀尐吖头ヾ on 2020-08-10 18:51:29

Question: I need to generate spectrograms for audio files with Python, and I'm following the solution given here. However, the spectrograms I'm getting don't look very "populated," and nothing like the spectrograms I get from other software. This is the code I used for the particular image I'm showing here: import matplotlib.pyplot as plt from matplotlib import cm from scipy import signal from scipy.io import wavfile sample_rate, samples = wavfile.read('audio-mono.wav') frequencies, times,
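Two usual culprits for a sparse-looking scipy spectrogram are plotting linear power rather than decibels, and using short windows with no overlap. A hedged sketch building on the question's imports (spectrogram_db and the 1024-sample window are illustrative choices, not from the original solution):

```python
import numpy as np
from scipy import signal

def spectrogram_db(samples, sample_rate, nperseg=1024):
    """Spectrogram in decibels, with longer overlapping windows."""
    frequencies, times, Sxx = signal.spectrogram(
        samples, fs=sample_rate,
        nperseg=nperseg,            # longer window -> finer frequency bins
        noverlap=nperseg // 2)      # 50% overlap -> denser time axis
    Sxx_db = 10 * np.log10(Sxx + 1e-12)  # dB scale; epsilon avoids log(0)
    return frequencies, times, Sxx_db
```

Plotted with `plt.pcolormesh(times, frequencies, Sxx_db, cmap=cm.viridis)`, the dB scaling compresses the huge dynamic range of audio power, which is what makes other tools' spectrograms look "populated".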

Python gTTS/VLC plays too quickly to hear the first couple of words

Submitted by 爱⌒轻易说出口 on 2020-08-08 05:20:27

Question: In Python, using the modules gTTS and python-vlc, I've created a text-to-speech program, which is quite simple to do. But the thing bugging me is that when it gets to playing the MP3 file created by gTTS, it skips the first word or two. So if I have the string "The weather today will be cloudy", it'll speak out "today will be cloudy". Even if I adjust the string, it still misses the first word or two, and sometimes it starts mid-word. When I play the audio file outside of the code, it plays
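A common explanation for this symptom is that play() returns before the audio output has finished spinning up, so the opening fraction of a second is swallowed. One hedged workaround is to poll the player's state (python-vlc exposes MediaPlayer.is_playing()) before treating playback as started; the generic polling helper below is plain stdlib, and wait_until is an illustrative name:

```python
import time

def wait_until(predicate, timeout=2.0, interval=0.05):
    """Poll predicate() until it is truthy or timeout elapses.

    Returns True if the condition was met, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())
```

Used as `player.play(); wait_until(player.is_playing)` it holds the script until VLC reports real playback. An alternative often suggested is to prepend a short stretch of silence to the gTTS output with an audio tool such as pydub, so the swallowed opening is silence rather than speech.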