alsa

Overview of debugging a sound-card driver under Android

我怕爱的太早我们不能终老 Submitted on 2019-12-19 13:12:19
The audio system in Android uses the ALSA architecture. ASoC (ALSA System on Chip) is a software layer built on top of the standard ALSA driver layer to better support the audio codecs found in embedded processors and mobile devices. An ASoC audio device driver is divided into three parts: Machine, Platform and Codec.

Codec: responsible for audio encoding/decoding. This code is completely platform-independent (it is supplied by the device vendor) and contains the audio controls, the audio interfaces, the DAPM (Dynamic Audio Power Management) definitions and the codec I/O functions. To keep it hardware-independent, any platform-specific code must be moved into the Platform or Machine driver.

Platform: contains the configuration and control of the platform's audio DMA and audio interfaces (I2S, PCM, AC97, etc.); this is the code tied to the processor chip.

Machine: the glue that couples the Platform and Codec drivers and talks to the upper layers. Because the upper layer is the standard ALSA framework, the lower-layer interfaces must be unified; this part consists of the Machine's own platform driver and platform device (not to be confused with the Platform driver above). The platform-driver side is already finished in the kernel, so there is no need to worry about how it hooks into the upper ALSA layers; we only need to register a Machine platform device and couple the Platform and Codec, as sketched below.

1. ALSA device file structure
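
To make the Machine part above concrete, here is a minimal sketch of an ASoC machine driver: a single dai_link that couples a CPU DAI (the Platform side) to a codec DAI (the Codec side), registered through a platform driver. All device and DAI names below are placeholders, and the exact snd_soc_dai_link fields vary across kernel versions, so treat this as an outline rather than a build-ready driver.

    /* Minimal ASoC machine-driver sketch; every name string is a placeholder. */
    #include <linux/module.h>
    #include <linux/platform_device.h>
    #include <sound/soc.h>

    static struct snd_soc_dai_link demo_dai_link = {
        .name           = "demo-hifi",
        .stream_name    = "HiFi",
        .cpu_dai_name   = "soc-i2s.0",          /* Platform: SoC I2S controller */
        .platform_name  = "soc-i2s.0",          /* Platform: audio DMA engine   */
        .codec_dai_name = "demo-codec-hifi",    /* Codec: DAI the codec exports */
        .codec_name     = "demo-codec.0-001a",  /* Codec: the codec device      */
    };

    static struct snd_soc_card demo_card = {
        .name      = "demo-audio",
        .owner     = THIS_MODULE,
        .dai_link  = &demo_dai_link,
        .num_links = 1,
    };

    static int demo_audio_probe(struct platform_device *pdev)
    {
        demo_card.dev = &pdev->dev;
        return snd_soc_register_card(&demo_card);   /* couples Platform + Codec */
    }

    static int demo_audio_remove(struct platform_device *pdev)
    {
        return snd_soc_unregister_card(&demo_card);
    }

    static struct platform_driver demo_audio_driver = {
        .driver = { .name = "demo-audio" },
        .probe  = demo_audio_probe,
        .remove = demo_audio_remove,
    };
    module_platform_driver(demo_audio_driver);
    MODULE_LICENSE("GPL");

The coupling happens purely by name matching: the cpu_dai_name, platform_name and codec_name strings must equal the names under which the Platform and Codec drivers registered their components.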

GNU Radio: Use sound output as input source

帅比萌擦擦* Submitted on 2019-12-19 03:42:24
Question: In gnuradio-companion I'm using the audio source block as the input signal for the next blocks. Everything works almost fine. The only little problem is that I'm getting the signal from my microphone (this is the normal behavior, of course). I would rather get the audio signal being played directly, without having to go through my speakers, the air in my room and the microphone; all of that introduces signal losses and adds noise. I know there is the file source block, but this isn't a real

Android > 4.0 : Ideas how to record/capture internal audio (e.g. STREAM_MUSIC)?

戏子无情 Submitted on 2019-12-18 10:19:11
Question: Some months ago, with Android ICS (4.0), I developed an Android kernel module which intercepted the "pcmC0D0p" module to fetch all system audio. My target is to stream ALL audio (or at least the played music) to a remote speaker via AirPlay. The kernel module worked, but there were several problems (kernel versions, root privileges, etc.), so I stopped working on this. Now we have Android 4.1 and 4.2, and I have new hope! Does anyone have an idea how to capture the audio in Android? I had the following

alsa - mem leak?

半腔热情 Submitted on 2019-12-18 06:49:11
Question: I've been chasing a memory leak (reported by 'valgrind --leak-check=yes') and it appears to be coming from ALSA. This code has been out in the free world for some time, so I'm guessing it's something I'm doing wrong.

    #include <stdio.h>
    #include <stdlib.h>
    #include <alsa/asoundlib.h>

    int main(int argc, char *argv[])
    {
        snd_ctl_t *handle;
        int err = snd_ctl_open(&handle, "hw:1", 0);
        printf("snd_ctl_open: %d\n", err);
        err = snd_ctl_close(handle);
        printf("snd_ctl_close: %d\n", err);
    }

The
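
For reference, what valgrind flags here is usually not a true leak: on the first snd_ctl_open(), alsa-lib parses the global configuration files and keeps the resulting tree cached until the process exits. Calling snd_config_update_free_global() before returning releases that cache and silences the report; a minimal sketch of the same program with that call added:

    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_ctl_t *handle;
        int err = snd_ctl_open(&handle, "hw:1", 0);
        printf("snd_ctl_open: %d\n", err);
        if (err >= 0)
            printf("snd_ctl_close: %d\n", snd_ctl_close(handle));

        /* Free the configuration tree that asoundlib caches internally,
         * so valgrind no longer sees it as leaked at exit. */
        snd_config_update_free_global();
        return 0;
    }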

Linux pipe audio file to microphone input

假装没事ソ Submitted on 2019-12-18 01:14:08
Question: I'm looking for a way to feed audio data from a file into the microphone, so that when 3rd-party applications (such as arecord or Chromium's "search by voice" feature) use the microphone for audio input, they receive the audio data from the file instead. Here's my scenario: an application I wrote records audio data from the microphone (using ALSA) and saves it to a file (audioFile0.raw). At some unknown point in time in the future, some unknown 3rd-party application (as in, something I did not
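
One common approach (a sketch under assumptions, not from the question itself) is the snd-aloop loopback driver: after `modprobe snd-aloop`, anything played into hw:Loopback,0,0 can be captured from hw:Loopback,1,0, so a third-party application pointed at the loopback capture device receives the file's audio as if it were a microphone. Assuming audioFile0.raw holds 16-bit little-endian mono samples at 44100 Hz:

    #include <stdio.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        short buf[1024];
        size_t n;

        FILE *f = fopen("audioFile0.raw", "rb");
        if (!f)
            return 1;

        /* Playback side of the loopback card created by snd-aloop. */
        if (snd_pcm_open(&pcm, "hw:Loopback,0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
            return 1;
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               1, 44100, 1, 500000) < 0)   /* mono, 0.5 s latency */
            return 1;

        while ((n = fread(buf, sizeof(short), 1024, f)) > 0) {
            /* Mono stream: one sample per frame. */
            snd_pcm_sframes_t w = snd_pcm_writei(pcm, buf, n);
            if (w < 0)
                snd_pcm_recover(pcm, w, 0);                /* recover from xruns */
        }
        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        fclose(f);
        return 0;
    }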

run apps using audio in a docker container

爱⌒轻易说出口 Submitted on 2019-12-17 15:26:55
Question: This question is inspired by "Can you run GUI apps in a docker container?". The basic idea is to run apps with audio and a UI (vlc, firefox, skype, ...). I was searching for docker containers using PulseAudio, but all the containers I found were using PulseAudio streaming over TCP (security sandboxing of the applications):

    https://gist.github.com/hybris42/ce429de428e5af3a344a
    https://github.com/jlund/docker-chrome-pulseaudio
    https://github.com/tomparys/docker-skype-pulseaudio

In my case I would

Strange PulseAudio monitor device behaviour

℡╲_俬逩灬. Submitted on 2019-12-14 04:20:27
Question: I've run into strange behaviour of a PulseAudio monitor device (i.e. an audio input device that captures the sound being sent to the speaker). I've reduced the code from my real project to a simple example based on code from the PulseAudio docs, https://freedesktop.org/software/pulseaudio/doxygen/parec-simple_8c-example.html; I've only added a time limit and read-byte counting. It runs for, say, 30 seconds and prints the count of bytes read. The problem is that the byte count differs vastly depending on whether something is being played during the program run. I've executed
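
One common explanation for a varying byte count is that PulseAudio suspends an idle sink together with its monitor source (module-suspend-on-idle), so the monitor only delivers data while something is actually playing. Below is a hedged read-and-count sketch on the pa_simple API, in the spirit of the parec-simple example the question cites; the source name is an assumption, so list yours with `pactl list sources short` and pick the one ending in ".monitor".

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    #include <pulse/simple.h>
    #include <pulse/error.h>

    int main(void)
    {
        pa_sample_spec ss = { .format = PA_SAMPLE_S16LE, .rate = 44100, .channels = 2 };
        int error;
        /* The monitor-source name below is a typical example; substitute your own. */
        pa_simple *s = pa_simple_new(NULL, "byte-counter", PA_STREAM_RECORD,
                                     "alsa_output.pci-0000_00_1b.0.analog-stereo.monitor",
                                     "record", &ss, NULL, NULL, &error);
        if (!s) {
            fprintf(stderr, "pa_simple_new failed: %s\n", pa_strerror(error));
            return 1;
        }

        uint8_t buf[4096];
        unsigned long long total = 0;
        time_t end = time(NULL) + 30;               /* run for ~30 seconds */
        while (time(NULL) < end) {
            if (pa_simple_read(s, buf, sizeof(buf), &error) < 0)
                break;                              /* read failed */
            total += sizeof(buf);
        }
        printf("read %llu bytes\n", total);
        pa_simple_free(s);
        return 0;
    }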

aplay piping to arecord using a file instead of stdin and stdout

人盡茶涼 Submitted on 2019-12-14 03:06:22
Question: The command below will record data from the default device and output it on stdout, and aplay will play the data from stdin:

    arecord -D hw:0 | aplay -D hw:1 -

Why do we prefer stdin and stdout instead of writing to a file and reading from it, as below?

    arecord -D hw:0 test.wav | aplay -D hw:1 test.wav

Answer 1: Using a pipe for this operation is more efficient and effective than using a file, simply for the following reasons: 1) A pipe (|) is an interprocess communication technique. The output of
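
To see what the shell's | actually sets up, here is a rough C equivalent of the first command (a sketch, not part of the original answer): pipe() creates an in-kernel buffer connecting the two processes, so the audio never touches the disk and aplay can start playing while arecord is still recording.

    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) < 0)                      /* fd[0] = read end, fd[1] = write end */
            return 1;

        if (fork() == 0) {                     /* child 1: arecord -> pipe */
            dup2(fd[1], STDOUT_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("arecord", "arecord", "-D", "hw:0", (char *)NULL);
            _exit(127);
        }
        if (fork() == 0) {                     /* child 2: pipe -> aplay */
            dup2(fd[0], STDIN_FILENO);
            close(fd[0]);
            close(fd[1]);
            execlp("aplay", "aplay", "-D", "hw:1", "-", (char *)NULL);
            _exit(127);
        }

        close(fd[0]);                          /* parent keeps no pipe ends open */
        close(fd[1]);
        while (wait(NULL) > 0)                 /* reap both children */
            ;
        return 0;
    }

A file, by contrast, forces every byte through the filesystem and gives the reader no way to know when the writer has finished, which is why the file-based variant does not behave as intended.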

Running pyfluidsynth + pyaudio demo, many problems with alsa and jack

半世苍凉 Submitted on 2019-12-13 05:32:16
Question: I'm following the demo here. I'm very new to creating audio via Python, so I'm not sure which errors I should consider or what naive things I might be doing wrong. Here are my Python errors:

    >>> import time
    >>> import numpy
    >>> import pyaudio
    >>> import fluidsynth
    >>>
    >>> pa = pyaudio.PyAudio()
    ALSA lib pcm_dmix.c:1018:(snd_pcm_dmix_open) unable to open slave
    ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
    ALSA lib pcm.c:2217:(snd_pcm_open_noupdate) Unknown

Alsa api: how to use mmap in c?

荒凉一梦 Submitted on 2019-12-12 18:59:55
Question: I'm currently using snd_pcm_writei to play a sound file that was previously loaded into an array of short (16-bit PCM format). To play this sound I create a buffer (short *) that contains one period (or fragment). Then I use a while loop to call snd_pcm_writei, which gives me this line:

    int err = snd_pcm_writei(handle, buffer, frames);

It is pretty simple to understand how it works, and everything works fine: I can hear the sound. However, I'd like to try to use mmap instead of writei, but I
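
Two routes are possible here (a sketch under assumptions, since the question is cut off). The drop-in option is snd_pcm_mmap_writei(), which has the same signature as snd_pcm_writei() but requires the PCM to be configured with SND_PCM_ACCESS_MMAP_INTERLEAVED. The explicit route is the begin/commit loop below; note that unlike snd_pcm_writei(), this path does not start the stream for you, so call snd_pcm_start() once the ring buffer has been primed.

    #include <errno.h>
    #include <string.h>
    #include <alsa/asoundlib.h>

    /* Copy `frames` frames of interleaved S16 data into the mmap'd ring buffer.
     * Assumes the PCM was set up with SND_PCM_ACCESS_MMAP_INTERLEAVED. */
    static int mmap_write(snd_pcm_t *handle, const short *src,
                          snd_pcm_uframes_t frames, unsigned int channels)
    {
        while (frames > 0) {
            const snd_pcm_channel_area_t *areas;
            snd_pcm_uframes_t offset, chunk = frames;

            snd_pcm_sframes_t avail = snd_pcm_avail_update(handle);
            if (avail < 0)
                return avail;                     /* xrun or error */
            if (avail == 0) {                     /* ring full: wait for room */
                snd_pcm_wait(handle, -1);         /* (stream must be running) */
                continue;
            }

            int err = snd_pcm_mmap_begin(handle, &areas, &offset, &chunk);
            if (err < 0)
                return err;

            /* With interleaved access all channels share one area; compute the
             * byte address of frame `offset` and copy into the ring buffer. */
            char *dst = (char *)areas[0].addr
                      + (areas[0].first + offset * areas[0].step) / 8;
            memcpy(dst, src, chunk * channels * sizeof(short));

            snd_pcm_sframes_t done = snd_pcm_mmap_commit(handle, offset, chunk);
            if (done < 0 || (snd_pcm_uframes_t)done != chunk)
                return done < 0 ? (int)done : -EPIPE;  /* short commit = xrun */

            src    += chunk * channels;
            frames -= chunk;
        }
        return 0;
    }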