Question
I need to move realtime audio between two Linux machines, both of which run custom software of mine built on top of GStreamer. (The software already has other communication between the machines, over a separate TCP-based protocol - I mention this in case having reliable out-of-band data makes a difference to the solution.)
The audio input will be a microphone / line-in on the sending machine, with normal audio output as the sink on the destination; alsasrc and alsasink are the most likely elements, though for testing I have been using audiotestsrc instead of a real microphone.
GStreamer offers a multitude of ways to move data around over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There are also many examples on the web of streaming both audio and video - but none of them seem to work for me in practice; either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available.
In all cases, I'm testing with just gst-launch on the command line. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.
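To put "low-ish latency" and "no compression required" in perspective, a quick back-of-the-envelope calculation helps (plain Python; the 44.1 kHz / 16-bit / mono figures are the ones used in the example pipelines in the answers, not a requirement):

```python
# Back-of-the-envelope figures for uncompressed 16-bit mono audio.
# The 44100 Hz rate matches the caps used in the example pipelines;
# adjust for your actual source.
rate_hz = 44100        # samples per second
sample_bytes = 2       # 16-bit samples
channels = 1

bytes_per_second = rate_hz * sample_bytes * channels
print(f"raw bandwidth: {bytes_per_second} B/s "
      f"(~{bytes_per_second * 8 / 1000:.0f} kbit/s)")

# Latency contributed by buffering: each buffered millisecond of audio
# adds one millisecond of end-to-end delay, regardless of bitrate.
buffer_ms = 10
buffer_bytes = bytes_per_second * buffer_ms // 1000
print(f"a {buffer_ms} ms buffer holds {buffer_bytes} bytes")
```

At under a megabit per second, raw audio is trivial for a local network, so latency is dominated by buffering in the pipeline rather than by bandwidth.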
Answer 1:
To debug that kind of problem I would try:
- Run gst-launch audiotestsrc ! alsasink to check that sound works
- Use a fakesink or filesink to see if we get any buffers
- Try to find the pipeline problem with GST_DEBUG, for example check caps with GST_DEBUG=GST_CAPS:4, or use GST_DEBUG=*:2 to get all errors and warnings
- Use Wireshark to see if packets are sent
These pipelines work for me:
with RTP:
gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! audioconvert ! alsasink sync=false
gst-launch-0.10 audiotestsrc ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay ! udpsink host=localhost port=5000
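One consequence of sending uncompressed L16 over RTP is that the packet size sets a floor on latency: the payloader has to accumulate a packet's worth of samples before it can send anything. A rough sketch of the numbers (plain Python; the 1400-byte payload budget is an assumption, a typical MTU-safe value, not something taken from rtpL16pay's defaults):

```python
# How much audio fits in one RTP packet, and what per-packet latency
# that implies. Assumes 16-bit mono L16 at 44100 Hz, matching the caps
# in the pipeline above; the 1400-byte payload budget is an assumed
# MTU-safe value, not a value read from rtpL16pay.
rate_hz = 44100
sample_bytes = 2
payload_budget = 1400                      # bytes of audio per packet

samples_per_packet = payload_budget // sample_bytes
packet_duration_ms = samples_per_packet / rate_hz * 1000
print(f"{samples_per_packet} samples per packet "
      f"-> {packet_duration_ms:.1f} ms of audio each")
```

So each full-size packet carries roughly 16 ms of audio at these settings; smaller packets lower that per-packet delay at the cost of more header overhead.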
with TCP:
gst-launch-0.10 tcpserversrc host=localhost port=3000 ! audio/x-raw-int, endianness="(int)1234", signed="(boolean)true", width="(int)16", depth="(int)16", rate="(int)44100", channels="(int)1" ! alsasink
gst-launch-0.10 audiotestsrc ! tcpclientsink host=localhost port=3000
Answer 2:
Can you post some of the gst-launch pipelines you have tried? That might help in understanding why you are having issues. In general RTP/RTSP should work pretty easily.
Edit: A couple of items I can think of: 1. change host=localhost to host=<ip-address>, where <ip-address> is the actual IP address of the other Linux machine; 2. add caps="application/x-rtp, media=(string)audio" to the udpsrc element in the receiver.
Answer 3:
My solution is very similar to tilljoel's, but I am using a microphone (which is what you need) as the source - hence some tweaking in the GStreamer pipeline.
Decode Audio from Microphone using TCP:
gst-launch-0.10 tcpserversrc host=localhost port=3000 ! audio/x-raw-int, endianness="(int)1234", signed="(boolean)true", width="(int)16", depth="(int)16", rate="(int)22000", channels="(int)1" ! alsasink
Encode Audio from Microphone using TCP:
gst-launch-0.10 pulsesrc ! audio/x-raw-int,rate=22000,channels=1,width=16 ! tcpclientsink host=localhost port=3000
Decode Audio from Microphone using RTP:
gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)22000, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! audioconvert ! alsasink sync=false
Encode Audio from Microphone using RTP:
gst-launch-0.10 pulsesrc ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=22000 ! rtpL16pay ! udpsink host=localhost port=5000
Source: https://stackoverflow.com/questions/2715257/moving-audio-over-a-local-network-using-gstreamer