rtp

C# - Capture RTP Stream and send to speech recognition

Submitted by 谁说胖子不能爱 on 2019-12-05 00:53:20
Question: What I am trying to accomplish: capture an RTP stream in C# and forward that stream to System.Speech.SpeechRecognitionEngine. I am creating a Linux-based robot which will take microphone input, send it to a Windows machine which will process the audio using Microsoft Speech Recognition, and send the response back to the robot. The robot might be hundreds of miles from the server, so I would like to do this over the Internet. What I have done so far: have the robot generate an RTP stream encoded in MP3

c++ Hole punching UDP(RTP)

Submitted by 放肆的年华 on 2019-12-04 16:48:07
I am doing a client-server voice chat program (unmanaged C++, Win32) in which clients connect to the server using TCP; text chat/chatroom functions are done over TCP, while all audio transmission is sent through a separate UDP/RTP socket (using the API from JRTPLIB). So the IP is known from the TCP connection, and the port number of the RTP socket can be sent after the connection is established. The problem is that in TCP only the server needs to do port forwarding for communications to work both ways, since you establish a connection, while in UDP you'd have to use recvfrom() -- which afaik needs the
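
A minimal sketch of the usual workaround, in plain Winsock (error handling omitted; the port number is an example): the client sends the first UDP datagram, the server reads the sender's NAT-mapped public address out of recvfrom(), and from then on sends RTP to exactly that endpoint, so the client needs no port forwarding.

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

        sockaddr_in local = {};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = INADDR_ANY;
        local.sin_port = htons(5004);                 // server-side RTP port (example)
        bind(sock, (sockaddr*)&local, sizeof(local));

        // Wait for the client's first datagram. Its source address is the public
        // endpoint assigned by the client's NAT, which may differ from the port
        // number the client reported over the TCP control connection.
        char buf[1500];
        sockaddr_in peer = {};
        int peerLen = sizeof(peer);
        recvfrom(sock, buf, sizeof(buf), 0, (sockaddr*)&peer, &peerLen);

        // Reply to exactly that endpoint; the NAT mapping is now open both ways.
        const char ack[] = "ack";
        sendto(sock, ack, sizeof(ack), 0, (sockaddr*)&peer, peerLen);

        closesocket(sock);
        WSACleanup();
        return 0;
    }

The same idea applies when the UDP socket is owned by JRTPLIB: the source address of the first incoming packet is the endpoint to reply to.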

Streaming MP4 video through the RTP protocol using GStreamer in Ubuntu

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-04 16:33:38
I'm trying to fetch the video file from my local directory, enable the stream from the server, and capture these frames on my client side. I have used the following pipelines.

Server side:

    gst-launch -v filesrc location=/home/gokul/Videos/Econ_TestVideo/radioactive.mp4 ! qtdemux ! rtpmp4vpay ! udpsink host=192.168.7.61 port=5000 sync=true
    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    /GstPipeline:pipeline0/GstRtpMP4VPay:rtpmp4vpay0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)1, config=(string
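
A matching receive pipeline on the client side would look roughly like the sketch below (element names assume the GStreamer 0.10 / gst-ffmpeg plugin set that gst-launch belongs to; the config value in the udpsrc caps is a placeholder and has to be the exact string printed by the server's -v output):

    gst-launch -v udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)MP4V-ES, profile-level-id=(string)1, config=(string)..." ! rtpmp4vdepay ! ffdec_mpeg4 ! autovideosink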

Decoding H.264 frames from an RTP stream

Submitted by 纵饮孤独 on 2019-12-04 13:48:08
Question: I am using the live555 and ffmpeg libraries to receive and decode an RTP H.264 stream from the server; the video stream was encoded by ffmpeg using the baseline profile and x264_param_default_preset(m_params, "veryfast", "zerolatency"). I read this topic and add SPS and PPS data to every frame that I receive from the network:

    void ClientSink::NewFrameHandler(unsigned frameSize, unsigned numTruncatedBytes,
                                     timeval presentationTime, unsigned durationInMicroseconds)
    {
        ...
        EncodedFrame tmp;
        tmp.m_frame = std::vector
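
A minimal sketch of the technique being described: wrap each NAL unit received from live555 in Annex-B start codes and prepend the SPS and PPS (normally taken from the SDP's sprop-parameter-sets attribute) so a freshly opened ffmpeg decoder can start on any IDR frame. The names below are illustrative, not the poster's actual types.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Append one NAL unit preceded by a 4-byte Annex-B start code.
    static void appendNal(std::vector<uint8_t>& out, const uint8_t* nal, size_t len) {
        static const uint8_t startCode[4] = {0x00, 0x00, 0x00, 0x01};
        out.insert(out.end(), startCode, startCode + 4);
        out.insert(out.end(), nal, nal + len);
    }

    // Build SPS + PPS + frame NAL as one Annex-B buffer for the decoder.
    std::vector<uint8_t> buildAnnexBFrame(const std::vector<uint8_t>& sps,
                                          const std::vector<uint8_t>& pps,
                                          const uint8_t* frame, size_t frameSize) {
        std::vector<uint8_t> out;
        appendNal(out, sps.data(), sps.size());
        appendNal(out, pps.data(), pps.size());
        appendNal(out, frame, frameSize);   // the NAL unit delivered by live555
        return out;                         // feed this buffer to the ffmpeg decoder
    }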

FFmpeg RTP streaming error [closed]

Submitted by ≡放荡痞女 on 2019-12-04 12:26:12
Question: I want to broadcast a video file via FFmpeg, but I get this error:

    Only one stream supported in the RTP muxer

I get that error when I run:

    ffmpeg.exe -i SomeVideo.mp4 -f rtp rtp://127.0.0.1:11111

I don't know what's wrong.

Answer 1: Your ffmpeg command creates two streams, one for video, one for audio. Do this
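
A common way to satisfy the RTP muxer (shown here as a sketch, not necessarily the answerer's exact command) is to give it only one stream per output, for example video to one port and audio to another:

    ffmpeg.exe -re -i SomeVideo.mp4 -an -c:v copy -f rtp rtp://127.0.0.1:11111
    ffmpeg.exe -re -i SomeVideo.mp4 -vn -c:a copy -f rtp rtp://127.0.0.1:11112

-an and -vn drop the audio or video stream respectively, -re reads the input at its native rate, and the second port number is arbitrary; older builds may also need -bsf:v h264_mp4toannexb when stream-copying H.264 out of MP4.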

H.264 RTSP Absolute TIMESTAMP

Submitted by 我只是一个虾纸丫 on 2019-12-04 11:58:44
Question: Is it possible to read an absolute timestamp from an H.264 stream sent through RTSP from an Axis camera? I need to know when each frame was taken by the camera. Thanks, Andrea

Answer 1: As Ralf already said, the RTP timestamps are relative to a random clock; they are only useful for computing the difference between two frames (or RTP packets in general). For synchronizing these relative values to a wall clock you can use the RTCP sender reports - just have a look at the links Ralf
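
Concretely, each RTCP sender report pairs an NTP wall-clock timestamp with the RTP timestamp that was valid at the same instant, so any later RTP timestamp converts to wall-clock time by a scaled offset (90 kHz clock for video). A minimal sketch of that mapping, with an illustrative struct rather than any particular library's API:

    #include <cstdint>

    struct SenderReport {
        double   ntpSeconds;   // SR NTP timestamp converted to seconds since the epoch
        uint32_t rtpTimestamp; // RTP timestamp that corresponds to ntpSeconds
    };

    // Convert an RTP timestamp to wall-clock seconds using the latest SR.
    double rtpToWallClock(uint32_t rtpTimestamp, const SenderReport& sr,
                          double clockRate /* e.g. 90000.0 for video */) {
        // Signed 32-bit difference handles RTP timestamp wrap-around.
        int32_t delta = static_cast<int32_t>(rtpTimestamp - sr.rtpTimestamp);
        return sr.ntpSeconds + static_cast<double>(delta) / clockRate;
    }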

ffmpeg create RTP stream

Submitted by 倖福魔咒の on 2019-12-04 09:59:09
Question: I'm trying to encode and stream using ffmpeg (libavcodec/libavformat, MSVC x64 with Zeranoe builds). Here is my code, largely adapted from the encoding example, with error handling removed:

    #include "stdafx.h"
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/common.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/samplefmt.h>
    }
    #pragma comment
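
For reference, the RTP-output side of such a program looks roughly like the sketch below (recent FFmpeg API, error handling omitted; the URL, codec parameters, and time base are examples rather than values from the question):

    extern "C" {
    #include <libavformat/avformat.h>
    }

    int openRtpOutput(AVFormatContext** octx, int width, int height) {
        const char* url = "rtp://127.0.0.1:5004";            // example destination
        avformat_alloc_output_context2(octx, nullptr, "rtp", url);

        AVStream* st = avformat_new_stream(*octx, nullptr);  // the single stream the RTP muxer allows
        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->codec_id   = AV_CODEC_ID_H264;
        st->codecpar->width      = width;
        st->codecpar->height     = height;
        st->time_base            = AVRational{1, 90000};     // 90 kHz RTP video clock

        avio_open(&(*octx)->pb, url, AVIO_FLAG_WRITE);       // rtp:// protocol opens the UDP socket
        return avformat_write_header(*octx, nullptr);
    }

Encoded AVPackets then go out through av_interleaved_write_frame(), av_write_trailer() closes the session, and av_sdp_create() produces the SDP the receiving side needs.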

Parsing H.264 NAL units from a QuickTime MOV file

Submitted by 柔情痞子 on 2019-12-04 09:50:49
I'm trying to get H.264 NAL units from a MOV file on the iPhone, in order to stream H.264 video over RTP from the iPhone camera to a server. Apple's API does not allow direct access to the encoded bitstream from the camera output, so I can only access the MOV file while it's being written. I've parsed the MOV file into atoms, according to Apple's MOV structure reference, but now I need to extract the NAL units from the mdat atom in order to pack them into RTP and stream them. I'd be glad for some help here because I can't find documentation about the mdat structure. Thanks!

The mdat atom is a big blob of data
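
In an H.264 track, the samples stored in mdat are not Annex-B data but sequences of length-prefixed NAL units: the length-field size (normally 4 bytes) comes from the avcC box, and the sample offsets and sizes come from the stbl tables. A minimal sketch of walking one such sample (names are illustrative):

    #include <cstddef>
    #include <cstdint>
    #include <functional>

    // Walk one H.264 sample taken from mdat and hand each NAL unit to a callback,
    // e.g. an RTP packetizer.
    void forEachNalUnit(const uint8_t* sample, size_t sampleSize,
                        size_t lengthFieldSize,                 // from avcC, usually 4
                        const std::function<void(const uint8_t*, size_t)>& onNal) {
        size_t pos = 0;
        while (pos + lengthFieldSize <= sampleSize) {
            // Big-endian NAL unit length.
            size_t nalLen = 0;
            for (size_t i = 0; i < lengthFieldSize; ++i)
                nalLen = (nalLen << 8) | sample[pos + i];
            pos += lengthFieldSize;
            if (pos + nalLen > sampleSize) break;               // truncated sample
            onNal(sample + pos, nalLen);
            pos += nalLen;
        }
    }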

Use MediaCodec for H264 streaming

Submitted by 北城余情 on 2019-12-04 09:43:30
Question: I'm currently trying to use Android as a Skype endpoint. At this stage, I need to encode video into H.264 (since it's the only format supported by Skype) and encapsulate it with RTP in order to make the streaming work. Apparently MediaRecorder is not well suited to this, for various reasons. One is that it adds the MP4 or 3GP headers only after it has finished. Another is that, in order to reduce latency to a minimum, hardware acceleration may come in handy. That's why I would like to make
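
The RTP-encapsulation half of the problem can be sketched as below (C++ for illustration, independent of the Android encoder side): a NAL unit small enough to fit in one packet is carried in RFC 6184 "single NAL unit" mode, while larger NAL units need FU-A fragmentation. Payload type 96, the SSRC, and all names are illustrative assumptions.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Build one RTP packet carrying a single H.264 NAL unit as its payload.
    std::vector<uint8_t> buildRtpPacket(const uint8_t* nal, size_t nalLen,
                                        uint16_t seq, uint32_t timestamp90kHz,
                                        uint32_t ssrc, bool lastOfFrame) {
        std::vector<uint8_t> pkt(12 + nalLen);
        pkt[0] = 0x80;                                    // V=2, no padding/extension/CSRC
        pkt[1] = (lastOfFrame ? 0x80 : 0x00) | 96;        // marker bit + dynamic payload type
        pkt[2] = seq >> 8;             pkt[3] = seq & 0xFF;
        pkt[4] = timestamp90kHz >> 24; pkt[5] = (timestamp90kHz >> 16) & 0xFF;
        pkt[6] = (timestamp90kHz >> 8) & 0xFF; pkt[7] = timestamp90kHz & 0xFF;
        pkt[8] = ssrc >> 24;           pkt[9] = (ssrc >> 16) & 0xFF;
        pkt[10] = (ssrc >> 8) & 0xFF;  pkt[11] = ssrc & 0xFF;
        std::copy(nal, nal + nalLen, pkt.begin() + 12);   // the NAL unit is the payload as-is
        return pkt;                                       // send over UDP to the peer
    }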

Streaming AVI files from C# using RTP

Submitted by 天大地大妈咪最大 on 2019-12-04 09:37:03
Question: I have a read/seek input stream of a video file (.avi; MPEG-4, Xvid, etc.) in C# and I would like to stream it to a video player with a "jump to moment X" feature enabled. How can I implement this? I heard that RTP might be a good protocol. What I'm really looking for is a library in C# that will help me out. Thanks in advance.

Answer 1: Yes, the StreamCoders solution is very good if you have some €1890. But if you do not want to pay money you should look at some free libraries or write your own from scratch