gstreamer

Forlinx solution | A digital IP network broadcast system based on the i.MX8MM

我只是一个虾纸丫 submitted on 2020-04-14 17:38:38
System overview: An IP network broadcast system is a fundamentally different product from traditional broadcast, FM-addressable broadcast, and digitally controlled broadcast systems. Because it is built on a general-purpose network platform and incorporates digital audio technology, it shows clear advantages in many respects. Application areas: schools, highways, hotels and office towers, retail chains, and medium-to-large enterprises.
II. The i.MX8MM IP network broadcast system block diagram
The i.MX8M Mini is built around an NXP quad-core 64-bit processor (ARM Cortex-A53 architecture, up to 1.8 GHz), with 2GB DDR4 RAM and 8GB eMMC storage. It also integrates a general-purpose Cortex®-M4 core running at 400 MHz, which can be developed with the MCUXpresso SDK and run either bare-metal code or the FreeRTOS real-time operating system. The i.MX8MM operates from 0℃ to 70℃.
III. i.MX8MM hardware highlights:
(1) HD large-screen display with capacitive multi-touch, for a smooth UI and better human-machine interaction;
(2) OV5640 and UVC cameras up to 5MP, with preview, still capture, and video recording;
(3) High-bandwidth Gigabit Ethernet for transferring large audio/video files;
(4) 1080p 60fps encode/decode: VP8, VP9, H.265, and H.264 decoding; H.264 and VP8 encoding;
(5) 5x SAI supporting I2S, AC97, and TDM: one with 8-channel input/output, one with 4-channel input/output, and two with 2-channel input/output
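An IP broadcast endpoint like this is often prototyped with GStreamer RTP pipelines before being productized. A minimal sketch, assuming Opus over RTP multicast (the multicast address, port, and payload number below are placeholder assumptions, not from the article):

```shell
# Sender: capture audio, encode to Opus, and multicast it as RTP (address/port are examples).
SEND_PIPELINE="autoaudiosrc ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=239.255.12.42 port=5004 auto-multicast=true"

# Receiver: join the multicast group, depayload, decode, and play.
RECV_PIPELINE="udpsrc address=239.255.12.42 port=5004 caps=\"application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS,payload=96\" ! rtpjitterbuffer ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink"

# Each pipeline would be run with gst-launch-1.0 on its own machine, e.g.:
echo "gst-launch-1.0 $SEND_PIPELINE"
echo "gst-launch-1.0 $RECV_PIPELINE"
```

One receiver pipeline per classroom or zone endpoint is the usual shape; the jitter buffer absorbs network timing variation at the cost of some latency.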

Tronlong walks you through inter-core communication on TI and Xilinx heterogeneous multi-core SoC processors

半世苍凉 submitted on 2020-03-25 12:19:20
I. What is a heterogeneous multi-core SoC processor? As the name implies, an SoC that integrates multiple processing cores of different architectures on a single chip is called a heterogeneous multi-core SoC processor. For example: TI's OMAP-L138 (DSP C674x + ARM9) and AM5708 (DSP C66x + ARM Cortex-A15) SoC processors; Xilinx's ZYNQ (ARM Cortex-A9 + Artix-7/Kintex-7 programmable logic) SoC processors. II. What are the advantages of a heterogeneous multi-core SoC processor? Compared with a single-core processor, a heterogeneous multi-core SoC offers combined advantages in performance, cost, power, and size: each architecture does what it does best. For example: ARM is inexpensive and power-efficient, well suited to control tasks and multimedia display; the DSP was born for digital signal processing and excels at specialized algorithms; the FPGA excels at high-speed, multi-channel data acquisition and signal transmission. Meanwhile, the cores exchange and share data quickly through various communication mechanisms, achieving a genuine 1+1>2 effect. III. Common inter-core communication mechanisms. To fully exploit a heterogeneous multi-core SoC processor, the key, beyond the vendor's hardware packaging, lies in the software and hardware design of inter-core communication. Below are several inter-core communication mechanisms commonly used on TI and Xilinx heterogeneous multi-core SoC processors. OpenCL OpenCL (Open Computing Language

Installing the command-line NetEase Cloud Music client on a Raspberry Pi

落爺英雄遲暮 submitted on 2020-03-14 17:36:42
Changing the pip source. 1. Temporarily: pip install -i https://pypi.doubanio.com/simple/ kivy.deps.gstreamer 2. Permanently: create a ~/.pip/pip.conf file and write the mirror address into it (Aliyun, USTC, Douban, and others all host pip mirrors): [global] index-url = http://pypi.douban.com/simple/ Install: pip install Netease-MusicBox If you hit the error below, update pip first: Downloading https://pypi.doubanio.com/packages/7f/3c/80cfaec41c3a9d0f524fe29bca9ab22d02ac84b5bfd6e22ade97d405bdba/pycryptodomex-3.9.7.tar.gz (15.5MB) 99% |████████████████████████████████| 15.5MB 4.9MB/s eta 0:00:01 Exception: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/pip/_internal/cli/base_command.py", line 143, in main status =
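The permanent mirror change above can be scripted. A minimal sketch that writes the config into a scratch directory rather than the real ~/.pip (the doubanio mirror URL comes from the excerpt; the trusted-host line is a common companion setting, not from the original):

```shell
# Write a pip.conf pointing at the Douban mirror (scratch dir stands in for $HOME/.pip).
PIP_DIR=$(mktemp -d)/.pip
mkdir -p "$PIP_DIR"
cat > "$PIP_DIR/pip.conf" <<'EOF'
[global]
index-url = https://pypi.doubanio.com/simple/
trusted-host = pypi.doubanio.com
EOF
cat "$PIP_DIR/pip.conf"
```

To apply it for real, write the same contents to ~/.pip/pip.conf (or ~/.config/pip/pip.conf on newer pip versions).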

Setting up the Qt environment

你说的曾经没有我的故事 submitted on 2020-03-08 09:41:05
Qt libraries for Qt development and for running Qt applications: Qt libraries for Qt Creator development and application runtime on Linux x64: https://download.csdn.net/download/wanvan/10695824 Qt libraries for application runtime on ARM64: https://download.csdn.net/download/wanvan/10695879 Notes on configuring the Qt library: the prebuilt Qt libraries linked above must be configured correctly before they will work. Possible problems: Problem 1: After importing my prebuilt Qt library into Qt Creator on Ubuntu, Qt Creator may report "Qt version is not properly installed, please run make install". This is one of the most common issues; it means Qt was not installed correctly. The cause: qmake is generated when Qt is built and installed, and several Qt paths are embedded inside it. If you simply copy the library over and your directory layout differs from the original, the Qt library cannot be used, and you get the "Qt version is not properly installed, please run make install" prompt. Since the path information is embedded in qmake
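Qt's standard escape hatch for the embedded-path problem is a qt.conf file placed next to qmake, whose [Paths] section overrides the built-in prefix. A minimal sketch under assumed paths (the scratch directory stands in for wherever you unpacked the prebuilt Qt, e.g. /opt/qt5-arm64):

```shell
# Simulate a relocated Qt install and drop a qt.conf beside qmake (paths are examples).
QT_PREFIX=$(mktemp -d)          # stand-in for the directory you copied Qt into
mkdir -p "$QT_PREFIX/bin"
cat > "$QT_PREFIX/bin/qt.conf" <<EOF
[Paths]
Prefix = $QT_PREFIX
EOF
cat "$QT_PREFIX/bin/qt.conf"
```

With qt.conf in place, qmake resolves headers, libraries, and plugins relative to the new Prefix instead of the paths baked in at build time, so Qt Creator stops reporting the "not properly installed" error.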

What's wrong with this GStreamer pipeline?

好久不见. submitted on 2020-03-06 02:59:05
Question: I'm sure I had this pipeline working on an earlier Ubuntu system I had set up (formatted for readability): playbin uri=rtspt://user:pswd@192.168.xxx.yyy/ch1/main video-sink='videoconvert ! videoflip method=counterclockwise ! fpsdisplaysink' Yet, when I try to use it within my program, I get: Missing element: H.264 (Main Profile) decoder WARNING: from element /GstPlayBin:playbin0/GstURIDecodeBin:uridecodebin0: No decoder available for type 'video/x-h264, stream-format=(string)avc, alignment
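That warning usually means no H.264 decoder plugin is installed on the new system, so playbin has nothing to autoplug. A hedged diagnostic sketch (avdec_h264 is the common libav-based decoder; the apt package name assumes Debian/Ubuntu repositories):

```shell
# Check whether the usual H.264 decoder element is available.
DECODER=avdec_h264
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
    if gst-inspect-1.0 "$DECODER" >/dev/null 2>&1; then
        STATUS="present"
    else
        STATUS="missing"   # typical fix: sudo apt install gstreamer1.0-libav
    fi
else
    STATUS="gstreamer tools not installed"
fi
echo "H.264 decoder ($DECODER): $STATUS"
```

Installing gstreamer1.0-libav (and gstreamer1.0-plugins-bad for some hardware decoders) is the usual remedy when the element is missing.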

realtime v4l2src for deepstream test1 c application does not work

大城市里の小女人 submitted on 2020-03-05 04:57:07
Question: My pipeline is as follows; it works with gst_parse_launch in the C code below, but I wanted to use dynamic pipelines, and while I am not getting an error, I am not getting the desired output either. gst-launch-1.0 v4l2src ! 'video/x-raw,format=(string)YUY2' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! nvvidconv ! 'video/x-raw,format=(string)NV12' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! mux.sink_0 nvstreammux live-source=1 name=mux batch

How to play two different videos in two different displays simultaneously using gstreamer

拜拜、爱过 submitted on 2020-03-03 06:59:29
Question: I am using a Raspberry Pi 4 Model B, which has 2 HDMI ports, with GStreamer-1.0. I have two videos saved on the memory card. I want to drive the two videos to the two HDMI ports and play them on two different displays simultaneously. I would like to know the GStreamer pipeline to access the HDMI0 and HDMI1 ports and play two different videos on different displays simultaneously. Source: https://stackoverflow.com/questions/57865741/how-to-play-two-different-videos-in-two-different-displays
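On the Pi 4 each HDMI output is a separate DRM connector, so one approach is to run two independent pipelines, each pinned to a connector via kmssink's connector-id property. A hedged sketch (the file paths and connector IDs below are placeholders; the real IDs on your board can be found with modetest or from kmssink's debug output):

```shell
# Two independent pipelines, one per DRM connector (connector IDs are examples).
VIDEO1=/path/to/video1.mp4    # placeholder paths for the two files on the card
VIDEO2=/path/to/video2.mp4
PIPE_HDMI0="filesrc location=$VIDEO1 ! decodebin ! videoconvert ! kmssink connector-id=32"
PIPE_HDMI1="filesrc location=$VIDEO2 ! decodebin ! videoconvert ! kmssink connector-id=38"

# Launch both in parallel, one per display:
echo "gst-launch-1.0 $PIPE_HDMI0 & gst-launch-1.0 $PIPE_HDMI1 &"
```

Because each pipeline owns its own connector, the two videos play independently; audio routing, if needed, has to be assigned per pipeline as well.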

Gtk+ Tutorials & Resources

心已入冬 submitted on 2020-02-25 10:01:18
Welcome to Gtk+ Tutorials & Resources. This page is a collection of information (documentation, tutorials, examples) for Gtk+ programmers. Not everything is Gtk-specific, but everything here can be used to develop different kinds of Gtk programs. Note: all of these books/tutorials focus on the C programming language. Note: what needs to be done this week: 1. The list needs to be reorganized, updated, and redesigned ("starting to look like a mess"), and all tutorials and references need to be backed up. Since I started this list we have lost a total of 5 tutorials due to sites being taken down. Thank

How can I run a service inside a docker container to get feed from an IDS uEye camera using gstreamer?

|▌冷眼眸甩不掉的悲伤 submitted on 2020-02-25 06:45:08
Question: I have a docker container that uses a gstreamer plugin to capture the input of a camera. It runs fine with a Basler camera, but now I need to use an IDS uEye camera. To be able to use this camera I need the ueyeusbdrc service running. The IDS documentation says that to start it I can run sudo systemctl start ueyeusbdrc or sudo /etc/init.d/ueyeusbdrc start . The problem is that when the docker container runs, that service is not running and I get a Failed to initialize camera error,
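A common pattern here is to start the daemon from the container's entrypoint rather than via systemd, since systemd is typically not running as PID 1 inside a container. A sketch under assumptions: the init script path comes from the question itself, while the docker run flags (USB device passthrough, host IPC for the driver's shared memory) are guesses about what the uEye driver needs, not from the original post:

```shell
# Entrypoint sketch: start the uEye USB daemon, then exec the real command.
WORKDIR=$(mktemp -d)
cat > "$WORKDIR/entrypoint.sh" <<'EOF'
#!/bin/sh
# systemd is not PID 1 inside the container, so launch the daemon directly.
/etc/init.d/ueyeusbdrc start
exec "$@"
EOF
chmod +x "$WORKDIR/entrypoint.sh"

# The container would also need access to the USB bus, e.g.:
echo "docker run --device /dev/bus/usb --ipc=host --entrypoint /entrypoint.sh myimage mycommand"
cat "$WORKDIR/entrypoint.sh"
```

The exec "$@" line hands PID 1 over to the real workload so signals are delivered correctly once the daemon is up.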

Using Gstreamer with Google speech API (Streaming Transcribe) in C++

荒凉一梦 submitted on 2020-02-02 13:42:51
Question: I am using the Google Speech API from Cloud Platform to get speech-to-text for streaming audio. I have already made the REST API calls using curl POST requests for a short audio file on GCP. I have seen the documentation for Google Streaming Recognize, which says "Streaming speech recognition is available via gRPC only." I have gRPC (and protobuf) installed on my openSUSE Leap 15.0. Here is a screenshot of the directory. Next I am trying to run the streaming_transcribe example