opus

What value to use for Libopus encoder max_data_bytes field?

一笑奈何 submitted on 2021-02-10 11:59:51
Question: I am currently using libopus to encode some audio. When consulting the documentation for how to use the encoder, one of the arguments the encode function takes is max_data_bytes, an opus_int32 with the following documentation: "Size of the allocated memory for the output payload. May be used to impose an upper limit on the instant bitrate, but should not be used as the only bitrate control." Unfortunately, I wasn't able to get much out of this definition as to how to…
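In practice, max_data_bytes is simply the size of the output buffer handed to opus_encode(); the actual bitrate is better controlled with OPUS_SET_BITRATE. A minimal C sketch (not from the question; the 4000-byte buffer is just a generous choice, far larger than any 20 ms packet needs at normal bitrates):

    #include <opus.h>
    #include <stdio.h>

    #define SAMPLE_RATE 48000
    #define CHANNELS    1
    #define FRAME_SIZE  960        /* 20 ms at 48 kHz */
    #define MAX_PACKET  4000       /* generous output buffer; the bitrate is the real limit */

    int main(void) {
        int err;
        OpusEncoder *enc = opus_encoder_create(SAMPLE_RATE, CHANNELS,
                                               OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) return 1;

        /* Control the bitrate explicitly instead of relying on max_data_bytes. */
        opus_encoder_ctl(enc, OPUS_SET_BITRATE(64000));

        opus_int16 pcm[FRAME_SIZE * CHANNELS] = {0};   /* one silent frame for illustration */
        unsigned char packet[MAX_PACKET];

        /* max_data_bytes = sizeof(packet): just the room available for the payload. */
        opus_int32 len = opus_encode(enc, pcm, FRAME_SIZE, packet, sizeof(packet));
        if (len < 0) { opus_encoder_destroy(enc); return 1; }

        printf("encoded %d bytes\n", len);
        opus_encoder_destroy(enc);
        return 0;
    }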

Splitting an Ogg Opus File stream

最后都变了- submitted on 2021-02-10 05:38:30
Question: I am trying to send an OGG_OPUS-encoded stream to Google's Speech-to-Text streaming service. Since Google imposes a time limit on its streaming requests, I have to route the audio stream to another Speech-to-Text streaming session at a fixed interval. From what I've read, the pages in the Ogg stream cannot be read independently, since the data in the pages is calculated by considering the data of the previous and next pages. If that is the case, can we cut off the stream at…
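One possible approach (a sketch only; the question does not mandate libogg, and the function names feed_bytes/drain_pages/forward are illustrative) is to let libogg find page boundaries, split only between complete pages, and re-send the OpusHead and OpusTags header pages at the start of every new recognition session:

    #include <ogg/ogg.h>
    #include <string.h>

    /* Feed raw bytes from the incoming Ogg Opus stream into libogg's framing layer. */
    void feed_bytes(ogg_sync_state *oy, const char *buf, size_t len) {
        char *dst = ogg_sync_buffer(oy, (long)len);
        memcpy(dst, buf, len);
        ogg_sync_wrote(oy, (long)len);
    }

    /* Drain complete pages; each can be forwarded as-is (og.header + og.body).
     * A new session can start at a page boundary provided the OpusHead and
     * OpusTags header pages are re-sent first; pages whose first packet is
     * continued from the previous page are rare for Opus but would need care. */
    int drain_pages(ogg_sync_state *oy, void (*forward)(const ogg_page *pg)) {
        ogg_page og;
        int n = 0;
        while (ogg_sync_pageout(oy, &og) == 1) { forward(&og); n++; }
        return n;
    }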

How to encode and decode real-time audio using OpusCodec in iOS?

烂漫一生 submitted on 2021-02-07 07:53:41
Question: I am working on an app with the following requirements: record real-time audio from an iOS device (iPhone); encode this audio data to Opus and send it to a server over WebSocket; decode the received data back to PCM; play the audio received from the WebSocket server on the iOS device (iPhone). I've used AVAudioEngine for this. var engine = AVAudioEngine() var input: AVAudioInputNode = engine.inputNode var format: AVAudioFormat = input.outputFormat(forBus: AVAudioNodeBus(0)) input.installTap(onBus:…
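Independently of the Swift capture code, libopus only accepts frames of 2.5, 5, 10, 20, 40 or 60 ms, so the PCM delivered by the tap has to be re-chunked before encoding. A minimal C sketch of the encode/decode round trip the app needs (48 kHz mono and 20 ms frames are assumptions, not values from the question):

    #include <opus.h>

    #define RATE   48000
    #define CH     1
    #define FRAME  960   /* 20 ms at 48 kHz; one of the frame sizes Opus accepts */

    int main(void) {
        int err;
        OpusEncoder *enc = opus_encoder_create(RATE, CH, OPUS_APPLICATION_VOIP, &err);
        OpusDecoder *dec = opus_decoder_create(RATE, CH, &err);

        opus_int16 in[FRAME * CH] = {0};    /* one 20 ms chunk taken from the tap */
        opus_int16 out[FRAME * CH];
        unsigned char packet[4000];

        /* Encode one frame -> send the packet over the WebSocket ... */
        opus_int32 len = opus_encode(enc, in, FRAME, packet, sizeof(packet));

        /* ... receive a packet -> decode back to PCM for playback. */
        int samples = opus_decode(dec, packet, len, out, FRAME, 0);

        opus_decoder_destroy(dec);
        opus_encoder_destroy(enc);
        return (len < 0 || samples < 0);
    }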

How do I play Opus encoded audio in Java?

戏子无情 submitted on 2021-02-06 19:02:32
Question: When playing back the decoded audio, I've managed to produce a variety of sounds, from gurgling to screeching to demonic chants. The closest attempt sounds as if it were played in fast-forward, and playback only lasts about 15 seconds. I've tried a large combination of parameters for the decoding and the AudioSystem API methods; nothing seems to work. So, what is causing this audio distortion? opusinfo for this file shows the following: Processing file "test.opus"... New logical…

Encode AudioBuffer with Opus (or other codec) in Browser

痞子三分冷 submitted on 2021-01-29 13:21:38
Question: I am trying to stream audio via WebSocket. I can get an AudioBuffer from the microphone (or another source) via the Web Audio API and stream the raw audio buffer, but I think this would not be very efficient. So I looked around for a way to encode the AudioBuffer somehow. If the Opus codec is not practicable, I am open to alternatives and thankful for any hints in the right direction. I have tried to use the MediaRecorder (from the MediaStream Recording API), but it seems it is not possible to stream with that…

How can I read Opus packets one by one from an Ogg/Opus file

℡╲_俬逩灬. submitted on 2021-01-28 22:04:57
Question: I need to read Opus packets one by one from an Ogg/Opus file and send them onward in Opus format, so without decoding. I'm looking at the opusfile lib, but its API and examples are rather complicated and more focused on decoding the file and getting the resulting PCM. Is there a way to achieve what I want with this lib, and how? If not, what other options do I have? Answer 1: libogg can be used to parse the Ogg Opus file's "pages", and the Opus "packets" can then be extracted from those pages. Mind that…
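A compact C sketch of that answer's approach (error handling and multi-stream/chained-file handling omitted; the callback name is illustrative):

    #include <ogg/ogg.h>
    #include <stdio.h>

    /* Walk an Ogg Opus file and hand every Opus packet to a callback, without
     * decoding. The first two packets are the OpusHead and OpusTags headers. */
    void for_each_packet(FILE *f, void (*cb)(const unsigned char *data, long len)) {
        ogg_sync_state   oy;  ogg_sync_init(&oy);
        ogg_stream_state os;  int stream_ready = 0;
        ogg_page   og;
        ogg_packet op;
        char *buf;  size_t n;

        while ((buf = ogg_sync_buffer(&oy, 4096)),
               (n = fread(buf, 1, 4096, f)) > 0) {
            ogg_sync_wrote(&oy, (long)n);
            while (ogg_sync_pageout(&oy, &og) == 1) {
                if (!stream_ready) {            /* first page carries the serial number */
                    ogg_stream_init(&os, ogg_page_serialno(&og));
                    stream_ready = 1;
                }
                ogg_stream_pagein(&os, &og);
                while (ogg_stream_packetout(&os, &op) == 1)
                    cb(op.packet, op.bytes);    /* one complete Opus packet */
            }
        }
        if (stream_ready) ogg_stream_clear(&os);
        ogg_sync_clear(&oy);
    }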

How to include/use latest version of Opus codec in Android NDK

怎甘沉沦 submitted on 2021-01-28 06:24:40
Question: A complete novice question here. I am pretty familiar with programming in C/C++ on Linux environments. However, I have no experience whatsoever with the Android environment, let alone with building an application in C for Android platforms. I need to use the Opus codec in my application, but it is not present in the default libraries of the Android NDK. How can I add it? Some sources on the internet talk about Android.mk files. I am using the most recent version of Android Studio, and there is no…

Slow motion effect when decoding OPUS audio stream

让人想犯罪 __ submitted on 2021-01-27 21:08:05
Question: I'm capturing the audio stream of a voice chat program (it is proprietary, closed-source, and I have no control over it) which is encoded with the Opus codec, and I want to decode it into raw PCM audio (Opus Decoder doc). What I'm doing is: create an Opus decoder: opusDecoder = opus_decoder_create(48000, 1, &opusResult); decode the stream: opusResult = opus_decode(opusDecoder, voicePacketBuffer, voicePacketLength, pcm, 9600, 0); save it to a file: pcmFile.write(pcm, opusResult * sizeof(opus…
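A slow-motion (or sped-up) result usually points to a sample-rate or channel-count mismatch somewhere between the decoder and the player rather than to opus_decode itself: opus_decode returns the number of samples per channel, and both the byte count written and the playback configuration must account for that. A C sketch of the decode step with the bookkeeping spelled out (the stereo channel count here is an assumption for illustration; the real value should come from the stream or the sender):

    #include <opus.h>
    #include <stdio.h>

    #define RATE      48000
    #define CHANNELS  2                   /* must match the stream, not a guess */
    #define MAX_FRAME 5760                /* 120 ms at 48 kHz, the largest Opus frame */

    int decode_packet(OpusDecoder *dec, const unsigned char *pkt, opus_int32 len,
                      FILE *out) {
        opus_int16 pcm[MAX_FRAME * CHANNELS];

        /* Return value is the number of samples *per channel* in this frame. */
        int samples = opus_decode(dec, pkt, len, pcm, MAX_FRAME, 0);
        if (samples < 0) return samples;

        /* Write samples * channels interleaved 16-bit values, and make sure the
         * player is configured for the same rate and channel count. */
        fwrite(pcm, sizeof(opus_int16), (size_t)samples * CHANNELS, out);
        return samples;
    }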

EasyRTC: implementing instant-communication applications based on WebRTC technology

吃可爱长大的小学妹 submitted on 2021-01-19 15:57:58
Introduction to WebRTC

WebRTC, whose name comes from Web Real-Time Communication, is an API that enables real-time voice and video conversations in web browsers. It was open-sourced on June 1, 2011 and, with the backing of Google, Mozilla, and Opera, was adopted as a W3C recommendation of the World Wide Web Consortium. Building on WebRTC, and drawing on years of audio/video development experience and real-world requirements, EasyRTC has developed a WebRTC-based audio/video communication cloud platform that provides cross-platform solutions for interactive teaching, co-hosted live streaming, video conferencing, command and dispatch, and more.

History of WebRTC

In May 2010, Google acquired the GIPS engine from VoIP software developer Global IP Solutions for 68.2 million US dollars and renamed it "WebRTC". Using the GIPS engine, WebRTC implemented web-based video conferencing, supported codecs such as G.722, PCM, iLBC, and iSAC, used Google's own VP8 video codec, and supported RTP/SRTP transport. In January 2012, Google integrated the software into the Chrome browser; around the same time, the FreeSWITCH project announced support for the iSAC audio codec.

WebRTC core APIs

The native WebRTC API documentation is written against the WebRTC specification. These APIs can be divided into three categories: the Network Stream API, RTCPeerConnection, and the Peer-to-peer Data API: Network…

Recap of the 2020 China System Architect Conference: ZEGO's real-time audio/video service architecture in practice

荒凉一梦 submitted on 2020-10-30 19:37:38
On October 24, Zhu Yongjian (jack), head of backend architecture and senior technical expert at ZEGO (即构科技), was invited to the 2020 China System Architect Conference, where he gave a talk titled "ZEGO Real-Time Audio/Video Service Architecture in Practice" in the audio/video architecture and algorithms track. The following is an excerpt from the talk:

As a professional audio/video cloud service provider, ZEGO serves many leading companies in pan-entertainment, online education, finance, industrial internet, IoT, and other industries, such as Inke, Huajiao, Weibo, and TAL Education (好未来). In the first half of this year, affected by the pandemic, many of the education and pan-entertainment customers ZEGO serves saw their traffic surge. The stable backend services ZEGO provides ensured zero-failure operation of these customers' online business, which rests on our mature, highly available, auto-scaling streaming media service architecture.

Below, I share four parts: an introduction to the ZEGO streaming media service, the streaming media service architecture, the scheduling logic design, and operations monitoring.

1. Introduction to the ZEGO streaming media service

Taking this diagram as an example, let's look at the full picture of the ZEGO streaming media service: suppose there are three hosts A, B, and C, plus an audience. Hosts A, B, and C want to interact via co-hosting (连麦), and they publish their streams through a browser, a native app, and a WeChat/QQ mini-program respectively. Because the hosts publish from different kinds of terminals, the underlying audio/video protocols also differ, corresponding to WebRTC, AVERTP (ZEGO's proprietary audio/video protocol), and RTMP. For co-hosted interaction the hosts need to pull each other's streams, and a good interactive experience requires very low end-to-end pull latency (<400 ms). The hosts can therefore pull streams from ZEGO's global real-time network, which supports pulling from web and native app terminals; under real-world network conditions in China, the end-to-end latency can reach 150…