Can react-native-webview perform a WebRTC audio call + hot-keyword speech-to-text detection using my own JS code?

梦想的初衷 submitted on 2019-12-11 14:09:37

Question


I have developed JS code capable of performing an audio-only WebRTC call combined with pocketsphinx.js for hot-keyword speech detection.

I've merged this hot-keyword speech-to-text detection demo (which uses a three-year-old version of pocketsphinx.js) with this audio-only WebRTC call demo and this audio visualiser, with the following modifications:

  • I have merged the two AudioContext() constructor calls into a single one (passed as a parameter where needed), placed after the getUserMedia() success callback, since Google Chrome only allows AudioContext() creation after the user grants sensor access (see the sketch after this list).
  • I have also moved the getUserMedia() call so that it fires as soon as the window.onload event fires, as in the pocketsphinx.js demo, to get access to the stream immediately.
  • I've bound the "start" and "stop" buttons to the "call" and "hangup" ones.
  • Since pocketsphinx.js (recognizer.js + audioRecorderWorker.js) runs in web workers, which cannot be loaded as standard JS files from disk, I've ended up serving my demo with the following command: python -m SimpleHTTPServer 8000
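For clarity, here is roughly what the merged wiring looks like. This is a minimal sketch, not my actual main.js: startVisualiser and startRecognizer are hypothetical helper names standing in for the visualiser code and the pocketsphinx.js glue code.

```javascript
// Minimal sketch of the merged setup (not the real main.js):
// a single AudioContext is created only after getUserMedia() succeeds,
// then passed to the other pieces as a parameter.
window.onload = function () {
  navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(function (stream) {
      // One shared AudioContext, created after the user grants microphone access.
      var audioContext = new AudioContext();
      var source = audioContext.createMediaStreamSource(stream);

      startVisualiser(audioContext, source); // hypothetical helper from the visualiser demo
      startRecognizer(audioContext, source); // hypothetical helper wrapping pocketsphinx.js
    })
    .catch(function (err) {
      console.error('getUserMedia failed:', err);
    });
};
```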

The main.js file is quite big (and, admittedly, hard to read) since it merges three demos, but it seems to work:

1) I run my web application with python -m SimpleHTTPServer 8000 inside my webapp directory.
2) I open a Google Chrome instance at http://localhost:8000 and grant access to my microphone.
3) I wait until the recognizer and recorder handlers are ready (a rough sketch of that worker wiring follows this list).
4) I press the call button: I can see the audio graph visualiser, I see recognition output for the "Cat" keyword (the recognizer has already been compiled for this specific one), and I can hear myself.
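For reference, the "wait until the handlers are ready" step boils down to standard Web Worker plumbing, which is also why the files have to be served over HTTP. The sketch below makes assumptions about the message shapes; the actual commands and payloads follow whatever the pocketsphinx.js version used in the demo expects.

```javascript
// Rough sketch of the recognizer worker wiring (assumed message shapes; the
// real pocketsphinx.js protocol may differ between versions).
var recognizer = new Worker('recognizer.js'); // workers must be fetched over HTTP, hence SimpleHTTPServer

recognizer.onmessage = function (e) {
  // In my setup the worker eventually reports a hypothesis containing the
  // detected keyword ("Cat"); the exact payload field names are assumptions here.
  if (e.data && e.data.hypothesis) {
    console.log('Detected keyword:', e.data.hypothesis);
  }
};

// Kick off initialization; the recognizer is already compiled for the "Cat"
// keyword, so nothing else is loaded here.
recognizer.postMessage({ command: 'initialize' });
```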

I know this only works in my laptop browser for now, and some effort is still needed to get this demo running inside a mobile application. So far, using a WebView is the best idea I've got, but since I have never worked with React Native, here are the questions that came to me:

  1. This is the react-native-webview module I'd like to use (a rough usage sketch follows these questions).
  2. Does it have access to the audio stream via the getUserMedia() API? If yes, does the permission prompt appear twice, once from the native app requesting audio permission and once from the embedded browser?
  3. Does it allow the creation of web workers? If yes, can they be spawned from the filesystem instead of via a local HTTP daemon like my python -m SimpleHTTPServer command?
  4. Can it be hidden?
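To make question 1 concrete, here is roughly how I picture embedding the existing page. This is only a sketch: the uri is a placeholder for wherever the SimpleHTTPServer instance is reachable from the device, and the zero-size style is just my guess at how the WebView could be hidden (question 4).

```javascript
// Rough sketch of the intended usage (assumptions, not verified behaviour):
// the uri is a placeholder and the zero-size style is only a guess at hiding the view.
import React from 'react';
import { View } from 'react-native';
import { WebView } from 'react-native-webview';

export default function HiddenCallWebView() {
  return (
    <View style={{ flex: 1 }}>
      <WebView
        source={{ uri: 'http://192.168.0.10:8000' }} // placeholder address of the machine running SimpleHTTPServer
        javaScriptEnabled={true}
        originWhitelist={['*']}
        style={{ width: 0, height: 0, opacity: 0 }}
      />
    </View>
  );
}
```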

Also, do you think this sounds like a reasonable technical stack, or should I go with pure native mobile code?

Source: https://stackoverflow.com/questions/56102224/does-react-native-webview-can-perform-webrtc-call-audio-hot-keywords-speech-to
