Question
I have developed JavaScript code that performs a WebRTC audio-only call combined with pocketsphinx.js for hot-keyword speech detection.
I've merged this hot-keyword speech-to-text detection demonstration (which uses a three-year-old version of pocketsphinx.js) with this audio-only WebRTC call and this audio visualiser, with the following modifications:
- I moved the two `AudioContext()` constructor calls into a single one (passing it as a parameter where needed), placed after the `getUserMedia()` success callback, since Google Chrome only allows `AudioContext()` creation after the user grants sensor access (see the sketch after this list).
- I also moved the `getUserMedia()` call so it fires as soon as the `window.onload` event does, like in the pocketsphinx.js demonstration, to obtain access to the stream immediately.
- I bound the "start" and "stop" buttons to the "call" and "hangup" ones.
- Since pocketsphinx.js (recognizer.js + audioRecorderWorker.js) runs in web workers, which cannot be loaded as plain local files, I ended up serving my demonstration with the following command:

```
python -m SimpleHTTPServer 8000
```
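For reference, here is a minimal sketch of that merge; `startVisualizer` and `startRecognizer` are hypothetical stand-ins for the demo functions, not real names from my main.js:

```js
window.onload = function () {
  // Request the stream immediately, as in the pocketsphinx.js demonstration.
  navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(function (stream) {
      // Create the single AudioContext only after the user grants access,
      // since Chrome blocks creating it earlier.
      var audioContext = new AudioContext();
      var source = audioContext.createMediaStreamSource(stream);

      // Pass the same context to both consumers instead of creating two.
      startVisualizer(audioContext, source); // feeds the analyser graph
      startRecognizer(audioContext, source); // feeds audioRecorderWorker.js
    })
    .catch(function (err) {
      console.error('getUserMedia failed:', err);
    });
};
```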
The main.js file is quite big (and mostly hard to read) since it's a merge of three demonstrations, but it seems to work:
1) I run my web application with `python -m SimpleHTTPServer 8000` inside my webapp directory.
2) I open a Google Chrome instance at `http://localhost:8000` and grant access to my microphone.
3) I wait until the recognizer and recorder handlers are ready (see the worker sketch after these steps).
4) I press the call button; I can then see the audio graph visualizer, see recognition output for the "Cat" keyword (the recognizer has already been compiled for this specific one), and hear myself.
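To illustrate why the HTTP daemon is needed and what "ready" means in step 3, here is a rough sketch of the recognizer worker setup; the `command`/`status` field names follow my reading of the pocketsphinx.js demo and the button id is hypothetical, so treat them as assumptions rather than a verified API:

```js
// Loading this file from file:// would fail, which is why the demo is
// served with "python -m SimpleHTTPServer 8000".
var recognizer = new Worker('js/recognizer.js');

recognizer.onmessage = function (e) {
  // The demo's recognizer replies with a status message once initialized;
  // only then should the "call" button become usable.
  if (e.data.status === 'done' && e.data.command === 'initialize') {
    document.getElementById('call').disabled = false; // hypothetical button id
  }
};

// Kick off initialization of the recognizer inside the worker.
recognizer.postMessage({ command: 'initialize' });
```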
I know this only works in my laptop browser, and some effort is still needed to run this demonstration as a mobile application. So far, using a WebView is the best idea I've got, but here are the questions that came to me, because I have never worked with React Native.
- This is the react-native-webview module I'd like to use (a minimal usage sketch follows this list).
- Does it have access to the audio stream with the `getUserMedia()` API? If yes, does the permission prompt appear twice, once from the native app requesting audio permission and a second time from the browser?
- Does it allow the creation of web workers? If yes, can they be spawned from the filesystem rather than via a local HTTP daemon like my `python -m SimpleHTTPServer` command?
- Can it be hidden?
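For context, here is a minimal sketch of what I imagine the WebView side would look like, assuming the commonly documented react-native-webview props (`source`, `javaScriptEnabled`, `allowsInlineMediaPlayback`, `mediaPlaybackRequiresUserAction`); whether `getUserMedia()` and workers actually function inside it is exactly what I'm asking:

```js
import React from 'react';
import { WebView } from 'react-native-webview';

export default function HiddenCallView() {
  return (
    <WebView
      source={{ uri: 'http://localhost:8000' }} // the page with main.js
      javaScriptEnabled={true}
      allowsInlineMediaPlayback={true}        // iOS: keep media inline
      mediaPlaybackRequiresUserAction={false} // let audio start without a tap
      // "Can it be hidden?": collapsing the view to zero size is one idea,
      // though whether audio keeps flowing in that state is untested here.
      style={{ width: 0, height: 0 }}
    />
  );
}
```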
Also, do you think this sounds like a reasonable technical stack, or should I go with pure native mobile code?
Source: https://stackoverflow.com/questions/56102224/does-react-native-webview-can-perform-webrtc-call-audio-hot-keywords-speech-to