speech-recognition

CMUSphinx live speech recognition too slow?

雨燕双飞 submitted on 2019-12-07 15:48:07
Question: CMU Sphinx is too slow at recognizing live speech. Does anyone have any ideas for speeding it up? This is my configuration: configuration.setAcousticModelPath("WSJ_8gau_13dCep_16k_40mel_130Hz_6800Hz"); configuration.setDictionaryPath("cmudict.0.6d"); configuration.setLanguageModelPath("en-us.lm.dmp"); Answer 1: We are currently working on a speedup, but for now sphinx4 is not real-time for large vocabulary. It's actually not a trivial task. If you want a fast and not very accurate transcription
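The answer above is cut off, but for reference, a minimal live-recognition loop with the sphinx4 API looks like the sketch below. This is a hedged example, not the answerer's code: it assumes the sphinx4-5prealpha `edu.cmu.sphinx.api` classes and the resource-path models shipped in the `sphinx4-data` artifact, which are smaller than the WSJ model in the question.

```java
import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.LiveSpeechRecognizer;
import edu.cmu.sphinx.api.SpeechResult;

public class LiveDemo {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Models loaded from the sphinx4-data jar on the classpath.
        configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

        LiveSpeechRecognizer recognizer = new LiveSpeechRecognizer(configuration);
        recognizer.startRecognition(true); // true discards previously cached audio
        SpeechResult result;
        while ((result = recognizer.getResult()) != null) {
            System.out.println(result.getHypothesis());
        }
        recognizer.stopRecognition();
    }
}
```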

Speech Recognizer on HTC One M7

旧时模样 submitted on 2019-12-07 15:02:16
Question: I wrote a speech recognition app using Android's built-in speech recognition classes. The following exception shows up in my developer console when the startListening function is called on the speech recognizer object obtained via the createSpeechRecognizer(context) function. SecurityException: java.lang.SecurityException: Not allowed to bind to service Intent { act=android.speech.RecognitionService cmp=com.htc.android.voicedictation/.VoiceDictationService } Any ideas why this is happening and
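The stack trace shows the framework binding to HTC's own dictation service. One common workaround sketch (an assumption, not a confirmed fix for this device) is to enumerate the installed RecognitionService implementations and pass an explicit component to createSpeechRecognizer, preferring Google's service over the vendor one:

```java
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.speech.RecognitionService;
import android.speech.SpeechRecognizer;
import java.util.List;

public final class RecognizerPicker {
    /** Prefers a Google recognition service when installed, otherwise the first one found. */
    public static SpeechRecognizer create(Context context) {
        PackageManager pm = context.getPackageManager();
        List<ResolveInfo> services = pm.queryIntentServices(
                new Intent(RecognitionService.SERVICE_INTERFACE), 0);
        ComponentName chosen = null;
        for (ResolveInfo info : services) {
            ComponentName cn = new ComponentName(
                    info.serviceInfo.packageName, info.serviceInfo.name);
            if (chosen == null || info.serviceInfo.packageName.startsWith("com.google")) {
                chosen = cn;
            }
        }
        return chosen == null
                ? SpeechRecognizer.createSpeechRecognizer(context)
                : SpeechRecognizer.createSpeechRecognizer(context, chosen);
    }
}
```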

Google voice recognizer doesn't start on Android 4.x

坚强是说给别人听的谎言 submitted on 2019-12-07 14:35:29
Question: I stumbled on this random issue... Here is my code: mSpeechRecognizer = SpeechRecognizer.createSpeechRecognizer(mContext); initializeRecognitionListener(); mSpeechRecognizer.setRecognitionListener(mRecognitionListener); Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM); intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getClass().getPackage().getName()); intent.putExtra
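When the recognizer silently fails to start, the first diagnostic step is usually to log what onError reports. As a sketch (partial listener shown for brevity; a concrete subclass must implement the rest of RecognitionListener), the error codes can be mapped to readable names:

```java
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.SpeechRecognizer;
import android.util.Log;

// Partial listener: only the diagnostic callbacks are shown.
public abstract class LoggingListener implements RecognitionListener {
    @Override public void onError(int error) {
        String name;
        switch (error) {
            case SpeechRecognizer.ERROR_NETWORK:                  name = "NETWORK"; break;
            case SpeechRecognizer.ERROR_NETWORK_TIMEOUT:          name = "NETWORK_TIMEOUT"; break;
            case SpeechRecognizer.ERROR_AUDIO:                    name = "AUDIO"; break;
            case SpeechRecognizer.ERROR_SERVER:                   name = "SERVER"; break;
            case SpeechRecognizer.ERROR_CLIENT:                   name = "CLIENT"; break;
            case SpeechRecognizer.ERROR_SPEECH_TIMEOUT:           name = "SPEECH_TIMEOUT"; break;
            case SpeechRecognizer.ERROR_NO_MATCH:                 name = "NO_MATCH"; break;
            case SpeechRecognizer.ERROR_RECOGNIZER_BUSY:          name = "RECOGNIZER_BUSY"; break;
            case SpeechRecognizer.ERROR_INSUFFICIENT_PERMISSIONS: name = "INSUFFICIENT_PERMISSIONS"; break;
            default:                                              name = "UNKNOWN(" + error + ")";
        }
        Log.e("SpeechDemo", "Recognizer error: " + name);
    }

    @Override public void onReadyForSpeech(Bundle params) {
        Log.d("SpeechDemo", "onReadyForSpeech"); // confirms the recognizer actually started
    }
}
```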

How to detect if speech to text is available on android?

蹲街弑〆低调 submitted on 2019-12-07 14:34:08
Question: I believe I have figured out how to detect whether an Android device has a microphone, like so: Intent speechIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); List<ResolveInfo> speechActivities = packageManager.queryIntentActivities(speechIntent, 0); TextView micAvailView = (TextView) findViewById(R.id.mic_available_flag); if (speechActivities.size() != 0) { //we have a microphone } else { //we do not have a microphone } However, how does one detect whether the Android device has
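Note that the intent query above only checks for an activity that shows the recognition UI. The platform also offers SpeechRecognizer.isRecognitionAvailable for checking whether a recognition service is actually usable. A sketch of both checks side by side:

```java
import android.content.Context;
import android.content.Intent;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

public final class SpeechAvailability {
    /** True when some activity can handle the recognition intent (a UI exists). */
    public static boolean hasRecognitionActivity(Context ctx) {
        Intent speechIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        return !ctx.getPackageManager()
                .queryIntentActivities(speechIntent, 0).isEmpty();
    }

    /** True when a RecognitionService implementation is installed and usable. */
    public static boolean hasRecognitionService(Context ctx) {
        return SpeechRecognizer.isRecognitionAvailable(ctx);
    }
}
```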

Using Gstreamer with Google speech API (Streaming Transcribe) in C++

Deadly submitted on 2019-12-07 13:54:28
I am using the Google Speech API from the cloud platform to get speech-to-text for streaming audio. I have already done the REST API calls using curl POST requests for a short audio file on GCP. I have seen the documentation for Google Streaming Recognize, which says "Streaming speech recognition is available via gRPC only." I have gRPC (and protobuf) installed on my openSUSE Leap 15.0. Here is the screenshot of the directory. Next I am trying to run the streaming_transcribe example from this link, and I found that the sample program uses a local file as the input but simulates it as
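The question concerns the C++ sample, but the streaming protocol shape is the same in every gRPC client: the first request carries only the configuration, and subsequent requests carry audio chunks. A hedged sketch of that shape with the google-cloud-speech Java client (file path, chunk size, and the crude final wait are placeholders, not part of the official sample):

```java
import com.google.api.gax.rpc.ApiStreamObserver;
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognizeRequest;
import com.google.cloud.speech.v1.StreamingRecognizeResponse;
import com.google.protobuf.ByteString;
import java.io.FileInputStream;
import java.io.InputStream;

public class StreamingTranscribeSketch {
    public static void main(String[] args) throws Exception {
        try (SpeechClient speech = SpeechClient.create()) {
            ApiStreamObserver<StreamingRecognizeResponse> responseObserver =
                    new ApiStreamObserver<StreamingRecognizeResponse>() {
                        @Override public void onNext(StreamingRecognizeResponse r) {
                            r.getResultsList().forEach(res ->
                                    System.out.println(res.getAlternatives(0).getTranscript()));
                        }
                        @Override public void onError(Throwable t) { t.printStackTrace(); }
                        @Override public void onCompleted() { }
                    };
            ApiStreamObserver<StreamingRecognizeRequest> requestObserver =
                    speech.streamingRecognizeCallable().bidiStreamingCall(responseObserver);

            // First request: configuration only, no audio.
            RecognitionConfig config = RecognitionConfig.newBuilder()
                    .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                    .setSampleRateHertz(16000)
                    .setLanguageCode("en-US")
                    .build();
            requestObserver.onNext(StreamingRecognizeRequest.newBuilder()
                    .setStreamingConfig(StreamingRecognitionConfig.newBuilder()
                            .setConfig(config).setInterimResults(true).build())
                    .build());

            // Subsequent requests: audio chunks. A microphone source would feed
            // its capture buffers here instead of reading a file.
            try (InputStream in = new FileInputStream("audio.raw")) { // placeholder path
                byte[] buf = new byte[3200]; // ~100 ms of 16 kHz 16-bit mono audio
                int n;
                while ((n = in.read(buf)) > 0) {
                    requestObserver.onNext(StreamingRecognizeRequest.newBuilder()
                            .setAudioContent(ByteString.copyFrom(buf, 0, n))
                            .build());
                }
            }
            requestObserver.onCompleted();
            Thread.sleep(5000); // crude wait for the final responses (sketch only)
        }
    }
}
```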

For Watson's Speech-To-Text Unity SDK, how can you specify keywords?

眉间皱痕 submitted on 2019-12-07 12:04:03
Question: I am trying to specify keywords in Watson's Speech-To-Text Unity SDK, but I'm unsure how to do this. The details page doesn't show an example (see here: https://www.ibm.com/watson/developercloud/doc/speech-to-text/output.shtml), and other forum posts are written for Java applications (see here: How to specify phonetic keywords for IBM Watson speech2text service?). I've tried hard-coding these values in the RecognizeRequest class created in the "Recognize" function like so, but without
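For comparison with the Java posts the question links to, this is roughly how keyword spotting looks in the Watson Java SDK (a hedged sketch against the watson-developer-cloud Java SDK's RecognizeOptions builder, not the Unity SDK; credentials, file name, and keyword values are placeholders). The service takes a keyword list together with a keywords_threshold:

```java
import com.ibm.watson.developer_cloud.speech_to_text.v1.SpeechToText;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.RecognizeOptions;
import com.ibm.watson.developer_cloud.speech_to_text.v1.model.SpeechRecognitionResults;
import java.io.File;
import java.util.Arrays;

public class KeywordSpotting {
    public static void main(String[] args) throws Exception {
        SpeechToText service = new SpeechToText();
        service.setUsernameAndPassword("username", "password"); // placeholder credentials

        RecognizeOptions options = new RecognizeOptions.Builder()
                .audio(new File("audio.wav"))                   // placeholder file
                .contentType("audio/wav")
                .keywords(Arrays.asList("colorado", "tornado")) // placeholder keywords
                .keywordsThreshold(0.5f)  // only report matches above 50% confidence
                .build();

        SpeechRecognitionResults results = service.recognize(options).execute();
        System.out.println(results);
    }
}
```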

Training SAPI: Creating transcripted wav files and adding file paths to the registry

送分小仙女□ submitted on 2019-12-07 10:54:07
Question: We are trying to do acoustic training, but we are unable to create the transcripted audio files. How do we create them? We are also using GetTranscript and AppendTranscript, but we are unable to get the ISpTranscript interface for the ISpStream if we open the stream in READWRITE mode, so how do you create the transcript wav files? hr = SPBindToFile(L"e:\\file1.wav", SPFM_OPEN_READONLY, &cpStream); hr = cpStream.QueryInterface(&cpTranscript); // We get an error here, E_NOINTERFACE, if SPFM_OPEN

Android Offline Speech Recognition shows only one result?

拈花ヽ惹草 submitted on 2019-12-07 10:13:53
Question: I've set up a speech recognition service as shown in the post Android Speech Recognition as a service on Android 4.1 & 4.2, and when I use offline recognition (putting the phone in airplane mode) it only shows me one result in onResults(), while in online mode I always get more than 5 results. I use this Intent: mSpeechRecognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); mSpeechRecognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL
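The number of alternatives can at least be requested via EXTRA_MAX_RESULTS, though it is only a hint: the offline engine is free to return a single hypothesis regardless. A sketch of the intent with that extra added:

```java
import android.content.Intent;
import android.speech.RecognizerIntent;

public final class RecognizerIntents {
    /** Builds the recognition intent, asking for up to five alternatives. */
    public static Intent build(String callingPackage) {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, callingPackage);
        // A hint only: the offline engine may still return a single hypothesis.
        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
        return intent;
    }
}
```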

How to hide the toast "Your audio will be sent to google to provide speech recognition service." in Speech Recognizer?

我与影子孤独终老i submitted on 2019-12-07 09:01:06
Question: I am using the Google speech recognizer to integrate voice services in Android, but when pressing the mic button this annoying toast message shows up. Please suggest a way to hide it. Thanks. Answer 1: If your device is rooted you can hide the notification, but not prevent the audio from being sent to Google. Install the Xposed framework and the UnToaster Xposed module, then add: com.google.android.googlequicksearchbox Answer 2: So what you can do is to customize your speech recognizer by building it by
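Answer 2 is cut off; the usual continuation of that advice is to drive SpeechRecognizer through the service API instead of launching the RecognizerIntent activity, so Google's branded UI (and its toast) is not shown, though behavior can vary by device. A sketch, assuming the RECORD_AUDIO permission is already granted and a RecognitionListener is supplied by the caller:

```java
import android.content.Context;
import android.content.Intent;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

public final class SilentRecognition {
    /** Starts recognition via the service API; no RecognizerIntent activity is launched. */
    public static SpeechRecognizer start(Context context, RecognitionListener listener) {
        SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(listener);
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
        return recognizer;
    }
}
```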

Continuous speech recognition with phonegap

蓝咒 submitted on 2019-12-07 07:58:33
Question: I want to create an app in PhoneGap with continuous speech recognition on Android and iOS. My app should wait for the user's voice, and when he/she says "next", the app should update the screen and do some actions. I found this plugin: https://github.com/macdonst/SpeechRecognitionPlugin and it works really fast. But a few seconds after voice recognition starts with no voice detected, the speech recognizer stops. Is there any method or flag like isSpeechRecognizerAlive, or any other solution? Or is it possible
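On Android, the plugin wraps the platform's SpeechRecognizer, which stops after a silence timeout by design. A common native-side workaround (a sketch of the pattern, not an API the plugin exposes) is to restart listening from onResults and from the timeout-related error codes:

```java
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.SpeechRecognizer;

// Partial listener: only the restart logic is shown; a concrete subclass
// must implement the remaining RecognitionListener callbacks.
public abstract class ContinuousListener implements RecognitionListener {
    private final SpeechRecognizer recognizer;
    private final Intent intent;

    protected ContinuousListener(SpeechRecognizer recognizer, Intent intent) {
        this.recognizer = recognizer;
        this.intent = intent;
    }

    @Override public void onResults(Bundle results) {
        // Handle the recognized phrase (e.g. check for "next"), then listen again.
        recognizer.startListening(intent);
    }

    @Override public void onError(int error) {
        // Restart after a silence timeout or no-match instead of stopping for good.
        if (error == SpeechRecognizer.ERROR_SPEECH_TIMEOUT
                || error == SpeechRecognizer.ERROR_NO_MATCH) {
            recognizer.startListening(intent);
        }
    }
}
```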