speech-recognition

Speech is not being recognized with default dictation grammar in my UWP application.

删除回忆录丶 submitted on 2019-12-12 03:09:01
Question: Speech is not being recognized with the default dictation grammar in my UWP application. However, it is recognized perfectly when I use a programmatic list constraint. Below is the speech recognition part of my code for reference. If I do not comment out the 5th line, this works fine. Am I doing something wrong below? speechRecognizer = new SpeechRecognizer(); bool PermissionGained = await CheckMicrophonePermission(); if (PermissionGained) { //speechRecognizer.Constraints.Add(new

Stopping speech recognition before using text to speech

孤人 submitted on 2019-12-12 03:03:30
Question: I am implementing a dialogue application using speech recognition and text-to-speech. I noticed that once the recognizer is started, it tries to recognize any sound, including the output of the text-to-speech. I tried the code below to prevent it from listening to the TTS, but I get this exception: E/JavaBinder(29640): *** Uncaught remote exception! (Exceptions are not yet supported across processes.) E/JavaBinder(29640): java.lang.RuntimeException: SpeechRecognizer should be used only from the
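The exception above is cut off, but that RuntimeException from android.speech.SpeechRecognizer typically continues "... the application's main thread", i.e. the recognizer is being touched from a background thread such as a TTS callback. A minimal sketch of one common arrangement, assuming a recognizer, an Intent and a TextToSpeech instance already set up on the main thread (the class and method names here are made up): stop listening before speaking, and restart only from the main thread once the utterance completes.

```java
import android.content.Intent;
import android.os.Handler;
import android.os.Looper;
import android.speech.SpeechRecognizer;
import android.speech.tts.TextToSpeech;
import android.speech.tts.UtteranceProgressListener;

import java.util.HashMap;

// Pauses recognition while the TTS prompt plays and resumes it afterwards.
// UtteranceProgressListener callbacks arrive on a TTS worker thread, so the
// restart is posted back to the main thread, which SpeechRecognizer requires.
public class SpeakThenListen {

    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    // Call from the main thread.
    public void speakThenListen(TextToSpeech tts, SpeechRecognizer recognizer,
                                Intent recognizerIntent, String prompt) {
        tts.setOnUtteranceProgressListener(new UtteranceProgressListener() {
            @Override public void onStart(String utteranceId) { }

            @Override public void onDone(String utteranceId) {
                // Resume listening only after the prompt has finished playing.
                mainHandler.post(() -> recognizer.startListening(recognizerIntent));
            }

            @Override public void onError(String utteranceId) {
                onDone(utteranceId);
            }
        });

        recognizer.stopListening();   // or cancel(), so the mic does not pick up the prompt
        HashMap<String, String> params = new HashMap<>();
        params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "prompt");
        tts.speak(prompt, TextToSpeech.QUEUE_FLUSH, params);
    }
}
```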

Permission Denial Error - SpeechRecognizer as a continuous service? (android.permission.INTERACT_ACROSS_USERS_FULL)

≯℡__Kan透↙ submitted on 2019-12-12 01:48:15
Question: EDITED: I have changed my service code to implement it as a started service instead of an IntentService, as in the updated StreamService.java below. Now I am getting a permission denial error, as described in the logcat messages after StreamService.java. EDITED: As mentioned on the Android Developer site, the SpeechRecognizer API can only be used with the application context. Is there any workaround with which I can get it working? I have implemented a MainActivity class that has all the UI components. The class is as
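The excerpt is cut off before the StreamService code, so the permission denial itself cannot be diagnosed here. For reference, a sketch of the shape usually aimed for: a plain started Service whose onStartCommand (which, unlike IntentService's onHandleIntent, runs on the main thread) creates the recognizer from the application context. The class name and intent extras are illustrative, and this does not by itself guarantee the INTERACT_ACROSS_USERS_FULL error goes away.

```java
import android.app.Service;
import android.content.Intent;
import android.os.Bundle;
import android.os.IBinder;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

// Started service hosting a SpeechRecognizer. onStartCommand of a plain started
// Service runs on the main thread, which is what SpeechRecognizer requires, and
// the recognizer is created from the application context since it outlives any Activity.
public class StreamService extends Service implements RecognitionListener {

    private SpeechRecognizer recognizer;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (recognizer == null) {
            recognizer = SpeechRecognizer.createSpeechRecognizer(getApplicationContext());
            recognizer.setRecognitionListener(this);
        }
        Intent listen = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        listen.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        listen.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getPackageName());
        recognizer.startListening(listen);
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        if (recognizer != null) recognizer.destroy();
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    // RecognitionListener callbacks; restart listening from onResults/onError
    // if the service is meant to be continuous.
    @Override public void onResults(Bundle results) { }
    @Override public void onError(int error) { }
    @Override public void onReadyForSpeech(Bundle params) { }
    @Override public void onBeginningOfSpeech() { }
    @Override public void onRmsChanged(float rmsdB) { }
    @Override public void onBufferReceived(byte[] buffer) { }
    @Override public void onEndOfSpeech() { }
    @Override public void onPartialResults(Bundle partialResults) { }
    @Override public void onEvent(int eventType, Bundle params) { }
}
```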

Google recognizer and pocketsphinx in two different classes, how to loop them?

你。 submitted on 2019-12-12 01:42:20
Question: Yesterday I asked a simplified version of my problem, but I think it was too simplified. What my program should do is listen for a keyword and, when it hears it, listen to what I say next (as if you were talking to Siri or Google Now by saying "Siri" or "OK Google"). I'm using PocketSphinx for the keyword and the Google SpeechRecognizer for the longer parts. It works, but only once. The PocketSphinx part is in the MainActivity and the Google recognizer is in a separate class (Jarvis). The program
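A sketch of the hand-off that makes this a loop rather than a single pass: whichever recognizer finishes must restart the other. The class below stands in for the Jarvis side (the android.speech.SpeechRecognizer); the Runnable it is given should restart the PocketSphinx keyword search (for example sphinxRecognizer.startListening(KWS_SEARCH) back in MainActivity). All names beyond the Android APIs are assumptions.

```java
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

import java.util.ArrayList;

// Wraps the Google recognizer and hands control back to the keyword spotter
// whenever a recognition session ends (results OR error); otherwise the
// keyword -> phrase cycle only happens once.
public class Jarvis implements RecognitionListener {

    private final SpeechRecognizer googleRecognizer;
    private final Intent recognizerIntent;
    private final Runnable restartKeywordSearch;   // e.g. () -> sphinxRecognizer.startListening(KWS_SEARCH)

    public Jarvis(SpeechRecognizer googleRecognizer, Runnable restartKeywordSearch) {
        this.googleRecognizer = googleRecognizer;
        this.restartKeywordSearch = restartKeywordSearch;
        this.recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        this.recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        this.googleRecognizer.setRecognitionListener(this);
    }

    // Call this from the PocketSphinx listener once the keyword is heard and
    // the PocketSphinx recognizer has released the microphone (sphinxRecognizer.stop()).
    public void onKeywordDetected() {
        googleRecognizer.startListening(recognizerIntent);
    }

    @Override public void onResults(Bundle results) {
        ArrayList<String> texts =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (texts != null && !texts.isEmpty()) {
            // ... do something with texts.get(0) ...
        }
        restartKeywordSearch.run();   // back to keyword spotting, so the loop continues
    }

    @Override public void onError(int error) {
        restartKeywordSearch.run();   // also restart on error, or the loop stops here
    }

    // Remaining RecognitionListener callbacks, unused in this sketch.
    @Override public void onReadyForSpeech(Bundle params) { }
    @Override public void onBeginningOfSpeech() { }
    @Override public void onRmsChanged(float rmsdB) { }
    @Override public void onBufferReceived(byte[] buffer) { }
    @Override public void onEndOfSpeech() { }
    @Override public void onPartialResults(Bundle partialResults) { }
    @Override public void onEvent(int eventType, Bundle params) { }
}
```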

How to get Microsoft Azure Speech To Text to start transcribing when program is run? (Unity, C#)

微笑、不失礼 submitted on 2019-12-11 18:04:17
Question: I am trying to build a simple app using Microsoft Azure's Cognitive Services Speech To Text SDK in Unity3D. I've been following this tutorial, and it worked quite well. The only problem with the tutorial is that the speech-to-text is activated by a button: when you press the button, it transcribes for the duration of one sentence, and you have to press the button again for it to transcribe again. My problem is that I'd like it to start transcribing as soon as the program is run in Unity, rather
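The button-per-sentence behaviour is what the SDK's one-shot RecognizeOnceAsync() gives; the SDK also exposes continuous recognition (StartContinuousRecognitionAsync in the C# binding), which keeps transcribing until explicitly stopped and can be started from the program's entry point. The same idea is sketched below with the Speech SDK's Java binding rather than Unity C#; the subscription key and region are placeholders, and the console wait just stands in for "keep running".

```java
import com.microsoft.cognitiveservices.speech.ResultReason;
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.SpeechRecognizer;

// Starts continuous recognition at startup instead of per button press.
public class AlwaysOnTranscriber {
    public static void main(String[] args) throws Exception {
        SpeechConfig config = SpeechConfig.fromSubscription("<subscription-key>", "<region>");
        SpeechRecognizer recognizer = new SpeechRecognizer(config);

        // Each finished phrase arrives here; no button needed.
        recognizer.recognized.addEventListener((sender, e) -> {
            if (e.getResult().getReason() == ResultReason.RecognizedSpeech) {
                System.out.println("Transcribed: " + e.getResult().getText());
            }
        });

        // Unlike recognizeOnceAsync(), this keeps transcribing until stopped.
        recognizer.startContinuousRecognitionAsync().get();
        System.in.read();   // keep the process alive while events arrive
        recognizer.stopContinuousRecognitionAsync().get();
        recognizer.close();
    }
}
```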

How do I set credentials for google speech to text without setting environment variable?

你。 submitted on 2019-12-11 17:09:58
Question: There is a C# example, client-libraries-usage-csharp, of using the library, and there is an example of how to set an environment variable: export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json". How do I set credentials for Google Speech-to-Text without setting the environment variable? Something like this: var credentials = ...create(file.json); var speech = SpeechClient.Create(credentials); Answer 1: using Grpc.Auth; then string keyPath = "key.json"; GoogleCredential googleCredential; using
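Answer 1 is cut off above and uses the C# client via Grpc.Auth. For comparison only, the same no-environment-variable idea with the Google Cloud Java client looks roughly like the sketch below: load the service-account JSON explicitly and pass it through the client settings. The key path is a placeholder.

```java
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechSettings;

import java.io.FileInputStream;

// Builds a SpeechClient from an explicit key file instead of GOOGLE_APPLICATION_CREDENTIALS.
public class ExplicitCredentials {
    public static SpeechClient createClient(String keyPath) throws Exception {
        GoogleCredentials credentials;
        try (FileInputStream stream = new FileInputStream(keyPath)) {
            credentials = GoogleCredentials.fromStream(stream);
        }
        SpeechSettings settings = SpeechSettings.newBuilder()
                .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                .build();
        return SpeechClient.create(settings);
    }
}
```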

PocketSphinx - How to understand when getHypstr() returns empty yet getInSpeech() returns True?

萝らか妹 submitted on 2019-12-11 17:09:07
Question: I am trying edu.cmu.sphinx.pocketsphinx with processRaw to detect silence, using the following config: the en-us.lm.bin language model, the en-us-ptm acoustic model, and the cmudict-en-us.dict dictionary, with remove_noise set to True and samprate set to 8000. I want to do an Ngram search. When the while loop calling processRaw finishes, I call both hypothesis.getHypstr() and decoder.getInSpeech(). Why does getHypstr() return empty but getInSpeech() return True, while there is actually no speech in the input argument given
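One point worth checking: getInSpeech() reports the voice-activity state of the most recently processed buffer, so it can still be true after the loop if the final chunk was classified as speech (background noise can do this even with remove_noise), while getHypstr() stays empty simply because nothing matched the search; the final hypothesis is also only meaningful after endUtt(). A sketch of the loop this describes, assuming the standard edu.cmu.pocketsphinx SWIG bindings (the question's package name differs slightly); the model paths, audio file and buffer size are placeholders.

```java
import edu.cmu.pocketsphinx.Config;
import edu.cmu.pocketsphinx.Decoder;
import edu.cmu.pocketsphinx.Hypothesis;

import java.io.DataInputStream;
import java.io.FileInputStream;

// Feeds raw 8 kHz/16-bit audio through processRaw while watching the in-speech flag,
// then reads the hypothesis only after endUtt().
public class SilenceCheck {
    public static void main(String[] args) throws Exception {
        Config config = Decoder.defaultConfig();
        config.setString("-hmm", "en-us-ptm");
        config.setString("-lm", "en-us.lm.bin");
        config.setString("-dict", "cmudict-en-us.dict");
        config.setBoolean("-remove_noise", true);
        config.setFloat("-samprate", 8000);
        Decoder decoder = new Decoder(config);

        DataInputStream audio = new DataInputStream(new FileInputStream(args[0]));
        decoder.startUtt();
        short[] buffer = new short[2048];
        boolean sawSpeech = false;
        int n;
        while ((n = read(audio, buffer)) > 0) {
            decoder.processRaw(buffer, n, false, false);
            if (decoder.getInSpeech()) sawSpeech = true;   // sample the VAD state per chunk, not after the loop
        }
        decoder.endUtt();                                   // finalize before reading the hypothesis

        Hypothesis hyp = decoder.hyp();
        System.out.println("saw speech at some point: " + sawSpeech);
        System.out.println("hypothesis: " + (hyp == null ? "<none>" : hyp.getHypstr()));
    }

    // Reads little-endian 16-bit samples; returns the number of samples read.
    private static int read(DataInputStream in, short[] buffer) throws Exception {
        int i = 0;
        while (i < buffer.length) {
            int lo = in.read(), hi = in.read();
            if (lo < 0 || hi < 0) break;
            buffer[i++] = (short) ((hi << 8) | lo);
        }
        return i;
    }
}
```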

Voice Activity Detection in Android

只谈情不闲聊 submitted on 2019-12-11 16:59:00
Question: I am writing an application that will behave similarly to the existing voice recognition, but will send the sound data to a proprietary web service to perform the speech recognition part. I am using the standard MediaRecorder (which is AMR-NB encoded), which seems to be perfect for speech recognition. The only data it provides is the amplitude, via the getMaxAmplitude() method. I am trying to detect when the person starts to talk, so that when the person stops talking for about 2 seconds I
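Since getMaxAmplitude() returns the peak amplitude seen since the previous call, polling it at a fixed interval gives a rough envelope of the signal that can drive an end-of-speech timer. A sketch of that idea; the threshold, poll interval and the 2-second silence window are placeholders to tune per device, and the class and callback names are made up.

```java
import android.media.MediaRecorder;
import android.os.Handler;
import android.os.Looper;

// Amplitude-based end-of-speech detection: fires the callback once speech has been
// heard and then roughly SILENCE_MS of quiet has passed.
public class SilenceWatcher {
    private static final int POLL_MS = 100;
    private static final int SILENCE_MS = 2000;
    private static final int SPEECH_THRESHOLD = 2000;   // out of 0..32767, tune per device

    private final MediaRecorder recorder;
    private final Runnable onSpeechFinished;
    private final Handler handler = new Handler(Looper.getMainLooper());
    private boolean heardSpeech = false;
    private long lastLoudMs = 0;

    public SilenceWatcher(MediaRecorder recorder, Runnable onSpeechFinished) {
        this.recorder = recorder;
        this.onSpeechFinished = onSpeechFinished;
    }

    public void start() {
        lastLoudMs = System.currentTimeMillis();
        handler.post(poll);
    }

    public void stop() {
        handler.removeCallbacks(poll);
    }

    private final Runnable poll = new Runnable() {
        @Override public void run() {
            int amplitude = recorder.getMaxAmplitude();   // peak since the last call
            long now = System.currentTimeMillis();
            if (amplitude > SPEECH_THRESHOLD) {
                heardSpeech = true;
                lastLoudMs = now;
            }
            if (heardSpeech && now - lastLoudMs >= SILENCE_MS) {
                onSpeechFinished.run();                   // ~2 s of silence after speech
            } else {
                handler.postDelayed(this, POLL_MS);
            }
        }
    };
}
```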

Cannot call SpeechClient.recognize(RecognizeRequest request): Throwing Exception

烈酒焚心 submitted on 2019-12-11 16:54:18
Question: This is my first time posting, so I'm not too familiar with the rules, but here goes. I've been trying to get the Google Cloud Speech API to work on Android, but to no avail. The same code works just fine in plain Java, but not on Android. My code runs fine until I call the recognize method using a speech client. Here is the error: 11-02 18:38:03.922 6959-6982/capstone.speechrecognitionsimple E/AndroidRuntime: FATAL EXCEPTION: AsyncTask #1 Process: capstone.speechrecognitionsimple, PID: 6959 java
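The logcat excerpt is truncated before the actual exception, so the root cause is not visible here. For reference only, a minimal shape of the recognize call with the google-cloud-speech Java client (the usage the question says works in plain Java); the encoding, sample rate and audio bytes are placeholders.

```java
import com.google.cloud.speech.v1.RecognitionAudio;
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.RecognizeRequest;
import com.google.cloud.speech.v1.RecognizeResponse;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechRecognitionResult;
import com.google.protobuf.ByteString;

// Baseline synchronous recognize() call, for comparison with the failing Android code.
public class RecognizeSketch {
    public static String transcribe(byte[] linear16Audio) throws Exception {
        try (SpeechClient speech = SpeechClient.create()) {
            RecognitionConfig config = RecognitionConfig.newBuilder()
                    .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                    .setSampleRateHertz(16000)
                    .setLanguageCode("en-US")
                    .build();
            RecognitionAudio audio = RecognitionAudio.newBuilder()
                    .setContent(ByteString.copyFrom(linear16Audio))
                    .build();
            RecognizeRequest request = RecognizeRequest.newBuilder()
                    .setConfig(config)
                    .setAudio(audio)
                    .build();
            RecognizeResponse response = speech.recognize(request);

            StringBuilder text = new StringBuilder();
            for (SpeechRecognitionResult result : response.getResultsList()) {
                if (result.getAlternativesCount() > 0) {
                    text.append(result.getAlternatives(0).getTranscript());
                }
            }
            return text.toString();
        }
    }
}
```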

SpeechRecognizer throws ERROR_NO_MATCH on first listening when googlequicksearchbox is in background

妖精的绣舞 submitted on 2019-12-11 16:52:23
Question: The behavior is very similar to what is described here, but it only happens when the googlequicksearchbox is in the background. I'm on Google App 5.2.33.19.arm. I created the SpeechRecognizer by calling SpeechRecognizer.createSpeechRecognizer(myContext, new ComponentName("com.google.android.googlequicksearchbox", "com.google.android.voicesearch.serviceapi.GoogleRecognitionService")). I got the following error message in the Android Studio logcat: GoogleRecognitionServic﹕ #startListening [es-MX]
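In similar reports, an ERROR_NO_MATCH that arrives immediately on the first listening attempt is often worked around by simply retrying once; whether that helps with this particular Google App version is not something the excerpt settles. A fragment of that workaround, assumed to live inside a RecognitionListener that already has speechRecognizer and recognizerIntent fields (names are illustrative):

```java
// One-shot retry when the very first listening attempt fails with ERROR_NO_MATCH.
private boolean retriedAfterNoMatch = false;

@Override
public void onError(int error) {
    if (error == SpeechRecognizer.ERROR_NO_MATCH && !retriedAfterNoMatch) {
        retriedAfterNoMatch = true;
        speechRecognizer.cancel();                          // reset the session
        speechRecognizer.startListening(recognizerIntent);  // retry once
        return;
    }
    // handle other errors as usual
}

@Override
public void onResults(Bundle results) {
    retriedAfterNoMatch = false;   // allow the workaround again for the next session
    // ... use the results ...
}
```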