speech-recognition

onServiceConnected never called after bindService method

99封情书 submitted on 2019-11-26 12:43:40
Question: I have a particular situation: a service started by a broadcast receiver starts an activity, and I want this activity to be able to communicate back to the service. I have chosen AIDL to make that possible. Everything seems to work fine except for the bindService() call made in the activity's onCreate(). In fact, I end up with a NullPointerException because onServiceConnected() is never called, even though the service's onBind() method is. Nevertheless, bindService() returns true.
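For context, the usual shape of this binding is roughly as follows (a minimal sketch; IMyAidlService and MyAidlService are hypothetical names standing in for the question's AIDL interface and service):

    // Minimal sketch of binding an activity to an AIDL-backed service.
    // IMyAidlService / MyAidlService are hypothetical, illustrative names.
    import android.app.Activity;
    import android.content.ComponentName;
    import android.content.Context;
    import android.content.Intent;
    import android.content.ServiceConnection;
    import android.os.Bundle;
    import android.os.IBinder;

    public class MyActivity extends Activity {

        private IMyAidlService service;   // stays null until onServiceConnected() runs
        private boolean bound = false;

        private final ServiceConnection connection = new ServiceConnection() {
            @Override
            public void onServiceConnected(ComponentName name, IBinder binder) {
                service = IMyAidlService.Stub.asInterface(binder);
                bound = true;             // only from this point on is the proxy usable
            }

            @Override
            public void onServiceDisconnected(ComponentName name) {
                service = null;
                bound = false;
            }
        };

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // bindService() is asynchronous: a true return value only means the request
            // was accepted; onServiceConnected() is delivered later on the main thread.
            bindService(new Intent(this, MyAidlService.class), connection, Context.BIND_AUTO_CREATE);
        }

        @Override
        protected void onDestroy() {
            super.onDestroy();
            if (bound) {
                unbindService(connection);
            }
        }
    }

A common cause of the symptom described above is using the AIDL proxy right after bindService() returns: the connection callback cannot run until the current main-thread callback (onCreate() here) has finished, so the proxy is still null at that point.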

Voice recognition on android with recorded sound clip?

拜拜、爱过 submitted on 2019-11-26 12:13:33
I've used the voice recognition feature on Android and I love it. It's one of my customers' most praised features. However, the format is somewhat restrictive: you have to call the recognizer intent, have it send the recording to Google for transcription, and wait for the text to come back. Some of my ideas would require recording the audio within my app and then sending the clip to Google for transcription. Is there any way I can send an audio clip to be processed with speech-to-text? I found a solution that works well for both speech recognition and audio recording. Here is the link to a simple…
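For reference, the "recognizer intent" flow the question describes normally looks like the sketch below (the request code and prompt text are arbitrary); the point of the question is that this flow records from the microphone itself rather than accepting a pre-recorded clip.

    // Sketch of the standard RecognizerIntent round trip: launch the built-in
    // recognizer UI and read the transcription back in onActivityResult().
    // Lives inside an Activity; the request code is an arbitrary value.
    private static final int REQUEST_SPEECH = 1234;

    private void startSpeechRecognition() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
            // Candidate transcriptions, best match first.
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            // use matches.get(0) ...
        }
    }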

Google Speech Recognition timeout

佐手、 submitted on 2019-11-26 11:49:51
I am developing an Android application that is based around speech recognition. Until today everything had been working fine and in a timely manner: I would start my speech recognizer, speak, and within one or two seconds at most the application received the results. It was a VERY acceptable user experience. Today, however, I have to wait ten or more seconds before the recognition results are available. I have tried setting the following extras, none of which makes any discernible difference: RecognizerIntent.EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS, RecognizerIntent.EXTRA…
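For reference, those extras are attached to the recognition intent roughly like this (a sketch; the millisecond values are illustrative, and "speechRecognizer" is assumed to be an android.speech.SpeechRecognizer created elsewhere):

    // Sketch: passing the silence-length hints mentioned above to the recognizer.
    // The millisecond values are illustrative only.
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    intent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS, 1500L);
    intent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS, 1500L);
    intent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_MINIMUM_LENGTH_MILLIS, 5000L);
    speechRecognizer.startListening(intent);   // or startActivityForResult(intent, ...)

The platform documents these extras as hints that a recognizer implementation may ignore, which is consistent with them making no difference here.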

How to add Speech Recognition to Unity project? [closed]

我是研究僧i submitted on 2019-11-26 11:36:38
Question: I am presently working on an Augmented Reality project using Vuforia that uses speech recognition to control the objects in Unity. I was just looking for a sample of working code. Answer 1: Unity does not have this built in yet. They have been doing research on it for a long time and it will likely be added to Unity very soon. You can get a working Speech-to-Text package (free) from the Asset Store here. It is open source and you can help contribute to it if you find any problems. As a side note, almost…

Recognizing multiple keywords using PocketSphinx

谁说胖子不能爱 submitted on 2019-11-26 11:22:55
I've installed the PocketSphinx demo and it works fine under Ubuntu and Eclipse, but despite trying I can't work out how to add recognition of multiple words. All I want is for the code to recognize single words, which I can then switch() on within the code, e.g. "up", "down", "left", "right". I don't want to recognize sentences, just single words. Any help on this would be appreciated. I have spotted other users having similar problems, but nobody knows the answer so far. One thing that baffles me is why we need to use the "wakeup" constant at all: private static final String KWS…
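For what it's worth, here is a sketch of how the pocketsphinx-android setup is commonly adapted for a handful of single-word commands, using a keyword-list file registered as its own search so that no separate wake-up phrase is needed. File names, paths, and thresholds below are illustrative assumptions, not the demo's exact values.

    // Sketch, loosely following the pocketsphinx-android demo, of spotting a small
    // set of single words. File names and thresholds are illustrative only.
    //
    // assets/sync/commands.list -- one keyword per line with a detection threshold:
    //   up /1e-1/
    //   down /1e-1/
    //   left /1e-1/
    //   right /1e-1/

    private static final String COMMANDS_SEARCH = "commands";
    private SpeechRecognizer recognizer;   // edu.cmu.pocketsphinx.SpeechRecognizer

    private void setupRecognizer(File assetsDir) throws IOException {
        recognizer = SpeechRecognizerSetup.defaultSetup()
                .setAcousticModel(new File(assetsDir, "en-us-ptm"))
                .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
                .getRecognizer();
        recognizer.addListener(this);      // enclosing class implements RecognitionListener

        // Register the keyword list as its own search and listen for it directly;
        // no "wakeup" keyphrase is required if commands should be recognized at any time.
        recognizer.addKeywordSearch(COMMANDS_SEARCH, new File(assetsDir, "commands.list"));
        recognizer.startListening(COMMANDS_SEARCH);
    }

    @Override
    public void onPartialResult(Hypothesis hypothesis) {
        if (hypothesis == null) return;
        switch (hypothesis.getHypstr().trim()) {
            case "up":    /* ... */ break;
            case "down":  /* ... */ break;
            case "left":  /* ... */ break;
            case "right": /* ... */ break;
        }
        // Depending on the use case, stop and restart listening here so that each
        // spoken word is reported only once.
    }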

Is there a way to use a grammar with the HTML 5 speech input API?

∥☆過路亽.° submitted on 2019-11-26 11:22:39
Question: I'm working with the HTML 5 speech input API and I want to let the server know which answers it can expect to be returned from the speech input. Is there a way to set a list of possible inputs? Answer 1: In Google Chrome you cannot use grammars yet; overall, they decided to use free-form recognition only. A relevant question is Grammar in Google speech API. Grammars are supported in the Firefox Web Speech API, but the feature is experimental. If your browser supports HTML5 audio, you might want to try…

record/save audio from voice recognition intent

二次信任 submitted on 2019-11-26 11:20:35
Before asking this question I checked all the other Stack Overflow threads related to this issue without any success, so please don't answer with links to other threads. :) I want to save/record the audio that the Google recognition service uses for its speech-to-text operation (using RecognizerIntent or SpeechRecognizer). I tried many ideas: onBufferReceived from RecognitionListener: I know this is not working; I just tested it to see what happens, and onBufferReceived is never called (tested on a Galaxy Nexus with JB 4.3). Using a MediaRecorder: not working, it breaks speech recognition; only one…

Is there a way to use the SpeechRecognizer API directly for speech input?

一笑奈何 submitted on 2019-11-26 10:36:22
Question: The Android Dev website provides an example of doing speech input using the built-in Google Speech Input Activity. The activity displays a pre-configured pop-up with the mic and passes its results using onActivityResult(). My question: Is there a way to use the SpeechRecognizer class directly to do speech input without displaying the canned activity? This would let me build my own Activity for voice input. Answer 1: Here is the code using the SpeechRecognizer class (sourced from here and here): import…
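The answer's code is truncated above; purely as an illustration (not the answer's exact code), direct use of SpeechRecognizer without the built-in dialog generally follows this pattern, with the class and log-tag names below being arbitrary:

    // Rough sketch of driving android.speech.SpeechRecognizer directly, without the
    // built-in pop-up. Illustrative only; not the truncated answer's exact code.
    import android.content.Context;
    import android.content.Intent;
    import android.os.Bundle;
    import android.speech.RecognitionListener;
    import android.speech.RecognizerIntent;
    import android.speech.SpeechRecognizer;
    import android.util.Log;

    import java.util.ArrayList;

    public class NoUiRecognizer implements RecognitionListener {

        private final SpeechRecognizer recognizer;

        public NoUiRecognizer(Context context) {
            recognizer = SpeechRecognizer.createSpeechRecognizer(context);
            recognizer.setRecognitionListener(this);
        }

        public void start() {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
            recognizer.startListening(intent);   // no dialog; results arrive via callbacks
        }

        @Override
        public void onResults(Bundle results) {
            ArrayList<String> matches =
                    results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
            Log.d("NoUiRecognizer", "Results: " + matches);
        }

        @Override
        public void onError(int error) {
            Log.d("NoUiRecognizer", "Error: " + error);
        }

        // Remaining callbacks left empty for brevity.
        @Override public void onReadyForSpeech(Bundle params) {}
        @Override public void onBeginningOfSpeech() {}
        @Override public void onRmsChanged(float rmsdB) {}
        @Override public void onBufferReceived(byte[] buffer) {}
        @Override public void onEndOfSpeech() {}
        @Override public void onPartialResults(Bundle partialResults) {}
        @Override public void onEvent(int eventType, Bundle params) {}
    }

The RECORD_AUDIO permission is required, and the SpeechRecognizer methods must be invoked from the main thread.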

iPhone App › Add voice recognition? [closed]

限于喜欢 submitted on 2019-11-26 10:30:19
I'd like to build an app that uses voice recognition. I've seen big companies like Google implement this feature, but I'm curious about doing it at a start-up level. Has anyone looked into this? Are there any tools out there for us to do this? OpenEars looks promising... http://www.politepix.com/openears/ Based on Pocket Sphinx. JJ Rohrer: If you start here at Wikipedia, you'll get a good list of engines ( http://en.wikipedia.org/wiki/Speech_recognition#Commercial_software.2Fmiddleware ). As I write this (June 24, 2009), it looks to me that there are two viable open-source solutions: Pocket Sphinx ( http:/

Voice Recognition stops listening after a few seconds

我的未来我决定 submitted on 2019-11-26 09:25:08
Question: I have tried a lot but can't figure it out, so I hope you can help me. I am trying to build my own voice recognition app, one that doesn't show the dialog. I have already written some code and it works quite well, but my problem is that the recognizer seems to stop without any errors or other messages in LogCat. A strange fact is that onRmsChanged from the RecognitionListener interface is still called all the time, but onBeginningOfSpeech is never called anymore. If I speak just after the…
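The excerpt is cut off before any answer. Not necessarily the fix for this particular report, but the workaround commonly suggested when a recognizer silently stops listening is to start a new listening session from the terminal callbacks. A sketch, assuming recognizer and recognizerIntent fields set up as in the question's own code, and a hypothetical handleResults() helper:

    // Sketch of the commonly suggested "keep listening" pattern: when recognition
    // reaches a terminal callback (onResults/onError), start a fresh session.
    // "recognizer", "recognizerIntent", and handleResults() are assumed/hypothetical.
    @Override
    public void onResults(Bundle results) {
        handleResults(results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION));
        recognizer.startListening(recognizerIntent);    // listen for the next utterance
    }

    @Override
    public void onError(int error) {
        // ERROR_NO_MATCH and ERROR_SPEECH_TIMEOUT are the usual reasons listening
        // appears to stop without any visible message.
        recognizer.cancel();
        recognizer.startListening(recognizerIntent);
    }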