speech-recognition

SpeechRecognizer : not connected to recognition service

Submitted by 安稳与你 on 2019-12-21 17:28:04
Question: In my app, I am using SpeechRecognizer directly. I destroy the SpeechRecognizer in onPause of the Activity and recreate it in the onResume method, as below:

```java
public class NoUISpeechActivity extends Activity {
    protected static final String CLASS_TAG = "NoUISpeechActivity";
    private SpeechRecognizer sr;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_no_uispeech);
        sr = getSpeechRecognizer();
    }

    @Override
    protected void onPause
```

How to Extend Google Now Voice Commands in Android with Custom Actions

Submitted by 空扰寡人 on 2019-12-21 17:18:23
Question: I recently installed the Google Now Launcher on my Nexus 4, and it got me thinking about how I could use it to interact with my own apps. While I can open my app by asking for it by title, I was wondering if there is a way to intercept the voice commands (possibly through a broadcast receiver) so that I can say something like "turn off living room light" to send a signal to an Arduino that turns off the light in a room across the house. On the same note, I haven't been able to find the documentation

speech recognition python code not working

Submitted by 99封情书 on 2019-12-21 11:13:42
Question: I am running the following code in Python 2.7 with PyAudio installed.

```python
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:  # use the default microphone as the audio source
    audio = r.listen(source)     # listen for the first phrase and extract it into audio data
try:
    print("You said " + r.recognize(audio))  # recognize speech using Google Speech Recognition
except LookupError:              # speech is unintelligible
    print("Could not understand audio")
```

The output gives a blinking

3-state phone model in Hidden Markov Model (HMM)

Submitted by 落花浮王杯 on 2019-12-21 06:18:39
Question: I want to ask about the meaning of the 3-state phone model in HMMs. This case is based on the theory of HMMs in speech recognition systems, so the example concerns the acoustic modelling of speech sounds with an HMM. I got this example picture from a journal paper: http://www.intechopen.com/source/html/41188/media/image8_w.jpg (Figure 1: 3-state HMM for the sound /s/). So, my questions are: what is meant by "3-state"? What do S1, S2 and S3 actually represent? (I know they are states, but what do they represent?) How to
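A small sketch may make the question concrete. In a typical left-to-right phone HMM, S1, S2 and S3 model the beginning, steady middle, and end of the sound, each with a self-loop so one state can span several acoustic frames. The state names and transition probabilities below are illustrative, not taken from the paper:

```python
# A left-to-right 3-state phone HMM, pure-Python sketch.
# Each state either loops on itself (the sound lasts another frame)
# or advances to the next state; "exit" leaves the phone model.
A = {
    "S1": {"S1": 0.6, "S2": 0.4},   # entry into the sound
    "S2": {"S2": 0.7, "S3": 0.3},   # steady-state portion
    "S3": {"S3": 0.5, "exit": 0.5}, # transition out of the sound
}

# Every row of the transition matrix must sum to 1.
for state, row in A.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9

# Probability of one particular state path of length 4: S1 S1 S2 S3,
# i.e. the sound lingers one extra frame in its opening state.
path = ["S1", "S1", "S2", "S3"]
p = 1.0
for cur, nxt in zip(path, path[1:]):
    p *= A[cur][nxt]
print(p)  # 0.6 * 0.4 * 0.3 = 0.072
```

In a real recogniser each state also carries an emission distribution over acoustic feature vectors; the sketch shows only the transition structure that the "3-state" terminology refers to.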

Input for Pocketsphinx on Android

Submitted by 允我心安 on 2019-12-21 05:53:28
Question: I am making a speech-to-text demo. I have built the demo from "Building Pocketsphinx on Android" and it works well, but my problem is how to take input from an audio file rather than from real-time speech. Any ideas on how to solve this? Thanks.

Answer 1: You can use the Pocketsphinx API to process any binary data, including binary data read from a file. You only need to make sure that the data is in the required format. Once you read the binary data into a buffer of type short[], you can process it using
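The answer's point about the "required format" can be illustrated with a stdlib-only sketch: read a WAV file's raw frames into a buffer of 16-bit samples, the Python analogue of the Java short[] buffer. The 16 kHz / 16-bit / mono figures are the format Pocketsphinx acoustic models commonly expect (an assumption about your model, not stated in the answer); the in-memory file just keeps the sketch self-contained:

```python
import array
import io
import wave

# Build a tiny in-memory mono, 16 kHz, 16-bit WAV so the sketch is
# self-contained; in practice you would open a real file path instead.
pcm = array.array("h", [0, 1000, -1000, 32767, -32768])
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(16000)  # 16 kHz
    w.writeframes(pcm.tobytes())

# Read it back the way a recognizer front end would: check the format,
# then pull the raw frames into a buffer of 16-bit integers.
buf.seek(0)
with wave.open(buf, "rb") as w:
    assert w.getnchannels() == 1 and w.getsampwidth() == 2
    samples = array.array("h", w.readframes(w.getnframes()))

print(list(samples))  # [0, 1000, -1000, 32767, -32768]
```

With the buffer in hand, you would hand chunks of it to the recognizer in a loop (in the Pocketsphinx APIs this is the process_raw-style call) instead of feeding it microphone audio.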

Python Speech Recognition: 'module' object has no attribute 'microphone'

Submitted by 与世无争的帅哥 on 2019-12-21 04:50:44
Question: I am running the following code on a 64-bit MacBook Air, testing on Python 2.7 and Python 3.4.

```python
import speech_recognition as sr

r = sr.Recognizer()
with sr.microphone() as source:
    audio = r.listen(source)
try:
    print("You said " + r.recognize(audio))
except LookupError:
    print("Could not understand audio")
```

When I try Python 2.7, I keep getting the error:

```
Traceback (most recent call last):
  File "star.py", line 3, in <module>
    with sr.microphone() as source:
AttributeError: 'module' object
```
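The traceback points at attribute case: the speech_recognition package exposes the class as Microphone, with a capital M, and Python attribute lookup is case-sensitive. A stdlib-only sketch of the failure mode, where the fake namespace below merely stands in for the real module:

```python
import types

# Stand-in for the speech_recognition module: it defines Microphone
# (capital M) but nothing named "microphone".
fake_sr = types.SimpleNamespace(Microphone=type("Microphone", (), {}))

print(hasattr(fake_sr, "Microphone"))  # True
print(hasattr(fake_sr, "microphone"))  # False: lookup is case-sensitive
try:
    fake_sr.microphone()
except AttributeError:
    print("AttributeError")  # same class of error as in the traceback above
```

With the real library, changing `sr.microphone()` to `sr.Microphone()` resolves this particular AttributeError.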

swift 3 using speech recognition and AVFoundation together

Submitted by 自闭症网瘾萝莉.ら on 2019-12-21 04:17:07
Question: I can successfully use Speech (speech recognition), and I can use AVFoundation to play WAV files, in Xcode 8 / iOS 10. I just can't use them both together. I have working speech recognition code in which I import Speech. When I import AVFoundation into the same app and use the following code, there is no sound and no errors are generated:

```swift
var audioPlayer: AVAudioPlayer!

func playAudio() {
    let path = Bundle.main.path(forResource: "file.wav", ofType: nil)!
    let url = URL(fileURLWithPath: path)
```

Add iOS speech recognition support for web app?

Submitted by 谁都会走 on 2019-12-21 03:51:12
Question: Currently, the HTML5 Web Speech API works great in Google Chrome on all devices except mobile iOS: text-to-speech works, but speech-to-text is not supported (webkitSpeechRecognition is not available; see "Chrome iOS webkit speech-recognition"). I am unable to find a workaround. I would like to add iOS speech recognition support to my current web app, which uses both speech recognition and speech synthesis. Any suggestions? Thank you.

Answer 1: Try something like this: recognition = new (window

What's a good open source VoiceXML implementation?

Submitted by 天涯浪子 on 2019-12-21 03:38:09
Question: I am trying to find out whether it's possible to build a complete IVR application by cobbling together parts from open-source projects. Is anyone using a non-commercial VoiceXML implementation to build speech-enabled systems?

Answer 1: I've tried JVoiceXML in the past and had some luck with it: http://jvoicexml.sourceforge.net/ It's Java, of course, but that wasn't a problem for my situation.

Answer 2: Voiceglue (http://www.voiceglue.org/) is an implementation of VoiceXML using OpenVXI and Asterisk. It may