voice-recognition

How to enable Android Open Application voice interaction

Submitted by ℡╲_俬逩灬. on 2020-01-10 05:40:31
Question: According to the system voice command docs, you can open an application with a voice command, e.g. "OK Google, open foobar". Also according to the docs, this "Works by default; no specific intent" is required. In my sample development app, this isn't working. I've tried adding a few combinations of action and category permutations to the intent-filter, but no luck so far. I'm targeting a minimum SDK of 23, testing on a device running 6.0.1. Should this work, and if so, what are the changes to a new empty…
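For context on the "Works by default; no specific intent" line, a common reading is that "OK Google, open foobar" simply resolves the spoken name against installed apps' launcher labels and fires the matching launcher intent, so the app only needs an ordinary MAIN/LAUNCHER activity. Below is a minimal Java sketch of that resolution under this assumption; the package name com.example.foobar is hypothetical.

```java
import android.content.Context;
import android.content.Intent;

public final class LaunchCheck {
    // Mimics what the voice command is assumed to do: look up the target
    // app's launcher intent and start it. A null result means the app has
    // no MAIN/LAUNCHER activity, which would also break "open <name>".
    public static boolean openByLabel(Context context, String packageName) {
        Intent launch = context.getPackageManager()
                .getLaunchIntentForPackage(packageName); // e.g. "com.example.foobar"
        if (launch == null) {
            return false;
        }
        context.startActivity(launch);
        return true;
    }
}
```

If this check fails for your app, the spoken "open" command has nothing to resolve to, regardless of any extra intent-filter entries.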

How do speech recognition algorithms recognize homophones?

Submitted by 大兔子大兔子 on 2020-01-05 15:20:38
Question: I was pondering this question earlier. What clues do modern algorithms (specifically those that convert voice to text) use to determine which homophone was said (e.g. to, too, or two)? Do they use contextual clues? Sentence structure? Perhaps there are slight differences in the way each word is usually pronounced (for example, I usually hold the "o" sound longer in "two" than in "to"). A combination of the first two seems most plausible.

Answer 1: Do they use contextual clues? Yes, ASR systems use…
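To make the "contextual clues" idea concrete, here is a toy Java sketch that picks among to/too/two using a hard-coded bigram table. The probabilities are invented purely for illustration; real ASR decoders combine acoustic scores with much larger statistical or neural language models in the same spirit.

```java
import java.util.Map;

public final class HomophoneDemo {
    // Invented bigram probabilities P(word | previous word). A real system
    // would estimate these from huge text corpora rather than hard-code them.
    private static final Map<String, Map<String, Double>> BIGRAMS = Map.of(
            "going",  Map.of("to", 0.80, "too", 0.05, "two", 0.01),
            "number", Map.of("to", 0.02, "too", 0.01, "two", 0.70));

    static String pick(String previous, String... homophones) {
        String best = homophones[0];
        double bestScore = -1.0;
        for (String candidate : homophones) {
            double p = BIGRAMS.getOrDefault(previous, Map.of())
                              .getOrDefault(candidate, 0.0);
            if (p > bestScore) {
                bestScore = p;
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(pick("going", "to", "too", "two"));  // -> to
        System.out.println(pick("number", "to", "too", "two")); // -> two
    }
}
```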

How to display Voice Recognition Settings screen programmatically

Submitted by ⅰ亾dé卋堺 on 2020-01-05 07:16:12
Question: From within an Android app, how do you show the system Voice Recognition Settings screen? Note: there is a similar question here, but it is out of date.

Answer 1: Pre-Jelly Bean, the way to do this is with the intent:

    Intent intent = new Intent(Intent.ACTION_MAIN);
    intent.setClassName("com.google.android.voicesearch",
            "com.google.android.voicesearch.VoiceSearchPreferences");
    startActivity(intent);

HOWEVER: I have not had a chance to test this on Honeycomb (API 11-13) - does anyone know? On Jelly Bean, you need to change the package…
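The answer is cut off right where it names the Jelly Bean change. A hedged Java sketch of the version switch follows; the Jelly Bean component (the Google Search app package com.google.android.googlequicksearchbox hosting the same VoiceSearchPreferences class) is an assumption based on where Voice Search moved, and since these component names are not a public API, the call is guarded.

```java
import android.app.Activity;
import android.content.ActivityNotFoundException;
import android.content.Intent;
import android.os.Build;

public final class VoiceSettingsLauncher {
    // Opens the Voice Search preferences screen. The pre-Jelly Bean component
    // comes from the answer above; the Jelly Bean package is an assumption.
    public static void open(Activity activity) {
        Intent intent = new Intent(Intent.ACTION_MAIN);
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN) {
            intent.setClassName("com.google.android.googlequicksearchbox", // assumed
                    "com.google.android.voicesearch.VoiceSearchPreferences");
        } else {
            intent.setClassName("com.google.android.voicesearch",
                    "com.google.android.voicesearch.VoiceSearchPreferences");
        }
        try {
            activity.startActivity(intent);
        } catch (ActivityNotFoundException e) {
            // Component names vary across releases; fail quietly here.
        }
    }
}
```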

How can I listen for a keyword in a web browser with JavaScript? [closed]

Submitted by 女生的网名这么多〃 on 2020-01-05 04:23:13
Question: How can I listen for a keyword from the web browser and, when it is caught, do something with JavaScript, like the Google search app for Android does with its "OK Google"?

Answer 1: You can do it with pocketsphinx.js: https://github.com/syl22-00/pocketsphinx.js See keyword…

Calling android.speech.RecognizerIntent API results in Connection Error dialog, shows 'calling_package' warning in log

Submitted by 拈花ヽ惹草 on 2020-01-04 04:22:06
Question: I wrote a small app that lets the user choose which language Voice Search uses via a button, rather than relying on the user's language preference (sometimes you want to voice-search in Japanese without switching your whole UI to Japanese). I am testing the app on my HTC Desire / Android 2.1 (Softbank-x06ht). However, when I call the voice API, I get a "Connection Failed" dialog box [retry/cancel], and LogCat shows this warning: 09-12 11:26:13.583: INFO/RecognitionService(545): ssfe…
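A minimal Java sketch of the kind of call the question describes follows. Setting EXTRA_CALLING_PACKAGE is an assumption prompted by the 'calling_package' warning in the log, and the Japanese locale tag is only an example; neither is confirmed as the actual fix by the post above.

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;

public final class VoiceSearchLauncher {
    private static final int REQUEST_VOICE = 1;

    // Starts recognition in an explicit language instead of the UI locale.
    // EXTRA_CALLING_PACKAGE is supplied because the logged warning mentions
    // a missing 'calling_package' (an assumption, see lead-in).
    public static void startJapaneseVoiceSearch(Activity activity) {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "ja-JP");
        intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
                activity.getPackageName());
        activity.startActivityForResult(intent, REQUEST_VOICE);
    }
}
```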

What's the icon needed for Cortana's Help list?

Submitted by 六眼飞鱼酱① on 2020-01-03 15:59:14
Question: I'm adding voice command recognition with Cortana to my app. My VCD file is all set and everything is working as expected, so now I have to look at the little things. I have all the needed icons (that I know of) in my app, but when my app appears on the Cortana screen (the "What can I say?" screen), it shows a default icon, not one added by me. So my question is: what's the missing icon I'm not seeing? P.S.: the official Remote Desktop app also shows the same icon, so I guess I…

Using Android Voice Control to launch my Activity

Submitted by 天涯浪子 on 2020-01-03 06:41:14
Question: I have read this post on SO and tried the code to launch my own speech recognition activity. It worked! So my question is: how can I customize the action of the built-in (hardware) voice command button to launch the activity I have written instead of the built-in voice search? I have thoroughly searched the net, including this website, but could not find a solution. I know that someone on SO has the answer!

Answer 1: You must use SpeechRecognizer instead of RecognizerIntent. It's a…
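The answer is truncated after naming SpeechRecognizer, so here is a minimal Java sketch of that approach, under the assumption that the goal is in-app recognition without the system voice search UI. It requires the RECORD_AUDIO permission in the manifest; the class and callback wiring are illustrative, not the answerer's actual code.

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

public final class InAppRecognizer {
    private final SpeechRecognizer recognizer;

    public InAppRecognizer(Context context) {
        // Unlike firing a RecognizerIntent, SpeechRecognizer delivers results
        // to a listener inside your own activity, with no system dialog.
        recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> texts = results
                        .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                // Handle the recognized phrases here.
            }
            // Remaining callbacks left empty for brevity.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onError(int error) {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    public void start() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }
}
```

Note that this covers the in-app recognition half of the question; remapping the hardware voice button itself is a separate matter the truncated answer may have gone on to address.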

Angular2: Web Speech API - Voice recognition

Submitted by 怎甘沉沦 on 2020-01-02 00:56:08
Question: After reading the documentation for webkitSpeechRecognition (voice recognition in JavaScript), I tried to implement it in Angular 2. But when I did this:

    const recognition = new webkitSpeechRecognition();

TypeScript reported this error:

    [ts] Cannot find name 'webkitSpeechRecognition'. any

And if I try to extract webkitSpeechRecognition from window:

    if ('webkitSpeechRecognition' in window) {
        console.log("Enters inside the condition"); // => It's printing
        const { webkitSpeechRecognition } = window;
    …
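The excerpt is cut off before any answer, but the usual TypeScript workaround is to declare the browser-provided global (or cast window to any) so the compiler accepts it. This is a hedged sketch: the hand-rolled declaration below is a minimal assumption, not the official typings.

```typescript
// webkitSpeechRecognition exists at runtime in Chrome but has no ambient
// typing, so declare a minimal constructor shape for the compiler.
declare var webkitSpeechRecognition: {
  prototype: any;
  new (): any;
};

if ('webkitSpeechRecognition' in window) {
  const recognition = new webkitSpeechRecognition();
  recognition.lang = 'en-US';
  recognition.onresult = (event: any) => {
    console.log(event.results[0][0].transcript);
  };
  recognition.start();
}
```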