speech

Handle onActivityResult on a Service

不问归期 submitted on 2020-01-01 04:36:29
Question: A simple question: is it possible to handle the onActivityResult() method in a Service, if the Activity was started from that same Service (using an Intent)? In my case I want to start SpeechRecognition, speak, and get the result in the Service, all in the background (the main Service is started from a widget). Thanks. Answer 1: Thanks for the recent downvote, whoever it was. The previous answer I gave back in 2012 is total nonsense, so I decided to write a proper one. You cannot handle…

Use x-webkit-speech in an HTML/JavaScript extension

牧云@^-^@ submitted on 2020-01-01 04:26:10
Question: I am trying to use the new x-webkit-speech attribute in a simple HTML/JavaScript extension for Google Chrome. However, I have looked at a bunch of examples and still cannot get it to call the function successfully. I have seen other people do it, and I don't really understand why I can't. I put the JavaScript code into a separate file, which I include with <script src="filename.js">. This is my line for x-webkit-speech: <input id="speechInput" type="text" style="font-size:25px;" x…

VBA - Save SAPI speech to a GIVEN file type?

拥有回忆 submitted on 2019-12-31 03:44:26
Question: My task: speech can be used in Office applications, and my goal is to save MS SAPI speech to a given file type. AFAIK my code example saves to a WAV file. Problem: I don't know whether it is enough to specify the wanted file extension, or whether some further setting is necessary. I didn't find an appropriate solution using VBA. Question: Is there a code example showing how to precisely define a wanted file type, e.g. MP3, and save a given text to that file type with the necessary settings…

Why does Application.Speech.Speak read some numbers individually rather than put them together?

时光怂恿深爱的人放手 submitted on 2019-12-30 10:08:10
Question: Let's suppose it is now 11:11. This reads "ONE ONE" hours and "eleven" minutes: Sub TEST1() Application.Speech.Speak "It is " & Hour(Now()) & " hours and " & Minute(Now()) & " minutes" End Sub However, the following reads "eleven" hours and "eleven" minutes: Sub TEST2() Application.Speech.Speak "It is 11 hours and 11 minutes" End Sub Yet this again reads "ONE ONE" hours and "eleven" minutes: Sub TEST3() Application.Speech.Speak "It is " & "11" & " hours and " & "11" & "…

SpeechRecognizer throws onError on the first listening

青春壹個敷衍的年華 submitted on 2019-12-29 07:54:14
Question: On Android 5 I ran into a strange problem. The first call to startListening of SpeechRecognizer results in onError with error code 7 (ERROR_NO_MATCH). I made a test app with the following code: if (speechRecognizer == null) { speechRecognizer = SpeechRecognizer.createSpeechRecognizer(this); speechRecognizer.setRecognitionListener(new RecognitionListener() { @Override public void onReadyForSpeech(Bundle bundle) { Log.d(TAG, "onReadyForSpeech"); } @Override public void…

Change the language of Speech Recognition Engine library

若如初见. submitted on 2019-12-29 07:53:08
Question: I am working on a program (in C#) to recognize voice commands from the user and execute them on the PC, i.e. the user says "start menu" and the PC opens the Start menu. I have found a nice library for speech recognition, SpeechRecognitionEngine; the problem is that I need to recognize Spanish as well. Is there any way to change the language? Answer 1: You can use the SpeechRecognitionEngine(CultureInfo) overload: var speechRec = new SpeechRecognitionEngine(new CultureInfo("es-ES")); This…

Speechlib on Shared hosting - ASP.NET

自作多情 submitted on 2019-12-24 07:52:25
Question: I am trying to use SpeechLib on my personal website. It's a very simple app that saves some text to a WAV file, standard stuff. It works great on the dev machine, but all hell breaks loose when I deploy it to the shared host. Sometimes I get prompted for a user name and password when the WAV file is being written; sometimes I get a "Security exception". The site has full trust, and I can write a simple TXT file from my app without any issues. Scouring the internet, I realized that the…

iOS Divide audio from URL into frames

爱⌒轻易说出口 submitted on 2019-12-24 05:44:07
Question: I am working on a simple internet-radio app for iOS with a very simple speech/music discriminator. The main idea is a radio that plays a signal from a URL and at the same time checks what kind of signal is being broadcast; when it detects speech, it changes the channel, and so on. I wrote a simple iOS app using storyboards and AVFoundation for the player. My problem is implementing the speech detection: I wrote Matlab code for the algorithm, but I'm not sure how to do it in Xcode.
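
A discriminator like this typically starts by splitting the audio into short overlapping frames before computing per-frame features. The asker targets iOS, but the framing step is language-neutral; here is a minimal sketch in Python (function and parameter names are illustrative, not from the question):

```python
def frame_signal(samples, frame_len, hop):
    """Split a sample sequence into overlapping frames.

    Mirrors the windowing step a speech/music discriminator
    runs before computing per-frame features; only frames
    that fit entirely inside the signal are returned.
    """
    frames = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frames.append(samples[start:start + frame_len])
    return frames

# 8 samples, frame length 4, hop 2 -> 3 overlapping frames
print(frame_signal(list(range(8)), 4, 2))
# [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7]]
```

The same loop translates directly to Swift or Objective-C once the PCM samples have been pulled out of the AVFoundation buffers.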

Parsing LIUM Speaker Diarization Output

空扰寡人 submitted on 2019-12-23 17:15:22
Question: How can I find out which speaker spoke for how long using the LIUM Speaker Diarization toolkit? For example, this is my .seg file: ;; cluster S0 [ score:FS = -33.93166562542459 ] [ score:FT = -34.24966646974656 ] [ score:MS = -34.05223781565528 ] [ score:MT = -34.32834794609819 ] Seq06 1 0 237 F S U S0 Seq06 1 2960 278 F S U S0 ;; cluster S1 [ score:FS = -33.33289449700619 ] [ score:FT = -33.64489165914674 ] [ score:MS = -32.71833169822944 ] [ score:MT = -33.380835069917275 ] Seq06 1 238 594…
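
A minimal parsing sketch, assuming the usual LIUM .seg line layout (show, channel, start, length, gender, band, environment, speaker, with start/length counted in features at the default 100 features per second, i.e. 10 ms units); the function name is illustrative:

```python
from collections import defaultdict

def speaker_times(seg_text):
    """Sum per-speaker speaking time, in seconds, from LIUM .seg output."""
    totals = defaultdict(int)
    for line in seg_text.splitlines():
        line = line.strip()
        if not line or line.startswith(";;"):  # skip cluster headers/comments
            continue
        fields = line.split()
        length, speaker = int(fields[3]), fields[7]
        totals[speaker] += length
    return {spk: centi / 100.0 for spk, centi in totals.items()}

seg = """\
;; cluster S0
Seq06 1 0 237 F S U S0
Seq06 1 2960 278 F S U S0
"""
print(speaker_times(seg))  # {'S0': 5.15}
```

Applied to the excerpt above, speaker S0 has segments of 237 and 278 units, i.e. (237 + 278) / 100 = 5.15 seconds of speech.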