voice-recognition

How to open Activity based on the voice command

风格不统一 · Submitted on 2019-12-23 20:06:28
Question: I am building a dashboard application with a lot of screens. When the user speaks a voice command, I need to open the corresponding activity. I don't know where to start. I have already completed all the screens and would like to implement voice search. My app's screens are Advances, Leaves, Recruitment, Permissions, Notifications, etc. Example: when the user says 'Advances', it should open the Advances screen. Please help me. Answer 1: 1) Start a voice recognition intent 2) Handle the returned
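On Android, step 1 means launching RecognizerIntent.ACTION_RECOGNIZE_SPEECH and step 2 means reading the candidate phrases from RecognizerIntent.EXTRA_RESULTS in onActivityResult and starting the matching activity. The matching step itself is plain string handling; here is a minimal, platform-agnostic sketch in Python (the function name and structure are illustrative, not from any Android API):

```python
# Hedged sketch: map a recognizer's candidate phrases to one of the
# screens named in the question. Recognizers usually return several
# candidate transcriptions, so we scan all of them.

SCREENS = ["Advances", "Leaves", "Recruitment", "Permissions", "Notifications"]

def screen_for_phrases(phrases):
    """Return the first screen whose name occurs in any recognized phrase."""
    for phrase in phrases:
        for screen in SCREENS:
            if screen.lower() in phrase.lower():
                return screen
    return None  # no screen matched; fall back to a prompt or retry

print(screen_for_phrases(["open advances please"]))  # Advances
```

In the Android version you would then call startActivity with the Intent for the matched screen, e.g. keeping a map from screen name to Activity class.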

Android - Unlocking phone with voice [closed]

≯℡__Kan透↙ · Submitted on 2019-12-23 17:56:19
Question: It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center. Closed 8 years ago. Is it possible to unlock the phone with a voice command, even when the device is in PowerManager.WakeLock mode? I thought of using a Service as a background process, but can the phone react when locked? Any ideas? P.S. Just to

Embed language pack with application

拜拜、爱过 · Submitted on 2019-12-23 17:33:44
Question: I'm making an application which uses offline voice recognition (SpeechRecognizer from the Google API). It works perfectly, but I need to download the language pack before using the application. So here is my question: is there any way to embed the language pack and install it directly from my application? Or can the language pack be downloaded together with the application when it is downloaded from the Play Store? Thank you :) Source: https://stackoverflow.com/questions/32950791/embed-language-pack-with

Parsing LIUM Speaker Diarization Output

空扰寡人 · Submitted on 2019-12-23 17:15:22
Question: How can I find out how long each speaker spoke using the LIUM Speaker Diarization toolkit? For example, this is my .seg file:

;; cluster S0 [ score:FS = -33.93166562542459 ] [ score:FT = -34.24966646974656 ] [ score:MS = -34.05223781565528 ] [ score:MT = -34.32834794609819 ]
Seq06 1 0 237 F S U S0
Seq06 1 2960 278 F S U S0
;; cluster S1 [ score:FS = -33.33289449700619 ] [ score:FT = -33.64489165914674 ] [ score:MS = -32.71833169822944 ] [ score:MT = -33.380835069917275 ]
Seq06 1 238 594

How can I convert a series of words into camel case in AppleScript?

╄→尐↘猪︶ㄣ · Submitted on 2019-12-23 01:01:07
Question: I'm trying to modify Dragon Dictate, which can execute an AppleScript with a series of words that have been spoken. I need to find out how I can take a string that contains these words and convert it to camel case.

on srhandler(vars)
    set dictatedText to varDiddly of vars
    say dictatedText
end srhandler

So if I set up a macro called camel to execute the above script, and I say "camel string with string", dictatedText would be set to "string with string". It's a cool feature of DD. However I don't
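The conversion itself is just string handling: lowercase the first word, capitalize each following word, and join with no separators. A sketch in Python of the logic (in AppleScript you would do the equivalent by splitting on spaces with text item delimiters):

```python
def to_camel_case(words: str) -> str:
    """Convert a space-separated phrase to camelCase."""
    parts = words.split()
    if not parts:
        return ""
    # First word lowercased, remaining words capitalized, no separators.
    return parts[0].lower() + "".join(w.capitalize() for w in parts[1:])

print(to_camel_case("string with string"))  # stringWithString
```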

Recognize Speech To Text Swift

穿精又带淫゛_ · Submitted on 2019-12-22 17:16:49
Question: Is it possible to recognize speech and then convert it into text with a custom keyboard, like the default Messages app on the iPhone? Screenshots: 1. Default speech recognition in the iPhone keyboard. 2. Speech to text. Any help would be greatly appreciated. Thanks in advance. Answer 1: I have the following code, which is used in my sample application to convert speech to text.

import UIKit
import Speech
import AVKit

class ViewController: UIViewController {
    //----------------------------------------------------------

Android catch Bluetooth HFP's Activate Voice Recognition

冷暖自知 · Submitted on 2019-12-22 11:16:02
Question: When a Bluetooth hands-free device is connected to a mobile phone and the device sends the AT command AT+BVRA to enable voice recognition, the phone launches its default voice recognition app if it supports one. My Android phone (OS: 4.1.2, Model: Samsung Galaxy Core I8262) launches the S Voice app for recognition. I think that if my phone had more recognition activities, it might show a list to select one if no default were set, but I have never observed this case. My question: is there any way to catch the AT+BVRA command

Managing text-to-speech and speech recognition at same time in iOS

自古美人都是妖i · Submitted on 2019-12-21 23:02:24
Question: I'd like my iOS app to use text-to-speech to read the user some information it receives from a server, and I'd also like to allow the user to stop such speech with a voice command. I have tried speech recognition frameworks for iOS, such as OpenEars, and I run into the problem that the recognizer also hears the information the app itself is "saying", which interferes with the recognition of the user's voice commands. Has anybody dealt with this scenario in iOS and found a solution for it?

How to perform DTW on an array of MFCC coefficients?

谁说我不能喝 · Submitted on 2019-12-21 22:47:11
Question: Currently I'm working on a speech recognition project in MATLAB. I've taken two voice signals and extracted the MFCC coefficients of each. As far as I know, I should now calculate the Euclidean distance between the two and then apply the DTW algorithm. I calculated the distance between the two and got an array of distances. So my question is: how do I implement DTW on the resulting array? Here's my MATLAB code:

clear all; close all; clc;
% Define variables
Tw = 25; % analysis
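DTW does not run on a single distance value per frame pair picked in advance; it uses the full local-cost matrix d(i, j) between every frame of one sequence and every frame of the other, then accumulates costs with a dynamic program. A minimal sketch of that recursion in Python (the same double loop translates directly to MATLAB; newer MATLAB releases also ship a built-in dtw function in the Signal Processing Toolbox):

```python
# Hedged sketch: classic DTW between two sequences of feature vectors
# (e.g. MFCC frames), with Euclidean distance as the local cost.
import math

def dtw_distance(seq_a, seq_b):
    """Return the accumulated DTW cost between two vector sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    # Accumulated-cost matrix with a padded first row and column.
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])  # local distance
            # Best of insertion, deletion, and match predecessors.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw_distance([[0.0], [1.0], [2.0]], [[0.0], [1.0], [2.0]]))  # 0.0
```

Identical sequences give cost 0; the lower the accumulated cost, the closer the two utterances, which is what a template-matching recognizer thresholds on.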

Apple Dictation - Use in app

馋奶兔 · Submitted on 2019-12-21 19:49:09
Question: Is there any way to utilize Apple's dictation (voice-to-text) abilities in a native Apple application? Answer 1: Your question is a little vague; it would be good to know what you have tried first, or even what you are trying to achieve. More commonly found are keyword recognition APIs, but a speech recognition API that can be used for this is OpenEars. Along with that is CeedVocal. The first (OpenEars) is free, but apparently CeedVocal gives better results. EDIT If you want a speech