WatchKit Option to ONLY Dictate?


You are correct. Dev evangelists in the forums have noted that the simulator won't show anything for dictation, owing to its lack of support.

Make sure you're using WKTextInputModePlain with a nil suggestions array and you'll be fine.
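For reference, a minimal dictation-only call looks like this (Swift 2 / watchOS 2 syntax to match the snippets below; the method and variable names are placeholders):

//DICTATION ONLY - nil suggestions, plain input mode
    func startDictation(){
        presentTextInputControllerWithSuggestions(nil,
            allowedInputMode: WKTextInputMode.Plain,
            completion: { results -> Void in
                //results is nil if the user cancels
                if let dictatedText = results?.first as? String {
                    print("Dictated: "+dictatedText)
                }
        })
    }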

As of watchOS 2.1 and iOS 9, I have been able to do what you propose in two different ways; give me a green tick and an upvote if it works for you:

OPTION 1 - RECORD A WAV FILE AND UPLOAD IT TO AN ASR SERVER

I recorded a WAV file and saved it on the Apple Watch, then uploaded it to a paid speech recognition provider, and everything worked fine. Here is the recording code; replace the UI-updating lines (and the debug ones) with your own:

//RECORD AUDIO SAMPLE
    var saveUrl: NSURL? //this var is initialized in the awakeWithContext method
    func recordAudio(){
        //limit the recording to 5 seconds
        let duration = NSTimeInterval(5)
        let recordOptions = [WKAudioRecorderControllerOptionsMaximumDurationKey : duration]
        //CONSTRUCT AUDIO FILE URL inside the shared app group container
        let fileManager = NSFileManager.defaultManager()
        let container = fileManager.containerURLForSecurityApplicationGroupIdentifier("group.artivoice.applewatch")
        let fileName = "audio.wav"
        saveUrl = container?.URLByAppendingPathComponent(fileName)
        //present the built-in recorder UI and write the recording to saveUrl
        presentAudioRecorderControllerWithOutputURL(saveUrl!,
            preset: .WideBandSpeech,
            options: recordOptions,
            completion: { saved, error in
                if let err = error {
                    print(err.description)
                    self.sendMessageToPhone("Recording error: "+err.description)
                }
                if saved {
                    self.btnPlay.setEnabled(true)
                    self.sendMessageToPhone("Audio was saved successfully.")
                    print("Audio Saved")
                    self.uploadAudioSample()
                }
        })
    }
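The uploadAudioSample() call above is the part that talks to the speech recognition provider, and its details depend entirely on the service you pick. Here is a minimal sketch, assuming a hypothetical endpoint that accepts a raw WAV body over HTTP POST and answers with a plain-text transcript (the URL, header and response handling are placeholders, not any real provider's API):

//UPLOAD THE RECORDED WAV TO THE ASR SERVER (sketch - adapt to your provider's API)
    func uploadAudioSample(){
        guard let fileUrl = saveUrl, endpoint = NSURL(string: "https://asr.example.com/recognize") else { return } //hypothetical endpoint
        let request = NSMutableURLRequest(URL: endpoint)
        request.HTTPMethod = "POST"
        request.setValue("audio/wav", forHTTPHeaderField: "Content-Type")
        //upload the file in the background; the completion handler runs on a background queue
        let task = NSURLSession.sharedSession().uploadTaskWithRequest(request, fromFile: fileUrl) { data, response, error in
            if let err = error {
                print("Upload error: "+err.description)
                return
            }
            if let data = data, transcript = NSString(data: data, encoding: NSUTF8StringEncoding) {
                self.sendMessageToPhone("Server ASR says: "+(transcript as String))
            }
        }
        task.resume()
    }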

OPTION 2 - USE THE APPLE WATCH'S NATIVE SPEECH RECOGNITION

In this approach I use the original, native voice menu, but without adding any button options, so it is pure ASR. I launch the empty voice menu and then recover the string returned by the ASR. Here's the code, enjoy:

func launchIWatchVoiceRecognition(){
        //empty suggestions array: no button options are shown, just pure ASR; add options if it suits you
        self.presentTextInputControllerWithSuggestions([], allowedInputMode: WKTextInputMode.Plain, completion:{ (results) -> Void in
            //results is nil (or empty) if the user cancels, so bind the first element safely
            if let aResult = results?.first as? String {
                print(aResult) //print result
                self.sendMessageToPhone("Native ASR says:  "+aResult)
                dispatch_async(dispatch_get_main_queue()) {
                    self.txtWatch.setText(aResult) //show result on UI
                }
            }//end if
        })//end show voice menu
    }
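Both snippets also call a sendMessageToPhone(_:) helper, which is not a WatchKit API; it is just a small convenience around WatchConnectivity. A minimal sketch of what it could look like, assuming the WCSession was already activated (for example in awakeWithContext):

import WatchConnectivity

    //SEND A STRING TO THE PAIRED iPHONE (sketch - assumes the session is already activated)
    func sendMessageToPhone(text: String){
        let session = WCSession.defaultSession()
        guard session.reachable else { return } //the iPhone app is not reachable right now
        let payload: [String: AnyObject] = ["text": text]
        session.sendMessage(payload, replyHandler: nil, errorHandler: { error in
            print("sendMessage error: "+error.description)
        })
    }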

OPTION 2 is lightning fast, but OPTION 1 can be handier if you want to run some advanced speech recognition functions (custom vocabularies, grammars...). I would recommend OPTION 2 for most users. Voila!! If you need extra hints let me know!
