text-to-speech

Swift iOS Text To Speech not working with “delay” in loop

Submitted by 你说的曾经没有我的故事 on 2019-12-04 03:58:07
Question: I'm trying to have the iOS text-to-speech synthesizer "say" a list of phrases with a variable delay between the phrases. For example, I may want it to say "Hello", then wait 5 seconds, then "Is anyone there?", then wait 10 seconds, then say "Hello?", etc. I've made a simple example below that illustrates what I am trying to do. I know that while the speech synthesizer is speaking, additional utterances are added to a queue and spoken in the order they are received. I've tried many ways to achieve
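On iOS the idiomatic fix is usually `AVSpeechUtterance.postUtteranceDelay`, which pauses after an utterance before the next queued one starts. As a platform-neutral illustration of the sequencing logic itself, here is a minimal Java sketch in which `speak()` is a stub standing in for the synthesizer (the phrase list and delays are assumptions taken from the question, shortened for the demo):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "say a phrase, pause a variable amount, say the next phrase".
// speak() is a stub so the sequencing logic itself is runnable here.
public class PhraseSequencer {
    static List<String> spoken = new ArrayList<>();

    static void speak(String text) {
        spoken.add(text);                      // stand-in for the synthesizer
        System.out.println("Speaking: " + text);
    }

    static void runScript(String[] phrases, long[] pausesMs) throws InterruptedException {
        for (int i = 0; i < phrases.length; i++) {
            speak(phrases[i]);
            Thread.sleep(pausesMs[i]);         // variable delay after each phrase
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Delays shortened from the question's 5 s / 10 s for the demo.
        runScript(new String[] {"Hello", "Is anyone there?", "Hello?"},
                  new long[] {500, 1000, 0});
    }
}
```

With `AVSpeechSynthesizer`, the same effect falls out of the engine's own queue: set `postUtteranceDelay` on each `AVSpeechUtterance` and enqueue them all at once, instead of sleeping between `speak` calls.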

How to wait for TextToSpeech initialization on Android

Submitted by 十年热恋 on 2019-12-04 03:56:26
Question: I am writing an activity that speaks to the user, and I'd really like to block on TextToSpeech initialization, or else time out. How can I get my thread to wait? I tried: while (!mIsTtsReady || i > limit) try { Thread.sleep(100); i++; ... }; along with: @Override public void onInit() { mIsTtsReady = true; } // TextToSpeech.OnInitListener But onInit() never runs. It seems that onInit executes within my thread (via a message to my activity's Looper?), which is in a tight sleep() loop. It seems
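The deadlock described above happens because `onInit()` is delivered on the same thread that is spinning in `sleep()`. A minimal sketch of one fix: do the blocking wait on a background thread with a `CountDownLatch`, and let the callback release it. The Android `TextToSpeech` engine is replaced by a hypothetical stub thread here (standing in for the main Looper) so the waiting pattern itself is runnable:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TtsInitWait {
    // Released exactly once when the engine reports it is ready.
    private static final CountDownLatch initLatch = new CountDownLatch(1);

    // Stand-in for TextToSpeech.OnInitListener.onInit(int status).
    static void onInit() {
        initLatch.countDown();
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate the engine invoking onInit() from another thread (the main
        // Looper in a real app) after some start-up delay.
        new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            onInit();
        }).start();

        // Block here (on a worker thread in a real app, never on the Looper
        // thread itself) until init completes, or give up after the timeout.
        boolean ready = initLatch.await(5, TimeUnit.SECONDS);
        System.out.println(ready ? "TTS ready" : "TTS init timed out");
    }
}
```

The key point is that the waiting thread and the thread delivering `onInit` must be different; a latch (or restructuring the code so speech starts from inside `onInit`) avoids the tight polling loop entirely.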

Set Turkish language for text to speech [duplicate]

Submitted by 强颜欢笑 on 2019-12-04 03:55:39
Question: This question already has answers here: Using text to speech APIs in android application (3 answers). Closed 3 years ago. I'm working on a text-to-speech app, and I want to set the Turkish language like this: tts.setLanguage(Locale.TR); But this is not available in Android. Is it wrong to set it this way, or is there a different way to add Turkish to text to speech? Any help and advice will be appreciated. Text to speech code: public class AndroidTextToSpeechActivity extends Activity
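The compile error here is most likely because `java.util.Locale` has no `TR` constant; only a handful of locales (such as `Locale.US`) get predefined constants. A short sketch of building the Turkish locale from its ISO codes instead (the availability check at the end is the usual Android pattern, shown in comments since it needs a live engine):

```java
import java.util.Locale;

public class TurkishLocaleDemo {
    public static void main(String[] args) {
        // Locale.TR does not exist; construct the locale from ISO codes.
        Locale turkish = new Locale("tr", "TR");
        System.out.println(turkish.getLanguage()); // tr
        System.out.println(turkish.getCountry());  // TR

        // On Android, check availability before relying on it, e.g.:
        // int result = tts.setLanguage(turkish);
        // if (result == TextToSpeech.LANG_MISSING_DATA
        //         || result == TextToSpeech.LANG_NOT_SUPPORTED) {
        //     // fall back or prompt the user to install voice data
        // }
    }
}
```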

Constant Memory Leak in SpeechSynthesizer

Submitted by 五迷三道 on 2019-12-04 03:16:13
I have developed a project which I would like to release that uses C#, WPF and the System.Speech.Synthesis SpeechSynthesizer object. The issue preventing the release of this project is that whenever SpeakAsync is called, it leaves a memory leak that grows to the point of eventual failure. I believe I have cleaned up properly after using this object, but cannot find a cure. I have run the program through ANTS Memory Profiler, and it reports that WAVEHDR and WaveHeader are growing with each call. I have created a sample project to try to pinpoint the cause, but am still at a loss. Any help would be appreciated.

TTS-UtteranceProgressListener not being called

Submitted by 女生的网名这么多〃 on 2019-12-03 23:15:37
Question: I don't want to put all my code here, so I'm just including the relevant pieces; if you need more, feel free to ask. I'm using Text To Speech (TTS), which leads to a speech listener after it asks a question... I found through Log outputs that TTS's onInit is being called, but the UtteranceProgressListener is not, and I can't figure out why. Any help is appreciated. // ---Initialize TTS variables--- // Implement Text to speech feature tts = new TextToSpeech(this, new ttsInitListener()); // set
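A common cause here (an assumption, since the full code isn't shown) is calling `speak()` without an utterance ID: `UtteranceProgressListener` callbacks are only delivered for utterances that were given a non-null ID. The Android classes are mocked below so the contract is runnable; on a real device the equivalent is `tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "my-utterance-id")` on API 21+, or putting `KEY_PARAM_UTTERANCE_ID` in the params on older APIs.

```java
// Hypothetical stand-ins for android.speech.tts classes, to illustrate why
// the progress listener stays silent when no utterance ID is supplied.
interface ProgressListener {
    void onStart(String utteranceId);
    void onDone(String utteranceId);
}

class FakeTts {
    private ProgressListener listener;

    void setOnUtteranceProgressListener(ProgressListener l) { listener = l; }

    // Mirrors the real engine's contract: no ID, no callbacks.
    void speak(String text, String utteranceId) {
        if (listener != null && utteranceId != null) {
            listener.onStart(utteranceId);
            listener.onDone(utteranceId);
        }
    }
}

public class UtteranceIdDemo {
    public static void main(String[] args) {
        FakeTts tts = new FakeTts();
        StringBuilder log = new StringBuilder();
        tts.setOnUtteranceProgressListener(new ProgressListener() {
            public void onStart(String id) { log.append("start:").append(id); }
            public void onDone(String id)  { log.append(" done:").append(id); }
        });

        tts.speak("Hello?", null);   // listener never fires
        tts.speak("Hello?", "q1");   // listener fires with id "q1"
        System.out.println(log);     // start:q1 done:q1
    }
}
```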

Text to Speech 503 and Captcha Now

Submitted by 不羁岁月 on 2019-12-03 20:58:25
I've been using the Google Text to Speech engine for quite some time, and today I've started receiving 503s and captcha requests. My original query was https://translate.google.com/translate_tts?tl=en&q=hi Assuming I needed an API key, I requested one and added it to the URL query string: https://translate.google.com/translate_tts?tl=en&key=xxxxxxx&q=hi However, my service is still receiving the captcha request. I'm assuming that the API has been changed but can't find any documentation to support this. Is anyone else running into this issue? There is no official API for TTS from Google. https:

Problem with isSpeaking() when using Text-to-Speech on Android

Submitted by 本秂侑毒 on 2019-12-03 20:56:28
Question: I'm having a problem with the isSpeaking() method. When passing QUEUE_FLUSH to the speak() method, isSpeaking() works fine. However, when I queue multiple utterances (by passing QUEUE_ADD), the isSpeaking() method starts returning false immediately after more than one utterance has been queued. Then I stumbled across the source code of the TtsService class and saw this code: public boolean isSpeaking() { return (mSelf.mIsSpeaking && (mSpeechQueue.size() < 1)); } Does anyone have any idea why
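Given that quirk, one robust workaround is to track the speaking state yourself rather than trusting `isSpeaking()`: add each utterance ID to a set when you queue it and remove it from the listener's done/error callbacks; the engine is "speaking" while the set is non-empty. A sketch in plain Java (the calls into `onQueued`/`onFinished` stand in for your `speak()` wrapper and `UtteranceProgressListener`, which are not shown in the question):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Tracks speaking state from utterance lifecycle callbacks instead of
// relying on TextToSpeech.isSpeaking(), which can misreport with QUEUE_ADD.
public class SpeechTracker {
    private final Set<String> pending = ConcurrentHashMap.newKeySet();

    // Call when an utterance is handed to speak(..., QUEUE_ADD, ..., id).
    public void onQueued(String utteranceId) { pending.add(utteranceId); }

    // Call from UtteranceProgressListener.onDone / onError.
    public void onFinished(String utteranceId) { pending.remove(utteranceId); }

    public boolean isSpeaking() { return !pending.isEmpty(); }

    public static void main(String[] args) {
        SpeechTracker tracker = new SpeechTracker();
        tracker.onQueued("u1");
        tracker.onQueued("u2");                   // two utterances pending
        System.out.println(tracker.isSpeaking()); // true
        tracker.onFinished("u1");
        System.out.println(tracker.isSpeaking()); // true, u2 still queued
        tracker.onFinished("u2");
        System.out.println(tracker.isSpeaking()); // false
    }
}
```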

Text To Speech functionality when app is in background mode?

Submitted by 好久不见. on 2019-12-03 20:24:13
Question: I am working on a TextToSpeech app. I write a paragraph in a UITextField, then press the Speak button, and sound plays according to the text written in the UITextField. However, when the app is in background mode, the audio stops playing. How can I continue to play the sound in the background, similar to how an audio player can play a song in the background? I am using the following code for text to speech: #import "ViewController.h" #import "Google_TTS_BySham.h" #import <AVFoundation

SAPI 5 voice synthesis and C#

Submitted by ∥☆過路亽.° on 2019-12-03 17:23:34
I have installed a new SAPI5 voice. In the computer's Speech settings the new voice is visible and available to use, but my program cannot find it. To find it, I am using this code from the System.Speech.Synthesis namespace: SpeechSynthesizer s = new SpeechSynthesizer(); foreach (InstalledVoice v in s.GetInstalledVoices()) { st += v.VoiceInfo.Name + "\n"; } MessageBox.Show(st); The only voice found is Microsoft Anna. My code for speaking is as follows: s.SelectVoice("Eliska22k"); // the name of the voice is Eliska22k s.Speak("ahoj"); I am using C# 4 and Windows Vista 32-bit. Where is my

iOS text to speech: What decides the default voice returned by [AVSpeechSynthesisVoice voiceWithLanguage]?

Submitted by 故事扮演 on 2019-12-03 16:04:18
AVSpeechSynthesisVoice.voiceWithLanguage was introduced in iOS SDK 7.0. At that time, there was only one voice per language/locale. Since iOS SDK 9.0, more voices have been added for each language/locale, so Apple introduced a new API, voiceWithIdentifier, which lets you get the specific voice you want. My question is: what if we still use voiceWithLanguage on iOS 9 or above? What exactly does this API return? And more importantly, does the returned voice change between iOS versions and even between different devices? I've noticed that what voiceWithLanguage returns is kind of relying