text-to-speech

Why is the list of unavailable voices always empty?

孤街醉人 submitted on 2019-12-13 18:02:21
Question: Android enables an application to query the platform for the availability of language files: simply instantiate the intent below and send it in an asynchronous request using the startActivityForResult method.

Intent checkIntent = new Intent();
checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkIntent, TTS_CHECK_DATA_REQUEST_CODE);

The result of the above request is returned by calling the onActivityResult method: the second argument is a value which
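The result code delivered to onActivityResult decides what happens next. A minimal plain-Java sketch of that decision, assuming the platform value of TextToSpeech.Engine.CHECK_VOICE_DATA_PASS (which is 1); the class and method names here are made up for illustration:

```java
// Interprets the result code from an ACTION_CHECK_TTS_DATA request.
// CHECK_VOICE_DATA_PASS mirrors TextToSpeech.Engine.CHECK_VOICE_DATA_PASS (1);
// any other code means the voice data is missing and the app should launch
// an intent with ACTION_INSTALL_TTS_DATA instead.
class TtsDataCheck {
    static final int CHECK_VOICE_DATA_PASS = 1;

    static boolean needsInstall(int resultCode) {
        return resultCode != CHECK_VOICE_DATA_PASS;
    }
}
```

In the activity, onActivityResult would call needsInstall(resultCode) and, when it returns true, fire the install intent.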

Using PCM format of AWS Polly

我的梦境 submitted on 2019-12-13 17:00:26
Question: I am trying to use AWS Polly (for TTS) via the JavaScript SDK from AWS Lambda (which is exposed through a REST API using API Gateway). There is no trouble getting the PCM output. Here is the call flow in brief:

.NET application --> REST API (API Gateway) --> AWS Lambda (JS SDK) --> AWS Polly

The .NET application (I am using Postman too for testing) gets an audio stream buffer in the following format:

{"type":"Buffer","data":[255,255,0,0,0,0,255,255,255,255,0,0,0,0,0,0,255,255,255,255,0,0,0,0,255,255
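Raw PCM from Polly has no container, so most players will not accept it directly. One common fix is to convert the JSON Buffer's unsigned byte values back into signed bytes and prepend a standard 44-byte WAV header before handing the audio to the client. A sketch in Java, assuming Polly's PCM defaults of 16-bit signed little-endian mono samples at 16000 Hz (the PcmToWav class and its method names are invented for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

class PcmToWav {
    // Convert the JSON Buffer's unsigned 0-255 "data" values back to bytes.
    static byte[] fromBufferJson(int[] data) {
        byte[] b = new byte[data.length];
        for (int i = 0; i < data.length; i++) b[i] = (byte) data[i];
        return b;
    }

    // Wrap raw PCM (16-bit signed little-endian, mono) in a minimal WAV container.
    static byte[] wrap(byte[] pcm, int sampleRate) {
        ByteBuffer h = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        h.put("RIFF".getBytes(StandardCharsets.US_ASCII));
        h.putInt(36 + pcm.length);                      // RIFF chunk size
        h.put("WAVE".getBytes(StandardCharsets.US_ASCII));
        h.put("fmt ".getBytes(StandardCharsets.US_ASCII));
        h.putInt(16);                                   // fmt subchunk size
        h.putShort((short) 1);                          // audio format: PCM
        h.putShort((short) 1);                          // channels: mono
        h.putInt(sampleRate);
        h.putInt(sampleRate * 2);                       // byte rate = rate * channels * 2
        h.putShort((short) 2);                          // block align
        h.putShort((short) 16);                         // bits per sample
        h.put("data".getBytes(StandardCharsets.US_ASCII));
        h.putInt(pcm.length);                           // data subchunk size
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(h.array(), 0, 44);
        out.write(pcm, 0, pcm.length);
        return out.toByteArray();
    }
}
```

Alternatively, requesting OutputFormat "mp3" from Polly sidesteps the container problem entirely.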

How can I show a Toast after Text-to-Speech finishes speaking on Android?

拟墨画扇 submitted on 2019-12-13 15:19:39
Question: How can I show a Toast after Text-to-Speech finishes speaking? Actually, I want to do something more than Log. This is my code:

public class MainActivity extends AppCompatActivity implements TextToSpeech.OnInitListener, TextToSpeech.OnUtteranceCompletedListener {
    private TextToSpeech mTts;
    Button btnSpeak;
    EditText editTextTTS;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mTts = new TextToSpeech(this, this);
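OnUtteranceCompletedListener is deprecated; the usual replacement is UtteranceProgressListener, whose onDone(utteranceId) fires on a worker thread after synthesis, so the Toast must be posted back to the UI thread (e.g. with runOnUiThread). A plain-Java sketch of the callback pattern only, since the real engine needs a device (DoneListener and MockTts are invented stand-ins; the real TextToSpeech only fires the callback when speak() is given an utterance id):

```java
// Stand-in for the UtteranceProgressListener.onDone(utteranceId) callback.
interface DoneListener { void onDone(String utteranceId); }

// Stand-in for TextToSpeech: the real engine queues the text, synthesizes
// it, then fires onDone on a worker thread; here we fire it synchronously.
class MockTts {
    private DoneListener listener;

    void setListener(DoneListener l) { listener = l; }

    void speak(String text, String utteranceId) {
        // In the app, the Toast would go inside the listener,
        // wrapped in runOnUiThread(...).
        if (listener != null) listener.onDone(utteranceId);
    }
}
```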

Google Cloud Text-to-Speech Interface Confusion (How do I download the mp3 files?)

喜夏-厌秋 submitted on 2019-12-13 12:33:50
Question: I'd like to preface this with the fact that I am not a programmer/developer - I am a multimedia designer. I use text-to-speech to generate placeholder audio files that can be used to time animations before we record the official audio narration. Previously I was using Amazon Polly, but I wanted to give Google Cloud a try. However, I'm having the hardest time actually figuring out how to generate the mp3 files and save them. With Amazon Polly, you simply go to a website, enter your text into a
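Unlike Polly's console, Google Cloud's text:synthesize REST endpoint does not hand back a file: it returns JSON whose audioContent field contains the MP3 encoded as base64, which must be decoded and written to disk. A small Java sketch of that last step (the SaveMp3 class and method names are made up; only the base64-in-audioContent behavior comes from the API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

class SaveMp3 {
    // The text:synthesize response carries the MP3 bytes base64-encoded
    // in its "audioContent" JSON field; decode them back to raw bytes.
    static byte[] decodeAudioContent(String audioContent) {
        return Base64.getDecoder().decode(audioContent);
    }

    // Write the decoded bytes out as a playable .mp3 file.
    static void save(String audioContent, Path file) throws IOException {
        Files.write(file, decodeAudioContent(audioContent));
    }
}
```

For a no-code route, the "Try this API" panel on the Cloud Text-to-Speech documentation page returns the same JSON, from which the audioContent value can be decoded with any base64 tool.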

TTS play silence in a for-loop doesn't work

眉间皱痕 submitted on 2019-12-13 04:41:34
Question: I want to add a pause button to my TTS app. I am trying to do this with a while-loop, which should playSilence. It isn't working, but I can't find my mistake.

My boolean:

boolean pausegedrückt;

The for-loop:

for (int i = 1; i < anzahl + 1; i++) {
    while (pausegedrückt == true) {
        tts.playSilence(1000, TextToSpeech.QUEUE_ADD, null);
    }
    String str = String.valueOf(i);
    tts.speak(str, TextToSpeech.QUEUE_ADD, null);
    tts.playSilence(3000, TextToSpeech.QUEUE_ADD, null);
}

My onCheckedChanged:

@Override
public
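The while-loop cannot work as a pause: it runs on the same thread that would have to clear pausegedrückt, so once the flag is true it spins forever, queuing silence. A more workable pattern is to track how far the loop got, stop on pause, and re-queue from that index on resume. A plain-Java sketch of that idea (Speaker and PausableCounter are invented stand-ins; in the app, speak() would call tts.speak(text, TextToSpeech.QUEUE_ADD, null) and pause() would also call tts.stop()):

```java
// Stand-in for the TTS call; in the app this wraps tts.speak(...).
interface Speaker { void speak(String text); }

class PausableCounter {
    private final Speaker speaker;
    private final int total;
    private int next = 1;      // next number to queue
    private boolean paused;

    PausableCounter(Speaker speaker, int total) {
        this.speaker = speaker;
        this.total = total;
    }

    // Queue numbers until done or paused; never busy-wait on the flag.
    void run() {
        while (!paused && next <= total) {
            speaker.speak(String.valueOf(next));
            next++;
        }
    }

    void pause()  { paused = true; }          // in the app: also tts.stop()
    void resume() { paused = false; run(); }  // continue where we left off
}
```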

Creating a UWP DLL using Windows::Media::SpeechSynthesis

天大地大妈咪最大 submitted on 2019-12-13 03:33:37
Question: I am currently trying to develop a speech-synthesis UWP DLL using the namespace Windows::Media::SpeechSynthesis. I read this documentation and the Microsoft page dedicated to the namespace. I tried to implement the namespace in code.

Header file:

#pragma once
#include <stdio.h>
#include <string>
#include <iostream>
#include <ppltasks.h>

using namespace Windows::Media::SpeechSynthesis;
using namespace Windows::UI::Xaml::Controls;
using namespace Windows::UI::Xaml::Media;
using namespace Windows

Chrome Android text-to-speech not changing language

亡梦爱人 submitted on 2019-12-13 02:47:21
Question: The code below works fine in Chrome on desktop, but in Chrome on Android it is not using the msg.lang specified. The French text is being read out as if it were English, in an American accent. My phone's default language is English; does that matter? I want the page to read out in the selected language regardless of what settings the user has on their phone.

const msg = new SpeechSynthesisUtterance();
msg.volume = 1;
msg.text = text; // these words are in French
msg.lang = 'fr-FR';
speechSynthesis

How to detect all Views and text inside a system dialog?

微笑、不失礼 submitted on 2019-12-13 02:36:32
Question: Let's say I have a similar scenario: I'm using an accessibility service to make my TTS engine talk when this dialog appears, but the only thing I was able to detect were the selectable views (those pointed to by the arrow). Is there any way to detect the title and (more importantly) the whole text inside the dialog?

Answer 1: Yes. I think it is likely that you're grabbing these items off of accessibility events, which focus on a single node. What you want to do instead is look at the entire view
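In an accessibility service, starting from getRootInActiveWindow() and recursing over getChild(i) visits every node in the dialog, including the title and message text, not just the view that fired the event. A plain-Java sketch of that depth-first walk (Node and TextCollector are invented stand-ins for AccessibilityNodeInfo, which needs a device to construct):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for AccessibilityNodeInfo: the real tree is reached
// via getRootInActiveWindow() and walked with getChildCount()/getChild(i),
// reading each node's getText().
class Node {
    final String text;                  // null for purely structural views
    final List<Node> children = new ArrayList<>();

    Node(String text) { this.text = text; }
    Node add(Node child) { children.add(child); return this; }
}

class TextCollector {
    // Depth-first walk of the whole window tree, collecting every text.
    static List<String> collect(Node root) {
        List<String> out = new ArrayList<>();
        walk(root, out);
        return out;
    }

    private static void walk(Node node, List<String> out) {
        if (node.text != null) out.add(node.text);
        for (Node child : node.children) walk(child, out);
    }
}
```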

Huh? Why doesn't playEarcon() produce onUtteranceCompleted()?

喜欢而已 submitted on 2019-12-12 18:43:15
Question: An Android book I have states that using TextToSpeech.playEarcon() is preferable to playing audio files (using MediaPlayer) because:

"Instead of having to determine the opportune moment to play an audible cue and relying on callbacks to get the timing right, we can instead queue up our earcons among the text we send to the TTS engine. We then know that our earcons will be played at the appropriate time, and we can use the same pathway to get our sounds to the user, including the
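A frequently cited cause for the missing callback: onUtteranceCompleted() only fires for requests whose params map carries an utterance id. A small Java sketch of building such a map (the EarconParams class is invented here; the key string "utteranceId" is assumed to match the platform value of TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID):

```java
import java.util.HashMap;

// Builds the params HashMap that playEarcon()/speak() need before
// onUtteranceCompleted() will be invoked for that request.
class EarconParams {
    // Assumed value of TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID.
    static final String KEY_PARAM_UTTERANCE_ID = "utteranceId";

    static HashMap<String, String> withUtteranceId(String id) {
        HashMap<String, String> params = new HashMap<>();
        params.put(KEY_PARAM_UTTERANCE_ID, id);
        return params;
    }
}
```

In the app this would be passed as the third argument, e.g. tts.playEarcon("ding", TextToSpeech.QUEUE_ADD, EarconParams.withUtteranceId("earcon-1")).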

Overriding the “Always use my settings” option in Text-to-Speech settings programmatically in Android

谁都会走 submitted on 2019-12-12 17:05:43
Question: Some tablets have an option that overrides app-specific text-to-speech settings, named "Always use my settings", in the Text-to-Speech settings. If this option is checked, the TTS engine will pick up the user's settings for TTS rather than the app-specific settings. My requirement is: whenever my app is using the TTS engine, my app's settings should always be used, since it has to announce in a particular language at a particular speech rate. But once "Always use my settings" is selected and if it has