sapi

Training SAPI: creating transcribed WAV files and adding file paths to the registry

Submitted by 谁说胖子不能爱 on 2019-12-05 16:20:06
We are trying to do acoustic training, but we are unable to create the transcribed audio files. How do we create them? We are using GetTranscript and AppendTranscript, but we cannot obtain the ISpTranscript interface from the ISpStream when we open the stream in read/write mode. So how do you create the transcript WAV files?

```cpp
hr = SPBindToFile(L"e:\\file1.wav", SPFM_OPEN_READONLY, &cpStream);
// QueryInterface fails here with E_NOINTERFACE if opened with SPFM_OPEN_READWRITE
hr = cpStream.QueryInterface(&cpTranscript);
hr = cpTranscript->AppendTranscript(sCorrectText);
hr = cpTranscript->GetTranscript(
```
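For comparison, here is a hedged, untested C++ sketch of creating a *new* transcript WAV rather than opening an existing one: ISpTranscript is obtained from a stream bound to a WAV file, and AppendTranscript needs a stream that was created writable. The file path, audio format, and transcript text below are placeholders, not values from the question.

```cpp
// Untested sketch: create a writable WAV stream, write audio into it,
// then attach the transcript. Error handling is trimmed for brevity.
#include <sapi.h>
#include <sphelper.h>
#include <atlbase.h>

HRESULT CreateTranscribedWav(const WCHAR *pszTranscript)
{
    CComPtr<ISpStream> cpStream;
    CComPtr<ISpTranscript> cpTranscript;

    CSpStreamFormat fmt;
    HRESULT hr = fmt.AssignFormat(SPSF_22kHz16BitMono);
    if (FAILED(hr)) return hr;

    // SPFM_CREATE_ALWAYS yields a writable stream, so the transcript
    // interface can be queried and written afterwards.
    hr = SPBindToFile(L"e:\\file1.wav", SPFM_CREATE_ALWAYS, &cpStream,
                      &fmt.FormatId(), fmt.WaveFormatExPtr());
    if (FAILED(hr)) return hr;

    // ... record or copy the training audio into cpStream here ...

    hr = cpStream.QueryInterface(&cpTranscript);
    if (FAILED(hr)) return hr;

    hr = cpTranscript->AppendTranscript(pszTranscript);
    if (FAILED(hr)) return hr;

    return cpStream->Close();
}
```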

Memory leak in .Net Speech.Synthesizer?

Submitted by 柔情痞子 on 2019-12-05 04:39:47
I found a continuous memory leak in my application. After examining it with a memory profiler, I found the cause is an object from Microsoft's Speech.Synthesizer, so I built a toy project to verify the hypothesis:

```csharp
// Toy example to show memory leak in the SpeechSynthesizer object
static void Main(string[] args)
{
    string text = "hello world. This is a long sentence";
    PromptBuilder pb = new PromptBuilder();
    pb.StartStyle(new PromptStyle(PromptRate.ExtraFast));
    pb.AppendText(text);
    pb.EndStyle();
    SpeechSynthesizer tts = new SpeechSynthesizer();
    while (true)
    {
        //SpeechSynthesizer tts = new
```
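A common mitigation pattern for this kind of leak (a hedged sketch, not a confirmed fix for this specific report) is to reuse a single synthesizer instance and dispose it deterministically, since SpeechSynthesizer holds unmanaged voice resources and implements IDisposable:

```csharp
using System;
using System.Speech.Synthesis;

class NoLeakLoop
{
    static void Main()
    {
        // Reuse one synthesizer instead of constructing one per iteration,
        // and let the using block release the unmanaged voice on exit.
        using (SpeechSynthesizer tts = new SpeechSynthesizer())
        {
            PromptBuilder pb = new PromptBuilder();
            pb.StartStyle(new PromptStyle(PromptRate.ExtraFast));
            pb.AppendText("hello world. This is a long sentence");
            pb.EndStyle();

            for (int i = 0; i < 10; i++)
                tts.Speak(pb);   // same instance reused each time
        }                        // Dispose releases the unmanaged resources
    }
}
```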

SAPI 5 voice synthesis and C#

Submitted by 淺唱寂寞╮ on 2019-12-05 01:33:36
Question: I have installed a new SAPI 5 voice. The new voice is visible and available in the Speech section of the computer settings, but my program cannot find it. To look for it I use this code, with the System.Speech.Synthesis namespace:

```csharp
SpeechSynthesizer s = new SpeechSynthesizer();
foreach (InstalledVoice v in s.GetInstalledVoices())
{
    st += v.VoiceInfo.Name + "\n";
}
MessageBox.Show(st);
```

The only voice found is Microsoft Anna. My code for speaking is as follows:

```csharp
s.SelectVoice("Eliska22k"); // name of
```
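One frequent cause of this symptom (an assumption here, not stated in the question) is a 32-bit/64-bit mismatch: a 64-bit voice installer registers its token in a registry view that a 32-bit .NET process never reads, and vice versa. The two token locations to compare are:

```
64-bit view:  HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens
32-bit view:  HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Speech\Voices\Tokens
```

If the voice token only exists in one view, rebuilding the application for the matching bitness (or re-registering the voice in the other view) is the usual remedy.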

How to convert SAPI's MS LANGID to a BCP 47 language tag?

Submitted by 无人久伴 on 2019-12-04 21:04:45
The call to SAPI's get-language method returns an MS LANGID, but for my purpose it needs to be converted to a BCP 47 language tag (e.g. en-GB). How do we do it? I am not able to do it using LCIDToLocaleName, because to use that function I first need to convert the returned value into LCID format. For example, it returns "809" for English; how do I convert that into an LCID first, given that the LCID for English in hex is "0809" and in decimal is "2057"? Any help would be appreciated. Edit: following is the code:

```cpp
if (S_OK != SpEnumTokens(SPCAT_VOICES, NULL, NULL, &voice_tokens))
    return FALSE;
unsigned long voice_count,
```
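The key observation is that the string "809" is the LANGID in hexadecimal, so parsing it as hex yields exactly the decimal LCID (2057) that LCIDToLocaleName expects on Windows. A small Python sketch of the arithmetic, with a hypothetical fallback table (the table entries are illustrative, taken from well-known LCID assignments, and are not part of the original question):

```python
def langid_hex_to_lcid(langid_hex: str) -> int:
    """Parse a SAPI LANGID string such as "809" as hexadecimal."""
    return int(langid_hex, 16)

# Hypothetical fallback mapping for when LCIDToLocaleName is unavailable;
# a real application would call the Win32 API instead of a table.
LCID_TO_BCP47 = {
    0x0409: "en-US",
    0x0809: "en-GB",
    0x0407: "de-DE",
    0x040C: "fr-FR",
}

def to_bcp47(langid_hex: str) -> str:
    return LCID_TO_BCP47[langid_hex_to_lcid(langid_hex)]

print(langid_hex_to_lcid("809"))  # → 2057
print(to_bcp47("809"))            # → en-GB
```

On Windows proper, the decimal value from `langid_hex_to_lcid` can be passed straight to `LCIDToLocaleName` instead of the lookup table.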

C# SAPI 5.4 Languages?

Submitted by 扶醉桌前 on 2019-12-04 20:21:12
I've made a simple program that recognizes speech using SAPI 5.4. I wanted to ask if I can add more languages to the TTS and the ASR. Thanks. Here is the code I made, in case anybody needs to take a look at it:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using SpeechLib;
using System.Globalization;
using System.Speech.Recognition;

namespace WindowsFormsApplication1
{
    public partial class Form1 : Form
    {
        // Speech Recognition Object
        SpSharedRecoContext
```
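In general, the languages available to System.Speech come from the recognizer and voice packs installed on the machine; extra languages require installing the corresponding Windows speech language packs rather than a code change. An untested sketch that lists what is currently available:

```csharp
// Sketch: enumerate the installed recognizers (ASR languages) and
// voices (TTS languages) visible to System.Speech.
using System;
using System.Speech.Recognition;
using System.Speech.Synthesis;

class LanguageLister
{
    static void Main()
    {
        foreach (RecognizerInfo ri in SpeechRecognitionEngine.InstalledRecognizers())
            Console.WriteLine("ASR: " + ri.Culture.Name);

        using (SpeechSynthesizer tts = new SpeechSynthesizer())
            foreach (InstalledVoice v in tts.GetInstalledVoices())
                Console.WriteLine("TTS: " + v.VoiceInfo.Culture.Name);
    }
}
```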

Understanding PHP internals: the PHP life cycle

Submitted by 两盒软妹~` on 2019-12-04 20:13:40
1. PHP's run modes: PHP has two run modes, web mode and CLI mode. In either mode PHP works the same way, running as a SAPI (Server API).

1) When we type the php command in a terminal, it uses the CLI SAPI. The CLI acts like a web server to serve the request, and hands control back to the terminal when the request completes.

2) When Apache or another web server is the host, PHP serves each incoming request, typically either multi-process (usually compiled as an Apache module to handle PHP requests) or multi-threaded.

2. Where everything starts: the SAPI interface. We usually test PHP web programs through a web server such as Apache or Nginx, or execute PHP scripts on the command line with the php binary. After the script runs, the server answers and the browser displays the response, or the output appears on standard output when the command finishes. We rarely care where the PHP interpreter itself lives. Although executing a script through a web server and through the command-line program looks quite different, they actually work the same way. The command-line case is analogous to the web case: the command-line arguments are passed to the script to execute, just like requesting a PHP page through a URL; when the script finishes, the result is returned, only the command-line result is shown on the terminal. Script execution always begins through the SAPI interface.

1) Starting Apache: when the given SAPI starts up, e.g. in response to /usr/local/apache/bin/apachectl start

Speech Recognition with SAPI: Custom Language Support through phonemes

Submitted by 自古美人都是妖i on 2019-12-04 16:13:39
I have a text that I have transcribed from text to phonemes. I now want to modify or create a custom grammar XML which will define the pronunciation of the words with international phonemes, and have the recognizer use that grammar with that specific spelling instead of anything else. I want to add speech recognition for certain words spoken in languages other than English/German, etc. Would that be possible with SAPI, and how? Can anyone point me in the right direction (using SpInProcRecoContext.Recognizer and a custom grammar)? So I want to use the already existing recognition engine for e.g.
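One direction worth checking (a hedged sketch, not a verified solution to this question) is the SAPI 5 XML grammar format, where a phrase element can carry a PRON attribute supplying an explicit phoneme string that overrides the engine's own pronunciation. The rule name, words, and phoneme strings below are made-up placeholders:

```xml
<!-- Sketch of a SAPI 5 (non-SRGS) grammar: PRON overrides the engine's
     pronunciation with an explicit SAPI phoneme string. All names,
     words, and phonemes here are illustrative assumptions. -->
<GRAMMAR LANGID="409">
  <RULE NAME="CustomWords" TOPLEVEL="ACTIVE">
    <L>
      <P PRON="h eh l ow">hello</P>
      <P PRON="b aa n zh uw r">bonjour</P>
    </L>
  </RULE>
</GRAMMAR>
```

The phoneme symbols must come from the phone set of the recognizer's language, so a cross-language pronunciation may only approximate sounds the engine's acoustic model actually covers.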

Microsoft Speech Recognition Custom Training

Submitted by 核能气质少年 on 2019-12-04 09:34:54
I have been wanting to create an application using Microsoft Speech Recognition. My application's users are expected to often say abbreviated things, such as 'LHC' for 'Large Hadron Collider', or 'CERN'. Given that exact order, my application will return: "You said: At age C." and "You said: Cern". While it did work for 'CERN', it failed very badly for 'LHC'. However, if I could make my own custom training files, I could easily place the term 'LHC' somewhere in there. Then I could make the user open the Speech Control Panel and run my training file. All the links I have found for this have been
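A common alternative to custom acoustic training (offered here as a hedged sketch, not the asker's requested approach) is to constrain recognition with a grammar that contains the problem vocabulary, so "LHC" competes only against the listed phrases instead of open dictation:

```csharp
using System;
using System.Speech.Recognition;

class AbbreviationGrammar
{
    static void Main()
    {
        // Restrict the recognizer to a known vocabulary; the terms
        // below are the examples from the question.
        Choices terms = new Choices("LHC", "CERN", "Large Hadron Collider");
        Grammar grammar = new Grammar(new GrammarBuilder(terms));

        using (SpeechRecognitionEngine engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(grammar);
            engine.SetInputToDefaultAudioDevice();
            engine.SpeechRecognized += (s, e) =>
                Console.WriteLine("You said: " + e.Result.Text);
            engine.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine();   // keep listening until Enter is pressed
        }
    }
}
```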

Speech training files and registry locations

Submitted by 醉酒当歌 on 2019-12-04 08:09:54
I have a speech project that requires acoustic training to be done in code. I am successfully able to create training files with transcripts and their associated registry entries under Windows 7 using SAPI. However, I am unable to determine whether the recognition engine is successfully using these files and adapting its model. My questions are as follows: when performing training through the Control Panel training UI, the system stores the training files in "{AppData}\Local\Microsoft\Speech\Files\TrainingAudio". Do the audio training files HAVE to be stored in this location, or can I store them
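For reference, a hedged sketch of the registry layout that ties training WAV files to a recognition profile. The GUID and value name below are placeholders (not values from the question), and the exact value naming is an assumption:

```
; Hypothetical example of a training-file registry entry under a
; recognition profile token. GUID and value name are placeholders.
[HKEY_CURRENT_USER\Software\Microsoft\Speech\RecoProfiles\Tokens\{00000000-0000-0000-0000-000000000000}\Files]
"TrainingAudio-Example"="C:\\Users\\me\\AppData\\Local\\Microsoft\\Speech\\Files\\TrainingAudio\\example.wav"
```

Since the registry value stores a full path, the WAV file itself does not obviously need to live in the default TrainingAudio folder, but that should be verified against the engine's behavior rather than assumed.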

Synchronization problem with SAPI (text to speech) in C#

Submitted by 空扰寡人 on 2019-12-04 06:10:25
Question: I'm working on a project which will speak the content of a browsed web page. The browser is made by me using WebControl. I'm using SAPI as the speech engine. I wanted to highlight each line of the web page while reading it through SpVoice.Speak. The problem is that if I use this Speak method asynchronously, only the last line of the web page gets highlighted, because the loop doesn't wait for the voice to complete. It happens so fast that only the last line is shown as highlighted. Highlight
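One way to keep the highlight in step with the audio (a hedged sketch using the managed System.Speech wrapper rather than the raw SpVoice COM object) is to speak asynchronously and react to progress events instead of driving the highlight from a loop. HighlightWord below is a placeholder for the actual UI update:

```csharp
using System;
using System.Speech.Synthesis;

class HighlightWhileSpeaking
{
    static void Main()
    {
        using (SpeechSynthesizer tts = new SpeechSynthesizer())
        {
            // SpeakProgress fires as each word is rendered, so the UI
            // can highlight exactly the word currently being spoken.
            tts.SpeakProgress += (s, e) =>
                HighlightWord(e.CharacterPosition, e.Text);
            tts.SpeakAsync("This is the text of the browsed page.");
            Console.ReadLine();   // keep the process alive while speaking
        }
    }

    static void HighlightWord(int position, string word)
    {
        // Placeholder: a real app would mark the word in the WebControl.
        Console.WriteLine($"highlight at {position}: {word}");
    }
}
```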