azure-cognitive-services

CORS errors on Azure Translator API (Cognitive Services) when using Aurelia's Fetch Client

大兔子大兔子 submitted on 2019-12-13 16:23:03
Question: I'm trying to use a very basic API call from Windows Azure to translate some text. They provide quickstart example code. I tried this code and it works pretty well: the text "Hello world" is translated into German and Italian. I removed my personal subscription key. Here is the sample: const request = require('request'); const uuidv4 = require('uuid/v4'); const subscriptionKey = '........'; let options = { method: 'POST', baseUrl: 'https://api.cognitive.microsofttranslator.com/', url: 'translate', qs
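The Node.js sample above is cut off, but the same call can be sketched in Python by separating request construction from sending, which also makes the pieces easy to inspect. This is a sketch based on the Translator v3.0 REST shape (the `api-version=3.0` query string and `Ocp-Apim-Subscription-Key` header are assumptions to verify against the current docs); note the CORS error in the title happens because the key-bearing call is made from the browser, where the service does not allow it, so a request like this belongs on a server.

```python
import uuid

def build_translate_request(subscription_key, text, to_langs):
    """Build the URL, headers, and JSON body for a Translator v3.0 call.

    The request is returned rather than sent so each piece can be inspected."""
    params = "&".join(["api-version=3.0"] + [f"to={t}" for t in to_langs])
    url = f"https://api.cognitive.microsofttranslator.com/translate?{params}"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
        "X-ClientTraceId": str(uuid.uuid4()),  # optional correlation id
    }
    body = [{"Text": text}]
    return url, headers, body

# To actually send it (requires a valid key):
# import requests
# url, headers, body = build_translate_request("YOUR_KEY", "Hello world", ["de", "it"])
# print(requests.post(url, headers=headers, json=body).json())
```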

What can I do about inconsistent sentiment detection provided by Cognitive Services?

只谈情不闲聊 submitted on 2019-12-13 16:17:19
Question: Using Text Analytics for sentiment detection, I sometimes receive results I consider inconsistent. One simple example demonstrates this: "I'm sad" was scored 1% (0% means very negative), while "Hello I'm sad" was scored 85% (100% means very positive). Is there a way to improve or contribute to the Text Analytics sentiment detection service? Or to use my own model, similar to LUIS, to detect sentiment? Alternatively, is there some recommended service/library to use to change input text prior
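One practical way to diagnose inconsistencies like the example above is to batch both phrasings into a single request and compare the scores side by side. A minimal sketch of the request payload, assuming the v2.1 Text Analytics document format (newer API versions use a different request shape, so treat the field names as an assumption):

```python
def build_sentiment_documents(texts, language="en"):
    """Shape raw strings into the 'documents' payload the v2.1 sentiment
    endpoint expects: each document needs an id, a language, and the text."""
    return {
        "documents": [
            {"id": str(i), "language": language, "text": t}
            for i, t in enumerate(texts, start=1)
        ]
    }

# POST this as JSON to
#   https://<region>.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment
# with the Ocp-Apim-Subscription-Key header set, then compare the per-document
# scores in the response.
```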

Azure Cognitive Services - Face API Response: Reserved Fields or Bugs?

白昼怎懂夜的黑 submitted on 2019-12-13 04:13:18
Question: In the Azure Cognitive Services Face API (see e.g. https://azure.microsoft.com/en-us/services/cognitive-services/face), the following response fields never seem to trigger: headPose:pitch (reserved field), foreheadOccluded, eyeOccluded. Am I misusing these, is there a plan for them, or is there no plan to activate them? Answer 1: If you look at the API documentation here: For headPose, it says: EDIT 13/06/2019: the doc was saying HeadPose's pitch value is a reserved field and will always return 0

LUIS API - Retrieve all endpoint utterances and its scores

泄露秘密 submitted on 2019-12-13 03:48:56
Question: I have spent the past few days searching for how to retrieve endpoint utterances and their scores for a dashboard I am working on. The problem is that I'm lost among the APIs; there seem to be many, but I cannot find the exact one that fits my need. In this API documentation here, there is one that gets example utterances. What I want is the actual endpoint utterances. Can anyone point me to the API to use? Thanks in advance. Answer 1: @Jeff, actually in that API docs that you linked
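The LUIS authoring API does expose endpoint traffic separately from example utterances via a "download query logs" call that returns CSV. A sketch of parsing that export into (utterance, intent, score) rows for a dashboard; the column names `Query` and `Response` and the `topScoringIntent` JSON shape are assumptions based on the v2.0 export, so check the header row of an actual download:

```python
import csv
import io
import json

def parse_query_log(csv_text):
    """Parse the CSV returned by the LUIS query-log download into
    (utterance, top intent, score) tuples.

    The 'Response' column holds the JSON the endpoint returned for
    each query, which is where the scores live."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        response = json.loads(row["Response"])
        top = response.get("topScoringIntent", {})
        rows.append((row["Query"], top.get("intent"), top.get("score")))
    return rows
```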

Azure Cognitive Services - TTS

◇◆丶佛笑我妖孽 submitted on 2019-12-12 18:10:48
Question: I got API keys for Azure Cognitive Services, but I can't find any documentation on how to call this service through Postman. Does anybody have experience with this? Answer 1: It seems you are trying to call the Text To Speech service with your keys. There are two steps for that. 1. Get an access token. You have to get your token in this format: Request URL: https://YourResourceEndpoint/sts/v1.0/issuetoken Method: POST Header: Content-Type:application/x-www-form-urlencoded Ocp-Apim-Subscription-Key
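The two steps in the answer (exchange the key for a token, then synthesize with SSML) can be sketched as request builders, so the same URLs and headers can be pasted into Postman. The voice name and output format here are assumptions for illustration; check the service's published voice list for valid values:

```python
def build_token_request(region, subscription_key):
    """Step 1: pieces of the POST that exchanges the key for a bearer token."""
    url = f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/x-www-form-urlencoded",
    }
    return url, headers

def build_tts_request(region, token, text, voice="en-US-JessaRUS"):
    """Step 2: pieces of the POST that synthesizes `text` as audio.

    The body is SSML; the response body is the audio stream."""
    ssml = (
        "<speak version='1.0' xml:lang='en-US'>"
        f"<voice xml:lang='en-US' name='{voice}'>{text}</voice></speak>"
    )
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
    }
    url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"
    return url, headers, ssml
```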

Is there an official LUIS API that returns the total number of Utterances for each intent?

柔情痞子 submitted on 2019-12-12 06:58:39
Question: I noticed the LUIS portal (www.luis.ai) shows intents with the total number of utterances for each. I'm looking to build a similar page in my application, although Microsoft's published APIs don't have a method that returns the total utterances per intent. We noticed the LUIS dashboard uses this API to pull the data, but the method is not published in their docs: https://westus.api.cognitive.microsoft.com/luis/webapi/v2.0/apps/{appId}/versions/{version}/stats/labelsperintent Does anyone
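Rather than depend on the unpublished `stats/labelsperintent` route, the same numbers can be derived client-side from the published call that lists labeled example utterances, by counting per intent. The `intentLabel` field name is an assumption based on the v2.0 examples response, so verify it against an actual payload:

```python
from collections import Counter

def utterances_per_intent(examples):
    """Count labeled example utterances per intent.

    `examples` is the JSON list returned by the published 'review labeled
    examples' authoring call, paged through until exhausted; each item is
    assumed to carry an 'intentLabel' field."""
    return Counter(ex["intentLabel"] for ex in examples)
```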

QnA Maker missing train endpoint

社会主义新天地 submitted on 2019-12-11 17:52:29
Question: The API V4 documentation is missing the Train endpoint. Is the feature missing in the GA version? https://westus.dev.cognitive.microsoft.com/docs/services/5a93fcf85b4ccd136866eb37/operations/5ac266295b4ccd1554da75ff Answer 1: An official reply was made on a GitHub issue by Prashant Choudhari from Microsoft: "We do not have the train API in V4. We are re-thinking this feature and will re-enable an advanced version in a future release." See here: https://github.com/Microsoft/BotBuilder-CognitiveServices

CustomVision API returns “Operation returned an invalid status code: 'NotFound'”

↘锁芯ラ submitted on 2019-12-11 08:44:32
Question: I am using the NuGet package Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction. I have created a Custom Vision application in the Custom Vision portal and obtained API keys and a project ID. Whenever I try to make a request to the API, the following exception is always thrown: HttpOperationException: Operation returned an invalid status code 'NotFound' Here is my code: HttpClient httpClient = new HttpClient(); CustomVisionPredictionClient customVisionPredictionClient = new
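A 'NotFound' from the prediction call almost always means one of three identifiers is wrong: the regional endpoint, the project GUID, or the published iteration name. Building the REST URL by hand makes each piece visible and easy to check against the portal; the path below follows the Custom Vision v3.0 prediction route as an assumption to verify against the docs:

```python
def build_prediction_url(endpoint, project_id, publish_name):
    """Prediction URL for a Custom Vision v3.0 image-classification call.

    If any of these three pieces disagrees with what the portal shows
    (Settings page for the endpoint and project id, Performance tab for
    the published iteration name), the service answers 404 NotFound."""
    return (
        f"{endpoint.rstrip('/')}/customvision/v3.0/Prediction/"
        f"{project_id}/classify/iterations/{publish_name}/image"
    )

# POST the raw image bytes to this URL with the Prediction-Key header set.
```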

Display Text for QnAMaker follow-on prompts

女生的网名这么多〃 submitted on 2019-12-11 08:14:16
Question: I'm attempting to use follow-on prompts within QnA Maker but am confused about the purpose of the field labelled "Display text" in the "Follow-up prompt" creation dialogue. https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/multiturn-conversation describes this field as "The custom text to display in the follow-up prompt." To me, that suggests it's just a label for the follow-up prompt, which is typically rendered as a button. I therefore assumed that the text had
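The relationship between the pieces is easiest to see in the JSON that the update-knowledge-base payload nests under `context.prompts`: `displayText` is only the label the client renders (typically as a button), while `qnaId` is what actually selects the follow-up answer. A small sketch of that shape, with field names taken from the multi-turn docs but best treated as an assumption against the live API version:

```python
def make_prompt(display_text, qna_id, order=0):
    """One follow-up prompt entry as nested under a QnA pair's
    context.prompts: the label shown to the user and the id of the
    QnA pair it triggers are independent fields."""
    return {
        "displayOrder": order,
        "displayText": display_text,  # button label only; not matched as a question
        "qnaId": qna_id,              # the answer this prompt leads to
    }
```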

Rate limit exceeded in Face API

[亡魂溺海] submitted on 2019-12-11 04:55:38
Question: What should I do when I encounter "rate limit exceeded" for the Face API, other than using Task.Delay(1000)? I have about 50 records and detect/identify/verify within 2 seconds. For IdentifyAsync, I set the confidence threshold to 0.0f and the maximum number of candidates returned to 50. I tried using Task.Delay(1000) and reducing the number of candidates, but it doesn't solve my problem. Please advise me on how to resolve this issue, as I'm new to this. Answer 1: I wrote a library
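A fixed Task.Delay(1000) retries at the same pace that triggered the throttle; the usual fix is exponential backoff, waiting longer after each consecutive rate-limit response. A language-agnostic sketch of the pattern (`RateLimitError` here is a hypothetical stand-in for whatever exception the SDK raises on HTTP 429):

```python
import time

class RateLimitError(Exception):
    """Hypothetical marker for an HTTP 429 from the Face API."""

def call_with_retry(fn, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call `fn`, retrying on RateLimitError with exponential backoff:
    wait base_delay, then 2x, 4x, ... between attempts, and re-raise
    once the attempt budget is exhausted."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

The `sleep` parameter is injected so the backoff schedule can be tested (or replaced) without real waiting; in production the default `time.sleep` is used, and if the service sends a Retry-After header, honoring it beats any computed delay.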