ibm-watson

IBM Cloud Watson Assistant: How to get the ID of a workspace

Submitted by 我怕爱的太早我们不能终老 on 2019-12-08 08:34:48
Question: I made a chatbot using IBM Cloud Watson Assistant and I need to use it in my Android application. This is my config.xml code: <?xml version="1.0" encoding="utf-8"?> <resources> <!-- Watson Conversation Service Credentials --> <string name="workspace_id">???</string> <string name="conversation_username">2m5tAP3W_ELNzcKlc4B5mRN6R-QXtF1C9zS22XzYXYbA</string> <string name="conversation_password">2m5tAP3W_ELNzcKlc4B5mRN6R-QXtF1C9zS22XzYXYbA</string> <!--Watson Speech-To-Text Service Credentials-
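The workspace ID is not one of the service credentials; it can be copied from the workspace details in the Watson Assistant tooling, or fetched from the v1 "list workspaces" endpoint. A minimal Java sketch using only the standard library — the host and version date are assumptions based on the classic Conversation v1 API:

```java
import java.util.Base64;

public class ListWorkspaces {

    // URL of the "list workspaces" call; the response JSON has a "workspaces"
    // array whose entries each carry the "workspace_id" to paste into config.xml.
    public static String workspacesUrl(String versionDate) {
        return "https://gateway.watsonplatform.net/conversation/api/v1/workspaces?version=" + versionDate;
    }

    // Plain HTTP Basic auth over the conversation_username / conversation_password pair.
    public static String basicAuth(String username, String password) {
        String raw = username + ":" + password;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes());
    }

    public static void main(String[] args) {
        // Issue a GET on this URL with this Authorization header
        // (e.g. via java.net.HttpURLConnection) to list the workspace IDs.
        System.out.println(workspacesUrl("2017-05-26"));
        System.out.println(basicAuth("user", "pass"));
    }
}
```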

IBM Watson Visual Recognition in Java

Submitted by 你离开我真会死。 on 2019-12-08 07:12:22
Question: I want to use IBM Watson Visual Recognition in my Android app and want to call its APIs from Java, but I can't find any examples or any reference listing the Java methods for this service. You can see that the Java examples are missing here. Please help me find a few suitable examples or a reference for these methods. Also, please tell me what the Bluemix platform is, and whether it is necessary in order to use IBM Watson Visual Recognition. Thanks in advance! Answer 1: Look at the Java SDK, and in
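Pending the missing SDK examples, the service can also be called from Java over plain REST with the standard library. The host, path, and version date below are assumptions based on the classic Visual Recognition v3 API and should be checked against the current API reference:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class VisualRecognitionSketch {

    // Builds the v3 classify URL for an image hosted at imageUrl.
    public static String classifyUrl(String apiKey, String imageUrl, String version) {
        return "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify"
                + "?api_key=" + URLEncoder.encode(apiKey, StandardCharsets.UTF_8)
                + "&url=" + URLEncoder.encode(imageUrl, StandardCharsets.UTF_8)
                + "&version=" + version;
    }

    public static void main(String[] args) {
        // A GET on this URL (e.g. with java.net.HttpURLConnection) returns
        // JSON classes with confidence scores for the image.
        System.out.println(classifyUrl("my-api-key", "https://example.com/cat.jpg", "2016-05-20"));
    }
}
```

As for Bluemix: it is IBM's cloud platform, and the Visual Recognition service instance (with its API key) is provisioned there, so in that sense yes, you need it.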

How to use QA Service of IBM watson with REST API

Submitted by 南楼画角 on 2019-12-07 13:48:19
Question: I have just started learning IBM Watson services. I need to use the Question and Answer API of Bluemix from Java via the REST API, but I couldn't find any service named Question and Answer. Can anybody tell me whether the name has changed, or where I can find the documentation for this service? I have tried the existing answers on SO, but the links in those answers no longer work (removed). Regards Answer 1: The QA service has been discontinued since the end of last year. Instead what has happened

For Watson's Speech-To-Text Unity SDK, how can you specify keywords?

Submitted by 眉间皱痕 on 2019-12-07 12:04:03
Question: I am trying to specify keywords in Watson's Speech-To-Text Unity SDK, but I'm unsure how to do this. The details page doesn't show an example (see here: https://www.ibm.com/watson/developercloud/doc/speech-to-text/output.shtml), and other forum posts are written for Java applications (see here: How to specify phonetic keywords for IBM Watson speech2text service?). I've tried hard-coding these values in the RecognizeRequest class created in the "Recognize" function like so, but without
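In the raw Speech to Text HTTP interface, keyword spotting is controlled by the `keywords` (comma-separated list) and `keywords_threshold` query parameters on `/v1/recognize`; whatever the Unity SDK exposes would need to map onto these. A hedged Java sketch of the resulting request URL — the SDK-side plumbing remains the open question above:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class SttKeywords {

    // Builds a /v1/recognize URL with keyword spotting enabled.
    public static String recognizeUrl(List<String> keywords, double threshold) {
        String joined = String.join(",", keywords);
        return "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
                + "?keywords=" + URLEncoder.encode(joined, StandardCharsets.UTF_8)
                + "&keywords_threshold=" + threshold;
    }

    public static void main(String[] args) {
        System.out.println(recognizeUrl(List.of("colorado", "tornado"), 0.5));
    }
}
```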

How to instruct IBM Watson Discovery about the format of my documents?

Submitted by 我的梦境 on 2019-12-07 03:23:26
I am trying to use the Watson Discovery service to build a virtual customer-support agent. We have many documents with lots of Q and A in various formats. In the simplest case, we just have a doc with an array of: Q: ... A: ... Q: ... A: ... etc. When we upload these PDF files and then query the collection, it returns the full document that contains the relevant answer. Is there a way to instruct the Discovery service to return only the relevant question-and-answer pair instead of the full document? To have Discovery return the individual relevant QA pairs, they should be split up and passed
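The pre-processing step described above — split each document into individual QA pairs before uploading, so Discovery indexes and returns each pair as its own document — can be sketched in Java. The regex assumes the simple "Q: ... A: ..." layout from the question:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QaSplitter {

    // Splits one flat "Q: ... A: ... Q: ... A: ..." document into QA pairs.
    public static List<String> split(String doc) {
        List<String> pairs = new ArrayList<>();
        // Each pair runs from a "Q:" through its "A:" answer, up to the next "Q:" or end of text.
        Matcher m = Pattern.compile("Q:.*?A:.*?(?=Q:|$)", Pattern.DOTALL).matcher(doc);
        while (m.find()) {
            pairs.add(m.group().trim());
        }
        return pairs;
    }

    public static void main(String[] args) {
        String doc = "Q: How do I reset? A: Hold the button. Q: Where is it? A: On the back.";
        for (String pair : split(doc)) {
            System.out.println(pair);  // upload each pair as a separate document
        }
    }
}
```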

How to tie a backend to Watson's Conversation service?

Submitted by 断了今生、忘了曾经 on 2019-12-06 12:58:59
I am using the Conversation service in my application. On the backend I want to use the corpus I have set up, so that I can ask deep technical questions, since my corpus has been populated with technical videos and articles spanning 20+ years. Can you please point me to examples where the Conversation service has been integrated with backend Watson services? There is an example of integrating Retrieve and Rank at http://conversation-enhanced.mybluemix.net/ The code showing this integration is housed at https://github.com/watson-developer-cloud/conversation-enhanced I did find the code where the query is
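The core of that integration is a routing decision: let Conversation run the dialog, and when it detects a "deep question" forward the raw user text to the retrieval backend (Retrieve and Rank over your corpus) instead of answering from the dialog tree. A hypothetical sketch — the intent name and confidence threshold here are made up for illustration, not taken from the linked repo:

```java
public class BackendRouter {

    // Decide whether to answer from the dialog tree or query the corpus.
    public static String route(String topIntent, double confidence, String userText) {
        // Intent name and threshold are illustrative assumptions.
        if ("deep_question".equals(topIntent) && confidence >= 0.7) {
            return "QUERY_CORPUS:" + userText;  // hand off to Retrieve and Rank
        }
        return "DIALOG_REPLY";                  // let Conversation answer
    }

    public static void main(String[] args) {
        System.out.println(route("deep_question", 0.92, "How does X failover work?"));
        System.out.println(route("greeting", 0.98, "hello"));
    }
}
```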

Visual Recognition error 400: Cannot execute learning task no classifier name given

Submitted by 不羁的心 on 2019-12-06 10:37:26
I am using the Visual Recognition curl command to add a classification to an image: curl -u "user":"password" \ -X POST \ -F "images_file=@image0.jpg" \ -F "classifier_ids=classifierlist.json" \ "https://gateway.watsonplatform.net/visual-recognition-beta/api/v2/classifiers?version=2015-12-02" JSON file: { "classifiers": [ { "name": "tomato", "classifier_id": "tomato_1", "created": "2016-03-23T17:43:11+00:00", "owner": "xyz" } ] } (I also tried without the classifiers array and got the same error.) I get this error: {"code":400,"error":"Cannot execute learning task : no classifier name given"} Is
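One likely reading of the 400, hedged — it is based on the v2 beta URL in the question, not on verified server behavior: POST /v2/classifiers is the *training* endpoint, which requires a "name" field plus example images, whereas classifying an image against existing classifiers goes to POST /v2/classify, where classifier_ids should reference the JSON file contents (in curl, @classifierlist.json rather than the literal filename). A small Java sketch of the two endpoints:

```java
public class VrEndpoints {

    private static final String BASE =
            "https://gateway.watsonplatform.net/visual-recognition-beta/api/v2";

    // Classify an image against existing classifiers.
    public static String classifyUrl(String version) {
        return BASE + "/classify?version=" + version;
    }

    // Create (train) a new classifier; this endpoint requires a "name" field.
    public static String trainUrl(String version) {
        return BASE + "/classifiers?version=" + version;
    }

    public static void main(String[] args) {
        System.out.println(classifyUrl("2015-12-02"));
        System.out.println(trainUrl("2015-12-02"));
    }
}
```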

How can I access IBM speech-to-text api with curl?

Submitted by 删除回忆录丶 on 2019-12-06 08:40:13
I cannot access the speech-to-text API on IBM Bluemix with curl. I tried the sessionless-request example from the documentation and it didn't work; I got an invalid userID/password message. Here is the error I got: "{ "code" : 401 , "error" : "Not Authorized" , "description" : "2016-10-08T15:22:37-04:00, Error ERCDPLTFRM-DNLKUPERR occurred when accessing https://158.85.132.94:443/speech-to-text/api/v1/recognize?timestamps=true&word_alternatives_threshold=0.9&continuous=true , Invalid UserId and/or Password. Please confirm that your credentials match the end-point you are trying
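The usual cause of this 401 is using account login credentials instead of the *service credentials* generated for this specific Speech to Text instance in Bluemix; each service instance has its own username/password pair, and the Authorization header is plain HTTP Basic auth over that pair. A minimal Java sketch of the header that curl builds from -u:

```java
import java.util.Base64;

public class SttAuth {

    // HTTP Basic auth header over the instance's service credentials.
    public static String authHeader(String serviceUser, String servicePass) {
        String raw = serviceUser + ":" + servicePass;
        return "Basic " + Base64.getEncoder().encodeToString(raw.getBytes());
    }

    public static void main(String[] args) {
        // curl equivalent: curl -u "$USER:$PASS" ".../v1/recognize?timestamps=true"
        System.out.println(authHeader("service-user", "service-pass"));
    }
}
```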

For Watson's Speech-To-Text Unity SDK, how can you specify keywords?

Submitted by 旧城冷巷雨未停 on 2019-12-05 18:46:09
I am trying to specify keywords in Watson's Speech-To-Text Unity SDK, but I'm unsure how to do this. The details page doesn't show an example (see here: https://www.ibm.com/watson/developercloud/doc/speech-to-text/output.shtml), and other forum posts are written for Java applications (see here: How to specify phonetic keywords for IBM Watson speech2text service?). I've tried hard-coding these values in the RecognizeRequest class created in the "Recognize" function like so, but without success: **EDIT: this function never gets called** public bool Recognize(AudioClip clip, OnRecognize

Slack-App-Watson: Watson loses intent from previous message received

Submitted by 心已入冬 on 2019-12-04 18:52:07
I am writing a simple Slack bot which can look up weather conditions for a given location. In the Watson Conversation chat box, Watson is doing well: me: Weather please Watson (detected #weather_asked): Where do you want to know the weather conditions? me: Paris Watson (detected @location for intent #weather_asked): Finding weather conditions for Paris... But in my node.js app (connected to Slack), it seems that Watson is "not keeping in mind that I am providing a location for the #weather_asked intent": me: Weather please Watson (detected #weather_asked): Where [...]? me: Paris Watson
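A common cause of this behavior: the Conversation /message API is stateless, so the bot must echo the context object from each response back into the next request; if the Slack app sends every message with a fresh context, Watson forgets the pending #weather_asked intent. A minimal sketch of carrying context per Slack user — the "input"/"context" field names mirror the Conversation v1 message payload, while the store itself is illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class ContextStore {

    // Last Conversation context returned for each Slack user.
    private final Map<String, Map<String, Object>> bySlackUser = new HashMap<>();

    // Build the payload for the next /message call, echoing back the saved context.
    public Map<String, Object> nextPayload(String userId, String text) {
        Map<String, Object> payload = new HashMap<>();
        payload.put("input", Map.of("text", text));
        payload.put("context", bySlackUser.getOrDefault(userId, new HashMap<>()));
        return payload;
    }

    // After each /message response, persist the context Watson returned.
    public void remember(String userId, Map<String, Object> responseContext) {
        bySlackUser.put(userId, responseContext);
    }
}
```

With this in place, "Paris" arrives together with the context in which #weather_asked was detected, so Watson can fill the @location slot.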