ibm-watson

com/sun/jna/android-arm/libjnidispatch.so not found in resource path

僤鯓⒐⒋嵵緔 Submitted on 2019-12-23 18:04:30
Question: All of the following is being done in Android Studio. I have successfully compiled and tested the Android Watson Speech to Text demo app. I then created a library project containing the Watson-related APIs and a second app project with a simple UI that references the Watson library project. The UI successfully starts and calls the Watson Speech to Text APIs, so I thought I was set to use the Watson library project for real. I then incorporated the Watson API project into my 'real' project. When I start …
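
A common cause of this error is packaging the plain jna.jar, which does not contain the native libjnidispatch.so for Android ABIs; the JNA project publishes a separate Android AAR that does. A configuration sketch, assuming the real project uses Gradle (the version number is an assumption):

```groovy
dependencies {
    // Replace any plain jna.jar dependency with the Android AAR, which
    // bundles libjnidispatch.so for android-arm and the other ABIs.
    implementation 'net.java.dev.jna:jna:5.13.0@aar'
}
```

If the library project already worked on its own, also check that the real project's packaging options are not stripping .so files from merged dependencies.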

How to instruct IBM Watson Discovery about the format of my documents?

ε祈祈猫儿з Submitted on 2019-12-23 03:29:10
Question: I am trying to use the Watson Discovery service to build a virtual customer-support agent. We have many documents with a large number of questions and answers in various formats. In the simplest case, we just have a document with an array of: Q: … A: … Q: … A: … and so on. When we upload these PDF files and then try to query them, Discovery returns the full document that included the relevant answer. Is there a way to instruct the Discovery service so that it will only return the relevant question-and-answer pair instead of the full document?
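
Discovery can return short, relevant excerpts instead of whole documents through the passages feature of its query API. A minimal stdlib sketch of building such a query; the environment ID, collection ID, credentials, and version date are placeholders:

```python
import json
import urllib.parse
import urllib.request

BASE = "https://gateway.watsonplatform.net/discovery/api"

def build_query_url(environment_id, collection_id, natural_language_query,
                    version="2017-11-07"):
    """Build a Discovery query URL that asks for passages rather than
    full documents (passages=true, with a capped passage count)."""
    params = urllib.parse.urlencode({
        "version": version,
        "natural_language_query": natural_language_query,
        "passages": "true",   # return matching excerpts, not whole docs
        "passages.count": 3,  # at most three passages
    })
    return (f"{BASE}/v1/environments/{environment_id}"
            f"/collections/{collection_id}/query?{params}")

# url = build_query_url("env-id", "coll-id", "how do I reset my password")
# req = urllib.request.Request(url)  # add a basic-auth header with real credentials
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["passages"])
```

For Q/A content specifically, splitting each pair into its own document before upload tends to give cleaner results than relying on passage extraction alone.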

Visual Recognition error 400: Cannot execute learning task no classifier name given

為{幸葍}努か Submitted on 2019-12-22 16:59:28
Question: I am using a Visual Recognition curl command to add a classification to an image:

    curl -u "user":"password" \
      -X POST \
      -F "images_file=@image0.jpg" \
      -F "classifier_ids=classifierlist.json" \
      "https://gateway.watsonplatform.net/visual-recognition-beta/api/v2/classifiers?version=2015-12-02"

The JSON file:

    {
      "classifiers": [
        {
          "name": "tomato",
          "classifier_id": "tomato_1",
          "created": "2016-03-23T17:43:11+00:00",
          "owner": "xyz"
        }
      ]
    }

(I also tried it without the classifiers array and got the same error.) and …
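
The 400 usually means the create-classifier call never received a name field: classifier_ids belongs to the classify endpoint, while training a new classifier on the /v2/classifiers endpoint needs a name plus zipped example images. A sketch of the required form fields; the exact field names are taken from the beta-era API and should be treated as assumptions:

```python
def build_classifier_form(name, positive_zip_path, negative_zip_path):
    """Assemble the multipart form fields the create-classifier call
    expects: a classifier name plus zip archives of example images.
    Note: classifier_ids is for classify requests, not for training."""
    if not name:
        # This is the condition behind "no classifier name given".
        raise ValueError("Cannot execute learning task: no classifier name given")
    return {
        "name": name,                            # the field missing in the question
        "positive_examples": positive_zip_path,  # zip of images of the class
        "negative_examples": negative_zip_path,  # zip of counter-examples
    }

# fields = build_classifier_form("tomato", "tomato_pos.zip", "not_tomato.zip")
# POST these as multipart form data (each zip as a file upload) to
# .../v2/classifiers?version=2015-12-02 with basic-auth credentials.
```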

Trouble passing a string variable returned from a Python function to be used globally anywhere in a Python script or program - EDITED for clarity

僤鯓⒐⒋嵵緔 Submitted on 2019-12-22 01:07:34
Question: I am editing my question to reflect the issue I am having in my application. I am trying to take streamed audio and convert it to text using Google speech to text, then pass that text as input to a conversation bot on Watson; Watson then returns its answer. The latter half works great. The issue I am having is that I can't get the script to pass the text from the recorded speech to the Watson service I created. I don't get an error; I just get nothing. The mic is working (I tested it …
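
Rather than relying on a global variable, the usual pattern is to return the recognized text from the transcription function and pass it directly into the Watson call. A minimal sketch of that data flow; transcribe and ask_watson are hypothetical stand-ins for the Google speech-to-text and Watson conversation calls:

```python
def transcribe(audio_chunk):
    """Placeholder for the Google speech-to-text call; returns the
    recognized text instead of storing it in a global."""
    return f"recognized: {audio_chunk}"

def ask_watson(text, context=None):
    """Placeholder for the Watson conversation request; echoes its input
    so the data flow is visible."""
    return {"input": text, "context": context or {}}

def pipeline(audio_chunk):
    # The return value of one step is the argument of the next --
    # no globals involved, so nothing silently stays empty.
    text = transcribe(audio_chunk)
    return ask_watson(text)
```

If the real transcription callback runs asynchronously, the same principle applies: invoke the Watson call from inside the callback with the recognized text as an argument, instead of reading a module-level variable that the callback may not have set yet.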

Can't access IBM Watson API locally due to CORS on a Rails/AJAX App

ε祈祈猫儿з Submitted on 2019-12-20 05:21:30
Question: There don't seem to be many answers (but lots of questions) out there on how to handle this, so I'm going to add my name to the chorus and pray for an answer that doesn't involve Node. My error via the Chrome console:

    1. POST https://gateway.watsonplatform.net/visual-recognition-beta/api
    2. XMLHttpRequest cannot load https://gateway.watsonplatform.net/visual-recognition-beta/api. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is …
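
CORS only applies to the browser-to-server leg, so the standard answer is to route the AJAX call through your own backend (in a Rails app, a controller action that calls Watson server-side). For illustration, here is a minimal proxy sketch using only the Python standard library; the port, origin, and gateway path are placeholders, and authentication is elided:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

GATEWAY = "https://gateway.watsonplatform.net/visual-recognition-beta/api"

def cors_headers(origin="http://localhost:3000"):
    """Headers the browser must see before it will accept the response."""
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type, Authorization",
    }

class WatsonProxy(BaseHTTPRequestHandler):
    """Forward browser POSTs to the Watson gateway. The server-to-server
    leg is not subject to CORS, so only this proxy needs the headers."""
    def do_OPTIONS(self):
        self.send_response(204)
        for k, v in cors_headers().items():
            self.send_header(k, v)
        self.end_headers()

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(GATEWAY, data=body)  # add auth header here
        with urllib.request.urlopen(req) as upstream:
            self.send_response(upstream.status)
            for k, v in cors_headers().items():
                self.send_header(k, v)
            self.end_headers()
            self.wfile.write(upstream.read())

# HTTPServer(("localhost", 8081), WatsonProxy).serve_forever()
```

The same three response headers, set from a Rails controller action that performs the Watson request, achieve the identical effect without any extra process.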

Watson Responds with one API code

蓝咒 Submitted on 2019-12-20 05:16:07
Question: I know that if I want to send anything to Watson in the conversation, I use:

    var latestResponse = Api.getResponsePayload();
    var context = latestResponse.context;
    Api.sendRequest("Hi Watson!", context);

This is the result of my code. I want to know how to get Watson to send something in the conversation. I saw some examples, tried them, and they did not work. Can someone help? I don't know if I'm doing it right, but my example is:

    // var responseText = null;
    // responseText = {};
    var latestResponse = Api …
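
Behind helpers like Api.sendRequest, each turn of a conversation is a POST to the workspace's message endpoint carrying the user text and the context returned by the previous response. A sketch of the equivalent payload; the workspace ID and version date below are placeholders:

```python
def build_message_payload(text, context=None):
    """Body for POST /v1/workspaces/{workspace_id}/message: the text to
    send plus the context object carried over from the last response.
    Forgetting to pass the context back is why conversations 'reset'."""
    return {
        "input": {"text": text},
        "context": context or {},
    }

# payload = build_message_payload("Hi Watson!", latest_response_context)
# POST it as JSON to (IDs are placeholders):
#   https://gateway.watsonplatform.net/conversation/api/v1/workspaces/<id>/message?version=2017-05-26
```

To have Watson "speak first", send a message with empty input text; the service then replies with the dialog's welcome node, and that response's context seeds the next turn.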

IBM Watson TextToSpeech - cannot read property .pipe of undefined

那年仲夏 Submitted on 2019-12-20 04:34:58
Question: I have the following code, straight from the documentation:

    var TextToSpeechV1 = require('watson-developer-cloud/text-to-speech/v1');
    var fs = require('fs');

    var textToSpeech = new TextToSpeechV1({
      iam_apikey: '---myapikey---',
      url: 'https://stream.watsonplatform.net/text-to-speech/api/'
    });

    var synthesizeParams = {
      text: 'Hello world, you dummy ass',
      accept: 'audio/wav',
      voice: 'en-US_AllisonVoice'
    };

    // Pipe the synthesized text to a file.
    textToSpeech.synthesize(synthesizeParams).on( …
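
The .pipe-of-undefined symptom typically means the SDK version in use returns something other than a stream from synthesize (newer releases changed this), so checking the installed version against the documentation snippet is the first step. As a cross-check, the same synthesis is a single REST call; a Python stdlib sketch, where the API key is a placeholder and the basic-auth scheme for IAM keys is an assumption:

```python
import base64
import json
import urllib.request

URL = "https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize"

def build_synthesize_request(text, apikey, voice="en-US_AllisonVoice"):
    """POST /v1/synthesize with the text as a JSON body; the Accept
    header picks the audio format and the voice rides the query string."""
    return urllib.request.Request(
        f"{URL}?voice={voice}",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "audio/wav",
            # IAM API keys authenticate as the literal user "apikey"
            "Authorization": "Basic "
            + base64.b64encode(f"apikey:{apikey}".encode()).decode(),
        },
    )

# req = build_synthesize_request("Hello world", "---myapikey---")
# with urllib.request.urlopen(req) as resp, open("hello.wav", "wb") as out:
#     out.write(resp.read())
```

If the REST call succeeds with the same key and URL, the problem is isolated to the Node SDK version rather than the credentials.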

Intermittent javax.net.ssl failure bad_record_mac

谁说我不能喝 Submitted on 2019-12-20 04:13:39
Question: I have a Java Spring web app running on Tomcat behind an Apache HTTPS proxy pass, which fails intermittently when it tries to access a secure IBM Watson service. Apache is secured with a Let's Encrypt cert and redirects to Tomcat port 8080. Environment: Java jdk1.7.0_80, Solaris 10, Tomcat 8.0.33, Apache 2.4.18. I turned on javax.net debugging and I can see it gets through ServerHelloDone. Here is the rest of the log up to the exception:

    ServerHelloDone
    [read] MD5 and SHA1 hashes: len = 4
    0000: 0E 00 00 …
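
One avenue worth testing, not a confirmed fix: on JDK 7 the outbound HTTPS client does not enable TLSv1.2 by default, and intermittent bad_record_mac errors are sometimes a symptom of a protocol or cipher-suite mismatch that only occurs under certain negotiated combinations. Forcing a single protocol narrows the search space; this config fragment affects HttpsURLConnection-based clients only:

```shell
# JVM flag sketch: pin the outbound TLS version for HttpsURLConnection
# so protocol-negotiation variation can be ruled out as the cause.
JAVA_OPTS="$JAVA_OPTS -Dhttps.protocols=TLSv1.2"
```

If the Watson call goes through a different HTTP client library, set the equivalent protocol option on that client's SSL context instead.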