cloudant-sdp

XXXX does not exist in the discovered schema. Document has not been imported

戏子无情 submitted on 2020-01-17 05:26:28
Question: When using the SDP to extract data from Cloudant and populate dashDB, I occasionally see error messages in the dashDB "XXXX_OVERFLOW" table that look like this: [XXXX does not exist in the discovered schema. Document has not been imported.] What does this error mean? How can I fix it?

Answer 1: This error is similar to: No matched schema for {"_id":"...","doc":{...}, so the same answer applies here. There are two main phases to the SDP process: schema analysis, then data import. In the schema analysis phase, the SDP analyses a sample of documents in Cloudant and uses the document structures of the sample to infer the target schema in dashDB.
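Since rejected documents land in the overflow table, the quickest way to see which documents failed and why is to query that table directly. Below is a minimal sketch, assuming a target table named MYDOCS (so the overflow table is MYDOCS_OVERFLOW) and the ibm_db Python driver; the hostname, credentials, and table name are placeholders, not from the original post.

```python
# Minimal sketch: list rejected documents in an SDP overflow table.
# Assumptions (not from the original post): the target table is MYDOCS,
# so its overflow table is MYDOCS_OVERFLOW; the ibm_db driver reaches
# dashDB. Adjust the DSN for your own instance.
import ibm_db

dsn = (
    "DATABASE=BLUDB;"
    "HOSTNAME=your-dashdb-host.example.com;"  # hypothetical host
    "PORT=50000;PROTOCOL=TCPIP;"
    "UID=your_user;PWD=your_password;"
)
conn = ibm_db.connect(dsn, "", "")

# Each overflow row records a document that could not be imported and why.
stmt = ibm_db.exec_immediate(conn, 'SELECT * FROM "MYDOCS_OVERFLOW"')
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row)
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```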

How can I tell if my SDP process is still running the 'initial' load?

☆樱花仙子☆ submitted on 2020-01-06 13:54:30
Question: You cannot run any SQL statements on a dashDB database that result in locks conflicting with the schema discovery process (SDP) during the initial load. See here for more information: SQLCODE=-911 : "warehouser_error_message": "File <<filename>>.csv.zip could not be loaded due to an exception in dashDB". How can I verify whether the SDP is still running the initial load?

Answer 1: Log in to the Cloudant dashboard and select the _warehouser database. Inside that database, select the document that …
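If you would rather check the _warehouser database programmatically than through the dashboard, Cloudant's standard HTTP API can list its documents. This is only a sketch, not part of the original answer: the account name and credentials are placeholders, and the truncated answer above does not say which field of the document reports the load state.

```python
# Minimal sketch: inspect the _warehouser database over Cloudant's
# HTTP API instead of the dashboard. The account and credentials are
# placeholders; _all_docs?include_docs=true is standard CouchDB/Cloudant.
import requests

account = "your-account"            # hypothetical Cloudant account name
auth = ("your_user", "your_password")
url = f"https://{account}.cloudant.com/_warehouser/_all_docs"

resp = requests.get(url, params={"include_docs": "true"}, auth=auth)
resp.raise_for_status()

# Each row's doc describes a warehouse; inspect it for the load state.
for row in resp.json()["rows"]:
    print(row["id"], row["doc"])
```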

SQLCODE=-911 : "warehouser_error_message": "File <<filename>>.csv.zip could not be loaded due to an exception in dashDB"

蹲街弑〆低调 submitted on 2019-12-25 05:35:31
Question: The Cloudant Schema Discovery Process (SDP) is reporting the following error message while loading my dashDB database: "warehouser_error_message": "File xxxxxx/xxxxxx_nnnn_nnn_n_n.csv.zip could not be loaded due to an exception in dashDB. Reason: <DB2 SQL Error: SQLCODE=-911, SQLSTATE=40001, SQLERRMC=68, DRIVER=4.18.60>". How can I fix this?

Answer 1: The SDP locks the database during the initial load. The -911 error indicates there is a lock contention issue. Ensure you aren't performing any SQL statements that take conflicting locks while the initial load is running.
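If a statement must run against the warehouse while a load may be in progress, one defensive pattern is to back off and retry when DB2 reports -911. This is only a sketch under assumptions not in the original answer: the ibm_db driver, a placeholder DSN and table name, and simple fixed-interval retries.

```python
# Minimal sketch: retry a statement that fails with lock contention
# (SQLCODE -911) instead of running it unguarded during the initial
# load. The DSN and table name are placeholders.
import time
import ibm_db

def exec_with_retry(conn, sql, attempts=5, wait_seconds=30):
    """Run sql, backing off and retrying when DB2 reports -911."""
    for attempt in range(attempts):
        try:
            return ibm_db.exec_immediate(conn, sql)
        except Exception:
            # ibm_db surfaces the SQLCODE in the statement error string.
            if "-911" in ibm_db.stmt_errormsg() and attempt < attempts - 1:
                time.sleep(wait_seconds)  # wait for locks to be released
                continue
            raise

conn = ibm_db.connect("DATABASE=BLUDB;HOSTNAME=host;PORT=50000;"
                      "PROTOCOL=TCPIP;UID=user;PWD=password;", "", "")
exec_with_retry(conn, 'SELECT COUNT(*) FROM "MYDOCS"')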

The value type for json field XXXX was presented as YYYY but the discovered data type of the table's column was ZZZZ

眉间皱痕 submitted on 2019-12-13 07:24:51
Question: When using the SDP to extract data from Cloudant and populate dashDB, I occasionally see error messages in the dashDB "XXXX_OVERFLOW" table that look like this:

EXCEPTION: The value type for json field XXXX was presented as java.lang.String but the discovered data type of the table's column was Boolean. The document could not be imported into the created database. _ID: mydocument-12345

Why am I getting this error? How can I fix it?

Answer 1: The SDP has to decide on a matching SQL data type for each JSON field …
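The conflict behind this error can be reproduced without dashDB at all: two documents present the same field with different JSON types, so whichever type the sample fixed for the column, the other document overflows on import. The sketch below is illustrative only; the field name and documents are made up.

```python
# Minimal sketch: detect fields whose JSON type differs across documents,
# the situation that produces the type-mismatch overflow error. The
# field name "active" and the documents are illustrative only.
docs = [
    {"_id": "doc-1", "active": True},   # Boolean -> column typed Boolean
    {"_id": "doc-2", "active": "yes"},  # String  -> overflows on import
]

field_types = {}
for doc in docs:
    for field, value in doc.items():
        if field == "_id":
            continue
        field_types.setdefault(field, set()).add(type(value).__name__)

for field, types in field_types.items():
    if len(types) > 1:
        print(f"field {field!r} has conflicting types: {sorted(types)}")
```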

No matched schema for {"_id":"...","doc":{...}

雨燕双飞 submitted on 2019-12-02 10:34:31
Question: When using the SDP to extract data from Cloudant and populate dashDB, I occasionally see error messages in the dashDB "XXXX_OVERFLOW" table that look like this: No matched schema for {"_id":"...","doc":{...}. What does this error mean? How can I fix it?

Answer 1 (Chris Snow): There are two main phases to the SDP process: schema analysis, then data import. In the schema analysis phase, the SDP analyses a sample of documents in Cloudant and uses the document structures of the sample to infer the target schema in dashDB. The above error is encountered when the SDP tries to import a document with a schema that was not discovered during the analysis phase.
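A small sketch may make the failure mode concrete: the schema is inferred only from the sampled documents, so a document whose shape never appeared in the sample has no matching table to land in. All field names below are illustrative.

```python
# Minimal sketch of the "No matched schema" failure mode: the schema is
# inferred from a sample, so a document whose fields were never sampled
# cannot be matched. All names here are illustrative.
discovered_schema = {"name", "age"}   # fields seen during schema analysis

def matches(doc, schema):
    """A document only imports cleanly if all its fields were discovered."""
    fields = set(doc) - {"_id"}
    return fields <= schema

sampled_doc = {"_id": "a", "name": "Ada", "age": 36}
unsampled_doc = {"_id": "b", "nickname": "Lin"}  # shape not in the sample

print(matches(sampled_doc, discovered_schema))    # True  -> imported
print(matches(unsampled_doc, discovered_schema))  # False -> overflow
```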

How to increase the sample size used during schema discovery to 'unlimited'?

99封情书 submitted on 2019-11-29 18:10:30
Question: I have encountered some errors with the SDP where one of the potential fixes is to increase the sample size used during schema discovery to 'unlimited'. For more information on these errors, see:

No matched schema for {"_id":"...","doc":{...}
The value type for json field XXXX was presented as YYYY but the discovered data type of the table's column was ZZZZ
XXXX does not exist in the discovered schema. Document has not been imported

How can I set the sample size? After I have set the sample size, do I need to trigger a rescan?

Answer 1: These are the steps you can follow to change the sample size …
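Since the answer above is cut off before the actual steps, here is only a hypothetical sketch of the general pattern it points at: fetch the warehouse document from the _warehouser database, change its sample-size setting, and save it back through Cloudant's HTTP API. The field name "sample_size" and the value -1 for 'unlimited' are assumptions, not confirmed by the original answer; inspect the real document in your dashboard to find the actual setting.

```python
# Hypothetical sketch: edit the warehouse document in _warehouser and
# save it back. The field name "sample_size" and -1 for 'unlimited' are
# ASSUMPTIONS, not confirmed by the truncated answer above -- check the
# actual document in your dashboard for the real field name and value.
import requests

account = "your-account"             # hypothetical account name
auth = ("your_user", "your_password")
base = f"https://{account}.cloudant.com/_warehouser"

doc_id = "your-warehouse-doc-id"     # hypothetical document _id
doc = requests.get(f"{base}/{doc_id}", auth=auth).json()

doc["sample_size"] = -1              # assumed convention for 'unlimited'
resp = requests.put(f"{base}/{doc_id}", json=doc, auth=auth)
resp.raise_for_status()              # PUT with _rev intact updates the doc
```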