marklogic

MarkLogic: Data does not match the query

Submitted by 旧街凉风 on 2019-12-12 05:36:15
Question: Environment: Node.js client, MarkLogic 8 server. The query from Node.js is: var query = qb.where( qb.directory('/root/dir/'), qb.scope( qb.property('sources'), qb.value('brand','MyBrand') ) ); The translated query is: { "whereClause": { "query": { "queries": [ { "directory-query": { "uri": [ "/root/dir/" ] } }, { "container-query": { "json-property": "sources", "value-query": { "json-property": "brand", "text": [ "MyBrand" ] } } } ] } }, "queryType": "structured", "queryFormat": "json" } The …
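To check on the server side whether any documents actually satisfy that combination of directory and container constraints, an equivalent cts query can be run in Query Console. A minimal sketch, assuming the same directory and property names as in the question (cts:json-property-scope-query and cts:json-property-value-query mirror qb.scope and qb.value):

xquery version "1.0-ml";
(: Sketch: cts equivalent of the Node.js structured query above, for
   confirming in Query Console that matching documents exist. :)
cts:search(
  fn:collection(),
  cts:and-query((
    cts:directory-query("/root/dir/"),
    cts:json-property-scope-query(
      "sources",
      cts:json-property-value-query("brand", "MyBrand")
    )
  ))
)[1 to 10]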

Path index is not recognized by cts:path-reference when the XPath contains a namespace prefix

Submitted by 最后都变了- on 2019-12-12 05:15:11
Question: If an element occurs in more than one place in an XML document, it is difficult to sort the data on a particular element (by default, documents are sorted on the first occurrence). I was able to solve this by defining a path index and passing it to cts:path-reference. But if the XPath contains a namespace prefix (a namespace is defined for the XML), then cts:path-reference cannot find the defined path index and I get the error below. SEARCH-BADORDERBY: (err:FOER0000) Indexes are …
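When the path expression uses a namespace prefix, cts:path-reference needs the prefix-to-URI bindings passed as its third argument (a map), and the path range index itself must be declared with the same binding. A minimal sketch, assuming a hypothetical namespace http://example.com/ns bound to the prefix ex and a string path range index on /ex:doc/ex:title:

xquery version "1.0-ml";
(: Sketch: resolve a path range index whose path uses a namespace prefix. :)
let $ref :=
  cts:path-reference(
    "/ex:doc/ex:title",
    ("type=string"),
    map:entry("ex", "http://example.com/ns")  (: prefix bindings used in the path :)
  )
return
  cts:search(
    fn:collection(),
    cts:directory-query("/books/", "infinity"),  (: hypothetical document scope :)
    ("unfiltered", cts:index-order($ref, "ascending"))
  )[1 to 10]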

How can I run MarkLogic on AWS EC2 using my own license key?

Submitted by 戏子无情 on 2019-12-12 05:07:09
Question: I started an EC2 instance running Linux and installed the MarkLogic Server RPM. But when I try to start the MarkLogic service, I see messages like this: Waiting for block device on /dev/sdf Waiting for block device on /dev/sdf Waiting for block device on /dev/sdf There is no /dev/sdf. How can I get past this problem? Answer 1: When setting up your EC2 instance, I suggest you also add an EBS volume. You will be asked for a device name. At present, when using the Red Hat AMI, regardless of the name …

MarkLogic Cluster: configure each forest to contain all documents

Submitted by 喜你入骨 on 2019-12-11 20:08:45
Question: We are working on MarkLogic 9.0.8.2. We are setting up a MarkLogic cluster (3 VMs) on Azure and, as per the failover design, we want to have 3 forests (one per node) in Azure Blob storage. Setup is done, but when I started ingestion I found that documents are distributed across the 3 forests rather than all being stored in each forest. For example, I ingested 30,000 records and each forest contains 10,000 records. What I need is for every forest to hold all 30,000 records. Is there any configuration (at the database or forest level) I need …

How to do selective sorting by attribute, based on the associated value of an element or another attribute, in MarkLogic

Submitted by 人盡茶涼 on 2019-12-11 18:23:38
Question: Requirement: resolve this directly through indexes; I do not want to open the documents (hence a FLWOR expression is not being considered). The search results return a list of purchase-order documents. A purchase order may or may not have a line item (for the purposes of this example). There can be thousands of line items in a purchase order, each with a different type. There will only be one line item per type (no duplicates within a purchase order). For sorting: based on one of the many types selected by the user at the time …

MarkLogic: how to continue looping after catching an exception

Submitted by 梦想与她 on 2019-12-11 17:24:25
Question: I would like to know how to continue looping after an exception is thrown, so that one failing document does not stop the total count. For example: document 010291.xml fails at count 4000 and the loop then continues. xquery version "1.0-ml"; try { let $uris := cts:uris((),(), cts:and-query( cts:collection-query("/TRA") ) )[1 to 200000] for $uri in $uris return if (fn:exists(doc($uri))) then () else $uri, xdmp:elapsed-time() } catch($err) { "received the following exception: ", $err } Answer 1: Put the try-catch statement inside the loop
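A minimal sketch of that pattern, reusing the /TRA collection query from the question; each iteration gets its own try/catch, so one failing document is reported and the loop carries on:

xquery version "1.0-ml";
(: Sketch: try/catch moved inside the loop so a single bad document
   does not abort the whole run. :)
let $uris := cts:uris((), (), cts:collection-query("/TRA"))[1 to 200000]
return (
  for $uri in $uris
  return
    try {
      if (fn:exists(fn:doc($uri))) then () else $uri
    }
    catch ($err) {
      (: report the failing URI and keep going :)
      fn:concat("failed: ", $uri)
    },
  xdmp:elapsed-time()
)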

MarkLogic CPF strategy

Submitted by ╄→гoц情女王★ on 2019-12-11 17:22:31
Question: I have CPF turned on for insert/update operations on documents in a collection. I want to make sure that if, inside my CPF action, I modify another document, that document does not get added to the CPF queue. How can I achieve this? For example, say my CPF queue contains the documents d1, d2, d3, d4, d5, and in my CPF task for d1 I had to update document d4. What CPF state do I need to set so that d4 will not be added to the end of the CPF queue, but if d4 is already present in the queue it should continue with …

Why Do Invalid Characters Get Into a MarkLogic Database?

Submitted by 烂漫一生 on 2019-12-11 16:39:02
Question: I have discovered that it is possible to insert invalid XML characters into a MarkLogic database. This only becomes apparent if I happen to extract, xdmp:quote and then later xdmp:unquote an XML document, whereupon I get a message such as "Invalid character entity '14'". The character got into the database via an XQuery-generated HTML form submission. I think the user pasted in text from Excel, which includes such hidden nasties. Clearly I am going to need to check what is being input in future, …
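One stop-gap, until the form input is validated properly, is to strip anything outside the XML 1.0 character ranges from each submitted string before it is stored. A minimal sketch; local:clean is a hypothetical helper, not part of any existing code:

xquery version "1.0-ml";
(: Sketch: remove characters that are not legal XML 1.0 characters, i.e.
   anything outside #x9, #xA, #xD, #x20-#xD7FF, #xE000-#xFFFD, #x10000-#x10FFFF. :)
declare function local:clean($s as xs:string) as xs:string
{
  fn:replace($s, "[^&#x9;&#xA;&#xD;&#x20;-&#xD7FF;&#xE000;-&#xFFFD;&#x10000;-&#x10FFFF;]", "")
};

local:clean("text pasted into a form field")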

How to create and use geospatial indexes in MarkLogic from SPARQL

Submitted by 為{幸葍}努か on 2019-12-11 16:22:54
Question: I have loaded geospatial data from geonames.org into MarkLogic using RDF import. When using Query Console to explore the data, I see it has been loaded into an XML document and looks like this: <sem:triple> <sem:subject>http://sws.geonames.org/2736540/</sem:subject> <sem:predicate>http://www.w3.org/2003/01/geo/wgs84_pos#lat</sem:predicate> <sem:object datatype="http://www.w3.org/2001/XMLSchema#string">40.41476</sem:object> </sem:triple> <sem:triple> <sem:subject>http://sws …

Geospatial Queries with the Optic API (MarkLogic)

Submitted by 我的未来我决定 on 2019-12-11 16:19:02
Question: This question is brought over from a thread. I am currently trying to do a search with SPARQL and cts using the Optic API. I have attempted the following code. Query used: import module namespace op="http://marklogic.com/optic" at "/MarkLogic/optic.xqy"; let $people := op:from-lexicons( map:entry("people",cts:uri-reference()), "lexicon" )=>op:where( cts:path-geospatial-query("people_data/location", cts:circle(7500, cts:point(89.39101918779838, 51.97989163203445)), "type=long-lat-point") ) let …
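For reference, a minimal sketch of how such a lexicon-based plan might be completed and executed with op:result; the index path, radius, and coordinates are copied from the question, and the sketch assumes the geospatial path range index on people_data/location actually exists (note that cts:point takes latitude first, then longitude):

xquery version "1.0-ml";
import module namespace op = "http://marklogic.com/optic" at "/MarkLogic/optic.xqy";

(: Sketch: the plan from the question, finished with op:result to run it. :)
op:from-lexicons(
  map:entry("people", cts:uri-reference()),
  "lexicon"
)
  => op:where(
       cts:path-geospatial-query(
         "people_data/location",
         cts:circle(7500, cts:point(89.39101918779838, 51.97989163203445)),
         "type=long-lat-point"
       )
     )
  => op:result()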