marklogic

Expanded tree cache full error: need to tune the query

假如想象 submitted on 2019-12-13 00:39:49
Question: Description: $enumValues will hold a sequence of strings that I have to look into. $assetSubGroup will hold an element value from the XML (in a for loop), i.e. a string that I have to match against the sequence above. If there is no match, I have to hold on to a few element values and return them. All three of my attempts below give me expanded tree cache full errors. There are ~470,000 assets (XML documents) I'm querying. How can I tune these queries to avoid expanded tree cache errors? approach 1: let $query-name := "get-asset
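The usual cause of this error is pulling whole documents into memory (for example, a filtered FLWOR loop over all ~470,000 assets) instead of answering as much as possible from indexes. A minimal sketch of that idea, assuming a hypothetical <asset-sub-group/> element with an element range index on it (all names here are illustrative, not taken from the original query):

xquery version "1.0-ml";

(: Sketch only: resolve the membership test from the range index, then run an
   unfiltered, paginated search so only a bounded number of fragments is expanded. :)
declare variable $enumValues as xs:string* := ("GROUP-A", "GROUP-B");

let $all-sub-groups := cts:element-values(xs:QName("asset-sub-group"))
let $unmatched := $all-sub-groups[fn:not(. = $enumValues)]
for $asset in cts:search(fn:doc(),
                cts:element-value-query(xs:QName("asset-sub-group"), $unmatched),
                "unfiltered")[1 to 100]
return $asset//asset-id

Paginating the search result ([1 to 100]) keeps the number of fragments expanded per request bounded, which is usually what makes the expanded tree cache errors go away.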

MarkLogic template driven extraction and triples: dealing with array nodes

家住魔仙堡 submitted on 2019-12-13 00:37:07
Question: I've been studying the examples here: https://docs.marklogic.com/guide/semantics/tde#id_25531 I have a set of documents that are structured with a parent name and an array of child nodes, each with its own name. I want to create a template that generates triples of the form "name1 is-a-parent-of name2". Here's a test I tried, with a sample of the document structure: declareUpdate(); xdmp.documentInsert( '/test/tde.json', { content: { name:'Joe Parent', children: [ { name: 'Bob Child' }, {
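For reference, here is roughly what such a template could look like, written as an XML TDE template and installed with tde:template-insert. It is a sketch only: the /content/children context and the ../../name step assume a particular treatment of the array node (which is exactly what the question is probing), and the IRIs are made up for illustration.

xquery version "1.0-ml";
import module namespace tde = "http://marklogic.com/xdmp/tde"
  at "/MarkLogic/tde.xqy";

(: Sketch: one triple per entry in the children array.
   Assumes /content/children selects each child object and ../../name
   climbs back to the parent's name; adjust if the array behaves differently. :)
tde:template-insert(
  "/templates/parent-child.xml",
  <template xmlns="http://marklogic.com/xdmp/tde">
    <context>/content/children</context>
    <triples>
      <triple>
        <subject><val>sem:iri(fn:concat("http://example.org/people/", ../../name))</val></subject>
        <predicate><val>sem:iri("http://example.org/is-a-parent-of")</val></predicate>
        <object><val>xs:string(name)</val></object>
      </triple>
    </triples>
  </template>)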

MarkLogic “search:suggest” finds constraint name

拟墨画扇 submitted on 2019-12-12 23:12:53
Question: So this one is really weird. I have a completely empty database and use the following code: xquery version "1.0-ml"; import module namespace search = "http://marklogic.com/appservices/search" at "/MarkLogic/appservices/search/search.xqy"; search:suggest("qwe" , <options xmlns="http://marklogic.com/appservices/search"> <constraint name="qweqwe"> <word type="xs:string" collation="http://marklogic.com/collation/"> <element name="test"/> </word> </constraint> <default-suggestion-source ref="qweqwe

MarkLogic: multiple XML files created in the Documents database when importing a CSV. How to get the root document URI path?

Deadly submitted on 2019-12-12 19:15:15
Question: I am new to MarkLogic. I imported my CSV file of 100k records into MarkLogic and, after the import, found that it goes into the Documents database by default. I also found that for each record an XML file is generated in the database, with an incremental number appended to the "documentUri" I specified while importing, for example documentUri_1.xml. I understand multiple XML files are created in order to read the data in a distributed manner. Question: 1. How to get the root document URI
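On the URI question: after a delimited-text import there is no single "root" document; each CSV record becomes its own document, and the _1, _2, ... suffix is simply the per-record URI. A small sketch for listing the generated URIs, assuming the default URI lexicon is enabled and the documentUri_N.xml pattern described above:

xquery version "1.0-ml";

(: Sketch: match the generated record URIs against the URI lexicon. :)
let $uris := cts:uri-match("documentUri_*.xml")
return (fn:count($uris), $uris[1 to 10])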

MarkLogic (Node.js API) - Search documents that match 2 (or more) conditions in an object array attribute

非 Y 不嫁゛ submitted on 2019-12-12 18:17:23
Question: My documents are stored as JSON in MarkLogic like this (I removed attributes that are not relevant to my case): { documentId: '', languages: [{ locale: 'en_UK', content: { translated: 'true', } }, { locale: 'de_DE', content: { translated: 'false', } }, {...}], } edit: It seems that my 'useless' attributes cause some problems. Here is my detailed object. { documentId: '', /* 4 attrs */, languages: [{ locale: 'en_UK', attr: '', content: { /* 14 attrs */, translated: true, /* 2 or 4 attrs */, } }, { locale: 'de
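Under the hood the Node.js queryBuilder produces cts queries, so it can help to sketch the server-side equivalent. Note the caveat at the heart of this question: the scope query below is scoped to the whole languages array, so the two conditions only need to hold somewhere in the array, not necessarily inside the same object. Property names and values are copied from the sample above.

xquery version "1.0-ml";

(: Sketch: documents whose languages array contains locale 'en_UK' and translated 'true'.
   'translated' appears as a string in the first sample; if it is really a boolean,
   pass fn:true() instead of "true". :)
cts:search(fn:collection(),
  cts:json-property-scope-query("languages",
    cts:and-query((
      cts:json-property-value-query("locale", "en_UK"),
      cts:json-property-value-query("translated", "true")
    ))))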

MarkLogic: XQuery to Get Distinct Names from XML Document?

情到浓时终转凉″ submitted on 2019-12-12 17:07:26
Question: I am using the following file: <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="WEB"> <title lang="en">XQuery Kick Start</title> <author>James McGovern</author> <author>Per Bothner</author> <author>Kurt Cagle</author>
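For a single stored document, a sketch of the distinct-author case can be as simple as the following (it assumes the file above is stored at the hypothetical URI /bookstore.xml):

xquery version "1.0-ml";

(: Sketch: distinct author names from the bookstore document. :)
fn:distinct-values(fn:doc("/bookstore.xml")/bookstore/book/author)

Across many documents, cts:element-values over an element range index on author scales better than walking the documents themselves.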

MarkLogic - Extending the Search, return specific object node

拥有回忆 submitted on 2019-12-12 15:12:05
Question: I am very new to MarkLogic and XQuery. I am trying to create a search transform that returns the actual JSON from a specific level of the document. Here is a sample document. I would like to return the whole JSON segment no matter where the search results sit at the lower levels (transcript, topics, banners, etc.). Splashing around in the Query Console... search:search('trump')/search:result/search:snippet//@path successfully returns the path of the object, wrapped in a fn
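One way to turn those @path strings back into nodes is xdmp:unpath; from the matched node you can then walk up to the enclosing JSON object. A sketch of the idea only; which ancestor to keep depends on the document structure, which is only partly shown here:

xquery version "1.0-ml";
import module namespace search = "http://marklogic.com/appservices/search"
  at "/MarkLogic/appservices/search/search.xqy";

(: Sketch: re-evaluate each match path and return its nearest enclosing JSON object. :)
for $path in search:search("trump")/search:result/search:snippet//@path
let $match-node := xdmp:unpath($path)
return $match-node/ancestor::object-node()[1]  (: adjust the step to pick the segment you need :)

Inside a REST search transform, the same logic would run over the search response that is passed to the transform function.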

How to use the MarkLogic database for real-time processing of data

*爱你&永不变心* submitted on 2019-12-12 14:23:45
Question: I am trying to evaluate MarkLogic for real-time processing of data. Earlier I used Kafka and Storm for real-time handling of data and, after processing, inserted it into a database. I am new to MarkLogic, so can anybody tell me whether there is anything in MarkLogic I can use for real-time handling of data, so that after receiving the data I can process it and then insert it into the MarkLogic database? Answer 1: MarkLogic is extremely scalable and has features like triggers, Alerting and CPF for which you
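As a concrete illustration of the "triggers" part of that answer, here is a sketch of a post-commit trigger that runs a processing module whenever a document is created in a hypothetical "incoming" collection. The collection, module database and module path are all illustrative, and the code has to be evaluated against the content database's triggers database.

xquery version "1.0-ml";
import module namespace trgr = "http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";

(: Sketch: fire /triggers/process-incoming.xqy after each document create in "incoming". :)
trgr:create-trigger(
  "process-incoming",
  "Process newly ingested documents",
  trgr:trigger-data-event(
    trgr:collection-scope("incoming"),
    trgr:document-content("create"),
    trgr:post-commit()),
  trgr:trigger-module(xdmp:database("Modules"), "/triggers/", "process-incoming.xqy"),
  fn:true(),
  xdmp:default-permissions())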

MarkLogic mlcp custom transform: split an aggregate document into multiple files

拥有回忆 submitted on 2019-12-12 11:07:50
Question: I have a JSON "aggregate" file that I want to split up and ingest into MarkLogic as multiple documents using mlcp. I want to transform the content during ingestion using JavaScript. My JSON file looks something like this: { "type": "FeatureCollection", "features": [ {blobA}, {blobB}, {blobC} ...... ] } ...and I want to run this file through mlcp so that each document contains one item from the array, i.e. one document will contain {blobA}, another will contain {blobB}, and another will contain
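A sketch of an mlcp custom transform that does this split, shown in the XQuery flavour (the question asks for JavaScript, but the shape is the same): the transform returns one content map per feature and mlcp writes each one as its own document. This assumes an mlcp version that accepts multiple content items back from a transform; the module namespace and URI scheme are illustrative.

xquery version "1.0-ml";
module namespace split = "http://example.org/mlcp/split-features";

(: Sketch: one output document per entry in the top-level features array. :)
declare function split:transform(
  $content as map:map,
  $context as map:map
) as map:map*
{
  let $doc  := map:get($content, "value")
  let $base := map:get($content, "uri")
  for $feature at $i in $doc//array-node("features")/object-node()
  let $out := map:map()
  return (
    map:put($out, "uri", fn:concat($base, "-", $i, ".json")),
    (: quote/unquote round-trip hands mlcp a proper JSON document node :)
    map:put($out, "value", xdmp:unquote(xdmp:quote($feature), (), "format-json")),
    $out
  )
};

The transform would be wired in with the usual -transform_module, -transform_namespace and -transform_function mlcp options.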

MarkLogic query optimization based on profiler output

怎甘沉沦 submitted on 2019-12-12 09:46:11
Question: Hi MarkLoggers out there, I have another question for you! I have a collection of documents containing postal code information: 400,000 docs. The docs are organized one zip code per doc, and each doc contains 400 features, ordered in categories and variables like so: <postcode id="9728" xmlns="http://www.nvsp.nl/p4"> <meta-data> <!-- Generated by DIKW for NetwerkVSP ST!P --> <version>0.3</version> <dateCreated>2014-06-28+02:00</dateCreated> </meta-data> <category name="Oplages"> <variable name=
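Since the question (truncated here) revolves around reading profiler output, it may help to show how such a report is produced. A sketch using prof:eval, with the p4 namespace taken from the snippet above and a query that is purely illustrative:

xquery version "1.0-ml";

(: Sketch: profile an unfiltered, paginated search over the postcode documents. :)
prof:eval('
  declare namespace p4 = "http://www.nvsp.nl/p4";
  cts:search(fn:collection(),
    cts:element-attribute-value-query(xs:QName("p4:postcode"), xs:QName("id"), "9728"),
    "unfiltered")[1 to 10]
')

The report shows, per expression, how often it ran and how long it took, which is the usual starting point for deciding where a range index or an unfiltered search will pay off.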