solr

Index Data into Solr Core using Postman

Submitted by 醉酒当歌 on 2019-12-12 05:29:16
Question: I have created a Solr core and now want to insert data into it using Postman. Can this be done, and how? Is there a REST API in Apache Solr that can be called directly from Postman to insert data into a Solr core? This is the JSON data I want to insert, and this is the exception I am getting: "Exception writing document id 6 to the index; possible analysis error: For input string: """ (HTTP 400).

[{ "id": 6, "AssetId": 123456, "Availability": "Up" },
 { "id"
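For reference, Solr exposes a JSON update endpoint at /solr/<core>/update that accepts an array of documents, and the same request can be sent from Postman as a POST with Content-Type: application/json. Below is a minimal sketch using Python's requests library, assuming a local Solr on port 8983 and a core named mycore (both assumptions); note that a 'For input string: ""' error usually means an empty string was sent for a field the schema defines as numeric.

import requests

# Assumed host and core name; adjust both to your setup.
url = "http://localhost:8983/solr/mycore/update?commit=true"

docs = [
    {"id": "6", "AssetId": 123456, "Availability": "Up"},
]

# requests sets Content-Type: application/json automatically when json= is used.
resp = requests.post(url, json=docs)
print(resp.status_code, resp.text)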

How to use proxyUrl parameter via AJAX Solr

Submitted by 泄露秘密 on 2019-12-12 04:58:57
Question: Problem: I have a Solr 4.5.0 instance that lives on a private server not directly accessible to the public: 12.34.56.789:8983/solr/collection1/select?q=*%3A*&wt=json&indent=true. I am using the JavaScript framework AJAX Solr to present the search results once the JSON is retrieved from that Solr instance. The UI is presented here: www.mywebapp.com/searchresults.html. Please note: www.mywebapp.com can access... 12.34.56.789. I've explored the suggestion for proxying, and it seems as though AJAX
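One way this is usually handled is with a small same-origin proxy on www.mywebapp.com that forwards the query to the private Solr and returns the JSON, with AJAX Solr's proxyUrl pointed at that proxy path instead of the Solr host. A rough sketch using Flask (the framework choice, the /solr-proxy path and the port are assumptions, not anything prescribed by AJAX Solr):

from flask import Flask, Response, request
import requests

app = Flask(__name__)

# Private Solr endpoint from the question; not reachable from the browser directly.
SOLR_SELECT = "http://12.34.56.789:8983/solr/collection1/select"

@app.route("/solr-proxy", methods=["GET", "POST"])
def solr_proxy():
    # AJAX Solr may deliver the Solr query either as a GET query string or as a
    # POST body field (commonly "query") - check how your Manager sends it.
    query = request.form.get("query") or request.query_string.decode("utf-8")
    upstream = requests.get(SOLR_SELECT, params=query, timeout=10)
    return Response(upstream.content,
                    status=upstream.status_code,
                    mimetype="application/json")

if __name__ == "__main__":
    app.run(port=5000)

AJAX Solr's Manager would then be configured with proxyUrl set to '/solr-proxy' (a hypothetical path) rather than the raw Solr URL.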

Solrj Select All

Submitted by 六眼飞鱼酱① on 2019-12-12 04:58:02
Question: I am having issues selecting everything in my 25-document Solr (3.6) index via SolrJ (running on Tomcat).

public static void main(String[] args) throws MalformedURLException, SolrServerException {
    SolrServer solr = new HttpSolrServer("http://localhost:8080/solr");
    ModifiableSolrParams parameters = new ModifiableSolrParams();
    parameters.set("?q", "*:*");
    parameters.set("wt", "json");
    QueryResponse response = solr.query(parameters);
    System.out.println(response);
}

The result I get is:
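Two things stand out in the snippet above: ModifiableSolrParams keys should be plain parameter names (so "q", not "?q"), and the base URL usually needs to include the core name. As a quick sanity check outside SolrJ, the equivalent select-all over HTTP looks like this; a sketch with Python requests, assuming the core is named collection1 (an assumption):

import requests

# Plain "q" parameter, and the URL includes the core name (assumed "collection1").
resp = requests.get(
    "http://localhost:8080/solr/collection1/select",
    params={"q": "*:*", "wt": "json", "rows": 25},
)
print(resp.json()["response"]["numFound"])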

Pysolr: I continually get json.decoder error when attempting a query

Submitted by Deadly on 2019-12-12 04:57:50
Question:

import pysolr
solr = pysolr.Solr('http://replaced_url.abc:8983/solr/#/tran_timings_shard1_replica2/query', timeout=10)
results = solr.search('SubmitterId:clientname')

When pulling flat files I can go to the Solr web interface http://replaced_url.abc:8983/solr/#/tran_timings_shard1_replica2/query and do a simple query of SubmitterId:clientname. I've searched for a couple of hours now and tried to go by examples, but no matter what I put as the solr.search query variable, I consistently get the
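The # in that address is part of the Solr admin UI's client-side routing, so pysolr most likely never reaches the core's JSON API and receives an HTML page instead, which is what the json.decoder error is complaining about. A sketch with the base URL pointing straight at the core (core name copied from the question, everything else unchanged):

import pysolr

# The base URL must point at the core itself, without the admin UI's "#/.../query" route.
solr = pysolr.Solr("http://replaced_url.abc:8983/solr/tran_timings_shard1_replica2",
                   timeout=10)

results = solr.search("SubmitterId:clientname")
print(results.hits)      # number of matching documents
for doc in results:
    print(doc)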

Solr : Importing data with single SP call

Submitted by 随声附和 on 2019-12-12 04:49:31
Question: I have one Solr collection called document. It has fields such as id, name, associated_folder, is_associate and others. The field is_associate depends on associated_folder. I am importing data using the Data Import screen provided in the Solr Dashboard. My problem is that the stored procedure returns data like this:

#  id    name     associated_folder  is_associate
1  DOC1  DOCNAME  1001,1002,1003     true
2  DOC2  DOCNAME  4001,4002,4003     true
3  DOC3  DOCNAME  -1                 false

and in my schema file associated_folder is declared like: <field
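For what it's worth, inside the Data Import Handler a comma-separated column is normally turned into a multivalued field by adding a RegexTransformer to the entity and splitBy="," on the field definition. If pre-processing outside DIH is an option instead, the split can also be done before posting the documents; a rough Python sketch under that assumption (the host is a placeholder, the core name document and the row shape are taken from the question):

import requests

# Rows as returned by the stored procedure (shape taken from the question).
rows = [
    {"id": "DOC1", "name": "DOCNAME", "associated_folder": "1001,1002,1003", "is_associate": True},
    {"id": "DOC3", "name": "DOCNAME", "associated_folder": "-1",             "is_associate": False},
]

docs = []
for row in rows:
    # Split the comma-separated value into a list for the multivalued field; "-1" means none.
    folders = [] if row["associated_folder"] == "-1" else row["associated_folder"].split(",")
    docs.append({**row, "associated_folder": folders})

# /update accepts a JSON array of documents; host is assumed.
requests.post("http://localhost:8983/solr/document/update?commit=true", json=docs)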

Give less weight to term frequency in solr?

Submitted by 此生再无相见时 on 2019-12-12 04:48:02
Question: How do I change the scoring function of Solr to give less weight to term frequency? I am using a PageRank-like document boost as a relevancy factor. My search index currently puts many documents on top that are "spammy" or not well cleaned up and have repetitive words. I know the score is calculated from term frequency (how often a search term appears in the document), inverse document frequency, and other factors (How are documents scored?). I could just increase the boost, but that would de-emphasize

SolrCloud is detecting non-existing nodes

Submitted by 前提是你 on 2019-12-12 04:46:36
Question: I am having an interesting situation with SolrCloud. Basically, I don't know why, but a Solr instance that is not normally part of the cloud is displayed on the SolrCloud page and is also visible under the live_nodes path in ZooKeeper. Here are the details of the situation: I have one Solr instance running as a standalone application on a virtual machine on a remote host. We will call it virtual1 from now on. This is the script for running it: java -server -XX:+UnlockExperimentalVMOptions -XX:
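As a first diagnostic step it can help to list what ZooKeeper itself has registered under live_nodes, independently of the Solr admin UI. A small sketch using the kazoo client (the library choice and the ZooKeeper address are assumptions; if a chroot such as /solr is used, append it to the hosts string):

from kazoo.client import KazooClient

# Assumed ZooKeeper address; use e.g. "zk-host:2181/solr" if the cloud runs under a chroot.
zk = KazooClient(hosts="zk-host:2181")
zk.start()
try:
    for node in zk.get_children("/live_nodes"):
        print(node)      # entries look like 10.0.0.5:8983_solr
finally:
    zk.stop()

If the standalone instance shows up here, it is registering itself with this ZooKeeper ensemble (for example via a zkHost setting in its startup script), which is where to look next.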

Distributed Search Across Multiple Solr Instances

Submitted by 空扰寡人 on 2019-12-12 04:44:55
Question: I have 100 billion rows of data that I have split across multiple Solr instances, each with a separate schema. I need to:

1. Query each instance.
2. Get the results from each instance.
3. Append those results to a final query.
4. Call a final Solr instance for the ultimate result.

How can I do this? Do I need to write a separate requestHandler? e.g.

$ curl http://localhost:8983/solr/select?q=query1.result AND ... AND queryN.result

Answer 1: What you are looking for is called distributed search -> http://wiki
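For reference, distributed search is driven by the shards parameter on an ordinary select request: the node that receives the query fans it out to every listed shard and merges the results, so no custom requestHandler is required, provided the instances agree on the uniqueKey and the fields being queried. A hedged sketch (hosts and core names are placeholders):

import requests

# Hypothetical shard locations; listed without the "http://" prefix, comma separated.
shards = ",".join([
    "host1:8983/solr/core1",
    "host2:8983/solr/core2",
])

resp = requests.get(
    "http://host1:8983/solr/core1/select",
    params={"q": "*:*", "shards": shards, "wt": "json"},
)
print(resp.json()["response"]["numFound"])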

Solr: Use Cursor in pagination to get previous page

Submitted by 依然范特西╮ on 2019-12-12 04:37:16
Question: I am new to Solr and am trying to implement pagination on my search page. Initially I was using the basic pagination method described here: https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results. Moving to the next and previous pages was fine, since I can just change the start index there. But how do I go to the previous page using cursorMark, since Solr only returns the nextCursorMark?

Answer 1: You'll have to keep the previous cursorMark available client side - meaning that
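In practice that means remembering every cursorMark you have already used, for example on a stack, and re-issuing the query with an earlier mark to go back a page. A rough sketch with Python requests (host, core and sort field are assumptions; the sort must include the uniqueKey for cursors to work):

import requests

SELECT = "http://localhost:8983/solr/collection1/select"          # assumed core
PARAMS = {"q": "*:*", "rows": 10, "sort": "id asc", "wt": "json"}  # sort ends on the uniqueKey

cursors = ["*"]   # cursors[i] is the cursorMark that fetches page i; "*" is the first page

def show(page):
    data = requests.get(SELECT, params={**PARAMS, "cursorMark": cursors[page]}).json()
    if page + 1 == len(cursors):
        cursors.append(data["nextCursorMark"])   # remember where the following page starts
    return data["response"]["docs"]

docs = show(0)   # first page
docs = show(1)   # next page
docs = show(0)   # previous page again, re-fetched with the stored cursorMark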

Searching words with spaces for word without spaces in solr

Submitted by 心不动则不痛 on 2019-12-12 04:36:11
Question: How can I search for "ice cube" if I have "icecube" in my index? I have set mm to 2<-1 4<70%. When using shingles in the query analyzer, the query "ice cube" creates three tokens: "ice", "cube" and "icecube". But mm is the limitation here: only "ice" and "cube" are matched, not "icecube", i.e. the pair does not match even though I am using the shingle filter. In the analysis tool, however, the three tokens are created. How can I solve this? Here is the schema configuration: http://pastebin.com/74xaKEyv

Answer 1: I think you