solrj

How do I add one document to solr index using solrj?

Submitted by 随声附和 on 2019-12-05 21:45:37
I can reindex an entire solr core using the following code:

    public void indexSolr() throws SolrServerException, IOException {
        HttpSolrServer solr = new HttpSolrServer(solrIndexPath);
        logger.info("Indexing fcv solr at " + solrIndexPath);
        // reindex to pickup new articles
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("qt", "/" + solrDataImportPath);
        params.set("command", "full-import");
        params.set("clean", "true");
        params.set("commit", "true");
        solr.query(params);
    }

How can I insert just one single document into the index without having to index the whole thing? Are you
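For reference, a minimal sketch of adding a single document instead of running a full import, using the same SolrJ 4.x HttpSolrServer style as above (the field names here are hypothetical and must match your schema):

    public void indexOneDocument(String solrIndexPath) throws SolrServerException, IOException {
        HttpSolrServer solr = new HttpSolrServer(solrIndexPath);
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "article-123");          // hypothetical unique-key value
        doc.addField("title", "A single article");  // hypothetical field
        solr.add(doc);    // sends only this document to the index
        solr.commit();    // makes it visible to searches
    }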

How to connect embedded Solr servers to each other by sharding

Submitted by 一曲冷凌霜 on 2019-12-05 20:26:23
I have been using sharding with multiple standalone Solr servers for clustering. I have also used one embedded Solr server (via the SolrJ Java API) together with several standalone Solr servers, connecting them by sharding, with the embedded Solr server acting as the caller. I used the line below for this purpose:

    SolrQuery query = new SolrQuery();
    query.set("shards", "solr1URL,solr2URL,...");

Now I have many embedded Solr servers running on different computers, and they are unaware of each other. I want them to communicate with each other by sharding. Is it possible? If yes, how? If not, what are the other options that you can
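For context, a sharded query in SolrJ simply lists the HTTP endpoints of the participating cores, as in this sketch (hosts and core names are placeholders). An embedded server has no HTTP listener, so it cannot be addressed through the shards parameter; exposing each node over HTTP (or moving to SolrCloud) is the usual way around that.

    SolrQuery query = new SolrQuery("*:*");
    // each entry must be an HTTP-reachable core: host:port/solr/corename
    query.set("shards", "host1:8983/solr/core1,host2:8983/solr/core1");
    QueryResponse response = server.query(query);  // 'server' is any SolrServer handle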

APDPlat Extended Search: Integrating Solr

Submitted by 做~自己de王妃 on 2019-12-05 13:52:36
APDPlat makes full use of Compass's OSEM and ORM integration features to provide built-in search that is both easy to use and powerful. Its built-in search is designed to be simple and elegant while still offering strong real-time search capability: users only need to annotate the model fields they want searchable (searches can also follow associations between models) to gain search functionality, without writing any code. The platform automatically handles supporting functions such as index maintenance, query parsing, and result highlighting. However, APDPlat's built-in search only works on a single machine; it does not support distribution and is therefore suited only to small and medium scale scenarios. To support large-scale distributed search and real-time analytics, besides ElasticSearch, the evolution of Compass (see "APDPlat拓展搜索之集成ElasticSearch"), APDPlat offers another option: Solr. Solr provides a Java client API, SolrJ, which we can use to interact with the Solr server. First we add the SolrJ dependency to pom.xml:

    <dependency>
        <groupId>org.apache.solr</groupId>
        <artifactId>solr-solrj</artifactId>
        <version>${solrj.version}</version>
    </dependency>

Next we will look at how APDPlat and
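As a rough sketch of what the SolrJ interaction looks like once the dependency is in place (SolrJ 4.x style HttpSolrServer; the URL and field names are placeholders, not APDPlat's actual wiring):

    // create a client for one core and run a simple query
    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrQuery query = new SolrQuery("title:solr");
    QueryResponse response = solr.query(query);
    for (SolrDocument doc : response.getResults()) {
        System.out.println(doc.getFieldValue("id") + " -> " + doc.getFieldValue("title"));
    }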

Faceting using SolrJ and Solr4

Submitted by 坚强是说给别人听的谎言 on 2019-12-05 08:26:18
I've gone through the related questions on this site but haven't found a relevant solution. When querying my Solr4 index using an HTTP request of the form

    &facet=true&facet.field=country

the response contains all the different countries along with counts per country. How can I get this information using SolrJ? I have tried the following, but it only returns total counts across all countries, not per country:

    solrQuery.setFacet(true);
    solrQuery.addFacetField("country");

The following does seem to work, but I do not want to have to explicitly set all the groupings beforehand: solrQuery
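For what it's worth, the per-country counts are normally read from the QueryResponse rather than set on the query; a sketch along those lines (method names are from the SolrJ API, variable names are hypothetical):

    solrQuery.setFacet(true);
    solrQuery.addFacetField("country");
    QueryResponse response = solrServer.query(solrQuery);
    FacetField countryFacet = response.getFacetField("country");
    for (FacetField.Count entry : countryFacet.getValues()) {
        // entry.getName() is the country value, entry.getCount() its document count
        System.out.println(entry.getName() + ": " + entry.getCount());
    }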

Sorting solr search results using multiple fields (solrj)

Submitted by 徘徊边缘 on 2019-12-05 06:12:05
Question: I need to sort the results I get back from Apache Solr based on two factors. There are three entities in our system that are indexed by Solr (groups, projects and datasets), and in the results I want datasets to be displayed first, followed by projects and then groups; but I still want it to respect the score values within each of the types. So, for example, the results would be:

    Dataset with score of 0.325
    Dataset with score of 0.282
    Dataset with score of 0.200
    Project with score of 0.298
    Project with
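A sketch of a multi-field sort in SolrJ, assuming a field (called "type" here, an assumption) that identifies the entity and sorts datasets before projects before groups, with relevance as the secondary sort within each type:

    SolrQuery solrQuery = new SolrQuery("some search terms");
    // primary sort: entity type (assumes "type" is a rank-like field whose values
    // order datasets < projects < groups, maintained at index time)
    solrQuery.addSort("type", SolrQuery.ORDER.asc);
    // secondary sort: relevance score within each type
    solrQuery.addSort("score", SolrQuery.ORDER.desc);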

SolrCloud throws org.apache.solr.common.SolrException: Collection not found: my_solr2

Submitted by 雨燕双飞 on 2019-12-05 03:54:11
First of all, I was using Solr 5.5.4, but my SolrJ was version 4.10.3. With that combination, using SolrCloud throws org.apache.solr.common.SolrException: Collection not found: my_solr2, even though the collection definitely exists. The fix is to change the SolrJ version to 5.5.4 so that the client matches the server. After that, a standalone test worked fine. The strange part was that when packaging with Maven, the war still contained the 4.10.3 version of SolrJ. Running mvn dependency:tree from the root directory of the Maven project (where its pom file lives) shows the jar versions. Carefully checking each dependent project did not reveal any SolrJ version inconsistency; updating all of the child projects under the parent project resolved the issue. Source: CSDN Author: 西皮皮 Link: https://blog.csdn.net/z56zzzz/article/details/77530889
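One common way to keep a single SolrJ version across every module of a multi-module build is to pin it in the parent pom's dependencyManagement, roughly like this (a sketch, not the post's actual pom):

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.apache.solr</groupId>
                <artifactId>solr-solrj</artifactId>
                <version>5.5.4</version>
            </dependency>
        </dependencies>
    </dependencyManagement>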

How can I use the /export request handler via SolrJ?

Submitted by 我的梦境 on 2019-12-04 22:32:15
I'm using Solr 4.10. I have enabled the /export request handler for an index by adding this to the solrconfig.xml (as mentioned here: https://cwiki.apache.org/confluence/display/solr/Exporting+Result+Sets ):

    <requestHandler name="/export" class="solr.SearchHandler">
        <lst name="invariants">
            <str name="rq">{!xport}</str>
            <str name="wt">xsort</str>
            <str name="distrib">false</str>
        </lst>
        <arr name="components">
            <str>query</str>
        </arr>
    </requestHandler>

Now I can use http://localhost:8983/solr/index/select?... as well as http://localhost:8983/solr/index/export?... from a browser or curl. But,
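A sketch of pointing a SolrJ query at that handler. Note that /export streams its output rather than returning a normal javabin response, so this assumes a raw response parser (NoOpResponseParser) and reads the body back as a string:

    HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/index");
    solr.setParser(new NoOpResponseParser("json"));   // read the raw body instead of javabin

    SolrQuery query = new SolrQuery("*:*");
    query.setRequestHandler("/export");               // route to /export instead of /select
    query.set("sort", "id asc");                      // /export requires an explicit sort
    query.set("fl", "id");                            // and an explicit field list
    QueryResponse rsp = solr.query(query);
    String raw = (String) rsp.getResponse().get("response");  // the streamed result as text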

Situations to prefer Apache Lucene over Solr?

Submitted by 妖精的绣舞 on 2019-12-04 15:59:09
Question: There are several advantages to using Solr 1.4 (out-of-the-box faceted search, grouping, replication, HTTP administration vs. Luke, ...). Even if I embed search functionality in my Java application, I could use SolrJ to avoid the HTTP overhead of using Solr. Is SolrJ recommended at all? So, when would you recommend using "pure Lucene"? Does it have better performance or require less RAM? Is it better unit-testable? PS: I am aware of this question. Answer 1: If you have a web application,
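To make "pure Lucene" concrete, this is roughly what the embedded, no-server path looks like (Lucene 4.x API shown purely for illustration; Solr 1.4 itself bundles the older Lucene 2.9 API, where the writer setup differs):

    Directory dir = new RAMDirectory();
    IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_40, new StandardAnalyzer(Version.LUCENE_40));
    IndexWriter writer = new IndexWriter(dir, cfg);

    Document doc = new Document();
    doc.add(new TextField("title", "embedded lucene example", Field.Store.YES));
    writer.addDocument(doc);
    writer.close();

    IndexReader reader = DirectoryReader.open(dir);
    IndexSearcher searcher = new IndexSearcher(reader);
    TopDocs hits = searcher.search(new TermQuery(new Term("title", "lucene")), 10);
    System.out.println("hits: " + hits.totalHits);
    reader.close();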

How to insert a Bean object which has many child Beans in SolrJ

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-12-04 12:59:30
I have a bean class Activity which contains a List of Profile beans and a User bean. When I try to insert this Activity bean into Solr via SolrJ, I get a NullPointerException. The exception is caused by the piece of code below:

    public <T> boolean insert(T bean) {
        try {
            UpdateResponse response = solrClient.addBean(bean);
            System.out.println("insert bean ElapsedTime: " + response.getElapsedTime());
            solrClient.commit();
            return true;
        } catch (IOException | SolrServerException e) {
            e.printStackTrace();
        }
        return false;
    }

Sanjay Madnani: Refer to the URL below for nested document insertion and
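For context, older SolrJ releases do not map nested child beans through addBean automatically; one commonly used workaround is to build SolrInputDocuments by hand and attach the children via addChildDocument, sketched here with hypothetical field names:

    SolrInputDocument parent = new SolrInputDocument();
    parent.addField("id", "activity-1");   // hypothetical unique key
    parent.addField("type", "activity");

    SolrInputDocument child = new SolrInputDocument();
    child.addField("id", "profile-1");
    child.addField("type", "profile");

    parent.addChildDocument(child);   // nest the profile under the activity
    solrClient.add(parent);
    solrClient.commit();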

How to configure Solr for improved indexing speed

Submitted by 元气小坏坏 on 2019-12-04 10:26:01
Question: I have a client program which generates 1-50 million Solr documents and adds them to Solr. I'm using ConcurrentUpdateSolrServer to push the documents from the client, 1000 documents per request. The documents are relatively small (a few small text fields). I want to improve the indexing speed. I've tried increasing the "ramBufferSizeMB" to 1G and the "mergeFactor" to 25 but didn't see any change. I was wondering if there are some other recommended settings for improving Solr indexing
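On the client side, the knobs that usually matter for ConcurrentUpdateSolrServer are the request queue size and the number of sender threads, set in its constructor; a sketch of a more aggressive configuration (the URL and numbers are illustrative, and "documents" stands for your collection of SolrInputDocuments):

    // URL, queue size (buffered update requests), and number of background sender threads
    ConcurrentUpdateSolrServer server =
            new ConcurrentUpdateSolrServer("http://localhost:8983/solr/index", 10000, 4);
    for (SolrInputDocument doc : documents) {
        server.add(doc);          // queued and sent in batches by the background threads
    }
    server.blockUntilFinished();  // wait for the queue to drain
    server.commit();              // or rely on autoCommit on the server side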