Indexing PDF with Solr

Submitted by ∥☆過路亽.° on 2019-12-18 12:54:43

Question


Can anyone point me to a tutorial?

My main experience with Solr is indexing CSV files, but I cannot find any simple instructions or a tutorial telling me what I need to do to index PDFs.

I have seen this: http://wiki.apache.org/solr/ExtractingRequestHandler

But it makes very little sense to me. Do I need to install Tika?

I'm lost, please help.


Answer 1:


With solr-4.9 (the latest version as of this writing), extracting data from rich documents like PDFs, spreadsheets (xls, xlsx family), presentations (ppt, pptx), and documents (doc, txt, etc.) has become fairly simple. The sample code examples in the archive downloaded from here contain a basic Solr template project to get you started quickly.

The necessary configuration changes are as follows:

  1. Change the solrConfig.xml to include the following lines:

    <lib dir="<path_to_extraction_libs>" regex=".*\.jar" />
    <lib dir="<path_to_solr_cell_jar>" regex="solr-cell-\d.*\.jar" />

and create a request handler as follows:

    <requestHandler name="/update/extract" startup="lazy"
                    class="solr.extraction.ExtractingRequestHandler" >
      <lst name="defaults" />
    </requestHandler>

  2. Add the necessary jars from the solrExample to your project.

  3. Define the schema as per your needs and fire a query like:

curl "http://localhost:8983/solr/collection1/update/extract?literal.id=1&literal.filename=testDocToExtractFrom.txt&literal.created_at=2014-07-22+09:50:12.234&commit=true" -F "myfile=@testDocToExtractFrom.txt"

Go to the admin GUI and run a query to see the indexed contents.
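For reference, the literal.* parameters in the curl command above must correspond to fields defined in the schema. A minimal sketch (field names mirror the curl example; the types assume the stock `string`, `date`, and `text_general` field types from the example schema):

```xml
<!-- Fields matched by the literal.id / literal.filename / literal.created_at params -->
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="filename" type="string" indexed="true" stored="true" />
<field name="created_at" type="date" indexed="true" stored="true" />
<!-- Extracted body text from Tika ends up in a catch-all text field -->
<field name="content" type="text_general" indexed="true" stored="true" multiValued="true" />
```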

Let me know if you face any problems.




Answer 2:


You could use the DataImportHandler. The DataImportHandler is defined in solrconfig.xml; the configuration of the import itself goes in a separate XML config file (data-config.xml).

For indexing PDFs you could:

1.) crawl the directory to find all the PDFs, using the FileListEntityProcessor

2.) read the PDFs from a "content/index" XML file, using the XPathEntityProcessor

Once you have the list of relevant PDFs, use the TikaEntityProcessor. Look at this: http://solr.pl/en/2011/04/04/indexing-files-like-doc-pdf-solr-and-tika-integration/ (an example with ppt), and this: Solr : data import handler and solr cell
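Putting the two pieces together, a data-config.xml could nest a TikaEntityProcessor inside a FileListEntityProcessor, in the style of the solr.pl tutorial linked above. A sketch (the baseDir path and the `content` field name are illustrative):

```xml
<dataConfig>
  <dataSource name="bin" type="BinFileDataSource" />
  <document>
    <!-- Outer entity: crawl the directory for PDF files -->
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/path/to/pdfs" fileName=".*\.pdf"
            rootEntity="false" dataSource="null">
      <!-- Inner entity: hand each file to Tika for text extraction -->
      <entity name="pdf" processor="TikaEntityProcessor"
              url="${files.fileAbsolutePath}" format="text" dataSource="bin">
        <field column="text" name="content" />
      </entity>
    </entity>
  </document>
</dataConfig>
```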




Answer 3:


The hardest part of this is getting the metadata from the PDFs; using a tool like Aperture simplifies this. There must be tonnes of these tools.

Aperture is a Java framework for extracting and querying full-text content and metadata from PDF files.

Aperture grabbed the metadata from the PDFs and stored it in XML files.

I parsed the XML files using lxml and posted them to Solr.




Answer 4:


Use the Solr ExtractingRequestHandler. This uses Apache Tika to parse the PDF file. I believe it can pull out the metadata etc., and you can also pass through your own metadata. Extracting Request Handler




Answer 5:


import java.io.File;
import java.io.IOException;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.handler.extraction.ExtractingParams;

public class SolrCellRequestDemo {
    public static void main(String[] args) throws IOException, SolrServerException {
        // Point the client at the target collection
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/my_collection").build();
        // Send the file to the /update/extract handler (Solr Cell)
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("my-file.pdf"), "application/pdf");
        // extractOnly=true returns the extracted content instead of indexing it
        req.setParam(ExtractingParams.EXTRACT_ONLY, "true");
        NamedList<Object> result = client.request(req);
        System.out.println("Result: " + result);
    }
}

This may help.



Source: https://stackoverflow.com/questions/6694327/indexing-pdf-with-solr
