html-content-extraction

What is the state of the art in HTML content extraction?

与世无争的帅哥 submitted on 2019-12-02 14:07:15
There's a lot of scholarly work on HTML content extraction, e.g., Gupta & Kaiser (2005), Extracting Content from Accessible Web Pages, and some signs of interest here, e.g., one, two, and three, but I'm not really clear about how well the practice of the latter reflects the ideas of the former. What is the best practice? Pointers to good (in particular, open-source) implementations and good scholarly surveys of implementations would be the kind of thing I'm looking for. Postscript the first: To be precise, the kind of survey I'm after would be a paper (published, unpublished, whatever)

RegEx for extracting HTML Image properties

不想你离开。 submitted on 2019-12-02 08:56:41
I need a RegEx pattern for extracting all the properties of an image tag. As we all know, there is a lot of malformed HTML out there, so the pattern has to cover those possibilities. I was looking at this solution https://stackoverflow.com/questions/138313/how-to-extract-img-src-title-and-alt-from-html-using-php but it didn't quite get it all. I came up with something like: (alt|title|src|height|width)\s*=\s*["'][\W\w]+?["'] Are there any possibilities I might be missing, or a more efficient, simpler pattern? EDIT: Sorry, I'll be more specific: I'm doing this in .NET, so it's on the server side. I've
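
One weakness of the pattern above is that it assumes every value is quoted, which malformed HTML often isn't. A sketch of a more tolerant approach (shown in Python rather than .NET for brevity; the regexes translate directly, and an actual HTML parser would still be more robust):

```python
import re

# Illustrative sketch only: extract <img> attributes with a regex that
# tolerates double-quoted, single-quoted, and unquoted values.
IMG_TAG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)
ATTR = re.compile(
    r"""(alt|title|src|height|width)\s*=\s*
        (?: "([^"]*)"      # double-quoted value
          | '([^']*)'      # single-quoted value
          | ([^\s>"']+)    # unquoted value (malformed but common)
        )""",
    re.IGNORECASE | re.VERBOSE,
)

def img_attributes(html):
    """Return a list of attribute dicts, one per <img> tag found."""
    results = []
    for tag in IMG_TAG.findall(html):
        attrs = {}
        for m in ATTR.finditer(tag):
            name = m.group(1).lower()
            attrs[name] = m.group(2) or m.group(3) or m.group(4) or ""
        results.append(attrs)
    return results

print(img_attributes("<img src=photo.jpg alt='a dog' width=\"100\">"))
```

Grouping attribute matches per `<img>` tag (rather than scanning the whole document for bare `attr=value` pairs) also avoids picking up identically named attributes from other elements.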

Extracting pure content / text from HTML Pages by excluding navigation and chrome content

眉间皱痕 submitted on 2019-12-01 07:02:16
I am crawling news websites and want to extract the news title, news abstract (first paragraph), etc. I plugged into the WebKit parser code to easily navigate the web page as a tree. To eliminate navigation and other non-news content, I take the text version of the article (minus the HTML tags; WebKit provides an API for this). Then I run a diff algorithm comparing the text of various articles from the same website, which eliminates the text they share. This gives me the content minus the common navigation content, etc. Despite the above approach I am still getting quite a bit of junk in my final text. This
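
The diff idea described above can be sketched with the standard library's `difflib`: lines that two pages from the same site share are likely navigation or chrome, while lines unique to one page are likely article content. Function names and the sample pages here are illustrative, not the asker's actual code:

```python
import difflib

def strip_common_lines(page_a_text, page_b_text):
    """Return the lines of page_a_text that do not also appear in page_b_text."""
    a = page_a_text.splitlines()
    b = page_b_text.splitlines()
    matcher = difflib.SequenceMatcher(None, a, b, autojunk=False)
    content = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("replace", "delete"):   # present in A, absent from B
            content.extend(a[i1:i2])
    return "\n".join(content)

site_chrome = "Home | News | Sports\n"
article_1 = site_chrome + "PM announces new budget\nDetails of the budget..."
article_2 = site_chrome + "Storm hits the coast\nRescue teams deployed..."
print(strip_common_lines(article_1, article_2))
```

One cause of leftover junk with this approach is chrome that varies per page (dates, "related articles" lists), which a pairwise diff cannot eliminate; diffing against several sibling pages and keeping only text absent from all of them helps.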

How do I save a web page, programmatically?

主宰稳场 submitted on 2019-12-01 05:49:34
I would like to save a web page programmatically. I don't mean merely save the HTML; I would also like to automatically store all associated files (images, CSS files, maybe embedded SWF, etc.), and hopefully rewrite the links for local browsing. The intended usage is a personal bookmarks application, in which link content is cached in case the original copy is taken down. Take a look at wget, specifically the -p flag (combined with -k, i.e. --convert-links, to rewrite links for local browsing): -p, --page-requisites This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds,

Using Beautiful Soup Python module to replace tags with plain text

社会主义新天地 submitted on 2019-12-01 01:38:58
I am using Beautiful Soup to extract 'content' from web pages. I know some people have asked this question before, and they were all pointed to Beautiful Soup; that's how I got started with it. I was able to successfully get most of the content, but I am running into some challenges with tags that are part of the content. (I am starting off with a basic strategy: if there are more than x chars in a node, then it is content.) Let's take the HTML code below as an example: <div id="abc"> some long text goes <a href="/"> here </a> and hopefully it will get picked up by the parser as content <
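
For the example above, one way to stop inline tags from splitting a content node is Beautiful Soup's `unwrap()`, which removes a tag but keeps its children, so the link text stays part of the surrounding text. A minimal sketch (assuming the `bs4` package is installed):

```python
from bs4 import BeautifulSoup

html = '<div id="abc"> some long text goes <a href="/"> here </a> and more </div>'
soup = BeautifulSoup(html, "html.parser")

div = soup.find("div", id="abc")
for a in div.find_all("a"):
    a.unwrap()                              # drop the <a> tag, keep its text

text = " ".join(div.get_text().split())     # collapse runs of whitespace
print(text)
```

If only the text is needed and the tree doesn't have to be modified, `div.get_text()` alone already concatenates the text of all descendants, links included.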

Possible to parse an HTML document and build a DOM tree (Java)

孤街醉人 submitted on 2019-12-01 01:31:59
Is it possible, and what tools could be used, to parse an HTML document as a string or from a file and then construct a DOM tree, so that a developer can walk the tree through some API? For example: DomRoot = parse("myhtml.html"); for (tags : DomRoot) { } Note: this is an HTML document, not XHTML. You can use TagSoup: it is a SAX-compliant parser that can clean malformed content such as HTML from generic web pages into well-formed XML. This is <B>bold, <I>bold italic, </b>italic, </i>normal text gets correctly rewritten as: This is <b>bold, <i>bold italic, </i></b><i>italic, </i>normal text.

Create Great Parser - Extract Relevant Text From HTML/Blogs

半城伤御伤魂 submitted on 2019-11-29 20:28:38
I'm trying to create a generalized HTML parser that works well on blog posts. I want to point my parser at a specific entry's URL and get back clean text of the post itself. My basic approach (from Python) has been to use a combination of BeautifulSoup / urllib2, which is okay, but it assumes you know the proper tags for the blog entry. Does anyone have any better ideas? Here are some thoughts maybe someone could expand upon, that I don't have enough knowledge/know-how yet to implement. The Unix program 'lynx' seems to parse blog posts especially well: what parser does it use, or how could
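
One generalized heuristic that avoids knowing the blog's tags in advance (this is not lynx's actual algorithm, just a common readability-style idea): score each block-level chunk by how much plain text it holds versus link text, since menus and blogrolls are link-dense while the post body is text-dense. A standard-library sketch, with all names and thresholds illustrative:

```python
from html.parser import HTMLParser

class BlockScorer(HTMLParser):
    """Split a page into block-level text chunks, tracking link-text length."""
    BLOCK_TAGS = {"p", "div", "td", "article", "section"}

    def __init__(self):
        super().__init__()
        self.blocks = []          # (text_len, link_text_len, text) per block
        self._text = []
        self._link_chars = 0
        self._in_link = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.BLOCK_TAGS:
            self.flush()
        elif tag == "a":
            self._in_link += 1

    def handle_endtag(self, tag):
        if tag in self.BLOCK_TAGS:
            self.flush()
        elif tag == "a" and self._in_link:
            self._in_link -= 1

    def handle_data(self, data):
        self._text.append(data)
        if self._in_link:
            self._link_chars += len(data.strip())

    def flush(self):
        text = " ".join("".join(self._text).split())
        if text:
            self.blocks.append((len(text), self._link_chars, text))
        self._text, self._link_chars = [], 0

def extract_main_text(html, min_chars=40, max_link_ratio=0.3):
    """Keep blocks that are long enough and not dominated by link text."""
    parser = BlockScorer()
    parser.feed(html)
    parser.flush()
    keep = [t for n, l, t in parser.blocks
            if n >= min_chars and l / n <= max_link_ratio]
    return "\n".join(keep)

page = ('<div><a href="/">Home</a> <a href="/archive">Archive</a></div>'
        '<p>This is the actual blog post body, a long run of prose '
        'with hardly any links in it at all.</p>')
print(extract_main_text(page))
```

The thresholds (`min_chars`, `max_link_ratio`) need tuning per corpus; published approaches refine the same idea with per-block token/tag density rather than raw character counts.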