Save all image files from a website

Submitted by 半世苍凉 on 2019-11-27 18:36:57

Question


I'm creating a small app for myself where I run a Ruby script and save all of the images off of my blog.

I can't figure out how to save the image files after I've identified them. Any help would be much appreciated.

require 'rubygems'
require 'nokogiri'
require 'open-uri'

url = '[my blog url]'
doc = Nokogiri::HTML(open(url))

doc.css("img").each do |item|
  #something
end

Answer 1:


URL = '[my blog url]'

require 'nokogiri' # gem install nokogiri
require 'open-uri' # already part of your ruby install

Nokogiri::HTML(open(URL)).xpath("//img/@src").each do |src|
  uri = URI.join( URL, src ).to_s # make absolute uri
  File.open(File.basename(uri),'wb'){ |f| f.write(open(uri).read) }
end
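Note that on Ruby 3.0 and later, open-uri no longer overrides Kernel#open, so the bare open(uri) calls above need to become URI.open(uri), e.g.:

File.open(File.basename(uri), 'wb') { |f| f.write(URI.open(uri).read) }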

Using the code to convert to absolute paths from here: How can I get the absolute URL when extracting links using Nokogiri?
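For reference, URI.join resolves a relative src against the page URL and passes absolute URLs through unchanged (the paths below are hypothetical):

require 'uri'

URI.join('http://myblog.example.com/posts/hello', 'images/cat.png').to_s
# => "http://myblog.example.com/posts/images/cat.png"
URI.join('http://myblog.example.com/posts/hello', '/images/cat.png').to_s
# => "http://myblog.example.com/images/cat.png"
URI.join('http://myblog.example.com/posts/hello', 'http://cdn.example.com/cat.png').to_s
# => "http://cdn.example.com/cat.png"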




Answer 2:


Assuming the src attribute is an absolute URL, maybe something like:

if item['src'] =~ /([^\/]+)$/ # capture everything after the last "/" as the filename
  File.open($1, 'wb') { |f| f.write(open(item['src']).read) }
end
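Equivalently, a small variation using File.basename from the standard library avoids the regex:

filename = File.basename(item['src']) # e.g. "photo.png" from ".../images/photo.png"
File.open(filename, 'wb') { |f| f.write(open(item['src']).read) }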



Answer 3:


Tip: there's a simple way to get images from a page's head/body using the Scrapifier gem. The cool thing is that you can also define which type of image you want returned (jpg, png, gif).

Give it a try: https://github.com/tiagopog/scrapifier
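A rough sketch of how the call might look, based on the gem's README as I recall it (the String#scrapify method, its images option, and the :images result key are assumptions; check the repo for the current API):

require 'scrapifier' # gem install scrapifier

# scrapify is assumed to return a hash of page metadata;
# the :images option filters results by file type.
data = 'http://myblog.example.com'.scrapify(images: [:jpg, :png, :gif])
puts data[:images] # array of image URLs found on the page (assumed key)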

Hope you enjoy.




Answer 4:


system('wget', item['src']) # shell out to wget once per image URL

Edit: This assumes you're on a Unix system with wget :) Edit 2: Updated the code to grab the img src from Nokogiri.
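Put together with the question's loop, a minimal sketch of this approach might look like the following (it assumes wget is on the PATH and the src values are absolute URLs; relative ones would need URI.join as in Answer 1):

require 'nokogiri'
require 'open-uri'

url = '[my blog url]'
doc = Nokogiri::HTML(open(url))

doc.css('img').each do |item|
  src = item['src']
  next if src.nil? || src.empty?
  system('wget', src) # one wget call per image
end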



Source: https://stackoverflow.com/questions/7926675/save-all-image-files-from-a-website
