Save all image files from a website

Phrogz
URL = '[my blog url]'

require 'nokogiri' # gem install nokogiri
require 'open-uri' # already part of your ruby install

# Fetch the page, pull the src of every <img>, and save each image to the
# current directory under its basename. Note URI.open: the bare Kernel#open
# patch was deprecated in Ruby 2.7 and removed in 3.0.
Nokogiri::HTML(URI.open(URL)).xpath('//img/@src').each do |src|
  uri = URI.join(URL, src.to_s).to_s # make absolute uri
  File.open(File.basename(uri), 'wb') { |f| f.write(URI.open(uri).read) }
end

The code that converts relative paths to absolute URIs comes from here: How can I get the absolute URL when extracting links using Nokogiri?
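
For reference, URI.join from the standard library does that resolution per RFC 3986, handling page-relative, root-relative, and already-absolute src values alike. A quick illustration with made-up URLs:

require 'uri'

page = 'http://example.com/blog/post.html'
URI.join(page, 'images/cat.png').to_s  #=> "http://example.com/blog/images/cat.png"
URI.join(page, '/images/cat.png').to_s #=> "http://example.com/images/cat.png"
URI.join(page, 'http://cdn.example.com/cat.png').to_s #=> "http://cdn.example.com/cat.png"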

Assuming the src attribute is an absolute URL, maybe something like:

# item is a Nokogiri <img> node; the regex captures everything after the last slash
if item['src'] =~ /([^\/]+)$/
  File.open($1, 'wb') { |f| f.write(URI.open(item['src']).read) }
end
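
Here item would be one of the page's <img> nodes from a Nokogiri iteration; a minimal runnable sketch, reusing the URL placeholder from the answer above:

require 'nokogiri'
require 'open-uri'

URL = '[my blog url]'

Nokogiri::HTML(URI.open(URL)).css('img').each do |item|
  next unless item['src'] # skip <img> tags without a src
  if item['src'] =~ /([^\/]+)$/
    File.open($1, 'wb') { |f| f.write(URI.open(item['src']).read) }
  end
end

Note that this still breaks on relative src values, which is why the accepted answer resolves them with URI.join first.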

Tip: there's a simple way to get images from a page's head/body using the Scrapifier gem. The nice part is that you can also specify which image types should be returned (jpg, png, gif).

Give it a try: https://github.com/tiagopog/scrapifier
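
Going from memory of the gem's README (worth double-checking against the repo above), scrapify is added to String and takes an images: option to filter the types; a hedged sketch:

require 'scrapifier' # gem install scrapifier

# scrapify fetches the first URL found in the string and returns a hash
# of page metadata; the :images key holds the image URLs found there.
meta = 'Check this out: http://example.com'.scrapify(images: [:jpg, :png])
meta[:images].each { |img| puts img }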

Hope you enjoy.

system('wget', item['src']) # pass src as a separate argument; no shell quoting issues

Edit: this assumes you're on a Unix system with wget available :) Edit 2: updated the code for grabbing the img src from Nokogiri.
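
Combined with the Nokogiri extraction from the accepted answer, a minimal sketch (same URL placeholder) could look like:

require 'nokogiri'
require 'open-uri'

URL = '[my blog url]'

Nokogiri::HTML(URI.open(URL)).xpath('//img/@src').each do |src|
  uri = URI.join(URL, src.to_s).to_s # make the uri absolute
  system('wget', uri)                # let wget handle download and file naming
end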
