httr

How can I screenshot a website using R?

Submitted by 心已入冬 on 2019-11-28 07:01:00
So I'm not 100% sure this is possible, but I found a good solution in Ruby and in Python, so I was wondering whether something similar might work in R. Basically, given a URL, I want to render that URL, take a screenshot of the rendering as a .png, and save the screenshot to a specified folder. I'd like to do all of this on a headless Linux server. Is my best solution here going to be running system calls to a tool like CutyCapt, or does there exist an R-based toolset that will help me solve this problem? You can take screenshots using Selenium: library(RSelenium); rD <- rsDriver(browser = …
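The answer excerpt above is cut off; a minimal RSelenium sketch along those lines, assuming a Selenium-compatible browser such as Firefox is available on the server (the URL and output path are only examples):

    library(RSelenium)

    # Start a Selenium server and browser session (Firefox assumed here).
    rD <- rsDriver(browser = "firefox", port = 4545L)
    remDr <- rD$client

    # Render the page and save the screenshot to a chosen folder.
    remDr$navigate("https://www.r-project.org")
    remDr$screenshot(file = file.path("screenshots", "r-project.png"))

    # Shut down the browser and the Selenium server.
    remDr$close()
    rD$server$stop()

A lighter-weight alternative on a headless box is the webshot package, which drives PhantomJS instead of a full browser.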

SOAP request in R

Submitted by 扶醉桌前 on 2019-11-28 01:51:17
Does anyone know how to formulate the following SOAP request with R?

    POST /API/v201010/AdvertiserService.asmx HTTP/1.1
    Host: advertising.criteo.com
    Content-Type: text/xml; charset=utf-8
    Content-Length: length
    SOAPAction: "https://advertising.criteo.com/API/v201010/clientLogin"

    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <clientLogin xmlns="https://advertising.criteo.com/API/v201010">
          <username>string</username>
          <password>…
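One way to send a raw SOAP envelope from R is httr::POST with the envelope as the request body and the SOAPAction header set explicitly; a minimal sketch, with the credentials as placeholders and the envelope reconstructed from the truncated snippet above:

    library(httr)

    # SOAP envelope as a string; username/password are placeholders.
    envelope <- paste0(
      '<?xml version="1.0" encoding="utf-8"?>',
      '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"',
      ' xmlns:xsd="http://www.w3.org/2001/XMLSchema"',
      ' xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">',
      '<soap:Body>',
      '<clientLogin xmlns="https://advertising.criteo.com/API/v201010">',
      '<username>myuser</username><password>mypass</password>',
      '</clientLogin></soap:Body></soap:Envelope>')

    resp <- POST(
      "https://advertising.criteo.com/API/v201010/AdvertiserService.asmx",
      body = envelope,
      content_type("text/xml; charset=utf-8"),
      add_headers(SOAPAction = "https://advertising.criteo.com/API/v201010/clientLogin"))

    content(resp, as = "text")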

Error connecting to azure blob storage API from R

Submitted by 余生颓废 on 2019-11-27 22:25:00
I am attempting to work with Azure storage via the REST API in R. I'm using the httr package, which wraps curl. Setup (you can use R-fiddle: http://www.r-fiddle.org/#/fiddle?id=vh8uqGmM):

    library(httr)
    requestdate <- format(Sys.time(), "%a, %d %b %Y %H:%M:%S GMT")
    url <- "https://preconstuff.blob.core.windows.net/pings?restype=container&comp=list"
    sak <- "Q8HvUVJLBJK+wkrIEG6LlsfFo19iDjneTwJxX/KXSnUCtTjgyyhYnH/5azeqa1bluGD94EcPcSRyBy2W2A/fHQ=="
    signaturestring <- paste0("GET", paste(rep("\n", 12), collapse = …
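If hand-rolling the SharedKey signature keeps failing, one alternative (not what the question's code does, but it avoids the signing step altogether) is the AzureStor package, which builds the Authorization header for you; a minimal sketch with the account URL, access key, and container as placeholders:

    library(AzureStor)

    # Placeholders: substitute your storage account URL, access key and container name.
    endp <- storage_endpoint("https://preconstuff.blob.core.windows.net",
                             key = "<storage-account-key>")
    cont <- storage_container(endp, "pings")

    # Equivalent of GET .../pings?restype=container&comp=list
    list_storage_files(cont)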

OAuth authentication to Fitbit using httr

Submitted by 邮差的信 on 2019-11-27 16:52:16
I'm trying to connect to the Fitbit API using the httr library. Using the examples provided, I came up with the following code:

    library(httr)
    key <- '<edited>'
    secret <- '<edited>'
    tokenURL <- 'http://api.fitbit.com/oauth/request_token'
    accessTokenURL <- 'http://api.fitbit.com/oauth/access_token'
    authorizeURL <- 'https://www.fitbit.com/oauth/authorize'
    fbr <- oauth_app('fitbitR', key, secret)
    fitbit <- oauth_endpoint(tokenURL, authorizeURL, accessTokenURL)
    token <- oauth1.0_token(fitbit, fbr)
    sig …
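The snippet above stops right at the signing step; a minimal sketch of finishing it with httr, where the token is attached to each request via config() (the activities URL is only an example endpoint, and note that Fitbit has since moved its API to OAuth 2.0, for which httr offers oauth2.0_token()):

    library(httr)

    key    <- '<edited>'   # consumer key from the Fitbit developer site
    secret <- '<edited>'   # consumer secret

    fbr    <- oauth_app('fitbitR', key, secret)
    fitbit <- oauth_endpoint(request   = 'http://api.fitbit.com/oauth/request_token',
                             authorize = 'https://www.fitbit.com/oauth/authorize',
                             access    = 'http://api.fitbit.com/oauth/access_token')
    token  <- oauth1.0_token(fitbit, fbr)

    # Sign an API call with the token; the endpoint below is just an example.
    resp <- GET('https://api.fitbit.com/1/user/-/activities/date/2015-01-01.json',
                config(token = token))
    content(resp)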

RCurl: url.exists returns FALSE when the URL does exist

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-11-27 06:48:59
I'm trying to download information from a specific web page, and although it opens fine in any browser, RCurl says it does not exist:

    url.exists("http://www.transfermarkt.es/liga-mx-apertura/startseite/wettbewerb/MEXA")
    [1] FALSE

The same result comes back when using ".de":

    url.exists("http://www.transfermarkt.de/liga-mx-clausura/startseite/wettbewerb/MEX1")
    [1] FALSE

It also returns an error when using other functions of RCurl:

    > htmlParse("http://www.transfermarkt.es/liga-mx-apertura/startseite/wettbewerb…
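Sites like this often reject requests that lack a browser-like User-Agent header, which makes url.exists() report FALSE even though the page loads in a browser; a sketch of passing one explicitly as a curl option (the header string is only an example):

    library(RCurl)

    url <- "http://www.transfermarkt.es/liga-mx-apertura/startseite/wettbewerb/MEXA"

    # Supply a browser-like User-Agent; many sites return an error otherwise.
    ua <- "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"
    url.exists(url, useragent = ua)

    # The same options work when fetching the page before parsing it.
    html <- getURL(url, useragent = ua, followlocation = TRUE)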

How to log in and then download a file from aspx web pages with R

Submitted by 独自空忆成欢 on 2019-11-27 04:00:56
I'm trying to automate the download of the Panel Study of Income Dynamics files available on this web page using R. Clicking on any of those files takes the user through to a login/authentication page. After authentication, it's easy to download the files with your web browser. Unfortunately, the httr code below does not appear to be maintaining the authentication. I have tried inspecting the headers in Chrome for the Login.aspx page (as described here), but it doesn't appear to maintain the authentication even when I believe I'm passing in all the correct values. I don't care if it's …
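A pattern that usually fixes this with httr is to reuse a single handle so the session cookies set by the login POST carry over to the download request; a minimal sketch only, since the host, paths, and form field names below are placeholders, and a real Login.aspx form also needs the hidden __VIEWSTATE and __EVENTVALIDATION values scraped from the page first:

    library(httr)

    # A single handle keeps cookies across requests (placeholder host).
    h <- handle("https://example-psid-host.edu")

    # Placeholder form fields; ASP.NET logins also require the hidden
    # __VIEWSTATE / __EVENTVALIDATION inputs copied from the login page.
    POST(handle = h, path = "/Login.aspx",
         body = list(Email = "me@example.com", Password = "secret"),
         encode = "form")

    # The same handle now sends the session cookie, so the download succeeds.
    GET(handle = h, path = "/data/somefile.zip",
        write_disk("somefile.zip", overwrite = TRUE))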

Using R to “click” a download file button on a webpage

Submitted by ℡╲_俬逩灬. on 2019-11-27 03:30:38
I am attempting to use the webpage http://volcano.si.edu/search_eruption.cfm to scrape data. There are two drop-down boxes that ask for filters on the data. I do not need filtered data, so I leave those blank and continue on to the next page by clicking "Search Eruptions". What I have noticed, though, is that the resulting table only includes a small number of columns (only 5) compared to the total number of columns (24) it should have. However, all 24 columns will be there if you …
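Since the "Search Eruptions" button just submits an HTML form, one way to avoid the browser entirely is to replicate that POST with httr and read the resulting table with rvest; a sketch only, because the results URL and form field names below are placeholders that would have to be taken from the page source or the browser's network tab:

    library(httr)
    library(rvest)

    # Placeholder URL and fields: inspect the form on search_eruption.cfm
    # to find the real action URL and input names the button submits.
    resp <- POST("http://volcano.si.edu/search_eruption_results.cfm",
                 body = list(volcano_name = "", eruption_category = ""),
                 encode = "form")

    # Parse the first HTML table out of the response.
    page <- read_html(content(resp, as = "text"))
    eruptions <- html_table(html_node(page, "table"))
    head(eruptions)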

Scrape password-protected website in R

Submitted by 假如想象 on 2019-11-26 16:02:07
I'm trying to scrape data from a password-protected website in R. Reading around, it seems that the httr and RCurl packages are the best options for scraping with password authentication (I've also looked into the XML package). The website I'm trying to scrape is below (you need a free account in order to access the full page): http://subscribers.footballguys.com/myfbg/myviewprojections.php?projector=2 Here are my two attempts (replacing "username" with my username and "password" with my password):

    # This returns "Status: 200" without the data from the page:
    library(httr)
    GET("http:/…
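authenticate() sends HTTP basic-auth headers, which a form-based login page like this one ignores; a more typical httr pattern is to POST the login form once and keep the cookies on a handle, sketched below (the login path and field names are placeholders that need to be read from the login form's HTML):

    library(httr)

    # One handle so the session cookie from the login persists.
    h <- handle("http://subscribers.footballguys.com")

    # Placeholder path and field names; check the <form> on the login page.
    POST(handle = h, path = "/amember/login.php",
         body = list(amember_login = "username", amember_pass = "password"),
         encode = "form")

    # Now request the protected page on the same handle.
    page <- GET(handle = h, path = "/myfbg/myviewprojections.php",
                query = list(projector = "2"))
    content(page, as = "text")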