python-requests

Downloading file using requests module creates an empty file

Submitted by 霸气de小男生 on 2021-02-20 00:39:25
Question: After several attempts, and after reading a lot of examples and questions around here, I can't figure out why I'm not able to download a file using the requests module. The file I'm trying to download is only around 10 MB:

    try:
        r = requests.get('http://localhost/sample_test',
                         auth=('theuser', 'thepass'), stream=True)
        with open('/tmp/aaaaaa', 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024):
                f.write(chunk)
    except:
        raise

Empty file:

    [xxx@xxx ~]$ ls -ltra /tmp/aaaaaa
    -rw-rw-r--. 1 xxx xxx 0 Jul 21 12
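A first thing to rule out is whether the request actually succeeded at all; with a bare try/except, a failed response (for example a 401 from the auth) can produce an empty file silently. A minimal sketch, assuming the same local URL and credentials from the question, that fails loudly instead:

    import requests

    url = 'http://localhost/sample_test'  # URL from the question

    r = requests.get(url, auth=('theuser', 'thepass'), stream=True)
    r.raise_for_status()  # surface 401/404 instead of writing an empty file

    with open('/tmp/aaaaaa', 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            if chunk:  # skip keep-alive chunks
                f.write(chunk)

    print('wrote', f.name)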

python authentication with requests library via POST

Submitted by 那年仲夏 on 2021-02-19 08:16:14
Question: I have read several similar topics and tried to follow other examples, but I'm still stuck in the middle of nowhere. I have basic Python programming skills and little knowledge of the HTTP protocol. My two goals are:

- successful authentication to a website via the requests library
- fetching data from the website after the login, while the session is active

This is the code:

    import requests
    targetws = 'https://secure.advfn.com/login/secure'
    s = requests.session()
    payload_data = {'login_username': 'xxx'
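The usual pattern is to POST the form fields to the login endpoint from a requests.Session, then reuse that same session object for later GETs so the login cookies persist. A minimal sketch, reusing the targetws URL and login_username field from the question; the login_password field name and the data URL are placeholders, not the site's real ones:

    import requests

    targetws = 'https://secure.advfn.com/login/secure'  # from the question

    s = requests.Session()
    payload_data = {
        'login_username': 'xxx',
        'login_password': 'yyy',  # hypothetical field name; check the form's HTML
    }

    resp = s.post(targetws, data=payload_data)
    resp.raise_for_status()

    # The session now carries the login cookies, so protected pages
    # can be fetched with the same object.
    data_page = s.get('https://www.advfn.com/some/protected/page')  # placeholder URL
    print(data_page.status_code)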

Single session multiple post/get in python requests

Submitted by 妖精的绣舞 on 2021-02-19 05:43:31
Question: I am trying to write a crawler to automatically download some files using the Python requests module. However, I have hit a problem: I initialize a new requests session, then use the post method to log in to the website, but after that any post/get call fails (simplified code below):

    s = requests.session()
    s.post(url, data=post_data, headers=headers)
    # up to here everything is correct; the next step reports an error
    s.get(url)

or s.post(url), or even a repeat of s.post(url, data=post_data, headers
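One thing worth ruling out is whether the headers copied onto every call (for example a Content-Type or Referer that only made sense for the login POST) are breaking the follow-up requests. A minimal sketch of the session-reuse pattern, with placeholder url, post_data, and headers, that sets shared headers once and checks each response instead of assuming success:

    import requests

    url = 'https://example.com/login'           # placeholder
    post_data = {'user': 'xxx', 'pass': 'yyy'}  # placeholder form fields
    headers = {'User-Agent': 'Mozilla/5.0'}     # placeholder

    s = requests.Session()
    s.headers.update(headers)  # shared headers live on the session itself

    login = s.post(url, data=post_data)
    login.raise_for_status()   # surface HTTP errors instead of failing later

    # Subsequent calls reuse the cookies the login set; pass per-request
    # headers only when a specific call really needs different ones.
    follow_up = s.get('https://example.com/files')  # placeholder
    follow_up.raise_for_status()
    print(len(follow_up.content), 'bytes')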

Too many redirects error using Python requests

Submitted by 狂风中的少年 on 2021-02-19 05:28:38
Question: HTTP requests work fine on my localhost, but running the same requests with the Python requests library on my server returns a "Too Many Redirects" error. When I enter localhost/terminal/jfk in a browser, I get a JSON dictionary as expected. However, when I run the following on my server:

    requests.get('http://splitmyri.de/terminal/jfk')

I receive a "Too Many Redirects" error from the requests module. Any thoughts as to what's causing the
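A useful first diagnostic is to stop requests from following redirects and look at where the server is trying to send you; a redirect loop usually shows up immediately in the Location header. A minimal sketch using the URL from the question:

    import requests

    url = 'http://splitmyri.de/terminal/jfk'  # URL from the question

    # Don't follow redirects automatically; inspect the first hop instead.
    r = requests.get(url, allow_redirects=False)
    print(r.status_code, r.headers.get('Location'))

    # Alternatively, follow them but cap the chain and inspect the history.
    s = requests.Session()
    s.max_redirects = 5
    try:
        r = s.get(url)
    except requests.TooManyRedirects:
        print('redirect loop detected')
    else:
        for hop in r.history:
            print(hop.status_code, hop.url)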

Download content-disposition from http response header (Python 3)

Submitted by 梦想的初衷 on 2021-02-19 05:03:58
Question: I'm looking for a little help here. I've been using requests in Python to gain access to a website. I'm able to access the website and get a response header, but I'm not exactly sure how to download the zip file contained in the Content-Disposition. I'm guessing this isn't something requests can handle, or at least I can't seem to find any info on it. How do I gain access to the file and save it?

    'Content-disposition': 'attachment;filename=MT0376_DealerPrice.zip'

Answer 1: Using urllib instead of requests
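For context, the Content-Disposition header only suggests a filename; the zip bytes are the response body itself, so requests can save the file directly. A minimal sketch, assuming a placeholder download URL (the question's URL is not shown) and the header value from the question:

    import re
    import requests

    url = 'https://example.com/download'  # placeholder

    r = requests.get(url)
    r.raise_for_status()

    # Pull the suggested filename out of the header, falling back to a default.
    cd = r.headers.get('Content-Disposition', '')
    match = re.search(r'filename=([^;]+)', cd)
    filename = match.group(1).strip('" ') if match else 'download.zip'

    with open(filename, 'wb') as f:
        f.write(r.content)  # the response body is the zip file itself

    print('saved', filename)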

requests process hangs

Submitted by Deadly on 2021-02-19 01:15:40
Question: I'm using requests to get a URL, like this:

    while True:
        try:
            rv = requests.get(url, timeout=1)
            doSth(rv)
        except socket.timeout as e:
            print e
        except Exception as e:
            print e

After it runs for a while it stops working: no exception, no error, as if it were suspended. I then stop the process with Ctrl+C from the console, and the traceback shows the process is waiting for data:

    .............
    httplib_response = conn.getresponse(buffering=True)  # httplib.py
    response.begin()  # httplib.py
    version, status,
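It may help to know that the timeout in requests is a per-socket-operation limit, not a cap on the whole request, so a server that keeps dribbling bytes can hold the call open far longer than the nominal timeout. A minimal sketch of a tighter variant, with url and doSth as placeholders standing in for the question's own:

    import requests

    url = 'http://example.com/endpoint'  # placeholder

    def doSth(resp):  # placeholder for the question's handler
        print(len(resp.content))

    while True:
        try:
            # Separate connect and read timeouts: connecting must finish
            # within 3.05s, and each socket read within 5s.
            rv = requests.get(url, timeout=(3.05, 5))
            doSth(rv)
        except requests.Timeout as e:
            print('timed out:', e)
        except requests.RequestException as e:
            print('request failed:', e)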

BeautifulSoup, Requests, Dataframe Saving to Excel arrays error

Submitted by 被刻印的时光 ゝ on 2021-02-18 18:59:53
Question: I am a novice at Python and helping out on a school project. Any help is much appreciated, thanks. I get an error when the script reaches the years 2004 and 2003, and it is caused by the result_list list. The error is "ValueError: arrays must all be same length". How can I fix this? The scores are important...

    import requests
    import pandas as pd
    from pandas import ExcelWriter
    from bs4 import BeautifulSoup
    #from openpyxl.writer.excel import ExcelWriter
    import openpyxl
    #from
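For context, pandas raises "ValueError: arrays must all be same length" when a DataFrame is built from a dict of plain lists whose lengths differ; wrapping each list in a Series pads the short ones with NaN instead. A minimal sketch with made-up column data standing in for the question's scraped results:

    import pandas as pd

    # Hypothetical scraped columns of unequal length, as can happen
    # when one year's table has fewer rows than another's.
    result_list = {
        'team':  ['A', 'B', 'C'],
        'score': [10, 7],          # one entry short
    }

    # pd.DataFrame(result_list) would raise
    # "ValueError: arrays must all be same length".
    df = pd.DataFrame({k: pd.Series(v) for k, v in result_list.items()})
    print(df)  # missing cells become NaN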

logging into a twitter using python3 and requests

Submitted by 生来就可爱ヽ(ⅴ<●) on 2021-02-18 18:00:08
Question: I have a project I am working on, and the requirement is to log in to a website with a username and password. I have to do it in Python, and then access a part of the site only available to logged-in users. I have tried a few variations of code to do this, and haven't been able to log in successfully yet. Here is my code, the function that logs in:

    def session2(url):
        #r = requests.get(url)
        #ckies = []
        #print("here are the cookies for twitter:\n")
        #for cky in
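As a starting point, form-based logins generally require fetching the login page first, extracting any hidden token the form embeds, and posting it back with the credentials from the same session. Twitter's real login flow is heavier than this (JavaScript, rotating tokens) and will likely reject a plain form POST, so the following is only a generic sketch of the pattern, with placeholder URLs and hypothetical field names:

    import requests
    from bs4 import BeautifulSoup

    login_page_url = 'https://example.com/login'     # placeholder
    login_post_url = 'https://example.com/sessions'  # placeholder

    s = requests.Session()

    # Step 1: load the login form and pull out its hidden CSRF-style token.
    page = s.get(login_page_url)
    soup = BeautifulSoup(page.text, 'html.parser')
    token_input = soup.find('input', {'name': 'authenticity_token'})  # hypothetical field
    token = token_input['value'] if token_input else ''

    # Step 2: post the credentials plus the token from the same session.
    payload = {
        'username': 'my_user',  # hypothetical field names
        'password': 'my_pass',
        'authenticity_token': token,
    }
    resp = s.post(login_post_url, data=payload)
    resp.raise_for_status()

    # Step 3: the session's cookies now unlock member-only pages.
    members = s.get('https://example.com/members-only')  # placeholder
    print(members.status_code)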