python-requests

Get response 200 instead of <418 I'm a Teapot>, using DDG

為{幸葍}努か submitted on 2021-01-28 13:33:56

Question: I was trying to scrape search results from DDG the other day, but I keep getting response 418. How can I make it return 200, or otherwise get results from it? This is my code:

    import requests
    from bs4 import BeautifulSoup
    import urllib

    while True:
        query = input("Enter Search Text: ")
        a = query.replace(' ', '+')
        url = 'https://duckduckgo.com/?q=random' + a
        headers = {"User-Agent": "Mozilla/5.0 (Linux; Android 6.0.1; SHIELD Tablet K1 Build/MRA58K; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0
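One workaround (a hedged sketch, not an answer recorded in this thread): DuckDuckGo's JavaScript-free endpoint at `html.duckduckgo.com/html/` is commonly reported to answer scripted clients that the main site rejects with 418. The User-Agent value below is an arbitrary browser-like placeholder, and letting `urlencode` build the query string replaces the manual `replace(' ', '+')`.

```python
import urllib.parse

# HTML-only endpoint; friendlier to non-browser clients (an assumption that
# may stop holding if the site tightens its bot detection)
DDG_HTML = "https://html.duckduckgo.com/html/"
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}

def build_search_url(query: str) -> str:
    # urlencode escapes spaces and special characters, so no manual
    # '+' substitution is needed
    return DDG_HTML + "?" + urllib.parse.urlencode({"q": query})

def search(query: str):
    import requests  # third-party: pip install requests
    return requests.get(build_search_url(query), headers=HEADERS, timeout=10)
```

If the endpoint still behaves this way, `search("python requests").status_code` should come back 200, and `resp.text` can be fed to BeautifulSoup as in the original code.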

Pass (optional) parameters to HTTP parameter (Python, requests)

有些话、适合烂在心里 submitted on 2021-01-28 10:16:38

Question: I am currently working on an API wrapper, and I have an issue with passing the parameters from a function into the payload of requests. The parameters can be blockId, senderId, recipientId, limit, offset, orderBy, all joined by "OR". One possible solution would be an if statement for every combination, but I imagine that is a terrible way to do it. (requests and constants are already imported.)

    def transactionsList(*args, **kwargs):
        if blockId is not None:
            payload = {'blockId':
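One way to avoid an if statement per combination (a sketch; the parameter set is taken from the question, while the helper names and endpoint URL are mine): filter the keyword arguments down to the known, non-None ones and hand the resulting dict to requests, which serializes only the keys that are present.

```python
# Allowed parameter names, as listed in the question
ALLOWED = {"blockId", "senderId", "recipientId", "limit", "offset", "orderBy"}

def build_payload(**kwargs):
    # keep only recognized parameters that were actually supplied
    return {k: v for k, v in kwargs.items() if k in ALLOWED and v is not None}

def transactions_list(**kwargs):
    import requests  # third-party: pip install requests
    # hypothetical endpoint; substitute the wrapper's real URL constant
    return requests.get("https://example.com/api/transactions",
                        params=build_payload(**kwargs), timeout=10)
```

`build_payload(blockId=3, limit=None)` yields `{'blockId': 3}`, so omitted and None-valued parameters never reach the query string.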

Python requests library is not working, while cURL is working

北战南征 submitted on 2021-01-28 08:06:43

Question: I need to retrieve a JWT (JSON Web Token) from a Microsoft API using Python (check this API documentation for Microsoft Graph). The following Python code using the requests library does not work, giving HTTP response code 400; however, the equivalent cURL command does work, giving back the expected JSON containing the JWT. Python / requests code:

    tenant = "<MY_FOO_TENANT>"
    token_url = "https://login.microsoftonline.com/{}/oauth2/v2.0/token".format(tenant)
    http_headers = { 'Content-Type':
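A hedged sketch of the usual culprit in this situation: the token endpoint expects an `application/x-www-form-urlencoded` body. Passing the fields as a dict via `data=` (not `json=`, and without hand-writing the Content-Type header) lets requests encode the body the same way cURL's `-d` flags do. All credential values below are placeholders.

```python
def token_url_for(tenant: str) -> str:
    return f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

def request_token(tenant: str, client_id: str, client_secret: str) -> str:
    import requests  # third-party: pip install requests
    form = {
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    }
    # data= form-encodes the dict and sets the Content-Type automatically
    resp = requests.post(token_url_for(tenant), data=form, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]
```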

Content-length header not being set on Flask App Engine response for served blob

…衆ロ難τιáo~ submitted on 2021-01-28 08:04:15

Question: In my Flask-based Google App Engine server, I am trying to return a response with a 'Content-Length' header that will contain the final size of a blob being served to the client. This blob is a large media file, so this header is going to be used to set the maximum value of a progress bar on the UI frontend. The blob lives in Cloud Storage, but I am using the blobstore API from the App Engine packages to retrieve it. The below returns with a 200 status code:

    response.headers['Content-length'
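A minimal sketch with plain Flask (no App Engine specifics): build the response object first, then set the header on it before returning. `blob_headers` and `serve_blob` are hypothetical names; in the real app `blob_size` would come from the blobstore / Cloud Storage metadata.

```python
def blob_headers(blob_size: int) -> dict:
    # Content-Length drives the frontend progress bar's maximum value
    return {"Content-Length": str(blob_size),
            "Content-Type": "application/octet-stream"}

def serve_blob(data, blob_size: int):
    from flask import Response  # third-party: pip install flask
    # Werkzeug derives Content-Length from a concrete body on its own, but
    # when the body is streamed it must be set explicitly, as here
    return Response(data, status=200, headers=blob_headers(blob_size))
```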

Loop pages and save contents in Excel file from website in Python

…衆ロ難τιáo~ submitted on 2021-01-28 06:14:27

Question: I'm trying to loop over pages from this link and extract the interesting part. Please see the contents in the red circle in the image below. Here's what I've tried:

    url = 'http://so.eastmoney.com/Ann/s?keyword=购买物业&pageindex={}'
    for page in range(10):
        r = requests.get(url.format(page))
        soup = BeautifulSoup(r.content, "html.parser")
        print(soup)

XPath for each element (might be helpful for those who don't read Chinese):

    /html/body/div[3]/div/div[2]/div[2]/div[3]/h3/span  -->  【润华物业】
    /html/body/div[3]
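A sketch of the loop-and-save shape (hedged: the `h3 span` selector is a guess standing in for the question's XPaths and must be adjusted to the page's real markup, and `pandas.DataFrame.to_excel` is one common route to an Excel file, requiring openpyxl):

```python
URL_TEMPLATE = "http://so.eastmoney.com/Ann/s?keyword=购买物业&pageindex={}"

def page_urls(n_pages: int):
    # assumes the site's page index starts at 1, unlike range(10) which
    # would request pageindex=0 first
    return [URL_TEMPLATE.format(i) for i in range(1, n_pages + 1)]

def scrape_to_excel(n_pages: int, out_path: str):
    import requests                  # third-party
    from bs4 import BeautifulSoup    # third-party
    import pandas as pd              # third-party; to_excel needs openpyxl
    rows = []
    for url in page_urls(n_pages):
        soup = BeautifulSoup(requests.get(url, timeout=10).content,
                             "html.parser")
        for span in soup.select("h3 span"):  # placeholder selector
            rows.append({"title": span.get_text(strip=True)})
    pd.DataFrame(rows).to_excel(out_path, index=False)
```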

Python-automated bulk request for Elasticsearch not working “must be terminated by a newline”

孤者浪人 submitted on 2021-01-28 05:16:28

Question: I am trying to automate a bulk request for Elasticsearch via Python. Therefore, I am preparing the data for the request body as follows (saved in a list as separate rows):

    data = [{"index": {"_id": ID}}, {"tag": {"input": [tag], "weight": count}}]

Then I use requests to make the API call:

    r = requests.put(endpoint, json=data, auth=auth)

This gives me the error:

    b'{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"The bulk request must be terminated by a newline [\
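The error message is literal: the `_bulk` API consumes NDJSON — one JSON object per line, final newline included — not a JSON array, so `json=data` cannot work. A sketch of building and sending that body (the newline and `application/x-ndjson` details follow Elasticsearch's bulk format; `endpoint` and `auth` are as in the question):

```python
import json

def to_bulk_body(actions) -> str:
    # one serialized object per line; the trailing "\n" after the last
    # action is exactly what the error message demands
    return "".join(json.dumps(a) + "\n" for a in actions)

def send_bulk(endpoint, actions, auth=None):
    import requests  # third-party: pip install requests
    return requests.put(
        endpoint,
        data=to_bulk_body(actions).encode("utf-8"),
        headers={"Content-Type": "application/x-ndjson"},
        auth=auth,
        timeout=10,
    )
```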

Python requests lib is taking way longer than it should to do a get request

浪子不回头ぞ submitted on 2021-01-28 02:41:29

Question: So I have this code. Whenever I run it and it gets to line 3, it takes about 20 whole seconds to do the GET request. There is no reason it should take this long, and it consistently takes this long every time. Any help?

    def get_balance(addr):
        try:
            r = requests.get("http://blockexplorer.com/api/addr/" + addr + "/balance")
            return int(r.text)/10000000
        except:
            return "e"

Answer 1: It works for me most of the time.

    >>> def get_balance(addr):
    ...     try:
    ...         start = time.time()
    ...         r = requests.get(
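A diagnostic sketch rather than a definitive fix: `timeout=` bounds the wait, `r.elapsed` and `r.history` show where the time goes, and a shared `Session` reuses the connection across calls. Using https directly is an assumption that sidesteps a possible http→https redirect; the divisor is copied unchanged from the question.

```python
def balance_url(addr: str) -> str:
    return f"https://blockexplorer.com/api/addr/{addr}/balance"

def get_balance(session, addr):
    # session is a requests.Session(); reusing it skips TCP/TLS setup per call
    r = session.get(balance_url(addr), timeout=5)
    print("server time:", r.elapsed.total_seconds(), "s,",
          len(r.history), "redirect(s)")
    r.raise_for_status()
    return int(r.text) / 10000000  # divisor as in the question
```

Typical use: `with requests.Session() as s: print(get_balance(s, addr))` — the printed elapsed time and redirect count narrow down whether the delay is the server, a redirect chain, or connection setup.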

Insert data into sqlite3 database with API

瘦欲@ submitted on 2021-01-27 19:27:24

Question: I'm trying to insert data from a web API into my database (I am using sqlite3 on Python 3.7.2) and I can't find any tutorials on how to do so. So far all my code is:

    import requests, sqlite3

    database = sqlite3.connect("ProjectDatabase.db")
    cur = database.cursor()
    d = requests.get("http://ergast.com/api/f1/2019/drivers")

I'm aiming to get the names and driver numbers of each driver and insert them all into a table called Drivers. (I'm also using more APIs with more tables, but if I figure out
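A sketch of the fetch-parse-insert steps. Two assumptions are labeled here: appending `.json` makes Ergast return JSON instead of XML, and the payload nests drivers under `MRData.DriverTable.Drivers` with `givenName`/`familyName`/`permanentNumber` keys; the table schema is a minimal example, not a recommendation.

```python
import sqlite3

def make_table(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS Drivers (name TEXT, number TEXT)")

def insert_drivers(conn, drivers):
    # parameterized executemany avoids string-building the SQL by hand
    rows = [(f"{d['givenName']} {d['familyName']}", d.get("permanentNumber"))
            for d in drivers]
    conn.executemany("INSERT INTO Drivers VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)

def load_from_api(conn):
    import requests  # third-party: pip install requests
    data = requests.get("http://ergast.com/api/f1/2019/drivers.json",
                        timeout=10).json()
    return insert_drivers(conn, data["MRData"]["DriverTable"]["Drivers"])
```

The same `make_table` / `insert_drivers` split should carry over to the other APIs and tables mentioned: one pure function that maps a parsed payload to rows, one call that fetches and feeds it.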