python-requests

SSLError in Requests when packaging as OS X .app

Submitted by 旧巷老猫 on 2021-02-06 09:16:11
Question: I'm developing an application for OS X. The application communicates with a server through python-requests over a secure connection. I am able to run the Python file I intend to package, and the SSL connection succeeds. However, when I package the file with py2app and try to run it, I get the following error: Traceback (most recent call last): File "/Users/yossi/Documents/repos/drunken-octo-nemesis/dist/drunken-octo.app/Contents/Resources/__boot__.py", line 338, in <module…
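
A minimal sketch of a common workaround, assuming the packaged .app simply fails to locate the CA bundle that requests/certifi ship (the URL below is a placeholder):

import certifi
import requests

# Point requests at certifi's CA bundle explicitly, since the packaged
# .app may not resolve its default location on its own.
response = requests.get("https://example.com/", verify=certifi.where())
print(response.status_code)

Adding certifi to py2app's packages option, or setting the REQUESTS_CA_BUNDLE environment variable to the same path, is the related packaging-side adjustment.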

Unused import statement 'import requests' in PyCharm [closed]

Submitted by 你。 on 2021-02-05 12:32:30
Question: (Closed as not reproducible or caused by typos; not currently accepting answers.) I wrote import requests in my code without adding anything else, and this line is highlighted in gray. PyCharm itself offered to delete the line, but I need it. I saw a way of adding # noinspection PyUnresolvedReferences, but is it possible somehow…
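
For reference, a small sketch of the directive the asker mentions; whether that particular inspection ID suppresses the grey highlight depends on PyCharm's inspection mapping, but actually referencing the module anywhere in the file certainly clears it:

# noinspection PyUnresolvedReferences
import requests

# Any real use of the module also removes the "unused import" highlight:
print(requests.__version__)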

Iterate and extract tables from a web page, saving as an Excel file in Python

Submitted by ╄→尐↘猪︶ㄣ on 2021-02-05 11:30:12
Question: I want to iterate over and extract the tables from the link here, then save them as an Excel file. How can I do that? Thank you. My code so far: import pandas as pd import requests from bs4 import BeautifulSoup from tabulate import tabulate url = 'http://zjj.sz.gov.cn/ztfw/gcjs/xmxx/jgysba/' res = requests.get(url) soup = BeautifulSoup(res.content,'lxml') print(soup) New update: from requests import post import json import pandas as pd import numpy as np headers = { "User-Agent": "Mozilla/5.0 (Windows NT 10.0;…
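
A minimal sketch of the general pattern, assuming the tables are present in the static HTML; the "new update" above suggests the real data comes from a POST API, in which case the same to_excel step would be applied to a DataFrame built from that JSON response instead:

import pandas as pd
import requests

url = 'http://zjj.sz.gov.cn/ztfw/gcjs/xmxx/jgysba/'
res = requests.get(url)
res.encoding = res.apparent_encoding   # guard against mojibake on non-UTF-8 pages

tables = pd.read_html(res.text)        # every <table> on the page as a DataFrame
                                       # (raises ValueError if none are found)
with pd.ExcelWriter('tables.xlsx') as writer:
    for i, table in enumerate(tables):
        table.to_excel(writer, sheet_name='table_{}'.format(i), index=False)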

Bad Handshake when using requests

Submitted by 筅森魡賤 on 2021-02-05 09:29:17
Question: I was trying to download a PDF file from the Internet with Python 2.7.15rc1 and requests 2.19.1, but I am facing this error: > Traceback (most recent call last): > File "download.py", line 5, in <module> > r = requests.get(url,verify=False) > File "/home/user/.local/lib/python2.7/site-packages/requests/api.py", > line 72, in get > return request('get', url, params=params, **kwargs) > File "/home/user/.local/lib/python2.7/site-packages/requests/api.py", > line 58, in request > return session.request…
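
On old Python 2.7 installs a "bad handshake" usually comes from missing SNI / modern-TLS support rather than from the code itself; a hedged sketch of the usual remedy is to install the security extras and retry a plain GET with verification left on (the URL below is a placeholder):

# pip install "requests[security]"    # pulls in pyOpenSSL and friends for proper TLS/SNI
import requests

url = 'https://example.com/file.pdf'   # placeholder URL
r = requests.get(url, timeout=30)      # keep verify=True once the extras are installed
r.raise_for_status()
with open('file.pdf', 'wb') as f:
    f.write(r.content)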

Weird character that doesn't exist in the HTML source (Python, BeautifulSoup)

Submitted by 夙愿已清 on 2021-02-05 09:26:10
Question: I have watched a video that teaches how to use BeautifulSoup and requests to scrape a website. Here's the code: from bs4 import BeautifulSoup as bs4 import requests import pandas as pd pages_to_scrape = 1 for i in range(1,pages_to_scrape+1): url = ('http://books.toscrape.com/catalogue/page-{}.html').format(i) pages.append(url) for item in pages: page = requests.get(item) soup = bs4(page.text, 'html.parser') #print(soup.prettify()) for j in soup.findAll('p', class_='price_color'): price=j…
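
A sketch of the same loop, assuming the "weird character" is the usual Â£ mojibake from decoding a UTF-8 page with the wrong codec; forcing the response encoding avoids it, and the sketch also defines the pages list the excerpt appends to:

from bs4 import BeautifulSoup
import requests

pages = []
pages_to_scrape = 1
for i in range(1, pages_to_scrape + 1):
    pages.append('http://books.toscrape.com/catalogue/page-{}.html'.format(i))

for item in pages:
    page = requests.get(item)
    page.encoding = 'utf-8'                        # force the correct decoding
    soup = BeautifulSoup(page.text, 'html.parser')
    for j in soup.find_all('p', class_='price_color'):
        print(j.get_text())                        # e.g. £51.77, without a stray Â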

Python: how to set parameters for a Python request

Submitted by 情到浓时终转凉″ on 2021-02-05 09:14:46
Question: I have this Python requests code that works, but I don't understand what the parameters represent. I want to understand how to set parameters for a Python request, and whether there is a good reference for this. Here is the code I use: url = 'https://www.walmart.com/store/1003-York-pa/search?query=ice%20cream' api_url = 'https://www.walmart.com/store/electrode/api/search' params = { 'query': word, 'cat_id': 0, 'ps': 24, 'offset': 0, 'prg': 'desktop', 'stores': re.search(r'store/(\d+)', url).group(1) }…
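
In short, requests URL-encodes the params dict and appends it to the URL as the query string; the keys themselves belong to Walmart's undocumented endpoint, so the meanings in the comments below are educated guesses from their names. A small sketch:

import requests

api_url = 'https://www.walmart.com/store/electrode/api/search'
params = {
    'query': 'ice cream',   # the search term
    'cat_id': 0,            # category filter (0 presumably means all)
    'ps': 24,               # results per page (guess)
    'offset': 0,            # pagination offset (guess)
    'prg': 'desktop',       # client type
    'stores': 1003,         # store id taken from the store URL
}
r = requests.get(api_url, params=params, headers={'User-Agent': 'Mozilla/5.0'})
print(r.url)   # shows the fully encoded query string requests built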

How can I access this type of site using requests? [duplicate]

Submitted by 落花浮王杯 on 2021-02-05 08:09:38
Question: (Duplicate of "Scraper in Python gives 'Access Denied'", 3 answers; closed 8 months ago.) This is the first time I've encountered a site that wouldn't 'allow me access' to the webpage. I'm not sure why, and I can't figure out how to scrape this website. My attempt: import requests from bs4 import BeautifulSoup def html(url): return BeautifulSoup(requests.get(url).content, "lxml") url = "https://www.g2a.com/" soup = html(url) print(soup.prettify()) Output:…
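
A minimal sketch of the usual first step, sending browser-like headers; g2a.com sits behind bot protection, so this may still return Access Denied, and the linked duplicate covers heavier tools such as driving a real browser:

import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Accept-Language': 'en-US,en;q=0.9',
}
resp = requests.get('https://www.g2a.com/', headers=headers)
print(resp.status_code)                    # 403 means the block is still in place
soup = BeautifulSoup(resp.content, 'lxml')
print(soup.title)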

How to extract table from website using python

Submitted by 青春壹個敷衍的年華 on 2021-02-05 08:02:48
Question: I have been trying to extract the table from a website but I am lost. Can anyone help me? My goal is to extract the table from the scope page: https://training.gov.au/Organisation/Details/31102 import requests from bs4 import BeautifulSoup url = "https://training.gov.au/Organisation/Details/31102" response = requests.get(url) page = response.text soup = BeautifulSoup(page, 'lxml') table = soup.find(id ="ScopeQualification") [row.text.split() for row in table.find_all("tr")] Answer 1: find OrganisationId…
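
A sketch of reading the scope table if it is present in the static HTML; the answer excerpt hints that the table is actually populated by a follow-up request keyed on an OrganisationId, so an empty result here means the AJAX endpoint would have to be queried instead (its details are not in the excerpt and are omitted):

import pandas as pd
import requests
from bs4 import BeautifulSoup

url = 'https://training.gov.au/Organisation/Details/31102'
soup = BeautifulSoup(requests.get(url).text, 'lxml')

container = soup.find(id='ScopeQualification')
try:
    df = pd.read_html(str(container))[0]   # first table inside the container
    print(df.head())
except ValueError:
    print('No table in the static HTML; it is loaded dynamically.')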

404 Error on a Pexels API Call in Python

Submitted by 让人想犯罪 __ on 2021-02-05 07:25:26
Question: I want to download an image with the Pexels API (documentation) using Python. First, I get the ID of the picture by doing: import requests image_base_url = 'https://api.pexels.com/v1/search' api_key = 'my_api_key' my_obj = {'query':'Stock market'} x = requests.get(image_base_url,headers = {'Authorization':api_key},data = my_obj) print(x.text) Then, I obtain an ID for the image I want and run this: photo_request_link = 'https://api.pexels.com/v1/photos/' photo_id = {'id':159888} final_photo =…
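
A hedged sketch of the likely fix: per the linked documentation, the single-photo endpoint takes the id in the URL path (https://api.pexels.com/v1/photos/<id>) rather than as a parameter, and the search terms belong in params rather than data:

import requests

api_key = 'my_api_key'                 # placeholder
headers = {'Authorization': api_key}

search = requests.get('https://api.pexels.com/v1/search',
                      headers=headers,
                      params={'query': 'Stock market'})   # query string, not a body
photo_id = 159888
photo = requests.get('https://api.pexels.com/v1/photos/{}'.format(photo_id),
                     headers=headers)
print(photo.status_code)
print(photo.json().get('src', {}).get('original'))         # direct image URL, if found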