python-requests

“invalid character 'u' looking for beginning of value” parsing error from a service developed in Go

孤街醉人 submitted on 2020-12-13 03:56:27
Question: I am trying to get the response from an API developed in Go. Using Postman I get the proper response, but when I use the requests library I get the following message:

    { u'status': 400, u'title': u'Unable to parse data', u'code': u'400001', u'id': u'edf83LlwYx', u'detail': u"invalid character 'u' looking for beginning of value" }

My Python script is:

    import requests
    import json
    import data

    url = data.user_login
    headers = {
        'Content-Type': 'application/json',
        'content-encoding': 'deflate'
    }
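The u'…' prefixes in the error body are a strong hint that the request body was a Python repr of a dict (Python 2 unicode strings) rather than actual JSON, so Go's decoder stops at the leading u. A minimal sketch of the usual fix, using a hypothetical payload (the real field names depend on the API):

```python
import json

# Hypothetical login payload -- the real fields depend on the API.
payload = {'email': 'user@example.com', 'password': 'secret'}

# str(payload) is a Python repr, not JSON; on Python 2 it begins with
# {u'..., which is exactly the "invalid character 'u'" that Go reports.
not_json = str(payload)

# json.dumps produces a valid JSON document that Go's decoder accepts.
body = json.dumps(payload)
print(body)
```

In practice requests can do the serialization itself: requests.post(url, json=payload) runs the payload through json.dumps and sets the Content-Type header, so a stringified dict never reaches the wire.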

MaxRetryError: HTTPConnectionPool: Max retries exceeded (Caused by ProtocolError('Connection aborted.', error(111, 'Connection refused')))

℡╲_俬逩灬. submitted on 2020-12-13 03:50:07
Question: I have one question: I want to test "select" and "input". Can I write it like the code below? Original code:

    class Sinaselecttest(unittest.TestCase):

        def setUp(self):
            binary = FirefoxBinary('/usr/local/firefox/firefox')
            self.driver = webdriver.Firefox(firefox_binary=binary)

        def test_select_in_sina(self):
            driver = self.driver
            driver.get("https://www.sina.com.cn/")
            try:
                WebDriverWait(driver, 30).until(
                    ec.visibility_of_element_located((By.XPATH, "/html/body/div[9

Python requests_html giving me a Timeout Error

痞子三分冷 submitted on 2020-12-13 03:37:22
Question: I'm trying to scrape headlines from medium.com using the requests_html library. The code works well on other machines but not mine. Here is the original code:

    from requests_html import HTMLSession

    session = HTMLSession()
    r = session.get('https://medium.com/@daranept27')
    r.html.render()
    x = r.html.find('a.eg.bv')
    [print(elem.text) for elem in x]

It gives me pyppeteer.errors.TimeoutError: Navigation Timeout Exceeded: 8000 ms exceeded. Here's the full error:

How to fix “latin-1 codec can't encode characters in position” in requests

拥有回忆 submitted on 2020-12-10 07:44:20
Question: I am having trouble with encoding in Python 3. When I was testing on my PC I got no errors:

    Python 3.7.3 (default, Jun 24 2019, 04:54:02) [GCC 9.1.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import requests
    >>> print(requests.get('https://www.kinopoisk.ru').text)

Everything is good. But when I ran the same code on my VPS I got the following error:

    Python 3.7.3 (default, Apr 3 2019, 19:16:38) [GCC 8.0.1 20180414 (experimental) [trunk revision 259383]] on linux
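The failure usually comes from print, not from requests itself: on a VPS whose locale is unset (POSIX/C), sys.stdout falls back to an ASCII/latin-1 encoding and cannot encode the Cyrillic text of the page. A sketch of two mitigations, assuming Python 3.7+:

```python
import sys

# Force stdout to UTF-8 regardless of the VPS locale (Python 3.7+).
# Alternatives: run with PYTHONIOENCODING=utf-8, or fix the system locale.
if hasattr(sys.stdout, 'reconfigure'):
    sys.stdout.reconfigure(encoding='utf-8')

text = 'Кинопоиск'  # sample Cyrillic text standing in for the page body

# Last resort: replace unencodable characters instead of raising.
safe = text.encode('latin-1', errors='replace').decode('latin-1')
print(safe)
```

With errors='replace', characters outside latin-1 become '?' instead of raising UnicodeEncodeError, which trades fidelity for a script that never crashes on output.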

How to extract this content rendered by JavaScript?

旧巷老猫 submitted on 2020-12-07 07:37:29
Question: I'm using requests_html to extract the element <div id="TranslationsHead">...</div> from this URL, in which <span id="LangBar"> ... </span> is rendered by JavaScript.

    from requests_html import HTMLSession
    from bs4 import BeautifulSoup

    session = HTMLSession()
    url = 'https://www.thefreedictionary.com/love'
    headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0'}
    r = session.get(url, headers=headers)
    soup = BeautifulSoup(r.content, 'html.parser')
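Once r.html.render() has executed the page's JavaScript, the rendered markup is available as r.html.html and can be fed to BeautifulSoup. Since rendering needs a headless browser, the sketch below runs on a static stand-in string shaped like the target element (the real markup comes from the site):

```python
from bs4 import BeautifulSoup

# Stand-in for r.html.html after render(); ids match the question's target.
rendered = '''
<div id="TranslationsHead">
  <span id="LangBar"><a href="#es">Spanish</a> <a href="#fr">French</a></span>
</div>
'''

soup = BeautifulSoup(rendered, 'html.parser')
head = soup.find('div', id='TranslationsHead')
langbar = head.find('span', id='LangBar')
print([a.get_text() for a in langbar.find_all('a')])  # ['Spanish', 'French']
```

The same two find calls work unchanged on the real rendered document; only the source of the markup differs.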

Max retries exceeded with url requests Python

别来无恙 submitted on 2020-12-06 11:55:16
Question: I am trying to web scrape this page, and the code I use is:

    page = get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page")

I get this error when I run the code:

    Traceback (most recent call last):
      File "/Users/lakesh/WebScraping/Gold.py", line 46, in <module>
        page = get("https://www.uobgroup.com/online-rates/gold-and-silver-prices.page")
      File "/Library/Python/2.7/site-packages/requests/api.py", line 72, in get
        return request('get', url, params=params, **kwargs)
      File "
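"Connection refused" means the server (or an intermediary) never accepted the TCP connection, often because the site rejects the default python-requests User-Agent. A sketch of two common mitigations, a browser-like header plus a urllib3 Retry policy mounted on a Session (the retry numbers are arbitrary choices, not values from the question):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()

# Some sites refuse requests' default User-Agent string outright.
session.headers.update({'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64)'})

# Retry transient failures with exponential backoff instead of failing fast.
retry = Retry(total=3, backoff_factor=1,
              status_forcelist=[429, 500, 502, 503, 504])
adapter = HTTPAdapter(max_retries=retry)
session.mount('https://', adapter)
session.mount('http://', adapter)

# page = session.get('https://www.uobgroup.com/online-rates/'
#                    'gold-and-silver-prices.page', timeout=10)
```

The final get is left commented so the sketch runs offline; a timeout is worth passing regardless, since requests otherwise waits indefinitely.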
