cherrypy

cherrypy.HTTPRedirect redirects to IP instead of hostname using abs path

微笑、不失礼 submitted on 2019-11-30 00:02:48
Question: I'm running CherryPy behind nginx and need to handle redirects. On my dev machine, running on 127.0.0.1:8080, this redirects correctly to 127.0.0.1:8080/login. However, when running via nginx on cherrypy.mydomain.com (port 80), the redirects still go to 127.0.0.1:8080/login rather than cherrypy.mydomain.com/login. 127.0.0.1:8080 is the correct local address for the application; the nginx server block listens on port 80 and pipes requests to the local cherrypy
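A common fix (a sketch, assuming nginx forwards the original host; the exact header name depends on your nginx `proxy_set_header` lines) is to enable CherryPy's `proxy` tool so `HTTPRedirect` builds URLs from the forwarded host rather than the local socket:

```python
# Sketch: CherryPy's proxy tool rebuilds the request base from the
# X-Forwarded-* headers that nginx passes along, so redirects use
# cherrypy.mydomain.com instead of 127.0.0.1:8080.
config = {
    '/': {
        'tools.proxy.on': True,
        # assumption: nginx is configured with
        #   proxy_set_header X-Forwarded-Host $host;
        'tools.proxy.local': 'X-Forwarded-Host',
    }
}
# cherrypy.quickstart(Root(), '/', config)
```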

pylibmc: 'Assertion “ptr->query_id == query_id +1” failed for function “memcached_get_by_key”'

拟墨画扇 submitted on 2019-11-29 20:03:02
Question: I have a Python web app that uses the pylibmc module to connect to a memcached server. If I test my app with requests once per second or slower, everything works fine. If I send more than one request per second, however, my app crashes and I see the following in my logs: Assertion "ptr->query_id == query_id +1" failed for function "memcached_get_by_key" likely for "Programmer error, the query_id was not incremented.", at libmemcached/get.cc:107 Assertion "ptr->query_id == query_id +1" failed
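This assertion usually points at a single pylibmc client being shared across request threads; `pylibmc.Client` is not thread-safe. A minimal thread-local sketch (the `make_client` placeholder stands in for the real `pylibmc.Client([...])` so it runs without a memcached server; pylibmc also ships a `ThreadMappedPool` helper for exactly this):

```python
import threading

# Each worker thread gets its own client instance, so two concurrent
# requests never interleave commands on one libmemcached connection.
_local = threading.local()

def make_client():
    # In the real app this would be:
    #   return pylibmc.Client(['127.0.0.1'], binary=True)
    return object()  # placeholder so the sketch runs standalone

def get_client():
    if not hasattr(_local, 'client'):
        _local.client = make_client()
    return _local.client
```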

Why is CTRL-C not captured and signal_handler called?

陌路散爱 submitted on 2019-11-29 17:01:27
I have the following standard implementation of capturing Ctrl+C: def signal_handler(signal, frame): status = server.stop() print("[{source}] Server Status: {status}".format(source=__name__.upper(), status=status)) print("Exiting ...") sys.exit(0) signal.signal(signal.SIGINT, signal_handler) On server.start() I am starting a threaded instance of CherryPy. I created the thread thinking that maybe, since CherryPy is running, the main thread is not seeing the Ctrl+C. This did not seem to have any effect, but I'm posting the code as I have it now: __main__ : server.start() server : def start(self): #
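The usual culprits: `signal.signal` only takes effect when called from the main thread, and the main thread must stay alive for the handler to ever run (if it exits right after starting the server thread, nothing is left to catch Ctrl+C). A minimal sketch of the pattern, with the actual CherryPy server elided:

```python
import signal
import threading

# Sketch: install the handler in the MAIN thread, then park the main
# thread on an Event instead of letting it exit; the CherryPy server
# would run in a daemon thread started before the wait loop.
stop = threading.Event()

def signal_handler(signum, frame):
    # here you would call server.stop() and report its status
    stop.set()

signal.signal(signal.SIGINT, signal_handler)
# threading.Thread(target=server.start, daemon=True).start()
# while not stop.wait(0.5):
#     pass  # main thread stays responsive to SIGINT
```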

How can I get Bottle to restart on file change?

生来就可爱ヽ(ⅴ<●) submitted on 2019-11-29 16:05:55
Question: I'm really enjoying Bottle so far, but the fact that I have to Ctrl+C out of the server and restart it every time I make a code change is a big hit on my productivity. I've thought about using Watchdog to keep track of files changing and then restarting the server, but how can I do that when the bottle.run function is blocking? Running the server from an external script that watches for file changes seems like a lot of work to set up. I'd think this was a universal issue for Bottle, CherryPy and

How to return data from a CherryPy BackgroundTask running as fast as possible

杀马特。学长 韩版系。学妹 submitted on 2019-11-29 08:43:11
I'm building a web service for iterative batch processing of data using CherryPy. The ideal workflow is as follows:
- Users POST data to the service for processing.
- When the processing job is free, it collects the queued data and starts another iteration.
- While the job is processing, users are POSTing more data to the queue for the next iteration.
- Once the current iteration is finished, the results are passed back so that users can GET them using the same API, and the job starts again with the next batch of queued data.
The key consideration here is that the processing should run as fast as possible
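One way to structure the loop described above (a plain-threading sketch standing in for `cherrypy.process.plugins.BackgroundTask`; `process_batch` is a hypothetical stand-in for the real work):

```python
import queue

# POSTed items go into a queue; each iteration drains everything
# queued so far, processes the whole batch, and publishes results
# so GET handlers can return them.
jobs = queue.Queue()
results = {}

def process_batch(batch):
    # hypothetical processing step: uppercase each payload
    return {item_id: data.upper() for item_id, data in batch}

def run_iteration():
    batch = []
    while True:
        try:
            batch.append(jobs.get_nowait())
        except queue.Empty:
            break
    if batch:
        results.update(process_batch(batch))
```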

CherryPy with Cheetah as plugin + tool - blank pages

半腔热情 submitted on 2019-11-28 14:21:53
CherryPy keeps returning blank pages, or pages containing only the values I return from the controllers. I rewrote a Django/Jinja2 version that did work; this one, which is almost identical, apparently doesn't. I added some pprint calls in the tool, which does fill request.body with the parsed HTML, but nothing is output when the controller just does pass. If I return {'user': True} from the controller, the page shows a bare "User". With a few examples online and the code of SickBeard I came to the following: controller: class RootController(object): @cherrypy.expose
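A likely cause, sketched below: a handler that ends in `pass` implicitly returns None, so CherryPy serves an empty body regardless of what a tool stuffed into request.body; returning the rendered template string fixes the blank page. The `render` function here is a stand-in for Cheetah's `Template(tmpl, searchList=[ctx])` so the sketch runs standalone:

```python
# Stand-in for Cheetah template rendering.
def render(tmpl, ctx):
    return tmpl.format(**ctx)

class RootController(object):
    # @cherrypy.expose on the real class
    def index(self):
        # return the rendered page, not `pass` (which returns None
        # and produces a blank response)
        return render('Hello {user}', {'user': 'alice'})
```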

Python: sending and receiving large files over POST using cherrypy

巧了我就是萌 submitted on 2019-11-28 12:49:35
I have a CherryPy web server that needs to be able to receive large files over HTTP POST. I have something working at the moment, but it fails once the files being sent get too big (around 200 MB). I'm using curl to send test POST requests, and when I try to send a file that's too big, curl reports "The entity sent with the request exceeds the maximum allowed bytes." Searching around, this seems to be an error from CherryPy. So I'm guessing that the file needs to be sent in chunks? I tried something with mmap, but I couldn't get it to work. Does the method that handles the file

Deploying CherryPy (daemon)

两盒软妹~` submitted on 2019-11-28 06:20:41
I've followed the basic CherryPy tutorial ( http://www.cherrypy.org/wiki/CherryPyTutorial ). One thing not discussed is deployment. How can I launch a CherryPy app as a daemon and "forget about it"? What happens if the server reboots? Is there a standard recipe? Maybe something that will create a service script (/etc/init.d/cherrypy...). Thanks! Benno: There is a Daemonizer plugin for CherryPy, included by default, which is useful for getting it to start; but by far the easiest way for simple cases is to use the cherryd script: > cherryd -h Usage: cherryd [options] Options: -h, --help show this
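A typical cherryd invocation for the daemon part looks like this (a sketch; `myapp` and the paths are placeholders — `myapp` is any importable module that mounts your app with `cherrypy.tree.mount(...)`):

```shell
# -d daemonizes the process, -p writes a pidfile, -i imports your
# app module, -c points at a CherryPy config file
cherryd -d -p /var/run/myapp.pid -i myapp -c /etc/myapp/site.conf
```

Surviving reboots is then a matter of wrapping this command in your init system (an init.d script or a systemd unit).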

415 exception Cherrypy webservice

微笑、不失礼 submitted on 2019-11-28 04:18:41
Question: I'm trying to build a CherryPy/Python web service. I already spent the whole day finding out how to make a cross-domain ajax request possible. That's finally working, but now I have the next issue. I think I already know the solution, but I don't know how to implement it. The problem is that when I send the ajax request, the CherryPy server responds with: 415 Unsupported Media Type Expected an entity of content type application/json, text/javascript Traceback (most recent call last):
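That 415 comes from the `json_in` tool, which rejects any request body whose Content-Type isn't one it expects. Either declare a JSON body on the client or widen what the tool accepts (a sketch; the jQuery snippet in the comment is illustrative):

```python
# Client side, the ajax call must declare a JSON body, e.g. with jQuery:
#   $.ajax({url: '/endpoint', type: 'POST',
#           contentType: 'application/json',
#           data: JSON.stringify(payload)});
# Server side, configure the json_in tool:
config = {
    '/': {
        'tools.json_in.on': True,
        # assumption: also accept text/javascript, matching the error text
        'tools.json_in.content_type': ['application/json', 'text/javascript'],
    }
}
```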
