server-side

Server-Sent Events vs Polling

为君一笑 submitted on 2019-11-27 18:05:48
Is there a big difference (in terms of performance, browser support, server load, etc.) between HTML5 Server-Sent Events and straight-up Ajax polling? From the server side, it seems like an EventSource is just hitting the specified page every ~3 seconds or so (though I understand the timing is flexible). Granted, it's simpler to set up on the client side than setting up a timer and having it $.get every so often, but is there anything else? Does it send fewer headers, or do some other magic I'm missing? Ajax polling adds a lot of HTTP overhead since it is constantly establishing and …
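The practical difference is visible on the wire: polling opens a fresh request (with full headers) per tick, while SSE holds one response open and streams `text/event-stream` frames down it. A minimal sketch, assuming a hypothetical `/events` endpoint; the helper shows the actual frame format an SSE server writes:

```javascript
// Client side: one persistent connection instead of repeated requests.
// (Assumes a hypothetical /events endpoint that emits "message" events.)
//
//   const source = new EventSource('/events');
//   source.onmessage = (e) => console.log(e.data);
//
// On the wire, the server keeps the response open and writes frames in the
// text/event-stream format: "data: ...\n\n" per event. A helper that
// formats one frame:
function sseFrame(data, eventName) {
  const lines = [];
  if (eventName) lines.push(`event: ${eventName}`);
  // Multi-line payloads become one "data:" line per payload line.
  for (const line of String(data).split('\n')) lines.push(`data: ${line}`);
  return lines.join('\n') + '\n\n'; // the blank line terminates the event
}
```

Because the connection stays open, each event costs only its frame bytes — no repeated request/response headers, which is the overhead polling pays on every tick.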

AngularJS client MVC pattern?

核能气质少年 submitted on 2019-11-27 16:42:21
Until now I have mainly been using a Struts 2, Spring, jQuery technology stack for building web applications. The point is that this stack uses a server-side MVC pattern. The main role of the web browser was limited to the request/response cycle (plus client-side validation). Data retrieval, business logic, wiring and validation were mainly responsibilities of the server side. I have a few questions about the AngularJS framework, inspired by the following quotes I've read. From the AngularJS tutorial: "For Angular apps, we encourage the use of the Model-View-Controller (MVC) design pattern to …"
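The shift the question describes — MVC moving from the server into the browser — can be sketched without any framework at all. The names below are illustrative, not Angular APIs; AngularJS adds scopes, dependency injection and two-way binding on top of this same separation:

```javascript
// Framework-free sketch of client-side MVC. AngularJS automates the
// model->view sync; here it is done by hand so the roles are visible.
const model = { items: [] };                  // Model: plain data, no DOM

function render(m) {                          // View: data -> markup
  return '<ul>' + m.items.map(i => `<li>${i}</li>`).join('') + '</ul>';
}

const controller = {                          // Controller: mutates the model,
  add(m, item) {                              // then re-renders the view
    m.items.push(item);
    return render(m);
  }
};
```

The server's role then shrinks to serving data (typically JSON over REST), which is exactly the division of labour the Angular tutorial is encouraging.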

The max message size quota for incoming messages (65536) …To increase the quota, use the MaxReceivedMessageSize property

。_饼干妹妹 submitted on 2019-11-27 14:35:13
Question: I've got a crazy problem I'm trying to deal with. I know that when we're receiving a huge amount of data we must increase the quota in the client's .config file, but what am I supposed to do if my client is sending huge data to the WCF server? It works perfectly when I'm sending a small input parameter. Unfortunately, it breaks down when the input grows bigger. The debugger says: Bad Request, 400; in the trace file it is: "The maximum message size quota for incoming messages (65536) has been …"
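The quota that trips here belongs to whichever side *receives* the message — so when the client sends the large payload, it is the server's binding that needs raising, not (only) the client's. A sketch of the server-side binding, with illustrative names and sizes:

```xml
<!-- Server-side config sketch: the binding name and the 10 MB values are
     illustrative. MaxReceivedMessageSize governs the receiving endpoint,
     so for large *incoming* requests it must be raised on the server. -->
<bindings>
  <basicHttpBinding>
    <binding name="largeMessageBinding"
             maxReceivedMessageSize="10485760"
             maxBufferSize="10485760">
      <readerQuotas maxArrayLength="10485760"
                    maxStringContentLength="10485760" />
    </binding>
  </basicHttpBinding>
</bindings>
```

The service's endpoint must reference this binding via `bindingConfiguration="largeMessageBinding"`, otherwise the default 65536-byte quota still applies.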

Google Analytics API - Programmatically fetch page views in server side

和自甴很熟 submitted on 2019-11-27 13:19:09
Question: We have a web application that consists of several pages. We registered our web app's domain with Google Analytics, and page-view tracking works as expected (in the Analytics panel we can see page views for each page). Now we want this page-view data to be stored in our back-end DB. So we want to create a back-end process that runs once a day and fetches the page views from the Analytics API. This of course needs to be done in code. From initial research it seems that in order to …
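For a server-side daily job, the usual shape (as of the era of this question) is: authenticate with a service account, then query the Core Reporting API (v3) for `ga:pageviews`. A sketch of building such a query; the OAuth2 token handling is assumed to happen elsewhere, and the view ID is a placeholder:

```javascript
// Sketch: build a Core Reporting API (v3) query for daily pageviews per
// page path. Auth (a service-account OAuth2 access token sent as a Bearer
// header) is assumed to be handled elsewhere; the view ID is a placeholder.
function pageviewsQuery(viewId, startDate, endDate) {
  const params = new URLSearchParams({
    ids: 'ga:' + viewId,            // the Analytics view (profile) ID
    'start-date': startDate,        // e.g. '2019-11-26'
    'end-date': endDate,
    metrics: 'ga:pageviews',
    dimensions: 'ga:pagePath',      // one row per page
  });
  return 'https://www.googleapis.com/analytics/v3/data/ga?' + params.toString();
}
```

The cron job would issue this GET once a day (yesterday's date in both fields) and insert the returned rows into the DB.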

Understanding Heroku server status 143

烈酒焚心 submitted on 2019-11-27 11:50:37
Question: I'm wondering about Heroku server status and can't find any documentation on this topic. Example: Process exited with status 143. Can anyone explain this example? And where would I find resources for future reference? Answer 1: Exit code 143 means that your process was terminated by a SIGTERM. This is generally sent when you run any command that requires your dynos to restart (config:set, restart, scale down, ...). Answer 2: It is an idle state when it does not receive any request for a while. When it …
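The number itself is not Heroku-specific: by Unix convention, a process killed by a signal exits with status 128 + the signal number, and SIGTERM is signal 15, hence 143. A process can avoid the non-zero status by catching the signal and shutting down cleanly (sketch; `server` is assumed to exist):

```javascript
// Exit status 143 = 128 + 15, where 15 is the signal number of SIGTERM:
// the dyno was asked to shut down; it did not crash.
const SIGTERM = 15;
const exitStatus = 128 + SIGTERM; // 143, the code Heroku reports

process.on('SIGTERM', () => {
  // Finish in-flight work, close connections, then exit 0, e.g.:
  // server.close(() => process.exit(0));
});
```

Handling SIGTERM matters on Heroku because dynos are sent it on every restart, deploy, and daily cycle, and are hard-killed (SIGKILL) if they have not exited within the grace period.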

Save the document generated by javascript

孤街浪徒 submitted on 2019-11-27 11:20:42
Question: JavaScript can manipulate the document the browser is displaying, so the following: <script> document.write("<table><tr><td>Hola</td><td>Adios</td></tr></table>"); </script> will make the browser display a table just as if it were part of the original HTML document: <table> <tr> <td>Hola</td> <td>Adios</td> </tr> </table> Is there a way I can save/serve this document content? Currently we have some nicely generated reports using Ext-js; what I would like to do is have the "text/html" version of …
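The live DOM — including everything scripts have written into it — can be read back as a string and sent to the server for saving. A sketch; the `/save-report` endpoint is hypothetical:

```javascript
// In the browser, the rendered document (including nodes added by
// document.write or Ext-js) can be serialized and POSTed for saving:
//
//   const html = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;
//   fetch('/save-report', { method: 'POST', body: html }); // hypothetical endpoint
//
// A small helper that wraps serialized markup into a complete document:
function asDocument(bodyMarkup, title) {
  return '<!DOCTYPE html>\n<html><head><title>' + title +
         '</title></head><body>' + bodyMarkup + '</body></html>';
}
```

Note that `outerHTML` captures the DOM *as rendered*, not the original source — which is exactly what is wanted for saving script-generated reports.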

PHP: How to compress images without losing visible quality (automatically)?

。_饼干妹妹 submitted on 2019-11-27 10:34:40
Question: I'm wondering how to figure out the best compression rate (small file size + no visible quality loss) automatically. At the moment I'm using imagejpeg() with $quality = 85 for each .jpg. PageSpeed (the Chrome plugin) suggests lowering the quality of a few images to save some kB; the percentage of reduction differs per image. I'd like to write a cron job that crawls a specific directory and optimizes every image. How does PageSpeed or TinyPNG figure out the best optimized quality, and is this possible with PHP or …
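One common automatic approach is a search over the quality setting: re-encode at a candidate quality, measure the result, and keep the highest quality that fits a target. A sketch of that search; `encode` is a stand-in for a real encoder (`imagejpeg()` in PHP would play this role), and tools like TinyPNG additionally compare *perceptual* quality, not just size:

```javascript
// Binary-search the quality setting for the highest value whose encoded
// output still fits within a byte budget. `encode(q)` is assumed to return
// the encoded size in bytes at quality q (a stand-in for a real encoder).
function bestQuality(encode, maxBytes, lo = 40, hi = 95) {
  let best = lo;
  while (lo <= hi) {
    const q = Math.floor((lo + hi) / 2);
    if (encode(q) <= maxBytes) {
      best = q;          // fits the budget: remember it, try higher quality
      lo = q + 1;
    } else {
      hi = q - 1;        // too big: lower the quality
    }
  }
  return best;
}
```

A PHP cron job can apply the same loop per file, re-encoding with `imagejpeg()` at each candidate quality; replacing the byte budget with a perceptual metric (e.g. SSIM against the original) gives the "no visible loss" variant.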

Why and when to use Node.js? [duplicate]

人盡茶涼 submitted on 2019-11-27 10:03:01
Possible duplicate: How to decide when to use Node.js? Sorry if I'm a bit ambiguous, but I'm trying to understand the real advantages of using Node.js over other server-side languages. I'm a JavaScript enthusiast, so I'm probably going to play with Node.js, but I want to know whether I should use it in my projects. Raynos: It's evented, asynchronous, non-blocking I/O built on top of V8. So we get all the performance gains of V8, Google's JavaScript engine. Since the JavaScript performance race hasn't ended yet, you can expect Google to keep improving V8's performance (for free) …

How do I access PHP REST API PUT data on the server side?

為{幸葍}努か submitted on 2019-11-27 09:13:44
-- Question -- I am just starting out with REST APIs and am getting pretty confused. This is what my PHP cURL client side looks like for a PUT: case 'PUT': curl_setopt($handle, CURLOPT_CUSTOMREQUEST, 'PUT'); curl_setopt($handle, CURLOPT_POSTFIELDS, $data); break; Now when I look at the server, $_SERVER['REQUEST_METHOD'] shows PUT, but my question is how do I get the $data I sent with CURLOPT_POSTFIELDS? All I need to do is get the $data sent with the PUT request into the next line, like $value = $data['curl_data'];. I have seen so much clutter on this topic that it is giving me a headache.

How to PHP server-side check if a url of a web site is valid?

…衆ロ難τιáo~ submitted on 2019-11-27 08:03:22
Question: I've searched the SO questions but the solutions I found are based on calling the ping command via PHP's system() function. My web host doesn't allow me to do that. What should I do? Please help, Nam. Update: I need to check from the server side. Answer 1: If by a valid URL you mean one which does not 404, then you could use get_headers() and look for 404 in the first returned array element. $url = 'http://google.com'; list($status) = get_headers($url); if (strpos($status, '404') !== …
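The same idea works without shelling out to ping: make an HTTP request and inspect the status line — which is precisely what the first element of PHP's `get_headers()` result is. A sketch of extracting the code from such a line:

```javascript
// Parse the HTTP status code out of a status line like the one
// get_headers() returns as its first array element in PHP.
function statusCode(statusLine) {
  const m = /^HTTP\/[\d.]+\s+(\d{3})/.exec(statusLine);
  return m ? Number(m[1]) : null;
}

// A URL is "valid" for this purpose when the code is not 404
// (or, more strictly, when it falls in the 2xx/3xx range).
const ok = statusCode('HTTP/1.1 200 OK');
const missing = statusCode('HTTP/1.1 404 Not Found');
```

Checking the parsed number (rather than `strpos` on the whole line) avoids false matches when "404" happens to appear elsewhere in a header.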