amazon

Using Uploadify to POST Directly to Amazon S3

Submitted by 流过昼夜 on 2019-12-01 17:25:09
Can anyone tell me how to use Uploadify to upload directly to Amazon S3? My code is as follows:

    $('#fileInput').uploadify({
        'fileDataName' : 'file',
        'uploader'     : 'uploadify.swf',
        'script'       : 'http://BUCKET-NAME-GOES-HERE.s3.amazonaws.com/',
        'cancelImg'    : 'cancel.png',
        'method'       : 'post',
        'auto'         : true,
        'onError'      : function (a, b, c, d) {
            alert('error ' + d.type + ": " + d.info + ' name: ' + c.name + ' size: ' + c.size);
        },
        'scriptData'   : {
            'AWSAccessKeyId': "KEY-GOES-HERE",
            'key': "${filename}",
            'acl': "public-read",
            'policy': "POLICY-STRING-GOES-HERE",
            'signature': "SIGNATURE-GOES-HERE",
            'success
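
For this to work, the policy and signature fields cannot stay as placeholders; they have to be generated server-side from the AWS secret key. A minimal Python sketch of that step, assuming the legacy signature-version-2 browser POST scheme that the AWSAccessKeyId/policy/signature fields above correspond to (bucket name, key prefix and the extra Flash fields are assumptions, not taken from the question):

    import base64
    import hashlib
    import hmac
    import json
    from datetime import datetime, timedelta

    AWS_SECRET_KEY = "SECRET-KEY-GOES-HERE"   # keep this on the server, never in the browser
    BUCKET = "BUCKET-NAME-GOES-HERE"          # placeholder

    # Policy document describing what the browser is allowed to POST.
    expiration = (datetime.utcnow() + timedelta(hours=1)).strftime("%Y-%m-%dT%H:%M:%SZ")
    policy_document = {
        "expiration": expiration,
        "conditions": [
            {"bucket": BUCKET},
            {"acl": "public-read"},
            ["starts-with", "$key", ""],
            # Assumption: the Flash uploader also posts a Filename field, which the
            # policy must allow or S3 rejects the request.
            ["starts-with", "$Filename", ""],
            {"success_action_status": "201"},
        ],
    }

    # Base64-encode the policy and sign it with HMAC-SHA1 (signature version 2).
    policy = base64.b64encode(json.dumps(policy_document).encode("utf-8"))
    signature = base64.b64encode(
        hmac.new(AWS_SECRET_KEY.encode("utf-8"), policy, hashlib.sha1).digest()
    )

    print("policy:", policy.decode())
    print("signature:", signature.decode())

The printed policy and signature strings are what would go into the scriptData block above.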

Why can't I scrape Amazon with BeautifulSoup?

Submitted by 懵懂的女人 on 2019-12-01 14:54:28
Here is my Python code:

    import urllib2
    from bs4 import BeautifulSoup

    page = urllib2.urlopen("http://www.amazon.com/")
    soup = BeautifulSoup(page)
    print soup

It works for google.com and many other websites, but it doesn't work for amazon.com. I can open amazon.com in my browser, but the resulting "soup" is still None. Besides, I find that it cannot scrape appannie.com either. However, rather than returning None, the code raises an error: HTTPError: HTTP Error 503: Service Temporarily Unavailable. So I suspect that Amazon and App Annie block scraping. Please do try it yourself instead of just
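
Both symptoms are commonly caused by sites rejecting requests that do not look like they come from a browser. A minimal sketch of the usual workaround, sending a browser-style User-Agent header (the exact UA string below is only an example, not a requirement):

    import urllib2
    from bs4 import BeautifulSoup

    # Many sites (Amazon included) answer bare urllib2 requests with a 503 or a
    # blocked page; sending a browser-like User-Agent usually gets real HTML back.
    headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; rv:27.0) Gecko/20100101 Firefox/27.0"}
    request = urllib2.Request("http://www.amazon.com/", headers=headers)
    page = urllib2.urlopen(request)
    soup = BeautifulSoup(page, "html.parser")
    print soup.title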

Scrape Amazon's all-deals page with PHP cURL?

Submitted by 淺唱寂寞╮ on 2019-12-01 13:05:11
I want to scrape the Amazon all-deals page http://www.amazon.com/gp/goldbox/all-deals/ref=sv_gb_1, so I am using PHP cURL:

    $request = 'http://www.amazon.com/gp/goldbox/all-deals/ref=sv_gb_1';
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $request);
    curl_setopt($ch, CURLOPT_HEADER, false);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 80);
    $file_source = curl_exec($ch);
    print_r($file_source);
    exit;

The scraping completes, but in the response the page's content div is empty. The contents all came
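
A likely reason for the empty content div is that the deals grid on that page is filled in by JavaScript after the initial HTML loads, so cURL, which only fetches the raw HTML, never sees it. In that case a JavaScript-capable client is needed. A minimal Python/Selenium sketch, assuming Selenium and a Firefox driver are installed (this is an illustrative alternative, not the original PHP approach):

    from selenium import webdriver

    # Load the page in a real browser so client-side JavaScript can populate
    # the deals grid, then read the rendered HTML back out.
    driver = webdriver.Firefox()
    driver.get("http://www.amazon.com/gp/goldbox/all-deals/ref=sv_gb_1")
    html = driver.page_source
    driver.quit()
    print(html[:500])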

Why is my terminal returning this s3 error?

Submitted by 穿精又带淫゛_ on 2019-12-01 11:02:27
Here's the error I keep receiving:

    A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied

I've triple-checked my credentials and googled this error to my wits' end. I edited my bucket policy to add an s3:ListBucket action, but to no avail. When I do so, it just returns a similar message:

    A client error (AccessDenied) occurred when calling the ListBuckets operation: Access Denied

Thoughts? This is also my first time creating an S3 bucket, so it's quite possible I missed some important step. I have triple-checked my keys and even tried creating an additional
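
Two details are easy to miss here: ListObjects requires s3:ListBucket granted on the bucket ARN itself (arn:aws:s3:::bucket-name, not bucket-name/*), and listing all buckets additionally requires s3:ListAllMyBuckets. A rough sketch of an IAM user policy covering both, attached with boto3 (the bucket and user names are placeholders, not from the question):

    import json
    import boto3

    BUCKET = "my-example-bucket"     # placeholder
    IAM_USER = "example-cli-user"    # placeholder

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            # ListBuckets (e.g. "aws s3 ls" with no bucket) needs ListAllMyBuckets on "*".
            {"Effect": "Allow", "Action": ["s3:ListAllMyBuckets"], "Resource": "*"},
            # ListObjects needs s3:ListBucket on the bucket ARN itself, not bucket/*.
            {"Effect": "Allow", "Action": ["s3:ListBucket"],
             "Resource": "arn:aws:s3:::%s" % BUCKET},
            # Object-level reads go on bucket/*.
            {"Effect": "Allow", "Action": ["s3:GetObject"],
             "Resource": "arn:aws:s3:::%s/*" % BUCKET},
        ],
    }

    iam = boto3.client("iam")
    iam.put_user_policy(UserName=IAM_USER,
                        PolicyName="s3-list-and-read",
                        PolicyDocument=json.dumps(policy))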

Search Amazon example with the new Amazon service

Submitted by 心不动则不痛 on 2019-12-01 08:12:29
I cannot find a working example of the new Amazon service (or at least one from within the last couple of years). The closest working example just comes back with a null item no matter what I put in the title. The code is:

    // Amazon ProductAdvertisingAPI client
    AWSECommerceServicePortTypeClient amazonClient = new AWSECommerceServicePortTypeClient();

    // prepare an ItemSearch request
    ItemSearchRequest request = new ItemSearchRequest();
    request.SearchIndex = "Books";
    request.Title = "C#";
    request.Condition = Condition.All;
    //request.ResponseGroup = new string[] { "Small" };

    ItemSearch itemSearch = new
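
A frequently reported cause of null or empty ItemSearch results with the generated WCF client is a missing request signature or AssociateTag, both of which the Product Advertising API requires on every call. A quick way to sanity-check the credentials and search parameters outside the WCF plumbing is the bottlenose Python wrapper, as a rough sketch (assuming it is installed; the credentials below are placeholders):

    import bottlenose

    # Placeholder credentials - replace with your own Product Advertising API keys.
    amazon = bottlenose.Amazon("ACCESS-KEY-GOES-HERE",
                               "SECRET-KEY-GOES-HERE",
                               "ASSOCIATE-TAG-GOES-HERE")

    # ItemSearch returns the raw XML response; if the credentials and parameters
    # are accepted, the matching item titles will be visible in it.
    xml = amazon.ItemSearch(SearchIndex="Books", Title="C#", ResponseGroup="Small")
    print(xml[:1000])

If this returns results, the problem is on the C# client side (signing behavior or missing AssociateTag) rather than in the search parameters.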

Is there an API for the Amazon Vendor Central? [closed]

Submitted by 寵の児 on 2019-12-01 05:11:38
In order to manage our products more easily, I would like to use an API if there is any. Is there any way to manage the products programmatically? Currently, my impression is that there is only the web access, which is quite cumbersome to use if there are hundreds of products. Our data comes from another system which I would like to integrate somehow. I know that data can be imported via Excel files to create products. If products could also be updated this way, it would be an (inferior) alternative. But this is not possible. Or am I wrong? For Amazon MWS there is the Feed API to manage products. But

Amazon CloudSearch: Filter if exists

Submitted by こ雲淡風輕ζ on 2019-12-01 03:50:17
I have an Amazon CloudSearch domain. The aim is to filter on whether the field 'language' exists. Not all objects have a language; I want the ones which do have a language to be filtered on it, but the ones that do not have any language to also be returned. I want to filter with

    (or language:'en' language:null)

However, null cannot be passed within a string. Is this possible? If so, how would it be done? I looked elsewhere as well, and it seems the simplest way to do that is to set a default value for the field and then use that value as your null. For example, set the default to the string "null",
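
A rough sketch of that default-value workaround against the 2013-01-01 search API, assuming the index field's default has been set to the literal string "null" and using a placeholder search endpoint (not one from the question):

    import requests

    # Placeholder - use your own domain's search endpoint.
    ENDPOINT = "http://search-mydomain-xxxx.us-east-1.cloudsearch.amazonaws.com/2013-01-01/search"

    params = {
        "q": "matchall",
        "q.parser": "structured",
        # Documents with language 'en' match the first branch; documents whose
        # language fell back to the default string "null" match the second.
        "fq": "(or language:'en' language:'null')",
    }
    response = requests.get(ENDPOINT, params=params)
    print(response.json())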

SES AWS Error Code: SignatureDoesNotMatch, Status Code: 403

Submitted by 大城市里の小女人 on 2019-12-01 03:22:55
I'm getting an AWS Error Code: SignatureDoesNotMatch, Status Code: 403 when trying to send mail through Amazon SES. I have confirmed that I am using the correct credentials, which I created via https://console.aws.amazon.com/iam/home?#users, and the error still persists. I assume the credentials I created under iam/home are in fact global, but I do not know what else I am doing wrong. The entire error is:

    AWS Error Code: SignatureDoesNotMatch, Status Code: 403, AWS Request ID: xxx, AWS Error Type: client, AWS Error Message: Signature expired: 20140314T031111Z is now earlier than
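
The "Signature expired: ... is now earlier than ..." part of the message usually points at clock skew on the machine making the request rather than at the credentials: the request signature embeds a timestamp, and AWS rejects it if the client clock is too far off. A minimal sketch of a check, comparing the local UTC clock with the Date header returned by the SES endpoint (any server whose clock you trust would do):

    from datetime import datetime, timezone
    from email.utils import parsedate_to_datetime
    from urllib.request import urlopen
    from urllib.error import HTTPError

    URL = "https://email.us-east-1.amazonaws.com/"   # example endpoint; any trusted host works

    try:
        headers = urlopen(URL).headers
    except HTTPError as err:
        # An unauthenticated request may be rejected, but the response
        # still carries a Date header we can compare against.
        headers = err.headers

    remote = parsedate_to_datetime(headers["Date"])
    skew = (datetime.now(timezone.utc) - remote).total_seconds()
    print("clock skew: %.1f seconds" % skew)

If the skew is more than a few minutes, syncing the system clock (for example with ntpd or chrony) should make the SignatureDoesNotMatch / "Signature expired" error go away.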