Use Amazon S3 and CloudFront for intelligently caching webpages

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-08 17:45:39

Question


I have a website (running within Tomcat on Elastic Beanstalk) that generates artist discographies (a single page per artist). This can be resource intensive, so since the artist pages don't change over a one-month period, I put a CloudFront distribution in front of it.

I thought this would mean no artist request ever had to be served more than once by my server; however, it's not quite that good. This post explains that every edge location (Europe, US, etc.) will get a miss the first time it looks up the resource, and that there is a limit to how many resources are kept in the CloudFront cache, so they can be dropped.

So to counter this I have changed my server code to store a copy of the webpage in an S3 bucket AND to check this first when a request comes in: if the artist page already exists in S3, the server retrieves it and returns its contents as the webpage. This greatly reduces the processing, as it only constructs a webpage for a particular artist once.
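The S3-as-cache flow described above can be sketched as follows. This is a minimal illustration, not the poster's actual code: the function name, the key layout, and the injected `s3_get`/`generate`/`s3_put` helpers are all assumptions; in the real app the S3 calls would go through the AWS SDK.

```python
# Hypothetical sketch of the check-S3-first flow; helper functions are
# injected so the caching logic can be read (and tested) on its own.

def get_artist_page(artist_id, s3_get, generate, s3_put):
    """Return the discography page, using S3 as a persistent cache.

    s3_get(key)       -> page HTML, or None if the key is absent
    generate(artist)  -> freshly rendered page HTML (the expensive step)
    s3_put(key, body) -> store the page for future requests
    """
    key = f"discography/{artist_id}.html"  # assumed key layout
    cached = s3_get(key)
    if cached is not None:
        return cached           # cache hit: no regeneration needed
    page = generate(artist_id)  # cache miss: do the expensive render once
    s3_put(key, page)           # persist so later requests skip generation
    return page
```

Note that even on a hit the page body still transits the server, which is exactly the inefficiency described next.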

However:

  1. The request still has to go to the server to check whether the artist page exists.
  2. If the artist page exists, the webpage (which can sometimes be large, up to 20 MB) is first downloaded to the server, and then the server returns it.

So I wanted to know if I could improve this. I know you can configure an S3 bucket to redirect to another website. Is there a per-page way I could get the artist request to go to the S3 bucket, and then have it return the page if it exists or call the server if it does not?

Alternatively, could I get the server to check whether the page exists and then redirect to the S3 page, rather than downloading the page to the server first?


Answer 1:


OP says:

they can sometimes be large, up to 20 MB

Since the volume of data you serve can be pretty large, I think it is feasible for you to do this in two requests instead of one, decoupling content generation from content serving. The reason to do this is to minimize the time and resources the server spends fetching data from S3 and serving it.

AWS supports pre-signed URLs, which can be valid for a short amount of time; we can use them here to avoid security issues.

Currently, your architecture looks something like the diagram below, wherein the client initiates a request, you check whether the requested data exists on S3, fetch and serve it if it does, and otherwise generate the content and save it to S3:

                           if exists on S3
client --------> server --------------------> fetch from s3 and serve
                    |
                    |else
                    |------> generate content -------> save to S3 and serve

In terms of network resources, you always consume 2X the bandwidth and time here. If the data exists, you pull it from S3 to the server and then serve it to the customer (so it is 2X). If the data doesn't exist, you send it both to the customer and to S3 (so again it is 2X).


Instead, you can try the two approaches below, both of which assume that you have some base template and that the other data can be fetched via AJAX calls, and both of which bring down that 2X factor in the overall architecture.

  1. Serve the content from S3 only. This calls for changes to the way your product is designed, and hence may not be that easy to integrate.

    Basically, for every incoming request, return the S3 URL for it if the data already exists; otherwise create a task for it in SQS, generate the data, and push it to S3. Based on your usage patterns for different artists, you should have an estimate of how long it takes to pull the data together on average, and so return a URL that becomes valid after the estimated_time_for_completion(T) of the task.

    The client waits for time T and then makes the request to the URL returned earlier. It makes up to, say, 3 attempts to fetch this data in case of failure. In fact, the data already existing on S3 can be thought of as the base case where T = 0.

    In this case, you make 2-4 network requests from the client, but only the first of those requests comes to your server. You transmit the data to S3 only when it doesn't already exist, and the client always pulls it from S3.

                               if exists on S3, return URL
    client --------> server --------------------------------> s3
                        |
                        |else SQS task
                        |---------------> generate content -------> save to S3 
                         return pre-computed url
    
    
               wait for time `T`
    client  -------------------------> s3
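A sketch of the server-side handler for this approach, with assumed names throughout (`exists`, `enqueue`, and `presign` stand in for the S3 check, the SQS send, and pre-signed URL generation; the 30-second estimate for T is made up):

```python
# Hypothetical sketch of approach 1: return a pre-signed URL immediately,
# queuing asynchronous generation when the page is not yet on S3.

AVG_GENERATION_SECONDS = 30  # assumed estimate T from past generation times


def handle_request(artist_id, exists, enqueue, presign):
    key = f"discography/{artist_id}.html"
    if exists(key):
        # Base case T = 0: the client can fetch from S3 immediately.
        return {"url": presign(key, 300), "wait_seconds": 0}
    # Kick off asynchronous generation and tell the client when to retry.
    enqueue({"artist_id": artist_id, "key": key})
    return {
        "url": presign(key, 300 + AVG_GENERATION_SECONDS),
        "wait_seconds": AVG_GENERATION_SECONDS,
    }
```

The client sleeps for `wait_seconds`, then fetches the URL, retrying a few times on failure as described above.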
    


  2. Check if the data already exists, and make the second network call accordingly.

    This is similar to what you currently do when serving data from the server in case it doesn't already exist. Again, we make two requests here; however, this time we serve the data synchronously from the server when it doesn't exist.

    So, on the first hit, we check whether the content has ever been generated before, and get either a URL on success or an error message. On success, the next hit goes to S3.

    If the data doesn't exist on S3, we make a fresh request (to a different POST URL); on receiving it, the server computes the data and serves it, while adding an asynchronous task to push it to S3.

                               if exists on S3, return URL
    client --------> server --------------------------------> s3
    
    client --------> server ---------> generate content -------> serve it
                                           |
                                           |---> add SQS task to push to S3
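The two endpoints of this second approach could look like the sketch below (names and paths are assumptions; `enqueue_push` stands in for adding the SQS task):

```python
# Hypothetical sketch of approach 2: a cheap existence check, then either
# a direct S3 fetch or a synchronous generate-and-serve.

def check_endpoint(artist_id, exists, presign):
    """First hit: return the S3 URL if the page was ever generated."""
    key = f"discography/{artist_id}.html"
    if exists(key):
        return {"status": 200, "url": presign(key)}
    # Assumed path for the follow-up POST when the page is missing.
    return {"status": 404, "retry_with": f"/generate/{artist_id}"}


def generate_endpoint(artist_id, generate, enqueue_push):
    """Second hit (miss case): serve synchronously, upload asynchronously."""
    page = generate(artist_id)       # expensive render, served right away
    enqueue_push(artist_id, page)    # SQS task pushes the copy to S3 later
    return {"status": 200, "body": page}
```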
    



Answer 2:


CloudFront caches redirects, but does not follow them: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RequestAndResponseBehaviorCustomOrigin.html#ResponseCustomRedirects.

You did not provide specific numbers, but would it work for you to pregenerate all these pages, put them in S3, and point CloudFront directly at S3?

If it is doable, there are a couple of benefits:

  1. You will decouple content generation from content serving, which will make the system more stable overall.
  2. Performance requirements for the content generator will be much lower, as it can regenerate content as slowly as it wishes.

Of course, if you don't know in advance which pages you have to generate, this won't work.
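When the set of artists is known up front, the pregeneration step is a simple batch loop. A minimal sketch (function and key names are assumptions; `put_page` would wrap an S3 upload in practice):

```python
# Hypothetical batch pregeneration: render every artist page once and
# upload it to S3, so CloudFront can use the bucket as its origin.

def pregenerate_all(artist_ids, generate, put_page):
    for artist_id in artist_ids:
        page = generate(artist_id)                      # slow is fine here
        put_page(f"discography/{artist_id}.html", page) # upload to S3
```

Because this runs offline, it can be throttled freely without affecting serving latency.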




Answer 3:


Although I've not done it before, this would be a technique I'd look at.

  • Start by setting up the S3 bucket as you've described, as a "redirect" for a website.

  • Have a look at the S3 event handlers. They only fire when an S3 object is created, but you could try doing a GET first and, if it fails, responding with a POST or PUT to that same path, placing a "marker" file or calling an API that will trigger an event.

https://aws.amazon.com/blogs/aws/s3-event-notification/ http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

  • Once the event is triggered, either have your server listen for it via SQS, or move your artist-creator code into an AWS Lambda function fed by SNS.
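If the generation code moved into Lambda, the handler for the S3 event notification might look like this sketch (the `markers/<artist>` key layout and the injected `generate`/`put_page` helpers are assumptions; the `Records` structure is the standard S3 event shape):

```python
# Hypothetical Lambda handler: a marker object created under markers/
# triggers generation of the real page, which is written back to S3.

def lambda_handler(event, context, generate=None, put_page=None):
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]   # e.g. "markers/artist42"
        artist_id = key.split("/", 1)[1]      # strip the "markers/" prefix
        page = generate(artist_id)            # the expensive render
        put_page(f"discography/{artist_id}.html", page)
```

In a real deployment `generate` and `put_page` would call the discography code and `boto3`'s `put_object` rather than being passed in.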

My only concern is where that GET will be coming from. You don't want anyone hitting your S3 bucket with an invalid POST, or you'd be generating pages all over the place. But I'll leave that as an exercise for the reader.




Answer 4:


Why not put a web server like nginx or Apache in front of Tomcat? That means Tomcat runs on some other port like 8085, and the web server runs on 80. It takes the hits and has its own cache. Then you don't need S3 at all and can keep your server + CloudFront setup.

So CloudFront hits your web server; if the page is in the cache, it is returned directly. Otherwise the request goes to Tomcat.

The cache can be in the same process or in a Redis instance, depending on the total size of the data you need to cache.
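A minimal sketch of such a setup in nginx, assuming Tomcat on port 8085 (the cache path, zone name, size limits, and TTL below are illustrative, not recommendations):

```nginx
# Cache up to 5 GB of rendered pages on disk; entries unused for 30 days
# are evicted, matching the roughly monthly change cadence of the pages.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=artist_cache:10m
                 max_size=5g inactive=30d;

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8085;   # Tomcat upstream
        proxy_cache artist_cache;
        proxy_cache_valid 200 30d;          # cache successful pages for 30 days
        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS for debugging
    }
}
```

With this in place, Tomcat only renders a given artist page on the first miss (per nginx instance), and CloudFront sits in front as before.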



Source: https://stackoverflow.com/questions/29845907/use-amazon-s3-and-cloudfront-for-intelligently-caching-webapges
