Nginx proxy Amazon S3 resources

夕颜 2020-11-28 01:04

I'm performing some WPO (web performance optimization) tasks, and PageSpeed suggested that I leverage browser caching. I have improved it successfully for some static files on my Nginx server, however my im…

3 Answers

    陌清茗 (OP) 2020-11-28 01:56

    From an architectural standpoint, what you're trying to do is the wrong way to go about it:

    • Amazon S3 is already optimised for highly available content delivery; by introducing a hand-rolled proxying layer on top of it, you merely add unnecessary latency and a single point of failure, and you lose the benefits that S3 provides.

    • Your performance analysis with regard to the number of files is off the mark. If you have thousands of files on S3, the correct solution is a one-time script that sets the requisite attributes on S3, rather than a hand-rolled proxying mechanism that you don't fully understand and that would run on every request, ad nauseam. Proxying would be a band-aid and would likely decrease performance rather than increase it (even if an automated, stateless tool tells you otherwise). It would also be an unnecessary resource drain, and may contribute to real performance issues and heisenbugs down the line.
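    The one-time fix described above can be sketched with the AWS CLI by copying each object over itself with replaced metadata. The bucket name, prefix, and max-age value below are placeholders, not taken from the question:

    ```shell
    # Hypothetical bucket/prefix; adjust the path and max-age to your needs.
    # Copying an object over itself with --metadata-directive REPLACE rewrites
    # its stored headers, so S3 serves Cache-Control on every future GET.
    aws s3 cp s3://my-bucket/static/ s3://my-bucket/static/ \
        --recursive \
        --metadata-directive REPLACE \
        --cache-control "public, max-age=31536000"
    ```

    Note that a REPLACE copy resets other metadata (such as Content-Type) unless you set it explicitly, so it's worth testing on a single object first.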


    That said, if you're still set on proxying while adding the headers, the correct way to do so with nginx would be by using the expires directive.

    E.g., you may place expires max; before or after your proxy_pass directive within the appropriate location.

    The expires directive automatically takes care of setting a correct Cache-Control header for you, too; but you could also use the add_header directive should you wish to add any custom response headers manually.
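    Putting the pieces above together, a minimal sketch might look like the following. The bucket name, location path, and custom header are hypothetical placeholders:

    ```nginx
    # Hypothetical location proxying a bucket named my-bucket.
    location /s3-assets/ {
        # expires max; emits far-future Expires and Cache-Control headers.
        expires max;
        # Optional custom response header via add_header.
        add_header X-Served-By nginx;

        proxy_pass https://my-bucket.s3.amazonaws.com/;
        proxy_set_header Host my-bucket.s3.amazonaws.com;

        # Hide S3's own identifying headers so the proxied response is clean.
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
    }
    ```

    Placing expires inside the same location as proxy_pass means nginx rewrites the caching headers on the proxied response, regardless of what S3 sent.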
