SilverStripe behind load balancer

Submitted by 谁说我不能喝 on 2019-12-06 05:55:08

The default admin interface (assuming you're using 3.x) uses a JavaScript library called jquery.ondemand, which tracks files that have already been included (a rather ancient precursor to the likes of require.js, only without the AMD support, and with CSS support added).

Given that, the likelihood of this having anything to do with the CMS itself is minimal: the web is stateless by nature, and the state you do save is shared across your servers (both database and session data).

What is not shared across the individual instances in your HA cluster is the physical files. The likely (though not certain) cause here is the mtime stamp appended to the URIs supplied to ondemand, originally intended to avoid browser-caching issues after theme alterations (developer-made or otherwise automated).

The response headers, as you've no doubt inspected, always include X-Include-CSS and X-Include-JS, no matter which endpoint is chosen by HAProxy, nginx, ELB, or whatever. An example looks like:

X-Include-JS:/framework/thirdparty/jquery/jquery.js?m=1481487203,/framework/javascript/jquery-ondemand/jquery.ondemand.js?m=1481487186,/framework/admin/javascript/lib.js?m=1481487181[...]

These arrive on each request, and ondemand inspects them to see what is already included and what still needs to be added.

(Incidentally, the size of these headers is what causes nginx header-buffer issues, producing 502s in a 'default' setup.)
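If you do hit that 502, the relevant nginx buffers can be raised. A minimal sketch, assuming nginx proxies to the PHP backend; the values are illustrative only and should be tuned to your actual header sizes:

```nginx
# Illustrative values only; raise until the X-Include-* headers fit.
proxy_buffer_size       16k;   # buffer that must hold the response headers
proxy_buffers           8 16k;
proxy_busy_buffers_size 32k;   # must be >= proxy_buffer_size
# If PHP runs via FastCGI rather than a proxied backend, the equivalent
# directives are fastcgi_buffer_size / fastcgi_buffers.
```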

So, what to do?

The static files should keep the same mtime between balanced instances if you are deploying static code, but this is worth checking. Generated files, on the other hand (such as those from Requirements::combine_files), will need to be synced on (re)generation between all instances, as with everything under /assets for your site, in which case the mtime should persist. Zend_Cache is quite unlikely to have any effect here, although APC may be a factor. Of course, the first thing to check is whether this premise holds true at all, e.g. by running the header responses from both back-ends through a diff tool.
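To make the mtime theory concrete, here is a hedged sketch (GNU coreutils assumed; the file names are hypothetical) simulating two backends holding the same file deployed at slightly different times — exactly the situation that separate deploys, or a sync that doesn't preserve timestamps, would create:

```shell
tmpdir=$(mktemp -d)
echo 'console.log("lib");' > "$tmpdir/server-a.js"
cp "$tmpdir/server-a.js" "$tmpdir/server-b.js"
# Simulate the same deploy landing at different times on each instance
touch -d '2016-12-11 12:00:00' "$tmpdir/server-a.js"
touch -d '2016-12-11 12:05:00' "$tmpdir/server-b.js"
m_a=$(stat -c %Y "$tmpdir/server-a.js")
m_b=$(stat -c %Y "$tmpdir/server-b.js")
echo "server A emits lib.js?m=$m_a"
echo "server B emits lib.js?m=$m_b"
if [ "$m_a" != "$m_b" ]; then
    echo "MISMATCH: ondemand treats these as two different files"
fi
```

In a real cluster you'd capture the headers from each backend directly (e.g. with `curl -sI` against each instance) and diff the two captures; if the premise holds, only the `?m=` values should differ.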

To help those who might come across this and need a solution that hooks into the CMS here is what I did:

class SyncRequirements_Backend extends Requirements_Backend implements Flushable {

    protected static $flush = false;

    // Called by the framework on flush; lets us force a resync.
    public static function flush() {
        static::$flush = true;
    }

    public function process_combined_files() {
        // You can write your own or copy the body from
        // framework/view/Requirements.php, then trigger the required
        // syncing (e.g. rsync) at the appropriate spot, such as after
        // a successful write of a combined file.
    }
}

Add Requirements::set_backend(new SyncRequirements_Backend()); to your _config.php (mine lives in a separate extension, but mysite will work too).

The issue with this solution is that if the core Requirements_Backend is updated you'll be running an older version of its code, though this is very unlikely to break anything: you've just implemented your own Requirements backend that uses the same code. You could call the parent instead of reimplementing it all yourself, but I couldn't find a way to run the sync only on file write; it would run every time a combined file was requested.
