What's the recommended way to implement a simple locking mechanism to be used in conjunction with S3?
Example of what I want to do:
- acquire lock by object id
- read object from S3
- modify data
- write object to S3
- release lock
Ideally I'm looking for a cloud-based locking mechanism. I could use memcached locally, but then I'd have to deal with scaling it. I don't see an obvious way to implement lightweight locking with any of the AWS APIs, but it seems like a common problem.
I wonder if you could use SimpleDB to do an atomic acquire lock operation. Has anyone tried that?
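SimpleDB's conditional put can in fact serve as an atomic acquire: write a lock item only if no one else holds it. The sketch below models that semantics with an in-memory stand-in (the names `SimpleDBStub`, `conditional_put`, `acquire_lock`, and `release_lock` are my own, not an AWS API); against real SimpleDB you would use boto's `put_attributes(..., expected_value=['owner', False])`, which raises a `ConditionalCheckFailed` error when another client already owns the lock.

```python
import threading

class SimpleDBStub:
    """In-memory stand-in for a SimpleDB domain with conditional puts."""
    def __init__(self):
        self._items = {}
        self._mutex = threading.Lock()  # models SimpleDB's server-side atomicity

    def conditional_put(self, item_name, attrs, expected_absent):
        """Write attrs only if attribute `expected_absent` is not already set.
        Returns True on success, False when the condition fails (lock held)."""
        with self._mutex:
            item = self._items.get(item_name, {})
            if expected_absent in item:
                return False  # ConditionalCheckFailed in real SimpleDB
            item.update(attrs)
            self._items[item_name] = item
            return True

    def delete(self, item_name):
        with self._mutex:
            self._items.pop(item_name, None)

def acquire_lock(db, object_id, owner):
    """Atomically claim the lock item for object_id; True iff we got it."""
    return db.conditional_put('lock-' + object_id, {'owner': owner}, 'owner')

def release_lock(db, object_id):
    db.delete('lock-' + object_id)
```

Usage: the first `acquire_lock(db, 'obj1', 'worker-1')` returns `True`; a second caller gets `False` until `release_lock(db, 'obj1')` runs. A real implementation would also want a lease/timeout attribute so a crashed holder doesn't leave the lock stuck forever.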
Ok, I spent some time this morning playing with boto and I think I have a solution that works using SimpleDB. You need the latest boto release so that conditional puts and consistent reads are supported.
Example code here: http://pastebin.com/3XzhPqfY
Please post comments/suggestions. I believe this code should be fairly safe -- my test in main() tries it with 10 threads.
One thing I haven't addressed is that S3 reads are only eventually consistent (right?), so in theory a thread may be operating on a stale copy of the S3 object. It looks like there may be a workaround for that as described here:
http://www.shlomoswidler.com/2009/12/read-after-write-consistency-in-amazon.html
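I can't vouch for the pastebin code, but the overall pattern it describes, spin on the lock, then do the read-modify-write, then release, can be sketched like this. The `FakeLockService` class and the dict standing in for the S3 object are hypothetical stand-ins of my own so the example runs locally; the real version would use SimpleDB conditional puts for `try_acquire` and boto S3 calls for the read/write.

```python
import random
import threading
import time

class FakeLockService:
    """Stand-in for the SimpleDB-backed lock (hypothetical, not boto)."""
    def __init__(self):
        self._held = set()
        self._mutex = threading.Lock()  # models SimpleDB's atomic conditional put

    def try_acquire(self, object_id):
        with self._mutex:
            if object_id in self._held:
                return False
            self._held.add(object_id)
            return True

    def release(self, object_id):
        with self._mutex:
            self._held.discard(object_id)

store = {'counter': 0}   # stand-in for the S3 object
locks = FakeLockService()

def read_modify_write(object_id):
    # Spin with a small randomized backoff until the lock is ours.
    while not locks.try_acquire(object_id):
        time.sleep(random.uniform(0.001, 0.01))
    try:
        value = store[object_id]        # "read object from S3"
        store[object_id] = value + 1    # "modify data" then "write object to S3"
    finally:
        locks.release(object_id)        # "release lock"

# Mirror the answer's test: 10 threads contending for the same object.
threads = [threading.Thread(target=read_modify_write, args=('counter',))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store['counter'])  # 10 -- every increment survives under the lock
```

Without the lock, two threads could read the same value and one increment would be lost; with it, all ten updates land.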
I don't think you can do this using S3 alone; using SimpleDB's consistency enhancements, as James said, is a good approach that works.
You can find some examples here: Amazon SimpleDB Consistency Enhancements
Another approach that might work is to use S3's versioning feature:
store an object id/version id pair in SimpleDB as the most "valid" version,
ensure that all GET requests retrieve that version,
and after a successful PUT of a modified object, update the version id in SimpleDB.
This way you can also retrieve previous versions of an object to restore from if needed.
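The versioning approach above can be sketched as follows. `VersionedStore` and `VersionPointer` are toy in-memory stand-ins of my own (not boto); with real AWS you would PUT to a versioning-enabled bucket, record the returned version id in SimpleDB, and read with boto's `bucket.get_key(name, version_id=...)`.

```python
import itertools

class VersionedStore:
    """Toy stand-in for an S3 bucket with versioning enabled."""
    def __init__(self):
        self._versions = {}             # (key, version_id) -> data
        self._seq = itertools.count(1)

    def put(self, key, data):
        """Store a new version and return its version id, as S3 would."""
        version_id = str(next(self._seq))
        self._versions[(key, version_id)] = data
        return version_id

    def get(self, key, version_id):
        return self._versions[(key, version_id)]

class VersionPointer:
    """Stand-in for the SimpleDB item mapping object id -> 'valid' version id."""
    def __init__(self):
        self._current = {}

    def get(self, key):
        return self._current.get(key)

    def set(self, key, version_id):
        self._current[key] = version_id

store, pointer = VersionedStore(), VersionPointer()

# Initial write: PUT the object, then record its version id as the valid one.
v1 = store.put('obj1', b'original')
pointer.set('obj1', v1)

# Every GET goes through the pointer, so readers always see the blessed version.
assert store.get('obj1', pointer.get('obj1')) == b'original'

# A modification PUTs a new version, then flips the pointer only on success.
v2 = store.put('obj1', b'modified')
pointer.set('obj1', v2)
assert store.get('obj1', pointer.get('obj1')) == b'modified'

# Older versions stay retrievable for restore:
assert store.get('obj1', v1) == b'original'
```

Note that flipping the pointer still needs to be atomic if multiple writers race, so in practice you'd combine this with a SimpleDB conditional put on the version id (expect the old value, write the new one).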
Source: https://stackoverflow.com/questions/3431418/locking-with-s3