I have tried both s3cmd:
$ s3cmd -r -f -v del s3://my-versioned-bucket/
And the AWS CLI:
$ aws s3 rm s3://my-versioned-bucket/ --recursive
I ran into issues with Abe's solution: the list_buckets generator is used to build one massive in-memory list called all_keys, and after an hour it still hadn't completed. The tweak below deletes in chunks as it iterates instead, which seems to work much better for me; I had close to a million objects in my bucket and counting!
import boto

s3 = boto.connect_s3()
bucket = s3.get_bucket("your-bucket-name-here")

chunk_counter = 0  # purely informational, for the progress output
keys = []
for key in bucket.list_versions():  # iterates every object version and delete marker
    keys.append(key)
    if len(keys) > 1000:
        bucket.delete_keys(keys)  # multi-object delete of the accumulated chunk
        chunk_counter += 1
        keys = []
        print("Another 1000 done.... {n} chunks so far".format(n=chunk_counter))

if keys:
    bucket.delete_keys(keys)  # flush the final partial chunk

#bucket.delete()  # as per usual, uncomment if you're sure!
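If you're on boto3 rather than the legacy boto used above, the resource API can do the same job without manual chunking. A minimal sketch, assuming your credentials are already configured and the bucket name is a placeholder:

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("your-bucket-name-here")  # placeholder name

# Deletes every object version and delete marker; boto3 batches these
# into multi-object delete requests of up to 1000 keys for you.
bucket.object_versions.delete()

# bucket.delete()  # again, uncomment only if you're sure!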
Hopefully this helps anyone else encountering this S3 nightmare!