tombstone

Overwrite a row in Cassandra with INSERT, will it cause a tombstone?

Submitted by 心不动则不痛 on 2019-12-03 12:22:22
Writing data to Cassandra without creating tombstones is vital in our case, due to the volume of data and the speed required. Until now we have only ever written a row once and never needed to update it again, only to read it back. Now we have a case where we need to write data and later complete it with more data that becomes available after a while. This can be done in one of two ways: overwrite all of the data in the row again using INSERT (all of the data is available), or perform an UPDATE with only the new data. What is the best way to do it, bearing in mind speed and not…
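The distinction the question is after: in Cassandra, overwriting a row with a newer INSERT does not by itself create tombstones; the new cells simply shadow the old ones, which are discarded at compaction. Tombstones come from DELETEs, TTL expiry, explicitly bound NULL values, and full overwrites of collection columns. A minimal CQL sketch (the `events` table and its columns are hypothetical, made up for illustration):

```sql
-- Hypothetical table for illustration.
CREATE TABLE events (
    id uuid PRIMARY KEY,
    payload text,
    status text
);

-- Overwriting with INSERT does NOT create a tombstone as long as
-- every bound value is non-null: the newer cells shadow the older
-- ones, which are dropped at compaction.
INSERT INTO events (id, payload, status)
VALUES (123e4567-e89b-12d3-a456-426655440000, 'full data', 'complete');

-- This DOES create a tombstone: an explicit NULL is written as a
-- deletion marker for that column.
INSERT INTO events (id, payload, status)
VALUES (123e4567-e89b-12d3-a456-426655440000, 'full data', null);

-- An UPDATE that touches only the new, non-null columns writes
-- fewer cells and avoids the NULL pitfall entirely.
UPDATE events
SET status = 'complete'
WHERE id = 123e4567-e89b-12d3-a456-426655440000;
```

The NULL case is a common trap with prepared statements: binding null (or, in older drivers, leaving a parameter unset) for a column you did not intend to touch writes a tombstone for it.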

What exactly happens when the tombstone limit is reached

Submitted by 我的未来我决定 on 2019-12-03 05:43:59
Question: According to Cassandra's log (see below), queries are being aborted because too many tombstones are present. This is happening because once a week I clean up (delete) rows whose counter is too low. This 'deletes' hundreds of thousands of rows (i.e. marks them as such with a tombstone). It is not a problem at all if a deleted row re-appears in this table because a node was down during the cleanup process, so I set the gc grace time for the single affected table to 10 hours (down from…
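The abort the log describes is governed by two per-query limits in cassandra.yaml: past the warn threshold the query is logged, and past the failure threshold it is aborted with a TombstoneOverwhelmingException. A sketch of the relevant settings, with their stock defaults (raising them only hides the cost of scanning over tombstones; it does not remove it):

```yaml
# cassandra.yaml -- tombstone scan limits per query (defaults shown)
tombstone_warn_threshold: 1000       # log a warning when a query scans this many tombstones
tombstone_failure_threshold: 100000  # abort the query (TombstoneOverwhelmingException)
```

The gc grace period mentioned in the question is set per table, e.g. `ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 36000;` for 10 hours (the keyspace and table names here are placeholders).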

Can I force cleanup of old tombstones?

Submitted by 荒凉一梦 on 2019-12-01 02:54:11
I have recently lowered gc_grace_seconds for a CQL table. I am running LeveledCompactionStrategy. Is it possible for me to force purging of old tombstones from my SSTables? TL;DR: Your tombstones will disappear on their own through compaction, but make sure you are running repair or they may come back from the dead. http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html Adding some more details: tombstones are not eligible for removal until both: 1) gc_grace_seconds has expired, and 2) they meet the requirements configured in the tombstone compaction sub…
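The tombstone compaction sub-properties the answer refers to can be tuned per table to make single-SSTable tombstone compactions trigger sooner. A hedged sketch (the keyspace/table names are placeholders and the values are illustrative, not recommendations):

```sql
-- Make tombstone-driven single-SSTable compactions more aggressive.
ALTER TABLE my_keyspace.my_table
WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'tombstone_threshold': '0.1',             -- ratio of tombstones that makes an SSTable a candidate (default 0.2)
    'tombstone_compaction_interval': '3600',  -- seconds since SSTable creation before it is considered (default 86400)
    'unchecked_tombstone_compaction': 'true'  -- run the compaction even if overlapping SSTables would normally block it (default false)
};
```

Beyond that, `nodetool compact <keyspace> <table>` forces a compaction immediately, and newer Cassandra versions (3.10+) add `nodetool garbagecollect` specifically for removing droppable tombstones; neither removes tombstones still inside gc_grace_seconds.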