How can I process my payload to insert bulk data into multiple tables with atomicity/consistency in Cassandra?

Submitted by 早过忘川 on 2020-03-05 05:05:08

Question


I have to design a database for customers holding prices for millions of materials, acquired through multiple suppliers, for the next 24 months. The database will store daily prices for every material supplied by a specific supplier over that 24-month window. I have multiple use cases to solve, so I created multiple tables, each modeled to serve one use case as well as possible. Data will be inserted into these tables regularly in large chunks (say 1k items at a time), and the insert must be consistent: the data should land in all of the tables or in none of them. A partial insert should be flagged as a "failure" for further action. How can I solve this effectively in Cassandra?

One option I can think of is to use many small BATCH statements (e.g. 1k batches for 1k items). However, each batch may hit multiple partitions, since the inserts go to different tables with different sets of primary keys.
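As a sketch of the batching idea above: split the payload into small chunks and wrap each chunk's inserts (one per table) in a logged batch, so each chunk is applied atomically. The table names, column names, and chunk size below are illustrative assumptions, not from the question; real code should use the driver's prepared statements and `BatchStatement` rather than building CQL strings.

```python
# Hypothetical sketch: split a bulk payload into small logged batches,
# each writing the same row into both query tables. Table/column names
# (prices_by_material, prices_by_supplier) are assumptions for
# illustration only.

def chunk(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def build_batch_cql(rows):
    """Render one LOGGED BATCH statement inserting each row into both
    denormalized tables. (In production, bind prepared statements
    instead of interpolating values into CQL text.)"""
    stmts = []
    for material, supplier, day, price in rows:
        stmts.append(
            f"INSERT INTO prices_by_material (material, supplier, day, price) "
            f"VALUES ('{material}', '{supplier}', '{day}', {price});"
        )
        stmts.append(
            f"INSERT INTO prices_by_supplier (supplier, material, day, price) "
            f"VALUES ('{supplier}', '{material}', '{day}', {price});"
        )
    return "BEGIN BATCH\n  " + "\n  ".join(stmts) + "\nAPPLY BATCH;"

payload = [("mat-1", "sup-A", "2020-03-05", 9.99),
           ("mat-2", "sup-A", "2020-03-05", 4.50)]
for rows in chunk(payload, 25):   # keep each batch small
    cql = build_batch_cql(rows)
    # session.execute(cql)        # run with a real driver session
```

Keeping each batch small matters because logged batches that span many partitions put extra load on the coordinator; the logged batch guarantees the inserts within one batch eventually all apply or none do.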

Any thoughts? Thanks.


Answer 1:


From the database (Cassandra) side, there are many things to consider from a data-modeling point of view. Go through the data-modeling details and the BATCH documentation at the links below:
https://docs.datastax.com/en/dse/6.0/cql/cql/ddl/dataModelingCQLTOC.html
https://docs.datastax.com/en/dse/6.0/cql/cql/cql_reference/cql_commands/cqlBatch.html
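As a sketch of the query-first modeling those links describe: each use case gets its own table, and the same price row is denormalized into all of them. The table and column names below are illustrative assumptions for the pricing scenario in the question, not a confirmed schema.

```cql
-- One table per query pattern; the same data is written to both.
-- Lookup by material + supplier:
CREATE TABLE prices_by_material (
    material  text,
    supplier  text,
    day       date,
    price     decimal,
    PRIMARY KEY ((material, supplier), day)
);

-- Lookup of all materials for a supplier:
CREATE TABLE prices_by_supplier (
    supplier  text,
    material  text,
    day       date,
    price     decimal,
    PRIMARY KEY ((supplier), material, day)
);
```

Because the primary keys differ, the same logical row lands on different partitions in each table, which is why a multi-table insert here necessarily spans partitions.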

Also, depending on the nature of the application, you should choose a compaction strategy suited to your write- or read-heavy workload.
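For example, a write-heavy table of time-bucketed daily prices is often a candidate for TimeWindowCompactionStrategy rather than the default SizeTieredCompactionStrategy. The schema and window settings below are illustrative only:

```cql
-- Illustrative: group SSTables into 30-day windows so old price
-- data compacts together and can expire efficiently.
CREATE TABLE prices_by_material (
    material  text,
    supplier  text,
    day       date,
    price     decimal,
    PRIMARY KEY ((material, supplier), day)
) WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': 30
};
```

For read-heavy tables with frequent overwrites, LeveledCompactionStrategy is the usual alternative to consider.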



Source: https://stackoverflow.com/questions/60274789/how-i-can-process-my-payload-to-insert-bulk-data-in-multiple-tables-with-atomici
