distributed-transactions

Synchronising transactions between database and Kafka producer

Submitted by 点点圈 on 2019-12-03 05:04:26
Question: We have a microservices architecture, with Kafka used as the communication mechanism between the services. Some of the services have their own databases. Say the user makes a call to Service A, which should result in a record (or set of records) being created in that service’s database. Additionally, this event should be reported to other services, as an item on a Kafka topic. What is the best way of ensuring that the database record(s) are only written if the Kafka topic is successfully updated (essentially creating a distributed transaction around the database update and the Kafka update)?
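
One approach often discussed for this problem (shown here only as an illustration, not as the answer the asker settled on) is the transactional outbox pattern: write the business record and an "outbox" row in the same local database transaction, and let a separate relay publish outbox rows to Kafka afterwards. The sketch below uses plain JDBC; the table names, columns, and topic name are invented for the example.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderService {

    private final DataSource dataSource;

    public OrderService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Writes the business record and an outbox row in ONE local transaction.
    // A separate relay (poller or CDC tool) later publishes outbox rows to Kafka,
    // so the Kafka message is sent only if the database commit succeeded.
    public void createOrder(String orderId, String payloadJson) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement insertOrder = con.prepareStatement(
                         "INSERT INTO orders (id, payload) VALUES (?, ?)");
                 PreparedStatement insertOutbox = con.prepareStatement(
                         "INSERT INTO outbox (aggregate_id, topic, payload) VALUES (?, ?, ?)")) {
                insertOrder.setString(1, orderId);
                insertOrder.setString(2, payloadJson);
                insertOrder.executeUpdate();

                insertOutbox.setString(1, orderId);
                insertOutbox.setString(2, "orders-topic");
                insertOutbox.setString(3, payloadJson);
                insertOutbox.executeUpdate();

                con.commit();
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }
    }
}

The trade-off is at-least-once delivery rather than a true distributed transaction: consumers of the topic must tolerate duplicates.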

How does three-phase commit avoid blocking?

Submitted by 江枫思渺然 on 2019-12-03 01:38:05
I am trying to understand how three-phase commit avoids blocking. Consider the following two failure scenarios:
Scenario 1: In phase 2 the coordinator sends preCommit messages to all cohorts and has received an ack from all except cohort A. Network problems prevent cohort A from receiving the coordinator's preCommit message. Cohort A times out waiting for the preCommit message and chooses to abort. Then both the coordinator and cohort A crash.
Scenario 2: The protocol reaches phase 3. The coordinator sends a doCommit message to cohort A. But before it can send more doCommit messages the
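
For readers following the scenarios, it can help to picture the cohort side of the protocol as a small state machine. The sketch below is purely illustrative (names and structure are this example's own, and it does not resolve the asker's scenarios); it only encodes the textbook recovery heuristic that the question is probing.

// Illustrative only: cohort-side states of three-phase commit and the
// conventional recovery rule that the question's scenarios put under stress.
public final class ThreePhaseCommit {

    enum CohortState {
        INIT,          // no vote sent yet
        UNCERTAIN,     // voted yes (canCommit), waiting for preCommit
        PRE_COMMITTED, // received and acked preCommit, waiting for doCommit
        COMMITTED,
        ABORTED
    }

    // The usual non-blocking argument: a cohort that has NOT seen preCommit may
    // abort, because no cohort can have committed yet; a cohort that HAS seen
    // preCommit may proceed toward commit, because every cohort is known to have
    // voted yes. The question asks whether this holds under the crashes above.
    static CohortState recover(CohortState stateBeforeCrash) {
        switch (stateBeforeCrash) {
            case INIT:
            case UNCERTAIN:
                return CohortState.ABORTED;
            case PRE_COMMITTED:
                return CohortState.COMMITTED;
            default:
                return stateBeforeCrash; // already decided
        }
    }
}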

Best way to handle LOBs in Oracle distributed databases

Submitted by 左心房为你撑大大i on 2019-12-02 23:12:15
If you create an Oracle dblink you cannot directly access LOB columns in the target tables. For instance, you create a dblink with:

create database link TEST_LINK
  connect to TARGETUSER IDENTIFIED BY password
  using 'DATABASESID';

After this you can do stuff like:

select column_a, column_b
from data_user.sample_table@TEST_LINK

Except if the column is a LOB, then you get the error:

ORA-22992: cannot use LOB locators selected from remote tables

This is a documented restriction. The same page suggests you fetch the values into a local table, but that is... kind of messy:

CREATE TABLE tmp_hello AS
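
For completeness, the "copy into a local table" workaround the documentation suggests can also be driven from Java as two plain statements. This is only a sketch: the local staging table tmp_sample_table and the column names are invented for the example, and the TEST_LINK dblink is the one from the question.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RemoteLobFetch {

    // ORA-22992 prevents selecting LOB locators over a dblink directly, so this
    // copies the row (including the LOB contents) into a local table first and
    // then reads the LOB from the local copy.
    public static String readRemoteClob(Connection con, String id) throws SQLException {
        try (PreparedStatement copy = con.prepareStatement(
                "INSERT INTO tmp_sample_table (column_a, clob_column) " +
                "SELECT column_a, clob_column FROM data_user.sample_table@TEST_LINK WHERE column_a = ?")) {
            copy.setString(1, id);
            copy.executeUpdate();
        }
        try (PreparedStatement read = con.prepareStatement(
                "SELECT clob_column FROM tmp_sample_table WHERE column_a = ?")) {
            read.setString(1, id);
            try (ResultSet rs = read.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}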

Does the CAP theorem imply that ACID is not possible for distributed databases?

Submitted by ぃ、小莉子 on 2019-12-02 21:19:48
There are NoSQL ACID (distributed) databases, despite the CAP theorem. How is this possible? What is the relation between the CAP theorem and whether ACID is or is not achievable? It is impossible for a distributed computer system to simultaneously provide consistency, availability and partition tolerance. The CAP theorem is actually a bit misleading. The idea that you can have a CA design is nonsense, because when a partition occurs you necessarily have a problem with either consistency (a data synchronization issue, for example) or availability (latency). That's why there is a more accurate theorem stating that:

IBM MQManager as XA Transaction Manager with Spring-jms and Spring-tx

Submitted by 浪子不回头ぞ on 2019-12-02 17:05:10
Question: We are trying to use IBM MQ manager as an XA transaction manager with spring-jms and Spring transaction support. Does IBM MQ manager play well with spring-jta support?
Answer 1 (whitfiea): You can't use the WMQ JMS client (which is what spring-jms would use) with the MQ QueueManager acting as the XA transaction manager. The intention is that a JMS application would be controlled via a JTA-implemented transaction manager (i.e. a Java EE application server). You can however use the WMQ Java client (i.e. non-JMS) and have the MQ QueueManager act as the XA transaction manager (non-JTA). As @COLINHY said you can
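
As a concrete illustration of the "let a JTA transaction manager drive JMS" setup the answer describes, a Spring configuration along the following lines is common. This is a sketch under assumptions, not the asker's configuration: it presumes the IBM MQ JMS client and Atomikos are on the classpath, and the queue manager name, host, and channel are placeholders.

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import com.atomikos.jms.AtomikosConnectionFactoryBean;
import com.ibm.mq.jms.MQXAConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class MqXaConfig {

    // IBM MQ contributes an XA-capable JMS connection factory; the XA coordination
    // itself is done by an external JTA transaction manager (Atomikos here, or a
    // Java EE application server), not by the queue manager.
    @Bean
    public AtomikosConnectionFactoryBean connectionFactory() throws Exception {
        MQXAConnectionFactory mq = new MQXAConnectionFactory();
        mq.setQueueManager("QM1");              // placeholder queue manager name
        mq.setHostName("mq.example.com");       // placeholder host
        mq.setPort(1414);
        mq.setChannel("DEV.APP.SVRCONN");       // placeholder channel
        mq.setTransportType(com.ibm.msg.client.wmq.WMQConstants.WMQ_CM_CLIENT);

        AtomikosConnectionFactoryBean cf = new AtomikosConnectionFactoryBean();
        cf.setUniqueResourceName("ibmmq");
        cf.setXaConnectionFactory(mq);
        return cf;
    }

    @Bean
    public JtaTransactionManager transactionManager() {
        UserTransactionManager tm = new UserTransactionManager();
        UserTransactionImp ut = new UserTransactionImp();
        return new JtaTransactionManager(ut, tm);
    }

    @Bean
    public JmsTemplate jmsTemplate(AtomikosConnectionFactoryBean cf) {
        return new JmsTemplate(cf);
    }
}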

Spring batch and XA and local transactions

Submitted by ▼魔方 西西 on 2019-12-02 16:24:43
Question: Is it possible to have the jobRepository in Spring Batch use local transactions but execute particular job steps in a distributed XA transaction? For XA I use Atomikos 3.8.0. The step is supposed to read a JMS message and update the DB after processing. The relevant part of the Spring configuration:

<job id="job" xmlns="http://www.springframework.org/schema/batch">
  <step id="inventorySync">
    <tasklet transaction-manager="xaTransactionManager">
      <chunk reader="jmsQueueReader" processor=
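
One way to express the split the question asks about, shown here in Java config rather than the XML above, is to give the JobRepository its own local (non-XA) transaction manager and hand the XA transaction manager only to the step. This is a sketch with invented bean names; depending on the Spring Batch version, the custom JobRepository may need to be supplied through a BatchConfigurer rather than a plain @Bean.

import javax.sql.DataSource;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class BatchTransactionConfig {

    // Local (non-XA) transaction manager used only for the JobRepository's
    // metadata tables, so repository updates do not join the XA transaction.
    @Bean
    public PlatformTransactionManager batchTransactionManager(DataSource batchDataSource) {
        return new DataSourceTransactionManager(batchDataSource);
    }

    @Bean
    public JobRepository jobRepository(DataSource batchDataSource,
                                       PlatformTransactionManager batchTransactionManager) throws Exception {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(batchDataSource);
        factory.setTransactionManager(batchTransactionManager);
        factory.afterPropertiesSet();
        return factory.getObject();
    }

    // The step runs under the XA (JTA) transaction manager, so the JMS read and
    // the database write in each chunk commit or roll back together.
    @Bean
    public Step inventorySync(StepBuilderFactory steps,
                              JtaTransactionManager xaTransactionManager,
                              ItemReader<String> jmsQueueReader,
                              ItemProcessor<String, String> processor,
                              ItemWriter<String> writer) {
        return steps.get("inventorySync")
                .<String, String>chunk(10)
                .reader(jmsQueueReader)
                .processor(processor)
                .writer(writer)
                .transactionManager(xaTransactionManager)
                .build();
    }
}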

How to do distributed transaction coordination around SQL API and GraphDB in CosmosDB?

Submitted by 会有一股神秘感。 on 2019-12-02 08:30:10
I have a Customer container with items representing a single customer in the SQL API (DocumentDB) in CosmosDB. I also have a Gremlin API (GraphDB) with the customers' shopping cart data. Both of these data sets are temporary/transient. The customer can choose to clear the shopping cart, which will delete the temporary customer and the shopping cart data. Currently I make separate calls, one to the SQL API (DocumentDB) and one to the Gremlin API (GraphDB), which works, but I want to do both as a transaction (ACID principle). To delete a customer, I call the Gremlin API and delete the shopping cart data, then call the SQL API
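
Because the two deletes go through different Cosmos DB APIs, they cannot share a single database transaction, so a common fallback is a compensating, saga-style sequence: perform one delete, and if the second fails, restore the first. The sketch below deliberately uses hypothetical CartStore/CustomerStore interfaces instead of the real Gremlin and SQL API SDKs; only the compensation flow is the point.

// Hypothetical abstractions standing in for the Gremlin (graph) and
// SQL API (document) clients.
interface CartStore {
    CartSnapshot deleteCart(String customerId);          // returns what was deleted
    void restoreCart(String customerId, CartSnapshot s); // compensating action
}

interface CustomerStore {
    void deleteCustomer(String customerId);
}

final class CartSnapshot {
    final String customerId;
    final String payload;
    CartSnapshot(String customerId, String payload) {
        this.customerId = customerId;
        this.payload = payload;
    }
}

public class ClearShoppingCart {

    private final CartStore carts;
    private final CustomerStore customers;

    public ClearShoppingCart(CartStore carts, CustomerStore customers) {
        this.carts = carts;
        this.customers = customers;
    }

    // Not ACID: a best-effort saga. Delete the cart first, then the customer;
    // if the customer delete fails, put the cart data back so the two stores
    // do not drift apart permanently.
    public void clear(String customerId) {
        CartSnapshot snapshot = carts.deleteCart(customerId);
        try {
            customers.deleteCustomer(customerId);
        } catch (RuntimeException e) {
            carts.restoreCart(customerId, snapshot);
            throw e;
        }
    }
}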

Issue with configuring Atomikos on a Spring Boot / Spring Batch application

Submitted by 丶灬走出姿态 on 2019-12-02 01:56:39
Question: I am trying to get Atomikos to work with my Spring Boot / Spring Batch application. Here are the relevant portions of my code:

Datasource config:

@Configuration
public class DatasourceConfiguration extends AbstractCloudConfig {

    @Bean
    @Qualifier("batch_database")
    public DataSource batchDatasource() {
        return connectionFactory().dataSource("batch_database");
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new JtaTransactionManager();
    }

    @Bean
    public TaskConfigurer
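
One detail that frequently matters with a configuration like the one above: a bare new JtaTransactionManager() only works if a JTA implementation is already available (for example via JNDI or the spring-boot-starter-jta-atomikos auto-configuration). A common explicit wiring, shown here as an assumption rather than the fix to this particular question, hands the Atomikos UserTransaction and TransactionManager to Spring directly.

import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class AtomikosJtaConfig {

    // Atomikos' JTA TransactionManager: initialized eagerly and closed on shutdown.
    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager utm = new UserTransactionManager();
        utm.setForceShutdown(false);
        return utm;
    }

    // Atomikos' JTA UserTransaction facade.
    @Bean
    public UserTransactionImp atomikosUserTransaction() throws Exception {
        UserTransactionImp ut = new UserTransactionImp();
        ut.setTransactionTimeout(300);
        return ut;
    }

    // Spring's JTA adapter, pointed explicitly at the Atomikos objects instead of
    // relying on a JNDI lookup as a bare new JtaTransactionManager() would.
    @Bean
    public PlatformTransactionManager transactionManager(
            UserTransactionImp userTransaction, UserTransactionManager tm) {
        return new JtaTransactionManager(userTransaction, tm);
    }
}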