Hazelcast

Using Hazelcast, a distributed task-scheduling framework

孤街醉人 submitted on 2019-12-07 08:23:18
Hazelcast is a highly scalable data-distribution and clustering platform that provides distributed implementations of java.util.{Queue, Set, List, Map} and other features. It can be used as follows:

a. Run as a service: suppose Hazelcast is deployed on two servers, A and B, that can reach each other. Start the Hazelcast service on both machines as two server nodes; if one machine goes down, the other keeps serving, because both server nodes hold the same shared data.

cd /a/b/c/hazelcast/bin
nohup ./server.sh > server_node_1.log 2>&1 &
nohup ./server.sh > server_node_2.log 2>&1 &

The server.sh script, found in the bin directory of the source distribution:

#!/bin/sh
java -server -Xms1G -Xmx1G -Djava.net.preferIPv4Stack=true -cp ../lib/hazelcast-2.1.2.jar com.hazelcast.examples.StartServer

b. Monitoring setup (the Hazelcast project ships a WAR that monitors system usage):
1. Copy the WAR: the hazelcast-2.1.2 directory contains a mancenter.war; copy it into apache-tomcat-6.0.33\webapps.
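2. Point the members at the deployed Management Center. A sketch of the relevant hazelcast.xml fragment, assuming Tomcat runs on localhost:8080 (host and port are assumptions; adjust to your deployment):

```xml
<hazelcast>
  <!-- members push their statistics to the mancenter web app -->
  <management-center enabled="true">http://localhost:8080/mancenter</management-center>
</hazelcast>
```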

After upgrading to Spring Boot 2, how to expose cache metrics to Prometheus?

独自空忆成欢 submitted on 2019-12-07 06:40:13
Question: I recently upgraded a Spring Boot application from 1.5 to 2.0.1. I also migrated the Prometheus integration to the new Actuator approach using Micrometer. Most things work now, including some custom counters and gauges. I noticed that the new Prometheus endpoint /actuator/prometheus no longer publishes the Spring cache metrics (size and hit ratio). The only thing I could find was this issue and its related commit. Still I can't get cache metrics in the Prometheus export. I tried setting some
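In Spring Boot 2, Micrometer generally binds cache metrics only for caches the CacheManager knows about at startup, and for a Caffeine cache the hit ratio additionally requires statistics recording. A hedged sketch of the relevant application.properties (the cache name and size are made up for illustration):

```properties
# expose the Prometheus scrape endpoint
management.endpoints.web.exposure.include=prometheus
# declare caches up front so Spring can instrument them at startup
spring.cache.cache-names=books
# Caffeine only reports hits/misses when recordStats is enabled
spring.cache.caffeine.spec=maximumSize=500,recordStats
```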

HazelcastInstance vs HazelcastClient

痞子三分冷 submitted on 2019-12-06 21:48:59
Question: I am a novice with Hazelcast and I have a few questions. As I understand it, Hazelcast comes with two entities: HazelcastInstance (as I understand it, the server) and HazelcastClient. These entities are even packed into different jars. I have noticed that in our project we use only HazelcastInstance. I asked colleagues why we don't use HazelcastClient. As I understand their explanation, HazelcastInstance has more possibilities than HazelcastClient. Thus HazelcastInstance = HazelcastClient +
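The distinction shows up directly in code. A minimal sketch (package names as in Hazelcast 3.x; requires the hazelcast and hazelcast-client jars, so it will not run standalone): Hazelcast.newHazelcastInstance() starts a full cluster member that owns data partitions, while HazelcastClient.newHazelcastClient() connects a lightweight client that stores nothing and forwards operations to members. Both return a HazelcastInstance, so the API surface is the same.

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class MemberVsClient {
    public static void main(String[] args) {
        // A full member: joins the cluster and owns a share of the data partitions.
        HazelcastInstance member = Hazelcast.newHazelcastInstance(new Config());

        // A client: no data ownership, just a connection forwarding operations.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(new ClientConfig());

        member.getMap("demo").put("k", "v");
        System.out.println(client.getMap("demo").get("k")); // reads through the member

        client.shutdown();
        member.shutdown();
    }
}
```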

How to write client proxy for SPI and what the difference between client and server proxies?

烈酒焚心 submitted on 2019-12-06 14:23:26
Question: I have developed my own ID generator based on the Hazelcast IdGenerator class (storing each last_used_id in a database). Now I want to run the Hazelcast cluster as one standalone Java application and my web application as another (a web-application restart shouldn't move ID values to the next block). I moved MyIdGeneratorProxy and MyIdGeneratorService to the new application, ran it, ran the web application as a Hazelcast client, and got IllegalArgumentException: No factory registered for service: ecs:impl:idGeneratorService

No DataSerializeFactory registered for namespace

空扰寡人 submitted on 2019-12-06 07:30:48
I tried it in HZ 3.4 and 3.4.1, with the same output. I'm trying to import dummy data into my Hazelcast cluster with the following function:

HazelcastInstance cluster = HazelcastClient.newHazelcastClient(this.conf);
Map<String, Customer> mapCustomers = cluster.getMap("customers");
System.out.println(mapCustomers.size());
System.out.println("hello world");
for (int customerID = 0; customerID < 2000; customerID++) {
    Customer p = new Customer();
    mapCustomers.put(Integer.toString(customerID), p);
    System.out.println("inserted customer number " + Integer.toString(customerID));
}
cluster.shutdown();

when
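This exception usually means a Customer was deserialized on a side (client or member) where its IdentifiedDataSerializable factory was never registered. A sketch of registering the factory on the client as well; CustomerDataSerializableFactory, the factory id, and the type id are hypothetical and must match whatever the members use, and Customer is assumed to implement IdentifiedDataSerializable:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.nio.serialization.DataSerializableFactory;
import com.hazelcast.nio.serialization.IdentifiedDataSerializable;

public class ClientWithFactory {
    static final int FACTORY_ID = 1;   // must match the member-side registration
    static final int CUSTOMER_ID = 1;  // must match Customer.getId()

    // Hypothetical factory mapping type ids back to instances.
    static class CustomerDataSerializableFactory implements DataSerializableFactory {
        @Override
        public IdentifiedDataSerializable create(int typeId) {
            return typeId == CUSTOMER_ID ? new Customer() : null;
        }
    }

    public static void main(String[] args) {
        ClientConfig conf = new ClientConfig();
        conf.getSerializationConfig()
            .addDataSerializableFactory(FACTORY_ID, new CustomerDataSerializableFactory());
        HazelcastInstance cluster = HazelcastClient.newHazelcastClient(conf);
        // ... use cluster.getMap("customers") as before ...
    }
}
```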

load all implementation for hazelcast

落花浮王杯 submitted on 2019-12-06 00:58:44
I am trying to use a Hazelcast server across multiple nodes. I have implemented loadAll in the MapStore implementation. I am wondering whether this should be enabled on only one server node or on all of them. If I deploy the same on all nodes, wouldn't this create database read operations that should not be needed? If I need to deploy loadAll on only one node, what is the best strategy (code/API based or config based) to cleanly implement the scenario in which only one server node provides the loadAll implementation for the map store? I can always deploy different code on
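For what it's worth, in Hazelcast 3.x the MapStore does not need to be restricted to one node: each member loads only the keys it owns, so deploying the same implementation everywhere does not multiply the database reads. The usual approach is identical configuration on every member; a hedged hazelcast.xml sketch (map and class names are assumptions):

```xml
<map name="customers">
  <map-store enabled="true" initial-mode="LAZY">
    <!-- same implementation deployed on every member;
         each member loads only its own partitions' keys -->
    <class-name>com.example.CustomerMapStore</class-name>
  </map-store>
</map>
```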

Hazelcast prevents the JVM from terminating

纵饮孤独 submitted on 2019-12-06 00:36:35
We use Hazelcast 2.6.2 in a legacy Java clustered application. When the application is stopped, the JVM no longer terminates. It seems to be caused by Hazelcast threads not being flagged as daemon threads, and I did not find a way through the Hazelcast API to flag them as such. Are there recommended solutions to keep Hazelcast from blocking JVM termination? Regards

Looking at the Hazelcast Javadocs, I see that there is a shutdownAll() method. To quote the Javadocs: "Shuts down all running Hazelcast Instances on this JVM, including the default one if it is running." It doesn't shutdown
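One common workaround (a sketch, not from the original answer) is to call Hazelcast.shutdownAll() from the application's stop routine or a JVM shutdown hook, so the non-daemon Hazelcast threads are stopped explicitly; written here in the pre-Java-8 style the 2.6.2 era implies:

```java
import com.hazelcast.core.Hazelcast;

public class ShutdownHookExample {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                // Stops every Hazelcast instance created in this JVM,
                // letting its threads (and hence the JVM) terminate.
                Hazelcast.shutdownAll();
            }
        });
        // ... application code ...
    }
}
```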

Is Hazelcast Client thread safe?

僤鯓⒐⒋嵵緔 submitted on 2019-12-05 18:18:24
I cannot find this in the docs or Javadocs: do I need to create one client per thread, or is a client created by

client = HazelcastClient.newHazelcastClient(cfg);

thread-safe?

The client is thread-safe. So is, for example, an IMap obtained from it:

HazelcastInstance client = HazelcastClient.newHazelcastClient(cfg);
IMap map = client.getMap("map");

So you can share this client instance with all your threads in the JVM.

Source: https://stackoverflow.com/questions/25567894/is-hazelcast-client-thread-safe

Hazelcast map synchronization

隐身守侯 submitted on 2019-12-05 17:20:40
I am trying to implement a distributed cache using Hazelcast in my application, using Hazelcast's IMap. The problem I have is that every time I get a value from the map and update it, I need to do a put(key, value) again. If my value object has 10 properties and I have to update all 10, then I have to call put(key, value) 10 times. Something like:

IMap<Integer, Employee> mapEmployees = hz.getMap("employees");
Employee emp1 = mapEmployees.get(100);
emp1.setAge(30);
mapEmployees.put(100, emp1);
emp1.setSex("F");
mapEmployees.put(100, emp1);
emp1.setSalary(5000);
mapEmployees.put(100, emp1);
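Two common alternatives (sketches, not from the question; Employee is the question's own class): mutate the local copy and write it back with a single put, or, in Hazelcast 3.x, ship the whole update to the entry's owner with an EntryProcessor, giving one network hop and no read-modify-write race:

```java
import java.util.Map;
import com.hazelcast.core.IMap;
import com.hazelcast.map.AbstractEntryProcessor;

public class UpdateOnce {
    static void update(IMap<Integer, Employee> mapEmployees) {
        // Option 1: apply all changes locally, then a single put.
        Employee emp = mapEmployees.get(100);
        emp.setAge(30);
        emp.setSex("F");
        emp.setSalary(5000);
        mapEmployees.put(100, emp);

        // Option 2 (Hazelcast 3.x): run the update where the entry lives.
        mapEmployees.executeOnKey(100, new AbstractEntryProcessor<Integer, Employee>() {
            @Override
            public Object process(Map.Entry<Integer, Employee> entry) {
                Employee e = entry.getValue();
                e.setAge(30);
                e.setSex("F");
                e.setSalary(5000);
                entry.setValue(e); // setValue is what persists the change
                return null;
            }
        });
    }
}
```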

In Hazelcast, is it possible to use clustered locks that do _not_ care about the local thread that performs the lock/unlock operations?

吃可爱长大的小学妹 submitted on 2019-12-05 14:19:50
Hazelcast locks (such as http://www.hazelcast.com/docs/1.9.4/manual/multi_html/ch02s07.html), as I understand it, behave the same way as the Java concurrency primitives, but across the cluster. This makes it possible to use them to synchronize between threads in the local process as well as across the cluster. However, is there any way I can opt out of this behaviour? In my current project, I need a way to coordinate unique ownership of a resource across the cluster, but I want to acquire and release this ownership from multiple points in my application. Can I do this in some way that does not involve
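One way to opt out of thread ownership (a sketch, not from the question; the semaphore name is made up) is a Hazelcast ISemaphore: unlike a lock, its permits are not bound to the acquiring thread, so a semaphore initialized with one permit can model cluster-wide exclusive ownership that is acquired in one place and released in another:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISemaphore;

public class OwnershipSemaphore {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ISemaphore ownership = hz.getSemaphore("resource-owner");
        ownership.init(1);   // one permit = at most one owner cluster-wide

        ownership.acquire(); // take ownership here...
        // ...any thread (or member) holding a reference may hand it back:
        ownership.release();

        hz.shutdown();
    }
}
```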