datastax

DataStax Agent error

戏子无情 submitted on 2019-12-13 07:42:20

Question: While adding an existing cluster in OpsCenter I receive an error:

    ERROR: Agent for XXX.XXX.XXX.XXX was unable to complete operation
    (http://XXX.XXX.XXX.XXX:61621/snapshots/pit/properties?):
    java.lang.IllegalArgumentException: No implementation of method: :make-reader
    of protocol: #'clojure.java.io/IOFactory found for class: nil

On the agent there is an error:

    java.lang.IllegalArgumentException: No implementation of method: :make-reader
    of protocol: #'clojure.java.io/IOFactory found for class: nil at

Add a new datacenter during DataStax upgrade from 4.8.8 to 5.0.2

岁酱吖の submitted on 2019-12-13 06:49:03

Question: I have multiple datacenters: one is a Cassandra datacenter and the other is a Solr datacenter. I have already started the upgrade process. One node is still upgrading, because its "upgradesstables" command has been running for 4 days. I want to add a new Cassandra datacenter, and I don't have time to wait for the upgrade process to finish. Can I add a new Cassandra datacenter with version 5.0.2 while the upgrade process is going on?

Answer 1: Although you can run a cluster in a partially upgraded state, it is a transient

Am I using cassandra efficiently?

China☆狼群 submitted on 2019-12-13 05:51:31

Question: I have these tables:

    CREATE TABLE user_info (
        userId uuid PRIMARY KEY,
        userName varchar,
        fullName varchar,
        sex varchar,
        bizzCateg varchar,
        userType varchar,
        about text,
        joined bigint,
        contact text,
        job set<text>,
        blocked boolean,
        emails set<text>,
        websites set<text>,
        professionTag set<text>,
        location frozen<location>
    );

    create table publishMsg (
        rowKey uuid,
        msgId timeuuid,
        postedById uuid,
        title text,
        time bigint,
        details text,
        tags set<text>,
        location frozen<location>,
        blocked boolean,
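The usual efficiency question with schemas like these is whether each table matches one query. As a hedged sketch (the table name `publishMsg_by_author` and column choices are hypothetical, not from the question), a query-first companion table serving "latest messages by a given author" would look like:

```sql
-- Hypothetical sketch: a denormalized, query-first table so that
-- "latest messages by author" is a single-partition read, with no
-- secondary index or full scan. Names are illustrative only.
CREATE TABLE publishMsg_by_author (
    postedById uuid,        -- partition key: one partition per author
    msgId      timeuuid,    -- clustering column: time-ordered message id
    title      text,
    details    text,
    PRIMARY KEY ((postedById), msgId)
) WITH CLUSTERING ORDER BY (msgId DESC);

-- Served efficiently by:
--   SELECT title, details FROM publishMsg_by_author
--   WHERE postedById = ? LIMIT 20;
```

The trade-off is duplicating message data per access pattern, which is idiomatic in Cassandra data modeling.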

Setting query-level consistency for Cassandra using Spring Data

假如想象 submitted on 2019-12-13 03:51:26

Question: We are setting the consistency level LOCAL_QUORUM as the default. We set this at cluster-building time in our config:

    Cluster.builder()
        .addContactPoints(environment.getProperty("cassandra.contact-points").split(","))
        .withPort(port)
        .withQueryOptions(new QueryOptions()
            .setConsistencyLevel(ConsistencyLevel.valueOf(environment.getProperty("cassandra.consistency-level")))
            .setSerialConsistencyLevel(ConsistencyLevel.valueOf(environment.getProperty("cassandra.consistency-level"))))

How to perform accumulated avg for multiple companies using spark based on the results stored in Cassandra?

时光毁灭记忆、已成空白 submitted on 2019-12-13 03:49:54

Question: I need to get the avg and count for a given dataframe, and fetch the previously stored avg and count for each company from a Cassandra table. Then I need to calculate the combined avg and count and persist them back into the Cassandra table. How can I do this for each company? I have two dataframe schemas, as below:

    ingested_df
     |-- company_id: string (nullable = true)
     |-- max_dd: date (nullable = true)
     |-- min_dd: date (nullable = true)
     |-- mean: double (nullable = true)
     |-- count: long (nullable = false)

    cassandra
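Whatever the Spark plumbing looks like, the core of merging a fresh (mean, count) pair with a previously stored one is a weighted average. A minimal, framework-free sketch (the function name and the use of plain Python rather than Spark are my own assumptions for illustration):

```python
def combine_mean_count(prev_mean, prev_count, new_mean, new_count):
    """Merge two (mean, count) summaries into one.

    The combined mean weights each partial mean by its row count,
    which is exactly what re-averaging the union of rows would give.
    """
    total = prev_count + new_count
    if total == 0:
        return 0.0, 0
    combined_mean = (prev_mean * prev_count + new_mean * new_count) / total
    return combined_mean, total

# Example: stored (mean=10.0 over 8 rows) merged with fresh (mean=20.0 over 2 rows)
# -> (12.0, 10)
```

In a Spark job this computation would typically run per `company_id` after joining the ingested dataframe with the rows read back from Cassandra.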

Cassandra: Writes after setting a column to null are lost randomly. Is this a bug, or am I doing something wrong?

北城以北 submitted on 2019-12-13 02:18:58

Question:

    @Test
    public void testWriteUpdateRead() throws Exception {
        Cluster cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")
            .build();
        Session cs = cluster.connect();
        cs.execute("DROP KEYSPACE if exists readtest;");
        cs.execute("CREATE KEYSPACE readtest WITH replication "
            + "= {'class':'SimpleStrategy', 'replication_factor':1};");
        cs.execute("create table readtest.sessions("
            + "id text primary key,"
            + "passwordHash text"
            + ");");
        for (int i = 0; i < 1000; i++) {
            String sessionID = UUID

executeGraph() is not really needed in DataStax DSE 5.0 Graph with Java?

人走茶凉 submitted on 2019-12-12 20:01:58

Question: It seems that in both approaches the vertex is stored and can be retrieved properly later.

Common configuration:

    DseCluster dseCluster = DseCluster.builder()
        .addContactPoint("192.168.1.43")
        .build();
    DseSession dseSession = dseCluster.connect();
    GraphTraversalSource g = DseGraph.traversal(
        dseSession,
        new GraphOptions().setGraphName("graph")
    );

Approach 1:

    Vertex v = g.addV("User").property("uuid","testuuid231").next();

Approach 2:

    GraphStatement graphStatement = DseGraph

Can't get Cassandra remote access on vagrant

≯℡__Kan透↙ submitted on 2019-12-12 13:17:47

Question: I am using Vagrant/Puppet to configure a VM with Apache Cassandra. Local access (via cqlsh) works, but not remote access. Here is my Vagrantfile:

    # -*- mode: ruby -*-
    # vi: set ft=ruby :
    Vagrant.configure("2") do |config|
      config.vm.box = 'ubuntu/trusty32'
      config.vm.define "dev" do |dev|
        dev.vm.hostname = "devbox"
        dev.vm.network :private_network, ip: "192.168.10.200"
      end
      config.vm.provision "puppet" do |puppet|
        puppet.module_path = "puppet/modules"
        puppet.manifests_path = "puppet/manifests"
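Remote access to Cassandra usually hinges on the bind addresses in cassandra.yaml rather than on Vagrant itself. As a hedged sketch (the IP below is taken from the Vagrantfile's private network; that it matches the interface Cassandra should use is an assumption), the relevant settings look like:

```yaml
# cassandra.yaml (fragment) -- bind the client-facing transport so it is
# reachable from outside the VM, not only on localhost.
listen_address: 192.168.10.200         # inter-node traffic on the private network
rpc_address: 0.0.0.0                   # accept client connections on all interfaces
broadcast_rpc_address: 192.168.10.200  # address advertised to clients;
                                       # required whenever rpc_address is 0.0.0.0
```

After changing these, Cassandra must be restarted, and the Puppet module managing cassandra.yaml should template these values rather than leaving the defaults.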

User Defined Type (UDT) behavior in Cassandra

不羁的心 submitted on 2019-12-12 09:19:05

Question: If someone has experience using UDTs (User Defined Types), I would like to understand how backward compatibility works. Say I have the following UDT:

    CREATE TYPE addr (
        street1 text,
        zip text,
        state text
    );

If I modify the "addr" UDT to have a couple more attributes (say, zip_code2 int and name text):

    CREATE TYPE addr (
        street1 text,
        zip text,
        state text,
        zip_code2 int,
        name text
    );

How do the older rows that do not have these attributes work? Is it even compatible?
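For context, an existing UDT is normally evolved in place with ALTER TYPE rather than re-created; CREATE TYPE on a name that already exists fails. A minimal sketch, assuming the `addr` type from the question already exists:

```sql
-- Add the new fields one statement at a time; ALTER TYPE adds a
-- single field per statement.
ALTER TYPE addr ADD zip_code2 int;
ALTER TYPE addr ADD name text;

-- Rows written before the ALTER simply return null for the new
-- fields when read back; the existing data needs no rewrite.
```

This is what makes the change backward compatible: adding fields is allowed, while removing or changing the type of an existing field is far more restricted.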

How to keep 2 Cassandra tables within the same partition

北战南征 submitted on 2019-12-12 07:59:26

Question: I tried reading DataStax blogs and documentation but could not find anything specific on this. Is there a way to keep 2 tables in Cassandra within the same partition? For example:

    CREATE TYPE addr (
        street_address1 text,
        city text,
        state text,
        country text,
        zip_code text
    );

    CREATE TABLE foo (
        account_id timeuuid,
        data text,
        site_id int,
        PRIMARY KEY (account_id)
    );

    CREATE TABLE bar (
        account_id timeuuid,
        address_id int,
        address frozen<addr>,
        PRIMARY KEY (account_id, address_id)
    );

Here I
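Cassandra has no cross-table partition grouping, but tables in the same keyspace that share the same partition key value hash to the same token and therefore live on the same replica nodes. A hedged sketch of what that buys you, reusing the `foo` and `bar` schemas from the question:

```sql
-- Both partitions for a given account_id hash to the same token, so
-- they are stored on the same replica set. Each query below is a
-- single-partition read served by that same set of nodes.
SELECT data, site_id          FROM foo WHERE account_id = ?;
SELECT address_id, address    FROM bar WHERE account_id = ?;
```

They remain two separate partitions in two separate tables, though; if the two reads must be atomic or truly one partition, the usual alternative is to merge them into a single table keyed by `account_id`.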