snappydata

How can I get external table jdbc url in SnappyData

Submitted on 2021-01-29 01:37:37
Question: Previously I created an external table in SnappyData like this:

    create external table EXT_DIM_CITY using jdbc options(
      url 'jdbc:mysql://***:5002/***?user=***&password=***',
      driver 'com.mysql.jdbc.Driver',
      dbtable 'dim_city');

But now I have forgotten the MySQL JDBC URL that EXT_DIM_CITY refers to. How can I get the JDBC URL back from SnappyData?

Answer 1: With the latest SnappyData release (1.0.2.1), all table properties can be seen with an extended describe:

    describe extended EXT_DIM_CITY

The properties will be shown in the output.
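The same check can be scripted. A minimal sketch using the Scala API, assuming a running cluster and an existing SparkContext named sc (the session setup here is illustrative, not part of the answer above):

    import org.apache.spark.sql.SnappySession

    val snappy = new SnappySession(sc)  // sc: the existing SparkContext
    // DESCRIBE EXTENDED returns rows of (col_name, data_type, comment); the
    // extended section at the end carries the table's storage properties,
    // including the jdbc url the table was created with.
    snappy.sql("DESCRIBE EXTENDED EXT_DIM_CITY").show(100, truncate = false)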

Refresh Dataframe in Spark real-time Streaming without stopping process

Submitted on 2019-12-14 03:53:23
Question: In my application I get a stream of accounts from a Kafka queue (using Spark Streaming with Kafka), and I need to fetch attributes related to these accounts from S3. I'm planning to cache the resulting S3 DataFrame, since the S3 data will not be updated more than once a day for now, though that might soon change to every hour or even every 10 minutes. So the question is: how can I refresh the cached DataFrame periodically without stopping the process?

Update: I'm planning to publish an event into Kafka whenever there is an
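One common pattern for this (a sketch, not the asker's final solution): hold the cached DataFrame behind a mutable reference and swap it atomically when a refresh is due, either on a timer or when the "data changed" Kafka event arrives. The path name s3Path is hypothetical:

    import org.apache.spark.sql.{DataFrame, SparkSession}

    object S3AttributeCache {
      @volatile private var cached: Option[DataFrame] = None

      // Returns the cached attributes, loading them on first use.
      def get(spark: SparkSession, s3Path: String): DataFrame = synchronized {
        cached.getOrElse {
          val df = spark.read.parquet(s3Path).cache()
          df.count() // force materialization so streaming batches don't pay for it
          cached = Some(df)
          df
        }
      }

      // Call this periodically, or from the handler of the refresh event.
      def refresh(spark: SparkSession, s3Path: String): Unit = synchronized {
        val fresh = spark.read.parquet(s3Path).cache()
        fresh.count()
        cached.foreach(_.unpersist()) // release the stale copy
        cached = Some(fresh)
      }
    }

Loading and counting the new DataFrame before unpersisting the old one keeps a valid copy available to in-flight batches throughout the swap.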

SnappyData snappy-sql PUT INTO cause error:spark.sql.execution.id is already set

Submitted on 2019-12-11 04:25:40
Question: I was using the SnappyData SQL shell (snappy-sql) and running SQL statements (PUT INTO) when I ran into this error:

    ERROR 38000: (SQLState=38000 Severity=20000) (Server=localhost/127.0.0.1[1528] Thread=pool-3-thread-3)
    The exception 'com.pivotal.gemfirexd.internal.engine.jdbc.GemFireXDRuntimeException: myID: s4-03(19442)<v1>:43452,
    caused by java.lang.IllegalArgumentException: spark.sql.execution.id is already set' was thrown while
    evaluating an expression. Caused by: ServerException: Server STACK:
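For reference, PUT INTO is SnappyData's upsert statement for row tables with a primary key. A minimal sketch of the kind of statement involved (the table and values are hypothetical), issued here through a SnappySession rather than snappy-sql:

    import org.apache.spark.sql.SnappySession

    val snappy = new SnappySession(sc)
    snappy.sql("CREATE TABLE IF NOT EXISTS app.kv (k INT PRIMARY KEY, v VARCHAR(20)) USING row")
    snappy.sql("PUT INTO app.kv VALUES (1, 'first')")   // inserts a new row
    snappy.sql("PUT INTO app.kv VALUES (1, 'updated')") // same key: updates in place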

Unable to connect to snappydata store with spark-shell command

Submitted on 2019-12-10 12:18:41
Question: SnappyData v0.5. My goal is to start a spark-shell from my SnappyData install's /bin directory and issue Scala commands against existing tables in my SnappyData store. I am on the same host as my SnappyData store, locator, and lead (and yes, they are all running). To do this, I am running this command as per the documentation here: Connecting to a Cluster with spark-shell

    ~/snappydata/bin$ spark-shell --master local[*] --conf snappydata.store.locators=10.0.18.66:1527 --conf spark.ui.port
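For context, once the shell does come up, the 0.x-era pattern for talking to the store was to wrap the shell's SparkContext in a SnappyContext and run SQL against it. A sketch (MY_TABLE is a hypothetical table name):

    import org.apache.spark.sql.SnappyContext

    val snc = SnappyContext(sc)               // sc is the SparkContext the shell created
    snc.sql("SELECT * FROM MY_TABLE").show()  // query an existing table in the store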

How to store Array or Blob in SnappyData?

Submitted on 2019-12-08 04:00:38
Question: I'm trying to create a table with two columns, like this:

    CREATE TABLE test (col1 INT, col2 Array<Decimal>) USING column options(BUCKETS '5');

The table is created successfully, but when I try to insert data into it, it does not accept any array format. I've tried the following queries:

    insert into test1 values(1, Array(Decimal("1"), Decimal("2")));
    insert into test1 values(1, Array(1,2));
    insert into test1 values(1, [1,2,1]);
    insert into test1 values(1, "1,2,1");
    insert into test1 values(1, <1
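One route that sidesteps SQL literal syntax for complex types entirely is to build the row as a DataFrame and append it through the DataFrame writer. A sketch, assuming a SnappySession named snappy and the test table from the question (whether this matches the accepted answer is not shown in the snippet above):

    import org.apache.spark.sql.SnappySession

    val snappy = new SnappySession(sc)
    import snappy.implicits._

    // An Array<Decimal> column maps to Seq[BigDecimal] on the Scala side.
    val df = Seq((1, Seq(BigDecimal("1"), BigDecimal("2")))).toDF("col1", "col2")
    df.write.insertInto("test") // appends the row to the existing column table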