SnappyData snappy-sql PUT INTO cause error:spark.sql.execution.id is already set

Submitted by 无人久伴 on 2019-12-11 04:25:40

Question


I was using the SnappyData SQL shell (snappy-sql) to run SQL statements, and a PUT INTO statement ran into this error:

ERROR 38000: (SQLState=38000 Severity=20000) (Server=localhost/127.0.0.1[1528] Thread=pool-3-thread-3) The exception 'com.pivotal.gemfirexd.internal.engine.jdbc.GemFireXDRuntimeException: myID: s4-03(19442)<v1>:43452, caused by java.lang.IllegalArgumentException: spark.sql.execution.id is already set' was thrown while evaluating an expression.
Caused by: ServerException: Server STACK: java.sql.SQLException(38000): The exception 'com.pivotal.gemfirexd.internal.engine.jdbc.GemFireXDRuntimeException: myID: s4-03(19442)<v1>:43452, caused by java.lang.IllegalArgumentException: spark.sql.execution.id is already set' was thrown while evaluating an expression.
    at com.pivotal.gemfirexd.internal.iapi.error.StandardException.newException(StandardException.java:473)
    at com.pivotal.gemfirexd.internal.engine.Misc.processFunctionException(Misc.java:808)
    at com.pivotal.gemfirexd.internal.engine.Misc.processFunctionException(Misc.java:753)
    at com.pivotal.gemfirexd.internal.engine.sql.execute.SnappySelectResultSet.setup(SnappySelectResultSet.java:282)
    at com.pivotal.gemfirexd.internal.engine.distributed.message.GfxdFunctionMessage.executeFunction(GfxdFunctionMessage.java:332)
    at com.pivotal.gemfirexd.internal.engine.distributed.message.GfxdFunctionMessage.executeFunction(GfxdFunctionMessage.
(truncated for brevity.)

Here is what I did: downloaded the SnappyData binaries (v0.8) from https://www.snappydata.io/download, unzipped them, and ran:

sbin/snappy-start-all.sh
bin/snappy-sql
snappy> connect client 'localhost:1527';

snappy> create table table_a(key1 INT primary key, val INT);

snappy> create table table_b(key1 INT primary key, val INT);

snappy> insert into table_a values (1, 1);

snappy> insert into table_b values (1, 2);

snappy> insert into table_b values (2, 3);

snappy> select * from table_a;
KEY1       |VAL        
-----------------------
1          |1          

1 row selected
snappy> select * from table_b;
KEY1       |VAL        
-----------------------
2          |3          
1          |2          

2 rows selected

snappy> put into table_a select * from table_b;
(This produces the error above.)

Searching for the error message (spark.sql.execution.id is already set) led me to https://issues.apache.org/jira/browse/SPARK-13747 (Concurrent execution in SQL doesn't work with Scala ForkJoinPool), which appears to be a bug fixed in Spark 2.2.0.

It's possible this is because SnappyData is still on Spark 2.0 (at least the GitHub repo currently says it has moved to Spark 2.0), but I am not sure.

I would like to use PUT INTO statements in SnappyData if at all possible; it would be greatly appreciated if someone could help me with this problem. Thanks in advance :)


Answer 1:


You just need to provide table_a's column list in the PUT INTO statement, so it should be:

snappy> put into table_a (key1, val) select * from table_b;

We will see what can be done about the '0 rows inserted/updated/deleted' reporting issue. PUT INTO is a little tricky because it can perform both inserts and updates in a single DML statement.
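For intuition, PUT INTO behaves like an upsert keyed on the primary key: source rows whose key already exists in the target update the existing row, and the rest are inserted. This is not SnappyData code, just a minimal Python sketch of that semantics, modeling each table as a dict from primary key to value:

```python
def put_into(target, source):
    """Mimic PUT INTO (upsert) semantics on dicts keyed by primary key.

    Keys from `source` that already exist in `target` overwrite the old
    value (an update); new keys are inserted. One pass covers both cases,
    which is why a single DML statement can do inserts and updates.
    """
    for key, val in source.items():
        target[key] = val  # insert or update; no distinction needed
    return target

# Mirror the session above: table_a has (1, 1); table_b has (1, 2) and (2, 3).
table_a = {1: 1}
table_b = {1: 2, 2: 3}
put_into(table_a, table_b)
print(table_a)  # {1: 2, 2: 3} -- key 1 updated, key 2 inserted
```

After the upsert, table_a matches what `put into table_a (key1, val) select * from table_b;` should produce: the row with key 1 now carries table_b's value, and the row with key 2 is new.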



Source: https://stackoverflow.com/questions/43402885/snappydata-snappy-sql-put-into-cause-errorspark-sql-execution-id-is-already-set
