I'm using a tutorial from this GitHub repo to run Spark on Cassandra with a Java Maven project: https://github.com/datastax/spark-cassandra-connector.
I've figured
There is a limitation in the Spark Cassandra Connector: the `where` method does not work on partition keys. In your table `empByRole`, `role` is the partition key, hence the error. It does work correctly on clustering columns and on indexed columns (secondary indexes).
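To make the distinction concrete, here is a hypothetical schema for `empByRole` (an assumption; your actual DDL isn't shown in the question). The column names other than `role` are invented for illustration:

```sql
-- Hypothetical DDL for empByRole; column names besides 'role' are assumptions.
CREATE TABLE emp.empByRole (
    role       text,   -- partition key: .where("role=?", ...) is rejected by the connector
    empid      int,    -- clustering column: .where("empid=?", ...) is allowed
    first_name text,
    last_name  text,
    PRIMARY KEY (role, empid)
);
```

With a layout like this, a `where` clause on `empid` (the clustering column) would be pushed down to Cassandra, while the same clause on `role` triggers the error you're seeing.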
This is being tracked as issue 37 in the GitHub project, and work on it is ongoing.
On the Java API doc page, the examples use `.where("name=?", "Anna")`. Presumably `name` is not a partition key there, but the example could be clearer about that.