Spark SQL security considerations

Posted by 感情迁移 on 2019-11-27 08:30:28

Question


What are the security considerations when accepting and executing arbitrary Spark SQL queries?

Imagine the following setup:

Two files on hdfs are registered as tables a_secrets and b_secrets:

# must only be accessed by clients with access to all of customer a's data
spark.read.csv("/customer_a/secrets.csv").createTempView("a_secrets")

# must only be accessed by clients with access to all of customer b's data
spark.read.csv("/customer_b/secrets.csv").createTempView("b_secrets")

I could secure these two views using simple HDFS file permissions. But say I have the following logical views of these tables that I'd like to expose:

# only access for clients with access to customer a's account no 1
spark.sql("SELECT * FROM a_secrets WHERE account = 1").createTempView("a1_secrets")

# only access for clients with access to customer a's account no 2
spark.sql("SELECT * FROM a_secrets WHERE account = 2").createTempView("a2_secrets")


# only access for clients with access to customer b's account no 1
spark.sql("SELECT * FROM b_secrets WHERE account = 1").createTempView("b1_secrets")

# only access for clients with access to customer b's account no 2
spark.sql("SELECT * FROM b_secrets WHERE account = 2").createTempView("b2_secrets")

Now assume I receive an arbitrary (user, pass, query) set. I get a list of accounts the user can access:

groups = get_groups(user, pass)

and extract the logical query plan of the user's query:

spark.sql(query).explain(True)

giving me a query plan along the lines of (this exact query plan is made up)

== Analyzed Logical Plan ==
account: int, ... more fields
Project [account#0 ... more fields]
+- SubqueryAlias a1_secrets
   +- Relation [... more fields]
      +- Join Inner, (some_col#0 = another_col#67)
         :- SubqueryAlias a2_secrets
         :  +- Relation[... more fields] csv
== Physical Plan ==
... InputPaths: hdfs:/customer_a/secrets.csv ...
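The "parse the plan" step might be sketched as below. This is a deliberately fragile illustration that scrapes the plan text; plan formatting differs across Spark versions, so a real gatekeeper should walk the analyzed LogicalPlan object rather than parse strings:

```python
import re

def extract_input_paths(plan_text):
    # Scrape paths that appear after "InputPaths:" in a physical plan string.
    # Fragile by design: this only demonstrates the idea of deriving the
    # accessed files from the plan, not a robust implementation.
    return re.findall(r"InputPaths:\s*(\S+)", plan_text)

plan = "... InputPaths: hdfs:/customer_a/secrets.csv ..."
print(extract_input_paths(plan))  # ['hdfs:/customer_a/secrets.csv']
```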

Assuming I can parse a logical query plan to determine exactly which tables and files are being accessed, is it safe to grant access to the data produced by the query? I'm thinking of potential problems like:

  • Are there ways to access registered tables without them showing up in a logical query plan?
  • Are there ways to load new data and register it as tables through pure Spark SQL (i.e., input to spark.sql(...))?
  • Do users have access to any SQL functions with side effects (that modify or access unauthorized data)?
  • Are there ways to register UDFs/execute arbitrary code purely through spark.sql(...)?

To summarise: Can I safely accept arbitrary SQL, register it with df = spark.sql(query), analyse data access using df.explain(True), and then return results using e.g. df.collect()?
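If the referenced tables could be extracted reliably, the gate itself would be a simple set check. A minimal sketch of that idea (the ACL mapping and group names here are hypothetical, and the accepted answer below argues the check alone is not sufficient):

```python
def is_query_allowed(referenced_tables, user_groups, table_acl):
    # Allow the query only if every table it touches is mapped to a group
    # the user belongs to. table_acl is a hypothetical mapping of
    # view name -> required group; unknown tables are denied by default
    # because dict.get returns None, which is never in user_groups.
    return all(table_acl.get(t) in user_groups for t in referenced_tables)

table_acl = {"a1_secrets": "customer_a_1", "b1_secrets": "customer_b_1"}
print(is_query_allowed({"a1_secrets"}, {"customer_a_1"}, table_acl))                 # True
print(is_query_allowed({"a1_secrets", "b1_secrets"}, {"customer_a_1"}, table_acl))   # False
```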

Edits: - 23 Jan 15:29: edited to include an "EXPLAIN" prefix in


Answer 1:


TL;DR You should never execute any untrusted code on your Spark cluster.

Are there ways to load new data and register it as tables through pure Spark SQL?

Yes. CREATE TABLE can be executed with the sql method, so as long as users have permission to access the filesystem they can create tables.
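For example, assuming the Spark process itself has filesystem access to both customers' directories, a single statement submitted through spark.sql is enough to sidestep the view layer (the table name is illustrative; the path is the one from the setup above):

```sql
-- registers customer b's raw file under a new name, bypassing b1_secrets
CREATE TABLE leaked USING csv OPTIONS (path '/customer_b/secrets.csv')
```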

Are there ways to register UDFs/execute arbitrary code purely through spark.sql(...)?

Yes, as long as they can control the classpath, which can be modified with SQL:

spark.sql("""add jar URI""")
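Once an attacker-controlled jar is on the classpath, registering a function from it is also plain SQL. A sketch of that step (the class name and jar URI are hypothetical, and CREATE TEMPORARY FUNCTION assumes Hive support is enabled):

```sql
CREATE TEMPORARY FUNCTION run_anything AS 'com.example.Exploit'
    USING JAR 'hdfs:/tmp/attacker.jar'
```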

Do users have access to any SQL functions with side effects (that modify or access unauthorized data)?

Effectively yes (by extension of the previous point).

Can I safely accept arbitrary SQL,

No.



Source: https://stackoverflow.com/questions/41805876/spark-sql-security-considerations
