I have used SQL in Spark, for example:

results = spark.sql("select * from ventas")

where ventas is a DataFrame that was previously registered as a view in the catalog.
SparkSession is now the preferred entry point for working with Spark. Both HiveContext and SQLContext are available as part of this single object, SparkSession.
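A minimal sketch of creating that single entry point (the app name is just a placeholder, and enableHiveSupport is only needed if you want Hive features):

from pyspark.sql import SparkSession

# One SparkSession replaces the old SQLContext/HiveContext entry points
spark = SparkSession.builder \
    .appName("ventas-example") \
    .enableHiveSupport() \
    .getOrCreate()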
You are using the latest syntax by creating a view with df.createOrReplaceTempView('ventas').
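For context, here is a minimal end-to-end sketch tying the pieces together (the column names and sample rows are invented for illustration):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ventas-example").getOrCreate()

# Hypothetical sales data, just for illustration
df = spark.createDataFrame(
    [(1, "widget", 9.99), (2, "gadget", 19.99)],
    ["id", "product", "price"],
)

# Register the DataFrame so it can be queried by name in SQL
df.createOrReplaceTempView("ventas")

results = spark.sql("select * from ventas")
results.show()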