I have built spark-csv and am able to use it from the Spark shell using the following command:
bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3
Instead of placing the jars in any specific folder, a simple fix is to start the pyspark shell with the following arguments:
bin/pyspark --packages com.databricks:spark-csv_2.10:1.0.3
This will automatically load the required spark-csv jars.
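The same --packages flag also works with spark-submit, so a standalone script picks up the dependency the same way. A minimal sketch, assuming a script named process_csv.py (hypothetical name):

bin/spark-submit --packages com.databricks:spark-csv_2.10:1.0.3 process_csv.py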
Then read the CSV file as follows:
from pyspark.sql import SQLContext
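# sc is the SparkContext that the pyspark shell creates automatically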
sqlContext = SQLContext(sc)
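# Spark 1.4+ DataFrame reader API; header='true' uses the first row as column names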
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('file.csv')
df.show()
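spark-csv accepts further options through the same .options() call, and writing back out uses the matching DataFrame writer. A sketch continuing the shell session above, assuming a tab-delimited file named data.tsv (hypothetical name) and a spark-csv release new enough to support schema inference:

# delimiter overrides the default comma; inferSchema='true' asks spark-csv
# to scan the data and infer column types instead of reading everything as strings
df = sqlContext.read.format('com.databricks.spark.csv') \
    .options(header='true', delimiter='\t', inferSchema='true') \
    .load('data.tsv')

# save() writes a directory of part files, not a single CSV file
df.write.format('com.databricks.spark.csv').options(header='true').save('output_dir')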