I implemented a Spark application and create the Spark context as follows:

```java
private JavaSparkContext createJavaSparkContext() {
    SparkConf conf = new SparkConf();
    conf.setAppName("test");
    if (conf.get("spark.master", null) == null) {
        conf.setMaster("local[4]");
    }
    conf.set("fs.s3a.awsAccessKeyId", getCredentialConfig().getS3Key());
    conf.set("fs.s3a.awsSecretAccessKey", getCredentialConfig().getS3Secret());
    conf.set("fs.s3a.endpoint", getCredentialConfig().getS3Endpoint());
    return new JavaSparkContext(conf);
}
```

Then I try to read data from S3 via the Spark Dataset API (Spark SQL):
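As an aside (this may or may not be related to the error): the credential property names documented for the S3A connector in Hadoop are `fs.s3a.access.key` and `fs.s3a.secret.key` — the `awsAccessKeyId`/`awsSecretAccessKey` naming comes from the older s3/s3n connectors. Also, Hadoop properties set on a `SparkConf` normally need the `spark.hadoop.` prefix so Spark copies them into the Hadoop `Configuration` that s3a reads. A sketch of how that would look in the method above:

```java
// Sketch: documented S3A property names, with the "spark.hadoop." prefix
// so Spark propagates them to the Hadoop Configuration used by s3a.
conf.set("spark.hadoop.fs.s3a.access.key", getCredentialConfig().getS3Key());
conf.set("spark.hadoop.fs.s3a.secret.key", getCredentialConfig().getS3Secret());
conf.set("spark.hadoop.fs.s3a.endpoint", getCredentialConfig().getS3Endpoint());
```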
```java
String s = "s3a://" + getCredentialConfig().getS3Bucket();
Dataset<Row> csv = getSparkSession()
        .read()
        .option("header", "true")
        .csv(s + "/dataset.csv");
System.out.println("Read size :" + csv.count());
```

This fails with the following error:
```
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: 1A3E8CBD4959289D, AWS Error Code: null, AWS Error Message: Bad Request, S3 Extended Request ID: Q1Fv8sNvcSOWGbhJSu2d3Nfgow00388IpXiiHNKHz8vI/zysC8V8/YyQ1ILVsM2gWQIyTy1miJc=
```

Hadoop version: 2.7
AWS endpoint: s3.eu-central-1.amazonaws.com
(On Hadoop 2.8 everything works fine.)
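For context, eu-central-1 (Frankfurt) is one of the S3 regions that accepts only Signature Version 4 requests, and the AWS SDK bundled with Hadoop 2.7's s3a defaults to the older signing scheme, which S3 rejects with exactly this kind of 400 Bad Request; Hadoop 2.8 handles V4-only regions, which would explain why it works there. A hedged sketch of the commonly suggested workaround: force V4 signing in the SDK via a JVM system property, set before the first S3 call (i.e. before the SparkContext is created):

```java
public class EnableSigV4 {
    public static void main(String[] args) {
        // "com.amazonaws.services.s3.enableV4" is a system property recognized
        // by AWS SDK for Java v1; it forces Signature Version 4 signing.
        // It must be set before the S3 client is instantiated, so do it
        // before creating the JavaSparkContext.
        System.setProperty("com.amazonaws.services.s3.enableV4", "true");
        System.out.println(System.getProperty("com.amazonaws.services.s3.enableV4"));
    }
}
```

With spark-submit, the equivalent would be passing the property on the JVM command line, e.g. `--driver-java-options "-Dcom.amazonaws.services.s3.enableV4=true"` (and the matching executor option for cluster runs).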