amazon-athena

Getting all Buildings in range of 5 miles from specified coordinates

Submitted by 纵饮孤独 on 2020-06-22 03:54:41

Question: I have a database table Building with these columns: name, lat, lng. How can I get all buildings within a 5-mile range of specified coordinates, for example: -84.38653999999998 33.72024? My attempt, which does not work:

SELECT ST_CONTAINS(
    SELECT ST_BUFFER(ST_Point(-84.38653999999998, 33.72024), 5),
    SELECT ST_POINT(lat, lng)
    FROM "my_db"."Building" LIMIT 50
);

https://docs.aws.amazon.com/athena/latest/ug/geospatial-functions-list.html

Answer 1: Why are you storing x and y in separate columns? I would …
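The buffer-based attempt above nests subqueries where geometry arguments are expected, and ST_BUFFER's distance argument is in the coordinate system's units (degrees here), not miles. A sketch of a distance filter instead, assuming Athena engine version 2's spherical-geography functions are available; note that ST_Point takes (longitude, latitude), and ST_Distance on spherical geographies returns meters:

```sql
-- Sketch: filter by great-circle distance; 1 mile = 1609.344 meters.
-- Assumes Athena engine v2 and the table/columns from the question.
SELECT name, lat, lng
FROM "my_db"."Building"
WHERE ST_Distance(
        to_spherical_geography(ST_Point(lng, lat)),
        to_spherical_geography(ST_Point(-84.38653999999998, 33.72024))
      ) <= 5 * 1609.344;
```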

Does the partition location change automatically when the Athena table's location is changed?

Submitted by 自古美人都是妖i on 2020-06-18 02:45:07

Question: I have created a table test whose partition location is s3://mocktest/test. Now I want to update the table location to s3://mocktest/test-new, so I ran:

ALTER TABLE test SET LOCATION 's3://mocktest/test-new'

The location is updated on the test table but not on the partitions, and the MSCK REPAIR TABLE command does not update the partition locations either.

Answer 1: The location of existing partitions is not related to the location of the table. If you want to move the location of all partitions you …
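MSCK REPAIR TABLE only adds partitions it discovers under the table's current location; it does not rewrite the locations of partitions that already exist, so each partition has to be re-pointed explicitly. A sketch, assuming a single partition key named dt (the real key is not shown in the question):

```sql
-- Hypothetical partition key 'dt'; repeat (or generate) one statement per partition.
ALTER TABLE test PARTITION (dt = '2020-06-01')
SET LOCATION 's3://mocktest/test-new/dt=2020-06-01/';
```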

Unable to convert varchar to array in Presto Athena

Submitted by ╄→гoц情女王★ on 2020-06-12 08:59:11

Question: My data is in varchar format. I want to split the elements of this array so that I can then extract a key's value from the JSON. Data format:

[
    { "skuId": "5bc87ae20d298a283c297ca1", "unitPrice": 0, "id": "5bc87ae20d298a283c297ca1", "quantity": "1" },
    { "skuId": "182784738484wefhdchs4848", "unitPrice": 50, "id": "5bc87ae20d298a283c297ca1", "quantity": "4" },
]

For example, I want to extract skuId from the above column, so my data after extraction should look like:

1 5bc87ae20d298a283c297ca1
2 …
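Since the column is a varchar, one approach in Presto/Athena is to parse it with json_parse, cast it to ARRAY(JSON), and UNNEST the result into one row per element. A sketch, with hypothetical table name my_table and column name payload; note that the sample above has a trailing comma before the closing bracket, which json_parse would reject, so the raw string may need cleaning first:

```sql
-- Sketch: varchar -> ARRAY(JSON) -> one row per element -> extract skuId.
SELECT json_extract_scalar(item, '$.skuId') AS skuid
FROM my_table
CROSS JOIN UNNEST(CAST(json_parse(payload) AS ARRAY(JSON))) AS t (item);
```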

How to create an AWS Athena table via a Glue crawler when the S3 data store has both plain JSON and .gz compressed files?

Submitted by 自作多情 on 2020-06-09 03:59:25

Question: I have two problems in my intended solution.

1. My S3 store structure is as follows:

mainfolder/date=2019-01-01/hour=14/abcd.json
mainfolder/date=2019-01-01/hour=13/abcd2.json.gz
...
mainfolder/date=2019-01-15/hour=13/abcd74.json.gz

All JSON files have the same schema, and I want to make a crawler pointing to mainfolder/ which can then create a table in Athena for querying. I have already tried with just one file format; e.g. if the files are just json or just gz then the crawler works …
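Athena, like Hive, decompresses gzip files transparently based on the .gz file extension, so a folder mixing .json and .json.gz files with one shared schema can in principle be served by a single table. A hedged sketch of a manual DDL as an alternative to the crawler, with a placeholder bucket name and placeholder columns:

```sql
-- Sketch: one table over both plain and gzipped JSON; columns are placeholders.
CREATE EXTERNAL TABLE logs_json (
  message string
)
PARTITIONED BY (`date` string, hour int)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://my-bucket/mainfolder/';

-- Then register the date=/hour= partitions:
MSCK REPAIR TABLE logs_json;
```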

extract json in array in AWS Athena

Submitted by 自闭症网瘾萝莉.ら on 2020-05-30 08:05:33

Question: I have sent logs from Kubernetes to an S3 bucket and want to query them using Athena. The log looks like this:

[{
    "date": 1589895855.077230,
    "log": "192.168.85.35 - - [19/May/2020:13:44:15 +0000] \"GET /healthz HTTP/1.1\" 200 3284 \"-\" \"ELB-HealthChecker/2.0\" \"-\"",
    "stream": "stdout",
    "time": "2020-05-19T13:44:15.077230187Z",
    "kubernetes": {
        "pod_name": "myapp-deployment-cd984ffb-kjfbm",
        "namespace_name": "master",
        "pod_id": "eace0175-99cd-11ea-95e4-0aee746ae5d6",
        "labels": {
            "app": "myapp",
            "pod …
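Assuming each object holds JSON shaped like the sample, one way is to keep the raw line as a single string column and pull nested fields out with JSON-path functions. A sketch with a hypothetical table logs whose column raw holds the array shown above:

```sql
-- Sketch: explode the array, then extract nested fields per element.
SELECT
  json_extract_scalar(entry, '$.kubernetes.pod_name')       AS pod_name,
  json_extract_scalar(entry, '$.kubernetes.namespace_name') AS namespace,
  json_extract_scalar(entry, '$.log')                       AS log_line
FROM logs
CROSS JOIN UNNEST(CAST(json_parse(raw) AS ARRAY(JSON))) AS t (entry);
```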

AWS Athena null values are replaced by N after the table is created. How to keep them as they are?

Submitted by 大憨熊 on 2020-05-17 06:22:05

Question: I'm creating a table in Athena from CSV data in S3. The data has some columns quoted, so I use:

ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
    "separatorChar" = ",",
    'serialization.null.format' = ''
)

The SerDe works fine, but the null values in the resulting table are replaced with N. How can I keep the null values as empty (or as NULL, etc.) rather than as N? Thanks.

Source: https://stackoverflow.com/questions/61020631/aws-athena-null-values-are-replaced-by-n
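OpenCSVSerde reportedly does not honor serialization.null.format (that property belongs to LazySimpleSerDe), which is why empty fields come back as a literal marker rather than NULL. One workaround is to leave the table as-is and normalize at query time in a view; a sketch with hypothetical column names:

```sql
-- Sketch: map the literal marker back to NULL at query time.
CREATE OR REPLACE VIEW my_table_clean AS
SELECT NULLIF(col1, 'N') AS col1,
       NULLIF(col2, 'N') AS col2
FROM my_table;
```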

Accessing tables being updated in Athena

Submitted by 天大地大妈咪最大 on 2020-04-30 16:33:54

Question: When issuing the MSCK REPAIR TABLE statement, is the table still accessible for querying during the update? I ask because I'm trying to figure out the best update schedule for a relatively large S3 Hive table that is used to drive some reports in QuickSight. Will issuing this command break anyone who happens to be running a QuickSight report based on this table at the same time?

Answer 1: Yes, the table will be available for running queries while you are running MSCK REPAIR TABLE; it's a background …
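For a large table, a cheaper update schedule than a full MSCK REPAIR TABLE is often to register only the new partitions explicitly, which touches far less metadata per run. A sketch with a hypothetical table name, partition key, and S3 path:

```sql
-- Sketch: add just the newest partition instead of scanning everything.
ALTER TABLE my_hive_table ADD IF NOT EXISTS
PARTITION (dt = '2020-04-30')
LOCATION 's3://my-bucket/data/dt=2020-04-30/';
```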

Access Denied while querying S3 files from AWS Athena within Lambda in different account

Submitted by 戏子无情 on 2020-04-16 21:16:22

Question: I am trying to query an Athena view from my Lambda code. I created the Athena table for S3 files that live in a different account. The Athena query editor gives me this error:

Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;

I tried accessing the Athena view from my Lambda code. I created a Lambda execution role and allowed this role in the bucket policy of the other account's S3 bucket as well, like below:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS" …
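For cross-account reads, both sides generally have to allow the access: the Lambda execution role's own IAM policy needs the S3 (plus Athena and Glue) permissions, and the other account's bucket policy needs to grant that role. A hedged sketch of the bucket-policy side only, with placeholder account ID, role name, and bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:role/my-lambda-role" },
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::other-account-bucket",
      "arn:aws:s3:::other-account-bucket/*"
    ]
  }]
}
```

Athena also writes query results to an output location, so the role needs s3:PutObject on that results bucket as well.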