aws-batch

SSH into AWS Batch jobs

Submitted by 只谈情不闲聊 on 2021-01-29 11:31:36
Question: I would like to communicate with AWS Batch jobs from a local R process in the same way that Davis Vaughn demonstrated for EC2 at https://gist.github.com/DavisVaughan/865d95cf0101c24df27b37f4047dd2e5. The AWS Batch documentation describes how to set up a key pair and security group for Batch jobs. However, I could not find detailed instructions on how to find the IP address of a job's instance or what user name I need. The IP address in particular is not available in the console when I run
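A minimal sketch of one way to resolve that IP, assuming boto3 credentials are configured locally and the job runs on an EC2-backed (not Fargate) compute environment; the job ID and the queue-to-cluster lookup chain below are illustrative, not taken from the question:

    import boto3

    batch = boto3.client("batch")
    ecs = boto3.client("ecs")
    ec2 = boto3.client("ec2")

    def job_instance_ip(job_id):
        # Describe the job to find the ECS container instance it landed on.
        job = batch.describe_jobs(jobs=[job_id])["jobs"][0]
        container_instance_arn = job["container"]["containerInstanceArn"]
        # Walk job queue -> compute environment -> ECS cluster ARN.
        queue = batch.describe_job_queues(jobQueues=[job["jobQueue"]])["jobQueues"][0]
        ce_arn = queue["computeEnvironmentOrder"][0]["computeEnvironment"]
        ce = batch.describe_compute_environments(
            computeEnvironments=[ce_arn]
        )["computeEnvironments"][0]
        # Map the container instance to its EC2 instance and read its IP.
        ci = ecs.describe_container_instances(
            cluster=ce["ecsClusterArn"], containerInstances=[container_instance_arn]
        )["containerInstances"][0]
        inst = ec2.describe_instances(
            InstanceIds=[ci["ec2InstanceId"]]
        )["Reservations"][0]["Instances"][0]
        return inst.get("PublicIpAddress") or inst["PrivateIpAddress"]

On the ECS-optimized Amazon Linux AMI that Batch uses by default, the SSH user is ec2-user, so something like ssh -i my-key.pem ec2-user@<ip> should work once the key pair is attached and the security group allows port 22.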

AWS Batch Job Execution Results in Step Function

Submitted by 给你一囗甜甜゛ on 2021-01-29 09:14:04
Question: I'm a newbie to AWS Step Functions and AWS Batch. I'm trying to integrate an AWS Batch job with Step Functions. The AWS Batch job executes a simple Python script that outputs a string value (a high-level, simplified requirement). I need the Python script's output to be available to the next state of the step function. How should I accomplish this? The AWS Batch job output does not contain the results of the Python script; instead it contains all the container-related information along with the input values.
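Since a Batch container's stdout is not returned to the state machine, one common workaround is to have the script write its result to S3 keyed by the job ID and let the next state read it back. A minimal sketch of the container side; the bucket name and key layout are assumptions, not anything from the question:

    import json
    import os
    import boto3

    s3 = boto3.client("s3")
    RESULT_BUCKET = "my-batch-results"  # hypothetical bucket

    def main():
        # AWS Batch injects AWS_BATCH_JOB_ID into the container environment.
        job_id = os.environ["AWS_BATCH_JOB_ID"]
        result = {"value": "string produced by the script"}
        s3.put_object(
            Bucket=RESULT_BUCKET,
            Key=f"results/{job_id}.json",
            Body=json.dumps(result).encode("utf-8"),
        )

    if __name__ == "__main__":
        main()

The batch:submitJob.sync integration returns the job description (including JobId) as the state's output, so a following Lambda task can reconstruct the S3 key from that JobId and fetch the result.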

How to retrieve an AWS Batch parameter value in Python?

Submitted by 徘徊边缘 on 2021-01-27 18:53:10
Question: Flow: DynamoDB --> Lambda --> Batch. If a role ARN is inserted into DynamoDB, it is retrieved from the Lambda event and then submitted to Batch using the submit_job API, with the role ARN passed as parameters={ 'role_arn': 'arn:aws:iam::accountid:role/role_name' }. How do I read the parameter value in the Python script running in Batch? Answer 1: First you need to specify the parameter reference in your Dockerfile or in the AWS Batch job definition command, like this: /usr/bin/python/pythoninbatch.py Ref::role_arn In
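A minimal sketch of the receiving script, assuming the job definition command ends with the Ref::role_arn placeholder as in the answer; the script path, queue, and job definition names are placeholders:

    import argparse

    def main():
        # Batch substitutes the submitted parameter value for Ref::role_arn in the
        # command, so the script receives it as an ordinary command-line argument.
        parser = argparse.ArgumentParser()
        parser.add_argument("role_arn", help="role ARN passed via Batch parameters")
        args = parser.parse_args()
        print(f"Received role ARN: {args.role_arn}")

    if __name__ == "__main__":
        main()

    # Submitting side (e.g., inside the Lambda), with placeholder names:
    # batch.submit_job(
    #     jobName="role-arn-job",
    #     jobQueue="my-queue",
    #     jobDefinition="my-job-def",
    #     parameters={"role_arn": "arn:aws:iam::accountid:role/role_name"},
    # )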

Terraform import AWS Batch job definition from another project

Submitted by 偶尔善良 on 2020-06-17 09:41:44
Question: I have multiple projects, each with its own Terraform to manage the AWS infrastructure specific to that project. Infrastructure that's shared (a VPC, for example) I import into the projects that need it. I want to glue together a number of different tasks from across different services using Step Functions, but some of them are Batch jobs. This means I need to specify the job definition ARN in the step function. I can import a job definition, but if I later update the project that manages
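One workaround, not from the question itself, is to avoid hard-coding the revisioned ARN across projects and instead resolve the latest ACTIVE revision by job definition name at submit time; the job definition name below is a placeholder:

    import boto3

    batch = boto3.client("batch")

    def latest_job_definition_arn(name):
        # Look up the ACTIVE revisions of the named job definition and keep the newest.
        defs = batch.describe_job_definitions(
            jobDefinitionName=name, status="ACTIVE"
        )["jobDefinitions"]
        return max(defs, key=lambda d: d["revision"])["jobDefinitionArn"]

    print(latest_job_definition_arn("video-encode-job"))

SubmitJob (and the Step Functions Batch integration) also accepts a job definition name without a revision and uses the latest active one, which sidesteps the cross-project ARN question entirely.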

boto3 can't connect to S3 from a Docker container running in AWS Batch

Submitted by a 夏天 on 2020-05-29 10:43:50
Question: I am attempting to launch a Docker container stored in ECR as an AWS Batch job. The entrypoint Python script of this container attempts to connect to S3 and download a file. I have attached a role with AmazonS3FullAccess to the AWSBatchServiceRole in the compute environment, and I have also attached a role with AmazonS3FullAccess to the compute resources. This is the error that is being logged: botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://s3
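A connect timeout (as opposed to an AccessDenied error) usually points at VPC networking on the compute environment's subnets (no public IP, NAT gateway, or S3 VPC endpoint) rather than IAM. A minimal probe, with placeholder bucket, key, and region, that fails fast so the cause surfaces quickly in the job log:

    import boto3
    from botocore.config import Config

    # Short timeouts and a single attempt so a networking problem shows up quickly.
    s3 = boto3.client(
        "s3",
        region_name="us-east-1",  # assumed region
        config=Config(connect_timeout=5, read_timeout=10, retries={"max_attempts": 1}),
    )

    s3.download_file("my-bucket", "path/to/file.csv", "/tmp/file.csv")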

AWS Batch to always launch a new EC2 instance for each job

Submitted by 帅比萌擦擦* on 2020-05-16 20:48:31
Question: I have set up a Batch environment with a managed compute environment, a job queue, and job definitions. The actual job (a Docker container) does a lot of video encoding and hence uses up most of the CPU. The process itself takes a few minutes (close to 5 minutes to get all the encoders initialized). Ideally I would want one job per instance so that the encoders are not CPU starved. My issue is that when I launch multiple jobs at the same time, or close enough, AWS Batch decides to launch both of them on the same
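Batch bin-packs jobs onto running instances, so the usual way to force one job per instance is to size the job's resource requirements to consume essentially the whole instance type the compute environment uses. A sketch under assumed names and sizes, not anything stated in the question:

    import boto3

    batch = boto3.client("batch")

    # Sized for a 16-vCPU instance (e.g. c5.4xlarge) so a second job cannot fit
    # on the same host; queue and job definition names are placeholders.
    batch.submit_job(
        jobName="video-encode",
        jobQueue="encoding-queue",
        jobDefinition="encoder-job-def",
        containerOverrides={
            "resourceRequirements": [
                {"type": "VCPU", "value": "16"},
                {"type": "MEMORY", "value": "30000"},  # MiB, just under 32 GiB
            ]
        },
    )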
