terraform

Terraform gets stuck when instance_count is more than 2 while using the remote-exec provisioner

蹲街弑〆低调 posted on 2019-12-24 04:29:08

Question: I am trying to provision multiple Windows EC2 instances with Terraform's remote-exec provisioner using null_resource.

$ terraform -v
Terraform v0.12.6
provider.aws v2.23.0
provider.null v2.1.2

Originally, I was working with three remote-exec provisioners (two of them involved rebooting the instance) without null_resource, and for a single instance everything worked absolutely fine. I then needed to increase the count and, based on several links, ended up using null_resource. So, I have reduced…
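A common pattern for this situation is to run one null_resource per instance, each with its own connection block, so the provisioners do not all contend for a single session. The following is only a minimal sketch under assumed names (the variables, AMI, and password handling are illustrative, not taken from the question):

```hcl
# Hypothetical sketch: one null_resource per Windows instance so each
# remote-exec gets its own WinRM connection. All names are assumptions.
resource "aws_instance" "win" {
  count         = var.instance_count
  ami           = var.windows_ami
  instance_type = "t3.medium"
}

resource "null_resource" "provision" {
  count = var.instance_count

  # Re-run the provisioner if the underlying instance is replaced.
  triggers = {
    instance_id = aws_instance.win[count.index].id
  }

  connection {
    type     = "winrm"
    host     = aws_instance.win[count.index].public_ip
    user     = "Administrator"
    password = var.admin_password
    timeout  = "10m"
  }

  provisioner "remote-exec" {
    inline = ["powershell.exe -Command \"Write-Host provisioned\""]
  }
}
```

Keying each null_resource to a specific instance via `triggers` and `count.index` avoids every provisioner targeting the same host, which is one frequent cause of hangs at higher counts.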

How to convert terraform.tfstate to a config file?

寵の児 posted on 2019-12-24 04:22:14

Question: I created an AWS resource in the AWS management console. I then ran terraform import to import the AWS resource into Terraform. Now I have this terraform.tfstate file. But how can I convert this back to a Terraform configuration file?

Answer 1: As the terraform import docs explain, Terraform currently only imports the resource into your state file and won't generate the config for you. If you try this without even defining the resource, Terraform will throw an error telling you to define the…
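The usual workflow, then, is the reverse of what one might hope: you write a skeleton resource block by hand first, import the live object into it, and iterate until the plan is clean. A sketch (the resource type and instance ID are placeholders, not from the question):

```hcl
# Sketch of the manual import workflow. Start with a minimal block;
# the attribute values here are placeholders to be filled in to match
# the live resource.
resource "aws_instance" "imported" {
  ami           = "ami-xxxxxxxx"   # placeholder
  instance_type = "t2.micro"      # placeholder
}

# Then, on the command line (the instance ID is a placeholder):
#   terraform import aws_instance.imported i-0123456789abcdef0
#   terraform plan   # iterate on the config until the plan shows no diff
```

Once `terraform plan` reports no changes, the hand-written config matches the imported state.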

How to use resources that already exist in AWS (created manually) in Terraform?

与世无争的帅哥 posted on 2019-12-24 01:39:26

Question: Is there a way to use, in my Terraform config, resources that already exist in my AWS account and were created manually? I don't want to change them, and honestly, I don't want to "touch" them. I just need some of those resources (for example, the VPC and IAM) for the environment I'm creating. I have read a bit about import, but I am not sure that it is the answer?

Answer 1: Terraform has two ways of using resources that exist outside of the context or directory it's being applied on. The first is data sources…
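Since the asker explicitly doesn't want Terraform to manage or modify the existing resources, data sources are the better fit: they read attributes without taking ownership. A minimal sketch (the tag value, role name, and CIDR are assumptions for illustration):

```hcl
# Minimal sketch: reference pre-existing infrastructure read-only via
# data sources. The lookup keys here are assumptions.
data "aws_vpc" "existing" {
  tags = {
    Name = "legacy-vpc"
  }
}

data "aws_iam_role" "existing" {
  name = "legacy-app-role"
}

# New resources can then attach to the existing ones without
# Terraform ever managing the VPC or role themselves.
resource "aws_subnet" "new" {
  vpc_id     = data.aws_vpc.existing.id
  cidr_block = "10.0.42.0/24"
}
```

`terraform import`, by contrast, puts the resource under Terraform's management, meaning a later `destroy` or drifted config could change it, exactly what the asker wants to avoid.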

Why am I getting a permissions error when attempting to auto_accept vpc peering in Terraform?

谁说胖子不能爱 posted on 2019-12-24 01:27:36

Question: I am trying to create a VPC peering connection between accounts and auto-accept it, but it fails with a permissions error. Here are the providers in the main.tf:

provider "aws" {
  region                  = "${var.region}"
  shared_credentials_file = "/Users/<username>/.aws/credentials"
  profile                 = "sandbox"
}

data "aws_caller_identity" "current" {}

Here is the vpc_peer module:

resource "aws_vpc_peering_connection" "peer" {
  peer_owner_id = "${var.peer_owner_id}"
  peer_vpc_id   = "${var.peer_vpc_id}"
  vpc_id        = "${var.vpc_id}"
  auto…
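The permissions error is expected here: `auto_accept` only works when both VPCs belong to the same account, because the requesting credentials cannot accept on the peer account's behalf. Cross-account peering needs an explicit accepter resource under a second provider. A sketch, in the question's Terraform 0.11-style syntax (the `peer-account` profile name is an assumption):

```hcl
# Sketch: cross-account peering needs a second provider with the
# peer account's credentials, plus an explicit accepter resource.
# The profile name is an assumption.
provider "aws" {
  alias   = "peer"
  region  = "${var.region}"
  profile = "peer-account"
}

resource "aws_vpc_peering_connection_accepter" "peer" {
  provider                  = "aws.peer"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.peer.id}"
  auto_accept               = true
}
```

With this split, `auto_accept` is removed from (or set false on) the requester side, and the accepter resource performs the acceptance using credentials that actually own the peer VPC.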

terraform autoscaling group destroy timeouts

試著忘記壹切 posted on 2019-12-23 22:36:37

Question: Is there any way to change the Terraform default timeouts? For example, on terraform apply I frequently time out trying to destroy autoscaling groups:

module.foo.aws_autoscaling_group.bar (deposed #0): Still destroying... (10m0s elapsed)
Error applying plan:
1 error(s) occurred:
* aws_autoscaling_group.bar (deposed #0): group still has 1 instances

If I re-run terraform apply, it works. It seems like the timeout is 10 minutes; I'd like to double the time so that it finishes reliably.
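Some AWS provider resources, including `aws_autoscaling_group`, accept a `timeouts` block for exactly this; whether a given resource and provider version supports it should be checked in the provider docs, so treat this as a hedged sketch rather than a guaranteed fix:

```hcl
# Sketch of two mitigations (timeouts support varies by resource and
# provider version; verify against your provider's documentation):
resource "aws_autoscaling_group" "bar" {
  # ... existing arguments unchanged ...

  force_delete = true   # delete the ASG without waiting for instances

  timeouts {
    delete = "20m"      # double the default 10m destroy wait
  }
}
```

`force_delete` trades safety for speed (instances are terminated without waiting for lifecycle hooks), so extending the `delete` timeout alone is the gentler option.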

Terraform: How to append the server count and assign servers to multiple AZs?

流过昼夜 posted on 2019-12-23 22:12:23

Question: main.tf:

resource "aws_instance" "service" {
  ami                         = "${lookup(var.aws_winamis, var.awsregion)}"
  count                       = "${var.count}"
  key_name                    = "${var.key_name}"
  instance_type               = "t2.medium"
  subnet_id                   = "${aws_subnet.private.id}"
  # private_ip                = "${lookup(var.server_instance_ips, count.index)}"
  vpc_security_group_ids      = ["${aws_security_group.private-sg.id}"]
  associate_public_ip_address = false
  availability_zone           = "${var.awsregion}a"
  tags {
    Name        = "${format("server-%01d", count.index + 1)}"
    Environment = "${var…
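The hard-coded `"${var.awsregion}a"` pins every instance to one AZ. The usual fix is to cycle through an AZ list with `element()`, which wraps around when the index exceeds the list length. A sketch in the same 0.11-style syntax (the variable name and AZ values are assumptions):

```hcl
# Sketch: spread count-created instances across AZs by cycling
# through a list. The variable name and AZ values are assumptions.
variable "azs" {
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

resource "aws_instance" "service" {
  count             = "${var.count}"
  # element() wraps around, so index 3 maps back to the first AZ.
  availability_zone = "${element(var.azs, count.index)}"
  # ... remaining arguments as in the original resource ...
}
```

Note that `subnet_id` must match the chosen AZ, so in practice you would also create one subnet per AZ and select it with the same `element(..., count.index)` pattern.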

Run simple web server with Terraform remote-exec

拜拜、爱过 posted on 2019-12-23 18:54:37

Question:

# example.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami             = "ami-0d44833027c1a3297"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.example.name}"]
  key_name        = "${aws_key_pair.generated_key.key_name}"

  provisioner "remote-exec" {
    inline = [
      "cd /home/ubuntu/",
      "nohup python3 -m http.server 8080 &",
    ]
    connection {
      type        = "ssh"
      private_key = "${tls_private_key.example.private_key_pem}"
      user        = "ubuntu"
      timeout     = "1m"
    }
  }
}

resource "tls…
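A bare `nohup ... &` inside remote-exec often hangs or dies, because the provisioner waits for the command's stdout/stderr streams to close, and the backgrounded process still holds them open via the SSH session. Redirecting the streams detaches it cleanly. A sketch of the adjusted provisioner:

```hcl
# Sketch: fully detach the background server so remote-exec's SSH
# session can close without killing it or waiting forever.
provisioner "remote-exec" {
  inline = [
    "cd /home/ubuntu/",
    "nohup python3 -m http.server 8080 > /dev/null 2>&1 &",
    "sleep 1",   # brief pause so the shell exits cleanly after forking
  ]
}
```

With stdout and stderr redirected, the SSH channel sees EOF immediately and the provisioner returns while the web server keeps running.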

Terraform grant azure function app with msi access to azure keyvault

别说谁变了你拦得住时间么 posted on 2019-12-23 14:48:25

Question: I'm experimenting with using Terraform to set up a scenario in Azure where Terraform creates:
- an Azure function app with Managed Service Identity
- an Azure Key Vault
- a Key Vault access policy that allows the function app to access secrets in the key vault

My problem is around using the object id (principal id) of the MSI set up for the function app in the definition of the key vault access policy. I suspect I'm doing something wrong (and/or stupid)... The error I get from a terraform apply…
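The piece that usually trips people up is wiring the function app's system-assigned identity into the access policy. A hedged sketch of that wiring, using azurerm attribute paths as they existed around the 2019-era providers (treat the exact names and list-index syntax as assumptions to verify against your provider version):

```hcl
# Sketch (attribute names are assumptions; check your azurerm
# provider docs): expose the MSI principal from the function app
# and feed it to the Key Vault access policy.
resource "azurerm_function_app" "app" {
  # ... existing arguments unchanged ...
  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_key_vault_access_policy" "app" {
  key_vault_id       = "${azurerm_key_vault.kv.id}"
  tenant_id          = "${azurerm_function_app.app.identity.0.tenant_id}"
  object_id          = "${azurerm_function_app.app.identity.0.principal_id}"
  secret_permissions = ["get", "list"]
}
```

Because `identity` is exported as a list, the `.0.` index is required; referencing `identity.principal_id` directly is a common source of errors here.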

Terraform - Upload file to S3 on every apply

ぃ、小莉子 posted on 2019-12-23 13:58:13

Question: I need to upload a folder to an S3 bucket. When I apply for the first time, it uploads. But I have two problems here:

1. The uploaded version outputs as null. I would expect some version_id like 1, 2, 3.
2. When running terraform apply again, it says "Apply complete! Resources: 0 added, 0 changed, 0 destroyed." I would expect it to upload every time I run terraform apply and create a new version.

What am I doing wrong? Here is my Terraform config:

resource "aws_s3_bucket" "my_bucket" {
  bucket…
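Both symptoms have the same two-part explanation: `version_id` stays null unless bucket versioning is enabled, and Terraform only re-uploads an object when something in its config changes, which an `etag` tied to the file contents provides. A sketch (bucket name, key, and paths are assumptions):

```hcl
# Sketch: enable versioning so version_id is populated, and tie the
# object's etag to the file contents so changed files are re-uploaded.
# Names and paths are assumptions.
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket-name"

  versioning {
    enabled = true   # without this, version_id is always null
  }
}

resource "aws_s3_bucket_object" "file" {
  bucket = "${aws_s3_bucket.my_bucket.id}"
  key    = "path/in/bucket/file.txt"
  source = "local/file.txt"
  etag   = "${filemd5("local/file.txt")}"  # changes force a new upload
}
```

`filemd5()` is a Terraform 0.12 function; on 0.11 the equivalent is `"${md5(file("local/file.txt"))}"`. Note this re-uploads only when the file content actually changes, not unconditionally on every apply, which is normally the desired behavior.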

How to keep the last X ECS task definitions active?

做~自己de王妃 posted on 2019-12-23 12:50:58

Question: I have the following Terraform code to update a service with a new task definition:

resource "aws_ecs_task_definition" "app_definition" {
  family                = "my-family"
  container_definitions = "${data.template_file.task_definition.rendered}"
  network_mode          = "bridge"
}

resource "aws_ecs_service" "app_service" {
  name            = "my-service"
  cluster         = "my-cluster"
  task_definition = "${aws_ecs_task_definition.app_definition.arn}"
  desired_count   = "1"
  iam_role        = "my-iam-role"
}

When updating my service, the last…
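By default, when Terraform replaces a task definition it deregisters the old revision, marking it INACTIVE. Newer AWS provider versions added a `skip_destroy` argument on `aws_ecs_task_definition` for exactly this; treat its availability as an assumption to verify against your provider version, since it postdates the providers in use here:

```hcl
# Hedged option (requires an AWS provider version that supports
# skip_destroy on aws_ecs_task_definition; verify before relying on it):
# superseded revisions stay ACTIVE instead of being deregistered.
resource "aws_ecs_task_definition" "app_definition" {
  family                = "my-family"
  container_definitions = "${data.template_file.task_definition.rendered}"
  network_mode          = "bridge"
  skip_destroy          = true   # keep old revisions ACTIVE for rollback
}
```

This keeps all past revisions active rather than exactly the last X; pruning down to a fixed number would still need an out-of-band script against the ECS API.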