As an engineer who designs and builds on EC2, I am pro Infrastructure as Code:
- I like programming rather than tedious, error-prone clicking through a UI.
- I want to minimize my work if I have to change cloud providers (e.g., move from AWS to Azure, or from on-prem to AWS).
- I want to generate a graph of dependencies among infrastructure components.
Most of all, I want my team members who are not keen on, as some of them say, “DevOpsy” work, to see that DevOps, infrastructure engineering, and IT Operations are not the same thing! Those three topics are all big ideas, but this exercise will just tackle one of them: Infrastructure as Code. We can make infrastructure something real and more familiar. Once we have repeatable, usable infrastructure, we at least have a part of the automation required for implementing DevOps in an organization.
I decided to put together an exercise using Terraform to provision a TeamCity installation on AWS. This tutorial provides a very basic installation. If you need a more sophisticated setup, this is a good starting point for you, and you can add your specific infrastructure requirements. High-availability, zero-downtime deployments, DR, monitoring, etc. are out of scope for now.
You can find this code on GitHub.
Tools:
- AWS to host TeamCity. You need familiarity with AWS. We won’t be focusing on the AWS UI and how to set things up via point and click, though I’ll show you how to navigate AWS to visually see your changes.
- TeamCity for continuous integration. You also need to be familiar with TeamCity. We will not be going through TeamCity training. We will just confirm that it works properly.
- Terraform for infrastructure coding. We won’t go deep into learning Terraform. You can refer to the HashiCorp documents if you want to learn more.
- Docker to install TeamCity. Basic familiarity with Docker is helpful but not required.
Getting started:
- We will be using `make` later in the tutorial. If you're on OS X like me, you may need to install it: `brew install make --with-default-names`
- If you don't want to use `make`, copy the commands from the makefile and run them manually in your terminal
- TeamCity 2017.2+ has a free license with 3 build agents and 100 build configurations. You can start there and purchase a license if you need more. There's no need to download anything right now…we'll be using a Docker image of TeamCity.
- Install terraform: `brew install terraform`
- Set up your AWS account
- Create a non-admin user with the permissions needed to perform the following steps. Avoid using your admin user to make changes in AWS. Here are my permissions…adjust based on your needs:
- AmazonRDSFullAccess
- AmazonEC2FullAccess
- AmazonS3FullAccess
- AmazonDynamoDBFullAccess
- AmazonVPCFullAccess
NOTE:
- I am using resources available in us-east-2. If you prefer to use a different region or different instances, you will need to change the following accordingly:
- region in variables.tf
- debian_ami in variables.tf
- instance_type in ec2/main.tf
- instance_class in rds/main.tf
- This is a great source for aws instance price information and region availability: ec2instances.info
- To choose a different ami:
- Visit the AWS marketplace and filter the results (i.e. All Infrastructure, Amazon Machine Image, Free, API, in the next window select your region …)
- Pick your image and “Continue to Subscribe” > “Continue to configuration”
- Make a note of the AMI ID. In my case, it's `ami-0bd9223868b4778d7`
- DO NOT CONTINUE TO LAUNCH…we’re going to do this in terraform!
- Protip: To keep AWS charges to a minimum, run terraform destroy at the end of the tutorial, if you wish.
- I am using main.tf, variables.tf, and outputs.tf consistently throughout this project. You may wish to change the names, or just develop in one single file. All .tf files in a directory are loaded together. Separating them is my personal preference to keep things modular and minimize the amount of code I have to read when I want to go back and make a change later.
- I am using macOS Mojave, so change the bash commands appropriately if you are using a different OS.
What are we building?
In this tutorial, first, we are going to build a Virtual Private Cloud. We do not want the RDS instance to be publicly available, so next we will add a private subnet that will contain the TeamCity database. Then we're going to add a NAT gateway and a public subnet, along with routing and security groups for both the public and the private subnets. In this example, I am using an RDS instance for the PostgreSQL database and an S3 bucket for the backups. Finally, we're going to launch an EC2 instance inside the public subnet. We'll use TeamCity's Docker image to host TeamCity inside the EC2 instance.
- Create a VPC
- Build a private subnet for TeamCity’s database on RDS
- Create a public subnet for NAT
- Add NAT
- Create a public subnet
- Create routing
- Create a security group for the public Subnet
- Create a security group for the private Subnet
- Add RDS
- Create the database
- Create S3 backup
- Launch a public EC2 instance to host TeamCity (using its Docker image) and connect to RDS PostgreSQL
Let’s get started!!
Create a VPC
If you have created your Amazon account within the past couple of years, it’s likely that it comes with a default VPC. We are going to create an isolated Virtual Private Cloud and subnets and use them to host TeamCity.
Once you've installed terraform, create an empty folder where you want to write your code, and `cd` into it.
In the root directory, create a file called main.tf. The .tf extension is a terraform convention. We will use this main.tf as a driver that calls the other modules we'll build along the way.
Now let's write some terraform code. An advantage of infrastructure as code is that it's scalable and makes it easier to change your infrastructure provider. In this tutorial we're going to use AWS, thus, our provider is `aws`.
I am going to use us-east-2. You can use any region you like. To prevent automatic upgrades to new major versions that may contain breaking changes, add a `version = "..."` constraint to the provider block. You can find the possible provider version constraints in the Terraform documentation.
main.tf
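A minimal sketch of what this file might look like; the version constraint value is illustrative, not the exact string from the original:

```hcl
# main.tf — AWS provider pinned to a major version to avoid breaking upgrades
provider "aws" {
  region  = "us-east-2"
  version = "~> 1.50" # example constraint; check the provider docs for yours
}
```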
Next, we want to add an aws_vpc resource to create our VPC. We’ll call it vpc. This is like naming a variable so you can access it later throughout your project.
For the cidr_block, we're using a 10.0.0.0 range rather than 192.168: 192.168 addresses are mainly associated with home and personal networks.
`enable_dns_hostnames` is false by default. We want to enable DNS hostnames in the VPC, so set it to true.
Add a tag for "Name" to see the name in the VPC list, especially if you have more than one VPC, so that you can easily distinguish them.
main.tf
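A sketch of the resource described above; the Name tag value is my placeholder:

```hcl
# The VPC for our TeamCity installation
resource "aws_vpc" "vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true # false by default

  tags = {
    Name = "teamcity-vpc" # placeholder name; pick your own
  }
}
```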
Now you have enough code to see terraform in action. In your terminal, initialize terraform:
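```bash
terraform init
```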
If you look at your project's root directory, you'll see a .terraform folder. This is where terraform stores the provider plugins and module code it downloads; the record of what's been created or changed is kept in the state file (terraform.tfstate).
To see the plan before you execute it, run the following command:
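```bash
terraform plan
```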
You will see the following error because you haven’t told terraform how to access your Amazon account!
You can do this in a couple of different ways. I’m going to assume that you’re familiar with AWS’s Best Practices for Managing AWS Access Keys. As a brief primer, there are only a few AWS tasks that require root. You should, at a minimum, have created an IAM Admin User and Group. Use your IAM user’s access keys, not keys attached to your root user. Setting this up is outside the scope of this tutorial, but refer to the AWS documentation if you need to do this step.
You can export the credentials as environment variables into the terminal shell and you won’t be prompted as long as you use that shell. Or you can save them into a .tfvars file and add it to the provider (make sure to include this file in .gitignore).
Export into the shell (if you choose this, skip creating variables.tf and the modifications to main.tf; you can run `terraform plan` and you won't see any errors):
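The AWS provider reads these standard environment variables; substitute your IAM user's actual keys:

```bash
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
```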
If you want to use a .tfvars file instead, add the terraform file to your root directory:
terraform.tfvars
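A sketch, assuming the variable names `access_key` and `secret_key` (declared later in variables.tf):

```hcl
# terraform.tfvars — real values go here; keep this file in .gitignore
access_key = "your-access-key-id"
secret_key = "your-secret-access-key"
```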
If you exported the values in the shell, skip the following main.tf changes and variables.tf, and run `terraform plan`.
main.tf
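A sketch of the updated provider block, now reading credentials from variables:

```hcl
# main.tf — provider reads credentials from variables instead of the shell
provider "aws" {
  region     = "us-east-2"
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
}
```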
Run `terraform plan` again. Now you see a different error: :/
In the root directory create a variables.tf file. For the rest of this tutorial, we’ll use this file to declare what variables we need for the modules we are going to build.
Terraform variables are declared in a variable block. You can declare a type, description, and a default value:
variables.tf
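A sketch of the two variable declarations the provider block references; descriptions are my own wording:

```hcl
variable "access_key" {
  type        = "string"
  description = "AWS access key id for the non-admin IAM user"
}

variable "secret_key" {
  type        = "string"
  description = "AWS secret access key for the non-admin IAM user"
}
```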
Try again to see the plan that's going to be executed and verify that it's what you want to do. In our case, we want to create a vpc:
Looks great, run the following command to execute the plan. You will be prompted to verify if that’s what you want to do…enter yes:
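```bash
terraform apply
```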
Great!!!! Now if you navigate to VPC in your AWS account (AWS > Services > VPC), you'll see that the VPC has been created! This is step 1 in your architecture diagram.
Building a private subnet for RDS
In this section we're going to add more code to our terraform project. Before we do, let's do a small refactoring to keep things organized going forward. Some of you may think this isn't needed until the same block of code is called three times…you can go ahead and use a single file if you prefer. Personally, I like to add a little structure to my projects, knowing this is going to grow:
In terraform, modules are similar to classes in OO programming. I want to move the job of creating the VPC, subnets, NAT, and routing to separate modules.
Now let’s move the “aws_vpc” from main.tf to vpc/main.tf
vpc/main.tf
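The same aws_vpc resource, moved here unchanged (the Name tag value remains my placeholder):

```hcl
# vpc/main.tf — the aws_vpc resource, moved from the root main.tf
resource "aws_vpc" "vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true

  tags = {
    Name = "teamcity-vpc" # placeholder name
  }
}
```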
Now change main.tf to call the vpc module. The source's value is the path to the vpc module. You can also use a git URL, if a different team is in charge of that module.
main.tf
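A minimal sketch of the module call:

```hcl
# main.tf — the root module now just wires up the vpc module
module "vpc" {
  source = "./vpc"
}
```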
Because we just added a folder (module vpc), we need to run terraform init to let terraform know about this change:
Moving forward…
RDS requires subnets in at least two availability zones, so we'll need at least 2 private subnets. Before we add a private subnet, however, we need to create a NAT gateway. A NAT gateway needs an Elastic IP to work, and needs to be in the public subnet.
Let's go ahead and make those changes. We will be using aws_eip and aws_nat_gateway.
vpc/nat.tf
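A sketch of the two resources; the resource names are my assumptions:

```hcl
# vpc/nat.tf — an Elastic IP and the NAT gateway that uses it
resource "aws_eip" "nat_eip" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = "${aws_eip.nat_eip.id}"
  subnet_id     = "${aws_subnet.public.id}" # NAT lives in the public subnet
}
```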
Now we’re ready to add a single public subnet.
Create subnets.tf inside the vpc folder. We want to add an aws_subnet resource and name it public. I am passing in the availability zone as a variable since I don't have a preference; if you do, you can hard-code this to "us-east-2a", for example.
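A sketch of the public subnet; the cidr_block value is my assumption:

```hcl
# vpc/subnets.tf — the public subnet, taking the first of the AZs passed in
resource "aws_subnet" "public" {
  vpc_id                  = "${aws_vpc.vpc.id}"
  cidr_block              = "10.0.1.0/24" # illustrative slice of the VPC range
  availability_zone       = "${var.availability_zones[0]}"
  map_public_ip_on_launch = true
}
```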
Now we’re ready to create a route table and association for the NAT.
In this file, we need to create an aws_internet_gateway (we'll name it vpc_igw), then an aws_route_table (vpc_public), and last but not least an association resource, aws_route_table_association (vpc_public).
vpc/routing.tf
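A sketch using the resource names given above:

```hcl
# vpc/routing.tf — internet gateway, public route table, and association
resource "aws_internet_gateway" "vpc_igw" {
  vpc_id = "${aws_vpc.vpc.id}"
}

resource "aws_route_table" "vpc_public" {
  vpc_id = "${aws_vpc.vpc.id}"

  route {
    cidr_block = "0.0.0.0/0" # send all outbound traffic to the IGW
    gateway_id = "${aws_internet_gateway.vpc_igw.id}"
  }
}

resource "aws_route_table_association" "vpc_public" {
  subnet_id      = "${aws_subnet.public.id}"
  route_table_id = "${aws_route_table.vpc_public.id}"
}
```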
Everything looks good. Since we are already calling the vpc module from main.tf, let’s go ahead and see the plan so far:
Looks like we need to pass in some variables:
Let’s create a default cidr_block inside the vpc/variables.tf but pass in the availability_zones from main.tf.
vpc/variables.tf
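A sketch of the two variables described above:

```hcl
# vpc/variables.tf — cidr_block defaults here; availability_zones comes
# from the root main.tf
variable "cidr_block" {
  default = "10.0.0.0/16"
}

variable "availability_zones" {
  type = "list"
}
```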
and change the main.tf, where we call the vpc module:
Before we apply our changes, let’s create a security group for this public VPC as well:
Inside sg/main.tf, we are going to create an aws_security_group (teamcity_web_sg) and add ingress and egress rules:
sg/main.tf
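A sketch of the security group; the ports are my assumptions (8111 is TeamCity's default web port, 22 is ssh), and wide-open cidr_blocks are for tutorial simplicity only:

```hcl
# sg/main.tf — security group for the public subnet
resource "aws_security_group" "teamcity_web_sg" {
  name   = "teamcity_web_sg"
  vpc_id = "${var.vpc_id}"

  ingress {
    from_port   = 8111 # TeamCity web UI
    to_port     = 8111
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22 # ssh
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0 # allow all outbound traffic
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```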
We are using the vpc_id to create the security group, so let's go ahead and pass that in:
sg/variables.tf
Last, but not least, we need to call this module from main.tf
Looks like we have solid code to build our public subnet. Since we added the sg module, we need to run init again:
And there's an error:
This is because we're passing vpc_id, but we never defined it as an output of the vpc module:
vpc/outputs.tf
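A sketch of the missing output:

```hcl
# vpc/outputs.tf — expose the VPC id so other modules can consume it
output "vpc_id" {
  value = "${aws_vpc.vpc.id}"
}
```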
Let’s try again:
If you want to see all the changes you made, navigate to:
- AWS > VPCs
- Subnets
- Route Tables
- Internet Gateways
- Elastic IPs
- Nat Gateways
- Network ACLs
- Security Groups
That’s a lot of changes! This is a good place to commit our changes to git, if you haven’t done it yet. This is my .gitignore file:
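The exact contents aren't shown here; a typical .gitignore for a terraform project looks roughly like:

```
.terraform/
*.tfstate
*.tfstate.backup
terraform.tfvars
```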
Now, we’re ready to create our private subnet and security group for the RDS:
vpc/subnets.tf
vpc/routing.tf
vpc/variables.tf
and the RDS security group:
Apply the changes:
Congrats! You have everything you need to build your RDS now! Don’t forget to commit your changes often!
So far, this is what my folder structure looks like:
Add RDS
Let’s add our RDS module first:
We will be using an aws_db_instance resource. I am using an instance_class that’s available in us-east-2. Change the instance_class appropriately if you are using a different region (please refer to the Notes section in the beginning of this tutorial).
rds/main.tf
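A sketch of what this file might contain; the subnet group, identifier names, storage size, and instance class are my assumptions:

```hcl
# rds/main.tf — subnet group spanning the private subnets + the PostgreSQL
# instance (names and sizes illustrative)
resource "aws_db_subnet_group" "teamcity" {
  subnet_ids = ["${var.private_subnet_ids}"]
}

resource "aws_db_instance" "teamcity" {
  engine                 = "postgres"
  instance_class         = "db.t2.micro" # pick one available in your region
  allocated_storage      = 10
  name                   = "teamcity"
  username               = "teamcity"
  password               = "${var.db_password}"
  db_subnet_group_name   = "${aws_db_subnet_group.teamcity.name}"
  vpc_security_group_ids = ["${var.rds_security_group_id}"]
  skip_final_snapshot    = true # fine for a tutorial, not for production
}
```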
rds/variables.tf
Call the rds module from main.tf
If you run the plan, you will see that we're missing a few variables. Let's go ahead and get rid of those errors. First is db_password. We are going to add the password to terraform.tfvars. This file is not checked into git, so it's a safe place to store our secrets.
terraform.tfvars
variables.tf
We also need to add a few outputs:
sg/outputs.tf
vpc/outputs.tf
Now we’re ready:
This is going to take a while…so go out for a coffee break and stretch. See you in about 15-20 minutes!
Nice! Navigate to AWS > RDS to see your newly created RDS!
While we’re at it, let’s create an S3 bucket for the backup
This is going to be very short and sweet compared to the last step:
s3/main.tf
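A sketch of the bucket resource; the resource and variable names are my assumptions:

```hcl
# s3/main.tf — a private bucket for TeamCity backups
resource "aws_s3_bucket" "backup" {
  bucket = "${var.backup_bucket_name}" # bucket names must be globally unique
  acl    = "private"
}
```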
s3/variables.tf
s3/outputs.tf
main.tf
variables.tf
Apply changes:
Looks good!
Summary: So far we have created a VPC, a public subnet, NAT, 2 private subnets, public, private and rds security groups, an RDS instance in the private subnet, and an S3 bucket for the backup. At this point, my folder structure looks like this:
Launch a public EC2 instance to host TeamCity
Now we're ready to build our EC2 instance. We are going to launch this in the public subnet. Next we're going to use the TeamCity Docker image, run it in our instance, and configure it to use the RDS we just built.
I am going to use a debian trusted image that I have used before. You may choose debian, ubuntu, etc. Just keep in mind that they might have a different default username for ssh. For example, debian uses `admin`.
Before we create the ec2 module, AWS requires a key pair. For simplicity, we can generate a key pair locally and upload the public key to AWS. I am using a makefile to generate the key and save it in ~/.ssh; you can change the path if you wish. Let's call our key teamcity. Create a makefile in the root.
makefile
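A sketch of the makefile; the target name and ssh-keygen flags are my assumptions:

```makefile
# makefile — generate the "teamcity" key pair in ~/.ssh
# (recipe lines must be indented with a tab, not spaces)
key:
	ssh-keygen -t rsa -b 4096 -f ~/.ssh/teamcity -N ""
```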
Make is very particular about spacing, so make sure you get the tabs right!
Then, we need to upload the public key to aws.
ec2/main.tf
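A sketch of the key upload; the variable names are my assumptions:

```hcl
# ec2/main.tf — register the public key with AWS so the instance can use it
resource "aws_key_pair" "teamcity" {
  key_name   = "${var.key_name}"
  public_key = "${file("${var.public_key_path}")}" # e.g. ~/.ssh/teamcity.pub
}
```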
Now let's add the ec2 instance. Again, I am using an instance_type that's available and affordable in us-east-2; change this appropriately if you are using a different region (please refer to the Notes section at the beginning of this tutorial).
ec2/main.tf
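A sketch of the instance resource; the variable names are my assumptions based on the surrounding text:

```hcl
# ec2/main.tf — the TeamCity host, launched in the public subnet
resource "aws_instance" "teamcity_web" {
  ami                    = "${var.debian_ami}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${var.public_subnet_id}"
  vpc_security_group_ids = ["${var.security_group_id}"]
  key_name               = "${var.key_name}"
}
```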
I want to use user_data to configure the instance immediately, rendering a setup script through a template resource. Add this to ec2/main.tf:
setup.sh will contain all the steps that I want to run in order to configure my instance. You can also ssh into the machine and run these manually. I am also installing utilities like "tree" because I use them often. You can customize it however you want.
ec2/scripts/setup.sh
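A sketch of the bootstrap script; the package names are assumptions, though `jetbrains/teamcity-server` is the official TeamCity image and 8111 its default web port. In this minimal form, the RDS connection is entered through TeamCity's first-run setup UI:

```bash
#!/bin/bash
# ec2/scripts/setup.sh — instance bootstrap (debian/ubuntu style packages)
sudo apt-get update -y
sudo apt-get install -y docker.io tree

# Run the TeamCity server image, exposing the web UI on port 8111
sudo docker run -d --name teamcity -p 8111:8111 jetbrains/teamcity-server
```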
Let’s go ahead and create our variables.
ec2/variables.tf
It’s time to call our ec2 module.
main.tf
I kept my debian_ami in the variables.tf because I am going to use that image in a couple of other places later…you can hard-code it if you want to. Also, I am using the debian_ami that’s available on us-east-2. Change the image if you are using a different region (please refer to the Notes section in the beginning of this tutorial).
Notice: key_name should be the name of the key you added in AWS. Also, I keep my keys in ~/.ssh…modify this variable accordingly.
variables.tf
Let’s add the rest of the variables and outputs we need.
rds/outputs.tf
vpc/outputs.tf
sg/outputs.tf
Let’s output the ssh command so we can easily access our instance.
outputs.tf
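A sketch of the output; the output name is my assumption, and `admin` is debian's default ssh user as noted earlier:

```hcl
# outputs.tf — a ready-to-use ssh command for the new instance
output "teamcity_ssh" {
  value = "ssh -i ~/.ssh/teamcity admin@${module.ec2.teamcity_web_ip}"
}
```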
Now we need to add an output for ec2 to give us the teamcity_web_ip:
ec2/outputs.tf
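A sketch of the module output:

```hcl
# ec2/outputs.tf — expose the instance's public IP to the root module
output "teamcity_web_ip" {
  value = "${aws_instance.teamcity_web.public_ip}"
}
```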
Let's create our key. Remember that you need to have `make` installed!
At this point, this is how my folder structure looks:
NOW … It’s the moment of truth!!!
You should see a similar output to the following:
BEAUTIFUL!
Navigate to AWS > EC2. You now have 1 running instance. Wait for the initialization to finish. When the status check is done and the server is up, you can access TeamCity via browser, e.g., x.y.z.d.compute-1.amazonaws.com:8111
The first time you access TeamCity via browser, TeamCity might take a long time to initialize, depending on the image and instance_type you chose for EC2.
Congratulations!
I hope this was helpful. Please let me know your thoughts. You can find the code on github: https://github.com/saslani/terraform_teamcity_aws This is a very simple setup, but is on the right track for something production-ready. If you would like to add features, please fork and send me a PR!