In our third ‘how-to’, we wanted to give a basic introduction to the DevOps tools that our developers here at JAM use to streamline and simplify their workflow through automation. Automation increases agility, as it enables devs to work on several projects at once.
There are hundreds of DevOps tools that make the process of building and deploying a project faster, more flexible and more efficient through automation. We asked our devs to pick their top three uses for automation, and their favourite DevOps tools for achieving each one.
End-to-end automation for service deployment and orchestration
End-to-end automation enables devs to write code that automatically replicates the actions they would otherwise take manually. We use this specifically when setting up a cloud platform for a client: our devs can write code that creates the whole environment, whether on AWS, Azure, Alibaba Cloud or any other platform.
There are many tools that can be used to achieve this. These can be specific to each platform: for Azure, you can use ARM (Azure Resource Manager) templates, whilst CloudFormation can be used for AWS. At JAM, we prefer to use a tool called Terraform, which is cross-platform. This benefits us because our experts are able to harness this one tool for any of the platforms we use. Each platform still has its own provider configuration and other variables, but the code itself is easier to familiarise yourself with than learning individual tools.
As well as this, Terraform is scalable and robust: there’s no need to hard-code machine names, and you can create variable names so that it’s easy to replicate projects and collaborate on projects with the wider team. This collaborative feature makes the code better over time.
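As a rough sketch of what that looks like, here is a minimal Terraform configuration. The region, AMI ID and names below are placeholders for illustration, not taken from a real JAM project; the point is that a variable replaces a hard-coded machine name, so the same code can be reused across projects:

```hcl
# Illustrative sketch only: a hypothetical web server resource.
# The variable replaces a hard-coded machine name, making the
# configuration easy to replicate for other projects.
variable "project_name" {
  type        = string
  description = "Prefix used to name resources for each client project"
}

provider "aws" {
  region = "eu-west-2" # placeholder region
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "${var.project_name}-web"
  }
}
```

Swapping the `aws` provider for `azurerm` or another provider changes the resource types, but the language and workflow stay the same.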
We’re also able to make that code better using modules, which package up common functionality so that we don’t have to keep repeating it.
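A module call might look something like this — the `./modules/web_server` path and its inputs are hypothetical, standing in for whatever functionality a team has packaged up:

```hcl
# Hypothetical module call: the "web_server" module would bundle
# the instance, security group and DNS record that we would
# otherwise have to repeat in every project.
module "client_site" {
  source        = "./modules/web_server" # assumed local module path
  project_name  = "client-a"
  instance_type = "t3.micro"
}
```

Each project then calls the module with its own inputs rather than copying the underlying resources.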
It’s easy to use as a team, as we’re able to track changes made by individuals using a code repository. This makes it easier to spot where things might have gone wrong, and we can roll back if needed.
Whilst these are the benefits and reasons why we ourselves use Terraform, at the end of the day, there are many tools that can help with end-to-end automation, and the end goal is the same with all of them.
Infrastructure as Code (IaC)
Our team use Infrastructure as Code (IaC) to duplicate projects as needed by running a script which spins up an entire infrastructure. The possibilities with this are endless: we can deploy virtual servers, pre-configured databases, network infrastructure, storage systems, load balancers, and any other cloud service that we may need.
As an example, this would come in useful if we were to have a client who is working on a Sitecore project with one web server and one CM server, and another client asks for the same thing. Usually, we would have one of our engineers create the first project and then the second. However, if we have already created the first project, and have written the code, we’re easily able to replicate this for the second client. The only barrier to this is that not all functions are available in all regions – this means we need to check when duplicating a project in another region that all the functionality needed is still available.
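A hedged sketch of how one configuration could serve both clients — all names here are hypothetical, and checking that the target region offers every service you need is still down to the team:

```hcl
# Hypothetical sketch: one configuration, deployed once per client
# by changing only the input variables.
variable "client_name" {
  type = string
}

variable "region" {
  type        = string
  description = "Check that every service you need is available here"
}

provider "aws" {
  region = var.region
}

# One content-delivery (web) server and one content-management (CM)
# server, as in the Sitecore example above.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.medium"
  tags          = { Name = "${var.client_name}-web" }
}

resource "aws_instance" "cm" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.medium"
  tags          = { Name = "${var.client_name}-cm" }
}
```

The second client’s copy is then just another `terraform apply -var-file=client-b.tfvars` with a different `client_name` and `region`.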
Similarly to end-to-end automation, there are both platform-specific and cross-platform IaC tools. AWS CloudFormation, Azure Resource Manager and Google Cloud Deployment Manager are each tied to their own platform, while the previously mentioned Terraform — which handles IaC as well as end-to-end automation — along with Chef and Puppet, are all cross-platform. With Chef, you create “recipes” and “cookbooks” which specify the steps to follow to achieve the desired results for your applications and utilities on existing servers.
Puppet, in comparison, lets the dev describe the end state they would like for their infrastructure; Puppet then works out how to get there, applies the changes and corrects any drift.
Continuous integration and continuous delivery (CI/CD)
Testing is an essential part of the deployment process, although it can be a tedious task when done multiple times. To avoid this, JAM uses continuous integration (CI) and continuous delivery (CD):
- Continuous integration (CI): continuously merges code changes into the application, building and testing each change so that new features and/or fixes can be pushed to a chosen schedule
- Continuous Delivery (CD): continuously releases new versions of software safely and quickly
Enabling CI/CD: Automation in the Continuous Integration and Continuous Delivery pipeline ensures that appropriate software builds, data, tests and code changes are delivered to appropriate target environments. DevOps teams can therefore perform frequent code changes, stage the builds for testing and ultimately push frequent software changes to the market.
To build a project, JAM uses a combination of tools, for example Octopus Deploy and TeamCity. Between the build stage and the deployment stage, the tools test the build to check that it is successful before going ahead with the deployment. Blue-green deployments can be used here to ensure that there is no downtime experienced by the end user. With one server, a temporary environment is deployed alongside the live one and then swapped from passive to active. With two servers, our dev would take one out of the load balancer, deploy to it, warm it up and put it back into the pool before doing the same with the second server.
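One way to sketch the blue-green swap in Terraform — assuming a load balancer (`aws_lb.main`) and two target groups (`blue` and `green`) are defined elsewhere in the configuration, which is an illustrative setup rather than JAM’s actual pipeline — is to let a single variable decide which environment receives live traffic:

```hcl
# Hypothetical blue-green sketch: flipping this variable and
# re-applying swaps live traffic between the two environments.
variable "active_environment" {
  type    = string
  default = "blue" # change to "green" for the next release
}

# Assumes aws_lb.main, aws_lb_target_group.blue and
# aws_lb_target_group.green are defined elsewhere.
resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "forward"
    target_group_arn = (
      var.active_environment == "blue"
      ? aws_lb_target_group.blue.arn
      : aws_lb_target_group.green.arn
    )
  }
}
```

After deploying and warming up the idle environment, flipping `active_environment` and re-applying moves users over without downtime.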
JAM uses many automation tools to ensure that the projects we undertake for our clients run smoothly and that we deliver fast, efficient customer service.
Hopefully, this has been a useful insight into some of the DevOps processes JAM uses to implement projects and has inspired you to try out some automation tools for yourself. If you have any questions or would like any advice, feel free to have a look at our DevOps as a service page or get in touch.