
Innovation Hub

This is the place where you can read our newsletter, ICEbreaker, our latest
articles, as well as details of the next events we'll be visiting.

ICE Technology Services project update: Automation

by Asa Sargeant | September 18, 2019 | Q&A

A Q&A session with one of our Cloud Engineers...

Are there any specific areas ICE Technology Services are focussing on right now for a client or clients, or any key projects that you are involved with?

At the moment I am working on front-end and back-end resilience for Seaware, both of which are proving to be an interesting challenge.

How do you tackle front-end resilience?

In this case we have changed the configuration between the Web servers and the Bizlogic servers, so that the Bizlogic servers now sit behind a load balancer. We are still testing this, but so far it seems to be working; if it stands up to load testing, it will allow us to use a pool of Bizlogic servers that are independent of any specific web server.
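To make that concrete, here is a minimal sketch in Python (boto3) of registering a pool of Bizlogic servers behind a load balancer target group, so that web servers talk to the pool rather than to one specific instance. The names, port, VPC and instance IDs are placeholders, not the real Seaware configuration.

# Minimal sketch: put the Bizlogic servers behind a load balancer target group.
# All names, the port and the instance IDs below are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")

# Create a target group for the Bizlogic pool (port 8080 is an assumption).
target_group = elbv2.create_target_group(
    Name="bizlogic-pool",
    Protocol="HTTP",
    Port=8080,
    VpcId="vpc-0123456789abcdef0",       # placeholder VPC
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",           # assumed health-check endpoint
)
tg_arn = target_group["TargetGroups"][0]["TargetGroupArn"]

# Register the existing Bizlogic instances with the pool.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaa111"}, {"Id": "i-0bbb222"}],  # placeholder instances
)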

Our next step will then be to create Scalability Groups for the Web servers, as well as the Bizlogic servers. This will allow new servers to be deployed automatically if one of the existing servers fails or if there is heavy traffic and additional resources are required.
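In AWS terms these "Scalability Groups" correspond to Auto Scaling groups. The sketch below shows, purely as an illustration, what one might look like for the Web tier, with failed instances replaced automatically and a scaling policy that adds capacity under heavy traffic; the names, sizes and thresholds are assumptions rather than the real setup.

# Rough sketch of a "Scalability Group" as an AWS Auto Scaling group (boto3).
# Names, sizes and thresholds are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Group of web servers: failed instances are replaced automatically and the
# group grows when traffic is heavy.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="seaware-web",
    LaunchTemplate={"LaunchTemplateName": "seaware-web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    TargetGroupARNs=["arn:aws:elasticloadbalancing:...:targetgroup/web/abc"],  # placeholder
    HealthCheckType="ELB",
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",   # placeholder subnets
)

# Scale out or back in around an average CPU target as traffic rises and falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="seaware-web",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)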

How do you tackle back-end resilience?

As we cannot run more than one Inventory or Broker instance in any environment, we cannot have multiples for the sake of resilience, so instead we will be creating Scalability Groups that are sized to a single server. This will mean that if the Broker server fails then a new one will be automatically spun up to take its place.
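As a sketch of that single-server case, the group is simply pinned to one instance, so a failed Broker is replaced automatically without two Brokers ever running at once. Again, the names are assumed for illustration.

# Sketch: an Auto Scaling group sized to exactly one Broker instance, so a
# failed server is replaced automatically. Names are assumed.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="seaware-broker",
    LaunchTemplate={"LaunchTemplateName": "seaware-broker-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    HealthCheckType="EC2",
    VPCZoneIdentifier="subnet-aaa,subnet-bbb",   # placeholder subnets
)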

What else are you working on for Seaware as part of this project?

We are also looking at scripting a clean restart of all Seaware components in an environment using Ansible. This will allow us to be sure that the back-end components are started in the right order before the front-end components are started, without the need to log on to multiple servers.
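In practice the restart order would live in an Ansible playbook; purely as an illustration, the sketch below drives Ansible ad-hoc commands from Python so that back-end services are restarted before front-end ones. The inventory groups and service names are invented for the example.

# Illustrative only: restart Seaware components in a fixed order by calling
# Ansible ad-hoc commands from Python. In reality this ordering would sit in
# a playbook; the inventory groups and service names below are invented.
import subprocess

# Back-end components first, front-end components last (assumed names).
RESTART_ORDER = [
    ("broker_servers", "seaware-broker"),
    ("inventory_servers", "seaware-inventory"),
    ("bizlogic_servers", "seaware-bizlogic"),
    ("web_servers", "seaware-web"),
]

for group, service in RESTART_ORDER:
    # The systemd module restarts the service on every host in the group,
    # so nobody has to log on to the individual servers.
    subprocess.run(
        [
            "ansible", group,
            "-i", "inventory.ini",
            "-m", "ansible.builtin.systemd",
            "-a", f"name={service} state=restarted",
        ],
        check=True,  # stop if a tier fails to restart cleanly
    )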

Then there is the deployment of a training environment; we are in the process of constructing a training environment in AWS to help train our engineers on Seaware. We want a ‘playground’ where they can attempt to fix issues without the fear of breaking things, so we needed a way for them to start working quickly and for us to quickly repair any issues caused during a training session.

This will be achieved using AWS CloudFormation, which will be scripted to deploy the training environment from scratch each time we want to run a training session. This will allow for a very fast deployment that we know is configured correctly and has not been changed by anyone. Once the training has ended, we will terminate the environment, which will also help to reduce costs for infrastructure that is not in use.
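A minimal sketch of that lifecycle, again in Python with boto3: stand the stack up from a template before a session, then tear it down afterwards so idle infrastructure is not paid for. The stack name and template URL are placeholders.

# Sketch of the training-environment lifecycle. Stack and template names
# are placeholders, not the real configuration.
import boto3

cloudformation = boto3.client("cloudformation", region_name="eu-west-1")

# Deploy the training environment from scratch.
cloudformation.create_stack(
    StackName="seaware-training",
    TemplateURL="https://example-bucket.s3.amazonaws.com/seaware-training.yaml",  # placeholder
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cloudformation.get_waiter("stack_create_complete").wait(StackName="seaware-training")

# ... run the training session ...

# Terminate the environment once the training has ended.
cloudformation.delete_stack(StackName="seaware-training")
cloudformation.get_waiter("stack_delete_complete").wait(StackName="seaware-training")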

We are also using tools like Packer so that we can script server image builds. This allows us to quickly deploy images that utilise best practice and are properly hardened, with automatically installed and configured software.
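The Packer templates themselves are written in HCL or JSON; the short sketch below just shows how an image build could be wrapped in a script so it can be validated and repeated on demand. The template and variable names are placeholders.

# Illustrative wrapper around the Packer CLI so image builds can be scripted
# and repeated. The template (HCL/JSON) is assumed to exist and to declare a
# 'region' variable; 'seaware-base.pkr.hcl' is a placeholder name.
import subprocess

TEMPLATE = "seaware-base.pkr.hcl"

# Check the template is well-formed before spending time on a build.
subprocess.run(["packer", "validate", TEMPLATE], check=True)

# Build the hardened server image, passing the AWS region as a variable.
subprocess.run(
    ["packer", "build", "-var", "region=eu-west-1", TEMPLATE],
    check=True,
)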

Will this automation deliver tangible benefits?

Absolutely. It will help to prevent issues commonly encountered when deploying servers from the default server images, which can take time to reconfigure and increase the risk that best practice will not be followed or that steps will be missed during the build process.

Storing the images as scripts also allows us to easily make changes when software or configurations are updated. Images stored on their own tend to be left and forgotten until they are needed, only for us to find they are then out of date.

This also allows us to re-use gold standard builds across different clients easily, rather than re-inventing the basic design of the servers for each new client.

Are there any other areas of automation that ICE are looking into?

A lot of our automation effort is currently going into scripted environment deployments: by scripting and deploying client environments automatically, we can reduce the time it takes to build and configure them, whilst guaranteeing that AWS Well-Architected Framework best practices are followed.

These scripts will deploy the AWS infrastructure and configure it to be as secure and robust as possible, and they will also deploy servers that have been built using Packer-scripted images, which again helps with best practice.

How much input do clients have on this when working with ICE Technology Services?

Whilst we definitely see ourselves as the experts who are here to guide our clients, it is very much a collaborative process, and the exact requirements of the client are established very early on in the project timeline. That means they can be plugged into the scripts so that, as they are defined and documented, they are also preconfigured. This can save a lot of time compared with the old way of documenting early on and then configuring everything once the build stage is reached.

Finally, this also helps with ensuring any pre-prod or staging environments are exact copies of the production environment as they will all have been deployed using the same scripts.
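As a sketch of how those client requirements could be plugged into the same deployment script, the example below passes them in as CloudFormation parameters, so production, staging and pre-prod are all built from one template. The parameter names, values and template URL are invented for illustration.

# Sketch: client-specific requirements passed in as CloudFormation parameters
# so every environment is built from the same template. Names are invented.
import boto3

cloudformation = boto3.client("cloudformation", region_name="eu-west-1")

def deploy_environment(environment_name, instance_type, db_size_gb):
    """Deploy one environment (e.g. 'prod', 'staging') from the shared template."""
    cloudformation.create_stack(
        StackName=f"client-seaware-{environment_name}",
        TemplateURL="https://example-bucket.s3.amazonaws.com/client-environment.yaml",  # placeholder
        Parameters=[
            {"ParameterKey": "EnvironmentName", "ParameterValue": environment_name},
            {"ParameterKey": "AppInstanceType", "ParameterValue": instance_type},
            {"ParameterKey": "DatabaseSizeGb", "ParameterValue": str(db_size_gb)},
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

# Staging and production differ only in the parameters they are given.
deploy_environment("staging", "t3.large", 100)
deploy_environment("prod", "m5.xlarge", 500)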

What actions lie ahead for this automation project?

We have started looking at possibly using Docker containers to host the Seaware software components rather than AWS EC2 instances. This has been analysed and checked and it looks like it will work, but we are still at a very early stage in the process, so we haven't begun any testing as yet. If it does work, though, it should seriously reduce the hosting costs for Seaware while retaining all of the resilience and scaling features we are currently using with EC2.
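Since this work is still at an early stage, the following is purely an illustration of the idea, using the Docker SDK for Python to run one component as a container instead of on its own EC2 instance; the image name, port and restart policy are invented.

# Purely illustrative: run a Seaware component as a Docker container.
# The image name, port and restart policy are invented for the example.
import docker

client = docker.from_env()

client.containers.run(
    "seaware-bizlogic:latest",            # hypothetical image name
    detach=True,
    name="bizlogic-1",
    ports={"8080/tcp": 8080},             # assumed application port
    restart_policy={"Name": "always"},    # restart automatically on failure
)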

Glossary

Front-end / Back-end

Public web servers are the ‘front end’ that users can reach, and private application servers are the ‘back end’ that provide the main functionality of the application.

Web servers

Web servers deliver web pages to web users. A web user makes a request from their computer, which is forwarded to the Web server by the computer's HTTP client (typically a browser). The Web server then uses ‘Hypertext Transfer Protocol’ (HTTP) to deliver the files that form Web pages.

Load balancer

A piece of hardware (or virtual hardware) that distributes network and / or application traffic across different servers by acting like a reverse proxy. Load balancers distribute the workload across multiple servers, thus decreasing the burden placed on each individual server.

Ansible

Ansible is an open-source tool used for application deployment, software provisioning and configuration management. It uses its own declarative language to describe system configuration and was originally written by Michael DeHaan.

AWS / AWS CloudFormation

Amazon Web Services is an evolving and comprehensive cloud computing platform; AWS CloudFormation is a service that provides developers and businesses with a simple method for collecting related AWS resources and provisioning them in a predictable and orderly fashion.

EC2 / EC2 instances

EC2 is Amazon's Elastic Compute Cloud; an EC2 instance is a virtual server in EC2 used for running applications on the AWS infrastructure.

Packer Tool

A lightweight open source tool that is able to run on every major operating system, Packer is used to create identical machine images for multiple platforms from a single source configuration.

Docker containers

Docker is an open-source software platform used to package applications into containers, making them portable to any system running a Linux or Windows operating system. Docker is one form of container technology, but it is seen as a major player in its field.
