The cloud continues to evolve in exciting ways. At first, the cloud offered organizations a way to build applications without owning any hardware by providing virtual machines. Other services followed, but when it came to general-purpose compute, virtual machines were the default choice.

These virtual machines offered many benefits, as organizations no longer needed to procure, manage, and replace physical hardware.

Virtual machines, however, still require management and maintenance. Patching, monitoring, and capacity planning demand much the same effort and overhead as running physical machines.

Serverless Technology

The latest change we’ve seen in the evolution of the cloud is serverless technology. Serverless does to virtual machines what the first cloud systems did to physical machines. By embracing serverless deployment options, organizations can focus purely on their application rather than on machine resources: no more tracking, patching, or figuring out how best to utilize machines.

For enterprise web applications, this is exciting. Now the underlying cloud infrastructure does the machine heavy lifting, making it easier to scale applications to meet user demand. There are many benefits to ‘serverless’ beyond not having to manage servers, which we will discuss below.

In this blog, we share our experience with serverless deployment of our application (FME) on AWS Fargate.

The Benefits of Containers as a Service (CaaS)

Kubernetes is an open-source system for running containers across a variety of public and private cloud environments. AWS offers a managed Kubernetes hosting environment called Amazon EKS (Elastic Kubernetes Service). When EKS is paired with AWS Fargate, you get a “Containers as a Service” that enables a “serverless” experience for the teams deploying applications.
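To make this concrete, here is a minimal sketch of standing up such an environment using eksctl’s ClusterConfig format. The cluster name, region, and namespace are hypothetical; your own setup will differ:

```yaml
# Minimal eksctl config for an EKS cluster with a Fargate profile.
# Pods created in the "fme" namespace are scheduled onto Fargate,
# so no worker nodes need to be provisioned or managed.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fme-cluster      # hypothetical cluster name
  region: us-west-2      # hypothetical region
fargateProfiles:
  - name: fme-server
    selectors:
      - namespace: fme   # match pods in this namespace
```

Running `eksctl create cluster -f cluster.yaml` against a file like this produces a cluster where matching pods land on Fargate capacity from day one.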

If you are already using Kubernetes or are looking to adopt the de facto industry standard for open-source container orchestration, EKS provides the many benefits of Kubernetes without the operational responsibility of hosting and configuring the Kubernetes environment.

Most public cloud vendors now have EKS-like managed offerings, including Azure (AKS), Google Cloud (GKE), and DigitalOcean Kubernetes. This is in addition to the availability of on-premises, private cloud configurations for large organizations.

With Kubernetes, organizations can exclusively use cloud-native technologies or embrace hybrid computing models where some capabilities are in the cloud and some are on-premises.

Security

When you deploy an application with AWS Fargate using Amazon EKS, AWS is responsible for security-related operational tasks such as updating the operating systems of the underlying virtual machines used to run pods. The Amazon EKS Best Practices Guide for Security calls out some security-related differences between using EC2 and Fargate to run Kubernetes pods.

This not only helps us and other application developers implement the best security measures, but it also reassures our users that the application processing their data meets industry quality standards.

Compliance

Customers that operate in highly regulated industries, like many FME users, spend a lot of time making sure the stack they are running is compliant. Whether it is ISO, HIPAA, or PCI compliance, the engineering effort involved is significant.

One of the many advantages of using managed services such as AWS Fargate is that you can offload this burden to AWS and point the auditor to the relevant AWS documentation for a particular (compliant) service. This is an attractive alternative to using compute primitives (such as Amazon EC2), where you spend time and money creating and documenting a compliant setup.

Portability

As EKS is based on Kubernetes, you are not committed to AWS. We used our standard Kubernetes Helm charts with minor tweaks. We didn’t have to go “all in” on AWS and spend a huge effort building something AWS-specific. This kind of portability is a huge asset because it also provides flexibility for changes that may happen in the future.

Our ultimate goal with cloud technology is to build a flexible, cloud-agnostic deployment where capacity is not wasted and adjusts “just in time” to meet the needs of the application. Unused capacity, whether machines or containers, is wasted capacity and ultimately costs money.

FME Server Deployment on AWS Fargate using EKS

We chose to deploy with AWS Fargate using Amazon EKS because FME Server already runs on Kubernetes. This speaks to one of the big benefits of AWS Fargate using Amazon EKS: It embraces the Kubernetes standard.

This option allowed us to keep using our Kubernetes deployment experience and knowledge while leveraging the power of “serverless” with Fargate. With a proprietary AWS CaaS such as Fargate on ECS, we would not have been able to deploy FME Server with the same ease.

With this in mind, we embarked on our mission to make FME available on AWS Fargate (EKS).

Optimizing for Serverless

While deploying FME Server with Fargate was more straightforward than other options, we still needed to make some changes to optimize for our AWS deployment.

The first was deciding where to run the database. Fargate is not a good candidate for running databases due to its limited support for volumes. For this reason, we chose to embrace a proper cloud database service and deployed the database on Amazon RDS. Using a managed database solution is always a good idea for production deployments of enterprise applications.
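The FME Server Helm chart has its own way of configuring an external database, but purely as an illustration, connection details for an RDS instance can be handed to application pods through a standard Kubernetes Secret like the sketch below (all names and values are hypothetical placeholders):

```yaml
# Illustrative only: connection details for an external Amazon RDS
# database, stored as a Kubernetes Secret for application pods to consume.
apiVersion: v1
kind: Secret
metadata:
  name: fme-database          # hypothetical Secret name
  namespace: fme
type: Opaque
stringData:
  host: fme-db.abc123.us-west-2.rds.amazonaws.com   # placeholder RDS endpoint
  port: "5432"
  database: fmeserver
  username: fmeadmin
  password: change-me         # use a real secrets manager in production
```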

Another consideration was the ingress. Our typical FME Server deployment uses an NGINX ingress, but for Fargate pods we needed to use Amazon’s ALB Ingress Controller. Switching to this ingress controller required just a few small tweaks to our Helm chart.
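The tweaks are mostly annotations on the Ingress resource. Here is a hedged sketch, assuming a hypothetical fme-server-web Service, of what an ALB-backed Ingress for Fargate pods can look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fme-server
  namespace: fme
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Fargate pods are not backed by EC2 instances, so the ALB must
    # register targets by pod IP rather than by node port.
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fme-server-web   # hypothetical Service name
                port:
                  number: 80
```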

Finally, our application needs a shared storage volume. For Fargate, the only supported volume type is EFS. When deploying FME Server in Kubernetes on AWS, EFS is what we recommend for the system share, so it works great with our Fargate-based Amazon EKS deployment as well.
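As a sketch of what that shared volume can look like (the file system ID is a placeholder), a statically provisioned PersistentVolume backed by the EFS CSI driver is claimed with ReadWriteMany access:

```yaml
# Statically provisioned EFS volume for the FME Server system share.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fme-server-share
spec:
  capacity:
    storage: 10Gi            # required field; EFS itself grows elastically
  accessModes:
    - ReadWriteMany          # shared by all FME Server pods
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fme-server-share
  namespace: fme
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # disable dynamic provisioning
  volumeName: fme-server-share   # bind to the pre-provisioned PV above
  resources:
    requests:
      storage: 10Gi
```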

With these considerations in mind, we were able to deploy FME Server into an EKS cluster using Fargate. Once deployed, FME Server runs as if it were deployed in any ordinary Kubernetes cluster.

By deploying on AWS Fargate using Amazon EKS we are able to:

- Offload security-related operational tasks, such as patching the underlying operating systems, to AWS
- Point auditors to AWS compliance documentation instead of building and documenting a compliant stack ourselves
- Keep our deployment portable by staying on standard Kubernetes and Helm charts
- Focus on our application rather than on managing machines

As hoped, we found that deploying FME Server with AWS Fargate using Amazon EKS was straightforward, as we were able to reuse much of the existing Kubernetes deployment of FME Server through our FME Server Helm chart.

How Amazon EKS on AWS Fargate Solves Previous Challenges

AWS Fargate provides a true serverless cloud container technology, completely hiding the underlying infrastructure. For us at Safe, this means it appears as if all the containers run on a single machine. When launching a container (pod), we simply specify the resources needed and Fargate looks after the rest. No more managing worker nodes in our Kubernetes cluster, and no more concern about the machines where a pod runs.
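For example, a pod spec only has to declare its resource requests; Fargate rounds them up to the nearest supported vCPU/memory combination and provisions accordingly. The pod name, namespace, and image below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fme-engine               # hypothetical pod name
  namespace: fme                 # assumed to match a Fargate profile selector
spec:
  containers:
    - name: engine
      image: example.com/fme-engine:latest   # hypothetical image
      resources:
        requests:
          cpu: "1"               # Fargate sizes its compute from these
          memory: 4Gi            # requests; no node capacity to plan
```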

Coupling with FME Server Dynamic Engines

Dynamic Engines on FME Server allow you to scale your data processing up or down based on your business needs, paying for just the work that is done. AWS Fargate on Amazon EKS is the perfect environment for this kind of scaling.

Dynamic Engines were built with elastic environments in mind where compute resources change as an application is being run. In this environment, users pay for the work that is done rather than the capacity. This fits with the trend of “just in time” compute that is possible with environments like AWS Fargate using Amazon EKS.
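As a generic illustration of this “just in time” pattern (not necessarily how Dynamic Engines are implemented internally), a HorizontalPodAutoscaler can grow and shrink a hypothetical engine Deployment, with Fargate provisioning right-sized compute for each new replica:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fme-engine
  namespace: fme
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fme-engine             # hypothetical engine Deployment
  minReplicas: 1
  maxReplicas: 10                # cap on concurrent engine pods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # add pods when average CPU exceeds 70%
```

Because there are no pre-provisioned nodes, capacity appears only when a new pod is scheduled and disappears when it terminates, so unused capacity stays close to zero.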

The Bottom Line

Amazon EKS on AWS Fargate is a great choice for a CaaS if:

- You are already running your application on Kubernetes, or want to adopt the de facto standard for container orchestration
- You want to focus on your application rather than provisioning, patching, and scaling machines
- You operate in a regulated industry and would rather point auditors to AWS compliance documentation
- You value portability and don’t want to be locked in to a single cloud vendor

As you can see, opting for EKS on AWS Fargate has numerous benefits for Safe when it comes to deploying FME Server, and it also greatly assists our users. It’s a win-win situation. So, like our users, we can spend more time creating a product that suits our users’ needs instead of allocating unnecessary amounts of time to infrastructure maintenance.

What’s Next?

From a technology perspective, “serverless” options are expanding quickly, with containers and serverless becoming a common pairing. It is clear that serverless deployment of containers will be a key application deployment model going forward.

AWS re:Invent recently featured many exciting announcements about serverless, container-based deployment options. A good overview can be found here.

For FME Server, we are updating our Helm chart for deploying on Kubernetes so that no manual edits are needed. We are also looking at other serverless deployment options, paying particular attention to other cloud vendors as they release Fargate- and EKS-equivalent solutions.

Our current goal is to have our new “serverless” version of FME Server ready for users by 2021. Stay tuned for updates by following us on Twitter @safesoftware.

Until then, see our other deployment methods or get started with a free trial of FME Server.

Already a customer? Download the latest version of FME Server here!


Don Murray

Don is the co-founder and President of Safe Software. Safe Software was originally founded doing work for the BC Government on a project to share spatial data with the forestry industry. During that project, Don and fellow co-founder Dale Lutz realized the need for a data integration platform like FME. When Don’s not raving about how much he loves XML, you can find him working with the team at Safe to take the FME product to the next level. You will also find him on the road, talking with customers and partners to learn what new FME features they’d like to see.

Grant Arnold

Grant is a DevOps Technical Lead for FME Server and works on all types of deployments for FME Server. In his spare time, Grant likes to play games and play guitar.
