Product Comparison

Here we compare some features of Attini, AWS Developer Tools, and GitHub Actions to better explain what Attini is, and what it is not.


Source control

Vendor              | Support
AWS Developer Tools | Yes
GitHub Actions      | Yes
Attini Framework    | No

Most organizations use some form of source control today, with Git being a very popular option.

AWS has a service called AWS CodeCommit, which is a managed Git service for private repositories.

GitHub is one of the most popular Git SaaS providers on the market for both private and public repositories; it is also the largest host of open source code repositories.

Attini has no intention of providing a service for source control because we believe there are already very good alternatives on the market, like AWS CodeCommit or GitHub. Instead, Attini should be used with a source control service of your choice.


Build server

Vendor              | Support
AWS Developer Tools | Yes
GitHub Actions      | Yes
Attini Framework    | No

A build server is used to build, compile, or package your software artifacts, for example Docker containers. It usually integrates with your source control and your deployment pipeline, and it often runs automated tests and pushes artifacts to a central repository.

AWS has a service called AWS CodeBuild, a container-based serverless build server that integrates very well with other AWS services, for example CodePipeline, Step Functions, VPC, IAM, and ECR, which makes it a good alternative.

GitHub Actions is a container-based build server with great source control integration. It can be serverless (GitHub-hosted runners), but that comes with some configuration limitations. It can also run in your AWS environment (self-hosted runners), which lets you run your builds in your VPC and use IAM roles instead of IAM access keys. However, self-hosted runners are not serverless, so you need to think about capacity provisioning, patching, and so on.

Attini has no intention of providing a build server because we believe there are already very good alternatives on the market, like AWS CodeBuild or GitHub Actions. We also recommend that our customers use a build server together with the Attini CLI to build Attini distributions, as sketched below.
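For illustration, packaging a distribution from CodeBuild could look something like this minimal buildspec sketch. It assumes the Attini CLI is already installed on the build image; check the Attini CLI reference for the exact command syntax:

    # buildspec.yml - minimal sketch of packaging an Attini distribution in CodeBuild
    version: 0.2

    phases:
      build:
        commands:
          # Package the distribution described by the attini-config file in the repo root
          - attini distribution package .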


Code-first pipeline

Vendor              | Support
AWS Developer Tools | No
GitHub Actions      | Yes
Attini Framework    | Yes

Code-first pipeline means that the deployment pipeline is created from code that is included in your deployment source. This ensures that the deployment configuration is in sync with the artifacts that will be deployed.

AWS Developer Tools can be defined with code using, for example, CloudFormation or the CDK. However, the cloud engineer has to manage that CloudFormation stack separately to keep it in sync with the artifacts being deployed.

GitHub Actions has a nicely integrated build and deployment process, defined as code, that is automatically triggered when anything changes in a Git repository.

Attini deployment plans are quite similar to AWS Developer Tools in the sense that they can be defined as CloudFormation, which is very powerful. In addition, the Attini Framework uses the Init deploy to automatically update the deployment plan (pipeline) before a deployment is started. This ensures that the deployment plan is always in sync and up to date with the artifact that is being deployed.
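To give a feel for the code-first approach, a minimal deployment plan template could look something like the sketch below. The step name and template path are made up, and the exact schema should be checked against the Attini documentation:

    AWSTemplateFormatVersion: "2010-09-09"
    Transform: AttiniDeploymentPlan    # Attini's CloudFormation macro
    Resources:
      DeploymentPlan:
        Type: Attini::Deploy::DeploymentPlan
        Properties:
          DeploymentPlan:
            - Name: DeployNetwork            # illustrative step name
              Type: AttiniCfn
              Properties:
                Template: /network.yaml      # template shipped inside the distribution
                StackName: my-network

Because this template ships inside the distribution itself, the Init deploy can update the pipeline before each deployment starts.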


Serverless

Vendor              | Support
AWS Developer Tools | Yes
GitHub Actions      | Yes
Attini Framework    | Yes

With serverless we mean the ability to do deployments without the need for virtual machines.

AWS Developer Tools fully support serverless deployments.

GitHub Actions can be serverless (GitHub-hosted runners), or it can run in your AWS cloud environment (self-hosted runners). GitHub-hosted runners come with configuration limitations, and self-hosted runners are not serverless.

Attini is built on top of AWS serverless services, and we have carefully selected the right underlying service for each job. This is how Attini can be extremely fast, very flexible, and highly available, with an on-demand pricing model.


Central artifact management

Vendor              | Support
AWS Developer Tools | Yes
GitHub Actions      | Yes
Attini Framework    | No

With “central artifact management” we mean a central location for build artifacts (Docker containers, software packages, etc.). These artifacts have to be made available to all your IT environments, or to your build server, so that they can be used for builds or deployments.

AWS has good services for central artifact management: ECR for containers, CodeArtifact for software packages, and S3 for arbitrary objects.

GitHub has GitHub Packages for containers and software packages.

Attini does not have native support for central artifact management. Attini does integrate with S3, however, so this can be accomplished by setting up a central S3 bucket with a properly configured bucket policy; see the Attini documentation for an example of how this can be done. This is something users need to configure themselves. If you don’t want to use S3 for this, any other artifact management tool or object storage will work, but you will need to manage the integration yourself.
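As a sketch of the S3 approach, the central bucket could grant read access to your other environments with a CloudFormation bucket policy along these lines. The bucket name and account IDs are placeholders:

    Resources:
      ArtifactBucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: my-central-artifacts          # placeholder name
      ArtifactBucketPolicy:
        Type: AWS::S3::BucketPolicy
        Properties:
          Bucket: !Ref ArtifactBucket
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Sid: AllowReadFromOtherEnvironments
                Effect: Allow
                Principal:
                  AWS:                              # placeholder account IDs
                    - arn:aws:iam::111111111111:root
                    - arn:aws:iam::222222222222:root
                Action: s3:GetObject
                Resource: !Sub "${ArtifactBucket.Arn}/*"

Depending on your tooling, you may also need to allow s3:ListBucket on the bucket itself.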


Distributed artifact management

Vendor              | Support
AWS Developer Tools | No
GitHub Actions      | No
Attini Framework    | Yes

With “distributed artifact management” we mean that every IT environment keeps a versioned, redundant copy of all artifacts that were used for a deployment. The IT environments then don’t have a tight dependency on a central artifact store, which allows us to decouple deployments and do contained rollbacks.

With AWS CodePipeline you can use a zip file in S3 as the source for your deployments, so if you have one S3 bucket per environment you can achieve this setup. However, the customer has to design the artifact management themselves, which is a significant engineering effort.

GitHub Actions does not maintain a redundant copy of your artifacts inside your IT environments.

Attini is designed to first upload all artifacts into the IT environment, saving them in a predictable location with lifecycle policies attached. Those artifacts then serve as the source for the deployment.


Support local deployment

Vendor              | Support
AWS Developer Tools | Yes
GitHub Actions      | No
Attini Framework    | Yes

When your deployment needs a lot of scripts and configuration, you often end up with a tight dependency on your build server. This often results in a “development through Git” workflow, which is slow and resource-intensive. During development you therefore often want to “bypass” Git and the build server, but this can be hard to configure and maintain. What you want is a tool that helps the cloud engineer “mock” the build server on a local computer in a standardized way.

AWS CodePipeline (which is the AWS-recommended way to orchestrate deployments) does not support this pattern; it requires you to commit your code to source control or manually upload a zip file to S3 to trigger a deployment. However, CodeBuild supports local builds, which can sometimes be very useful.

With Attini, you can add all your build scripts and packaging logic to the package section of the attini-config file and then build/package your distributions using the Attini CLI. Your build server can then use the same Attini CLI command that you use when developing locally. If you need some environment configuration in order to mock your build server, the Attini CLI has a flag called --environment-config-script, which executes a bash/zsh script before the packaging phase, making it easy to configure anything you need, for example environment variables. The Attini package command can also run your scripts inside a container using the --container-build flag, which allows you to use the same Docker image that your build server uses.
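As an illustration, the packaging setup could look something like the sketch below. The exact keys should be checked against the attini-config reference, and the script names are made up:

    # attini-config.yaml - sketch of a distribution with packaging commands
    distributionName: my-distribution
    package:
      prePackage:
        commands:
          # The same build logic the build server would run
          - ./build.sh
          - cp -r dist/ app/

You can then run the packaging locally with the same command your build server uses, optionally adding --environment-config-script ./mock-env.sh or --container-build to mimic the build server’s environment.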


Native multi-environment support

Vendor              | Support
AWS Developer Tools | No
GitHub Actions      | Yes
Attini Framework    | Yes

Maintaining your infrastructure in multiple environments is often a problem for organizations, and creating a workflow that configures these environments can be a significant engineering challenge. If this is not managed properly, discrepancies between environments can cause issues.

AWS Developer Tools don’t have the concept of environments, so you will have to design such a workflow yourself.

GitHub Actions has support for deploying to multiple environments in a structured way, as shown in the sketch below.
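For example, a workflow job can target a named environment, to which you can attach protection rules and environment-specific secrets. The deployment command is a placeholder:

    # .github/workflows/deploy.yml
    name: deploy
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        environment: production    # named environment with its own rules and secrets
        steps:
          - uses: actions/checkout@v4
          - run: ./deploy.sh       # placeholder deployment command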

Attini is designed around the multi-environment concept. Every time you do a deployment, you build an immutable artifact (an Attini distribution) and deploy it into an environment, for example development. When you are happy with the result of the deployment, you simply switch role into your production environment and deploy the same Attini distribution with the --environment flag set to (for example) production instead.
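A sketch of that promotion flow, written here as buildspec-style commands (in practice the two deployments run at different times and under different roles, and the exact CLI syntax should be checked against the Attini CLI reference):

    version: 0.2
    phases:
      build:
        commands:
          # Deploy the packaged distribution to development first...
          - attini deploy run --environment development my-distribution.zip
          # ...then promote the exact same artifact to production
          - attini deploy run --environment production my-distribution.zip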


Manage CloudFormation stack dependencies

Vendor              | Support
AWS Developer Tools | Yes
GitHub Actions      | No
Attini Framework    | Yes

When AWS CloudFormation is used to provision cloud resources, it’s often preferable to split the resources into multiple CloudFormation stacks. That means that we need an easy and flexible way to manage the dependencies between these stacks.

AWS CloudFormation has two native ways of doing this, and both come with management limitations. One is nested stacks, which promote a tree structure of stacks managed by a “root stack”. This pattern works well for simpler setups, but it does not scale well. CloudFormation also has Exports, which are an improvement over nested stacks in many respects, but Exports also come with limitations, like hard-coupled dependencies and no cross-region or cross-account support.

There is a workaround for Exports’ hard coupling using SSM Parameter Store: you can save the “output” from one stack in AWS SSM Parameter Store using the AWS::SSM::Parameter resource, and then read the value in other stacks with a parameter of type AWS::SSM::Parameter::Value<String>. This workaround is a little verbose, it creates invisible dependencies, and it still only works within one region and one account.
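Concretely, the workaround looks like this; the parameter path and resource names are placeholders:

    # Stack A - publish an output value to SSM Parameter Store
    Resources:
      VpcIdParameter:
        Type: AWS::SSM::Parameter
        Properties:
          Name: /network/vpc-id          # placeholder parameter path
          Type: String
          Value: !Ref Vpc                # a resource defined elsewhere in stack A

    # Stack B - read the value back through a dynamic parameter type
    Parameters:
      VpcId:
        Type: AWS::SSM::Parameter::Value<String>
        Default: /network/vpc-id         # the dependency on stack A is invisible here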

You can also use AWS CodePipeline to deploy multiple CloudFormation stacks and carry configuration between stacks using parameter overrides. However, this does not work cross-region or cross-account, and CodePipeline uses a polling pattern for executing steps, which can lead to slow deployments.

When deploying stacks using an Attini deployment plan, the AttiniCfn step adds the stack’s outputs to the deployment plan payload, which carries information to the later steps in a loosely coupled manner. This also works cross-account and cross-region. Deployment plans can also be designed in a tree structure, which visualizes the dependencies between stacks and can be used as an up-to-date map of your IT environment.
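As a sketch of two chained steps, the second stack could pick up the first stack’s output from the payload. The JSONPath-style reference below is illustrative; check the Attini documentation for the exact syntax:

    DeploymentPlan:
      - Name: Network
        Type: AttiniCfn
        Properties:
          Template: /network.yaml
          StackName: network
      - Name: Application
        Type: AttiniCfn
        Properties:
          Template: /app.yaml
          StackName: application
          Parameters:
            # Illustrative payload reference to the Network stack's output
            VpcId.$: $.output.Network.VpcId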


CloudFormation configuration automation

Vendor              | Support
AWS Developer Tools | Yes
GitHub Actions      | No
Attini Framework    | Yes

Most configuration tools have smart features to help with configuration management; for example, inheritance structures are common.

AWS CodePipeline can use a JSON file for configuration and parameter overrides. This works fine for simple situations, but these files are hard to manage at scale because they lack features that enable reusability. And because CodePipeline only supports JSON files, comments are not supported.

The AttiniCfn resource has good support for referencing any data in the *payload. The ConfigFile has the “extends” feature, which enables an inheritance structure. The attini-config file has support for default and environment-specific configuration for the Init deploy, which can easily be forwarded to CloudFormation stacks via the Attini deployment plan. Attini also has a fallback feature that helps with more complex workflows, and Attini can read configuration from SSM Parameter Store with a default value, which is not supported natively by CloudFormation (natively you can only configure a default key, not the actual value).

*The payload includes all deployment metadata and all CloudFormation stack outputs (even from other regions and accounts). You can also add your own data to the customData section, for example using a Lambda function.
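As a sketch of the inheritance idea, an environment-specific config file could extend a shared base file. The keys below are illustrative; see the Attini configuration reference for the exact schema:

    # base-config.yaml - shared defaults
    parameters:
      InstanceType: t3.small

    # production-config.yaml - extends the base and overrides what differs
    extends: base-config.yaml
    parameters:
      InstanceType: m5.large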


Deployment payload

Vendor              | Support
AWS Developer Tools | No
GitHub Actions      | Yes
Attini Framework    | Yes

Managing state through a pipeline is critical for any flexible and dynamic deployment. We often need to carry output from one step and pass it as input to another.

AWS CodePipeline uses zip files on S3 to manage state between steps. AWS has added support for integrating with these zip files in some standard use cases, for example parameter overrides. This feature is still fairly limited, and many use cases require workarounds; for example, it does not work if your output comes from a CloudFormation stack in a different AWS Region.

GitHub Actions has contexts and outputs, which are a good way to communicate data through a deployment.

Attini stores all the deployment artifacts (files) in an S3 bucket much like CodePipeline does, but Attini works with plain (not zipped) files with predictable names, so you can easily integrate the files with other services or technologies. In addition, Attini uses the AWS Step Functions payload to carry deployment metadata and step outputs. This payload can then be used as input for any step, for example an AWS Lambda function. The deployment plan can also make choices and only run certain steps if a condition is true, for example running a load test if the environment is development.
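The load-test example could look something like the sketch below, using a Step Functions-style choice rule. The step names are made up, and the exact choice syntax should be checked against the Attini documentation:

    DeploymentPlan:
      - Name: LoadTestChoice
        Type: Choice
        Properties:
          Choices:
            - Variable: $.environment     # deployment metadata from the payload
              StringEquals: development
              Next: LoadTest
          Default: DeployMonitoring
      - Name: LoadTest
        Type: AttiniRunner                # illustrative runner step
        Properties:
          Commands:
            - ./load-test.sh
      - Name: DeployMonitoring
        Type: AttiniCfn
        Properties:
          Template: /monitoring.yaml
          StackName: monitoring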


Advanced deployment process

Vendor              | Support
AWS Developer Tools | No
GitHub Actions      | Yes
Attini Framework    | Yes

When we are building code artifacts or deploying a single application, the deployment process is usually fairly straightforward. But when we deploy our whole IT environment using Infrastructure as Code, the deployment process can quickly become complex. If your deployment tool doesn’t natively support the deployment process you need, you might have to build complex workarounds.

AWS CodePipeline has a linear deployment process with support for parallel actions and manual approval steps. There is no support for branches (tree structures), choices, or retries.

GitHub Actions has a linear deployment process with support for conditional steps and manual approval features. Within a job, there is no support for parallel actions or branches.

Attini has support for parallel steps, branches, choices, and retries, as sketched below. It’s also easy to integrate custom Lambda functions, which can perform any custom logic you might need. There is no support for manual approvals in the Attini Framework yet.
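A parallel section in a deployment plan could look something like this sketch; the branch layout is illustrative, and the exact schema should be checked against the Attini documentation:

    DeploymentPlan:
      - Name: DeployBackends
        Type: Parallel
        Branches:
          - - Name: DeployDatabase
              Type: AttiniCfn
              Properties:
                Template: /database.yaml
                StackName: database
          - - Name: DeployQueues
              Type: AttiniCfn
              Properties:
                Template: /queues.yaml
                StackName: queues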


Native rollbacks to a previous state

Vendor              | Support
AWS Developer Tools | No
GitHub Actions      | No
Attini Framework    | Yes

Doing a rollback can be critical after a failed deployment. Normally this is done either by reverting your code to a previous state or by using an old version of an artifact. Reverting your code is problematic because there is no guarantee that the result will be exactly the same as the previous version. Versioned artifacts are better but still flawed, because unless the versioned artifact includes all of its dependencies, there is still no guarantee that the result will be predictable. This is one of the reasons containers are popular: if your application rolls back to an old image, you can be sure that the service will run in a predictable way.

AWS CodePipeline and GitHub Actions don’t have a native, stateful way to save your deployment artifacts in a per-environment manner. This means that the cloud engineer has to build logic to store the artifacts somewhere, and then build a custom process to use them in a rollback scenario.

Attini always builds an immutable package that is the only source for the deployment, and Attini keeps isolated copies of old deployments inside the IT environments. This means that you can always re-deploy an old version without rebuilding it. There is also an advantage in the fact that the environments have their own copies: you never have to worry about artifacts being deleted from a central environment.


CLI follow

Vendor              | Support
AWS Developer Tools | No
GitHub Actions      | Yes
Attini Framework    | Yes

When you are deploying changes to an IT environment, you have two options. You can deploy things straight from your computer using a CLI or a deployment script; this method works, but it has some limitations, because the deployment depends on the configuration of the cloud engineer’s computer and their IAM access. For this reason, we often deploy from a build server or via another form of central deployment service. But if the deployment is not performed from your computer, it can sometimes be hard to follow its progress. This often results in a scattered workflow that forces the cloud engineer to jump around between different GUIs, which makes troubleshooting and development slow.

The AWS CLI has support for describing CodePipeline executions, but it does not have a “follow” or “watch” feature.

The GitHub CLI has the gh run watch command, which makes it easy to follow a deployment from a terminal.

The Attini CLI follows the deployment by default. It also exits with proper exit codes (0 for success and higher numbers for failure), making it easy to hook Attini deployments into your existing build server or scripts.
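Because of this, wiring an Attini deployment into an existing pipeline is a one-liner. For example, in a CodeBuild buildspec (a sketch, assuming the Attini CLI is installed on the build image), a failed deployment fails the build automatically:

    version: 0.2
    phases:
      build:
        commands:
          # Follows the deployment in the log output and propagates the exit code
          - attini deploy run .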