Product Comparison

Here we compare some features of Attini, AWS Developer Tools, and GitHub Actions to better explain what Attini is, and what it is not.


Source control

Vendor               Support
AWS Developer Tools  Yes
GitHub Actions       Yes
Attini Framework     No

Most organizations use some form of source control today; Git is a very popular option.

AWS has a service called AWS CodeCommit, which is a managed Git service for private repositories.

GitHub is a SaaS offering and one of the most popular Git services on the market for both private and public repositories; it is also the largest host of open source code repositories.

Attini has no intention of providing a service for source control, because we believe there are already very good alternatives on the market, like AWS CodeCommit or GitHub. Instead, Attini should be used with a source control service of your choice.


Build server

Vendor               Support
AWS Developer Tools  Yes
GitHub Actions       Yes
Attini Framework     No

A build server is used to build, compile, or package your software artifacts, for example Docker containers. The build server usually integrates with your source control or your deployment pipeline, and it often runs automated tests and pushes artifacts to central repositories.

AWS has a service called AWS CodeBuild, a container-based, serverless build server that integrates very well with other AWS services, for example CodePipeline, Step Functions, VPC, IAM, and ECR, which makes it a good alternative.

GitHub Actions is a container-based build server with great source control integration. GitHub Actions can be serverless (GitHub-hosted runners), but that comes with some configuration limitations. It can also run in your AWS environment (self-hosted runners), which lets you run builds in your own VPC and use IAM roles instead of IAM access keys. However, self-hosted runners are not serverless, so you need to think about capacity provisioning, patching, etc.

Attini has no intention of providing a build server, because we believe there are already very good alternatives on the market, like AWS CodeBuild or GitHub Actions. We also recommend that our customers use a build server together with the Attini CLI to build Attini distributions.


Code-first pipeline

Vendor               Support
AWS Developer Tools  No
GitHub Actions       Yes
Attini Framework     Yes

Code-first pipeline means that the deployment pipeline is created from code that is included in your deployment source. This ensures that the deployment configuration is in sync with the artifacts that will be deployed.

AWS Developer Tools can be defined with code using, for example, CloudFormation or the CDK. However, the Cloud Engineer has to manage that CloudFormation stack separately to keep it in sync with the artifacts that are being deployed.

GitHub Actions has a nicely integrated build and deployment process, defined as code, that is automatically triggered when anything changes in a Git repository.

Attini deployment plans are quite similar to AWS Developer Tools in the sense that they can be defined as CloudFormation, which is very powerful. In addition, the Attini Framework uses the Init deploy to automatically update the deployment plan (pipeline) before a deployment starts. This ensures that the deployment plan is always in sync and up to date with the artifact that is being deployed.


Serverless

Vendor               Support
AWS Developer Tools  Yes
GitHub Actions       Yes
Attini Framework     Yes

By serverless we mean the ability to do deployments without the need for virtual machines.

AWS Developer Tools fully support serverless deployments.

GitHub Actions can be serverless (GitHub-hosted runners) or run in your AWS cloud environment (self-hosted runners). GitHub-hosted runners come with configuration limitations, and self-hosted runners are not serverless.

Attini is built on top of AWS serverless services. We have carefully selected the right underlying service for each job. This is how Attini can be extremely fast, very flexible, and highly available, with an on-demand pricing model.


Central artifact management

Vendor               Support
AWS Developer Tools  Yes
GitHub Actions       Yes
Attini Framework     No

With “central artifact management” we mean a central location for build artifacts (Docker containers, software packages, etc.). These artifacts have to be made available to all your IT environments and to your build server so that they can be used for builds or deployments.

AWS has good services for central artifact management: Amazon ECR for containers, AWS CodeArtifact for software packages, and S3 for arbitrary objects.

GitHub has GitHub Packages for containers and software packages.

Attini does not have native support for central artifact management. However, Attini integrates with S3, so this can be accomplished by setting up a central S3 bucket with a properly configured S3 bucket policy; see our example of how this can be done here. This is something users will need to configure themselves. If you don’t want to use S3 for this, any other artifact management tool or object storage will work, but you will need to manage the integration yourself.
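One possible shape for such a central bucket, sketched as CloudFormation. The account IDs are placeholders, and a real policy would likely need tighter conditions:

```yaml
# Sketch: a central artifact bucket whose bucket policy lets other AWS
# accounts (your IT environments or build accounts) read objects.
# Account IDs are placeholders; tighten the policy for real use.
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
  ArtifactBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ArtifactBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowEnvironmentAccountsRead
            Effect: Allow
            Principal:
              AWS:
                - arn:aws:iam::111111111111:root  # e.g. development account
                - arn:aws:iam::222222222222:root  # e.g. production account
            Action: s3:GetObject
            Resource: !Sub "${ArtifactBucket.Arn}/*"
```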


Distributed artifact management

Vendor               Support
AWS Developer Tools  No
GitHub Actions       No
Attini Framework     Yes

With “distributed artifact management” we mean that every IT environment keeps a versioned, redundant copy of all artifacts that were used for a deployment. This means that the IT environment doesn’t have a tight dependency on a central artifact store, which allows us to decouple deployments and do contained rollbacks.

With AWS CodePipeline you can use a zip file in S3 as the source for your deployments. If you have one S3 bucket per environment, you can achieve this setup. However, customers have to design the artifact management themselves, which is a significant engineering effort.

GitHub Actions does not maintain a redundant copy of your artifacts inside your IT environments.

Attini is designed to first upload all artifacts into the IT environment, saving them in a predictable location with lifecycle policies attached. Those artifacts then serve as the source for the deployment.


Support local deployment

Vendor               Support
AWS Developer Tools  Yes
GitHub Actions       No
Attini Framework     Yes

When your deployment depends on a lot of scripts, automation, configuration, and other dependencies, you often end up with a tight dependency on your build server. This results in a “development through git” situation. This workflow is often slow and resource-intensive, so there is often a need to bypass Git and the build server during development. This means the Cloud Engineer should be able to “mock” the build server on their local computer.

AWS CodePipeline (which is the AWS-recommended way to orchestrate deployments) does not support this pattern; it requires you to commit your code to source control or manually upload a zip file to S3 to trigger a deployment. However, CodeBuild supports local builds, which can sometimes be very useful!

With Attini, you can add all your build scripts and packaging logic to the package section of the attini-config file, and then build/package your distributions using the Attini CLI. Your build server can then use the same Attini CLI command that you use when developing locally. If you need some environment configuration to be able to mock your build server, the Attini CLI has a flag called --environment-config-script. This executes a bash/zsh script before the packaging phase, making it easy to configure anything you need, for example environment variables.
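As an illustration, an attini-config file along these lines could carry the build logic. The exact schema should be verified against the Attini documentation, and the commands here are placeholders:

```yaml
# Hypothetical attini-config sketch; keys follow the package section
# described above, but verify them against the Attini docs.
distributionName: my-distribution
package:
  prePackage:
    commands:
      - ./build.sh            # your existing build/packaging logic
      - cp -r build/out .     # stage artifacts into the distribution
```

A developer laptop and the build server can then run the same Attini CLI packaging command, optionally pointing --environment-config-script at a script that sets environment variables first.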


Native multi-environment support

Vendor               Support
AWS Developer Tools  No
GitHub Actions       Yes
Attini Framework     Yes

Maintaining your infrastructure in multiple environments is often a problem for organizations, and creating a workflow that configures these environments can be a significant engineering effort. If this is not managed properly, discrepancies between environments (for example, development and production) can cause problems.

AWS Developer Tools don’t have the concept of environments, so users have to design a workflow themselves.

GitHub Actions has support for deploying to multiple environments in a structured way.

Attini is designed around the multi-environment concept: every time you do a deployment, you build an immutable artifact (an Attini distribution) and deploy it into an environment, for example development. When you are happy with the result of the deployment, you simply switch roles into your production environment and deploy the same Attini distribution with the --environment flag set to (for example) production instead.
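The flow might look like this with the Attini CLI. The --environment flag is described above; the command names and distribution file name are assumptions, so verify them against the Attini CLI documentation:

```shell
# Build the immutable distribution once (command names are assumptions).
attini distribution package .

# Deploy to development, verify the result, then assume a role in the
# production account and deploy the exact same distribution there.
attini deploy run my-distribution.zip --environment development
attini deploy run my-distribution.zip --environment production
```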


Manage CloudFormation stack dependencies

Vendor               Support
AWS Developer Tools  Yes
GitHub Actions       No
Attini Framework     Yes

When AWS CloudFormation is used to provision cloud resources, it’s often preferable to split the resources into multiple CloudFormation stacks. That means we need an easy and flexible way to manage the dependencies between these stacks.

AWS CloudFormation has two native ways of doing this, and both come with management limitations. One is nested stacks, which promote a tree structure of stacks managed by a “root stack”. This pattern works well for simpler setups, but it does not scale well. CloudFormation also has exports, which are an improvement over nested stacks in many respects. But exports also come with limitations, like hard-coupled dependencies and no cross-region or cross-account support.

There is a workaround for the hard coupling of exports using SSM Parameter Store. You can save the output of one stack in AWS SSM Parameter Store using the AWS::SSM::Parameter resource. It is then possible to read the value in another stack through a parameter of type AWS::SSM::Parameter::Value&lt;String&gt;. This workaround is a little verbose and creates invisible dependencies, and it still only works within one region and one account.
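The workaround can be sketched as two template fragments; the parameter name and resource names are examples:

```yaml
# Stack A: publish an output value (here a hypothetical VPC id) as an
# SSM parameter.
Resources:
  VpcIdParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /network/vpc-id        # example parameter name
      Type: String
      Value: !Ref Vpc              # assumes a Vpc resource in this stack

# Stack B: read the value back through an SSM-backed parameter type.
Parameters:
  VpcId:
    Type: AWS::SSM::Parameter::Value<String>
    Default: /network/vpc-id       # the parameter key, not the value
```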

You can also use AWS CodePipeline to deploy multiple CloudFormation stacks and carry configuration between stacks using parameter overrides. However, this does not work cross-region or cross-account. CodePipeline also uses a polling pattern for executing steps, which can lead to slow deployments.

When deploying stacks with an Attini deployment plan, the AttiniCfn step adds the stack’s outputs to the deployment plan payload, which carries information to later steps in a loosely coupled manner. This also works cross-account and cross-region. Deployment plans can also be designed in a tree structure, which visualizes the dependencies between stacks and can serve as an up-to-date map of your IT environment.
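For illustration, a minimal deployment plan might look like this. The resource type and AttiniCfn properties follow Attini's documented names, but treat the details as a sketch to check against the current docs:

```yaml
# Illustrative deployment plan defined in CloudFormation; verify the
# resource type and step properties against the Attini documentation.
Resources:
  DeploymentPlan:
    Type: Attini::Deploy::DeploymentPlan
    Properties:
      DeploymentPlan:
        - Name: DeployNetwork
          Type: AttiniCfn
          Properties:
            Template: /network.yaml    # template inside the distribution
            StackName: network
        - Name: DeployApp              # runs after DeployNetwork, with the
          Type: AttiniCfn              # network stack outputs in its payload
          Properties:
            Template: /app.yaml
            StackName: app
```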


CloudFormation configuration automation

Vendor               Support
AWS Developer Tools  Yes
GitHub Actions       No
Attini Framework     Yes

Most configuration tools have smart features to help with configuration management, for example inheritance structures are common.

AWS CodePipeline can use a JSON file for configuration and parameter overrides. This works fine for simple situations, but these files are hard to manage at scale because they lack any features that enable reusability. And because CodePipeline only supports JSON files, comments are not supported.

The AttiniCfn resource has good support for referencing any data in the payload*. The ConfigFile has an “extends” feature, which enables an inheritance structure. The attini-config file supports default and environment-specific configuration for the Init deploy, which can easily be forwarded to CloudFormation stacks via the Attini deployment plan. Attini also has a fallback feature that helps with more complex workflows. Attini can also read configuration from the SSM Parameter Store with a default value, which CloudFormation does not support natively (you can only configure a default key, not the actual value).
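A sketch of the extends idea; the file names and keys here are hypothetical, not the exact Attini ConfigFile schema:

```yaml
# Illustrative only: keys and file names are placeholders.

# base-config.yaml -- shared defaults
parameters:
  LogLevel: info
  InstanceType: t3.small

# production-config.yaml -- inherits the base and overrides one value
extends: base-config.yaml
parameters:
  InstanceType: m5.large
```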

*The payload includes all deployment metadata and all CloudFormation stack outputs (even from other regions and accounts). You can also add your own data to the customData section, for example using a Lambda function.


Deployment payload

Vendor               Support
AWS Developer Tools  No
GitHub Actions       Yes
Attini Framework     Yes

How state is maintained through a pipeline is critical for flexible and dynamic deployments. We often need to take the output from one step and pass it as input to another step.

AWS CodePipeline uses zip files on S3 to manage state between steps. Support has been added for integrating with these zip files in some standard use cases, for example parameter overrides. This feature is still fairly limited, and many use cases require workarounds; for example, it does not work if your output comes from a CloudFormation stack in a different AWS Region.

GitHub Actions has contexts and outputs, which are a good way to communicate data through the deployment.

Attini stores all deployment artifacts (files) in an S3 bucket, much like CodePipeline does, but Attini works with plain (not zipped) files with predictable names so that you can easily integrate the files with other services or technologies. In addition, Attini uses the AWS Step Functions payload to carry deployment metadata and step outputs. This payload can be used as input for any step, for example a Lambda function. The deployment plan can also make choices to only run certain steps if some condition is true, for example running a load test only if the environment is development.


Advanced deployment process

Vendor               Support
AWS Developer Tools  No
GitHub Actions       Yes
Attini Framework     Yes

When we are building code artifacts or deploying a single application, the deployment process is usually fairly straightforward. But when we deploy a whole IT environment using Infrastructure as Code, the deployment process can quickly become complex. If your deployment tool doesn’t natively support the deployment process you need, you might have to build complex workarounds.

AWS CodePipeline has a linear deployment process with support for parallel actions and manual approval steps. There is no support for branches (tree structures), choices, or retries.

GitHub Actions has a linear deployment process with support for conditional steps and manual approval features. There is no support for parallel actions or branches.

Attini supports parallel actions, branches, choices, and retries. It is also easy for users to tie in their own custom Lambda functions, which can perform any custom logic you might need. There is no support for manual approvals in the Attini Framework yet.


Native rollbacks to a previous state

Vendor               Support
AWS Developer Tools  No
GitHub Actions       No
Attini Framework     Yes

Doing a rollback can be critical after a failed deployment. Normally this is done either by reverting your code to a previous state or by using an old version of an artifact. Reverting your code is problematic because there is no guarantee that the result will be exactly the same as the previous version. Versioned artifacts are better, but still flawed: unless the versioned artifact includes all its dependencies, there is still no guarantee that the result will be predictable. This is one of the reasons containers are popular; if your application rolls back to an old image, you can be sure that the service will run in a predictable way.

AWS CodePipeline and GitHub Actions don’t have a native, stateful way to save your deployment artifacts in a “per environment” manner. This means the Cloud Engineer has to build logic to store the artifacts somewhere, and then build a custom process to use them in a rollback scenario.

Attini always builds an immutable package that is the only source for the deployment, and keeps isolated copies of old deployments inside the IT environments. This means that you can always re-deploy an old version without rebuilding it. There is also an advantage in each environment having its own copy: you never have to worry about artifacts being deleted in a central location.


CLI follow

Vendor               Support
AWS Developer Tools  No
GitHub Actions       Yes
Attini Framework     Yes

When you are deploying changes to an IT environment you have two options. You can deploy straight from your computer using a CLI or a script; this works, but it has some limitations, since the deployment then depends on the configuration of the Cloud Engineer’s computer and their IAM access. For this reason, we often deploy from a build server or via another form of central deployment service. If the deployment is not performed from your computer, it can sometimes be hard to follow its progress. This often results in a scattered workflow that forces the Cloud Engineer to jump between different GUIs, which makes troubleshooting and development slow.

The AWS CLI supports describing CodePipeline executions, but it does not have a “follow” or “watch” feature.

The GitHub CLI has the gh run watch command, which makes it easy to follow a deployment from a terminal.

The Attini CLI follows the deployment by default. It also exits with proper exit codes (0 for success and higher numbers for failure), which makes it easy to hook Attini deployments into your current build server or scripts.
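Because the CLI uses conventional exit codes, it composes with ordinary shell error handling. A minimal sketch of the pattern, with a stub standing in for the real command:

```shell
# Exit-code gating pattern. `run_deploy` is a stand-in for an
# `attini deploy run <distribution>` invocation, which exits 0 on
# success and non-zero on failure; here we simulate a failure.
run_deploy() {
  return 1
}

if run_deploy; then
  echo "deployment succeeded"
else
  status=$?
  echo "deployment failed with exit code $status"
fi
```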