Reference Architecture

Here you will find information on how Attini works and how different components of the Attini framework are intended to be used.


Artifact Life Cycle

Over time, your distributions will pile up and start driving cost.

Attini, therefore, supports life cycle policies for your deployed distributions. The life cycle policies are configured using two parameters in attini-setup:

  1. RetainDistributionDays
  2. RetainDistributionVersions

This means that you will always be able to roll back X days and Y versions.
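
For illustration, a hedged sketch of how these parameters could be set with the AWS CLI, assuming your installation is a CloudFormation stack named attini-setup (the other stack parameters are omitted for brevity; in practice they would need UsePreviousValue=true):

# Update the retention configuration on the attini-setup stack
aws cloudformation update-stack \
    --stack-name attini-setup \
    --use-previous-template \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameters ParameterKey=RetainDistributionDays,ParameterValue=10 \
                 ParameterKey=RetainDistributionVersions,ParameterValue=5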

When we clean up old deployments, everything with the key prefix /${environment}/${distribution-name}/${distribution-id}/ in the Attini artifact store will be deleted.

The clean-up is done by the Init deploy, so if no new deployments are made, the life cycle management will never be triggered.

Example

If you configure RetainDistributionDays to 10 and RetainDistributionVersions to 5, all files associated with a distribution that is both older than 10 days AND has more than 5 newer deployments will be permanently deleted.

If you deploy 20 times within 1 day, you will have 20 versions of your distribution in your environment, because all deployments are newer than 10 days. But if you wait 10 days and then do 1 more deployment, the 15 oldest versions will be deleted.

If you deploy once a month, the distributions will be saved for 5 months.

Attini Dependencies

An Attini distribution can be dependent on other Attini distributions, for example, your database distribution can be dependent on your network distribution. Attini dependencies are configured in the attini-config dependencies section.

The Attini Framework will then verify that your dependencies are successfully deployed to your environment.

Example
distributionName: database
dependencies:
  - distributionName: network
    version: ">1.0.0"

distributionName
Name of another distribution that your distribution is dependent on.

version
The version of the distribution your current distribution is dependent on. The version follows the NPM versioning rules for deciding if a version is valid. If the deployed version does not match the specified version of the dependency, then the deployment will fail. The version field is optional.
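
For example, assuming plain NPM semver range syntax carries over unchanged (the distribution names below are made up):

dependencies:
  - distributionName: network
    version: ">1.0.0"   # any version greater than 1.0.0
  - distributionName: security
    version: "^2.1.0"   # any 2.x version that is at least 2.1.0
  - distributionName: config
    version: "1.2.x"    # any patch release of 1.2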

Access payloads from dependencies

When an Attini deployment plan finishes successfully, the Attini Framework will save the output payload from the final step(s) to the Attini artifact store with the S3 key outputs/{environment}/{distributionName}/{distributionId}/output.json. The name of the Attini artifact store S3 bucket can also be found in the payload under the path deploymentSource.deploymentSourceBucket.

If a deployment plan has multiple end states, the payload will be saved as a list. For this reason, it’s a good idea to end a deployment plan with an Attini merge output state to make the output data easier to search through.

When you have a dependency configured, the output.json S3 key will be included in the payload so that the file can be accessed using an AWS Lambda function (see AttiniLambdaInvoke) or the AttiniRunnerJob.
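
For illustration, a minimal sketch of fetching a dependency's output from within a job, assuming the deployment plan payload has been written to payload.json and that the output key has been read from it (the key below is a made-up placeholder):

# Resolve the artifact store bucket from the payload and download the dependency output
BUCKET=$(jq -r '.deploymentSource.deploymentSourceBucket' payload.json)
KEY=outputs/prod/network/some-distribution-id/output.json   # in practice, read this key from the payload
aws s3 cp "s3://${BUCKET}/${KEY}" ./network-output.json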

Attini Environments

All Attini deployments target an environment, which is a logical separation between cloud resources, meaning that the framework will keep files and other deployment data from different environments separate.

Warning

If you have multiple environments in one AWS account, there’s a high risk that environments affect each other in different ways, and you have to design your cloud environments to avoid this.

The biggest problems related to running multiple environments in one AWS account are:

  1. Difficulties designing least privilege IAM Roles/Policies.
  2. Environments share AWS Service limits.
  3. Conflicting names (this can be mitigated by including the environment name in any cloud resource name, including CloudFormation stacks).

For these reasons, we recommend keeping different environments in different AWS accounts.

Your deployments can automatically access the environment name via:

  1. The AttiniEnvironmentName CloudFormation Parameter.
  2. The Attini Runner environment variable ATTINI_ENVIRONMENT_NAME (see the example below).
  3. The deployment plan payload.
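
For example, a Runner job script could use the environment variable to select environment-specific configuration (a sketch; the config file layout is made up):

#!/bin/bash
# Pick a config file based on the current Attini environment.
CONFIG_FILE="config/${ATTINI_ENVIRONMENT_NAME}.yaml"
echo "Deploying to ${ATTINI_ENVIRONMENT_NAME} using ${CONFIG_FILE}"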

Create environments

Before you can deploy a distribution to an environment, Attini requires you to create it using the command:

attini environment create {environment-name} --env-type {production | test }

The attini environment create CLI command is idempotent, so it can be run multiple times within any scripts, and it will only create the environment if it does not already exist. The command can also be used to update the environment type.

Examples
attini environment create prod --env-type production
attini environment create dev --env-type test

--env-type flag

Allowed Values: production | test

Default: production

If you’re deploying to a production environment, the Attini CLI will prompt for confirmation when doing new deployments. If the environment type is test, new deployments will run directly.

Remove environments

If you’re deprecating an environment, or if the environment was created by mistake, it can easily be removed with the command:

attini environment remove {environment-name}

The “attini environment” is just a logical configuration that enables or disables new deployments, so removing one is always safe (the command will NOT remove any cloud resources belonging to the environment). If you remove an environment by mistake, you can easily re-create it using the attini environment create command.

Note

If you want to delete all resources belonging to an environment, you must do it yourself.

Attini Init Deploy


Start a deployment

A deployment starts by uploading a distribution to the Attini deployment origin. The Attini deployment origin is an S3 bucket that is created by the attini-setup onboarding process. It acts as an entrance to your environments and is the starting point for your deployments.

When the distribution is put in the Attini deployment origin bucket, it will trigger the Init deploy lambda.

Note

Anyone permitted to put objects in this s3 bucket can initiate new deployments, which is a very privileged action. Therefore, you must be careful with s3:PutObject permissions. We highly recommend you apply a bucket policy that only allows the appropriate personnel to put objects in the bucket.

Init deploy lambda

The Init deploy lambda will download the distribution from the deployment origin bucket and:

  1. Extract the files in the distribution.
  2. Read the attini-config file.
  3. Upload the distribution content to the attini-artifact-store.
  4. Update the reference parameter.
  5. Update the Init deploy stack.

When the Init deploy stack is finished updating/creating, the Attini Framework will find the deployment plan and trigger it.

Distribution content

All the content in your distribution will be extracted and put in the Attini artifact store.

The distribution zip file will also be copied as-is to the artifact store, so you can still work with the original zip file if needed.

You can integrate and work with these files however you see fit using the AWS CLI, AWS SDK, or Attini deployment plan integrations.

The namespace “…/distribution-origin/” is only there to distinguish the content from the origin distribution. A step in the deployment plan can fetch or create new files and save them under any namespace.

For example, you can have one step in the deployment plan that polls config files from an external source, and you can put it under /${environment}/${distribution-name}/${distribution-id}/external-config/ or if you use the AWS CDK you can put the synthesized templates under /${environment}/${distribution-name}/${distribution-id}/synthesized-templates/.

All files with the prefix /${environment}/${distribution-name}/${distribution-id}/ will be subject to the life cycle policy.

Find the distribution artifacts

You often end up with a use case that requires your applications to find the latest version of your distribution files.

The Init deploy will save the latest distribution id in AWS SSM Parameter Store under the path /attini/distributions/${environment}/${distribution-name}/latest, so any system in your environment can easily find the latest version of your files.

Example

I have a distribution called “config” that contains a file called “vpc-config.yaml” deployed to my prod environment.

To download the file from the artifact store, I can run the following commands:

ENVIRONMENT=prod
DISTRIBUTION_NAME=config
DISTRIBUTION_ID=$(aws ssm get-parameter --name /attini/distributions/${ENVIRONMENT}/${DISTRIBUTION_NAME}/latest --query Parameter.Value --output text)
aws s3 cp s3://attini-artifact-store-${region}-${accountId}/${ENVIRONMENT}/${DISTRIBUTION_NAME}/${DISTRIBUTION_ID}/distribution-origin/vpc-config.yaml .

Init deploy stack

When a distribution is deployed to an environment, the deployment needs a flexible way to initiate. The Attini Framework accomplishes this with the “Init deploy stack”. The Init deploy stack is a CloudFormation stack that is automatically deployed from a template in your distribution, and it can deploy any AWS resources you might need. This stack can also contain your Attini::Deploy::DeploymentPlan.

Find more information on how to configure the Init deploy stack using the attini-config.

Init deploy stack change detection

In order to optimize performance, Attini will only update the Init deploy stack if the stack has changed or if its input from attini-config has changed. This allows for faster deployments, but sometimes a forced update is needed. For example, if the Init deploy stack reads parameters from the SSM Parameter Store and the value of the parameter has changed since the last deployment. In this scenario, Attini is unable to detect the change and a forced update is required. This can be triggered by setting forceUpdate: true in the attini-config file. The same is true if the Init deploy stack has any custom transformations.
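
A hedged sketch of the relevant attini-config fragment; the exact placement of forceUpdate and the surrounding keys are assumptions that should be verified against the attini-config documentation:

distributionName: my-distribution
initDeployConfig:
  template: init-deploy.yaml
  forceUpdate: true   # always update the Init deploy stack, even if no change is detected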

To reduce the number of times the Init deploy stack needs to be updated, it is best to minimize the amount of configuration and logic placed directly in the deployment plan. Instead, put your logic and config in other files and scripts that the deployment plan uses.

Example
Type: AttiniRunnerJob
Properties:
  Runner: HelloWorldRunner
  Commands:
    - bash my-script.sh

This is also good practice in general because it will make the Init deploy template less bloated and make it easier to manage.

Attini Runner

The Attini Runner is a fast, flexible, and cost-efficient way to execute code from a container as a part of your deployment plan.

Find detailed implementation details and API information here.

Quick info

  • The Attini Runner is executed as an Amazon ECS task.
  • The same task can be reused between different steps and executions without restarting the container.
  • It can run within a private network so that it can integrate with private services and databases.

Architecture

When an AttiniRunnerJob step is started (see the AttiniRunnerJob deployment plan type), the following will occur:

  1. The Attini Framework will check if there is a warm runner with the correct configuration. If not, the Attini Framework will start one.
  2. The Attini Deployment Plan will add a message to the Runner’s SQS queue with the AttiniRunnerJob commands, the deployment plan payload, and some other required information.
  3. If it is the first job in the deployment plan execution, the Attini Runner will download your Attini distribution from the Attini artifact store.
  4. The Attini Runner will then:
    • Receive the message from the SQS queue.
    • Create a working directory for the job.
    • Extract the distribution into the working directory.
    • Remove the message from the SQS queue.
    • Run the commands.
    • Report the result to the deployment plan. See runner output for more information.

Attini runner job workflow

Warm and cold starts

To increase development speed, decrease rollback time, and enable parallel jobs, the Attini Runner is kept warm, much like a Lambda function but with some differences. A Lambda will start a new instance if a new request is received while another is being processed. An Attini Runner will instead handle multiple parallel requests by executing them simultaneously in the same instance. How many jobs the Attini Runner will execute simultaneously is configured in the Runner configuration. If the number of parallel jobs exceeds the configured maximum, the requests (jobs) will remain in the SQS queue until a job has finished executing.

An Attini Runner can be reused between executions. How long a Runner will stay alive without a job request can be configured in the Runner configuration. If the configuration for the Runner changes, Attini will restart the task with the new configuration.

The same Attini Runner cannot be used for different distributions. However, the same ECS task definition can be used for different runners.

Default Runner

Attini Setup will create resources for a default Runner.

The default Runner will be used for some “out-of-the-box” automation, like the AttiniSam type.

Default runner configuration

The default runner uses the default values from Attini::Deploy::Runner resource.

The following AWS Resources are created by Attini Setup, and they are used for the default runner.

  1. Default ECS cluster called attini-default.

  2. Default IAM Role for the Runner.

  3. CloudWatch log group called /attini/runner/default.

  4. An ECS task definition called attini-default-runner.

EC2 Runner

If Ec2Configuration is configured for the runner, the ECS EC2 launch type will be used.

Main advantages of using EC2:

  1. It allows for Docker builds.
  2. EC2 has a lower cost per CPU core and GB of memory, and more infrastructure options, which can be relevant for jobs with special requirements.

Drawbacks of using EC2 runners are:

  1. Longer cold starts.
  2. Increased complexity.

EC2 Runner configuration

AMI

You can use your own AMI or the recommended option, the Amazon ECS-optimized AMI. You can specify the AMI in the Runner Ec2Configuration.

If you configure your own AMI, it needs the ECS agent installed and enabled.

Disk
The EC2 Host will have a 50 GB gp3 EBS volume.

Default docker build configuration

By default, the Attini runner on EC2 uses a Docker-out-of-Docker (DooD) configuration; find more information on this subject in this blog post about Docker-in-Docker.

This means that the Attini runner container will mount the host's Docker socket. The Attini runner container can then use the docker CLI to interact with the Docker daemon on the EC2 host.

The drawbacks of DooD configuration are usually not relevant to the Attini runner because:

  1. The EC2 host is not shared with other containers.
  2. The Attini runner or the EC2 host usually does not need a port mapping because it should have no incoming traffic.
  3. Mount points are only relevant when you run a container, and the Attini runner is typically used to build containers, not run them.

If this method doesn’t work for your use-case, you can provide your own ECS TaskDefinitionArn with privileged mode enabled to run Docker-in-Docker (DinD) or have a look at the nestybox project.
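
With the default DooD setup, a deployment plan step can run ordinary docker commands on the EC2 host. A sketch (the runner name, image name, account id, and region are made up):

BuildImage:
  Type: AttiniRunnerJob
  Properties:
    Runner: Ec2DockerRunner
    Commands:
      - aws ecr get-login-password | docker login --username AWS --password-stdin 111111111111.dkr.ecr.eu-west-1.amazonaws.com
      - docker build -t my-app:latest ./app
      - docker tag my-app:latest 111111111111.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
      - docker push 111111111111.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest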

EC2 host management and troubleshooting

The EC2 host will be started and terminated automatically by Attini, and the AWS IAM permissions to do this are included in the runner's least privilege policy arn:aws:iam::{AWS::AccountId}:policy/attini-runner-basic-execution-policy-{AWS::Region}. If the IAM role for the runners does not have this policy, there is a risk of zombie EC2 instances.

The EC2 host should have the AWS SSM agent enabled so you can connect to the instance using AWS Systems Manager Session Manager. If it's not working, make sure the IAM role used by the InstanceProfile has the managed policy AmazonSSMManagedInstanceCore attached. If you have any issues, you can connect to the host and investigate it.
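
For example, you can locate the host via the ECS cluster and open a shell with Session Manager (the instance id below is a placeholder):

# List the container instances in the attini-default ECS cluster
aws ecs list-container-instances --cluster attini-default

# Open an interactive shell on the EC2 host
aws ssm start-session --target i-0123456789abcdef0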

The ECS agent logs will be sent to a CloudWatch log group with a 90-day retention period. You can find the CloudWatch log group in your Init deploy stack under the logical name {RunnerName}EcsClientLogGroup.

Attini System Architecture

This is the end-to-end description of how Attini performs a deployment:

Attini AWS System Architecture

AWS Serverless Application Model (AWS SAM) with Attini

The AWS Serverless Application Model (AWS SAM) is an open-source framework that you can use to build serverless applications on AWS.

If an Attini deployment plan contains an AttiniSam step, Attini will build and package the SAM app automatically.

What does AttiniSam do?

The following steps are done automatically when the AttiniSam step is used.

  1. After the Attini CLI is done with the prePackage phase, the Attini CLI will look for a build directory within your SAM project. If the directory does not exist, Attini will run sam build.

    Note

    You need AWS SAM CLI installed on your computer or build container.

  2. Attini will then zip the whole SAM project, including the build directory. This means that the file names within your SAM app do not need to be compatible with S3 naming guidelines and the files will not be excluded by Attini ignore.

  3. Attini will convert the AttiniSam step into two steps. One AttiniRunnerJob that will run the sam package command, and one AttiniCfn step that will deploy the SAM template.

    If you have configured a custom default image, your image will need the SAM CLI installed. A sketch of what an AttiniSam step might look like is shown below.
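
A hedged sketch of an AttiniSam step in a deployment plan; the property names (Project, Path, StackName) are assumptions that should be verified against the AttiniSam documentation:

DeploySamApp:
  Type: AttiniSam
  Properties:
    Project:
      Path: /sam-app        # path to the SAM project inside the distribution
    StackName: my-sam-app
  End: true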

Backup and Recovery

The Attini Framework has state in 3 places:

  1. AWS S3
  2. Amazon DynamoDB
  3. AWS Systems Manager Parameter Store

All data that Attini uses during deployments is created during the deployment execution, so you can always delete/truncate all Attini data without risking your ability to re-deploy. However, this data is useful if you want to keep your deployment history and be able to do rollbacks.


Attini data in AWS S3 Buckets

Attini has 2 AWS S3 buckets per installation:

  1. attini-deployment-origin-{region}-{aws account}
  2. attini-artifact-store-{region}-{aws account}

Both buckets have version history enabled and a life cycle policy that deletes noncurrent objects after 30 days, so any objects that are overwritten or deleted can be restored for 30 days.

Note

When you deploy a new version of an Attini Distribution, it does not overwrite any files in attini-artifact-store-{region}-{aws account}, so the files' life cycle will be determined by the retention configuration.

Attini data in Amazon DynamoDB

Attini has 2 Amazon DynamoDB tables:

  1. AttiniDeployData
  2. AttiniResourceStates

Both tables have point-in-time recovery enabled. If you want to make extra backups of these tables, you are free to do so using your own backup tools.

This data is only used during deployments, so if it is deleted, it will not affect your new deployments. The only Attini feature you will lose if the data is lost is the CLI command attini deploy history.

If any data in these tables is deleted by mistake, you can follow these steps (sketched below):

  1. Do a restore to a new DynamoDB table from the point-in-time recovery backup.
  2. Copy the data from the new table into the AttiniDeployData or AttiniResourceStates tables.
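
A sketch of step 1 using the AWS CLI; copying the items back (step 2) can be done with any tooling you prefer:

# Restore the table to a new table using the latest restorable point in time
aws dynamodb restore-table-to-point-in-time \
    --source-table-name AttiniDeployData \
    --target-table-name AttiniDeployData-restored \
    --use-latest-restorable-time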

If any of the tables are accidentally deleted and you do not have any manual backups, the data is lost.

Manually getting the deployment history if DynamoDB tables are lost

An ordered list of the 100 latest versions of all distribution IDs is also saved as parameter history in AWS Systems Manager Parameter Store. You can find them under the path /attini/distributions/{environment}/{distribution name}/latest.

Using these parameters, you can find any distribution in the s3 bucket attini-artifact-store-{region}-{aws account} that is still saved by your retention configuration (see RetainDistributionDays and RetainDistributionVersions).
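
For example, to list the recorded distribution ids for a distribution named config in the prod environment:

aws ssm get-parameter-history \
    --name /attini/distributions/prod/config/latest \
    --query "Parameters[].Value" \
    --output text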

Attini data in AWS Systems Manager Parameter Store

An ordered list of the 100 latest versions of all distribution IDs is saved as parameter history in AWS Systems Manager Parameter Store. You can find them under the path /attini/distributions/{environment}/{distribution name}/latest.

These parameters are not backed up, so if they are deleted, the data is lost.

If the data in DynamoDB is intact, you can still get the versions within your retention configuration (see RetainDistributionDays and RetainDistributionVersions) by using the Attini CLI command attini deploy history -e {environment} -n {distribution name}.

CloudFormation Configuration

Within the Attini Framework, you have 2 options to configure your CloudFormation stacks: Inline configuration and Configuration files.

The Attini Framework also automatically populates some CloudFormation parameters. See more at: Framework Parameters.


Inline configuration

Inline configuration is configured in the AttiniCfn Properties section. This configuration has the highest priority and will override the configuration set in the Configuration Files.

Example
Type: AttiniCfn
Properties:
  Template: /path/to/my/template.yaml
  StackName: my-stack
  ConfigFile: /path/to/my/config.json
  Parameters:
    MyParameter: MyValue
    MyDbURL: dscsdcscsd.ndckjsndc.us-east-1.rds.amazonaws.com
  RoleArn: arn:aws:iam:my:role:arn

Configuration file

A configuration file is in JSON or YAML format, containing configuration for a CloudFormation stack.

extends: String
stackName: String
templatePath: String
region: String
executionRole: String
stackRoleArn: String
outputPath: String
action: String
parameters:
    ParameterKey: ParameterValue
tags:
    Key: Value
{
    "extends": "String",
    "stackName": "String",
    "templatePath": "String",
    "region": "String",
    "executionRole": "String",
    "stackRoleArn": "String",
    "outputPath": "String",
    "action": "String",
    "parameters": {
        "ParameterKey": "ParameterValue"
    },
    "tags": {
        "Key": "Value"
    }
}

Please see AttiniCfn for property-specific documentation.


Configuration inheritance

The extends property allows you to build an inheritance hierarchy for your configuration, similar to how inheritance works in most object-oriented programming languages. The extends property should contain a path to another configuration file in the distribution (the path should be specified as an absolute path from the distribution root). Attini will merge any configuration in that file into your current configuration. The configuration file that is referenced under extends will have a lower priority than the current configuration file.

Priority hierarchy
  1. Inline configuration has the highest priority.
  2. A configuration file always has a higher priority than the file it extends.
Example

These files could be used to configure a CloudFormation template containing an AWS::Lambda::Function.

/config/default.yaml

parameters:
  Runtime: python3.7
  MemorySize: 256

/config/app.yaml

extends: "/config/default.yaml"
parameters:
  MemorySize: 512
tags:
  Version: 1.0.0

The final configuration will now look like this:

parameters:
  Runtime: python3.7
  MemorySize: 512
tags:
  Version: 1.0.0

Parameters and tags

Your stack's parameters and tags can be specified as a String, Integer, Boolean or an Object. If it's a String, Integer or Boolean, it will be applied directly as a CloudFormation parameter or tag. Attini also allows an Object structure if you want to use Attini's configuration features, like fallback properties or SSM parameters.

Note

The object structure (“fallback properties” or “SSM parameter configuration”) is only supported in Configuration files and can not be used in Inline configuration.


Fallback properties

ParameterKey:
    fallback: Boolean
    value: String
{
    "ParameterKey": {
        "fallback" : "Boolean",
        "value": "String"
    }
}

When fallback is true, the Attini Framework will always honor the current configuration. This means that the value will only have relevance when a parameter is set for the first time, either when creating the stack or when adding a new parameter to an existing stack. This is useful if you have people in your organization who should be able to update some parameters directly in the console or via the AWS CLI. For example, if you have a DBA who should be able to manually update the “AllocatedStorage” parameter on a stack containing an RDS instance, while the rest of the configuration is managed by the Attini Framework, the config file could look like this:

parameters:
    Engine: postgres
    AllocatedStorage:
        fallback: true
        value: 500
{
    "parameters": {
        "Engine": "postgres",
        "AllocatedStorage": {
            "fallback" : true,
            "value": "500"
        }
    }
}

SSM Parameters

Attini supports reading from AWS SSM Parameter Store. In this case, the value should be set to the SSM parameter's name. An optional default can be specified in case the SSM parameter does not exist.

parameters:
    parameterKey:
        value: String
        type: ssm-parameter
        default: String
{
    "parameters": {
        "parameterKey": {
            "value": "String",
            "type": "ssm-parameter",
            "default": "String"
        }
    }
}

SSM parameters can also be used as a fallback property.

Example
parameters:
    parameterKey:
        value: /my/ssm/parameter/name
        type: ssm-parameter
        default: some-default-value
{
    "parameters": {
        "parameterKey": {
            "value": "/my/ssm/parameter/name",
            "type": "ssm-parameter",
            "default": "some-default-value"
        }
    }
}

Framework parameters

Attini will automatically populate some parameters if you specify them in your CloudFormation template. These parameters are:

  • AttiniEnvironmentName (Can be configured in the attini-setup)
  • AttiniDistributionName
  • AttiniDistributionId
  • AttiniRandomString

AttiniEnvironmentName

Your current environment. This name can be “Re-mapped” by configuring the “EnvironmentParameterName” parameter in the attini-setup.

AttiniDistributionName

The name of the deployment plan's distribution.

AttiniDistributionId

The distributionId of the deployment plan's distribution.

AttiniRandomString

A random UUID.

This can be useful to trigger CloudFormation custom resources.

Example

The Attini Framework will automatically configure the parameters in this stack:

AWSTemplateFormatVersion: "2010-09-09"

Parameters:
  AttiniEnvironmentName:
    Type: String

  AttiniDistributionName:
    Type: String

  AttiniDistributionId:
    Type: String

  AttiniRandomString:
    Type: String

Resources:
  ...

Cross-Account CloudFormation Deployments

Attini supports deploying CloudFormation across AWS accounts. To enable cross-account deployments, we need to do the following:

  1. Create an IAM Role in the target account that the Attini Framework can assume.
  2. Configure the ExecutionRoleArn in the Attini deployment plan to reference the IAM Role from step 1.
  3. Apply an S3 bucket policy to the attini-artifact-store-{Region}-{AccountId} bucket that allows the IAM Role from step 1 to get the template from the bucket.

In the example below, the Attini Framework is deployed into eu-west-1 in account 111111111111, and we deploy a CloudFormation stack into account 222222222222.

Cross account cfn deploys


1. Create a custom execution IAM Role

First, we need to create an IAM Role in the target AWS account. This IAM Role requires permission to manage all the resources in your CloudFormation Stack OR the iam:PassRole permission if you are using a Stack role. It also needs permissions to deploy CloudFormation and to access S3 objects from the attini-artifact-store-{Region}**-{AccountId}* bucket.

We must also ensure that the Attini Framework can assume the IAM Role. This is done by allowing the IAM Role to be assumed by the arn:aws:iam::{AccountId}*:role/attini/attini-action-role-{Region}** IAM Role.

* The AWS account id you are performing the deployment from.

** The region you are performing the deployment from.


The example IAM Roles below have full EC2 access. However, you should configure the IAM Role to only have access to resources the stack should manage.

Example role configuration
CreateS3ExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: custom-execution-role
    AssumeRolePolicyDocument:
      Statement:
        - Action: sts:AssumeRole
          Effect: Allow
          Principal:
            AWS: arn:aws:iam::111111111111:role/attini/attini-action-role-eu-west-1
      Version: 2012-10-17
    Policies:
      - PolicyName: !Ref AWS::StackName
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Action: s3:GetObject
              Effect: Allow
              Resource: arn:aws:s3:::attini-artifact-store-eu-west-1-111111111111/*
            - Action:
                - cloudformation:Describe*
                - cloudformation:List*
                - cloudformation:Get*
                - cloudformation:ValidateTemplate
                - cloudformation:CreateStack
                - cloudformation:TagResource
                - cloudformation:UntagResource
                - cloudformation:CancelUpdateStack
                - cloudformation:CreateChangeSet
                - cloudformation:ContinueUpdateRollback
                - cloudformation:UpdateStack
                - cloudformation:DeleteStack
              Effect: Allow
              Resource: "*"
            - Action: ec2:*
              Effect: Allow
              Resource: "*"
{
  "Role": {
    "RoleName": "custom-execution-role",
    "Arn": "arn:aws:iam::222222222222:role/custom-execution-role",
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "AWS": [
            "arn:aws:iam::111111111111:role/attini/attini-action-role-eu-west-1"
          ]
        },
        "Action": "sts:AssumeRole"
      }]
    },
    "Policies": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::attini-artifact-store-eu-west-1-111111111111/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "cloudformation:Describe*",
          "cloudformation:List*",
          "cloudformation:Get*",
          "cloudformation:ValidateTemplate",
          "cloudformation:CreateStack",
          "cloudformation:TagResource",
          "cloudformation:UntagResource",
          "cloudformation:CancelUpdateStack",
          "cloudformation:CreateChangeSet",
          "cloudformation:ContinueUpdateRollback",
          "cloudformation:UpdateStack",
          "cloudformation:DeleteStack"
        ],
        "Resource": "*"
      },
      {
        "Effect": "Allow",
        "Action": "ec2:*",
        "Resource": "*"
      }
    ],
    "Description": "Cross account deployment role"
  }
}

2. Configure the step in the deployment plan

We need to configure the deployment plan to use our execution IAM Role when deploying the template. This is done by setting the ExecutionRoleArn property in the AttiniCfn step.

Example of a deployment plan step using an ExecutionRoleArn
Resources:
  CrossAccountDeploymentPlan:
    Type: Attini::Deploy::DeploymentPlan
    Properties:
      DeploymentPlan:
        StartAt: DeployVpc
        States:
          DeployVpc:
            Type: AttiniCfn
            Properties:
              Template: /network/vpc.yaml
              StackName: vpc
              ExecutionRoleArn: arn:aws:iam::222222222222:role/custom-execution-role
            End: true

3. Apply an S3 bucket policy

Because we are using a custom execution IAM Role, we need to make sure that it is allowed to read the CloudFormation template from S3. This is easily done by applying an S3 Bucket Policy to the attini-artifact-store-{Region}-{AccountId} bucket in the account we are performing the deployment from.

Example of an S3 Bucket Policy
{
  "Version":"2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222222222222:role/custom-execution-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::attini-artifact-store-eu-west-1-111111111111/*"
    }
  ]
}

That’s it! The deployment plan will now deploy the stack to the target account.

Distribution File Names

All files in an Attini distribution are put on AWS S3, which means that all file names have to be compliant with S3 object key name standards. Therefore, only the “Safe characters” from the documentation are recommended for filenames.

Forbidden characters in filenames

Any characters that require special handling, or characters that should be avoided, are not supported by the Attini framework.

Note

If you have file names that include non-supported characters, the Attini CLI will ignore them.

To learn how to automatically exclude files from your distribution, see Attini ignore.

Ignore Files

Sometimes you have files in your repository that you do not want to include in your Attini Distribution. For example, you might have code you compile in the package phase that you want to exclude from the distribution and only package the build artifacts. You can easily do this with .attini-ignore.

.attini-ignore

.attini-ignore is a file with glob patterns that tells the Attini CLI which files to ignore when it packages the distribution. It’s a similar structure to .gitignore.

The .attini-ignore file must be located in the project's root directory, next to the attini-config file.

.
├── attini-config.yaml
├── .attini-ignore
├── ...

Ignore files process

Default .attini-ignore

If the .attini-ignore file is missing, the Attini CLI will apply the following configuration.

**[#%\{}`~<>|^ &;?$]*
**\[*
**\]*
**/.*
.*
**/node_modules/**

The default file will ignore all hidden files and files with the following characters in the name:

#\{}^%` &;?$][]"<>~|

If you write your own ignore file, we recommend that you include the default patterns in the file as well so that all files stay compliant with S3 naming conventions.
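
For example, a custom .attini-ignore that excludes a source directory and markdown files while keeping the default patterns (the first two lines are made-up project-specific entries):

src/**
*.md
**[#%\{}`~<>|^ &;?$]*
**\[*
**\]*
**/.*
.*
**/node_modules/**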

Monitoring

To get notifications from the Attini Framework, you can subscribe to the AWS SNS Topic attini-deployment-status.

This topic will receive a notification from different stages of an Attini Deployment:

  1. InitDeploy triggered
  2. DeploymentPlan starts
  3. DeploymentPlan ends

All messages will have the following message attributes, which can be used for more precise subscriptions (see the example below):

  • status
  • type
  • environment
  • distributionName
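
For example, to only get notified about failed deployments in the prod environment, you could apply an SNS filter policy to your subscription (a sketch; the subscription ARN is a placeholder, and the attribute values should match the message attributes listed above):

aws sns set-subscription-attributes \
    --subscription-arn arn:aws:sns:eu-west-1:111111111111:attini-deployment-status:0a1b2c3d-example \
    --attribute-name FilterPolicy \
    --attribute-value '{"environment": ["prod"], "status": ["FAILED"]}'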

These notifications will be an AWS SNS message with the following structure:

{
  "Type" : "Notification",
  "MessageId" : "String",
  "TopicArn" : "String",
  "Message" : "{encoded json sting with attini data}",
  "Timestamp" : "yyyy-MM-ddTHH:mm:ss.SSSZ",
  "SignatureVersion" : "1",
  "Signature" : "String",
  "SigningCertURL" : "String",
  "UnsubscribeURL" : "String"
}

When the encoded JSON string with Attini data is decoded, it will have the structure below:

Data from InitDeploy

{
  "type": "InitDeploy",
  "distributionId": "String",
  "distributionName": "String",
  "initStackName": "String",
  "environment": "String",
  "tags": {
    "key": "value"
  }
}

Data from DeploymentPlan start

{
  "type": "DeploymentPlanStart",
  "deploymentOriginData": {
    "distributionTags": {
      "key": "value"
    },
    "distributionId": "String",
    "environment": "String",
    "deploymentPlanCount": "Number",
    "distributionName": "String",
    "deploymentSource": {
      "deploymentSourcePrefix": "String",
      "deploymentSourceBucket": "String"
    },
    "stackName": "String",
    "deploymentName": "String",
    "objectIdentifier": "String",
    "deploymentTime": "Number"
  },
  "executionArn": "String",
  "region": "String"
}

Data from DeploymentPlan end

{
  "type": "DeploymentPlanEnd",
  "deploymentPlanStepsCount": "Number",
  "environment": "String",
  "executionArn": "String",
  "distributionsName": "String",
  "distributionId": "String",
  "region": "String",
  "status": "SUCCEEDED | FAILED | TIMED_OUT | ABORTED"
}

Most AWS Resources that Attini is built on have standard metrics which you get for free, and you can monitor them however you see fit.

Attini is built on AWS Serverless services, so if you want to monitor the Attini components, you can find these metrics in the CloudWatch Console.

Most of the Attini Framework is built with components that make operational issues very unlikely; for example, Attini should never reach any capacity limits on S3, SNS, or DynamoDB (and if the services have other issues, Attini is configured with retries). If Attini experiences any operational issues (service limits/errors caused by the underlying AWS Services), you can catch any unexpected failed deployments by Monitoring Attini deployments.

Some service metrics are good to set up alarms for to keep the Attini Framework in shape, but this is not required.

Crashing InitDeploy

When you perform a deployment, an AWS Lambda function called attini-init-deploy is triggered. Some of the tasks that this AWS Lambda function performs are:

  1. Unpacking the Attini Distribution and putting the content in the attini-artifact-store bucket.
  2. Creating/Updating the InitDeploy CloudFormation Stack.
  3. Populating Attini databases with configuration.
  4. Deleting old Attini Distributions according to your retention configuration.

Most issues that this Lambda can have will be logged by the Attini CLI. However, there are some edge cases that can crash silently, so we recommend you monitor the metric:

Namespace: AWS/Lambda

FunctionName: attini-init-deploy

Metric Name: Errors

If this metric is ever 1 or higher, it means that something is wrong; for example, the Attini Distribution could be too big, or there is a permission issue somewhere. You can then look in the logs for attini-init-deploy or file a ticket with Attini support.
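
A sketch of a CloudWatch alarm for this metric using the AWS CLI (the alarm name and SNS topic are placeholders):

aws cloudwatch put-metric-alarm \
    --alarm-name attini-init-deploy-errors \
    --namespace AWS/Lambda \
    --metric-name Errors \
    --dimensions Name=FunctionName,Value=attini-init-deploy \
    --statistic Sum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:eu-west-1:111111111111:my-alert-topic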

Lambda Throttles

There is always a risk of throttling in systems that are under heavy load. Attini is configured with retries, so a few throttling errors should go unnoticed. However, throttling can make the deployments slower, or crash them if the load is high enough.

This problem can happen if the Resource allocation is poorly configured.

If the ResourceAllocation is “Dynamic”, this can be caused by other AWS Lambdas in your AWS Account using up your non-reserved concurrency pool.

If the ResourceAllocation is “Small”, “Medium” or “Large”, it means that you are using Attini more than the configuration can handle. If you are using the “Small” or “Medium” configuration, you can simply increase the size one step. If you’re running the “Large” installation and you are experiencing this issue, please contact Attini support (the ticket will be free of charge).

Namespace: AWS/Lambda

FunctionName: attini-init-deploy & attini-deployment-plan-setup & attini-step-guard & attini-deploy-cfn

Metric Name: Throttles

Securing Attini

To increase the security in your environment and enable traceability, we recommend that you apply the following configuration:

  1. Applying an S3 bucket policy to the attini-deployment-origin-${Region}-${AccountId} bucket.
  2. Applying an S3 bucket policy to the attini-artifact-store-${Region}-${AccountId} bucket.
  3. Securely configure the attini-setup CloudFormation stack.
  4. Configure AWS CloudTrail data events on Attini resources (s3 and lambda).



Bucket policies

By creating specific bucket policies on the attini-deployment-origin and attini-artifact-store buckets, you can ensure that you don't give out this access to anyone by mistake.

You should also consider adding a restriction on the s3:PutBucketPolicy API for these buckets so that no one can tamper with your security configuration.
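
A sketch of such a statement, denying s3:PutBucketPolicy for everyone except a designated admin role (the role name is a placeholder):

{
    "Sid": "LockBucketPolicy",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:PutBucketPolicy",
    "Resource": "arn:aws:s3:::attini-deployment-origin-${Region}-${AccountId}",
    "Condition": {
        "ArnNotEquals": {
            "aws:PrincipalArn": "arn:aws:iam::${AccountId}:role/security-admin"
        }
    }
}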

Warning

By restricting access to the s3:PutBucketPolicy API, you can lock yourself out of your buckets, so be careful. You should never rely on AWS SSO Roles to manage this API because those roles might be replaced.

Attini deployment origin bucket

Anyone that has s3:PutObject access to the attini-deployment-origin bucket will be able to do deployments, which is a very high privilege action.

The attini-deployment-origin bucket policy needs to allow the following:

  1. Give access to anyone that needs to start a deployment, e.g. your build server or your DevOps personnel.
  2. Allow the Attini framework to read the distribution (s3 object).
Deployment origin bucket policy example
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictDeploymentAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::attini-deployment-origin-${Region}-${AccountId}/*",
            "Condition": {
                "ArnNotEquals": {
                    "aws:PrincipalArn": "{ARN of the user or role that should be able to deploy}"
                }
            }
        }
    ]
}

Attini artifact store bucket

Anyone that has s3:PutObject access to the attini-artifact-store bucket will be able to replace files that will be used in the next re-run of a deployment; that way, they could tamper with your cloud environment. To manage this risk, it’s important to have CloudTrail object-level logging activated on the attini-artifact-store bucket and to use least privilege IAM roles for your deployment steps.

If you have any distributions with sensitive information, you can also use bucket policies to restrict read access to specific paths in the attini buckets.

Artifact store policy example
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictAccessToArtifactStore",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::attini-artifact-store-${Region}-${AccountId}/*",
            "Condition": {
                "ArnNotEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::${AccountId}:role/attini/attini-init-deploy-lambda-role-${Region}"
                }
            }
        }
    ]
}
Bucket policies CloudFormation example
AWSTemplateFormatVersion: 2010-09-09
Description: Bucket policies for the Attini Framework

Resources:
  AttiniDeploymentOriginBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Sub attini-deployment-origin-${AWS::Region}-${AWS::AccountId}
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: RestrictDeploymentAccess
            Principal: "*"
            Action:
              - "s3:PutObject"
            Effect: Deny
            Resource:
              - !Sub arn:aws:s3:::attini-deployment-origin-${AWS::Region}-${AWS::AccountId}/*
            Condition:
              ArnNotLike:
                aws:PrincipalArn: !Sub arn:aws:iam::${AWS::AccountId}:role/aws-reserved/sso.amazonaws.com/${AwsSsoRegion}/AWSReservedSSO_AdministratorAccess_*
                # Fill in any IAM Role or User that should be able to do deployments


  AttiniArtifactStoreBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Sub attini-artifact-store-${AWS::Region}-${AWS::AccountId}
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: RestrictAccessToArtifactStore
            Principal: "*"
            Action:
              - "s3:PutObject"
            Effect: Deny
            Resource:
              - !Sub arn:aws:s3:::attini-artifact-store-${AWS::Region}-${AWS::AccountId}/*
            Condition:
              ArnNotEquals:
                aws:PrincipalArn: !Sub arn:aws:iam::${AWS::AccountId}:role/attini/attini-init-deploy-lambda-role-${AWS::Region}
                # Fill in any IAM Role or User that should be able to manipulate files in the artifact store

How can I configure the S3 Bucket policies using AWS SSO Roles?

If you use AWS SSO, the underlying AWS IAM roles have unpredictable names, making them hard to configure. AWS SSO also replaces the roles sometimes, meaning that you should never use AWS SSO Roles as a Principal in any IAM policy, because that trust can be broken and can be hard to restore.

To reference an AWS SSO Role in an IAM/S3 Policy in a maintainable manner, you can use ArnLike/ArnNotLike condition in your bucket policy.

ArnLike/ArnNotLike condition

Warning

AWS has in no way guaranteed the naming convention for these roles, so there is a risk that they will change and break your permission configuration.

Therefore, we recommend always having a break-glass User/Role that does not rely on AWS SSO.

Hint: To get the ARN of the IAM role you are currently using, you can look it up in the IAM console or use the AWS CLI command:

aws sts get-caller-identity
Example
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictDeploymentAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::attini-deployment-origin-${Region}-${AccountId}/*",
            "Condition": {
                "ArnNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::${AccountId}:role/aws-reserved/sso.amazonaws.com/${SsoRegion}/AWSReservedSSO_${PermissionSetName}_*"
                }
            }
        }
    ]
}

Securely configure the attini-setup CloudFormation stack and your deployment plans

This chapter is about configuring the attini-setup CloudFormation stack and Attini DeploymentPlans.

When configuring a CI/CD system, you often have a “catch 22” situation. You need certain resources (in Attini's case, IAM roles) for the CI/CD system to work, but you want to use the CI/CD system to create the resources you need.

To work around this, you can do one of the following:

  1. Create the resources yourself, then deploy the attini-setup CloudFormation stack using the IAM roles you just created.
  2. Deploy attini-setup with high privileges, then use Attini to create and configure the required resources for you. After that you can re-configure the Attini Framework with your least privilege IAM roles.

Keep in mind that you can reconfigure Attini at any time, so you can adjust, add or remove privileges as needed.


InitDeployRole

The InitDeploy Role is configured by the CreateInitDeployDefaultRole and InitDeployRoleArn parameters in the attini-setup.

These 2 parameters configure the access for the InitDeploy, which is the CloudFormation stack that Attini automatically creates when a distribution is deployed. This CloudFormation stack is intended to create CI/CD resources like the deployment plan.

Note

If you set “CreateInitDeployDefaultRole”=“true”, attini-setup will create a default role with high privileges to make it easy to get started with Attini. However, in production environments, we recommend that you create your own InitDeploy Role. Then you can control which resources can be created by the Attini InitDeploy.

If you create an IAM Role and configure the InitDeployRoleArn parameter, this role has to:

  1. Be an AWS Lambda service role. Assume role document:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }
  2. Have whatever privileges you might need for your deployment resources (to create Attini deployment plans, this role needs permission to create and update AWS Step Functions and CloudWatch Events).

Note

attini-setup will add a least privilege inline IAM policy to your InitDeploy Role that will enable it to work with the Attini Framework.


DeploymentPlan Role

The DeploymentPlan Role is configured via the RoleArn parameter in the Attini::Deploy::DeploymentPlan resource. By configuring this role, you can allow the Attini deployment plan to do anything you might need in AWS while sticking to the least privilege principle.

The “CreateDeploymentPlanDefaultRole” parameter in the attini-setup sets up a default role for the deployment plans with quite broad permissions to make the Attini Framework a bit easier to work with. In production environments, we recommend setting “CreateDeploymentPlanDefaultRole”=“false”.


ExecutionRole

The ExecutionRole is configured via the ExecutionRoleArn parameter in the AttiniCfn step. By configuring this role, you can allow the AttiniCfn step to do anything you might need in AWS while sticking to the least privilege principle.

The ExecutionRole permissions are automatically transferred to the AWS CloudFormation stack that the AttiniCfn step is deploying, unless you also specify a StackRoleArn, in which case CloudFormation will update its resources with the StackRole credentials. So if StackRoleArn is configured, the ExecutionRole only needs permissions to create/update the CloudFormation stack.

By configuring the ExecutionRole, you can also deploy CloudFormation stacks to other AWS accounts; find more info here.


StackRole

The StackRole is configured via the StackRoleArn parameter in the AttiniCfn step. By configuring this role, you can apply an AWS CloudFormation service role.

To allow an Attini deployment plan to apply a stack role, you have to configure the AwsServiceRolesContainsString parameter in the attini-setup correctly. If you configure an ExecutionRole in combination with a StackRole, the ExecutionRole will need iam:PassRole permission for the StackRole.

StackRoles can be a powerful tool if you ever need to give out granular access to different members of your organization.

For example, let’s say that you have a development team in your organization that should be able to update a few specific ECS Services. Instead of giving the developers ecs:UpdateService permission, you can give them cloudformation:UpdateStack permission and control which stacks they can update via AWS IAM Resource or Condition configuration. Then you give the StackRole permission to do ecs:UpdateService. This lets your staff escalate their own privileges in a controlled way.

If you have a few CloudFormation Parameters that you want to be manually configured/maintained, please see fallback configuration in the CloudFormation ParameterValue config.

Warning

This is actual privilege escalation, which comes with certain risks. If the StackRole, for example, has permission to do iam:UpdateRole, nothing will stop it from giving an IAM Role AdministratorAccess and giving a potential hacker the ability to assume the role. So be careful with IAM permissions on StackRoles.

A way to work around this issue is with IAM Permissions Boundaries in combination with the IAM condition key iam:PermissionsBoundary.
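
For example, a sketch of an IAM policy statement that only lets a role create or modify IAM roles carrying a specific permissions boundary (the boundary policy ARN is a placeholder):

{
    "Effect": "Allow",
    "Action": [
        "iam:CreateRole",
        "iam:PutRolePolicy",
        "iam:AttachRolePolicy"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "iam:PermissionsBoundary": "arn:aws:iam::${AccountId}:policy/developer-boundary"
        }
    }
}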

ExecutionRole vs StackRole

From an Attini deployment perspective, it does not matter whether ExecutionRoles or StackRoles are being used; however, StackRoles come with extra capabilities which often create unnecessary complexity.

Therefore, Attini recommends using ExecutionRoles by default and only applying StackRoles when you have a specific use case for them.


Activate CloudTrail

Activating CloudTrail on the Attini resources gives you much better auditability, and it’s therefore highly recommended. If you have activated data events on all S3 buckets and Lambdas, you are already done.

Note

Be aware that data events can generate additional cost.

We at Attini recommend that you activate CloudTrail data events on the following Attini resources to give you proper traceability:

  • The attini-deployment-origin S3 bucket
  • The attini-artifact-store S3 bucket
  • The attini-action Lambda
  • The attini-init-deploy Lambda
  • The attini-auto-update Lambda
  • The attini-step-guard Lambda

This will log any new deployments, which is critical information if you are looking for an intruder. It will also trace any tampering with your current artifacts or Lambda function invocations.

Note

These data events do not appear in the normal CloudTrail logs; you have to find them in the logs shipped to S3.

Minimum recommend CloudTrail logging:

[
  {
    "name": "Minimum logs",
    "fieldSelectors": [
      {
        "field": "eventCategory",
        "equals": [
          "Data"
        ]
      },
      {
        "field": "resources.type",
        "equals": [
          "AWS::S3::Object"
        ]
      },
      {
        "field": "resources.ARN",
        "startsWith": [
          "arn:aws:s3:::attini-deployment-origin",
          "arn:aws:s3:::attini-artifact-store",
          "arn:aws:s3:::prod-attini-support-logs",
          "arn:aws:s3:::acc-attini-support-logs"
        ]
      }
    ]
  },
  {
    "name": "Lambda events",
    "fieldSelectors": [
      {
        "field": "eventCategory",
        "equals": [
          "Data"
        ]
      },
      {
        "field": "resources.type",
        "equals": [
          "AWS::Lambda::Function"
        ]
      },
      {
        "field": "resources.ARN",
        "endsWith": [
          "attini-action",
          "attini-init-deploy",
          "attini-auto-update",
          "attini-step-guard"
        ]
      }
    ]
  }
]

Security considerations

Administering our IT environments with a high level of automation brings many advantages, and done right, it can increase your security by:

  1. Decreasing the privileges your staff need in their day-to-day activities.
  2. Making it easier to maintain a minimum privilege access model.
  3. Making it easier to implement tests that can find vulnerabilities.
  4. Making it easier to update and patch the underlying infrastructure.
  5. Making it easier to implement and maintain security resources like Alarms and AWS Config Rules.

The drawback is that the tools you use to administer your IT environment need admin access to your resources, which, of course, is associated with risks.

Anyone that can put distributions (s3 files) into the attini-deployment-origin bucket can use the framework to deploy new IAM roles, which means that in the worst-case scenario, someone can deploy a new admin role that only they can assume and thereby escalate their own privileges.

Warning

The possibility to escalate privileges is always a risk you take when you automate the creation and updates of IAM entities (Roles, Users, and Policy). For this reason, you have to be very careful when you use deployment tools for IAM configuration.

For example, if you give a CloudFormation stack a service role with IAM privileges, and then give someone access to the cloudformation:UpdateStack API, that person can give themselves AdministratorAccess by creating or updating a role that is created by the CloudFormation stack. For this reason, we at Attini always recommend having separate CloudFormation stacks for security-related resources (IAM, KMS, EC2 Security Groups, etc.) so that you can easily separate access to them.

Another recommended way to avoid escalation of privileges is AWS Permissions boundaries.

Temporary Storage

Sometimes you need temporary storage in S3 for your deployments, for example, for the Lambda zip that is created by the AWS SAM CLI. Therefore, we have added an S3 life cycle policy to the attini-artifact-store-${Region}-${AccountId} bucket for the prefix attini/tmp/.

All objects with that prefix will get an S3 delete marker after 30 days, and they will be permanently deleted after 60 days.

Example

The object attini-artifact-store-eu-west-1-111111111111/attini/tmp/my-temp-file.zip will be permanently deleted after 60 days.
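
For instance, a deployment step could stage a temporary build artifact under the tmp prefix (the file name, region, and account id are placeholders):

aws s3 cp my-function.zip s3://attini-artifact-store-eu-west-1-111111111111/attini/tmp/my-function.zip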

