Version: 1.0

Ratify with AWS Signer

This guide explains how to get started with Ratify on AWS using EKS, ECR, and AWS Signer. It covers provisioning the required AWS resources, installing the components, and configuring them to work together. Once everything is set up, we will walk through a simple scenario of verifying the signature on a container image at deployment time.

By the end of this guide you will have a public ECR repository, an EKS cluster with Gatekeeper and Ratify installed, and have validated that only images signed by a trusted AWS Signer SigningProfile can be deployed.

This guide assumes you are starting from scratch, but portions of the guide can be skipped if you have an existing EKS cluster, ECR repository, or AWS Signer resources.

Table of Contents

  1. Prerequisites
  2. Set up ECR
  3. Set up EKS
  4. Prepare Container Image
  5. Sign Container Image
  6. Deploy Gatekeeper
  7. Configure IAM Permissions
  8. Deploy Ratify
  9. Deploy Container Image
  10. Cleaning Up


Prerequisites

There are a couple of tools you will need locally to complete this guide:

  • awscli: This is used to interact with AWS and provision necessary resources
  • eksctl: This is used to easily provision EKS clusters
  • kubectl: This is used to interact with the EKS cluster we will create
  • helm: This is used to install Ratify components into the EKS cluster
  • docker: This is used to build the container image we will deploy in this guide
  • ratify: This is used to check images from ECR locally
  • jq: This is used to capture variables from json returned by commands
  • notation: This is used to sign the container image we will deploy in this guide
  • AWS Signer notation plugin: This is required to use notation with AWS Signer resources

If you have not done so already, configure awscli to interact with your AWS account by following these instructions.
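Before continuing, it can help to confirm that each tool is on your PATH. A minimal sanity-check loop (informational only; it just reports what is missing):

```shell
# Report which of the required tools are installed locally.
for tool in aws eksctl kubectl helm docker ratify jq notation; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done
```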

Set up ECR

We need to provision a public container repository to make our container images and their associated artifacts available to our EKS cluster. To keep things simple, this guide provisions a public ECR repository using awscli.

export REPO_NAME=ratifydemo
export REPO_URI=$(aws ecr-public create-repository --repository-name $REPO_NAME --region us-east-1 | jq -r ."repository"."repositoryUri" )

We will use the repository URI returned by the create command later to build and tag the images we create.
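For reference, the returned URI follows the pattern public.ecr.aws/&lt;registry-alias&gt;/&lt;repository&gt;. A quick illustration of splitting it into its parts with shell parameter expansion (the alias below is made up; yours will differ):

```shell
# Hypothetical example value; your repositoryUri will contain your own alias.
REPO_URI="public.ecr.aws/a1b2c3d4/ratifydemo"
REGISTRY="${REPO_URI%%/*}"   # registry host: public.ecr.aws
REPO_PATH="${REPO_URI#*/}"   # alias/repository: a1b2c3d4/ratifydemo
echo "registry: $REGISTRY"
echo "repository: $REPO_PATH"
```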

For more information on provisioning ECR repositories check the documentation.

Set up EKS

We will need to provision a Kubernetes cluster to deploy everything on. We will do this using the eksctl command line utility. Before provisioning our EKS cluster we will need to create a key pair for the nodes:

aws ec2 create-key-pair --region us-east-1 --key-name ratifyDemo
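The private key is returned in the KeyMaterial field of the JSON response. One way to save it as a usable PEM file, sketched here against a mock response (with awscli you would pipe the real command output instead):

```shell
# Mock of the create-key-pair response; substitute the real awscli output.
cat > keypair.json << 'EOF'
{"KeyName": "ratifyDemo", "KeyMaterial": "-----BEGIN RSA PRIVATE KEY-----\nMIIE...\n-----END RSA PRIVATE KEY-----"}
EOF
# jq -r expands the \n escapes in the JSON string, producing a PEM file.
jq -r .KeyMaterial keypair.json > ratifyDemo.pem
chmod 400 ratifyDemo.pem
```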

Save the output to your local machine, then run the following to create the cluster:

eksctl create cluster \
--name ratify-demo \
--region us-east-1 \
--zones us-east-1c,us-east-1d \
--with-oidc \
--ssh-access \
--ssh-public-key ratifyDemo

aws eks update-kubeconfig --name ratify-demo

This command will provision a basic EKS cluster with default settings.

Additional information on EKS deployment can be found in the EKS documentation.

Prepare Container Image

For this guide we will create a basic container image we can use to simulate deployments of a service. We will start by building the container image:
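Before running the build, note that docker build needs a Dockerfile in the working directory. Any minimal image works for this demo; a hypothetical example:

```shell
# Minimal example Dockerfile; the base image and command are arbitrary.
cat > Dockerfile << EOF
FROM public.ecr.aws/docker/library/alpine:3.18
CMD ["echo", "hello from the ratify demo"]
EOF
```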

docker build -t $REPO_URI:v1 .

After the container is built we need to push it to the repository:

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin $REPO_URI

docker push $REPO_URI:v1

Sign Container Image

For this guide, we will sign the image using notation and AWS Signer resources. First, we will create a SigningProfile in AWS Signer and get the ARN:

aws signer put-signing-profile \
--profile-name ratifyDemo \
--platform-id Notation-OCI-SHA384-ECDSA

export PROFILE_ARN=$(aws signer get-signing-profile --profile-name ratifyDemo | jq .arn -r)
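For reference, the exported value is a standard colon-delimited ARN. An illustration of its shape (the account ID here is made up; yours will differ):

```shell
# Illustrative only: a SigningProfile ARN, like other ARNs, is colon-delimited.
PROFILE_ARN="arn:aws:signer:us-east-1:111122223333:/signing-profiles/ratifyDemo"
echo "region:  $(echo "$PROFILE_ARN" | cut -d: -f4)"
echo "account: $(echo "$PROFILE_ARN" | cut -d: -f5)"
```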

To use the SigningProfile in notation, we will add the profile as a signing key:

notation key add \
--plugin com.amazonaws.signer.notation.plugin \
--default ratifyDemo

After the profile has been added, we will use notation to sign the image with the SigningProfile:

notation sign $REPO_URI:v1

Both the container image and the signature should now be in the public ECR repository. We can also inspect the signature information using notation:

notation inspect $REPO_URI:v1

More information on signing can be found in the AWS Signer and notation documentation.

Deploy Gatekeeper

The Ratify container will perform the actual validation of images and their artifacts, but Gatekeeper is used as the policy controller for Kubernetes.

We first need to install Gatekeeper into the cluster. We will use the Gatekeeper helm chart with some customizations:

helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts

helm install gatekeeper/gatekeeper \
--name-template=gatekeeper \
--namespace gatekeeper-system --create-namespace \
--set enableExternalData=true \
--set validatingWebhookTimeoutSeconds=5 \
--set mutatingWebhookTimeoutSeconds=2

Next, we need to deploy a Gatekeeper policy and constraint. For this guide, we will use a sample policy and constraint that requires images to have at least one trusted signature.

kubectl apply -f
kubectl apply -f

More complex combinations of Rego policies and Ratify verifiers can be used to accomplish many types of checks. See the Gatekeeper docs for more information on Rego authoring.

Configure IAM Permissions

Before deploying Ratify, we need to configure permissions for Ratify to be able to make requests to AWS Signer. To do this we will use the IAM Roles for Service Accounts integration. First, we need to create an IAM policy that has AWS Signer permissions:

cat > signer_policy.json << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "signer:GetRevocationStatus"
      ],
      "Resource": "*"
    }
  ]
}
EOF

export POLICY_ARN=$(aws iam create-policy \
--policy-name signerGetRevocationStatus \
--policy-document file://signer_policy.json \
| jq ."Policy"."Arn" -r)

Then, we will use eksctl to create a service account and role and attach the policies to the role:

eksctl create iamserviceaccount \
--name ratify-admin \
--namespace gatekeeper-system \
--cluster ratify-demo \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
--attach-policy-arn $POLICY_ARN \
--approve

We can validate that the service account was created, and that it is annotated with the IAM role ARN, by using kubectl:

kubectl -n gatekeeper-system get sa ratify-admin -oyaml

Deploy Ratify

Now we can deploy Ratify to our cluster with the AWS Signer root as the notation verification certificate:

curl -sSLO

helm install ratify \
ratify/ratify --atomic \
--namespace gatekeeper-system \
--set-file notationCert=./aws-signer-notation-root.cert \
--set serviceAccount.create=false \
--set oras.authProviders.awsEcrBasicEnabled=true

After deploying Ratify, we will download the AWS Signer notation plugin to the Ratify pod using the Dynamic Plugins feature:

cat > aws-signer-plugin.yaml << EOF
apiVersion: config.ratify.deislabs.io/v1beta1
kind: Verifier
metadata:
  name: aws-signer-plugin
spec:
  name: notation-com.amazonaws.signer.notation.plugin
  artifactTypes: application/vnd.oci.image.manifest.v1+json
EOF

kubectl apply -f aws-signer-plugin.yaml

Finally, we will create a verifier that specifies the trust policy to use when verifying signatures. In this guide, we will use a trust policy that only trusts images signed by the SigningProfile we created earlier:

cat > notation-verifier.yaml << EOF
apiVersion: config.ratify.deislabs.io/v1beta1
kind: Verifier
metadata:
  name: verifier-notation
spec:
  name: notation
  artifactTypes: application/vnd.cncf.notary.signature
  parameters:
    verificationCertStores:
      certs:
        - ratify-notation-inline-cert
    trustPolicyDoc:
      version: "1.0"
      trustPolicies:
        - name: default
          registryScopes:
            - "*"
          signatureVerification:
            level: strict
          trustStores:
            - signingAuthority:certs
          trustedIdentities:
            - $PROFILE_ARN
EOF

kubectl apply -f notation-verifier.yaml

More complex trust policies can be used to customize verification. See notation documentation for more information on writing trust policies.

Deploy Container Image

Now that the signed container image is in the registry and Ratify is installed into the EKS cluster we can deploy our container image:

kubectl run demosigned --image $REPO_URI:v1

We should be able to see from the Ratify and Gatekeeper logs that the container signature was validated. The pod for the container should also be running.

kubectl logs -n gatekeeper-system deployment/ratify

We can also test that an image without a valid signature is not able to run:

kubectl run demounsigned --image hello-world

The command should fail with an error, and the Ratify and Gatekeeper logs should show that signature validation failed.

Cleaning Up

We can use awscli and eksctl to delete the resources created in this guide. The EC2 key pair and IAM policy created along the way can be removed the same way, with aws ec2 delete-key-pair and aws iam delete-policy.

aws ecr-public delete-repository --region us-east-1 --repository-name $REPO_NAME

eksctl delete cluster --region us-east-1 --name ratify-demo

aws signer cancel-signing-profile --profile-name ratifyDemo