AWS Security Specialty - Study Notes - Domain 5
Module 1: CloudHSM
14.1 Overview of Hardware Security Module
HSM stands for Hardware Security Module.
They are special devices that safeguard and manage digital keys for strong authentication.
These devices are tamper-resistant: if anyone tries to tamper with them, they automatically delete the keys they store.
14.2 CloudHSM
CloudHSM is AWS's offering of a dedicated HSM within your AWS cloud.
Prior to this, companies had to keep HSMs on-premises, and if the rest of the infrastructure was on AWS, there was a lot of latency involved.
As HSMs play a critical role in protecting sensitive data, they are typically certified against internationally recognized standards such as Common Criteria and FIPS 140-2.
Important Pointers for CloudHSM:
CloudHSM is single-tenant (a single physical device dedicated only to you).
It must be used within a VPC.
We can integrate CloudHSM with Redshift & RDS for Oracle.
For fault tolerance, we will need to build a cluster of at least 2 CloudHSM devices.
AWS uses the SafeNet Luna SA HSM appliance for CloudHSM.
They are FIPS validated.
It generally has 2 partitions: one that AWS uses for monitoring, and a second cryptographic partition that you have access to and that stores the keys.
Module 2: Key Management Service
AWS KMS is a managed service that allows you to create, manage, and control ENCRYPTION keys, and it uses HSMs to protect the security of those keys.
It has no upfront cost and follows a pay-as-you-go model.
2.1 Encryption Concepts
PlainText & CipherText
PT refers to data in an unencrypted form.
CT refers to data after it’s encrypted.
Algorithm & Keys
An encryption algorithm is a step-by-step procedure that defines how the plaintext is converted into ciphertext.
For symmetric keys, KMS uses AES-GCM with 256-bit keys (asymmetric CMKs are also supported; see the supported key formats in Module 3).
2.2 Overview Working
We create a CMK.
We define the Administrative Users & Key Users.
The key users can reference the KMS key ID to encrypt and decrypt data.
Step 1: Create KMS Key
Here we go ahead and create the KMS key. An alias helps us reference the key when we have multiple KMS keys.
Step 2: Create Key Administrators
We define Key Administrators who have full administrative permission on the key.
Step 3: Create Usage Permission
We define the Key Usage Permissions, i.e., who will have access to use this key for encryption and decryption.
Step 4: Verify the Key Policy
AWS will generate the key policy accordingly; you just verify it and click "Finish".
Step 5: It’s Done
Your KMS key is now created and ready to use.
The customer cannot export the CMK; you can reference the key ID to encrypt/decrypt, but you cannot get a copy of the key material.
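A minimal boto3 sketch of the flow above (the alias name and plaintext are illustrative; administrators and usage permissions are defined through the key policy):

```python
import boto3

kms = boto3.client("kms")

# Step 1: create a KMS key and an alias so it is easy to reference
key_id = kms.create_key(Description="demo CMK")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/demo-cmk", TargetKeyId=key_id)

# Step 5: a key user references the key ID (or alias) to encrypt/decrypt;
# the key material itself never leaves KMS
ciphertext = kms.encrypt(KeyId="alias/demo-cmk", Plaintext=b"hello")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"hello"
```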
Module 3: KMS Architecture
3.1 Important Limitation
We can encrypt a maximum of 4 KB of data directly with a CMK.
Since data travels over the network, there can be latency issues.
AWS suggests the Customer Master Key + data key approach instead.
3.2 Envelope Encryption (research**)
We generate 1 CMK.
We then generate a data key. AWS returns both a plaintext (PT) and a ciphertext (CT) version of it.
We use the plaintext data key to encrypt the files on the server.
We then store the ciphertext data key along with the encrypted file.
Decryption Steps
Use the decrypt operation to decrypt the encrypted data key into a plaintext copy of the data key.
Use the plaintext data key to decrypt data locally.
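An envelope-encryption sketch covering both the encrypt and decrypt steps above; it assumes boto3 plus the cryptography package for the local AES-GCM step, and the key alias is illustrative:

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# Encrypt: ask KMS for a data key; it returns a plaintext and an encrypted copy
dk = kms.generate_data_key(KeyId="alias/demo-cmk", KeySpec="AES_256")
plaintext_key, encrypted_key = dk["Plaintext"], dk["CiphertextBlob"]

nonce = os.urandom(12)
encrypted_file = AESGCM(plaintext_key).encrypt(nonce, b"file contents", None)
# Store encrypted_file + nonce + encrypted_key together; discard plaintext_key from memory

# Decrypt: turn the stored encrypted data key back into plaintext via KMS, then decrypt locally
plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
original = AESGCM(plaintext_key).decrypt(nonce, encrypted_file, None)
assert original == b"file contents"
```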
Supported key formats
Supports symmetric and asymmetric CMKs.
Symmetric: represents a single 256-bit secret key; to use your symmetric CMK you must call KMS.
Asymmetric: represents a public and private key pair that can be used for various operations such as encryption/decryption or signing (sign/verify).
AWS CLI version 1 doesn't work well with asymmetric keys.
Research Base64 **
Module 4: Deleting CMK
Deleting a CMK in AWS KMS deletes the key material and all the metadata associated with the CMK. This process is irreversible.
After a CMK is deleted, we can no longer decrypt the data that was encrypted by that CMK.
Because it is an irreversible process, AWS KMS enforces a waiting period.
The waiting period can be from a minimum of 7 days up to a maximum of 30 days. The default is 30.
During the waiting period, the CMK cannot be used in any cryptographic operation.
Unmanageable CMK: if the only user that had permission on the key is deleted, the key becomes unmanageable. Generally include the account root user in the key policy, as it cannot be deleted.
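A short boto3 sketch of scheduling (and cancelling) deletion; the key ID is illustrative:

```python
import boto3

kms = boto3.client("kms")

# The waiting period must be between 7 and 30 days (default is 30)
kms.schedule_key_deletion(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    PendingWindowInDays=7,
)

# During the waiting period the key cannot be used, but deletion can still be cancelled
kms.cancel_key_deletion(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab")
```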
Module 5: Data Key Caching
AWS has recently introduced a feature called “Data Key Caching” in its AWS Encryption SDK.
Data key caching lets us reuse the data keys that protect our data, instead of generating a new data key for each encrypt operation.
This definitely comes with security trade-offs, as encryption best practices discourage extensive reuse of data keys.
In the AWS Encryption SDK, by default a new data key is generated for each encrypt operation that is performed.
This is the most secure practice. It does bring overhead as well.
Important Pointers for Data Key Caching
Data key caching saves the plaintext and ciphertext of the data keys you use in a configurable cache.
When you need a key to encrypt or decrypt data, you can reuse a data key from the cache instead of creating a new data key.
It is preferred to use data key caching when request frequency is high, latency is a concern, or master key operations are slow or costly.
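A rough sketch of enabling the cache, assuming the aws-encryption-sdk Python package (2.x); the key ARN, cache size, and reuse limits are illustrative, and exact class names may differ between SDK versions:

```python
import aws_encryption_sdk
from aws_encryption_sdk import (
    CachingCryptoMaterialsManager,
    LocalCryptoMaterialsCache,
    StrictAwsKmsMasterKeyProvider,
)

key_provider = StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"]
)
cache = LocalCryptoMaterialsCache(capacity=100)

# Reuse a cached data key for up to 5 minutes or 10 messages, whichever comes first
caching_cmm = CachingCryptoMaterialsManager(
    master_key_provider=key_provider,
    cache=cache,
    max_age=300.0,
    max_messages_encrypted=10,
)

client = aws_encryption_sdk.EncryptionSDKClient()
ciphertext, header = client.encrypt(source=b"some data", materials_manager=caching_cmm)
```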
Module 6: KMS Access Control
In AWS, there are two types of policies that you will work with:
i) IAM policies (user, role)
ii) Resource Policies
In KMS, by default, every CMK has a key policy attached to it.
We can control access to KMS CMKs in the following three ways:
i) Using Key Policies
ii) Using IAM Policy in combination with key policies
iii) Using KMS Grants
KMS and IAM Policy Evaluation Logic (Study**)
If a user has permissions in the key policy but not in their IAM policy, they can still perform the actions granted in the key policy.
Likewise, if the user has Encrypt in the key policy but not in their IAM policy, the encrypt operation is still permitted as long as there is no explicit deny.
In other words, for IAM policies to grant any form of access to IAM users on a CMK, you MUST give the AWS account (in this case, root) FULL ACCESS to the CMK as a prerequisite.
From the default key policy documentation: "Enables IAM policies to allow access to the CMK."
IAM policies by themselves are not sufficient to allow access to a CMK. However, you can use them in combination with a CMK's key policy if the key policy enables it. Giving the AWS account full access to the CMK does this; it enables you to use IAM policies to give IAM users and roles in the account access to the CMK. It does not by itself give any IAM users or roles access to the CMK, but it enables you to use IAM policies to do so. For more information, see Managing Access to AWS KMS CMKs.
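A sketch of that default statement, applied with boto3; the account ID and key ID are illustrative:

```python
import json

import boto3

# This root full-access statement is what enables IAM policies to grant access to the CMK
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        }
    ],
}

boto3.client("kms").put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    PolicyName="default",
    Policy=json.dumps(policy),
)
```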
Module 7: KMS Grants
In KMS, by default, all CMKs have a key policy attached to them.
We can manage access to a CMK via:
Key Policies
IAM Policies in combination with key policies
Using KMS Grants
During the process of Grant, there are two entities which are involved:
Grant user: the user who creates the grant (already has access to the key).
Grantee: the user who will use the grant generated by the grant user.
A grant is like a secret token.
The token carries specific permissions such as encrypt, decrypt, or others. (aws kms create-grant)
The grantee uses this grant token to perform operations on the CMK. (aws kms encrypt --grant-tokens)
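The same flow sketched with boto3 (principals and the key ID are illustrative):

```python
import boto3

kms = boto3.client("kms")

# Grant user: create the grant for the grantee principal
grant = kms.create_grant(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    GranteePrincipal="arn:aws:iam::111122223333:user/grantee",
    Operations=["Encrypt", "Decrypt"],
)
token = grant["GrantToken"]

# Grantee: pass the grant token with the request (useful until the grant has fully propagated)
kms.encrypt(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    Plaintext=b"secret",
    GrantTokens=[token],
)
```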
Module 8: Importing Key Material in KMS
A customer master key (CMK) contains the key material used to encrypt and decrypt data.
When we create a CMK, by default, AWS creates key-material for that CMK. However, we do have an option to create a CMK without key material and then import our key-material into the CMK.
The following overview explains how to import your key material into AWS KMS. For more details about each step in the process, see the corresponding topic.
Create a symmetric CMK with no key material – To get started with importing key material, first create a symmetric CMK whose origin is EXTERNAL. This indicates that the key material was generated outside of AWS KMS and prevents AWS KMS from generating key material for the CMK. In a later step, you will import your own key material into this CMK.
Download the public key and import token – After completing step 1, download a public key and an import token. These items protect the import of your key material to AWS KMS.
Encrypt the key material – Use the public key that you downloaded in step 2 to encrypt the key material that you created on your own system.
Import the key material – Upload the encrypted key material that you created in step 3 and the import token that you downloaded in step 2.
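The same four steps sketched with boto3; encrypting the key material with the downloaded public key happens on your own system, so it is shown only as a placeholder:

```python
import boto3

kms = boto3.client("kms")

# Step 1: create a symmetric CMK with no key material (origin EXTERNAL)
key_id = kms.create_key(Origin="EXTERNAL", Description="CMK for imported key material")["KeyMetadata"]["KeyId"]

# Step 2: download the wrapping public key and the import token
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# Step 3: encrypt your 256-bit key material with params["PublicKey"] on your own system
encrypted_key_material = b"..."  # produced offline

# Step 4: import the encrypted key material together with the import token
kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=encrypted_key_material,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)
```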
Module 9: KMS ViaService
The kms:ViaService condition key limits use of an AWS KMS customer master key (CMK) to requests from specified AWS services. For example, in an Allow statement:
"Condition": {
"ForAnyValue:StringEquals": {
"kms:ViaService": [
"ec2.us-west-2.amazonaws.com",
]
}
Module 10: KMS Migration
KMS Keys are region-specific.
We cannot call a KMS CMK from one region for services in different regions.
During migration, services like AWS EBS have an out-of-the-box solution to change the CMK to one in the destination region.
Use-Case - Encrypted RDS Migration
Earlier, due to the limitation of KMS being region-specific, RDS only supported the migration of unencrypted RDS snapshots across regions.
Now we can easily migrate even the encrypted RDS snapshots across regions.
Important Pointers to Remember
If you copy an encrypted snapshot within the same AWS Region, you can encrypt the copy with the same KMS encryption key as the original snapshot, or you can specify a different KMS encryption key.
For cross-region copies, we cannot use the same KMS key as the source snapshot. Instead, we must specify a different KMS CMK that belongs to the destination region.
Default encryption keys cannot be used when copying snapshots across AWS regions.
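A boto3 sketch of copying an encrypted RDS snapshot to another region while specifying a CMK from the destination region (ARNs and identifiers are illustrative):

```python
import boto3

# Call the destination region and pass a KMS key that lives there
rds = boto3.client("rds", region_name="us-west-2")

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:111122223333:snapshot:source-snapshot",
    TargetDBSnapshotIdentifier="copied-snapshot",
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    SourceRegion="us-east-1",
)
```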
Very Important Points to Remember:
If you have been using envelope encryption and have encrypted data with data keys, you will have to decrypt all of that data before migrating to a different region.
Module 11: Benefits of CloudHSM over KMS
When should I use AWS CloudHSM instead of AWS KMS?
Keys are stored in dedicated, third-party-validated hardware security modules under your exclusive control (only your organization's team can administer the keys, not AWS).
Integration with applications using PKCS#11, Java JCE, or Microsoft CNG interfaces. (research**)
High-performance in-VPC cryptographic acceleration (bulk crypto).
The organization's administrator can export and share keys as needed.
Module 12: S3 Encryption
AWS S3 offers multiple approaches to encrypt the data being stored in S3.
i) Server-Side Encryption
Request Amazon S3 to encrypt your objects before saving them on disks in its data centers, and to decrypt them when you download the objects.
ii) Client-Side Encryption
Encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.
12.1 Server Side Encryption:
Within Server-Side encryption, there are three options that can be used depending on the use-case.
Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
Server-Side Encryption with Customer Master Keys (CMKs) Stored in AWS Key Management Service (SSE-KMS)
Server-Side Encryption with Customer-Provided Keys (SSE-C)
i) Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)
In this approach, each object is encrypted with a unique key.
Uses one of the strongest block ciphers to encrypt the data, AES 256.
ii) Server-Side Encryption with CMKs Stored in AWS Key Management Service (SSE-KMS)
Encrypting data with your own CMK allows customers to create, rotate, and disable customer managed CMKs. We can also define access controls and enable auditing.
iii) SSE with Customer-Provided Keys (SSE-C)
Allows customers to provide their own encryption keys. (aws s3 cp --sse-c AES256 --sse-c-key <key>)
The encryption key needs to be provided as part of each request, and S3 manages both the encryption and decryption operations.
Client-side encryption = data is encrypted before it is sent to S3 (not the same as SSE-C).
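The three server-side options sketched with boto3 put_object; the bucket, object keys, and KMS key are illustrative:

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: S3-managed keys (AES-256)
s3.put_object(Bucket="my-bucket", Key="sse-s3.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: a CMK stored in AWS KMS
s3.put_object(Bucket="my-bucket", Key="sse-kms.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab")

# SSE-C: customer-provided key; the same key must be supplied again on every GET
s3.put_object(Bucket="my-bucket", Key="sse-c.txt", Body=b"data",
              SSECustomerAlgorithm="AES256",
              SSECustomerKey="0" * 32)  # 32-byte (256-bit) key, illustrative only
```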
Module 13: Load Balancer Types in AWS
AWS currently offers 3 major types of Load Balancers:
i) Classic Load Balancers.
ii) Network Load Balancers.
iii) Application Load Balancers.
Classic Load Balancers are the older generation and are recommended only if you still have instances in EC2-Classic. If not, it is recommended to move to the Application / Network Load Balancer.
Module 14: Classic Load Balancer
These are the older generation of load balancers, which can be used for instances both in a VPC and in the EC2-Classic network.
Provides a basic set of features for HTTP, HTTPS, TCP, and SSL protocols.
Limitation of Classic Load Balancers
Does not support native HTTP/2 protocol.
IP addresses as targets are not supported.
Path-based routing is not supported. (e.g., /images should go to server 01 & /php to server 02)
Many Many more …..
Module 15: Application Load Balancer
Application Load Balancers are the next-generation load balancers from AWS.
They support the HTTP and HTTPS protocol.
New Features:
Path and Host-Based Routing
Register IP as targets.
SNI support.
Load Balancing to multiple ports on the same instance.
Many more …
15.1 Understanding Path-based Routing
The request is routed based on the URI path.
example.com/images/ → server 01
example.com/work/ → server 02
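A boto3 sketch of a path-based routing rule on an ALB listener that forwards /images/* to its own target group (ARNs are illustrative):

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/images-tg/123"}],
)
```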
Network Load Balancer
Offers static IP addresses and Elastic IP (EIP) support.
Module 16: ELB Listeners
When we configure an ELB, we must configure one or more listeners.
The listener is configured with two parts:
- Protocol + Port for front end connection
- Protocol + Port for back end connection
Listener Types - HTTP and HTTPS

| Use Case | Front-end protocol | Front-end options | Back-end protocol | Back-end options | Pointers |
| --- | --- | --- | --- | --- | --- |
| Basic HTTP load balancer | HTTP | NA | HTTP | NA | X-Forwarded-For header supported |
| Websites using ELB to offload SSL decryption (HTTPS to HTTP listener) | HTTPS | SSL negotiation | HTTP | NA | SSL deployed on the ELB |
| Websites needing end-to-end encryption | HTTPS | SSL negotiation | HTTPS | Back-end authentication | SSL deployed on the ELB and the back end |
Listener Types - TCP and SSL

| Use Case | Front-end protocol | Front-end options | Back-end protocol | Back-end options | Pointers |
| --- | --- | --- | --- | --- | --- |
| Basic TCP load balancer | TCP | NA | TCP | NA | Supports proxy protocol header |
| Application that wants the ELB to offload SSL | SSL | SSL negotiation | TCP | NA | SSL certificate needs to be deployed on the ELB |
| Websites needing end-to-end encryption | SSL | SSL negotiation | SSL | Back-end authentication | Supports SSL certificates on both the ELB and the back end |
HTTP/HTTPS listeners
Can modify headers, etc.
TCP/SSL listeners
Will not modify headers; they are forwarded as-is.
You cannot change layers via the ELB: if the front end is TCP/SSL, the back end must also be TCP/SSL; if the front end is HTTP/HTTPS, the back end must also be HTTP/HTTPS.
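A boto3 sketch of a Classic Load Balancer listener that offloads SSL (HTTPS front end, HTTP back end); the name, subnet, and certificate ARN are illustrative:

```python
import boto3

elb = boto3.client("elb")

elb.create_load_balancer(
    LoadBalancerName="my-classic-elb",
    Subnets=["subnet-0123456789abcdef0"],
    Listeners=[{
        "Protocol": "HTTPS",          # front-end protocol + port
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTP",   # back-end protocol + port
        "InstancePort": 80,
        "SSLCertificateId": "arn:aws:acm:us-east-1:111122223333:certificate/abcd-1234",
    }],
)
```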
Module 17: AWS Certificate Manager
AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources.
AWS Certificate Manager (ACM) can integrate with the following AWS services:
Elastic Load Balancer
CloudFront
API Gateway
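A minimal boto3 sketch of requesting a public certificate with DNS validation (the domain is illustrative); the returned ARN can then be attached to an ELB listener, CloudFront distribution, or API Gateway custom domain:

```python
import boto3

acm = boto3.client("acm")

cert_arn = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],
)["CertificateArn"]
```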
Module 18: Glacier Vault
AWS Glacier is an extremely low-cost storage service that provides secure and durable storage for data backup and archival.
With respect to security, there are two things to remember:
Access to the data in Glacier can be controlled with IAM.
Data in Glacier is also encrypted using SSE (server-side encryption).
Customers who intend to manage their own keys can encrypt the data before uploading it.
18.1 Understanding Vault
In Glacier, data is stored as archives.
A vault is a way in which archives are grouped together in Glacier.
We can control who has access to the data by setting up vault-level access policies using IAM.
We also have a vault-level policy that we can attach directly to the Glacier Vault.
18.2 Glacier Vault Lock
Glacier Vault Lock allows you to easily deploy and enforce compliance controls for individual Glacier vaults with a vault lock policy.
You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits. (there is a wide range of controls, not just WORM)
One great thing about vault lock policies is that they are immutable.
Initiating the vault lock returns a lock ID, which you then use to "Complete Vault Lock".
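A boto3 sketch of initiating and completing a vault lock with a policy that denies deleting archives younger than 365 days; the vault name and account details are illustrative:

```python
import json

import boto3

glacier = boto3.client("glacier")

# Example compliance control: deny DeleteArchive while an archive is less than 365 days old
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-early-delete",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/my-vault",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
    }],
}

lock_id = glacier.initiate_vault_lock(
    accountId="-",
    vaultName="my-vault",
    policy={"Policy": json.dumps(policy)},
)["lockId"]

# Test while the lock is in progress, then lock the policy in permanently
glacier.complete_vault_lock(accountId="-", vaultName="my-vault", lockId=lock_id)
```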
Module 19: DynamoDB Encryption
If an organization is storing sensitive data in DynamoDB, it is ideal to encrypt the data as close to its origin as possible so that it remains protected throughout its lifecycle.
We can make use of the DynamoDB Encryption Client to protect the data even before we send it to the DynamoDB table.
The DynamoDB Encryption Client can be used with AWS KMS or even CloudHSM.
The library (on GitHub; client-side encryption, Java) by itself does not require any AWS service; we can use our own cryptographic keys and manage them ourselves.
19.1 DynamoDB Encryption At Rest
AWS came up with a new feature of encryption at rest for DynamoDB.
This allows us to encrypt our data at rest in DynamoDB using AWS KMS.
The table will be encrypted using AES-256.
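A boto3 sketch of creating a table with encryption at rest backed by a KMS key (table name, key schema, and key alias are illustrative):

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="SensitiveData",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    # Encryption at rest with a customer managed CMK instead of the default key
    SSESpecification={"Enabled": True, "SSEType": "KMS", "KMSMasterKeyId": "alias/demo-cmk"},
)
```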
Module 20: AWS Secrets Manager
AWS Secrets Manager enables customers to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
Many developers store secrets as plain text, or the DevOps team adds them as environment variables. This creates security risks.
Compliance standards like PCI DSS require that secrets be rotated and that there is an audit of who does what with the secrets.
Key Features of AWS Secrets Manager:
Built-In integration for rotating MySQL, PostgreSQL, and Aurora on RDS.
Use versioning so applications do not break when secrets are rotated.
Fine-grained access control to control who has access to secrets with the help of IAM and Resource-based policies.
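A minimal boto3 sketch of retrieving a secret at runtime instead of hard-coding it (the secret name is illustrative):

```python
import boto3

secrets = boto3.client("secretsmanager")

# With rotation enabled, this always returns the current (AWSCURRENT) version
secret_value = secrets.get_secret_value(SecretId="prod/mysql/app-user")["SecretString"]
```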
Encryption Context
All AWS KMS cryptographic operations with symmetric CMKs accept an encryption context, an optional set of key-value pairs that can contain additional contextual information about the data.
AWS KMS uses the encryption context as additional authenticated data (AAD) to support authenticated encryption
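A short boto3 sketch: the same encryption context supplied at encrypt time must be supplied again at decrypt time (key alias and context values are illustrative):

```python
import boto3

kms = boto3.client("kms")
context = {"department": "payroll", "purpose": "demo"}

blob = kms.encrypt(KeyId="alias/demo-cmk", Plaintext=b"data",
                   EncryptionContext=context)["CiphertextBlob"]

# Decrypting with a different (or missing) context fails the authenticated-encryption (AAD) check
kms.decrypt(CiphertextBlob=blob, EncryptionContext=context)
```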
