Cloud computing has become a buzzword across the business and IT worlds. Every paradigm shift in IT creates new challenges as well as scope for growth and opportunity. Over the years, cloud computing has evolved with new delivery models and more flexible cost structures. Yet many organizations remain undecided about whether to move their enterprise data to the cloud because of concerns about the security of sensitive data, data breaches, and hacking.
This document focuses on the security best practices that need to be enforced.
Amazon has invested heavily in building a powerful set of security controls for its customers to use across all AWS services. With CloudTrail and CloudWatch, for example, customers can monitor and track both the health and security of their AWS resources. The Identity and Access Management (IAM) service gives AWS customers granular control over managing users and enforcing access control policies. It is therefore incumbent on AWS customers to configure these security controls appropriately to tighten their security posture.
When moving to an AWS infrastructure, responsibility for security is shared between Amazon and your organization. Amazon’s Shared Responsibility Model clearly shows where both parties’ responsibilities begin and end. AWS secures the lower layers of the infrastructure stack, while the organization is accountable for everything else up to and including the application layer.
AWS responsibility “Security of the Cloud” – AWS is responsible for protecting the infrastructure that runs all the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
Customer responsibility “Security in the Cloud” – Customer responsibility will be determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities. For example, services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon S3 are categorized as Infrastructure as a Service (IaaS) and, as such, require the customer to perform all of the necessary security configuration and management tasks. If a customer deploys an Amazon EC2 instance, they are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
~ Ensure CloudTrail is enabled across all AWS regions.
CloudTrail is AWS's API call recording and log monitoring service; it allows customers to record API calls and delivers log files to S3 buckets for storage.
~ Enable CloudTrail log file integrity validation and multi-region logging
Enabling this feature allows you to validate the integrity of CloudTrail log files and determine whether the files were changed after delivery to the specified S3 bucket (log files should never be modified once delivered). By having CloudTrail enabled in all regions, organizations can also detect unexpected activity in otherwise unused regions. Together, these capabilities provide an additional layer of security.
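In practice, validation is done with `aws cloudtrail validate-logs`, which also verifies the digest file's signature. The core idea, though, is a hash comparison: the digest file records a SHA-256 hash for each delivered log file, and any later modification changes the hash. A minimal sketch of that comparison (the function name and sample bytes are illustrative, not part of the AWS API):

```python
import hashlib

def log_file_unmodified(log_bytes: bytes, expected_sha256_hex: str) -> bool:
    """Compare a delivered log file's SHA-256 digest against the hash
    recorded in the CloudTrail digest file for that delivery window."""
    actual = hashlib.sha256(log_bytes).hexdigest()
    return actual == expected_sha256_hex

# A file whose recorded digest still matches is considered unmodified.
original = b'{"Records": []}'
recorded = hashlib.sha256(original).hexdigest()
assert log_file_unmodified(original, recorded)             # untouched file passes
assert not log_file_unmodified(original + b" ", recorded)  # any change fails
```

Because even a one-byte change produces a completely different hash, tampering with stored logs is detectable as long as the digest chain itself is trusted.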
~ Integrate CloudTrail with CloudWatch
CloudTrail events can be monitored with CloudWatch Logs for management and security purposes. For example, you can receive an SNS notification whenever an authorization failure occurs in your AWS account, giving you finer control over the account. The integration also supports setting up alarms and notifications for other sensitive account activity.
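A typical CloudWatch Logs metric filter for authorization failures matches events whose `errorCode` is `*UnauthorizedOperation` or `AccessDenied*`. The predicate below mirrors that filter pattern in plain Python so the matching logic is easy to see (the sample events are illustrative):

```python
def is_authorization_failure(event: dict) -> bool:
    """Mirror of a CloudWatch Logs metric filter such as:
    { ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }"""
    code = event.get("errorCode", "")
    return code.endswith("UnauthorizedOperation") or code.startswith("AccessDenied")

assert is_authorization_failure({"errorCode": "Client.UnauthorizedOperation"})
assert is_authorization_failure({"errorCode": "AccessDenied"})
assert not is_authorization_failure({"eventName": "ConsoleLogin"})
```

In CloudWatch you would attach this filter to the CloudTrail log group, raise a metric on each match, and alarm on the metric via SNS.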
~ Enable access logging for CloudTrail S3 buckets
Because CloudTrail S3 buckets contain the log data captured by CloudTrail, enabling access logging on those buckets lets customers track access requests and identify potential threats or unauthorized access attempts.
~ Enable Virtual Private Cloud (VPC) flow logging
Once enabled, the flow logs feature collects data about network traffic to and from your VPC. This data is useful for detecting and troubleshooting security issues, real-time security analysis, understanding traffic growth for capacity forecasting, and network forensics, and for verifying that network access rules (security groups and network ACLs) are not overly permissive.
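Each default (version 2) flow log record is a space-separated line with a fixed field order, which makes ad-hoc analysis straightforward. A small parser, assuming the default record format (the sample record values are illustrative):

```python
# Default VPC Flow Logs v2 field order.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(record: str) -> dict:
    """Split a default-format flow log record into named fields."""
    return dict(zip(FIELDS, record.split()))

rec = parse_flow_log("2 123456789010 eni-1a2b3c4d 10.0.0.5 10.0.0.220 "
                     "49761 22 6 20 4249 1418530010 1418530070 REJECT OK")
assert rec["action"] == "REJECT"   # this SSH attempt was blocked
assert rec["dstport"] == "22"
```

Filtering parsed records on `action == "REJECT"` toward sensitive ports is a quick way to spot probing, before moving to a proper query engine such as Athena.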
~ Create individual IAM users
Do not use or share root account credentials to access the AWS portal. Instead, create an individual user account in IAM for each person accessing AWS infrastructure. Separate IAM users give each person their own credentials and let you assign different permissions to each user as required.
~ Enable MFA (multi-factor authentication) for IAM users and root account
MFA adds an extra layer of security: you protect your account with something you know (a password) and something you have (a phone or hardware token). You can enable MFA for your AWS account and for the individual IAM users created under it. Taken together, these multiple factors provide increased security for your AWS account and resources.
~ Require MFA (multi-factor authentication) to delete CloudTrail buckets
With this enabled, users cannot delete an S3 bucket containing CloudTrail log data without providing a second authentication factor.
~ IAM users must not be enabled for both API access and console access
IAM users should not be enabled for both API access and console access, as this increases the risk of unauthorized access. Application users should only use access keys to programmatically access data in AWS, and administrators who need console access should only use passwords to manage AWS resources.
~ Ensure IAM policies are attached to groups or roles to assign permissions
Instead of assigning policies and permissions to users directly, provision permissions at the group and role level. Doing so makes managing permissions more efficient and makes it simpler to remove or reassign permissions when responsibilities change.
~ Rotate IAM access keys regularly, and standardize on the selected number of days
Changing access keys (which consist of an access key ID and a secret access key) on a regular schedule is a well-known security best practice because it shortens the period an access key is active and therefore reduces the business impact if they are compromised.
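The "standardize on the selected number of days" part is easy to automate: compare each key's creation date against the rotation window. A minimal sketch of that age check, assuming a 90-day standard (the function name and dates are illustrative; in practice you would feed it key metadata from IAM credential reports):

```python
from datetime import datetime, timedelta

def key_needs_rotation(create_date: datetime, now: datetime,
                       max_age_days: int = 90) -> bool:
    """Return True when an access key is older than the rotation standard."""
    return now - create_date > timedelta(days=max_age_days)

# A key created five months ago is overdue; a one-month-old key is fine.
assert key_needs_rotation(datetime(2024, 1, 1), datetime(2024, 6, 1))
assert not key_needs_rotation(datetime(2024, 1, 1), datetime(2024, 2, 1))
```

Running a check like this on a schedule, and alerting on overdue keys, turns the best practice into an enforced policy rather than a guideline.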
~ Grant least privilege
Create policies following the standard security principle of least privilege: grant only the rights required to perform the necessary tasks and nothing more. Start with the smallest set of permissions and add privileges only as they are required.
~ Set up a strict password policy
Implement a strong password policy for all IAM users, including a minimum required length, format requirements (alphanumeric characters, special characters, symbols, etc.), and password reuse settings, and configure password rotation so that passwords are changed on a regular basis. You can configure this under the Account Settings link in IAM.
~ Set the password expiration period to 90 days, and ensure the IAM password policy prevents reuse.
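The checks above can be expressed as a simple validator. The sketch below assumes a 14-character minimum (the figure recommended by the CIS AWS Foundations Benchmark) plus the character-class requirements described in this section; the function name and thresholds are illustrative:

```python
import string

def meets_policy(pw: str, min_len: int = 14) -> bool:
    """Check a candidate password against an IAM-style policy:
    minimum length plus lower, upper, digit, and symbol requirements."""
    checks = [
        len(pw) >= min_len,
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    return all(checks)

assert meets_policy("Tr0ub4dor&3xyz!")   # 15 chars, all classes present
assert not meets_policy("short1!A")      # too short
```

The IAM account password policy enforces the same rules service-side; a local validator like this is mainly useful in provisioning scripts.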
~ Don’t use expired SSL/TLS certificates
When using CloudFront, ensure CloudFront distributions use HTTPS. Enabling SSL/TLS ensures all traffic to and from CloudFront is encrypted and minimizes the risk of a man-in-the-middle attack.
~ Restrict access to CloudTrail bucket
Unrestricted access to CloudTrail logs should never be enabled for any user or administrator account. While most AWS users and administrators have no malicious intent, they are still susceptible to phishing attacks that could expose their account credentials and lead to an account compromise. Restricting access to CloudTrail logs decreases the risk of unauthorized access to the logs.
~ Encrypt CloudTrail log files at rest
To decrypt encrypted CloudTrail log files, a user must have decryption permission granted by the policy of the customer-created KMS key (customer master key, CMK), along with permission to access the S3 buckets containing the logs. This means that only users whose job duties require it should have both decryption permission and access to those buckets.
~ Encrypt Elastic Block Store (EBS) volumes
As an added layer of data security, ensure that EBS volumes are encrypted. Keep in mind that encryption can only be enabled at the time the EBS volume is created. To encrypt an EBS volume that wasn't encrypted at creation, you must create a new encrypted volume and transfer the data from the unencrypted volume to the encrypted one.
~ Provision access to resources using IAM roles
Provisioning access to resources using IAM roles is recommended over providing individual sets of long-term credentials. This ensures that credentials are not accidentally lost or misplaced, which could lead to account compromise.
~ Separate VPCs for different environments
It’s always better to create a distinct Amazon VPC for different types of environments or requirements to reduce the impact of any unwanted incident(s) on the entire setup.
~ Ensure EC2 security groups don’t have large ranges of ports open
With large port ranges open, vulnerabilities can be exposed: an attacker can scan the open ports and identify vulnerabilities in hosted applications, and the breadth of the open range makes such probing harder to trace.
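Security-group rules carry a from-port and a to-port, so "large range" is just the width of that interval. A minimal check that flags wide rules (the function name and the default threshold of a single port are illustrative; tune the threshold to your own policy):

```python
def overly_broad(from_port: int, to_port: int, max_width: int = 1) -> bool:
    """Flag a security-group ingress rule whose port range spans more
    than `max_width` ports (e.g. the catch-all 0-65535)."""
    return (to_port - from_port + 1) > max_width

assert overly_broad(0, 65535)        # catch-all range: flag it
assert overly_broad(8000, 8100)      # 101 ports: flag it
assert not overly_broad(443, 443)    # single well-defined port: fine
```

Running this over the rules returned by the EC2 API (or a config export) gives a quick inventory of security groups that warrant tightening.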
~ Configure EC2 security groups to restrict inbound access to EC2
Excessive permissions to access EC2 instances should not be granted. Instead of whitelisting large IP ranges, be specific and whitelist only the individual IP addresses that require access to the EC2 instances.
~ Avoid using root user accounts
The root user is created automatically when an AWS account is first created. This user has access to all services and resources in the account, making it the most privileged user account. As such, the root user should only be used to create the first IAM user. Beyond that, root user credentials should be securely locked away and their use forbidden.
~ Open SSH/RDP only to bastion hosts in your VPC
Make sure SSH/RDP connections are allowed in AWS security groups only for jump box/bastion hosts in your VPC/subnets. Apply stricter controls and policies to avoid opening SSH/RDP to other instances in the production environment. As part of your operations, periodically check for, alert on, and close this loophole.
~ Use standard naming (tagging) convention for EC2 and all other AWS services
EC2 instances and other AWS resources should follow a standard naming and tagging convention to reduce the risk of misconfiguration.
~ Encrypt Amazon’s Relational Database Service (RDS) and Redshift
As an added layer of security and to ensure you’re compliant with possible data security policies, ensure that your database in RDS and Redshift is being encrypted.
~ Ensure access keys are not being used with root accounts
Using access keys with the root account is a direct vector for account compromise: anyone who obtains the key has access to every service in the AWS account. Creating role-based accounts with appropriate privileges and access keys is the recommended best practice.
~ Rotate SSH keys periodically
Rotating SSH keys periodically significantly reduces the risk of account compromise due to developers accidentally sharing SSH keys inappropriately.
~ Minimize the number of discrete security groups
Enterprises should consciously minimize the number of discrete security groups to decrease the risk of misconfiguration leading to account compromise.
~ Reduce the number of IAM groups
Reducing unused or stale IAM groups also reduces the risk of accidentally provisioning entities with older security configurations.
~ Terminate unused access keys
Unused access keys increase the threat surface of an enterprise to a compromised account or insider threat. It is highly recommended that any access keys unused for over 30 days be terminated.
~ Disable access for inactive or unused IAM users
As a best practice, unused IAM user accounts, or users who haven’t logged into their AWS accounts in over 90 days, should have their accounts disabled. This reduces the likelihood of an abandoned account being compromised and leading to a data breach.
~ Delete unused SSH Public Keys
Deleting unused SSH public keys lowers the risk of unauthorized access to data over SSH from unrestricted locations.
~ Restrict access to Amazon Machine Images (AMIs)
Unrestricted access to AMIs makes these AMIs available in the Community AMIs, where everyone with an AWS account can use them to launch EC2 instances. Most of the time, AMIs will contain snapshots of enterprise-specific applications (including configuration and application data). Hence, unrestricted access to AMIs is not recommended.
~ Disallow unrestricted ingress access on uncommon ports
Allowing unrestricted inbound access to uncommon ports can increase opportunities for malicious activity such as hacking, data loss, brute-force attacks, DoS attacks, and others.
~ Restrict access to EC2 security groups
Unrestricted access to EC2 security groups opens an enterprise to malicious attacks such as brute-force attacks, DoS attacks, or man-in-the-middle attacks
~ Restrict access to RDS and Redshift instances
When the VPC security group associated with an RDS instance allows unrestricted access (that is, the source is set to 0.0.0.0/0), entities on the Internet can establish a connection to your database. This increases the risk of malicious activities such as brute force attacks, SQL injections, or DoS attacks.
~ Restrict outbound access
Outbound access from ports must be restricted to required entities only, such as specific ports or specific destinations.
~ Restrict access to well-known ports, such as the following:
CIFS—Ensure that access through port 445 is restricted to required entities only. CIFS is a commonly used protocol for communication and sharing data. Unrestricted access could potentially lead to unauthorized access to data.
FTP—Ensure that access through port 20/21 is restricted to required entities only. FTP is a commonly used protocol for sharing data, and if left unrestricted, could lead to unauthorized access to data or an accidental breach.
ICMP—Ensure that access for ICMP is restricted to required entities only. Unrestricted access could lead to unauthorized access to data, as attackers could use ICMP to test for network vulnerabilities or employ DoS against the infrastructure.
MongoDB—Ensure that access through port 27017 is restricted to required entities only.
MSSQL—Ensure that access through port 1433 is restricted to required entities only.
MySQL—Ensure that access through port 3306 is restricted to required entities only.
Oracle DB—Ensure that access through port 1521 is restricted to required entities only.
PostgreSQL—Ensure that access through port 5432 is restricted to required entities only.
Remote desktop—Ensure that access through port 3389 is restricted to required entities only.
RPC—Ensure that access through port 135 is restricted to required entities only.
SMTP—Ensure that access through port 25 is restricted to required entities only. Unrestricted SMTP access can be misused to spam your enterprise, and launch DoS and other attacks.
SSH—Ensure that access through port 22 is restricted to required entities only.
Telnet—Ensure that access through port 23 is restricted to required entities only.
DNS—Ensure that access through port 53 is restricted to required entities only.
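The port/service pairs above can be turned into an automated audit: given a security group's ingress rules, report which well-known services are reachable from anywhere. A sketch, assuming rules as `(from_port, to_port, cidr)` tuples (the function name and sample rules are illustrative):

```python
# Well-known ports called out in this section, mapped to their services.
WELL_KNOWN = {20: "FTP", 21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP",
              53: "DNS", 135: "RPC", 445: "CIFS", 1433: "MSSQL",
              1521: "Oracle DB", 3306: "MySQL", 3389: "Remote desktop",
              5432: "PostgreSQL", 27017: "MongoDB"}

def exposed_services(rules):
    """Return the well-known services reachable from anywhere (0.0.0.0/0),
    given ingress rules as (from_port, to_port, cidr) tuples."""
    hits = set()
    for from_port, to_port, cidr in rules:
        if cidr == "0.0.0.0/0":
            for port, svc in WELL_KNOWN.items():
                if from_port <= port <= to_port:
                    hits.add(svc)
    return hits

assert exposed_services([(22, 22, "0.0.0.0/0")]) == {"SSH"}   # SSH open to world
assert exposed_services([(443, 443, "0.0.0.0/0")]) == set()   # HTTPS is fine here
```

The same table works for ICMP-free TCP/UDP audits; ICMP rules are expressed differently in security groups and need a separate check.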
AWS Config is an auditing tool to help customers of AWS actively track/audit their resources and monitor AWS assets. It allows administrators to determine compliance with corporate internal policies and security standards.
~ Enable AWS Config in all accounts and regions. This is an industry best practice recommended by the Center for Internet Security (CIS).
~ Record configuration changes for all resources, and collect the configuration history and snapshot files in a secured S3 bucket that is not publicly readable or writable.
~ Enable CloudWatch events to filter AWS Config notifications and get notified with Amazon SNS (Simple Notification Service)
~ Turn on periodic snapshots with a frequency of one per day to ensure the latest configuration of all resources is backed up daily.
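In the AWS Config API, the daily snapshot setting lives on the delivery channel as `configSnapshotDeliveryProperties.deliveryFrequency` with the value `TwentyFour_Hours`. A sketch of the request body you would pass to `put-delivery-channel` (the channel and bucket names are hypothetical placeholders):

```python
# Delivery channel configuration for daily configuration snapshots.
delivery_channel = {
    "name": "default",
    "s3BucketName": "example-config-bucket",  # hypothetical secured bucket
    "configSnapshotDeliveryProperties": {
        # Valid values include One_Hour ... TwentyFour_Hours;
        # TwentyFour_Hours gives the one-snapshot-per-day cadence above.
        "deliveryFrequency": "TwentyFour_Hours",
    },
}

assert delivery_channel["configSnapshotDeliveryProperties"][
    "deliveryFrequency"] == "TwentyFour_Hours"
```

The equivalent CLI call is `aws configservice put-delivery-channel --delivery-channel file://channel.json` with this structure as the JSON file.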
Amazon GuardDuty – Continuous Security Monitoring & Threat Detection
In November 2017, Amazon launched GuardDuty, a continuous security monitoring and threat detection service that incorporates threat intelligence, anomaly detection, and machine learning to help protect AWS resources, including your AWS accounts.
By combining information gleaned from your VPC Flow Logs, AWS CloudTrail event logs, and DNS logs, GuardDuty can detect many different types of dangerous and mischievous behavior, including probes for known vulnerabilities, port scans, and access from unusual locations. On the AWS side, it looks for suspicious account activity such as unauthorized deployments, unusual CloudTrail activity, suspicious patterns of access to AWS API functions, and attempts to exceed multiple service limits. GuardDuty will also look for compromised EC2 instances talking to malicious entities or services, data exfiltration attempts, and instances that are mining cryptocurrency.
GuardDuty operates completely on AWS infrastructure and does not affect the performance or reliability of your workloads. You do not need to install or manage any agents, sensors, or network appliances. This clean, zero-footprint model should appeal to your security team and allow them to green-light the use of GuardDuty across your AWS accounts.
Findings are presented to you at one of three levels (low, medium, or high), accompanied by detailed evidence and recommendations for remediation. The findings are also available as Amazon CloudWatch Events; this allows you to use your own AWS Lambda functions to automatically remediate specific types of issues. This mechanism also allows you to easily push GuardDuty findings into event management systems such as Splunk, Sumo Logic, and PagerDuty and to workflow systems like JIRA, ServiceNow, and Slack.
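The three levels map onto GuardDuty's numeric severity field (roughly: below 4.0 is low, 4.0-6.9 is medium, 7.0 and above is high), which is what a remediation Lambda function keys on. A minimal sketch of such a handler for GuardDuty CloudWatch Events (the handler body and sample finding are illustrative; real remediation would call out to your ticketing or paging system):

```python
def severity_label(severity: float) -> str:
    """Map GuardDuty's numeric severity onto its three finding levels."""
    if severity >= 7.0:
        return "high"
    if severity >= 4.0:
        return "medium"
    return "low"

def handler(event, context=None):
    """Lambda entry point: escalate only high-severity GuardDuty findings
    arriving via CloudWatch Events; log the rest."""
    finding = event.get("detail", {})
    label = severity_label(finding.get("severity", 0.0))
    if label == "high":
        # Here you would page on-call, open a ticket, or quarantine the host.
        return {"action": "escalate", "type": finding.get("type")}
    return {"action": "log", "severity": label}

sample = {"detail": {"severity": 8.0,
                     "type": "UnauthorizedAccess:EC2/SSHBruteForce"}}
assert handler(sample)["action"] == "escalate"
```

The same dispatch point is where findings can be forwarded to Splunk, PagerDuty, JIRA, or Slack instead of (or in addition to) automated remediation.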