AWS SA Associate Practice Questions-6

51. As AWS grows, most of your clients' main concerns seem to be about security, especially when all of their competitors also seem to be using AWS. One of your clients asks you whether having a competitor who hosts their EC2 instances on the same physical host would make it easier for the competitor to hack into the client's data. Which of the following statements would be the best choice to put your client's mind at rest?

A. Different instances running on the same physical machine are isolated from each other via a 256-bit Advanced Encryption Standard (AES-256).

B. Different instances running on the same physical machine are isolated from each other via the Xen hypervisor and via a 256-bit Advanced Encryption Standard (AES-256).

C. Different instances running on the same physical machine are isolated from each other via the Xen hypervisor.

D. Different instances running on the same physical machine are isolated from each other via IAM permissions.

Answer: C

Explanation:

Amazon Elastic Compute Cloud (EC2) is a key component in Amazon's Infrastructure as a Service (IaaS), providing resizable computing capacity using server instances in AWS's data centers. Amazon EC2 is designed to make web-scale computing easier by enabling you to obtain and configure capacity with minimal friction. You create and launch instances, which are collections of platform hardware and software. Different instances running on the same physical machine are isolated from each other via the Xen hypervisor. Amazon is active in the Xen community, which provides awareness of the latest developments. In addition, the AWS firewall resides within the hypervisor layer, between the physical network interface and the instance's virtual interface. All packets must pass through this layer, thus an instance's neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms.
Reference: http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf

52. In Amazon RDS, security groups are ideally used to:

A. Define maintenance period for database engines

B. Launch Amazon RDS instances in a subnet

C. Create, describe, modify, and delete DB instances

D. Control what IP addresses or EC2 instances can connect to your databases on a DB instance

Answer: D

Explanation:

In Amazon RDS, security groups are used to control what IP addresses or EC2 instances can connect to your databases on a DB instance. When you first create a DB instance, its firewall prevents any database access except through rules specified by an associated security group.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.html
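For a DB instance running inside a VPC, access is opened by adding an ingress rule to the VPC security group attached to the instance. A minimal sketch with the AWS CLI, assuming a placeholder security group ID and a MySQL listener on port 3306:

$ aws ec2 authorize-security-group-ingress --group-id sg-1a2b3c4d --protocol tcp --port 3306 --cidr 203.0.113.0/24
# allows clients in 203.0.113.0/24 to reach the database port; another security group can be referenced instead of a CIDR range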

53. You need to set up a complex network infrastructure for your organization that will be reasonably easy to deploy, replicate, control, and track changes on. Which AWS service would be best to use to help you accomplish this?

A. AWS Import/Export

B. AWS CloudFormation

C. Amazon Route 53

D. Amazon CloudWatch

Answer: B

Explanation:

AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what. AWS CloudFormation handles all of that.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
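A minimal sketch of deploying such a template with the AWS CLI, assuming a hypothetical template file network.json that describes the VPC, subnets, and routing:

$ aws cloudformation create-stack --stack-name network-stack --template-body file://network.json

Running the same template again in another account or region reproduces the network, and changes can be tracked simply by versioning the template file.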

54. You have just been given a scope for a new client who has an enormous amount of data (petabytes) that he constantly needs analysed. Currently he is paying a huge amount of money for a data warehousing company to do this for him and is wondering if AWS can provide a cheaper solution. Do you think AWS has a solution for this?

A. Yes. Amazon SimpleDB

B. No. Not presently

C. Yes. Amazon Redshift

D. Yes. Your choice of relational AMIs on Amazon EC2 and EBS

Answer: C

Explanation:

Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. You can start small for just $0.25 per hour with no commitments or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions. Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis or any SSH-enabled host.

Reference: https://aws.amazon.com/running_databases/#redshift_anchor

55. In an experiment, if the minimum size for an Auto Scaling group is 1 instance, which of the following statements holds true when you terminate the running instance?

A. Auto Scaling must launch a new instance to replace it.

B. Auto Scaling will raise an alarm and send a notification to the user for action.

C. Auto Scaling must configure the schedule activity that terminates the instance after 5 days.

D. Auto Scaling will terminate the experiment.

Answer: A

Explanation:

If the minimum size for an Auto Scaling group is 1 instance, when you terminate the running instance, Auto Scaling must launch a new instance to replace it.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html

56. In Amazon EC2, while sharing an Amazon EBS snapshot, can the snapshots with AWS Marketplace product codes be public?

A. Yes, but only for US-based providers.

B. Yes, they can be public.

C. No, they cannot be made public.

D. Yes, they are automatically made public by the system.

Answer: C

Explanation:

Snapshots with AWS Marketplace product codes can't be made public.
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshotpermissions.html

57. An organization has created an application which is hosted on the AWS EC2 instance. The application stores images to S3 when the end user uploads to it. The organization does not want to store the AWS secure credentials required to access the S3 inside the instance. Which of the below mentioned options is a possible solution to avoid any security threat?

A. Use the IAM based single sign between the AWS resources and the organization application.

B. Use the IAM role and assign it to the instance.

C. Since the application is hosted on EC2, it does not need credentials to access S3.

D. Use the X.509 certificates instead of the access and the secret access keys.

Answer: B

Explanation:

The AWS IAM role uses temporary security credentials to access AWS services. Once the role is assigned to an instance, it will not need any security credentials to be stored on the instance.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
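A minimal sketch of this approach with the AWS CLI, assuming hypothetical names (S3UploadRole, S3UploadProfile), hypothetical policy documents (ec2-trust.json trusting ec2.amazonaws.com, s3-access.json granting the needed S3 actions), and a placeholder AMI ID:

$ aws iam create-role --role-name S3UploadRole --assume-role-policy-document file://ec2-trust.json
$ aws iam put-role-policy --role-name S3UploadRole --policy-name S3Access --policy-document file://s3-access.json
$ aws iam create-instance-profile --instance-profile-name S3UploadProfile
$ aws iam add-role-to-instance-profile --instance-profile-name S3UploadProfile --role-name S3UploadRole
$ aws ec2 run-instances --image-id ami-2f726546 --instance-type t1.micro --iam-instance-profile Name=S3UploadProfile

The application on the instance then picks up automatically rotated temporary credentials from the instance metadata, so no access keys are stored on the instance.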

58. Can resource record sets in a hosted zone have a different domain suffix (for example, www.blog.acme.com and www.acme.ca)?

A. Yes, it can have for a maximum of three different TLDs.

B. Yes

C. Yes, it can have depending on the TLD.

D. No

Answer: D

Explanation:

The resource record sets contained in a hosted zone must share the same suffix. For example, the example.com hosted zone can contain resource record sets for the www.example.com and www.aws.example.com subdomains, but it cannot contain resource record sets for a www.example.ca subdomain.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/AboutHostedZones.html

59. You are running PostgreSQL on Amazon RDS and it all seems to be running smoothly, deployed in one availability zone. A database administrator asks you if DB instances running PostgreSQL support Multi-AZ deployments. What would be a correct response to this question?

A. Yes.

B. Yes but only for small db instances.

C. No.

D. Yes but you need to request the service from AWS.

Answer: A

Explanation:

Amazon RDS supports DB instances running several versions of PostgreSQL. Currently we support PostgreSQL versions 9.3.1, 9.3.2, and 9.3.3. You can create DB instances and DB snapshots, point-in-time restores and backups. DB instances running PostgreSQL support Multi-AZ deployments, Provisioned IOPS, and can be created inside a VPC. You can also use SSL to connect to a DB instance running PostgreSQL. You can use any standard SQL client application to run commands for the instance from your client computer. Such applications include pgAdmin, a popular Open Source administration and development tool for PostgreSQL, or psql, a command line utility that is part of a PostgreSQL installation. In order to deliver a managed service experience, Amazon RDS does not provide host access to DB instances, and it restricts access to certain system procedures and tables that require advanced privileges. Amazon RDS supports access to databases on a DB instance using any standard SQL client application. Amazon RDS does not allow direct host access to a DB instance via Telnet or Secure Shell (SSH).
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html

60. A user has launched 10 EC2 instances inside a placement group. Which of the below mentioned statements is true with respect to the placement group?

A. All instances must be in the same AZ

B. All instances can be across multiple regions

C. The placement group cannot have more than 5 instances

D. All instances must be in the same region

Answer: A

Explanation:

A placement group is a logical grouping of EC2 instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput or both. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html

61. Which of the following AWS CLI commands is syntactically incorrect?

1. $ aws ec2 describe-instances

2. $ aws ec2 start-instances --instance-ids i-1348636c

3. $ aws sns publish --topic-arn arn:aws:sns:us-east-1:546419318123:OperationsError -message "Script Failure"

4. $ aws sqs receive-message --queue-url https://queue.amazonaws.com/546419318123/Test

A. 3

B. 4

C. 2

D. 1

Answer: A

Explanation:

The CLI command in option 3 is missing a hyphen before "-message"; the flag should be "--message". The corrected command is:
aws sns publish --topic-arn arn:aws:sns:us-east-1:546419318123:OperationsError --message "Script Failure"
Reference: http://aws.amazon.com/cli/

62. An organization has developed a mobile application which allows end users to capture a photo on their mobile device, and store it inside an application. The application internally uploads the data to AWS S3. The organization wants each user to be able to directly upload data to S3 using their Google ID. How will the mobile app allow this?

A. Use the AWS Web identity federation for mobile applications, and use it to generate temporary security credentials for each user.

B. It is not possible to connect to AWS S3 with a Google ID.

C. Create an IAM user every time a user registers with their Google ID and use IAM to upload files to S3.

D. Create a bucket policy with a condition which allows everyone to upload if the login ID has a Google part to it.

Answer: A

Explanation:

For Amazon Web Services, the Web identity federation allows you to create cloud-backed mobile apps that use public identity providers, such as login with Facebook, Google, or Amazon. It will create temporary security credentials for each user, which will be authenticated by the AWS services, such as S3.
Reference: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingWIF.html
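A minimal sketch of the credential exchange with the AWS CLI, assuming a hypothetical IAM role ARN and an ID token already obtained from Google by the mobile app:

$ aws sts assume-role-with-web-identity --role-arn arn:aws:iam::123456789012:role/GoogleS3Upload --role-session-name app-user-1 --web-identity-token file://google-id-token.txt

The temporary access key, secret key, and session token returned by this call are what the app uses to upload the photo directly to S3.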

63. You are architecting an auto-scalable batch processing system using video processing pipelines and Amazon Simple Queue Service (Amazon SQS) for a customer. You are unsure of the limitations of SQS and need to find out. What do you think is a correct statement about the limitations of Amazon SQS?

A. It supports an unlimited number of queues but a limited number of messages per queue for each user but automatically deletes messages that have been in the queue for more than 4 weeks.

B. It supports an unlimited number of queues and unlimited number of messages per queue for each user but automatically deletes messages that have been in the queue for more than 4 days.

C. It supports an unlimited number of queues but a limited number of messages per queue for each user but automatically deletes messages that have been in the queue for more than 4 days.

D. It supports an unlimited number of queues and unlimited number of messages per queue for each user but automatically deletes messages that have been in the queue for more than 4 weeks

Answer: B

Explanation:

Amazon Simple Queue Service (Amazon SQS) is a messaging queue service that handles messages or workflows between other components in a system. Amazon SQS supports an unlimited number of queues and an unlimited number of messages per queue for each user. Please be aware that Amazon SQS automatically deletes messages that have been in the queue for more than 4 days.
Reference: http://aws.amazon.com/documentation/sqs/

64. An online gaming site has asked you if you can deploy a database that is a fast, highly scalable NoSQL database service in AWS for a new site that they want to build. Which database should you recommend?

A. Amazon DynamoDB

B. Amazon RDS

C. Amazon Redshift

D. Amazon SimpleDB

Answer: A

Explanation:

Amazon DynamoDB is ideal for database applications that require very low latency and predictable performance at any scale but don't need complex querying capabilities like joins or transactions. Amazon DynamoDB is a fully-managed NoSQL database service that offers high performance, predictable throughput and low cost. It is easy to set up, operate, and scale. With Amazon DynamoDB, you can start small, specify the throughput and storage you need, and easily scale your capacity requirements on the fly. Amazon DynamoDB automatically partitions data over a number of servers to meet your request capacity. In addition, DynamoDB automatically replicates your data synchronously across multiple Availability Zones within an AWS Region to ensure high-availability and data durability.
Reference: https://aws.amazon.com/running_databases/#dynamodb_anchor

65. You have been doing a lot of testing of your VPC Network by deliberately failing EC2 instances to test whether instances are failing over properly. Your customer, who will be paying the AWS bill for all this, asks you if he is being charged for all these instances. You try to explain to him how billing works on EC2 instances to the best of your knowledge. What would be an appropriate response to give to the customer in regards to this?

A. Billing commences when the Amazon EC2 AMI instance is completely up and billing ends as soon as the instance starts to shut down.

B. Billing only commences after 1 hour of uptime and billing ends when the instance terminates.

C. Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance and billing ends when the instance shuts down.

D. Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance and billing ends as soon as the instance starts to shut down.

Answer: C

Explanation:

Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance shuts down, which could occur through a web services command, by running "shutdown -h", or through instance failure.
Reference: http://aws.amazon.com/ec2/faqs/#Billing

66. You log in to IAM on your AWS console and notice the following message: "Delete your root access keys." Why do you think IAM is requesting this?

A. Because the root access keys will expire as soon as you log out.

B. Because the root access keys expire after 1 week.

C. Because the root access keys are the same for all users.

D. Because they provide unrestricted access to your AWS resources.

Answer: D

Explanation:

In AWS an access key is required in order to sign requests that you make using the command-line interface (CLI), using the AWS SDKs, or using direct API calls. Anyone who has the access key for your root account has unrestricted access to all the resources in your account, including billing information. One of the best ways to protect your account is to not have an access key for your root account. We recommend that unless you must have a root access key (this is very rare), that you do not generate one. Instead, AWS best practice is to create one or more AWS Identity and Access Management (IAM) users, give them the necessary permissions, and use IAM users for everyday interaction with AWS. Reference:

http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html#root-password
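A minimal sketch of the recommended alternative with the AWS CLI, assuming a hypothetical user name and the AdministratorAccess managed policy:

$ aws iam create-user --user-name admin-user
$ aws iam attach-user-policy --user-name admin-user --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam create-access-key --user-name admin-user

Everyday work is then done with this user's keys, and the root access keys can be deleted.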

67. Once again your customers are concerned about the security of their sensitive data, and with their latest enquiry they ask about what happens to old storage devices on AWS. What would be the best answer to this question?

A. AWS reformats the disks and uses them again.

B. AWS uses the techniques detailed in DoD 5220.22-M to destroy data as part of the decommissioning process.

C. AWS uses their own proprietary software to destroy data as part of the decommissioning process.

D. AWS uses a 3rd party security organization to destroy data as part of the decommissioning process.

Answer: B

Explanation:

When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual") or NIST 800-88 ("Guidelines for Media Sanitization") to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.
Reference: http://d0.awsstatic.com/whitepapers/Security/AWS%20Security%20Whitepaper.pdf

68. Your company has been storing a lot of data in Amazon Glacier and has asked for an inventory of what is in there exactly. So you have decided that you need to download a vault inventory. Which of the following statements is incorrect in relation to Vault Operations in Amazon Glacier?

A. You can use Amazon Simple Notification Service (Amazon SNS) notifications to notify you when the job completes.

B. A vault inventory refers to the list of archives in a vault.

C. You can use Amazon Simple Queue Service (Amazon SQS) notifications to notify you when the job completes.

D. Downloading a vault inventory is an asynchronous operation.

Answer: C

Explanation:

Amazon Glacier supports various vault operations. A vault inventory refers to the list of archives in a vault. For each archive in the list, the inventory provides archive information such as archive ID, creation date, and size. Amazon Glacier updates the vault inventory approximately once a day, starting on the day the first archive is uploaded to the vault. A vault inventory must exist for you to be able to download it. Downloading a vault inventory is an asynchronous operation. You must first initiate a job to download the inventory. After receiving the job request, Amazon Glacier prepares your inventory for download. After the job completes, you can download the inventory data. Given the asynchronous nature of the job, you can use Amazon Simple Notification Service (Amazon SNS) notifications to notify you when the job completes. You can specify an Amazon SNS topic for each individual job request or configure your vault to send a notification when specific vault events occur. Amazon Glacier prepares an inventory for each vault periodically, every 24 hours. If there have been no archive additions or deletions to the vault since the last inventory, the inventory date is not updated. When you initiate a job for a vault inventory, Amazon Glacier returns the last inventory it generated, which is a point-in-time snapshot and not real-time data. You might not find it useful to retrieve vault inventory for each archive upload. However, suppose you maintain a database on the client-side associating metadata about the archives you upload to Amazon Glacier. Then, you might find the vault inventory useful to reconcile information in your database with the actual vault inventory.
Reference: http://docs.aws.amazon.com/amazonglacier/latest/dev/working-with-vaults.html

69. A customer enquires about whether all his data is secure on AWS and is especially concerned about Elastic MapReduce (EMR), so you need to inform him of some of the security features in place for AWS. Which of the below statements would be an incorrect response to your customer's enquiry?

A. Amazon EMR customers can choose to send data to Amazon S3 using the HTTPS protocol for secure transmission.

B. Amazon S3 provides authentication mechanisms to ensure that stored data is secured against unauthorized access.

C. Every packet sent in the AWS network uses Internet Protocol Security (IPsec).

D. Customers may encrypt the input data before they upload it to Amazon S3.

Answer: C

Explanation:

Amazon S3 provides authentication mechanisms to ensure that stored data is secured against unauthorized access. Unless the customer who is uploading the data specifies otherwise, only that customer can access the data. Amazon EMR customers can also choose to send data to Amazon S3 using the HTTPS protocol for secure transmission. In addition, Amazon EMR always uses HTTPS to send data between Amazon S3 and Amazon EC2. For added security, customers may encrypt the input data before they upload it to Amazon S3 (using any common data compression tool); they then need to add a decryption step to the beginning of their cluster when Amazon EMR fetches the data from Amazon S3.
Reference: https://aws.amazon.com/elasticmapreduce/faqs/

70. You are in the process of building an online gaming site for a client and one of the requirements is that it must be able to process vast amounts of data easily. Which AWS service would be very helpful in processing all this data?

A. Amazon S3

B. AWS Data Pipeline

C. AWS Direct Connect

D. Amazon EMR

Answer: D

Explanation:

Managing and analyzing high data volumes produced by online games platforms can be difficult. The back-end infrastructures of online games can be challenging to maintain and operate. Peak usage periods, multiple players, and high volumes of write operations are some of the most common problems that operations teams face. Amazon Elastic MapReduce (Amazon EMR) is a service that processes vast amounts of data easily. Input data can be retrieved from web server logs stored on Amazon S3 or from player data stored in Amazon DynamoDB tables to run analytics on player behavior, usage patterns, etc. Those results can be stored again on Amazon S3, or inserted in a relational database for further analysis with classic business intelligence tools.
Reference: http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_games_10.pdf

71. You need to change some settings on Amazon Relational Database Service but you do not want the database to reboot immediately, which you know might happen depending on the setting that you change. Which of the following will cause an immediate DB instance reboot to occur?

A. You change storage type from standard to PIOPS, and Apply Immediately is set to true.

B. You change the DB instance class, and Apply Immediately is set to false.

C. You change a static parameter in a DB parameter group.

D. You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0, and Apply Immediately is set to false.

Answer: A

Explanation:

A DB instance outage can occur when a DB instance is rebooted, when the DB instance is put into a state that prevents access to it, and when the database is restarted. A reboot can occur when you manually reboot your DB instance or when you change a DB instance setting that requires a reboot before it can take effect.
A DB instance reboot occurs immediately when one of the following occurs: you change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0 and set Apply Immediately to true; you change the DB instance class, and Apply Immediately is set to true; you change storage type from standard to PIOPS, and Apply Immediately is set to true.
A DB instance reboot occurs during the maintenance window when one of the following occurs: you change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0, and Apply Immediately is set to false; you change the DB instance class, and Apply Immediately is set to false.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Security

72. What does the following policy for Amazon EC2 do?

{
"Statement": [{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
}]
}

A. Allow users to use actions that start with "Describe" over all the EC2 resources.

B. Share an AMI with a partner

C. Share an AMI within the account

D. Allow a group to only be able to describe, run, stop, start, and terminate instances

Answer: A

Explanation:

You can use IAM policies to control the actions that your users can perform against your EC2 resources. For instance, a policy with the following statement will allow users to perform actions whose name starts with "Describe" against all your EC2 resources.
{
"Statement": [{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
}]
}
Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/UsingIAM.html
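A minimal sketch of attaching the statement above to a user as an inline policy with the AWS CLI, assuming a hypothetical user name and the policy saved as describe-only.json:

$ aws iam put-user-policy --user-name ops-reader --policy-name EC2DescribeOnly --policy-document file://describe-only.json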

73. You are setting up a very complex financial services grid and so far it has 5 Elastic IP (EIP) addresses. You go to assign another EIP address, but all accounts are limited to 5 Elastic IP addresses per region by default, so you aren't able to. What is the reason for this?

A. For security reasons.

B. Hardware restrictions.

C. Public (IPV4) internet addresses are a scarce resource.

D. There are only 5 network interfaces per instance.

Answer: C

Explanation:

Public (IPv4) internet addresses are a scarce resource. There is only a limited amount of public IP space available, and Amazon EC2 is committed to helping use that space efficiently. By default, all accounts are limited to 5 Elastic IP addresses per region. If you need more than 5 Elastic IP addresses, AWS asks that you apply for your limit to be raised. They will ask you to think through your use case and help them understand your need for additional addresses.
Reference: http://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2

74. Amazon RDS provides high availability and failover support for DB instances using _______.

A. customized deployments

B. Appstream customizations

C. log events

D. Multi-AZ deployments

Answer: D

Explanation:

Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments. Multi-AZ deployments for Oracle, PostgreSQL, MySQL, and MariaDB DB instances use Amazon technology, while SQL Server DB instances use SQL Server Mirroring.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

75. A major customer has asked you to set up his AWS infrastructure so that it will be easy to recover in the case of a disaster of some sort. Which of the following is important when thinking about being able to quickly launch resources in AWS to ensure business continuity in case of a disaster?

A. Create and maintain AMIs of key servers where fast recovery is required.

B. Regularly run your servers, test them, and apply any software updates and configuration changes.

C. All items listed here are important when thinking about disaster recovery.

D. Ensure that you have all supporting custom software packages available in AWS.

Answer: C

Explanation:

In the event of a disaster to your AWS infrastructure you should be able to quickly launch resources in Amazon Web Services (AWS) to ensure business continuity. The following are some key steps you should have in place for preparation:

1. Set up Amazon EC2 instances to replicate or mirror data.

2. Ensure that you have all supporting custom software packages available in AWS.

3. Create and maintain AMIs of key servers where fast recovery is required.

4. Regularly run these servers, test them, and apply any software updates and configuration changes.

5. Consider automating the provisioning of AWS resources.
Reference: http://d36cz9buwru1tt.cloudfront.net/AWS_Disaster_Recovery.pdf

76. What does Amazon DynamoDB provide?

A. A predictable and scalable MySQL database

B. A fast and reliable PL/SQL database cluster

C. A standalone Cassandra database, managed by Amazon Web Services

D. A fast, highly scalable managed NoSQL database service

Answer: D

Explanation:

Amazon DynamoDB is a managed NoSQL database service offered by Amazon. It automatically manages tasks like scalability for you while it provides high availability and durability for your data, allowing you to concentrate on other aspects of your application.
Reference: https://aws.amazon.com/running_databases/

77. You want to use AWS Import/Export to send data from your S3 bucket to several of your branch offices. What should you do if you want to send 10 storage units to AWS?

A. Make sure your disks are encrypted prior to shipping.

B. Make sure you format your disks prior to shipping.

C. Make sure your disks are 1TB or more.

D. Make sure you submit a separate job request for each device.

Answer: D

Explanation:

When using Amazon Import/Export, a separate job request needs to be submitted for each physical device even if they belong to the same import or export job. Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html

78. What would be the best way to retrieve the public IP address of your EC2 instance using the CLI?

A. Using tags

B. Using traceroute

C. Using ipconfig

D. Using instance metadata

Answer: D

Explanation:

To determine your instance's public IP address from within the instance, you can use instance metadata. Use the following command to access the public IP address. For Linux use: $ curl http://169.254.169.254/latest/meta-data/public-ipv4, and for Windows use: $ wget http://169.254.169.254/latest/meta-data/public-ipv4.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

79. You need to measure the performance of your EBS volumes as they seem to be underperforming. You have come up with a measurement of 1,024 KB I/O but your colleague tells you that EBS volume performance is measured in IOPS. How many IOPS is equal to 1,024 KB I/O?

A. 16

B. 256

C. 8

D. 4

Answer: D

Explanation:

Several factors can affect the performance of Amazon EBS volumes, such as instance configuration, I/O characteristics, workload demand, and storage configuration.
IOPS are input/output operations per second. Amazon EBS measures each I/O operation per second (that is 256 KB or smaller) as one IOPS. I/O operations that are larger than 256 KB are counted in 256 KB capacity units. For example, a 1,024 KB I/O operation would count as 4 IOPS. When you provision a 4,000 IOPS volume and attach it to an EBS-optimized instance that can provide the necessary bandwidth, you can transfer up to 4,000 chunks of data per second (provided that the I/O does not exceed the 128 MB/s per volume throughput limit of General Purpose (SSD) and Provisioned IOPS (SSD) volumes).
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html

80. Having set up a website to automatically be redirected to a backup website if it fails, you realize that there are different types of failovers that are possible. You need all your resources to be available the majority of the time. Using Amazon Route 53, which configuration would best suit this requirement?

A. Active-active failover.

B. None. Route 53 can’t failover.

C. Active-passive failover.

D. Active-active-passive and other mixed configurations.

Answer: A

Explanation:

You can set up a variety of failover configurations using Amazon Route 53 alias, weighted, latency, geolocation routing, and failover resource record sets.
Active-active failover: Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable, Amazon Route 53 can detect that it's unhealthy and stop including it when responding to queries.
Active-passive failover: Use this failover configuration when you want a primary group of resources to be available the majority of the time and you want a secondary group of resources to be on standby in case all of the primary resources become unavailable. When responding to queries, Amazon Route 53 includes only the healthy primary resources. If all of the primary resources are unhealthy, Amazon Route 53 begins to include only the healthy secondary resources in response to DNS queries.
Active-active-passive and other mixed configurations: You can combine alias and non-alias resource record sets to produce a variety of Amazon Route 53 behaviors.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html

81. AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. What formatting is required for this template?

A. JSON-formatted document

B. CSS-formatted document

C. XML-formatted document

D. HTML-formatted document

Answer: A

Explanation:

You can write an AWS CloudFormation template (a JSON-formatted document) in a text editor or pick an existing template. The template describes the resources you want and their settings. For example, suppose you want to create an Amazon EC2 instance. Your template can declare an Amazon EC2 instance and describe its properties, as shown in the following example:
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "A simple Amazon EC2 instance",
"Resources" : {
"MyEC2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"ImageId" : "ami-2f726546",
"InstanceType" : "t1.micro"
}
}
}
}
Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-howdoesitwork.html

82. True or False: In Amazon Route 53, you can create a hosted zone for a top-level domain (TLD).

A. FALSE

B. False, Amazon Route 53 automatically creates it for you.

C. True, only if you send an XML document with a CreateHostedZoneRequest element for TLD.

D. TRUE

Answer: A

Explanation:

In Amazon Route 53, you cannot create a hosted zone for a top-level domain (TLD).
Reference: http://docs.aws.amazon.com/Route53/latest/APIReference/API_CreateHostedZone.html

83. You decide that you need to create a number of Auto Scaling groups to try and save some money as you have noticed that at certain times most of your EC2 instances are not being used. By default, what is the maximum number of Auto Scaling groups that AWS will allow you to create?

A. 12

B. Unlimited

C. 20

D. 2

Answer: C

Explanation:

Auto Scaling is an AWS service that allows you to increase or decrease the number of EC2 instances within your application's architecture. With Auto Scaling, you create collections of EC2 instances, called Auto Scaling groups. You can create these groups from scratch, or from existing EC2 instances that are already in production. By default, an account is limited to 20 Auto Scaling groups per region; this limit can be raised by request.
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_autoscaling
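The current Auto Scaling limits for an account can be checked with the AWS CLI, for example:

$ aws autoscaling describe-account-limits
# returns MaxNumberOfAutoScalingGroups and MaxNumberOfLaunchConfigurations for the region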

84. A user needs to run a batch process which runs for 10 minutes. This will only be run once, or at maximum twice, in the next month, so the processes will be temporary only. The process needs 15 X-Large instances. The process downloads the code from S3 on each instance when it is launched, and then generates a temporary log file. Once the instance is terminated, all the data will be lost. Which of the below mentioned pricing models should the user choose in this case?

A. Spot instance.

B. Reserved instance.

C. On-demand instance.

D. EBS optimized instance.

Answer: A

Explanation:

In Amazon Web Services, the spot instance is useful when the user wants to run a process temporarily. The spot instance can be terminated if another user outbids the existing bid. In this case all storage is temporary, and the data is not required to be persistent. Thus, the spot instance is a good option to save money.
Reference: http://aws.amazon.com/ec2/purchasing-options/spot-instances/

85. Which of the following is NOT a characteristic of Amazon Elastic Compute Cloud (Amazon EC2)?

A. It can be used to launch as many or as few virtual servers as you need.

B. It increases the need to forecast traffic by providing dynamic IP addresses for static cloud computing.

C. It eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.

D. It offers scalable computing capacity in the Amazon Web Services (AWS) cloud

Answer: B

Explanation:

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html

86. You have been storing massive amounts of data on Amazon Glacier for the past 2 years and now start to wonder if there are any limitations on this. What is the correct answer to your question?

A. The total volume of data is limited but the number of archives you can store is unlimited.

B. The total volume of data is unlimited but the number of archives you can store is limited.

C. The total volume of data and number of archives you can store are unlimited.

D. The total volume of data is limited and the number of archives you can store is limited.

Answer: C

Explanation:

An archive is a durably stored block of information. You store your data in Amazon Glacier as archives. You may upload a single file as an archive, but your costs will be lower if you aggregate your data. TAR and ZIP are common formats that customers use to aggregate multiple files into a single file before uploading to Amazon Glacier. The total volume of data and number of archives you can store are unlimited. Individual Amazon Glacier archives can range in size from 1 byte to 40 terabytes. The largest archive that can be uploaded in a single upload request is 4 gigabytes. For items larger than 100 megabytes, customers should consider using the Multipart upload capability. Archives stored in Amazon Glacier are immutable, i.e. archives can be uploaded and deleted but cannot be edited or overwritten.
Reference: https://aws.amazon.com/glacier/faqs/

87. You are setting up your first Amazon Virtual Private Cloud (Amazon VPC) so you decide to use the VPC wizard in the AWS console to help make it easier for you. Which of the following statements is correct regarding instances that you launch into a default subnet via the VPC wizard?

A. Instances that you launch into a default subnet receive a public IP address and 10 private IP addresses.

B. Instances that you launch into a default subnet receive both a public IP address and a private IP address.

C. Instances that you launch into a default subnet don't receive any IP addresses and you need to define them manually.

D. Instances that you launch into a default subnet receive a public IP address and 5 private IP addresses.

Answer: B

Explanation:

Instances that you launch into a default subnet receive both a public IP address and a private IP address. Instances in a default subnet also receive both public and private DNS hostnames. Instances that you launch into a nondefault subnet in a default VPC don't receive a public IP address or a DNS hostname. You can change your subnet's default public IP addressing behavior.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
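A minimal sketch of changing that behavior with the AWS CLI, assuming a placeholder subnet ID:

$ aws ec2 modify-subnet-attribute --subnet-id subnet-1a2b3c4d --no-map-public-ip-on-launch
# instances launched into this subnet afterwards no longer receive a public IP address by default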

88. A user has configured ELB with two EBS backed EC2 instances. The user is trying to understand the DNS access and IP support for ELB. Which of the below mentioned statements may not help the user understand the IP mechanism supported by ELB?

A. The client can connect over IPV4 or IPV6 using Dual stack

B. Communication between the load balancer and back-end instances is always through IPV4

C. ELB DNS supports both IPV4 and IPV6

D. The ELB supports either IPV4 or IPV6 but not both

Answer: D

Explanation:

Elastic Load Balancing supports both Internet Protocol version 6 (IPv6) and Internet Protocol version 4 (IPv4). Clients can connect to the user's load balancer using either the IPv4 or IPv6 (in EC2-Classic) DNS name. However, communication between the load balancer and its back-end instances uses only IPv4. The user can use the dualstack-prefixed DNS name to enable IPv6 support for communications between the client and the load balancers. Thus, the clients are able to access the load balancer using either IPv4 or IPv6 as their individual connectivity needs dictate.
Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/UserScenariosForEC2.html

89. Does AWS CloudFormation support Amazon EC2 tagging?

A. Yes, AWS CloudFormation supports Amazon EC2 tagging

B. No, CloudFormation doesn't support any tagging

C. No, it doesn’t support Amazon EC2 tagging.

D. It depends if the Amazon EC2 tagging has been defined in the template.

Answer: A

Explanation:

In AWS CloudFormation, Amazon EC2 resources that support the tagging feature can also be tagged in an AWS template. The tag values can refer to template parameters, other resource names, resource attribute values (e.g. addresses), or values computed by simple functions (e.g., a concatenated list of strings).
Reference: http://aws.amazon.com/cloudformation/faqs/

90. An existing client comes to you and says that he has heard that launching instances into a VPC (virtual private cloud) is a better strategy than launching instances into EC2-Classic, which he knows is what you currently do. You suspect that he is correct and he has asked you to do some research about this and get back to him. Which of the following statements is true in regards to what ability launching your instances into a VPC instead of EC2-Classic gives you?

A. All of the things listed here.

B. Change security group membership for your instances while they're running

C. Assign static private IP addresses to your instances that persist across starts and stops

D. Define network interfaces, and attach one or more network interfaces to your instances

Answer: A

Explanation:

By launching your instances into a VPC instead of EC2-Classic, you gain the ability to: assign static private IP addresses to your instances that persist across starts and stops; assign multiple IP addresses to your instances; define network interfaces, and attach one or more network interfaces to your instances; change security group membership for your instances while they're running; control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering); add an additional layer of access control to your instances in the form of network access control lists (ACL); and run your instances on single-tenant hardware.
Reference: http://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf

91. Amazon S3 allows you to set per-file permissions to grant read and/or write access. However you have decided that you want an entire bucket with 100 files already in it to be accessible to the public. You don’t want to go through 100 files individually and set permissions. What would be the best way to do this?

A. Move the bucket to a new region

B. Add a bucket policy to the bucket.

C. Move the files to a new bucket.

D. Use Amazon EBS instead of S3

Answer: B

Explanation:

Amazon S3 supports several mechanisms that give you flexibility to control who can access your data as well as how, when, and where they can access it. Amazon S3 provides four different access control mechanisms: AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs), bucket policies, and query string authentication. IAM enables organizations to create and manage multiple users under a single AWS account. With IAM policies, you can grant IAM users fine-grained control to your Amazon S3 bucket or objects. You can use ACLs to selectively add (grant) certain permissions on individual objects. Amazon S3 bucket policies can be used to add or deny permissions across some or all of the objects within a single bucket. With query string authentication, you have the ability to share Amazon S3 objects through URLs that are valid for a specified period of time.
Reference: http://aws.amazon.com/s3/details/#security
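A minimal sketch of such a bucket policy, assuming a placeholder bucket name my-docs-bucket, saved as public-read.json and applied with the AWS CLI:

{
"Version": "2012-10-17",
"Statement": [{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-docs-bucket/*"
}]
}

$ aws s3api put-bucket-policy --bucket my-docs-bucket --policy file://public-read.json

Because the policy covers every object under the bucket, none of the 100 files needs its permissions changed individually.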

92. A user is accessing an EC2 instance on the SSH port for IP 10.20.30.40. Which one is a secure way to configure that the instance can be accessed only from this IP?

A. In the security group, open port 22 for IP 10.20.30.40

B. In the security group, open port 22 for IP 10.20.30.40/32

C. In the security group, open port 22 for IP 10.20.30.40/24

D. In the security group, open port 22 for IP 10.20.30.40/0

Answer: B

Explanation:

In AWS EC2, while configuring a security group, the user needs to specify the IP address in CIDR notation. The CIDR IP range 10.20.30.40/32 says it is for a single IP, 10.20.30.40. If the user specifies the IP as 10.20.30.40 only, the security group will not accept it and will ask for it in CIDR format.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
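A minimal sketch of the rule with the AWS CLI, assuming a placeholder security group ID:

$ aws ec2 authorize-security-group-ingress --group-id sg-1a2b3c4d --protocol tcp --port 22 --cidr 10.20.30.40/32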

93. Which of the following statements is true of creating a launch configuration using an EC2 instance?

A. The launch configuration can be created only using the Query APIs.

B. Auto Scaling automatically creates a launch configuration directly from an EC2 instance.

C. A user should manually create a launch configuration before creating an Auto Scaling group.

D. The launch configuration should be created manually from the AWS CLI.

Answer: B

Explanation:

You can create an Auto Scaling group directly from an EC2 instance. When you use this feature, Auto Scaling automatically creates a launch configuration for you as well.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html
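A minimal sketch with the AWS CLI, assuming a running instance (placeholder ID i-1348636c) whose attributes should seed the launch configuration:

$ aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg --instance-id i-1348636c --min-size 1 --max-size 2 --desired-capacity 1
# Auto Scaling derives the launch configuration from the instance's AMI, instance type, and related attributes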

94. You need to set up a high level of security for an Amazon Relational Database Service (RDS) you have just built in order to protect the confidential information stored in it. What are all the possible security groups that RDS uses?

A. DB security groups, VPC security groups, and EC2 security groups.

B. DB security groups only.

C. EC2 security groups only.

D. VPC security groups, and EC2 security groups.

Answer: A

Explanation:

A security group controls the access to a DB instance. It does so by allowing access to IP address ranges or Amazon EC2 instances that you specify. Amazon RDS uses DB security groups, VPC security groups, and EC2 security groups. In simple terms, a DB security group controls access to a DB instance that is not in a VPC, a VPC security group controls access to a DB instance inside a VPC, and an Amazon EC2 security group controls access to an EC2 instance and can be used with a DB instance.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

95. You have been using T2 instances as your CPU requirements have not been that intensive. However, you now start to think about larger instance types and start looking at M1 and M3 instances. You are a little confused as to the differences between them, as they both seem to have the same ratio of CPU and memory. Which statement below is incorrect as to why you would use one over the other?

A. M3 instances are less expensive than M1 instances.

B. M3 instances are configured with more swap memory than M1 instances.

C. M3 instances provide better, more consistent performance than M1 instances for most use-cases.

D. M3 instances also offer SSD-based instance storage that delivers higher I/O performance.

Answer: B

Explanation:

Amazon EC2 allows you to set up and configure everything about your instances from your operating system up to your applications. An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. M1 and M3 Standard instances have the same ratio of CPU and memory; some reasons below as to why you would use one over the other. M3 instances provide better, more consistent performance than M1 instances for most use-cases. M3 instances also offer SSD-based instance storage that delivers higher I/O performance. M3 instances are also less expensive than M1 instances. Due to these reasons, we recommend M3 for applications that require general purpose instances with a balance of compute, memory, and network resources. However, if you need more disk storage than what is provided in M3 instances, you may still find M1 instances useful for running your applications.
Reference: https://aws.amazon.com/ec2/faqs/

96. You have set up an Elastic Load Balancer (ELB) with the usual default settings, which route each request independently to the application instance with the smallest load. However, someone has asked you to bind a user's session to a specific application instance so as to ensure that all requests coming from the user during the session will be sent to the same application instance. AWS has a feature to do this. What is it called?

A. Connection draining

B. Proxy protocol

C. Tagging

D. Sticky session

Answer: D

Explanation:

An Elastic Load Balancer (ELB), by default, routes each request independently to the application instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific application instance. This ensures that all requests coming from the user during the session will be sent to the same application instance. The key to managing the sticky session is determining how long your load balancer should consistently route the user's request to the same application instance. If your application has its own session cookie, then you can set Elastic Load Balancing to create the session cookie to follow the duration specified by the application's session cookie. If your application does not have its own session cookie, then you can set Elastic Load Balancing to create a session cookie by specifying your own stickiness duration. You can associate stickiness duration for only HTTP/HTTPS load balancer listeners. An application instance must always receive and send two cookies: a cookie that defines the stickiness duration and a special Elastic Load Balancing cookie named AWSELB that has the mapping to the application instance.
Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html#session-stickiness
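A minimal sketch of enabling duration-based stickiness on a classic load balancer with the AWS CLI, assuming placeholder names and a 300-second stickiness duration:

$ aws elb create-lb-cookie-stickiness-policy --load-balancer-name my-elb --policy-name sticky-300 --cookie-expiration-period 300
$ aws elb set-load-balancer-policies-of-listener --load-balancer-name my-elb --load-balancer-port 80 --policy-names sticky-300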

97. A user wants to achieve High Availability with PostgreSQL DB. Which of the below mentioned functionalities helps achieve HA?

A. Multi AZ

B. Read Replica

C. Multi region

D. PostgreSQL does not support HA

Answer: A

Explanation:

The Multi AZ feature allows the user to achieve High Availability. For Multi AZ, Amazon RDS automatically provisions and maintains a synchronous "standby" replica in a different Availability Zone.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
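A minimal sketch of creating a Multi-AZ PostgreSQL instance with the AWS CLI, assuming placeholder identifiers and credentials:

$ aws rds create-db-instance --db-instance-identifier my-postgres --db-instance-class db.m3.medium --engine postgres --allocated-storage 20 --master-username master --master-user-password MyPassw0rd --multi-az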

98. A user has created an application which will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data. The application uses the DynamoDB SDK to connect from the EC2 instance. Which of the below mentioned statements is true with respect to the best practice for security in this scenario?

A. The user should create an IAM user with DynamoDB access and use its credentials within the application to connect with DynamoDB

B. The user should attach an IAM role with DynamoDB access to the EC2 instance

C. The user should create an IAM role, which has EC2 access so that it will allow deploying the application

D. The user should create an IAM user with DynamoDB and EC2 access. Attach the user with the application so that it does not use the root account credentials

Answer: B

Explanation:

With AWS IAM, a user is creating an application which runs on an EC2 instance and makes requests to AWS, such as DynamoDB or S3 calls. Here it is recommended that the user should not create an IAM user and pass that user's credentials to the application or embed those credentials inside the application. Instead, the user should use roles for EC2 and give that role access to DynamoDB/S3. When the role is attached to the EC2 instance, it provides temporary security credentials to the application hosted on that instance, which it uses to connect with DynamoDB/S3.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html

99. After setting up several database instances in Amazon Relational Database Service (Amazon RDS) you decide that you need to track the performance and health of your databases. How can you do this?

A. Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB snapshot, DB parameter group, or DB security group.

B. Use the free Amazon CloudWatch service to monitor the performance and health of a DB instance.

C. All of the items listed will track the performance and health of a database.

D. View, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs. You can also query some database log files that are loaded into database tables.

Answer: C

Explanation:

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks. There are several ways you can track the performance and health of a database or a DB instance. You can: use the free Amazon CloudWatch service to monitor the performance and health of a DB instance; subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB snapshot, DB parameter group, or DB security group; view, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs (you can also query some database log files that are loaded into database tables); and use the AWS CloudTrail service to record AWS calls made by your AWS account. The calls are recorded in log files and stored in an Amazon S3 bucket.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html

100. You are building a system to distribute confidential documents to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly?

A. Add the CloudFront account security group "amazon-cf/amazon-cf-sg" to the appropriate S3 bucket policy.

B. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).

C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.

D. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

Answer: D

Explanation:

You restrict access to Amazon S3 content by creating an origin access identity, which is a special CloudFront user. You change Amazon S3 permissions to give the origin access identity permission to access your objects, and to remove permissions from everyone else. When your users access your Amazon S3 objects using CloudFront URLs, the CloudFront origin access identity gets the objects on your users' behalf. If your users try to access objects using Amazon S3 URLs, they're denied access. The origin access identity has permission to access objects in your Amazon S3 bucket, but users don't.
Reference: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
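A minimal sketch of creating the origin access identity with the AWS CLI, assuming a hypothetical caller reference and comment:

$ aws cloudfront create-cloud-front-origin-access-identity --cloud-front-origin-access-identity-config CallerReference=docs-oai-1,Comment=confidential-docs-oai

The identity returned is then granted read access in the S3 bucket policy (or object ACLs), and public access is removed.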

©2019 by Raghavendra Kambhampati