
AWS SA Associate Practice Questions-10
Updated: Jul 17, 2020
301. Your department creates regular analytics reports from your company’s log files. All log data is collected in Amazon S3 and processed by daily Amazon Elastic MapReduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse. Your CFO requests that you optimize the cost structure for this system. Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?
A. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
B. Use reduced redundancy storage (RRS) for PDF and CSV data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift.
C. Use reduced redundancy storage (RRS) for PDF and CSV data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
D. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
Answer: C
Explanation:
Using Reduced Redundancy Storage
Amazon S3 stores objects according to their storage class. It assigns the storage class to an object when it is written to Amazon S3. You can assign objects a specific storage class (standard or reduced redundancy) only when you write the objects to an Amazon S3 bucket or when you copy objects that are already stored in Amazon S3. Standard is the default storage class. For information about storage classes, see Object Key and Metadata.
In order to reduce storage costs, you can use reduced redundancy storage for noncritical, reproducible data at lower levels of redundancy than Amazon S3 provides with standard storage. The lower level of redundancy results in less durability and availability, but in many cases the lower costs can make reduced redundancy storage an acceptable storage solution. For example, it can be a cost-effective solution for sharing media content that is durably stored elsewhere. It can also make sense if you are storing thumbnails and other resized images that can be easily reproduced from an original image.
Reduced redundancy storage is designed to provide 99.99% durability of objects over a given year.
This durability level corresponds to an average annual expected loss of 0.01% of objects. For example, if
you store 10,000 objects using the RRS option, you can, on average, expect to incur an annual loss of a
single object per year (0.01% of 10,000 objects).
Note
This annual loss represents an expected average and does not guarantee the loss of less than 0.01% of
objects in a given year.
Reduced redundancy storage stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but it does not replicate objects as many times as Amazon S3 standard storage. In addition, reduced redundancy storage is designed to sustain the loss of data in a single facility.
If an object in reduced redundancy storage has been lost, Amazon S3 will return a 405 error on requests made to that object. Amazon S3 also offers notifications for reduced redundancy storage object loss: you can configure your bucket so that when Amazon S3 detects the loss of an RRS object, a notification will be sent through Amazon Simple Notification Service (Amazon SNS). You can then replace the lost object. To enable notifications, you can use the Amazon S3 console to set the Notifications property of your bucket.
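As a rough sketch, the RRS object-loss notification described above can also be set up programmatically. The dict below is the payload shape that boto3's `s3.put_bucket_notification_configuration()` accepts; the SNS topic ARN is a placeholder.

```python
# Sketch: S3 bucket notification payload for RRS object-loss events.
# The topic ARN is a placeholder; in practice this dict would be passed as
# the NotificationConfiguration argument of
# boto3's s3.put_bucket_notification_configuration().
rrs_loss_notification = {
    "TopicConfigurations": [
        {
            # Placeholder SNS topic that your replacement workflow subscribes to
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:rrs-object-lost",
            # S3 emits this event when it detects the loss of an RRS object
            "Events": ["s3:ReducedRedundancyLostObject"],
        }
    ]
}
```

Once the notification fires, the application can re-derive the lost PDF/CSV from the raw (standard-storage) data, which is why RRS is acceptable for the reports but not for the raw logs.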
302. You are the new IT architect in a company that operates a mobile sleep tracking application.
When activated at night, the mobile app sends collected data points of 1 kilobyte every 5 minutes to your backend.
The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table.
Every morning, you scan the table to extract and aggregate last night’s data on a per-user basis, and store the results in Amazon S3.
Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users who are mostly based out of North America.
You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose 2 answers)
A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
B. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
C. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
D. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
E. Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.
Answer: B, D
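The cost saving behind the caching option is the cache-aside pattern: repeated reads are served from memory instead of consuming provisioned DynamoDB read capacity. A minimal local sketch (a dict stands in for ElastiCache, and a plain mapping stands in for the DynamoDB table):

```python
# Sketch of the cache-aside pattern: reads check an in-memory cache
# (ElastiCache in the real architecture, a dict here) before hitting the
# table, so repeated reads of the same item consume no read capacity.
class CachedReader:
    def __init__(self, table):
        self.table = table       # stands in for a DynamoDB table
        self.cache = {}          # stands in for ElastiCache
        self.table_reads = 0     # counts provisioned reads actually consumed

    def get(self, key):
        if key in self.cache:
            return self.cache[key]       # cache hit: free read
        self.table_reads += 1            # cache miss: one table read
        value = self.table[key]
        self.cache[key] = value
        return value
```

With many users re-opening the same aggregated results, the hit rate is high, so provisioned read throughput on the table can be dialed down.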
303. Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant.
How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?
A. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
B. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
C. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
D. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
Answer: C
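The lifecycle rule in the correct option can be sketched as the configuration dict that boto3's `s3.put_bucket_lifecycle_configuration()` expects. The prefix and day count are illustrative placeholders:

```python
# Sketch: lifecycle rule archiving original uploads to Glacier a few days
# after creation, while the HLS renditions stay in S3 for CloudFront to
# serve. Prefix and day count are illustrative placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-original-mp4",
            "Filter": {"Prefix": "originals/"},  # hypothetical key prefix for source MP4s
            "Status": "Enabled",
            "Transitions": [
                {"Days": 7, "StorageClass": "GLACIER"}  # archive after a few days
            ],
        }
    ]
}
```

Only the rarely-accessed originals move to Glacier; the transcoded HLS segments remain in S3, so delivery quality and availability are unaffected.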
304. You’ve been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then archiving nightly into S3 for further processing with EMR.
They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access.
Which approach provides a cost-effective, scalable mitigation to this kind of attack?
A. Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC.
B. Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.
C. Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.
D. Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.
Answer: C
305. You currently operate a web application in the AWS US-East region. The application runs on an auto-scaled layer of EC2 instances and an RDS Multi-AZ database. Your IT security compliance officer has tasked you to develop a reliable and durable logging solution to track changes made to your EC2, IAM, and RDS resources. The solution must ensure the integrity and confidentiality of your log data. Which of these solutions would you recommend?
A. Create a new CloudTrail trail with one new S3 bucket to store the logs and with the global services option selected. Use IAM roles, S3 bucket policies, and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
B. Create a new CloudTrail trail with one new S3 bucket to store the logs. Configure SNS to send log file delivery notifications to your management system. Use IAM roles and S3 bucket policies on the S3 bucket that stores your logs.
C. Create a new CloudTrail trail with an existing S3 bucket to store the logs and with the global services option selected. Use S3 ACLs and Multi-Factor Authentication (MFA) Delete on the S3 bucket that stores your logs.
D. Create three new CloudTrail trails with three new S3 buckets to store the logs: one for the AWS Management Console, one for AWS SDKs, and one for command line tools. Use IAM roles and S3 bucket policies on the S3 buckets that store your logs.
Answer: A
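The trail in the correct option can be sketched as the parameters that boto3's `cloudtrail.create_trail()` takes. The trail and bucket names are placeholders; the key detail is the global services flag, which is what captures IAM (a global service) alongside EC2 and RDS events:

```python
# Sketch: parameters for creating the trail from answer A. Names are
# placeholders; the dict would be unpacked into
# boto3's cloudtrail.create_trail(**trail_params).
trail_params = {
    "Name": "management-events-trail",     # placeholder trail name
    "S3BucketName": "my-cloudtrail-logs",  # new, dedicated log bucket
    # Required so IAM (global service) API activity is logged too:
    "IncludeGlobalServiceEvents": True,
}
```

Integrity and confidentiality then come from the S3 side: a bucket policy restricting access, IAM roles for readers, and MFA Delete so log objects cannot be silently removed.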
306. An enterprise wants to use a third-party SaaS application. The SaaS application needs to have access to issue several API commands to discover Amazon EC2 resources running within the enterprise’s account. The enterprise has internal security policies that require any outside access to their environment to conform to the principle of least privilege, and there must be controls in place to ensure that the credentials used by the SaaS vendor cannot be used by any other third party. Which of the following would meet all of these conditions?
A. From the AWS Management Console, navigate to the Security Credentials page and retrieve the access and secret key for your account.
B. Create an IAM user within the enterprise account, assign a user policy to the IAM user that allows only the actions required by the SaaS application, create a new access and secret key for the user, and provide these credentials to the SaaS provider.
C. Create an IAM role for cross-account access, allow the SaaS provider’s account to assume the role, and assign it a policy that allows only the actions required by the SaaS application.
D. Create an IAM role for EC2 instances, assign it a policy that allows only the actions required for the SaaS application to work, and provide the role ARN to the SaaS provider to use when launching their application instances.
Answer: C
Explanation:
Granting Cross-Account Permissions to Objects It Does Not Own
In this example scenario, you own a bucket and you have enabled other AWS accounts to upload objects.
That is, your bucket can have objects that other AWS accounts own.
Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of
who the owner is, to a user in another account. For example, that user could be a billing application that
needs to access object metadata. There are two core issues:
The bucket owner has no permissions on those objects created by other AWS accounts. So for the bucket
owner to grant permissions on objects it does not own, the object owner, the AWS account that created the
objects, must first grant permission to the bucket owner. The bucket owner can then delegate those
permissions.
The bucket owner account can delegate permissions to users in its own account, but it cannot delegate permissions to other AWS accounts, because cross-account delegation is not supported.
In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with
permission to access objects, and grant another AWS account permission to assume the role temporarily
enabling it to access objects in the bucket.
Background: Cross-Account Permissions and Using IAM Roles
IAM roles enable several scenarios to delegate access to your resources, and cross-account access is
one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily
delegate object access cross-account to users in another AWS account, Account C. Each IAM role you
create has two policies attached to it:
A trust policy identifying another AWS account that can assume the role.
An access policy defining what permissions (for example, s3:GetObject) are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see Specifying Permissions in a Policy.
The AWS account identified in the trust policy then grants its user permission to assume the role. The user
can then do the following to access objects:
Assume the role and, in response, get temporary security credentials. Using the temporary security
credentials, access the objects in the bucket.
For more information about IAM roles, go to Roles (Delegation and Federation) in IAM User Guide. The
following is a summary of the walkthrough steps:
Account A administrator user attaches a bucket policy granting Account B conditional permission to upload
objects.
Account A administrator creates an IAM role, establishing trust with Account C, so users in that account can access Account A. The access policy attached to the role limits what a user in Account C can do when the user accesses Account A.
Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.
Account C administrator creates a user and attaches a user policy that allows the user to assume the role.
User in Account C first assumes the role, which returns the user temporary security credentials.
Using those temporary credentials, the user then accesses objects in the bucket.
For this example, you need three accounts. The following table shows how we refer to these accounts and the administrator users in these accounts. Per IAM guidelines (see About Using an Administrator User to Create Resources and Grant Permissions), we do not use the account root credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials in creating resources and granting them permissions.
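For the cross-account role in the correct answer, two policy documents are attached: a trust policy naming the SaaS vendor's account, and an access policy restricted to the discovery calls the application needs. A sketch, with the account ID and the exact action list as illustrative placeholders:

```python
import json

# Sketch of the two policies attached to the cross-account IAM role.
# The vendor account ID and the action list are illustrative placeholders.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # SaaS vendor's AWS account (placeholder ID) may assume this role
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        # Least-privilege: read-only EC2 discovery calls (example subset)
        "Action": ["ec2:DescribeInstances", "ec2:DescribeTags"],
        "Resource": "*",
    }],
}

# json.dumps(trust_policy) is the form iam.create_role() expects for
# its AssumeRolePolicyDocument parameter.
trust_policy_json = json.dumps(trust_policy)
```

Because the role is assumed via STS, the vendor only ever holds short-lived credentials scoped to these actions, satisfying both the least-privilege and the no-reuse requirements.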
307. You are designing a data leak prevention solution for your VPC environment. You want your VPC instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third-party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet.
Which of the following options would you consider?
A. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
B. Implement security groups and configure outbound rules to only permit traffic to software depots.
C. Move all your instances into private VPC subnets, remove default routes from all routing tables, and add specific routes to the software depots and distributions only.
D. Implement network access control lists to allow specific destinations, with an implicit deny as a rule.
Answer: A
308. An administrator is using Amazon CloudFormation to deploy a three-tier web application that consists of a web tier and application tier that will utilize Amazon DynamoDB for storage. When creating the CloudFormation template, which of the following would allow the application instance access to the DynamoDB tables without exposing API credentials?
A. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and associate the Role to the application instances by referencing an instance profile.
B. Use the Parameters section in the CloudFormation template to have the user input Access and Secret Keys from an already created IAM user that has the permissions required to read and write from the required DynamoDB table.
C. Create an Identity and Access Management Role that has the required permissions to read and write from the required DynamoDB table and reference the Role in the instance profile property of the application instance.
D. Create an Identity and Access Management user in the CloudFormation template that has permissions to read and write from the required DynamoDB table, use the GetAtt function to retrieve the Access and Secret keys, and pass them to the application instance through user data.
Answer: C
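The role/instance-profile wiring can be sketched as a CloudFormation resources fragment, shown here as a Python dict (logical names, AMI ID, and the table ARN are illustrative placeholders):

```python
# Sketch of the CloudFormation resources behind the answer: an IAM role,
# an instance profile referencing it, and an EC2 instance that uses the
# profile. Logical IDs, AMI ID, and table ARN are placeholders.
template_fragment = {
    "AppRole": {
        "Type": "AWS::IAM::Role",
        "Properties": {
            "AssumeRolePolicyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"Service": "ec2.amazonaws.com"},
                    "Action": "sts:AssumeRole",
                }],
            },
            "Policies": [{
                "PolicyName": "dynamo-rw",
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                                   "dynamodb:Query", "dynamodb:UpdateItem"],
                        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/AppTable",
                    }],
                },
            }],
        },
    },
    "AppInstanceProfile": {
        "Type": "AWS::IAM::InstanceProfile",
        "Properties": {"Roles": [{"Ref": "AppRole"}]},
    },
    "AppInstance": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
            "ImageId": "ami-12345678",  # placeholder AMI
            "IamInstanceProfile": {"Ref": "AppInstanceProfile"},
        },
    },
}
```

The instance receives temporary role credentials via instance metadata, so no access or secret keys ever appear in the template, parameters, or user data.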
309. An AWS customer is deploying an application that is composed of an Auto Scaling group of EC2 instances.
The customer’s security policy requires that every outbound connection from these instances to any other service within the customer’s Virtual Private Cloud must be authenticated using a unique X.509 certificate that contains the specific instance-id.
In addition, an X.509 certificate must be signed by the customer’s key management service in order to be trusted for authentication.
Which of the following configurations will support these requirements?
A. Configure an IAM Role that grants access to an Amazon S3 object containing a signed certificate and configure the Auto Scaling group to launch instances with this role. Have the instances bootstrap get the certificate from Amazon S3 upon first boot.
B. Embed a certificate into the Amazon Machine Image that is used by the Auto Scaling group. Have the launched instances generate a certificate signing request with the instance’s assigned instance-id to the key management service for signature.
C. Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the key management service generate a signed certificate and send it directly to the newly launched instance.
D. Configure the launched instances to generate a new certificate upon first boot. Have the key management service poll the Auto Scaling group for associated instances and send new instances a certificate signature that contains the specific instance-id.
Answer: A
310. Your company has recently extended its datacenter into a VPC on AWS to add burst computing capacity as needed. Members of your Network Operations Center need to be able to go to the AWS Management Console and administer Amazon EC2 instances as necessary. You don’t want to create new IAM users for each NOC member and make those users sign in again to the AWS Management Console. Which option below will meet the needs for your NOC members?
A. Use OAuth 2.0 to retrieve temporary AWS security credentials to enable your NOC members to sign in to the AWS Management Console.
B. Use Web Identity Federation to retrieve AWS temporary security credentials to enable your NOC members to sign in to the AWS Management Console.
C. Use your on-premises SAML 2.0-compliant identity provider (IdP) to grant the NOC members federated access to the AWS Management Console via the AWS single sign-on (SSO) endpoint.
D. Use your on-premises SAML 2.0-compliant identity provider (IdP) to retrieve temporary security credentials to enable NOC members to sign in to the AWS Management Console.
Answer: D
311. You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the web server using client certificate authentication. The solution must be resilient.
Which of the following options would you consider for configuring the web server infrastructure? (Choose 2 answers)
A. Configure ELB with TCP listeners on TCP/443, and place the web servers behind it.
B. Configure your web servers with EIPs. Place the web servers in a Route 53 Record Set and configure health checks against all web servers.
C. Configure ELB with HTTPS listeners, and place the web servers behind it.
D. Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.
Answer: A, B
312. You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPsec tunnels over the Internet. You will be using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways.
Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? (Choose 4 answers)
A. End-to-end protection of data in transit
B. End-to-end identity authentication
C. Data encryption across the Internet
D. Protection of data in transit over the Internet
E. Peer identity authentication between VPN gateway and customer gateway
F. Data integrity protection across the Internet
Answer: C, D, E, F
313. You are designing an intrusion detection and prevention (IDS/IPS) solution for a customer web application in a single VPC. You are considering the options for implementing IDS/IPS protection for traffic coming from the Internet.
Which of the following options would you consider? (Choose 2 answers)
A. Implement IDS/IPS agents on each instance running in the VPC.
B. Configure an instance in each subnet to switch its network interface card to promiscuous mode and analyze network traffic.
C. Implement Elastic Load Balancing with SSL listeners in front of the web applications.
D. Implement a reverse proxy layer in front of web servers and configure IDS/IPS agents on each reverse proxy server.
Answer: B, D
314. You are designing a photo sharing mobile app. The application will store all pictures in a single Amazon S3 bucket.
Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3.
You want to configure security to handle potentially millions of users in the most secure manner possible.
What should your server-side application do when a new user registers on the photo sharing mobile application?
A. Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.
B. Record the user’s information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service ‘AssumeRole’ function. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
C. Record the user’s information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app’s memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.
D. Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.
E. Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.
Answer: B
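The backend side of the correct answer can be sketched as the parameters it would pass to STS `AssumeRole` (via boto3's `sts.assume_role()`). The role ARN, bucket name, and the idea of scoping the session policy to a per-user prefix are illustrative assumptions:

```python
import json

# Sketch: per-user temporary credentials via STS AssumeRole. Role ARN and
# bucket name are placeholders; the per-user prefix scoping is an assumed
# convention, not taken from the question text.
def assume_role_params(user_id, bucket="photo-app-bucket"):
    return {
        "RoleArn": "arn:aws:iam::123456789012:role/PhotoAppUser",  # placeholder
        "RoleSessionName": f"user-{user_id}",
        # Session policy narrows the role down to this user's own prefix,
        # so one role can safely serve millions of users.
        "Policy": json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{user_id}/*",
            }],
        }),
        "DurationSeconds": 3600,  # short-lived, regenerated on each app launch
    }
```

The mobile app only ever holds the short-lived credentials in memory; nothing long-lived is embedded in the app.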
315. You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL. Before generating the URL, the application should verify the existence of the file in S3.
How should the application use AWS credentials to access the S3 bucket securely?
A. Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
B. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user’s credentials from the EC2 instance user data.
C. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role’s credentials from the EC2 instance metadata.
D. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
Answer: C
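The application logic (verify existence, then pre-sign) can be sketched as below. A stub client stands in for the boto3 S3 client, which in the real setup would transparently pick up the role's temporary credentials from instance metadata; the URL format produced by the stub is illustrative only.

```python
# Sketch: check that the object exists, then pre-sign a download URL.
# The stub class is a local stand-in for boto3's S3 client; its URL format
# is illustrative, not the real pre-signed URL format.
def url_if_exists(s3, bucket, key, expires=300):
    try:
        s3.head_object(Bucket=bucket, Key=key)  # existence check
    except Exception:
        return None                              # missing: do not pre-sign
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires
    )

class StubS3:
    """Minimal local stand-in for the S3 client, for illustration."""
    def __init__(self, keys):
        self.keys = set(keys)

    def head_object(self, Bucket, Key):
        if Key not in self.keys:
            raise KeyError(Key)  # boto3 would raise a ClientError (404)

    def generate_presigned_url(self, op, Params, ExpiresIn):
        return (f"https://{Params['Bucket']}.s3.amazonaws.com/"
                f"{Params['Key']}?X-Amz-Expires={ExpiresIn}")
```

With an instance role, the code never handles static keys at all; the SDK rotates the role credentials automatically.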
316. You are designing a social media site and are considering how to mitigate distributed denial-of-service (DDoS) attacks. Which of the below are viable mitigation techniques? (Choose 3 answers)
A. Add multiple elastic network interfaces (ENIs) to each EC2 instance to increase the network bandwidth.
B. Use dedicated instances to ensure that each instance has the maximum performance possible.
C. Use an Amazon CloudFront distribution for both static and dynamic content.
D. Use an Elastic Load Balancer with Auto Scaling groups at the web, app, and Amazon Relational Database Service (RDS) tiers.
E. Add Amazon CloudWatch alarms to look for high Network In and CPU utilization.
F. Create processes and capabilities to quickly add and remove rules to the instance OS firewall.
Answer: C, E, F
317. A benefits enrollment company is hosting a 3-tier web application running in a VPC on AWS which includes a NAT (Network Address Translation) instance in the public web tier. There is enough provisioned capacity for the expected workload for the new fiscal year benefit enrollment period, plus some extra overhead. Enrollment proceeds nicely for two days, and then the web tier becomes unresponsive. Upon investigation using CloudWatch and other monitoring tools, it is discovered that there is an extremely large and unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over port 80 from a country where the benefits company has no customers. The web tier instances are so overloaded that benefit enrollment administrators cannot even SSH into them. Which activity would be useful in defending against this attack?
A. Create a custom route table associated with the web tier and block the attacking IP addresses from the
IGW (Internet Gateway)
B. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main
Route Table with the new EIP
C. Create 15 Security Group rules to block the attacking IP addresses over port 80
D. Create an inbound NACL (Network Access control list) associated with the web tier subnet with deny
rules to block the attacking IP addresses
Answer: D
Explanation:
Use AWS Identity and Access Management (IAM) to control who in your organization has permission to create and manage security groups and network ACLs (NACLs). Isolate the responsibilities and roles for better defense. For example, you can give only your network administrators or security admins the permission to manage the security groups and restrict other roles.
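The deny rules from the correct answer can be sketched as the parameters that boto3's `ec2.create_network_acl_entry()` takes, one entry per attacking address. The NACL ID, IP list, and rule numbering are illustrative placeholders:

```python
# Sketch: one inbound NACL deny entry per attacking IP, as the parameter
# dicts ec2.create_network_acl_entry() accepts. NACL ID, IPs, and rule
# numbers are placeholders (the scenario has 15 addresses).
attacking_ips = ["203.0.113.10", "203.0.113.11"]  # example subset

def deny_entries(nacl_id, ips, start_rule=100):
    return [
        {
            "NetworkAclId": nacl_id,
            "RuleNumber": start_rule + i,   # must be lower than the allow rules
            "Protocol": "6",                # TCP
            "RuleAction": "deny",
            "Egress": False,                # inbound rule
            "CidrBlock": f"{ip}/32",
            "PortRange": {"From": 80, "To": 80},
        }
        for i, ip in enumerate(ips)
    ]
```

Unlike security groups, NACLs support explicit deny rules and are evaluated before traffic ever reaches the overloaded instances, which is why option D works where option C cannot.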
318. Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware. The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents.
Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 answers)
A. Setting up a federation proxy or identity provider
B. Using AWS Security Token Service to generate temporary tokens
C. Tagging each folder in the bucket
D. Configuring an IAM role
E. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in
the bucket
Answer: A, B, D
319. Your company policies require encryption of sensitive data at rest. You are considering the possible
options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance.
Which of these options would allow you to encrypt your data at rest? (Choose 3 answers)
A. Implement third party volume encryption tools
B. Do nothing as EBS volumes are encrypted by default
C. Encrypt data inside your applications before storing it on EBS
D. Encrypt data using native data encryption drivers at the file system level
E. Implement SSL/TLS for all services running on the server
Answer: A, C, D
320. You have a periodic image analysis application that gets some files in input, analyzes them, and for each file writes some data in output to a text file. The number of files in input per day is high and concentrated in a few hours of the day.
Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results. It takes almost 20 hours per day to complete the process.
What services could be used to reduce the elaboration time and improve the availability of the solution?
A. S3 to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
B. EBS with Provisioned IOPS (PIOPS) to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
C. S3 to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
D. EBS with Provisioned IOPS (PIOPS) to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
Answer: D
Explanation:
Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances. Once
attached, you can create a file system on top of these volumes, run a database, or use them in any other
way you would use a block device. Amazon EBS volumes are placed in a specific Availability Zone, where
they are automatically replicated to protect you from the failure of a single component.
Amazon EBS provides three volume types: General Purpose (SSD), Provisioned IOPS (SSD), and
Magnetic. The three volume types differ in performance characteristics and cost, so you can choose the
right storage performance and price for the needs of your applications. All EBS volume types offer the same
durable snapshot capabilities and are designed for 99.999% availability.
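The queue-length-driven scaling idea behind the SQS options can be sketched as a pure sizing function: the worker group grows with the backlog during the busy hours and shrinks back afterward. The per-worker throughput and the min/max bounds are illustrative assumptions:

```python
import math

# Sketch: size the worker fleet from the SQS queue backlog. The
# messages-per-worker target and min/max bounds are illustrative; a real
# setup would drive this via a CloudWatch alarm on queue depth.
def desired_workers(queue_length, msgs_per_worker=50,
                    min_workers=1, max_workers=20):
    wanted = math.ceil(queue_length / msgs_per_worker)
    return max(min_workers, min(max_workers, wanted))
```

Because the input arrives concentrated in a few hours, scaling on queue depth turns the 20-hour serial job into a burst of parallel work, then idles the fleet the rest of the day.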
321. You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?
A. Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce.
B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers.
C. Write click events directly to Amazon Redshift and then analyze with SQL.
D. Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL.
Answer: B
Explanation:
Reference: http://www.slideshare.net/AmazonWebServices/aws-webcast-introduction-to-amazon-kinesis
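The ingestion side of the Kinesis answer can be sketched as the record each click becomes. The stream name is a placeholder; the key point is partitioning by session ID, which keeps a session's clicks in order on a single shard so a Kinesis worker can reconstruct the click sequence:

```python
import json

# Sketch: a click event shaped as the parameters kinesis.put_record()
# accepts. Stream name is a placeholder; partitioning by session keeps
# one session's clicks ordered on one shard.
def click_record(session_id, page, ad_id=None):
    return {
        "StreamName": "clickstream",  # placeholder stream name
        "Data": json.dumps({"session": session_id, "page": page, "ad": ad_id}),
        "PartitionKey": session_id,   # same session -> same shard -> ordered
    }
```

Kinesis workers consuming the shards can then update page-layout decisions within seconds, which the batch (EMR) and periodic-drain (SQS/RDS) options cannot do in real time.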
322. An AWS customer runs a public blogging website. The site’s users upload two million blog entries a month. The average blog entry size is 200 KB. The access rate to blog entries drops to negligible 6 months after publication, and users rarely access a blog entry 1 year after publication. Additionally, blog entries have a high update rate during the first 3 months following publication; this drops to no updates after 6 months. The customer wants to use CloudFront to improve their users’ load times.
Which of the following recommendations would you make to the customer?
A. Duplicate entries into two different buckets and create two separate CloudFront distributions where S3 access is restricted only to the CloudFront identity.
B. Create a CloudFront distribution with the ‘US/Europe’ price class for US/Europe users and a different CloudFront distribution with ‘All Edge Locations’ for the remaining users.
C. Create a CloudFront distribution with S3 access restricted only to the CloudFront identity and partition the blog entries’ location in S3 according to the month they were uploaded, to be used with CloudFront behaviors.
D. Create a CloudFront distribution with Restrict Viewer Access and Forward Query String set to true and a minimum TTL of 0.
Answer: C
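The storage numbers in this question are worth working out, since they explain why partitioning by upload month (answer C) is attractive. A quick sketch using only the figures from the question:

```python
# Monthly ingest volume for the blog, using the figures from the question.
entries_per_month = 2_000_000   # blog entries uploaded per month
entry_kb = 200                  # average entry size in KB

monthly_gb = entries_per_month * entry_kb / 1_000_000  # KB -> GB (decimal units)
print(f"~{monthly_gb:.0f} GB of new entries per month")  # ~400 GB
```

Because updates stop after 6 months and reads become negligible, keys prefixed by upload month let separate CloudFront cache behaviors apply long TTLs to old prefixes and short TTLs to the recent, frequently-updated ones.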
323. Your company is getting ready to do a major public announcement of a social media site on AWS. The
website is running on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS
MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second
and relies on an eventual consistency model. After comprehensive tests you discover that there is read
contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2
answers)
A. Deploy ElastiCache in-memory cache running in each Availability Zone
B. Implement sharding to distribute load to multiple RDS MySQL instances
C. Increase the RDS MySQL instance size and implement provisioned IOPS
D. Add an RDS MySQL read replica in each Availability Zone
Answer: A, C
324. A company is running a batch analysis every hour on their main transactional DB, running on an RDS
MySQL instance, to populate their central Data Warehouse running on Redshift. During the execution of the
batch, their transactional applications are very slow. When the batch completes, they need to update the top
management dashboard with the new data. The dashboard is produced by another system running
on-premises that is currently started when a manually-sent email notifies that an update is required. The
on-premises system cannot be modified because it is managed by another team.
How would you optimize this scenario to solve performance issues and automate the process as much as
possible?
A. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update
the dashboard
B. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises
system to update the dashboard
C. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update
the dashboard
D. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises
system to update the dashboard.
Answer: A
325. You are implementing a URL whitelisting system for a company that wants to restrict outbound
HTTPS connections to specific domains from their EC2-hosted applications. You deploy a single EC2
instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the
VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist
configuration. You have a nightly maintenance window of 10 minutes where all instances fetch new software
updates. Each update is about 200 MB in size, and there are 500 instances in the VPC that routinely fetch
updates. After a few days you notice that some machines are failing to successfully download some, but not
all, of their updates within the maintenance window. The download URLs used for these updates are
correctly listed in the proxy’s whitelist configuration, and you are able to access them manually using a web
browser on the instances. What might be happening? (Choose 2 answers)
A. You are running the proxy on an undersized EC2 instance type, so network throughput is not sufficient for
all instances to download their updates in time.
B. You are running the proxy on a sufficiently-sized EC2 instance in a private subnet, and its network
throughput is being throttled by a NAT running on an undersized EC2 instance.
C. The route table for the subnets containing the affected EC2 instances is not configured to direct network
traffic for the software update locations to the proxy.
D. You have not allocated enough storage to the EC2 instance running the proxy, so the network buffer is
filling up, causing some requests to fail.
E. You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed
network throughput through the Internet Gateway (IGW).
Answer: A, B
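Why the correct answers point at network throughput becomes obvious from back-of-the-envelope arithmetic (the inputs are from the question; the derived rates are ours):

```python
# Can one proxy instance move all the update traffic inside the window?
instances = 500            # EC2 instances fetching updates (from the question)
update_mb = 200            # size of each update in MB (from the question)
window_s = 10 * 60         # 10-minute maintenance window in seconds

total_mb = instances * update_mb               # total data to move: 100,000 MB
required_mb_per_s = total_mb / window_s        # sustained throughput needed
required_gbps = required_mb_per_s * 8 / 1000   # convert MB/s to Gbps

print(f"{total_mb / 1000:.0f} GB total, "
      f"{required_mb_per_s:.0f} MB/s ≈ {required_gbps:.2f} Gbps sustained")
```

Sustaining well over 1 Gbps through a single choke point is exactly what an undersized proxy instance (A) or an undersized NAT instance in front of it (B) cannot do.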
326. To serve web traffic for a popular product, your chief financial officer and IT director have purchased
10 m1.large heavy utilization Reserved Instances (RIs) evenly spread across two Availability Zones;
Route 53 is used to deliver the traffic to an Elastic Load Balancer (ELB). After several months, the product
grows even more popular and you need additional capacity. As a result, your company purchases two
c3.2xlarge medium utilization RIs. You register the two c3.2xlarge instances with your ELB and quickly find
that the m1.large instances are at 100% of capacity and the c3.2xlarge instances have significant capacity
that’s unused. Which option is the most cost effective and uses EC2 capacity most effectively?
A. Use a separate ELB for each instance type and distribute load to ELBs with Route 53 weighted round
robin
B. Configure an Auto Scaling group and Launch Configuration with ELB to add up to 10 more on-demand
m1.large instances when triggered by CloudWatch; shut off c3.2xlarge instances
C. Route traffic to EC2 m1.large and c3.2xlarge instances directly using Route 53 latency based routing and
health checks; shut off ELB
D. Configure ELB with two c3.2xlarge instances and use an on-demand Auto Scaling group for up to two
additional c3.2xlarge instances. Shut off m1.large instances.
Answer: D
327. A read only news reporting site with a combined web and application tier and a database tier that
receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations
automatically. What AWS services should be used to meet these requirements?
A. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an
auto scaling group monitored with CloudWatch, and RDS with read replicas.
B. Stateful instances for the web and application tier in an auto scaling group monitored with CloudWatch,
and RDS with read replicas.
C. Stateful instances for the web and application tier in an auto scaling group monitored with CloudWatch,
and multi-AZ RDS.
D. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an
auto scaling group monitored with CloudWatch, and multi-AZ RDS.
Answer: A
328. You are running a news website in the eu-west-1 region that updates every 15 minutes. The website
has a worldwide audience. It uses an Auto Scaling group behind an Elastic Load Balancer and an
Amazon RDS database. Static content resides on Amazon S3 and is distributed through Amazon
CloudFront. Your Auto Scaling group is set to trigger a scale up event at 60% CPU utilization. You use an
Amazon RDS extra large DB instance with 10,000 Provisioned IOPS; its CPU utilization is around 80%,
while freeable memory is in the 2 GB range.
Web analytics reports show that the average load time of your web pages is around 1.5 to 2 seconds, but
your SEO consultant wants to bring down the average load time to under 0.5 seconds.
How would you improve page load times for your users? (Choose 3 answers)
A. Lower the scale up trigger of your Auto Scaling group to 30% so it scales more aggressively.
B. Add an Amazon ElastiCache caching layer to your application for storing sessions and frequent DB
queries
C. Configure Amazon CloudFront dynamic content support to enable caching of re-usable content from
your site
D. Switch the Amazon RDS database to the high memory extra large instance type
E. Set up a second installation in another region, and use the Amazon Route 53 latency-based routing
feature to select the right region.
Answer: A, B, D
329. A large real-estate brokerage is exploring the option of adding a cost-effective location based alert to
their existing mobile application. The application backend infrastructure currently runs on AWS. Users who
opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to
their location. For the alerts to be relevant, delivery time needs to be in the low minute count; the existing
mobile app has 5 million users across the US. Which one of the following architectural suggestions would
you make to the customer?
A. The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing
and EC2 instances; DynamoDB will be used to store and retrieve relevant offers. EC2 instances will
communicate with mobile carriers/device providers to push alerts back to the mobile application.
B. Use AWS Direct Connect or VPN to establish connectivity with mobile carriers. EC2 instances will receive
the mobile applications’ location through the carrier connection; RDS will be used to store and retrieve
relevant offers. EC2 instances will communicate with mobile carriers to push alerts back to the mobile
application.
C. The mobile application will send device location using SQS. EC2 instances will retrieve the relevant
offers from DynamoDB. AWS Mobile Push will be used to send offers to the mobile application.
D. The mobile application will send device location using AWS Mobile Push. EC2 instances will retrieve the
relevant offers from DynamoDB. EC2 instances will communicate with mobile carriers/device providers to
push alerts back to the mobile application.
Answer: A
330. A company is building a voting system for a popular TV show; viewers will watch the performances,
then visit the show’s website to vote for their favorite performer. It is expected that in a short period of time
after the show has finished the site will receive millions of visitors. The visitors will first login to the site using
their Amazon.com credentials and then submit their vote. After the voting is completed the page will display
the vote totals. The company needs to build the site so that it can handle the rapid influx of traffic while
maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns
below should they use?
A. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web
servers will first call the Login With Amazon service to authenticate the user, then process the user’s vote
and store the result into a multi-AZ Relational Database Service instance.
B. Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login
With Amazon service to authenticate the user; use IAM Roles to gain permissions to a DynamoDB table to
store the user’s vote.
C. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web
servers will first call the Login With Amazon service to authenticate the user, then the web servers will
process the user’s vote and store the result into a DynamoDB table, using IAM Roles for EC2 instances to
gain permissions to the DynamoDB table.
D. Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web
servers will first call the Login With Amazon service to authenticate the user, then the web servers will
process the user’s vote and store the result into an SQS queue, using IAM Roles for EC2 instances to gain
permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and
store the result into a DynamoDB table.
Answer: D
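The winning pattern above absorbs the write burst with a queue and lets a smaller fleet drain it into DynamoDB at a steady rate. A minimal in-memory sketch of that producer/consumer shape, using Python’s standard queue module as a stand-in for SQS and a dict as a stand-in for the DynamoDB table (all names are illustrative):

```python
import queue
import threading
from collections import defaultdict

vote_queue = queue.Queue()          # stand-in for the SQS queue
vote_totals = defaultdict(int)      # stand-in for the DynamoDB table

def web_server(performer: str) -> None:
    """Front end: accept the vote and enqueue it immediately."""
    vote_queue.put(performer)       # fast write, no database contention

def worker() -> None:
    """Application server: drain the queue and update the totals."""
    while True:
        performer = vote_queue.get()
        if performer is None:       # sentinel to stop the worker
            break
        vote_totals[performer] += 1
        vote_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for p in ["alice", "bob", "alice"]:
    web_server(p)
vote_queue.put(None)
t.join()
print(dict(vote_totals))            # {'alice': 2, 'bob': 1}
```

The key design property is that the web tier’s write cost is constant regardless of load spikes; the queue buffers the burst, so the tier that writes to the table can be sized for average throughput rather than peak.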
331. You are developing a new mobile application and are considering storing user preferences in AWS.
This would provide a more uniform cross-device experience to users using multiple mobile devices to
access the application. The preference data for each user is estimated to be 50 KB in size. Additionally, 5
million customers are expected to use the application on a regular basis. The solution needs to be
cost-effective, highly available, scalable and secure. How would you design a solution to meet the above
requirements?
A. Setup an RDS MySQL instance in 2 Availability Zones to store the user preference data. Deploy a public
facing application on a server in front of the database to manage security and access credentials.
B. Setup a DynamoDB table with an item for each user having the necessary attributes to hold the user
preferences. The mobile application will query the user preferences directly from the DynamoDB table.
Utilize STS, Web Identity Federation, and DynamoDB Fine Grained Access Control to authenticate and
authorize access.
C. Setup an RDS MySQL instance with multiple read replicas in 2 Availability Zones to store the user
preference data. The mobile application will query the user preferences from the read replicas. Leverage
the MySQL user management and access privilege system to manage security and access credentials.
D. Store the user preference data in S3. Setup a DynamoDB table with an item for each user and an item
attribute pointing to the user’s S3 object. The mobile application will retrieve the S3 URL from DynamoDB
and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate
and authorize access.
Answer: B
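A quick sizing check on the numbers in the question shows why DynamoDB (answer B) fits: each 50 KB item sits comfortably below DynamoDB’s per-item size limit, and the total volume is modest for a table that partitions automatically:

```python
users = 5_000_000   # expected regular users (from the question)
item_kb = 50        # per-user preference item size in KB (from the question)

total_gb = users * item_kb / 1_000_000   # KB -> GB (decimal units)
print(f"~{total_gb:.0f} GB of preference data across all users")  # ~250 GB
```

Roughly 250 GB accessed by per-user key is a natural key-value workload; STS with Web Identity Federation and fine-grained access control lets each device read and write only its own item, with no proxy tier to operate or scale.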
332. Your team has a Tomcat-based Java application you need to deploy into development, test and
production environments. After some research, you opt to use Elastic Beanstalk due to its tight integration
with your developer tools, and RDS due to its ease of management. Your QA team lead points out that you
need to roll a sanitized set of production data into your environment on a nightly basis. Similarly, other
software teams in your org want access to that same restored data via their EC2 instances in your VPC.
The optimal setup for persistence and security that meets the above requirements would be the following.
A. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow
access to it from hosts in your application subnets.
B. Create your RDS instance separately and add its IP address to your application’s DB connection strings
in your code. Alter its security group to allow access to it from hosts within your VPC’s IP address block.
C. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an
environment variable. Create a security group for client machines and add it as a valid source for DB traffic
to the security group of the RDS instance itself.
D. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an
environment variable. Alter its security group to allow access to it from hosts in your application subnets.
Answer: C
Explanation:
An RDS instance created inside an Elastic Beanstalk environment is tied to that environment’s lifecycle, so
the database must be created separately to persist. Referencing a client security group as the allowed
source, and the DNS name rather than an IP address, is the secure and durable way to share access.
333. You are looking to migrate your Development (Dev) and Test environments to AWS. You have
decided to use separate AWS accounts to host each environment. You plan to link each account’s bill to a
Master AWS account using Consolidated Billing. To make sure you keep within budget, you would like to
implement a way for administrators in the Master account to have access to stop, delete and/or terminate
resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.
A. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the
Dev and Test accounts that grant the Master account access to the resources in the account by inheriting
permissions from the Master account.
B. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to
the Dev and Test accounts.
C. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that
have full Admin permissions and grant the Master account access.
D. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to
resources in the Dev and Test accounts.
Answer: C
Explanation:
Bucket Owner Granting Cross-account Permission to Objects It Does Not Own
In this example scenario, you own a bucket and you have enabled other AWS accounts to upload objects.
That is, your bucket can have objects that other AWS accounts own.
Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of
who the owner is, to a user in another account. For example, that user could be a billing application that
needs to access object metadata. There are two core issues:
The bucket owner has no permissions on those objects created by other AWS accounts. So for the bucket
owner to grant permissions on objects it does not own, the object owner, the AWS account that created the
objects, must first grant permission to the bucket owner. The bucket owner can then delegate those
permissions.
Bucket owner account can delegate permissions to users in its own account but it cannot delegate
permissions to other AWS accounts, because cross-account delegation is not supported.
In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with
permission to access objects, and grant another AWS account permission to assume the role temporarily
enabling it to access objects in the bucket.
Background: Cross-Account Permissions and Using IAM Roles
IAM roles enable several scenarios to delegate access to your resources, and cross-account access is
one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily
delegate object access cross-account to users in another AWS account, Account C. Each IAM role you
create has two policies attached to it:
A trust policy identifying another AWS account that can assume the role.
An access policy defining what permissions (for example, s3:GetObject) are allowed when someone
assumes the role. For a list of permissions you can specify in a policy, see Specifying Permissions in a
Policy.
The AWS account identified in the trust policy then grants its user permission to assume the role. The user
can then do the following to access objects:
Assume the role and, in response, get temporary security credentials. Using the temporary security
credentials, access the objects in the bucket.
For more information about IAM roles, go to Roles (Delegation and Federation) in IAM User Guide. The
following is a summary of the walkthrough steps:
Account A administrator user attaches a bucket policy granting Account B conditional permission to upload
objects.
Account A administrator creates an IAM role, establishing trust with Account C, so users in that account
can access Account A. The access policy attached to the role limits what a user in Account C can do when
the user accesses Account A.
Account B administrator uploads an object to the bucket owned by Account A, granting full-control
permission to the bucket owner.
Account C administrator creates a user and attaches a user policy that allows the user to assume the role.
User in Account C first assumes the role, which returns the user temporary security credentials. Using
those temporary credentials, the user then accesses objects in the bucket.
For this example, you need three accounts. The following table shows how we refer to these accounts and
the administrator users in these accounts. Per IAM guidelines (see About Using an
Administrator User to Create Resources and Grant Permissions) we do not use the account root
credentials in this walkthrough. Instead, you create an administrator user in each account and use those
credentials in creating resources and granting them permissions.
334. Your customer is willing to consolidate their log streams (access logs, application logs, security logs,
etc.) in one single system. Once consolidated, the customer wants to analyze these logs in real time based
on heuristics. From time to time, the customer needs to validate heuristics, which requires going back to
data samples extracted from the last 12 hours.
What is the best approach to meet your customer’s requirements?
A. Send all the log events to Amazon SQS. Setup an Auto Scaling group of EC2 servers to consume the
logs and apply the heuristics.
B. Send all the log events to Amazon Kinesis; develop a client process to apply heuristics on the logs.
C. Configure Amazon CloudTrail to receive custom logs; use EMR to apply heuristics to the logs.
D. Setup an Auto Scaling group of EC2 syslogd servers, store the logs on S3, and use EMR to apply
heuristics on the logs.
Answer: B
Explanation:
The throughput of an Amazon Kinesis stream is designed to scale without limits via increasing the number
of shards within a stream. However, there are certain limits you should keep in mind while using Amazon
Kinesis Streams:
By default, Records of a stream are accessible for up to 24 hours from the time they are added to the
stream. You can raise this limit to up to 7 days by enabling extended data retention.
The maximum size of a data blob (the data payload before Base64-encoding) within one record is 1
megabyte (MB).
Each shard can support up to 1000 PUT records per second.
For more information about other API level limits, see Amazon Kinesis Streams Limits.
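The per-shard limits quoted above translate directly into capacity planning. A sketch with a hypothetical log volume (the 50,000 events/s and 2 KB figures are illustrative assumptions, not from the question; the per-shard limits are the ones listed above):

```python
import math

# Hypothetical workload (illustrative numbers, not from the question):
events_per_s = 50_000      # consolidated log events per second
event_kb = 2               # average event size in KB

# Per-shard write limits from the Kinesis documentation quoted above:
shard_records_per_s = 1_000
shard_kb_per_s = 1_000     # 1 MB/s ingest per shard

by_records = math.ceil(events_per_s / shard_records_per_s)           # 50
by_throughput = math.ceil(events_per_s * event_kb / shard_kb_per_s)  # 100
shards = max(by_records, by_throughput)                              # bound by the tighter limit
print(f"{shards} shards needed")   # throughput, not record count, dominates here
```

With extended data retention enabled (up to 7 days at the time of this question), the same stream also satisfies the 12-hour look-back requirement without a second storage system.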
335. You deployed your company website using Elastic Beanstalk and you enabled log file rotation to S3.
An Elastic MapReduce job periodically analyzes the logs on S3 to build a usage dashboard that you
share with your CIO.
You recently improved overall performance of the website using CloudFront for dynamic content delivery,
with your website as the origin.
After this architectural change, the usage dashboard shows that the traffic on your website dropped by an
order of magnitude. How do you fix your usage dashboard?
A. Enable CloudFront to deliver access logs to S3 and use them as input to the Elastic MapReduce job.
B. Turn on CloudTrail and use trail log files on S3 as input to the Elastic MapReduce job.
C. Change your log collection process to use CloudWatch ELB metrics as input to the Elastic MapReduce
job.
D. Use the Elastic Beanstalk “Rebuild Environment” option to update log delivery to the Elastic MapReduce
job.
E. Use the Elastic Beanstalk “Restart App Server(s)” option to update log delivery to the Elastic MapReduce
job.
Answer: A
Explanation:
Requests served from CloudFront’s cache never reach the origin, so the origin’s web logs undercount
traffic. Enabling CloudFront access log delivery to S3 captures every viewer request and restores the full
picture for the Elastic MapReduce job.
336. You are running a successful multitier web application on AWS, and your marketing department has
asked you to add a reporting tier to the application. The reporting tier will aggregate and publish status
reports every 30 minutes from user-generated information that is being stored in your web application’s
database. You are currently running a Multi-AZ RDS MySQL instance for the database tier. You have also
implemented ElastiCache as a database caching layer between the application tier and database tier.
Please select the answer that will allow you to successfully implement the reporting tier with as little impact
as possible to your database.
A. Continually send transaction logs from your master database to an S3 bucket and generate the reports
off the S3 bucket using S3 byte range requests.
B. Generate the reports by querying the synchronously replicated standby RDS MySQL instance
maintained through Multi-AZ.
C. Launch an RDS Read Replica connected to your Multi-AZ master database and generate reports by
querying the Read Replica.
D. Generate the reports by querying the ElastiCache database caching tier.
Answer: C
Explanation:
Amazon RDS allows you to use read replicas with Multi-AZ deployments. In Multi-AZ deployments for
MySQL, Oracle, SQL Server, and PostgreSQL, the data in your primary DB Instance is synchronously
replicated to a standby instance in a different Availability Zone (AZ). Because of their synchronous
replication, Multi-AZ deployments for these engines offer greater data durability benefits than do read
replicas. (In all Amazon RDS for Aurora deployments, your data is automatically replicated across 3
Availability Zones.)
You can use Multi-AZ deployments and read replicas in conjunction to enjoy the complementary benefits
of each. You can simply specify that a given Multi-AZ deployment is the source DB Instance for your Read
replicas. That way you gain both the data durability and availability benefits of Multi-AZ deployments and
the read scaling benefits of read replicas.
Note that for Multi-AZ deployments, you have the option to create your read replica in an AZ other than that
of the primary and the standby for even more redundancy. You can identify the AZ corresponding to your
standby by looking at the “Secondary Zone” field of your DB Instance in the AWS Management Console.
337. A web company is looking to implement an intrusion detection and prevention system into their
deployed VPC. This platform should have the ability to scale to thousands of instances running inside of the
VPC. How should they architect their solution to achieve these goals?
A. Configure an instance with monitoring software and the elastic network interface (ENI) set to
promiscuous mode packet sniffing to see all traffic across the VPC.
B. Create a second VPC and route all traffic from the primary application VPC through the second VPC,
where the scalable virtualized IDS/IPS platform resides.
C. Configure servers running in the VPC using the host-based ‘route’ commands to send all traffic through
the platform to a scalable virtualized IDS/IPS.
D. Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS
platform for inspection.
Answer: D
Explanation:
A VPC does not deliver promiscuous-mode traffic to an ENI, and funneling every host through a single
inline device does not scale to thousands of instances. A host-based agent on each instance that forwards
traffic to the IDS/IPS platform scales with the fleet.
338. A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load
Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as data store. The
main web application best runs on m2.xlarge instances since it is highly memory-bound. Each new
deployment requires semi-automated creation and testing of a new AMI for the application servers, which
takes quite a while and is therefore only done once per week.
Recently, a new chat feature has been implemented in Node.js and needs to be integrated in the
architecture. First tests show that the new component is CPU-bound. Because the company has some
experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks
as an application life cycle tool to simplify management of the application and reduce the deployment
cycles.
What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most
cost-efficient and flexible way?
A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe
B. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe
C. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe
D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes
Answer: B
Explanation:
Both components belong to the same application, so a single stack suffices. Two layers accommodate the
different instance types and scaling profiles (memory-bound Tomcat, CPU-bound Node.js chat), and only
the chat module requires a custom recipe.
339. Your firm has uploaded a large amount of aerial image data to S3. In the past, in your on-premises
environment, you used a dedicated group of servers to batch process this data and used RabbitMQ, an
open source messaging system, to get job information to the servers. Once processed, the data would go
to tape and be shipped offsite. Your manager told you to stay with the current design, and leverage AWS
archival storage and messaging services to minimize cost. Which is correct?
A. Use SQS for passing job messages; use CloudWatch alarms to terminate EC2 worker instances when
they become idle. Once data is processed, change the storage class of the S3 objects to Reduced
Redundancy Storage.
B. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in
SQS. Once data is processed, change the storage class of the S3 objects to Reduced Redundancy
Storage.
C. Setup Auto-Scaled workers triggered by queue depth that use spot instances to process messages in
SQS. Once data is processed, change the storage class of the S3 objects to Glacier.
D. Use SNS to pass job messages; use CloudWatch alarms to terminate spot worker instances when they
become idle. Once data is processed, change the storage class of the S3 objects to Glacier.
Answer: C
Explanation:
SQS (not SNS) replaces RabbitMQ as the job queue, spot instances scaled on queue depth minimize
compute cost, and Glacier (not Reduced Redundancy Storage) is the archival storage class that replaces
tape.
340. What does Amazon 53 stand for?
A. Simple Storage Solution.
B. Storage Storage Storage (triple redundancy Storage).
C. Storage Server Solution.
D. Simple Storage Service.
Answer: D
341. You must assign each server to at least _ security group
A. 3
B. 2
C. 4
D. 1
Answer: D
342. Before I delete an EBS volume, what can I do if I want to recreate the volume later?
A. Create a copy of the EBS volume (not a snapshot)
B. Store a snapshot of the volume
C. Download the content to an EC2 instance
D. Back up the data to a physical disk
Answer: B
343. Select the most correct answer:
The device name /dev/sda1 (within Amazon EC2) is _
A. Possible for EBS volumes
B. Reserved for the root device
C. Recommended for EBS volumes
D. Recommended for instance store volumes
Answer: B
344. If I want an instance to have a public IP address, which IP address should I use?
A. Elastic IP Address
B. Class B IP Address
C. Class A IP Address
D. Dynamic IP Address
Answer: A
345. What does RRS stand for when talking about S3?
A. Redundancy Removal System
B. Relational Rights Storage
C. Regional Rights Standard
D. Reduced Redundancy Storage
Answer: D
346. All Amazon EC2 instances are assigned two IP addresses at launch. Which of these can only be
reached from within the Amazon EC2 network?
A. Multiple IP address
B. Public IP address
C. Private IP address
D. Elastic IP Address
Answer: C
347. What does Amazon SWF stand for?
A. Simple Web Flow
B. Simple Work Flow
C. Simple Wireless Forms
D. Simple Web Form
Answer: B
348. What is the Reduced Redundancy option in Amazon S3?
A. Less redundancy for a lower cost.
B. It doesn’t exist in Amazon S3, but in Amazon EBS.
C. It allows you to destroy any copy of your files outside a specific jurisdiction.
D. It doesn’t exist at all
Answer: A
349. Fill in the blanks: Resources that are created in AWS are identified by a unique identifier called an _
A. Amazon Resource Number
B. Amazon Resource Nametag
C. Amazon Resource Name
D. Amazon Resource Namespace
Answer: C
350. If I write the below command, what does it do? ec2-run ami-e3a5408a -n 20 -g appserver
A. Start twenty instances as members of appserver group.
B. Creates 20 rules in the security group named appserver
C. Terminate twenty instances as members of appserver group.
D. Start 20 security groups
Answer: A
351. While creating an Amazon RDS DB, your first task is to set up a DB _ that controls what IP addresses
or EC2 instances have access to your DB Instance.
A. Security Pool
B. Secure Zone
C. Security Token Pool
D. Security Group
Answer: D
352. When you run a DB Instance as a Multi-AZ deployment, the ” _ ” serves database writes and reads
A. secondary
B. backup
C. standby
D. primary
Answer: D
353. Every user you create in the IAM system starts with _
A. Partial permissions
B. Full permissions
C. No permissions
Answer: C
354. Can you create IAM security credentials for existing users?
A. Yes, existing users can have security credentials associated with their account.
B. No, IAM requires that all users who have credentials set up are not existing users
C. No, security credentials are created within GROUPS, and then users are associated to GROUPS at a
later time.
D. Yes, but only IAM credentials, not ordinary security credentials.
Answer: A
355. What does Amazon EC2 provide?
A. Virtual servers in the Cloud.
B. A platform to run code (Java, PHP, Python), paying on an hourly basis.
C. Computer Clusters in the Cloud.
D. Physical servers, remotely managed by the customer.
Answer: A
356. Amazon SWF is designed to help users
A. Design graphical user interface interactions
B. Manage user identification and authorization
C. Store Web content
D. Coordinate synchronous and asynchronous tasks which are distributed and fault tolerant.
Answer: D
357. Can I control if and when a MySQL-based RDS instance is upgraded to new supported versions?
A. No
B. Only in VPC
C. Yes
Answer: C
358. If I modify a DB Instance or the DB parameter group associated with the instance, should I reboot the
instance for the changes to take effect?
A. No
B. Yes
Answer: B
359. When you view the block device mapping for your instance, you can see only the EBS volumes, not
the instance store volumes.
A. Depends on the instance type
B. FALSE
C. Depends on whether you use API call
D. TRUE
Answer: D
360. By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated. You can modify this behavior by changing the value of the flag _ to false when you launch the instance.
A. Delete On Termination
B. Remove On Deletion
C. Remove On Termination
D. Terminate On Deletion
Answer: A
361. What are the initial settings of a user-created security group?
A. Allow all inbound traffic and Allow no outbound traffic
B. Allow no inbound traffic and Allow no outbound traffic
C. Allow no inbound traffic and Allow all outbound traffic
D. Allow all inbound traffic and Allow all outbound traffic
Answer: C
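The initial rule set of a user-created security group can be sketched as plain data mirroring the shape the EC2 DescribeSecurityGroups API returns (a hypothetical group, for illustration): inbound is empty, outbound allows everything.

```python
# Default rules of a newly created security group, expressed as plain
# data in the EC2 API's IpPermissions shape (illustrative, not a live
# API response).
default_security_group_rules = {
    "IpPermissions": [],  # inbound: nothing allowed until you add rules
    "IpPermissionsEgress": [
        {
            "IpProtocol": "-1",  # all protocols
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # all destinations
        }
    ],
}

print(len(default_security_group_rules["IpPermissions"]))  # 0 inbound rules
```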
362. Will my standby RDS instance be in the same Region as my primary?
A. Only for Oracle RDS types
B. Yes
C. Only if configured at launch
D. No
Answer: B
363. What does Amazon Elastic Beanstalk provide?
A. A scalable storage appliance on top of Amazon Web Services.
B. An application container on top of Amazon Web Services.
C. A service by this name doesn’t exist.
D. A scalable cluster of EC2 instances.
Answer: B
364. True or False: When using IAM to control access to your RDS resources, the key names that can be
used are case sensitive. For example, aws:CurrentTime is NOT equivalent to AWS:currenttime.
A. TRUE
B. FALSE
Answer: A
365. What will be the status of the snapshot until the snapshot is complete?
A. running
B. working
C. progressing
D. pending
Answer: D
366. Can we attach an EBS volume to more than one EC2 instance at the same time?
A. No
B. Yes.
C. Only EC2-optimized EBS volumes.
D. Only in read mode.
Answer: A
367. True or False: Automated backups are enabled by default for a new DB Instance.
A. TRUE
B. FALSE
Answer: A
368. What does the AWS Storage Gateway provide?
A. It allows you to integrate on-premises IT environments with Cloud Storage.
B. A direct encrypted connection to Amazon S3.
C. It’s a backup solution that provides on-premises Cloud storage.
D. It provides an encrypted SSL endpoint for backups in the Cloud.
Answer: A
369. Amazon RDS automated backups and DB Snapshots are currently supported for only the _ _ storage
engine
A. InnoDB
B. MyISAM
Answer: A
370. How many relational database engines does RDS currently support?
A. Three: MySQL, Oracle and Microsoft SQL Server.
B. Just two: MySQL and Oracle.
C. Five: MySQL, PostgreSQL, MongoDB, Cassandra and SQLite.
D. Just one: MySQL.
Answer: A
371. Fill in the blanks: The base URI for all requests for instance metadata is _ _
A. http://254.169.169.254/latest/
B. http://169.169.254.254/latest/
C. http://127.0.0.1/latest/
D. http://169.254.169.254/latest/
Answer: D
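As a small sketch of the correct base URI, the following Python builds instance-metadata URLs; no request is made here, since the 169.254.169.254 address is only reachable from inside a running EC2 instance.

```python
from urllib.parse import urljoin

# Base URI for the EC2 instance metadata service; the link-local address
# 169.254.169.254 is reachable only from within an EC2 instance.
METADATA_BASE = "http://169.254.169.254/latest/"

def metadata_url(path: str) -> str:
    """Build the full metadata URL for a given path, e.g. 'meta-data/instance-id'."""
    return urljoin(METADATA_BASE, path)

print(metadata_url("meta-data/instance-id"))
# -> http://169.254.169.254/latest/meta-data/instance-id
```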
372. While creating the snapshots using the command line tools, which command should I be using?
A. ec2-deploy-snapshot
B. ec2-fresh-snapshot
C. ec2-create-snapshot
D. ec2-new-snapshot
Answer: C
373. Typically, you want your application to check whether a request generated an error before you spend
any time processing results. The easiest way to find out if an error occurred is to look for an _ node in the
response from the Amazon RDS API.
A. Incorrect
B. Error
C. FALSE
Answer: B
374. What are the two permission types used by AWS?
A. Resource-based and Product-based
B. Product-based and Service-based
C. Service-based
D. User-based and Resource-based
Answer: D
375. In Amazon CloudWatch, which metric should you check to ensure that your DB instance has
enough free storage space?
A. Free Storage
B. Free Storage Space
C. Free Storage Volume
D. Free DB Storage Space
Answer: B
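A sketch of how you might query this metric: the parameters below follow the shape of CloudWatch’s GetMetricStatistics call (built as a plain dict; with boto3 you would pass them as keyword arguments to `get_metric_statistics`, and `mydbinstance` is an illustrative DB instance identifier).

```python
from datetime import datetime, timedelta

now = datetime.utcnow()

# Parameters for a CloudWatch GetMetricStatistics request on the RDS
# FreeStorageSpace metric (reported in bytes). The instance identifier
# is an assumption for illustration.
params = {
    "Namespace": "AWS/RDS",
    "MetricName": "FreeStorageSpace",
    "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": "mydbinstance"}],
    "StartTime": now - timedelta(hours=1),
    "EndTime": now,
    "Period": 300,  # 5 minutes, matching basic (free-tier) monitoring
    "Statistics": ["Average"],
}

print(params["Namespace"], params["MetricName"])
```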
376. Amazon RDS DB snapshots and automated backups are stored in
A. Amazon S3
B. Amazon EBS Volume
C. Amazon RDS
D. Amazon EMR
Answer: A
377. What is the maximum key length of a tag?
A. 512 Unicode characters
B. 64 Unicode characters
C. 256 Unicode characters
D. 128 Unicode characters
Answer: D
378. Groups can’t _.
A. be nested more than 3 levels
B. be nested at all
C. be nested more than 4 levels
D. be nested more than 2 levels
Answer: B
379. You must increase storage size in increments of at least _ %
A. 40
B. 20
C. 50
D. 10
Answer: D
380. Changes to the backup window take effect _ _
A. from the next billing cycle
B. after 30 minutes
C. immediately
D. after 24 hours
Answer: C
381. Using Amazon CloudWatch’s Free Tier, what is the frequency of metric updates that you receive?
A. 5 minutes
B. 500 milliseconds.
C. 30 seconds
D. 1 minute
Answer: A
382. Which is the default region in AWS?
A. eu-west-1
B. us-east-1
C. us-east-2
D. ap-southeast-1
Answer: B
383. What are the Amazon EC2 API tools?
A. They don’t exist. The Amazon EC2 AMI tools, instead, are used to manage permissions.
B. Command-line tools to the Amazon EC2 web service.
C. They are a set of graphical tools to manage EC2 instances.
D. They don’t exist. The Amazon API tools are a client interface to Amazon Web Services.
Answer: B
384. What are the two types of licensing options available for using Amazon RDS for Oracle?
A. BYOL and Enterprise License
B. BYOL and License Included
C. Enterprise License and License Included
D. Role based License and License Included
Answer: B
385. What does a “Domain” refer to in Amazon SWF?
A. A security group in which only tasks inside can communicate with each other
B. A special type of worker
C. A collection of related Workflows
D. The DNS record for the Amazon SWF service
Answer: C
386. EBS Snapshots occur _
A. Asynchronously
B. Synchronously
C. Weekly
Answer: A
387. Disabling automated backups _ disable the point-in-time recovery.
A. if configured to can
B. will never
C. will
Answer: C
388. Out of the striping options available for EBS volumes, which one has the following disadvantage:
‘Doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you’re
mirroring all writes to a pair of volumes, limiting how much you can stripe.’?
A. RAID 0
B. RAID 1+0 (RAID 10)
C. RAID 1
D. RAID
Answer: B
389. Is creating a Read Replica of another Read Replica supported?
A. Only in certain regions
B. Only with MSSQL based RDS
C. Only for Oracle RDS types
D. No
Answer: D
390. Can Amazon S3 uploads resume on failure or do they need to restart?
A. Restart from beginning
B. You can resume them, if you flag the “resume on failure” option before uploading.
C. Resume on failure
D. Depends on the file size
Answer: C
391. Which of the following cannot be used in Amazon EC2 to control who has access to specific Amazon
EC2 instances?
A. Security Groups
B. IAM System
C. SSH keys
D. Windows passwords
Answer: B
392. Fill in the blanks: _ let you categorize your EC2 resources in different ways, for example, by purpose,
owner, or environment.
A. wildcards
B. pointers
C. Tags
D. special filters
Answer: C
393. How can I change the security group membership for interfaces owned by other AWS services, such as Elastic
Load Balancing?
A. By using the service-specific console or API/CLI commands
B. None of these
C. Using Amazon EC2 API/CLI
D. using all these methods
Answer: A