Updated: Jul 17, 2020
1. You are trying to launch an EC2 instance; however, the instance seems to go into a terminated status immediately. What would probably not be a reason that this is happening?
A. The AMI is missing a required part.
B. The snapshot is corrupt.
C. You need to create storage in EBS first.
D. You’ve reached your volume limit.
Amazon EC2 provides a virtual computing environment, known as an instance. After you launch an instance, AWS recommends that you check its status to confirm that it goes from the pending status to the running status, not the terminated status. The following are a few reasons why an Amazon EBS-backed instance might immediately terminate:
You’ve reached your volume limit. The AMI is missing a required part. The snapshot is corrupt. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html
2. You have set up an Auto Scaling group. The cool down period for the Auto Scaling group is 7 minutes. The first instance is launched after 3 minutes, while the second instance is launched after 4 minutes. How many minutes after the first instance is launched will Auto Scaling accept another scaling activity request?
A. 11 minutes
B. 7 minutes
C. 10 minutes
D. 14 minutes
If an Auto Scaling group is launching more than one instance, the cooldown period for each instance starts after that instance is launched. The group remains locked until the last instance that was launched has completed its cooldown period. In this case the cooldown period for the first instance starts after 3 minutes and finishes at the 10th minute (3 + 7 cooldown), while for the second instance it starts at the 4th minute and finishes at the 11th minute (4 + 7 cooldown). Thus, the Auto Scaling group will accept another request only after 11 minutes. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
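The cooldown arithmetic in the explanation above can be sketched as a small calculation. The launch times and cooldown period below come straight from the question:

```python
# Cooldown arithmetic for the Auto Scaling example above.
# Each instance's cooldown starts when that instance launches;
# the group stays locked until the last cooldown has expired.
COOLDOWN = 7  # minutes, from the question

launch_times = [3, 4]  # minutes at which each instance was launched

# The group accepts a new scaling activity once every cooldown ends.
unlock_time = max(t + COOLDOWN for t in launch_times)
print(unlock_time)  # 11
```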
3. In Amazon EC2 Container Service components, what is the name of a logical grouping of container instances on which you can place tasks?
A. A cluster
B. A container instance
C. A container
D. A task definition
Amazon ECS contains the following components:
- A Cluster is a logical grouping of container instances that you can place tasks on.
- A Container instance is an Amazon EC2 instance that is running the Amazon ECS agent and has been registered into a cluster.
- A Task definition is a description of an application that contains one or more container definitions.
- A Scheduler is the method used for placing tasks on container instances.
- A Service is an Amazon ECS service that allows you to run and maintain a specified number of instances of a task definition simultaneously.
- A Task is an instantiation of a task definition that is running on a container instance.
- A Container is a Linux container that was created as part of a task.
Reference: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
4. In the context of AWS support, why must an EC2 instance be unreachable for 20 minutes rather than allowing customers to open tickets immediately?
A. Because most reachability issues are resolved by automated processes in less than 20 minutes
B. Because all EC2 instances are unreachable for 20 minutes every day when AWS does routine maintenance
C. Because all EC2 instances are unreachable for 20 minutes when first launched
D. Because of all the reasons listed here
An EC2 instance must be unreachable for 20 minutes before opening a ticket, because most reachability issues are resolved by automated processes in less than 20 minutes and will not require any action on the part of the customer. If the instance is still unreachable after this time frame has passed, then you should open a case with support. Reference: https://aws.amazon.com/premiumsupport/faqs/
5. Can a user get a notification of each instance start / terminate configured with Auto Scaling?
A. Yes, if configured with the Launch Config
B. Yes, always
C. Yes, if configured with the Auto Scaling group
The user can get notifications using SNS if he has configured the notifications while creating the Auto Scaling group. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html
6. Amazon EBS provides the ability to create backups of any Amazon EC2 volume into what is known as
C. instance backups
Amazon allows you to make backups of the data stored in your EBS volumes through snapshots that can later be used to create a new EBS volume.
7. To specify a resource in a policy statement, in Amazon EC2, can you use its Amazon Resource Name (ARN)?
A. Yes, you can.
B. No, you can’t because EC2 is not related to ARN.
C. No, you can’t because you can’t specify a particular Amazon EC2 resource in an IAM policy.
D. Yes, you can but only for the resources that are not affected by the action.
Some Amazon EC2 API actions allow you to include specific resources in your policy that can be created or modified by the action. To specify a resource in the statement, you need to use its Amazon Resource Name (ARN). Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-ug.pdf
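As an illustration of the point above, here is a hypothetical IAM policy statement (shaped as a Python dict) that scopes an EC2 action to a single instance via its ARN. The region, account ID, and instance ID are placeholders, not real resources:

```python
# A hypothetical IAM policy statement scoping an EC2 action to one
# instance by its Amazon Resource Name (ARN). All identifiers below
# are placeholders for illustration only.
policy_statement = {
    "Effect": "Allow",
    "Action": "ec2:TerminateInstances",
    "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234example",
}
print(policy_statement["Resource"].startswith("arn:aws:ec2:"))  # True
```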
8. After you recommend Amazon Redshift to a client as an alternative solution to paying data warehouses to analyze his data, your client asks you to explain why you are recommending Redshift. Which of the following would be a reasonable response to his request?
A. It has high performance at scale as data and query complexity grows.
B. It prevents reporting and analytic processing from interfering with the performance of OLTP workloads.
C. You don’t have the administrative burden of running your own data warehouse and dealing with setup, durability, monitoring, scaling, and patching.
D. All answers listed are a reasonable response to his question
Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing you to use a wide range of familiar SQL clients. Data load speed scales linearly with cluster size, with integrations to Amazon S3, Amazon DynamoDB, Amazon Elastic MapReduce, Amazon Kinesis, or any SSH-enabled host. AWS recommends Amazon Redshift for customers who have a combination of needs, such as:
- High performance at scale as data and query complexity grows
- Desire to prevent reporting and analytic processing from interfering with the performance of OLTP workloads
- Large volumes of structured data to persist and query using standard SQL and existing BI tools
- Desire to avoid the administrative burden of running one’s own data warehouse and dealing with setup, durability, monitoring, scaling, and patching
9. One of the criteria for a new deployment is that the customer wants to use AWS Storage Gateway. However, you are not sure whether you should use gateway-cached volumes or gateway-stored volumes or even what the differences are. Which statement below best describes those differences?
A. Gateway-cached lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-stored enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3.
B. Gateway-cached is free whilst gateway-stored is not.
C. Gateway-cached is up to 10 times faster than gateway-stored.
D. Gateway-stored lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3.
Volume gateways provide cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers. The gateway supports the following volume configurations:
- Gateway-cached volumes: You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
- Gateway-stored volumes: If you need low-latency access to your entire data set, you can configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive off-site backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.
Reference: http://docs.aws.amazon.com/storagegateway/latest/userguide/volume-gateway.html
10. A user is launching an EC2 instance in the US East region. Which of the below mentioned options is recommended by AWS with respect to the selection of the availability zone?
A. Always select the AZ while launching an instance
B. Always select the US-East-1-a zone for HA
C. Do not select the AZ; instead let AWS select the AZ
D. The user can never select the availability zone while launching an instance
When launching an instance with EC2, AWS recommends not selecting the availability zone (AZ); instead, accept the default Availability Zone. This enables AWS to select the best Availability Zone based on system health and available capacity. An Availability Zone should be specified only when launching additional instances, in order to place them in the same AZ as, or a different AZ from, the running instances.
11. A user is storing a large number of objects on AWS S3. The user wants to implement the search functionality among the objects. How can the user achieve this?
A. Use the indexing feature of S3.
B. Tag the objects with the metadata to search on that.
C. Use the query functionality of S3.
D. Make your own DB system which stores the S3 metadata for the search functionality.
In Amazon Web Services, AWS S3 does not provide any query facility. To retrieve a specific object the user needs to know the exact bucket / object key. In this case it is recommended to build your own DB system which manages the S3 metadata and key mapping. Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf
12. After setting up a Virtual Private Cloud (VPC) network, a more experienced cloud engineer suggests that to achieve low network latency and high network throughput you should look into setting up a placement group. You know nothing about this but begin to do some research about it and are especially curious about its limitations. Which of the below statements is wrong in describing the limitations of a placement group?
A. Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed.
B. A placement group can span multiple Availability Zones.
C. You can’t move an existing instance into a placement group.
D. A placement group can span peered VPCs
A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking. Placement groups have the following limitations:
- The name you specify for a placement group must be unique within your AWS account.
- A placement group can’t span multiple Availability Zones.
- Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed. We recommend using the same instance type for all instances in a placement group.
- You can’t merge placement groups. Instead, you must terminate the instances in one placement group, and then relaunch those instances into the other placement group.
- A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs. For more information about VPC peering connections, see VPC Peering in the Amazon VPC User Guide.
- You can’t move an existing instance into a placement group. You can create an AMI from your existing instance, and then launch a new instance from the AMI into a placement group.
13. What is a placement group in Amazon EC2?
A. It is a group of EC2 instances within a single Availability Zone.
B. It is the edge location of your web content.
C. It is the AWS region where you run the EC2 instance of your web content.
D. It is a group used to span multiple Availability Zones.
A placement group is a logical grouping of instances within a single Availability Zone.
14. You are migrating an internal server in your DC to an EC2 instance with an EBS volume. Your server disk usage is around 500 GB, so you just copied all your data to a 2 TB disk to be used with AWS Import/Export. Where will the data be imported once it arrives at Amazon?
A. to a 2TB EBS volume
B. to an S3 bucket with 2 objects of 1TB
C. to an 500GB EBS volume
D. to an S3 bucket as a 2TB snapshot
An import to Amazon EBS will have different results depending on whether the capacity of your storage device is less than or equal to 1 TB or greater than 1 TB. The maximum size of an Amazon EBS snapshot is 1 TB, so if the device image is larger than 1 TB, the image is chunked and stored on Amazon S3. The target location is determined based on the total capacity of the device, not the amount of data on the device. Reference: http://docs.aws.amazon.com/AWSImportExport/latest/DG/Concepts.html
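The sizing rule in the explanation above can be sketched as a small function. The 1 TB snapshot limit is from the referenced documentation; the function name is illustrative:

```python
# Sketch of the sizing rule described above: an AWS Import/Export job
# targeting Amazon EBS is routed by the *device capacity*, not by how
# much data is on it. Devices over 1 TB are chunked onto Amazon S3
# because an EBS snapshot maxes out at 1 TB.
def import_target(device_capacity_tb: float) -> str:
    if device_capacity_tb <= 1:
        return "EBS snapshot"
    return "S3 (device image stored in chunks)"

# The 2 TB disk from the question, holding only ~500 GB of data:
print(import_target(2))  # S3 (device image stored in chunks)
print(import_target(1))  # EBS snapshot
```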
15. A client needs you to import some existing infrastructure from a dedicated hosting provider to AWS to try and save on the cost of running his current website. He also needs an automated process that manages backups, software patching, automatic failure detection, and recovery. You are aware that his existing set up currently uses an Oracle database. Which of the following AWS databases would be best for accomplishing this task?
A. Amazon RDS
B. Amazon Redshift
C. Amazon SimpleDB
D. Amazon ElastiCache
Amazon RDS gives you access to the capabilities of a familiar MySQL, Oracle, SQL Server, or PostgreSQL database engine. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery.
16. True or false: A VPC contains multiple subnets, where each subnet can span multiple Availability Zones.
A. This is true only if requested during the set-up of VPC.
B. This is true.
C. This is false.
D. This is true only for US regions.
A VPC can span several Availability Zones. In contrast, a subnet must reside within a single Availability Zone. Reference: https://aws.amazon.com/vpc/faqs/
17. An edge location refers to which Amazon Web Service?
A. An edge location is referred to the network configured within a Zone or Region
B. An edge location is an AWS Region
C. An edge location is the location of the data center used for Amazon CloudFront.
D. An edge location is a Zone within an AWS Region
18. You are looking at ways to improve some existing infrastructure, as it seems a lot of engineering resources are being taken up with basic management and monitoring tasks and the costs seem to be excessive. You are thinking of deploying Amazon ElastiCache to help. Which of the following statements is true in regard to ElastiCache?
A. You can improve load and response times to user actions and queries however the cost associated with scaling web applications will be more.
B. You can’t improve load and response times to user actions and queries, but you can reduce the cost associated with scaling web applications.
C. You can improve load and response times to user actions and queries however the cost associated with scaling web applications will remain the same.
D. You can improve load and response times to user actions and queries and also reduce the cost associated with scaling web applications.
Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system, instead of relying entirely on slower disk-based databases. The service simplifies and offloads the management, monitoring and operation of in-memory cache environments, enabling your engineering resources to focus on developing applications. Using Amazon ElastiCache, you can not only improve load and response times to user actions and queries, but also reduce the cost associated with scaling web applications. Reference: https://aws.amazon.com/elasticache/faqs/
19. Do Amazon EBS volumes persist independently from the running life of an Amazon EC2 instance?
A. Yes, they do but only if they are detached from the instance
B. No, you cannot attach EBS volumes to an instance.
C. No, they are dependent.
D. Yes, they do.
An Amazon EBS volume behaves like a raw, unformatted, external block device that you can attach to a single instance. The volume persists independently from the running life of an Amazon EC2 instance. Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/Storage.html
20. Your supervisor has asked you to build a simple file synchronization service for your department. He doesn’t want to spend too much money and he wants to be notified of any changes to files by email. What do you think would be the best Amazon service to use for the email solution?
A. Amazon SES
B. Amazon CloudSearch
C. Amazon SWF
D. Amazon AppStream
File change notifications can be sent via email to users following the resource with Amazon Simple Email Service (Amazon SES), an easy-to-use, cost-effective email solution.
21. Does DynamoDB support in-place atomic updates?
C. It does support in-place non-atomic updates
D. It is not defined
DynamoDB supports in-place atomic updates.
22. Your manager has just given you access to multiple VPN connections that someone else has recently set up between all your company’s offices. She needs you to make sure that the communication between the VPNs is secure. Which of the following services would be best for providing a low-cost hub-and-spoke model for primary or backup connectivity between these remote offices?
A. Amazon CloudFront
B. AWS Direct Connect
C. AWS CloudHSM
D. AWS VPN CloudHub
If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html
23. Amazon EC2 provides a ____. It is an HTTP or HTTPS request that uses the HTTP verbs GET or POST.
A. web database
B. .net framework
C. Query API
D. C library
Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action.
24. In Amazon AWS, which of the following statements is true of key pairs?
A. Key pairs are used only for Amazon SDKs.
B. Key pairs are used only for Amazon EC2 and Amazon CloudFront.
C. Key pairs are used only for Elastic Load Balancing and AWS IAM.
D. Key pairs are used for all Amazon services.
Key pairs consist of a public and private key, where you use the private key to create a digital signature, and then AWS uses the corresponding public key to validate the signature. Key pairs are used only for Amazon EC2 and Amazon CloudFront. Reference: http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html
25. Does Amazon DynamoDB support both increment and decrement atomic operations?
A. Only increment, since decrement operations are inherently impossible with DynamoDB’s data model.
B. No, neither increment nor decrement operations.
C. Yes, both increment and decrement operations.
D. Only decrement, since increment operations are inherently impossible with DynamoDB’s data model.
Amazon DynamoDB supports increment and decrement atomic operations. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
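As a sketch of what such an atomic counter update looks like in practice, here are parameters shaped like those you might pass to boto3's `Table.update_item`. The table key and attribute names are made up for illustration:

```python
# A sketch of an atomic counter update in DynamoDB. The ADD action in
# an UpdateExpression adjusts a numeric attribute in place, so +1
# increments and -1 decrements atomically. Names are placeholders.
increment = {
    "Key": {"page_id": "home"},
    "UpdateExpression": "ADD view_count :delta",
    "ExpressionAttributeValues": {":delta": 1},  # use -1 to decrement
}
print(increment["UpdateExpression"])  # ADD view_count :delta
```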
26. An organization has three separate AWS accounts, one each for development, testing, and production. The organization wants the testing team to have access to certain AWS resources in the production account. How can the organization achieve this?
A. It is not possible to access resources of one account with another account.
B. Create the IAM roles with cross account access.
C. Create the IAM user in a test account, and allow it access to the production environment with the IAM
D. Create the IAM users with cross account access.
An organization has multiple AWS accounts to isolate a development environment from a testing or production environment. At times the users from one account need to access resources in the other account, such as promoting an update from the development environment to the production environment. In this case the IAM role with cross account access will provide a solution. Cross account access lets one account share access to its resources with users in the other AWS accounts. Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
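To make the cross-account role concrete, here is a hypothetical trust policy (as a Python dict) for a role in the production account that lets principals from the testing account assume it. The account ID is a placeholder:

```python
# A hypothetical trust policy for an IAM role in the production account
# that allows principals in the testing account to assume it via STS.
# The account ID below is a placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # testing account
        "Action": "sts:AssumeRole",
    }],
}
print(trust_policy["Statement"][0]["Action"])  # sts:AssumeRole
```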
27. You need to import several hundred megabytes of data from a local Oracle database to an Amazon RDS DB instance. What does AWS recommend you use to accomplish this?
A. Oracle export/import utilities
B. Oracle SQL Developer
C. Oracle Data Pump
How you import data into an Amazon RDS DB instance depends on the amount of data you have and the number and variety of database objects in your database. For example, you can use Oracle SQL Developer to import a simple, 20 MB database; you would want to use Oracle Data Pump to import complex databases or databases that are several hundred megabytes or several terabytes in size. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html
28. A user has created an EBS volume with 1000 IOPS. What is the average IOPS that the user will get for most of the year as per EC2 SLA if the instance is attached to the EBS optimized instance?
As per the AWS SLA, if the instance is attached to an EBS-optimized instance, then the Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time in a given year. Thus, if the user has created a volume of 1000 IOPS, the user will get a minimum of 900 IOPS 99.9% of the time in a given year. Reference: http://aws.amazon.com/ec2/faqs/
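The "within 10%" figure above reduces to one line of arithmetic:

```python
# A Provisioned IOPS volume attached to an EBS-optimized instance is
# designed to deliver at least 90% of its provisioned IOPS 99.9% of
# the time, per the explanation above.
provisioned_iops = 1000
minimum_expected = provisioned_iops * 0.90
print(int(minimum_expected))  # 900
```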
29. You need to migrate a large amount of data into the cloud that you have stored on a hard disk and you decide that the best way to accomplish this is with AWS Import/Export and you mail the hard disk to AWS. Which of the following statements is incorrect in regards to AWS Import/Export?
A. It can export from Amazon S3
B. It can Import to Amazon Glacier
C. It can export from Amazon Glacier.
D. It can Import to Amazon EBS
AWS Import/Export supports:
- Import to Amazon S3
- Export from Amazon S3
- Import to Amazon EBS
- Import to Amazon Glacier
AWS Import/Export does not currently support export from Amazon EBS or Amazon Glacier.
30. You are in the process of creating a Route 53 DNS failover to direct traffic to two EC2 zones. Obviously if one fails, you would like Route 53 to direct traffic to the other region. Each region has an ELB with some instances being distributed. What is the best way for you to configure the Route 53 health check?
A. Route 53 doesn’t support ELB with an internal health check. You need to create your own Route 53 health check of the ELB
B. Route 53 natively supports ELB with an internal health check. Turn “Evaluate Target Health” off and “Associate with Health Check” on and Route 53 will use the ELB’s internal health check.
C. Route 53 doesn’t support ELB with an internal health check. You need to associate your resource record set for the ELB with your own health check
D. Route 53 natively supports ELB with an internal health check. Turn “Evaluate Target Health” on and “Associate with Health Check” off and Route 53 will use the ELB’s internal health check.
With DNS Failover, Amazon Route 53 can help detect an outage of your website and redirect your end users to alternate locations where your application is operating properly. When you enable this feature, Route 53 uses health checks (regularly making Internet requests to your application’s endpoints from multiple locations around the world) to determine whether each endpoint of your application is up or down. To enable DNS Failover for an ELB endpoint, create an Alias record pointing to the ELB and set the “Evaluate Target Health” parameter to true. Route 53 creates and manages the health checks for your ELB automatically. You do not need to create your own Route 53 health check of the ELB. You also do not need to associate your resource record set for the ELB with your own health check, because Route 53 automatically associates it with the health checks that Route 53 manages on your behalf. The ELB health check will also inherit the health of your backend instances behind that ELB.
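For illustration, here is what such an alias record could look like, shaped like the record set inside a Route 53 ChangeResourceRecordSets payload. The domain name, hosted zone ID, and ELB DNS name are placeholders:

```python
# A sketch of a Route 53 alias record pointing at an ELB, with
# EvaluateTargetHealth enabled so the record inherits the ELB's health.
# All names and IDs below are placeholders for illustration.
alias_record = {
    "Name": "www.example.com.",
    "Type": "A",
    "AliasTarget": {
        "HostedZoneId": "Z35SXDOTRQ7X7K",  # placeholder ELB hosted zone ID
        "DNSName": "my-elb-1234.us-east-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": True,  # the key setting for DNS failover
    },
}
print(alias_record["AliasTarget"]["EvaluateTargetHealth"])  # True
```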
31. A user wants to use an EBS-backed Amazon EC2 instance for a temporary job. Based on the input data, the job is most likely to finish within a week. Which of the following steps should be followed to terminate the instance automatically once the job is finished?
A. Configure the EC2 instance with a stop instance to terminate it.
B. Configure the EC2 instance with ELB to terminate the instance when it remains idle.
C. Configure the CloudWatch alarm on the instance that should perform the termination action once the instance is idle.
D. Configure the Auto Scaling schedule activity that terminates the instance after 7 days.
Auto Scaling can start and stop the instance at a pre-defined time. Here, the total running time is unknown. Thus, the user has to use the CloudWatch alarm, which monitors the CPU utilization. The user can create an alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 24 hours, signaling that it is idle and no longer in use. When the utilization is below the threshold limit, it will terminate the instance as a part of the instance action.
32. Which of the following is true of Amazon EC2 security groups?
A. You can modify the outbound rules for EC2-Classic.
B. You can modify the rules for a security group only if the security group controls the traffic for just one instance.
C. You can modify the rules for a security group only when a new instance is created.
D. You can modify the rules for a security group at any time.
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.
33. An Elastic IP address (EIP) is a static IP address designed for dynamic cloud computing. With an EIP, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. Your EIP is associated with your AWS account, not a particular EC2 instance, and it remains associated with your account until you choose to explicitly release it. By default, how many EIPs is each AWS account limited to on a per-region basis?
By default, all AWS accounts are limited to 5 Elastic IP addresses per region, because public (IPv4) Internet addresses are a scarce public resource. AWS strongly encourages you to use an EIP primarily for load balancing use cases, and to use DNS hostnames for all other inter-node communication. If you feel your architecture warrants additional EIPs, you would need to complete the Amazon EC2 Elastic IP Address Request Form and give reasons as to your need for additional addresses. Reference:
34. In Amazon EC2, partial instance-hours are billed .
A. per second used in the hour
B. per minute used
C. by combining partial segments into full hours
D. as full hours
Each partial instance-hour consumed is billed as a full hour. Reference: http://aws.amazon.com/ec2/faqs/
35. In EC2, what happens to the data in an instance store if an instance reboots (either intentionally or unintentionally)?
A. Data is deleted from the instance store for security reasons.
B. Data persists in the instance store.
C. Data is partially present in the instance store.
D. Data in the instance store will be lost.
The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data on instance store volumes is lost under the following circumstances:
- Failure of an underlying drive
- Stopping an Amazon EBS-backed instance
- Terminating an instance
36. You are setting up a VPC and you need to set up a public subnet within that VPC. Which following requirement must be met for this subnet to be considered a public subnet?
A. Subnet’s traffic is not routed to an internet gateway but has its traffic routed to a virtual private gateway.
B. Subnet’s traffic is routed to an internet gateway.
C. Subnet’s traffic is not routed to an internet gateway.
D. None of these answers can be considered a public subnet.
A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. You can configure your VPC: you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings. A subnet is a range of IP addresses in your VPC. You can launch AWS resources into a subnet that you select. Use a public subnet for resources that must be connected to the internet, and a private subnet for resources that won’t be connected to the internet. If a subnet’s traffic is routed to an internet gateway, the subnet is known as a public subnet. If a subnet doesn’t have a route to the internet gateway, the subnet is known as a private subnet. If a subnet doesn’t have a route to the internet gateway, but has its traffic routed to a virtual private gateway, the subnet is known as a VPN-only subnet.
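The classification rules in the explanation above can be sketched as a small function. A "route target" here is just the target of a route-table entry; the `igw-`/`vgw-` prefixes mirror AWS gateway ID conventions, and the example IDs are made up:

```python
# Sketch of the subnet classification rules described above, based on
# what a subnet's route table points to. Gateway IDs are placeholders.
def classify_subnet(route_targets: set) -> str:
    if any(t.startswith("igw-") for t in route_targets):
        return "public"    # has a route to an internet gateway
    if any(t.startswith("vgw-") for t in route_targets):
        return "VPN-only"  # routed to a virtual private gateway instead
    return "private"       # no route to the internet gateway

print(classify_subnet({"local", "igw-0abc"}))  # public
print(classify_subnet({"local", "vgw-0abc"}))  # VPN-only
print(classify_subnet({"local"}))              # private
```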
37. Can you specify the security group that you created for a VPC when you launch an instance in EC2-Classic?
A. No, you can specify the security group created for EC2-Classic when you launch a VPC instance.
D. No, you can specify the security group created for EC2-Classic to a non-VPC based instance only.
If you’re using EC2-Classic, you must use security groups created specifically for EC2-Classic. When you launch an instance in EC2-Classic, you must specify a security group in the same region as the instance. You can’t specify a security group that you created for a VPC when you launch an instance in EC2-Classic. Reference:
38. While using the EC2 GET requests as URLs, the ____ is the URL that serves as the entry point for the web service.
D. None of these
The endpoint is the URL that serves as the entry point for the web service. Reference: http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-query-api.html
39. You have been asked to build a data warehouse using Amazon Redshift. You know a little about it, including that it is a SQL data warehouse solution, and uses industry standard ODBC and JDBC connections and PostgreSQL drivers. However, you are not sure about what sort of storage it uses for database tables. What sort of storage does Amazon Redshift use for database tables?
A. InnoDB Tables
B. NDB data storage
C. Columnar data storage
D. NDB CLUSTER Storage
Amazon Redshift achieves efficient storage and optimum query performance through a combination of massively parallel processing, columnar data storage, and very efficient, targeted data compression encoding schemes. Columnar storage for database tables is an important factor in optimizing analytic query performance because it drastically reduces the overall disk I/O requirements and reduces the amount of data you need to load from disk.
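The I/O saving from columnar layout can be illustrated with a toy example (this is a simplified simulation, not Redshift internals; the table and column names are made up). A query that aggregates one column only needs that column's values, not every field of every row:

```python
# Illustrative sketch: columnar vs row-oriented scans for an analytic query.

rows = [
    {"id": 1, "region": "us-east-1", "sales": 100},
    {"id": 2, "region": "eu-west-1", "sales": 250},
    {"id": 3, "region": "us-east-1", "sales": 175},
]

# Row-oriented scan: every field of every row is read just to sum one column.
row_values_read = sum(len(r) for r in rows)            # 9 values touched

# Columnar layout: each column is stored contiguously, so the query
# reads only the blocks of the column it actually needs.
columns = {key: [r[key] for r in rows] for key in rows[0]}
col_values_read = len(columns["sales"])                # 3 values touched

print(sum(columns["sales"]), row_values_read, col_values_read)  # 525 9 3
```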
40. You are checking the workload on some of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes and it seems that the I/O latency is higher than you require. You should probably check the _____ to make sure that your application is not trying to drive more IOPS than you have provisioned.
A. Amount of IOPS that are available
B. Acknowledgement from the storage subsystem
C. Average queue length
D. Time it takes for the I/O operation to complete
In EBS, workload demand plays an important role in getting the most out of your General Purpose (SSD) and Provisioned IOPS (SSD) volumes. In order for your volumes to deliver the amount of IOPS that are available, they need to have enough I/O requests sent to them. There is a relationship between the demand on the volumes, the amount of IOPS that are available to them, and the latency of the request (the amount of time it takes for the I/O operation to complete). Latency is the true end-to-end client time of an I/O operation; in other words, when the client sends an I/O, how long does it take to get an acknowledgement from the storage subsystem that the I/O read or write is complete. If your I/O latency is higher than you require, check your average queue length to make sure that your application is not trying to drive more IOPS than you have provisioned. You can maintain high IOPS while keeping latency down by maintaining a low average queue length (which is achieved by provisioning more IOPS for your volume).
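The relationship between IOPS, latency, and queue length described above is a back-of-envelope application of Little's Law: the average number of I/Os in flight is roughly the I/O rate times the per-I/O latency. A quick sketch (the numbers are illustrative, not EBS limits):

```python
# Little's Law sketch: average queue length ≈ IOPS driven × per-I/O latency.

def avg_queue_length(iops, latency_seconds):
    return iops * latency_seconds

# Driving 3000 IOPS at 1 ms latency keeps roughly 3 I/Os queued...
print(avg_queue_length(3000, 0.001))   # 3.0
# ...while 10 ms latency at the same rate implies a queue of ~30 —
# a sign the application may be driving more IOPS than provisioned.
print(avg_queue_length(3000, 0.010))   # 30.0
```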
41. Which of the below mentioned options is not available when an instance is launched by Auto Scaling with EC2 Classic?
A. Public IP
B. Elastic IP
C. Private DNS
D. Private IP
Auto Scaling supports both EC2-Classic and EC2-VPC. When an instance is launched as a part of EC2-Classic, it will have a public IP and DNS as well as a private IP and DNS. An Elastic IP, however, is not assigned automatically.
42. You have been given the scope to deploy some AWS infrastructure for a large organisation. The requirements are that you will have a lot of EC2 instances, but may need to add more when the average utilization of your Amazon EC2 fleet is high, and conversely remove them when CPU utilization is low. Which AWS services would be best to use to accomplish this?
A. Auto Scaling, Amazon CloudWatch and AWS Elastic Beanstalk
B. Auto Scaling, Amazon CloudWatch and Elastic Load Balancing
C. Amazon CloudFront, Amazon CloudWatch and Elastic Load Balancing
D. AWS Elastic Beanstalk, Amazon CloudWatch and Elastic Load Balancing
Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to manually provision Amazon EC2 capacity in advance. For example, you can set a condition to add new Amazon EC2 instances in increments to the Auto Scaling group when the average utilization of your Amazon EC2 fleet is high; and similarly, you can set a condition to remove instances in the same increments when CPU utilization is low. If you have predictable load changes, you can set a schedule through Auto Scaling to plan your scaling activities. You can use Amazon CloudWatch to send alarms to trigger scaling activities and Elastic Load Balancing to help distribute traffic to your instances within Auto Scaling groups. Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilization. Reference: http://aws.amazon.com/autoscaling/
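The demand-following policy above can be sketched as a simple capacity function. The thresholds, increment, and bounds here are illustrative assumptions, not AWS defaults; in practice CloudWatch alarms trigger the Auto Scaling policies for you.

```python
# Minimal sketch of a CPU-based scale-out / scale-in decision.

def desired_capacity(current, avg_cpu, high=70.0, low=30.0, step=2,
                     minimum=2, maximum=10):
    if avg_cpu > high:
        current += step    # fleet is hot: add instances in increments
    elif avg_cpu < low:
        current -= step    # fleet is idle: remove the same increment
    return max(minimum, min(maximum, current))   # clamp to group bounds

print(desired_capacity(4, avg_cpu=85))   # 6  (scale out)
print(desired_capacity(4, avg_cpu=20))   # 2  (scale in)
print(desired_capacity(4, avg_cpu=50))   # 4  (within band, no change)
```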
43. You are building infrastructure for a data warehousing solution and an extra request has come through that there will be a lot of business reporting queries running all the time and you are not sure if your current DB instance will be able to handle it. What would be the best solution for this?
A. DB Parameter Groups
B. Read Replicas
C. Multi-AZ DB Instance deployment
D. Database Snapshots
Read Replicas make it easy to take advantage of MySQL’s built-in replication functionality to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. There are a variety of scenarios where deploying one or more Read Replicas for a given source DB Instance may make sense. Common reasons for deploying a Read Replica include:
Scaling beyond the compute or I/O capacity of a single DB Instance for read-heavy database workloads. This excess read traffic can be directed to one or more Read Replicas. Serving read traffic while the source DB Instance is unavailable. If your source DB Instance cannot take I/O requests (e.g. due to I/O suspension for backups or scheduled maintenance), you can direct read traffic to your Read Replica(s). For this use case, keep in mind that the data on the Read Replica may be “stale” since the source DB Instance is unavailable. Business reporting or data warehousing scenarios; you may want business reporting queries to run against a Read Replica, rather than your primary, production DB Instance. Reference: https://aws.amazon.com/rds/faqs/
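One way to picture the reporting use case above is a tiny query router: writes go to the primary, read-only reporting queries go round-robin to replicas. This is a hypothetical sketch; the endpoint names are made up, and a real application would use its database driver's connection handling instead.

```python
# Illustrative sketch: route SELECTs to Read Replicas, everything else
# (writes, DDL) to the primary DB instance.
import itertools

PRIMARY = "prod-db.example.internal"                       # made-up endpoint
REPLICAS = ["replica-1.example.internal",
            "replica-2.example.internal"]                  # made-up endpoints
_round_robin = itertools.cycle(REPLICAS)

def endpoint_for(sql):
    # Reporting SELECTs can tolerate slightly stale data, so they
    # go to a replica; anything else must hit the primary.
    if sql.lstrip().upper().startswith("SELECT"):
        return next(_round_robin)
    return PRIMARY

print(endpoint_for("SELECT region, SUM(sales) FROM orders GROUP BY region"))
print(endpoint_for("UPDATE orders SET qty = 1 WHERE id = 7"))
```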
44. In DynamoDB, could you use IAM to grant access to Amazon DynamoDB resources and API actions?
A. In DynamoDB there is no need to grant access
B. It depends on the type of access
Amazon DynamoDB integrates with AWS Identity and Access Management (IAM). You can use AWS IAM to grant access to Amazon DynamoDB resources and API actions. To do this, you first write an AWS IAM policy, which is a document that explicitly lists the permissions you want to grant. You then attach that policy to an AWS IAM user or role. Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/UsingIAMWithDDB.html
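The policy document described above is plain JSON. A minimal sketch of building one (the account ID, region, table name, and the specific actions granted are placeholders for illustration):

```python
# Sketch of an IAM policy document granting read access to one DynamoDB table.
import json

policy = {
    "Version": "2012-10-17",          # standard IAM policy language version
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        # Placeholder ARN: account ID and table name are illustrative.
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Books"
    }]
}

document = json.dumps(policy, indent=2)
print(document)   # attach this JSON to an IAM user or role
```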
45. Much of your company’s data does not need to be accessed often, and can take several hours for retrieval time, so it’s stored on Amazon Glacier. However, someone within your organization has expressed concerns that his data is more sensitive than the other data, and is wondering whether the high level of encryption that he knows is on S3 is also used on the much cheaper Glacier service. Which of the following statements would be most applicable in regard to this concern?
A. There is no encryption on Amazon Glacier, that’s why it is cheaper.
B. Amazon Glacier automatically encrypts the data using AES-128, a lesser encryption method than Amazon S3, but you can change it to AES-256 if you are willing to pay more.
C. Amazon Glacier automatically encrypts the data using AES-256, the same as Amazon S3.
D. Amazon Glacier automatically encrypts the data using AES-128, a lesser encryption method than Amazon S3.
Like Amazon S3, the Amazon Glacier service provides low-cost, secure, and durable storage. But where S3 is designed for rapid retrieval, Glacier is meant to be used as an archival service for data that is not accessed often, and for which retrieval times of several hours are suitable. Amazon Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form. Amazon Glacier is designed to provide average annual durability of 99.999999999% for an archive. It stores each archive in multiple facilities and multiple devices. Unlike traditional systems which can require laborious data verification and manual repair, Glacier performs regular, systematic data integrity checks, and is built to be automatically self-healing.
46. Your EBS volumes do not seem to be performing as expected and your team leader has requested you look into improving their performance. Which of the following is not a true statement relating to the performance of your EBS volumes?
A. Frequent snapshots provide a higher level of data durability and they will not degrade the performance of your application while the snapshot is in progress.
B. General Purpose (SSD) and Provisioned IOPS (SSD) volumes have a throughput limit of 128 MB/s per volume.
C. There is a relationship between the maximum performance of your EBS volumes, the amount of I/O you are driving to them, and the amount of time it takes for each transaction to complete.
D. There is a 5 to 50 percent reduction in IOPS when you first access each block of data on a newly created or restored EBS volume
Several factors can affect the performance of Amazon EBS volumes, such as instance configuration, I/O characteristics, workload demand, and storage configuration. Frequent snapshots provide a higher level of data durability, but they may slightly degrade the performance of your application while the snapshot is in progress. This trade-off becomes critical when you have data that changes rapidly. Whenever possible, plan for snapshots to occur during off-peak times in order to minimize workload impact. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
47. You’ve created your first load balancer and have registered your EC2 instances with the load balancer. Elastic Load Balancing routinely performs health checks on all the registered EC2 instances and automatically distributes all incoming requests to the DNS name of your load balancer across your registered, healthy EC2 instances. By default, the load balancer uses the _ protocol for checking the health of your instances.
In Elastic Load Balancing, a health configuration uses information such as protocol, ping port, ping path (URL), response timeout period, and health check interval to determine the health state of the instances registered with the load balancer. Currently, HTTP on port 80 is the default health check.
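The way a health configuration like the one above flips an instance between healthy and unhealthy can be sketched as a small state machine: a run of consecutive successes or failures has to reach a threshold before the state changes. This is an illustrative simulation; the threshold values are assumptions, not ELB defaults.

```python
# Illustrative sketch: healthy/unhealthy transitions driven by consecutive
# health-check results (True = the HTTP:80 ping succeeded).

def health_state(results, healthy_threshold=3, unhealthy_threshold=2):
    state = "healthy"          # instances start out in service
    ok_streak = fail_streak = 0
    for ok in results:
        if ok:
            ok_streak += 1
            fail_streak = 0
            if ok_streak >= healthy_threshold:
                state = "healthy"
        else:
            fail_streak += 1
            ok_streak = 0
            if fail_streak >= unhealthy_threshold:
                state = "unhealthy"
    return state

print(health_state([True, False, False]))               # unhealthy
print(health_state([False, False, True, True, True]))   # healthy again
```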
48. A major finance organisation has engaged your company to set up a large data mining application. Using AWS, you decide the best service for this is Amazon Elastic MapReduce (EMR), which you know uses Hadoop. Which of the following statements best describes Hadoop?
A. Hadoop is 3rd Party software which can be installed using AMI
B. Hadoop is an open source python web framework
C. Hadoop is an open source Java software framework
Amazon EMR uses Apache Hadoop as its distributed data processing engine. Hadoop is an open source, Java software framework that supports data-intensive distributed applications running on large clusters of commodity hardware. Hadoop implements a programming model named “MapReduce,” where the data is divided into many small fragments of work, each of which may be executed on any node in the cluster. This framework has been widely used by developers, enterprises and startups and has proven to be a reliable software platform for processing up to petabytes of data on clusters of thousands of commodity machines. Reference: http://aws.amazon.com/elasticmapreduce/faqs/
49. In Amazon EC2 Container Service, are other container types supported?
A. Yes, EC2 Container Service supports any container service you need.
B. Yes, EC2 Container Service also supports Microsoft container service.
C. No, Docker is the only container platform supported by EC2 Container Service presently.
D. Yes, EC2 Container Service supports Microsoft container service and Openstack.
In Amazon EC2 Container Service, Docker is the only container platform supported by EC2 Container Service presently. Reference: http://aws.amazon.com/ecs/faqs/
50. _____ is a fast, flexible, fully managed push messaging service.
A. Amazon SNS
B. Amazon SES
C. Amazon SQS
D. Amazon FPS
Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push messaging service. Amazon SNS makes it simple and cost-effective to push to mobile devices such as iPhone, iPad, Android, Kindle Fire, and internet-connected smart devices, as well as pushing to other distributed services. Reference: http://aws.amazon.com/sns/?nc1=h_I2_as