AWS SA Professional Practice Questions -2

Updated: Mar 7

NEW QUESTION 102

You have been given the task to define multiple AWS Data Pipeline schedules for different activities in the same pipeline. Which of the following would successfully accomplish this task?

A. Creating multiple pipeline definition files

B. Defining multiple pipeline definitions in your schedule objects file and associating the desired schedule to the correct activity via its schedule field

C. Defining multiple schedule objects in your pipeline definition file and associating the desired schedule to the correct activity via its schedule field

D. Defining multiple schedule objects in the schedule field

Answer: C

Explanation: To define multiple schedules for different activities in the same pipeline in AWS Data Pipeline, you should define multiple schedule objects in your pipeline definition file and associate the desired schedule with the correct activity via its schedule field. For example, this could allow you to define a pipeline in which log files are stored in Amazon S3 each hour to drive generation of an aggregate report once a day. Reference: https://aws.amazon.com/datapipeline/faqs/
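As a rough illustration of option C, the sketch below pushes a pipeline definition containing two Schedule objects (one hourly, one daily) and wires each activity to its own schedule through the schedule reference field. The pipeline ID, object names, commands and worker group are made-up placeholders, not values from the question.

```python
import boto3

# Hypothetical sketch of option C: two Schedule objects in one pipeline
# definition, with each activity pointing at its own schedule via a
# "schedule" refValue. All IDs and names are placeholders.
dp = boto3.client("datapipeline")

pipeline_objects = [
    {"id": "Default", "name": "Default",
     "fields": [{"key": "scheduleType", "stringValue": "cron"},
                {"key": "pipelineLogUri", "stringValue": "s3://example-bucket/logs/"}]},
    {"id": "HourlySchedule", "name": "HourlySchedule",
     "fields": [{"key": "type", "stringValue": "Schedule"},
                {"key": "period", "stringValue": "1 hour"},
                {"key": "startDateTime", "stringValue": "2019-01-01T00:00:00"}]},
    {"id": "DailySchedule", "name": "DailySchedule",
     "fields": [{"key": "type", "stringValue": "Schedule"},
                {"key": "period", "stringValue": "1 day"},
                {"key": "startDateTime", "stringValue": "2019-01-01T00:00:00"}]},
    {"id": "CopyLogsHourly", "name": "CopyLogsHourly",
     "fields": [{"key": "type", "stringValue": "ShellCommandActivity"},
                {"key": "command", "stringValue": "echo copy hourly logs"},
                {"key": "workerGroup", "stringValue": "demo-workers"},
                {"key": "schedule", "refValue": "HourlySchedule"}]},   # hourly activity
    {"id": "DailyReport", "name": "DailyReport",
     "fields": [{"key": "type", "stringValue": "ShellCommandActivity"},
                {"key": "command", "stringValue": "echo build daily report"},
                {"key": "workerGroup", "stringValue": "demo-workers"},
                {"key": "schedule", "refValue": "DailySchedule"}]},    # daily activity
]

dp.put_pipeline_definition(pipelineId="df-EXAMPLE1234567",
                           pipelineObjects=pipeline_objects)
```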

NEW QUESTION 107

Which statement is NOT true about a stack which has been created in a Virtual Private Cloud (VPC) in AWS OpsWorks?

A. Subnets whose instances cannot communicate with the Internet are referred to as public subnets.

B. Subnets whose instances can communicate only with other instances in the VPC and cannot communicate directly with the Internet are referred to as private subnets.

C. All instances in the stack should have access to any package repositories that your operating system depends on, such as the Amazon Linux or Ubuntu Linux repositories.

D. Your app and custom cookbook repositories should be accessible for all instances in the stack

Answer: A

Explanation: In AWS OpsWorks, you can control user access to a stack's instances by creating it in a virtual private cloud (VPC). For example, you might not want users to have direct access to your stack's app servers or databases and instead require that all public traffic be channeled through an Elastic Load Balancer. A VPC consists of one or more subnets, each of which contains one or more instances. Each subnet has an associated routing table that directs outbound traffic based on its destination IP address.

Instances within a VPC can generally communicate with each other, regardless of their subnet. Subnets whose instances can communicate with the Internet are referred to as public subnets. Subnets whose instances can communicate only with other instances in the VPC and cannot communicate directly with the Internet are referred to as private subnets.

AWS OpsWorks requires the VPC to be configured so that every instance in the stack, including instances in private subnets, has access to the following endpoints:

. The AWS OpsWorks service, https://opsworks-instance-service.us-east-1.amazonaws.com

. Amazon S3

. The package repositories for Amazon Linux or Ubuntu 12.04 LTS, depending on which operating system you specify

. Your app and custom cookbook repositories

Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks-vpc.html#workingstacks-vpc-basics

NEW QUESTION 109

By default, temporary security credentials for an IAM user are valid for a maximum of 12 hours, but you can request a duration as long as ____ hours.

A. 24

B. 36

C. 10

D. 48

Answer: B

Explanation: By default, temporary security credentials for an IAM user are valid for a maximum of 12 hours, but you can request a duration as short as 15 minutes or as long as 36 hours.

Reference: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingSessionTokens.html
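A minimal sketch of requesting the longer duration with boto3, assuming an IAM user's long-lived access keys are already configured locally. The value 129600 seconds corresponds to the 36-hour maximum mentioned in the explanation; 43200 (12 hours) is the default and 900 (15 minutes) is the minimum.

```python
import boto3

# Sketch: request temporary credentials for the maximum allowed duration.
sts = boto3.client("sts")

resp = sts.get_session_token(DurationSeconds=129600)  # 36 hours
creds = resp["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])
```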

NEW QUESTION 114

One of the AWS account owners faced a major challenge in June as his account was hacked and the hacker deleted all the data from his AWS account. This resulted in a major blow to the business.

Which of the below mentioned steps would not have helped in preventing this action?

A. Setup an MFA for each user as well as for the root account user.

B. Take a backup of the critical data to offsite / on premise.

C. Create an AMI and a snapshot of the data at regular intervals as well as keep a copy to separate regions.

D. Do not share the AWS access and secret access keys with others, and do not store them inside programs; use IAM roles instead.

Answer: C

Explanation: AWS security follows the shared security model, where the user is as responsible as Amazon. If the user wants secure access to AWS while hosting applications on EC2, the first security rule to follow is to enable MFA for all users, which adds an extra security layer. Second, the user should never share access or secret access keys with anyone or store them inside programs; the better solution is to use IAM roles. For the organization's critical data, the user should keep an offsite/on-premise backup, which will help recover critical data in case of a security breach.

It is recommended to create AWS AMIs and snapshots and to keep copies in other regions so that they help in a DR scenario. However, in case of a data security breach of the account they may not be very helpful, as the hacker can delete them as well.

Therefore, creating an AMI and a snapshot of the data at regular intervals and keeping a copy in separate regions would not have helped in preventing this action.

Reference: http://media.amazonwebservices.com/pdf/AWS_Security_Whitepaper.pdf

NEW QUESTION 117

With Amazon Elastic MapReduce (Amazon EMR) you can analyze and process vast amounts of data. The cluster is managed using an open-source framework called Hadoop.

You have set up an application to run Hadoop jobs. The application reads data from DynamoDB and generates a temporary file of 100 TB.

The whole process runs for 30 minutes and the output of the job is stored to S3. Which of the below mentioned options is the most cost effective solution in this case?

A. Use Spot Instances to run Hadoop jobs and configure them with EBS volumes for persistent data storage.

B. Use Spot Instances to run Hadoop jobs and configure them with ephemeral storage for output file storage.

C. Use an on demand instance to run Hadoop jobs and configure them with EBS volumes for persistent storage.

D. Use an on demand instance to run Hadoop jobs and configure them with ephemeral storage for output file storage.

Answer: B

Explanation: AWS EC2 Spot Instances allow the user to quote his own price for the EC2 computing capacity. The user can simply bid on the spare Amazon EC2 instances and run them whenever his bid exceeds the current Spot Price. The Spot Instance pricing model complements the On-Demand and Reserved Instance pricing models, providing potentially the most cost-effective option for obtaining compute capacity, depending on the application. The only challenge with a Spot Instance is data persistence as the instance can be terminated whenever the spot price exceeds the bid price.

In the current scenario a Hadoop job is a temporary job and does not run for a long period. It fetches data from DynamoDB, which is persistent. Thus, even if the instance gets terminated there will be no data loss and the job can be re-run. As the output files are large temporary files, it is cost effective to store them on ephemeral storage.

Reference: http://aws.amazon.com/ec2/purchasing-options/spot-instances/
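A rough sketch of the Spot-based approach in option B; the AMI ID, key pair, bid price and instance type are placeholders, and the ephemeral (instance-store) disks come with the chosen instance type rather than from EBS.

```python
import boto3

# Sketch: request Spot capacity for the temporary Hadoop job.
ec2 = boto3.client("ec2")

resp = ec2.request_spot_instances(
    SpotPrice="0.50",              # maximum price you are willing to pay (placeholder)
    InstanceCount=4,
    LaunchSpecification={
        "ImageId": "ami-12345678",     # placeholder Hadoop-capable AMI
        "InstanceType": "i2.2xlarge",  # instance-store backed type for the temp files
        "KeyName": "my-key",
    },
)
for req in resp["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```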

NEW QUESTION 119

True or False : "|n the context of Amazon EIastiCache, from the appIication's point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node."

A. True, from the application's point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node since each has a unique node identifier.

B. True, from the application's point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node.

C. False, you can connect to a cache node, but not to a cluster configuration endpoint.

D. False, you can connect to a cluster configuration endpoint, but not to a cache node.

Answer: B

Explanation: This is true. From the application's point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node. In the process of connecting to cache nodes, the application resolves the configuration endpoint's DNS name. Because the configuration endpoint maintains CNAME entries for all of the cache nodes, the DNS name resolves to one of the nodes; the client can then connect to that node.

Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.HowAutoDiscoveryWorks.html
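As a hedged sketch of how an application would find the single configuration endpoint described above, the snippet below looks it up with boto3; the cluster ID is a placeholder, and an auto-discovery-capable client would then be pointed at this one address instead of at individual node endpoints.

```python
import boto3

# Sketch: fetch the Memcached cluster's configuration endpoint (cluster ID is a placeholder).
elasticache = boto3.client("elasticache")

resp = elasticache.describe_cache_clusters(
    CacheClusterId="my-memcached-cluster",
    ShowCacheNodeInfo=True,
)
cluster = resp["CacheClusters"][0]

cfg = cluster.get("ConfigurationEndpoint")   # present for Memcached clusters
if cfg:
    print("Connect clients to:", cfg["Address"], cfg["Port"])

for node in cluster.get("CacheNodes", []):
    print("Individual node:", node["Endpoint"]["Address"], node["Endpoint"]["Port"])
```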

NEW QUESTION 123

An organization is setting up a highly scalable application using Elastic Beanstalk. They are using Elastic Load Balancing (ELB) as well as a Virtual Private Cloud (VPC) with public and private subnets. They have the following requirements:

. All the EC2 instances should have a private IP

. All the EC2 instances should receive data via the ELB.

Which of these will not be needed in this setup?

A. Launch the EC2 instances with only the public subnet.

B. Create routing rules which will route all inbound traffic from ELB to the EC2 instances.

C. Configure ELB and NAT as a part of the public subnet only.

D. Create routing rules which will route all outbound traffic from the EC2 instances through NAT.

Answer: A

Explanation: The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. If the organization wants the Amazon EC2 instances to have a private IP address, it should create a public and a private subnet for the VPC in each Availability Zone (this is an AWS Elastic Beanstalk requirement). The organization should add its public resources, such as the ELB and NAT, to the public subnet, and AWS Elastic Beanstalk will assign them unique Elastic IP addresses (a static, public IP address). The organization should launch the Amazon EC2 instances in a private subnet so that AWS Elastic Beanstalk assigns them non-routable private IP addresses. The organization should then configure route tables with the following rules:

. route all inbound traffic from ELB to EC2 instances

. route all outbound traffic from EC2 instances through NAT

Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo-vpc.html
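A minimal sketch of the two routing pieces described above; every ID is a placeholder. The private subnet's route table sends outbound traffic through the NAT device, while inbound traffic reaches the instances only through the ELB in the public subnet.

```python
import boto3

# Sketch: route tables and security group rules for the private-subnet instances.
ec2 = boto3.client("ec2")

# Outbound: default route from the private subnet's route table through NAT.
ec2.create_route(
    RouteTableId="rtb-private123",        # route table of the private subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0abc123",           # a NAT instance would use InstanceId instead
)

# Inbound: allow only the ELB's security group to reach the instances on port 80.
ec2.authorize_security_group_ingress(
    GroupId="sg-instances123",            # security group on the EC2 instances
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": "sg-elb123"}],   # the ELB's security group
    }],
)
```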

NEW QUESTION 125

An organization has created multiple components of a single application for compartmentalization. Currently all the components are hosted on a single EC2 instance. For security reasons the organization wants to implement two separate SSL certificates for the separate modules, although it is already using a VPC. How can the organization achieve this with a single instance?

A. You have to launch two instances each in a separate subnet and allow VPC peering for a single IP.

B. Create a VPC instance which will have multiple network interfaces with multiple elastic IP addresses.

C. Create a VPC instance which will have both the ACL and the security group attached to it and have separate rules for each IP address.

D. Create a VPC instance which will have multiple subnets attached to it and each will have a separate IP address.

Answer: B

Explanation: A Virtual Private Cloud (VPC) is a virtual network dedicated to the user’s AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. With VPC the user can specify multiple private IP addresses for his instances.

The number of network interfaces and private IP addresses that a user can specify for an instance depends on the instance type. With each network interface the organization can assign an EIP. This scenario helps when the user wants to host multiple websites on a single EC2 instance by using multiple SSL certificates on a single server and associating each certificate with a specific EIP address. It also helps in scenarios for operating network appliances, such as firewalls or load balancers that have multiple private IP addresses for each network interface.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html
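A hedged sketch of option B; the subnet, security group and instance IDs are placeholders. A second network interface is created, attached to the instance, and given its own EIP so that a second SSL certificate can be bound to that address.

```python
import boto3

# Sketch: add a second ENI with its own Elastic IP to an existing instance.
ec2 = boto3.client("ec2")

eni = ec2.create_network_interface(
    SubnetId="subnet-12345678",
    Groups=["sg-12345678"],
)["NetworkInterface"]

ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId="i-0abc1234567890def",
    DeviceIndex=1,                        # eth1; eth0 keeps the first EIP
)

eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=eni["NetworkInterfaceId"],
)
print("Second site reachable at", eip["PublicIp"])
```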

NEW QUESTION 127

An organization is making software for the CIA in the USA. The CIA agreed to host the application on AWS but in a secure environment. The organization is thinking of hosting the application in the AWS GovCloud region. Which of the below mentioned differences is not correct when the organization is hosting on the AWS GovCloud in comparison with a standard AWS region?

A. The billing for the AWS GovCloud will be in a different account than the standard AWS account.

B. GovCloud region authentication is isolated from Amazon.com.

C. Physical and logical administrative access only to U.S. persons.

D. It is physically isolated and has logical network isolation from all the other regions.

Answer: A

Explanation: AWS GovCloud (US) is an isolated AWS region designed to allow U.S. government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. The AWS GovCloud (US) Region adheres to the U.S. International Traffic in Arms Regulations (ITAR) requirements. It has added characteristics, such as:

. Physical and logical administrative access is restricted to U.S. persons only

. There are separate AWS GovCloud (US) credentials, such as an access key and a secret access key, from the standard AWS account

. The user signs in with the IAM user name and password

. The AWS GovCloud (US) Region authentication is completely isolated from Amazon.com

If the organization is planning to host on EC2 in AWS GovCloud, it will be billed to the organization's standard AWS account, since AWS GovCloud billing is linked with the standard AWS account and is not billed separately.

Reference: http://docs.aws.amazon.com/govcloud-us/latest/UserGuide/whatis.html

NEW QUESTION 129

How does in-memory caching improve the performance of applications in ElastiCache?

A. It improves application performance by deleting the requests that do not contain frequently accessed data.

B. It improves application performance by implementing good database indexing strategies.

C. It improves application performance by using a part of instance RAM for caching important data.

D. It improves application performance by storing critical pieces of data in memory for low-latency access.

Answer: D

Explanation: In Amazon ElastiCache, in-memory caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.

Reference: http://aws.amazon.com/elasticache/faqs/#g4

NEW QUESTION 131

A user is thinking of using an EBS PIOPS volume. Which of the below mentioned options is a right use case for a PIOPS EBS volume?

A. Analytics

B. System boot volume

C. MongoDB

D. Log processing

Answer: C

Explanation: Provisioned IOPS volumes are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency in random access I/O throughput, such as business applications and NoSQL or relational database workloads. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

NEW QUESTION 134

An organization is setting up a multi-site solution where the application runs on premise as well as on AWS to achieve the minimum recovery time objective (RTO).

Which of the below mentioned configurations will not meet the requirements of the multi-site solution scenario?

A. Configure data replication based on RTO.

B. Keep an application running on premise as well as in AWS with full capacity.

C. Setup a single DB instance which will be accessed by both sites.

D. Setup a weighted DNS service like Route 53 to route traffic across sites.

Answer: C

Explanation: AWS has many solutions for DR (Disaster Recovery) and HA (High Availability). When the organization wants HA and DR with a multi-site solution, it should set up two sites: one on premise and the other on AWS, both with full capacity. The organization should set up a weighted DNS service which can route traffic to both sites based on the weightage. When one of the sites fails it can route the entire load to the other site. The organization would have minimal RTO in this scenario. If the organization sets up a single DB instance, it will not work well in failover.

Instead it should have two separate DBs, one in each site, and set up data replication based on the RTO (recovery time objective) of the organization. Reference: http://d36cz9buwru1tt.cloudfront.net/AWS_Disaster_Recovery.pdf
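As a hedged sketch of the weighted DNS piece, the snippet below creates two weighted records splitting traffic between the on-premise site and the AWS site; the hosted zone ID, record name and endpoint IPs are placeholders.

```python
import boto3

# Sketch: weighted routing across the on-premise and AWS sites.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": "app.example.com", "Type": "A",
             "SetIdentifier": "on-premise-site", "Weight": 50, "TTL": 60,
             "ResourceRecords": [{"Value": "203.0.113.10"}]}},
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": "app.example.com", "Type": "A",
             "SetIdentifier": "aws-site", "Weight": 50, "TTL": 60,
             "ResourceRecords": [{"Value": "198.51.100.20"}]}},
    ]},
)
```

Shifting the Weight values (for example 100/0) is how the entire load would be routed to the surviving site during a failure.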

NEW QUESTION 136

In the context of policies and permissions in AWS IAM, the Condition element is ____.

A. crucial while writing the IAM policies

B. an optional element

C. always set to null

D. a mandatory element

Answer: B

Explanation: The Condition element (or Condition block) lets you specify conditions for when a policy is in effect. The Condition element is optional. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html
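A small sketch showing the same Allow statement with and without the optional Condition block; the bucket name, policy name and CIDR range are placeholders.

```python
import json

import boto3

# Sketch: a statement is valid without a Condition; adding one only narrows
# when the statement applies.
statement = {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*",
}

statement_with_condition = dict(statement)
statement_with_condition["Condition"] = {
    "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}   # optional restriction
}

policy = {"Version": "2012-10-17", "Statement": [statement_with_condition]}

iam = boto3.client("iam")
iam.create_policy(PolicyName="example-conditional-read",
                  PolicyDocument=json.dumps(policy))
```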

NEW QUESTION 140

Which of the following is true while using an IAM role to grant permissions to applications running on Amazon EC2 instances?

A. All applications on the instance share the same role, but different permissions.

B. All applications on the instance share multiple roles and permissions.

C. Multiple roles are assigned to an EC2 instance at a time.

D. Only one role can be assigned to an EC2 instance at a time.

Answer: D

Explanation: Only one role can be assigned to an EC2 instance at a time, and all applications on the instance share the same role and permissions. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/role-usecase-ec2app.html

NEW QUESTION 141

When using string conditions within IAM, short versions of the available comparators can be used instead of the more verbose ones. streqi is the short version of the ____ string condition.

A. StringEqualsIgnoreCase

B. StringNotEqualsIgnoreCase

C. StringLike

D. StringNotEquals

Answer: A

Explanation: When using string conditions within IAM, short versions of the available comparators can be used instead of the more verbose versions. For instance, streqi is the short version of StringEqualsIgnoreCase, which checks for an exact match between two strings ignoring their case.

Reference: http://awsdocs.s3.amazonaws.com/SNS/20100331/sns-gsg-2010-03-31.pdf

NEW QUESTION 143

Attempts, one of the three types of items associated with a scheduled pipeline in AWS Data Pipeline, provide robust data management. Which of the following statements is NOT true about Attempts?

A. Attempts provide robust data management.

B. AWS Data Pipeline retries a failed operation until the count of retries reaches the maximum number of allowed retry attempts.

C. An AWS Data Pipeline Attempt object compiles the pipeline components to create a set of actionable instances.

D. AWS Data Pipeline Attempt objects track the various attempts, results, and failure reasons if applicable.

Answer: C

Explanation: Attempts, one of the three types of items associated with a scheduled pipeline in AWS Data Pipeline, provide robust data management. AWS Data Pipeline retries a failed operation. It continues to do so until the task reaches the maximum number of allowed retry attempts. Attempt objects track the various attempts, results, and failure reasons if applicable. Essentially, an Attempt is the instance with a counter. AWS Data Pipeline performs retries using the same resources from the previous attempts, such as Amazon EMR clusters and EC2 instances.

Reference:

http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html

NEW QUESTION 147

Select the correct statement about Amazon ElastiCache.

A. It makes it easy to set up, manage, and scale a distributed in-memory cache environment in the cloud.

B. It allows you to quickly deploy your cache environment only if you install software.

C. It does not integrate with other Amazon Web Services.

D. It cannot run in the Amazon Virtual Private Cloud (Amazon VPC) environment

Answer: A

Explanation: ElastiCache is a web service that makes it easy to set up, manage, and scale a distributed in-memory cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with deploying and managing a distributed cache environment. With ElastiCache, you can quickly deploy your cache environment, without having to provision hardware or install software.

Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/WhatIs.html

NEW QUESTION 152

In Amazon RDS for PostgreSQL, you can provision up to 3TB storage and 30,000 IOPS per database instance. For a workload with 50% writes and 50% reads running on a cr1.8xlarge instance, you can realize over 25,000 IOPS for PostgreSQL. However, by provisioning more than this limit, you may be able to achieve:

A. higher latency and lower throughput.

B. lower latency and higher throughput.

C. higher throughput only.

D. higher latency only.

Answer: B

Explanation: You can provision up to 3TB storage and 30,000 IOPS per database instance. For a workload with 50% writes and 50% reads running on a cr1.8xlarge instance, you can realize over 25,000 IOPS for PostgreSQL. However, by provisioning more than this limit, you may be able to achieve lower latency and higher throughput. Your actual realized IOPS may vary from the amount you provisioned based on your database workload, instance type, and database engine choice.

Reference: https://aws.amazon.com/rds/postgresql/

NEW QUESTION 154

Which of the following cannot be done using AWS Data Pipeline?

A. Create complex data processing workloads that are fault tolerant, repeatable, and highly available.

B. Regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service.

C. Generate reports over data that has been stored.

D. Move data between different AWS compute and storage services as well as on-premise data sources at specified intervals.

Answer: C

Explanation: AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services as well as on-premise data sources at specified intervals. With AWS Data Pipeline, you can regularly access your data where it's stored, transform and process it at scale, and efficiently transfer the results to another AWS service.

AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. AWS Data Pipeline also allows you to move and process data that was

previously locked up in on-premise data silos. Reference: http://aws.amazon.com/datapipeline/

NEW QUESTION 158

Identify an application that polls AWS Data Pipeline for tasks and then performs those tasks.

A. A task executor

B. A task deployer

C. A task runner

D. A task optimizer

Answer: C

Explanation: A task runner is an application that polls AWS Data Pipeline for tasks and then performs those tasks. You can either use Task Runner as provided by AWS Data Pipeline, or create a custom Task Runner application.

Task Runner is a default implementation of a task runner that is provided by AWS Data Pipeline. When Task Runner is installed and configured, it polls AWS Data Pipeline for tasks associated with pipelines that you have activated. When a task is assigned to Task Runner, it performs that task and reports its status back to AWS Data Pipeline. If your workflow requires non-default behavior, you'll need to implement that functionality in a custom task runner.

Reference:

http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-remote-taskrunner-client.html
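A hedged sketch of what a custom task runner's poll-and-report loop might look like; the worker group name is a placeholder and the actual work is elided.

```python
import time

import boto3

# Sketch: poll Data Pipeline for tasks, do the work, report status back.
dp = boto3.client("datapipeline")

while True:
    resp = dp.poll_for_task(workerGroup="my-worker-group")
    task = resp.get("taskObject")
    if not task:
        time.sleep(5)            # nothing assigned yet; poll again
        continue

    task_id = task["taskId"]
    try:
        # ... perform the actual work for this task here ...
        dp.set_task_status(taskId=task_id, taskStatus="FINISHED")
    except Exception as err:
        dp.set_task_status(taskId=task_id, taskStatus="FAILED",
                           errorId="RunnerError", errorMessage=str(err))
```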

NEW QUESTION 161

Within an IAM policy, can you add an IfExists condition at the end of a Null condition?

A. Yes, you can add an IfExists condition at the end of a Null condition but not in all Regions.

B. Yes, you can add an IfExists condition at the end of a Null condition depending on the condition.

C. No, you cannot add an IfExists condition at the end of a Null condition.

D. Yes, you can add an IfExists condition at the end of a Null condition.

Answer: C

Explanation: Within an IAM policy, IfExists can be added to the end of any condition operator except the Null condition. It can be used to indicate that conditional comparison needs to happen if the policy key is present in the context of a request; otherwise, it can be ignored.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html
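An illustrative sketch of the distinction; the action, condition keys and values are placeholders. IfExists can be appended to operators such as StringEquals, but not to Null, which already tests purely for the presence or absence of a key.

```python
import json

# Sketch: a statement mixing a ...IfExists operator with a Null operator.
statement = {
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": "*",
    "Condition": {
        "StringEqualsIfExists": {"ec2:InstanceType": "t2.micro"},  # valid
        "Null": {"aws:TokenIssueTime": "true"},                    # valid
        # "NullIfExists" is not a supported operator and would be rejected
        # when the policy is validated.
    },
}
print(json.dumps(statement, indent=2))
```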

NEW QUESTION 164

Regarding Identity and Access Management (IAM), Which type of special account belonging to your application allows your code to access Google services programmatically?

A. Service account

B. Simple Key

C. OAuth

D. Code account

Answer: A

Explanation: A service account is a special Google account that can be used by applications to access Google services programmatically. This account belongs to your application or a virtual machine (VM), instead of to an individual end user. Your application uses the service account to call the Google API of a service, so that the users aren't directly involved.

A service account can have zero or more pairs of service account keys, which are used to authenticate to Google. A service account key is a public/private keypair generated by Google. Google retains the public key, while the user is given the private key.

Reference: https://cloud.google.com/iam/docs/service-accounts

NEW QUESTION 168

An organization is planning to use a NoSQL DB for its scalable data needs. The organization wants to host an application securely in AWS VPC. What action can be recommended to the organization?

A. The organization should set up their own NoSQL cluster on an AWS instance and configure route tables and subnets.

B. The organization should only use a DynamoDB because by default it is always a part of the default subnet provided by AWS.

C. The organization should use a DynamoDB while creating a table within the public subnet.

D. The organization should use a DynamoDB while creating a table within a private subnet.

Answer: A

Explanation: The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. Currently VPC does not support DynamoDB. Thus, if the user wants to implement VPC, they have to set up their own NoSQL DB within the VPC. Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html

NEW QUESTION 172

What happens when Dedicated instances are launched into a VPC?

A. If you launch an instance into a VPC that has an instance tenancy of dedicated, you must manually create a Dedicated instance.

B. If you launch an instance into a VPC that has an instance tenancy of dedicated, your instance is created as a Dedicated instance, only based on the tenancy of the instance.

C. If you launch an instance into a VPC that has an instance tenancy of dedicated, your instance is automatically a Dedicated instance, regardless of the tenancy of the instance.

D. None of these are true

Answer: C

Explanation: If you launch an instance into a VPC that has an instance tenancy of dedicated, your instance is automatically a Dedicated instance, regardless of the tenancy of the instance.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/dedicated-instance.html

NEW QUESTION 174

You create a VPN connection, and your VPN device supports Border Gateway Protocol (BGP). Which of the following should be specified to configure the VPN connection?

A. Classless routing

B. Classful routing

C. Dynamic routing

D. Static routing

Answer: C

Explanation: If you create a VPN connection, you must specify the type of routing that you plan to use, which will depend upon on the make and model of your VPN devices. If your VPN device supports Border Gateway Protocol (BGP), you need to specify dynamic routing when you configure your VPN connection. If your device does not support BGP, you should specify static routing.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html
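A minimal sketch of creating the VPN connection with dynamic (BGP) routing; the gateway IDs are placeholders. A device without BGP support would use StaticRoutesOnly=True instead.

```python
import boto3

# Sketch: VPN connection with dynamic routing for a BGP-capable device.
ec2 = boto3.client("ec2")

vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0abc1234",
    VpnGatewayId="vgw-0abc1234",
    Options={"StaticRoutesOnly": False},   # dynamic routing via BGP
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```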

NEW QUESTION 179

An organization has developed an application which provides a smarter shopping experience. They need to show a demonstration to various stakeholders who may not be able to access the on-premise application, so they decide to host a demo version of the application on AWS. Consequently, they will need a fixed Elastic IP attached automatically to the instance when it is launched.

In this scenario, which of the below mentioned options will not help assign the Elastic IP automatically?

A. Write a script which will fetch the instance metadata on system boot and assign the public IP using that metadata.

B. Provide an elastic IP in the user data and setup a bootstrapping script which will fetch that elastic IP and assign it to the instance.

C. Create a controlling application which launches the instance and assigns the elastic IP based on the parameter provided when that instance is booted.

D. Launch the instance with VPC and assign an elastic IP to the primary network interface.

Answer: A

Explanation: EC2 allows the user to launch On-Demand instances. If the organization is using an application temporarily, only for demo purposes, the ways to assign an elastic IP would be:

. Launch an instance with a VPC and assign an EIP to the primary network interface; this way, on every instance start it will have the same IP

. Create a bootstrapping script and provide it some metadata, such as user data, which can be used to assign an EIP

. Create a controller instance which can schedule the start and stop of the instance and provide an EIP as a parameter, so that the controller instance can check the instance boot and assign an EIP

The instance metadata gives the current instance data, such as the public/private IP. It is of no use for assigning an EIP. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AESDG-chapter-instancedata.html
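A hedged sketch of the "controlling application" approach from the list above; the AMI, subnet and allocation IDs are placeholders. The controller launches the demo instance and attaches a pre-allocated EIP once the instance is running.

```python
import boto3

# Sketch: controller launches the instance, then attaches the reserved EIP.
ec2 = boto3.client("ec2")

instance_id = ec2.run_instances(
    ImageId="ami-12345678", InstanceType="t2.micro",
    MinCount=1, MaxCount=1,
    SubnetId="subnet-12345678",
)["Instances"][0]["InstanceId"]

ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

ec2.associate_address(
    InstanceId=instance_id,
    AllocationId="eipalloc-12345678",   # the fixed EIP reserved for the demo
)
```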

NEW QUESTION 184

Can a Direct Connect link be connected directly to the Internet?

A. Yes, this can be done if you pay for it.

B. Yes, this can be done only for certain regions.

C. Yes

D. No

Answer: D

Explanation: AWS Direct Connect is a network service that provides an alternative to using the Internet to utilize AWS cloud services. Hence, a Direct Connect link cannot be connected to the Internet directly.

Reference: http://aws.amazon.com/directconnect/faqs/

NEW QUESTION 185

True or False: The Amazon ElastiCache clusters are not available for use in VPC at this time.

A. TRUE

B. True, but they are available only in the GovCloud.

C. True, but they are available only on request.

D. FALSE

Answer: D

Explanation: Amazon ElastiCache clusters can be run in an Amazon VPC. With Amazon VPC, you can define a virtual network topology and customize the network configuration to closely resemble a traditional network that you might operate in your own datacenter. You can now take advantage of the manageability, availability and scalability benefits of Amazon ElastiCache clusters in your own isolated network. The same functionality of Amazon ElastiCache, including automatic failure detection, recovery, scaling, auto discovery, Amazon CloudWatch metrics, and software patching, is now available in Amazon VPC. Reference: http://aws.amazon.com/about-aws/whats-new/2012/12/20/amazon-elasticache-announces-support-for-amazon-vpc/

NEW QUESTION 189

In Amazon Redshift, how many slices does a dw2.8xlarge node have?

A. 16

B. 8

C. 32

D. 2

Answer: C

Explanation: The disk storage for a compute node in Amazon Redshift is divided into a number of slices, equal to the number of processor cores on the node. For example, each DW1.XL compute node has two slices, and each DW2.8XL compute node has 32 slices.

Reference: http://docs.aws.amazon.com/redshift/latest/dg/t_Distributing_data.html

NEW QUESTION 191

Identify a true statement about using an IAM role to grant permissions to applications running on Amazon EC2 instances.

A. When AWS credentials are rotated, developers have to update only the root Amazon EC2 instance that uses their credentials.

B. When AWS credentials are rotated, developers have to update only the Amazon EC2 instance on which the password policy was applied and which uses their credentials.

C. When AWS credentials are rotated, you don't have to manage credentials and you don't have to worry about long-term security risks.

D. When AWS credentials are rotated, you must manage credentials and you should consider precautions for long-term security risks.

Answer: C

Explanation: Using IAM roles to grant permissions to applications that run on EC2 instances requires a bit of extra configuration. Because role credentials are temporary and rotated automatically, you don't have to manage credentials, and you don't have to worry about long-term security risks.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/role-usecase-ec2app.html

NEW QUESTION 193

Out of the striping options available for the EBS volumes, which one has the following disadvantage: 'Doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe.'?

A. Raid 1

B. Raid 0

C. RAID 1+0 (RAID 10)

D. Raid 2

Answer: C

Explanation: RAID 1+0 (RAID 10) doubles the amount of I/O required from the instance to EBS compared to RAID 0, because you're mirroring all writes to a pair of volumes, limiting how much you can stripe.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html

NEW QUESTION 197

In the context of IAM roles for Amazon EC2, which of the following is NOT true about delegating permission to make API requests?

A. You cannot create an IAM role.

B. You can have the application retrieve a set of temporary credentials and use them.

C. You can specify the role when you launch your instances.

D. You can define which accounts or AWS services can assume the role.

Answer: A

Explanation: Amazon designed IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles as follows: create an IAM role; define which accounts or AWS services can assume the role; define which API actions and resources the application can use after assuming the role; specify the role when you launch your instances; and have the application retrieve a set of temporary credentials and use them.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
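As a hedged illustration of the last step, the sketch below shows how an application on the instance could fetch the role's temporary credentials from the instance metadata service (IMDSv1 shown for brevity); in practice the AWS SDKs do this automatically, and the role name is simply whatever role is attached to the instance.

```python
import json
import urllib.request

# Sketch: read the attached role's temporary credentials from instance metadata.
BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

with urllib.request.urlopen(BASE) as r:
    role_name = r.read().decode().strip()        # name of the attached role

with urllib.request.urlopen(BASE + role_name) as r:
    creds = json.loads(r.read().decode())        # AccessKeyId, SecretAccessKey, Token

print(creds["AccessKeyId"], creds["Expiration"])  # rotated automatically by AWS
```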

NEW QUESTION 200

In the context of the Amazon ElastiCache CLI, which of the following commands can you use to view all ElastiCache instance events for the past 24 hours?

A. elasticache-events --duration 24

B. elasticache-events --duration 1440

C. elasticache-describe-events --duration 24

D. elasticache describe-events --source-type cache-cluster --duration 1440

Answer: D

Explanation: In Amazon ElastiCache, the command "aws elasticache describe-events --source-type cache-cluster --duration 1440" is used to list the cache-cluster events for the past 24 hours (1440 minutes). Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/ECEvents.Viewing.html
