Question # 1
A company has an on-premises application that is written in Go. A DevOps engineer must
move the application to AWS. The company's development team wants to enable blue/green deployments and perform A/B testing.
Which solution will meet these requirements?
A. Deploy the application on an Amazon EC2 instance, and create an AMI of the instance. Use the AMI to create a launch configuration that is used in an Auto Scaling group. Use Elastic Load Balancing to distribute traffic. When changes are made to the application, a new AMI will be created, which will initiate an EC2 instance refresh.
B. Use Amazon Lightsail to deploy the application. Store the application in a zipped format in an Amazon S3 bucket. Use this zipped version to deploy new versions of the application to Lightsail. Use Lightsail deployment options to manage the deployment.
C. Use AWS CodeArtifact to store the application code. Use AWS CodeDeploy to deploy the application to a fleet of Amazon EC2 instances. Use Elastic Load Balancing to distribute the traffic to the EC2 instances. When making changes to the application, upload a new version to CodeArtifact and create a new CodeDeploy deployment.
D. Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3. Use that location to deploy new versions of the application. Use Elastic Beanstalk to manage the deployment options.
Answer: D. Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3. Use that location to deploy new versions of the application. Use Elastic Beanstalk to manage the deployment options.
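Elastic Beanstalk supports blue/green releases by running two environments and swapping their CNAMEs. The following is a minimal boto3 sketch of that flow, assuming a Python tooling script and hypothetical names (my-app, my-bucket, app-v2.zip, blue/green environment names); it illustrates the pattern rather than prescribing exact steps.

```python
# Hypothetical names throughout; a sketch of deploying a new version from S3
# and promoting it with a blue/green CNAME swap.
import boto3

eb = boto3.client("elasticbeanstalk")

# Register a new application version from the zipped bundle in S3.
eb.create_application_version(
    ApplicationName="my-app",
    VersionLabel="v2",
    SourceBundle={"S3Bucket": "my-bucket", "S3Key": "app-v2.zip"},
)

# Deploy the new version to the idle ("green") environment.
eb.update_environment(
    EnvironmentName="my-app-green",
    VersionLabel="v2",
)

# Once the green environment is healthy, swap CNAMEs to shift traffic (blue/green cutover).
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",
    DestinationEnvironmentName="my-app-green",
)
```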
Question # 2
A company builds a container image in an AWS CodeBuild project by running Docker
commands. After the container image is built, the CodeBuild project uploads the container
image to an Amazon S3 bucket. The CodeBuild project has an IAM service role that has permissions to access the S3 bucket.
A DevOps engineer needs to replace the S3 bucket with an Amazon Elastic Container
Registry (Amazon ECR) repository to store the container images. The DevOps engineer
creates an ECR private image repository in the same AWS Region as the CodeBuild project. The DevOps engineer adjusts the IAM service role with the permissions that are
necessary to work with the new ECR repository. The DevOps engineer also places new
repository information into the docker build command and the docker push command that
are used in the buildspec.yml file.
When the CodeBuild project runs a build job, the job fails when the job tries to access the
ECR repository.
Which solution will resolve the issue of failed access to the ECR repository?
A. Update the buildspec.yml file to log in to the ECR repository by using the aws ecr get-login-password AWS CLI command to obtain an authentication token. Update the docker login command to use the authentication token to access the ECR repository.
B. Add an environment variable of type SECRETS_MANAGER to the CodeBuild project. In the environment variable, include the ARN of the CodeBuild project's IAM service role. Update the buildspec.yml file to use the new environment variable to log in with the docker login command to access the ECR repository.
C. Update the ECR repository to be a public image repository. Add an ECR repository policy that allows the IAM service role to have access.
D. Update the buildspec.yml file to use the AWS CLI to assume the IAM service role for ECR operations. Add an ECR repository policy that allows the IAM service role to have access.
Answer: A. Update the buildspec.yml file to log in to the ECR repository by using the aws ecr get-login-password AWS CLI command to obtain an authentication token. Update the docker login command to use the authentication token to access the ECR repository.
Explanation: When Docker communicates with an Amazon Elastic Container Registry (Amazon ECR) repository, it requires authentication. You can authenticate your Docker client to the Amazon ECR registry with the AWS CLI: use the aws ecr get-login-password command to obtain an authorization token, then pass that token to the docker login command to authenticate to the registry. These steps must run in the buildspec.yml file before the build attempts to push images to, or pull images from, the ECR repository.
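For illustration, the snippet below is a hedged boto3 sketch of the same token exchange that aws ecr get-login-password performs; in the actual buildspec.yml the CLI command would typically be piped straight into docker login.

```python
# Sketch of how the ECR authorization token is obtained and decoded.
# The registry endpoint shown in the comment is a placeholder.
import base64
import boto3

ecr = boto3.client("ecr")

auth = ecr.get_authorization_token()["authorizationData"][0]
registry_url = auth["proxyEndpoint"]  # e.g. https://<account>.dkr.ecr.<region>.amazonaws.com
username, password = (
    base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
)  # username is always "AWS"

# The decoded password is what docker login consumes:
#   aws ecr get-login-password | docker login --username AWS --password-stdin <registry>
print(f"docker login --username {username} --password-stdin {registry_url}")
```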
Question # 3
A company runs an application across Amazon EC2 instances and on-premises servers. A DevOps engineer needs to standardize patching across both environments. Company policy dictates that patching only happens during non-business hours.
Which combination of actions will meet these requirements? (Choose three.)
A. Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations.
B. Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager.
C. Create IAM access keys for the on-premises machines to interact with AWS Systems Manager.
D. Run an AWS Systems Manager Automation document to patch the systems every hour.
E. Use Amazon EventBridge scheduled events to schedule a patch window.
Answer: A. Add the physical machines into AWS Systems Manager using Systems Manager Hybrid Activations. B. Attach an IAM role to the EC2 instances, allowing them to be managed by AWS Systems Manager. E. Use Amazon EventBridge scheduled events to schedule a patch window.
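As an illustration of the moving parts (not required exam steps), here is a minimal boto3 sketch, with hypothetical names and a placeholder schedule, of creating a hybrid activation for the on-premises machines and an EventBridge scheduled rule for a non-business-hours patch window.

```python
# Hypothetical role name, registration limit, and schedule throughout.
import boto3

ssm = boto3.client("ssm")
events = boto3.client("events")

# Hybrid activation: the returned ActivationId/ActivationCode are used when
# installing the SSM Agent on each on-premises server.
activation = ssm.create_activation(
    Description="On-premises fleet",
    IamRole="SSMServiceRole",        # hypothetical service role for managed instances
    RegistrationLimit=100,
)
print(activation["ActivationId"], activation["ActivationCode"])

# Scheduled rule firing at 02:00 UTC daily (a stand-in for "non-business hours");
# its target would start the patching run, for example an SSM Run Command task.
events.put_rule(
    Name="nightly-patch-window",
    ScheduleExpression="cron(0 2 * * ? *)",
)
```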
Question # 4
A space exploration company receives telemetry data from multiple satellites. Small
packets of data are received through Amazon API Gateway and are placed directly into an
Amazon Simple Queue Service (Amazon SQS) standard queue. A custom application is
subscribed to the queue and transforms the data into a standard format.
Because of inconsistencies in the data that the satellites produce, the application is
occasionally unable to transform the data. In these cases, the messages remain in the
SQS queue. A DevOps engineer must develop a solution that retains the failed messages
and makes them available to scientists for review and future processing.
Which solution will meet these requirements?
A. Configure AWS Lambda to poll the SQS queue and invoke a Lambda function to check whether the queue messages are valid. If validation fails, send a copy of the data that is not valid to an Amazon S3 bucket so that the scientists can review and correct the data. When the data is corrected, amend the message in the SQS queue by using a replay Lambda function with the corrected data.
B. Convert the SQS standard queue to an SQS FIFO queue. Configure AWS Lambda to poll the SQS queue every 10 minutes by using an Amazon EventBridge schedule. Invoke the Lambda function to identify any messages with a SentTimestamp value that is older than 5 minutes, push the data to the same location as the application's output location, and remove the messages from the queue.
C. Create an SQS dead-letter queue. Modify the existing queue by including a redrive policy that sets the Maximum Receives setting to 1 and sets the dead-letter queue ARN to the ARN of the newly created queue. Instruct the scientists to use the dead-letter queue to review the data that is not valid. Reprocess this data at a later time.
D. Configure API Gateway to send messages to different SQS virtual queues that are named for each of the satellites. Update the application to use a new virtual queue for any data that it cannot transform, and send the message to the new virtual queue. Instruct the scientists to use the virtual queue to review the data that is not valid. Reprocess this data at a later time.
Answer: C. Create an SQS dead-letter queue. Modify the existing queue by including a redrive policy that sets the Maximum Receives setting to 1 and sets the dead-letter queue ARN to the ARN of the newly created queue. Instruct the scientists to use the dead-letter queue to review the data that is not valid. Reprocess this data at a later time.
Explanation: With a redrive policy whose maximum receive count is 1, any message that the application receives but fails to process (and therefore does not delete) is moved by SQS to the dead-letter queue instead of being redelivered to the application. The failed messages are retained there, so the scientists can review the data that is not valid and reprocess it later without any change to the application code.
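A minimal boto3 sketch of that redrive configuration, assuming hypothetical queue names (telemetry-queue, telemetry-dlq):

```python
# Create a dead-letter queue and attach a redrive policy (maxReceiveCount = 1)
# to the existing queue; queue names are hypothetical.
import json
import boto3

sqs = boto3.client("sqs")

# Create the dead-letter queue and look up its ARN.
dlq_url = sqs.create_queue(QueueName="telemetry-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Point the existing queue at the DLQ: a message that fails processing once
# is moved to the DLQ instead of being redelivered.
source_url = sqs.get_queue_url(QueueName="telemetry-queue")["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=source_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "1"}
        )
    },
)
```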
Question # 5
A company has configured an Amazon S3 event source on an AWS Lambda function. The company needs the Lambda function to run when a new object is created or an existing object is modified in a particular S3 bucket. The Lambda function will use the S3 bucket name and the S3 object key of the incoming event to read the contents of the created or modified S3 object. The Lambda function will parse the contents and save the parsed contents to an Amazon DynamoDB table.
The Lambda function's execution role has permissions to read from the S3 bucket and to write to the DynamoDB table. During testing, a DevOps engineer discovers that the Lambda function does not run when objects are added to the S3 bucket or when existing objects are modified.
Which solution will resolve this problem?
A. Increase the memory of the Lambda function to give the function the ability to process large files from the S3 bucket.
B. Create a resource policy on the Lambda function to grant Amazon S3 the permission to invoke the Lambda function for the S3 bucket.
C. Configure an Amazon Simple Queue Service (Amazon SQS) queue as an on-failure destination for the Lambda function.
D. Provision space in the /tmp folder of the Lambda function to give the function the ability to process large files from the S3 bucket.
Answer: B. Create a resource policy on the Lambda function to grant Amazon S3 the permission to invoke the Lambda function for the S3 bucket.
Explanation:
Option A is incorrect because increasing the memory of the Lambda function does
not address the root cause of the problem, which is that the Lambda function is not
triggered by the S3 event source. Increasing the memory of the Lambda function
might improve its performance or reduce its execution time, but it does not affect
its invocation. Moreover, increasing the memory of the Lambda function might
incur higher costs, as Lambda charges based on the amount of memory allocated
to the function.
Option B is correct because creating a resource policy on the Lambda function to
grant Amazon S3 the permission to invoke the Lambda function for the S3 bucket
is a necessary step to configure an S3 event source. A resource policy is a JSON
document that defines who can access a Lambda resource and under what
conditions. By granting Amazon S3 permission to invoke the Lambda function, the
company ensures that the Lambda function runs when a new object is created or
an existing object is modified in the S3 bucket.
Option C is incorrect because configuring an Amazon Simple Queue Service
(Amazon SQS) queue as an on-failure destination for the Lambda function does
not help with triggering the Lambda function. An on-failure destination only
receives events when the function is actually invoked and the invocation fails.
In this scenario the function is never invoked at all, so an on-failure
destination would have no effect on the problem.
Option D is incorrect because provisioning space in the /tmp folder of the Lambda
function does not address the root cause of the problem, which is that the Lambda
function is not triggered by the S3 event source. Provisioning space in the /tmp
folder of the Lambda function might help with processing large files from the S3
bucket, as it provides temporary storage for up to 512 MB of data. However, it
does not affect the invocation of the Lambda function.
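For illustration, a minimal boto3 sketch of the resource-based policy statement described in option B, using hypothetical function, bucket, and account identifiers:

```python
# Grant the S3 service permission to invoke the function for events from one bucket.
import boto3

lam = boto3.client("lambda")

lam.add_permission(
    FunctionName="s3-object-parser",            # hypothetical function name
    StatementId="AllowS3Invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::example-bucket",     # restrict to the specific bucket
    SourceAccount="111122223333",                # guard against bucket-name reuse
)
```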
Question # 6
A production account has a requirement that any Amazon EC2 instance that has been
logged in to manually must be terminated within 24 hours. All applications in the production
account are using Auto Scaling groups with the Amazon CloudWatch Logs agent
configured.
How can this process be automated?
A. Create a CloudWatch Logs subscription to an AWS Step Functions application. Configure an AWS Lambda function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a second Lambda function once a day that will terminate all instances with this tag.
B. Create an Amazon CloudWatch alarm that will be invoked by the login event. Send the notification to an Amazon Simple Notification Service (Amazon SNS) topic that the operations team is subscribed to, and have them terminate the EC2 instance within 24 hours.
C. Create an Amazon CloudWatch alarm that will be invoked by the login event. Configure the alarm to send to an Amazon Simple Queue Service (Amazon SQS) queue. Use a group of worker instances to process messages from the queue, which then schedules an Amazon EventBridge rule to be invoked.
D. Create a CloudWatch Logs subscription to an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a daily Lambda function that terminates all instances with this tag.
Answer: D. Create a CloudWatch Logs subscription to an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a daily Lambda function that terminates all instances with this tag.
Explanation: "You can use subscriptions to get access to a real-time feed of log events
from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing,
analysis, or loading to other systems. When log events are sent to the receiving service,
they are Base64 encoded and compressed with the gzip format."
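A hedged sketch of the first Lambda function, assuming the log stream name identifies the instance (an assumption about how the CloudWatch Logs agent is configured) and a hypothetical Decommission tag:

```python
# Decode the gzip-compressed, Base64-encoded CloudWatch Logs subscription payload
# and tag the instance that produced the login event.
import base64
import gzip
import json

import boto3

ec2 = boto3.client("ec2")


def handler(event, context):
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    # Assumes the agent writes each instance's logs to a stream named after its
    # instance ID; adjust to match the actual log group layout.
    instance_id = payload["logStream"]
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "Decommission", "Value": "true"}],
    )
```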
Question # 7
A media company has several thousand Amazon EC2 instances in an AWS account. The
company is using Slack and a shared email inbox for team communications and important
updates. A DevOps engineer needs to send all AWS-scheduled EC2 maintenance
notifications to the Slack channel and the shared inbox. The solution must include the
instances' Name and Owner tags.
Which solution will meet these requirements?
A. Integrate AWS Trusted Advisor with AWS Config. Configure a custom AWS Config rule to invoke an AWS Lambda function to publish notifications to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe a Slack channel endpoint and the shared inbox to the topic.
B. Use Amazon EventBridge to monitor for AWS Health events. Configure the maintenance events to target an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to send notifications to the Slack channel and the shared inbox.
C. Create an AWS Lambda function that sends EC2 maintenance notifications to the Slack channel and the shared inbox. Monitor EC2 health events by using Amazon CloudWatch metrics. Configure a CloudWatch alarm that invokes the Lambda function when a maintenance notification is received.
D. Configure AWS Support integration with AWS CloudTrail. Create a CloudTrail lookup event to invoke an AWS Lambda function to pass EC2 maintenance notifications to Amazon Simple Notification Service (Amazon SNS). Configure Amazon SNS to target the Slack channel and the shared inbox.
Answer: B. Use Amazon EventBridge to monitor for AWS Health events. Configure the maintenance events to target an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to send notifications to the Slack channel and the shared inbox.
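As an illustration only, a minimal boto3 sketch, with a hypothetical rule name and topic ARN, of routing AWS Health scheduled-change events for EC2 to an SNS topic; the Lambda subscriber would then look up each instance's Name and Owner tags before posting to Slack and the shared inbox.

```python
# Route AWS Health scheduled-change events for EC2 to an SNS topic.
import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="ec2-scheduled-maintenance",
    EventPattern=json.dumps({
        "source": ["aws.health"],
        "detail-type": ["AWS Health Event"],
        "detail": {"service": ["EC2"], "eventTypeCategory": ["scheduledChange"]},
    }),
)

events.put_targets(
    Rule="ec2-scheduled-maintenance",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:111122223333:ec2-maintenance"}],
)
```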