A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown, and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for timely backups to Amazon S3 with minimal impact on internet connectivity for internal users.
Which solution meets these requirements?
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.
Answer is Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
AWS Direct Connect is a network service that allows you to establish a dedicated network connection from your on-premises data center to AWS. This connection bypasses the public Internet and can provide more reliable, lower-latency communication between your on-premises application and Amazon S3. By directing backup traffic through the AWS Direct Connect connection, you can minimize the impact on your internet bandwidth and ensure timely backups to S3.
Option A (wrong): establishing AWS VPN connections and proxying traffic through a VPC gateway endpoint would not reduce the impact on internet bandwidth, because a Site-to-Site VPN still runs over the public internet and competes with internal users for the same connection. In addition, a VPC gateway endpoint for S3 cannot be reached from on premises over a VPN.
Option C (wrong): ordering daily AWS Snowball devices avoids the internet entirely, but shipping devices back and forth every day introduces delays and operational overhead, so it is not a practical long-term solution for timely, time-sensitive backups.
Option D (wrong), submitting a support ticket to request the removal of S3 service limits, would not address the issue of internet bandwidth limitations and would not ensure timely backups to S3.
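For illustration only, ordering the Direct Connect connection can be scripted with boto3. This is a minimal sketch; the Region, location code, bandwidth, and connection name are placeholder values, and the physical cross-connect plus a public virtual interface for S3 still have to be set up separately.

import boto3

dx = boto3.client("directconnect", region_name="us-east-1")  # assumed Region

# List the Direct Connect locations available in this Region.
locations = dx.describe_locations()["locations"]
print([loc["locationCode"] for loc in locations])

# Request a dedicated 1 Gbps connection at a chosen location (placeholder values).
connection = dx.create_connection(
    location="EqDC2",            # hypothetical location code
    bandwidth="1Gbps",
    connectionName="backup-to-s3",
)
print(connection["connectionId"], connection["connectionState"])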
A company has a data ingestion workflow that consists of the following:
- An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries
- An AWS Lambda function to process the data and record metadata
The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job.
Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Choose two.)
A. Deploy the Lambda function in multiple Availability Zones.
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
C. Increase the CPU and memory that are allocated to the Lambda function.
D. Increase provisioned throughput for the Lambda function.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
Answers are:
B. Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic. This decouples the ingestion workflow and provides a durable buffer that holds the data when network connectivity issues occur.
E. Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue. The function can then process messages at its own pace, and messages that fail to process remain in the queue for retry instead of being lost.
B and E are correct because SQS keeps the messages stored in a queue until they are successfully processed.
A: Lambda is a regional service that already runs across multiple Availability Zones, so the function's availability is not the issue here.
C: there is no indication of a CPU or memory bottleneck, so increasing the allocation adds no value.
D: this is not a throughput or concurrency problem, so provisioning more capacity for the Lambda function would not fix the failed runs; without a queue, the missed messages would still be lost.
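As a rough sketch of B and E together, the wiring could look like the following with boto3. The topic ARN, queue name, and function name are placeholders; an access policy on the queue is also needed so the SNS topic is allowed to deliver messages to it.

import boto3
import json

sqs = boto3.client("sqs")
sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:123456789012:new-data-deliveries"  # placeholder
function_name = "ingest-data"                                         # placeholder

# 1. Create the queue and allow the SNS topic to send messages to it.
queue_url = sqs.create_queue(QueueName="data-ingestion-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})

# 2. Subscribe the queue to the SNS topic (answer B).
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# 3. Have the Lambda function poll the queue instead of being invoked directly by SNS (answer E).
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn, FunctionName=function_name, BatchSize=10
)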
Question 53
A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size.
Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.
What should a solutions architect do to meet these requirements with the LEAST development effort?
A. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger an S3 Lifecycle policy to remove the objects that contain PII.
B. Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
C. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
D. Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle policy to remove the objects that contain PII.
Answer is Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
Amazon Macie discovers sensitive data using machine learning and pattern matching, provides visibility into data security risks, and enables automated protection against those risks. Although option B relies on an SNS notification and administrator action rather than fully automated remediation, it is still the best fit with the least development effort, because Macie is the purpose-built service for scanning S3 and identifying PII.
It cannot be D because a Lambda function cannot trigger an S3 Lifecycle policy to remove specific objects: Lifecycle policies evaluate rules (age, prefix, tags) on a schedule and are not invoked on demand. Option D also requires writing and maintaining custom scanning code, which is far more development effort than using Macie.
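A minimal sketch of what the Macie-based setup might look like with boto3. The bucket name, account ID, and SNS topic ARN are placeholders; Macie publishes its findings to EventBridge, which forwards them to the administrators' topic.

import boto3
import json
import uuid

macie = boto3.client("macie2")
events = boto3.client("events")

bucket = "sftp-transfer-bucket"                                     # hypothetical bucket
account_id = "123456789012"                                         # placeholder account
admin_topic_arn = "arn:aws:sns:us-east-1:123456789012:pii-alerts"   # placeholder topic

# Enable Macie and scan the transfer bucket for PII on a daily schedule.
macie.enable_macie()
macie.create_classification_job(
    clientToken=str(uuid.uuid4()),
    jobType="SCHEDULED",
    scheduleFrequency={"dailySchedule": {}},
    name="scan-sftp-uploads-for-pii",
    s3JobDefinition={
        "bucketDefinitions": [{"accountId": account_id, "buckets": [bucket]}]
    },
)

# Route Macie findings to the administrators' SNS topic via EventBridge.
events.put_rule(
    Name="macie-pii-findings",
    EventPattern=json.dumps({"source": ["aws.macie"], "detail-type": ["Macie Finding"]}),
)
events.put_targets(Rule="macie-pii-findings", Targets=[{"Id": "sns", "Arn": admin_topic_arn}])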
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week.
What should the company do to guarantee the EC2 capacity?
A. Purchase Reserved Instances that specify the Region needed.
B. Create an On-Demand Capacity Reservation that specifies the Region needed.
C. Purchase Reserved Instances that specify the Region and three Availability Zones needed.
D. Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
Answer is Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed.
An On-Demand Capacity Reservation lets you reserve EC2 compute capacity for any duration, with no long-term commitment. You specify the Availability Zone, instance type, platform, and number of instances, so creating a reservation in each of the three Availability Zones guarantees the capacity the company needs for the week of the event.
Other options:
Option A: Reserved Instances that specify only the Region provide a billing discount but do not reserve capacity in any Availability Zone.
Option B: an On-Demand Capacity Reservation that specifies only the Region would not pin capacity to the three required Availability Zones; Capacity Reservations are created per Availability Zone.
Option C: zonal Reserved Instances do reserve capacity in the specified Availability Zones, but they require a one-year or three-year commitment, which does not make sense for a one-week event.
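A minimal boto3 sketch of the reservation, assuming a placeholder instance type, instance count, and set of Availability Zones. Because each Capacity Reservation targets one Availability Zone, the sketch creates one reservation per zone and lets them expire after the event.

import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed Region

end_date = datetime.now(timezone.utc) + timedelta(days=7)
for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:
    reservation = ec2.create_capacity_reservation(
        InstanceType="m5.large",          # placeholder instance type
        InstancePlatform="Linux/UNIX",
        AvailabilityZone=az,
        InstanceCount=10,                 # placeholder count per zone
        EndDateType="limited",
        EndDate=end_date,                 # reservation is released after the event
    )
    print(az, reservation["CapacityReservation"]["CapacityReservationId"])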
A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location.
What should a solutions architect do to meet these requirements?
A. Move the catalog to Amazon ElastiCache for Redis.
B. Deploy a larger EC2 instance with a larger instance store.
C. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
D. Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
Answer is Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
EC2 instance store is ephemeral storage that is lost when the instance stops, terminates, or fails, so the catalog must move to durable, shared storage. Amazon EFS is a fully managed file system designed for 99.999999999 percent (11 9s) durability and up to 99.99 percent (4 9s) availability, and it can be mounted by multiple EC2 instances across Availability Zones, which keeps the catalog both highly available and durable.
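A minimal boto3 sketch of the EFS setup, assuming hypothetical subnet and security group IDs and a placeholder mount path on the instances.

import boto3

efs = boto3.client("efs")

# Create a Regional (multi-AZ) file system for the catalog.
fs = efs.create_file_system(
    CreationToken="catalog-fs",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone so any instance can reach the catalog.
for subnet_id in ["subnet-aaa111", "subnet-bbb222"]:        # hypothetical subnets
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],            # hypothetical security group
    )

# On each EC2 instance the file system would then be mounted, for example:
#   sudo mount -t efs <fs_id>:/ /var/www/catalog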
A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Configure the application to send the data to Amazon Kinesis Data Firehose.
B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application's API for the data.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the data.
E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email.
Answers are:
B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the data.
B and D are the only two correct options. If you chose option E, you missed the daily morning schedule requirement in the question, which cannot be achieved with S3 event notifications to SNS. Amazon EventBridge can be used to configure scheduled events (every morning in this case). Option B fulfills the HTML email requirement (via SES), and option D fulfills the every-morning schedule requirement (via EventBridge invoking a Lambda function).
A. Amazon Kinesis Data Firehose: This service is typically used for real-time streaming data processing rather than for scheduled tasks like generating a morning report.
C. Amazon EventBridge to invoke an AWS Glue job: AWS Glue is a data integration service that's more focused on ETL (extract, transform, load) operations, often involving large datasets and complex transformations, which might be more than needed for this scenario.
E. Amazon S3 with SNS topic: Storing data in S3 and using SNS for notification is viable, but this doesn't directly address the need to format the data into HTML and send it as an email report. SNS is better suited for sending notifications rather than formatted reports.
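A rough sketch of the Lambda handler behind options B and D, assuming a hypothetical statistics endpoint and verified SES addresses. An EventBridge schedule such as cron(0 6 * * ? *) would invoke the handler every morning.

import boto3
import urllib.request

ses = boto3.client("ses")

STATS_API_URL = "https://internal.example.com/shipping-stats"   # hypothetical REST API
SENDER = "reports@example.com"                                  # placeholder verified sender
RECIPIENTS = ["ops@example.com", "sales@example.com"]           # placeholder recipients

def handler(event, context):
    # Pull the day's shipping statistics from the application's REST API.
    with urllib.request.urlopen(STATS_API_URL) as resp:
        stats = resp.read().decode("utf-8")

    # Wrap the data in a simple HTML report and send it to everyone at once with SES.
    html_body = f"<html><body><h1>Daily shipping report</h1><pre>{stats}</pre></body></html>"
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": RECIPIENTS},
        Message={
            "Subject": {"Data": "Daily shipping statistics"},
            "Body": {"Html": {"Data": html_body}},
        },
    )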
Question 57
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The file shares synchronize data between themselves and maintain duplicate copies.
The company wants a highly available and durable storage solution that preserves how users currently access the files.
What should a solutions architect do to meet these requirements?
A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.
Answer is Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
FSx for Windows provides fully managed Windows-native SMB file shares that are accessible from Windows clients.
It allows seamlessly migrating the existing Windows file shares to FSx shares without disrupting users.
The Multi-AZ configuration provides high availability and durability for file storage.
Users can continue to access files the same way over SMB without any changes.
It is optimized for Windows workloads and provides features like user quotas, ACLs, AD integration.
Data is stored on SSDs with automatic backups for resilience.
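A minimal boto3 sketch of the Multi-AZ FSx for Windows deployment, assuming hypothetical subnet, security group, and directory IDs.

import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows File Server joined to the company directory.
fs = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                      # GiB, placeholder size
    StorageType="SSD",
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",        # standby file server in a second AZ
        "PreferredSubnetId": "subnet-aaa111",
        "ThroughputCapacity": 32,              # MB/s
        "ActiveDirectoryId": "d-1234567890",   # hypothetical AWS Managed Microsoft AD
        "AutomaticBackupRetentionDays": 7,
    },
)
print(fs["FileSystem"]["FileSystemId"])

Users then map the share over SMB using the file system's DNS name, so nothing changes in how they access their files.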
A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately.
The company wants to forward all requests to the website so that the requests will use HTTPS.
What should a solutions architect do to meet this requirement?
A. Update the ALB's network ACL to accept only HTTPS traffic.
B. Create a rule that replaces the HTTP in the URL with HTTPS.
C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI).
Answer is Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.
To meet the requirement of forwarding all requests to the website so that the requests will use HTTPS, a solutions architect can create a listener rule on the ALB that redirects HTTP traffic to HTTPS. This can be done by creating a rule with a condition that matches all HTTP traffic and a rule action that redirects the traffic to the HTTPS listener. The HTTPS listener should already be configured to accept HTTPS traffic and forward it to the target group.
Option A. Updating the ALB's network ACL to accept only HTTPS traffic is not a valid solution because the network ACL is used to control inbound and outbound traffic at the subnet level, not at the listener level.
Option B. Creating a rule that replaces the HTTP in the URL with HTTPS is not a valid solution because this would not redirect the traffic to the HTTPS listener.
Option D. Replacing the ALB with a Network Load Balancer configured to use Server Name Indication (SNI) is not a valid solution because it would not address the requirement to redirect HTTP traffic to HTTPS.
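A minimal boto3 sketch of the redirect, assuming a placeholder ARN for the ALB's existing HTTP listener. The default action of the HTTP listener is replaced with a permanent redirect to HTTPS.

import boto3

elbv2 = boto3.client("elbv2")

# ARN of the existing HTTP :80 listener (placeholder value).
http_listener_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "listener/app/my-alb/50dc6c495c0c9188/f2f7dc8efc522ab2"
)

# Replace the HTTP listener's default action with a permanent redirect to HTTPS.
elbv2.modify_listener(
    ListenerArn=http_listener_arn,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",   # host, path, and query are preserved by default
        },
    }],
)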
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the certificate expires.
What should a solutions architect do to meet these requirements?
A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
B. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
C. Use AWS Certificate Manager (ACM) Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate.
D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
Answer is Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge (Amazon CloudWatch Events) to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
Imported certificates: if you want to use a third-party certificate with Amazon CloudFront, Elastic Load Balancing, or Amazon API Gateway, you may import it into ACM using the AWS Management Console, AWS CLI, or ACM APIs. ACM cannot renew imported certificates, but it can help you manage the renewal process. You are responsible for monitoring the expiration dates of your imported certificates and for renewing them before they expire. You can use ACM CloudWatch metrics to monitor the expiration dates of imported certificates and import a new third-party certificate to replace an expiring one.
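A minimal boto3 sketch of option D, assuming placeholder certificate files and a placeholder SNS topic for the expiration alert. ACM emits an EventBridge event when an imported certificate approaches expiration, which the rule below forwards to the administrators.

import boto3
import json
from pathlib import Path

acm = boto3.client("acm")
events = boto3.client("events")

# Import the externally issued certificate (file names are placeholders).
cert = acm.import_certificate(
    Certificate=Path("certificate.pem").read_bytes(),
    PrivateKey=Path("private-key.pem").read_bytes(),
    CertificateChain=Path("chain.pem").read_bytes(),
)
print(cert["CertificateArn"])   # attach this ARN to the ALB's HTTPS listener

# Notify administrators when ACM reports the certificate is approaching expiration.
events.put_rule(
    Name="acm-certificate-expiring",
    EventPattern=json.dumps({
        "source": ["aws.acm"],
        "detail-type": ["ACM Certificate Approaching Expiration"],
    }),
)
events.put_targets(
    Rule="acm-certificate-expiring",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:cert-alerts"}],  # placeholder topic
)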
A company hosts an application on multiple Amazon EC2 instances. The application processes messages from an Amazon SQS queue, writes to an Amazon RDS table, and deletes the message from the queue. Occasional duplicate records are found in the RDS table. The SQS queue does not contain any duplicate messages.
What should a solutions architect do to ensure messages are being processed once only?
A. Use the CreateQueue API call to create a new queue.
B. Use the AddPermission API call to add appropriate permissions.
C. Use the ReceiveMessage API call to set an appropriate wait time.
D. Use the ChangeMessageVisibility API call to increase the visibility timeout.
Answer is Use the ChangeMessageVisibility API call to increase the visibility timeout.
To ensure that messages are processed only once, a solutions architect should use the ChangeMessageVisibility API call to increase the visibility timeout (option D).
The visibility timeout determines the amount of time that a message received from an SQS queue is hidden from other consumers while the message is being processed. If the processing of a message takes longer than the visibility timeout, the message will become visible to other consumers and may be processed again. By increasing the visibility timeout, the solutions architect can ensure that the message is not made visible to other consumers until the processing is complete and the message can be safely deleted from the queue.
Option A (Use the CreateQueue API call to create a new queue) would not address the issue of duplicate message processing.
Option B (Use the AddPermission API call to add appropriate permissions) is not relevant to this issue because it deals with setting permissions for accessing an SQS queue, which is not related to preventing duplicate records in the RDS table.
Option C (Use the ReceiveMessage API call to set an appropriate wait time) is not relevant to this issue because it is related to configuring how long the ReceiveMessage API call should wait for new messages to arrive in the SQS queue before returning an empty response. It does not address the issue of duplicate records in the RDS table.
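A minimal boto3 sketch, assuming a placeholder queue URL and a stand-in processing function, of raising the visibility timeout and deleting each message only after the RDS write succeeds so it is never delivered to another consumer.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder queue URL

def process_and_write_to_rds(body):
    """Hypothetical stand-in for the application's processing and RDS insert."""
    print("processing", body)

# Raise the queue's default visibility timeout above the worst-case processing time.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"VisibilityTimeout": "300"})

messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
).get("Messages", [])

for msg in messages:
    # If processing may run long, extend the timeout for this message only.
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg["ReceiptHandle"],
        VisibilityTimeout=600,
    )
    process_and_write_to_rds(msg["Body"])
    # Delete only after the RDS write succeeds, so the message cannot be processed twice.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])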