A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?
A. Copy the data so both EBS volumes contain all the documents
B. Configure the Application Load Balancer to direct a user to the server with the documents
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server
Answer is Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS
Each EBS volume is tied to a single Availability Zone and can be attached to only one EC2 instance, so each server sees only the documents stored on its own volume, and the load balancer simply alternates users between the two disjoint data sets. Option C fixes this by moving the documents to Amazon EFS (Elastic File System), a regional, shared file system that both instances can mount at the same time.
Copying the existing data from both EBS volumes into Amazon EFS and modifying the application to save new documents to EFS gives every instance the same, complete view of the documents while preserving the scalability and availability of the multi-AZ design.
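For illustration, here is a minimal boto3 sketch of the EFS portion of Option C. The subnet IDs (one per Availability Zone) and the security group are placeholders, not values from the scenario; the security group is assumed to allow NFS traffic on TCP port 2049. Each EC2 instance would then mount the file system at the path where the application stores documents.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")  # assumed Region

# Create one shared, regional file system for the documents.
fs = efs.create_file_system(
    CreationToken="shared-documents",   # idempotency token (any unique string)
    PerformanceMode="generalPurpose",
    Encrypted=True,
)
fs_id = fs["FileSystemId"]

# One mount target per Availability Zone so each EC2 instance can reach
# the file system from its own subnet.
# (In practice, wait until the file system is 'available' before this step.)
for subnet_id in ["subnet-aaaa1111", "subnet-bbbb2222"]:        # placeholder subnets
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],                # placeholder SG (NFS, TCP 2049)
    )

print(f"Mount on each instance, e.g.: sudo mount -t efs {fs_id}:/ /mnt/documents")
```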
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?
A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
Answer is Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
This is the most efficient way to migrate the 70 TB of video files. Because the data travels on the physical Snowball Edge device rather than over the company's internet or WAN links, the migration consumes almost no network bandwidth, and for a data set of this size it is typically faster than uploading over a constrained connection. Data on the device is encrypted with keys managed through AWS KMS, so the physical transfer is also secure.
Option A: It would require transferring the data over the network, which could consume a significant amount of bandwidth. This option does not address the requirement of minimizing network bandwidth usage.
Option C: It would still involve network transfers, potentially utilizing a significant amount of bandwidth.
Option D: It would also involve network transfers. Although it provides a dedicated network connection, it doesn't address the requirement of minimizing network bandwidth usage.
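For reference, a Snowball Edge import job can also be created programmatically. The sketch below is a hedged, minimal boto3 example: the shipping address ID, IAM role ARN, and S3 bucket ARN are placeholders that would come from the company's own account, and the capacity and shipping options are assumptions.

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")  # assumed Region

response = snowball.create_job(
    JobType="IMPORT",                        # import data from the device into S3
    SnowballType="EDGE",
    SnowballCapacityPreference="T100",       # assumed capacity choice for the ~70 TB data set
    ShippingOption="SECOND_DAY",
    AddressId="ADID0000-0000-0000-0000-000000000000",              # placeholder address ID
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",   # placeholder IAM role
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::example-video-archive"}    # placeholder bucket
        ]
    },
    Description="Import on-premises NFS video files into Amazon S3",
)
print("Created Snowball Edge job:", response["JobId"])
```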
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?
A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.
Answer is Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
Based on the requirements, Option B is the most appropriate solution. It uses an Amazon SQS queue as the destination for the jobs and scales the EC2 Auto Scaling group based on the size of the queue, which handles the variable workload while maximizing resiliency and scalability.
A statically sized (or scheduled) Auto Scaling group only works if the number of jobs never changes over time. Because this workload varies, dynamic scaling based on the queue backlog is needed to adjust the capacity of the Auto Scaling group.
To configure scaling based on Amazon SQS, the tasks are:
Step 1: Create a CloudWatch custom metric.
Step 2: Create a target tracking scaling policy.
Step 3: Test your scaling policy.
(A minimal boto3 sketch of steps 1 and 2 follows the explanation below.)
Configuring an Amazon SQS queue as a destination for the jobs, implementing compute nodes with EC2 instances managed in an Auto Scaling group, and configuring EC2 Auto Scaling based on the size of the queue is the most suitable solution. With this approach, the primary server can enqueue jobs into the SQS queue, and the compute nodes can dynamically scale based on the size of the queue. This ensures that the compute capacity adjusts according to the workload, maximizing resiliency and scalability. The SQS queue acts as a buffer, decoupling the primary server from the compute nodes and providing fault tolerance in case of failures or spikes in the workload.
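The steps above can be sketched with boto3 as follows. This is an illustrative example, not a production implementation: the queue URL, Auto Scaling group name, metric namespace, and the target value of 10 jobs per instance are all assumptions chosen for the sketch, and the metric-publishing part would normally run on a schedule (for example, from a small Lambda function invoked every minute).

```python
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs-queue"  # placeholder
ASG_NAME = "compute-nodes-asg"                                             # placeholder

# Step 1: publish a "backlog per instance" custom metric.
attrs = sqs.get_queue_attributes(
    QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
)
backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

asg = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
running = max(1, len(asg["AutoScalingGroups"][0]["Instances"]))  # avoid division by zero

cloudwatch.put_metric_data(
    Namespace="JobProcessing",
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
        "Value": backlog / running,
        "Unit": "Count",
    }],
)

# Step 2: target tracking scaling policy on the custom metric.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="keep-backlog-per-instance-low",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",
            "Namespace": "JobProcessing",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
            "Statistic": "Average",
        },
        "TargetValue": 10.0,  # assumed target of ~10 queued jobs per instance
    },
)
```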
A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received.
Which solution will meet these requirements?
A. Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.
B. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
C. Use an API Gateway authorizer to block any requests while the application processes an order.
D. Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing.
Answer is Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.
Option B, because an SQS FIFO queue guarantees message ordering.
- Amazon API Gateway will be used to receive the orders from the web application.
- Instead of directly processing the orders, the API Gateway will integrate with an Amazon SQS FIFO queue.
- FIFO (First-In-First-Out) queues in Amazon SQS ensure that messages are processed in the order they are received.
- By using a FIFO queue, the order processing is guaranteed to be sequential, ensuring that the first order received is processed before the next one.
- An AWS Lambda function can be configured to be triggered by the SQS FIFO queue, processing the orders as they arrive.
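A short sketch of the producer side, assuming a hypothetical FIFO queue URL: because every message uses the same MessageGroupId, SQS delivers the orders strictly in the order they were sent.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # placeholder

def enqueue_order(order: dict) -> None:
    """Send a new order to the FIFO queue; ordering is preserved per message group."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps(order),
        MessageGroupId="orders",                        # same group => strict FIFO ordering
        MessageDeduplicationId=str(order["order_id"]),  # or enable content-based deduplication
    )

enqueue_order({"order_id": 1001, "item": "book", "qty": 2})
enqueue_order({"order_id": 1002, "item": "lamp", "qty": 1})
```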
An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image in its compressed form in a different S3 bucket.
A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically.
Which combination of actions will meet these requirements? (Choose two.)
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket.
B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.
C. Configure the Lambda function to monitor the S3 bucket for new uploads. When an uploaded image is detected, write the file name to a text file in memory and use the text file to keep track of the images that were processed.
D. Launch an Amazon EC2 instance to monitor an Amazon Simple Queue Service (Amazon SQS) queue. When items are added to the queue, log the file name in a text file on the EC2 instance and invoke the Lambda function.
E. Configure an Amazon EventBridge (Amazon CloudWatch Events) event to monitor the S3 bucket. When an image is uploaded, send an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner's email address for further processing.
Answers are:
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the S3 bucket to send a notification to the SQS queue when an image is uploaded to the S3 bucket.
B. Configure the Lambda function to use the Amazon Simple Queue Service (Amazon SQS) queue as the invocation source. When the SQS message is successfully processed, delete the message in the queue.
Keywords:
- Store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function.
- Durable, stateless components to process the images automatically
To design a solution that uses durable, stateless components to process images automatically, a solutions architect could consider the following actions:
Option A involves creating an SQS queue and configuring the S3 bucket to send a notification to the queue when an image is uploaded. This decouples image upload from image processing and ensures that processing is triggered automatically whenever a new image arrives.
Option B involves configuring the Lambda function to use the SQS queue as the invocation source. When the SQS message is successfully processed, the message is deleted from the queue. This ensures that the Lambda function is invoked only once per image and that the image is not processed multiple times.
Option C is incorrect because it involves storing state (the file name) in memory, which is not a durable or scalable solution.
Option D is incorrect because it involves launching an EC2 instance to monitor the SQS queue, which is not a stateless solution.
Option E is incorrect because sending an alert to an Amazon Simple Notification Service (Amazon SNS) topic with the application owner's email address only notifies a person; it does not automatically process the images.
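To make Options A and B concrete, the sketch below shows a hypothetical Lambda handler invoked by the SQS queue. The destination bucket name and the compress() stub are assumptions; a real function would re-encode the image (for example with Pillow). With an SQS event source mapping, Lambda deletes each message automatically when the batch is processed successfully, which satisfies Option B without extra code.

```python
import json
import boto3

s3 = boto3.client("s3")
DESTINATION_BUCKET = "compressed-images-bucket"  # placeholder destination bucket

def compress(image_bytes: bytes) -> bytes:
    """Placeholder for the real compression logic (e.g. Pillow resize/re-encode)."""
    return image_bytes

def handler(event, context):
    # Each SQS record body contains an S3 event notification with one or more object records.
    for sqs_record in event["Records"]:
        s3_event = json.loads(sqs_record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            s3.put_object(Bucket=DESTINATION_BUCKET, Key=key, Body=compress(original))
    # Returning without raising an exception lets Lambda delete the SQS messages.
```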
Question 46
A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?
A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.
B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.
C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).
D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Answer is Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.
Keywords:
- Company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database.
- Improve scalability and minimize the configuration effort.
A: Incorrect. Lambda is serverless and scales automatically; moving to EC2 instances means managing load balancers, Auto Scaling groups, and more, and using native Java Database Connectivity (JDBC) drivers does not improve scalability.
B: Incorrect. It requires significant changes, and DynamoDB Accelerator (DAX) is a read cache; it does not help with write-heavy data loading.
C: Incorrect. SNS is designed for notifications and fan-out (for example, email or SMS); it does not buffer work the way a queue does.
D: Correct. Lambda and SQS are both serverless, and the SQS queue lets the application scale well by buffering the data between the two functions.
By dividing the functionality into two Lambda functions, one for receiving the information and the other for loading it into the database, you can independently scale and optimize each function based on their specific requirements. This approach allows for more efficient resource allocation and reduces the potential impact of high volumes of data on the overall system.
Integrating the Lambda functions with an Amazon SQS queue adds another layer of scalability and reliability. The receiving function pushes the information to the queue, and the loading function retrieves messages from the queue and processes them independently. This asynchronous decoupling ensures that the receiving function can handle high volumes of incoming requests without overwhelming the loading function. Additionally, SQS provides built-in retries and durable message storage, so no data is lost during processing.
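A minimal sketch of Option D follows, assuming hypothetical resource ARNs and queue URL, and assuming the Aurora PostgreSQL cluster has the RDS Data API enabled (otherwise the loading function would bundle a PostgreSQL driver such as psycopg2 instead).

```python
import json
import boto3

# --- Function 1: receives the API Gateway request and enqueues it ---
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"  # placeholder

def receive_handler(event, context):
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
    return {"statusCode": 202, "body": "queued"}

# --- Function 2: triggered by the SQS queue, loads records into Aurora ---
rds = boto3.client("rds-data")
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:app-cluster"        # placeholder
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds"  # placeholder

def load_handler(event, context):
    for record in event["Records"]:
        payload = record["body"]
        rds.execute_statement(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            database="appdb",                                   # placeholder database name
            sql="INSERT INTO events (payload) VALUES (:payload)",
            parameters=[{"name": "payload", "value": {"stringValue": payload}}],
        )
```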
Question 47
A company that hosts its web application on AWS wants to ensure all Amazon EC2 instances, Amazon RDS DB instances and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.
What should a solutions architect do to accomplish this?
A. Use AWS Config rules to define and detect resources that are not properly tagged.
B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.
Answer is Use AWS Config rules to define and detect resources that are not properly tagged.
AWS Config provides a set of prebuilt and customizable rules that check the configuration and compliance of AWS resources. By using the managed required-tags rule (or a custom rule), you can define the tags that are required on EC2 instances, RDS DB instances, and Redshift clusters. AWS Config continuously monitors the resources and generates configuration change events and evaluation results.
By leveraging AWS Config, the solution can automatically detect any resources that do not comply with the defined tagging requirements. This approach eliminates the need for manual checks or periodic code execution, reducing operational overhead. Additionally, AWS Config provides the ability to automatically remediate non-compliant resources by triggering Lambda or sending notifications, further streamlining the configuration management process.
Option B (using Cost Explorer) primarily focuses on cost analysis and does not provide direct enforcement of proper tagging. Option C and D (writing API calls and running them manually or through scheduled Lambda) require more manual effort and maintenance compared to using AWS Config rules.
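As a concrete illustration, the required-tags managed rule can be created with a single boto3 call. The rule name and the tag keys below are examples, not values from the scenario.

```python
import json
import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",
        "Description": "Flag EC2 instances, RDS DB instances, and Redshift clusters without required tags",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},  # AWS managed rule
        "Scope": {
            "ComplianceResourceTypes": [
                "AWS::EC2::Instance",
                "AWS::RDS::DBInstance",
                "AWS::Redshift::Cluster",
            ]
        },
        # Example tag keys to require; adjust to the company's tagging standard.
        "InputParameters": json.dumps({"tag1Key": "Environment", "tag2Key": "Owner"}),
    }
)
```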
Question 48
A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets.
Answer is Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
By creating a customer managed multi-Region KMS key, you get a primary key and a replica key that share the same key ID and key material, so data encrypted in one Region can be decrypted with the related key in the other Region.
Creating an S3 bucket in each Region allows you to store data in both Regions.
Configuring replication between the S3 buckets ensures that the data is replicated between the Regions.
Using client-side encryption with the KMS key ensures that the data is encrypted and decrypted with the same KMS key.
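A brief sketch of the key setup with boto3; the Regions are examples, and key policies and aliases are omitted for brevity.

```python
import boto3

REGION_PRIMARY = "us-east-1"   # example primary Region
REGION_REPLICA = "eu-west-1"   # example replica Region

kms = boto3.client("kms", region_name=REGION_PRIMARY)

# Create the customer managed multi-Region primary key.
primary = kms.create_key(
    Description="Multi-Region key for S3 document encryption",
    MultiRegion=True,
)
key_id = primary["KeyMetadata"]["KeyId"]

# Replicate the key into the second Region; the replica shares the same
# key ID and key material, so ciphertext is portable between Regions.
kms.replicate_key(KeyId=key_id, ReplicaRegion=REGION_REPLICA)
```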
A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS services and follows the AWS Well-Architected Framework.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.
B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a remote SSH session.
C. Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a tunnel for administration of each instance.
D. Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the instances by using SSH keys across the VPN tunnel.
Answer is Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a remote SSH session.
Option A provides direct access to the terminal interface of each instance, but it may not be practical for administration purposes and can be cumbersome to manage, especially for multiple instances.
Option C adds operational overhead and introduces additional infrastructure that needs to be managed, monitored, and secured. It also requires SSH key management and maintenance.
Option D is complex and may not be necessary for remote administration. It also requires administrators to connect from their local on-premises machines, which adds complexity and potential security risks.
Therefore, option B is the recommended solution as it provides secure, auditable, and repeatable remote access using IAM roles and AWS Systems Manager Session Manager, with minimal operational overhead.
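As an illustration of Option B's one-time setup, the sketch below uses boto3 to create an instance role with the AmazonSSMManagedInstanceCore managed policy and attach it to an instance; the role, profile, and instance identifiers are placeholders. Once the role is attached and the SSM Agent is running, administrators connect with `aws ssm start-session --target <instance-id>` instead of opening SSH ports.

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

ROLE_NAME = "EC2SessionManagerRole"          # example role name
PROFILE_NAME = "EC2SessionManagerProfile"    # example instance profile name
INSTANCE_ID = "i-0123456789abcdef0"          # placeholder instance ID

assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Role that lets the instance register with Systems Manager.
iam.create_role(RoleName=ROLE_NAME, AssumeRolePolicyDocument=json.dumps(assume_role_policy))
iam.attach_role_policy(
    RoleName=ROLE_NAME,
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Wrap the role in an instance profile and associate it with the instance.
# (IAM is eventually consistent; a short wait may be needed before association succeeds.)
iam.create_instance_profile(InstanceProfileName=PROFILE_NAME)
iam.add_role_to_instance_profile(InstanceProfileName=PROFILE_NAME, RoleName=ROLE_NAME)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": PROFILE_NAME},
    InstanceId=INSTANCE_ID,
)
```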
A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis.
The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.
What is the MOST operationally efficient solution that meets these requirements?
A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster. Set up the Amazon OpenSearch Service (Amazon Elasticsearch Service) cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of the message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Answer is Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B suggests launching EC2 instances to ingest and store the alerts, which introduces additional infrastructure management overhead and may not be as cost-effective and scalable as using managed services like Kinesis Data Firehose and S3.
C involves delivering the alerts to an Amazon OpenSearch Service cluster and manually managing snapshots and data deletion. This introduces additional complexity and manual overhead compared to the simpler solution of using Kinesis Data Firehose and S3.
D suggests using SQS to ingest the alerts, but it does not provide the same level of data persistence and durability as storing the alerts directly in S3. Additionally, it requires manual processing and copying of messages to S3, which adds operational complexity.
Therefore, A provides the most operationally efficient solution that meets the company's requirements by leveraging Kinesis Data Firehose to ingest the alerts, storing them in an S3 bucket, and using an S3 Lifecycle configuration to transition data to S3 Glacier for long-term archival, all without the need for managing additional infrastructure.
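The archival part of Option A is a single S3 Lifecycle rule. The sketch below shows it with boto3; the bucket name is a placeholder for the bucket that the Kinesis Data Firehose delivery stream writes to.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "edge-device-alerts"  # placeholder delivery bucket

# Keep 14 days of alerts in S3 Standard for immediate analysis,
# then transition the objects to S3 Glacier for archival.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-alerts-after-14-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
        }]
    },
)
```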