SAA-C03: AWS Certified Solutions Architect - Associate


Question 11

A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks.

Which solution meets these requirements?
Enable Amazon GuardDuty on the account.
Enable Amazon Inspector on the EC2 instances.
Enable AWS Shield and assign Amazon Route 53 to it.
Enable AWS Shield Advanced and assign the ELB to it.




Answer is Enable AWS Shield Advanced and assign the ELB to it.

AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
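For illustration, here is a minimal boto3 sketch of registering an existing load balancer as a Shield Advanced protected resource. It assumes an active Shield Advanced subscription already exists, and the protection name and load balancer ARN are placeholders.

import boto3

# Shield Advanced is a global service; its API endpoint lives in us-east-1.
shield = boto3.client("shield", region_name="us-east-1")

# Hypothetical ARN of the ELB that fronts the web application.
alb_arn = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/app/web-app-alb/0123456789abcdef")

# Register the load balancer as a Shield Advanced protected resource.
response = shield.create_protection(
    Name="web-app-alb-protection",
    ResourceArn=alb_arn,
)
print("Protection ID:", response["ProtectionId"])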

Option A is incorrect because Amazon GuardDuty is a threat detection service that focuses on identifying malicious activity and unauthorized behavior within AWS accounts. While it is useful for detecting various security threats, it does not specifically address large-scale DDoS attacks.

Option B is also incorrect because Amazon Inspector is a vulnerability assessment service that helps identify security issues and vulnerabilities within EC2. It does not directly protect against DDoS attacks.

Reference:
https://aws.amazon.com/shield/features/

Question 12

A company has an Amazon S3 bucket that contains critical data.
The company must protect the data from accidental deletion.

Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
Enable versioning on the S3 bucket.
Enable MFA Delete on the S3 bucket.
Create a bucket policy on the S3 bucket.
Enable default encryption on the S3 bucket.
Create a lifecycle policy for the objects in the S3 bucket.




Answers are:
A. Enable versioning on the S3 bucket.
B. Enable MFA Delete on the S3 bucket.


Enabling versioning on the S3 bucket ensures that multiple versions of an object are stored in the bucket. When an object is updated or deleted, a new version is created, preserving the previous version.

Enabling MFA Delete adds an additional layer of protection by requiring an MFA device to be present when attempting to delete objects. This helps prevent accidental or unauthorized deletions by requiring an extra level of authentication.
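A minimal boto3 sketch of these two steps follows; the bucket name and MFA device serial/code are placeholders, and enabling MFA Delete must be done with the bucket owner's (root) credentials.

import boto3

s3 = boto3.client("s3")
bucket = "critical-records-bucket"  # placeholder bucket name

# Step 1: turn on versioning so every overwrite or delete keeps prior versions.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Step 2: enable MFA Delete. The MFA argument is "<mfa-device-arn> <current-code>"
# and the call must be made by the bucket owner (root) with that MFA device.
s3.put_bucket_versioning(
    Bucket=bucket,
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)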

C. Creating a bucket policy on the S3 bucket is focused on defining access control and permissions for the bucket and its objects, rather than protecting against accidental deletion.

D. Enabling default encryption on the S3 bucket ensures that any new objects uploaded to the bucket are automatically encrypted. While encryption is important for data security, it does not directly address accidental deletion.

E. Creating a lifecycle policy for the objects in the S3 bucket allows automated management of objects based on predefined rules. While this can help with data retention and storage cost optimization, it does not directly protect against accidental deletion.

Reference:
https://aws.amazon.com/it/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/

Question 13

A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.

What should a solutions architect do to meet these requirements?
Create an AWS Lambda function to apply the patch to all EC2 instances.
Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.




Answer is Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.

AWS Systems Manager Run Command allows the company to run commands or scripts on multiple EC2 instances. By using Run Command, the company can quickly and easily apply the patch to all 1,000 EC2 instances to remediate the security vulnerability.
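As a rough sketch, the patch could be pushed to the fleet with a single send_command call via boto3. It assumes the instances run the SSM Agent with an appropriate instance profile; the tag key/value, patch command, and concurrency settings are placeholders.

import boto3

ssm = boto3.client("ssm")

# Run the vendor's patch command on every instance tagged for this workload.
response = ssm.send_command(
    DocumentName="AWS-RunShellScript",
    Targets=[{"Key": "tag:Workload", "Values": ["production"]}],
    Parameters={"commands": ["sudo /opt/thirdparty/bin/apply-patch --critical"]},
    MaxConcurrency="10%",   # patch in waves of 10% of the fleet
    MaxErrors="5%",         # stop if more than 5% of invocations fail
    Comment="Emergency patch for third-party software vulnerability",
)
print("Command ID:", response["Command"]["CommandId"])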

Creating an AWS Lambda function to apply the patch would not be a suitable solution, because Lambda functions are not designed to log in to and patch software across a fleet of EC2 instances. Configuring AWS Systems Manager Patch Manager would not be the fastest option, because Patch Manager works from patch baselines and scan/install operations rather than immediately pushing an arbitrary third-party patch command. Scheduling an AWS Systems Manager maintenance window would introduce unnecessary delay, because maintenance windows run at defined times rather than right away.

D is the best choice: "critical" means immediate, so just run the patch command with Systems Manager Run Command to get it done.
A: Too convoluted.
B: Could work, but requires setting up patch baselines and related configuration first; it would be a good choice if D weren't an option.
C: It is a critical patch, so there is no time to wait for a maintenance window.

Reference:
https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html

Question 14

A company needs to store its accounting records in Amazon S3. The records must be immediately accessible for 1 year and then must be archived for an additional 9 years.
No one at the company, including administrative users and root users, should be able to delete the records during the entire 10-year period. The records must be stored with maximum resiliency.

Which solution will meet these requirements?
Store the records in S3 Glacier for the entire 10-year period. Use an access control policy to deny deletion of the records for a period of 10 years.
Store the records by using S3 Intelligent-Tiering. Use an IAM policy to deny deletion of the records. After 10 years, change the IAM policy to allow deletion.
Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.
Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 year. Use S3 Object Lock in governance mode for a period of 10 years.




Answer is Use an S3 Lifecycle policy to transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. Use S3 Object Lock in compliance mode for a period of 10 years.

To meet the requirement that the records be immediately accessible for 1 year and then archived for an additional 9 years with maximum resiliency, an S3 Lifecycle policy can transition the records from S3 Standard to S3 Glacier Deep Archive after 1 year. To ensure that the records cannot be deleted by anyone, including administrative and root users, S3 Object Lock in compliance mode can be applied for a period of 10 years. Therefore, the correct answer is option C.
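A minimal boto3 sketch of these two settings follows. The bucket name is a placeholder, and it assumes Object Lock was enabled when the bucket was created (a prerequisite for applying a default retention rule).

import boto3

s3 = boto3.client("s3")
bucket = "accounting-records-bucket"  # placeholder; Object Lock must be
                                      # enabled at bucket creation

# Transition objects to S3 Glacier Deep Archive one year after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-1-year",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)

# Apply a default 10-year compliance-mode retention to new objects.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 10}},
    },
)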

Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html

Question 15

A solutions architect is developing a VPC architecture that includes multiple subnets. The architecture will host applications that use Amazon EC2 instances and Amazon RDS DB instances. The architecture consists of six subnets in two Availability Zones. Each Availability Zone includes a public subnet, a private subnet, and a dedicated subnet for databases. Only EC2 instances that run in the private subnets can have access to the RDS databases.

Which solution will meet these requirements?
Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.
Create a security group that denies inbound traffic from the security group that is assigned to instances in the public subnets. Attach the security group to the DB instances.
Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.
Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.




Answer is Create a security group that allows inbound traffic from the security group that is assigned to instances in the private subnets. Attach the security group to the DB instances.

The RDS databases can be accessed only by EC2 instances in the private subnets: the security group attached to the DB instances permits inbound traffic only from the security group assigned to the instances in the private subnets, so only those instances can reach the databases.
This is a secure design: because security groups deny all inbound traffic that is not explicitly allowed, every other source is blocked, which helps shield the RDS databases from unwanted access.
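A minimal boto3 sketch of this security group referencing pattern follows; the VPC ID, security group IDs, and database port (3306 is used as an example) are placeholders.

import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"               # placeholder VPC ID
private_instances_sg = "sg-0aaaaaaaaaaaaaaaa"  # SG attached to the private-subnet EC2 instances

# Security group for the RDS DB instances.
db_sg = ec2.create_security_group(
    GroupName="rds-db-sg",
    Description="Allow DB traffic only from private-subnet instances",
    VpcId=vpc_id,
)["GroupId"]

# Allow database traffic only from the security group used by the
# private-subnet instances; all other inbound traffic is implicitly denied.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": private_instances_sg}],
    }],
)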

Option A, creating a new route table that excludes the route to the public subnets' CIDR blocks and associating it with the database subnets, would not meet the requirements. Every VPC route table contains a local route covering the entire VPC CIDR that cannot be removed, so route tables cannot be used to block traffic between subnets in the same VPC.

Option B, creating a security group that denies inbound traffic from the security group assigned to instances in the public subnets and attaching it to the DB instances, would not meet the requirements. Security groups support only allow rules and cannot contain explicit deny rules, and this approach would not restrict access to only the instances in the private subnets.

Option D, creating peering connections between the public and private subnets and between the private and database subnets, would not meet the requirements. VPC peering connections are established between VPCs, not between subnets of the same VPC, so this design is not valid.

Question 16

A company has registered its domain name with Amazon Route 53. The company uses Amazon API Gateway in the ca-central-1 Region as a public interface for its backend microservice APIs. Third-party services consume the APIs securely.
The company wants to design its API Gateway URL with the company's domain name and corresponding certificate so that the third-party services can use HTTPS.

Which solution will meet these requirements?
Create stage variables in API Gateway with Name="Endpoint-URL" and Value="Company Domain Name" to overwrite the default URL. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM).
Create Route 53 DNS records with the company's domain name. Point the alias record to the Regional API Gateway stage endpoint. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region.
Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.
Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the us-east-1 Region. Attach the certificate to the API Gateway APIs. Create Route 53 DNS records with the company's domain name. Point an A record to the company's domain name.




Answer is Create a Regional API Gateway endpoint. Associate the API Gateway endpoint with the company's domain name. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region. Attach the certificate to the API Gateway endpoint. Configure Route 53 to route traffic to the API Gateway endpoint.

To design the API Gateway URL with the company's domain name and corresponding certificate, the company needs to do the following:
1. Create a Regional API Gateway endpoint: This will allow the company to create an endpoint that is specific to a region.
2. Associate the API Gateway endpoint with the company's domain name: This will allow the company to use its own domain name for the API Gateway URL.
3. Import the public certificate associated with the company's domain name into AWS Certificate Manager (ACM) in the same Region: This will allow the company to use HTTPS for secure communication with its APIs.
4. Attach the certificate to the API Gateway endpoint: This will allow the company to use the certificate for securing the API Gateway URL.
5. Configure Route 53 to route traffic to the API Gateway endpoint: This will allow the company to use Route 53 to route traffic to the API Gateway URL using the company's domain name.
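A minimal boto3 sketch of these five steps follows. The domain name, certificate ARN (issued or imported into ACM in ca-central-1), REST API ID, stage name, and hosted zone ID are all placeholders.

import boto3

apigw = boto3.client("apigateway", region_name="ca-central-1")
route53 = boto3.client("route53")

domain = "api.example.com"
cert_arn = "arn:aws:acm:ca-central-1:123456789012:certificate/abc-123"
rest_api_id = "a1b2c3d4e5"
stage = "prod"
hosted_zone_id = "Z1234567890ABC"

# Steps 1-4: Regional custom domain with the ACM certificate attached.
dn = apigw.create_domain_name(
    domainName=domain,
    regionalCertificateArn=cert_arn,
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Map the custom domain to the API's stage.
apigw.create_base_path_mapping(
    domainName=domain, restApiId=rest_api_id, stage=stage
)

# Step 5: Route 53 alias record pointing the domain at the Regional endpoint.
route53.change_resource_record_sets(
    HostedZoneId=hosted_zone_id,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": dn["regionalHostedZoneId"],
                "DNSName": dn["regionalDomainName"],
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)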

Option A: Stage variables only parameterize values used by an API's integrations; they cannot replace the API Gateway URL with a custom domain, so importing a certificate into ACM alone does not meet the requirement.
Option B: Pointing an alias record directly at the Regional stage endpoint skips creating a custom domain name in API Gateway, and importing the certificate into us-east-1 is wrong for a Regional endpoint; the certificate must be in the same Region as the API.
Option D: It uses a certificate from us-east-1 for a Regional API Gateway endpoint, which is not supported (us-east-1 is required only for edge-optimized endpoints), and pointing an A record at the company's own domain name is circular; the record must point to the API Gateway custom domain endpoint.

Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-regional-api-custom-domain-create.html

Question 17

A company is running a popular social media website. The website gives users the ability to upload images to share with other users.
The company wants to make sure that the images do not contain inappropriate content. The company needs a solution that minimizes development effort.

What should a solutions architect do to meet these requirements?
Use Amazon Comprehend to detect inappropriate content. Use human review for low-confidence predictions.
Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.
Use Amazon SageMaker to detect inappropriate content. Use ground truth to label low-confidence predictions.
Use AWS Fargate to deploy a custom machine learning model to detect inappropriate content. Use ground truth to label low-confidence predictions.




Answer is Use Amazon Rekognition to detect inappropriate content. Use human review for low-confidence predictions.

Amazon Rekognition is a cloud-based image and video analysis service that can detect inappropriate content in images using its pre-trained content moderation models. It can identify a wide range of unsafe content, including explicit or suggestive adult content, violence, and other disturbing imagery. The service provides high accuracy and low latency, making it a good choice for this use case.
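A minimal boto3 sketch of this moderation flow follows; the S3 bucket, object key, and confidence thresholds are placeholders, and the human-review step is only indicated in a comment.

import boto3

rekognition = boto3.client("rekognition")

# Check an uploaded user image (placeholder S3 location) for unsafe content.
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "user-uploads-bucket", "Name": "photo.jpg"}},
    MinConfidence=50,
)

for label in response["ModerationLabels"]:
    if label["Confidence"] >= 90:
        print("Reject image:", label["Name"])
    else:
        # Low-confidence predictions are routed to human review
        # (for example, through an Amazon A2I workflow).
        print("Send to human review:", label["Name"], label["Confidence"])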

Option A, using Amazon Comprehend, is not a good fit for this use case because Amazon Comprehend is a natural language processing service that is designed to analyze text, not images.

Option C, using Amazon SageMaker to detect inappropriate content, would require significant development effort to build and train a custom machine learning model. It would also require a large dataset of labeled images to train the model, which may be time-consuming and expensive to obtain.

Option D, using AWS Fargate to deploy a custom machine learning model, would also require significant development effort and a large dataset of labeled images. It may not be the most efficient or cost-effective solution for this use case.

Reference:
https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html?pg=ln&sec=ft

Question 18

A company is developing a two-tier web application on AWS. The company's developers have deployed the application on an Amazon EC2 instance that connects directly to a backend Amazon RDS database. The company must not hardcode database credentials in the application. The company must also implement a solution to automatically rotate the database credentials on a regular basis.

Which solution will meet these requirements with the LEAST operational overhead?
Store the database credentials in the instance metadata. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and instance metadata at the same time.
Store the database credentials in a configuration file in an encrypted Amazon S3 bucket. Use Amazon EventBridge (Amazon CloudWatch Events) rules to run a scheduled AWS Lambda function that updates the RDS credentials and the credentials in the configuration file at the same time. Use S3 Versioning to ensure the ability to fall back to previous values.
Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required permission to the EC2 role to grant access to the secret.
Store the database credentials as encrypted parameters in AWS Systems Manager Parameter Store. Turn on automatic rotation for the encrypted parameters. Attach the required permission to the EC2 role to grant access to the encrypted parameters.




Answer is Store the database credentials as a secret in AWS Secrets Manager. Turn on automatic rotation for the secret. Attach the required permission to the EC2 role to grant access to the secret.

AWS Secrets Manager is a service that enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. By storing the database credentials as a secret in Secrets Manager, you can ensure that they are not hardcoded in the application and that they are automatically rotated on a regular basis. To grant the EC2 instance access to the secret, you can attach the required permission to the EC2 role. This will allow the application to retrieve the secret from Secrets Manager as needed.
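As a rough sketch, the application code on the EC2 instance could fetch the credentials at runtime as shown below; the secret name and JSON key names are placeholders, and the instance role is assumed to have secretsmanager:GetSecretValue on the secret.

import json
import boto3

secrets = boto3.client("secretsmanager")

# The secret is stored in Secrets Manager with automatic rotation enabled.
secret = secrets.get_secret_value(SecretId="prod/webapp/rds-credentials")
creds = json.loads(secret["SecretString"])

db_user = creds["username"]
db_password = creds["password"]
# The application builds its database connection from these values at
# runtime instead of hardcoding them.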

Option A, storing the database credentials in the instance metadata and using a scheduled Lambda function to update them, is not practical: instance metadata is not a writable credential store, and the custom rotation logic would have to be built and maintained, adding operational overhead.

Option B, storing the database credentials in a configuration file in an encrypted S3 bucket, avoids hardcoding but still requires a custom Lambda function to rotate the credentials and keep the file in sync with RDS, which is far more operational overhead than Secrets Manager.

Option D, storing the database credentials as encrypted parameters in AWS Systems Manager Parameter Store, does not meet the rotation requirement with the least overhead: Parameter Store has no built-in automatic rotation, so rotation would have to be implemented with custom code.

Reference:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html

Question 19

A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day. The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.

What should a solutions architect do to meet these requirements?
Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises file data to FSx for Windows File Server. Reconfigure the workloads to use FSx for Windows File Server on AWS.
Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-premises workloads and the cloud workloads to use the S3 File Gateway.
Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to use either Amazon S3 directly or the S3 File Gateway. depending on each workload's location.
Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway.




Answer is Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway.

Amazon FSx File Gateway is a service that provides low latency and efficient access to Amazon FSx for Windows File Server shares from on-premises facilities. It helps eliminate on-premises file servers and consolidates all the data into AWS to take advantage of the scale and economics of cloud storage.

Option A does not include any on-premises component, so it cannot meet the requirement for low-latency access to the file data from on premises. Options B and C use Amazon S3, which does not natively present the Windows (SMB) file shares the workloads expect, so they would require significant changes to the existing file access patterns.

Reference:
https://aws.amazon.com/blogs/storage/accessing-your-file-workloads-from-on-premises-with-file-gateway/

Question 20

A hospital recently deployed a RESTful API with Amazon API Gateway and AWS Lambda. The hospital uses API Gateway and Lambda to upload reports that are in PDF format and JPEG format. The hospital needs to modify the Lambda code to identify protected health information (PHI) in the reports.

Which solution will meet these requirements with the LEAST operational overhead?
Use existing Python libraries to extract the text from the reports and to identify the PHI from the extracted text.
Use Amazon Textract to extract the text from the reports. Use Amazon SageMaker to identify the PHI from the extracted text.
Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.
Use Amazon Rekognition to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.




Answer is Use Amazon Textract to extract the text from the reports. Use Amazon Comprehend Medical to identify the PHI from the extracted text.

Using Amazon Textract to extract the text from the reports, and Amazon Comprehend Medical to identify the PHI from the extracted text, would be the most efficient solution as it would involve the least operational overhead. Textract is specifically designed for extracting text from documents, and Comprehend Medical is a fully managed service that can accurately identify PHI in medical text. This solution would require minimal maintenance and would not incur any additional costs beyond the usage fees for Textract and Comprehend Medical.
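A minimal boto3 sketch of this pipeline for a JPEG report follows; the bucket and object key are placeholders, PDFs would instead use Textract's asynchronous start_document_text_detection API, and very long text would need to be chunked before calling Comprehend Medical.

import boto3

textract = boto3.client("textract")
comprehend_medical = boto3.client("comprehendmedical")

# Extract text from an uploaded JPEG report (placeholder S3 location).
doc = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "hospital-reports", "Name": "report.jpg"}}
)
text = " ".join(
    block["Text"] for block in doc["Blocks"] if block["BlockType"] == "LINE"
)

# Flag protected health information (PHI) in the extracted text.
phi = comprehend_medical.detect_phi(Text=text)
for entity in phi["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 2))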

Option A: Using existing Python libraries to extract the text and identify the PHI from the text would require the hospital to maintain and update the libraries as needed. This would involve operational overhead in terms of keeping the libraries up to date and debugging any issues that may arise.

Option B: Using Amazon SageMaker to identify the PHI from the extracted text would involve additional operational overhead in terms of setting up and maintaining a SageMaker model, as well as potentially incurring additional costs for using SageMaker.

Option D: Using Amazon Rekognition to extract the text from the reports would not be effective, because Rekognition's text detection is designed for short text in images and does not support PDF documents, so it cannot reliably extract the full text of the reports.
