A solutions architect must design a highly available infrastructure for a website. The website is powered by Windows web servers that run on Amazon EC2 instances. The solutions architect must implement a solution that can mitigate a large-scale DDoS attack that originates from thousands of IP addresses. Downtime is not acceptable for the website.
Which actions should the solutions architect take to protect the website from such an attack? (Choose two.)
Use AWS Shield Advanced to stop the DDoS attack.
Configure Amazon GuardDuty to automatically block the attackers.
Configure the website to use Amazon CloudFront for both static and dynamic content.
Use an AWS Lambda function to automatically add attacker IP addresses to VPC network ACLs.
Use EC2 Spot Instances in an Auto Scaling group with a target tracking scaling policy that is set to 80% CPU utilization.
Answers are A (Use AWS Shield Advanced to stop the DDoS attack) and C (Configure the website to use Amazon CloudFront for both static and dynamic content).
Option A.
AWS Shield Advanced provides always-on detection and automatic inline mitigation for resources such as Amazon EC2 instances, Elastic Load Balancing, Amazon CloudFront distributions, and Amazon Route 53. By using AWS Shield Advanced, the solutions architect can help protect the website from large-scale DDoS attacks.
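As a rough illustration, the boto3 sketch below registers a resource with Shield Advanced; the protection name and distribution ARN are hypothetical placeholders, and an active Shield Advanced subscription is assumed.

```python
import boto3

shield = boto3.client("shield")

# With a Shield Advanced subscription in place, add the CloudFront
# distribution that fronts the website as a protected resource.
# The ARN below is a hypothetical placeholder.
shield.create_protection(
    Name="website-cloudfront-protection",
    ResourceArn="arn:aws:cloudfront::123456789012:distribution/E1EXAMPLE",
)
```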
Option C.
CloudFront is a content delivery network (CDN) that integrates with other Amazon Web Services products, such as Amazon S3 and Amazon EC2, to deliver content to users with low latency and high data transfer speeds. By using CloudFront, the solutions architect can distribute the website's content across multiple edge locations, which can help absorb the impact of a DDoS attack and reduce the risk of downtime for the website.
Option B. While GuardDuty can detect and report potential malicious activity, it is not designed to block traffic or mitigate DDoS attacks.
Option D. Network ACLs support only a limited number of rules, so adding thousands of attacker IP addresses is impractical, and they are not designed to absorb high-volume DDoS traffic.
Option E. Spot Instances are a cost-optimization choice that can be interrupted at short notice, so they cannot guarantee the availability the website requires, and scaling on CPU utilization does not mitigate a DDoS attack the way Shield Advanced and CloudFront do.
A company is preparing to deploy a new serverless workload. A solutions architect must use the principle of least privilege to configure permissions that will be used to run an AWS Lambda function. An Amazon EventBridge (Amazon CloudWatch Events) rule will invoke the function.
Which solution meets these requirements?
Add an execution role to the function with lambda:InvokeFunction as the action and * as the principal.
Add an execution role to the function with lambda:InvokeFunction as the action and Service: lambda.amazonaws.com as the principal.
Add a resource-based policy to the function with lambda:* as the action and Service: events.amazonaws.com as the principal.
Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service: events.amazonaws.com as the principal.
Answer is Add a resource-based policy to the function with lambda:InvokeFunction as the action and Service: events.amazonaws.com as the principal.
The principle of least privilege requires that permissions are granted only to the minimum necessary to perform a task. In this case, the Lambda function needs to be able to be invoked by Amazon EventBridge (Amazon CloudWatch Events). To meet these requirements, you can add a resource-based policy to the function that allows the InvokeFunction action to be performed by the Service: events.amazonaws.com principal. This will allow Amazon EventBridge to invoke the function, but will not grant any additional permissions to the function.
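As a rough sketch of what this looks like in practice, the boto3 call below adds the resource-based policy statement to the function; the function name, statement ID, and rule ARN are hypothetical placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Grant EventBridge permission to invoke the function (least privilege):
# only the lambda:InvokeFunction action, only for the events.amazonaws.com
# service principal, optionally scoped to a single rule via SourceArn.
lambda_client.add_permission(
    FunctionName="my-serverless-function",
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:123456789012:rule/my-schedule-rule",
)
```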
Option A is incorrect because an execution role controls what the function itself can do, not who can invoke it, and granting lambda:InvokeFunction to any principal (*) would allow any entity to invoke the function, which goes beyond the minimum permissions needed.
Option B is incorrect for the same reason: an execution role is the wrong mechanism for granting invoke permissions, and lambda.amazonaws.com is the Lambda service principal, not Amazon EventBridge, so it would not allow the EventBridge rule to invoke the function.
Option C is incorrect because it grants the lambda:* action to the Service: events.amazonaws.com principal, which would allow Amazon EventBridge to perform any action on the function and goes beyond the minimum permissions needed.
A company needs to store data in Amazon S3 and must prevent the data from being changed. The company wants new objects that are uploaded to Amazon S3 to remain unchangeable for a nonspecific amount of time until the company decides to modify the objects. Only specific users in the company's AWS account can have the ability to delete the objects.
What should a solutions architect do to meet these requirements?
Create an S3 Glacier vault. Apply a write-once, read-many (WORM) vault lock policy to the objects.
Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Set a retention period of 100 years. Use governance mode as the S3 bucket’s default retention mode for new objects.
Create an S3 bucket. Use AWS CloudTrail to track any S3 API events that modify the objects. Upon notification, restore the modified objects from any backup versions that the company has.
Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold permission to the IAM policies of users who need to delete the objects.
Answer is Create an S3 bucket with S3 Object Lock enabled. Enable versioning. Add a legal hold to the objects. Add the s3:PutObjectLegalHold permission to the IAM policies of users who need to delete the objects.
The Object Lock legal hold operation enables you to place a legal hold on an object version. Like setting a retention period, a legal hold prevents an object version from being overwritten or deleted. However, a legal hold doesn't have an associated retention period and remains in effect until removed.
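A minimal boto3 sketch of placing and later removing a legal hold is shown below; the bucket and key names are hypothetical, and the bucket is assumed to have versioning and Object Lock enabled.

```python
import boto3

s3 = boto3.client("s3")

# Place a legal hold on an object; it remains protected until the hold is removed.
s3.put_object_legal_hold(
    Bucket="example-records-bucket",
    Key="reports/2023-01-01.csv",
    LegalHold={"Status": "ON"},
)

# Later, a user with the s3:PutObjectLegalHold permission lifts the hold
# so the object can be modified or deleted.
s3.put_object_legal_hold(
    Bucket="example-records-bucket",
    Key="reports/2023-01-01.csv",
    LegalHold={"Status": "OFF"},
)
```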
A medical records company is hosting an application on Amazon EC2 instances. The application processes customer data files that are stored on Amazon S3. The EC2 instances are hosted in public subnets. The EC2 instances access Amazon S3 over the internet, but they do not require any other network access.
A new requirement mandates that the network traffic for file transfers take a private route and not be sent over the internet.
Which change to the network architecture should a solutions architect recommend to meet this requirement?
Create a NAT gateway. Configure the route table for the public subnets to send traffic to Amazon S3 through the NAT gateway.
Configure the security group for the EC2 instances to restrict outbound traffic so that only traffic to the S3 prefix list is permitted.
Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the route table for the private subnets.
Remove the internet gateway from the VPC. Set up an AWS Direct Connect connection, and route traffic to Amazon S3 over the Direct Connect connection.
Answer is Move the EC2 instances to private subnets. Create a VPC endpoint for Amazon S3, and link the endpoint to the route table for the private subnets.
To meet the new requirement of transferring files over a private route, the EC2 instances should be moved to private subnets, which do not have direct access to the internet. This ensures that the traffic for file transfers does not go over the internet.
To enable the EC2 instances to access Amazon S3, a gateway VPC endpoint for Amazon S3 can be created. VPC endpoints allow resources within a VPC to communicate with supported AWS services without the traffic being sent over the internet. By adding the gateway endpoint to the route table for the private subnets, the EC2 instances can access Amazon S3 over a private connection within the VPC.
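A minimal boto3 sketch of creating the gateway endpoint and attaching it to the private subnets' route table might look like the following; the VPC ID, route table ID, and Region are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a gateway VPC endpoint for S3 and associate it with the route
# table used by the private subnets.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0def5678"],
)
```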
Option A (Create a NAT gateway) would not work, as a NAT gateway is used to allow resources in private subnets to access the internet, while the requirement is to prevent traffic from going over the internet.
Option B (Configure the security group for the EC2 instances to restrict outbound traffic) would not achieve the goal of routing traffic over a private connection, as the traffic would still be sent over the internet.
Option D (Remove the internet gateway from the VPC and set up an AWS Direct Connect connection) would not be necessary, as the requirement can be met by simply creating a VPC endpoint for Amazon S3 and routing traffic through it.
A company is running an online transaction processing (OLTP) workload on AWS. This workload uses an unencrypted Amazon RDS DB instance in a Multi-AZ deployment. Daily database snapshots are taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always encrypted moving forward?
Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted snapshot.
Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to it. Enable encryption on the DB instance.
Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the encrypted snapshot to an existing DB instance.
Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS Key Management Service (AWS KMS) managed keys (SSE-KMS).
Answer is Encrypt a copy of the latest DB snapshot. Replace existing DB instance by restoring the encrypted snapshot.
You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's created. However, you can add encryption to an unencrypted DB instance by creating a snapshot of your DB instance, and then creating an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot to get an encrypted copy of your original DB instance.
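The sequence below is a rough boto3 sketch of that approach; the snapshot identifiers, KMS key alias, and DB instance identifier are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Step 1: create an encrypted copy of the latest unencrypted snapshot.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-daily-2024-01-01",
    TargetDBSnapshotIdentifier="mydb-daily-2024-01-01-encrypted",
    KmsKeyId="alias/rds-encryption-key",
)

# Step 2: restore a new, encrypted DB instance from the encrypted snapshot,
# then repoint the application and retire the original unencrypted instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-daily-2024-01-01-encrypted",
    MultiAZ=True,
)
```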
A company wants to build a scalable key management infrastructure to support developers who need to encrypt data in their applications.
What should a solutions architect do to reduce the operational burden?
Use multi-factor authentication (MFA) to protect the encryption keys.
Use AWS Key Management Service (AWS KMS) to protect the encryption keys.
Use AWS Certificate Manager (ACM) to create, store, and assign the encryption keys.
Use an IAM policy to limit the scope of users who have access permissions to protect the encryption keys.
Answer is Use AWS Key Management Service (AWS KMS) to protect the encryption keys.
To reduce the operational burden, the solutions architect should use AWS Key Management Service (AWS KMS) to protect the encryption keys.
AWS KMS is a fully managed service that makes it easy to create and manage encryption keys. It allows developers to encrypt and decrypt data in their applications, and it handles the underlying key management tasks, such as key generation, key rotation, and key deletion. This helps reduce the operational burden associated with key management.
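As a small illustration, the boto3 sketch below encrypts and decrypts a payload with a customer managed key; the key alias is a hypothetical placeholder.

```python
import boto3

kms = boto3.client("kms")

# Encrypt a small payload with a customer managed KMS key. KMS stores and
# manages the key material, so the application never handles raw keys.
ciphertext = kms.encrypt(
    KeyId="alias/app-data-key",
    Plaintext=b"sensitive customer record",
)["CiphertextBlob"]

# Decrypt it again; KMS identifies the key from the ciphertext metadata.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
```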
Option A (MFA), option C (ACM), and option D (IAM policy) are not directly related to reducing the operational burden of key management. While these options may provide additional security measures or access controls, they do not specifically address the scalability and management aspects of a key management infrastructure. AWS KMS is designed to simplify the key management process and is the most suitable option for reducing the operational burden in this scenario.
A company is developing a file-sharing application that will use an Amazon S3 bucket for storage. The company wants to serve all the files through an Amazon CloudFront distribution. The company does not want the files to be accessible through direct navigation to the S3 URL.
What should a solutions architect do to meet these requirements?
Write individual policies for each S3 bucket to grant read permission for only CloudFront access.
Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to CloudFront.
Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the target S3 bucket as the Amazon Resource Name (ARN).
Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the OAI has read permission.
Answer is Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the S3 bucket permissions so that only the OAI has read permission.
To meet the requirements of serving files through CloudFront while restricting direct access to the S3 bucket URL, the recommended approach is to use an origin access identity (OAI). By creating an OAI and assigning it to the CloudFront distribution, you can control access to the S3 bucket.
This setup ensures that the files stored in the S3 bucket are only accessible through CloudFront and not directly through the S3 bucket URL. Requests made directly to the S3 URL will be blocked.
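A minimal sketch of the bucket-policy side of this setup, applied with boto3, might look like the following; the bucket name and OAI ID are hypothetical placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Bucket policy that grants read access only to the CloudFront OAI,
# so objects cannot be fetched directly via the S3 URL.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-file-share-bucket/*",
        }
    ],
}

s3.put_bucket_policy(
    Bucket="example-file-share-bucket",
    Policy=json.dumps(policy),
)
```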
Option A suggests writing individual policies for each S3 bucket, which can be cumbersome and difficult to manage, especially if there are multiple buckets involved.
Option B suggests creating an IAM user and assigning it to CloudFront, but this does not address restricting direct access to the S3 bucket URL.
Option C suggests writing an S3 bucket policy with the CloudFront distribution ID as the Principal, but a distribution ID is not a valid principal in a bucket policy, so this does not prevent direct access to the S3 bucket URL.
A company runs workloads on AWS. The company needs to connect to a service from an external provider. The service is hosted in the provider's VPC. According to the company’s security team, the connectivity must be private and must be restricted to the target service. The connection must be initiated only from the company’s VPC.
Which solution will meet these requirements?
Create a VPC peering connection between the company's VPC and the provider's VPC. Update the route table to connect to the target service.
Ask the provider to create a virtual private gateway in its VPC. Use AWS PrivateLink to connect to the target service.
Create a NAT gateway in a public subnet of the company’s VPC. Update the route table to connect to the target service.
Ask the provider to create a VPC endpoint for the target service. Use AWS PrivateLink to connect to the target service.
Answer is Ask the provider to create a VPC endpoint for the target service. Use AWS PrivateLink to connect to the target service.
By asking the provider to expose the target service through AWS PrivateLink (a VPC endpoint service), the company can create an interface VPC endpoint in its own VPC to reach the service. This enables the company to access the service privately and securely over the AWS network, without requiring a NAT gateway, VPN, or AWS Direct Connect, and it restricts connectivity to the target service only, as the company's security team requires.
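On the company's side, the connection is made by creating an interface VPC endpoint that points at the provider's endpoint service. The boto3 sketch below assumes hypothetical VPC, subnet, security group, and endpoint service identifiers.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an interface VPC endpoint in the company's VPC that connects to the
# provider's PrivateLink endpoint service. Traffic stays on the AWS network
# and is limited to this one service, initiated only from the company's VPC.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
    SecurityGroupIds=["sg-0ccc3333"],
)
```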
Option A is incorrect because a VPC peering connection does not meet the security requirement: it allows communication between all resources in both VPCs instead of restricting connectivity to the target service.
Option B is incorrect because a virtual private gateway is used to terminate VPN connections, not to publish a service over AWS PrivateLink, so it does not provide the required private, service-specific connectivity.
Option C is incorrect because a NAT gateway would send the traffic to the target service over the internet, which does not meet the requirement for private connectivity.
A company collects data for temperature, humidity, and atmospheric pressure in cities across multiple continents. The average volume of data that the company collects from each site daily is 500 GB. Each site has a high-speed Internet connection.
The company wants to aggregate the data from all these global sites as quickly as possible in a single Amazon S3 bucket. The solution must minimize operational complexity.
Which solution meets these requirements?
Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
Upload the data from each site to an S3 bucket in the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket. Then remove the data from the origin S3 bucket.
Schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region. Use S3 Cross-Region Replication to copy objects to the destination S3 bucket.
Upload the data from each site to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. At regular intervals, take an EBS snapshot and copy it to the Region that contains the destination S3 bucket. Restore the EBS volume in that Region.
Answer is Turn on S3 Transfer Acceleration on the destination S3 bucket. Use multipart uploads to directly upload site data to the destination S3 bucket.
Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet. S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion and speeds that can affect transfers, and logically shortens the distance to S3 for remote applications.
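A rough boto3 sketch of this setup is shown below: Transfer Acceleration is enabled on the destination bucket, and each site uploads through the accelerate endpoint with managed multipart uploads. The bucket name, file name, and thresholds are hypothetical placeholders.

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Enable Transfer Acceleration on the destination bucket (one-time setup).
s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="global-telemetry-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Client that routes requests through the s3-accelerate endpoint.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# upload_file performs multipart uploads automatically above the threshold.
s3_accel.upload_file(
    Filename="site-data-2024-01-01.tar.gz",
    Bucket="global-telemetry-bucket",
    Key="site-a/2024-01-01.tar.gz",
    Config=TransferConfig(multipart_threshold=64 * 1024 * 1024, max_concurrency=10),
)
```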
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?
Create a gateway VPC endpoint to the S3 bucket.
Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
Create an instance profile on Amazon EC2 to allow S3 access.
Create an Amazon API Gateway API with a private link to access the S3 endpoint.
Answer is Create a gateway VPC endpoint to the S3 bucket.
Keywords:
- EC2 in VPC
- EC2 instance needs to access the S3 bucket without connectivity to the internet
A: Correct - A gateway VPC endpoint lets the instance reach the S3 bucket privately, at no additional cost (see the sketch after this list).
B: Incorrect - An interface VPC endpoint for CloudWatch Logs would keep the EC2-to-CloudWatch traffic private, but log data exported from CloudWatch Logs to S3 can take up to 12 hours to become available, and the requirement is simply for the EC2 instance to reach S3.
C: Incorrect - An instance profile only grants IAM permissions to access S3; it does not provide a private network path from the EC2 instance to S3.
D: Incorrect - API Gateway acts as a proxy that receives requests from clients and forwards them to backends such as AWS Lambda, Amazon EC2, Elastic Load Balancing (Application or Classic Load Balancers), Amazon DynamoDB, Amazon Kinesis, or any publicly available HTTPS-based endpoint; it is not a way to provide private connectivity to S3.
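As a rough sketch, the boto3 call below creates the gateway endpoint with an endpoint policy that limits access to the log bucket; the VPC ID, route table ID, Region, and bucket name are hypothetical placeholders.

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Endpoint policy restricting the gateway endpoint to the log bucket only.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-log-bucket",
                "arn:aws:s3:::example-log-bucket/*",
            ],
        }
    ],
}

# Create the gateway endpoint and attach it to the instance's route table.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0def5678"],
    PolicyDocument=json.dumps(endpoint_policy),
)
```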