SysOps Associate - Practice Test Study Notes
EXAM 3
To set up a serverless video transcoding workflow, you first create an Amazon S3 bucket and associate it with a Lambda trigger that submits a transcoding job using AWS Elemental MediaConvert when a new file is ingested. The transcoding job specifies video packaging settings like HLS and DASH along with the bitrates that are expected from the transcoded output. The transcoded outputs are stored in another Amazon S3 bucket. Finally, the videos are delivered securely by Amazon CloudFront, which restricts access to the Amazon S3 bucket by using an origin access identity.
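As a rough sketch of the job-submission step the Lambda function would perform (the endpoint, role ARN, and settings file below are placeholders; MediaConvert requires an account-specific endpoint):
# One-time step: discover your account-specific MediaConvert endpoint
aws mediaconvert describe-endpoints
# Submit a transcoding job; my-job-settings.json would define the HLS/DASH outputs and bitrates
aws mediaconvert create-job --endpoint-url https://abcd1234.mediaconvert.us-east-1.amazonaws.com --role arn:aws:iam::123456789012:role/MediaConvertRole --settings file://my-job-settings.json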
Mappings is the correct answer. The optional Mappings section matches a key to a corresponding set of named values. For example, if you want to set values based on a region, you can create a mapping that uses the region name as a key and contains the values you want to specify for each specific region. You use the Fn::FindInMap intrinsic function to retrieve values in a map.
Conditions is incorrect since this section is used to include statements that define when a resource is created or when a property is defined.
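As a minimal template sketch of the Mappings section described above, in YAML (the region keys and AMI IDs are illustrative placeholders):
Mappings:
  RegionMap:
    us-east-1:
      AMI: ami-0ff8a91507f77f867
    us-west-2:
      AMI: ami-0bdb828fd58c52235
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      # Fn::FindInMap looks up the AMI for the region the stack is launched in
      ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]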
The amount of request traffic being sent to the ELB is causing the surge queue to fill up relatively quickly. The SurgeQueueLength metric tells us the total number of requests or connections that are pending routing to a healthy instance. Additional requests or connections are rejected when the queue is full. The SpilloverCount metric then becomes helpful here because it tells us the total number of requests that were rejected when the surge queue becomes full. From these two metrics, we can therefore estimate how many more instances we need to spin up.
HTTPCode_ELB_4XX and HTTPCode_ELB_5XX are incorrect. These metrics tell you the number of HTTP 4XX client error codes and HTTP 5XX server error codes generated by the load balancer, respectively. If the load balancer itself were generating these errors, your users would have experienced them right from the start; in this scenario, however, your users are experiencing request timeouts.
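A quick way to inspect these two metrics from the CLI (the load balancer name and time window are placeholders):
aws cloudwatch get-metric-statistics --namespace AWS/ELB --metric-name SurgeQueueLength --dimensions Name=LoadBalancerName,Value=my-load-balancer --start-time 2021-06-01T00:00:00Z --end-time 2021-06-01T01:00:00Z --period 60 --statistics Maximum
# Repeat with --metric-name SpilloverCount --statistics Sum to count rejected requests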
Amazon EBS emits notifications based on Amazon CloudWatch Events for a variety of snapshot and encryption status changes. With CloudWatch Events, you can establish rules that trigger programmatic actions in response to a change in snapshot or encryption key state. For example, when a snapshot is created, you can trigger an AWS Lambda function to share the completed snapshot with another account or copy it to another region for disaster-recovery purposes. Hence, the correct answers are integrating CloudWatch Events with EBS and setting up Lambda functions to copy the snapshots to another region.
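A sketch of wiring this up with the CLI (the rule and function names are hypothetical):
# Match successful createSnapshot events emitted by Amazon EBS
aws events put-rule --name ebs-snapshot-created --event-pattern '{"source":["aws.ec2"],"detail-type":["EBS Snapshot Notification"],"detail":{"event":["createSnapshot"],"result":["succeeded"]}}'
# Point the rule at a Lambda function that copies the snapshot to another region
aws events put-targets --rule ebs-snapshot-created --targets Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:CopySnapshotToDR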
AWS CloudFormation StackSets extends the functionality of stacks by enabling you to create, update, or delete stacks across multiple accounts and regions with a single operation. Using an administrator account, you define and manage an AWS CloudFormation template, and use the template as the basis for provisioning stacks into selected target accounts across specified regions.
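From the administrator account, a StackSets deployment might look like this (the stack set name, account IDs, and regions are placeholders):
aws cloudformation create-stack-set --stack-set-name baseline --template-body file://template.yaml
# Provision stack instances into two target accounts across two regions in one operation
aws cloudformation create-stack-instances --stack-set-name baseline --accounts 111111111111 222222222222 --regions us-east-1 eu-west-1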
You can configure Amazon Redshift to automatically copy snapshots (automated or manual) for a cluster to another region. When a snapshot is created in the cluster’s primary region, it will be copied to a secondary region; these are known respectively as the source region and destination region.
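Enabling this from the CLI is a single call (the cluster name, destination region, and retention period are placeholders):
aws redshift enable-snapshot-copy --cluster-identifier my-cluster --destination-region us-west-2 --retention-period 7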
If you need to remove a file from CloudFront edge caches before it expires, you can do one of the following:
Invalidate the file from edge caches. The next time a viewer requests the file, CloudFront returns to the origin to fetch the latest version of the file.
Use file versioning to serve a different version of the file that has a different name.
You can invalidate most types of content that are served by a web distribution, but you cannot invalidate media files in the Microsoft Smooth Streaming format when you have enabled Smooth Streaming for the corresponding cache behavior. In addition, you cannot invalidate objects that are served by an RTMP distribution.
Manually removing the photo from the CloudFront servers by using the AWS CLI is incorrect because you cannot manually remove a file in CloudFront using the AWS CLI. You can only invalidate the objects using the aws cloudfront create-invalidation command.
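For example, invalidating a single cached photo might look like this (the distribution ID and path are placeholders):
aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE --paths "/images/photo.jpg"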
You manage access in AWS by creating policies and attaching them to IAM identities or AWS resources. A policy is an object in AWS that, when associated with an entity or resource, defines their permissions. AWS evaluates these policies when a principal, such as a user, makes a request. Permissions in the policies determine whether the request is allowed or denied. Hence, the correct answer is to attach identity-based policies to your users and resource-based policies to your AWS resources.
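As a sketch of both halves (the user name, bucket name, and policy file are placeholders):
# Identity-based: attach a managed policy to an IAM user
aws iam attach-user-policy --user-name dev-user --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# Resource-based: attach a bucket policy to an S3 bucket
aws s3api put-bucket-policy --bucket my-bucket --policy file://bucket-policy.json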
Enabling Multi-Factor Authentication for each user is incorrect since MFA just provides an extra level of security that you can apply to your AWS environment. You can also use AWS MFA together with Amazon S3 secure delete for additional protection of your S3 stored versions.
In the scenario, there are three main parts of the architecture that you must implement in order for it to work as designed:
An Auto Scaling group to manage EC2 instances for the purposes of processing messages from an SQS queue.
A custom metric to send to Amazon CloudWatch that measures the number of messages in the queue per EC2 instance in the Auto Scaling group. You can use the ApproximateNumberOfMessages attribute of the SQS get-queue-attributes command (see the sketch after this list).
A target tracking policy that configures your Auto Scaling group to scale based on the custom metric and a set target value. CloudWatch alarms invoke the scaling policy.
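A minimal sketch of publishing the backlog-per-instance custom metric, assuming at least one running instance (the queue URL, group name, and namespace are placeholders):
MSGS=$(aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue --attribute-names ApproximateNumberOfMessages --query 'Attributes.ApproximateNumberOfMessages' --output text)
INSTANCES=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg --query 'length(AutoScalingGroups[0].Instances)' --output text)
# Publish the backlog per instance; the target tracking policy keeps this near its target value
aws cloudwatch put-metric-data --namespace MyApp --metric-name BacklogPerInstance --value $((MSGS / INSTANCES))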
ApproximateNumberOfMessagesDelayed is incorrect because this attribute returns the approximate number of messages in the queue that are delayed and not available for reading immediately, which can happen when the queue is configured as a delay queue or when a message has been sent with a delay parameter. It does not approximate the number of messages available for retrieval from the SQS queue.
QueueArn is incorrect because this attribute simply returns the Amazon resource name (ARN) of the queue and not the approximate number of messages available for retrieval from the SQS queue.
ApproximateNumberOfMessagesNotVisible is incorrect because this attribute returns the approximate number of messages that are in flight. Messages are considered to be in flight if they have been sent to a client but have not yet been deleted or have not yet reached the end of their visibility window. It does not return the approximate number of messages available for retrieval from the SQS queue.
Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon RDS and your AWS solutions. You should collect monitoring data from all of the parts of your AWS solution so that you can more easily debug a multi-point failure if one occurs.
In this scenario, you can use the FreeStorageSpace Amazon CloudWatch metric to monitor the available storage space for an RDS DB instance.
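Pulling this metric from the CLI might look like this (the DB instance identifier and time window are placeholders):
# FreeStorageSpace is reported in bytes
aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name FreeStorageSpace --dimensions Name=DBInstanceIdentifier,Value=my-db-instance --start-time 2021-06-01T00:00:00Z --end-time 2021-06-02T00:00:00Z --period 300 --statistics Average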
BinLogDiskUsage is incorrect because the BinLogDiskUsage metric tracks the amount of disk space occupied by binary logs on the master. This only applies to MySQL read replicas.
FreeableMemory is incorrect because the FreeableMemory metric tracks the amount of available random access memory and not the available storage space.
DiskQueueDepth is incorrect because the DiskQueueDepth metric just provides the number of outstanding IOs (read/write requests) waiting to access the disk and not the available storage space.
With step scaling policies, you can specify the number of seconds that it takes for a newly launched instance to warm up. Until its specified warm-up time has expired, an instance is not counted toward the aggregated metrics of the Auto Scaling group. While scaling out, AWS also does not consider instances that are warming up as part of the current capacity of the group.
Therefore, multiple alarm breaches that fall in the range of the same step adjustment result in a single scaling activity. This ensures that AWS doesn't add more instances than you need.
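A sketch of a step scaling policy with a warm-up period (the group name, bounds, and warm-up seconds are placeholders):
aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg --policy-name cpu-step-scale-out --policy-type StepScaling --adjustment-type ChangeInCapacity --estimated-instance-warmup 300 --step-adjustments MetricIntervalLowerBound=0,MetricIntervalUpperBound=20,ScalingAdjustment=1 MetricIntervalLowerBound=20,ScalingAdjustment=2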
EXAM 2
The options that say, "creating custom workflows or use pre-defined workflows maintained by AWS," "receiving notifications about automation tasks and workflows by using CloudWatch Events" and "monitoring automation progress and execution details in the Systems Manager console" are correct because these are the automation capabilities of Systems Manager that you can perform on EC2 instances.
Designing Automation documents that are securely tied to the user and cannot be shared with others is incorrect because you can create best practices for resource management in Automation documents and easily share the documents across AWS Regions and groups. You can also constrain the allowed values for the parameters the document accepts.
Setting synchronized EC2 instance restart times even without proper user access to some instances is incorrect because access to Systems Manager requires credentials. Those credentials must have permissions to access AWS resources for different tasks. You can have valid credentials to authenticate your requests but unless you have permissions, you cannot create or access Systems Manager resources.
Allowing unlimited concurrent automation executions without duration limits is incorrect because Systems Manager has service limits on concurrently executing automations (25) and on the maximum duration an automation execution can run (12 hours).
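For reference, starting an automation workflow with one of the pre-defined AWS documents might look like this (the instance ID is a placeholder):
aws ssm start-automation-execution --document-name AWS-RestartEC2Instance --parameters "InstanceId=i-0123456789abcdef0"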
Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications compared with EBS-backed EC2 instances.
If you use a Linux AMI with kernel version 4.4 or later and use all the SSD-based instance store volumes available to your instance, you can get over 100,000 random IOPS (at a 4 KB block size), depending on the instance type. For example, an i3.4xlarge instance can provide 825,000 random read IOPS and 360,000 write IOPS.
AWS Config sends notifications for the following events:
Configuration item change for a resource.
Configuration history for a resource was delivered for your account.
Configuration snapshot for recorded resources was started and delivered for your account.
Compliance state of your resources and whether they are compliant with your rules.
Evaluation started for a rule against your resources.
AWS Config failed to deliver the notification to your account.
Going to the compliance portion of the AWS website and getting all the required details is the correct answer. AWS provides a page that lists compliance certifications and attestations as assessed by a third-party, independent auditor. The page also lists the laws and regulations with which AWS complies.
AWS allows you to create a subdomain that uses Route 53 as the DNS service, without migrating the parent domain, at minimal cost. The steps for performing this can be found in the reference.
You don't need to register a new domain in Route 53 just for the purpose of creating the subdomains.
The option that says, "Provision EC2 servers with elastic IPs attached to them, and use them to host the new webpages. The use Route 53 A records to point to the elastic IPs, and create NS records to direct subdomain queries" is too tedious to do. The school only needs a host for their subdomains and there are more efficient ways to solve the problem.
The Amazon CloudWatch Monitoring Scripts for Amazon Elastic Compute Cloud (Amazon EC2) Linux-based instances demonstrate how to produce and consume Amazon CloudWatch custom metrics. These sample Perl scripts comprise a fully functional example that reports memory, swap, and disk space utilization metrics for a Linux instance.
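Assuming the scripts are installed on the instance and IAM permissions are in place, a typical invocation per the AWS documentation is:
# Report memory, swap, and disk utilization as custom CloudWatch metrics
./mon-put-instance-data.pl --mem-util --swap-util --disk-space-util --disk-path=/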
You can aggregate the metrics for AWS resources across multiple resources. Amazon CloudWatch cannot aggregate data across Regions; metrics are completely separate between Regions. For example, you can aggregate statistics for your EC2 instances that have detailed monitoring enabled. Instances that use basic monitoring are not included. Therefore, you must enable detailed monitoring (at an additional charge), which provides data in 1-minute periods. The correct answers are the options that say, "Set up a CloudWatch dashboard. Add a widget for each region which contains the aggregated CPU utilization for all EC2 instances that are running in a specific region" and "Enable detailed monitoring for all EC2 instances."
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/GetSingleMetricAllDimensions.html
The HTTPCode_Backend_5XX response metric is caused by a server error response sent from the registered instances; hence, HTTPCode_Backend_5XX is correct. To troubleshoot, you can view the access logs or the error logs on your instances to determine the cause, and send requests directly to an instance (bypassing the load balancer) to view the responses.
HTTPCode_Backend_2XX is incorrect as this metric indicates a normal, successful response from the registered instances.
HTTPCode_Backend_3XX is incorrect as this metric indicates a redirect response sent from the registered instances.
HTTPCode_Backend_4XX is incorrect as this metric indicates a client error response sent from the registered instances.
The Cost Optimization Monitor can help you generate reports that provide insight into service usage and costs as you deploy and operate cloud architecture. The solution ingests the detailed billing reports available in the AWS Billing and Cost Management console, which provide estimated costs to help you monitor and forecast monthly charges. You can analyze this information to optimize your infrastructure and maximize your return on investment using elasticity. The solution uses Amazon Elasticsearch Service and leverages its built-in support for Kibana, enabling customers to visualize their first batch of data as soon as it's processed.
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination.
Flow logs can help you with a number of tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn helps you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.
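Creating a flow log that publishes to CloudWatch Logs might look like this (the VPC ID, log group name, and role ARN are placeholders):
aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-0123456789abcdef0 --traffic-type ALL --log-group-name my-vpc-flow-logs --deliver-logs-permission-arn arn:aws:iam::123456789012:role/FlowLogsRole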
You can create a CloudWatch alarm that watches a single metric. The alarm performs one or more actions based on the value of the metric relative to a threshold over a number of time periods. The action can be an Amazon EC2 action, an Amazon EC2 Auto Scaling action, or a notification sent to an Amazon SNS topic. You can also add alarms to CloudWatch dashboards and monitor them visually. When an alarm is on a dashboard, it turns red when it is in the ALARM state, making it easier for you to monitor its status proactively.
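A sketch of such an alarm with an SNS notification action (the instance ID, topic ARN, and threshold are placeholders):
# Alarm when average CPU exceeds 80% for two consecutive 5-minute periods
aws cloudwatch put-metric-alarm --alarm-name cpu-high --namespace AWS/EC2 --metric-name CPUUtilization --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --statistic Average --period 300 --evaluation-periods 2 --threshold 80 --comparison-operator GreaterThanThreshold --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts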
Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS Account.
Using AWS Config Security Checks to monitor and assess changes in the configurations of AWS resources is incorrect - AWS Config is a fully managed service that enables you to assess, audit, and evaluate the configurations of your AWS resources. This option is incorrect since it does not give you recommendations on what to check, unlike Trusted Advisor.
Using Amazon Inspector Checks to evaluate whether your assessment targets (your collection of AWS resources) have potential security issues that you need to address is also incorrect - Amazon Inspector assessment targets can consist only of EC2 instances that run on a number of supported operating systems. Therefore, it won't be able to assess your other resources.
AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.
The dashboard displays relevant and timely information to help you manage events in progress, and provides proactive notification to help you plan for scheduled activities. With Personal Health Dashboard, alerts are triggered by changes in the health of AWS resources, giving you event visibility, and guidance to help quickly diagnose and resolve issues.
Simply using a CloudWatch Dashboard to automatically check the status of underlying hardware that hosts your AWS resources and sending alerts for any outages is incorrect because CloudWatch only monitors the health of the resources that you own based on certain metrics but it does not check the underlying hardware that hosts the AWS resources.
AWS Config enables continuous monitoring of your AWS resources, making it simple to assess, audit, and record resource configurations and changes. AWS Config does this through the use of rules that define the desired configuration state of your AWS resources. AWS Config provides a number of AWS managed rules that address a wide range of security concerns such as checking if you encrypted your Amazon Elastic Block Store (Amazon EBS) volumes, tagged your resources appropriately, and enabled multi-factor authentication (MFA) for root accounts.
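Enabling one of these managed rules from the CLI might look like this (the rule name is a placeholder; ENCRYPTED_VOLUMES is the managed-rule identifier for the EBS encryption check):
aws configservice put-config-rule --config-rule '{"ConfigRuleName":"ebs-volumes-encrypted","Source":{"Owner":"AWS","SourceIdentifier":"ENCRYPTED_VOLUMES"}}'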
The option that says, "AWS notifies customers when systems need to be brought offline to perform regular maintenance and system patching" is incorrect because AWS does not require systems to be brought offline to perform regular maintenance and system patching. AWS’ own maintenance and system patching generally do not impact customers. Maintenance of instances themselves is controlled by the customer.
If you have an HTTPS listener, you deployed an SSL server certificate on your load balancer when you created the listener. Each certificate comes with a validity period. You must ensure that you renew or replace the certificate before its validity period ends. You can replace the certificate deployed on your load balancer with a certificate provided by ACM or a certificate uploaded to IAM.
To replace an SSL certificate with a certificate uploaded to IAM:
Use the get-server-certificate command to get the ARN of the certificate:
aws iam get-server-certificate --server-certificate-name my-new-certificate
Use the set-load-balancer-listener-ssl-certificate command to set the certificate. For example:
aws elb set-load-balancer-listener-ssl-certificate --load-balancer-name my-load-balancer --load-balancer-port 443 --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/my-new-certificate
The following option:
Use the aws acm renew-certificate command to request a new certificate and get its ARN.
Add the new certificate to the load balancer using the set-load-balancer-listener-ssl-certificate command with the ARN of the certificate as a parameter.
is incorrect because you don't need to use the aws acm renew-certificate command since a new certificate has already been generated. Requesting another one may result in duplicate certificates.
Amazon Inspector enables you to analyze the behavior of your AWS resources and helps you to identify potential security issues. You can create an assessment template and launch a security assessment run of this target. During the assessment run, the network, file system, and process activity within the specified target are monitored, and a wide set of activity and configuration data is collected. The collected data is correlated, analyzed, and compared to a set of security rules specified in the assessment template. A completed assessment run produces a list of findings - potential security problems of various severity. Since you are assessing EC2 configurations, Inspector is the correct option.
The m1.small instance type has low network performance due to its size. To fix this issue, you can use a larger EC2 instance type such as m1.medium, m1.large, or m1.xlarge. Alternatively, you can launch a new EC2 instance from the current generation of instance types and avoid using those in the previous generation.
Enabling Enhanced Networking is incorrect because the m1.small instance type does not support Enhanced Networking.
An Amazon EBS–optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance. Therefore, the correct answer is if the instance type of your instance does not support EBS optimization, change your instance type to one that supports it.
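For a supported instance type, EBS optimization can be toggled while the instance is stopped (the instance ID is a placeholder):
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --ebs-optimized
aws ec2 start-instances --instance-ids i-0123456789abcdef0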
Templates include several major sections. The Resources section is the only required section. Some sections in a template can be in any order. However, as you build your template, it can be helpful to use the logical order shown in the following list because values in one section might refer to values from a previous section.
Format Version (optional)
The AWS CloudFormation template version that the template conforms to. The template format version is not the same as the API or WSDL version. The template format version can change independently of the API and WSDL versions.
When you need to make changes to a stack's settings or change its resources, you update the stack instead of deleting it and creating a new stack. For example, if you have a stack with an EC2 instance, you can update the stack to change the instance's AMI ID. When you update a stack, you submit changes, such as new input parameter values or an updated template. AWS CloudFormation compares the changes you submit with the current state of your stack and updates only the changed resources.
AWS CloudFormation provides two methods for updating stacks:
Direct update
Creating and executing change sets.
When you directly update a stack, you submit changes and AWS CloudFormation immediately deploys them. Use direct updates when you want to quickly deploy your updates.
With change sets, you can preview the changes AWS CloudFormation will make to your stack, and then decide whether to apply those changes. Change sets are JSON-formatted documents that summarize the changes AWS CloudFormation will make to a stack. Use change sets when you want to ensure that AWS CloudFormation doesn't make unintentional changes or when you want to consider several options. For example, you can use a change set to verify that AWS CloudFormation won't replace your stack's database instances during an update.
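A change set workflow from the CLI might look like this (the stack, change set, and template names are placeholders):
aws cloudformation create-change-set --stack-name my-stack --change-set-name my-update --template-body file://updated-template.yaml
# Preview what would change (e.g., whether a resource would be replaced) before applying
aws cloudformation describe-change-set --stack-name my-stack --change-set-name my-update
aws cloudformation execute-change-set --stack-name my-stack --change-set-name my-update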
You can use Amazon CloudWatch Events to detect and react to changes in the status of AWS Personal Health Dashboard (AWS Health) events. Then, based on the rules that you create, CloudWatch Events invokes one or more target actions when an event matches the values that you specify in a rule.
Depending on the type of event, you can send notifications, capture event information, take corrective action, initiate events, or take other actions. You can select the following types of targets when using CloudWatch Events as a part of your AWS Health workflow:
AWS Lambda functions
Kinesis streams
Amazon SQS queues
Built-in targets (CloudWatch alarm actions)
Amazon SNS topics
The following are some use cases:
Use a Lambda function to pass a notification to a Slack channel when an event occurs.
Send custom text or SMS notifications with Amazon SNS when an AWS Health event happens by using Lambda and CloudWatch Events.
Therefore, the correct answer is to use AWS Health Events with CloudWatch Events and a Lambda function to send a notification to a Slack channel when an event occurs.
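A sketch of the event wiring (the rule and function names are hypothetical; the Slack posting logic lives inside the Lambda function):
# Match all AWS Health events for this account
aws events put-rule --name aws-health-events --event-pattern '{"source":["aws.health"]}'
aws events put-targets --rule aws-health-events --targets Id=1,Arn=arn:aws:lambda:us-east-1:123456789012:function:PostHealthEventToSlack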
Amazon Cognito identity pools assign your authenticated users a set of temporary, limited privilege credentials to access your AWS resources. The permissions for each user are controlled through IAM roles that you create. You can define rules to choose the role for each user based on claims in the user's ID token. You can define a default role for authenticated users. You can also define a separate IAM role with limited permissions for guest users who are not authenticated.
EXAM 1
You can create a load balancer with several security features; see the ELB cheat sheet for the full list: https://tutorialsdojo.com/aws-cheat-sheet-aws-elastic-load-balancing-elb/
If you want to grant a user the ability to pass any of an approved set of roles to the Amazon EC2 service upon launching an instance, you need to have these three elements:
An IAM permissions policy attached to the role that determines what the role can do.
A trust policy for the role that allows the service to assume the role.
An IAM permissions policy attached to the IAM user that allows the user to pass only those roles that are approved.
A trust policy is defined for the role that allows the service to assume the role. For example, you could attach such a trust policy to the role with the UpdateAssumeRolePolicy action; a sketch follows. This situation requires a trust policy that allows Amazon EC2 to use the role and the permissions attached to the role.
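A minimal sketch (the role name is a placeholder; ec2.amazonaws.com is the standard service principal for EC2):
aws iam update-assume-role-policy --role-name MyEC2AppRole --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'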
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html
https://tutorialsdojo.com/aws-cheat-sheet-aws-identity-and-access-management-iam/
Hence, the following are true statements about ELB listener configuration:
-If the front-end connection uses TCP or SSL, then your back-end connections can use either TCP or SSL.
-If the front-end connection uses HTTP or HTTPS, then your back-end connections can use either HTTP or HTTPS.
-When you use HTTP (layer 7) for both front-end and back-end connections, your load balancer parses the headers in the request and terminates the connection before sending the request to the back-end instances.
The option that says, "When you use TCP (layer 4) for both front-end and back-end connections, your load balancer forwards the request to the back-end instances with modified headers" is incorrect because when you use TCP (layer 4) for both front-end and back-end connections, your load balancer forwards the request to the back-end instances WITHOUT modifying the headers.
Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify as compressed files. You can disable access logging at any time. Note that access logs are delivered to S3, not to CloudWatch.
Note that you cannot encrypt S3 objects via bucket policies; a bucket policy can only require encryption (for example, by denying uploads that lack an encryption header), not perform it.
Disk Read Operations and Disk Write Operations are both incorrect because the Disk Read and Write Operations metrics are only applicable for instance store-backed AMI instances. Take note that the scenario described EBS-backed instances and not instance store-backed instances.
Amazon Simple Queue Service (SQS) is a fast, reliable, scalable, fully managed message queuing service that lets you easily decouple the components of a cloud application. You can use Amazon SQS to transmit any volume of data without losing messages or requiring other services to be always available. Using SQS in this scenario, the data can be temporarily held in the SQS queue while the EC2 instances pull messages from the queue and process them. This setup is highly scalable.
To allow administrators to easily manage tags on provisioned products, AWS Service Catalog provides a TagOption library. A TagOption is a key-value pair managed in AWS Service Catalog. It is not an AWS tag, but serves as a template for creating an AWS tag based on the TagOption.
The TagOption library makes it easier to enforce the following:
A consistent taxonomy
Proper tagging of AWS Service Catalog resources
Defined, user-selectable options for allowed tags
Administrators can associate TagOptions with portfolios and products. During a product launch (provisioning), AWS Service Catalog aggregates the associated portfolio and product TagOptions, and applies them to the provisioned product.
With the TagOption library, you can deactivate TagOptions and retain their associations to portfolios or products, and reactivate them when you need them. This approach not only helps maintain library integrity, it also allows you to manage TagOptions that might be used intermittently, or only under special circumstances. You manage TagOptions with the AWS Service Catalog console or the TagOption library API.
Hence, the correct answer is to use the AWS Service Catalog TagOption Library.
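Creating a TagOption and associating it with a portfolio might look like this (the key, value, and resource IDs are placeholders):
aws servicecatalog create-tag-option --key CostCenter --value 1234
# Associate the returned TagOption ID with a portfolio (or a product)
aws servicecatalog associate-tag-option-with-resource --resource-id port-abc123example --tag-option-id tag-xyz789example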
Manually tagging resources using the AWS Tag Editor is incorrect because it entails a lot of administrative overhead, which makes it unsuitable for this scenario.
The option that says, "Create a Lambda function that uses the GetResources and TagResources actions of the Resource Groups Tagging API to identify the untagged resources and afterwards, tag them automatically" is incorrect because the GetResources API only returns all the tagged or previously tagged resources that are located in the specified region for the AWS account. It is not primarily used to get the list of all resources which doesn't have any tags.
You can use an AWS Direct Connect gateway to connect your AWS Direct Connect connection over a private virtual interface to one or more VPCs in your account that are located in the same or different regions. You associate a Direct Connect gateway with the virtual private gateway for the VPC, and then create a private virtual interface for your AWS Direct Connect connection to the Direct Connect gateway. You can attach multiple private virtual interfaces to your Direct Connect gateway. A Direct Connect gateway is a globally available resource. You can create the Direct Connect gateway in any public region and access it from all other public regions.
Establishing a Direct Connect connection between the VPC in US East (N. Virginia) region to the on-premises data center in Chicago and then establishing another Direct Connect connection between the VPC in US West (N. California) region to the on-premises data center is incorrect because establishing two separate Direct Connect connections is expensive and hence, not a cost-effective option. It is better to establish a Direct Connect gateway instead which just uses one Direct Connect connection to integrate the 2 VPCs and the on-premises data center.
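A sketch of the gateway setup (the gateway name, ASN, and gateway IDs are placeholders):
aws directconnect create-direct-connect-gateway --direct-connect-gateway-name my-dx-gateway --amazon-side-asn 64512
# Associate the gateway with the virtual private gateway of each VPC
aws directconnect create-direct-connect-gateway-association --direct-connect-gateway-id <dx-gateway-id> --virtual-gateway-id vgw-0123456789abcdef0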
Which of the following services does not help you capture the monitoring information about the ELB activity?
ELB health checks are used to determine whether the EC2 instances behind the ELB are healthy or not. But it does not help in capturing the monitoring information for the ELB itself.
ELB Access logs is incorrect because this enables you to capture detailed information about requests sent to your load balancer and store these logs to S3.
CloudWatch metrics is incorrect because ELB publishes data points to Amazon CloudWatch for the load balancers and its backend instances. CloudWatch enables one to receive statistics about those data points.
ELB API calls with CloudTrail is incorrect because CloudTrail allows you to log all API calls, including those made to Elastic Load Balancing.
For an EC2 instance to be able to communicate with the Internet over IPv6, the following configuration should be done in the VPC (a CLI sketch follows the list):
-Associate a /56 IPv6 CIDR block with the VPC. The size of the IPv6 CIDR block is fixed (/56) and the range of IPv6 addresses is automatically allocated from Amazon's pool of IPv6 addresses (you cannot select the range yourself).
-Create a subnet with a /64 IPv6 CIDR block in your VPC. The size of the IPv6 CIDR block is fixed (/64).
-Create a custom route table, and associate it with your subnet, so that traffic can flow between the subnet and the Internet gateway.
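A CLI sketch of these steps (the VPC, subnet, route table, and gateway IDs are placeholders; the subnet's /64 comes from the /56 allocated to the VPC):
aws ec2 associate-vpc-cidr-block --vpc-id vpc-0123456789abcdef0 --amazon-provided-ipv6-cidr-block
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.0.0/24 --ipv6-cidr-block 2001:db8:1234:1a00::/64
# Route all IPv6 traffic to the Internet gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-ipv6-cidr-block ::/0 --gateway-id igw-0123456789abcdef0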
References:
https://docs.aws.amazon.com/vpc/latest/userguide/get-started-ipv6.html
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario1.html
Setting up a VPN connection to AWS requires you to have both Virtual Private Gateway and Customer Gateway available. To enable instances in your VPC to reach your customer gateway, you must configure your route table to include the routes used by your VPN connection and point them to your virtual private gateway. You can enable route propagation for your route table to automatically propagate those routes to the table for you.
Specifying the private Autonomous System Number (ASN) for the Amazon side of the gateway is the correct answer because this step is optional: when you create a virtual private gateway, you may, but are not required to, specify a private ASN for the Amazon side of the gateway. The ASN must be different from the BGP ASN specified for the customer gateway.
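For illustration, the optional ASN is supplied at creation time like this (the ASN value is a placeholder):
aws ec2 create-vpn-gateway --type ipsec.1 --amazon-side-asn 64620
# If --amazon-side-asn is omitted, AWS uses the default ASN (64512)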
The option that says, "Create an EC2 security group for the servers and a DB security group for the MySQL database. Configure to only allow inbound traffic from the EC2 security group to the DB security group" is incorrect because DB security groups are for database instances not in a VPC, but in an EC2-Classic platform.
Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console, or consume the Enhanced Monitoring JSON output from CloudWatch Logs in a monitoring system of your choice. By default, Enhanced Monitoring metrics are stored in the CloudWatch Logs for 30 days. To modify the amount of time the metrics are stored in the CloudWatch Logs, change the retention for the RDSOSMetrics log group in the CloudWatch console.
Take note that there are certain differences between CloudWatch and Enhanced Monitoring Metrics. CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance, and Enhanced Monitoring gathers its metrics from an agent on the instance. As a result, you might find differences between the measurements, because the hypervisor layer performs a small amount of work.
The differences can be greater if your DB instances use smaller instance classes, because then there are likely more virtual machines (VMs) that are managed by the hypervisor layer on a single physical instance. Enhanced Monitoring metrics are useful when you want to see how different processes or threads on a DB instance use the CPU.
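Enhanced Monitoring is enabled per instance by setting a monitoring interval and role (the DB instance identifier and role ARN are placeholders):
aws rds modify-db-instance --db-instance-identifier my-db-instance --monitoring-interval 15 --monitoring-role-arn arn:aws:iam::123456789012:role/rds-monitoring-role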

Viewing the CPU% and MEM% metrics which are readily available in the Amazon RDS console is incorrect because the CPU% and MEM% metrics are not readily available in the Amazon RDS console, contrary to what is stated in this option.
Writing a shell script that collects and publishes custom metrics to CloudWatch which tracks the real-time CPU Utilization of the RDS instance is incorrect because although you can use Amazon CloudWatch Logs and a CloudWatch dashboard to monitor CPU utilization, CloudWatch alone cannot give you the specific percentage of the CPU bandwidth and total memory consumed by each database process. The data provided by CloudWatch is not as detailed as the Enhanced Monitoring feature in RDS. Take note as well that you do not have direct access to the underlying servers of your RDS database instance, unlike with your EC2 instances, where you can install a CloudWatch agent or a custom script to gather CPU and memory utilization.
Setting up a monitoring system which uses Amazon CloudWatch to track the CPU Utilization of your database is incorrect because although you can use Amazon CloudWatch to monitor the CPU Utilization of your database instance, it does not provide the percentage of the CPU bandwidth and total memory consumed by each database process in your RDS instance. Take note that CloudWatch gathers metrics about CPU utilization from the hypervisor for a DB instance while RDS Enhanced Monitoring gathers its metrics from an agent on the instance.
As the SysOps Administrator, you are responsible for provisioning the required resources and access policies of your cloud infrastructure. You received a request from one of the development teams to be able to create, overwrite, and delete any object in an S3 bucket as well as to write additional ACL for the applicable bucket.
Which bucket ACL permission should you grant?
Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. Each bucket and object has an ACL attached to it as a subresource. It defines which AWS accounts or groups are granted access and the type of access. When a request is received against a resource, Amazon S3 checks the corresponding ACL to verify that the requester has the necessary access permissions.
The WRITE ACL permission allows the grantee to create, overwrite, and delete any object in the bucket, and WRITE_ACP allows the grantee to write the ACL for the applicable bucket.
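Granting both permissions from the CLI might look like this (the bucket name and canonical user ID are placeholders; note that put-bucket-acl replaces the entire existing ACL):
aws s3api put-bucket-acl --bucket my-bucket --grant-write id=CANONICAL_USER_ID --grant-write-acp id=CANONICAL_USER_ID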
READ is incorrect because it only allows the grantee to list the objects in the bucket.
WRITE is incorrect because this permission alone does not allow the grantee to write the ACL for the applicable bucket.
READ and READ_ACP are incorrect because this will only allow the grantee to list the objects and read the bucket ACL.
FULL_CONTROL is incorrect because although this permission provides the required WRITE and WRITE_ACP permissions for the team, it also grants the READ and READ_ACP permissions, which are not required. Always follow the principle of least privilege when granting access.
You are serving static content from your S3 bucket and using CloudFront service to speed up content delivery to your users across the globe. For your next business cycle, you plan on improving these services to attract more customers and provide them a better user experience. Therefore, you will be needing more information regarding the activities that are occurring in your AWS resources to plan your next step. AWS CloudFront includes a variety of reports you can use to see usage and activity that is occurring in your CloudFront distributions.
How will you utilize these reports for this matter? (Choose 3)
Explanation
The following are the correct answers:
Use Popular Objects Report to determine what objects are frequently being accessed, and get statistics on those objects.
Use Usage Reports to know the number of HTTP and HTTPS requests that CloudFront responds to from edge locations in selected regions.
Use Viewers Reports to determine the locations of the viewers that access your content most frequently.
These statements are correct because each one matches the report to its purpose, as described above.
The option that says: Use Cache Statistics Reports to display a list of the 25 website domains that originated the most HTTP and HTTPS requests for objects that CloudFront is distributing for a specified distribution is incorrect because this is actually provided by the Top Referrers Reports and not the Cache Statistics Reports.
The option that says: Use Top Referrers Reports to get statistics on viewer requests grouped by HTTP status code is incorrect because this is actually provided by the Cache Statistics Reports and not the Top Referrers Reports.
The option that says: Use Usage Reports to learn about the different types of browsers that your users frequently use to access your content is incorrect because this is actually provided by the Viewers Reports and not the Usage Reports.
You have two On-Demand EC2 instances in your VPC which are launched in subnet Tango and subnet Delta respectively. You logged into the first instance and tried to ping the second instance but got no response.
Explanation
To allow traffic on two EC2 instances located on different subnets, you should properly configure their respective Security Groups as well as the Network ACL. Hence, the following options are the correct answers:
The second instance's security group does not allow inbound ICMP traffic.
The NACL on subnet Delta does not allow outbound ICMP traffic.
AWS provides two features that you can use to increase security in your VPC: security groups and network ACLs. Security groups control inbound and outbound traffic for your instances, and network ACLs control inbound and outbound traffic for your subnets. In most cases, security groups can meet your needs; however, you can also use network ACLs if you want an additional layer of security for your VPC.
The option that says, "The subnet Tango has no target route to subnet Delta in the route table" is incorrect because every subnet that you create is automatically associated with the main route table for the VPC. Hence, you don't need to define a route from subnet Tango to subnet Delta.
The option that says, "There is no IAM role provisioned to the first instance" is incorrect because an IAM role is not required to allow communication between two EC2 instances.
The option that says, "There is no Internet Gateway attached to the VPC" is incorrect because there is no requirement to allow the EC2 instances to connect to the Internet, hence, the use of Internet Gateway (IGW) is unnecessary and totally unrelated with the issue.
The option that says, "The subnet Delta is private while subnet Tango is public which is why the two instances could not connect to each other" is incorrect because it does not matter whether the subnet is public or private as long as they both reside in one VPC. Take note that a public subnet basically means that it has a route to the Internet Gateway and a private subnet does not. Hence, the subnet type (public/private) is not related to the issue.