Sunday, June 25, 2017

AWS EIP Elastic IP Addresses

An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. 

Elastic IP Address Basics

The following are the basic characteristics of an Elastic IP address:
  • To use an Elastic IP address, you first allocate one to your account, and then associate it with your instance or a network interface.
  • When you associate an Elastic IP address with an instance or its primary network interface, the instance's public IPv4 address (if it had one) is released back into Amazon's pool of public IPv4 addresses. You cannot reuse a public IPv4 address. For more information, see Public IPv4 Addresses and External DNS Hostnames.
  • You can disassociate an Elastic IP address from a resource, and reassociate it with a different resource.
  • A disassociated Elastic IP address remains allocated to your account until you explicitly release it.
  • To ensure efficient use of Elastic IP addresses, we impose a small hourly charge if an Elastic IP address is not associated with a running instance, or if it is associated with a stopped instance or an unattached network interface. While your instance is running, you are not charged for one Elastic IP address associated with the instance, but you are charged for any additional Elastic IP addresses associated with the instance. For more information, see Amazon EC2 Pricing.
  • An Elastic IP address is for use in a specific region only. 
  • When you associate an Elastic IP address with an instance that previously had a public IPv4 address, the public DNS hostname of the instance changes to match the Elastic IP address. 
  • We resolve a public DNS hostname to the public IPv4 address or the Elastic IP address of the instance outside the network of the instance, and to the private IPv4 address of the instance from within the network of the instance.
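To make the basics above concrete, here is a minimal boto3 sketch of the full lifecycle: allocate, associate, disassociate, release. It is an illustration rather than a recipe; the region and the instance ID are placeholder assumptions.

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')  # an EIP is region-specific

    # Allocate an Elastic IP address for use in a VPC
    alloc = ec2.allocate_address(Domain='vpc')

    # Associate it with an instance (placeholder instance ID)
    ec2.associate_address(InstanceId='i-0123456789abcdef0',
                          AllocationId=alloc['AllocationId'])

    # Later: disassociate and release, to stop the hourly charge for an
    # allocated-but-unassociated address
    addr = ec2.describe_addresses(
        AllocationIds=[alloc['AllocationId']])['Addresses'][0]
    ec2.disassociate_address(AssociationId=addr['AssociationId'])
    ec2.release_address(AllocationId=alloc['AllocationId'])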

    Characteristic: Allocating an Elastic IP address
      EC2-Classic: When you allocate an Elastic IP address, it's for use in EC2-Classic; however, you can migrate an Elastic IP address to the EC2-VPC platform. For more information, see Migrating an Elastic IP Address from EC2-Classic to EC2-VPC.
      EC2-VPC: When you allocate an Elastic IP address, it's for use only in a VPC.

    Characteristic: Associating an Elastic IP address
      EC2-Classic: You associate an Elastic IP address with an instance.
      EC2-VPC: An Elastic IP address is a property of a network interface. You can associate an Elastic IP address with an instance by updating the network interface attached to the instance. For more information, see Elastic Network Interfaces.

    Characteristic: Reassociating an Elastic IP address
      EC2-Classic: If you try to associate an Elastic IP address that's already associated with another instance, the address is automatically associated with the new instance.
      EC2-VPC: If your account supports EC2-VPC only, and you try to associate an Elastic IP address that's already associated with another instance, the address is automatically associated with the new instance. If you're using a VPC in an EC2-Classic account, and you try to associate an Elastic IP address that's already associated with another instance, it succeeds only if you allowed reassociation.

    Characteristic: Associating an Elastic IP address with a target that has an existing Elastic IP address
      EC2-Classic: The existing Elastic IP address is disassociated from the instance, but remains allocated to your account.
      EC2-VPC: If your account supports EC2-VPC only, the existing Elastic IP address is disassociated from the instance, but remains allocated to your account. If you're using a VPC in an EC2-Classic account, you cannot associate an Elastic IP address with a network interface or instance that has an existing Elastic IP address.

    Characteristic: Stopping an instance
      EC2-Classic: If you stop an instance, its Elastic IP address is disassociated, and you must reassociate the Elastic IP address when you restart the instance.
      EC2-VPC: If you stop an instance, its Elastic IP address remains associated.

    Characteristic: Assigning multiple IP addresses
      EC2-Classic: Instances support only a single private IPv4 address and a corresponding Elastic IP address.
      EC2-VPC: Instances support multiple IPv4 addresses, and each one can have a corresponding Elastic IP address. For more information, see Multiple IP Addresses.

AWS QUESTIONS FROM WEB

    Question 1 (of 7): Amazon Glacier is designed for: (Choose 2 answers)

    • A. active database storage.
    • B. infrequently accessed data.
    • C. data archives.
    • D. frequently accessed data.
    • E. cached session data.
    Answer: B. infrequently accessed data. C. data archives.
    Think “cold storage” and the name Glacier makes a bit more sense. AWS includes a number of storage solutions, and to pass the exam you are expected to know the appropriate use of all of them.
    I picture them on the following scale:
    Instance (aka ephemeral, aka local) storage is a device, a bit like a RAM disk, physically attached to your server (your EC2 instance), and characteristically it gets completely wiped whenever the instance is stopped or terminated (it survives a simple OS reboot, but nothing more). Naturally this makes it suitable only for temporary data, nothing that needs to outlive the instance. You can store the Operating System on there if nothing important gets stored there after the instance is started (and bootstrapping completes). Micro-sized instance types (low-specification servers) don’t have ephemeral storage. Some larger, more expensive instance types come with SSD instance storage for higher performance.
    Elastic Block Store (EBS) is a service where you provision devices more akin to a hard disk that can be attached to one (and only one, at the time of writing) EC2 instance. They can be set to persist after an instance is terminated. They can be easily “snapshotted”, i.e. backed up in a way that lets you create a new identical device and attach it to the same or another EC2 instance, as the sketch below shows. One other thing to know about EBS is that you can pay extra for what is known as provisioned IOPS, which means guaranteed (and very high, if you like) disk read and write speeds.
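    To make that snapshot workflow concrete, here is a hedged boto3 sketch: back up a volume, then restore the snapshot as a new, identical device, optionally paying for provisioned IOPS. The volume ID, instance ID, Availability Zone and IOPS figure are placeholder assumptions.

        import boto3

        ec2 = boto3.client('ec2', region_name='us-east-1')

        # Snapshot an existing volume (placeholder volume ID)
        snap = ec2.create_snapshot(VolumeId='vol-0123456789abcdef0',
                                   Description='nightly backup')
        ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snap['SnapshotId']])

        # Create an identical volume from the snapshot, paying extra for
        # provisioned IOPS (io1) to guarantee read/write performance
        vol = ec2.create_volume(SnapshotId=snap['SnapshotId'],
                                AvailabilityZone='us-east-1a',
                                VolumeType='io1', Iops=1000)
        ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])

        # Attach it to the same or another EC2 instance
        ec2.attach_volume(VolumeId=vol['VolumeId'],
                          InstanceId='i-0123456789abcdef0', Device='/dev/sdf')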
    S3 is a cloud file storage service more akin to DropBox or Google Drive. It is possible to boot an EC2 instance from a volume backed by S3, but this is no longer recommended (EBS is preferable). S3 is instead for storing things like your EC2 server images (Amazon Machine Images, aka AMIs), static content (e.g. for a web site), input or output data files (as you would use an SFTP site), or anything else that you’d treat like a file.
    An S3 store is called a bucket; it lives in one specified region but has a globally unique name. S3 integrates extremely well with the CloudFront content distribution service, which caches content at a much more globally distributed set of edge locations (thus improving performance and saving bandwidth costs).
    Glacier comes next, as basically a variant on S3 for files you expect to view hardly ever or never again: for example old backups, or old data kept only for compliance purposes. Instead of a bucket, Glacier files are stored in a Vault. Instead of getting instant access to files, you have to make a retrieval request and wait a number of hours. S3 and Glacier play very nicely together because you can set up Lifecycles for S3 objects which cause them to be moved to Glacier after a certain trigger, e.g. a certain elapsed “expiry” time passing.
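    That lifecycle mechanism is easy to show in code. This boto3 sketch transitions objects to Glacier 90 days after creation; the bucket name, prefix and trigger period are assumptions made up for the example.

        import boto3

        s3 = boto3.client('s3')

        # Move objects under backups/ to Glacier 90 days after creation
        s3.put_bucket_lifecycle_configuration(
            Bucket='my-example-bucket',
            LifecycleConfiguration={'Rules': [{
                'ID': 'archive-old-backups',
                'Filter': {'Prefix': 'backups/'},
                'Status': 'Enabled',
                'Transitions': [{'Days': 90, 'StorageClass': 'GLACIER'}],
            }]})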
    Wrong answers: 
    A. active database storage.
    Obviously databases are written to regularly i.e. the polar (excuse the pun) opposite of Glacier.
    Amazon offers five different options for databases.
    RDS is the Relational Database Service. This allows Amazon to handle the database software for you (including licenses, replication, backups and more). You aren’t given access to any underlying EC2 servers; instead you simply connect to the database using your preferred method (e.g. JDBC). NB. currently this supports MySQL, Oracle, PostgreSQL and Microsoft SQL Server.
    SimpleDB is a non-relational database service that works in a similar way to RDS.
    Redshift is Amazon’s relational data warehouse solution capable of much larger (and efficient at large scale) storage.
    DynamoDB is Amazon’s NoSQL managed database service. For this storage Amazon apparently uses Solid State Drives for high performance.
    Finally of course, you can create servers with EC2 and install the database software yourself, working as you would in your own data centre. This is the only time you would need to consider which storage solution to use for a database. EBS would be most appropriate. Clearly Instance storage is a very risky option because the data does not persist when an instance is stopped or fails. S3 is inappropriate for databases, especially for Oracle, which can efficiently manage raw storage devices rather than writing files to a file system.
    D. frequently accessed data.
    Clearly this is the opposite of Glacier. If your data doesn’t need to persist, Instance storage would be the best choice for frequently accessed data. Otherwise EBS is the choice if your applications are reading and writing the data. S3 (plus CloudFront) is the option if end users access your data over the web.
    E. cached session data.
    ElastiCache is the AWS service that provides a Memcached- or Redis-compliant caching server for your applications to use. A typical scenario: your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer, and session data lives in the shared cache so that any instance can serve any user.
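    For illustration, standing up such a cache with boto3 might look like the following sketch; the cluster ID and node type are placeholder assumptions, and a single-node Redis cluster is the simplest possible shape.

        import boto3

        elasticache = boto3.client('elasticache', region_name='us-east-1')

        # A single-node Redis cluster for session data (placeholder names)
        elasticache.create_cache_cluster(
            CacheClusterId='session-cache',
            Engine='redis',
            CacheNodeType='cache.t2.micro',
            NumCacheNodes=1)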

    Question 2 (of 7): You configured ELB to perform health checks on your EC2 instances. If an instance fails to pass health checks, which statement will be true?

    • A. The instance is replaced automatically by the ELB.
    • B. The instance gets terminated automatically by the ELB.
    • C. The ELB stops sending traffic to the instance that failed its health check.
    • D. The instance gets quarantined by the ELB for root cause analysis.
    Answer: C. The ELB stops sending traffic to the instance that failed its health check.
    This question tests that you properly understand how auto-scaling works. If you don’t, you might take a guess that load balancers take the more helpful sounding option A, i.e. automatically replacing a failed server.
    The fact is, an elastic load balancer is still just a load balancer. Arguably when you ignore the elastic part, it is quite a simple load balancer in that (currently) it only supports round robin routing as opposed to anything more clever (perhaps balancing that takes into account the load on each instance).
    The elastic part just means that when new servers are added to an “auto-scaling group”, the load balancer recognises them and starts sending them traffic. In fact, to make answer A happen, you need the following:
    • A launch configuration. This tells AWS how to stand up a bootstrapped server that, once up, is ready to do work without any human intervention.
    • An auto-scaling group. This tells AWS where it can create servers (possibly subnets in different Availability Zones in one region (NB. subnets can’t span AZs), but not across multiple regions), which launch configuration to use, the minimum and maximum allowed servers in the group, and how to scale up and down (for example 1 at a time, 10% more, and various other options). With both of these configured, when an instance fails the health checks (presumably because it is down), it is the auto-scaling group that decides whether another server is now needed to compensate.
    Just to complete the story about auto scaling, it is worth mentioning the CloudWatch service. This is the name for the monitoring service in AWS. You can add custom checks and use these to trigger scaling policies to expand or contract your group of servers (and of course the ELB keeps up and routes traffic appropriately).
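    Putting the pieces together, here is a hedged boto3 sketch of a launch configuration plus an Auto Scaling group whose health checks are delegated to the ELB, which is what makes "replacement" actually happen. The AMI ID, load balancer name, subnets and sizes are placeholder assumptions.

        import boto3

        autoscaling = boto3.client('autoscaling', region_name='us-east-1')

        # Launch configuration: how to stand up a bootstrapped server
        autoscaling.create_launch_configuration(
            LaunchConfigurationName='web-lc',
            ImageId='ami-0123456789abcdef0',   # placeholder AMI
            InstanceType='m4.large')

        # Auto Scaling group: where servers may be created and how many.
        # HealthCheckType='ELB' makes the group replace instances that
        # fail the load balancer's health checks.
        autoscaling.create_auto_scaling_group(
            AutoScalingGroupName='web-asg',
            LaunchConfigurationName='web-lc',
            MinSize=2, MaxSize=10,
            LoadBalancerNames=['web-elb'],     # placeholder Classic ELB
            HealthCheckType='ELB',
            HealthCheckGracePeriod=300,
            # two subnets in different AZs (placeholders)
            VPCZoneIdentifier='subnet-aaaa1111,subnet-bbbb2222')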
    Wrong answers:
    A. The instance is replaced automatically by the ELB.
    As described above, you need an Auto Scaling group to handle replacements.
    B. The instance gets terminated automatically by the ELB.
    As discussed above, load balancers aren’t capable of manipulating EC2 like this.
    D. The instance gets quarantined by the ELB for root cause analysis.
    There is no concept of quarantining.

    Question 3 (of 7): You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly?

    • A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
    • B. Add the CloudFront account security group “amazon-cf/amazon-cf-sg” to the appropriate S3 bucket policy.
    • C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
    • D. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).
    Answer: A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
    An Origin Access Identity is a special user that you set up the CloudFront service to use when accessing your restricted content.
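    A hedged boto3 sketch of the two steps, creating the OAI and then granting it read access in the bucket policy, follows; the bucket name and caller reference are placeholder assumptions.

        import json
        import boto3

        cloudfront = boto3.client('cloudfront')
        s3 = boto3.client('s3')

        # Step 1: create the Origin Access Identity
        oai = cloudfront.create_cloud_front_origin_access_identity(
            CloudFrontOriginAccessIdentityConfig={
                'CallerReference': 'training-videos-oai',
                'Comment': 'restricted S3 content served via CloudFront'})
        oai_id = oai['CloudFrontOriginAccessIdentity']['Id']

        # Step 2: grant the OAI read access to the objects (placeholder bucket)
        policy = {
            'Version': '2012-10-17',
            'Statement': [{
                'Effect': 'Allow',
                'Principal': {'AWS': 'arn:aws:iam::cloudfront:user/'
                                     'CloudFront Origin Access Identity ' + oai_id},
                'Action': 's3:GetObject',
                'Resource': 'arn:aws:s3:::my-training-videos/*'}]}
        s3.put_bucket_policy(Bucket='my-training-videos',
                             Policy=json.dumps(policy))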
    Wrong Answers:
    B. Add the CloudFront account security group “amazon-cf/amazon-cf-sg” to the appropriate S3 bucket policy.
    The CloudFront OAI solution is more tightly integrated with S3 and you don’t need to know implementation level details like the actual user name as that gets handled under the covers by the service.
    C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
    IAM is the service for controlling who can do what within your AWS account. The fact is that an AWS account is so incredibly powerful that it would be far too dangerous to have many people in a company with full access to create servers, remove storage, and so on.
    IAM allows you to create fine-grained access to services. It doesn’t work down to the level suggested in this answer of specific objects: IAM could stop a user accessing S3 admin functions, but not specific objects.
    D. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).
    When configuring bucket policies, a Principal is the user, account or service to which a particular policy statement applies. For example, you could be listed as a principal so that you can be denied access to delete objects in an S3 bucket. A distribution ID is not a principal, so the terminology is misused here.

    Question 4 (of 7): Which of the following will occur when an EC2 instance in a VPC (Virtual Private Cloud) with an associated Elastic IP is stopped and started? (Choose 2 answers)

    • A. The Elastic IP will be dissociated from the instance
    • B. All data on instance-store devices will be lost
    • C. All data on EBS (Elastic Block Store) devices will be lost
    • D. The ENI (Elastic Network Interface) is detached
    • E. The underlying host for the instance is changed
    Answers: B. All data on instance-store devices will be lost
    (See storage explanations above)
    E. The underlying host for the instance is changed
    Not a great answer here. You are completely abstracted from underlying hosts, so you have no way of observing this. But by elimination, I picked it.
    Wrong Answers:
    A. The Elastic IP will be dissociated from the instance
    This is the opposite of the truth. Elastic IPs are sticky until re-assigned for a good reason (such as the instance has been terminated i.e. it is never coming back).
    C. All data on EBS (Elastic Block Store) devices will be lost
    EBS devices are independent of EC2 instances and by default outlive them (unless configured otherwise). All data on Instance storage, however, will be lost, as will the root (/dev/sda1) partition of instance store-backed (“S3-backed”) servers.
    D. The ENI (Elastic Network Interface) is detached
    As far as I know, just a silly answer!

    Question 5 (of 7): In the basic monitoring package for EC2, Amazon CloudWatch provides the following metrics:

    • A. web server visible metrics such as number failed transaction requests
    • B. operating system visible metrics such as memory utilization
    • C. database visible metrics such as number of connections
    • D. hypervisor visible metrics such as CPU utilization
    Answer: D. hypervisor visible metrics such as CPU utilization
    Amazon needs to know this anyway to provide IaaS, so it seems natural that they share it.
    Wrong Answers:
    A. web server visible metrics such as number failed transaction requests
    Too detailed for EC2 – Amazon don’t even want to know whether you have installed a web server.
    B. operating system visible metrics such as memory utilization
    Too detailed for EC2 – Amazon don’t want to interact with your operating system.
    C. database visible metrics such as number of connections
    Too detailed for EC2 – Amazon don’t even want to know whether you have installed a database. NB. the question states EC2 monitoring; RDS monitoring does include this.

    Question 6 (of 7): Which is an operational process performed by AWS for data security?

    • A. AES-256 encryption of data stored on any shared storage device
    • B. Decommissioning of storage devices using industry-standard practices
    • C. Background virus scans of EBS volumes and EBS snapshots
    • D. Replication of data across multiple AWS Regions
    • E. Secure wiping of EBS data when an EBS volume is un-mounted
    Answer: B. Decommissioning of storage devices using industry-standard practices
    Clearly there is no way you could do this yourself, so AWS takes care of it.
    Wrong Answers:
    A. AES-256 encryption of data stored on any shared storage device
    Encryption of storage devices (EBS) is your concern.
    C. Background virus scans of EBS volumes and EBS snapshots
    Too detailed for EC2 – Amazon don’t want to interact with your data.
    D. Replication of data across multiple AWS Regions
    No, you have to do this yourself.
    E. Secure wiping of EBS data when an EBS volume is un-mounted
    An un-mount doesn’t cause an EBS volume to be wiped.

    Question 7 (of 7): To protect S3 data from both accidental deletion and accidental overwriting, you should:

    • A. enable S3 versioning on the bucket
    • B. access S3 data using only signed URLs
    • C. disable S3 delete using an IAM bucket policy
    • D. enable S3 Reduced Redundancy Storage
    • E. enable Multi-Factor Authentication (MFA) protected access
    Answer: A. enable S3 versioning on the bucket
    As the name suggests, S3 versioning means that all versions of a file are kept and retrievable at a later date (by making a request to the bucket using the object key and the version number). The only charge for having it enabled comes from the extra storage you incur. When an object is deleted, it is hidden behind a delete marker, but earlier versions remain accessible, just not visible.
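    Enabling versioning is a one-call operation; a minimal boto3 sketch (placeholder bucket and key names) is below.

        import boto3

        s3 = boto3.client('s3')

        # Turn on versioning for the bucket (placeholder name)
        s3.put_bucket_versioning(
            Bucket='my-example-bucket',
            VersioningConfiguration={'Status': 'Enabled'})

        # After a delete, earlier versions (and the delete marker) are
        # still listed and can be fetched by version ID
        versions = s3.list_object_versions(Bucket='my-example-bucket',
                                           Prefix='report.csv')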
    Wrong Answers:
    B. access S3 data using only signed URLs
    Signed URLs are actually part of CloudFront, which as I mentioned earlier is the content distribution service. These protect content from unauthorised access; they do nothing against accidental deletion or overwriting.
    C. disable S3 delete using an IAM bucket policy
    No such thing as an IAM bucket policy.  There are IAM policies and there are Bucket policies.
    D. enable S3 Reduced Redundancy Storage
    Reduced Redundancy Storage (RRS) is a way of storing something on S3 with a lower durability, i.e. a lower assurance from Amazon that they won’t lose the data on your behalf. Obviously this lower standard of service comes at a lower price. RRS is designed for things that you store for convenience, e.g. software binaries, which you could recreate (or re-download) if they got deleted. So with this in mind, enabling RRS reduces the level of protection rather than increasing it. It is worth noting the incredible level of durability that standard S3 provides. Without RRS, durability is 11 nines (99.999999999%), which equates to:
    “If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so. This storage is designed in such a way that we can sustain the concurrent loss of data in two separate storage facilities.”
    With RRS, this drops to 4 nines (99.99%), which is still probably better than most IT departments can offer.
    E. enable Multi-Factor Authentication (MFA) protected access
    This answer is of little relevance. As I mentioned, accounts on AWS are incredibly powerful due to the logical nature of what they control. In the physical world it isn’t possible for someone to press a button and delete an entire data centre (servers, storage, backups and all). In AWS, you could press a few buttons and do that, not just in one data centre, but in every data centre you’ve used globally. So MFA is a mechanism for increasing security over the people accessing your AWS account; as I mentioned earlier, IAM is the mechanism for further restricting what authenticated people are authorised to do.
    Only in EC2-Classic will an EIP be disassociated on stop. If you start, stop, or reboot an EC2 instance in a VPC (including the default VPC), the EIP will remain associated with the instance.

AWS EBS Notes

  • Amazon Elastic Block Store (Amazon EBS) provides block-level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone.

    You can create EBS General Purpose SSD (gp2), Provisioned IOPS SSD (io1), Throughput Optimized HDD (st1), and Cold HDD (sc1) volumes up to 16 TiB in size. You can mount these volumes as devices on your Amazon EC2 instances. You can mount multiple volumes on the same instance, but each volume can be attached to only one instance at a time. You can dynamically change the configuration of a volume attached to an instance. For more information, see Creating an Amazon EBS Volume.
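    As a quick illustration of creating and attaching a volume with boto3 (the AZ, size and instance ID are placeholder assumptions):

        import boto3

        ec2 = boto3.client('ec2', region_name='us-east-1')

        # Create a 100 GiB General Purpose SSD (gp2) volume in the same
        # AZ as the target instance
        vol = ec2.create_volume(AvailabilityZone='us-east-1a',
                                Size=100, VolumeType='gp2')
        ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])

        # Attach it; each volume can be attached to only one instance at a time
        ec2.attach_volume(VolumeId=vol['VolumeId'],
                          InstanceId='i-0123456789abcdef0', Device='/dev/sdf')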

    When you create an encrypted EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. The encryption occurs on the servers that host EC2 instances, providing encryption of data in transit from EC2 instances to EBS storage.

    EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance.

    You can attach multiple volumes to the same instance within the limits specified by your AWS account. Your account has a limit on the number of EBS volumes that you can use, and the total storage available to you.

    EBS volumes behave like raw, unformatted block devices. You can create a file system on top of these volumes, or use them in any other way you would use a block device (like a hard drive). 

    You can create point-in-time snapshots of EBS volumes, which are persisted to Amazon S3. Snapshots protect data for long-term durability, and they can be used as the starting point for new EBS volumes. The same snapshot can be used to instantiate as many volumes as you wish. These snapshots can be copied across AWS regions.
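    Copying a snapshot to another region is done from the destination region; a minimal sketch (placeholder snapshot ID and regions) follows.

        import boto3

        # Call copy_snapshot from the destination region's client
        ec2_west = boto3.client('ec2', region_name='us-west-2')
        copy = ec2_west.copy_snapshot(
            SourceRegion='us-east-1',
            SourceSnapshotId='snap-0123456789abcdef0',  # placeholder
            Description='cross-region copy for disaster recovery')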

AWS EC2 NOTES



FROM AMAZON EC2 FAQs - Salient Points

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud.

Just as Amazon Simple Storage Service (Amazon S3) enables storage in the cloud, Amazon EC2 enables “compute” in the cloud. 


When you launch your Amazon EC2 instances you have the ability to store your root device data on Amazon EBS or the local instance store. By using Amazon EBS, data on the root device will persist independently from the lifetime of the instance. This enables you to stop and restart the instance at a subsequent time, which is similar to shutting down your laptop and restarting it when you need it again.
Alternatively, the local instance store only persists during the life of the instance. This is an inexpensive way to launch instances where data is not stored to the root device. For example, some customers use this option to run large web sites where each instance is a clone to handle web traffic.
An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. Your AMIs are your unit of deployment. You might have just one AMI or you might compose your system out of several building block AMIs (e.g., webservers, appservers, and databases). 
Amazon EC2 provides a number of tools to make creating an AMI easy. Once you create a custom AMI, you will need to bundle it. If you are bundling an image with a root device backed by Amazon EBS, you can simply use the bundle command in the AWS Management Console. If you are bundling an image with a boot partition on the instance store, then you will need to use the AMI Tools to upload it to Amazon S3.

Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing ends when the instance terminates, which could occur through a web services command, by running "shutdown -h", or through instance failure. When you stop an instance, we shut it down but don't charge hourly usage for a stopped instance, or data transfer fees, but we do charge for the storage for any Amazon EBS volumes.

Amazon EC2 instances are grouped into 5 families: 
General Purpose, Compute Optimized, Memory Optimized, GPU, and Storage Optimized instances. 
General Purpose Instances have memory to CPU ratios suitable for most general purpose applications and come with fixed performance (M4 and M3 instances) or burstable performance (T2); 
Compute Optimized instances (C4 and C3 instances) have proportionally more CPU resources than memory (RAM) and are well suited for scale out compute-intensive applications and High Performance Computing (HPC) workloads; 
Memory Optimized Instances (R3 and R4 instances) offer larger memory sizes for memory-intensive applications, including database and memory caching applications; 
GPU Compute instances (P2) take advantage of the parallel processing capabilities of NVIDIA Tesla GPUs for high performance parallel computing; GPU Graphics instances (G2) offer high-performance 3D graphics capabilities for applications using OpenGL and DirectX; 
Storage Optimized Instances include I3 and I2 instances that provide very high, low latency, I/O capacity using SSD-based local instance storage for I/O-intensive applications and D2, Dense-storage instances, that provide high storage density and sequential I/O performance for data warehousing, Hadoop and other data-intensive applications. 
When choosing instance types, you should consider the characteristics of your application with regards to resource utilization (i.e. CPU, Memory, Storage) and select the optimal instance family and instance size.

By default, all accounts are limited to 5 Elastic IP addresses per region.

If your applications benefit from high packet-per-second performance and/or low latency networking, Enhanced Networking will provide significantly improved performance, consistency of performance, and scalability. Amazon VPC allows us to deliver many advanced networking features that are not possible in EC2-Classic; Enhanced Networking is another example of a capability enabled by Amazon VPC.

The data stored on a local instance store will persist only as long as that instance is alive. However, data that is stored on an Amazon EBS volume will persist independently of the life of the instance. Therefore, we recommend that you use the local instance store for temporary data and, for data requiring a higher level of durability, we recommend using Amazon EBS volumes or backing up the data to Amazon S3. If you are using an Amazon EBS volume as a root partition, you will need to set the Delete On Terminate flag to "N" if you want your Amazon EBS volume to persist outside the life of the instance.
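Setting that flag after launch can be done with a single call; this boto3 sketch assumes a placeholder instance ID and the usual /dev/sda1 root device name.

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Keep the EBS root volume after the instance terminates
    ec2.modify_instance_attribute(
        InstanceId='i-0123456789abcdef0',
        BlockDeviceMappings=[{
            'DeviceName': '/dev/sda1',
            'Ebs': {'DeleteOnTermination': False}}])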

The EBS General Purpose (SSD) volumes are backed by the same technology found in EBS Provisioned IOPS (SSD) volumes. The EBS General Purpose (SSD) volume type is designed for 99.999% availability, and a broad range of use-cases such as boot volumes, small and medium size databases, and development and test environments. General Purpose (SSD) volumes deliver a ratio of 3 IOPS per GB, offer single digit millisecond latencies, and also have the ability to burst up to 3000 IOPS for short periods.

Customers can now choose between three EBS volume types to best meet the needs of their workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. General Purpose (SSD) is the new, SSD-backed, general purpose EBS volume type that we recommend as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium sized databases, development and test environments, and boot volumes. Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance, and are designed for I/O intensive applications such as large relational or NoSQL databases. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types. Magnetic volumes are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important.
 While you are able to attach multiple volumes to a single instance, attaching multiple instances to one volume is not supported at this time.
If you have an Auto Scaling group with running instances and you choose to delete the Auto Scaling group, the instances will be terminated and the Auto Scaling group will be deleted.

Elastic Load Balancing offers two types of load balancers that both feature high availability, automatic scaling, and robust security. These include the Classic Load Balancer that routes traffic based on either application or network level information, and the Application Load Balancer that routes traffic based on advanced application level information that includes the content of the request.
The Classic Load Balancer is ideal for simple load balancing of traffic across multiple EC2 instances, while the Application Load Balancer is ideal for applications needing advanced routing capabilities, microservices, and container-based architectures

Spot instances are complementary to On-Demand instances and Reserved Instances, providing another option for obtaining compute capacity.

A Spot fleet allows you to automatically bid on and manage multiple Spot instances that provide the lowest price per unit of capacity for your cluster or application, like a batch processing job, a Hadoop workflow, or an HPC grid computing job. 

If you have a relatively low throughput application or web site with an occasional need to consume significant compute cycles, we recommend using Micro instances.

CloudWatch reporting 100% CPU utilization is your signal that you should consider scaling – manually or via Auto Scaling – up to a larger instance type, or scaling out to multiple Micro instances.
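Wiring such a signal to an action might look like the hedged boto3 sketch below: a simple scaling policy plus a CloudWatch alarm that fires it. The group name and thresholds are placeholder assumptions.

    import boto3

    autoscaling = boto3.client('autoscaling', region_name='us-east-1')
    cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

    # Scaling policy: add one instance when triggered (placeholder group)
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName='web-asg',
        PolicyName='scale-out-on-cpu',
        AdjustmentType='ChangeInCapacity',
        ScalingAdjustment=1)

    # Alarm: fire when average CPU across the group stays high
    cloudwatch.put_metric_alarm(
        AlarmName='web-asg-high-cpu',
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': 'web-asg'}],
        Statistic='Average',
        Period=300,
        EvaluationPeriods=2,
        Threshold=90.0,
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=[policy['PolicyARN']])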

Compute-optimized instances are designed for applications that benefit from high compute power. These applications include high performance front-end fleets, web servers, batch processing, distributed analytics, high performance science and engineering applications, ad serving, MMO gaming, and video encoding.


Accelerated Computing Instance family is a family of instances which use hardware accelerators, or co-processors, to perform some functions, such as floating point number calculation and graphics processing, more efficiently than is possible in software running on CPUs. Amazon EC2 provides two types of Accelerated Computing Instances – GPU Compute Instances for general-purpose computing and GPU Graphics Instances for graphics intensive applications.
Use of Cluster Compute and Cluster GPU Instances differs from other Amazon EC2 instance types in two ways.
First, Cluster Compute and Cluster GPU Instances use Hardware Virtual Machine (HVM) based virtualization and run only Amazon Machine Images (AMIs) based on HVM virtualization. Paravirtual Machine (PVM) based AMIs used with other Amazon EC2 instance types cannot be used with Cluster Compute or Cluster GPU Instances.
Second, in order to fully benefit from the available low latency, full bisection bandwidth between instances, Cluster Compute and Cluster GPU Instances must be launched into a cluster placement group through the Amazon EC2 API or AWS Management Console.
Amazon EC2 allows you to choose between Fixed Performance Instances (e.g. M3, C3, and R3) and Burstable Performance Instances (e.g. T2). 

Dense-storage instances are designed for workloads that require high sequential read and write access to very large data sets, such as Hadoop distributed computing, massively parallel processing data warehousing, and log processing applications. The Dense-storage instances offer the best price/GB-storage and price/disk-throughput across other EC2 instances.
Memory-optimized instances offer large memory size for memory intensive applications including in-memory applications, in-memory databases, in-memory analytics solutions, High Performance Computing (HPC), scientific computing, and other memory-intensive applications.

X1 instances are ideal for running in-memory databases like SAP HANA, big data processing engines like Apache Spark or Presto, and high performance computing (HPC) applications.

Our SLA guarantees a Monthly Uptime Percentage of at least 99.95% for Amazon EC2 and Amazon EBS within a Region.

Placement Groups: 
A placement group is a logical grouping of instances within a single Availability Zone. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. ... First, you create a placement group and then you launch multiple instances into the placement group.
Placement groups have the following limitations:
  • A placement group can't span multiple Availability Zones.
  • The name you specify for a placement group must be unique within your AWS account.
  • The following are the only instance types that you can use when you launch an instance into a placement group:
    • General purpose: m4.large | m4.xlarge | m4.2xlarge | m4.4xlarge | m4.10xlarge | m4.16xlarge
    • Compute optimized: c4.large | c4.xlarge | c4.2xlarge | c4.4xlarge | c4.8xlarge | c3.large | c3.xlarge | c3.2xlarge | c3.4xlarge | c3.8xlarge | cc2.8xlarge
    • Memory optimized: cr1.8xlarge | r3.large | r3.xlarge | r3.2xlarge | r3.4xlarge | r3.8xlarge | r4.large | r4.xlarge | r4.2xlarge | r4.4xlarge | r4.8xlarge | r4.16xlarge | x1.16xlarge | x1.32xlarge
    • Storage optimized: d2.xlarge | d2.2xlarge | d2.4xlarge | d2.8xlarge | hi1.4xlarge | hs1.8xlarge | i2.xlarge | i2.2xlarge | i2.4xlarge | i2.8xlarge | i3.large | i3.xlarge | i3.2xlarge | i3.4xlarge | i3.8xlarge | i3.16xlarge
    • Accelerated computing: cg1.4xlarge | f1.2xlarge | f1.16xlarge | g2.2xlarge | g2.8xlarge | p2.xlarge | p2.8xlarge | p2.16xlarge
  • The maximum network throughput speed of traffic between two instances in a placement group is limited by the slower of the two instances. For applications with high-throughput requirements, choose an instance type with 10 Gbps or 20 Gbps network connectivity. For more information about instance type network performance, see the Amazon EC2 Instance Types Matrix.
  • Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed. We recommend using the same instance type for all instances in a placement group.
  • You can't merge placement groups. Instead, you must terminate the instances in one placement group, and then relaunch those instances into the other placement group.
  • A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs. For more information about VPC peering connections, see the Amazon VPC Peering Guide.
  • You can't move an existing instance into a placement group. You can create an AMI from your existing instance, and then launch a new instance from the AMI into a placement group.
  • Reserved Instances provide a capacity reservation for EC2 instances in an Availability Zone. The capacity reservation can be used by instances in a placement group that are assigned to the same Availability Zone. However, it is not possible to explicitly reserve capacity for a placement group.
  • To ensure that network traffic remains within the placement group, members of the placement group must address each other via their private IPv4 addresses or IPv6 addresses (if applicable). If members address each other using their public IPv4 addresses, throughput drops to 5 Gbps or less.
  • Network traffic to and from resources outside the placement group is limited to 5 Gbps.
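Given the limitations above (create the group first, launch into it, supported instance types only), a minimal boto3 sketch looks like this; the group name, AMI ID and instance count are placeholder assumptions.

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Create the placement group first; existing instances can't be moved in
    ec2.create_placement_group(GroupName='hpc-group', Strategy='cluster')

    # Launch a batch of identical, supported instances into the group
    ec2.run_instances(
        ImageId='ami-0123456789abcdef0',   # placeholder HVM AMI
        InstanceType='c4.8xlarge',         # one of the supported types
        MinCount=4, MaxCount=4,
        Placement={'GroupName': 'hpc-group'})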

From Web:
  • If you forgot to assign a role at launch, the workaround is creating an AMI from your instance, launching a new instance from that AMI, and assigning a role to the new VM. EC2 instance roles default to 'None' when creating one. If a role has already been assigned to an instance, you can modify the policies assigned to that role after the fact. It's worth considering creating a new blank role when creating a single instance, since it will help you avoid this issue after making changes to your VM.
  • An Elastic IP address is for use in a specific region only.