Question
If I create a dedicated instance and it lands on a multi-die server, does it have to share the RAM bandwidth of its own CPU? What happens if a neighbor runs a memory-bandwidth-heavy task? Does it affect my instance?
How does this work on a shared instance?
What kind of RAM addressing mode is used? Only the instance's own NUMA node, or interleaved across all the memory sticks of the multi-die server?
Answer 1:
No Amazon EC2 instance shares anything. You will never be impacted by a "noisy neighbour".
Where resources are limited (e.g. RAM, network bandwidth, CPU), each instance is allocated a maximum limit. Resources are never over-allocated, so each instance will always have access to its full limit of RAM, network bandwidth, etc.
This applies to all types of instances. The difference with a dedicated instance is that only one AWS account will be using the host computer.
All resources are virtualized, so there is no indication of underlying hardware, such as RAM addressing modes.
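If you want to see what your particular instance reports, a minimal sketch is below, assuming a Linux guest; the paths are standard Linux sysfs/procfs locations, not anything AWS-specific.

```python
# Minimal sketch: inspect the topology and RDT-related CPU flags that the
# guest kernel reports. Assumes a Linux instance; these are standard sysfs
# and procfs locations, not AWS-specific interfaces.
from pathlib import Path

def numa_nodes():
    """NUMA node IDs the guest kernel exposes (often just node0 on
    virtualized instances, regardless of the host's socket/die count)."""
    return sorted(p.name for p in Path("/sys/devices/system/node").glob("node[0-9]*"))

def rdt_flags():
    """Intel RDT-related flags from /proc/cpuinfo, if the hypervisor passes
    them through (cat_l3 = CAT, cdp_l3 = CDP, mba = MBA, cqm_* = CMT/MBM)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return sorted(x for x in flags if x.startswith(("cat_", "cdp_", "cqm", "mba")))
    return []

if __name__ == "__main__":
    print("NUMA nodes visible to this instance:", numa_nodes())
    print("RDT flags visible to this instance: ", rdt_flags())
```

Smaller virtualized instance sizes typically report a single NUMA node; whether any of the RDT flags appear depends on what the hypervisor chooses to expose.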
Answer 2:
Amazon EC2 instances run on resources that are shared with other EC2 instances, all managed by the underlying **hypervisor**.
The above is true for all instance types provided by Amazon, except dedicated instances.
Dedicated instances don't share resources while in use, to avoid multi-tenancy: the hypervisor makes sure that no other customer's VMs run on the underlying server.
Answer 3:
Dedicated instances may share hardware with your other instances. From EC2 Dedicated Instances:
"Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that's dedicated to a single customer. Your Dedicated instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated instances may share hardware with other instances from the same AWS account that are not Dedicated instances."
As far as I know, AWS does not specify exactly how the physical memory hierarchy and bandwidth are divided between different virtual machines on the different CPU classes, but we can infer how it is done from documentation by Intel and Xen/KVM (the hypervisors that EC2 uses as its base technology).
Note that cache and memory bandwidth are related, but, strictly speaking, there is a difference between cache and memory: cache refers to memory on the CPU, and memory refers to the DRAM chips accessible to the CPU for load/store operations. Not all EC2 CPUs have all of these technologies; for example, Intel introduced cache management before memory bandwidth management.
The most relevant technology to the question of shared RAM bandwidth is Memory Bandwidth Allocation. This enables the hypervisor to partition memory bandwidth between different CPU cores, so that different instances don't have to share bandwidth. However, this technology wasn't introduced until Skylake, which is used for C5 instances.
The relevant technologies, by the CPU family that introduced them:
Xeon E5-2676 v3 (Haswell)
Used by EC2 instances: T2, C4, some M4
Cache Monitoring Technology (CMT)
"CMT can be used to monitor Last Level Cache (LLC) usage by application threads. With this information, administrators and management applications can balance workloads more efficiently to improve both application performance and physical resource utilization. For example, CMT can be used to reduce the impact of the so-called "noisy neighbour" issue in multitenant cloud and data center environments." (see Xen Intel Platform QoS Technologies)
Cache Allocation Technology (CAT)
"Intel’s Cache Allocation Technology (CAT) helps address shared resource concerns by providing software control of where data is allocated into the last-level cache (LLC), enabling isolation and prioritization of key applications."
Code and Data Prioritization
"Code and Data Prioritization (CDP) as introduced on the Intel® Xeon® processor E5 v4 family is a specialized extension to Cache Allocation Technology (CAT), which enables software control over code and data placement in the last-level cache (LLC)."
Note that these are all cache management technologies, not memory bandwidth (though obviously related).
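For what the CAT/CDP controls above look like in practice, here is a hedged sketch using the Linux resctrl schemata file. Again this is a host or bare-metal view; the group name "latency_sensitive" and the way masks are illustrative values for this example, not anything EC2 documents or exposes to guests.

```python
# Hedged sketch: restrict a group of tasks to a subset of LLC ways via
# the Linux resctrl interface (CAT), or to separate code/data ways (CDP).
from pathlib import Path

RESCTRL = Path("/sys/fs/resctrl")  # mount -t resctrl resctrl /sys/fs/resctrl
                                   # (mount with "-o cdp" to split code/data masks)

def reserve_llc_ways(group: str, mask: int, domain: int = 0) -> None:
    """Create a resctrl group and limit it to the LLC ways set in `mask` (CAT)."""
    grp = RESCTRL / group
    grp.mkdir(exist_ok=True)
    (grp / "schemata").write_text(f"L3:{domain}={mask:x}\n")

def reserve_llc_ways_cdp(group: str, code_mask: int, data_mask: int, domain: int = 0) -> None:
    """Same idea with CDP enabled: separate masks for code and data lines."""
    grp = RESCTRL / group
    grp.mkdir(exist_ok=True)
    (grp / "schemata").write_text(
        f"L3CODE:{domain}={code_mask:x}\nL3DATA:{domain}={data_mask:x}\n")

if __name__ == "__main__":
    reserve_llc_ways("latency_sensitive", 0xFF)  # pin this group to 8 cache ways
```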
Xeon E5-2686 v4 (Broadwell)
Used by EC2 instances: some M4.
Memory Bandwidth Monitoring (MBM)
"Both cache and memory bandwidth can have a large impact on overall application performance in complex modern multithreaded and multitenant environments. In the cloud datacenter for instance it is important to understand the resource requirements of an application in order to meet targets and provide optimal performance. Similarly, some applications may over-utilize shared resources, and the capability to detect such “noisy neighbor” applications is important. The new Memory Bandwidth Monitoring (MBM) feature helps address this issue, for the first time, by providing per-thread memory bandwidth monitoring for all threads simultaneously."
MBM enables monitoring of memory bandwidth usage, but not direct control. Monitoring is still useful for mitigation - for example, a VM that uses a lot of memory bandwidth could be migrated to a physical machine where more memory bandwidth is available.
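A minimal sketch of how such monitoring looks through the Linux resctrl interface (host or bare-metal view only): the counters are cumulative byte counts, so sampling them twice gives an approximate bandwidth figure.

```python
# Hedged sketch: derive an approximate memory bandwidth from the cumulative
# MBM counters exposed by the Linux resctrl filesystem.
import time
from pathlib import Path

RESCTRL = Path("/sys/fs/resctrl")

def mbm_total(group: str = "") -> int:
    """Sum cumulative memory-traffic bytes across all L3 domains for a group."""
    mon_data = (RESCTRL / group if group else RESCTRL) / "mon_data"
    return sum(int((d / "mbm_total_bytes").read_text())
               for d in mon_data.glob("mon_L3_*"))

def bandwidth_mb_s(group: str = "", interval: float = 1.0) -> float:
    """Approximate memory bandwidth (MB/s) for the group over `interval` seconds."""
    before = mbm_total(group)
    time.sleep(interval)
    after = mbm_total(group)
    return (after - before) / interval / 1e6

if __name__ == "__main__":
    print(f"{bandwidth_mb_s():.1f} MB/s")
```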
Xeon Platinum 8000 (Skylake)
Used by EC2 instances: C5
Memory Bandwidth Allocation (MBA)
"The Intel® Xeon® Scalable processors introduce Memory Bandwidth Allocation (MBA), which provides new levels of control over how memory bandwidth is distributed across running applications. MBA enables improved prioritization, bandwidth management and is a valuable tool to help control data center noisy neighbors."
MBA enables direct control and QoS partitioning of memory bandwidth usage.
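For illustration, a hedged sketch of what MBA control looks like at the OS level via the Linux resctrl schemata file. The group name "noisy", the 30% cap, and the PID are illustrative values chosen for this example; none of this is exposed inside a guest of a shared instance.

```python
# Hedged sketch: throttle the memory bandwidth of a group of tasks with MBA
# through the Linux resctrl interface (host / bare-metal view only).
from pathlib import Path

RESCTRL = Path("/sys/fs/resctrl")

def cap_memory_bandwidth(group: str, percent: int, domains=(0,)) -> None:
    """Create a resctrl group and cap its memory bandwidth per L3 domain,
    expressed as a percentage of the maximum throttle-free bandwidth."""
    grp = RESCTRL / group
    grp.mkdir(exist_ok=True)
    line = "MB:" + ";".join(f"{d}={percent}" for d in domains) + "\n"
    (grp / "schemata").write_text(line)

def assign_task(group: str, pid: int) -> None:
    """Move a PID into the group so the cap applies to it."""
    (RESCTRL / group / "tasks").write_text(str(pid))

if __name__ == "__main__":
    cap_memory_bandwidth("noisy", 30)  # limit to roughly 30% of available bandwidth
    assign_task("noisy", 12345)        # hypothetical PID of a bandwidth-hungry task
```

The throttle is expressed per L3 domain as a percentage of the maximum unthrottled bandwidth, which is what lets a hypervisor or operator keep one tenant's memory traffic from starving another's.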
Source: https://stackoverflow.com/questions/54398215/does-ec2-dedicated-instance-share-ram-bandwidth