kvm

Qemu-KVM: Translation of guest physical address to host virtual/host physical address

大城市里の小女人 submitted on 2020-07-20 17:18:45
Question: I am working on a project where I need to translate qemu guest physical addresses to host virtual/physical addresses. I am using VMI (virtual machine introspection) to introspect the qemu process (the KVM VM) and to read guest physical addresses stored in virtio ring buffer descriptors. I am therefore looking for a simple way to translate qemu physical addresses to host virtual addresses on the host side (i.e., to extract as little info as possible from the qemu process). I read
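One low-footprint approach (a sketch, not an official QEMU interface): QEMU typically backs guest RAM with one large anonymous mapping in its own address space, so a VMI tool can locate that mapping in /proc/&lt;pid&gt;/maps and add the GPA as an offset. The size heuristic and the flat-RAM assumption below are assumptions on my part; x86 guests have PCI/MMIO holes, so production code should consult the guest memory map (e.g. via the QEMU monitor) rather than assume a single contiguous region.

```python
def find_ram_base(pid: int, guest_ram_bytes: int) -> int:
    """Heuristically locate the HVA where guest RAM is mapped, by
    scanning /proc/<pid>/maps for a large anonymous rw mapping at
    least as big as guest RAM. This is an assumption about QEMU's
    layout, not a guaranteed interface."""
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            fields = line.split()
            start, end = (int(x, 16) for x in fields[0].split("-"))
            perms = fields[1]
            pathname = fields[5] if len(fields) > 5 else ""
            if perms.startswith("rw") and not pathname and end - start >= guest_ram_bytes:
                return start
    raise LookupError("guest RAM mapping not found")

def gpa_to_hva(ram_base_hva: int, gpa: int) -> int:
    # Valid only for a flat RAM layout starting at GPA 0; real x86
    # guests have MMIO holes that need the actual memory map.
    return ram_base_hva + gpa
```

With the HVA in hand, the descriptor contents can be read via /proc/&lt;pid&gt;/mem or process_vm_readv without touching anything else inside the qemu process.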

libvirtError: XML error: expected unicast mac address, found multicast

蓝咒 submitted on 2020-06-26 12:54:45
Question: I'm setting up KVM automation via ansible, and I have one VM which keeps giving me this error: libvirtError: XML error: expected unicast mac address, found multicast '53:54:00:b4:ad:81'. I don't believe this is an ansible problem, as several other VMs work just fine. I've tried a different host, and even changed the MAC to one that has worked before as well as one that has never been used. As best I can tell this is NOT a multicast mac address; I'm not sure what the problem is or where to look
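The error is in fact correct: whether a MAC address is multicast depends only on the least-significant bit of its first octet (the I/G bit), and 0x53 is odd, so that bit is set. The conventional locally-administered QEMU/KVM prefix is 52:54:00; 53:54:00 looks almost identical but is multicast. A minimal check:

```python
def is_multicast_mac(mac: str) -> bool:
    """A MAC address is multicast iff the least-significant bit of
    its first octet (the I/G bit) is set."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x01)

# 0x53 & 0x01 == 1, so libvirt rightly rejects 53:54:00:b4:ad:81;
# changing the first octet to 0x52 yields a valid unicast address.
```

So the fix is simply to generate MACs whose first octet is even, e.g. stick to the 52:54:00 prefix.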

Linux perf events profiling in Google Compute Engine not working

雨燕双飞 submitted on 2020-06-12 09:07:23
Question: I'm new to using Google Compute Engine. I'd like to use the Linux perf tool to make various perf-event measurements of my application and eventually do sampling profiling. I've installed the linux perf tool on my Ubuntu 16.04 LTS VM. However, even basic events like cycles show up as "not supported". I'm guessing that the underlying KVM hypervisor does not have virtual PMU support enabled, although I believe KVM does support this with a non-default flag setting. Is there any way to get this
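A quick way to confirm the diagnosis is to run a trivial command under perf stat and look for "&lt;not supported&gt;" next to the cycles event. A small sketch (the parsing helper is mine, not part of perf):

```python
import subprocess

def event_supported(perf_stderr: str, event: str) -> bool:
    """Return True if `perf stat` output shows a real count for the
    given event rather than '<not supported>'."""
    for line in perf_stderr.splitlines():
        if event in line.split():
            return "<not supported>" not in line
    return False

def cycles_supported() -> bool:
    # perf stat writes its counter table to stderr, not stdout
    res = subprocess.run(["perf", "stat", "-e", "cycles", "--", "true"],
                         capture_output=True, text=True)
    return event_supported(res.stderr, "cycles")
```

On hosts you control, the guest vPMU is typically exposed with a QEMU CPU flag such as -cpu host,pmu=on (in libvirt, a pmu feature element in the domain XML); on Compute Engine you cannot change the hypervisor, so whether a vPMU is exposed is up to Google.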

Check if VT-D / IOMMU has been enabled in the BIOS/UEFI

守給你的承諾、 submitted on 2020-05-27 09:24:28
Question: To check whether Intel's VT-X or AMD's AMD-V is enabled in the BIOS/UEFI, I use: if systool -m kvm_amd -v &> /dev/null || systool -m kvm_intel -v &> /dev/null ; then echo "AMD-V / VT-X is enabled in the BIOS/UEFI." ; else echo "AMD-V / VT-X is not enabled in the BIOS/UEFI" ; fi. However, I couldn't find a way to check whether Intel's VT-D or AMD's IOMMU is enabled in the BIOS/UEFI. I need a way to detect whether it is enabled without having the iommu kernel parameters set ( iommu=1 , amd_iommu=on , intel_iommu=on
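One signal that does not depend on kernel parameters (my suggestion, not from the question): when VT-d / AMD-Vi is enabled in firmware, the ACPI tables handed to the OS include a DMAR table (Intel) or an IVRS table (AMD), visible under /sys/firmware/acpi/tables regardless of any iommu= settings. A sketch:

```python
import os

def iommu_tables_present(acpi_table_names) -> bool:
    """True if the firmware exported a VT-d (DMAR) or AMD-Vi (IVRS)
    ACPI table, i.e. the IOMMU is enabled in the BIOS/UEFI."""
    return bool({"DMAR", "IVRS"} & set(acpi_table_names))

def iommu_enabled_in_firmware() -> bool:
    # Only the directory listing is needed, not the table contents
    # (reading the tables themselves may require root).
    return iommu_tables_present(os.listdir("/sys/firmware/acpi/tables"))
```

The same check can of course be done in shell with something like ls /sys/firmware/acpi/tables | grep -E 'DMAR|IVRS'.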

KVM shadow page table handling on the x86 platform

懵懂的女人 submitted on 2020-05-10 03:24:22
Question: From what I understand, on processors that don't have hardware support for guest-virtual to host-physical address translation, KVM uses a shadow page table. The shadow page table is built and updated when the guest OS modifies its page tables. Are there special instructions in the hardware (let's take x86 for reference) for modifying the page table? Unless there are special instructions, there won't be a trap to the VMM. Isn't the page table maintained in software by the Linux kernel just
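To the question's point: there is no special instruction for editing page-table entries; the guest writes them with ordinary stores, and only operations like loading CR3 or INVLPG are privileged and trap. The VMM therefore makes ordinary stores trap itself, by write-protecting (in the shadow mapping) the guest pages that hold page tables, so a guest write faults into the VMM, which can then resynchronize the shadow entry. A toy illustration of that resync step, not KVM code:

```python
class ShadowMMU:
    """Toy model: guest page tables live in ordinary memory, so the
    VMM write-protects the pages holding them; the resulting page
    fault is the 'trap' that keeps the shadow table coherent."""

    def __init__(self):
        self.guest_pt = {}    # guest's table: gva page -> gpa page
        self.shadow_pt = {}   # VMM's shadow:  gva page -> hpa page
        self.gpa_to_hpa = {}  # VMM's guest-phys -> host-phys map

    def guest_pt_write(self, gva_page, gpa_page):
        # An ordinary store by the guest; because the page holding
        # the guest PT is write-protected, it faults into the VMM...
        self.guest_pt[gva_page] = gpa_page
        # ...which emulates the write and updates the shadow entry
        # so the hardware walks gva -> hpa directly.
        self.shadow_pt[gva_page] = self.gpa_to_hpa[gpa_page]
```

With hardware support (EPT/NPT) none of this is needed, which is why shadow paging is now mostly a fallback path in KVM.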

A recommended overseas VPS provider: Vultr

為{幸葍}努か submitted on 2020-05-09 22:22:01
About Vultr: Vultr is the VPS service of the well-known US cloud provider Choopa.com. Choopa has long provided global infrastructure for game companies and runs data centers in 14 countries and regions, including Tokyo (Japan), Singapore, Los Angeles and Seattle (US), London (UK), and Germany. It is very cost-effective, currently starting at just $2.5/month, and new users get $25 on sign-up. Register through the link below and, as a new user, receive $25 (a $10 payment is required, i.e. pay $10 and get $35 in credit): promotion link

Vultr's advantages:
1. Many locations: 14 data centers across Japan, the US, Europe, and elsewhere.
2. Solid architecture: all instances use KVM virtualization and SSD storage, with traffic starting at 1000 GB/month.
3. Rich images: besides common Linux distributions, you can install a custom ISO, including Windows.
4. Capable control panel: snapshots, one-click deployment scripts, backups, firewall, and more.
5. Promotions: occasional sign-up bonuses, sometimes as high as $100.
6. Flexible billing: hourly billing, so machines can be added and deleted at will.
7. Responsive support: tickets are usually answered within about 15 minutes (time-zone differences may apply).

Configurations and pricing
Source: oschina Link: https://my.oschina.net/u/1470240/blog/2876658

Alibaba Cloud ECS memory-enhanced re6 instances: CPU and memory performance review

笑着哭i submitted on 2020-05-09 20:31:50
Alibaba Cloud has released the ECS memory-enhanced instance family re6. re6 instances pair Intel's latest processors with higher per-core memory capacity for better price/performance. 码笔记 shares Alibaba Cloud's official CPU and memory performance review and application scenarios for the ECS memory-enhanced re6 instances:

ECS memory-enhanced re6 performance details
Alibaba Cloud ECS memory-enhanced re6 instances are based on the X-Dragon (Shenlong) architecture, which reduces virtualization overhead; performance is up 30% and prices are down 4.5%, for better price/performance.
- I/O-optimized instances
- Support for ESSD, SSD, and efficient cloud disks
- Optimized for high-performance databases, in-memory databases, and other memory-intensive enterprise applications
- Processor: 2.5 GHz Intel® Xeon® Platinum 8269CY (Cascade Lake), turbo up to 3.2 GHz, with stable compute performance
- Processor-to-memory ratio of 1:16, a high share of memory resources, up to 3 TiB of memory

re6 performance review
A full review of the re6 instance, covering CPU, memory, network, and application performance tests:
- re6 CPU performance: 2.5 GHz Intel® Xeon® Platinum 8269CY (Cascade Lake) with 3.2 GHz all-core turbo; compute performance up more than 30%.
- re6 memory performance: memory ratio raised to 1:14.8, NUMA enabled in the underlying environment for much lower memory latency, and higher-frequency memory selected for sustained bandwidth gains.