pci

PCIe Basics (Part 2): Protocol Details

百般思念 submitted on 2019-12-05 23:55:26
A complete PCIe protocol stack consists of an application layer, a Transaction Layer, a Data Link Layer, and a Physical Layer. The application layer is designed by the user; the other layers must strictly follow the specification. The application (software) layer generally refers to the Device Core and its interface to the Transaction Layer; it determines the type and basic functionality of the PCIe device and can be implemented in an FPGA.

Transaction Layer: on the receive side, the Transaction Layer decodes and checks Transaction Layer Packets (TLPs); on the transmit side, it assembles TLPs. The Transaction Layer also provides functions such as flow control.

Data Link Layer: responsible for creating, decoding, and checking Data Link Layer Packets (DLLPs). It also implements the ACK/NAK acknowledgment mechanism.

Physical Layer: responsible for creating and decoding Ordered Sets, and for transmitting and receiving packets of all types. Before transmission, packets undergo further processing: scrambling (using a linear-feedback shift register) and 8b/10b encoding (for DC balance).

In the PCIe architecture, the Transaction Layer, Data Link Layer, and Physical Layer are present in every port and are mandatory parts of the structure. Source: https://www.cnblogs.com
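For the scrambling step mentioned above, a minimal sketch of a Gen1/Gen2-style scrambler LFSR is given below, assuming the polynomial x^16 + x^5 + x^4 + x^3 + 1 with an initial value of 0xFFFF; the byte-wise interface, the function names, and the simplified bit ordering (no K-character or per-lane handling) are illustrative and not taken from the original post.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative model of a PCIe Gen1/Gen2-style scrambler LFSR
     * (polynomial x^16 + x^5 + x^4 + x^3 + 1, seeded with 0xFFFF).
     * Bit ordering and K-character handling are simplified. */
    static uint16_t lfsr = 0xFFFF;

    static uint8_t scramble_byte(uint8_t data)
    {
        uint8_t out = 0;
        for (int i = 0; i < 8; i++) {
            int s = (lfsr >> 15) & 1;                          /* scrambler output bit  */
            lfsr = (uint16_t)((lfsr << 1) ^ (s ? 0x0039 : 0)); /* Galois feedback taps  */
            out |= (uint8_t)((((data >> i) & 1) ^ s) << i);    /* XOR data bit with LFSR */
        }
        return out;
    }

    int main(void)
    {
        printf("scrambled 0x00 -> 0x%02x\n", scramble_byte(0x00));
        return 0;
    }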

Address mapping of PCI-memory in Kernel space

吃可爱长大的小学妹 submitted on 2019-12-05 14:03:57
I'm trying to read from and write to a PCI device from a loadable kernel module. To do so I followed this post: pci_enable_device(dev); pci_request_regions(dev, "expdev"); bar1 = pci_iomap(dev, 1, 0); // void iowrite32(u32 val, void __iomem *addr) iowrite32( 0xaaaaaaaa, bar1 + 0x060000); /* offset from device spec */ But in the end the device does not do its work as expected. Then I looked at the address behind bar1 and found a very large value, ffffbaaaaa004500. At this point I don't really understand what happened there and what would be correct. Can I interpret bar1 as an address inside my kernel
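For context, a minimal sketch of the same sequence inside a driver probe() callback is shown below; the probe-function framing, the goto-based error handling, and the drvdata pointer are illustrative assumptions, while the "expdev" name, BAR index 1, and the 0x060000 register offset come from the question.

    #include <linux/module.h>
    #include <linux/pci.h>

    /* Sketch: map BAR1 and write one device register from probe(). */
    static int expdev_probe(struct pci_dev *dev, const struct pci_device_id *id)
    {
        void __iomem *bar1;
        int err;

        err = pci_enable_device(dev);
        if (err)
            return err;

        err = pci_request_regions(dev, "expdev");
        if (err)
            goto disable;

        bar1 = pci_iomap(dev, 1, 0);          /* map all of BAR1 */
        if (!bar1) {
            err = -ENOMEM;
            goto release;
        }

        /*
         * bar1 is an ioremapped kernel virtual address (which is why it looks
         * like a very large value such as ffffbaaaaa004500); it must only be
         * used through accessors such as iowrite32()/ioread32(), never
         * dereferenced directly.
         */
        iowrite32(0xaaaaaaaa, bar1 + 0x060000);

        pci_set_drvdata(dev, bar1);
        return 0;

    release:
        pci_release_regions(dev);
    disable:
        pci_disable_device(dev);
        return err;
    }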

Retrieving PCI coordinates by Windows' API (user mode)

≯℡__Kan透↙ submitted on 2019-12-05 10:41:45
Is there a way to obtain the PCI coordinates (bus/slot/function numbers) of devices using the Windows C/C++ API (e.g. the PnP Configuration Manager API)? I already know how to do it in kernel mode; I need a user-mode solution. My target system is Windows XP 32-bit. I eventually found a simple solution (it was just a matter of digging into MSDN). This minimal code finds a device's PCI coordinates in terms of bus/slot/function: DWORD bus, addr, slot, func; HDEVINFO h; // Obtained by SetupDiGetClassDevs SP_DEVINFO_DATA d; // Filled by SetupDiGetDeviceInterfaceDetail SetupDiGetDeviceRegistryProperty(h
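A hedged reconstruction of the truncated snippet follows, assuming the HDEVINFO comes from SetupDiGetClassDevs and the SP_DEVINFO_DATA was filled by SetupDiEnumDeviceInfo; interpreting SPDRP_ADDRESS as the device (slot) number in the high word and the function number in the low word is the conventional decoding, not something spelled out in the excerpt.

    #include <windows.h>
    #include <setupapi.h>
    #include <stdio.h>

    /* Sketch: read bus/slot/function for one enumerated device.
     * h is an HDEVINFO from SetupDiGetClassDevs, d an SP_DEVINFO_DATA
     * filled by SetupDiEnumDeviceInfo for the device of interest. */
    static void print_pci_coordinates(HDEVINFO h, SP_DEVINFO_DATA *d)
    {
        DWORD bus = 0, addr = 0, slot, func;

        SetupDiGetDeviceRegistryProperty(h, d, SPDRP_BUSNUMBER, NULL,
                                         (PBYTE)&bus, sizeof(bus), NULL);
        SetupDiGetDeviceRegistryProperty(h, d, SPDRP_ADDRESS, NULL,
                                         (PBYTE)&addr, sizeof(addr), NULL);

        slot = (addr >> 16) & 0xFFFF;   /* device (slot) number in the high word */
        func = addr & 0xFFFF;           /* function number in the low word       */

        printf("bus %lu, slot %lu, function %lu\n", bus, slot, func);
    }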

enable device: BAR 0 [mem 0x00000000-0x003fffff] not claimed

落爺英雄遲暮 submitted on 2019-12-05 07:41:34
/*******************************************************************************
 * enable device: BAR 0 [mem 0x00000000-0x003fffff] not claimed
 * Description:
 *     The Linux driver call to pci_enable_device() reports a "not claimed" error.
 *
 * 2019-11-22, Shenzhen Bao'an Xixiang, 曾剑锋
 *******************************************************************************/
1. References
    1. pci_enable_device() fails after remove/rescan
       https://stackoverflow.com/questions/46476844/pci-enable-device-fails-after-remove-rescan
2. Cause
    The class code of the FPGA's PCI/PCIe controller was not set properly (it probably kept the default value of 0), so Linux identifies the device as a "pcie Non-VGA unclassified device".
3. Solution
    Set the class code to 0x40000 (multimedia video controller) or 0xff0000.
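As a quick driver-side sanity check, a sketch of reading the class code Linux recorded for the device is shown below; the probe() framing, the driver name, and the comparison against the multimedia base class are illustrative assumptions, not part of the original note.

    #include <linux/module.h>
    #include <linux/pci.h>

    /* Sketch: confirm the class code before calling pci_enable_device().
     * pdev->class holds (base class << 16) | (subclass << 8) | prog-if,
     * so a device left at the default shows up as 0x000000
     * ("Non-VGA unclassified device"). */
    static int mydev_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        dev_info(&pdev->dev, "class code: 0x%06x\n", pdev->class);

        if ((pdev->class >> 16) == PCI_BASE_CLASS_MULTIMEDIA)
            dev_info(&pdev->dev, "multimedia class, as configured in the FPGA\n");

        return pci_enable_device(pdev);
    }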

check for IOMMU support on linux [closed]

谁都会走 submitted on 2019-12-05 05:51:30
I'd like to verify on any given Linux machine whether PCI passthrough is supported. After a bit of googling, I found that I should instead check whether the IOMMU is supported, and I did so by running: dmesg | grep IOMMU. If IOMMU (and not IOMMUv2) is supported, I get: [ 0.000000] DMAR: IOMMU enabled [ 0.049734] DMAR-IR: IOAPIC id 8 under DRHD base 0xfbffc000 [ 0.049735] DMAR-IR: IOAPIC id 9 under DRHD base
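Another common check, complementary to grepping dmesg, is to see whether the kernel populated any IOMMU groups; the sketch below counts entries under /sys/kernel/iommu_groups, with the usual (assumed) interpretation that zero groups means the IOMMU is disabled or unsupported.

    #include <dirent.h>
    #include <stdio.h>

    /* Sketch: count IOMMU groups exposed by the kernel. */
    int main(void)
    {
        DIR *d = opendir("/sys/kernel/iommu_groups");
        struct dirent *e;
        int groups = 0;

        if (!d) {
            perror("opendir /sys/kernel/iommu_groups");
            return 1;
        }
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] != '.')   /* skip "." and ".." */
                groups++;
        }
        closedir(d);

        printf("%d IOMMU group(s) found\n", groups);
        return groups == 0;
    }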

Data Storage Technology

為{幸葍}努か submitted on 2019-12-05 00:41:30
Course outline: true/false questions, term definitions, short-answer questions, calculations.

Data-path hardware. A data access path is the route along which data and commands travel from a source to a destination. The source and destination are usually memory or storage devices; the physical components between the two ends make up the hardware of the access path.

Bus: a bus is the communication channel that connects devices. It includes the data bus, the address bus, the control bus (which keeps multiple CPUs from operating on the same block of memory at once), power lines, and so on.

System bus: the system bus is a compact, high-speed bus that connects the CPU/cache to system memory. A module wins the right to use the bus through bus arbitration and then communicates with another hardware module.

Memory bus.

I/O bus: the peripheral bus (I/O bus) is bridged to the system bus and controlled by a bridge controller; it connects the peripheral devices. The bridge controller contains a cache that compensates for the speed difference between the two buses. Examples include the IDE bus, bridge-to-bridge interconnects, the USB bus, the PCI bus (a southbridge-to-peripheral bus technology widely used in today's desktops and servers), and the SCSI bus.

PCI bus: the PCI (Peripheral Component Interconnect) bus is currently the southbridge-to-peripheral bus technology most widely used in desktops and servers. The PCI address bus and data bus are time-multiplexed, which both saves connector pins and makes burst data transfers easier to implement. During a data transfer, one PCI device acts as the initiator (master) and another PCI device acts as the target (slave).

The MSI and MSI-X Interrupt Mechanisms

微笑、不失礼 submitted on 2019-12-04 15:03:04
On the PCI bus, every device that needs to raise interrupt requests must be able to do so through the INTx pins, while the MSI mechanism is optional. On the PCIe bus, by contrast, a PCIe device must support the MSI or MSI-X interrupt mechanism and may omit support for INTx interrupt messages.

On the PCIe bus, the MSI and MSI-X mechanisms deliver interrupt requests to the processor using memory write request TLPs; for brevity, the memory write packets carrying MSI/MSI-X interrupt messages are referred to below as MSI/MSI-X packets. Different processors handle these MSI/MSI-X interrupt requests with different mechanisms: PowerPC processors use the MPIC interrupt controller, covered in Section 6.2 of this chapter, while x86 processors use FSB Interrupt Messages.

Different processors interpret the MSI packets issued by PCIe devices differently. In all cases, however, a PCIe device raises an MSI interrupt by writing the Message Data value to the address given by Message Address in its MSI/MSI-X Capability structure, which forms a memory write TLP that delivers the interrupt request to the processor. Some PCIe devices also support the legacy interrupt mechanism [1], but the PCIe bus discourages its devices from using legacy interrupts
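For the Linux side of this, a minimal sketch of requesting an MSI/MSI-X vector with the current pci_alloc_irq_vectors() interface is shown below; the driver and handler names are hypothetical, and single-vector allocation is assumed for simplicity. The kernel programs Message Address/Message Data in the device's MSI or MSI-X Capability; the device later raises the interrupt by issuing the corresponding memory write TLP.

    #include <linux/module.h>
    #include <linux/pci.h>
    #include <linux/interrupt.h>

    static irqreturn_t mydev_irq(int irq, void *data)
    {
        /* acknowledge the device-specific interrupt source here */
        return IRQ_HANDLED;
    }

    /* Sketch: allocate one MSI-X (or MSI) vector and hook a handler to it. */
    static int mydev_setup_irq(struct pci_dev *pdev)
    {
        int nvec, err;

        /* prefer MSI-X, fall back to MSI; ask for a single vector */
        nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSIX | PCI_IRQ_MSI);
        if (nvec < 0)
            return nvec;

        err = request_irq(pci_irq_vector(pdev, 0), mydev_irq, 0,
                          "mydev", pdev);
        if (err)
            pci_free_irq_vectors(pdev);

        return err;
    }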

A CUDA context was created on a GPU that is not currently debuggable

Anonymous (unverified) submitted on 2019-12-03 08:30:34
Question: When I start CUDA debugging, Nsight returns this error: "A CUDA context was created on a GPU that is not currently debuggable. Breakpoints will be disabled. Adapter: GeForce GT 720M". Below is my system and CUDA information; please note that the latest versions of CUDA and Nsight are installed. I searched for this issue and could not find an answer. Thank you so much. Report Information: UnixTime Generated 1490538033. OS Information: Computer Name DESKTOP - OLFM6NT, NetBIOS Name DESKTOP - OLFM6NT, OS Name Windows 10 Pro, GetVersionEx dwMajorVersion 10

How is a PCI / PCIe BAR size determined?

Anonymous (unverified) submitted on 2019-12-03 03:03:02
Question: I know that the base address register (BAR) in PCI configuration space defines the start location of a PCI address region, but how does the size of this region get established? Surely this is a property of the hardware, since only the hardware knows how far into its address space it can deal. However, I cannot seem to see a BAR size field in the PCI configuration structure. Answer 1: Found the answer at OSDev Wiki: "To determine the amount of address space needed by a PCI device, you must save the original value of the BAR, write a value of all 1's to the
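To fill in the rest of that procedure, here is a minimal sketch of the classic sizing sequence for a 32-bit memory BAR0 from a Linux driver, assuming a struct pci_dev *pdev is available; note that in-kernel code would normally just call pci_resource_len(), so the explicit sequence only illustrates the mechanism the OSDev quote describes.

    #include <linux/pci.h>

    /* Sketch of the classic BAR sizing sequence for a 32-bit memory BAR0:
     * save the BAR, write all 1s, read back, restore, then decode the size
     * from the bits the device left writable. */
    static resource_size_t size_bar0(struct pci_dev *pdev)
    {
        u32 orig, probe;

        pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &orig);
        pci_write_config_dword(pdev, PCI_BASE_ADDRESS_0, 0xFFFFFFFF);
        pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &probe);
        pci_write_config_dword(pdev, PCI_BASE_ADDRESS_0, orig);   /* restore */

        probe &= PCI_BASE_ADDRESS_MEM_MASK;    /* drop the type/prefetch bits */
        if (!probe)
            return 0;                          /* BAR not implemented */

        return (resource_size_t)(~probe) + 1;  /* size = ~(masked value) + 1 */
    }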

How to create out-of-tree QEMU devices?

Anonymous (unverified) submitted on 2019-12-03 02:03:01
Question: Two possible mechanisms come to mind: IPC, like the existing QMP and QAPI, or QEMU loading a shared-library plugin that contains the model. Required capabilities (all of them of course possible through the C API, but not necessarily through IPC APIs): inject interrupts, register callbacks for register access, modify main memory. Why I want this: to use QEMU as a submodule and leave its source untouched. Additional advantages only present for IPC methods: write the models in any language I want, and use a non-GPL license for my device. I'm aware of in-tree devices as explained