pci-e

How is a PCI / PCIe BAR size determined?

Submitted by 心已入冬 on 2019-12-30 03:12:06
Question: I know that the base address register (BAR) in PCI configuration space defines the start location of a PCI address region, but how is the size of this region established? Surely this is a property of the hardware, since only the device knows how far into its address space it can decode. However, I cannot find a BAR size field in the PCI configuration structure.

Answer 1: First of all, the BAR size must be a power of two (e.g., 1 KiB, 2 MiB), and each region must be aligned in memory such that the lower
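For illustration, the size is conventionally discovered by writing all 1s to the BAR register and reading back: the device hard-wires the low address bits to zero, so the first settable bit reveals the size. The kernel normally does this once at enumeration time; poking a live device's BARs from userspace is only for demonstration. A minimal sketch in C, assuming cfg_fd is an open handle on the device's config space (e.g. a /sys/bus/pci/devices/.../config file, path given as an example):

```c
#include <stdint.h>
#include <unistd.h>

/* Probe the size of a 32-bit memory BAR using the standard
 * write-all-ones trick (sketch, not production code). */
static uint64_t probe_bar_size(int cfg_fd, off_t bar_offset)
{
    uint32_t saved, probe = 0xFFFFFFFFu, readback;

    pread(cfg_fd, &saved, sizeof(saved), bar_offset);    /* save original base */

    pwrite(cfg_fd, &probe, sizeof(probe), bar_offset);   /* write all 1s       */
    pread(cfg_fd, &readback, sizeof(readback), bar_offset);

    pwrite(cfg_fd, &saved, sizeof(saved), bar_offset);   /* restore base       */

    /* Bits the device hard-wires to zero cannot be set; mask off the low
     * flag bits (I/O indicator, memory type, prefetchable). */
    uint32_t mask = readback & ~0xFu;
    if (mask == 0)
        return 0;                      /* BAR not implemented */

    return (uint64_t)(~mask) + 1;      /* size = two's complement of the mask */
}
```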

FPGA and PCI-E

Submitted by 本小妞迷上赌 on 2019-12-28 16:16:03
From parallel to serial: PCI Express (also called PCIe) is a high-performance, high-bandwidth serial interconnect standard that replaces bus-based communication architectures such as PCI, PCI Extended (PCI-X), and the Accelerated Graphics Port (AGP).

The main strengths of PCI-e:
- lower production cost
- higher system throughput
- better scalability and flexibility

The traditional bus-based interconnects listed above are simply unable to reach the level of performance that PCI-e delivers.

The PCI Express standard was defined with the future in mind and continues to evolve to give systems ever higher throughput. First-generation PCIe specifies a throughput of 2.5 gigabits per second (Gbps) per lane, the second generation reaches 5.0 Gbps, and the recently released PCIe 3.0 standard supports a rate of 8.0 Gbps. While the PCIe standard keeps adopting the latest technology to provide ever-increasing throughput, its layered protocol and the software compatibility it preserves between drivers and existing PCI applications simplify the transition from PCI to PCIe.

Although it was originally aimed at PC expansion cards and graphics cards, PCIe is now used in a much broader range of areas, including networking, communications, storage, industrial, and consumer electronics.

This article does not go into the detailed PCI-e protocol; it only gives a high-level overview of PCI-e, its advantages, and the advantages of implementing PCI-e in an FPGA.

PCIe's advantages come at the cost of complexity. PCIe is a packet-based serial link protocol estimated to be more than ten times as complex as the parallel PCI bus. Part of that complexity stems from the parallel-to-serial data conversion required at gigahertz rates and from the shift to a packet-based implementation.

PCI vs. PCI-e interfaces

pci device info access in linux from userspace

Submitted by 谁说我不能喝 on 2019-12-24 03:38:07
Question: I want to access the PCI device tree information from user space programmatically, such as the root complex and the devices connected to it. How can I do this? Regards, Pradeep

Answer 1: libpci or pcilib (on which lspci is based) uses sysfs, procfs, and possibly other means to access PCI information. You can check the pciutils package source code for further reference: https://github.com/gittup/pciutils https://github.com/gittup/pciutils/blob/gittup/lspci.c

Answer 2: From command line try to
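To illustrate Answer 1, here is a hedged sketch of enumerating PCI devices with libpci from pciutils (requires the pciutils development headers and linking with -lpci); treat it as an outline rather than a verified program:

```c
#include <stdio.h>
#include <pci/pci.h>   /* libpci from pciutils; link with -lpci */

int main(void)
{
    struct pci_access *pacc = pci_alloc();   /* allocate library state   */
    pci_init(pacc);                          /* initialise access method */
    pci_scan_bus(pacc);                      /* enumerate all devices    */

    for (struct pci_dev *dev = pacc->devices; dev; dev = dev->next) {
        /* Fill in vendor/device IDs and base addresses for this device. */
        pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_BASES);
        printf("%02x:%02x.%d vendor=%04x device=%04x\n",
               dev->bus, dev->dev, dev->func,
               dev->vendor_id, dev->device_id);
    }

    pci_cleanup(pacc);                       /* release library state    */
    return 0;
}
```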

Intel NVMe drive Performance degradation with xfs filesystem with sector size other than 4096

Submitted by 孤者浪人 on 2019-12-23 01:11:08
Question: I am working with an NVMe card on Linux (Ubuntu 14.04). I am seeing some performance degradation for the Intel NVMe card when it is formatted with the xfs file system with its default sector size (512), or any other sector size less than 4096. In the experiment I formatted the card with the xfs filesystem with default options. I tried running fio with a 64k block size on an arm64 platform with a 64k page size. This is the command used: fio --rw=randread --bs=64k --ioengine=libaio --iodepth=8 --direct=1 --group

Enabling write-combining IO access in userspace

Submitted by 天大地大妈咪最大 on 2019-12-21 03:55:08
Question: I have a PCIe device with a userspace driver. I'm writing commands to the device through a BAR; the commands are latency sensitive and the amount of data is small (~64 bytes), so I don't want to use DMA. If I remap the physical address of the BAR in the kernel using ioremap_wc and then write 64 bytes to the BAR inside the kernel, I can see that the 64 bytes are written as a single TLP over PCIe. If I allow my userspace program to mmap the region with the MAP_SHARED flag and then write 64 bytes I
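As background, one alternative way to obtain a write-combining mapping from userspace, without a custom mmap handler, is the sysfs resourceN_wc file that the PCI core exposes on kernels and BARs where a WC mapping is possible (typically prefetchable BARs). A hedged sketch, assuming a hypothetical device address 0000:03:00.0 and that resource0_wc exists:

```c
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Example path; resource0_wc is only present when the kernel can
     * offer a write-combining mapping for that BAR. */
    const char *path = "/sys/bus/pci/devices/0000:03:00.0/resource0_wc";

    int fd = open(path, O_RDWR);
    if (fd < 0)
        return 1;

    /* Map one page of BAR0 as a shared, write-combined mapping. */
    volatile uint8_t *bar = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED)
        return 1;

    /* Build the 64-byte command locally, then copy it to the BAR;
     * with WC the CPU may merge the stores into one 64-byte TLP. */
    uint8_t cmd[64] = {0};
    memcpy((void *)bar, cmd, sizeof(cmd));

    munmap((void *)bar, 4096);
    close(fd);
    return 0;
}
```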

If I have only the physical address of device buffer (PCIe), how can I map this buffer to user-space?

Submitted by 佐手、 on 2019-12-20 05:49:19
Question: If I have only the physical address of the memory buffer to which the device buffer is mapped via the PCI Express BAR (Base Address Register), how can I map this buffer to user space? For example, what should the code in the Linux kernel usually look like?

unsigned long long phys_addr = ...;  // get device phys addr
unsigned long long size_buff = ...;  // get device buff size

// ... mmap(), remap_pfn_range(), or what should I do now?

On: Linux x86_64. From: https://stackoverflow.com/a/17278263
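For context, the usual kernel-side pattern is to implement the driver's mmap file operation and hand the physical range to remap_pfn_range(). A minimal sketch, assuming a character device whose driver has already discovered phys_addr and size_buff (e.g. from the BAR); not a complete driver:

```c
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>

/* Assumed to have been discovered elsewhere (e.g. from the BAR). */
static unsigned long long phys_addr;
static unsigned long long size_buff;

static int mydev_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long len = vma->vm_end - vma->vm_start;

    if (len > size_buff)
        return -EINVAL;

    /* Device memory: use uncached (or write-combined) page protection. */
    vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

    /* Map the physical pages of the BAR region into the user's VMA. */
    return remap_pfn_range(vma, vma->vm_start,
                           phys_addr >> PAGE_SHIFT,
                           len, vma->vm_page_prot);
}

static const struct file_operations mydev_fops = {
    .owner = THIS_MODULE,
    .mmap  = mydev_mmap,
};
```

Userspace then calls mmap() on the character device with MAP_SHARED to get a pointer into the BAR-backed region.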

How do I inform a user space application that the driver has received an interrupt in linux?

Submitted by ﹥>﹥吖頭↗ on 2019-12-14 04:19:41
Question: I have a PCIe device that sends a hardware interrupt when a data buffer is ready to be read. I believe the best approach for this is to use signals, but I'm not entirely sure how. What I believe I need to do is (a kernel-side sketch follows this question):

1. Save the PID of the user space application so the driver knows where to send the signal
2. In the interrupt handler of the PCIe device driver, send a signal to the user space application
3. In the user space application, implement a signal handler function for processing the signal

I'm
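As a rough illustration of steps 1 and 2, the driver can record the caller's PID when userspace registers (e.g. in an ioctl or open) and raise a signal from the interrupt handler. A hedged sketch, assuming SIGUSR1 and a single registered process; the function names are made up for illustration, and whether raising a signal directly from hard-IRQ context is acceptable depends on the kernel, so many drivers defer this to a workqueue or use the fasync/SIGIO mechanism instead:

```c
#include <linux/interrupt.h>
#include <linux/pid.h>
#include <linux/sched/signal.h>

static struct pid *user_pid;             /* set when userspace registers */

/* Step 1: called from an ioctl (or open) issued by the userspace app. */
static void mydev_register_listener(void)
{
    user_pid = get_pid(task_pid(current));
}

/* Step 2: interrupt handler notifies userspace that data is ready. */
static irqreturn_t mydev_irq_handler(int irq, void *dev_id)
{
    if (user_pid)
        kill_pid(user_pid, SIGUSR1, 1);  /* 1 => treat as kernel-originated */
    return IRQ_HANDLED;
}
```

For step 3, the userspace program installs a handler for SIGUSR1 with sigaction() and reads the buffer when the handler fires.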

Generating a 64-byte read PCIe TLP from an x86 CPU

Submitted by 巧了我就是萌 on 2019-12-12 09:47:57
Question: When writing data to a PCIe device, it is possible to use a write-combining mapping to hint to the CPU that it should generate 64-byte TLPs towards the device. Is it possible to do something similar for reads? Can I somehow hint the CPU to read an entire cache line or a larger buffer instead of reading one word at a time?

Answer 1: Intel has a white-paper on copying from video RAM to main memory; this should be similar but a lot simpler (because the data fits in 2 or 4 vector registers). It says that NT
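For reference, the non-temporal load path the answer alludes to uses the MOVNTDQA instruction (the _mm_stream_load_si128 intrinsic, SSE4.1), which on write-combining memory can pull in a 64-byte line in one transaction. A hedged sketch, assuming bar_line points at a 16-byte-aligned WC mapping of the device and dst is 16-byte aligned; compile with -msse4.1:

```c
#include <emmintrin.h>
#include <smmintrin.h>   /* _mm_stream_load_si128 (SSE4.1) */

/* Read one 64-byte line from a WC-mapped BAR using four non-temporal
 * 16-byte loads, then store the result to ordinary memory. */
static void read_line_nt(void *dst, void *bar_line)
{
    __m128i *out = (__m128i *)dst;
    __m128i *src = (__m128i *)bar_line;

    __m128i a = _mm_stream_load_si128(src + 0);
    __m128i b = _mm_stream_load_si128(src + 1);
    __m128i c = _mm_stream_load_si128(src + 2);
    __m128i d = _mm_stream_load_si128(src + 3);

    _mm_store_si128(out + 0, a);
    _mm_store_si128(out + 1, b);
    _mm_store_si128(out + 2, c);
    _mm_store_si128(out + 3, d);
}
```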

Using pci_enable_msi_block

Submitted by 雨燕双飞 on 2019-12-11 00:08:41
Question: I am trying to enable multiple MSI IRQ lines in a kernel module. I am operating in RC (root complex) mode. The problem is that when I call pci_enable_msi_block() it will not allocate more than 1 MSI. If I call pci_enable_msi_block(dev, 32) it returns 4 (which I assume should mean I can use 4 MSIs). I then call pci_enable_msi_block(dev, 4) and it returns 1. Here is the output of $ lspci -v after insmod Custom_module.ko, but with only a successful enable of 1 MSI: 00:00.0 PCI bridge: Texas Instruments Device 8888
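As context, the conventional pattern with the old pci_enable_msi_block() API (since replaced by pci_enable_msi_range()/pci_alloc_irq_vectors() in newer kernels) is to retry with whatever count the core says it could provide: it returns 0 on success, a negative errno on hard failure, and a positive number meaning "this many could be allocated". A hedged sketch of that retry loop, not a diagnosis of the behaviour described above:

```c
#include <linux/pci.h>

/* Request up to max_vecs MSI vectors, retrying with the count the core
 * offers (old pci_enable_msi_block() semantics). Returns the number of
 * vectors enabled, or a negative errno. */
static int mydev_enable_msi(struct pci_dev *pdev, int max_vecs)
{
    int nvec = max_vecs;

    while (nvec >= 1) {
        int rc = pci_enable_msi_block(pdev, nvec);
        if (rc == 0)
            return nvec;   /* vectors are pdev->irq .. pdev->irq + nvec - 1 */
        if (rc < 0)
            return rc;     /* hard failure */
        nvec = rc;         /* retry with what the core could allocate */
    }
    return -ENOSPC;
}
```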

How to add pciutils package in yocto AGL?

Submitted by 旧时模样 on 2019-12-10 12:08:34
Question: I have built a Yocto AGL (6.0.0) image for the RCar-salvator-xs board and flashed its hyperflash memory. Now I want to perform a PCIe-related investigation, and for that I want to use the lspci command. But after logging in as root on the flashed AGL image and executing the lspci command, it gives command not found. How can I include pciutils in the AGL source code and build it so I can use the lspci command? I am new to Yocto and AGL. Any help will be much appreciated.

Answer 1: You can add IMAGE_INSTALL += "pciutils" or IMAGE