dma

How to write kernel space memory (physical address) to a file using O_DIRECT?

Submitted by 自闭症网瘾萝莉.ら on 2019-12-04 00:42:55
I want to write physical memory to a file. The memory itself will not be touched again, so I want to use O_DIRECT to get the best write performance. My first idea was to open /dev/mem, mmap the memory, and write everything to a file opened with O_DIRECT. The write call fails (EFAULT) on the memory address returned by mmap. If I do not use O_DIRECT, it results in a memcpy.

#include <cstdint>
#include <iostream>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <errno.h>
#include <malloc.h>
#include <sys/mman.h>
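A minimal C sketch of the attempt described above, assuming a page-aligned physical address and length (the PHYS_ADDR/LEN placeholders are illustrative, not from the original post). O_DIRECT requires the buffer, file offset, and length to be suitably aligned, and as noted the direct write from the /dev/mem mapping may still fail with EFAULT, so the sketch falls back to staging the data through a posix_memalign() bounce buffer:

/* Sketch only: dump a physical range from /dev/mem to a file opened with
 * O_DIRECT, falling back to an aligned bounce buffer if write() rejects
 * the /dev/mem mapping with EFAULT. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PHYS_ADDR 0x80000000UL   /* hypothetical, page aligned */
#define LEN       (1UL << 20)    /* 1 MiB, multiple of the block size */

int main(void)
{
    int memfd = open("/dev/mem", O_RDONLY);
    int outfd = open("out.bin", O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    if (memfd < 0 || outfd < 0) { perror("open"); return 1; }

    void *src = mmap(NULL, LEN, PROT_READ, MAP_SHARED, memfd, PHYS_ADDR);
    if (src == MAP_FAILED) { perror("mmap"); return 1; }

    if (write(outfd, src, LEN) < 0) {          /* may fail with EFAULT */
        void *bounce;
        if (posix_memalign(&bounce, 4096, LEN)) return 1;
        memcpy(bounce, src, LEN);              /* stage through aligned RAM */
        if (write(outfd, bounce, LEN) < 0) { perror("write"); return 1; }
        free(bounce);
    }

    munmap(src, LEN);
    close(outfd);
    close(memfd);
    return 0;
}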

DMA review

Submitted by 穿精又带淫゛_ on 2019-12-03 14:09:41
DMA stands for Direct Memory Access. A DMA transfer copies data from one address space to another: the CPU initiates the transfer, but the transfer itself is carried out and completed by the DMA controller. A DMA transfer needs no direct CPU control and, unlike interrupt handling, involves no saving and restoring of processor context; the hardware opens a direct data path between RAM and I/O devices, which greatly improves CPU efficiency. Purpose: to offload the CPU. The STM32 has at most two DMA controllers (DMA2 exists only on high-density parts); DMA1 has 7 channels and DMA2 has 5. Each channel is dedicated to managing memory-access requests from one or more peripherals, and an arbiter coordinates the priority of the DMA requests. DMA configuration procedure:
① Enable the DMA clock: RCC_AHBPeriphClockCmd();
② Initialize the DMA channel parameters: DMA_Init();
③ Enable USART DMA transmission with the USART DMA enable function: USART_DMACmd();
④ Enable the DMA1 channel to start the transfer: DMA_Cmd();
⑤ Poll the DMA transfer status: DMA_GetFlagStatus();
⑥ Get/set the channel's remaining data count: DMA_GetCurrDataCounter(); DMA_SetCurrDataCounter();
Source: https://www.cnblogs.com/tiange-137/p/11798404.html
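A minimal sketch of the six steps above, assuming an STM32F1 part with the Standard Peripheral Library and USART1 TX on DMA1 Channel 4; the channel mapping and the buffer are illustrative assumptions, not from the original note:

/* Sketch: configure DMA1 Channel 4 to feed USART1 TX from a RAM buffer. */
#include "stm32f10x.h"

static uint8_t tx_buf[64];                       /* data to send */

void dma_usart1_tx_config(void)
{
    DMA_InitTypeDef dma;

    /* 1. Enable the DMA1 clock */
    RCC_AHBPeriphClockCmd(RCC_AHBPeriph_DMA1, ENABLE);

    /* 2. Initialize the channel parameters */
    DMA_DeInit(DMA1_Channel4);
    dma.DMA_PeripheralBaseAddr = (uint32_t)&USART1->DR;
    dma.DMA_MemoryBaseAddr     = (uint32_t)tx_buf;
    dma.DMA_DIR                = DMA_DIR_PeripheralDST;   /* memory -> peripheral */
    dma.DMA_BufferSize         = sizeof(tx_buf);
    dma.DMA_PeripheralInc      = DMA_PeripheralInc_Disable;
    dma.DMA_MemoryInc          = DMA_MemoryInc_Enable;
    dma.DMA_PeripheralDataSize = DMA_PeripheralDataSize_Byte;
    dma.DMA_MemoryDataSize     = DMA_MemoryDataSize_Byte;
    dma.DMA_Mode               = DMA_Mode_Normal;
    dma.DMA_Priority           = DMA_Priority_Medium;
    dma.DMA_M2M                = DMA_M2M_Disable;
    DMA_Init(DMA1_Channel4, &dma);

    /* 3. Let the USART issue DMA TX requests */
    USART_DMACmd(USART1, USART_DMAReq_Tx, ENABLE);

    /* 4. Enable the channel, which starts the transfer */
    DMA_Cmd(DMA1_Channel4, ENABLE);

    /* 5. Poll for the transfer-complete flag of DMA1 channel 4 */
    while (DMA_GetFlagStatus(DMA1_FLAG_TC4) == RESET)
        ;

    /* 6. Remaining data count (0 once the transfer is done) */
    (void)DMA_GetCurrDataCounter(DMA1_Channel4);
}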

What is DMA mapping and DMA engine in context of linux kernel?

Submitted by 匆匆过客 on 2019-12-03 13:24:30
What is DMA mapping and DMA engine in the context of the Linux kernel? When can the DMA mapping API and the DMA engine API be used in a Linux device driver? Any real Linux device driver example as a reference would be great. Answer by Punit Vara: What is DMA mapping and DMA engine in the context of the Linux kernel? The kernel normally uses virtual addresses. Functions like kmalloc() and vmalloc() return virtual addresses, which can be stored in a void *. The virtual memory system converts these addresses to physical addresses. These physical addresses are not actually useful to drivers. Drivers must use ioremap() to map the space and
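A minimal sketch of the ioremap() step the answer breaks off at, mapping a PCI BAR so the driver can reach the device registers; the BAR index and register offset are illustrative assumptions:

/* Sketch: map BAR0 of a PCI device and touch a (hypothetical) register. */
#include <linux/io.h>
#include <linux/pci.h>

static void __iomem *regs;

static int demo_map_bar(struct pci_dev *pdev)
{
    resource_size_t start = pci_resource_start(pdev, 0);  /* BAR0 */
    resource_size_t len   = pci_resource_len(pdev, 0);

    regs = ioremap(start, len);      /* physical bus address -> kernel virtual */
    if (!regs)
        return -ENOMEM;

    /* the mapped registers must then be accessed with readl()/writel() */
    writel(0x1, regs + 0x04);        /* hypothetical control register */
    return 0;
}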

renesas ravb网卡驱动实现分析(linux uboot xvisor)

Submitted by 易管家 on 2019-12-03 09:52:46
The net_device structure is considerably simplified compared to Linux; its layout and the meaning of its fields are as follows:

struct net_device {
    char name[MAX_NETDEV_NAME_LEN];              // name of the network device
    struct vmm_device *dev;                      // ???
    const struct net_device_ops *netdev_ops;     // upper-layer ops interface
    const struct ethtool_ops *ethtool_ops;       // optional ops interface
    unsigned int state;                          // state of the network interface
    unsigned int link_state;                     // link state
    void *priv;                                  /* Driver specific private data */
    void *nsw_priv;                              /* VMM virtual packet switching layer specific private data */
    void *net_priv;                              /* VMM specific private data - use case is currently undefined */
    unsigned char dev_addr[MAX_NDEV_HW_ADDRESS]; // hardware (MAC) address
    unsigned int hw_addr_len;                    // hardware address length
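For comparison with the simplified Xvisor structure above, in mainline Linux the netdev_ops field points at a struct net_device_ops hook table that the driver fills in. A minimal sketch, with hypothetical driver function names (not taken from the ravb code):

/* Sketch: the smallest useful Linux net_device_ops table. */
#include <linux/netdevice.h>

static int demo_open(struct net_device *ndev)
{
    netif_start_queue(ndev);        /* allow the stack to hand us packets */
    return 0;
}

static int demo_stop(struct net_device *ndev)
{
    netif_stop_queue(ndev);
    return 0;
}

static netdev_tx_t demo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
    /* a real driver would map skb->data for DMA and kick the hardware */
    dev_kfree_skb(skb);
    return NETDEV_TX_OK;
}

static const struct net_device_ops demo_netdev_ops = {
    .ndo_open       = demo_open,
    .ndo_stop       = demo_stop,
    .ndo_start_xmit = demo_start_xmit,
};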

Direct memory access DMA - how does it work?

Submitted by 假如想象 on 2019-12-03 07:14:24
Question: I read that if DMA is available, the processor can route long read or write requests for disk blocks to the DMA controller and concentrate on other work. But the DMA-to-memory data/control channel is busy during this transfer. What else can the processor do during this time? Answer 1: First of all, DMA (per se) is almost entirely obsolete. As originally defined, DMA controllers depended on the fact that the bus had separate lines to assert for memory read/write and I/O read/write. The DMA controller took advantage

Linux driver DMA transfer to a PCIe card with PC as master

Submitted by 馋奶兔 on 2019-12-03 07:07:36
I am working on a DMA routine to transfer data from the PC to an FPGA on a PCIe card. I read DMA-API.txt and LDD3 ch. 15 for details. However, I could not figure out how to do a DMA transfer from the PC to a consistent block of iomem on the PCIe card. The dad sample driver for PCI in LDD3 maps a buffer and then tells the card to do the DMA transfer, but I need the PC to do this. What I have already found out:
Request bus mastering:
pci_set_master(pdev);
Set the DMA mask:
if (dma_set_mask(&(pdev->dev), DMA_BIT_MASK(32))) {
    dev_err(&pdev->dev, "No suitable DMA available.\n");
    goto cleanup;
}
Request a DMA channel if
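One common arrangement for this kind of transfer is to let the card fetch the data from a coherent host buffer as bus master; a minimal sketch combining the steps listed above is below. The register offsets, buffer size, and function names are assumptions for illustration, not from the original driver:

/* Sketch: give a bus-mastering PCIe card a coherent host buffer to read. */
#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/string.h>

#define DMA_BUF_SIZE   4096          /* illustrative size */
#define REG_DMA_ADDR   0x00          /* hypothetical FPGA registers */
#define REG_DMA_LEN    0x04
#define REG_DMA_START  0x08

static int demo_dma_setup(struct pci_dev *pdev, void __iomem *bar)
{
    void *cpu_addr;
    dma_addr_t bus_addr;

    pci_set_master(pdev);                                /* allow the card to master the bus */

    if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
        dev_err(&pdev->dev, "No suitable DMA available\n");
        return -EIO;
    }

    /* coherent buffer in host RAM; bus_addr is what the FPGA must be given */
    cpu_addr = dma_alloc_coherent(&pdev->dev, DMA_BUF_SIZE, &bus_addr, GFP_KERNEL);
    if (!cpu_addr)
        return -ENOMEM;

    memset(cpu_addr, 0xab, DMA_BUF_SIZE);                /* data to push to the card */

    iowrite32(lower_32_bits(bus_addr), bar + REG_DMA_ADDR);
    iowrite32(DMA_BUF_SIZE,            bar + REG_DMA_LEN);
    iowrite32(1,                       bar + REG_DMA_START); /* card fetches the buffer itself */

    /* ... wait for a completion interrupt, then ... */
    dma_free_coherent(&pdev->dev, DMA_BUF_SIZE, cpu_addr, bus_addr);
    return 0;
}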

DMA transfer RAM-to-RAM

Submitted by 拈花ヽ惹草 on 2019-12-03 05:08:57
Question: A friend of mine told me that on the x86 architecture the DMA controller can't transfer between two different RAM locations; it can only transfer between RAM and a peripheral (such as the PCI bus). Is this true? Because AFAIK a DMA controller should be able to transfer between arbitrary devices that sit on the bus and have an address. In particular, I see no problem if both the source and destination addresses belong to the same physical device. Answer 1: ISA (remember? ;-) DMA chips certainly have a Fetch-and-Deposit transfer

What happens after a packet is captured?

Submitted by 风流意气都作罢 on 2019-12-03 04:34:57
Question: I've been reading about what happens after packets are captured by NICs, and the more I read, the more confused I am. First, I've read that traditionally, after a packet is captured by the NIC, it gets copied to a block of memory in kernel space and then to user space for whatever application then works on the packet data. Then I read about DMA, where the NIC copies the packet directly into memory, bypassing the CPU. So is the NIC -> kernel memory -> user space memory flow still

Linux kernel device driver to DMA into kernel space

Submitted by 余生长醉 on 2019-12-03 03:54:28
LDD3 (p. 453) demos dma_map_single using a buffer passed in as a parameter:
bus_addr = dma_map_single(&dev->pci_dev->dev, buffer, count, dev->dma_dir);
Q1: What/where does this buffer come from? kmalloc?
Q2: Why does DMA-API-HOWTO.txt state I can use raw kmalloc memory to DMA into? From http://www.mjmwired.net/kernel/Documentation/DMA-API-HOWTO.txt
L:51 If you acquired your memory via the page allocator kmalloc() then you may DMA to/from that memory using the addresses returned from those routines.
L:74 you cannot take the return of a kmap() call and DMA to/from that.
So I can pass the address
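A minimal sketch of the pattern the two quoted lines describe, with the buffer simply coming from kmalloc(); the device pointer, size, and transfer direction are illustrative assumptions:

/* Sketch: kmalloc() memory is legal to hand to dma_map_single(), a
 * vmalloc()/kmap() address is not (the L:74 restriction quoted above). */
#include <linux/dma-mapping.h>
#include <linux/slab.h>

static int demo_map(struct device *dev, size_t count)
{
    void *buffer = kmalloc(count, GFP_KERNEL);   /* DMA-able kernel memory */
    dma_addr_t bus_addr;

    if (!buffer)
        return -ENOMEM;

    bus_addr = dma_map_single(dev, buffer, count, DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, bus_addr)) {
        kfree(buffer);
        return -EIO;
    }

    /* hand bus_addr to the device, wait for the transfer to complete */

    dma_unmap_single(dev, bus_addr, count, DMA_FROM_DEVICE);
    kfree(buffer);
    return 0;
}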

device tree overlay phandle

Submitted by Anonymous (unverified) on 2019-12-03 01:36:02
Question: I'm working on a Cyclone V SoC FPGA from Altera with a dual Cortex-A9 processor. The embedded system (linux-socfpga 4.16) is built with Buildroot-2018.05. I use a "top" device tree at boot time for the processor components and a device tree overlay to configure the FPGA part and load the associated drivers. The overlay will be attached to the base_fpga_region of the top DT.
top device tree:
/dts-v1/;
/ {
    model = "MY_PROJECT"; /* appended from boardinfo */
    compatible = "altr,socfpga-cyclone5", "altr,socfpga"; /* appended from