dma

Linux dmaengine programming

Submitted by て烟熏妆下的殇ゞ on 2019-11-29 00:01:24
Board: A33, running linux-3.4.39. Host: Ubuntu 14.04.

DMA stands for Direct Memory Access: as the name suggests, memory is accessed directly, bypassing the CPU. Compared with the CPU, memory and peripherals are very slow, so moving data between memory and memory (or between memory and a device) wastes a great deal of CPU time and can keep the CPU from handling real-time events promptly. Engineers therefore designed a device dedicated to moving data, the DMA controller, to assist the CPU. A DMA transfer can be memory-to-memory, memory-to-peripheral, or peripheral-to-memory; the code here uses the DMA driver for a memory-to-memory transfer. Linux provides a DMA framework, called the DMA Engine, and kernel driver developers must follow a fixed sequence of calls to use DMA correctly.

1. Using DMA involves the following steps:
1) Allocate a DMA channel: dma_request_channel()
2) Set controller-specific parameters: none (for memcpy)
3) Get a transfer descriptor: device_prep_dma_memcpy()
4) Submit the descriptor: tx_submit()
5) Issue the pending transfer: dma_async_issue_pending()

2. Testing:
1) Cross-compile into a .ko module and download it to the A33 board
2) Load the module
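The five steps above can be sketched as a minimal kernel-side memcpy transfer. This is a sketch under stated assumptions, not the article's actual module: error handling is trimmed, and the `dst`/`src` bus addresses are assumed to come from somewhere like dma_alloc_coherent().

```c
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Sketch only: dst/src are bus (DMA) addresses obtained elsewhere. */
static void memcpy_demo(dma_addr_t dst, dma_addr_t src, size_t len)
{
	dma_cap_mask_t mask;
	struct dma_chan *chan;
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	/* 1) request any channel with the MEMCPY capability */
	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);
	chan = dma_request_channel(mask, NULL, NULL);

	/* 2) no controller-specific parameters for plain memcpy */

	/* 3) get a transfer descriptor */
	tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len,
						  DMA_PREP_INTERRUPT);

	/* 4) submit the descriptor to the channel's pending queue */
	cookie = tx->tx_submit(tx);

	/* 5) kick the engine; completion arrives via callback or polling */
	dma_async_issue_pending(chan);

	/* later: dma_async_is_tx_complete(chan, cookie, NULL, NULL); */
	dma_release_channel(chan);
}
```

A completion callback can be attached by setting tx->callback before submitting.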

dmaengine programming on Linux 4.0

Submitted by *爱你&永不变心* on 2019-11-29 00:01:21
Programming dmaengine under Linux 4.0 has two main parts: DMA Engine controller programming and DMA Engine API programming.

DMA Engine API programming

Slave DMA usage involves the following steps:
1. Allocate a DMA slave channel;
2. Set slave- and controller-specific parameters;
3. Get a transfer descriptor;
4. Submit the descriptor;
5. Issue pending requests and wait for the callback notification.

Each step is detailed below.

1. Allocate a DMA slave channel

Channel allocation is slightly different in the slave DMA context: a client driver typically needs a channel that comes from a particular DMA controller, and in some cases even a specific channel. The API for requesting a channel is dma_request_channel(). Its interface is:

struct dma_chan *dma_request_channel(dma_cap_mask_t mask, dma_filter_fn filter_fn, void *filter_param);

where dma_filter_fn is defined as:

typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

filter_fn is optional, but for slave and cyclic channels it is strongly recommended
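A minimal sketch of step 1 with a filter function follows. The matching rule inside my_filter is hypothetical: what filter_param means is defined by the platform, not by the dmaengine core, so a real driver would match on platform data or a request line number.

```c
#include <linux/types.h>
#include <linux/dmaengine.h>

/* Hypothetical filter: accept only the channel whose id matches param. */
static bool my_filter(struct dma_chan *chan, void *param)
{
	return chan->chan_id == (unsigned int)(uintptr_t)param;
}

static struct dma_chan *get_slave_chan(void)
{
	dma_cap_mask_t mask;

	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	/* filter_fn is optional, but strongly recommended for slave and
	 * cyclic channels, since "any" channel is rarely the right one */
	return dma_request_channel(mask, my_filter, (void *)(uintptr_t)2);
}
```

Step 2 (slave-specific parameters) is then done with dmaengine_slave_config() on the returned channel.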

Learning Linux bus drivers along with a beginner: DMA transfers

Submitted by 跟風遠走 on 2019-11-29 00:01:11
How DMA works here: for DMA on an embedded SoC, a DMA transfer simply replaces the CPU's writes to a data register. Take an I2C device: both sending and receiving go through a data register, say I2C_DATA. With the CPU you would transfer via writel(data, I2C_DATA); with DMA you configure the length of the buffer to transfer, use the buffer's address as the source, and I2C_DATA as the destination.

Note that the CPU works with virtual addresses, while DMA transfers physical addresses: a DMA transfer is just the DMA controller moving data between two physical addresses.

Under Linux, calling the functions below is enough to drive an external DMA. A brief explanation follows; it covers the transmit side, but receive works the same way with the configuration reversed.

1. Initialize the DMA

dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
/* 1. Init rx channel */
dws->rxchan = dma_request_channel(mask, dma_chan_filter, params);

This requests a DMA channel. dma_chan_filter mainly looks up the DMA request line of the device you are transferring to, which is filled in at registration time; its true/false return decides which of the DMA slaves already registered on the bus is matched.

buf = kmalloc(DMA
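The memory-to-register transfer described above can be sketched as follows. The register address I2C_DATA_PHYS and the channel are hypothetical placeholders; the real values come from the SoC datasheet and platform data, and error handling is trimmed.

```c
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

#define I2C_DATA_PHYS 0x12345678	/* hypothetical data-register address */

static int i2c_dma_tx(struct device *dev, struct dma_chan *txchan,
		      void *buf, size_t len)
{
	struct dma_slave_config cfg = {
		/* destination is the peripheral's data register, a
		 * physical (bus) address, written one byte at a time */
		.direction	= DMA_MEM_TO_DEV,
		.dst_addr	= I2C_DATA_PHYS,
		.dst_addr_width	= DMA_SLAVE_BUSWIDTH_1_BYTE,
	};
	struct dma_async_tx_descriptor *tx;
	dma_addr_t dma_buf;

	dmaengine_slave_config(txchan, &cfg);

	/* the CPU sees buf as a virtual address; the engine needs a
	 * bus address, so create a streaming mapping */
	dma_buf = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	tx = dmaengine_prep_slave_single(txchan, dma_buf, len,
					 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	dmaengine_submit(tx);
	dma_async_issue_pending(txchan);
	return 0;
}
```

For receive, the configuration is mirrored: src_addr points at the register and the direction becomes DMA_DEV_TO_MEM.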

The Linux platform driver model

Submitted by 帅比萌擦擦* on 2019-11-28 23:14:54
/************************************************************************************
 * These are my personal study notes; corrections are welcome.
 * References:
 *         http://www.cnblogs.com/xiaojiang1025/p/6367061.html
 *         http://www.cnblogs.com/xiaojiang1025/p/6367910.html
 *         http://www.cnblogs.com/xiaojiang1025/p/6369065.html
 *         https://www.cnblogs.com/lifexy/p/7569371.html
 *         https://www.cnblogs.com/biaohc/p/6667529.html
 ************************************************************************************/

1. The platform bus

1.1 Overview

In the device driver model from Linux 2.6 onward, you deal with three entities: the bus, the device, and the driver; the bus binds devices to drivers
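The bus/device/driver relationship can be illustrated with a minimal platform driver skeleton. The name "demo-dev" is a placeholder; binding happens when a platform_device with a matching name is registered.

```c
#include <linux/module.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
	/* called when the bus matches this driver with a device */
	dev_info(&pdev->dev, "bound\n");
	return 0;
}

static int demo_remove(struct platform_device *pdev)
{
	return 0;
}

static struct platform_driver demo_driver = {
	.probe	= demo_probe,
	.remove	= demo_remove,
	.driver	= {
		.name = "demo-dev",  /* matched against platform_device.name */
	},
};
module_platform_driver(demo_driver);

MODULE_LICENSE("GPL");
```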

DMA vs. interrupt-driven I/O

Submitted by 冷暖自知 on 2019-11-28 17:40:06
I'm a little unclear on the differences between DMA and interrupt-driven I/O (currently reading Operating System Concepts, 7th ed). Specifically, I'm not sure when the interrupts occur in either case, and at what points in each case the CPU is free to do other work. Things I've been reading, but can't necessarily reconcile:

Interrupt-driven:
- Controller initialized via driver
- Controller examines registers loaded by the driver in order to decide the action
- Data transfer from/to the peripheral and the controller's buffer ensues
- Controller issues an interrupt when (on each byte read? on each word read? when the buffer

Why mmap() is faster than sequential IO? [duplicate]

Submitted by 橙三吉。 on 2019-11-28 15:59:05
Possible Duplicate: mmap() vs. reading blocks

I heard (read it on the internet somewhere) that mmap() is faster than sequential IO. Is this correct? If yes, then why is it faster?

- mmap() is not reading sequentially.
- mmap() has to fetch from the disk itself, same as read() does.
- The mapped area is not sequential - so no DMA (?).

So mmap() should actually be slower than read() from a file? Which of my assumptions above are wrong?

Answer (Tony Delroy): "I heard (read it on the internet somewhere) that mmap() is faster than sequential IO. Is this correct? If yes then why it is faster?" It can be - there are pros

Linux kernel device driver to DMA from a device into user-space memory

Submitted by 本小妞迷上赌 on 2019-11-28 15:32:33
I want to get data from a DMA-enabled PCIe hardware device into user space as quickly as possible.

Q: How do I combine "direct I/O to user-space" with/and/via a DMA transfer?

Reading through LDD3, it seems that I need to perform a few different types of IO operations!? dma_alloc_coherent gives me the physical address that I can pass to the hardware device, but I would need to set up get_user_pages and perform a copy_to_user-type call when the transfer completes. This seems a waste: asking the device to DMA into kernel memory (acting as a buffer), then transferring it again to user space. LDD3
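The zero-copy alternative the question is after, pinning the user buffer and mapping its pages for the device, can be sketched with the 3.x-era APIs. This is a sketch under assumptions: the get_user_pages() signature shown is the old eight-argument form, error paths are trimmed, and the first/last scatterlist entries would need offset/length trimming in real code.

```c
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/* Pin a user buffer and hand its pages to the device, avoiding the
 * bounce through a kernel buffer. */
static int map_user_buf(struct device *dev, unsigned long uaddr, size_t len,
			struct page **pages, struct scatterlist *sg)
{
	int npages = DIV_ROUND_UP(len + (uaddr & ~PAGE_MASK), PAGE_SIZE);
	int got, i;

	down_read(&current->mm->mmap_sem);
	got = get_user_pages(current, current->mm, uaddr, npages,
			     1 /* write */, 0 /* force */, pages, NULL);
	up_read(&current->mm->mmap_sem);

	sg_init_table(sg, got);
	for (i = 0; i < got; i++)
		sg_set_page(&sg[i], pages[i], PAGE_SIZE, 0);
	/* first/last entries may need offset/length fixups; omitted */

	/* the resulting bus addresses are what the device DMAs into */
	return dma_map_sg(dev, sg, got, DMA_FROM_DEVICE);
}
```

After completion, the pages are unmapped with dma_unmap_sg(), marked dirty, and released; the data lands directly in the user's buffer with no copy_to_user.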

Network packet send/receive flow (3): the e1000 NIC and DMA

Submitted by 扶醉桌前 on 2019-11-28 01:00:35
Reposted from https://www.cnblogs.com/CasonChan/p/5166239.html

1. Hardware layout

Every NIC (MAC) has its own dedicated DMA engine, such as the TSEC in the figure above and the e1000 NIC (Intel 82546). The red lines in the figure are the Ethernet data flow; for the DMA to reach DDR it needs the help of other blocks, such as the TSEC and the PCI controller. Ethernet data flows between TSEC <-> DDR and PCI_controller <-> DDR without the CPU core getting involved; only when a data flow finishes (receive complete, transmit complete) does the DMA engine notify the CPU core via an external interrupt.

2. The DMA engine

Above is the DMA engine's block diagram. Taking receive as an example:
1. Reserve a contiguous region of system memory for the DMA to hold the BD array (coherent DMA memory). BDs are consumed by the DMA engine, so the BD layout differs between devices, but most have three members: status, length, and a pointer.
2. Initialize the BD array: status = E (empty), length = 0. Allocate further blocks of system memory, which need not be contiguous, to hold the Ethernet packets, and store those blocks' bus addresses in buf (DMA mapping).
3. When the MAC receives the Ethernet data stream, it lands in the Rx FIFO.
4. Once a complete Ethernet packet has been received, the DMA engine does the following in order: fetch bd
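Steps 1 and 2 above can be sketched with a generic buffer descriptor. Real BD layouts are device-specific (the flag value below is hypothetical), but most carry exactly the status/length/pointer trio the text describes.

```c
#include <linux/types.h>

/* Generic BD sketch; a real NIC defines the exact bit layout. */
struct bd {
	u16		status;	/* E (empty) flag = owned by the DMA engine */
	u16		length;	/* bytes received into buf */
	dma_addr_t	buf;	/* bus address of the packet buffer */
};

#define BD_EMPTY 0x8000		/* hypothetical "empty" status bit */

/* Step 2: mark every BD empty so the engine owns it, and point each
 * one at a streaming-mapped packet buffer. */
static void init_rx_ring(struct bd *ring, int n, dma_addr_t *bufs)
{
	int i;

	for (i = 0; i < n; i++) {
		ring[i].status = BD_EMPTY;
		ring[i].length = 0;
		ring[i].buf = bufs[i];
	}
}
```

On receive completion the engine clears the empty flag and fills in length; the driver processes the packet and hands the BD back by setting the flag again.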

2019-8-10: STM32F407 ADC1 1M sampling-frequency settings

Submitted by て烟熏妆下的殇ゞ on 2019-11-27 16:17:23
GPIO_InitTypeDef GPIO_InitStructure;
ADC_CommonInitTypeDef ADC_CommonInitStructure;
ADC_InitTypeDef ADC_InitStructure;

ADC_DeInit();
RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOA, ENABLE);  // enable the GPIOA clock
RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1, ENABLE);   // enable the ADC1 clock

// first initialize the ADC1 channel 5 I/O pin
GPIO_InitStructure.GPIO_Pin = GPIO_Pin_5;              // PA5, channel 5
GPIO_InitStructure.GPIO_Mode = GPIO_Mode_AN;           // analog input
GPIO_InitStructure.GPIO_PuPd = GPIO_PuPd_NOPULL;       // no pull-up/pull-down
GPIO_Init(GPIOA, &GPIO_InitStructure);                 // initialize

RCC_APB2PeriphResetCmd(RCC_APB2Periph_ADC1, ENABLE);   // reset ADC1
RCC_APB2PeriphResetCmd
