dma

How to transfer data via DMA from RAM to RAM?

Submitted by ∥☆過路亽.° on 2019-12-01 11:06:27
I want to write a kernel module that can transfer data via DMA from RAM to RAM. There are some posts that discuss this, but I don't really get it: some say it is possible, others say it isn't. If I understood LDD3 correctly, RAM-to-RAM copying is not possible with the Linux DMA API, but drivers/dma/dmaengine.c provides a DMA_MEMCPY flag as a "DMA Transfer Type", so there should be a way. Is this correct? Can I use a DMA engine to transfer data from one RAM address to another? If it is hardware dependent, how can I determine whether my system supports DMA memcpy? As you correctly pointed out, DMA
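As a rough illustration of the dmaengine path mentioned above, a minimal sketch could look like the following, assuming the platform actually registers a channel with the DMA_MEMCPY capability and that src/dst are already DMA addresses (e.g. from dma_map_single). Names and error handling are simplified; this is not a complete driver.

#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>

/* Sketch: copy 'len' bytes between two DMA-mapped addresses using any
 * DMA_MEMCPY-capable channel. Error handling is trimmed for brevity. */
static int ram_to_ram_copy(dma_addr_t dst, dma_addr_t src, size_t len)
{
        dma_cap_mask_t mask;
        struct dma_chan *chan;
        struct dma_async_tx_descriptor *tx;
        dma_cookie_t cookie;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);

        /* Ask the dmaengine core for any memcpy-capable channel; a NULL
         * return is a simple runtime answer to "does my system support
         * DMA memcpy?". */
        chan = dma_request_channel(mask, NULL, NULL);
        if (!chan)
                return -ENODEV;

        tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
                                       DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
        if (!tx) {
                dma_release_channel(chan);
                return -EIO;
        }

        cookie = dmaengine_submit(tx);
        dma_async_issue_pending(chan);

        /* Busy-wait for completion; a real driver would use a callback
         * and a struct completion instead. */
        dma_sync_wait(chan, cookie);

        dma_release_channel(chan);
        return 0;
}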

Linux on arm64: sendto causes “Unhandled fault: alignment fault (0x96000021)” when sending data from mmapped coherent DMA buffer

Submitted by 故事扮演 on 2019-12-01 09:42:44
I'm building a data acquisition system based on an UltraScale+ FPGA with an arm64 CPU. The data are transferred to RAM via DMA. The DMA buffers in the driver are reserved as follows: virt_buf[i] = dma_zalloc_coherent(&pdev->dev, BUF_SIZE, &phys_buf[i], GFP_KERNEL); In the driver's mmap function, the mapping to user space is done in the following way: #ifdef ARCH_HAS_DMA_MMAP_COHERENT printk(KERN_INFO "Mapping with dma_map_coherent DMA buffer at phys: %p virt %p\n", phys_buf[off], virt
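For reference, the usual way to export a coherent buffer to user space on arm64 is to let the DMA layer build the mapping rather than remapping pages by hand, which avoids mismatched memory attributes (a common source of alignment faults). A rough sketch follows; virt_buf/phys_buf/BUF_SIZE follow the question, while struct my_dev and the handler name are hypothetical.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>

/* Sketch of a driver mmap handler that maps one coherent buffer
 * (allocated with dma_zalloc_coherent) into user space. */
static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
        struct my_dev *mdev = filp->private_data;   /* hypothetical driver state */
        size_t size = vma->vm_end - vma->vm_start;

        if (size > BUF_SIZE)
                return -EINVAL;

        /* dma_mmap_coherent() reuses memory attributes consistent with the
         * coherent allocation, instead of whatever remap_pfn_range() with a
         * hand-built pgprot would give. */
        return dma_mmap_coherent(mdev->dev, vma,
                                 mdev->virt_buf[0], mdev->phys_buf[0],
                                 size);
}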

Using DMA to move and store ADC sample data directly

Submitted by 回眸只為那壹抹淺笑 on 2019-12-01 08:02:06
I experimented with the STM32's ADC sampling and used DMA to move the sampled data straight into memory, so the CPU does not have to take part in the transfer at all. I studied quite a few reference examples to get a rough picture of the ADC and DMA configuration, and then ran experiments directly on a development board to verify the relevant operations and make sure I understood each part of the setup. I use three ADC channels here: one external potentiometer input, plus the internal temperature sensor and Vrefint, so that together they form a continuous sampling sequence for testing multi-channel ADC auto-scan. The ADC distinguishes regular conversions from injected conversions: regular conversions simply run in the configured sequence, while injected conversions can "jump the queue" and be converted ahead of that sequence. Initialization code (DMA setup sketched after this excerpt):
// PC0 for analog sample
static void Protect_ClkInit(void)
{
    RCC_APB2PeriphClockCmd(RCC_APB2Periph_ADC1 | RCC_APB2Periph_GPIOC, ENABLE);
    RCC_ADCCLKConfig(RCC_PCLK2_Div6);
    RCC_AHBPeriphClockCmd(RCC_AHBPeriph_DMA1, ENABLE);
}
static void Protect_GPIOInit(void)
{
    GPIO_InitTypeDef GPIO_InitStructure;
    /* GPIO PhaseA_H initialization
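A sketch of the matching DMA configuration for this kind of multi-channel scan, in Standard Peripheral Library style; the buffer name, channel count and priority are illustrative and not taken from the post.

#include "stm32f10x.h"
#include "stm32f10x_adc.h"
#include "stm32f10x_dma.h"

#define ADC_CH_NUM  3
static volatile uint16_t adc_buf[ADC_CH_NUM];   /* filled by DMA, no CPU copy needed */

static void Protect_DMAInit(void)
{
    DMA_InitTypeDef DMA_InitStructure;

    DMA_DeInit(DMA1_Channel1);                       /* ADC1 is wired to DMA1 channel 1 */
    DMA_InitStructure.DMA_PeripheralBaseAddr = (uint32_t)&ADC1->DR;
    DMA_InitStructure.DMA_MemoryBaseAddr     = (uint32_t)adc_buf;
    DMA_InitStructure.DMA_DIR                = DMA_DIR_PeripheralSRC;          /* peripheral -> memory */
    DMA_InitStructure.DMA_BufferSize         = ADC_CH_NUM;
    DMA_InitStructure.DMA_PeripheralInc      = DMA_PeripheralInc_Disable;
    DMA_InitStructure.DMA_MemoryInc          = DMA_MemoryInc_Enable;
    DMA_InitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_HalfWord;
    DMA_InitStructure.DMA_MemoryDataSize     = DMA_MemoryDataSize_HalfWord;
    DMA_InitStructure.DMA_Mode               = DMA_Mode_Circular;              /* keep refreshing the buffer */
    DMA_InitStructure.DMA_Priority           = DMA_Priority_High;
    DMA_InitStructure.DMA_M2M                = DMA_M2M_Disable;
    DMA_Init(DMA1_Channel1, &DMA_InitStructure);
    DMA_Cmd(DMA1_Channel1, ENABLE);

    ADC_DMACmd(ADC1, ENABLE);                        /* let the ADC issue DMA requests */
}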

How do I allocate a DMA buffer backed by 1GB HugePages in a linux kernel module?

Submitted by 主宰稳场 on 2019-11-30 19:32:58
I'm trying to allocate a DMA buffer for an HPC workload. It requires 64GB of buffer space. In between computation, some data is offloaded to a PCIe card. Rather than copy data into a bunch of dinky 4MB buffers given by pci_alloc_consistent, I would like to just create 64 1GB buffers, backed by 1GB HugePages. Some background info:
kernel version: CentOS 6.4 / 2.6.32-358.el6.x86_64
kernel boot options: hugepagesz=1g hugepages=64 default_hugepagesz=1g
relevant portion of /proc/meminfo:
AnonHugePages: 0 kB
HugePages_Total: 64
HugePages_Free: 64
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize:
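One common approach (not necessarily what the asker ended up doing) is to mmap the hugetlbfs-backed region in user space and have the kernel module pin it and map it for DMA as a scatterlist. A sketch of the pinning side follows; the function name and parameters are hypothetical, and the exact get_user_pages*() signature varies between kernel versions.

#include <linux/mm.h>
#include <linux/pci.h>
#include <linux/scatterlist.h>

/* Sketch: pin a user-space hugepage-backed buffer starting at 'uaddr'
 * (nr_pages in PAGE_SIZE units) and map it for PCI DMA.  The caller is
 * assumed to have allocated 'pages' and 'sgl' arrays of nr_pages entries. */
static int pin_huge_buffer(struct pci_dev *pdev, unsigned long uaddr,
                           unsigned long nr_pages, struct page **pages,
                           struct scatterlist *sgl)
{
        int pinned, i, nents;

        /* Third argument is the write flag on 2.6.32-era kernels
         * (gup_flags on newer ones). */
        pinned = get_user_pages_fast(uaddr, nr_pages, 1, pages);
        if (pinned < 0)
                return pinned;

        sg_init_table(sgl, pinned);
        for (i = 0; i < pinned; i++)
                sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

        /* The DMA/IOMMU layer may merge the physically contiguous 4K
         * entries of each 1GB hugepage into far fewer DMA segments. */
        nents = pci_map_sg(pdev, sgl, pinned, PCI_DMA_BIDIRECTIONAL);
        return nents;
}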

STM32 study notes: DMA

Submitted by 邮差的信 on 2019-11-30 18:17:31
STM32 study notes: DMA. Goal: use DMA to send data to USART1 while also lighting an LED, to become familiar with the DMA configuration process. Header files:
#include "stm32f10x_dma.h"
#include "stm32f10x_gpio.h"
#include "stm32f10x_rcc.h"
#include "stm32f10x_usart.h"
Structure definition (a filled-in configuration is sketched after this excerpt):
typedef struct
{
    uint32_t DMA_PeripheralBaseAddr; // DMA transfer destination address (peripheral)
    uint32_t DMA_MemoryBaseAddr;     // DMA transfer source address (memory)
    uint32_t DMA_DIR;                // DMA transfer direction
    uint32_t DMA_BufferSize;         // DMA transfer size
    uint32_t DMA_PeripheralInc;      // whether the peripheral address auto-increments
    uint32_t DMA_MemoryInc;          // whether the memory address auto-increments
    uint32_t DMA_PeripheralDataSize; // peripheral data unit size
    uint32_t DMA_MemoryDataSize;     // memory data unit size, must match the peripheral one
    uint32_t DMA_Mode;               // DMA mode, circular or one-shot
    uint32_t DMA_Priority;           // DMA priority; when several
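Assuming the usual STM32F1 mapping of USART1_TX to DMA1 channel 4, a configuration along the following lines fills the structure above and starts a memory-to-peripheral transfer; the buffer contents and priority are illustrative.

#include "stm32f10x.h"
#include "stm32f10x_dma.h"
#include "stm32f10x_usart.h"

static uint8_t tx_buf[] = "hello from DMA\r\n";

static void USART1_TxDMAConfig(void)
{
    DMA_InitTypeDef DMA_InitStructure;

    DMA_DeInit(DMA1_Channel4);                                         /* USART1_TX request line */
    DMA_InitStructure.DMA_PeripheralBaseAddr = (uint32_t)&USART1->DR;  /* destination: USART data register */
    DMA_InitStructure.DMA_MemoryBaseAddr     = (uint32_t)tx_buf;       /* source: RAM buffer */
    DMA_InitStructure.DMA_DIR                = DMA_DIR_PeripheralDST;  /* memory -> peripheral */
    DMA_InitStructure.DMA_BufferSize         = sizeof(tx_buf) - 1;
    DMA_InitStructure.DMA_PeripheralInc      = DMA_PeripheralInc_Disable;
    DMA_InitStructure.DMA_MemoryInc          = DMA_MemoryInc_Enable;
    DMA_InitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_Byte;
    DMA_InitStructure.DMA_MemoryDataSize     = DMA_MemoryDataSize_Byte;
    DMA_InitStructure.DMA_Mode               = DMA_Mode_Normal;        /* one-shot transfer */
    DMA_InitStructure.DMA_Priority           = DMA_Priority_Medium;
    DMA_InitStructure.DMA_M2M                = DMA_M2M_Disable;
    DMA_Init(DMA1_Channel4, &DMA_InitStructure);

    USART_DMACmd(USART1, USART_DMAReq_Tx, ENABLE);                     /* let USART1 drive the DMA requests */
    DMA_Cmd(DMA1_Channel4, ENABLE);                                    /* start the transfer */
}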

STM32F4 study notes 7: USART Part 2

Submitted by 天涯浪子 on 2019-11-30 18:16:23
Hardware flow control. The serial data flow between two devices can be controlled using the nCTS input and the nRTS output; the figure shows how two devices are connected in this mode. RTS and CTS flow control are enabled by writing 1 to the RTSE and CTSE bits of the USART_CR3 register, respectively. RTS flow control: if RTS flow control is enabled (RTSE=1), nRTS is asserted (driven low) whenever the USART receiver is ready to receive new data. When the receive register is full, nRTS is de-asserted, indicating that transmission is expected to stop after the current frame. The figure below shows an example of communication with RTS flow control enabled. CTS flow control: if CTS flow control is enabled (CTSE=1), the transmitter checks nCTS before sending the next frame. If nCTS is asserted (driven low), the next data is sent (provided data is ready to be sent, i.e. TXE=0); otherwise no transmission takes place. If nCTS is de-asserted during a transmission, the transmitter stops after the current transmission completes. When CTSE=1, the CTSIF status flag is set by hardware whenever nCTS toggles; it indicates whether the receiver is ready for communication. An interrupt is generated if the CTSIE bit in the USART_CR3 register is set. The figure below shows an example of communication with CTS flow control enabled. Note the special behaviour of the stop frame: when CTS flow control is enabled, the transmitter does not check the nCTS input state while sending the stop signal.
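In the Standard Peripheral Library the RTSE/CTSE bits are set through the USART_HardwareFlowControl member of the init structure; a minimal sketch with illustrative settings (GPIO alternate-function setup for the nRTS/nCTS pins is omitted):

#include "stm32f4xx.h"
#include "stm32f4xx_usart.h"

/* Sketch: enable both RTS and CTS hardware flow control on USART1. */
static void USART1_FlowControlInit(void)
{
    USART_InitTypeDef USART_InitStructure;

    USART_InitStructure.USART_BaudRate            = 115200;
    USART_InitStructure.USART_WordLength          = USART_WordLength_8b;
    USART_InitStructure.USART_StopBits            = USART_StopBits_1;
    USART_InitStructure.USART_Parity              = USART_Parity_No;
    USART_InitStructure.USART_Mode                = USART_Mode_Rx | USART_Mode_Tx;
    /* Sets both RTSE and CTSE in USART_CR3 */
    USART_InitStructure.USART_HardwareFlowControl = USART_HardwareFlowControl_RTS_CTS;
    USART_Init(USART1, &USART_InitStructure);

    USART_ITConfig(USART1, USART_IT_CTS, ENABLE);   /* optional: interrupt on nCTS change (CTSIE) */
    USART_Cmd(USART1, ENABLE);
}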

Analysis of UART DMA in RT-Thread

Submitted by 丶灬走出姿态 on 2019-11-30 11:49:35
This post analyses the implementation of UART DMA mode in RT-Thread, as a reference when adding UART support for a new processor. Background: given the performance of today's chips and the capabilities of their peripherals, a UART driver that does not work in DMA/interrupt mode is, in my opinion, basically unacceptable in a real project. Unfortunately, among the implementations currently supported by rt-thread there is essentially no UART DMA support, and the documentation says nothing about it either. Taking the STM32 implementation as the background, this post walks through the UART DMA implementation flow as a reference for new processor ports. Preparing for DMA reception: enabling DMA reception requires some handling when the device is opened; the entry function is rt_device_open(). Its core implementation is:
rt_err_t rt_device_open(rt_device_t dev, rt_uint16_t oflag)
{
    ......
    result = device_init(dev);
    ......
    result = device_open(dev, oflag);
    ......
}
device_init() is the rt_serial_init() function, which mainly calls configure():
static rt_err_t rt_serial_init(struct rt_device *dev)
{
    ......
    if (serial->ops->configure)
        result = serial->ops->configure(serial,
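From the application side, taking this DMA path is just a matter of passing the DMA flags when opening the device; a small sketch follows, in which the device name "uart2" and the callback body are illustrative rather than taken from the post.

#include <rtthread.h>
#include <rtdevice.h>

static rt_device_t serial;

/* Called by the serial framework once DMA has placed 'size' bytes in the RX buffer. */
static rt_err_t uart_rx_ind(rt_device_t dev, rt_size_t size)
{
    char buf[64];
    rt_size_t len = rt_device_read(dev, 0, buf,
                                   size < sizeof(buf) ? size : sizeof(buf));
    (void)len;   /* process the received bytes here */
    return RT_EOK;
}

static int uart_dma_rx_init(void)
{
    serial = rt_device_find("uart2");
    if (serial == RT_NULL)
        return -RT_ERROR;

    rt_device_set_rx_indicate(serial, uart_rx_ind);
    /* RT_DEVICE_FLAG_DMA_RX makes rt_device_open() take the DMA setup path
     * analysed above (serial->ops->configure() with the DMA flag). */
    return rt_device_open(serial, RT_DEVICE_FLAG_DMA_RX | RT_DEVICE_FLAG_DMA_TX);
}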

MT2625 SPI study notes

Submitted by 安稳与你 on 2019-11-30 10:32:21
https://blog.csdn.net/qq_38410730/article/details/80318821
https://zhuanlan.zhihu.com/p/37506796
https://wenku.baidu.com/view/cf7de1dcfd0a79563d1e7220.html
DMA: Direct Memory Access. DMA is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU).
FIFO: First In, First Out. FIFO is a method for organizing and manipulating a data buffer, where the first entry, or 'head' of the queue, is processed first.
GPIO: General Purpose Input/Output.
NVIC: Nested Vectored Interrupt Controller. NVIC is the interrupt
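To make the FIFO definition above concrete, here is a tiny illustrative ring-buffer FIFO in C; it is not taken from the MT2625 materials and the size is arbitrary.

#include <stdint.h>
#include <stdbool.h>

#define FIFO_SIZE 16                       /* must be a power of two */

typedef struct {
    uint8_t  data[FIFO_SIZE];
    uint32_t wr;                           /* write index: new entries join the back */
    uint32_t rd;                           /* read index: the 'head', processed first */
} fifo_t;

static bool fifo_put(fifo_t *f, uint8_t byte)
{
    if (f->wr - f->rd == FIFO_SIZE)        /* full */
        return false;
    f->data[f->wr++ & (FIFO_SIZE - 1)] = byte;
    return true;
}

static bool fifo_get(fifo_t *f, uint8_t *byte)
{
    if (f->wr == f->rd)                    /* empty */
        return false;
    *byte = f->data[f->rd++ & (FIFO_SIZE - 1)];
    return true;
}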

(5) Linux memory management: zone_sizes_init

Submitted by 不想你离开。 on 2019-11-30 06:28:22
Background: Read the fucking source code! -- Lu Xun. A picture is worth a thousand words. -- Gorky. Notes: kernel version 4.14; ARM64 processor, Cortex-A53, dual core; tools used: Source Insight 3.5, Visio. 1. Introduction. In "(4) Linux memory model: Sparse Memory Model" we analysed the first half of the bootmem_init function; this time we move on to its second half, which mainly revolves around the zone_sizes_init function. Recap: the bootmem_init() function looks like this:
void __init bootmem_init(void)
{
    unsigned long min, max;
    min = PFN_UP(memblock_start_of_DRAM());
    max = PFN_DOWN(memblock_end_of_DRAM());
    early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT);
    max_pfn = max_low_pfn = max;
    arm64_numa_init();
    /*
     * Sparsemem tries to allocate bootmem in memory_present(
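For orientation, the part the post goes on to analyse boils down to computing a per-zone upper PFN and handing the limits to the core mm. The following is a condensed sketch of the arm64 flow in this era of kernels, not the verbatim 4.14 source (the real function also deals with NUMA and uses an internal helper for the DMA boundary).

#include <linux/mm.h>
#include <linux/pfn.h>
#include <linux/sizes.h>

/* Condensed sketch: fill max_zone_pfns[] with the top PFN of each zone,
 * then let the core mm build the node/zone data structures from them. */
static void __init zone_sizes_init_sketch(unsigned long min, unsigned long max)
{
    unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

#ifdef CONFIG_ZONE_DMA
    /* ZONE_DMA holds memory reachable by 32-bit-only DMA masters:
     * everything below 4GB, or less if RAM ends earlier. */
    max_zone_pfns[ZONE_DMA] = min_t(unsigned long, max, PFN_DOWN(SZ_4G));
#endif
    max_zone_pfns[ZONE_NORMAL] = max;   /* the rest of RAM */

    free_area_init_nodes(max_zone_pfns);
}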