embedded

malloc in an embedded system without an operating system

Submitted by 拜拜、爱过 on 2019-12-03 06:23:53
This question is about allocating memory with malloc. We generally say that malloc allocates memory from the heap. Now say I have a plain embedded system (no operating system), with a normal program loaded in which I call malloc. In this case, where is the memory allocated from? malloc() is a function that is usually implemented by the runtime library. You are right that if you are running on top of an operating system, malloc will sometimes (but not every time) trigger a system call that makes the OS map some memory into your program's address space. If your program runs without

Is it worth offloading FFT computation to an embedded GPU?

Submitted by て烟熏妆下的殇ゞ on 2019-12-03 06:11:01
We are considering porting an application from a dedicated digital signal processing chip to run on generic x86 hardware. The application does a lot of Fourier transforms, and from brief research, it appears that FFTs are fairly well suited to computation on a GPU rather than a CPU. For example, this page has some benchmarks with a Core 2 Quad and a GF 8800 GTX that show a 10-fold decrease in calculation time when using the GPU: http://www.cv.nrao.edu/~pdemores/gpu/ However, in our product, size constraints restrict us to small form factors such as PC104 or Mini-ITX, and thus to rather limited

What's an efficient implementation of Conway's Game of Life for low memory uses?

Submitted by 我是研究僧i on 2019-12-03 06:02:30
Question: I'm looking for a fast and memory-efficient approach to implementing Conway's Game of Life. Constraints: a 96x128 board, approximately 2 KB of RAM available, and a 52 MHz processor (see the tech specs here: http://www.getinpulse.com/features). My current naive solution, which represents each cell as a single bit in a matrix (96*128/8 = 1,536 bytes), works but is too slow. What tricks can be used to improve performance? Storing the coordinates of live cells (for example in this implementation http://dotat

How do you organize code in embedded projects?

Submitted by ℡╲_俬逩灬. on 2019-12-03 05:58:39
Question: Highly embedded (limited code and RAM size) projects pose unique challenges for code organization. I have seen quite a few projects with no organization at all. (Mostly by hardware engineers who, in my experience, are not typically concerned with non-functional aspects of code.) However, I have been trying to organize my code into:

- hardware specific (drivers, initialization)
- application specific (not likely to be reused)
- reusable, hardware independent

For each module I try to keep the
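As an illustrative sketch (the directory names are my own, not from the question), those three layers can map onto a source tree like this:

```
project/
  hal/      hardware specific: drivers, startup code, clock/pin init
  app/      application specific: control logic, state machines
  lib/      reusable, hardware independent: ring buffers, CRC, filters
  board.h   pin/peripheral mapping, so app/ and lib/ never touch registers
```

The useful property is the dependency direction: app/ may include hal/ and lib/, but lib/ includes neither, so it can be unit-tested on a PC.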

How to start off with ARM processors?

Submitted by 一曲冷凌霜 on 2019-12-03 05:55:03
Question: For a newbie, is it advisable to start directly with the datasheet and user manual of an ARM processor, or to first get an overview of the ARM world and then go ahead? Answer 1: Several good resources are described in the answers to this related question: https://stackoverflow.com/questions/270078/resources-for-learning-arm-assembly In addition, Hitex publishes "Insider's Guides" for a few different microcontrollers based on ARM processors (free, but registration required): http://www.hitex.com/index.php

PID controller integral term causing extreme instability

Submitted by 家住魔仙堡 on 2019-12-03 05:54:25
I have a PID controller running on a robot that is designed to make the robot steer onto a compass heading. The PID correction is recalculated/applied at a rate of 20 Hz. Although the PID controller works well in PD mode (i.e., with the integral term zeroed out), even the slightest amount of integral forces the output unstable, in such a way that the steering actuator is pushed to either the left or right extreme. Code:

    private static void DoPID(object o)
    {
        // Bring the LED up to signify frame start
        BoardLED.Write(true);
        // Get IMU heading
        float currentHeading = (float)RazorIMU.Yaw;
        // We just

Why is Read-Modify-Write necessary for registers on embedded systems?

Submitted by 不问归期 on 2019-12-03 05:48:15
I was reading http://embeddedgurus.com/embedded-bridge/2010/03/different-bit-types-in-different-registers/, which said: "With read/write bits, firmware sets and clears bits when needed. It typically first reads the register, modifies the desired bit, then writes the modified value back out," and I have run into that construct while maintaining some production code written by old-salt embedded guys here. I don't understand why this is necessary. When I want to set or clear a bit, I always just OR with a bitmask (or AND with its inverse). To my mind, this solves any thread-safety problems, since I assume setting (either by

SIGTRAP despite no set breakpoints; hidden hardware breakpoint?

Submitted by 你。 on 2019-12-03 05:29:16
I am debugging this piece of software for an STM32 embedded system. In one of the functions, my program keeps hitting some sort of breakpoint: SIGTRAP, Trace/breakpoint trap. However, in GDB, when I run "info breakpoints" I get "No breakpoints or watchpoints." The breakpoint actually corresponds to one I set quite some time ago, in another version of the executable. When I set that breakpoint, GDB told me "automatically using a hardware breakpoint on read-only memory" (or a similar message). I think the hardware breakpoint remains on my chip, despite having loaded a new version of the
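If this is a Cortex-M3/M4 part driven through OpenOCD, a session along these lines can confirm and clear a stale hardware breakpoint; hardware breakpoints live in the chip's Flash Patch and Breakpoint (FPB) unit and can survive loading a new ELF. The FP_CTRL address below is the standard ARMv7-M one, but verify it against your chip's reference manual:

```
(gdb) delete                     # clear everything GDB knows about
(gdb) monitor reset halt         # OpenOCD: full target reset, then halt
(gdb) set *(unsigned int *)0xE0002000 = 2
                                 # FP_CTRL: KEY bit set, ENABLE clear ->
                                 # disables all FPB comparators
(gdb) continue
```

A full power cycle of the board (not just a debugger reset) is the blunt alternative, since it clears the FPB comparators along with everything else.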

Free alternative to MPLAB (PIC development)

Submitted by 拈花ヽ惹草 on 2019-12-03 05:24:31
I started using MPLAB recently, but for someone who works with Eclipse and VS, the IDE is very limited. Do you know of any free IDE, or how to configure Eclipse or NetBeans for PIC development? Thanks all. The underlying toolchain (compiler/linker etc.) can be used from any environment, including Eclipse and Visual Studio, though Eclipse is probably the more flexible in this respect. MPLAB has a feature to export a project as a makefile that can be used with GNU make, although you may prefer to write your own makefile, or use the project management provided by Eclipse. In Visual Studio, create a
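As a rough sketch of the hand-written-makefile route, assuming Microchip's XC8 compiler is on PATH and a PIC16F887 target (the part number, flags, and output naming are illustrative; check them against the XC8 user's guide for your toolchain version):

```
# Minimal GNU makefile for a one-file XC8 project (illustrative)
CHIP = 16F887
CC   = xc8-cc
SRCS = main.c

firmware.hex: $(SRCS)
	$(CC) -mcpu=$(CHIP) -o firmware.hex $(SRCS)

clean:
	rm -f firmware.hex *.p1 *.d
```

Once a makefile like this builds from the command line, pointing an Eclipse CDT "Makefile project" or a NetBeans C project at the same directory gets you editing, navigation, and build-error parsing in the IDE of your choice.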

Does Linux malloc() behave differently on ARM vs x86?

Submitted by 守給你的承諾、 on 2019-12-03 05:09:50
There are a lot of questions about memory allocation on this site, but I couldn't find one that specifically addresses my concern. This question seems closest, and it led me to this article, so... I compared the behavior of the three demo programs it contains on a (virtual) desktop x86 Linux system and an ARM-based system. My findings are detailed here, but the quick summary is: on my desktop system, the demo3 program from the article seems to show that malloc() always lies about the amount of memory allocated, even with swap disabled. For example, it cheerfully 'allocates' 3 GB of RAM, and