Test Automation with Embedded Hardware


Question


Has anyone had success automating testing directly on embedded hardware?

Specifically, I am thinking of automating a battery of unit tests for hardware-layer modules. We need greater confidence in our hardware-layer code. A lot of our projects use interrupt-driven timers, ADCs, serial I/O, serial SPI devices (flash memory), etc.

Is this even worth the effort?

We typically target:

Processor: 8- or 16-bit microcontrollers (some DSP stuff)
Language: C (sometimes C++).


Answer 1:


Sure. In the automotive industry we use $100,000 custom-built testers for each new product to verify that the hardware and software are operating correctly.

The developers, however, also build a cheaper (sub-$1,000) tester that includes a bunch of USB I/O, A/D, PWM in/out, etc., and either use scripting on the workstation or use purpose-built HIL/SIL test software such as MxVDev.

Hardware-in-the-loop (HIL) testing is probably what you mean, and it simply involves some USB I/O hardware connected to the I/O of your device, with software on the computer running tests against it.

Whether it's worth it depends.

In the high-reliability industries (aviation, automotive, etc.) the customer specifies very extensive hardware testing, so you have to have it just to get the bid.

In the consumer industry, for non-complex projects, it's usually not worth it.

With any project where more than a few programmers are involved, though, it's really nice to have a nightly regression test run on the hardware - it's hard to simulate the hardware to the degree needed to satisfy yourself that software testing alone is enough.

The testing then shows immediately when a problem has entered the build.

Generally you perform both black-box and white-box testing. For white-box testing you have diagnostic code running on the device that allows you to spy on signals and memory in the hardware (which might just be a debugger, or might be code you wrote that reacts to messages on a bus, for instance). This lets you see what's happening internally, and even cause some things to happen, such as critical memory errors that can't be tested without introducing the error yourself.

We also run a bunch of black-box tests where the diagnostic path is ignored and only the I/O is stimulated/read.
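As an illustration of how small that white-box diagnostic path can be, here is a minimal "peek" command handler of the kind described above; the message format and the bus_receive/bus_send functions are hypothetical stand-ins for whatever bus driver the device already has:

    #include <stdint.h>

    /* Hypothetical transport hooks - stand-ins for your own bus driver. */
    extern int  bus_receive(uint8_t *buf, int maxlen);   /* bytes read, or 0 */
    extern void bus_send(const uint8_t *buf, int len);

    enum { CMD_PEEK = 0x01, REQ_LEN = 6 };   /* request: [cmd][addr:4][len:1] */

    /* Poll for a memory-read request from the tester and answer it,
     * letting the host script spy on RAM without halting the CPU. */
    void diag_poll(void)
    {
        uint8_t req[REQ_LEN];

        if (bus_receive(req, REQ_LEN) != REQ_LEN || req[0] != CMD_PEEK)
            return;

        uint32_t addr = ((uint32_t)req[1] << 24) | ((uint32_t)req[2] << 16) |
                        ((uint32_t)req[3] << 8)  |  (uint32_t)req[4];

        bus_send((const uint8_t *)(uintptr_t)addr, req[5]);
    }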

For a much cheaper setup, you can get $100 microcontroller boards with USB and/or Ethernet (such as the Atmel UC3 family) which you can connect to your device and run basic testing.

It's especially useful for product maintenance - when the project is done, store a few working boards, the tester, and a complete set of software on CD. When you need to make a modification or debug a problem, it's easy to set it all back up and work on it, knowing (after a test run) that the major functionality was not affected by your changes.

-Adam




Answer 2:


Yes. I have had success, but it is not a straightforward problem to solve. In a nutshell, here is what my team did:

  1. Defined a variety of unit tests using a home-built C unit-testing framework. Basically, just a lot of macros, most of which were named TEST_EQUAL, TEST_BITSET, TEST_BITVLR, etc. (a sketch of this style of framework and its driver follows this list).

  2. Wrote a boot code generator that took these compiled tests and orchestrated them into an execution environment. It's just a small driver that executes our normal startup routine - but instead of going into the control loop, it executes a test suite. When done, it stores the last suite to run in flash memory, then resets the CPU and runs the next suite. This provides isolation in case a suite dies. (However, you may want to disable this to make sure your modules cooperate. But that's an integration test, not a unit test.)

  3. Individual tests would log their output using the serial port. This was OK for our design because the serial port was free. You will have to find another way to store your results if all your I/O is consumed.
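For illustration, a home-grown framework in this style can be little more than a counting assertion macro plus a table of suite functions. The sketch below is a host-buildable approximation: only TEST_EQUAL and the run-one-suite-then-reset idea come from this answer; every other name is invented, and the flash bookkeeping is reduced to a comment.

    #include <stdio.h>

    static int tests_run, tests_failed;

    /* Minimal assertion macro in the spirit of TEST_EQUAL; results go out
     * over whatever channel is free (here plain printf, i.e. the serial port). */
    #define TEST_EQUAL(actual, expected)                                \
        do {                                                            \
            tests_run++;                                                \
            if ((actual) != (expected)) {                               \
                tests_failed++;                                         \
                printf("FAIL %s:%d: %s != %s\n",                        \
                       __FILE__, __LINE__, #actual, #expected);         \
            }                                                           \
        } while (0)

    /* Each suite is one function; the generated driver walks this table. */
    typedef void (*suite_fn)(void);

    static void suite_timers(void) { TEST_EQUAL(1 + 1, 2); /* real tests here */ }
    static void suite_adc(void)    { TEST_EQUAL(2 * 2, 4); }

    static const suite_fn suites[] = { suite_timers, suite_adc };

    int main(void)
    {
        /* On the target, the driver reads the index of the last suite from
         * flash, runs the next one, and resets the CPU so a dying suite
         * cannot take the rest down; on a host build we simply iterate. */
        for (unsigned i = 0; i < sizeof suites / sizeof suites[0]; i++)
            suites[i]();

        printf("%d tests, %d failures\n", tests_run, tests_failed);
        return tests_failed != 0;
    }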

It worked! And it was great to have. Using our custom datalogger, you could hit the "Test" button, and a couple of minutes later you would have all the results. I highly recommend it.

Updated to clarify how the test driver works.




Answer 3:


Yes.

The difficulty depends on the type of hardware you're trying to test. As others have said, the issue is going to be the complexity of the external stimulus that you need to apply. External stimulus is probably best achieved with an external test rig (as Adam Davis has described).

One thing to consider, though, is exactly what it is that you're trying to verify.

It's tempting to assume that to verify the interaction of the hardware and the firmware you've really no option but to directly apply external stimulus (i.e. connecting DACs to all of your ADC inputs, etc.). In these cases, though, the corner cases that you really want to test are often subject to issues of timing (e.g. interrupts arriving while you're executing function foo()), which are going to be incredibly difficult to test in a meaningful way - and even harder to get meaningful results from. (i.e. The first 100K times we ran this test it was fine. The last time we ran it it failed. Why?!?)

But the verification of the hardware should be done separately. Once this is done, unless the hardware is changing regularly (through downloadable FPGA images or the like), you should be able to assume that it works and purely test your firmware.

So in this case you can concentrate on verifying the algorithms that are used for processing your external stimuli - for example, calling your ADC conversion routines with a fixed value as if it came from the ADC directly. These tests are repeatable and therefore of benefit. They will require special test builds, though.
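A minimal sketch of that idea, with invented names: the conversion routine here is pure arithmetic, so a special test build can feed it fixed raw counts exactly as if they had come from the ADC register:

    #include <assert.h>
    #include <stdint.h>

    /* Production code: convert raw 10-bit ADC counts to millivolts. */
    static int32_t adc_counts_to_mv(uint16_t raw)
    {
        return ((int32_t)raw * 3300) / 1023;   /* 3.3 V reference, 10-bit ADC */
    }

    #ifdef TEST_BUILD
    /* Feed fixed values "as if they came from the ADC" - repeatable, no rig. */
    int main(void)
    {
        assert(adc_counts_to_mv(0)    == 0);
        assert(adc_counts_to_mv(1023) == 3300);
        assert(adc_counts_to_mv(512)  == 1651);   /* mid-scale, rounded down */
        return 0;
    }
    #endif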

Testing the communications paths of your device is going to be relatively straightforward and shouldn't require special code builds.




Answer 4:


We have had good results with automated testing on our embedded systems. We have tests written in high-level (easy to program and debug) languages that run on dedicated test machines. These tests generally do sanity checking or generate random inputs for the devices, then check for correct behavior. There is a lot of work in generating and maintaining these tests. We designed a framework and then let interns work on the tests themselves.

It's not a perfect solution, and the tests are certainly prone to errors, but the most important part is to close your existing coverage holes. Find the biggest hole and design something to cover it in an automated fashion, even if it isn't perfect or won't cover the entire feature. Later, when everything is covered somewhat, you can come back and address the worst coverage or the most critical features.

Some things to consider:

  • What is the penalty of a firmware bug? How easy is it to update firmware in the field?
  • What kind of coverage do my tests provide? Are they a simple sanity check? Are they configurable enough to test many different scenarios?
  • Once a test has failed, how will you reproduce the failure in order to debug it? Did you log all the device and test settings so you can eliminate as many variables as possible - device configuration, firmware version, test software version, all external inputs, all observed behavior?
  • What are you testing against? Is the spec clear enough about the expected behavior of the device you are testing, or are you validating against what you think the code should do?



Answer 5:


If your goal is to test your low-level driver code, you will likely need to create some sort of test fixture, using loopback cables or multiple interconnected units to allow you to exercise each driver. Pairing a board running known-good software with a board running a development build will allow you to test for regressions in communication protocols, etc.
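For instance, a UART loopback check can be as small as the sketch below. The uart_put/uart_get names are placeholders for whatever driver API is under test, and the fixture is assumed to wire TX back to RX:

    #include <stdint.h>
    #include <stdio.h>

    /* Driver under test - hypothetical names for your own UART driver. */
    extern void uart_put(uint8_t byte);
    extern int  uart_get(uint8_t *byte, uint32_t timeout_ms);   /* 0 on success */

    /* With TX looped back to RX, every byte sent must come straight back. */
    int uart_loopback_test(void)
    {
        for (int v = 0; v < 256; v++) {
            uint8_t rx;

            uart_put((uint8_t)v);
            if (uart_get(&rx, 100) != 0 || rx != (uint8_t)v) {
                printf("loopback failed at 0x%02X\n", v);
                return -1;
            }
        }
        return 0;
    }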

Specific test strategies depend on the hardware you wish to test. For example, an ADC can be tested by presenting a known waveform, converting a series of samples, and then checking them for the proper range, frequency, average value, etc.
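A sketch of such a check for the simplest waveform, a known DC level; adc_capture is a placeholder for your own capture routine, and the expected mid-scale value assumes a 10-bit converter at a 3.3 V reference:

    #include <stdint.h>
    #include <stdlib.h>

    extern void adc_capture(uint16_t *buf, int n);   /* hypothetical driver call */

    /* Present a known 1.65 V DC level, capture, and check range and mean. */
    int adc_dc_test(void)
    {
        enum { N = 256, EXPECTED = 512, TOL = 8 };   /* mid-scale on 10 bits */
        uint16_t buf[N];
        int32_t sum = 0;

        adc_capture(buf, N);
        for (int i = 0; i < N; i++) {
            if (buf[i] > EXPECTED + TOL || buf[i] + TOL < EXPECTED)
                return -1;                           /* sample out of range */
            sum += buf[i];
        }
        return (abs(sum / N - EXPECTED) <= TOL / 2) ? 0 : -1;
    }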

I have found this type of testing to be very valuable in the past, allowing me to confidently modify and improve driver code without fear of breaking existing applications.




Answer 6:


Yes, I do this, although I've always had a serial port available for test I/O.

It is frequently difficult to leave the unit totally unmodified: some tests require a line commented out or a call added, e.g. to deal with a watchdog.

IMHO, this is better than no unit testing at all. And of course you need to be doing complete integration/system testing, too.




Answer 7:


Unit testing embedded projects is quite difficult, as it usually requires external stimulus and external measurement.

We have been successful in developing an external serial protocol (over RS-232, UDP, or TCP/IP) with basic commands for exercising the hardware, combined with debug logging in the low-level drivers that watches for erroneous or even slightly abnormal conditions (especially for limit checking).
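The protocol need not be elaborate. A line-oriented dispatcher along the lines of the sketch below (command names and driver hooks invented for illustration) is enough for a host-side script to exercise the hardware and validate the replies:

    #include <stdio.h>

    /* Hypothetical hooks into the drivers being exercised. */
    extern int dio_set(int pin, int level);
    extern int adc_read(int channel);

    /* One text command per line, e.g. "DIO 3 1" or "ADC 0"; replies go back
     * over the same port so the host script can validate them. */
    void handle_command(const char *line)
    {
        int a, b;

        if (sscanf(line, "DIO %d %d", &a, &b) == 2)
            printf("OK %d\n", dio_set(a, b));
        else if (sscanf(line, "ADC %d", &a) == 1)
            printf("OK %d\n", adc_read(a));
        else
            printf("ERR unknown command\n");
    }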

Once developed, though, we can run the tests after every build if required. It will definitely allow you to deliver a better-quality product.




Answer 8:


If your goal is manufacturing test (ensuring that the modules are properly assembled - no inadvertent shorts, opens, etc.), you should focus first on testing cables and connectors, followed by socketed and soldered connections, then the PCB itself. These items can all be tested for shorts and opens by finding access patterns that drive each individual line high while its neighbors are low and vice versa, then reading back the lines' values.

Without knowing more details of your hardware it's difficult to be more specific, but most embedded processors can set I/O pins to a GPIO mode that simplifies this sort of testing.
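A walking-ones/walking-zeros pass over a group of such GPIO lines might look roughly like this; the gpio_* calls are placeholders for your part's actual register interface, and fewer than 32 lines are assumed:

    #include <stdint.h>

    /* Hypothetical GPIO access - replace with your microcontroller's API. */
    extern void     gpio_write_bus(uint32_t value);   /* drive all lines at once */
    extern uint32_t gpio_read_bus(void);              /* read them back */

    /* Drive each line high with its neighbors low, then the inverse pattern.
     * A short or open shows up as a mismatch between written and read values. */
    int bus_walking_test(int n_lines)                 /* assumes n_lines < 32 */
    {
        uint32_t mask = (1u << n_lines) - 1u;

        for (int i = 0; i < n_lines; i++) {
            uint32_t one = 1u << i;

            gpio_write_bus(one);                      /* walking one */
            if ((gpio_read_bus() & mask) != one)
                return -1;

            gpio_write_bus(~one & mask);              /* walking zero */
            if ((gpio_read_bus() & mask) != (~one & mask))
                return -1;
        }
        return 0;
    }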

If you are not performing bed-of-nails testing on your PCAs, this testing should be considered a mandatory first step for newly manufactured boards.




Answer 9:


I know this is old now, but maybe it will help. Yes, you can do it, but it depends on how much you want to invest in the solution. For more than two years I worked on test and validation of the MCAL layer of AUTOSAR, which is about the lowest level you can get to when it comes to software testing. It was a sort of component-level testing. Some may call it unit-level, but it was slightly higher than that, because we were testing the APIs of the MCAL components: ADC, SPI, ICU, DIO and so on.

The solution used involved:

  • a test framework running on the target micro
  • a dSPACE box to provide signals to, and read signals from, the target when required
  • XCP access through Vector CANape to trigger the test execution and collect the results
  • a Python framework to perform the test control and validation of the results

The test cases were written in C and flashed onto the target along with the software under test. It was a black-box test, because we didn't alter the implementation of the MCAL in any way, and I think not even the startup sequence was touched. An idle task continuously checked the state of a flag that signaled the start of a test execution, and a 10 ms task actually ran the test.

A test case was in fact a switch statement, and every case in that switch was a test step. Python triggered the test execution at the test-step level. A good thing about this approach was the reuse of steps with different parameters. This test control - what to execute and how - was done by Python through a test-control data structure acting as an API between the test implementation and the test triggering and evaluation mechanism. This is what CANape was used for: to select the test to be executed and to read back its results. Every value obtained by a test step was stored in an array inside that data structure. The test step itself wasn't involved in any validation, because the target was considered an untrusted component of the test environment.

The validation was done by Python based on the test specifications. Python parsed these specifications and could automatically create test-triggering scripts, including the validation criteria for every test step. The specification of every test case was a series of test-step descriptions together with their validation criteria. Some of these steps were dSPACE-related: for example, one step initialized something and armed edge capture on an already configured channel, and the next step applied the signal on that channel by commanding the dSPACE equipment.
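A rough sketch of that switch-case pattern, with all names invented (in the real setup the flags and result array were read and written over XCP via CANape, an idle task polled the start flag, and a 10 ms task executed the steps; here everything is folded into one function for brevity):

    #include <stdint.h>

    /* Shared with the tester via XCP in the real setup - here just globals. */
    volatile uint8_t  test_start_flag;    /* set by the host to start a test    */
    volatile uint8_t  test_step;          /* advanced by the host, step by step */
    volatile int32_t  test_results[16];   /* raw values the host validates      */

    /* Called from the 10 ms task; each case is one test step. Validation
     * happens on the host, since the target is treated as untrusted. */
    void run_test_step(void)
    {
        if (!test_start_flag)
            return;

        switch (test_step) {
        case 0:
            /* e.g. configure an ICU channel and arm edge capture */
            test_results[0] = 0;   /* init status, in the real code */
            break;
        case 1:
            /* e.g. read captured edge count after the rig applied a signal */
            test_results[1] = 0;   /* edge count, in the real code */
            break;
        default:
            test_start_flag = 0;   /* test case finished */
            break;
        }
    }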

A cheaper solution would involve using an in-house board instead of the dSPACE equipment. To some extent, even a programmable signal generator can be used, but that will not help if you need to validate signals output by the target.



Source: https://stackoverflow.com/questions/115115/test-automation-with-embedded-hardware
