Is it a bad practice to randomly generate test data?


This is an answer to your second point:

(2) I use testing as a form of documentation for the code. If I have hard-coded fixture values, it's hard to tell what a particular test is trying to demonstrate.

I agree. Ideally spec examples should be understandable by themselves. Using fixtures is problematic, because it splits the pre-conditions of the example from its expected results.

Because of this, many RSpec users have stopped using fixtures altogether and instead construct the needed objects in the spec example itself:

describe Item, "#most_expensive" do
  it 'should return the most expensive item' do
    Item.create!(:price => 100)
    Item.create!(:price => 50)

    Item.most_expensive.price.should == 100
  end
end

If you end up with lots of boilerplate code for object creation, take a look at one of the many test object factory libraries, such as factory_girl, Machinist, or FixtureReplacement.
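For instance, here is a minimal sketch using factory_girl's classic API (the :item factory and its default price are illustrative assumptions, not part of the question):

# spec/factories.rb -- define defaults once; override per example as needed
FactoryGirl.define do
  factory :item do
    price 100
  end
end

# in a spec example:
FactoryGirl.create(:item)               # uses the default price of 100
FactoryGirl.create(:item, :price => 50) # overrides the default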

KeyserSoze

I'm surprised no one in this topic or in the one Jason Baker linked to mentioned Monte Carlo Testing. That's the only time I've extensively used randomized test inputs. However, it was very important to make the tests reproducible by using a constant seed for the random number generator in each test case.
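As a small Ruby sketch of that idea (the property being checked is just a stand-in):

require 'test/unit'

class MonteCarloTest < Test::Unit::TestCase
  SEED = 42 # constant seed: every run draws exactly the same "random" inputs

  def test_sort_is_idempotent
    srand(SEED)
    100.times do
      data = Array.new(50) { rand(1_000) }
      assert_equal data.sort, data.sort.sort
    end
  end
end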

We thought about this a lot on a recent project of mine. In the end, we settled on two points:

  • Repeatability of test cases is of paramount importance. If you must write a random test, be prepared to document it extensively, because if/when it fails, you will need to know exactly why.
  • Using randomness as a crutch for code coverage means you either don't have good coverage or you don't understand the domain enough to know what constitutes representative test cases. Figure out which is true and fix it accordingly.

In sum, randomness can often be more trouble than it's worth. Consider carefully whether you're going to be using it correctly before you pull the trigger. We ultimately decided that random test cases are a bad idea in general and should be used sparingly, if at all.

Lots of good information has already been posted, but see also: Fuzz Testing. Word on the street is that Microsoft uses this approach on a lot of their projects.

My testing experience is mostly with simple programs written in C, Python, and Java, so I'm not sure this applies everywhere. Still, whenever a program can accept any sort of user input, I always include a test with random input data, or at least input generated by the computer in an unpredictable way, because you can never safely assume what users will enter. Or rather, you can, but a hacker who doesn't share that assumption may well find a bug you totally overlooked. Machine-generated input is the best (only?) way I know to keep human bias completely out of the testing procedure. Of course, to reproduce a failed test you have to do something like saving the test input to a file, or printing it out if it's text, before running the test.
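As a sketch of that save-before-run discipline in Ruby (process is a hypothetical stand-in for the code under test):

# Generate unpredictable input, persist it first, then exercise the code;
# a failing run can later be replayed from last_input.bin.
input = Array.new(1024) { rand(256) }.pack('C*')
File.open('last_input.bin', 'wb') { |f| f.write(input) }

begin
  process(input)
rescue => e
  warn "process failed; input saved in last_input.bin (#{e})"
  raise
end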

Random testing is a bad practice as long as you don't have a solution to the oracle problem, i.e., determining the expected outcome of your software for a given input.

If you have solved the oracle problem, you can go one step further than simple random input generation: you can choose input distributions so that specific parts of your software get exercised more often than with uniform random input.

You then switch from random testing to statistical testing. Consider this branch structure:

if (a > 0)
    // Do Foo
else if (b < 0)
    // Do Bar
else
    // Do Foobar

If you select a and b uniformly over the int range, you exercise Foo 50% of the time, Bar 25% of the time, and Foobar 25% of the time. It is likely that you will find more bugs in Foo than in Bar or Foobar.

If instead you select a so that it is positive only 33.33% of the time (negative the remaining 66.66%), Bar and Foobar get exercised more than under the first distribution: with b still uniform, each of the three branches is now exercised 33.33% of the time.
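A quick Ruby simulation makes the difference visible (2**31 stands in for the int range):

INT_MAX = 2**31

counts = Hash.new(0)
100_000.times do
  # positive only 1/3 of the time, per the skewed distribution above
  a = rand < 1.0 / 3 ? rand(INT_MAX) + 1 : -rand(INT_MAX)
  b = rand(2 * INT_MAX) - INT_MAX # still uniform over the int range

  if    a > 0 then counts[:foo]    += 1
  elsif b < 0 then counts[:bar]    += 1
  else             counts[:foobar] += 1
  end
end

p counts # each branch lands near 33,333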

Of course, whenever the observed outcome differs from the expected outcome, you have to log everything that could be useful to reproduce the bug.

I would suggest having a look at Machinist:

http://github.com/notahat/machinist/tree/master

Machinist will generate data for you, but it is repeatable, so each test run sees the same random data.

You could do something similar by seeding the random number generator consistently.
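For example, a hedged sketch of that: pick a fresh seed per run but always log it, so a failure can be replayed (TEST_SEED is a hypothetical environment variable, not a Machinist feature):

seed = ENV['TEST_SEED'] ? Integer(ENV['TEST_SEED']) : Random.new_seed
srand(seed)
puts "Seeded RNG with #{seed} (rerun with TEST_SEED=#{seed} to reproduce)"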

One problem with randomly generated test cases is that the expected answer has to be computed by code as well, and you can't be sure that code doesn't have bugs of its own :)

Jason Baker

You might also see this topic: Testing with random inputs best practices.

The effectiveness of such testing largely depends on the quality of the random number generator you use and on the correctness of the code that translates the RNG's output into test data.

If the RNG never produces values that drive your code into some edge-case condition, that case is never covered. And if the code translating the RNG's output into input for the code under test is defective, then even with a good generator you may still miss edge cases.

How will you test for that?
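One pragmatic hedge is to blend hand-picked edge cases into the random stream, so the generator's blind spots are covered regardless (a Ruby sketch; the edge-case list is illustrative):

INT_MIN, INT_MAX = -2**31, 2**31 - 1
EDGE_CASES = [0, 1, -1, INT_MIN, INT_MAX]

def next_test_value
  # 10% of draws come from explicit edge cases; the rest are uniform
  rand < 0.1 ? EDGE_CASES.sample : rand(INT_MIN..INT_MAX)
end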

The problem with randomness in test cases is that the output is, well, random.

The idea behind tests (especially regression tests) is to check that nothing is broken.

If you find something that is broken, you need to include that test in every run from then on; otherwise you won't have a consistent set of tests. Also, if you run a random test that passes, you still need to keep it, because it's possible that a later change breaks the code in a way that test would catch.

In other words, if you have a test which uses random data generated on the fly, I think this is a bad idea. If, however, you use a set of random data WHICH YOU THEN STORE AND REUSE, this may be a good idea. This could take the form of a set of seeds for a random number generator.

This storing of the generated data allows you to find the 'correct' response to this data.

So, I would recommend using random data to explore your system, but using defined data in your tests (which may originally have been randomly generated data).

Use of random test data is an excellent practice -- hard-coded test data only tests the cases you explicitly thought of, whereas random data flushes out your implicit assumptions that might be wrong.

I highly recommend using Factory Girl and ffaker for this. (Never use Rails fixtures for anything under any circumstances.)
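A minimal sketch of the combination (the :user factory is an assumption; FFaker::Name and FFaker::Internet are real ffaker modules):

require 'ffaker'

FactoryGirl.define do
  factory :user do
    name  { FFaker::Name.name }       # random but plausible full name
    email { FFaker::Internet.email }  # random but well-formed email
  end
end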
