Question
I'm a big fan of TDD and use it for the vast majority of my development these days. One situation I run into somewhat frequently, though, and have never found what I thought was a "good" answer for, is something like the following (contrived) example.
Suppose I have an interface, like this (writing in Java, but really, this applies to any OO language):
public interface PathFinder {
    GraphNode[] getShortestPath(GraphNode start, GraphNode goal);
    int getShortestPathLength(GraphNode start, GraphNode goal);
}
Now, suppose I want to create three implementations of this interface. Let's call them DijkstraPathFinder, DepthFirstPathFinder, and AStarPathFinder.
The question is, how do I develop these three implementations using TDD? Their public interface is going to be the same, and, presumably, I would write the same tests for each, since the results of getShortestPath() and getShortestPathLength() should be consistent among all three implementations.
My choices seem to be:
1. Write one set of tests against PathFinder as I code the first implementation. Then write the other two implementations "blind" and make sure they pass the PathFinder tests. This doesn't seem right because I'm not using TDD to develop the second two implementation classes.
2. Develop each implementation class in a test-first manner. This doesn't seem right because I would be writing the same tests for each class.
3. Combine the two techniques above; now I have a set of tests against the interface and a set of tests against each implementation class, which is nice, but the tests are all the same, which isn't nice.
This seems like a fairly common situation, especially when implementing a Strategy pattern, and of course the differences between implementations might be more than just time complexity. How do others handle this situation? Is there a pattern for test-first development against an interface that I'm not aware of?
Answer 1:
You write interface tests to exercise the interface, and you write more detailed tests for the actual implementations. Interface-based design emphasizes that your unit tests should form a kind of "contract" specification for that interface. Maybe when Spec# comes out, there'll be a language-supported way to do this.
In this particular case, which is a strict strategy implementation, the interface tests are enough. In other cases, where an interface is a subset of the implementation's functionality, you would have tests for both the interface and the implementation. Think of a class which implements 3 interfaces, for example.
EDIT: This is useful so that when you add another implementation of the interface down the road, you already have tests for verifying that the class implements the contract of the interface correctly. This can work for something as specific as ISortingStrategy to something as wide-ranging as IDisposable.
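One common way to express such a contract in Java is an abstract test class written against PathFinder, with one small subclass per implementation that supplies the instance under test. A minimal sketch, assuming JUnit 4; the fixture methods are left abstract because the question doesn't show how a GraphNode graph is constructed:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Contract tests: everything in here must hold for any correct PathFinder.
public abstract class PathFinderContractTest {

    // Each implementation's test subclass supplies its own instance and fixtures.
    protected abstract PathFinder createPathFinder();
    protected abstract GraphNode sampleStart();
    protected abstract GraphNode sampleGoal();

    @Test
    public void pathStartsAtStartAndEndsAtGoal() {
        GraphNode start = sampleStart();
        GraphNode goal = sampleGoal();
        GraphNode[] path = createPathFinder().getShortestPath(start, goal);
        assertEquals(start, path[0]);
        assertEquals(goal, path[path.length - 1]);
    }

    // ...more tests pinning down the behaviour every implementation must share...
}

// DijkstraPathFinderTest, DepthFirstPathFinderTest and AStarPathFinderTest
// would each extend this class and implement the three abstract methods.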
Answer 2:
There is nothing wrong with writing tests against the interface and reusing them for each implementation. For example:
public class TestPathFinder : TestClass
{
    // One reusable test fixture; the implementation under test is injected.
    public IPathFinder _pathFinder;
    public IGraphNode _startNode;
    public IGraphNode _goalNode;

    public TestPathFinder() : this(null, null, null) { }

    public TestPathFinder(IPathFinder ipf,
        IGraphNode start, IGraphNode goal) : base()
    {
        _pathFinder = ipf;
        _startNode = start;
        _goalNode = goal;
    }
}

// Instantiate the same test class once per implementation:
TestPathFinder tpfDijkstra = new TestPathFinder(
    new DijkstraPathFinder(), n1, nN);
tpfDijkstra.RunTests();
//etc. - factory optional
I would argue that this is the least effort solution, which is very much in line with Agile/TDD principles.
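A rough Java/JUnit 4 equivalent of the same idea is a parameterized test class that runs once per implementation. This is only a sketch: the graph-building helpers are placeholders, since constructing GraphNode instances isn't covered by the question.

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

// One set of tests, executed once for every PathFinder implementation.
@RunWith(Parameterized.class)
public class PathFinderTest {

    @Parameters
    public static Collection<Object[]> implementations() {
        return Arrays.asList(new Object[][] {
            { new DijkstraPathFinder() },
            { new DepthFirstPathFinder() },
            { new AStarPathFinder() },
        });
    }

    private final PathFinder pathFinder;

    public PathFinderTest(PathFinder pathFinder) {
        this.pathFinder = pathFinder;
    }

    @Test
    public void returnsPathFromStartToGoal() {
        GraphNode start = buildStartNode();  // placeholder fixtures; how they are
        GraphNode goal = buildGoalNode();    // built depends on GraphNode's real API
        GraphNode[] path = pathFinder.getShortestPath(start, goal);
        assertEquals(start, path[0]);
        assertEquals(goal, path[path.length - 1]);
    }

    private GraphNode buildStartNode() { throw new UnsupportedOperationException("build test graph here"); }
    private GraphNode buildGoalNode()  { throw new UnsupportedOperationException("build test graph here"); }
}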
Answer 3:
I would have no problem going with option 1. Keep in mind that refactoring is part of TDD, and it's usually during a refactoring phase that you move to a design pattern such as Strategy, so I wouldn't feel bad about doing that without writing new tests.
If you wanted to test the implementation-specific details of each PathFinder impl, you might consider passing mock GraphNodes which are somehow capable of helping to assert the Dijkstra-ness or DepthFirst-ness, etc, of the implementation. (Perhaps these mock GraphNodes could record how they are traversed, or somehow measure performance.) Maybe this is testing overkill, but then again if you know your system needs these three distinct strategies for some reason, it'd probably be good to have tests to demonstrate why - otherwise why not just pick one implementation and throw the others away?
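For instance, a recording subclass of GraphNode could log the order in which an algorithm expands nodes. This sketch assumes GraphNode can be subclassed, has a no-arg constructor, and exposes a getNeighbors() method; none of that appears in the question, so adjust to the real API:

import java.util.List;

// A "spy" node that records every time an algorithm asks for its neighbours.
public class RecordingGraphNode extends GraphNode {

    private final List<GraphNode> expansionLog;  // shared by all nodes in the test graph

    public RecordingGraphNode(List<GraphNode> expansionLog) {
        this.expansionLog = expansionLog;
    }

    @Override
    public List<GraphNode> getNeighbors() {
        expansionLog.add(this);  // remember that this node was expanded
        return super.getNeighbors();
    }
}

// A DepthFirstPathFinder-specific test could then assert that the log shows
// depth-first expansion order, while a Dijkstra-specific test asserts
// lowest-cost-first expansion.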
Answer 4:
I don't mind reusing test code as a template for new tests that have similar functionality. Depending on the particular class under test, you may have to rework them with different mock objects and expectations. At the least you'll have to refactor them to use the new implementation. I would follow the TDD method, though, of taking one test, reworking it for the new class, then writing just the code to pass that test. This may take even more discipline, though, since you already have one implementation under your belt and will undoubtedly be influenced by code you have already written.
Answer 5:
"This doesn't seem right because I'm not using TDD to develop the second two implementation classes."
Sure you are.
Start by commenting out all the tests but one. As you make a test pass, either refactor or uncomment another test.
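With JUnit 4, the same staging can be done with @Ignore instead of literal comment markers, removing the annotation from one test at a time as the new implementation catches up. A sketch with made-up test names:

import org.junit.Ignore;
import org.junit.Test;

public class AStarPathFinderTest {

    @Test
    public void pathStartsAtStartAndEndsAtGoal() {
        // the one test currently being made to pass
    }

    @Ignore("enable once the previous test passes")
    @Test
    public void reportsCorrectShortestPathLength() {
        // re-enable next
    }
}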
Jtf
Source: https://stackoverflow.com/questions/544472/developing-to-an-interface-with-tdd