Dependency analysis tool - updating regression test cases

Submitted on 2019-12-02 20:37:23
wrm

With JDepend you can analyse dependencies between packages and even write unit tests that assert dependency constraints, or integrate it with FitNesse to get a nice dependency-table test. This may help if your tests live in specific packages...
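The idea of a dependency-asserting unit test can be sketched without any particular library: extract package dependencies from import statements and fail when a forbidden edge exists. The package names below (com.example.*) are made up for illustration.

```java
import java.util.*;
import java.util.regex.*;

// Minimal sketch of a dependency rule check: extract the packages a
// compilation unit imports from, and flag any forbidden dependency.
public class DependencyRule {
    private static final Pattern IMPORT =
        Pattern.compile("^import\\s+([\\w.]+)\\.\\w+;", Pattern.MULTILINE);

    // Returns the set of packages a source file imports from.
    static Set<String> importedPackages(String source) {
        Set<String> pkgs = new HashSet<>();
        Matcher m = IMPORT.matcher(source);
        while (m.find()) pkgs.add(m.group(1));
        return pkgs;
    }

    // True if the unit depends on any of the forbidden packages.
    static boolean violates(String source, Set<String> forbidden) {
        Set<String> imported = importedPackages(source);
        imported.retainAll(forbidden);
        return !imported.isEmpty();
    }

    public static void main(String[] args) {
        String unit = "package com.example.app;\n"
                    + "import com.example.app.util.Strings;\n"
                    + "import com.example.tests.Fixtures;\n";
        // Production code must not depend on test packages.
        System.out.println(violates(unit, Set.of("com.example.tests"))); // true
    }
}
```

A real JDepend constraint test works at the bytecode level rather than on import text, but the pass/fail shape is the same.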

I'm a huge fan of Sonar myself; since you already have it running in your CI environment, that would be my suggestion. I'm not quite sure how Sonar's Dependency Cycle Matrix doesn't already do what you want, or rather what it is you want in addition to that.

You can find dependencies down to the class level using Classycle - Dependency Checker; the section "Check Classes Dependency" in its guide might help in your case. Normally a static analysis tool analyses either the source codebase (e.g. Project/src) or the test codebase (e.g. Project/test), but not both. In your case, however, it seems you want to find dependencies between source and test code too. So running a dependency analysis tool with the input path set to the common parent of both src and test (e.g. the project root directory) may be what you need: using those dependencies you can find out, when source class X changes, which dependent classes in the tests are impacted by that change.
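However the dependency graph is extracted, the "which tests are impacted" step reduces to an intersection query. A minimal sketch, with made-up class names:

```java
import java.util.*;

// Sketch: given a (statically extracted) map from each test class to the
// production classes it references, return the tests impacted by a change.
public class ImpactedTests {
    static Set<String> impacted(Map<String, Set<String>> testDeps,
                                Set<String> changedClasses) {
        Set<String> result = new TreeSet<>();
        for (Map.Entry<String, Set<String>> e : testDeps.entrySet())
            if (!Collections.disjoint(e.getValue(), changedClasses))
                result.add(e.getKey());
        return result;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of(
            "OrderServiceTest", Set.of("OrderService", "Order"),
            "InvoiceTest", Set.of("Invoice"));
        System.out.println(impacted(deps, Set.of("Order"))); // [OrderServiceTest]
    }
}
```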

The best tool you can find for your problem is actually not a tool but a practice. I strongly suggest you read about Test Driven Development (see Lasse Koskela's book) and Specification by Example (see Gojko Adzic's books, they're great).

Using these practices will fundamentally change two things:

  1. You will have a test coverage close to 100%
  2. The tests will become first class citizens and will be the centre point of your code

The reason I find this relevant to your question is that your scenario hints at the exact opposite role for the tests: people perform changes in code and then think "oh, no... now I have to go figure out what I broke in those damn tests".

In my experience, tests should not be overlooked or considered "lower grade code". And while my answer points to a methodology change that will only show visible results in the long run, it might help you avoid the issue altogether in the future.

If I read the question correctly, you want a tool that analyses source code and determines which tests don't need to be rerun.

The first thing to consider is: what's the problem with always running all the tests? That's what CI means. If you can't run all your unit tests in 15 minutes and all your integration/stress tests overnight, fix whatever causes that to be the case, probably by buying some hardware.

Failing that, the only thing that can track potential test regressions is coverage analysis (e.g. http://www.eclemma.org/). Given even basic OO design, let alone dependency injection, static package or class dependencies are at best meaningless, and probably actively misleading, as a guide to which changes can cause which tests to fail. And that ignores the case where the problem is that A should be calling B and isn't. However, if you cross-reference changed files with called files and there is no intersection, not rerunning the test is arguably safe-ish; the remaining failures will be things like some code adding an event handler where there wasn't one before.

I don't think there is a free tool to do this, but it should be scriptable from Jenkins/Emma/Git. Or use one of the integrated solutions like Kalistick. But I'd be sceptical that it could efficiently beat human judgement based on communication with the development team as to what they are trying to do.

On a vaguely related note, the simple tool the Java world actually needs, but which as far as I know doesn't exist, is an extension to JUnit that runs a set of tests in dependency order and stops on the first error. That would provide a little extra help to let a developer focus on the most likely source of a regression.
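The core of such a runner is tiny; here is a sketch in plain Java (no JUnit integration, and the test names are invented) that executes tests in a given order and stops at the first failure:

```java
import java.util.*;
import java.util.function.BooleanSupplier;

// Sketch of the proposed runner: execute tests in dependency order and stop
// at the first failure, so the earliest broken dependency is what's reported.
public class OrderedRunner {
    // Runs tests in list order; returns the name of the first failing test,
    // or null if all pass. Tests after the failure are not executed.
    static String runUntilFailure(List<Map.Entry<String, BooleanSupplier>> orderedTests) {
        for (Map.Entry<String, BooleanSupplier> t : orderedTests)
            if (!t.getValue().getAsBoolean()) return t.getKey();
        return null;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, BooleanSupplier>> tests = List.of(
            Map.entry("coreTest",   (BooleanSupplier) () -> true),
            Map.entry("parserTest", (BooleanSupplier) () -> false),
            Map.entry("uiTest",     (BooleanSupplier) () -> true)); // never reached
        System.out.println(runUntilFailure(tests)); // parserTest
    }
}
```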

Ira Baxter

You can figure out which tests are relevant by tracking what code they touch. You can track what code they touch by using test coverage tools.

Most test coverage tools build an explicit set of locations that a running test executes; that's the point of "coverage". If you organize your test running to execute one unit test at a time, and then take a snapshot of the coverage data, then you know for each test what code it covers.

When code is modified, you can determine the intersection of the modified code and what an individual test covers. If the intersection is non-empty, you surely need to run that test again, and likely you'll need to update it.
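That selection step can be sketched directly: one coverage snapshot per test (file to covered line numbers), intersected with the modified lines. File and test names here are placeholders.

```java
import java.util.*;

// Sketch: with one coverage snapshot per test, rerun exactly the tests
// whose covered lines intersect the modified lines.
public class TestSelector {
    static Set<String> testsToRerun(Map<String, Map<String, Set<Integer>>> coveragePerTest,
                                    Map<String, Set<Integer>> modifiedLines) {
        Set<String> rerun = new TreeSet<>();
        coveragePerTest.forEach((test, coverage) ->
            modifiedLines.forEach((file, lines) -> {
                Set<Integer> covered = coverage.getOrDefault(file, Set.of());
                if (!Collections.disjoint(covered, lines)) rerun.add(test);
            }));
        return rerun;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Set<Integer>>> cov = Map.of(
            "testA", Map.of("Foo.java", Set.of(10, 11, 12)),
            "testB", Map.of("Bar.java", Set.of(5)));
        System.out.println(testsToRerun(cov, Map.of("Foo.java", Set.of(11)))); // [testA]
    }
}
```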

There are several practical problems with making this work.

First, it is often hard to figure out how the test coverage tools record this position data. Second, you have to get the testing mechanism to capture it on a per-test basis; that may be awkward to organize, and/or the coverage data may be clumsy to extract and store. Third, you need to compute the intersection of "code modified" with "code covered" by a test; abstractly, this is largely a problem of intersecting big bit vectors. Finally, capturing "code modified" is a bit tricky, because code sometimes moves: if location 100 in a file was tested and the line moves within the file, you may not get comparable position data. That will lead to false positives and false negatives.
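The bit-vector intersection in the third point maps directly onto java.util.BitSet, assuming line positions are encoded into a shared index space:

```java
import java.util.BitSet;

// Sketch: represent "lines covered by a test" and "lines modified" as bit
// vectors over a global line index; a non-empty intersection means rerun.
public class BitIntersect {
    // intersects() answers the rerun question without materialising the AND.
    static boolean needsRerun(BitSet coveredByTest, BitSet modified) {
        return coveredByTest.intersects(modified);
    }

    public static void main(String[] args) {
        BitSet covered = new BitSet();
        covered.set(100, 120);   // the test covers lines 100..119
        BitSet modified = new BitSet();
        modified.set(115);       // line 115 was changed
        System.out.println(needsRerun(covered, modified)); // true
    }
}
```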

There are test coverage tools that record the coverage data in an easily captured form and can do these computations. Determining code changes is trickier; you can use diff, but moved code will confuse the issue somewhat. There are diff tools that compare code structures and identify such moves, so you can get better answers.
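To see why moved code confuses a line-based diff, consider a naive positional comparison, which flags a line as "modified" whenever old and new text differ at the same index. Code that merely moved shows up as changed, producing the false positives described above:

```java
import java.util.*;

// Sketch of a naive positional diff: compares old and new files line by
// line at the same index. Moved-but-unchanged code is reported as modified.
public class NaiveDiff {
    static Set<Integer> changedLines(List<String> oldLines, List<String> newLines) {
        Set<Integer> changed = new TreeSet<>();
        int max = Math.max(oldLines.size(), newLines.size());
        for (int i = 0; i < max; i++) {
            String a = i < oldLines.size() ? oldLines.get(i) : null;
            String b = i < newLines.size() ? newLines.get(i) : null;
            if (!Objects.equals(a, b)) changed.add(i);
        }
        return changed;
    }

    public static void main(String[] args) {
        List<String> before = List.of("int x;", "int y;");
        List<String> after  = List.of("int y;", "int x;"); // only reordered
        System.out.println(changedLines(before, after)); // [0, 1] - both flagged
    }
}
```

A structure-aware diff avoids this by matching lines (or AST nodes) by content rather than by position.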

If you have a source code slicer, you could compute how the tested output is (backward-slice-)dependent on the input; all the code in that slice affects the test, obviously. The bad news here is that such slicers are not easy to get.

basav

Some tools which might help. Note that not all of them integrate with CI.

iPlasma is a great tool

CodePro is an Eclipse plugin which helps detect code and design problems (e.g. duplicated code, classes that break encapsulation, or methods placed in the wrong class). (Now acquired by Google.)

Relief is a tool visualizing the following parameters of Java projects:

  1. size of a package, i.e. how many classes and interfaces it contains
  2. the kind of item being visualized (packages and classes are represented as boxes, interfaces and a type's fields as spheres)
  3. how heavily an item is used (represented as gravity, i.e. distance from the centre)
  4. number of dependencies (represented as depth)

Stan4j is a commercial tool that costs a couple of hundred dollars. It targets only Java projects and comes very close to Sonar (its reports may even be a little better). It has good Eclipse integration.

Intellij Dependency Analysis

From what I could understand of your problem, you need to track two OO metrics: Afferent Coupling (Ca) and Efferent Coupling (Ce). With these you can narrow down to the relevant packages. You could also explore writing an Eclipse plugin which, on every build, highlights the affected classes based on the Ca and Ce metrics.
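Both metrics fall out of a package-level dependency graph: Ce counts a package's outgoing edges, Ca its incoming ones. A small sketch with invented package names:

```java
import java.util.*;

// Sketch: compute Efferent Coupling (Ce = packages this package depends on)
// and Afferent Coupling (Ca = packages that depend on this package) from a
// package dependency graph.
public class CouplingMetrics {
    // Ce: number of outgoing dependency edges.
    static int ce(Map<String, Set<String>> deps, String pkg) {
        return deps.getOrDefault(pkg, Set.of()).size();
    }

    // Ca: number of incoming dependency edges.
    static long ca(Map<String, Set<String>> deps, String pkg) {
        return deps.values().stream().filter(d -> d.contains(pkg)).count();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = Map.of(
            "app",  Set.of("core", "util"),
            "core", Set.of("util"),
            "util", Set.of());
        // "util" depends on nothing but everything depends on it.
        System.out.println("util: Ce=" + ce(deps, "util")
                         + " Ca=" + ca(deps, "util")); // util: Ce=0 Ca=2
    }
}
```

A package with high Ca is one where a change ripples out widely, which is exactly what you want highlighted on each build.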
