Do atomic tests make sense in dynamically created environments?

Submitted by ぐ巨炮叔叔 on 2019-12-11 09:45:02

Question


We're building a product (a web app) that allows users to create custom databases and store data within those DBs.

Our issue with testing the frontend (CoffeeScript) is that every test should be atomic, but that would require setting up a DB just to check that an item within that DB can be created and persisted, or to see how changes to a DB affect its items.

Essentially, the setup code needed to reach the item tests creates a new DB, and is therefore the same code the DB-creation tests exercise.

There are two approaches, and we're torn on which to use:

1) Create and tear down a new DB with each group of tests (see the sketch after this list)

  • (+) Sorta Atomic (still fails if setting up a DB fails)
  • (-) Takes a lot of time to execute
  • (-) Tons of surrounding code
  • (-) No way to explore the created environment
  • (-) Messy on errors, everything fails
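
For illustration, here is a minimal sketch of what approach 1 could look like with Jest-style tooling. The helpers (createDatabase, dropDatabase, Database) and the item API are hypothetical stand-ins for whatever our app exposes, not real code:

```typescript
// Sketch of approach 1: every group of tests provisions and destroys its own DB.
// createDatabase/dropDatabase/Database are hypothetical helpers, not a real module.
import { describe, it, expect, beforeAll, afterAll } from "@jest/globals";
import { createDatabase, dropDatabase, Database } from "./testHelpers"; // hypothetical

describe("items in a freshly created database", () => {
  let db: Database;

  beforeAll(async () => {
    // Every group of tests pays the full cost of provisioning its own DB.
    db = await createDatabase("atomic-test-db");
  });

  afterAll(async () => {
    // Torn down again, so nothing leaks between groups -- but nothing is left
    // behind to explore when a test fails, either.
    await dropDatabase(db);
  });

  it("persists a newly created item", async () => {
    const item = await db.createItem({ title: "widget" });
    expect(await db.findItem(item.id)).toBeDefined();
  });
});
```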

2) Do the setup step by step as separate tests that depend on each other, with a cleanup routine at the beginning of a test run (see the sketch after this list)

  • (+) The created environment can be accessed via the UI (not automatically torn down)
  • (+) Step by step testing, less overall/repetitive code
  • (-) Tests depend on each other (messy)
  • (-) Somewhat overall messy
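
And a matching sketch of approach 2, with the same hypothetical helpers: the first test creates the DB, later tests reuse it, and cleanup runs once up front rather than at the end, so the environment survives for manual inspection:

```typescript
// Sketch of approach 2: dependent step-by-step tests, cleanup at the start.
// dropDatabaseIfExists/createDatabase/Database are hypothetical helpers.
import { describe, it, expect, beforeAll } from "@jest/globals";
import { dropDatabaseIfExists, createDatabase, Database } from "./testHelpers"; // hypothetical

describe("database lifecycle, step by step", () => {
  let db: Database;

  beforeAll(async () => {
    // Cleanup routine at the beginning: remove leftovers from the previous run.
    await dropDatabaseIfExists("step-test-db");
  });

  it("creates a new database", async () => {
    db = await createDatabase("step-test-db");
    expect(db).toBeDefined();
  });

  it("creates an item in that database", async () => {
    // Depends on the previous test having succeeded -- the coupling the question warns about.
    const item = await db.createItem({ title: "widget" });
    expect(await db.findItem(item.id)).toBeDefined();
  });
});
```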

We're therefore wondering: does the golden rule that tests should be atomic make sense in such a dynamic environment?


Answer 1:


Basically, what you are talking about are integration tests. These are different from unit tests. Examples of integration tests would be automated UI tests or Coded UI tests. In most of the projects I've worked on we've had both types of tests, and I strongly encourage you to have both types in your project too.

The philosophy behind both these tests is slightly different.

  • Unit Tests are meant to test isolated bits of functionality.
  • They are meant to be very fast.
  • A developer should be able to run them all on their machine in a reasonable amount of time.

There are various consequences of this philosophy.

  • Because a unit test targets an isolated bit of functionality, you should use mocks and stubs to isolate the rest of the environment and focus only on that tiny bit of functionality.
  • The isolation helps your "design thinking" while writing these tests. In fact this is the reason why the unit tests are required to be fast, because a developer is actively and constantly changing the code and unit tests as part of the design and redesign process. There should be very low overhead to set up, change and run the unit tests. I should be able to ignore everything other than the problem I am trying to solve and quickly iterate and reiterate my designs and tests. This is the idea behind TDD and its claim to help write good testable code. If you are spending a long time trying to set up an overly complex unit test then you have to start reconsidering your design.
  • The fast nature means that you could run these as part of your Continuous Integration build.
  • The disadvantage is that because you test each piece of functionality in isolation, you don't know whether they will all work together as a whole. Each time you write a mock, you are implicitly baking in an assumption about how the rest of the system works, and that the rest of the system is currently working as it is meant to (i.e. nothing else is broken as part of your deployment, runtime, OS patching, etc.).

Integration Tests are meant to test the functionality from end to end. You try NOT to mock out or isolate any part of the system.

There are again various consequences of this philosophy. Note that there is no requirement for integration tests to be fast.

  • Integration tests, by their very nature, need to run after your full deployment (as opposed to unit tests, which can be run as soon as your code compiles).
  • Because they take longer, you don't run them as part of your CI environment, but you still need to run them regularly. We usually run them as part of our nightly builds, or you can run them twice daily, etc.
  • Because the integration tests take a black-box approach to the whole system, they don't really help with your "design thinking" about how to actually build the system. But they do help your thinking about the specification of the system as a whole, i.e. what the system should do, not how it should do it.

Note that in both cases the rule that tests should be atomic still applies. Each test is independent of the other tests. This way, when a test fails, you can be sure about all the conditions causing it to fail and concentrate on fixing only that. It's just that an integration test touches as many parts of your system as possible.

To give you an example from our current project:

Let's say we need to write a bit of functionality that requires us to add a new table to the DB and bring it through all the layers to show it in the UI.

We start by creating our business logic classes and domain classes, writing the appropriate web service, building view models, modifying the database, etc. While doing each of these we write unit tests for the code we are currently writing. So when building the business logic classes, we mock out everything else to ensure that the logic in the class is valid (for example, clients over 60 years old get a 50% discount on their car insurance, and so on).
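
As a hedged illustration of that kind of unit test (the answer's own code is .NET; this sketch uses TypeScript/Jest, and the names PremiumCalculator and ClientRepository are invented for the example), the business rule is tested with the data access layer stubbed out entirely:

```typescript
// Unit-test sketch: the discount rule is checked in isolation; the repository
// is a hand-rolled stub, so no database or web service is involved.
import { describe, it, expect } from "@jest/globals";

interface Client { id: string; age: number; }
interface ClientRepository { getClient(id: string): Promise<Client>; }

// Hypothetical business logic class under test.
class PremiumCalculator {
  constructor(private repo: ClientRepository) {}
  async discountFor(clientId: string): Promise<number> {
    const client = await this.repo.getClient(clientId);
    return client.age > 60 ? 0.5 : 0.0; // clients over 60 get a 50% discount
  }
}

describe("PremiumCalculator (unit test, everything else stubbed)", () => {
  it("gives clients over 60 a 50% discount", async () => {
    const repo: ClientRepository = {
      // Stubbed out -- the real data access layer is never touched.
      getClient: async () => ({ id: "c1", age: 70 }),
    };
    const calculator = new PremiumCalculator(repo);
    expect(await calculator.discountFor("c1")).toBe(0.5);
  });
});
```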

Once we do that, we need to update our deployment scripts/packages etc. to be able to deploy it, i.e. update the database creation SQL scripts, the database alteration SQL scripts, and so on. (In your case this will be a complex process.)

Now we write integration tests. In this case we might test from SQL Server through to the web service. There is a SQL integration test base class which contains the set-up and tear-down methods for each test. In the set-up we create a brand new database using our SQL deployment scripts. Each test also specifies a test data SQL script. For example, this test data script might insert a new record into the client table whose age is 70 years. We run this script as part of the "Arrange" step of our test. Then we make a web service call to search for clients older than 60. This is the "Act" part of the test, and from the result we check that we only get back the client we inserted into the DB. At the end of the test, the database is deleted. We've caught bugs here when columns in the SQL database weren't nullable, or when datetime columns overflowed because the default minimum datetime in .NET is different from SQL Server's minimum datetime.
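
A rough sketch of that flow, assuming Jest-style tooling, hypothetical helpers (deployFreshDatabase, runSqlScript, dropDatabase) standing in for the base class described above, and a made-up search endpoint; only the shape (fresh DB in setup, test-data script as Arrange, a real web service call as Act, teardown at the end) is taken from the answer:

```typescript
// Integration-test sketch: nothing is mocked; the test exercises the deployment
// scripts, the database, and the web service together.
import { describe, it, expect, beforeEach, afterEach } from "@jest/globals";
import { deployFreshDatabase, runSqlScript, dropDatabase } from "./sqlTestBase"; // hypothetical

describe("client search (SQL -> web service integration)", () => {
  beforeEach(async () => {
    // Brand new database built from the same deployment scripts used for release.
    await deployFreshDatabase("integration-test-db");
  });

  afterEach(async () => {
    await dropDatabase("integration-test-db");
  });

  it("returns clients older than 60", async () => {
    // Arrange: per-test data script inserts a single 70-year-old client.
    await runSqlScript("testdata/insert-client-aged-70.sql");

    // Act: hit the real web service (global fetch, Node 18+; URL is illustrative).
    const response = await fetch("http://localhost:8080/clients?olderThan=60");
    const clients = await response.json();

    // Assert: only the record we just inserted comes back.
    expect(clients).toHaveLength(1);
    expect(clients[0].age).toBe(70);
  });
});
```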

Some functionality requires us to interact with an Oracle database. For example, if a new record is added to Oracle, a trigger/DB procedure kicks off and transfers that record to SQL Server, and then we need to bring it up through the layers. In this case we have an Oracle-SQL integration test base class. As you might have guessed, this follows a similar pattern, but creates both Oracle and SQL DBs, inserts test data into Oracle, and blows them both away at the end of the test.

The developers usually pick the web service layer for writing their integration tests. The testers, on the other hand, use UI automation tools to make sure that the data is actually showing up on screen. For example, they will record a test that goes to the web page, puts "60" into the age box, clicks the search button, and so on. That test might leverage the same test data SQL script the developer wrote (or the testing team might come to the developer and ask for help crafting SQL scripts to insert whatever highly convoluted data they can think of). But the point is, once the test data insertion script is created, it leverages the same underlying system to blow away the whole DB, create a new one, insert test data, and run the specified test.
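
For completeness, a hedged sketch of such a UI-level check. The answer only says the testers use a recording/UI automation tool; Playwright is used here purely as an illustration, and the URL, selectors, and the reuse of the insert script are assumptions:

```typescript
// UI automation sketch: drives the real page and reuses the same test-data
// script as the developer-level integration test. All names are illustrative.
import { test, expect } from "@playwright/test";
import { runSqlScript } from "./sqlTestBase"; // hypothetical shared helper

test("search screen shows clients older than 60", async ({ page }) => {
  // Same test-data script the web-service integration test uses.
  await runSqlScript("testdata/insert-client-aged-70.sql");

  await page.goto("http://localhost:8080/clients");
  await page.fill("#age", "60");
  await page.click("button#search");

  // The record inserted by the script should show up in the results grid.
  await expect(page.locator("#results")).toContainText("70");
});
```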

So, to answer your question, you will need two types of tests, unit tests and integration tests. You might have to put some initial work into creating base classes or helper methods to create/delete databases, automating your deployment to install/uninstall other components of your system, etc. You will have to do this for your final deployment anyway. Integration tests will also be closely related to, and dependent on, your deployment strategy. This is an advantage and not a disadvantage in my opinion. While it might be painful at first to set it all up, one of the things your integration tests are implicitly testing is your deployment mechanism. If there are any issues with deploying/installing any of the components required by your system, you want to know about them as quickly as possible. Not the day before you are supposed to deploy to production.

A good suite of tests is invaluable. It also needs to be isolated, rigorous and comprehensive. The tests shouldn't fail when they don't need to but more importantly, they should fail when they need to. And when they do fail, you want them to provide as much information as possible and point you at the exact location of failure. This makes fixing the issue a much easier task. Any time you put into building this test suite will more than pay for itself in no time.




Answer 2:


You're not doing atomic tests if you're talking to a database.

You need to mock the database interface and talk to the mock instead. That will be fast, and you'll be able to use the mock to introduce errors that would be difficult to produce with the real database.
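
A small sketch of that idea, assuming a DatabaseClient interface of your own: the test talks to an in-memory fake instead of a real database, which makes it trivial to simulate failures that are hard to trigger against a live DB. The function and interface names are invented for the example:

```typescript
// Mocked-database sketch: a failing fake lets the test exercise the error path.
import { describe, it, expect } from "@jest/globals";

interface DatabaseClient {
  insert(table: string, row: object): Promise<void>;
}

// Hypothetical code under test: wraps DB errors into a user-facing result.
async function saveItem(db: DatabaseClient, item: object): Promise<string> {
  try {
    await db.insert("items", item);
    return "saved";
  } catch {
    return "error: could not save item";
  }
}

describe("saveItem (mocked database)", () => {
  it("reports a friendly error when the database is down", async () => {
    const failingDb: DatabaseClient = {
      // The mock deliberately fails -- something hard to arrange with a real DB.
      insert: async () => { throw new Error("connection refused"); },
    };
    expect(await saveItem(failingDb, { title: "widget" })).toMatch(/could not save/);
  });
});
```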



Source: https://stackoverflow.com/questions/14960097/do-atomic-tests-make-sense-in-dynamically-created-environments
