coverage.py

Configure coverage.py to use no migrations

我们两清 · Submitted on 2020-01-05 04:31:27

Question: I am using Django Test Without Migrations to make my unit tests faster, and I run the tests the following way: python manage.py test --nomigrations. It significantly improved the speed. I want to do the same with PyCharm and coverage.py, to take advantage of the coverage visuals PyCharm creates. I tried adding this to .coveragerc:

    [run]
    omit = */migrations/*

But it turns out that it affects only reports. How can I do this?

Answer 1: Assuming you have the Professional version with Django support: Click on
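For the command-line side, here is a minimal sketch of combining the two tools, assuming django-test-without-migrations is installed (it is what provides the --nomigrations flag) and a .coveragerc sits next to manage.py; whether this maps onto PyCharm's run configuration is a separate question:

    # let coverage.py drive the Django test runner, skipping migrations
    coverage run manage.py test --nomigrations
    coverage report -m

The omit setting above only filters what shows up in reports; skipping the migration work itself is done by the --nomigrations flag at run time.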

Test coverage tool for Behave test framework

我是研究僧i · Submitted on 2020-01-02 02:59:06

Question: We are using the Behave BDD tool for automating APIs. Is there any tool that gives code coverage using our Behave cases? We tried using the coverage module, but it didn't work with Behave.

Answer 1: You can run any module with coverage to see the code usage. In your case it should be close to: coverage run --source='.' -m behave. Tracking code coverage for acceptance/integration/behaviour tests will easily give a high coverage number, but it can create the impression that the code is properly tested. Those are for see
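A short end-to-end sketch of that suggestion, assuming behave is installed and the feature files live in the default features/ directory:

    coverage run --source='.' -m behave   # behave ships a __main__, so -m works
    coverage report -m                    # terminal summary with missing lines
    coverage html                         # browsable per-line report in htmlcov/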

How do I make coverage include not tested files?

你。 · Submitted on 2019-12-31 08:28:33

Question: I have just started writing some unit tests for a Python project, using unittest and coverage. I'm only testing a small proportion so far, but I am trying to work out the code coverage. I run my tests and get the coverage using the following:

    python -m unittest discover -s tests/
    coverage run -m unittest discover -s tests/
    coverage report -m

The problem I'm having is that coverage tells me I have 44% code coverage and only counts the files that: were tested in the unit tests (i
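The usual fix is coverage.py's source option: when a source tree is named, files in it that were never imported by the tests are still reported, at 0%. A minimal sketch, assuming the code lives in a package named myproject (a placeholder name):

    # .coveragerc
    [run]
    source = myproject

    # or equivalently on the command line
    coverage run --source=myproject -m unittest discover -s tests/
    coverage report -m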

Excluding abstractproperties from coverage reports

可紊 · Submitted on 2019-12-30 01:39:15

Question: I have an abstract base class along the lines of:

    from abc import ABCMeta, abstractproperty

    class MyAbstractClass(object):
        __metaclass__ = ABCMeta

        @abstractproperty
        def myproperty(self):
            pass

But when I run nosetests (with coverage) on my project, it complains that the property's def line is untested. It can't actually be tested (AFAIK), as instantiating the abstract class will raise an exception. Are there any workarounds for this, or do I just have to accept < 100% test coverage? Of course, I could remove the
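One common workaround is coverage.py's exclude_lines setting: lines matching any listed regex are treated as never-executable, and in recent coverage.py versions a matched decorator line excludes the whole decorated definition. A sketch of a .coveragerc doing that (note that setting exclude_lines replaces the default, so "pragma: no cover" must be re-added):

    [report]
    exclude_lines =
        # re-add the default pragma
        pragma: no cover
        # don't count abstract members as missing
        @abstractproperty
        @abstractmethod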

Python Coverage for C++ PyImport

不羁岁月 · Submitted on 2019-12-23 12:08:21

Question: Situation: I'm attempting to get coverage reports on all Python code in my current project. I've used Coverage.py with great success for the most part. Currently I'm using it by taking advantage of the sitecustomize.py process for everything that's started from the command line, and it works amazingly well. Issue: I can't get Python modules run from C++ via PyImport_Import()-style calls to actually trace and output coverage data. Example: [test.cpp] #include <stdio.h> #include
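For interpreters embedded this way, coverage.py's documented subprocess hook is the usual lever: call coverage.process_startup() from sitecustomize.py and point the COVERAGE_PROCESS_START environment variable at a config file, so every interpreter started in that environment, including one embedded in a C++ binary, begins measuring. A sketch under those assumptions (paths and the binary name are placeholders):

    # sitecustomize.py, on the embedded interpreter's sys.path
    import coverage
    coverage.process_startup()   # no-op unless COVERAGE_PROCESS_START is set

    # shell, before launching the C++ binary
    export COVERAGE_PROCESS_START=/path/to/.coveragerc
    ./test_app

The config file usually needs parallel = true under [run], so each process writes its own data file; coverage combine merges them afterwards.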

Incremental code coverage for Python unit tests?

大憨熊 · Submitted on 2019-12-23 09:57:14

Question: How can I get an incremental report on code coverage in Python? By "incremental" I mean: what has changed in the covered lines since some "last" report, or since a particular Git commit? I'm using unittest and coverage (and coveralls.io) to get the code coverage statistics, which works great. But I'm involved in only a part of the project, and at first I'm concerned with what my last commit has changed. I expected coverage to be able to show the difference between two reports, but so
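coverage.py itself does not diff two reports, but the diff-cover tool does something close to what is asked: it reads a coverage XML report and limits it to the lines touched by a git diff against a chosen branch. A sketch, assuming diff-cover is installed and the comparison branch is master:

    coverage run -m unittest discover -s tests/
    coverage xml                                   # writes coverage.xml
    diff-cover coverage.xml --compare-branch=master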

How do I generate coverage xml report for a single package?

断了今生、忘了曾经 · Submitted on 2019-12-22 10:54:45

Question: I'm using nose and coverage to generate coverage reports. I only have one package right now, ae, so I specify to cover only that:

    nosetests -w tests/unit --with-xunit --with-coverage --cover-package=ae

And here are the results, which look good:

    Name       Stmts   Exec   Cover   Missing
    -----------------------------------------
    ae             1      1    100%
    ae.util      253    224     88%   39, 63-65, 284, 287, 362, 406
    -----------------------------------------
    TOTAL        263    234     88%
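To get an XML report for just that package, one route is nose's own coverage-plugin flags; this sketch assumes a nose version that exposes --cover-xml and --cover-xml-file:

    nosetests -w tests/unit --with-xunit --with-coverage \
        --cover-package=ae --cover-xml --cover-xml-file=coverage.xml

Since --cover-package restricts measurement to ae, the XML produced this way should be limited to that package as well.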

Running tests from coverage.py vs running coverage from test runner

寵の児 · Submitted on 2019-12-21 05:52:32

Question: During the "Coverage.py with Ned Batchelder" Python & Testing podcast, Brian and Ned briefly discussed that, if you need to run tests with coverage, it is preferable to run the tests from coverage.py (executing coverage run) rather than invoking a test runner with coverage enabled. Why is that, and what is the difference? To put some context on this: currently I'm using the nose test runner and execute the tests with the nosetests command-line tool with the --with-coverage option: $ nosetests --with
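One commonly cited reason: coverage run starts measurement before the test runner itself is imported, so module-level code executed at import time is counted, whereas a runner plugin that starts coverage only after the runner boots can miss those lines. A sketch of the inverted invocation, assuming the package under test is named mypackage (a placeholder) and that nose is importable with -m:

    # instead of: nosetests --with-coverage --cover-package=mypackage
    coverage run --source=mypackage -m nose
    coverage report -m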
