coverage.py

python running coverage on never ending process

与世无争的帅哥 submitted on 2019-12-03 02:50:03
I have a multi-process web server whose processes never end, and I would like to check code coverage for the whole project in a live environment (not only from tests). The problem is that, since the processes never end, I don't have a good place to set the cov.start(), cov.stop(), cov.save() hooks. I therefore thought about spawning a thread that, in an infinite loop, saves and combines the coverage data and then sleeps for some time. However, this approach doesn't work: the coverage report seems to be empty, except for the sleep line. I would be happy to receive any ideas about how to get the

Is it possible to exclude test directories from coverage.py reports?

走远了吗. submitted on 2019-12-03 00:56:18
I'm kind of a rookie with Python unit testing, and particularly coverage.py. Is it desirable to have coverage reports include the coverage of your actual test files? Here's a screenshot of my HTML report as an example. You can see that the report includes tests/test_credit_card. At first I was trying to omit the tests/ directory from the reports, like so: coverage html --omit=tests/ -d tests/coverage I tried several variations of that command, but I could not for the life of me get tests/ excluded. After accepting defeat, I began to wonder if maybe the test files are supposed to be
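A likely reason the `--omit=tests/` variants fail: coverage.py matches omit patterns against measured file paths with fnmatch-style globbing, so a bare directory name without a wildcard matches nothing. A quick stdlib illustration of the matching rule (the file name is made up for the example):

```python
from fnmatch import fnmatch

path = "tests/test_credit_card.py"   # hypothetical measured file

print(fnmatch(path, "tests/"))   # no wildcard: does not match
print(fnmatch(path, "tests/*"))  # matches every file under tests/
```

So `coverage html --omit='tests/*' -d tests/coverage` is the form worth trying; the quotes keep the shell from expanding the glob before coverage sees it.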

coverage.py: exclude files

蹲街弑〆低调 submitted on 2019-12-02 21:48:54
Question: How do I exclude entire files from coverage.py reports? According to the documentation you can exclude code by matching lines. I want to exclude entire files, so that the reports don't include 3rd-party libraries. Am I missing something? Can it be done? Answer 1: You can omit modules with the --omit flag. It takes a comma-separated list of path prefixes. So for example: coverage run my_program.py coverage report --omit=path/to/3rdparty Answer 2: Omitting some files worked for me using the coverage API. It is the same kind of thing Ned suggested. Here is how I did it: cov = coverage.coverage(omit='/usr/lib/python2
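For a longer-lived setup than per-invocation flags, the same exclusions can live in a .coveragerc file, which both `coverage run` and `coverage report` read. A sketch, with illustrative paths (the `omit` setting under `[run]` is documented; the patterns are fnmatch-style, as with the flag):

```ini
[run]
# files matching any of these patterns are skipped
omit =
    /usr/lib/*
    */third_party/*
```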

How do I make coverage include untested files?

冷暖自知 submitted on 2019-12-02 15:32:05
I have just started writing some unit tests for a Python project, using unittest and coverage. I'm currently testing only a small proportion, but I am trying to work out the code coverage. I run my tests and get the coverage using the following: python -m unittest discover -s tests/ coverage run -m unittest discover -s tests/ coverage report -m The problem I'm having is that coverage is telling me I have 44% code coverage and is only counting the files that: were tested in the unit tests (i.e., all the files that were not tested are missing and not in the overall coverage) were in the
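The behaviour described above is expected: by default coverage only sees files that are actually imported during the run. The documented fix is to tell it which source tree to measure, so files that are never imported still appear, at 0%. A sketch of the config form (the package name is illustrative):

```ini
[run]
source = myproject
```

The command-line equivalent is `coverage run --source=myproject -m unittest discover -s tests/`.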

Running coverage inside virtualenv

时光总嘲笑我的痴心妄想 submitted on 2019-11-30 11:49:13
I recently stumbled upon an issue with running coverage measurements within a virtual environment. I do not remember similar issues in the past, nor was I able to find a solution on the web. Basically, when I try to run the test suite in the virtualenv, it works fine. But as soon as I try to do it using coverage, it fails because of a lack of the modules it requires. Based on an answer on Stack Overflow I checked my script and found out that coverage uses a different interpreter, even when run from inside the same virtualenv. Here is how to reproduce it: $ virtualenv --no-site-packages venv New
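The usual cause is a globally installed `coverage` script whose shebang points at the system interpreter; installing coverage inside the venv, or bypassing the script with `python -m coverage`, keeps the interpreters aligned. A small stdlib check for which situation a process is in (the printed values depend on where it runs, so none are shown):

```python
import sys

def in_virtualenv():
    # In a venv, sys.prefix points into the venv while base_prefix
    # (or real_prefix for old virtualenv) points at the base install.
    base = getattr(sys, "base_prefix", None) or getattr(sys, "real_prefix", sys.prefix)
    return sys.prefix != base

print(sys.executable)    # the interpreter actually running this code
print(in_virtualenv())
```

Running this both as a plain script and via the failing `coverage` entry point would show whether the two really use different interpreters.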

Using py.test with coverage doesn't include imports

风格不统一 submitted on 2019-11-30 06:36:43
Question: For Jedi we want to generate our test coverage. There is a related question on Stack Overflow, but it didn't help. We're using py.test as a test runner. However, we are unable to add the imports and other "imported" stuff to the report. For example, __init__.py is always reported as being uncovered:

Name           Stmts   Miss  Cover
----------------------------------
jedi/__init__      5      5     0%
[..]

Clearly this file is being imported and should therefore be reported as tested. We start tests like
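A common explanation: if the package is imported before coverage starts tracing (for example by conftest.py or the test runner itself), its module-level lines execute exactly once and are thereafter served from the sys.modules cache, so the tracer never sees them. A stdlib demonstration of that caching (the module name is made up):

```python
import sys
import types

mod = types.ModuleType("demo_pkg")   # hypothetical package
mod.loaded = True
sys.modules["demo_pkg"] = mod        # simulate an earlier import

import demo_pkg                      # no code runs: served from the cache
print(demo_pkg.loaded)               # → True
```

This is why running the suite as `coverage run -m pytest`, or using the pytest-cov plugin, tends to catch import-time lines: measurement begins before the package is first imported.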

Excluding abstractproperties from coverage reports

我与影子孤独终老i submitted on 2019-11-30 06:02:51
I have an abstract base class along the lines of:

class MyAbstractClass(object):
    __metaclass__ = ABCMeta

    @abstractproperty
    def myproperty(self):
        pass

But when I run nosetests (with coverage) on my project, it complains that the property def line is untested. It can't actually be tested (AFAIK), as instantiation of the abstract class will result in an exception being raised. Are there any workarounds to this, or do I just have to accept < 100% test coverage? Of course, I could remove the ABCMeta usage and simply have the base class raise NotImplementedError, but I prefer the former method.
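Two documented workarounds: a `# pragma: no cover` comment on the line coverage complains about, or an `exclude_lines` pattern (e.g. matching `@abstractproperty`) in .coveragerc so every abstract body is skipped project-wide. A sketch of the pragma form, in Python 3 syntax (the question's `__metaclass__` assignment is the Python 2 spelling of the same thing):

```python
from abc import ABCMeta, abstractproperty

class MyAbstractClass(metaclass=ABCMeta):
    @abstractproperty
    def myproperty(self):  # pragma: no cover
        pass

# The class still behaves as an ABC:
try:
    MyAbstractClass()
except TypeError as exc:
    print("still abstract:", exc)
```

The pragma only hides the line from the report; the abstract-instantiation check is unchanged.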

Finding unused Django code to remove

喜欢而已 submitted on 2019-11-30 03:27:05
I've started working on a project with loads of unused legacy code in it. I was wondering if it might be possible to use a tool like coverage in combination with a crawler (like the django-test-utils one) to help me locate code that isn't getting hit, which we could then mark with deprecation warnings. I realise that something like this won't be foolproof, but I thought it might help. I've tried running coverage.py with the Django debug server, but it doesn't work correctly (it seems to just profile the runserver machinery rather than my views, etc.). We're improving our test coverage all the time, but
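The crawl-and-measure idea can be shown with the stdlib alone: record which functions actually run while traffic is exercised, then diff against everything defined. This is only a toy (coverage.py does the real job, line by line), and the function names are invented:

```python
import sys

hit = set()

def tracer(frame, event, arg):
    if event == "call":
        hit.add(frame.f_code.co_name)
    return None  # no per-line tracing needed

def view_in_use():
    return "rendered"

def legacy_view():          # never reached by the crawl
    return "dead code?"

sys.settrace(tracer)
view_in_use()               # simulate the crawler hitting this view
sys.settrace(None)

defined = {"view_in_use", "legacy_view"}
print(sorted(defined - hit))   # → ['legacy_view']: deprecation candidate
```

As for the runserver problem specifically: the autoreloader runs the actual server in a child process, which is likely why only the runserver machinery shows up; running `manage.py runserver --noreload` under coverage is a commonly suggested workaround.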