pytest

Why doesn't my current directory show up in the path using pytest on Windows?

Submitted 2019-12-10 14:52:43
Question: I have the following folder structure:

```
myapp\
    myapp\
        __init__.py
    tests\
        test_ecprime.py
```

and my pwd is C:\Users\wwerner\programming\myapp\. I have the following test setup:

```python
import pytest
import sys
import pprint

def test_cool():
    pprint.pprint(sys.path)
    assert False
```

That produces the following paths:

```
['C:\\Users\\wwerner\\programming\\myapp\\tests',
 'C:\\Users\\wwerner\\programming\\envs\\myapp\\Scripts',
 'C:\\Windows\\system32\\python34.zip',
 'C:\\Python34\\DLLs',
 'C:\\Python34\\lib',
 'C:\
```
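The question is truncated, but the path listing shows that pytest inserted the tests\ directory rather than the project root. A common workaround (an assumption here, not part of the original post) is a conftest.py at the project root, since pytest inserts a conftest.py's directory into sys.path; the explicit equivalent looks like this sketch:

```python
# conftest.py at the project root -- a sketch of the workaround, done
# explicitly: prepend the directory containing this file to sys.path so
# sibling packages such as myapp\ become importable from tests\.
import os
import sys

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, PROJECT_ROOT)
```

With this in place, tests can do `import myapp` regardless of which directory pytest is launched from.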

How to organize fixtures when using pytest

Submitted 2019-12-10 13:47:49
Question: Fixtures tend to be small and reusable, and a given fixture can rely on other fixtures:

```python
@pytest.fixture
def Account(db, memcache):
    ...
```

I would like to organize my fixtures in modules and import them into a specific test file like so (e.g.):

```python
from .fixtures.models import Account
```

Unfortunately this does not seem to work. Instead I always have to import all the subordinate fixtures too, e.g.:

```python
from .fixtures.models import Account, db, memcache
```

What is the better approach to have fine-grained

How to let pytest rewrite assert in non-test modules

Submitted 2019-12-10 13:27:50
Question: We defined all our custom assertions in a separate Python file which is not a test module. For example:

```python
# custom_asserts.py
class CustomAsserts(object):
    def silly_assert(self, foo, bar):
        assert foo == bar, 'some error message'
```

If we use assert directly in tests, we get extra information about the AssertionError, which is very useful. Output of a direct assert in a test:

```
>       assert 'foo' == 'bar', 'some error message'
E       AssertionError: some error message
E       assert 'foo' == 'bar'
E         - foo
E         + bar
```
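pytest ships a documented hook for exactly this: assertion rewriting can be requested for helper modules that are not test files. A sketch, assuming the helper module is importable as custom_asserts:

```python
# conftest.py -- ask pytest to apply assertion rewriting to the helper
# module; this must run before anything imports custom_asserts, which is
# why conftest.py is the usual place for it.
import pytest

pytest.register_assert_rewrite("custom_asserts")
```

After this, asserts inside CustomAsserts produce the same rich introspection output as asserts written directly in test functions.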

How to generate coverage report for http based integration tests?

Submitted 2019-12-10 13:25:59
Question: I am writing integration tests for a project in which I make HTTP calls and test whether they succeeded. Since I am not importing any module and not calling functions directly, coverage.py reports 0% for these tests. How can I generate a coverage report for such HTTP-based integration tests?

Answer 1: The recipe is pretty much this:

1. Ensure the backend starts in code-coverage mode.
2. Run the tests.
3. Ensure the backend coverage is written to file.
4. Read the coverage from file
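Steps 1 and 3 of the recipe above can be sketched with the coverage.py API directly (assumes the coverage package is installed; handle_request here is a hypothetical stand-in for the real backend code the HTTP calls would exercise):

```python
# Sketch: run the backend under a Coverage object so the code exercised
# by incoming HTTP requests is measured, then save the data for a later
# 'coverage combine' / 'coverage report' step.
import coverage

cov = coverage.Coverage(data_file=".coverage.backend")
cov.start()

def handle_request(path):
    # stand-in for a real HTTP handler in the backend under test
    return {"path": path, "status": 200}

response = handle_request("/health")

cov.stop()
cov.save()  # writes .coverage.backend (step 3 of the recipe)
```

In a real deployment the start/stop calls would wrap the server process itself (e.g. via a sitecustomize hook or `coverage run server.py`) rather than a single handler.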

Async fixtures with pytest

Submitted 2019-12-10 12:58:30
Question: How do I define async fixtures and use them in async tests? The following code, all in the same file, fails miserably. Is the fixture called plainly by the test runner and not awaited?

```python
@pytest.fixture
async def create_x(api_client):
    x_id = await add_x(api_client)
    return api_client, x_id

async def test_app(create_x, auth):
    api_client, x_id = create_x
    resp = await api_client.get(f'my_res/{x_id}', headers=auth)
    assert resp.status == web.HTTPOk.status_code
```

producing:

```
==============================
```
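The suspicion in the question is right: a plain @pytest.fixture never awaits an async function, so the test receives the coroutine object itself instead of the (api_client, x_id) tuple. A minimal stdlib-only illustration of the failure mode (the usual fix is an async-aware plugin such as pytest-asyncio, which awaits the fixture for you):

```python
# Calling an async function returns a coroutine object; unless something
# awaits it, unpacking "api_client, x_id = create_x" fails. An async-aware
# runner effectively does the asyncio.run() step below.
import asyncio

async def create_x():
    # stand-in for the real fixture body ('await add_x(api_client)')
    return ("client", 42)

coro = create_x()                  # what a plain fixture hands the test
result = asyncio.run(create_x())   # what an awaiting runner produces
coro.close()                       # silence the 'never awaited' warning
```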

Django connections object does not see the tables of a second database during testing with pytest-django

Submitted 2019-12-10 12:45:33
Question: Bottom line: my Django connections object does not see the table relations of a second database during testing with pytest-django.

Overview: I have a problem where my Django connections object seems to get the wrong database information. I stumbled upon this issue when I queried a table in the 'customers' DB and Django told me the relation does not exist. The database section of settings.py was set up like below:

```python
DATABASES = {
    'default': {
        'NAME': 'user_data',
        'ENGINE': 'django.db
```

pytest + xdist without capturing output?

Submitted 2019-12-10 12:33:45
Question: I'm using pytest with pytest-xdist for parallel test runs. It doesn't seem to honour the -s option for passing standard output through to the terminal as the tests run. Is there any way to make this happen? I realise this could cause the output from the different processes to be jumbled in the terminal, but I'm OK with that.

Answer 1: I found a workaround, although not a full solution. By redirecting stdout to stderr, the output of print statements is displayed. This can be
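The answer is cut off, but the redirect it describes is one line; a sketch (the restore at the end is only for tidiness here, the workaround itself leaves stdout pointing at stderr for the whole session):

```python
# xdist workers capture stdout, but stderr passes through -- so routing
# print output to stderr makes it visible in the terminal, jumbled or not.
import sys

saved_stdout = sys.stdout
sys.stdout = sys.stderr
print("visible even under pytest-xdist")   # reaches the terminal via stderr
redirected = sys.stdout is sys.stderr
sys.stdout = saved_stdout
```

In practice this assignment would go at the top of conftest.py so it applies to every worker process.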

py.test teardown see if the test failed and print subprocess output

Submitted 2019-12-10 10:54:10
Question: I want to print subprocess output in the py.test teardown if the test failed, or send it to some other human-readable output. Is it possible to check in teardown whether the test failed? Is there any other way to get the output of subprocess commands only on test failures? My code:

```python
"""Test different scaffold operations."""
import subprocess

import pytest
from tempfile import mkdtemp


@pytest.fixture(scope='module')
def app_scaffold(request) -> str:
    """py.test fixture to create app scaffold."""
```
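One half of the problem, capturing the subprocess output so a teardown can replay it, is straightforward; a self-contained sketch (the failure check itself needs the pytest_runtest_makereport hookwrapper from the pytest docs recipe, which attaches rep_call to the test item; test_failed below is a stand-in for request.node.rep_call.failed):

```python
# Capture subprocess output instead of letting it stream to the console,
# then replay it only when the stand-in failure flag is set.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "print('scaffold created')"],
    capture_output=True, text=True, check=True,
)

test_failed = True  # hypothetical stand-in for request.node.rep_call.failed
if test_failed:
    print("captured subprocess output:", result.stdout.strip())
```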

ImportError: No module named base

Submitted 2019-12-10 10:23:54
Question: I'm trying to implement the PageObject pattern for my first login test. While running it I receive the following error:

```
>> py.test -v test_login.py
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.4
plugins: xdist
collected 0 items / 1 errors
==================================== ERRORS ====================================
____________________ ERROR collecting test_login_logout.py _____________________
test_login
```
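The traceback is truncated, but "No module named base" during collection usually means the page-object package is missing an __init__.py or is not on sys.path. A runnable sketch of a layout that imports cleanly (all names here are hypothetical, chosen to mirror a typical PageObject setup):

```python
# Build a throwaway 'pages' package with an __init__.py and a base module,
# then import it the way a test file would ('from pages.base import ...').
import importlib
import os
import sys
import tempfile
import textwrap

root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "pages")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()  # makes 'pages' a package
with open(os.path.join(pkg_dir, "base.py"), "w") as f:
    f.write(textwrap.dedent("""\
        class BasePage:
            url = "/login"
        """))

sys.path.insert(0, root)
base = importlib.import_module("pages.base")
```

Under Python 2.7 (as in the traceback), the __init__.py is mandatory; without it the pages directory is invisible to import.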

PyTest skip module_teardown()

Submitted 2019-12-10 10:22:04
Question: I have the following code in my tests module:

```python
def teardown_module():
    clean_database()

def test1():
    pass

def test2():
    assert 0
```

I want teardown_module() (some cleanup code) to be called only if some test failed; if all tests passed, it shouldn't be called. Can I do such a trick with pytest?

Answer 1: You can, but it is a little bit of a hack. As written here: http://pytest.org/latest/example/simple.html#making-test-result-information-available-in-fixtures you do the following, to
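The answer is cut off after the link, but the recipe it points to is short; a sketch of the hook it describes (the attribute names rep_setup / rep_call / rep_teardown come from the linked pytest docs page):

```python
# conftest.py -- wrap report generation and stash each phase's report on
# the test item, so fixtures or teardown code can later check
# item.rep_call.failed and clean up only after a failure.
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)
```

A module-scoped fixture can then iterate its collected items in its finalizer and call clean_database() only when some rep_call reports a failure.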