pytest

Use ipdb instead of pdb with py.test --pdb option

I want to use ipdb instead of pdb with the py.test --pdb option. Is this possible? If so, how? Clearly, I can put import ipdb; ipdb.set_trace() in the code, but that requires me to run the test, watch it fail, open a file, find the point of failure in said file, write the above line, and re-run the tests. That is a lot of hassle compared to something that bypasses all of it.

Have you tried pytest-ipdb? It looks like exactly what you are looking for.

Use this option to set a custom debugger: --pdbcls=IPython.terminal.debugger:Pdb. It can also be included in pytest.ini using addopts:

```ini
[pytest]
addopts = --pdbcls=IPython.terminal.debugger:Pdb
```
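For a one-off run, the same debugger class can also be passed directly on the command line together with --pdb (a usage sketch of the option described above):

```
pytest --pdb --pdbcls=IPython.terminal.debugger:Pdb
```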

How to make pytest wait for (manual) user action?

We are successfully using pytest (Python 3) to run a test suite testing some hardware devices (electronics). For a subset of these tests, we need the tester to change the hardware arrangement and afterwards change it back. My approach was to use a module-level fixture attached to the tests in question (which are all in a separate module), with two input calls:

```python
@pytest.fixture(scope="module")
def disconnect_component():
    input('Disconnect component, then press enter')
    yield  # At this point all the tests with this fixture are run
    input('Connect component again, then press enter')
```

When running this, however, pytest's output capturing gets in the way of the interactive input() calls.
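A commonly suggested workaround (a sketch, assuming pytest ≥ 3.3, where the capture-manager plugin exposes suspend_global_capture/resume_global_capture) is to pause capturing around the prompts:

```python
import pytest

@pytest.fixture(scope="module")
def disconnect_component(pytestconfig):
    # temporarily lift pytest's stdin/stdout capture so input() works
    capmanager = pytestconfig.pluginmanager.getplugin("capturemanager")
    capmanager.suspend_global_capture(in_=True)
    input('Disconnect component, then press enter')
    capmanager.resume_global_capture()
    yield  # all tests using this fixture run here
    capmanager.suspend_global_capture(in_=True)
    input('Connect component again, then press enter')
    capmanager.resume_global_capture()
```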

pytest: Reusable tests for different implementations of the same interface

Imagine I have implemented a utility (maybe a class) called Bar in a module foo, and have written the following tests for it.

test_foo.py:

```python
from foo import Bar as Implementation
from pytest import mark

@mark.parametrize(<args>, <test data set 1>)
def test_one(<args>):
    <do something with Implementation and args>

@mark.parametrize(<args>, <test data set 2>)
def test_two(<args>):
    <do something else with Implementation and args>

<more such tests>
```

Now imagine that, in the future, I expect different implementations of the same interface, and I would like to run the same tests against each of them.
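The excerpt is cut off before any answer, but a common pattern for this is to parametrize a fixture over the implementations and let every test depend on it (a sketch; Baz is a hypothetical second implementation):

```python
import pytest
from foo import Bar
# from foo2 import Baz  # hypothetical second implementation

@pytest.fixture(params=[Bar])  # extend the list as implementations appear
def implementation(request):
    return request.param

def test_one(implementation):
    # every test written against `implementation` runs once per class
    assert implementation is not None
```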

pytest assert introspection in helper function

pytest does wonderful assert introspection, so it is easy to find differences in strings, especially if the difference is in whitespace. Now I use a slightly complicated test helper that I reuse in many test cases. The helper has its own module, too, and for that module I want to add assert introspection.

helpers.py:

```python
...
def my_helper():
    assert 'abcy' == 'abcx'
```

test_mycase.py:

```python
from .helpers import my_helper

def test_assert_in_tc():
    assert 'abcy' == 'abcx'

def test_assert_in_helper():
    my_helper()
```
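The excerpt ends before an answer; pytest's documented mechanism for this is pytest.register_assert_rewrite, which must run before the helper module is first imported, typically in conftest.py (a minimal sketch):

```python
# conftest.py
import pytest

# Instrument helpers.py with the same assert rewriting that test modules
# get, so failures inside my_helper() show full introspection. The name
# must match how the module is imported (e.g. "mypkg.helpers" in a package).
pytest.register_assert_rewrite("helpers")
```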

How to use py.test from Python?

I'm working on a project that recently switched to the py.test unit test framework. I was used to calling my tests from Eclipse, so that I could use the debugger (e.g. placing breakpoints to analyze how a test failure develops). Now this is no longer possible, since the only way to run the tests is via the command-line black box. Is there some way to use py.test from within Python, so that one is not forced to drop out of the IDE? The tests should of course not be run in a separate process.
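The original answer is truncated here; the documented in-process entry point is pytest.main, which runs the test session inside the current interpreter, so IDE breakpoints keep working (a minimal sketch; the tests/ path is a hypothetical example):

```python
import pytest

# Equivalent to running `pytest tests/ -x` from the shell, but in the
# current process, so an attached debugger sees the test code.
exit_code = pytest.main(["tests/", "-x"])
print("pytest exited with", exit_code)
```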

How to test a Connexion/Flask app?

I'm using the Connexion framework for Flask to build a microservice. I would like to write tests for my application using py.test. The pytest-flask docs say to create a fixture in conftest.py that creates the app, like so:

conftest.py:

```python
import pytest
from api.main import create_app

@pytest.fixture
def app():
    app = create_app()
    return app
```

In my test I'm using the client fixture like this:

test_api.py:

```python
def test_api_ping(client):
    res = client.get('/status')
    assert res.status == 200
```

However, this does not work as expected.
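The excerpt stops at the failure, but two things are worth checking (a sketch, under the assumption that create_app returns a connexion.FlaskApp rather than a plain Flask app): pytest-flask needs the underlying Flask instance, which Connexion exposes as the .app attribute, and the Werkzeug response carries the integer code in status_code, while res.status is the string '200 OK':

```python
# conftest.py
import pytest
from api.main import create_app

@pytest.fixture
def app():
    # unwrap the Flask instance inside the Connexion app
    # (assumption: create_app returns a connexion.FlaskApp)
    return create_app().app

# test_api.py
def test_api_ping(client):
    res = client.get('/status')
    assert res.status_code == 200  # status_code is an int; res.status is '200 OK'
```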

pytest: basic usage

```python
import pytest

# Rules for writing test cases with pytest:
# (1) Test file names must start with "test_" or end with "_test" (e.g. test_ab.py).
# (2) Test functions must start with "test_".
# (3) Test classes must start with "Test".

# @pytest.fixture(scope='...')
# function: runs around every test (the default scope)
# class:    runs once for all tests in a class
# module:   runs once for all tests in a module
# session:  runs once per session

@pytest.fixture(scope='function')
def setup_function(request):
    def teardown_function():
        print("teardown_function called.")
    request.addfinalizer(teardown_function)  # this nested function does the teardown work
    print('setup_function called.')

@pytest.mark.website
def test_1(setup_function):
    print('Test_1 called.')

# The excerpt is cut off at a module-scoped fixture; it would follow the
# same pattern as the function-scoped one above:
@pytest.fixture(scope='module')
def setup_module(request):
    def teardown_module():
        print("teardown_module called.")
    request.addfinalizer(teardown_module)
    print('setup_module called.')
```
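To see the print output and run only the marked test, the usual invocation would be (a usage sketch; test_demo.py is a hypothetical file name, and the website marker should be registered in pytest.ini to avoid warnings):

```
pytest -s -m website test_demo.py
```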

How to run all PyTest tests even if some of them fail?

I am looking for a way to run all of my unit tests in PyTest, even if some of them fail. I know there must be a simple way to do this. I checked the CLI options and looked through this site for similar questions/answers but didn't see anything. Sorry if this has already been answered.

For example, consider the following code snippet, with the PyTest code alongside it:

```python
def parrot(i):
    return i

def test_parrot():
    assert parrot(0) == 0
    assert parrot(1) == 1
    assert parrot(2) == 1
    assert parrot(2) == 2
```

By default, execution stops at the first failing assert within the test function:

```
$ python -m pytest fail_me.py
```
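Note that pytest itself runs every test function even when earlier ones fail; it is only the asserts inside a single function that stop at the first failure. One way around that (a sketch of the usual approach) is to give each check its own test via parametrization:

```python
import pytest

def parrot(i):
    return i

@pytest.mark.parametrize("value,expected", [(0, 0), (1, 1), (2, 1), (2, 2)])
def test_parrot(value, expected):
    # each (value, expected) pair is reported as its own pass/fail result
    assert parrot(value) == expected
```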

How can I repeat each test multiple times in a py.test run?

I want to run each selected py.test item an arbitrary number of times, sequentially. I don't see any standard py.test mechanism for doing this. I attempted to do it in the pytest_collection_modifyitems() hook, by modifying the list of items passed in so that each item appears more than once. The first execution of a test item works as expected, but the repeats seem to cause problems for my code. Furthermore, I would prefer to have a unique test item object for each run, as I use id(item) as a key in
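The excerpt ends before any answer; the pytest-repeat plugin (pytest --count=N) covers exactly this today. A hand-rolled sketch of the same idea, which also produces a distinct item per run, could look like this (the --repeat option name is an assumption):

```python
# conftest.py
def pytest_addoption(parser):
    parser.addoption("--repeat", type=int, default=1,
                     help="run each test this many times")

def pytest_generate_tests(metafunc):
    count = metafunc.config.getoption("--repeat")
    if count > 1:
        # parametrizing on an otherwise unused fixture gives every
        # repetition its own, uniquely named test item
        metafunc.fixturenames.append("repeat_idx")
        metafunc.parametrize("repeat_idx", range(count))
```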

pytest: run tests in parallel

I want to run all my pytest tests in parallel instead of sequentially. My current setup looks like this:

```python
class Test1(OtherClass):

    @pytest.mark.parametrize("activity_name", ["activity1", "activity2"])
    @pytest.mark.flaky(reruns=1)
    def test_1(self, activity_name, generate_test_id):
        """ """
        test_id = generate_random_test_id()
        test_name = sys._getframe().f_code.co_name
        result_triggers = self.proxy(test_name, generate_test_id, test_id,
                                     activity_name)
        expected_items = ["response"]
        validate_response("triggers", result_triggers, expected_items)

    @pytest.mark.parametrize("activity_name", ["activity1",  # excerpt cut off here
```
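The question is truncated before any answer; the standard tool for parallelizing a pytest run is the pytest-xdist plugin, which distributes tests across worker processes:

```
pip install pytest-xdist
pytest -n 4      # distribute tests across 4 worker processes
pytest -n auto   # one worker per available CPU core
```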