python-hypothesis

Exception handling and testing with pytest and hypothesis

☆樱花仙子☆ submitted on 2021-01-28 06:46:08
Question: I'm writing tests for a statistical analysis with hypothesis. Hypothesis led me to a ZeroDivisionError in my code when it is passed very sparse data, so I adapted my code to handle the exception; in my case, that means logging the reason and re-raising the exception:

    try:
        val = calc(data)
    except ZeroDivisionError:
        logger.error(f"check data: {data}, too sparse")
        raise

I need to pass the exception up through the call stack because the top-level caller needs to know there was an exception so that it …
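A test can assert that the exception actually propagates on degenerate input. Below is a minimal sketch of that pattern using pytest.raises inside a Hypothesis test; the calc function here is a hypothetical stand-in (anything that divides by a data-dependent quantity), not the asker's code:

    import pytest
    from hypothesis import given, strategies as st

    def calc(data):
        # Hypothetical: divides by the data's spread, so constant data
        # (max == min) raises ZeroDivisionError
        return sum(data) / (max(data) - min(data))

    @given(st.lists(st.integers(), min_size=1))
    def test_calc_propagates_zero_division(data):
        if max(data) == min(data):
            # The logged-and-reraised exception must reach the caller
            with pytest.raises(ZeroDivisionError):
                calc(data)
        else:
            calc(data)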

@composite vs flatmap in complex strategies

£可爱£侵袭症+ submitted on 2020-08-10 16:39:07
Question: hypothesis allows two different ways to define derived strategies, @composite and flatmap. As far as I can tell the former can do anything the latter can do. However, the implementation of the numpy arrays strategy speaks of some hidden costs:

    # We support passing strategies as arguments for convenience, or at least
    # for legacy reasons, but don't want to pay the perf cost of a composite
    # strategy (i.e. repeated argument handling and validation) when it's not
    # needed. So we get the best of …
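For reference, the same derived strategy can be written both ways. The sketch below (illustrative only, not the numpy implementation quoted above) builds a list together with a valid index into it:

    from hypothesis import strategies as st

    # @composite: draw values imperatively inside one function
    @st.composite
    def list_and_index(draw):
        xs = draw(st.lists(st.integers(), min_size=1))
        i = draw(st.integers(0, len(xs) - 1))
        return xs, i

    # flatmap: chain a second strategy off the first draw
    list_and_index_fm = st.lists(st.integers(), min_size=1).flatmap(
        lambda xs: st.integers(0, len(xs) - 1).map(lambda i: (xs, i))
    )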

Generating unique ids that are not repeated in hypothesis

萝らか妹 submitted on 2020-05-31 06:41:06
Question: I want to generate unique ids that aren't repeated. I tried to use st.uuids(). This is my code:

    class MyTest(<class that inherits from unittest.TestCase>):
        @hypothesis.seed(0)
        @hypothesis.settings(derandomize=True, use_coverage=False)
        @hypothesis.given(st.uuids())
        def test(self, test_id):
            logging.info('test_id %s', test_id)
            logging.info('self.ids %s', self.ids)
            if test_id in self.ids:
                logging.info('SHOULD NOT BE HERE')
            else:
                logging.info('HAPPY')
            self.ids.append(test_id)
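If the goal is a collection of ids that are unique, one option is to have the strategy enforce uniqueness directly instead of tracking state on the test class; a minimal sketch:

    from hypothesis import given, strategies as st

    @given(st.lists(st.uuids(), min_size=1, max_size=10, unique=True))
    def test_ids_are_unique(ids):
        # unique=True makes the strategy reject duplicate draws
        assert len(ids) == len(set(ids))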

Have a Strategy that does not uniformly choose between different strategies

爷,独闯天下 submitted on 2019-12-13 04:23:16
Question: I'd like to create a strategy C that chooses strategy A 90% of the time and strategy B 10% of the time. The random python library does not work even if I seed it, since each time the strategy produces values it generates the same value from random. I looked at the implementation of OneOfStrategy, and it uses

    i = cu.integer_range(data, 0, n - 1)

to randomly generate a number, where cu comes from the internals:

    import hypothesis.internal.conjecture.utils as cu

Would it be fine for my strategy …
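Without reaching into the internals, a rough 90/10 split can be expressed with the public API by drawing a selector and dispatching on it. A sketch, with strategy_a and strategy_b as hypothetical placeholders for A and B (note the weighting shapes generation, but Hypothesis is still free to shrink toward either branch):

    from hypothesis import strategies as st

    strategy_a = st.integers()   # placeholder for A
    strategy_b = st.text()       # placeholder for B

    # 0..8 selects A (90%), 9 selects B (10%)
    strategy_c = st.integers(0, 9).flatmap(
        lambda i: strategy_b if i == 9 else strategy_a
    )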

What does Flaky: Hypothesis test produces unreliable results mean?

余生长醉 submitted on 2019-12-12 09:53:11
Question: I am using the hypothesis python package for testing. I am getting the following error:

    Flaky: Hypothesis test_visiting produces unreliable results: Falsified on the first call but did not on a subsequent one

As far as I can tell, the test is working correctly. How do I get around this?

Answer 1: It means more or less what it says: you have a test which failed the first time but succeeded the second time when rerun with the same example. This could be a Hypothesis bug, but it usually isn't. The most common cause of this is that you have a test which depends on some external state - e.g. if you're using …
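To make the usual cause concrete, here is a deliberately flaky sketch: module-level state mutated by the test makes a falsifying example pass when it is replayed, which Hypothesis reports as Flaky:

    from hypothesis import given, strategies as st

    calls = []  # external state shared across all calls of the test

    @given(st.integers())
    def test_visiting(x):
        calls.append(x)
        # Fails the first time a value is seen; the rerun with the same
        # example then finds it in `calls` twice and passes -> Flaky
        assert calls.count(x) > 1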

test isolation between pytest-hypothesis runs

瘦欲@ submitted on 2019-12-10 20:29:22
Question: I just migrated a pytest test suite from quickcheck to hypothesis. This worked quite well (and immediately uncovered some hidden edge-case bugs), but one major difference I see is related to test isolation between the two property-testing libraries. quickcheck seems to simply run the test function multiple times with different parameter values, each time running my function-scoped fixtures; this also results in many more dots in pytest's output. hypothesis however seems to run only the body of the …
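This matches how @given works: the decorated function body runs once per generated example, while pytest sets up function-scoped fixtures only once for the whole test. One common workaround is to do per-example setup inside the body; a sketch, where fresh_state is a hypothetical stand-in for whatever the fixture used to build:

    from hypothesis import given, strategies as st

    def fresh_state():
        # Hypothetical per-example setup replacing a function-scoped fixture
        return {"items": []}

    @given(st.lists(st.integers()))
    def test_with_per_example_setup(xs):
        state = fresh_state()  # runs for every generated example
        state["items"].extend(xs)
        assert state["items"] == xs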

Generating list of lists with custom value limitations with Hypothesis

て烟熏妆下的殇ゞ submitted on 2019-12-10 01:42:55
Question: The Story: Currently, I have a function-under-test that expects a list of lists of integers with the following rules:

- the number of sublists (let's call it N) can be from 1 to 50
- the number of values inside sublists is the same for all sublists (rectangular form) and should be >= 0 and <= 5
- values inside sublists cannot be more than or equal to the total number of sublists; in other words, each value inside a sublist is an integer >= 0 and < N

Sample valid inputs:

    [[0]]
    [[2, 1], [2, 0], [3, 1], [1, 0]]
    [[1], [0]]

Sample invalid inputs:

    [[2]]         # 2 is more than N=1 (total number of sublists)
    [[0, 1], [2 …
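These constraints can be generated by drawing N and the common row width first, then building exactly N rows of that width with values below N; a sketch using @composite:

    from hypothesis import given, strategies as st

    @st.composite
    def grids(draw):
        n = draw(st.integers(1, 50))     # number of sublists
        width = draw(st.integers(0, 5))  # common sublist length
        row = st.lists(st.integers(0, n - 1), min_size=width, max_size=width)
        return draw(st.lists(row, min_size=n, max_size=n))

    @given(grids())
    def test_shape_and_bounds(grid):
        n = len(grid)
        assert 1 <= n <= 50
        assert all(len(r) == len(grid[0]) for r in grid)
        assert all(0 <= v < n for r in grid for v in r)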
