Please open notebook rsepython-s2r3.ipynb
Frameworks should simplify our lives:
test --only "tests about fields"
pytest is a recommended Python testing framework.
We can use its tools for on-the-fly tests directly in the notebook. This, happily, includes the negative-tests example we were looking for a moment ago.
def I_only_accept_positive_numbers(number):
    # Check input
    if number < 0:
        raise ValueError("Input {} is negative".format(number))
    # Do something

from pytest import raises

with raises(ValueError):
    I_only_accept_positive_numbers(-5)
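pytest's raises also accepts a match argument, a regular expression checked against the exception's message, which makes a negative test more precise. A minimal sketch (the helper function here is illustrative, mirroring the one above):

```python
from pytest import raises

def reject_negatives(number):
    # Hypothetical helper, mirroring I_only_accept_positive_numbers above
    if number < 0:
        raise ValueError("Input {} is negative".format(number))

# The test fails unless a ValueError whose message matches "negative" is raised
with raises(ValueError, match="negative"):
    reject_negatives(-5)
```

This guards against the function raising the right exception type for the wrong reason.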
But the real power comes when we write a test file alongside our code files in our homemade packages:
%%bash
mkdir -p saskatchewan
touch saskatchewan/__init__.py
%%writefile saskatchewan/overlap.py
def overlap(field1, field2):
    left1, bottom1, top1, right1 = field1
    left2, bottom2, top2, right2 = field2
    overlap_left = max(left1, left2)
    overlap_bottom = max(bottom1, bottom2)
    overlap_right = min(right1, right2)
    overlap_top = min(top1, top2)
    # Here's our wrong code again
    overlap_height = (overlap_top - overlap_bottom)
    overlap_width = (overlap_right - overlap_left)
    return overlap_height * overlap_width
Writing saskatchewan/overlap.py
%%writefile saskatchewan/test_overlap.py
from .overlap import overlap

def test_full_overlap():
    assert overlap((1., 1., 4., 4.), (2., 2., 3., 3.)) == 1.0

def test_partial_overlap():
    assert overlap((1, 1, 4, 4), (2, 2, 3, 4.5)) == 2.0

def test_no_overlap():
    assert overlap((1, 1, 4, 4), (4.5, 4.5, 5, 5)) == 0.0
Writing saskatchewan/test_overlap.py
%%bash
cd saskatchewan
py.test
============================= test session starts ==============================
platform linux -- Python 3.7.0, pytest-3.9.3, py-1.7.0, pluggy-0.8.
rootdir: /research-se-python/section2/saskatchewan, inifile:
collected 3 items
test_overlap.py ..F
=================================== FAILURES ===================================
___________ test_no_overlap ____________
    def test_no_overlap():
>       assert overlap((1,1,4,4),(4.5,4.5,5,5)) == 0.0
E       assert 0.25 == 0.0
E        +  where 0.25 = overlap((1, 1, 4, 4), (4.5, 4.5, 5, 5))
test_overlap.py:10: AssertionError
====================== 1 failed, 2 passed in 0.03 seconds ======================
Note that it reported which test had failed, how many tests ran, and how many failed.
The symbol ..F
means there were three tests, of which the third one failed.
Pytest discovers tests automatically: it looks for files named
test_*.py
and runs any functions named
test_*
inside them.
Some options:
py.test --help
py.test -k foo
# tests with 'foo' in the test name

Floating points are inaccurate representations of real numbers:
1.0 == 0.99999999999999999
is true to the last bit.
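You can check this in the notebook directly; the second comparison below is the classic counterexample showing that equality breaks as soon as arithmetic is involved:

```python
# The literal 0.99999999999999999 rounds to the same double as 1.0
print(1.0 == 0.99999999999999999)  # True

# ...but rounding error accumulates in arithmetic
print(0.1 + 0.2 == 0.3)            # False
print(0.1 + 0.2)                   # 0.30000000000000004
```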
This can lead to numerical errors during calculations:
1000.0 * 1.0 - 1000.0 * 0.9999999999999998
2.2737367544323206e-13
1000.0 * (1.0 - 0.9999999999999998)
2.220446049250313e-13
Both results are wrong: 2e-13 is the correct answer.
The size of the error will depend on the magnitude of the floating points:
1000.0 * 1e5 - 1000.0 * 0.9999999999999998e5
1.4901161193847656e-08
The result should be 2e-8.
Use pytest's approx, which by default allows a relative tolerance of 1e-6:
from pytest import approx
assert 0.7 == approx(0.7 + 1e-7)
Or be more explicit:
magnitude = 0.7
assert magnitude == approx(0.701, rel=0.1, abs=0.1)
Choosing tolerances is a big area of debate.
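Inside a test file, these comparisons look like any other assertion. A minimal sketch (the test names are illustrative; note that approx also applies a small default absolute tolerance):

```python
from pytest import approx

def test_relative_tolerance():
    # The floating-point subtraction from above is close to 2e-13,
    # but not bit-for-bit equal; a 20% relative tolerance absorbs the error
    assert 1000.0 * 1.0 - 1000.0 * 0.9999999999999998 == approx(2e-13, rel=0.2)

def test_absolute_tolerance():
    # Near zero a relative tolerance is useless (rel * 0 == 0),
    # so an absolute tolerance is the right tool
    assert 1e-10 == approx(0.0, abs=1e-9)
```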
Numerical vectors are best represented using numpy.
from numpy import array, pi
vector_of_reals = array([0.1, 0.2, 0.3, 0.4]) * pi
Numpy ships with a number of assertions (in numpy.testing) to make comparison easy:
from numpy import array, pi
from numpy.testing import assert_allclose
expected = array([0.1, 0.2, 0.3, 0.4, 1e-12]) * pi
actual = array([0.1, 0.2, 0.3, 0.4, 2e-12]) * pi
actual[:-1] += 1e-6
assert_allclose(actual, expected, rtol=1e-5, atol=1e-8)
It compares the difference between actual and expected to atol + rtol * abs(expected).
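The same check can be written out by hand to see that formula at work; a sketch (the array values here are illustrative):

```python
import numpy as np

expected = np.array([0.1, 0.2, 0.3, 0.4]) * np.pi
actual = expected + 1e-6  # perturb every element slightly
rtol, atol = 1e-5, 1e-8

# assert_allclose passes when |actual - expected| <= atol + rtol * |expected|
tolerance = atol + rtol * np.abs(expected)
assert np.all(np.abs(actual - expected) <= tolerance)
```

Because the tolerance scales with abs(expected), the same rtol works across elements of very different magnitudes, while atol catches elements near zero.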
Next: Reading - Mocks