Run Nengo's test suite #63
base: master
Conversation
```python
@pytest.fixture
def Simulator():
    return nengo_spinnaker.Simulator
```
Would it be possible to monkeypatch/subclass this class before you return it, rather than modifying the simulator itself? You could also use the fixture's finaliser to call `close()`, so that it runs even if the test itself threw an exception and never reached `close()`. That may help the situation with the `InsufficientResourceError`s too.
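A minimal sketch of that suggestion. `FakeSimulator` stands in for `nengo_spinnaker.Simulator`, and the `TrackedSimulator`/`FakeRequest` names are illustrative, not part of either library:

```python
# Sketch: the fixture returns a subclass that records every instance, and
# registers a finaliser that closes them all -- even if the test raised.
# FakeSimulator stands in for nengo_spinnaker.Simulator (an assumption).

class FakeSimulator:
    def __init__(self, *args, **kwargs):
        self.closed = False

    def close(self):
        self.closed = True


def Simulator(request):
    """Fixture body (decorate with @pytest.fixture in real code)."""
    created = []

    class TrackedSimulator(FakeSimulator):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            created.append(self)

    # The finaliser runs after the test finishes, whether it passed or threw.
    request.addfinalizer(lambda: [sim.close() for sim in created])
    return TrackedSimulator
```

With the real fixture, pytest supplies the `request` object; any stub with an `addfinalizer` method is enough to exercise the pattern.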
Should help with the `SpiNNakerMemoryError`s as well.
This was my original intention; here's what I had in my first draft, but it didn't work:

```python
@pytest.fixture
def Simulator(request):
    def _simulator(*args, **kwargs):
        sim = nengo_spinnaker.Simulator(*args, **kwargs)
        request.addfinalizer(sim.close)
        return sim
    return _simulator
```

But that was before I'd actually looked at `simulator.py`, so I realize now that I can just use `_close_open_simulators` as a finalizer. I'll try that out now.
Thanks, I'll have a look at the list of xfails later and try to extract a list of bugs I should open!
Force-pushed from `4599a86` to `381b540`.
Made a few more commits. With these, it's much more stable, but still not perfect (I get a few resource/memory errors, but far fewer). I'm going to add a pytest option so that these tests don't automatically get run when testing.
You can request this by passing `--nengo` to a py.test invocation. E.g., `py.test tests` runs all but the Nengo tests. `py.test tests --nengo` runs all tests, including the Nengo tests.
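One common way to wire up this kind of opt-in flag in a `conftest.py` is sketched below. This is an assumption about the implementation, not necessarily the commit's exact code, and the `test_nengo` nodeid check is illustrative:

```python
# conftest.py sketch: add a --nengo flag and skip the Nengo suite without it.
import pytest


def pytest_addoption(parser):
    parser.addoption("--nengo", action="store_true", default=False,
                     help="also run tests from Nengo's test suite")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--nengo"):
        return  # flag given: run everything, including the Nengo tests
    skip_nengo = pytest.mark.skip(reason="need --nengo option to run")
    for item in items:
        if "test_nengo" in item.nodeid:
            item.add_marker(skip_nengo)
```

Skipping (rather than deselecting) keeps the Nengo tests visible in the report, so it's obvious they exist but weren't run.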
I made another commit to add a […]. Currently this causes the tests to fail, because Nengo's test utilities didn't anticipate any […]. Also, in order to get this to work properly, I had to add a function ([…]).
This PR makes it possible to run tests from Nengo's test suite with `nengo_spinnaker.Simulator` instead of the default `nengo.Simulator`. To do so, execute […] from the `nengo_spinnaker` directory.

I'm marking this as a work-in-progress because:

- I couldn't figure out a way to call `sim.close()` at the end of every test function: to register a function to run after a test, the simulator has to exist before the test, but that can't happen because the tests themselves create the simulator. So, I hacked it in by closing any open simulators when you make a new one. This isn't quite enough, though; when running through the whole test suite, by the end you will get `InsufficientResourceError`s and `SpiNNakerMemoryError`s relatively frequently, and they get more frequent if you rerun the test suite. Resetting the board fixes it temporarily. Not sure if there's a better way to free resources to solve this issue?
- […] (`test_nengo.py`). It's quite possible that some of these tests can be made to pass with minor tweaks, so I wanted to put this out there to see if there are some easy tweaks that I can implement to raise the number of tests that pass.

For reference and/or interest, here's the full output from a run of the test suite. A lot of the assertions that fail for tolerance reasons have been marked as expected failures, but I thought showing the traceback would be helpful here.
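The "close any open simulators when you make a new one" workaround described above can be sketched like this (illustrative names only, not `nengo_spinnaker`'s actual code):

```python
# Sketch: constructing a new simulator first closes any still-open ones,
# so board resources are reclaimed even when a test never calls close().
class Simulator:
    _open = []  # class-level registry of unclosed simulators

    def __init__(self):
        while Simulator._open:  # the workaround: close stragglers first
            Simulator._open.pop().close()
        self.closed = False
        Simulator._open.append(self)

    def close(self):
        if not self.closed:
            self.closed = True
            if self in Simulator._open:
                Simulator._open.remove(self)
```

Note that this still leaves the final simulator of a session open until the next construction, which may be why occasional resource errors remain at the end of a full run.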