Convention for test data which uses APIs defined by Python stubs #862
This PR addresses #754 about testing typeshed stubs from the static analysis perspective. Currently we check only for syntax errors in stub files. The idea is to add test data for static analyzers, similar to how it's done in DefinitelyTyped for TypeScript.
The Travis tests are almost OK, except for pytype, which apparently doesn't yet support the comment-based type hints for functions that were added to PEP 484 last year. @matthiaskramm I was unable to find an issue about it in the pytype issue tracker. Could you take a look at this problem?
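For reference, the comment-based annotation syntax in question looks like this (the function itself is just a made-up illustration):

```python
from typing import Optional


def get_timeout(value, default=None):
    # type: (str, Optional[float]) -> Optional[float]
    """PEP 484 comment-based function annotations, usable on both Python 2 and 3."""
    return float(value) if value else default
```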
@vlasovskikh: pytype supports this now. We can export a new open-source release. Out of curiosity, why do you need support for this syntax? And why in Python 3.6? What happens if these comments are just ignored?
@matthiaskramm I used the comment-based syntax in a 3.6 test accidentally, by analogy to my other 2and3 test data. I use type hints for functions in test data since mypy doesn't check functions with no type hints by default. If the mypy guys have no objections, we can add this option for running mypy for typeshed tests.
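A small illustration of that default behavior (the test bodies are made up; mypy's `--check-untyped-defs` flag is one existing way to change it):

```python
import os.path


def test_unannotated():
    # Without any type hints, mypy skips this body by default, so a type
    # error in here would go unnoticed by the typeshed checks.
    assert os.path.join('a', 'b').endswith('b')


def test_annotated() -> None:
    # A return annotation is enough for mypy to type check the body.
    assert os.path.join('a', 'b').endswith('b')
```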
I'm still not excited about this. The tests just require you to say everything twice -- there's nothing that verifies that the tests actually match the implementation, so the problem remains essentially the same: if the author of a set of stubs misreads the docs, the stubs will be wrong, because the tests will be based on their misreading of the docs.
I think that these might only be worthwhile if we'd make it possible to run the tests using, say, pytest. This way we would be able to verify that both the tests and the stubs conform to the implementation. The stubs could still be too general or too narrow, but at least they can't be totally inaccurate. It would also be helpful to have tests that are expected to not pass type checking, to verify that bad code can be diagnosed correctly.

Again, having tests for stubs wouldn't be required, or maybe not even generally recommended, but they might be helpful in some cases, and they could make reviewing changes to stubs easier, as we wouldn't need to always manually verify that types are correct (or blindly trust the contributor). If something looks fishy in a PR, we could always ask the contributor to write some tests. Here's a hypothetical test case:
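A minimal sketch of what such a test case might look like (the choice of `os.path.splitext` is illustrative, not necessarily the exact example from this comment):

```python
import os.path
from typing import Tuple


def test_splitext() -> None:
    # Static check: the stub must declare a return type compatible with
    # Tuple[str, str], otherwise mypy reports an error on this line.
    result: Tuple[str, str] = os.path.splitext('archive.tar.gz')
    # Runtime check: pytest verifies the implementation's actual behavior.
    assert result == ('archive.tar', '.gz')
```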
When this test case is run using pytest, we verify that the implementation actually behaves this way. When we type check the test case using mypy, we verify that the types declared in the stubs are consistent with this usage.
We could also have tests that verify that invalid arguments are rejected. For example:
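A sketch of such a negative test (again just an illustration; mixing `str` and `bytes` arguments to `os.path.join` is a convenient concrete case of an invalid call):

```python
import os.path

import pytest


def test_join_rejects_mixed_str_and_bytes() -> None:
    # The implementation raises TypeError at runtime for mixed components;
    # a type checker is likewise expected to reject this call.
    with pytest.raises(TypeError):
        os.path.join('dir', b'file')
```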
When the test case is run using pytest, we'd fail the test unless the call actually raises the expected exception. When we'd type check the test case using mypy, mypy would recognize the invalid argument and report an error.
@JukkaL I like your idea and I'll experiment with it. It looks viable at least for stdlib stubs. With many incoming pull requests to the repository, I feel that having working code examples for suggested stubs is a good idea. As for third-party stubs, this may require installing unspecified versions of dependencies (including ones incompatible with one another).
Starting with stdlib sounds reasonable. For third-party packages, one option would be to use multiple virtualenvs, e.g. one per third-party package. We might have to do this anyway, since we could have two third-party packages with conflicting dependencies.
@gvanrossum @JukkaL @matthiaskramm I've sent another PR #917 that proposes both static and run-time tests. Closing this one.