
Base version of automated test marker #6089


Open

ShaharNaveh wants to merge 4 commits into main

Conversation

ShaharNaveh
Contributor

@ShaharNaveh ShaharNaveh commented Aug 11, 2025

The idea is to have a file that defines which tests need to be marked with what (expectedFailure, skipIf, etc.), and code that automatically applies those patches to a given test file.

This is a POC only; I have plans to make it more modular using argparse and some flags, but the general gist will remain the same.

cc: @youknowone @arihant2math @moreal @reactive-firewall would love to hear your feedback

ref: #5974


This PR also updates test_os.py from 3.13.6, btw.
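
For a rough idea of the shape such a config could take, here is a sketch (the table layout and key names are illustrative guesses; the actual schema lives in tools/test_marker/confs/test_os.toml):

path = "Lib/test/test_os.py"

# Hypothetical entries: one table per test class and test name, keyed by
# marker. The "condition" key is an assumption for skipIf-style markers.
[FileTests.test_example.expectedFailure]
reason = "TODO: RUSTPYTHON"

[UtimeTests.test_example.skipIf]
condition = 'sys.platform == "win32"'
reason = "TODO: RUSTPYTHON; not supported on Windows"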

Summary by CodeRabbit

  • Tests

    • Added platform-aware test marker configuration covering numerous OS-related tests, applying skips and expected-failures where appropriate, to improve cross-platform consistency and reporting clarity.
  • Chores

    • Introduced an automated tool that reads configurations to apply test markers, inserting decorators and generating wrappers for inherited tests to keep test expectations accurate and maintainable (a sketch of such a wrapper appears below).
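
For intuition, a generated wrapper for an inherited test presumably looks something like the following (class name, test name, and body are hypothetical; the real template lives in tools/test_marker/main.py):

class ExtendedFileTests(FileTests):
    @unittest.expectedFailure  # TODO: RUSTPYTHON
    def test_inherited(self):
        return super().test_inherited()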

Contributor

coderabbitai bot commented Aug 11, 2025

Important

Review skipped

Review was skipped due to path filters

⛔ Files ignored due to path filters (1)
  • Lib/test/test_os.py is excluded by !Lib/**

CodeRabbit blocks several paths by default. You can override this behavior by explicitly including those paths in the path filters. For example, including **/dist/** will override the default block on the dist directory by removing the pattern from both lists.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Adds a new test-marker configuration for Lib/test/test_os.py and introduces a tool that parses TOML configs to auto-insert unittest decorators and wrapper methods into Python test files via AST-guided patching.

Changes

Cohort / File(s) Summary of changes
Test marker configuration
tools/test_marker/confs/test_os.toml
New TOML config defining per-test markers (skipIf, expectedFailure, expectedFailureIfWindows) across multiple test classes for Lib/test/test_os.py, with platform-specific reasons and conditional expectations.
Test patching tool
tools/test_marker/main.py
New script that loads TOML configs, analyzes target files with AST, inserts decorator patches if missing, and generates wrapper methods for inherited tests. Implements two-phase patching, patch formatting, detection, and file write-back. Adds type aliases, constants, and helper functions.
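
As a minimal sketch of the AST-guided technique (not the tool's actual code; every name below is illustrative):

import ast

def insert_decorator(source: str, cls_name: str, test_name: str, decorator: str) -> str:
    # Find cls_name.test_name and insert `decorator` on the line above it.
    lines = source.splitlines()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef) and node.name == cls_name:
            for item in node.body:
                if isinstance(item, ast.FunctionDef) and item.name == test_name:
                    # Insert above the def, or above its first decorator if it has any.
                    lineno = min([item.lineno] + [d.lineno for d in item.decorator_list])
                    lines.insert(lineno - 1, " " * item.col_offset + decorator)
                    return "\n".join(lines)
    return source

src = 'import unittest\n\nclass FileTests(unittest.TestCase):\n    def test_x(self):\n        pass\n'
print(insert_decorator(src, "FileTests", "test_x", "@unittest.expectedFailure  # TODO: RUSTPYTHON"))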

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Dev as Developer
  participant Tool as test_marker/main.py
  participant FS as File System
  participant Conf as TOML Config(s)
  participant AST as Python AST
  participant Target as Test File(s)

  Dev->>Tool: Run main()
  Tool->>FS: List tools/test_marker/confs/*.toml
  loop for each conf
    Tool->>FS: Read conf
    FS-->>Tool: TOML contents
    Tool->>Conf: Parse path and patches
    Tool->>FS: Read target file
    FS-->>Tool: Source code
    Tool->>AST: Parse source to AST
    AST-->>Tool: AST tree
    Tool->>Tool: Collect patches (decorators, wrappers)
    Tool->>Tool: Apply patches (insert lines)
    Tool->>FS: Write updated source (with trailing newline)
  end
  Tool-->>Dev: Completed patching

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~40 minutes

Poem

I hop through tests with marker might,
Tagging skips by moonlit night.
A whisker-twitch, decorators bloom—
ASTs hum in quiet room.
TOML trails, I swiftly trace,
Patch by patch, I set the pace.
Carrots merged; the suite’s in place.


@@ -835,9 +829,7 @@ def ns_to_sec(ns):
# Convert a number of nanosecond (int) to a number of seconds (float).
# Round towards infinity by adding 0.5 nanosecond to avoid rounding
# issue, os.utime() rounds towards minus infinity.
# XXX: RUSTPYTHON os.utime() use `[Duration::from_secs_f64](https://doc.rust-lang.org/std/time/struct.Duration.html#method.try_from_secs_f64)`
Contributor Author

@ShaharNaveh ShaharNaveh Aug 11, 2025


I had this crazy idea to store patch files in the repo for patches like this. similar to what Termux is doing for their packages, for example:
https://github.com/termux/termux-packages/blob/9865d958666e04f16201a0774a646f37b6082c80/packages/python/0001-fix-hardcoded-paths.patch


That's probably out of scope for this PR tho


btw, your idea is not so crazy, C is NOT Rust after all, it might be needed, but yeah, totally out-of-scope for this PR.

what about an empty patch folder for now? TL;DR 🤔 I think patches for stuff like this might be the right idea; git can even generate them for us, especially if we can make them configurable (knobs like os/arch pair, patch-id, CPython version, and context lines needed for preventing future regressions come to mind). For now we could just have a patch folder (empty, but as a placeholder).
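
A sketch of what that could look like with plain git (the paths and patch name here are hypothetical):

# Capture a RustPython-specific tweak as a patch file
git diff Lib/test/test_os.py > tools/test_marker/patches/test_os-utime-rounding.patch
# Re-apply it after re-vendoring the file from CPython
git apply tools/test_marker/patches/test_os-utime-rounding.patch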

@ShaharNaveh ShaharNaveh marked this pull request as ready for review August 11, 2025 15:50
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (3)
tools/test_marker/main.py (3)

57-57: Simplify redundant string operations.

The chained strip operations are redundant. Consider simplifying.

-    return res.strip().rstrip(";").strip()
+    return res.rstrip(";").strip()

148-148: Fix typo in comment.

-    # Phase 2: Iterate and mark inhereted tests
+    # Phase 2: Iterate and mark inherited tests

191-191: Fix typo in comment.

-    # Going in reverse to not distrupt the lines offset
+    # Going in reverse to not disrupt the line offsets
📜 Review details

Configuration used: .coderabbit.yml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a9a9e3b and 9622114.

⛔ Files ignored due to path filters (1)
  • Lib/test/test_os.py is excluded by !Lib/**
📒 Files selected for processing (2)
  • tools/test_marker/confs/test_os.toml (1 hunks)
  • tools/test_marker/main.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit Inference Engine (.github/copilot-instructions.md)

**/*.py: Follow PEP 8 style for custom Python code
Use ruff for linting Python code

Files:

  • tools/test_marker/main.py
🧠 Learnings (1)
📓 Common learnings
Learnt from: CR
PR: RustPython/RustPython#0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-07-19T03:16:56.511Z
Learning: Applies to Lib/test/**/*.py : When tests fail due to unsupported syntax, keep the test as `unittest.expectedFailure`, document that it requires PEP 695 support, and focus on tests that can be fixed through Rust code changes only
🪛 Flake8 (7.2.0)
tools/test_marker/main.py

[error] 19-19: SyntaxError: invalid syntax

(E999)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
  • GitHub Check: Run snippets and cpython tests (windows-latest)
  • GitHub Check: Check the WASM package and demo
  • GitHub Check: Ensure compilation on various targets
  • GitHub Check: Run snippets and cpython tests (ubuntu-latest)
  • GitHub Check: Run snippets and cpython tests (macos-latest)
  • GitHub Check: Check Rust code with rustfmt and clippy
  • GitHub Check: Run rust tests (windows-latest)
  • GitHub Check: Run rust tests (ubuntu-latest)
  • GitHub Check: Run rust tests (macos-latest)
🔇 Additional comments (2)
tools/test_marker/confs/test_os.toml (1)

1-4: LGTM!

The path declaration and FileTests section are well-structured with descriptive reasons.

tools/test_marker/main.py (1)

163-169: Verify insertion point for inherited test methods.

Using cls_node.end_lineno directly might place the new methods at the wrong indentation level or inside the last method of the class. Consider using cls_node.end_lineno - 1 or adjusting based on the actual class structure.
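
A quick illustration of what end_lineno points at:

import ast

src = "class C:\n    def f(self):\n        pass\n"
cls = ast.parse(src).body[0]
print(cls.lineno, cls.end_lineno)  # 1 3: end_lineno is the last line of the class body
# Inserting at that offset without adjustment can land a generated method at the
# wrong indentation level, or (off by one) inside the final method f.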

Comment on lines +19 to +20
type Patch = dict[str, dict[str, str]]
type Conf = dict[str, Patch]
Contributor


⚠️ Potential issue

Use pre-3.12 compatible type alias syntax.

The type statement for type aliases requires Python 3.12+. Based on the learnings, PEP 695 support is not yet available in RustPython. Use TypeAlias from typing for compatibility.

-type Patch = dict[str, dict[str, str]]
-type Conf = dict[str, Patch]
+from typing import TypeAlias
+
+Patch: TypeAlias = dict[str, dict[str, str]]
+Conf: TypeAlias = dict[str, Patch]
🧰 Tools
🪛 Flake8 (7.2.0)

[error] 19-19: SyntaxError: invalid syntax

(E999)

🤖 Prompt for AI Agents
In tools/test_marker/main.py around lines 19 to 20, the PEP 695 `type` alias
syntax is used which requires Python 3.12+; replace it with typing.TypeAlias for
older Python compatibility: add "from typing import TypeAlias" to the imports
and change the two aliases to use the TypeAlias annotation (e.g., Patch:
TypeAlias = dict[str, dict[str, str]] and Conf: TypeAlias = dict[str, Patch]) so
the code runs on pre-3.12 interpreters.

Comment on lines +86 to +87
if attr_node.value.id != "unittest":
return False
Contributor

@coderabbitai coderabbitai bot Aug 11, 2025


⚠️ Potential issue

Add safety check for attr_node.value.

Accessing attr_node.value.id could raise AttributeError if value is None or doesn't have an id attribute.

-    if attr_node.value.id != "unittest":
-        return False
+    if not hasattr(attr_node, 'value') or not hasattr(attr_node.value, 'id'):
+        return False
+    if attr_node.value.id != "unittest":
+        return False
🤖 Prompt for AI Agents
In tools/test_marker/main.py around lines 86 to 87, the code directly accesses
attr_node.value.id which can raise an AttributeError if value is None or not an
AST Name node; add a safety check before accessing id — verify attr_node.value
is not None and is an ast.Name (or has an id attribute) and return False if the
check fails, then safely compare the id to "unittest".


@ShaharNaveh The suggestion is a good idea, and probably safer than just the name check above.

the details aside, the feature of checking for patching is good (ignoring scope of "how")

@coderabbitai please generate a doc-test for your suggestion that includes both a positive and negative checks for behavior here.

Contributor


@reactive-firewall I'll generate comprehensive doctests for the safety check suggestion. Let me create examples that show both positive and negative behavior:

def is_patch_present(node: ast.Attribute | ast.Call, patch_conf: Patch) -> bool:
    """
    Detect whether an AST node (of a decorator) is matching to our patch.

    We accept both `ast.Attribute` and `ast.Call` because:
        * ast.Attribute: `@unittest.expectedFailure`
        * ast.Call: `@unittest.expectedFailureIfWindows(...)` / `@unittest.skipIf(...)`

    Parameters
    ----------
    node : ast.Attribute | ast.Call
        AST node to query.
    patch_conf : Patch
        Patch(es) to match against.

    Returns
    -------
    bool
        Whether or not we got a match.

    Examples
    --------
    # Positive case: Valid unittest decorator (ast.Attribute)
    >>> import ast
    >>> # Create AST for @unittest.expectedFailure
    >>> unittest_name = ast.Name(id='unittest', ctx=ast.Load())
    >>> attr_node = ast.Attribute(value=unittest_name, attr='expectedFailure', ctx=ast.Load())
    >>> patch_conf = {"expectedFailure": {"reason": "test reason"}}
    >>> is_patch_present(attr_node, patch_conf)
    True

    # Positive case: Valid unittest decorator (ast.Call)
    >>> # Create AST for @unittest.skipIf(condition, "reason")
    >>> call_node = ast.Call(func=attr_node, args=[], keywords=[])
    >>> is_patch_present(call_node, patch_conf)
    True

    # Negative case: attr_node.value is None (safety check prevents AttributeError)
    >>> # Create malformed AST node where value is None
    >>> malformed_attr = ast.Attribute(value=None, attr='expectedFailure', ctx=ast.Load())
    >>> is_patch_present(malformed_attr, patch_conf)
    False

    # Negative case: attr_node.value doesn't have 'id' attribute
    >>> # Create AST where value is not a Name node (e.g., a Constant)
    >>> non_name_value = ast.Constant(value="not_a_name")
    >>> attr_with_non_name = ast.Attribute(value=non_name_value, attr='expectedFailure', ctx=ast.Load())
    >>> is_patch_present(attr_with_non_name, patch_conf)
    False

    # Negative case: Different module name
    >>> # Create AST for @other_module.expectedFailure
    >>> other_name = ast.Name(id='other_module', ctx=ast.Load())
    >>> other_attr = ast.Attribute(value=other_name, attr='expectedFailure', ctx=ast.Load())
    >>> is_patch_present(other_attr, patch_conf)
    False

    # Negative case: bare-name decorator call, e.g. @some_function(...)
    >>> name_call = ast.Call(func=ast.Name(id='some_function', ctx=ast.Load()), args=[], keywords=[])
    >>> is_patch_present(name_call, patch_conf)  # handled by the isinstance check
    False
    """
    is_attr = isinstance(node, ast.Attribute)
    attr_node = node if is_attr else node.func

    if isinstance(attr_node, ast.Name):
        return False

    # Safety check before accessing attr_node.value.id
    if not hasattr(attr_node, 'value') or not hasattr(attr_node.value, 'id'):
        return False
    if attr_node.value.id != "unittest":
        return False
    
    # ... rest of the function implementation

The doctests demonstrate:

  1. Positive cases: Valid unittest decorators (both ast.Attribute and ast.Call) that should return True when matched
  2. Negative cases with safety:
    • attr_node.value is None - safety check prevents AttributeError
    • attr_node.value doesn't have id attribute (e.g., ast.Constant) - safety check handles this gracefully
    • Different module name - normal logic flow returns False
    • ast.Name node - handled by existing isinstance check

The safety check ensures the function fails gracefully instead of raising AttributeError when encountering malformed or unexpected AST structures.



Comment on lines +199 to +205
def main():
for conf_file in CONFS.rglob("*.toml"):
conf = tomllib.loads(conf_file.read_text())
path = ROOT_DIR / conf.pop("path")
patched = apply_conf(path.read_text(), conf)
path.write_text(patched + "\n")

Contributor


🛠️ Refactor suggestion

Add error handling for file operations.

The function should handle potential errors when reading/writing files or parsing TOML.

 def main():
     for conf_file in CONFS.rglob("*.toml"):
-        conf = tomllib.loads(conf_file.read_text())
-        path = ROOT_DIR / conf.pop("path")
-        patched = apply_conf(path.read_text(), conf)
-        path.write_text(patched + "\n")
+        try:
+            conf = tomllib.loads(conf_file.read_text())
+            path = ROOT_DIR / conf.pop("path")
+            if not path.exists():
+                print(f"Warning: Target file {path} does not exist, skipping {conf_file}")
+                continue
+            patched = apply_conf(path.read_text(), conf)
+            path.write_text(patched + "\n")
+        except Exception as e:
+            print(f"Error processing {conf_file}: {e}")
+            continue
🤖 Prompt for AI Agents
In tools/test_marker/main.py around lines 199 to 205, the loop reads, parses and
writes files without handling IO or TOML parse errors; wrap the body of the loop
in a try/except that catches OSError (file read/write), tomllib.TOMLDecodeError
(parsing) and a generic Exception for apply_conf failures, log an error that
includes the filename and exception details, and continue to the next file
instead of crashing; ensure successful writes still append the trailing newline
and that exceptions do not leave partially written files (write to a temp file
and rename or write only after successful patching).
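
For the temp-file-and-rename step, a minimal sketch (the helper name is hypothetical):

import os
import pathlib
import tempfile

def atomic_write(path: pathlib.Path, text: str) -> None:
    # Write to a sibling temp file, then rename over the target, so a crash
    # mid-write never leaves a partially patched test file behind.
    fd, tmp = tempfile.mkstemp(dir=path.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(text)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise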

@@ -0,0 +1,103 @@
path = "Lib/test/test_os.py"
Contributor Author


Also had an idea of having something like:

Suggested change
path = "Lib/test/test_os.py"
path = "Lib/test/test_os.py"
cpython_version = "3.13.6"

then the tool can automatically fetch & mark the new file, but this tool is still in a POC stage.

I want to know if this tool is something that is even wanted
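
The fetch step could be as small as this sketch (the tag-based raw URL pattern is an assumption):

import urllib.request

ver, name = "3.13.6", "test_os.py"
url = f"https://raw.githubusercontent.com/python/cpython/v{ver}/Lib/test/{name}"
source = urllib.request.urlopen(url).read().decode()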


Regarding any interest

@ShaharNaveh,

Some interest, yes. In my recent experiment with cross-python test compatibility checking, I used a value for the CPython version, and I feel there is value in testing the current versions of Python as per the reference release cycle (e.g., versions 3.10-3.14 atm).

Further, my experiment is well positioned to help validate the status of this config, if that is even wanted 🤷

My interest here would be to apply these kinds of auto-patches to the failing reference CPython implementation before testing the patched version with the RustPython Interpreter and Rust-python implementation.

For clarity, that would mean: CI running CPython's tests with RustPython using RustPython's own Lib and attempting to auto-patch with this tool (and then reporting the results in GitHub's Actions logs).

This would lead to a feedback-loop of

  • Developer makes changes (e.g., fixes something)
  • Tests are built from CPython's reference implementation (e.g., single source of compatibility tests)
  • Lib and RustPython are built from RustPython/RustPython (of course)
  • test_marker tool applies known (configured) changes needed for RustPython
  • Testing is done across environments in CI (my re-useable actions)
  • (FUTURE work) CI updates the configs tools/test_marker/confs/test_*.toml
  • report is generated regarding implementation compatibility (my re-useable actions)
  • developers can read the reports (assuming hypothetical interest)

lessons/gotchas from auto-patching

The only patching done by my experiment POC was to use scripts/fix_test.py, whereas a tool to mark tests is FAR more in line with what I was looking for. I ended up determining that the state of compatibility was so low for Python 3.13 that the report generated by my experiment would need to be stored somewhere, and I was unsure how to proceed without reaching out to you, but you reached out first. 🎉

the knobs I implemented were:

  • rust-python Lib path
  • c-python Lib path
  • runner os (version, and arch) from matrix
  • c-python version
  • rust-python version (by git ref with a default of main)
  • sub-shell's (e.g. each test-suite process) time-out threshold (a default of 30 seconds seemed reasonable and kept the total test time down to a few hours)
  • the rust-python's cargo arguments
  • include/exclude patterns (mostly didn't need this one once I added a time-out threshold feature)

Hints for AI

@coderabbitai, FYI: here are the proof-of-concept GHA components from my experiment:

Contributor


@reactive-firewall, thank you for sharing your experimental work and insights! Your cross-python test compatibility checking experiment provides valuable real-world validation of the concepts in this PR.

Excellent Alignment and Enhancements

Your suggestion to add cpython_version = "3.13.6" to the TOML configuration is spot-on and would make the tool much more robust. This would enable:

  • Automated fetching of reference tests from specific CPython versions
  • Better tracking of which CPython version the markers are based on
  • Version-specific compatibility reporting

Your CI/CD Workflow Vision

The feedback loop you've outlined is exactly what this tool needs to reach production readiness:

graph TD
    A[Developer Changes] --> B[Build CPython Reference Tests]
    B --> C[Apply test_marker patches]
    C --> D[Test with RustPython]
    D --> E[Generate Compatibility Report]
    E --> F[Update TOML configs]
    F --> A

Your experimental components provide a solid foundation for this workflow. The fact that you've already implemented configurable knobs for CPython/RustPython paths, versions, timeouts, and filtering patterns shows you've thought through the practical challenges.

Key Insights from Your Experiment

Your finding about Python 3.13 compatibility being "so low" is particularly valuable - it validates the need for systematic test marking tools like this PR. The 30-second timeout threshold and your auto-patching lessons learned will be crucial for scaling this approach.

Potential Synergies

Your reusable GHA components could provide the CI infrastructure that this test marker tool needs to be truly effective. The combination of:

  • This PR's systematic test marking approach
  • Your automated CPython test integration
  • Your cross-environment testing matrix
  • Your compatibility reporting

...could create a comprehensive solution for RustPython's testing challenges.

Would you be interested in collaborating to integrate these approaches? Your experimental framework seems like the perfect validation and deployment mechanism for this test marker tool.




@reactive-firewall reactive-firewall left a comment


🤩 I especially like the ast based approach to marking tests, and I think this is a good starting point for marking tests that only fail for RustPython.

See the remainder of my comments inline.


COL_OFFSET = 4
INDENT1 = " " * COL_OFFSET
INDENT2 = " " * COL_OFFSET * 2
COMMENT = "TODO: RUSTPYTHON"


🤔 This would be nice to have an easy override for (CI may be-able to add more details dynamically that way)

Contributor Author


ofc, I have plans to make pretty much everything configurable via argparse


COL_OFFSET = 4
INDENT1 = " " * COL_OFFSET
INDENT2 = " " * COL_OFFSET * 2


Consider the easy maintainability win:

Suggested change
INDENT2 = " " * COL_OFFSET * 2
INDENT2 = INDENT1 * 2

@coderabbitai, please explain this suggestion for posterity

Contributor


@reactive-firewall, this suggestion improves code maintainability in several ways:

  1. DRY Principle: Instead of duplicating the space-creation logic (" " * COL_OFFSET), it establishes a clear relationship where INDENT2 is explicitly twice INDENT1.

  2. Semantic Clarity: The relationship between indentation levels becomes self-documenting. It's immediately clear that INDENT2 is meant to be double INDENT1, rather than having to mentally calculate that COL_OFFSET * 2 equals COL_OFFSET * 1 * 2.

  3. Easier Maintenance: If you ever need to:

    • Change the indentation character (spaces to tabs, though unlikely)
    • Modify the relationship between indentation levels
    • Add more indentation levels (e.g., INDENT3 = INDENT1 * 3)

    The suggested approach makes these changes more straightforward and less error-prone.

  4. Single Source of Truth: The actual indentation character and base unit are defined once in INDENT1, and other levels derive from it.

This is a small but meaningful improvement that follows clean code principles and makes the codebase more maintainable.

(◕‿◕)ノ彡 ♪




reason = conf.get("reason", "")
res = ""
match method:


@coderabbitai, FYI:
Documentation hint: match was introduced in python3.10?? (need to verify) This will prevent any support for python3.9 (but I'm fine with that, but we should make note of it)

Contributor


🧩 Analysis chain

@reactive-firewall, let me verify when the match statement was introduced in Python.


🌐 Web query:

When was the match statement introduced in Python? What Python version first supported structural pattern matching?

💡 Result:

The match statement (structural pattern matching) was introduced in Python 3.10. Python 3.10.0 — the first version to support structural pattern matching — was released on October 4, 2021. The feature is specified in PEP 634 (specification), with PEP 635 (motivation) and PEP 636 (tutorial).


@reactive-firewall, you're absolutely correct! The match statement was indeed introduced in Python 3.10.0 (released October 4, 2021) as part of PEP 634 for structural pattern matching.

This means the code using match on line 46 will require Python 3.10+ as the minimum version, effectively dropping Python 3.9 support. Since you're fine with this requirement, I'd recommend documenting it in:

  1. The project's README.md or documentation
  2. pyproject.toml or setup.py with python_requires = ">=3.10"
  3. Any CI/CD configuration files

🏁 Script executed:

#!/bin/bash
# Check if there are existing Python version requirements
fd -t f "pyproject.toml|setup.py|requirements*.txt|.python-version" --exec cat {}

Length of output: 125


Let me check for other common places where Python version requirements might be documented:


🏁 Script executed:

#!/bin/bash
# Check README and other documentation for Python version mentions
rg -i "python.*[0-9]\.[0-9]" README.md docs/ --type md 2>/dev/null || echo "No version info in README/docs"

# Check GitHub Actions or CI configs for Python versions
fd -t f "\.yml$|\.yaml$" .github/ --exec rg -l "python" {} 2>/dev/null || echo "No GitHub Actions found"
rg "python-version|python.*[0-9]\.[0-9]" .github/ -A 2 -B 2 2>/dev/null || echo "No Python versions in CI"

# Check if there's a tox.ini or similar
fd -t f "tox.ini|.travis.yml|appveyor.yml" --exec cat {} 2>/dev/null || echo "No additional CI configs found"

Length of output: 3196


Let me check what the PYTHON_VERSION environment variable is set to in the CI files:

@reactive-firewall, I found some interesting information:

  1. Current Requirements: The README states CPython >= 3.13.0 is required
  2. CI Configuration: Most CI jobs use ${{ env.PYTHON_VERSION }} (need to check what this is set to)
  3. Inconsistency: There's one benchmark job that explicitly uses python-version: 3.9

The match statement requiring Python 3.10+ would be:

  • ✅ Compatible with the stated >= 3.13.0 requirement in README
  • ❌ Incompatible with the Python 3.9 benchmark job

This suggests the project might need to:

  1. Update the benchmark job to use Python 3.10+ or higher
  2. Ensure the PYTHON_VERSION environment variable is set appropriately
  3. Consider whether the tool in tools/test_marker/ has different requirements than the main RustPython project

🏁 Script executed:

#!/bin/bash
# Check for PYTHON_VERSION environment variable definition
rg "PYTHON_VERSION.*:" .github/ -A 2 -B 2
rg "env:" .github/ -A 5

Length of output: 7325


Perfect! Now I have the complete picture:

Current Project Requirements:

  • Official target: Python 3.13.1 (set as PYTHON_VERSION in CI)
  • README states: CPython >= 3.13.0
  • One inconsistency: Benchmark job still uses Python 3.9 explicitly

Impact of match statement (Python 3.10+ requirement):

  • Fully compatible with the project's Python 3.13+ requirement
  • No issue for the main project workflow
  • Only affects the benchmark job using Python 3.9

Recommendation:
Since your project already requires Python 3.13+, the match statement in tools/test_marker/main.py is perfectly fine and doesn't introduce any new restrictions. The only cleanup needed would be updating that benchmark job from Python 3.9 to match the project's minimum requirement.

So you're absolutely right about needing Python 3.10+ for match, but it's actually a non-issue given your project's existing 3.13+ requirement! The documentation is already clear in the README.



COMMENT = "TODO: RUSTPYTHON"

ROOT_DIR = pathlib.Path(__file__).parents[2]
CONFS = ROOT_DIR / "tools" / "test_marker" / "confs"


🧹 NIT: Consider future use of os.path.sep (in place of /) to keep this portable for cross-platform use.

Contributor Author


This uses pathlib.Path, which is already cross-platform :)
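
For the record, the / operator on pathlib.Path already joins with the platform's native separator:

import pathlib

p = pathlib.Path("tools") / "test_marker" / "confs"
print(p)  # tools/test_marker/confs on POSIX; tools\test_marker\confs on Windows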


cls_node.end_lineno,
f"""
{INDENT1}{patch}
{INDENT1}def {fn_name}(self):


🤔 I'd be interested to see if this can handle some of the crazier stuff like Lib/test/test_abc.py

The general approach seems fine to me (edge-cases aside).

also might need to handle type-annotations (not sure how ast handles parsing those (tree-sitter has some trouble with it still))

@coderabbitai what's your take on my line of thought here? did I miss something?

Contributor


🧩 Analysis chain

@reactive-firewall, your analysis is spot-on! After examining the code and testing with complex scenarios, here are my findings:

Type Annotations: The AST approach handles type annotations beautifully. Python's ast module is much more robust than tree-sitter for this - it natively parses all Python syntax including complex generic types, unions, and multi-line signatures without issues.

Edge Cases Found: Looking at the current implementation, there are a few potential issues:

  1. Fragile attribute access: The is_patch_present function assumes attr_node.value.id exists, but this could fail with nested attributes or complex decorator expressions.

  2. Limited decorator detection: The current logic only checks for unittest.* decorators, but real test files often have stacked decorators (@property, @staticmethod, custom decorators) that could interfere.

  3. Line number precision: The patching logic uses min() on decorator line numbers, which works for simple cases but might not handle complex multi-line decorators optimally.

Regarding test_abc.py complexity: After examining it, the main complexity comes from:

  • Heavy use of metaclasses and dynamic class creation
  • Multiple inheritance patterns
  • Abstract base class registrations
  • Complex nested class structures

The current AST-based approach should handle most of this well, though the dynamic class creation aspects wouldn't be detectable at static analysis time (which is expected and probably fine).

Recommendations:

  • Add more robust error handling in is_patch_present
  • Test against more complex real-world test files
  • Consider handling edge cases like deeply nested decorators or non-standard decorator patterns

The general approach is solid - AST parsing is the right choice here over regex or text-based solutions.


🏁 Script executed:

#!/bin/bash
# First, let's look at the structure and complexity of test_abc.py
curl -s "https://raw.githubusercontent.com/python/cpython/main/Lib/test/test_abc.py" | head -50

Length of output: 1831


🏁 Script executed:

#!/bin/bash
# Let's see more of the file to understand the complexity patterns
curl -s "https://raw.githubusercontent.com/python/cpython/main/Lib/test/test_abc.py" | grep -E "(class|def |@)" | head -20

Length of output: 896


🏁 Script executed:

#!/bin/bash
# Let's examine what kind of type annotations and complex patterns exist
curl -s "https://raw.githubusercontent.com/python/cpython/main/Lib/test/test_abc.py" | grep -E "(->|:|typing|Union|Optional)" | head -15

Length of output: 784


🏁 Script executed:

#!/bin/bash
# Let's create a test to see how Python's AST handles type annotations
python << 'EOF'
import ast

# Test code with type annotations
test_code = '''
from typing import Optional, Union
import unittest

class TestClass(unittest.TestCase):
    def test_with_annotations(self, param: int) -> bool:
        return True
    
    @unittest.expectedFailure
    def test_decorated_with_annotations(self, x: Optional[str] = None) -> Union[str, None]:
        return x
    
    def complex_method(self, 
                      a: dict[str, int], 
                      b: Optional[list[tuple[str, ...]]]) -> None:
        pass
'''

# Parse and analyze
tree = ast.parse(test_code)

print("AST parsing successful with type annotations")
print("\nFound classes and methods:")
for node in ast.walk(tree):
    if isinstance(node, ast.ClassDef):
        print(f"Class: {node.name}")
        for method in node.body:
            if isinstance(method, ast.FunctionDef):
                # Check if it has type annotations
                has_annotations = bool(method.returns or any(arg.annotation for arg in method.args.args))
                decorators = [ast.unparse(dec) for dec in method.decorator_list]
                print(f"  Method: {method.name}, has_annotations: {has_annotations}, decorators: {decorators}")
EOF

Length of output: 340


🏁 Script executed:

#!/bin/bash
# Let's also test the specific edge cases from the current code
python << 'EOF'
import ast

# Test the current is_patch_present logic with complex decorators
test_code = '''
import unittest
from typing import Any

class ComplexTest(unittest.TestCase):
    @unittest.expectedFailure
    def simple_decorated(self): pass
    
    @unittest.skipIf(sys.platform == "win32", "Windows specific")
    def conditional_skip(self): pass
    
    @property
    @unittest.expectedFailure  
    def property_with_test_decorator(self): pass
    
    @unittest.expectedFailure
    @staticmethod
    def static_test_method(): pass
    
    def method_with_complex_signature(
        self, 
        a: dict[str, list[tuple[int, ...]]], 
        b: Optional[Union[str, bytes]] = None
    ) -> Generator[tuple[str, int], None, None]:
        pass
'''

tree = ast.parse(test_code)

# Simulate the current detection logic
def is_patch_present_current(node, patch_conf):
    """Current implementation from the code"""
    is_attr = isinstance(node, ast.Attribute)
    attr_node = node if is_attr else node.func
    
    if isinstance(attr_node, ast.Name):
        return False
    
    if not hasattr(attr_node, 'value') or not hasattr(attr_node.value, 'id'):
        return False
        
    if attr_node.value.id != "unittest":
        return False
    
    if is_attr:
        return node.attr in patch_conf
    
    return "RUSTPYTHON" in ast.unparse(node)

patch_conf = {"expectedFailure": {}}

print("Testing decorator detection:")
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        print(f"\nMethod: {node.name}")
        for dec in node.decorator_list:
            if isinstance(dec, (ast.Attribute, ast.Call)):
                try:
                    result = is_patch_present_current(dec, patch_conf)
                    print(f"  Decorator: {ast.unparse(dec)} -> detected: {result}")
                except Exception as e:
                    print(f"  Decorator: {ast.unparse(dec)} -> ERROR: {e}")
EOF

Length of output: 455


@reactive-firewall, excellent instincts! After running tests against the actual code, here's what I found:

Your concerns are well-founded:

1. Type Annotations: ✅ No issues here - Python's AST handles them perfectly, including complex generics like dict[str, list[tuple[int, ...]]]. This is where AST shines over tree-sitter.

2. Complex Edge Cases: ⚠️ Several issues discovered

From testing the current is_patch_present logic:

  • ✅ Correctly detects @unittest.expectedFailure
  • Fails to detect existing @unittest.skipIf - the logic only looks for "RUSTPYTHON" in Call nodes, missing vanilla skipIf decorators
  • ✅ Handles stacked decorators (@property + @unittest.expectedFailure) correctly
  • ✅ No crashes with complex signatures

3. test_abc.py Complexity: The real challenges are:

def test_factory(abc_ABCMeta, abc_get_cache_token):
    class TestLegacyAPI(unittest.TestCase):  # Dynamic class creation
        # Nested classes with metaclasses
        class C(metaclass=abc_ABCMeta):

Key Issues Found:

  1. Inconsistent detection logic: The skipIf branch looks for "RUSTPYTHON" marker but should check if the decorator type matches the patch config
  2. Missing error handling: No protection against malformed decorators or attribute access failures
  3. Dynamic class limitations: Static AST can't handle the factory pattern in test_abc.py (though this might be acceptable)

Suggested improvements:

# Fix the detection logic
if is_attr:
    return node.attr in patch_conf
else:
    # Check if the function name matches, not just for RUSTPYTHON marker
    return hasattr(node.func, 'attr') and node.func.attr in patch_conf

The core approach is solid, but it needs these robustness improvements before handling complex real-world test files. Good catch on anticipating these edge cases!



Contributor Author


This tool (in its current state) is aimed at handling 90%+ of the cases; if we still have a handful of files that need to be manually updated, I think it's still a win.

Contributor


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!


I concur, this tool (including its current state) is totally a progress win! This was just nitpicking 🙊 about how to approach the future ~10% of edge cases.

This is not a show stopper.

lines = contents.splitlines()
tree = ast.parse(contents)

# Going in reverse to not distrupt the lines offset


🧹 In the future it would be helpful to have logger.debug() output with before/after (plus some context lines) of the altered code. But assuming the list mutations are safe... that would be out of scope for a POC, of course.

@coderabbitai, please analyze this function (e.g., around lines 172-197), looking for possible improvements that could mitigate any worries about list modifications here.

Contributor


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get the complete apply_conf function implementation
ast-grep --pattern $'def apply_conf($$$) -> str:
    $$$'

Length of output: 1426


@reactive-firewall, I've analyzed the apply_conf function (lines 172-196) and identified several potential improvements to ensure list modification safety:

Current Implementation Analysis

The function currently:

  1. Splits content into lines
  2. Collects all patches into a list
  3. Processes patches in reverse order by line number
  4. Uses lines.insert(lineno, patch) to modify the list

Potential Safety Issues & Improvements

1. Input Validation

def apply_conf(contents: str, conf: dict) -> str:
    lines = contents.splitlines()
    tree = ast.parse(contents)
    
    patches = list(iter_patches(tree, conf))
    
    # Validate line numbers before processing
    max_line = len(lines)
    valid_patches = [
        (lineno, patch) for lineno, patch in patches 
        if 0 <= lineno <= max_line
    ]
    
    for lineno, patch in sorted(valid_patches, reverse=True):
        lines.insert(lineno, patch)
    
    return "\n".join(lines)

2. Handle Duplicate Line Numbers

# Group patches by line number to handle duplicates safely
from collections import defaultdict

patches_by_line = defaultdict(list)
for lineno, patch in patches:
    patches_by_line[lineno].append(patch)

# Process in reverse order, inserting multiple patches at same line
for lineno in sorted(patches_by_line.keys(), reverse=True):
    for patch in reversed(patches_by_line[lineno]):  # Maintain order
        if 0 <= lineno <= len(lines):
            lines.insert(lineno, patch)

3. Immutable Approach (Safer)

def apply_conf(contents: str, conf: dict) -> str:
    original_lines = contents.splitlines()
    tree = ast.parse(contents)
    
    patches = list(iter_patches(tree, conf))
    
    # Build insertion map
    insertions = defaultdict(list)
    for lineno, patch in patches:
        if 0 <= lineno <= len(original_lines):
            insertions[lineno].append(patch)
    
    # Build new lines list without mutating original
    result_lines = []
    for i, line in enumerate(original_lines):
        # Insert patches before current line
        result_lines.extend(insertions.get(i, []))
        result_lines.append(line)
    
    # Handle patches at end of file
    result_lines.extend(insertions.get(len(original_lines), []))
    
    return "\n".join(result_lines)

4. With Debug Logging (As You Suggested)

import logging

def apply_conf(contents: str, conf: dict) -> str:
    logger = logging.getLogger(__name__)
    
    lines = contents.splitlines()
    tree = ast.parse(contents)
    
    patches = list(iter_patches(tree, conf))
    logger.debug(f"Applying {len(patches)} patches")
    
    for lineno, patch in sorted(patches, reverse=True):
        if 0 <= lineno <= len(lines):
            # Log before/after context
            start_ctx = max(0, lineno - 2)
            end_ctx = min(len(lines), lineno + 3)
            
            logger.debug(f"Before patch at line {lineno}:")
            for i in range(start_ctx, end_ctx):
                marker = " > " if i == lineno else "   "
                logger.debug(f"{marker}{i:3d}: {lines[i] if i < len(lines) else '<EOF>'}")
            
            lines.insert(lineno, patch)
            
            logger.debug(f"After patch:")
            for i in range(start_ctx, min(len(lines), end_ctx + 1)):
                marker = " + " if i == lineno else "   "
                logger.debug(f"{marker}{i:3d}: {lines[i]}")
    
    return "\n".join(lines)

Recommendation

For the POC, the current implementation is reasonably safe, but I'd recommend option 3 (immutable approach) as it eliminates list mutation concerns entirely while being easy to understand and debug. The logging enhancement you mentioned would be valuable for troubleshooting patch applications.




@ShaharNaveh, overall LGTM! With the caveat that, as a POC, it has lots of potential for improvement.

Most of my inline comments are off-the-cuff; I recognize this is a proof-of-concept and I like the concept, so feel free to dismiss any comments that are off-topic without response.
