
Speed up the Tier 2 interpreter #112287

Closed as not planned

Description

@gvanrossum

The Tier 2 interpreter hasn't really been optimized carefully. While the "optimizer" pass is intended to make the Tier 2 micro-code faster through things like guard elimination or constantification, we should also look into just making the Tier 2 interpreter itself faster -- possibly by changing the representation of executable traces held in the executor (the current format is identical to the IR, which is rather verbose, using 16 bytes per uop!), and possibly by just carefully tuning the interpreter. (For example, if the space of micro-opcode ordinals could overlap the space of Tier 1 bytecode ordinals, we could fit the Tier 2 opcode in one byte.)
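As a rough illustration (not part of the issue text, and not the interpreter's actual struct layout), the sketch below shows what a more compact trace entry could look like if micro-opcode ordinals shared the Tier 1 bytecode space; the struct name, field names, and 4-byte layout are assumptions made only to illustrate the size difference against the ~16-byte IR entry mentioned above:

```c
/* Hypothetical compact trace entry -- a sketch only, not the current IR
 * format.  If Tier 2 micro-opcode ordinals could overlap the Tier 1
 * bytecode space (i.e. stay below 256), the opcode would fit in a single
 * byte, much like a Tier 1 code unit. */
#include <stdint.h>

typedef struct {
    uint8_t  opcode;   /* micro-op ordinal, assumed to share the Tier 1 space */
    uint8_t  oparg;    /* small argument, as in Tier 1 code units */
    uint16_t operand;  /* hypothetical index into a per-trace side table for
                          larger operands that the IR stores inline */
} compact_uop;         /* 4 bytes per uop instead of ~16 */
```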

Linked PRs

Metadata

    Labels

    interpreter-core (Objects, Python, Grammar, and Parser dirs)
