gh-120144: Make it possible to use sys.monitoring for bdb and make it default for pdb #124533
Conversation
And of course we need documentation updates; I will do that later when the feature is accepted and the interface is decided.
It's after midnight, so I will test much later today. If all is OK using the default, I will patch IDLE to pass 'monitoring'.
Ran fine with the default backend. Not fine with 'monitoring'. EDIT: The remote execution process crashes because of an unrecoverable exception in the rpc code. Monitoring does not seem to work across the socket connection. Some of the debugger code likely needs a change (as pdb does). (But IDLE does not have to use 'monitoring'.)
Right - the most important thing is … I think at least part of the issue is multi-threading. For …
Hi @terryjreedy, I "fixed" the multi-threading issue. Well, by "fixed" I mean making it behave the same as before. Let me know if you have some time to test it out :)
The improvement you see here is entirely due to the changes in break_anywhere(self, frame) introduced in Python 3.14. If you try with the released alpha versions of 3.14, you get similar results to yours. I have tried your new bdb code, based on sys.monitoring, in different cases and, unfortunately, I did not see any improvements. Quite the opposite!
#124553 was merged after this PR so that's not possible (I did it btw). But yes, for this specific case, the changes to … What case did you try for this implementation? Like I mentioned, this is not the perfect solution; the major problem it solves is when a line event is triggered multiple times. So, for the same code, if you put a breakpoint at … When you say …
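(For context, here is a minimal sketch of the mechanism being referred to, i.e. why a monitoring backend can avoid paying repeatedly for line events: a `sys.monitoring` LINE callback can return `sys.monitoring.DISABLE` for locations that have no breakpoint, whereas a `sys.settrace` trace function is invoked for every executed line. The `BREAKPOINTS` table and the callback body are hypothetical illustrations, not the PR's actual code.)

```python
import sys

TOOL_ID = sys.monitoring.DEBUGGER_ID  # tool slot reserved for debuggers

# Hypothetical breakpoint table, just for this sketch.
BREAKPOINTS = {("example.py", 10)}

def line_callback(code, line_number):
    if (code.co_filename, line_number) not in BREAKPOINTS:
        # No breakpoint here: ask the interpreter never to report this exact
        # location again (until sys.monitoring.restart_events() is called).
        return sys.monitoring.DISABLE
    print(f"hit breakpoint at {code.co_filename}:{line_number}")

def start_monitoring():
    sys.monitoring.use_tool_id(TOOL_ID, "demo-debugger")
    sys.monitoring.register_callback(
        TOOL_ID, sys.monitoring.events.LINE, line_callback)
    sys.monitoring.set_events(TOOL_ID, sys.monitoring.events.LINE)
```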
Yes it does! Massive improvement in this case!
Could you please elaborate on how this is achieved?
One script where I saw a degradation of performance was the following:

```python
from timeit import timeit

class A(object):
    def __init__(self):
        self.value = 1

class B(A):
    @staticmethod
    def create_property(name):
        return property(
            lambda self: getattr(self.srg, name),
            lambda self, v: setattr(self.srg, name, v),
        )

    value = create_property.__func__("value")

    def __init__(self):
        self.srg = __import__("threading").local()
        self.value = 2
        super().__init__()

b1 = B()
b2 = B()
print(timeit("b1.value = 4; c = b1.value", number=100000, globals=globals()))
print(timeit("b1.v = 4; c = b1.v", number=100000, globals=globals()))
```

With a breakpoint on the last line, the monitoring-based Bdb takes four times as long to reach it. But let me add some more details about how I am testing. I am using a Bdb-based debugger in PyScripter, which is adapted for multi-threaded debugging. I am not using your modified bdb.py directly. Instead, I created a subclass of Bdb integrating your code (see fastbdb.zip). Since I wanted to backport your code to Python 3.12 and 3.13, I replaced clear_tool_id with a custom function. Finally, since I am using the code for multi-threaded debugging, I have removed the changes in 99ea70c.
Okay, that's a completely different story. Unfortunately this is the CPython PR, so I can't fully solve issues for your debugger, but I have some potential theories.
So, if you can reproduce the performance issue on pdb from this branch, I can investigate more. If you are experiencing issues on a debugger that's based on a customized version of this bdb, I'm afraid you are on your own, because it's highly possible that the issue is caused by your customization (otherwise you should be able to repro it with pure pdb).
I am testing on Python 3.14 and I am using the following clear_tool_id:

```python
def clear_tool_id(tool_id):
    import sys
    if sys.version_info >= (3, 14):
        sys.monitoring.clear_tool_id(tool_id)
    else:
        for event in sys.monitoring.events.__dict__.values():
            if isinstance(event, int) and event:  # Ensure it's an event constant
                sys.monitoring.register_callback(tool_id, event, None)
```

So, no difference here.
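(For completeness, a hedged usage sketch around the helper above. The cleanup order and the "fastbdb" tool name are assumptions for illustration, not taken from the actual PyScripter code: clearing callbacks is one part of shutting down, and the tool id itself is released with `sys.monitoring.free_tool_id()`, which is available on every version that has `sys.monitoring`, i.e. 3.12+.)

```python
import sys

DEBUGGER_ID = sys.monitoring.DEBUGGER_ID

sys.monitoring.use_tool_id(DEBUGGER_ID, "fastbdb")  # claim the debugger slot
try:
    ...  # run the debugging session here
finally:
    clear_tool_id(DEBUGGER_ID)                      # helper defined above
    sys.monitoring.free_tool_id(DEBUGGER_ID)        # release the slot
```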
No. There is just one thread and it works exactly like your code. PyScripter allows you to debug multi-threaded Python code, but for single-threaded scripts it just uses Bdb. And I am not asking you to solve my problems. I reported one script where monitoring degrades debugging performance. Why don't you try it on your side?
Direct pdb usage: …
The former is monitoring and the latter is settrace. Let me know if you have different results. Again, not results from PyScripter - please only send results of pdb monitoring vs pdb settrace. (And you can use the latest PR, which merged in the 3.14 changes.)
Using the attached script I get …
So monitoring is about 25% slower in this example, consistent with your results.
I think a module-level one. The interface to pdb is the module. The …
I added the module-level util for default backends. I also added the docs (with my best effort) and the What's New entry. This is a full package that needs review now :)
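(For readers following along, a small usage sketch of the opt-in being described here. It assumes the module-level helpers are named `set_default_backend()` / `get_default_backend()` and that `Pdb` accepts a `backend` argument, matching the interface that eventually shipped in 3.14; the names are not quoted from this comment.)

```python
import pdb

# Make every new Pdb instance (including breakpoint()) use sys.monitoring.
pdb.set_default_backend("monitoring")
assert pdb.get_default_backend() == "monitoring"

# An individual instance can still choose its backend explicitly.
debugger = pdb.Pdb(backend="settrace")
```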
@terryjreedy are you OK with this being merged?
@terryjreedy friendly ping. Do you have other comments for this change? IDLE can use the settrace version of bdb now, and we can work together to see if we can move it to monitoring in the future. We also have a few months before the 3.14 release, so time is on our side.
I found another bug. Consider the following code:

```python
for i in range(10):
    j = i ** 2
    print(j)
```
Suggested fix: Call restart_events() whenever a breakpoint is added. I cannot think of a better solution.

```python
def set_break(self, filename, lineno, temporary=False, cond=None,
              funcname=None):
    """Set a new breakpoint for filename:lineno.
    ...
    """
    ...  # rest of the existing method body
    self.restart_events()  # added
    return None
```
Thanks for filing the bug. The events should be restarted after every user interaction. I forgot the one after …
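(A short note on why the suggested fix works, based on how `sys.monitoring` is documented to behave; this is an explanatory sketch, not code from the PR: once a callback has returned `sys.monitoring.DISABLE` for a location, that location stays silent until `sys.monitoring.restart_events()` is called, which re-enables everything disabled that way. So a breakpoint added while a loop is already running needs a `restart_events()` call, otherwise the already-disabled lines never report again.)

```python
import sys

# Re-enable every event location that was previously disabled by a callback
# returning sys.monitoring.DISABLE (applies to all tools).
sys.monitoring.restart_events()
```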
Another friendly ping to @terryjreedy :) Sorry I can't reach you on Discord. Any objections/suggestions to this PR? Thanks!
As Terry is busy, I will merge this soon if no one objects. I don't expect this change to break anything (except for pdb maybe).
I ran the IDLE debugger with updated main, a PR branch, and after adding …
Thank you @terryjreedy for testing it out. I don't think you can change the backend while the debugger is running. It should be fine to change it before it starts tracing or after it's done. I'm glad the current version just works for IDLE - that's the original goal anyway.
This is the most conservative attempt ever to utilize `sys.monitoring` in `bdb` and `pdb`.

Highlights:

- No changes to `test_pdb` and `test_bdb` at all with the new backend.
- `bdb` will still default to `sys.settrace`, which keeps all the old behavior. Users can opt in to the new `sys.monitoring` backend, and the interface is still the same, even for `trace_dispatch` (that's how `test_bdb` passes).
- A new interface is added to `bdb` where users can disable certain events to improve the performance.
- `pdb.Pdb` will use `sys.settrace` by default too, and is configurable with `pdb.Pdb.DEFAULT_BACKEND`.
- The `pdb` CLI and `breakpoint()` use the `monitoring` backend, and there is no noticeable difference I can observe at this point.

Solution:

Basically, I mimicked the behavior of `sys.settrace` with `sys.monitoring` to keep the old behavior as much as possible. But I had the chance to use the new API of `sys.monitoring` to disable certain events.

Performance:

It's not as impressive as the original proposal, but for the following code: …

On my laptop, without the debugger, it takes 0.025s. With the new pdb attached (`b f` then `c`), it takes the same amount of time, and with the Python 3.12 pdb it takes 1.04s (4100%+ overhead). The performance improvement is significant, to say the least.

And as you can tell from the diff, the actual changes to `pdb` are minimal - just change `sys.settrace(tracefunc)` to `self.start_trace(tracefunc)` and `sys.settrace(None)` to `self.stop_trace()`. That's what debugger developers need to do to onboard.

sys.monitoring for pdb/bdb #120144
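(To make the onboarding step concrete, here is a sketch of what a third-party `bdb`-based debugger would change, following the description above. The helper names and signatures are taken from this summary and may differ slightly from what was finally merged; the session methods are hypothetical.)

```python
import bdb

class MyDebugger(bdb.Bdb):
    def start_session(self):
        # Previously: sys.settrace(self.trace_dispatch)
        self.start_trace(self.trace_dispatch)

    def end_session(self):
        # Previously: sys.settrace(None)
        self.stop_trace()
```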