Micro-benchmarks for performance critical operations #644
So I attempted to set up a way to benchmark a Python function with criterion, but it seems that this is currently impossible because of how criterion uses closures, at least the way I was attempting to use it. Assuming `ctx = object()` and a benchmark script like:

```python
def setup():
    ctx.items = []

def bench(ctx):
    ctx.items.append(1)
```

I was attempting to isolate the `setup` and `bench` functions and drive them from Rust:

```rust
let vars = vm.ctx.new_scope(); // Keep track of local variables
let result = vm.run_code_obj(code, vars.clone())?;

let setup_fn = vars.load_name(&vm, "setup");
let bench_fn = vars.load_name(&vm, "bench");

match setup_fn {
    // Run the setup function once before benchmarking.
    Some(sf) => {
        vm.invoke(sf, PyFuncArgs::default())?;
    }
    _ => println!("No setup function for bench '{}'", name),
}

match bench_fn {
    Some(f) => {
        let mut c = Criterion::default().configure_from_args();
        c.bench_function(name.as_str(), move |b| {
            b.iter(move || vm.invoke(f, PyFuncArgs::default()))
        });
        Ok(vm.get_none())
    }
    _ => {
        let name_error_type = vm.ctx.exceptions.name_error.clone();
        let msg = "bench function is not defined".to_string();
        let name_error = vm.new_exception(name_error_type, msg);
        Err(name_error)
    }
}
```

However, no matter what I try, I can't get code like this to compile: the function object just cannot be moved into the inner closure.
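The compile error is Rust's `FnMut` move restriction: `b.iter` takes a closure that criterion calls once per sample, so a value captured by that closure cannot be moved out of it on every call. One possible workaround, sketched below on the assumption that `PyObjectRef` is a cheap reference-counted handle with `Clone` and that `vm.invoke` takes the callable by value (as the snippet above suggests), is to clone the handle on each iteration instead of moving it:

```rust
use criterion::Criterion;

// Sketch only; `VirtualMachine`, `PyObjectRef`, `PyResult`, and
// `PyFuncArgs` are assumed to be the interpreter types used in the
// snippet above, not a verified RustPython API.
fn run_bench(vm: &mut VirtualMachine, f: PyObjectRef, name: &str) -> PyResult {
    let mut c = Criterion::default().configure_from_args();
    c.bench_function(name, |b| {
        // `b.iter` invokes this closure repeatedly, so `f` must not be
        // consumed: cloning the reference-counted handle keeps it callable.
        b.iter(|| vm.invoke(f.clone(), PyFuncArgs::default()))
    });
    Ok(vm.get_none())
}
```

Borrowing `vm` and `f` in the outer closure, rather than moving them, also leaves `vm` usable after the benchmark finishes.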
I'm not sure a Rust-level benchmark runner is needed? CPython maintains its entire benchmark suite in Python itself, and AFAIK they managed to make it quite consistent (I'm not sure whether criterion.rs actually does some of these things): https://github.com/python/performance
Hmm, yes, I think this is actually the best approach. We'll have to get the standard library mostly working before we can really use it, but the end result is that we can compare not only change-to-change performance but also performance against other Python versions.
@OddCoincidence Someone recently did this; would it help in figuring out whether some critical bits of code need improvements?
Some bits of code are so fundamental that even the slightest inefficiency will massively degrade overall performance. It'd be good to track the performance of these bits. I think criterion for measurement and critcmp for comparisons are the best way to do this on stable Rust.
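For illustration, criterion benchmarks live under `benches/` (with `harness = false` on the bench target in Cargo.toml) and run with `cargo bench`. The sketch below is a placeholder, not an actual benchmark from this repo; `parse_int` stands in for whichever fundamental operation is being tracked:

```rust
// benches/micro.rs -- minimal criterion harness sketch.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_parse_int(c: &mut Criterion) {
    c.bench_function("parse_int", |b| {
        // black_box keeps the compiler from constant-folding the input.
        b.iter(|| black_box("12345").parse::<i64>().unwrap())
    });
}

criterion_group!(benches, bench_parse_int);
criterion_main!(benches);
```

critcmp then diffs criterion's saved baselines: `cargo bench -- --save-baseline before`, apply the change, `cargo bench -- --save-baseline after`, and finally `critcmp before after`.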