Micro-benchmarks for performance-critical operations #644

Closed
OddCoincidence opened this issue Mar 9, 2019 · 4 comments

OddCoincidence (Contributor) commented Mar 9, 2019

Some bits of code are so fundamental that even the slightest inefficiency will massively degrade overall performance. It'd be good to track the performance of these bits. I think criterion for measurement and critcmp for comparisons are the best way to do this on stable Rust.
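For concreteness, a minimal sketch of what such a benchmark could look like with criterion on stable Rust (hot_operation is a hypothetical stand-in for an interpreter hot path, not actual RustPython code):

use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical performance-critical operation standing in for a real hot path.
fn hot_operation(n: u64) -> u64 {
    (0..n).sum()
}

fn bench_hot_operation(c: &mut Criterion) {
    c.bench_function("hot_operation 1000", |b| {
        // black_box keeps the compiler from constant-folding the input away.
        b.iter(|| hot_operation(black_box(1000)))
    });
}

criterion_group!(benches, bench_hot_operation);
criterion_main!(benches);

Runs can then be compared across changes with critcmp, e.g. cargo bench -- --save-baseline before, make the change, cargo bench -- --save-baseline after, then critcmp before after.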

Ryex (Contributor) commented Apr 10, 2019

So I attempted to set up a way to benchmark a Python function with criterion, but it seems that this is currently impossible, at least in the way I was attempting it, because of how criterion uses closures.

Assuming code is the compiled code object for a file like:

import types

# A bare object() can't hold attributes, so use a simple namespace instead.
ctx = types.SimpleNamespace()

def setup():
    ctx.items = []

# Both functions are invoked with no arguments from the Rust side.
def bench():
    ctx.items.append(1)

I was attempting to isolate the bench function and run it for benchmarking:

let vars = vm.ctx.new_scope(); // scope holding the module's globals
let _result = vm.run_code_obj(code, vars.clone())?; // run the module body to define setup/bench

let setup_fn = vars.load_name(&vm, "setup");
let bench_fn = vars.load_name(&vm, "bench");
match setup_fn {
    Some(sf) => {
        // Run setup once; don't return here, or bench would never run.
        vm.invoke(sf, PyFuncArgs::default())?;
    }
    _ => println!("No setup function for bench '{}'", name),
}
match bench_fn {
    Some(f) => {
        let mut c = Criterion::default().configure_from_args();
        c.bench_function(name.as_str(), move |b| {
            // This is where it breaks down: invoke() takes `f` by value,
            // so the inner closure is only FnOnce, but iter() needs FnMut.
            b.iter(move || vm.invoke(f, PyFuncArgs::default()))
        });
        Ok(vm.get_none())
    }
    _ => {
        let name_error_type = vm.ctx.exceptions.name_error.clone();
        let msg = "bench function is not defined".to_owned();
        let name_error = vm.new_exception(name_error_type, msg);
        Err(name_error)
    }
}

However, no matter what I try, I can't get code like this to compile: the function object just cannot be moved into the inner closure, since the closure has to be callable repeatedly while invoke consumes its argument.
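For what it's worth, the usual way around this kind of move error with criterion is to clone the handle inside the closure so each iteration consumes a fresh copy; a sketch, assuming the function object is a cheaply clonable, reference-counted PyObjectRef:

c.bench_function(name.as_str(), |b| {
    // Clone the reference-counted handle on every iteration so invoke()
    // can take it by value without consuming the captured original.
    b.iter(|| vm.invoke(f.clone(), PyFuncArgs::default()))
});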

adrian17 (Contributor) commented

I'm not sure a Rust-level benchmark runner is needed. CPython has its entire benchmark suite in Python itself, and AFAIK they managed to make it quite consistent (I'm not sure if criterion.rs actually does some of the same things):

https://github.com/python/performance
https://github.com/vstinner/perf

Ryex (Contributor) commented Apr 10, 2019

> CPython has its entire benchmark suite in Python itself, and AFAIK they managed to make it quite consistent (I'm not sure if criterion.rs actually does some of the same things):
>
> https://github.com/python/performance
> https://github.com/vstinner/perf

Hmm, yes, I think this is actually the best approach.

We'll have to get the standard library mostly working before we can really use it, but the end result is that we could compare performance not only change to change but against other Python versions as well.
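For illustration, a Python-level benchmark in the style of those suites might look like this (a sketch assuming the pyperf package, the renamed successor of vstinner/perf, runs on the interpreter under test):

import pyperf

runner = pyperf.Runner()
# Time the same append operation as the criterion attempt above;
# pyperf handles warmup, calibration, and the statistics itself.
runner.timeit(
    "list.append",
    stmt="items.append(1)",
    setup="items = []",
)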

mireille-raad (Member) commented

@OddCoincidence, someone recently did this:

#2367

Would it help in figuring out whether some critical bits of code need improvement?
