
Commit 9b231d9

Authored by Peter Zijlstra, committed by Ingo Molnar
perf/core: Fix time on IOC_ENABLE
Vince reported that when we do IOC_ENABLE/IOC_DISABLE while the task is in a SIGSTOP'ed state, the timestamps go wobbly. It turns out we indeed fail to correctly account time while in the 'OFF' state, and doing IOC_ENABLE without getting scheduled in exposes the problem.

Thinking further about this, it occurred to me that we can suffer a similar fate when we migrate an uncore event between CPUs. The perf_event_install() on the 'new' CPU will do add_event_to_ctx(), which resets all the timestamps, causing a subsequent update_event_times() to overwrite the total_time_* fields with smaller values.

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1 parent bfe3349 commit 9b231d9

File tree: 1 file changed, +36 −5 lines

kernel/events/core.c

Lines changed: 36 additions & 5 deletions
```diff
@@ -2217,16 +2217,46 @@ static int group_can_go_on(struct perf_event *event,
 	return can_add_hw;
 }
 
+/*
+ * Complement to update_event_times(). This computes the tstamp_* values to
+ * continue 'enabled' state from @now, and effectively discards the time
+ * between the prior tstamp_stopped and now (as we were in the OFF state, or
+ * just switched (context) time base).
+ *
+ * This further assumes '@event->state == INACTIVE' (we just came from OFF) and
+ * cannot have been scheduled in yet. And going into INACTIVE state means
+ * '@event->tstamp_stopped = @now'.
+ *
+ * Thus given the rules of update_event_times():
+ *
+ *   total_time_enabled = tstamp_stopped - tstamp_enabled
+ *   total_time_running = tstamp_stopped - tstamp_running
+ *
+ * We can insert 'tstamp_stopped == now' and reverse them to compute new
+ * tstamp_* values.
+ */
+static void __perf_event_enable_time(struct perf_event *event, u64 now)
+{
+	WARN_ON_ONCE(event->state != PERF_EVENT_STATE_INACTIVE);
+
+	event->tstamp_stopped = now;
+	event->tstamp_enabled = now - event->total_time_enabled;
+	event->tstamp_running = now - event->total_time_running;
+}
+
 static void add_event_to_ctx(struct perf_event *event,
 			     struct perf_event_context *ctx)
 {
 	u64 tstamp = perf_event_time(event);
 
 	list_add_event(event, ctx);
 	perf_group_attach(event);
-	event->tstamp_enabled = tstamp;
-	event->tstamp_running = tstamp;
-	event->tstamp_stopped = tstamp;
+	/*
+	 * We can be called with event->state == STATE_OFF when we create with
+	 * .disabled = 1. In that case the IOC_ENABLE will call this function.
+	 */
+	if (event->state == PERF_EVENT_STATE_INACTIVE)
+		__perf_event_enable_time(event, tstamp);
 }
 
 static void ctx_sched_out(struct perf_event_context *ctx,
@@ -2471,10 +2501,11 @@ static void __perf_event_mark_enabled(struct perf_event *event)
 	u64 tstamp = perf_event_time(event);
 
 	event->state = PERF_EVENT_STATE_INACTIVE;
-	event->tstamp_enabled = tstamp - event->total_time_enabled;
+	__perf_event_enable_time(event, tstamp);
 	list_for_each_entry(sub, &event->sibling_list, group_entry) {
+		/* XXX should not be > INACTIVE if event isn't */
 		if (sub->state >= PERF_EVENT_STATE_INACTIVE)
-			sub->tstamp_enabled = tstamp - sub->total_time_enabled;
+			__perf_event_enable_time(sub, tstamp);
 	}
 }
 
```
