
Commit 19a469a

Marc Zyngier authored and Catalin Marinas (ctmarinas) committed
drivers/perf: arm-pmu: Handle per-interrupt affinity mask
On a big.LITTLE system, PMUs can be wired to CPUs using per-CPU interrupts (PPIs). In this case, it is important to make sure that the enable/disable operations happen on the right set of CPUs.

So instead of relying on the interrupt-affinity property, we can use the actual per-CPU affinity that DT exposes as part of the interrupt specifier. The DT binding is also updated to reflect the fact that the interrupt-affinity property shouldn't be used in that case.

Acked-by: Rob Herring <robh@kernel.org>
Tested-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
1 parent 90f777b commit 19a469a
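The core of the change swaps on_each_cpu(), which fires a callback on every online CPU, for on_each_cpu_mask(), which restricts it to a cpumask. The sketch below is a hypothetical user-space model of that semantic difference (a plain bitmask stands in for struct cpumask, and a direct function call stands in for the cross-CPU IPI); it is not kernel code and all names are invented.

```c
#include <stdint.h>

#define NR_CPUS 8

typedef uint32_t cpumask_t;  /* bit n set => CPU n is in the mask */

typedef void (*smp_call_func_t)(int cpu, void *info);

/* Model of on_each_cpu(): run func on every CPU (the old behaviour). */
static void on_each_cpu_model(smp_call_func_t func, void *info)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		func(cpu, info);
}

/* Model of on_each_cpu_mask(): run func only on CPUs present in mask
 * (the new behaviour, with mask playing the role of supported_cpus). */
static void on_each_cpu_mask_model(cpumask_t mask,
				   smp_call_func_t func, void *info)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (mask & (1u << cpu))
			func(cpu, info);
}

/* Callback that just counts how many CPUs it ran on. */
static void count_calls(int cpu, void *info)
{
	(void)cpu;
	(*(int *)info)++;
}

int cpus_touched_all(void)
{
	int n = 0;
	on_each_cpu_model(count_calls, &n);
	return n;  /* every CPU gets the callback */
}

int cpus_touched_masked(void)
{
	int n = 0;
	/* big.LITTLE-style split: only CPUs 0-3 have this PMU */
	on_each_cpu_mask_model(0x0Fu, count_calls, &n);
	return n;  /* only the supported CPUs get the callback */
}
```

On a heterogeneous system the distinction matters because enabling a PPI for one PMU on a CPU that belongs to the other cluster touches the wrong hardware.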

File tree: 2 files changed, +25 −6 lines


Documentation/devicetree/bindings/arm/pmu.txt

Lines changed: 3 additions & 1 deletion

@@ -39,7 +39,9 @@ Optional properties:
 	When using a PPI, specifies a list of phandles to CPU
 	nodes corresponding to the set of CPUs which have
 	a PMU of this type signalling the PPI listed in the
-	interrupts property.
+	interrupts property, unless this is already specified
+	by the PPI interrupt specifier itself (in which case
+	the interrupt-affinity property shouldn't be present).
 
 	This property should be present when there is more than
 	a single SPI.
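A PPI interrupt specifier can carry the affinity itself via the GICv3 ppi-partitions mechanism, where the last cell of the specifier points at a partition node listing the CPUs. The fragment below is a hypothetical sketch of what such a binding can look like; node names, labels, and CPU phandles are invented for illustration.

```dts
gic: interrupt-controller {
	compatible = "arm,gic-v3";
	/* ... */
	ppi-partitions {
		/* hypothetical partition covering the little cluster */
		ppi_cluster0: interrupt-partition-0 {
			affinity = <&cpu0 &cpu1>;
		};
	};
};

pmu {
	compatible = "arm,cortex-a53-pmu";
	/* The partition phandle in the specifier makes the
	 * interrupt-affinity property unnecessary. */
	interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_LOW &ppi_cluster0>;
};
```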

drivers/perf/arm_pmu.c

Lines changed: 22 additions & 5 deletions

@@ -603,7 +603,8 @@ static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu)
 
 	irq = platform_get_irq(pmu_device, 0);
 	if (irq >= 0 && irq_is_percpu(irq)) {
-		on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1);
+		on_each_cpu_mask(&cpu_pmu->supported_cpus,
+				 cpu_pmu_disable_percpu_irq, &irq, 1);
 		free_percpu_irq(irq, &hw_events->percpu_pmu);
 	} else {
 		for (i = 0; i < irqs; ++i) {
@@ -645,7 +646,9 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
 				irq);
 			return err;
 		}
-		on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1);
+
+		on_each_cpu_mask(&cpu_pmu->supported_cpus,
+				 cpu_pmu_enable_percpu_irq, &irq, 1);
 	} else {
 		for (i = 0; i < irqs; ++i) {
 			int cpu = i;
@@ -961,9 +964,23 @@ static int of_pmu_irq_cfg(struct arm_pmu *pmu)
 		i++;
 	} while (1);
 
-	/* If we didn't manage to parse anything, claim to support all CPUs */
-	if (cpumask_weight(&pmu->supported_cpus) == 0)
-		cpumask_setall(&pmu->supported_cpus);
+	/* If we didn't manage to parse anything, try the interrupt affinity */
+	if (cpumask_weight(&pmu->supported_cpus) == 0) {
+		if (!using_spi) {
+			/* If using PPIs, check the affinity of the partition */
+			int ret, irq;
+
+			irq = platform_get_irq(pdev, 0);
+			ret = irq_get_percpu_devid_partition(irq, &pmu->supported_cpus);
+			if (ret) {
+				kfree(irqs);
+				return ret;
+			}
+		} else {
+			/* Otherwise default to all CPUs */
+			cpumask_setall(&pmu->supported_cpus);
+		}
+	}
 
 	/* If we matched up the IRQ affinities, use them to route the SPIs */
 	if (using_spi && i == pdev->num_resources)
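The fallback logic in the last hunk can be summarized as: if no interrupt-affinity was parsed from DT, a PPI-based PMU asks the irqchip for the per-CPU partition backing its interrupt, while an SPI-based one keeps the old default of all CPUs, and a partition-lookup failure is propagated as an error. The sketch below is a hypothetical user-space model of that decision tree; get_partition() is a made-up stand-in for irq_get_percpu_devid_partition(), and bitmasks stand in for cpumasks.

```c
#include <stdint.h>

#define ALL_CPUS 0xFFu
#define NO_CPUS  0x00u

/* Stand-in for irq_get_percpu_devid_partition(): either fills in the
 * partition's CPU mask (here, CPUs 0-3) or fails. */
static int get_partition(int works, uint32_t *mask)
{
	if (!works)
		return -22;  /* -EINVAL-style failure */
	*mask = 0x0Fu;
	return 0;
}

/* Model of the supported_cpus fallback in of_pmu_irq_cfg():
 * - parsed != 0: the interrupt-affinity property already told us
 * - PPI case:    query the interrupt's per-CPU partition
 * - SPI case:    default to all CPUs */
int resolve_supported_cpus(uint32_t parsed, int using_spi,
			   int partition_ok, uint32_t *out)
{
	*out = parsed;
	if (*out != NO_CPUS)
		return 0;  /* affinity came from DT parsing */
	if (!using_spi) {
		int ret = get_partition(partition_ok, out);
		if (ret)
			return ret;  /* propagate, as the patch does */
	} else {
		*out = ALL_CPUS;
	}
	return 0;
}
```

Note that before this patch the PPI and SPI cases were not distinguished: an empty mask always became "all CPUs", which is exactly what goes wrong on big.LITTLE when only one cluster's PMU signals the PPI.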
