
Commit 72d5050

rchatre authored and KAGA-KOKO committed
x86/intel_rdt: Add utilities to test pseudo-locked region possibility
A pseudo-locked region does not have a class of service associated with it and is thus not tracked in the array of control values maintained as part of the domain. Even so, when the user provides a new bitmask for another resource group it needs to be checked for interference with existing pseudo-locked regions.

Additionally, only one pseudo-locked region can be created in any cache hierarchy.

Introduce two utilities in support of the above scenarios: (1) a utility that can be used to test if a given capacity bitmask overlaps with any pseudo-locked region associated with a particular cache instance, (2) a utility that can be used to test if a pseudo-locked region exists within a particular cache hierarchy.

Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: fenghua.yu@intel.com
Cc: tony.luck@intel.com
Cc: vikas.shivappa@linux.intel.com
Cc: gavin.hindman@intel.com
Cc: jithu.joseph@intel.com
Cc: dave.hansen@intel.com
Cc: hpa@zytor.com
Link: https://lkml.kernel.org/r/b8e31dbdcf22ddf71df46072647b47e7558abb32.1529706536.git.reinette.chatre@intel.com
1 parent 17eafd0 commit 72d5050
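
As a rough, userspace-only sketch of the first utility's core idea (the function and values below are hypothetical and only model the check; the kernel code operates on the domain's pseudo-locked region with bitmap_intersects()): two capacity bitmasks conflict when they share any set bit within the cache's cbm_len.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical userspace model of the overlap test: a requested CBM
 * conflicts with a pseudo-locked CBM if they share any set bit within
 * the cache's cbm_len bits. Not kernel code.
 */
static bool cbm_overlaps(uint32_t cbm_a, uint32_t cbm_b, unsigned int cbm_len)
{
	uint32_t mask = (cbm_len >= 32) ? ~0u : ((1u << cbm_len) - 1);

	return (cbm_a & cbm_b & mask) != 0;
}

int main(void)
{
	/* 0x00f0 (pseudo-locked) vs 0x0f00 (requested): no shared bits */
	printf("%d\n", cbm_overlaps(0x00f0, 0x0f00, 11));
	/* 0x00f0 vs 0x0180: bit 7 is shared, so the masks overlap */
	printf("%d\n", cbm_overlaps(0x00f0, 0x0180, 11));
	return 0;
}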


2 files changed, +76 -0 lines changed


arch/x86/kernel/cpu/intel_rdt.h

Lines changed: 2 additions & 0 deletions
@@ -503,6 +503,8 @@ enum rdtgrp_mode rdtgroup_mode_by_closid(int closid);
 int rdtgroup_tasks_assigned(struct rdtgroup *r);
 int rdtgroup_locksetup_enter(struct rdtgroup *rdtgrp);
 int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp);
+bool rdtgroup_cbm_overlaps_pseudo_locked(struct rdt_domain *d, u32 _cbm);
+bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d);
 struct rdt_domain *get_domain_from_cpu(int cpu, struct rdt_resource *r);
 int update_domains(struct rdt_resource *r, int closid);
 void closid_free(int closid);

arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c

Lines changed: 74 additions & 0 deletions
@@ -299,3 +299,77 @@ int rdtgroup_locksetup_exit(struct rdtgroup *rdtgrp)
 	pseudo_lock_free(rdtgrp);
 	return 0;
 }
+
+/**
+ * rdtgroup_cbm_overlaps_pseudo_locked - Test if CBM or portion is pseudo-locked
+ * @d: RDT domain
+ * @_cbm: CBM to test
+ *
+ * @d represents a cache instance and @_cbm a capacity bitmask that is
+ * considered for it. Determine if @_cbm overlaps with any existing
+ * pseudo-locked region on @d.
+ *
+ * Return: true if @_cbm overlaps with pseudo-locked region on @d, false
+ * otherwise.
+ */
+bool rdtgroup_cbm_overlaps_pseudo_locked(struct rdt_domain *d, u32 _cbm)
+{
+	unsigned long *cbm = (unsigned long *)&_cbm;
+	unsigned long *cbm_b;
+	unsigned int cbm_len;
+
+	if (d->plr) {
+		cbm_len = d->plr->r->cache.cbm_len;
+		cbm_b = (unsigned long *)&d->plr->cbm;
+		if (bitmap_intersects(cbm, cbm_b, cbm_len))
+			return true;
+	}
+
+	return false;
+}
+
+/**
+ * rdtgroup_pseudo_locked_in_hierarchy - Pseudo-locked region in cache hierarchy
+ * @d: RDT domain under test
+ *
+ * The setup of a pseudo-locked region affects all cache instances within
+ * the hierarchy of the region. It is thus essential to know if any
+ * pseudo-locked regions exist within a cache hierarchy to prevent any
+ * attempts to create new pseudo-locked regions in the same hierarchy.
+ *
+ * Return: true if a pseudo-locked region exists in the hierarchy of @d or
+ *         if it is not possible to test due to memory allocation issue,
+ *         false otherwise.
+ */
+bool rdtgroup_pseudo_locked_in_hierarchy(struct rdt_domain *d)
+{
+	cpumask_var_t cpu_with_psl;
+	struct rdt_resource *r;
+	struct rdt_domain *d_i;
+	bool ret = false;
+
+	if (!zalloc_cpumask_var(&cpu_with_psl, GFP_KERNEL))
+		return true;
+
+	/*
+	 * First determine which cpus have pseudo-locked regions
+	 * associated with them.
+	 */
+	for_each_alloc_enabled_rdt_resource(r) {
+		list_for_each_entry(d_i, &r->domains, list) {
+			if (d_i->plr)
+				cpumask_or(cpu_with_psl, cpu_with_psl,
+					   &d_i->cpu_mask);
+		}
+	}
+
+	/*
+	 * Next test if new pseudo-locked region would intersect with
+	 * existing region.
+	 */
+	if (cpumask_intersects(&d->cpu_mask, cpu_with_psl))
+		ret = true;
+
+	free_cpumask_var(cpu_with_psl);
+	return ret;
+}
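
For orientation, a hypothetical, self-contained userspace model of the hierarchy check above: the CPU masks of all domains that already carry a pseudo-locked region are OR-ed together, and the candidate domain is rejected if its CPUs intersect that set. The toy_domain type and function names below are invented for illustration and are not part of the patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical userspace model: types are simplified stand-ins for the
 * kernel's rdt_domain and cpumask structures.
 */
struct toy_domain {
	uint64_t cpu_mask;	/* one bit per CPU */
	bool has_plr;		/* domain already holds a pseudo-locked region */
};

static bool pseudo_locked_in_hierarchy(const struct toy_domain *candidate,
				       const struct toy_domain *domains,
				       unsigned int nr_domains)
{
	uint64_t cpu_with_psl = 0;
	unsigned int i;

	/* Collect CPUs that already back a pseudo-locked region. */
	for (i = 0; i < nr_domains; i++) {
		if (domains[i].has_plr)
			cpu_with_psl |= domains[i].cpu_mask;
	}

	/* A shared CPU implies a shared cache hierarchy. */
	return (candidate->cpu_mask & cpu_with_psl) != 0;
}

int main(void)
{
	struct toy_domain domains[] = {
		{ .cpu_mask = 0x0f, .has_plr = true },	/* CPUs 0-3, locked */
		{ .cpu_mask = 0xf0, .has_plr = false },	/* CPUs 4-7 */
	};
	struct toy_domain candidate = { .cpu_mask = 0x3c, .has_plr = false };

	/* Candidate on CPUs 4-7 does not clash with the locked region. */
	printf("%d\n", pseudo_locked_in_hierarchy(&domains[1], domains, 2));
	/* Candidate on CPUs 2-5 overlaps CPUs 0-3, so it clashes. */
	printf("%d\n", pseudo_locked_in_hierarchy(&candidate, domains, 2));
	return 0;
}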
