
Commit fff4068

hnaz authored and torvalds committed
mm: page_alloc: revert NUMA aspect of fair allocation policy
Commit 81c0a2b ("mm: page_alloc: fair zone allocator policy") meant to bring aging fairness among zones in the system, but it was overzealous and badly regressed basic workloads on NUMA systems.

Due to the way kswapd and the page allocator interact, we still want to make sure that all zones in any given node are used equally for all allocations, to maximize memory utilization and prevent thrashing on the highest zone in the node.

While the same principle applies to NUMA nodes - memory utilization is obviously improved by spreading allocations throughout all nodes - remote references can be costly, and so many workloads prefer locality over memory utilization. The original change assumed that zone_reclaim_mode would be a good enough predictor for that, but it turned out to be as indicative as a coin flip.

Revert the NUMA aspect of the fairness until we can find a proper way to make it configurable and agree on a sane default.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: <stable@kernel.org> # 3.12
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
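For context on the zone_local() change in the diff below: the revert narrows "local" from "at local NUMA distance" to "on the same node", so the fairness round-robin can never spill into remote zones. A minimal userspace sketch of the two predicates, with node_distance() and LOCAL_DISTANCE mocked here (in the kernel they come from the NUMA topology code):

#include <stdbool.h>
#include <stdio.h>

#define LOCAL_DISTANCE 10	/* mocked; the kernel's default self-distance */

struct zone { int node; };

/* Mocked topology: a node is at LOCAL_DISTANCE only from itself. */
static int node_distance(int a, int b)
{
	return a == b ? LOCAL_DISTANCE : 20;
}

/* Before the revert: "local" meant "at local distance". */
static bool zone_local_old(struct zone *local_zone, struct zone *zone)
{
	return node_distance(local_zone->node, zone->node) == LOCAL_DISTANCE;
}

/* After the revert: "local" means "on the same node". */
static bool zone_local_new(struct zone *local_zone, struct zone *zone)
{
	return local_zone->node == zone->node;
}

int main(void)
{
	struct zone a = { .node = 0 }, b = { .node = 1 };

	printf("old: %d new: %d\n", zone_local_old(&a, &b), zone_local_new(&a, &b));
	return 0;
}

On most topologies the two tests agree, since only a node itself sits at LOCAL_DISTANCE; the same-node form simply states the intent directly and no longer depends on the distance table.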
1 parent 8798cee commit fff4068


mm/page_alloc.c

Lines changed: 9 additions & 10 deletions
@@ -1816,7 +1816,7 @@ static void zlc_clear_zones_full(struct zonelist *zonelist)
 
 static bool zone_local(struct zone *local_zone, struct zone *zone)
 {
-	return node_distance(local_zone->node, zone->node) == LOCAL_DISTANCE;
+	return local_zone->node == zone->node;
 }
 
 static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
@@ -1913,18 +1913,17 @@ get_page_from_freelist(gfp_t gfp_mask, nodemask_t *nodemask, unsigned int order,
 	 * page was allocated in should have no effect on the
 	 * time the page has in memory before being reclaimed.
 	 *
-	 * When zone_reclaim_mode is enabled, try to stay in
-	 * local zones in the fastpath. If that fails, the
-	 * slowpath is entered, which will do another pass
-	 * starting with the local zones, but ultimately fall
-	 * back to remote zones that do not partake in the
-	 * fairness round-robin cycle of this zonelist.
+	 * Try to stay in local zones in the fastpath. If
+	 * that fails, the slowpath is entered, which will do
+	 * another pass starting with the local zones, but
+	 * ultimately fall back to remote zones that do not
+	 * partake in the fairness round-robin cycle of this
+	 * zonelist.
 	 */
 	if (alloc_flags & ALLOC_WMARK_LOW) {
 		if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0)
 			continue;
-		if (zone_reclaim_mode &&
-		    !zone_local(preferred_zone, zone))
+		if (!zone_local(preferred_zone, zone))
 			continue;
 	}
 	/*
@@ -2390,7 +2389,7 @@ static void prepare_slowpath(gfp_t gfp_mask, unsigned int order,
 	 * thrash fairness information for zones that are not
 	 * actually part of this zonelist's round-robin cycle.
 	 */
-	if (zone_reclaim_mode && !zone_local(preferred_zone, zone))
+	if (!zone_local(preferred_zone, zone))
 		continue;
 	mod_zone_page_state(zone, NR_ALLOC_BATCH,
 			    high_wmark_pages(zone) -
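Taken together, the hunks confine the fairness cycle to the preferred zone's node: the fastpath skips any zone whose NR_ALLOC_BATCH is exhausted or that sits on a remote node, and prepare_slowpath() replenishes batches only for zones in that local cycle. A much-simplified userspace sketch of the mechanism (the pick_zone() and replenish_batches() helpers are hypothetical, and the plain fields merely stand in for NR_ALLOC_BATCH and high_wmark_pages(); this is not the kernel code):

#include <stdbool.h>
#include <stddef.h>

struct zone {
	int node;
	long alloc_batch;	/* stands in for NR_ALLOC_BATCH */
	long high_wmark;	/* stands in for high_wmark_pages(zone) */
};

/* Same-node test, as in the reverted zone_local() above. */
static bool zone_local(struct zone *local_zone, struct zone *zone)
{
	return local_zone->node == zone->node;
}

/*
 * Fastpath: consume each local zone's batch in turn and skip remote
 * zones entirely, so fairness never pushes allocations off-node.
 */
static struct zone *pick_zone(struct zone **zonelist, size_t n,
			      struct zone *preferred)
{
	for (size_t i = 0; i < n; i++) {
		struct zone *zone = zonelist[i];

		if (zone->alloc_batch <= 0)
			continue;	/* batch used up this round */
		if (!zone_local(preferred, zone))
			continue;	/* stay on the local node */
		zone->alloc_batch--;	/* account the allocation */
		return zone;
	}
	return NULL;	/* all local batches drained: enter the slowpath */
}

/*
 * Slowpath entry: replenish the batches, but only for zones that take
 * part in the local round-robin cycle (cf. prepare_slowpath() above).
 */
static void replenish_batches(struct zone **zonelist, size_t n,
			      struct zone *preferred)
{
	for (size_t i = 0; i < n; i++) {
		struct zone *zone = zonelist[i];

		if (!zone_local(preferred, zone))
			continue;
		zone->alloc_batch = zone->high_wmark;
	}
}

int main(void)
{
	struct zone normal = { .node = 0, .alloc_batch = 2, .high_wmark = 8 };
	struct zone dma    = { .node = 0, .alloc_batch = 1, .high_wmark = 4 };
	struct zone remote = { .node = 1, .alloc_batch = 8, .high_wmark = 8 };
	struct zone *zl[]  = { &normal, &dma, &remote };

	while (pick_zone(zl, 3, &normal))	/* drain node 0's batches */
		;
	replenish_batches(zl, 3, &normal);	/* remote's batch is untouched */
	return 0;
}

Once every local batch hits zero the fastpath yields nothing, the slowpath refills the local batches, and the cycle restarts; remote zones are left alone and used only as a genuine fallback.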
