[ConstantFolding] Fold scalable get_active_lane_masks #156659
Conversation
Scalable get_active_lane_mask intrinsics with an empty range can be folded to zeroinitializer. This helps remove no-op scalable masked stores and loads. When the second operand is 0, this cannot be done (see llvm#152140).
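To make the motivation concrete, here is a minimal editor's sketch (function name and constants are illustrative, not taken from the patch): once the empty-range mask folds to zeroinitializer, the masked store it guards becomes a no-op and can be removed.

```llvm
; Editor's sketch: the lane range [8, 4) is empty, so after the fold the
; mask is zeroinitializer and the masked store stores nothing.
define void @store_nothing(ptr %p, <vscale x 4 x i32> %v) {
  %mask = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32 8, i32 4)
  call void @llvm.masked.store.nxv4i32.p0(<vscale x 4 x i32> %v, ptr %p, i32 4, <vscale x 4 x i1> %mask)
  ret void
}

declare <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32, i32)
declare void @llvm.masked.store.nxv4i32.p0(<vscale x 4 x i32>, ptr, i32 immarg, <vscale x 4 x i1>)
```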
@@ -4238,6 +4238,13 @@ static Constant *ConstantFoldScalableVectorCall(
     return ConstantInt::getFalse(SVTy);
   }
+  case Intrinsic::get_active_lane_mask: {
I think we probably want to do this for fixed-width as well.
As it stands, ConstantFolding can already handle this for fixed-width. Scalable seems to have been left behind: https://godbolt.org/z/snd7M5oer
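For reference, an editor's sketch of the fixed-width case the godbolt link demonstrates (not part of the thread): each lane i evaluates 8 + i < 4 unsigned, which is false for every lane, so instsimplify already folds the mask to zeroinitializer.

```llvm
; Editor's sketch: the fixed-width fold that ConstantFolding already
; performs; this becomes 'ret <4 x i1> zeroinitializer'.
define <4 x i1> @v4i1_8_4() {
  %mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i32(i32 8, i32 4)
  ret <4 x i1> %mask
}

declare <4 x i1> @llvm.get.active.lane.mask.v4i1.i32(i32, i32)
```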
I see, yeah. As it stands the code in `ConstantFoldFixedVectorCall` is broken because it should return poison if `Op1` is zero. Once the LangRef is fixed we can always revisit this.
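An editor's sketch of the case in question: under the LangRef wording the comment above refers to, a second operand of 0 makes the result poison, while the fixed-width folder currently returns an all-false vector instead.

```llvm
; Editor's sketch: %n == 0, the corner case discussed above. The comment
; argues this should fold to poison per the LangRef, not to an all-false
; mask as the fixed-width folder does today.
define <4 x i1> @v4i1_0_0() {
  %mask = call <4 x i1> @llvm.get.active.lane.mask.v4i1.i32(i32 0, i32 0)
  ret <4 x i1> %mask
}

declare <4 x i1> @llvm.get.active.lane.mask.v4i1.i32(i32, i32)
```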
LGTM!
  case Intrinsic::get_active_lane_mask: {
    auto Op0 = cast<ConstantInt>(Operands[0])->getValue();
    auto Op1 = cast<ConstantInt>(Operands[1])->getValue();
    if ((Op0.uge(Op1) && (!Op1.isZero())))
I don't think this needs a check for `Op1.isZero`. It is perfectly valid to refine poison to zero.
It would be the whilelo -> get_active_lane_mask conversion that would be invalid if zero produced poison, as that direction converts from zero to poison.
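To make the refinement argument concrete, an editor's sketch: even if the empty-range result were defined as poison, folding it to zeroinitializer remains sound, because any concrete value may stand in for poison; only the reverse direction (turning a defined zero into poison) would be a miscompile.

```llvm
; Editor's sketch: if this call were poison (second operand is zero),
; folding the function body to 'ret <vscale x 4 x i1> zeroinitializer'
; is still a legal refinement of poison to a fixed value.
define <vscale x 4 x i1> @refine_to_zero() {
  %mask = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32 0, i32 0)
  ret <vscale x 4 x i1> %mask
}

declare <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32, i32)
```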
(This also has many more brackets than necessary, you can drop a few).
Doesn't that mean that if `add nsw i64 %v1, %v2` or `getelementptr inbounds ...`, etc. are known to produce poison, we can just return any other value that we think makes sense too? At the IR level the choice of a zero value here is completely arbitrary (since the LangRef explicitly says the result is poison, not zero), but what if a completely different IR pass decides it can prove `Op1` is zero by other means (computed-bits analysis, etc.) and decides to return another completely arbitrary value following the same logic, e.g. 1? Wouldn't we then be in a situation where two calls to get.active.lane.mask with an operand of 0 return different non-poison results?
Yep, poison propagates, so if it is not frozen it can have different values at different places: https://llvm.org/docs/LangRef.html#poison-values
Producing all-ones would be valid (if it were poison), but not as useful.
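An editor's sketch of that point: an unfrozen poison value is not pinned to one bit pattern, so different uses may observe different values; `freeze` is what pins it down.

```llvm
; Editor's sketch: %s is poison on signed overflow, and each use of an
; unfrozen poison value may independently see a different result.
; 'freeze' pins it to one arbitrary but consistent value.
define i64 @pin_poison(i64 %v1, i64 %v2) {
  %s = add nsw i64 %v1, %v2   ; poison if the addition overflows
  %f = freeze i64 %s          ; every use of %f now sees the same value
  ret i64 %f
}
```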
Makes sense, I've removed the `Op1.isZero` check.
Fair enough, thanks for explaining @davemgreen!
@llvm/pr-subscribers-llvm-analysis @llvm/pr-subscribers-llvm-transforms

Author: Matthew Devereau (MDevereau)

Changes

Scalable get_active_lane_mask intrinsics with an empty range can be folded to zeroinitializer. This helps remove no-op scalable masked stores and loads. When the second operand is 0, this cannot be done (see #152140).

Full diff: https://github.com/llvm/llvm-project/pull/156659.diff

2 Files Affected:
diff --git a/llvm/lib/Analysis/ConstantFolding.cpp b/llvm/lib/Analysis/ConstantFolding.cpp
index 2148431c1acce..67e6be5b70cb6 100644
--- a/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/llvm/lib/Analysis/ConstantFolding.cpp
@@ -4238,6 +4238,13 @@ static Constant *ConstantFoldScalableVectorCall(
     return ConstantInt::getFalse(SVTy);
   }
+  case Intrinsic::get_active_lane_mask: {
+    auto Op0 = cast<ConstantInt>(Operands[0])->getValue();
+    auto Op1 = cast<ConstantInt>(Operands[1])->getValue();
+    if (Op0.uge(Op1))
+      return ConstantVector::getNullValue(SVTy);
+    break;
+  }
   default:
     break;
   }
diff --git a/llvm/test/Transforms/InstSimplify/ConstProp/active-lane-mask.ll b/llvm/test/Transforms/InstSimplify/ConstProp/active-lane-mask.ll
index a904e697cc975..ed26deb58eae4 100644
--- a/llvm/test/Transforms/InstSimplify/ConstProp/active-lane-mask.ll
+++ b/llvm/test/Transforms/InstSimplify/ConstProp/active-lane-mask.ll
@@ -307,6 +307,39 @@ entry:
   ret <4 x float> %var33
 }
+define <vscale x 4 x i1> @nxv4i1_12_12() {
+; CHECK-LABEL: @nxv4i1_12_12(
+; CHECK-NEXT:  entry:
+; CHECK-NEXT:    ret <vscale x 4 x i1> zeroinitializer
+;
+entry:
+  %mask = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32 12, i32 12)
+  ret <vscale x 4 x i1> %mask
+}
+
+define <vscale x 4 x i1> @nxv4i1_8_4() {
+; CHECK-LABEL: @nxv4i1_8_4(
+; CHECK-NEXT:  entry:
+; CHECK-NEXT:    ret <vscale x 4 x i1> zeroinitializer
+;
+entry:
+  %mask = call <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32 8, i32 4)
+  ret <vscale x 4 x i1> %mask
+}
+
+define <vscale x 16 x i1> @nxv16i1_0_0() {
+; CHECK-LABEL: @nxv16i1_0_0(
+; CHECK-NEXT:  entry:
+; CHECK-NEXT:    ret <vscale x 16 x i1> zeroinitializer
+;
+entry:
+  %mask = call <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64 0, i64 0)
+  ret <vscale x 16 x i1> %mask
+}
+
 declare <4 x i1> @llvm.get.active.lane.mask.v4i1.i32(i32, i32)
 declare <8 x i1> @llvm.get.active.lane.mask.v8i1.i32(i32, i32)
 declare <16 x i1> @llvm.get.active.lane.mask.v16i1.i32(i32, i32)
+
+declare <vscale x 4 x i1> @llvm.get.active.lane.mask.nxv4i1.i32(i32, i32)
+declare <vscale x 16 x i1> @llvm.get.active.lane.mask.nxv16i1.i64(i64, i64)