
Conversation

preames (Collaborator) commented Sep 4, 2025

The arithmetic expansion requires fewer registers and often uses fewer instructions. The critical path does increase by (up to) one instruction.

This is a sub-case of the expansion we do without zicond, but restricted specifically to the simm12 case. In the general case, where the other source is a register, using zicond is likely better. (Edit: while technically true, this is a bit misleading; we do this in combineSelectToBinOp, which is also used in the zicond path, just further down.)
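
For reference, a minimal standalone sketch of the two identities this expansion relies on (plain C++ with made-up helper names, purely illustrative — not the LLVM lowering code in the diff below):

#include <cassert>
#include <cstdint>

// Standalone illustration (not the LLVM code): the identities behind the
// expansion, using a bool condition and a small signed constant.
//
// select(c, k, 0): negating the i1 condition gives an all-ones/all-zeros
// mask; AND with the constant corresponds to neg + andi on RISC-V.
int64_t select_k_0(bool c, int64_t k) {
  return -static_cast<int64_t>(c) & k;
}

// select(c, 0, k): subtracting one from the i1 condition gives the
// inverted mask; AND with the constant corresponds to addi -1 + andi.
int64_t select_0_k(bool c, int64_t k) {
  return (static_cast<int64_t>(c) - 1) & k;
}

int main() {
  for (bool c : {false, true})
    for (int64_t k : {6, 394, -42}) {
      assert(select_k_0(c, k) == (c ? k : 0));
      assert(select_0_k(c, k) == (c ? 0 : k));
    }
  return 0;
}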

llvmbot (Member) commented Sep 4, 2025

@llvm/pr-subscribers-backend-risc-v

Author: Philip Reames (preames)

Changes

The arithmetic expansion requires fewer registers and often uses fewer instructions. The critical path does increase by (up to) one instruction.

This is a sub-case of the expansion we do without zicond, but restricted specifically to the simm12 case. In the general case, where the other source is a register, using zicond is likely better.


Full diff: https://github.com/llvm/llvm-project/pull/156957.diff

4 Files Affected:

  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+18)
  • (modified) llvm/test/CodeGen/RISCV/cmov-branch-opt.ll (+6-4)
  • (modified) llvm/test/CodeGen/RISCV/select-const.ll (+46-156)
  • (modified) llvm/test/CodeGen/RISCV/select.ll (+50-88)
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index e8891538ede50..19864dd5a311e 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -9240,6 +9240,10 @@ foldBinOpIntoSelectIfProfitable(SDNode *BO, SelectionDAG &DAG,
   return DAG.getSelect(DL, VT, Sel.getOperand(0), NewT, NewF);
 }
 
+static bool isSimm12Constant(SDValue V) {
+  return isa<ConstantSDNode>(V) && V->getAsAPIntVal().isSignedIntN(12);
+}
+
 SDValue RISCVTargetLowering::lowerSELECT(SDValue Op, SelectionDAG &DAG) const {
   SDValue CondV = Op.getOperand(0);
   SDValue TrueV = Op.getOperand(1);
@@ -9261,6 +9265,20 @@ SDValue RISCVTargetLowering::lowerSELECT(SDValue Op, SelectionDAG &DAG) const {
   // sequence or RISCVISD::SELECT_CC node (branch-based select).
   if ((Subtarget.hasStdExtZicond() || Subtarget.hasVendorXVentanaCondOps()) &&
       VT.isScalarInteger()) {
+
+    // select c, simm12, 0 -> andi (sub x0, c), simm12
+    if (isSimm12Constant(TrueV) && isNullConstant(FalseV)) {
+      SDValue Mask = DAG.getNegative(CondV, DL, VT);
+      return DAG.getNode(ISD::AND, DL, VT, TrueV, Mask);
+    }
+
+    // select c, 0, simm12 -> andi (addi c, -1), simm12
+    if (isNullConstant(TrueV) && isSimm12Constant(FalseV)) {
+      SDValue Mask = DAG.getNode(ISD::SUB, DL, VT, CondV,
+                                 DAG.getConstant(1, DL, XLenVT));
+      return DAG.getNode(ISD::AND, DL, VT, FalseV, Mask);
+    }
+
     // (select c, t, 0) -> (czero_eqz t, c)
     if (isNullConstant(FalseV))
       return DAG.getNode(RISCVISD::CZERO_EQZ, DL, VT, TrueV, CondV);
diff --git a/llvm/test/CodeGen/RISCV/cmov-branch-opt.ll b/llvm/test/CodeGen/RISCV/cmov-branch-opt.ll
index 6608874286e34..351b02494ae85 100644
--- a/llvm/test/CodeGen/RISCV/cmov-branch-opt.ll
+++ b/llvm/test/CodeGen/RISCV/cmov-branch-opt.ll
@@ -149,8 +149,9 @@ define signext i32 @test4(i32 signext %x, i32 signext %y, i32 signext %z) {
 ;
 ; CMOV-ZICOND-LABEL: test4:
 ; CMOV-ZICOND:       # %bb.0:
-; CMOV-ZICOND-NEXT:    li a0, 3
-; CMOV-ZICOND-NEXT:    czero.nez a0, a0, a2
+; CMOV-ZICOND-NEXT:    snez a0, a2
+; CMOV-ZICOND-NEXT:    addi a0, a0, -1
+; CMOV-ZICOND-NEXT:    andi a0, a0, 3
 ; CMOV-ZICOND-NEXT:    ret
 ;
 ; SFB-NOZICOND-LABEL: test4:
@@ -164,8 +165,9 @@ define signext i32 @test4(i32 signext %x, i32 signext %y, i32 signext %z) {
 ;
 ; SFB-ZICOND-LABEL: test4:
 ; SFB-ZICOND:       # %bb.0:
-; SFB-ZICOND-NEXT:    li a0, 3
-; SFB-ZICOND-NEXT:    czero.nez a0, a0, a2
+; SFB-ZICOND-NEXT:    snez a0, a2
+; SFB-ZICOND-NEXT:    addi a0, a0, -1
+; SFB-ZICOND-NEXT:    andi a0, a0, 3
 ; SFB-ZICOND-NEXT:    ret
   %c = icmp eq i32 %z, 0
   %a = select i1 %c, i32 3, i32 0
diff --git a/llvm/test/CodeGen/RISCV/select-const.ll b/llvm/test/CodeGen/RISCV/select-const.ll
index 5b5548d15abca..652018d023b86 100644
--- a/llvm/test/CodeGen/RISCV/select-const.ll
+++ b/llvm/test/CodeGen/RISCV/select-const.ll
@@ -1080,184 +1080,74 @@ define i32 @sext_or_constant2(i32 signext %x) {
 
 
 define i32 @select_0_6(i32 signext %x) {
-; RV32I-LABEL: select_0_6:
-; RV32I:       # %bb.0:
-; RV32I-NEXT:    srai a0, a0, 2
-; RV32I-NEXT:    srli a0, a0, 30
-; RV32I-NEXT:    slli a0, a0, 1
-; RV32I-NEXT:    ret
-;
-; RV32IF-LABEL: select_0_6:
-; RV32IF:       # %bb.0:
-; RV32IF-NEXT:    srai a0, a0, 2
-; RV32IF-NEXT:    srli a0, a0, 30
-; RV32IF-NEXT:    slli a0, a0, 1
-; RV32IF-NEXT:    ret
-;
-; RV32ZICOND-LABEL: select_0_6:
-; RV32ZICOND:       # %bb.0:
-; RV32ZICOND-NEXT:    srli a0, a0, 31
-; RV32ZICOND-NEXT:    li a1, 6
-; RV32ZICOND-NEXT:    czero.eqz a0, a1, a0
-; RV32ZICOND-NEXT:    ret
-;
-; RV64I-LABEL: select_0_6:
-; RV64I:       # %bb.0:
-; RV64I-NEXT:    srai a0, a0, 2
-; RV64I-NEXT:    srli a0, a0, 62
-; RV64I-NEXT:    slli a0, a0, 1
-; RV64I-NEXT:    ret
-;
-; RV64IFD-LABEL: select_0_6:
-; RV64IFD:       # %bb.0:
-; RV64IFD-NEXT:    srai a0, a0, 2
-; RV64IFD-NEXT:    srli a0, a0, 62
-; RV64IFD-NEXT:    slli a0, a0, 1
-; RV64IFD-NEXT:    ret
+; RV32-LABEL: select_0_6:
+; RV32:       # %bb.0:
+; RV32-NEXT:    srai a0, a0, 2
+; RV32-NEXT:    srli a0, a0, 30
+; RV32-NEXT:    slli a0, a0, 1
+; RV32-NEXT:    ret
 ;
-; RV64ZICOND-LABEL: select_0_6:
-; RV64ZICOND:       # %bb.0:
-; RV64ZICOND-NEXT:    srli a0, a0, 63
-; RV64ZICOND-NEXT:    li a1, 6
-; RV64ZICOND-NEXT:    czero.eqz a0, a1, a0
-; RV64ZICOND-NEXT:    ret
+; RV64-LABEL: select_0_6:
+; RV64:       # %bb.0:
+; RV64-NEXT:    srai a0, a0, 2
+; RV64-NEXT:    srli a0, a0, 62
+; RV64-NEXT:    slli a0, a0, 1
+; RV64-NEXT:    ret
   %cmp = icmp sgt i32 %x, -1
   %cond = select i1 %cmp, i32 0, i32 6
   ret i32 %cond
 }
 
 define i32 @select_6_0(i32 signext %x) {
-; RV32I-LABEL: select_6_0:
-; RV32I:       # %bb.0:
-; RV32I-NEXT:    srli a0, a0, 31
-; RV32I-NEXT:    addi a0, a0, -1
-; RV32I-NEXT:    andi a0, a0, 6
-; RV32I-NEXT:    ret
-;
-; RV32IF-LABEL: select_6_0:
-; RV32IF:       # %bb.0:
-; RV32IF-NEXT:    srli a0, a0, 31
-; RV32IF-NEXT:    addi a0, a0, -1
-; RV32IF-NEXT:    andi a0, a0, 6
-; RV32IF-NEXT:    ret
-;
-; RV32ZICOND-LABEL: select_6_0:
-; RV32ZICOND:       # %bb.0:
-; RV32ZICOND-NEXT:    srli a0, a0, 31
-; RV32ZICOND-NEXT:    li a1, 6
-; RV32ZICOND-NEXT:    czero.nez a0, a1, a0
-; RV32ZICOND-NEXT:    ret
-;
-; RV64I-LABEL: select_6_0:
-; RV64I:       # %bb.0:
-; RV64I-NEXT:    srli a0, a0, 63
-; RV64I-NEXT:    addi a0, a0, -1
-; RV64I-NEXT:    andi a0, a0, 6
-; RV64I-NEXT:    ret
-;
-; RV64IFD-LABEL: select_6_0:
-; RV64IFD:       # %bb.0:
-; RV64IFD-NEXT:    srli a0, a0, 63
-; RV64IFD-NEXT:    addi a0, a0, -1
-; RV64IFD-NEXT:    andi a0, a0, 6
-; RV64IFD-NEXT:    ret
+; RV32-LABEL: select_6_0:
+; RV32:       # %bb.0:
+; RV32-NEXT:    srli a0, a0, 31
+; RV32-NEXT:    addi a0, a0, -1
+; RV32-NEXT:    andi a0, a0, 6
+; RV32-NEXT:    ret
 ;
-; RV64ZICOND-LABEL: select_6_0:
-; RV64ZICOND:       # %bb.0:
-; RV64ZICOND-NEXT:    srli a0, a0, 63
-; RV64ZICOND-NEXT:    li a1, 6
-; RV64ZICOND-NEXT:    czero.nez a0, a1, a0
-; RV64ZICOND-NEXT:    ret
+; RV64-LABEL: select_6_0:
+; RV64:       # %bb.0:
+; RV64-NEXT:    srli a0, a0, 63
+; RV64-NEXT:    addi a0, a0, -1
+; RV64-NEXT:    andi a0, a0, 6
+; RV64-NEXT:    ret
   %cmp = icmp sgt i32 %x, -1
   %cond = select i1 %cmp, i32 6, i32 0
   ret i32 %cond
 }
 
 define i32 @select_0_394(i32 signext %x) {
-; RV32I-LABEL: select_0_394:
-; RV32I:       # %bb.0:
-; RV32I-NEXT:    srai a0, a0, 31
-; RV32I-NEXT:    andi a0, a0, 394
-; RV32I-NEXT:    ret
-;
-; RV32IF-LABEL: select_0_394:
-; RV32IF:       # %bb.0:
-; RV32IF-NEXT:    srai a0, a0, 31
-; RV32IF-NEXT:    andi a0, a0, 394
-; RV32IF-NEXT:    ret
-;
-; RV32ZICOND-LABEL: select_0_394:
-; RV32ZICOND:       # %bb.0:
-; RV32ZICOND-NEXT:    srli a0, a0, 31
-; RV32ZICOND-NEXT:    li a1, 394
-; RV32ZICOND-NEXT:    czero.eqz a0, a1, a0
-; RV32ZICOND-NEXT:    ret
-;
-; RV64I-LABEL: select_0_394:
-; RV64I:       # %bb.0:
-; RV64I-NEXT:    srai a0, a0, 63
-; RV64I-NEXT:    andi a0, a0, 394
-; RV64I-NEXT:    ret
-;
-; RV64IFD-LABEL: select_0_394:
-; RV64IFD:       # %bb.0:
-; RV64IFD-NEXT:    srai a0, a0, 63
-; RV64IFD-NEXT:    andi a0, a0, 394
-; RV64IFD-NEXT:    ret
+; RV32-LABEL: select_0_394:
+; RV32:       # %bb.0:
+; RV32-NEXT:    srai a0, a0, 31
+; RV32-NEXT:    andi a0, a0, 394
+; RV32-NEXT:    ret
 ;
-; RV64ZICOND-LABEL: select_0_394:
-; RV64ZICOND:       # %bb.0:
-; RV64ZICOND-NEXT:    srli a0, a0, 63
-; RV64ZICOND-NEXT:    li a1, 394
-; RV64ZICOND-NEXT:    czero.eqz a0, a1, a0
-; RV64ZICOND-NEXT:    ret
+; RV64-LABEL: select_0_394:
+; RV64:       # %bb.0:
+; RV64-NEXT:    srai a0, a0, 63
+; RV64-NEXT:    andi a0, a0, 394
+; RV64-NEXT:    ret
   %cmp = icmp sgt i32 %x, -1
   %cond = select i1 %cmp, i32 0, i32 394
   ret i32 %cond
 }
 
 define i32 @select_394_0(i32 signext %x) {
-; RV32I-LABEL: select_394_0:
-; RV32I:       # %bb.0:
-; RV32I-NEXT:    srli a0, a0, 31
-; RV32I-NEXT:    addi a0, a0, -1
-; RV32I-NEXT:    andi a0, a0, 394
-; RV32I-NEXT:    ret
-;
-; RV32IF-LABEL: select_394_0:
-; RV32IF:       # %bb.0:
-; RV32IF-NEXT:    srli a0, a0, 31
-; RV32IF-NEXT:    addi a0, a0, -1
-; RV32IF-NEXT:    andi a0, a0, 394
-; RV32IF-NEXT:    ret
-;
-; RV32ZICOND-LABEL: select_394_0:
-; RV32ZICOND:       # %bb.0:
-; RV32ZICOND-NEXT:    srli a0, a0, 31
-; RV32ZICOND-NEXT:    li a1, 394
-; RV32ZICOND-NEXT:    czero.nez a0, a1, a0
-; RV32ZICOND-NEXT:    ret
-;
-; RV64I-LABEL: select_394_0:
-; RV64I:       # %bb.0:
-; RV64I-NEXT:    srli a0, a0, 63
-; RV64I-NEXT:    addi a0, a0, -1
-; RV64I-NEXT:    andi a0, a0, 394
-; RV64I-NEXT:    ret
-;
-; RV64IFD-LABEL: select_394_0:
-; RV64IFD:       # %bb.0:
-; RV64IFD-NEXT:    srli a0, a0, 63
-; RV64IFD-NEXT:    addi a0, a0, -1
-; RV64IFD-NEXT:    andi a0, a0, 394
-; RV64IFD-NEXT:    ret
+; RV32-LABEL: select_394_0:
+; RV32:       # %bb.0:
+; RV32-NEXT:    srli a0, a0, 31
+; RV32-NEXT:    addi a0, a0, -1
+; RV32-NEXT:    andi a0, a0, 394
+; RV32-NEXT:    ret
 ;
-; RV64ZICOND-LABEL: select_394_0:
-; RV64ZICOND:       # %bb.0:
-; RV64ZICOND-NEXT:    srli a0, a0, 63
-; RV64ZICOND-NEXT:    li a1, 394
-; RV64ZICOND-NEXT:    czero.nez a0, a1, a0
-; RV64ZICOND-NEXT:    ret
+; RV64-LABEL: select_394_0:
+; RV64:       # %bb.0:
+; RV64-NEXT:    srli a0, a0, 63
+; RV64-NEXT:    addi a0, a0, -1
+; RV64-NEXT:    andi a0, a0, 394
+; RV64-NEXT:    ret
   %cmp = icmp sgt i32 %x, -1
   %cond = select i1 %cmp, i32 394, i32 0
   ret i32 %cond
diff --git a/llvm/test/CodeGen/RISCV/select.ll b/llvm/test/CodeGen/RISCV/select.ll
index 1e7bb4295938b..11585baf0bc59 100644
--- a/llvm/test/CodeGen/RISCV/select.ll
+++ b/llvm/test/CodeGen/RISCV/select.ll
@@ -25,16 +25,18 @@ define i16 @select_xor_1(i16 %A, i8 %cond) {
 ; RV64IMXVTCONDOPS-LABEL: select_xor_1:
 ; RV64IMXVTCONDOPS:       # %bb.0: # %entry
 ; RV64IMXVTCONDOPS-NEXT:    andi a1, a1, 1
-; RV64IMXVTCONDOPS-NEXT:    li a2, 43
-; RV64IMXVTCONDOPS-NEXT:    vt.maskc a1, a2, a1
+; RV64IMXVTCONDOPS-NEXT:    seqz a1, a1
+; RV64IMXVTCONDOPS-NEXT:    addi a1, a1, -1
+; RV64IMXVTCONDOPS-NEXT:    andi a1, a1, 43
 ; RV64IMXVTCONDOPS-NEXT:    xor a0, a0, a1
 ; RV64IMXVTCONDOPS-NEXT:    ret
 ;
 ; CHECKZICOND-LABEL: select_xor_1:
 ; CHECKZICOND:       # %bb.0: # %entry
 ; CHECKZICOND-NEXT:    andi a1, a1, 1
-; CHECKZICOND-NEXT:    li a2, 43
-; CHECKZICOND-NEXT:    czero.eqz a1, a2, a1
+; CHECKZICOND-NEXT:    seqz a1, a1
+; CHECKZICOND-NEXT:    addi a1, a1, -1
+; CHECKZICOND-NEXT:    andi a1, a1, 43
 ; CHECKZICOND-NEXT:    xor a0, a0, a1
 ; CHECKZICOND-NEXT:    ret
 entry:
@@ -66,19 +68,27 @@ define i16 @select_xor_1b(i16 %A, i8 %cond) {
 ;
 ; RV64IMXVTCONDOPS-LABEL: select_xor_1b:
 ; RV64IMXVTCONDOPS:       # %bb.0: # %entry
-; RV64IMXVTCONDOPS-NEXT:    andi a1, a1, 1
-; RV64IMXVTCONDOPS-NEXT:    li a2, 43
-; RV64IMXVTCONDOPS-NEXT:    vt.maskc a1, a2, a1
+; RV64IMXVTCONDOPS-NEXT:    slli a1, a1, 63
+; RV64IMXVTCONDOPS-NEXT:    srai a1, a1, 63
+; RV64IMXVTCONDOPS-NEXT:    andi a1, a1, 43
 ; RV64IMXVTCONDOPS-NEXT:    xor a0, a0, a1
 ; RV64IMXVTCONDOPS-NEXT:    ret
 ;
-; CHECKZICOND-LABEL: select_xor_1b:
-; CHECKZICOND:       # %bb.0: # %entry
-; CHECKZICOND-NEXT:    andi a1, a1, 1
-; CHECKZICOND-NEXT:    li a2, 43
-; CHECKZICOND-NEXT:    czero.eqz a1, a2, a1
-; CHECKZICOND-NEXT:    xor a0, a0, a1
-; CHECKZICOND-NEXT:    ret
+; RV32IMZICOND-LABEL: select_xor_1b:
+; RV32IMZICOND:       # %bb.0: # %entry
+; RV32IMZICOND-NEXT:    slli a1, a1, 31
+; RV32IMZICOND-NEXT:    srai a1, a1, 31
+; RV32IMZICOND-NEXT:    andi a1, a1, 43
+; RV32IMZICOND-NEXT:    xor a0, a0, a1
+; RV32IMZICOND-NEXT:    ret
+;
+; RV64IMZICOND-LABEL: select_xor_1b:
+; RV64IMZICOND:       # %bb.0: # %entry
+; RV64IMZICOND-NEXT:    slli a1, a1, 63
+; RV64IMZICOND-NEXT:    srai a1, a1, 63
+; RV64IMZICOND-NEXT:    andi a1, a1, 43
+; RV64IMZICOND-NEXT:    xor a0, a0, a1
+; RV64IMZICOND-NEXT:    ret
 entry:
  %and = and i8 %cond, 1
  %cmp10 = icmp ne i8 %and, 1
@@ -166,37 +176,13 @@ entry:
 }
 
 define i16 @select_xor_3(i16 %A, i8 %cond) {
-; RV32IM-LABEL: select_xor_3:
-; RV32IM:       # %bb.0: # %entry
-; RV32IM-NEXT:    andi a1, a1, 1
-; RV32IM-NEXT:    addi a1, a1, -1
-; RV32IM-NEXT:    andi a1, a1, 43
-; RV32IM-NEXT:    xor a0, a0, a1
-; RV32IM-NEXT:    ret
-;
-; RV64IM-LABEL: select_xor_3:
-; RV64IM:       # %bb.0: # %entry
-; RV64IM-NEXT:    andi a1, a1, 1
-; RV64IM-NEXT:    addi a1, a1, -1
-; RV64IM-NEXT:    andi a1, a1, 43
-; RV64IM-NEXT:    xor a0, a0, a1
-; RV64IM-NEXT:    ret
-;
-; RV64IMXVTCONDOPS-LABEL: select_xor_3:
-; RV64IMXVTCONDOPS:       # %bb.0: # %entry
-; RV64IMXVTCONDOPS-NEXT:    andi a1, a1, 1
-; RV64IMXVTCONDOPS-NEXT:    li a2, 43
-; RV64IMXVTCONDOPS-NEXT:    vt.maskcn a1, a2, a1
-; RV64IMXVTCONDOPS-NEXT:    xor a0, a0, a1
-; RV64IMXVTCONDOPS-NEXT:    ret
-;
-; CHECKZICOND-LABEL: select_xor_3:
-; CHECKZICOND:       # %bb.0: # %entry
-; CHECKZICOND-NEXT:    andi a1, a1, 1
-; CHECKZICOND-NEXT:    li a2, 43
-; CHECKZICOND-NEXT:    czero.nez a1, a2, a1
-; CHECKZICOND-NEXT:    xor a0, a0, a1
-; CHECKZICOND-NEXT:    ret
+; CHECK-LABEL: select_xor_3:
+; CHECK:       # %bb.0: # %entry
+; CHECK-NEXT:    andi a1, a1, 1
+; CHECK-NEXT:    addi a1, a1, -1
+; CHECK-NEXT:    andi a1, a1, 43
+; CHECK-NEXT:    xor a0, a0, a1
+; CHECK-NEXT:    ret
 entry:
  %and = and i8 %cond, 1
  %cmp10 = icmp eq i8 %and, 0
@@ -208,37 +194,13 @@ entry:
 ; Equivalent to above, but with icmp ne (and %cond, 1), 1 instead of
 ; icmp eq (and %cond, 1), 0
 define i16 @select_xor_3b(i16 %A, i8 %cond) {
-; RV32IM-LABEL: select_xor_3b:
-; RV32IM:       # %bb.0: # %entry
-; RV32IM-NEXT:    andi a1, a1, 1
-; RV32IM-NEXT:    addi a1, a1, -1
-; RV32IM-NEXT:    andi a1, a1, 43
-; RV32IM-NEXT:    xor a0, a0, a1
-; RV32IM-NEXT:    ret
-;
-; RV64IM-LABEL: select_xor_3b:
-; RV64IM:       # %bb.0: # %entry
-; RV64IM-NEXT:    andi a1, a1, 1
-; RV64IM-NEXT:    addi a1, a1, -1
-; RV64IM-NEXT:    andi a1, a1, 43
-; RV64IM-NEXT:    xor a0, a0, a1
-; RV64IM-NEXT:    ret
-;
-; RV64IMXVTCONDOPS-LABEL: select_xor_3b:
-; RV64IMXVTCONDOPS:       # %bb.0: # %entry
-; RV64IMXVTCONDOPS-NEXT:    andi a1, a1, 1
-; RV64IMXVTCONDOPS-NEXT:    li a2, 43
-; RV64IMXVTCONDOPS-NEXT:    vt.maskcn a1, a2, a1
-; RV64IMXVTCONDOPS-NEXT:    xor a0, a0, a1
-; RV64IMXVTCONDOPS-NEXT:    ret
-;
-; CHECKZICOND-LABEL: select_xor_3b:
-; CHECKZICOND:       # %bb.0: # %entry
-; CHECKZICOND-NEXT:    andi a1, a1, 1
-; CHECKZICOND-NEXT:    li a2, 43
-; CHECKZICOND-NEXT:    czero.nez a1, a2, a1
-; CHECKZICOND-NEXT:    xor a0, a0, a1
-; CHECKZICOND-NEXT:    ret
+; CHECK-LABEL: select_xor_3b:
+; CHECK:       # %bb.0: # %entry
+; CHECK-NEXT:    andi a1, a1, 1
+; CHECK-NEXT:    addi a1, a1, -1
+; CHECK-NEXT:    andi a1, a1, 43
+; CHECK-NEXT:    xor a0, a0, a1
+; CHECK-NEXT:    ret
 entry:
  %and = and i8 %cond, 1
  %cmp10 = icmp ne i8 %and, 1
@@ -730,22 +692,22 @@ define i32 @select_add_3(i1 zeroext %cond, i32 %a) {
 ;
 ; RV64IMXVTCONDOPS-LABEL: select_add_3:
 ; RV64IMXVTCONDOPS:       # %bb.0: # %entry
-; RV64IMXVTCONDOPS-NEXT:    li a2, 42
-; RV64IMXVTCONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64IMXVTCONDOPS-NEXT:    addi a0, a0, -1
+; RV64IMXVTCONDOPS-NEXT:    andi a0, a0, 42
 ; RV64IMXVTCONDOPS-NEXT:    addw a0, a1, a0
 ; RV64IMXVTCONDOPS-NEXT:    ret
 ;
 ; RV32IMZICOND-LABEL: select_add_3:
 ; RV32IMZICOND:       # %bb.0: # %entry
-; RV32IMZICOND-NEXT:    li a2, 42
-; RV32IMZICOND-NEXT:    czero.nez a0, a2, a0
+; RV32IMZICOND-NEXT:    addi a0, a0, -1
+; RV32IMZICOND-NEXT:    andi a0, a0, 42
 ; RV32IMZICOND-NEXT:    add a0, a1, a0
 ; RV32IMZICOND-NEXT:    ret
 ;
 ; RV64IMZICOND-LABEL: select_add_3:
 ; RV64IMZICOND:       # %bb.0: # %entry
-; RV64IMZICOND-NEXT:    li a2, 42
-; RV64IMZICOND-NEXT:    czero.nez a0, a2, a0
+; RV64IMZICOND-NEXT:    addi a0, a0, -1
+; RV64IMZICOND-NEXT:    andi a0, a0, 42
 ; RV64IMZICOND-NEXT:    addw a0, a1, a0
 ; RV64IMZICOND-NEXT:    ret
 entry:
@@ -857,22 +819,22 @@ define i32 @select_sub_3(i1 zeroext %cond, i32 %a) {
 ;
 ; RV64IMXVTCONDOPS-LABEL: select_sub_3:
 ; RV64IMXVTCONDOPS:       # %bb.0: # %entry
-; RV64IMXVTCONDOPS-NEXT:    li a2, 42
-; RV64IMXVTCONDOPS-NEXT:    vt.maskcn a0, a2, a0
+; RV64IMXVTCONDOPS-NEXT:    addi a0, a0, -1
+; RV64IMXVTCONDOPS-NEXT:    andi a0, a0, 42
 ; RV64IMXVTCONDOPS-NEXT:    subw a0, a1, a0
 ; RV64IMXVTCONDOPS-NEXT:    ret
 ;
 ; RV32IMZICOND-LABEL: select_sub_3:
 ; RV32IMZICOND:       # %bb.0: # %entry
-; RV32IMZICOND-NEXT:    li a2, 42
-; RV32IMZICOND-NEXT:    czero.nez a0, a2, a0
+; RV32IMZICOND-NEXT:    addi a0, a0, -1
+; RV32IMZICOND-NEXT:    andi a0, a0, 42
 ; RV32IMZICOND-NEXT:    sub a0, a1, a0
 ; RV32IMZICOND-NEXT:    ret
 ;
 ; RV64IMZICOND-LABEL: select_sub_3:
 ; RV64IMZICOND:       # %bb.0: # %entry
-; RV64IMZICOND-NEXT:    li a2, 42
-; RV64IMZICOND-NEXT:    czero.nez a0, a2, a0
+; RV64IMZICOND-NEXT:    addi a0, a0, -1
+; RV64IMZICOND-NEXT:    andi a0, a0, 42
 ; RV64IMZICOND-NEXT:    subw a0, a1, a0
 ; RV64IMZICOND-NEXT:    ret
 entry:

preames (Collaborator, Author) commented Sep 4, 2025

This is a sub-case of the expansion we do without zicond, but restricted specifically to the simm12 case. In the general case, where the other source is a register, using zicond is likely better.

I'm not really happy about the code duplication here; if anyone has a suggestion on how to handle this better, I'm open to ideas. Note that I'm probably going to want to add the select c, -1, simm12 cases too.
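
For reference, one possible arithmetic spelling for those -1 cases (OR-based rather than AND-based) — a sketch of the underlying identities only, not something this patch implements:

#include <cassert>
#include <cstdint>

// Illustrative only: a possible spelling for the select c, -1, simm12
// cases mentioned above (ori instead of andi). Not part of this patch.
int64_t select_m1_k(bool c, int64_t k) {
  return -static_cast<int64_t>(c) | k;  // c ? -1 : k
}

int64_t select_k_m1(bool c, int64_t k) {
  return (static_cast<int64_t>(c) - 1) | k;  // c ? k : -1
}

int main() {
  for (bool c : {false, true})
    for (int64_t k : {6, 394, -42}) {
      assert(select_m1_k(c, k) == (c ? -1 : k));
      assert(select_k_m1(c, k) == (c ? k : -1));
    }
  return 0;
}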


github-actions bot commented Sep 4, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.


    // select c, 0, simm12 -> andi (addi c, -1), simm12
    if (isNullConstant(TrueV) && isSimm12Constant(FalseV)) {
      SDValue Mask = DAG.getNode(ISD::SUB, DL, VT, CondV,

Collaborator commented:
Make the ADD with -1 since that's what the comment says?

preames (Collaborator, Author) replied:
I made the change as requested, but honestly I think the SUB spelling is clearer. Happy to go either way.

topperc (Collaborator) left a comment

LGTM

preames merged commit 91e85cc into llvm:main Sep 4, 2025
9 checks passed
preames deleted the pr-riscv-select-of-simm12-and-zero branch September 4, 2025 21:51