[RISCV][NFC] Simplify some rvv regbankselect cases #155961
base: main
Conversation
@llvm/pr-subscribers-backend-risc-v
@llvm/pr-subscribers-llvm-globalisel

Author: Jianjian Guan (jacquesguan)

Changes

Patch is 394.08 KiB, truncated to 20.00 KiB below; full version: https://github.com/llvm/llvm-project/pull/155961.diff

10 Files Affected:
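The simplification works because the autogenerated RV32I and RV64I check bodies are identical line for line, so a single shared CHECK prefix captures both targets. A minimal sketch of why the merge is safe, using two excerpts from the patch (file names and the excerpt selection are illustrative, not part of the PR):

```shell
#!/bin/sh
# Take matching RV32I and RV64I check excerpts from the patch, rewrite
# both prefixes to CHECK, and diff: an empty diff means the two blocks
# collapse into one without losing any coverage.
cat > rv32.txt <<'EOF'
; RV32I: liveins: $v8, $v9
; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s8>) = G_ADD [[COPY]], [[COPY1]]
; RV32I-NEXT: PseudoRET implicit $v8
EOF
cat > rv64.txt <<'EOF'
; RV64I: liveins: $v8, $v9
; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s8>) = G_ADD [[COPY]], [[COPY1]]
; RV64I-NEXT: PseudoRET implicit $v8
EOF
sed 's/RV32I/CHECK/' rv32.txt > merged32.txt
sed 's/RV64I/CHECK/' rv64.txt > merged64.txt
if diff merged32.txt merged64.txt >/dev/null; then
  echo "prefixes merge cleanly"
fi
```

In the real workflow the merged CHECK lines are regenerated by utils/update_mir_test_checks.py rather than edited by hand, as the NOTE line at the top of the test says.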
diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/regbankselect/rvv/add.mir b/llvm/test/CodeGen/RISCV/GlobalISel/regbankselect/rvv/add.mir
index 759c28543f1e5..dcb88cd826291 100644
--- a/llvm/test/CodeGen/RISCV/GlobalISel/regbankselect/rvv/add.mir
+++ b/llvm/test/CodeGen/RISCV/GlobalISel/regbankselect/rvv/add.mir
@@ -1,10 +1,10 @@
# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py
# RUN: llc -mtriple=riscv32 -mattr=+m,+v -run-pass=regbankselect \
# RUN: -simplify-mir -verify-machineinstrs %s \
-# RUN: -o - | FileCheck -check-prefix=RV32I %s
+# RUN: -o - | FileCheck -check-prefixes=CHECK %s
# RUN: llc -mtriple=riscv64 -mattr=+m,+v -run-pass=regbankselect \
# RUN: -simplify-mir -verify-machineinstrs %s \
-# RUN: -o - | FileCheck -check-prefix=RV64I %s
+# RUN: -o - | FileCheck -check-prefixes=CHECK %s
---
name: vadd_vv_nxv1i8
legalized: true
@@ -13,23 +13,14 @@ body: |
bb.0.entry:
liveins: $v8, $v9
- ; RV32I-LABEL: name: vadd_vv_nxv1i8
- ; RV32I: liveins: $v8, $v9
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s8>) = COPY $v8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s8>) = COPY $v9
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8 = COPY [[ADD]](<vscale x 1 x s8>)
- ; RV32I-NEXT: PseudoRET implicit $v8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv1i8
- ; RV64I: liveins: $v8, $v9
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s8>) = COPY $v8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s8>) = COPY $v9
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8 = COPY [[ADD]](<vscale x 1 x s8>)
- ; RV64I-NEXT: PseudoRET implicit $v8
+ ; CHECK-LABEL: name: vadd_vv_nxv1i8
+ ; CHECK: liveins: $v8, $v9
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s8>) = COPY $v8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s8>) = COPY $v9
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s8>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8 = COPY [[ADD]](<vscale x 1 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
%0:_(<vscale x 1 x s8>) = COPY $v8
%1:_(<vscale x 1 x s8>) = COPY $v9
%2:_(<vscale x 1 x s8>) = G_ADD %0, %1
@@ -45,23 +36,14 @@ body: |
bb.0.entry:
liveins: $v8, $v9
- ; RV32I-LABEL: name: vadd_vv_nxv2i8
- ; RV32I: liveins: $v8, $v9
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s8>) = COPY $v8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s8>) = COPY $v9
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 2 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8 = COPY [[ADD]](<vscale x 2 x s8>)
- ; RV32I-NEXT: PseudoRET implicit $v8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv2i8
- ; RV64I: liveins: $v8, $v9
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s8>) = COPY $v8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s8>) = COPY $v9
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 2 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8 = COPY [[ADD]](<vscale x 2 x s8>)
- ; RV64I-NEXT: PseudoRET implicit $v8
+ ; CHECK-LABEL: name: vadd_vv_nxv2i8
+ ; CHECK: liveins: $v8, $v9
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s8>) = COPY $v8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s8>) = COPY $v9
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 2 x s8>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8 = COPY [[ADD]](<vscale x 2 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
%0:_(<vscale x 2 x s8>) = COPY $v8
%1:_(<vscale x 2 x s8>) = COPY $v9
%2:_(<vscale x 2 x s8>) = G_ADD %0, %1
@@ -77,23 +59,14 @@ body: |
bb.0.entry:
liveins: $v8, $v9
- ; RV32I-LABEL: name: vadd_vv_nxv4i8
- ; RV32I: liveins: $v8, $v9
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s8>) = COPY $v8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s8>) = COPY $v9
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8 = COPY [[ADD]](<vscale x 4 x s8>)
- ; RV32I-NEXT: PseudoRET implicit $v8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv4i8
- ; RV64I: liveins: $v8, $v9
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s8>) = COPY $v8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s8>) = COPY $v9
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8 = COPY [[ADD]](<vscale x 4 x s8>)
- ; RV64I-NEXT: PseudoRET implicit $v8
+ ; CHECK-LABEL: name: vadd_vv_nxv4i8
+ ; CHECK: liveins: $v8, $v9
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s8>) = COPY $v8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s8>) = COPY $v9
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s8>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8 = COPY [[ADD]](<vscale x 4 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
%0:_(<vscale x 4 x s8>) = COPY $v8
%1:_(<vscale x 4 x s8>) = COPY $v9
%2:_(<vscale x 4 x s8>) = G_ADD %0, %1
@@ -109,23 +82,14 @@ body: |
bb.0.entry:
liveins: $v8, $v9
- ; RV32I-LABEL: name: vadd_vv_nxv8i8
- ; RV32I: liveins: $v8, $v9
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s8>) = COPY $v8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s8>) = COPY $v9
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 8 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8 = COPY [[ADD]](<vscale x 8 x s8>)
- ; RV32I-NEXT: PseudoRET implicit $v8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv8i8
- ; RV64I: liveins: $v8, $v9
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s8>) = COPY $v8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s8>) = COPY $v9
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 8 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8 = COPY [[ADD]](<vscale x 8 x s8>)
- ; RV64I-NEXT: PseudoRET implicit $v8
+ ; CHECK-LABEL: name: vadd_vv_nxv8i8
+ ; CHECK: liveins: $v8, $v9
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s8>) = COPY $v8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s8>) = COPY $v9
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 8 x s8>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8 = COPY [[ADD]](<vscale x 8 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
%0:_(<vscale x 8 x s8>) = COPY $v8
%1:_(<vscale x 8 x s8>) = COPY $v9
%2:_(<vscale x 8 x s8>) = G_ADD %0, %1
@@ -141,23 +105,14 @@ body: |
bb.0.entry:
liveins: $v8m2, $v10m2
- ; RV32I-LABEL: name: vadd_vv_nxv16i8
- ; RV32I: liveins: $v8m2, $v10m2
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s8>) = COPY $v8m2
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s8>) = COPY $v10m2
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8m2 = COPY [[ADD]](<vscale x 16 x s8>)
- ; RV32I-NEXT: PseudoRET implicit $v8m2
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv16i8
- ; RV64I: liveins: $v8m2, $v10m2
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s8>) = COPY $v8m2
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s8>) = COPY $v10m2
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8m2 = COPY [[ADD]](<vscale x 16 x s8>)
- ; RV64I-NEXT: PseudoRET implicit $v8m2
+ ; CHECK-LABEL: name: vadd_vv_nxv16i8
+ ; CHECK: liveins: $v8m2, $v10m2
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s8>) = COPY $v8m2
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s8>) = COPY $v10m2
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s8>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8m2 = COPY [[ADD]](<vscale x 16 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
%0:_(<vscale x 16 x s8>) = COPY $v8m2
%1:_(<vscale x 16 x s8>) = COPY $v10m2
%2:_(<vscale x 16 x s8>) = G_ADD %0, %1
@@ -173,23 +128,14 @@ body: |
bb.0.entry:
liveins: $v8m4, $v12m4
- ; RV32I-LABEL: name: vadd_vv_nxv32i8
- ; RV32I: liveins: $v8m4, $v12m4
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 32 x s8>) = COPY $v8m4
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 32 x s8>) = COPY $v12m4
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 32 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8m4 = COPY [[ADD]](<vscale x 32 x s8>)
- ; RV32I-NEXT: PseudoRET implicit $v8m4
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv32i8
- ; RV64I: liveins: $v8m4, $v12m4
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 32 x s8>) = COPY $v8m4
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 32 x s8>) = COPY $v12m4
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 32 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8m4 = COPY [[ADD]](<vscale x 32 x s8>)
- ; RV64I-NEXT: PseudoRET implicit $v8m4
+ ; CHECK-LABEL: name: vadd_vv_nxv32i8
+ ; CHECK: liveins: $v8m4, $v12m4
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 32 x s8>) = COPY $v8m4
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 32 x s8>) = COPY $v12m4
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 32 x s8>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8m4 = COPY [[ADD]](<vscale x 32 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
%0:_(<vscale x 32 x s8>) = COPY $v8m4
%1:_(<vscale x 32 x s8>) = COPY $v12m4
%2:_(<vscale x 32 x s8>) = G_ADD %0, %1
@@ -205,23 +151,14 @@ body: |
bb.0.entry:
liveins: $v8m8, $v16m8
- ; RV32I-LABEL: name: vadd_vv_nxv64i8
- ; RV32I: liveins: $v8m8, $v16m8
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 64 x s8>) = COPY $v8m8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 64 x s8>) = COPY $v16m8
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 64 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8m8 = COPY [[ADD]](<vscale x 64 x s8>)
- ; RV32I-NEXT: PseudoRET implicit $v8m8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv64i8
- ; RV64I: liveins: $v8m8, $v16m8
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 64 x s8>) = COPY $v8m8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 64 x s8>) = COPY $v16m8
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 64 x s8>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8m8 = COPY [[ADD]](<vscale x 64 x s8>)
- ; RV64I-NEXT: PseudoRET implicit $v8m8
+ ; CHECK-LABEL: name: vadd_vv_nxv64i8
+ ; CHECK: liveins: $v8m8, $v16m8
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 64 x s8>) = COPY $v8m8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 64 x s8>) = COPY $v16m8
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 64 x s8>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8m8 = COPY [[ADD]](<vscale x 64 x s8>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m8
%0:_(<vscale x 64 x s8>) = COPY $v8m8
%1:_(<vscale x 64 x s8>) = COPY $v16m8
%2:_(<vscale x 64 x s8>) = G_ADD %0, %1
@@ -237,23 +174,14 @@ body: |
bb.0.entry:
liveins: $v8, $v9
- ; RV32I-LABEL: name: vadd_vv_nxv1i16
- ; RV32I: liveins: $v8, $v9
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s16>) = COPY $v8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s16>) = COPY $v9
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8 = COPY [[ADD]](<vscale x 1 x s16>)
- ; RV32I-NEXT: PseudoRET implicit $v8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv1i16
- ; RV64I: liveins: $v8, $v9
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s16>) = COPY $v8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s16>) = COPY $v9
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8 = COPY [[ADD]](<vscale x 1 x s16>)
- ; RV64I-NEXT: PseudoRET implicit $v8
+ ; CHECK-LABEL: name: vadd_vv_nxv1i16
+ ; CHECK: liveins: $v8, $v9
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 1 x s16>) = COPY $v8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 1 x s16>) = COPY $v9
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 1 x s16>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8 = COPY [[ADD]](<vscale x 1 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
%0:_(<vscale x 1 x s16>) = COPY $v8
%1:_(<vscale x 1 x s16>) = COPY $v9
%2:_(<vscale x 1 x s16>) = G_ADD %0, %1
@@ -269,23 +197,14 @@ body: |
bb.0.entry:
liveins: $v8, $v9
- ; RV32I-LABEL: name: vadd_vv_nxv2i16
- ; RV32I: liveins: $v8, $v9
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s16>) = COPY $v8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s16>) = COPY $v9
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 2 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8 = COPY [[ADD]](<vscale x 2 x s16>)
- ; RV32I-NEXT: PseudoRET implicit $v8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv2i16
- ; RV64I: liveins: $v8, $v9
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s16>) = COPY $v8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s16>) = COPY $v9
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 2 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8 = COPY [[ADD]](<vscale x 2 x s16>)
- ; RV64I-NEXT: PseudoRET implicit $v8
+ ; CHECK-LABEL: name: vadd_vv_nxv2i16
+ ; CHECK: liveins: $v8, $v9
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 2 x s16>) = COPY $v8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 2 x s16>) = COPY $v9
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 2 x s16>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8 = COPY [[ADD]](<vscale x 2 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
%0:_(<vscale x 2 x s16>) = COPY $v8
%1:_(<vscale x 2 x s16>) = COPY $v9
%2:_(<vscale x 2 x s16>) = G_ADD %0, %1
@@ -301,23 +220,14 @@ body: |
bb.0.entry:
liveins: $v8, $v9
- ; RV32I-LABEL: name: vadd_vv_nxv4i16
- ; RV32I: liveins: $v8, $v9
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s16>) = COPY $v8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s16>) = COPY $v9
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8 = COPY [[ADD]](<vscale x 4 x s16>)
- ; RV32I-NEXT: PseudoRET implicit $v8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv4i16
- ; RV64I: liveins: $v8, $v9
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s16>) = COPY $v8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s16>) = COPY $v9
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8 = COPY [[ADD]](<vscale x 4 x s16>)
- ; RV64I-NEXT: PseudoRET implicit $v8
+ ; CHECK-LABEL: name: vadd_vv_nxv4i16
+ ; CHECK: liveins: $v8, $v9
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 4 x s16>) = COPY $v8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 4 x s16>) = COPY $v9
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 4 x s16>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8 = COPY [[ADD]](<vscale x 4 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8
%0:_(<vscale x 4 x s16>) = COPY $v8
%1:_(<vscale x 4 x s16>) = COPY $v9
%2:_(<vscale x 4 x s16>) = G_ADD %0, %1
@@ -333,23 +243,14 @@ body: |
bb.0.entry:
liveins: $v8m2, $v10m2
- ; RV32I-LABEL: name: vadd_vv_nxv8i16
- ; RV32I: liveins: $v8m2, $v10m2
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s16>) = COPY $v8m2
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s16>) = COPY $v10m2
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 8 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8m2 = COPY [[ADD]](<vscale x 8 x s16>)
- ; RV32I-NEXT: PseudoRET implicit $v8m2
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv8i16
- ; RV64I: liveins: $v8m2, $v10m2
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s16>) = COPY $v8m2
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s16>) = COPY $v10m2
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 8 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8m2 = COPY [[ADD]](<vscale x 8 x s16>)
- ; RV64I-NEXT: PseudoRET implicit $v8m2
+ ; CHECK-LABEL: name: vadd_vv_nxv8i16
+ ; CHECK: liveins: $v8m2, $v10m2
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 8 x s16>) = COPY $v8m2
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 8 x s16>) = COPY $v10m2
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 8 x s16>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8m2 = COPY [[ADD]](<vscale x 8 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m2
%0:_(<vscale x 8 x s16>) = COPY $v8m2
%1:_(<vscale x 8 x s16>) = COPY $v10m2
%2:_(<vscale x 8 x s16>) = G_ADD %0, %1
@@ -365,23 +266,14 @@ body: |
bb.0.entry:
liveins: $v8m4, $v12m4
- ; RV32I-LABEL: name: vadd_vv_nxv16i16
- ; RV32I: liveins: $v8m4, $v12m4
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s16>) = COPY $v8m4
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s16>) = COPY $v12m4
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8m4 = COPY [[ADD]](<vscale x 16 x s16>)
- ; RV32I-NEXT: PseudoRET implicit $v8m4
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv16i16
- ; RV64I: liveins: $v8m4, $v12m4
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s16>) = COPY $v8m4
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s16>) = COPY $v12m4
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8m4 = COPY [[ADD]](<vscale x 16 x s16>)
- ; RV64I-NEXT: PseudoRET implicit $v8m4
+ ; CHECK-LABEL: name: vadd_vv_nxv16i16
+ ; CHECK: liveins: $v8m4, $v12m4
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 16 x s16>) = COPY $v8m4
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 16 x s16>) = COPY $v12m4
+ ; CHECK-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 16 x s16>) = G_ADD [[COPY]], [[COPY1]]
+ ; CHECK-NEXT: $v8m4 = COPY [[ADD]](<vscale x 16 x s16>)
+ ; CHECK-NEXT: PseudoRET implicit $v8m4
%0:_(<vscale x 16 x s16>) = COPY $v8m4
%1:_(<vscale x 16 x s16>) = COPY $v12m4
%2:_(<vscale x 16 x s16>) = G_ADD %0, %1
@@ -397,23 +289,14 @@ body: |
bb.0.entry:
liveins: $v8m8, $v16m8
- ; RV32I-LABEL: name: vadd_vv_nxv32i16
- ; RV32I: liveins: $v8m8, $v16m8
- ; RV32I-NEXT: {{ $}}
- ; RV32I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 32 x s16>) = COPY $v8m8
- ; RV32I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 32 x s16>) = COPY $v16m8
- ; RV32I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 32 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV32I-NEXT: $v8m8 = COPY [[ADD]](<vscale x 32 x s16>)
- ; RV32I-NEXT: PseudoRET implicit $v8m8
- ;
- ; RV64I-LABEL: name: vadd_vv_nxv32i16
- ; RV64I: liveins: $v8m8, $v16m8
- ; RV64I-NEXT: {{ $}}
- ; RV64I-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 32 x s16>) = COPY $v8m8
- ; RV64I-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 32 x s16>) = COPY $v16m8
- ; RV64I-NEXT: [[ADD:%[0-9]+]]:vrb(<vscale x 32 x s16>) = G_ADD [[COPY]], [[COPY1]]
- ; RV64I-NEXT: $v8m8 = COPY [[ADD]](<vscale x 32 x s16>)
- ; RV64I-NEXT: PseudoRET implicit $v8m8
+ ; CHECK-LABEL: name: vadd_vv_nxv32i16
+ ; CHECK: liveins: $v8m8, $v16m8
+ ; CHECK-NEXT: {{ $}}
+ ; CHECK-NEXT: [[COPY:%[0-9]+]]:vrb(<vscale x 32 x s16>) = COPY $v8m8
+ ; CHECK-NEXT: [[COPY1:%[0-9]+]]:vrb(<vscale x 32 x s16>) = COPY $v16m8
+ ; CH...
[truncated]
Review comment on the changed RUN line:

# RUN:   -o - | FileCheck -check-prefixes=CHECK %s

Suggested change (CHECK is FileCheck's default prefix, so the flag can be dropped entirely):

# RUN:   -o - | FileCheck %s

jacquesguan: Addressed.
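With the reviewer's suggestion applied, each RUN pair would presumably end up in this shape (a sketch of the riscv32 lines; the riscv64 lines follow the same pattern):

```
# RUN: llc -mtriple=riscv32 -mattr=+m,+v -run-pass=regbankselect \
# RUN:   -simplify-mir -verify-machineinstrs %s \
# RUN:   -o - | FileCheck %s
```

Both targets still run llc independently; they simply validate against the same merged CHECK block.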