Conversation

Pierre-vh (Contributor) commented Aug 21, 2025

Loads didn't have the Expand option in AtomicExpandPass. Stores had Expand, but it didn't defer to TLI and instead performed a fixed expansion directly.
Add a CustomExpand option that always maps to the TLI hook, for all instruction kinds. The Expand option now refers to a generic expansion available to all targets.
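As a rough illustration of how a backend would opt into the new hook (a sketch under assumptions, not code from this patch: the MyTargetLowering class and the 64-bit threshold are hypothetical; only shouldExpandAtomicLoadInIR, emitExpandAtomicLoad, and AtomicExpansionKind::CustomExpand come from the change):

#include "llvm/CodeGen/TargetLowering.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Hypothetical backend that routes wide atomic loads through the new TLI hook.
class MyTargetLowering final : public TargetLowering {
public:
  using TargetLowering::TargetLowering;

  AtomicExpansionKind shouldExpandAtomicLoadInIR(LoadInst *LI) const override {
    // Assumption for the example: loads wider than 64 bits take the custom path.
    return LI->getType()->getPrimitiveSizeInBits() > 64
               ? AtomicExpansionKind::CustomExpand
               : AtomicExpansionKind::None;
  }

  void emitExpandAtomicLoad(LoadInst *LI) const override {
    // The target emits its replacement here (library call, intrinsic, LL/SC
    // sequence, ...) and erases LI; AtomicExpandPass just invokes the hook and
    // treats the load as handled.
  }
};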

llvmbot (Member) commented Aug 21, 2025

@llvm/pr-subscribers-backend-loongarch
@llvm/pr-subscribers-backend-amdgpu
@llvm/pr-subscribers-backend-aarch64

@llvm/pr-subscribers-backend-arm

Author: Pierre van Houtryve (Pierre-vh)

Changes

Loads didn't have the Expand option in AtomicExpandPass. Stores had Expand, but it didn't defer to TLI and instead performed a fixed expansion directly.
Move the old behavior to an XChg expansion and make Expand behave like all other instructions.
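A minimal sketch of what that XChg expansion amounts to (an approximation for illustration; the pass's actual implementation, expandAtomicStoreToXChg in the diff below, may differ in details): the oversized atomic store is rewritten as an atomicrmw xchg whose result is ignored, and the swap is then re-processed like any other atomicrmw.

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Approximate shape of the store-to-xchg expansion (illustrative, not the
// verbatim pass code).
static void expandStoreToXChgSketch(StoreInst *SI) {
  IRBuilder<> Builder(SI);
  // An unordered atomic store still needs at least monotonic semantics once it
  // becomes a read-modify-write.
  AtomicOrdering Ordering = SI->getOrdering() == AtomicOrdering::Unordered
                                ? AtomicOrdering::Monotonic
                                : SI->getOrdering();
  Builder.CreateAtomicRMW(AtomicRMWInst::Xchg, SI->getPointerOperand(),
                          SI->getValueOperand(), SI->getAlign(), Ordering);
  SI->eraseFromParent();
}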


Full diff: https://github.com/llvm/llvm-project/pull/154708.diff

6 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/TargetLowering.h (+15-1)
  • (modified) llvm/lib/CodeGen/AtomicExpandPass.cpp (+9-3)
  • (modified) llvm/lib/Target/AArch64/AArch64ISelLowering.cpp (+2-2)
  • (modified) llvm/lib/Target/ARM/ARMISelLowering.cpp (+1-1)
  • (modified) llvm/lib/Target/Hexagon/HexagonISelLowering.cpp (+1-1)
  • (modified) llvm/lib/Target/X86/X86ISelLowering.cpp (+1-1)
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 4480ced637456..8c5f03ce526b4 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -262,7 +262,9 @@ class LLVM_ABI TargetLoweringBase {
     LLOnly,  // Expand the (load) instruction into just a load-linked, which has
              // greater atomic guarantees than a normal load.
     CmpXChg, // Expand the instruction into cmpxchg; used by at least X86.
-    MaskedIntrinsic,   // Use a target-specific intrinsic for the LL/SC loop.
+    MaskedIntrinsic, // Use a target-specific intrinsic for the LL/SC loop.
+    XChg, // Expand a store too large to be atomic into a xchg, then re-process
+          // it.
     BitTestIntrinsic,  // Use a target-specific intrinsic for special bit
                        // operations; used by X86.
     CmpArithIntrinsic, // Use a target-specific intrinsic for special compare
@@ -2273,6 +2275,18 @@ class LLVM_ABI TargetLoweringBase {
         "Generic atomicrmw expansion unimplemented on this target");
   }
 
+  /// Perform a atomic store using a target-specific way.
+  virtual void emitExpandAtomicStore(StoreInst *SI) const {
+    llvm_unreachable(
+        "Generic atomic store expansion unimplemented on this target");
+  }
+
+  /// Perform a atomic load using a target-specific way.
+  virtual void emitExpandAtomicLoad(LoadInst *LI) const {
+    llvm_unreachable(
+        "Generic atomic load expansion unimplemented on this target");
+  }
+
   /// Perform a cmpxchg expansion using a target-specific method.
   virtual void emitExpandAtomicCmpXchg(AtomicCmpXchgInst *CI) const {
     llvm_unreachable("Generic cmpxchg expansion unimplemented on this target");
diff --git a/llvm/lib/CodeGen/AtomicExpandPass.cpp b/llvm/lib/CodeGen/AtomicExpandPass.cpp
index 278dd6560e736..b617b53385b55 100644
--- a/llvm/lib/CodeGen/AtomicExpandPass.cpp
+++ b/llvm/lib/CodeGen/AtomicExpandPass.cpp
@@ -84,7 +84,7 @@ class AtomicExpandImpl {
   bool expandAtomicLoadToCmpXchg(LoadInst *LI);
   StoreInst *convertAtomicStoreToIntegerType(StoreInst *SI);
   bool tryExpandAtomicStore(StoreInst *SI);
-  void expandAtomicStore(StoreInst *SI);
+  void expandAtomicStoreToXChg(StoreInst *SI);
   bool tryExpandAtomicRMW(AtomicRMWInst *AI);
   AtomicRMWInst *convertAtomicXchgToIntegerType(AtomicRMWInst *RMWI);
   Value *
@@ -537,6 +537,9 @@ bool AtomicExpandImpl::tryExpandAtomicLoad(LoadInst *LI) {
   case TargetLoweringBase::AtomicExpansionKind::NotAtomic:
     LI->setAtomic(AtomicOrdering::NotAtomic);
     return true;
+  case TargetLoweringBase::AtomicExpansionKind::Expand:
+    TLI->emitExpandAtomicLoad(LI);
+    return true;
   default:
     llvm_unreachable("Unhandled case in tryExpandAtomicLoad");
   }
@@ -547,7 +550,10 @@ bool AtomicExpandImpl::tryExpandAtomicStore(StoreInst *SI) {
   case TargetLoweringBase::AtomicExpansionKind::None:
     return false;
   case TargetLoweringBase::AtomicExpansionKind::Expand:
-    expandAtomicStore(SI);
+    TLI->emitExpandAtomicStore(SI);
+    return true;
+  case TargetLoweringBase::AtomicExpansionKind::XChg:
+    expandAtomicStoreToXChg(SI);
     return true;
   case TargetLoweringBase::AtomicExpansionKind::NotAtomic:
     SI->setAtomic(AtomicOrdering::NotAtomic);
@@ -620,7 +626,7 @@ StoreInst *AtomicExpandImpl::convertAtomicStoreToIntegerType(StoreInst *SI) {
   return NewSI;
 }
 
-void AtomicExpandImpl::expandAtomicStore(StoreInst *SI) {
+void AtomicExpandImpl::expandAtomicStoreToXChg(StoreInst *SI) {
   // This function is only called on atomic stores that are too large to be
   // atomic if implemented as a native store. So we replace them by an
   // atomic swap, that can be implemented for example as a ldrex/strex on ARM
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index d168cc8d1bd06..b8bd726cd8e6c 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -28422,10 +28422,10 @@ AArch64TargetLowering::shouldExpandAtomicStoreInIR(StoreInst *SI) const {
   if (isOpSuitableForRCPC3(SI))
     return AtomicExpansionKind::None;
   if (isOpSuitableForLSE128(SI))
-    return AtomicExpansionKind::Expand;
+    return AtomicExpansionKind::XChg;
   if (isOpSuitableForLDPSTP(SI))
     return AtomicExpansionKind::None;
-  return AtomicExpansionKind::Expand;
+  return AtomicExpansionKind::XChg;
 }
 
 // Loads and stores less than 128-bits are already atomic; ones above that
diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 830156359e9e8..dd5dba402173b 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -21236,7 +21236,7 @@ ARMTargetLowering::shouldExpandAtomicStoreInIR(StoreInst *SI) const {
     has64BitAtomicStore = Subtarget->hasV6Ops();
 
   unsigned Size = SI->getValueOperand()->getType()->getPrimitiveSizeInBits();
-  return Size == 64 && has64BitAtomicStore ? AtomicExpansionKind::Expand
+  return Size == 64 && has64BitAtomicStore ? AtomicExpansionKind::XChg
                                            : AtomicExpansionKind::None;
 }
 
diff --git a/llvm/lib/Target/Hexagon/HexagonISelLowering.cpp b/llvm/lib/Target/Hexagon/HexagonISelLowering.cpp
index c54b67ccd8843..e44626868454a 100644
--- a/llvm/lib/Target/Hexagon/HexagonISelLowering.cpp
+++ b/llvm/lib/Target/Hexagon/HexagonISelLowering.cpp
@@ -3938,7 +3938,7 @@ TargetLowering::AtomicExpansionKind
 HexagonTargetLowering::shouldExpandAtomicStoreInIR(StoreInst *SI) const {
   // Do not expand loads and stores that don't exceed 64 bits.
   return SI->getValueOperand()->getType()->getPrimitiveSizeInBits() > 64
-             ? AtomicExpansionKind::Expand
+             ? AtomicExpansionKind::XChg
              : AtomicExpansionKind::None;
 }
 
diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp
index 19131fbd4102b..653b032039aa7 100644
--- a/llvm/lib/Target/X86/X86ISelLowering.cpp
+++ b/llvm/lib/Target/X86/X86ISelLowering.cpp
@@ -31723,7 +31723,7 @@ X86TargetLowering::shouldExpandAtomicStoreInIR(StoreInst *SI) const {
       return AtomicExpansionKind::None;
   }
 
-  return needsCmpXchgNb(MemType) ? AtomicExpansionKind::Expand
+  return needsCmpXchgNb(MemType) ? AtomicExpansionKind::XChg
                                  : AtomicExpansionKind::None;
 }
 

llvmbot (Member) commented Aug 21, 2025

@llvm/pr-subscribers-backend-hexagon

llvmbot (Member) commented Aug 21, 2025

@llvm/pr-subscribers-backend-x86

@@ -262,7 +262,9 @@ class LLVM_ABI TargetLoweringBase {
     LLOnly,  // Expand the (load) instruction into just a load-linked, which has
              // greater atomic guarantees than a normal load.
     CmpXChg, // Expand the instruction into cmpxchg; used by at least X86.
-    MaskedIntrinsic,   // Use a target-specific intrinsic for the LL/SC loop.
+    MaskedIntrinsic, // Use a target-specific intrinsic for the LL/SC loop.
+    XChg, // Expand a store too large to be atomic into a xchg, then re-process
+          // it.
Contributor

Instead of introducing this special-case naming, I'd rather rename the existing Expand to Custom and keep Expand as the general case.

Contributor Author

Shouldn't that be done separately though? This just continues the trend with all the other enum cases.
Do you mean that XChg here becomes "Custom", and "Expand" remains the one for the TLI hook?

Contributor

You can pre-rename it, or just directly do it here. The other cases are different because there are multiple "default" expansions available to choose from. This case only has one.

Contributor Author

I kept the behavior of the old "Expand", and added an "ExpandCustom" for expansion that uses TLI hooks.
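In other words (a sketch inferred from this thread rather than copied from the updated diff; helper names follow the pre-rename code and may differ in the merged version), the store dispatch presumably ends up looking roughly like this:

bool AtomicExpandImpl::tryExpandAtomicStore(StoreInst *SI) {
  switch (TLI->shouldExpandAtomicStoreInIR(SI)) {
  case TargetLoweringBase::AtomicExpansionKind::None:
    return false;
  case TargetLoweringBase::AtomicExpansionKind::Expand:
    // Generic expansion: rewrite the store as an atomicrmw xchg (old behavior).
    expandAtomicStore(SI);
    return true;
  case TargetLoweringBase::AtomicExpansionKind::CustomExpand:
    // New: defer to the target via the TLI hook.
    TLI->emitExpandAtomicStore(SI);
    return true;
  case TargetLoweringBase::AtomicExpansionKind::NotAtomic:
    SI->setAtomic(AtomicOrdering::NotAtomic);
    return true;
  default:
    llvm_unreachable("Unhandled case in tryExpandAtomicStore");
  }
}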

@@ -2273,6 +2275,18 @@ class LLVM_ABI TargetLoweringBase {
         "Generic atomicrmw expansion unimplemented on this target");
   }
 
+  /// Perform a atomic store using a target-specific way.
+  virtual void emitExpandAtomicStore(StoreInst *SI) const {
Contributor

These atomic APIs are a huge mess. Do we really need the same API for every single type of atomic instruction? I guess this just adds another; merging them should be a separate change.

Loads didn't have the `Expand` option in `AtomicExpandPass`. Stores had `Expand`, but it didn't defer to TLI and instead did an action directly.
Move the old behavior to an `XChg` expansion and make `Expand` behave like all other instructions.
Pierre-vh force-pushed the users/pierre-vh/expand-atomic-loadstore branch from 2b53423 to 9cdf588 on August 22, 2025 10:45
Pierre-vh requested a review from arsenm on August 25, 2025 07:39
@@ -268,6 +268,7 @@ class LLVM_ABI TargetLoweringBase {
     CmpArithIntrinsic, // Use a target-specific intrinsic for special compare
                        // operations; used by X86.
     Expand,       // Generic expansion in terms of other atomic operations.
+    CustomExpand, // Custom target-specific expansion using TLI hooks.
Contributor

Does it make more sense to call it Custom instead of CustomExpand, in the light of the naming in SelectionDAG?

arsenm (Contributor) left a comment

The naming is a bit of a mess, but maybe we can fix that as part of consolidating all the atomic control hooks.

Pierre-vh merged commit 8b9b0fd into main Aug 28, 2025
11 checks passed
Pierre-vh deleted the users/pierre-vh/expand-atomic-loadstore branch August 28, 2025 07:58