[TargetLowering] Fold (a | b) ==/!= b -> (a & ~b) ==/!= 0 when and-not exists #145368
Conversation
@llvm/pr-subscribers-llvm-selectiondag @llvm/pr-subscribers-backend-aarch64

Author: AZero13 (AZero13)

Changes: This is especially helpful for AArch64, which simplifies ands + cmp to tst.

Full diff: https://github.com/llvm/llvm-project/pull/145368.diff

3 Files Affected:
diff --git a/llvm/include/llvm/CodeGen/TargetLowering.h b/llvm/include/llvm/CodeGen/TargetLowering.h
index 727526055e592..ff2523b8a2517 100644
--- a/llvm/include/llvm/CodeGen/TargetLowering.h
+++ b/llvm/include/llvm/CodeGen/TargetLowering.h
@@ -5800,6 +5800,8 @@ class LLVM_ABI TargetLowering : public TargetLoweringBase {
private:
SDValue foldSetCCWithAnd(EVT VT, SDValue N0, SDValue N1, ISD::CondCode Cond,
const SDLoc &DL, DAGCombinerInfo &DCI) const;
+ SDValue foldSetCCWithOr(EVT VT, SDValue N0, SDValue N1, ISD::CondCode Cond,
+ const SDLoc &DL, DAGCombinerInfo &DCI) const;
SDValue foldSetCCWithBinOp(EVT VT, SDValue N0, SDValue N1, ISD::CondCode Cond,
const SDLoc &DL, DAGCombinerInfo &DCI) const;
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index 66717135c9adf..72f860521bb04 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -4212,6 +4212,56 @@ SDValue TargetLowering::foldSetCCWithAnd(EVT VT, SDValue N0, SDValue N1,
return SDValue();
}
+/// This helper function of SimplifySetCC tries to optimize the comparison when
+/// either operand of the SetCC node is a bitwise-or instruction.
+SDValue TargetLowering::foldSetCCWithOr(EVT VT, SDValue N0, SDValue N1,
+ ISD::CondCode Cond, const SDLoc &DL,
+ DAGCombinerInfo &DCI) const {
+ if (N1.getOpcode() == ISD::OR && N0.getOpcode() != ISD::OR)
+ std::swap(N0, N1);
+
+ SelectionDAG &DAG = DCI.DAG;
+ EVT OpVT = N0.getValueType();
+ if (N0.getOpcode() != ISD::OR || !OpVT.isInteger() ||
+ (Cond != ISD::SETEQ && Cond != ISD::SETNE))
+ return SDValue();
+
+ // Match these patterns in any of their permutations:
+ // (X | Y) == Y
+ // (X | Y) != Y
+ SDValue X, Y;
+ if (N0.getOperand(0) == N1) {
+ X = N0.getOperand(1);
+ Y = N0.getOperand(0);
+ } else if (N0.getOperand(1) == N1) {
+ X = N0.getOperand(0);
+ Y = N0.getOperand(1);
+ } else {
+ return SDValue();
+ }
+
+ SDValue Zero = DAG.getConstant(0, DL, OpVT);
+ if (N0.hasOneUse() && hasAndNotCompare(Y)) {
+ // If the target supports an 'and-not' or 'and-complement' logic operation,
+ // try to use that to make a comparison operation more efficient.
+ // But don't do this transform if the mask is a single bit because there are
+ // more efficient ways to deal with that case (for example, 'bt' on x86 or
+ // 'rlwinm' on PPC).
+
+ // Bail out if the compare operand that we want to turn into a zero is
+ // already a zero (otherwise, infinite loop).
+ if (isNullConstant(Y))
+ return SDValue();
+
+ // Transform this into: X & ~Y == 0.
+ SDValue NotY = DAG.getNOT(SDLoc(Y), Y, OpVT);
+ SDValue NewAnd = DAG.getNode(ISD::AND, SDLoc(N0), OpVT, X, NotY);
+ return DAG.getSetCC(DL, VT, NewAnd, Zero, Cond);
+ }
+
+ return SDValue();
+}
+
/// There are multiple IR patterns that could be checking whether certain
/// truncation of a signed number would be lossy or not. The pattern which is
/// best at IR level, may not lower optimally. Thus, we want to unfold it.
@@ -5507,6 +5557,9 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
if (SDValue V = foldSetCCWithAnd(VT, N0, N1, Cond, dl, DCI))
return V;
+
+ if (SDValue V = foldSetCCWithOr(VT, N0, N1, Cond, dl, DCI))
+ return V;
}
// Fold remainder of division by a constant.
diff --git a/llvm/test/CodeGen/AArch64/aarch64-bitwisenot-fold.ll b/llvm/test/CodeGen/AArch64/aarch64-bitwisenot-fold.ll
index 5fbf38b2560d4..28099a76fa34b 100644
--- a/llvm/test/CodeGen/AArch64/aarch64-bitwisenot-fold.ll
+++ b/llvm/test/CodeGen/AArch64/aarch64-bitwisenot-fold.ll
@@ -96,3 +96,29 @@ define i64 @andnot_sub_with_neg_i64(i64 %a0, i64 %a1) {
%and = and i64 %diff, %a0
ret i64 %and
}
+
+define i32 @and_not_select_eq(i32 %a, i32 %b, i32 %c) {
+; CHECK-LABEL: and_not_select_eq:
+; CHECK: // %bb.0:
+; CHECK-NEXT: orr w8, w1, w0
+; CHECK-NEXT: cmp w8, w0
+; CHECK-NEXT: csel w0, w0, w2, eq
+; CHECK-NEXT: ret
+ %or = or i32 %b, %a
+ %cmp = icmp eq i32 %or, %a
+ %a.c = select i1 %cmp, i32 %a, i32 %c
+ ret i32 %a.c
+}
+
+define i32 @and_not_select_ne(i32 %a, i32 %b, i32 %c) {
+; CHECK-LABEL: and_not_select_ne:
+; CHECK: // %bb.0:
+; CHECK-NEXT: orr w8, w1, w0
+; CHECK-NEXT: cmp w8, w0
+; CHECK-NEXT: csel w0, w0, w2, ne
+; CHECK-NEXT: ret
+ %or = or i32 %b, %a
+ %cmp = icmp ne i32 %or, %a
+ %a.c = select i1 %cmp, i32 %a, i32 %c
+ ret i32 %a.c
+}
Your tests didn't change in the second commit...
Fixed!
Test failures are unrelated.
Typo in subject line. I think you mean (a | b) == b -> (a & ~b) == 0 or similar.
Thank you! |
@RKSimon gentle ping |
✅ With the latest revision this PR passed the C/C++ code formatter.
No, we cannot use this in TargetLowering: m_Or is not defined there, and neither is sd_match. No function in TargetLowering does this, and it seems it was done this way for a reason. @RKSimon
Note that there is also a comment in this file from a developer lamenting that m_Or would have been useful. That implies they also found m_Or could not be used, so this problem has come up in TargetLowering before.
I see no mention of m_Or in TargetLowering.cpp. Please link to the comment.
Oh it's
Fixed
[TargetLowering] Fold (a | b) ==/!= b -> (a & ~b) ==/!= 0 when and-not exists

This is especially helpful for AArch64, which simplifies ands + cmp to tst.

Alive2: https://alive2.llvm.org/ce/z/LLgcJJ
Please don't force push. You should use fixup commits to make changes and merge commits to pull in main. Force pushes give reviewers no context for what changes you're making.