verilator/test_regress/t/t_dfg_peephole.v


Introduce DFG based combinational logic optimizer (#3527)

Added a new data-flow graph (DFG) based combinational logic optimizer. Its capabilities cover a combination of V3Const and V3Gate, but it is also more capable of transforming combinational logic into simplified forms. This entails adding a new internal representation, `DfgGraph`, and appropriate `astToDfg` and `dfgToAst` conversion functions. The graph represents some of the combinational equations (~continuous assignments) in a module, and for the duration of the DFG passes, it takes over the role of AstModule. The bulk of the Dfg vertices represent expressions. These vertex classes, and the corresponding conversions to/from AST, are mostly auto-generated by astgen, together with a DfgVVisitor that can be used for dynamic dispatch based on vertex (operation) types.

The resulting combinational logic graph (a `DfgGraph`) is then optimized in various ways. Currently we perform common sub-expression elimination, variable inlining, and some specific peephole optimizations, but there is scope for more optimizations in the future using the same representation.

The optimizer is run directly before and after inlining. The pre-inline pass can operate on smaller graphs and hence converges faster, but still has a chance of substantially reducing the size of the logic on some designs, making inlining both faster and less memory intensive. The post-inline pass can then optimize across the inlined module boundaries. No optimization is performed across a module boundary.

For debugging purposes, each peephole optimization can be disabled individually via the -fno-dfg-peephole-<OPT> option, where <OPT> is one of the optimizations listed in V3DfgPeephole.h, for example -fno-dfg-peephole-remove-not-not.

The peephole patterns currently implemented were mostly picked based on the design that inspired this work, and on that design the optimizations yield ~30% single-threaded speedup and ~50% speedup on 4 threads. Not having to haul redundant combinational networks through the rest of the compilation pipeline also helps with memory consumption: a reduction of up to 30% in Verilator's peak memory usage was observed on the same design. Gains on other designs are smaller (and can be improved by analyzing those designs); for example, OpenTitan gains between 1% and 15% speedup depending on build type.
2022-09-23 17:46:22 +02:00
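To illustrate the kind of rewrite the peephole pass performs (and which the signals below exercise), here is a minimal sketch of the remove-not-not pattern; the module and signal names are hypothetical and not part of this test:

```systemverilog
module example (
    input  wire [63:0] a,
    output wire [63:0] y
);
  // Before the DFG peephole pass: a redundant double negation.
  assign y = ~~a;
  // After remove-not-not fires, the assignment is equivalent to:
  //   assign y = a;
  // With -fno-dfg-peephole-remove-not-not the original form is kept.
endmodule
```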
// DESCRIPTION: Verilator: Verilog Test module
//
// This file ONLY is placed under the Creative Commons Public Domain.
// SPDX-FileCopyrightText: 2022 Geza Lore
// SPDX-License-Identifier: CC0-1.0
`define signal(name, expr) wire [$bits(expr)-1:0] ``name = expr
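Each pattern below is probed through this macro, which sizes the probe wire to match its expression via `$bits`. An expansion sketch (the operand here is illustrative, not a new test signal):

```systemverilog
// `signal(FOLD_UNARY_Not, ~const_a) expands to:
//   wire [$bits(~const_a)-1:0] FOLD_UNARY_Not = ~const_a;
// i.e. a 64-bit wire named FOLD_UNARY_Not driven by ~const_a,
// so each optimization gets a distinctly named observable output.
```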
module t (
`include "portlist.vh" // Boilerplate generated by t_dfg_peephole.py
rand_a, rand_b, srand_a, srand_b, arand_a, arand_b
);
`include "portdecl.vh" // Boilerplate generated by t_dfg_peephole.py
input rand_a;
input rand_b;
input srand_a;
input srand_b;
input arand_a;
input arand_b;
wire logic [63:0] rand_a;
wire logic [63:0] rand_b;
wire logic signed [63:0] srand_a;
wire logic signed [63:0] srand_b;
// verilator lint_off ASCRANGE
wire logic [0:63] arand_a;
wire logic [0:63] arand_b;
// verilator lint_on ASCRANGE
wire logic randbit_a = rand_a[0];
wire logic [127:0] rand_ba = {rand_b, rand_a};
wire logic [127:0] rand_aa = {2{rand_a}};
wire logic [63:0] const_a;
wire logic [63:0] const_b;
wire logic [63:0] zero;
wire logic [63:0] ones;
wire logic signed [63:0] sconst_a;
wire logic signed [63:0] sconst_b;
logic [63:0] array [3:0];
logic [63:0] unitArrayWhole [0:0];
logic [63:0] unitArrayParts [0:0];
assign array[0] = (rand_a << 32) | (rand_a >> 32);
assign array[1] = (rand_a << 16) | (rand_a >> 48);
assign array[2][3:0] = rand_a[3:0];
always @(rand_b) begin // Intentional non-combinational partial driver
array[2][7:4] = rand_a[7:4];
end
assign unitArrayWhole[0] = rand_a;
assign unitArrayParts[0][1] = rand_a[1];
assign unitArrayParts[0][9] = rand_a[9];
`signal(FOLD_UNARY_LogNot, !const_a[0]);
`signal(FOLD_UNARY_Negate, -const_a);
`signal(FOLD_UNARY_Not, ~const_a);
`signal(FOLD_UNARY_RedAnd, &const_a);
`signal(FOLD_UNARY_RedOr, |const_a);
`signal(FOLD_UNARY_RedXor, ^const_a);
// verilator lint_off WIDTH
wire logic [79:0] tmp_FOLD_UNARY_Extend = const_a;
wire logic signed [79:0] tmp_FOLD_UNARY_ExtendS = sconst_a;
// verilator lint_on WIDTH
`signal(FOLD_UNARY_Extend, tmp_FOLD_UNARY_Extend);
`signal(FOLD_UNARY_ExtendS, tmp_FOLD_UNARY_ExtendS);
`signal(FOLD_BINARY_Add, const_a + const_b);
`signal(FOLD_BINARY_And, const_a & const_b);
`signal(FOLD_BINARY_Concat, {const_a, const_b});
`signal(FOLD_BINARY_Div, const_a / 64'd3);
`signal(FOLD_BINARY_DivS, sconst_a / 64'sd3);
`signal(REMOVE_DIV_ONE, rand_a / 64'd1);
`signal(REMOVE_DIVS_ONE, srand_a / 64'sd1);
`signal(FOLD_BINARY_Eq, const_a == const_b);
`signal(FOLD_BINARY_Gt, const_a > const_b);
`signal(FOLD_BINARY_GtS, sconst_a > sconst_b);
`signal(FOLD_BINARY_Gte, const_a >= const_b);
`signal(FOLD_BINARY_GteS, sconst_a >= sconst_b);
`signal(FOLD_BINARY_LogAnd, const_a[0] && const_b[0]);
`signal(FOLD_BINARY_LogEq, const_a[0] <-> const_b[0]);
`signal(FOLD_BINARY_LogIf, const_a[0] -> const_b[0]);
`signal(FOLD_BINARY_LogOr, const_a[0] || const_b[0]);
`signal(FOLD_BINARY_Lt, const_a < const_b);
`signal(FOLD_BINARY_Lt2, const_a < const_a);
`signal(FOLD_BINARY_LtS, sconst_a < sconst_b);
`signal(FOLD_BINARY_LtS2, sconst_a < sconst_a);
`signal(FOLD_BINARY_Lte, const_a <= const_b);
`signal(FOLD_BINARY_Lte2, const_a <= const_a);
`signal(FOLD_BINARY_LteS, sconst_a <= sconst_b);
`signal(FOLD_BINARY_LteS2, sconst_a <= sconst_a);
`signal(FOLD_BINARY_ModDiv, const_a % 64'd3);
`signal(FOLD_BINARY_ModDivS, sconst_a % 64'sd3);
`signal(FOLD_BINARY_Mul, const_a * 64'd3);
`signal(FOLD_BINARY_MulS, sconst_a * 64'sd3);
`signal(REMOVE_MUL_ZERO, rand_a * 64'd0);
`signal(REMOVE_MUL_ONE, rand_a * 64'd1);
`signal(REMOVE_MULS_ZERO, srand_a * 64'sd0);
`signal(REMOVE_MULS_ONE, srand_a * 64'sd1);
`signal(FOLD_BINARY_Neq, const_a != const_b);
`signal(FOLD_BINARY_Or, const_a | const_b);
`signal(FOLD_BINARY_Pow, const_a ** 64'd2);
`signal(FOLD_BINARY_PowSS, sconst_a ** 64'sd2);
`signal(FOLD_BINARY_PowSU, sconst_a ** 64'd2);
`signal(FOLD_BINARY_PowUS, const_a ** 64'sd2);
`signal(FOLD_BINARY_Replicate, {2{const_a}});
`signal(FOLD_BINARY_ShiftL, const_a << 2);
`signal(FOLD_BINARY_ShiftR, const_a >> 2);
`signal(FOLD_BINARY_ShiftRS, sconst_a >>> 2);
`signal(FOLD_BINARY_Sub, const_a - const_b);
`signal(FOLD_BINARY_Xor, const_a ^ const_b);
`signal(FOLD_ASSOC_BINARY_LHS_OF_RHS_And, (const_a & (const_b & rand_a)));
`signal(FOLD_ASSOC_BINARY_LHS_OF_RHS_Or, (const_a | (const_b | rand_a)));
`signal(FOLD_ASSOC_BINARY_LHS_OF_RHS_Xor, (const_a ^ (const_b ^ rand_a)));
`signal(FOLD_ASSOC_BINARY_LHS_OF_RHS_Add, (const_a + (const_b + rand_a)));
`signal(FOLD_ASSOC_BINARY_LHS_OF_RHS_Mul, (const_a * (const_b * rand_a)));
`signal(FOLD_ASSOC_BINARY_LHS_OF_RHS_MulS, (sconst_a * (sconst_b * srand_a)));
`signal(FOLD_ASSOC_BINARY_LHS_OF_RHS_Concat, {const_a, {const_b, rand_a}});
`signal(FOLD_ASSOC_BINARY_RHS_OF_LHS_And, ((rand_a & const_b) & const_a));
`signal(FOLD_ASSOC_BINARY_RHS_OF_LHS_Or, ((rand_a | const_b) | const_a));
`signal(FOLD_ASSOC_BINARY_RHS_OF_LHS_Xor, ((rand_a ^ const_b) ^ const_a));
`signal(FOLD_ASSOC_BINARY_RHS_OF_LHS_Add, ((rand_a + const_b) + const_a));
`signal(FOLD_ASSOC_BINARY_RHS_OF_LHS_Mul, ((rand_a * const_b) * const_a));
`signal(FOLD_ASSOC_BINARY_RHS_OF_LHS_MulS, ((srand_a * sconst_b) * sconst_a));
`signal(FOLD_ASSOC_BINARY_RHS_OF_LHS_Concat, {{rand_a, const_b}, const_a});
`signal(FOLD_SEL, const_a[3:1]);
`signal(SWAP_CONST_IN_COMMUTATIVE_BINARY, rand_a + const_a);
`signal(SWAP_NOT_IN_COMMUTATIVE_BINARY, rand_a + ~rand_a);
`signal(SWAP_VAR_IN_COMMUTATIVE_BINARY, rand_b + rand_a);
`signal(PUSH_BITWISE_OP_THROUGH_CONCAT, 32'h12345678 ^ {8'h0, rand_a[23:0]});
`signal(PUSH_BITWISE_OP_THROUGH_CONCAT_2, 32'h12345678 ^ {rand_b[7:0], rand_a[23:0]});
`signal(PUSH_COMPARE_OP_THROUGH_CONCAT_EQ, 4'b1011 == {2'b10, rand_a[1:0]});
`signal(PUSH_COMPARE_OP_THROUGH_CONCAT_NE, 4'b1011 != {2'b10, rand_a[1:0]});
`signal(PUSH_REDUCTION_THROUGH_COND_WITH_CONST_BRANCH, |(rand_a[32] ? rand_a[3:0] : 4'h0));
`signal(REPLACE_REDUCTION_OF_CONST_AND, &const_a);
`signal(REPLACE_REDUCTION_OF_CONST_OR, |const_a);
`signal(REPLACE_REDUCTION_OF_CONST_XOR, ^const_a);
`signal(REPLACE_EXTEND, 4'(rand_a[0]));
`signal(PUSH_NOT_THROUGH_COND, ~(rand_a[0] ? rand_a[4:0] : 5'hb));
`signal(REMOVE_NOT_NOT, ~~rand_a);
`signal(REPLACE_NOT_NEQ, ~(rand_a != rand_b));
`signal(REPLACE_NOT_EQ, ~(srand_a == srand_b));
// GT/GTE variants use shifted operands to avoid CSE interference. The peephole's
// REPLACE_NOT_LT fires first and emits a new DfgGte(rand_a, rand_b); the intra-pass
// CSE then merges it with the existing DfgGte(rand_a, rand_b) that is the source of
// REPLACE_NOT_GTE, giving that vertex two sinks and causing its !hasMultipleSinks()
// guard to fail. Unique shift amounts break the CSE match, so all eight patterns fire.
`signal(REPLACE_NOT_GT, ~((rand_a >> 18) > (rand_b >> 18)));
`signal(REPLACE_NOT_GTE, ~((rand_a >> 19) >= (rand_b >> 19)));
`signal(REPLACE_NOT_GTES, ~((srand_a >>> 20) >= (srand_b >>> 20)));
`signal(REPLACE_NOT_GTS, ~((srand_a >>> 21) > (srand_b >>> 21)));
`signal(REPLACE_NOT_LT, ~(rand_a < rand_b));
`signal(REPLACE_NOT_LTE, ~(rand_a <= rand_b));
`signal(REPLACE_NOT_LTES, ~(srand_a <= srand_b));
`signal(REPLACE_NOT_LTS, ~(srand_a < srand_b));
`signal(REPLACE_NOT_OF_CONST, ~4'd0);
`signal(REPLACE_DISTRIBUTIVE_AND_OR_ABAC, ((rand_a >> 10) | (rand_b >> 10)) & ((rand_a >> 10) | (srand_b >> 10)));
`signal(REPLACE_DISTRIBUTIVE_AND_OR_ABCA, ((rand_a >> 11) | (rand_b >> 11)) & ((srand_b >> 11) | (rand_a >> 11)));
`signal(REPLACE_DISTRIBUTIVE_AND_OR_BAAC, ((rand_b >> 12) | (rand_a >> 12)) & ((rand_a >> 12) | (srand_b >> 12)));
`signal(REPLACE_DISTRIBUTIVE_AND_OR_BACA, ((rand_b >> 13) | (rand_a >> 13)) & ((srand_b >> 13) | (rand_a >> 13)));
`signal(REPLACE_AND_OF_NOT_AND_NOT, ~rand_a[1] & ~rand_b[1]);
`signal(REPLACE_AND_OF_NOT_AND_NEQ, ~rand_a[2] & (rand_b != 64'd2));
`signal(REPLACE_AND_OF_CONST_AND_CONST, const_a & const_b);
`signal(REPLACE_AND_WITH_ZERO, 64'd0 & rand_a);
`signal(REMOVE_AND_WITH_ONES, -64'd1 & rand_a);
`signal(REMOVE_AND_WITH_SELF, ~rand_a & ~rand_a);
`signal(REPLACE_CONTRADICTORY_AND, rand_a & ~rand_a);
`signal(REPLACE_CONTRADICTORY_AND_3, ~(rand_a + 1) & ((rand_a + 1) & rand_b));
`signal(REPLACE_OR_DISTRIBUTIVE, (rand_a & rand_b) | (rand_a & srand_b));
`signal(REPLACE_DISTRIBUTIVE_OR_AND_ABAC, ((rand_a >> 14) & (rand_b >> 14)) | ((rand_a >> 14) & (srand_b >> 14)));
`signal(REPLACE_DISTRIBUTIVE_OR_AND_ABCA, ((rand_a >> 15) & (rand_b >> 15)) | ((srand_b >> 15) & (rand_a >> 15)));
`signal(REPLACE_DISTRIBUTIVE_OR_AND_BAAC, ((rand_b >> 16) & (rand_a >> 16)) | ((rand_a >> 16) & (srand_b >> 16)));
`signal(REPLACE_DISTRIBUTIVE_OR_AND_BACA, ((rand_b >> 17) & (rand_a >> 17)) | ((srand_b >> 17) & (rand_a >> 17)));
`signal(REPLACE_OR_OF_NOT_AND_NOT, ~rand_a[3] | ~rand_b[3]);
`signal(REPLACE_OR_OF_NOT_AND_NEQ, ~rand_a[4] | (rand_b != 64'd3));
`signal(REPLACE_OR_OF_CONCAT_ZERO_LHS_AND_CONCAT_RHS_ZERO, {2'd0, rand_a[1:0]} | {rand_b[1:0], 2'd0});
`signal(REPLACE_OR_OF_CONCAT_LHS_ZERO_AND_CONCAT_ZERO_RHS, {rand_a[1:0], 2'd0} | {2'd0, rand_b[1:0]});
`signal(REPLACE_OR_OF_CONST_AND_CONST, const_a | const_b);
`signal(REPLACE_OR_WITH_ONES, -64'd1 | rand_a);
`signal(REMOVE_OR_WITH_SELF, ~rand_a | ~rand_a);
`signal(REMOVE_OR_WITH_ZERO, 64'd0 | rand_a);
`signal(REPLACE_TAUTOLOGICAL_OR, rand_a | ~rand_a);
`signal(REPLACE_TAUTOLOGICAL_OR_3, ~(rand_a + 1) | ((rand_a + 1) | rand_b));
`signal(FOLD_SELF_SUB, rand_a - rand_a);
`signal(REMOVE_SUB_ZERO, rand_a - 64'd0);
`signal(REPLACE_SUB_WITH_NOT, rand_a[0] - 1'b1);
`signal(REMOVE_REDUNDANT_ZEXT_ON_RHS_OF_SHIFT, rand_a << {2'b0, rand_a[2:0]});
`signal(REPLACE_EQ_OF_CONST_AND_CONST, 4'd0 == 4'd1);
`signal(REMOVE_FULL_WIDTH_SEL, rand_a[63:0]);
`signal(REMOVE_SEL_FROM_RHS_OF_CONCAT, rand_ba[63:0]);
`signal(REMOVE_SEL_FROM_LHS_OF_CONCAT, rand_ba[127:64]);
`signal(PUSH_SEL_THROUGH_CONCAT, rand_ba[120:0]);
`signal(PUSH_SEL_THROUGH_REPLICATE, rand_aa[0]);
`signal(REPLACE_SEL_FROM_CONST, const_a[2]);
`signal(REPLACE_CONCAT_OF_CONSTS, {const_a, const_b});
`signal(REPLACE_CONCAT_ZERO_AND_SEL_TOP_WITH_SHIFTR, {62'd0, rand_a[63:62]});
`signal(REPLACE_CONCAT_SEL_BOTTOM_AND_ZERO_WITH_SHIFTL, {rand_a[1:0], 62'd0});
`signal(PUSH_CONCAT_THROUGH_NOTS, {~(rand_a+64'd101), ~(rand_b+64'd101)} );
`signal(REMOVE_CONCAT_OF_ADJOINING_SELS, {rand_a[10:3], rand_a[2:1]});
`signal(REPLACE_NESTED_CONCAT_OF_ADJOINING_SELS_ON_LHS_CAT, {rand_a[2:1], rand_b});
`signal(REPLACE_NESTED_CONCAT_OF_ADJOINING_SELS_ON_RHS_CAT, {rand_b, rand_a[10:3]});
`signal(REPLACE_NESTED_CONCAT_OF_ADJOINING_SELS_ON_LHS, {rand_a[10:4], {rand_a[3:1], rand_b}});
`signal(REPLACE_NESTED_CONCAT_OF_ADJOINING_SELS_ON_RHS, {{rand_a, rand_b[11:6]}, rand_b[5:1]});
`signal(REPLACE_NESTED_CONCAT_OF_ADJOINING_SELS_ON_RHS_COMMON, {rand_a, rand_b[11:6]});
`signal(REMOVE_COND_WITH_FALSE_CONDITION, 1'd0 ? rand_a : rand_b);
`signal(REMOVE_COND_WITH_TRUE_CONDITION, 1'd1 ? rand_a : rand_b);
`signal(REMOVE_COND_WITH_BRANCHES_SAME, rand_a[0] ? ~rand_b : ~rand_b);
`signal(SWAP_COND_WITH_NOT_CONDITION, (~rand_a[0] & 1'd1) ? rand_a : rand_b);
`signal(SWAP_COND_WITH_NEQ_CONDITION, rand_b != rand_a ? rand_a : rand_b);
`signal(PULL_NOTS_THROUGH_COND, rand_a[0] ? ~rand_a[4:0] : ~rand_b[4:0]);
`signal(REPLACE_COND_OR_THEN_COND_LHS, (rand_a[0] | rand_b[0] ? (rand_a[0] ? rand_a : rand_b) : srand_a));
`signal(REPLACE_COND_OR_THEN_COND_RHS, (rand_a[0] | rand_b[0] ? (rand_b[0] ? rand_a : rand_b) : srand_a));
`signal(REPLACE_COND_WITH_THEN_BRANCH_COND, rand_a[0] ? rand_a[0] : rand_a[1]);
`signal(REPLACE_COND_WITH_THEN_BRANCH_ZERO, rand_a[0] ? 1'd0 : rand_a[1]);
`signal(REPLACE_COND_WITH_THEN_BRANCH_ONES, rand_a[0] ? 1'd1 : rand_a[1]);
`signal(REPLACE_COND_WITH_ELSE_BRANCH_COND, rand_a[0] ? rand_a[1] : rand_a[0]);
`signal(REPLACE_COND_WITH_ELSE_BRANCH_ZERO, rand_a[0] ? rand_a[1] : 1'd0);
`signal(REPLACE_COND_WITH_ELSE_BRANCH_ONES, rand_a[0] ? rand_a[1] : 1'd1);
`signal(INLINE_ARRAYSEL_SPLICE, array[0]);
`signal(NO_INLINE_ARRAYSEL_SPLICE_PARTIAL, array[2]);
`signal(INLINE_ARRAYSEL_UNIT, unitArrayWhole[0]);
`signal(NO_INLINE_ARRAYSEL_UNIT_PARTIAL, unitArrayParts[0]);
`signal(PUSH_BITWISE_THROUGH_REDUCTION_AND, (&(rand_a + 64'd105)) & (&(rand_b + 64'd108)));
`signal(PUSH_BITWISE_THROUGH_REDUCTION_OR, (|(rand_a + 64'd106)) | (|(rand_b + 64'd109)));
`signal(PUSH_BITWISE_THROUGH_REDUCTION_XOR, (^(rand_a + 64'd107)) ^ (^(rand_b + 64'd110)));
`signal(PUSH_REDUCTION_THROUGH_CONCAT_AND, &{1'd1, rand_b});
`signal(PUSH_REDUCTION_THROUGH_CONCAT_OR, |{1'd1, rand_b});
`signal(PUSH_REDUCTION_THROUGH_CONCAT_XOR, ^{1'd1, rand_b});
`signal(REMOVE_WIDTH_ONE_REDUCTION_AND, &rand_a[0]);
`signal(REMOVE_WIDTH_ONE_REDUCTION_OR, |rand_a[0]);
`signal(REMOVE_WIDTH_ONE_REDUCTION_XOR, ^rand_a[0]);
`signal(REMOVE_XOR_WITH_ZERO, 64'd0 ^ rand_a);
`signal(REPLACE_XOR_WITH_SELF, ~rand_a ^ ~rand_a);
`signal(REPLACE_XOR_WITH_ONES, -64'd1 ^ rand_a);
`signal(REPLACE_COND_DEC, randbit_a ? rand_b - 64'b1 : rand_b);
`signal(REPLACE_COND_INC, randbit_a ? rand_b + 64'b1 : rand_b);
`signal(NO_REPLACE_COND_DEC, randbit_a ? rand_b - 64'hf000000000000000 : rand_b);
`signal(NO_REPLACE_COND_INC, randbit_a ? rand_b + 64'hf000000000000000 : rand_b);
`signal(REPLACE_LOGAND_WITH_AND, rand_a[0] && rand_a[1]);
`signal(REPLACE_LOGOR_WITH_OR, rand_a[0] || rand_a[1]);
`signal(RIGHT_LEANING_ASSOC, (((rand_a + rand_b) + rand_a) + rand_b));
`signal(RIGHT_LEANING_CONCAT, {{{rand_a, rand_b}, rand_a}, rand_b});
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_ADD_COMMON, rand_a[23:4] + ~rand_b[23:4]);
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_ADD, rand_a[23:4] + (~rand_b[23:4] + rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_MUL_COMMON, rand_a[23:4] * ~rand_b[23:4]);
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_MUL, rand_a[23:4] * (~rand_b[23:4] * rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_MULS_COMMON, srand_a[23:4] * ~srand_b[23:4]);
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_MULS, srand_a[23:4] * (~srand_b[23:4] * srand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_AND_COMMON, rand_a[23:4] & ~rand_b[23:4]);
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_AND, rand_a[23:4] & (~rand_b[23:4] & rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_OR_COMMON, rand_a[23:4] | ~rand_b[23:4]);
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_OR, rand_a[23:4] | (~rand_b[23:4] | rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_XOR_COMMON, rand_a[23:4] ^ ~rand_b[23:4]);
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_XOR, rand_a[23:4] ^ (~rand_b[23:4] ^ rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_CAT_COMMON, {rand_a[23:4], ~rand_b[23:4]});
`signal(REUSE_ASSOC_LHS_WITH_LHS_OF_RHS_CAT, {rand_a[23:4], {~rand_b[23:4], rand_a[39:20]}});
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_ADD_COMMON, rand_a[23:4] + rand_a[39:20]);
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_ADD, rand_a[23:4] + (~rand_b[24:5] + rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_MUL_COMMON, rand_a[23:4] * rand_a[39:20]);
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_MUL, rand_a[23:4] * (~rand_b[24:5] * rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_MULS_COMMON, srand_a[23:4] * srand_a[39:20]);
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_MULS, srand_a[23:4] * (~srand_b[24:5] * srand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_AND_COMMON, rand_a[23:4] & rand_a[39:20]);
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_AND, rand_a[23:4] & (~rand_b[24:5] & rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_OR_COMMON, rand_a[23:4] | rand_a[39:20]);
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_OR, rand_a[23:4] | (~rand_b[24:5] | rand_a[39:20]));
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_XOR_COMMON, rand_a[23:4] ^ rand_a[39:20]);
`signal(REUSE_ASSOC_LHS_WITH_RHS_OF_RHS_XOR, rand_a[23:4] ^ (~rand_b[24:5] ^ rand_a[39:20]));
`signal(REPLACE_COND_CONST_ONE_ZERO, rand_a[0] ? 80'b1 : 80'b0);
`signal(REPLACE_COND_CONST_ZERO_ONE, rand_a[0] ? 80'b0 : 80'b1);
`signal(REPLACE_COND_CONST_ONES_ZERO, rand_a[0] ? -80'b1 : 80'b0);
`signal(REPLACE_COND_CONST_ZERO_ONES, rand_a[0] ? 80'b0 : -80'b1);
`signal(REPLACE_COND_CAT_LHS_CONST_ONE_ZERO, rand_a[0] ? {8'b1, rand_b[0]} : {8'b0, rand_b[1]});
`signal(REPLACE_COND_CAT_LHS_CONST_ZERO_ONE, rand_a[0] ? {8'b0, rand_b[0]} : {8'b1, rand_b[1]});
`signal(REPLACE_COND_SAME_CAT_LHS, rand_a[0] ? {8'd0, rand_b[0]} : {8'd0, rand_b[1]});
`signal(REPLACE_COND_SAME_CAT_RHS, rand_a[0] ? {rand_b[0], 8'd0} : {rand_b[1], 8'd0});
`signal(REPLACE_COND_SAME_COND_THEN, rand_a[0] ? (rand_a[0] ? rand_b[1:0] : rand_b[3:2]) : rand_b[5:4]);
`signal(REPLACE_COND_SAME_COND_ELSE, rand_a[0] ? rand_b[1:0] : (rand_a[0] ? rand_b[3:2] : rand_b[5:4]));
`signal(REMOVE_SHIFTL_ZERO, rand_a << 0);
`signal(REPLACE_SHIFTL_OVER, rand_a << 64);
`signal(REPLACE_SHIFTL_SEL, rand_a[27:0] << 4);
`signal(REMOVE_SHIFTR_ZERO, rand_a >> 0);
`signal(REPLACE_SHIFTR_OVER, rand_a >> 64);
`signal(REPLACE_SHIFTR_SEL, rand_a[28:1] >> 4);
`signal(PUSH_BITWISE_OP_THROUGH_COND_AND, 4'd2 & (rand_a[0] ? 4'd7 : 4'd4));
`signal(PUSH_BITWISE_OP_THROUGH_COND_OR, 4'd2 | (rand_a[0] ? 4'd7 : 4'd4));
`signal(PUSH_BITWISE_OP_THROUGH_COND_XOR, 4'd2 ^ (rand_a[0] ? 4'd7 : 4'd4));
`signal(SIMPLIFY_COND_THEN, rand_a[0] ? {1'b0, ~rand_a[0]} : 2'b0);
`signal(SIMPLIFY_COND_ELSE, rand_a[0] ? 2'b0 : {1'b0, ~rand_a[0]});
`signal(PUSH_CONCAT_THROUGH_COND_LHS_A, {5'd0, rand_a[0] ? {rand_b[4], 1'b0} : 2'b0});
`signal(PUSH_CONCAT_THROUGH_COND_LHS_B, {5'd0, rand_a[0] ? 2'b0 : {rand_b[4], 1'b0}});
`signal(PUSH_CONCAT_THROUGH_COND_RHS_A, {rand_a[0] ? {rand_b[5], 1'b0} : 2'b0, 5'd0});
`signal(PUSH_CONCAT_THROUGH_COND_RHS_B, {rand_a[0] ? 2'b0 : {rand_b[5], 1'b0}, 5'd0});
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_CONCAT_AND, &({rand_a[0], rand_a[2]} & rand_b[1:0]));
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_CONCAT_OR, |({rand_a[0], rand_a[2]} | rand_b[1:0]));
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_CONCAT_XOR, ^({rand_a[0], rand_a[2]} ^ rand_b[1:0]));
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_SELS_AND_A, &(rand_a[55+:2] & rand_a[53+:2]));
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_SELS_OR_A, |(rand_a[55+:2] | rand_a[53+:2]));
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_SELS_XOR_A, ^(rand_a[55+:2] ^ rand_a[53+:2]));
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_SELS_AND_B, &(rand_a[57+:2] & rand_a[59+:2]));
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_SELS_OR_B, |(rand_a[57+:2] | rand_a[59+:2]));
`signal(PUSH_REDUCTION_THROUGH_BITWISE_OF_SELS_XOR_B, ^(rand_a[57+:2] ^ rand_a[59+:2]));
`signal(PUSH_SHIFTL_THROUGH_COND, (rand_a[0] ? rand_a >> 1 : rand_b >> 1) << 1);
`signal(PUSH_SHIFTR_THROUGH_COND, (rand_a[0] ? rand_a << 1 : rand_b << 1) >> 1);
`signal(REPLACE_BITWISE_OF_REDUCTION_OF_SELS_WITH_REDUCTION_AND_A, (&rand_a[10 +:2]) & (&rand_a[12 +: 2]));
`signal(REPLACE_BITWISE_OF_REDUCTION_OF_SELS_WITH_REDUCTION_OR_A, (|rand_a[10 +:2]) | (|rand_a[12 +: 2]));
`signal(REPLACE_BITWISE_OF_REDUCTION_OF_SELS_WITH_REDUCTION_XOR_A, (^rand_a[10 +:2]) ^ (^rand_a[12 +: 2]));
`signal(REPLACE_BITWISE_OF_REDUCTION_OF_SELS_WITH_REDUCTION_AND_B, (&rand_a[12 +:2]) & (&rand_a[10 +: 2]));
`signal(REPLACE_BITWISE_OF_REDUCTION_OF_SELS_WITH_REDUCTION_OR_B, (|rand_a[12 +:2]) | (|rand_a[10 +: 2]));
`signal(REPLACE_BITWISE_OF_REDUCTION_OF_SELS_WITH_REDUCTION_XOR_B, (^rand_a[12 +:2]) ^ (^rand_a[10 +: 2]));
`signal(REPLACE_SHIFTL_SHIFTL, rand_a << 2 << 3);
`signal(REPLACE_SHIFTR_SHIFTR, rand_a >> 2 >> 3);
`signal(PUSH_COMMUTATIVE_BINARY_THROUGH_COND, 58'h1 + (rand_a[0] ? rand_b[1 +: 58] : ~rand_b[1 +: 58]));
`signal(REMOVE_ADD_ZERO, rand_a + '0);
`signal(REPLACE_ADD_WITH_COUNT_ONES_A, 4'(rand_a[63]) + 4'(rand_a[62]) + 4'(rand_a[61]));
`signal(REPLACE_ADD_WITH_COUNT_ONES_B, 32'(rand_a[63]) + 32'(rand_a[62]) + 32'(rand_a[61]));
`signal(REPLACE_ADD_WITH_COUNT_ONES_C, 200'(rand_a[63]) + 200'(rand_a[62]) + 200'(rand_a[61]));
`signal(REPLACE_ADD_WITH_COUNT_ONES_D, 1'(rand_a[63]) + 1'(rand_a[62]) + 1'(rand_a[61]));
`signal(FOLD_SELF_EQ, rand_a == rand_a);
`signal(FOLD_SELF_NEQ, rand_a != rand_a);
`signal(FOLD_SELF_GT, rand_a > rand_a);
`signal(FOLD_SELF_GTS, srand_a > srand_a);
`signal(FOLD_SELF_GE, rand_a >= rand_a);
`signal(FOLD_SELF_GES, srand_a >= srand_a);
`signal(FOLD_SELF_LT, rand_a < rand_a);
`signal(FOLD_SELF_LTS, srand_a < srand_a);
`signal(FOLD_SELF_LE, rand_a <= rand_a);
`signal(FOLD_SELF_LES, srand_a <= srand_a);
`signal(FOLD_MUX_FROM_ONES, ones[rand_a[5:0]]);
`signal(FOLD_MUX_FROM_ZERO, zero[rand_a[5:0]]);
`signal(REPLACE_MUX_WITH_SEL, rand_a[const_a[5:0]]);
`signal(REPLACE_COND_THEN_OR_LHS, rand_a[0] ? rand_a[1] | rand_b[1] : rand_a[1]);
`signal(REPLACE_COND_THEN_OR_RHS, rand_a[0] ? rand_a[1] | rand_b[1] : rand_b[1]);
`signal(REMOVE_ACI_BINARY_LHS, ~rand_a & (~rand_b & ~rand_a));
`signal(REMOVE_ACI_BINARY_RHS, (~rand_b & ~rand_a) & ~rand_a);
`signal(REPLACE_CONCAT_SAME, {rand_a, rand_a});
`signal(REPLACE_CONCAT_REP_ON_LHS, {{2{rand_a}}, rand_a});
`signal(REPLACE_CONCAT_REP_ON_RHS, {rand_a, {2{rand_a}}});
`signal(REPLACE_AND_REP_COND_ELSE_ZERO, {4{rand_a[0]}} & (rand_a[1] ? rand_b[3:0] : 4'd0));
`signal(REPLACE_EQ_BIT_0, 1'b0 == rand_a[0]);
`signal(REMOVE_EQ_BIT_1, 1'b1 == rand_a[0]);
`signal(REMOVE_NEQ_BIT_0, 1'b0 != rand_a[0]);
`signal(REPLACE_NEQ_BIT_1, 1'b1 != rand_a[0]);
`signal(REPLACE_COND_INSERT, rand_a[0] ? {rand_b[63:40], {1'd0, rand_b[38:0]}} : rand_b);
`signal(REPLACE_REP_REP, {2{({3{rand_a[0]}})}});
// Operators that should work with mismatched widths
`signal(MISMATCHED_ShiftL, const_a << 4'd2);
`signal(MISMATCHED_ShiftR, const_a >> 4'd2);
`signal(MISMATCHED_ShiftRS, sconst_a >>> 4'd2);
`signal(MISMATCHED_PowUU, rand_a ** 4'd5);
`signal(MISMATCHED_PowSS, srand_a ** 4'sd5);
`signal(MISMATCHED_PowSU, srand_b ** 4'd5);
`signal(MISMATCHED_PowUS, rand_b ** 4'sd5);
// Some selects need extra temporaries
wire [63:0] sel_from_cond = rand_a[0] ? rand_a : const_a;
wire [63:0] sel_from_shiftl = rand_a << 10;
wire [31:0] sel_from_sel = rand_a[10+:32];
`signal(PUSH_SEL_THROUGH_COND, sel_from_cond[2]);
`signal(PUSH_SEL_THROUGH_SHIFTL, sel_from_shiftl[20:0]);
`signal(REPLACE_SEL_FROM_SEL, sel_from_sel[4:3]);
logic [2:0] sel_from_partial_tmp;
always_comb begin
sel_from_partial_tmp[1:0] = 2'd0;
if (rand_a[0]) begin
sel_from_partial_tmp[0] = rand_b[0];
end
end
`signal(PUSH_SEL_THROUGH_SPLICE, sel_from_partial_tmp[1:0]);
`signal(REPLACE_SHIFTL_CAT, {31'd0, rand_a[42 +: 7]} << 31);
`signal(REPLACE_SHIFTRL_CAT, {rand_a[13 +: 7], rand_b[8 +: 27]} >> 27 << 27);
// Ascending ranges
`signal(ASCENDING_SEL, arand_a[0:4]);
// verilator lint_off ASCRANGE
wire [0:7] ascending_assign;
// verilator lint_on ASCRANGE
assign ascending_assign[0:3] = arand_a[4:7];
assign ascending_assign[4:7] = arand_b[0:3];
`signal(ASCENDING_ASSIGN, ascending_assign);
// Special cases to be covered
`signal(REPLICATE_WIDTH, {4'd8{rand_a[0]}}); // Replicate count unsigned, but MSB set
if ($bits(REPLICATE_WIDTH) != 8) $fatal("%0d != 8", $bits(REPLICATE_WIDTH));
// Sel from not requires the operand to have a single sink, so can't use
// the check due to the raw expression referencing the operand
wire [63:0] sel_from_not_tmp = ~(rand_a >> rand_b[2:0] << rand_a[3:0]);
wire sel_from_not = sel_from_not_tmp[2];
always @(posedge randbit_a) if ($c(0)) $display(sel_from_not); // Do not remove signal
// Narrow concatenation
wire [9:0] narrow_concat = {5'd0, ~rand_a[44 +: 5]};
`signal(NARROW_CONCAT_A, narrow_concat[5:1]);
`signal(NARROW_CONCAT_B, narrow_concat[8:4]);
`signal(NARROW_CONCAT_C, narrow_concat[5:4]);
// Assigned at the end to avoid inlining by other passes
assign const_a = 64'h0123456789abcdef;
assign const_b = 64'h98badefc10325647;
assign sconst_a = 64'hfedcba9876543210;
assign sconst_b = 64'hba0123456789cdef;
assign zero = '0;
assign ones = '1;
endmodule