|
|
|
|
|
|
|
|
|
KAHN'S ALGORITHM BFS FOR OPENSTA
|
|
|
|
|
Functional Specification
|
|
|
|
|
April 2026
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1. MOTIVATION
|
|
|
|
|
|
|
|
|
|
OpenSTA's parallel BFS traversal (visitParallel) processes vertices one level at a time. All threads must finish the current level before any thread can start the next. If a level has only a handful of vertices, most threads sit idle waiting for them to finish. In real designs, level sizes vary widely -- some levels have thousands of vertices and some have very few -- making this wait-at-every-level approach a significant bottleneck for multi-threaded timing analysis.
|
|
|
|
|
|
|
|
|
|
Kahn's algorithm is a classical method for topological traversal of a directed acyclic graph. It tracks how many unprocessed predecessors each vertex has (its "in-degree"). A vertex becomes ready as soon as its in-degree reaches zero -- meaning all the vertices it depends on have been processed. This is a natural fit for timing analysis: a vertex's arrival time depends only on its fanin, so it can be computed the moment all fanin arrivals are known, without waiting for unrelated vertices at the same level to finish.
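The core of the algorithm can be sketched in a few lines of standalone C++. This is an illustrative sequential version, not OpenSTA code; the graph representation and function name are invented:

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Minimal Kahn's topological sort over an adjacency-list DAG.
// Vertices are 0..n-1; succ[v] lists the successors of v.
std::vector<int> kahnOrder(const std::vector<std::vector<int>> &succ)
{
  const int n = static_cast<int>(succ.size());
  std::vector<int> in_degree(n, 0);
  for (const auto &outs : succ)
    for (int s : outs)
      ++in_degree[s];

  // Every vertex with no unprocessed predecessors is ready immediately.
  std::queue<int> ready;
  for (int v = 0; v < n; ++v)
    if (in_degree[v] == 0)
      ready.push(v);

  std::vector<int> order;
  while (!ready.empty()) {
    int v = ready.front();
    ready.pop();
    order.push_back(v); // "visit" v: all of v's fanin is already visited
    for (int s : succ[v])
      if (--in_degree[s] == 0) // v was s's last unprocessed predecessor
        ready.push(s);
  }
  return order; // order.size() < n would indicate a cycle
}
```

A vertex enters the ready queue the instant its last predecessor is processed; it never waits on unrelated vertices at the same depth, which is exactly the property the parallel traversal below exploits.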
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
2. PROPOSED SOLUTION
|
|
|
|
|
|
|
|
|
|
Replace the per-level barrier model with Kahn's topological traversal. Instead of waiting for all vertices at level L to finish before starting level L+1, a vertex becomes eligible for processing as soon as every one of its predecessors has been processed. This allows vertices at different levels to execute concurrently, keeping threads busy.
|
|
|
|
|
|
|
|
|
|
The implementation is integrated into the existing BfsIterator class hierarchy as a runtime toggle, supporting both forward (arrival) and backward (required-time) propagation. The original level-based BFS remains the default and is always available as a fallback.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
3. ALGORITHM
|
|
|
|
|
|
|
|
|
|
The timing graph is already a DAG within each visitParallel() call: flip-flop feedback is broken at D inputs, latch D-to-Q edges are excluded by search predicates, and combinational loops are broken by the Levelizer's disabled-loop edges. This satisfies Kahn's requirement for an acyclic graph.
|
|
|
|
|
|
|
|
|
|
When Kahn's is enabled, visitParallel() proceeds in two stages:
|
|
|
|
|
|
|
|
|
|
Stage 1: Discovery and In-Degree Counting (single-threaded)
|
|
|
|
|
|
|
|
|
|
Starting from the seed vertices already in the BFS queue, a forward BFS discovers all reachable vertices following the same edge-filtering rules used by the original traversal. As each new vertex is discovered, its in-degree (number of active predecessors) is recorded in a flat array indexed by graph vertex ID. Seed vertices have in-degree zero.
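The discovery stage can be sketched as a seeded BFS that counts in-degrees only over active edges. This is a simplified stand-in, assuming an integer-ID graph and a caller-supplied edge filter; the names `discoverInDegrees` and `EdgeFilter` are invented:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <unordered_map>
#include <vector>

// Edge filter standing in for the traversal's searchThru predicate.
using EdgeFilter = std::function<bool(int from, int to)>;

// From the seeds, discover the reachable subgraph (honoring the edge
// filter) and count each discovered vertex's active-predecessor edges.
std::unordered_map<int, int>
discoverInDegrees(const std::vector<std::vector<int>> &succ,
                  const std::vector<int> &seeds,
                  const EdgeFilter &searchable)
{
  std::unordered_map<int, int> in_degree; // only active vertices appear
  std::queue<int> bfs;
  for (int seed : seeds) {
    in_degree.emplace(seed, 0); // seeds enter with in-degree zero
    bfs.push(seed);
  }
  while (!bfs.empty()) {
    int v = bfs.front();
    bfs.pop();
    for (int s : succ[v]) {
      if (!searchable(v, s)) // same filtering rules as the traversal
        continue;
      bool is_new = (in_degree.find(s) == in_degree.end());
      ++in_degree[s]; // one count per active predecessor edge
      if (is_new)
        bfs.push(s); // explore s's fanout exactly once
    }
  }
  return in_degree;
}
```

Filtered-out edges contribute neither discovery nor in-degree counts, so Stage 2 never waits on an edge the visitor would not traverse.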
|
|
|
|
|
|
|
|
|
|
Stage 2: Recursive-Dispatch Parallel Traversal (multi-threaded)
|
|
|
|
|
|
|
|
|
|
The unit of scheduling is a single ready vertex. All zero-in-degree vertices are initially dispatched as separate tasks into the existing DispatchQueue thread pool. Each task does three things:
|
|
|
|
|
|
|
|
|
|
1. Visit the vertex (computing arrivals or required times).
|
|
|
|
|
2. Atomically decrement the in-degree of each successor.
|
|
|
|
|
3. If any successor's in-degree reaches zero, dispatch that successor immediately as a new task into the same DispatchQueue.
|
|
|
|
|
|
|
|
|
|
A single finishTasks() call at the end waits for all dispatched work -- including tasks dispatched recursively from within running tasks -- to complete. There are no per-batch or per-level barriers. A worker thread that makes a successor ready sends it straight into the pool, where any idle thread can pick it up without waiting for unrelated tasks to finish. The DispatchQueue uses condition_variable internally, so idle threads block efficiently rather than spinning.
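The recursive-dispatch pattern can be illustrated with a stripped-down pool standing in for DispatchQueue. Everything here is a hypothetical sketch: `MiniPool`, `runKahnDemo`, and the diamond graph are invented stand-ins, not OpenSTA classes, but the shape (condition_variable waits, atomic in-degree decrements, a single finishTasks that also covers recursively dispatched tasks) mirrors the description above:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Stand-in for DispatchQueue: idle workers block on a condition
// variable; finishTasks() returns only when the pending count -- which
// includes tasks dispatched from inside running tasks -- reaches zero.
class MiniPool {
public:
  explicit MiniPool(unsigned n) {
    for (unsigned i = 0; i < n; ++i)
      workers_.emplace_back([this] { run(); });
  }
  ~MiniPool() {
    { std::lock_guard<std::mutex> lock(mutex_); stop_ = true; }
    work_cv_.notify_all();
    for (auto &w : workers_)
      w.join();
  }
  void dispatch(std::function<void()> task) {
    { std::lock_guard<std::mutex> lock(mutex_);
      ++pending_;
      tasks_.push(std::move(task)); }
    work_cv_.notify_one();
  }
  void finishTasks() {
    std::unique_lock<std::mutex> lock(mutex_);
    done_cv_.wait(lock, [this] { return pending_ == 0; });
  }

private:
  void run() {
    for (;;) {
      std::function<void()> task;
      { std::unique_lock<std::mutex> lock(mutex_);
        work_cv_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
        if (tasks_.empty())
          return; // stopping and fully drained
        task = std::move(tasks_.front());
        tasks_.pop(); }
      task();
      { std::lock_guard<std::mutex> lock(mutex_);
        if (--pending_ == 0)
          done_cv_.notify_all(); }
    }
  }
  std::vector<std::thread> workers_;
  std::queue<std::function<void()>> tasks_;
  std::mutex mutex_;
  std::condition_variable work_cv_, done_cv_;
  int pending_ = 0;
  bool stop_ = false;
};

// Stage 2 over a diamond DAG 0 -> {1,2} -> 3: each task visits one
// vertex, atomically decrements successor in-degrees, and dispatches
// any successor that reaches zero. No per-level barriers.
int runKahnDemo()
{
  std::vector<std::vector<int>> succ = {{1, 2}, {3}, {3}, {}};
  std::vector<std::atomic<int>> in_degree(4);
  in_degree[0] = 0; in_degree[1] = 1; in_degree[2] = 1; in_degree[3] = 2;
  std::atomic<int> visited{0};

  MiniPool pool(4);
  std::function<void(int)> visit = [&](int v) {
    ++visited; // the "visit": arrivals / requireds would be computed here
    for (int s : succ[v])
      if (--in_degree[s] == 0) // last predecessor just finished
        pool.dispatch([&visit, s] { visit(s); });
  };
  pool.dispatch([&visit] { visit(0); }); // seed: in-degree already zero
  pool.finishTasks(); // waits for recursively dispatched tasks too
  return visited.load();
}
```

The self-referencing `visit` function and the single trailing `finishTasks()` correspond to the lifetime argument made in Section 4: all tasks complete before the function owning `visit` returns.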
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
4. IMPLEMENTATION DETAILS
|
|
|
|
|
|
|
|
|
|
Files modified:
|
|
|
|
|
|
|
|
|
|
include/sta/Bfs.hh -- Added kahnForEachSuccessor pure virtual method (forward follows out-edges, backward follows in-edges), persistent KahnState storage, use_kahns_ toggle, kahn_pred_ pointer for the discovery edge filter, and resetLevelBounds helper.
|
|
|
|
|
|
|
|
|
|
search/Bfs.cc -- Defined KahnState struct holding persistent in-degree arrays (reused across calls to avoid re-allocation). Added a third branch to visitParallel: single-threaded / original-parallel / Kahn's-parallel. Implemented kahnForEachSuccessor for both BfsFwdIterator and BfsBkwdIterator.
|
|
|
|
|
|
|
|
|
|
include/sta/Search.hh -- Added useKahnsBfs() getter and setUseKahnsBfs(bool) setter on Search, forwarding to the arrival and required iterators.
|
|
|
|
|
|
|
|
|
|
search/Search.cc -- Two lines in the Search constructor wire the Kahn's edge filter (SearchAdj) onto the arrival and required iterators. Added Search::useKahnsBfs() and Search::setUseKahnsBfs() implementations.
|
|
|
|
|
|
|
|
|
|
include/sta/Sta.hh -- Added useKahnsBfs() / setUseKahnsBfs() declarations for the Tcl variable sta_use_kahns_bfs.
|
|
|
|
|
|
|
|
|
|
search/Sta.cc -- Implemented Sta::useKahnsBfs() and Sta::setUseKahnsBfs() as thin forwarders to Search.
|
|
|
|
|
|
|
|
|
|
search/Search.i -- Exposed use_kahns_bfs and set_use_kahns_bfs to SWIG for Tcl.
|
|
|
|
|
|
|
|
|
|
tcl/Variables.tcl -- Added the sta_use_kahns_bfs Tcl variable with a read/write trace that calls the underlying commands.
|
|
|
|
|
|
|
|
|
|
Enabling Kahn's at the iterator level requires two calls on a BfsIterator:
|
|
|
|
|
|
|
|
|
|
iterator->setKahnPred(predicate); // edge filter for discovery
|
|
|
|
|
iterator->setUseKahns(true); // enable Kahn's
|
|
|
|
|
|
|
|
|
|
The edge filter is separate from the iterator's existing search_pred_ because the original BFS never uses search_pred_ directly for arrivals -- the visitor provides its own filter at call time. Kahn's discovery runs before any visitor, so it needs the filter upfront. If the filter is null, visitParallel falls back to the original BFS.
|
|
|
|
|
|
|
|
|
|
Kahn's is also bypassed -- even when enabled -- whenever the Tcl variable sta_dynamic_loop_breaking is set. That feature relies on arrival tags that emerge only during propagation to decide whether an otherwise-disabled loop edge can be traversed. Kahn's needs the active subgraph and in-degrees known before propagation begins, so it cannot consult those tags. To avoid silently missing vertices, visitParallel guards the Kahn's path with an explicit check on variables_->dynamicLoopBreaking() and falls back to the original level-based BFS whenever dynamic loop breaking is active. In that case the toggle is a no-op from the user's point of view, and results stay correct.
|
|
|
|
|
|
|
|
|
|
For end users, Kahn's can be toggled from Tcl via a design-level variable:
|
|
|
|
|
|
|
|
|
|
set sta_use_kahns_bfs 1 ;# enable Kahn's (default)
|
|
|
|
|
set sta_use_kahns_bfs 0 ;# fall back to original level-based BFS
|
|
|
|
|
puts $sta_use_kahns_bfs ;# read current setting
|
|
|
|
|
|
|
|
|
|
Setting the variable calls Sta::setUseKahnsBfs, which applies the flag to both the arrival (forward) and required-time (backward) BFS iterators. No arrivals or requireds are invalidated on toggle -- the two algorithms produce bit-identical results, so cached state remains valid. Scripts can flip the variable mid-session to compare the two paths without a full rerun.
|
|
|
|
|
|
|
|
|
|
The Tcl plumbing is registered at package load via Variables.tcl (trace add variable) and the underlying commands use_kahns_bfs / set_use_kahns_bfs are exposed through Search.i (SWIG) to the sta namespace.
|
|
|
|
|
|
|
|
|
|
Persistent state (KahnState) stores the in-degree arrays across calls. On the first call it allocates; on subsequent calls it resets only the entries touched previously, avoiding full re-initialization.
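The persistence pattern can be sketched as follows. Field and method names here are illustrative, not the actual KahnState members; the sketch also folds in the grow-on-demand behavior described in Section 7, Finding 1:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the persistent-state pattern: keep the in-degree array
// across calls, remember which entries were written, and reset only
// those on the next call. Grows on demand because vertex IDs can
// exceed the live vertex count after deletions.
struct KahnStateSketch {
  std::vector<int> in_degree;  // indexed by graph vertex ID
  std::vector<size_t> touched; // IDs written during the last call

  void ensureCapacity(size_t vertex_id) {
    if (vertex_id >= in_degree.size())
      in_degree.resize(vertex_id + 1, 0); // never assume id < vertexCount()
  }
  void set(size_t vertex_id, int degree) {
    ensureCapacity(vertex_id);
    in_degree[vertex_id] = degree;
    touched.push_back(vertex_id);
  }
  void resetTouched() { // O(|touched|) instead of O(capacity)
    for (size_t id : touched)
      in_degree[id] = 0;
    touched.clear();
  }
}; 
```

For small incremental updates the touched list is a tiny fraction of the array, so the reset cost tracks the size of the update rather than the size of the design.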
|
|
|
|
|
|
|
|
|
|
The Stage 2 task body is a std::function defined as a local variable in visitParallel. It captures itself by reference so that task code can recursively dispatch successors via the same function. Its lifetime is the duration of visitParallel; finishTasks() guarantees all dispatched tasks complete before the function returns, so the self-reference is always valid.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
5. INCREMENTAL TIMING UPDATES
|
|
|
|
|
|
|
|
|
|
OpenSTA supports incremental timing: when a cell is resized or an edge delay changes, only the affected vertices need to be re-evaluated instead of recomputing the whole graph. This is driven by Search.cc, which tracks dirty vertices in an "invalid arrivals" set and enqueues them as seeds before the next findArrivals call. Our implementation hooks into this existing mechanism without modification.
|
|
|
|
|
|
|
|
|
|
When Kahn's runs, the seed vertices in the BFS queue are exactly the dirty ones supplied by the incremental framework. The discovery stage walks forward from those seeds and finds the downstream subgraph that could be affected. Only that subgraph -- not the whole graph -- gets in-degrees computed and gets visited in Stage 2. For small updates (a few changed cells in a large design), the active set is a small fraction of the total graph, and the work is proportional to it.
|
|
|
|
|
|
|
|
|
|
There is one behavioral difference from the original BFS worth noting. The original stops propagating through a vertex whose arrivals did not change after re-evaluation; it skips the enqueue of its fanout. Our Kahn's implementation discovers the full reachable subgraph upfront and decrements in-degrees unconditionally, so every reachable vertex is visited.
|
|
|
|
|
|
|
|
|
|
The reason is fundamental to Kahn's algorithm: every active predecessor must decrement its successor's in-degree exactly once, otherwise the counter never reaches zero and the vertex stalls forever. If we skipped a decrement because "arrivals didn't change," a downstream vertex with multiple predecessors could be left waiting on a decrement that will never come -- even if its other predecessors did change and genuinely need to propagate.
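The stall is easy to reproduce with a tiny sequential simulation (hypothetical code, not OpenSTA): on a diamond 0 -> {1, 2} -> 3, letting one predecessor skip its decrement leaves vertex 3 waiting forever.

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Returns how many vertices get visited on the diamond 0 -> {1,2} -> 3
// when the vertices flagged in skips_decrement do not decrement their
// successors (the "arrivals unchanged" shortcut wrongly applied to
// Kahn's counters).
int visitedWithSkips(const std::vector<bool> &skips_decrement)
{
  std::vector<std::vector<int>> succ = {{1, 2}, {3}, {3}, {}};
  std::vector<int> in_degree = {0, 1, 1, 2};
  std::queue<int> ready;
  ready.push(0);
  int visited = 0;
  while (!ready.empty()) {
    int v = ready.front();
    ready.pop();
    ++visited;
    if (skips_decrement[v])
      continue; // skipped decrement: the bug this section rules out
    for (int s : succ[v])
      if (--in_degree[s] == 0)
        ready.push(s);
  }
  return visited;
}
```

With no skips all four vertices are visited; if vertex 1 skips its decrement, vertex 3's counter sticks at one and vertex 3 is never processed, even though vertex 2's changes genuinely need to reach it.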
|
|
|
|
|
|
|
|
|
|
The practical cost is that vertices whose arrivals did not change are still visited, but the visitor detects no change and no downstream updates happen. This is correct but slightly more eager than the original. It has not caused test failures or measurable overhead in any regression so far.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
6. COMPARISON WITH ALTERNATE IMPLEMENTATION
|
|
|
|
|
|
|
|
|
|
An alternate implementation (BfsFwdInDegreeIterator) in a separate repository takes a standalone-class approach used only for delay calculation.
|
|
|
|
|
|
|
|
|
|
Architecture: The alternate creates a separate class. Ours integrates into the existing BfsIterator with a toggle, supporting both forward and backward BFS across all callers.
|
|
|
|
|
|
|
|
|
|
Discovery cost: The alternate scans every vertex and edge in the entire graph to compute in-degrees -- O(V_total + E_total), where V_total and E_total count all vertices and edges in the graph. Even if only a small portion needs re-timing, the full graph is walked. Ours starts from the dirty seed vertices and walks only the subgraph reachable from them -- O(V_active + E_active), where V_active and E_active count only the vertices and edges that actually need processing.

Edge filtering: For loop breaking, the alternate uses a raw level comparison (to_level >= from_level) to decide which edges to skip. Ours uses the same SearchAdj filter that the Levelizer and the rest of the BFS already use, so the set of skipped edges (disabled loops, latch D-to-Q, timing checks) is guaranteed to be consistent.
|
|
|
|
|
|
|
|
|
|
Thread safety: The alternate uses a non-atomic visited flag from worker threads (data race risk) and maintains a per-edge mutex-locked set for deduplication (serialization bottleneck). Ours uses a read-only array for active-set checks and computes in-degrees upfront so edge tracking is unnecessary.
|
|
|
|
|
|
|
|
|
|
Scheduling granularity: The alternate uses batched dispatch -- it dispatches a wavefront of ready vertices, waits for all to finish via finishTasks(), then dispatches the next wavefront. This re-introduces a barrier between wavefronts. Ours dispatches each ready vertex as a separate task and, when a running task makes a successor ready, dispatches that successor immediately via recursive dispatch into the same DispatchQueue. A single finishTasks() at the end waits for all work. This removes the per-wavefront barrier and keeps threads continuously fed.
|
|
|
|
|
|
|
|
|
|
What we adopted from them: The DispatchQueue-based execution model. Our initial implementation used custom spin-wait workers (std::this_thread::yield) which wasted CPU. Moving to DispatchQueue with condition_variable-based blocking cut overhead substantially.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
7. FINDINGS FROM REGRESSIONS
|
|
|
|
|
|
|
|
|
|
Finding 1: Vertex IDs can exceed vertexCount() after deletions
|
|
|
|
|
|
|
|
|
|
The graph's ObjectTable stores vertices in blocks of 128. graph->id(vertex) returns (block_index * 128 + slot), which can be much larger than graph->vertexCount() (the live count) after cells are deleted. Sizing the in-degree array to vertexCount()+1 caused an out-of-bounds segfault during the rmp.gcd_restructure flow, which deletes cells during restructuring.
|
|
|
|
|
|
|
|
|
|
Resolution: The in-degree array now grows dynamically during discovery when any vertex ID exceeds current capacity. Worker threads include bounds checks. The alternate implementation has the same latent issue but has not encountered it because its code path does not trigger the deletion pattern.
|
|
|
|
|
|
|
|
|
|
Finding 2: The arrival iterator has a null search predicate
|
|
|
|
|
|
|
|
|
|
The arrival BFS iterator is constructed with search_pred = nullptr because the original BFS never uses it -- the visitor always provides the filter. Kahn's discovery used search_pred directly, causing a null-pointer crash during arrival propagation in the rmp flow.
|
|
|
|
|
|
|
|
|
|
Resolution: Introduced kahn_pred, a dedicated predicate for Kahn's discovery, wired to SearchAdj in the Search constructor. This keeps the original BFS path completely unchanged.
|
|
|
|
|
|
|
|
|
|
Both findings were caught by rmp.gcd_restructure.tcl and resolved without changing the original BFS behavior.
|
|
|
|
|
|
|
|
|
|
Finding 3: Incompatibility with dynamic loop breaking
|
|
|
|
|
|
|
|
|
|
sta_dynamic_loop_breaking (a pre-existing Tcl variable, default off) enables on-the-fly re-activation of disabled-loop edges when arrival propagation produces loop tags that satisfy user-declared false-path exceptions. The check lives in SearchAdj::searchThru: a disabled-loop edge is traversable when (dynamicLoopBreaking() && hasPendingLoopPaths(edge)) holds, where hasPendingLoopPaths consults the visitor's live TagGroupBldr to see which loop tags are currently propagating.
|
|
|
|
|
|
|
|
|
|
The SearchAdj instance we reuse as kahn_pred_ (search_adj_ in Search.cc) is constructed with tag_bldr_ == nullptr, so hasPendingLoopPaths always returns false for it -- by design, since Kahn's discovery runs before any visitor is active and there are no live tags to consult. This means that when a user enables sta_dynamic_loop_breaking alongside sta_use_kahns_bfs, Kahn's discovery and successor decrement would systematically skip disabled-loop edges that the original ArrivalVisitor path (using its own tag-aware adj_pred_) can traverse. Vertices reachable only through those edges would never enter the active set, leaving their arrivals and slacks stale.
|
|
|
|
|
|
|
|
|
|
Neither OpenSTA's regression suite nor OpenROAD's standard flows set sta_dynamic_loop_breaking, so this never surfaced in testing. It was identified during code review.
|
|
|
|
|
|
|
|
|
|
Resolution: visitParallel now falls back to the original level-based BFS whenever variables_->dynamicLoopBreaking() is true, regardless of the Kahn's toggle. This is a defensive guard; the Tcl variable still reads and writes normally, but the traversal uses the original path when the two features would otherwise interact unsafely. The cost is one additional boolean check per visitParallel invocation.
|
|
|
|
|
|
|
|
|
|
A future enhancement could make Kahn's loop-breaking-aware by conservatively discovering through disabled-loop edges and adjusting in-degrees based on actual propagation, but that work is non-trivial and not worth pursuing until a concrete use case combines both features.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
8. PERFORMANCE
|
|
|
|
|
|
|
|
|
|
On the OpenSTA regression suite (6109 tests), Kahn's BFS runs at parity with the original level-based BFS (28s vs 27-30s). On small test designs the discovery stage overhead is negligible. On large designs with uneven level populations, barrier elimination should produce net speedups, particularly at high thread counts where the original BFS leaves threads idle.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
9. TEST PLAN
|
|
|
|
|
|
|
|
|
|
Beyond the OpenSTA standalone regression suite and the OpenROAD full regression, a set of helper scripts is provided for A/B runtime benchmarking and validation across ORFS designs. These run the full ORFS flow for each design twice -- once with Kahn's BFS disabled and once with Kahn's enabled -- and collect per-step timing and design-size metrics for comparison.
|
|
|
|
|
|
|
|
|
|
All scripts live under flow/util/ and are intended to be invoked from the flow/ directory. They do not modify any design scripts or ORFS flow files; instead, a tiny binary wrapper injects the Tcl variable sta_use_kahns_bfs into every OpenROAD invocation.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
9.1 Binary wrapper: openroad_kahns_wrap.sh
|
|
|
|
|
|
|
|
|
|
ORFS invokes openroad with -no_init, so ~/.openroad is not sourced. To toggle sta_use_kahns_bfs across every invocation of every flow step without editing any Tcl, this wrapper sits in front of the real OpenROAD binary:
|
|
|
|
|
|
|
|
|
|
- Finds the .tcl cmd_file argument in the invocation.
|
|
|
|
|
- Creates a temporary Tcl that performs
|
|
|
|
|
set sta_use_kahns_bfs <mode>
|
|
|
|
|
puts "kahns-wrap: requested=<mode>, effective=$::sta_use_kahns_bfs"
|
|
|
|
|
source "<original.tcl>"
|
|
|
|
|
- Execs the real OpenROAD on the temporary file.
|
|
|
|
|
|
|
|
|
|
The wrapper reads KAHNS_BFS from the environment (0 = original BFS, 1 = Kahn's). The breadcrumb puts line lands in every step log, so a single grep confirms the flag was in effect and never overridden by a downstream script.
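The injection pattern can be sketched in a few lines of shell. Paths, the default file name, and the breadcrumb text below are illustrative; the real wrapper also locates the .tcl cmd_file among the openroad arguments and execs the actual binary:

```shell
# Sketch: prepend the toggle to a step script without editing it.
mode="${KAHNS_BFS:-1}"        # 0 = original BFS, 1 = Kahn's
orig_tcl="${1:-step.tcl}"     # the step's cmd_file (illustrative default)

tmp_tcl="$(mktemp)"
{
  printf 'set sta_use_kahns_bfs %s\n' "$mode"
  # Breadcrumb: lands in the step log so a grep can confirm the flag.
  printf 'puts "kahns-wrap: requested=%s, effective=$::sta_use_kahns_bfs"\n' "$mode"
  printf 'source "%s"\n' "$orig_tcl"
} > "$tmp_tcl"

# The real wrapper would now run: exec openroad -no_init "$tmp_tcl"
cat "$tmp_tcl"
```

Because the original script is sourced unmodified, any downstream `set sta_use_kahns_bfs` in a design script would still win; the breadcrumb's "effective" value is what makes such overrides visible in the logs.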
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
9.2 Benchmark driver: kahns_benchmark.sh
|
|
|
|
|
|
|
|
|
|
Runs an A/B sweep across one or more designs. For each design:
|
|
|
|
|
1. make clean_all
|
|
|
|
|
2. Run target (default: finish) with KAHNS_BFS=0; time with date +%s.%N.
|
|
|
|
|
3. Save elapsed-all.txt and copy logs/<pdk>/<design>/<variant>/ before the next clean.
|
|
|
|
|
4. make clean_all
|
|
|
|
|
5. Run target with KAHNS_BFS=1; time.
|
|
|
|
|
6. Save elapsed-all.txt and the logs tree again.
|
|
|
|
|
|
|
|
|
|
Output directory layout:
|
|
|
|
|
|
|
|
|
|
<bench_dir>/
|
|
|
|
|
summary.csv wall-time totals, CSV
|
|
|
|
|
<design>_kahns_off.log full stdout, OFF run
|
|
|
|
|
<design>_kahns_on.log full stdout, ON run
|
|
|
|
|
<design>_kahns_off_artifacts/elapsed-all.txt per-step seconds, OFF
|
|
|
|
|
<design>_kahns_off_artifacts/logs/ raw step logs and JSON metrics, OFF
|
|
|
|
|
<design>_kahns_on_artifacts/elapsed-all.txt per-step seconds, ON
|
|
|
|
|
<design>_kahns_on_artifacts/logs/ raw step logs and JSON metrics, ON
|
|
|
|
|
|
|
|
|
|
Usage (from flow/):
|
|
|
|
|
util/kahns_benchmark.sh [-t target] [-o outdir] [design-configs...]
|
|
|
|
|
|
|
|
|
|
Target defaults to finish. For STA-focused benchmarking, -t route covers all STA-heavy steps (place, repair_timing_post_place, cts, global_route, repair_timing_post_global_route, detail_route) without the downstream fill / final_report overhead.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
9.3 Per-step runtime comparison: kahns_compare.sh
|
|
|
|
|
|
|
|
|
|
Reads the elapsed-all.txt files from a benchmark directory and produces a per-step comparison table with OFF seconds, ON seconds, delta, and ratio (ON/OFF). Positive deltas mean Kahn's was slower for that step; ratios below 1.00x mean Kahn's was faster.
|
|
|
|
|
|
|
|
|
|
Usage (from flow/):
|
|
|
|
|
util/kahns_compare.sh <bench_dir> [design_tag]
|
|
|
|
|
|
|
|
|
|
Without design_tag, every design that has both OFF and ON artifacts is compared in a single run.
|
|
|
|
|
|
|
|
|
|
Typical reading pattern for a given design:
|
|
|
|
|
- Non-STA steps (yosys, floorplan_macro, pdn, fillcell): ratio ~1.00x.
|
|
|
|
|
- STA-heavy steps (3_3_place_gp, 4_1_cts, 5_1_grt, 5_2_route): where any real speed-up or slowdown appears.
|
|
|
|
|
- Small designs: slight positive delta from Kahn's discovery-pass overhead.
|
|
|
|
|
- Large designs with uneven level populations: expected speed-up from barrier elimination.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
9.4 Design-size view and correctness check: kahns_size.sh
|
|
|
|
|
|
|
|
|
|
Extracts design-size metrics (instance count, net count, IO count, cell area) at each major stage from the step-level JSON metrics files (<step>.json). Provides three modes:
|
|
|
|
|
|
|
|
|
|
Default (combined view):
|
|
|
|
|
util/kahns_size.sh <bench_dir> [design_tag]
|
|
|
|
|
Prints one table per design with the OFF-run values and a match column that flags any stage where ON disagreed. Ideal for spotting correctness regressions at a glance: every row must show ok.
|
|
|
|
|
|
|
|
|
|
Verbose (-v):
|
|
|
|
|
util/kahns_size.sh -v <bench_dir> [design_tag]
|
|
|
|
|
Prints the two separate OFF and ON tables side-by-side so the actual disagreeing values can be read.
|
|
|
|
|
|
|
|
|
|
Validation sweep (-c, --check):
|
|
|
|
|
util/kahns_size.sh --check <bench_dir>
|
|
|
|
|
Iterates every design in the benchmark directory and emits one line per design: OK, FAIL (with the stage and metrics that disagreed), or SKIP (missing artifacts). Exits non-zero if any design fails, which makes it CI-friendly. Any FAIL is a real correctness bug -- Kahn's must produce the same netlist as the original BFS.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
9.5 Operational checklist
|
|
|
|
|
|
|
|
|
|
Running a full sweep across several designs:
|
|
|
|
|
|
|
|
|
|
1. Build OpenROAD with Kahn's: the flag sta_use_kahns_bfs defaults to 1.
|
|
|
|
|
2. From flow/, choose the target, the output directory, and the design list.
|
|
|
|
|
For example, pick one or more config.mk paths from a platform of interest (under flow/designs/<platform>/) and pass them on the command line:
|
|
|
|
|
util/kahns_benchmark.sh -t finish -o <bench_dir> <config1> <config2> ...
|
|
|
|
|
3. While it runs, tail the most recent per-design stdout log to follow progress and verify the wrapper breadcrumb:
|
|
|
|
|
tail -f "$(ls -t <bench_dir>/*.log | head -1)" | grep -i "kahns-wrap\|error"
|
|
|
|
|
4. Validate correctness once designs finish:
|
|
|
|
|
util/kahns_size.sh --check <bench_dir>
|
|
|
|
|
Address any FAIL before trusting the runtime numbers.
|
|
|
|
|
5. Compare per-step runtimes:
|
|
|
|
|
util/kahns_compare.sh <bench_dir>
|
|
|
|
|
Interpret in the context of design size:
|
|
|
|
|
util/kahns_size.sh <bench_dir>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
9.6 Additional conventions
|
|
|
|
|
|
|
|
|
|
- Always run KAHNS_BFS=0 first, then KAHNS_BFS=1. The OFF pass is the baseline; running OFF first avoids any chance that a bug in the ON path could corrupt shared state and affect a subsequent OFF run.
|
|
|
|
|
- Target choice: -t route is usually enough for STA-feature benchmarking; -t finish adds the filler and final-report steps, which do not exercise Kahn's much.
|
|
|
|
|
- Parallelism: ORFS exports NUM_CORES to OpenROAD's -threads flag. Kahn's and the original BFS both respect this; a fair comparison must use identical thread counts.
|
|
|
|
|
- Disk usage: each artifact directory copies the per-design logs tree. Budget a few hundred MB per design for a finish sweep.
|
|
|
|
|
- Clean up between sweeps: kahns_benchmark.sh always runs make clean_all before each design's first iteration. No manual cleanup is required.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
10. TEST RESULTS
|
|
|
|
|
|
|
|
|
|
OpenSTA standalone: 6109/6109 tests PASS with Kahn's enabled.
|
|
|
|
|
|
|
|
|
|
OpenROAD full regression: All tests PASS, including rmp.gcd_restructure (the test that surfaced both findings in Section 7), rsz (incremental netlist modification), and cts (buffer insertion with re-timing).
|
|
|
|
|
|
|
|
|
|
ORFS A/B runtime benchmarks (Section 9): in progress. An initial sweep across several platform/design combinations is running using util/kahns_benchmark.sh. Completed designs to date show Kahn's at parity or slightly slower on small designs (for a representative small design the measured overhead was roughly +3% / +12s over a ~375s baseline), consistent with the Section 8 prediction that the discovery-pass overhead dominates when the active graph is small. Larger designs are still pending; this section will be updated with their numbers and the per-step breakdown as each finishes. Correctness (netlist-size match between OFF and ON) is verified after each design via util/kahns_size.sh --check.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
11. LIMITATIONS OF THE CURRENT APPROACH
|
|
|
|
|
|
|
|
|
|
The current implementation is correct and runs at parity with the original BFS on small designs, but several limitations remain:
|
|
|
|
|
|
|
|
|
|
Eager visits in Kahn's traversal. Every vertex in the discovered active subgraph is visited regardless of whether its arrival actually changes. This is a fundamental consequence of the in-degree counting model -- each predecessor must decrement each successor's counter exactly once, so skipping decrements is not allowed. The original BFS short-circuits via an "arrivals unchanged" check at the visitor level and avoids enqueuing downstream when no change occurred. We do not. For most designs the cost is small because the visitor itself detects no change and terminates quickly, but in deep-pipeline designs with many cascaded unchanged vertices the extra visits add up.
|
|
|
|
|
|
|
|
|
|
Full forward sweep for slack queries. Slack at pin P is computed as required(P) minus arrival(P). The required-time backward BFS is already scoped to P's level. But the forward arrival BFS is not scoped to P's fanin cone -- it propagates from all dirty seeds to all endpoints they can reach. For a single-point slack query on a design with large independent cones, most of the forward work is spent on endpoints the query does not care about.
|
|
|
|
|
|
|
|
|
|
Over-invalidation in the dirty set. The incremental framework's invalid_arrivals_ and invalid_requireds_ sets are tracked conservatively. Some edge-delay or pin changes invalidate more vertices than strictly necessary; the visitor detects no change and does no further propagation, but we still paid for the visit. A more precise validity analysis could prune the seed set before the BFS starts.
|
|
|
|
|
|
|
|
|
|
Per-call active_vertices allocation. The KahnState persistence avoids re-allocating in_degree_init and in_degree across calls, but the active_vertices vector is rebuilt every call. For very frequent small updates this has some overhead.
|
|
|
|
|
|
|
|
|
|
Recursive dispatch cost for small workloads. Each ready vertex is dispatched as its own DispatchQueue task. The dispatch lock and condition-variable signaling cost is tiny per task, but for active sets smaller than the thread count the parallelism benefit may not offset the dispatch overhead.
|
|
|
|
|
|
|
|
|
|
No Kahn's when dynamic loop breaking is enabled. sta_dynamic_loop_breaking decides whether a disabled-loop edge is traversable based on arrival tags that only appear during propagation, which Kahn's upfront-discovery model cannot consult. visitParallel therefore falls back to the original level-based BFS whenever dynamicLoopBreaking() is true. The Tcl toggle sta_use_kahns_bfs still reads normally, but the traversal uses the original path. See Section 7, Finding 3 for details.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
12. FUTURE ROADMAP
|
|
|
|
|
|
|
|
|
|
The following enhancements extend the current Kahn's-based incremental timing implementation. They address known limitations in the existing approach and are orthogonal to Kahn's itself -- each can be layered on top of the existing implementation independently. Items are listed in rough order of payoff relative to effort.
|
|
|
|
|
|
|
|
|
|
1. Visit-level change short-circuit
|
|
|
|
|
Objective: Restore the "arrivals unchanged" optimization within Kahn's propagation by skipping the visit body when no predecessor's arrival has actually changed.
|
|
|
|
|
Approach: Track a per-vertex "changed" flag during Kahn's traversal. When a vertex is popped from the queue, check whether any of its incoming arrivals differ from the previously recorded values before performing the full visit.
|
|
|
|
|
Benefit: Addresses the eager-visit limitation in the current implementation, eliminating redundant work when dirty vertices do not actually propagate value changes downstream.
|
|
|
|
|
Risk and effort: Low risk, localized change.
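One possible shape for this short-circuit, sketched sequentially with invented names (the real change would live inside the Kahn's task body, and this is only one way to express it):

```cpp
#include <algorithm>
#include <cassert>
#include <queue>
#include <vector>

// Hypothetical sketch of the change short-circuit: a Kahn's pass over a
// DAG that skips the expensive visit body when the vertex is not dirty
// and no predecessor's arrival changed. In-degree decrements still
// happen for every vertex, so the counters always drain (Section 5).
struct ShortCircuitPass {
  std::vector<std::vector<int>> succ, pred;
  std::vector<int> delay;   // per-vertex delay (toy arrival model)
  std::vector<int> arrival; // cached arrivals from a prior pass
  int full_visits = 0;      // visits that ran the expensive body

  void run(const std::vector<bool> &dirty) {
    int n = static_cast<int>(succ.size());
    std::vector<int> in_degree(n, 0);
    for (auto &outs : succ)
      for (int s : outs)
        ++in_degree[s];
    std::vector<bool> changed = dirty; // dirty vertices must re-visit
    std::queue<int> ready;
    for (int v = 0; v < n; ++v)
      if (in_degree[v] == 0)
        ready.push(v);
    while (!ready.empty()) {
      int v = ready.front();
      ready.pop();
      bool needs_visit = changed[v];
      for (int p : pred[v])
        needs_visit = needs_visit || changed[p];
      if (needs_visit) { // otherwise: short-circuit the visit body
        ++full_visits;
        int a = 0;
        for (int p : pred[v])
          a = std::max(a, arrival[p]);
        a += delay[v];
        if (a != arrival[v]) {
          arrival[v] = a;
          changed[v] = true;
        }
      }
      for (int s : succ[v]) // decrement unconditionally, as today
        if (--in_degree[s] == 0)
          ready.push(s);
    }
  }
};
```

With a consistent cache and an empty dirty set the expensive body never runs; after a single delay change, only the changed vertex and its affected fanout pay for full visits, which is the redundancy this roadmap item targets.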
|
|
|
|
|
|
|
|
|
|
2. Validity-boundary seeding
|
|
|
|
|
Objective: Address over-invalidation by narrowing the set of vertices that require re-propagation.
|
|
|
|
|
Approach: Before seeding the BFS, perform a pre-pass that walks the dirty-reachable subgraph to identify the minimal boundary -- the last layer of vertices whose arrivals remain known-valid but whose fanout begins the invalid region. Seed the BFS from those valid boundary vertices so that propagation walks forward into the invalid region with known-correct source arrivals. The same pattern applies in reverse for required times: find the boundary between known-valid and known-invalid requireds, seed from the valid side, and propagate backward into the invalid region.
|
|
|
|
|
Benefit: If the invalidation is actually narrower than the dirty set suggests, the boundary pass prunes work the current approach would still perform. Starting from vertices with known-correct arrivals also gives the forward pass a clean reference point, which may catch cases where a dirty vertex's arrival does not actually change.
|
|
|
|
|
Risk and effort: Medium effort, refinement of the current dirty-set mechanism rather than an architectural change. Can be added in Search.cc at the seeding step (seedInvalidArrivals / seedInvalidRequireds) without modifying the BFS iterator.
|
|
|
|
|
|
|
|
|
|
3. Demand-driven forward propagation for single-point queries
|
|
|
|
|
Objective: For single slack queries at a given pin, restrict forward propagation to only the vertices that actually influence that pin.
|
|
|
|
|
Approach: For a slack query at pin P:
|
|
|
|
|
- Walk backward from P to compute its fanin cone.

- Intersect the cone with the dirty set -- only those dirty vertices actually affect P.

- Propagate forward only from those vertices, restricted to within the cone.
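The first two steps can be sketched as follows (hypothetical code with an integer-ID graph; `faninCone` and `seedsForQuery` are invented names):

```cpp
#include <cassert>
#include <queue>
#include <unordered_set>
#include <vector>

// Backward BFS from the query pin p: every vertex that can reach p.
std::unordered_set<int> faninCone(const std::vector<std::vector<int>> &pred,
                                  int p)
{
  std::unordered_set<int> cone{p};
  std::queue<int> bfs;
  bfs.push(p);
  while (!bfs.empty()) {
    int v = bfs.front();
    bfs.pop();
    for (int q : pred[v])
      if (cone.insert(q).second) // newly reached fanin vertex
        bfs.push(q);
  }
  return cone;
}

// Intersect the dirty set with the cone: only these seeds affect p,
// and the forward propagation would stay within the cone.
std::vector<int> seedsForQuery(const std::vector<std::vector<int>> &pred,
                               int p,
                               const std::vector<int> &dirty)
{
  auto cone = faninCone(pred, p);
  std::vector<int> seeds;
  for (int v : dirty)
    if (cone.count(v))
      seeds.push_back(v);
  return seeds;
}
```

Dirty vertices outside the cone are dropped before any forward work starts, which is where the savings for single-point queries would come from.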
|
|
|
|
|
|
|
|
|
|
OpenSTA already applies this pattern in the backward required-time pass (which stops at the query pin's level) and for the endpoint-with-no-fanout shortcut (seedRequired, no BFS at all). It does not currently do this for the forward arrival pass in slack queries -- findAllArrivals always goes to maxLevel.
|
|
|
|
|
Implementation requirements:
|
|
|
|
|
|
|
|
|
|
- A query context passed from the Tcl command down to findArrivals

- A backward cone computation, or a lazy cone-membership check

- Predicate filtering during Kahn's discovery to skip vertices outside the cone
|
|
|
|
|
|
|
|
|
|
Kahn's traversal still applies to the forward pass within the cone -- it remains a DAG traversal and retains the barrier elimination benefit. The two optimizations stack rather than compete.
|
|
|
|
|
Benefit: The largest win for narrow single-point queries on large designs.
|
|
|
|
|
Risk and effort: Larger architectural change spanning Search.cc and the Tcl query entry points. Scope is architectural, not inside the BFS iterator.
|
|
|
|
|
|
|
|
|
|
4. Hybrid scheduling
|
|
|
|
|
Objective: Address the small-workload overhead of the current scheduling model.
|
|
|
|
|
Approach: Introduce an adaptive threshold that switches between batched dispatch (suited for small active sets) and recursive dispatch (suited for large active sets), based on the size of the active set at scheduling time.
|
|
|
|
|
Benefit: Eliminates overhead for small incremental updates while preserving throughput for large ones.
|
|
|
|
|
|
|
|
|
|
5. Multi-query cone batching
|
|
|
|
|
Objective: Amortize cone-computation cost when multiple slack queries are issued in sequence.
|
|
|
|
|
Approach: When several slack queries arrive together, compute the union of their backward cones once and perform a single scoped forward sweep across the combined cone, rather than repeating the cone computation and forward traversal per query.
|
|
|
|
|
Benefit: Reduces redundant work in reporting flows that issue many related queries, such as full endpoint slack reports or path-group summaries.
|