Commit Graph

12 Commits

Author SHA1 Message Date
Geza Lore 1f04cd868c
CI: Run some RTLMeter cases with --hierarchical (#6605) 2025-10-28 20:48:08 -04:00
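For context, `--hierarchical` is a Verilator flag that verilates modules marked with `/*verilator hier_block*/` metacomments as separate units, which can reduce compile time and memory on large designs. A minimal illustrative invocation follows; the step name and the way the RTLMeter workflow actually passes the flag are assumptions, not taken from this commit:

```yaml
# Illustrative only; the real workflow drives Verilator through rtlmeter.
- name: Hierarchical verilation example
  run: |
    # --hierarchical verilates hier_block-marked modules as separate units
    verilator --hierarchical --cc top.v --exe sim_main.cpp
```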
dependabot[bot] 151b17ac82
CI: Bump actions/upload-artifact from 4 to 5 (#6600) 2025-10-27 13:46:11 -04:00
dependabot[bot] fafe3737a2
CI: Bump actions/download-artifact from 5 to 6 (#6599) 2025-10-27 13:04:36 -04:00
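These two bumps only move the major-version tags referenced in the workflow files. A hedged sketch of the affected step shape, with a hypothetical artifact name and path:

```yaml
# Only the @v5/@v6 tags reflect the bumps above; names and paths are
# hypothetical.
- uses: actions/upload-artifact@v5
  with:
    name: rtlmeter-results
    path: results/
# ...and in a downstream job:
- uses: actions/download-artifact@v6
  with:
    name: rtlmeter-results
```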
Geza Lore 5247e40659
CI: Add 60 minute timeout to RTLMeter compile and run steps (#6506)
The default timeout for a job in GitHub Actions is 6 hours, which is
both too long, but more importantly, when reached the job is 'cancelled'
instead of 'failed'. Use the new `--timeout` option on `rtlmeter run`
instead to add a 60 minute timeout to compile and execute steps. (This
is ~3x longer than recent worst runs in the CI)
2025-09-29 17:37:06 +01:00
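A sketch of the approach described in this commit. The `--timeout` option on `rtlmeter run` is confirmed above; the timeout unit (assumed here to be seconds), the case selection, and the step itself are assumptions:

```yaml
# GitHub Actions' own `timeout-minutes` cancels a job rather than failing
# it, so the limit is applied inside rtlmeter, which exits nonzero on
# timeout and thereby fails the step.
- name: Compile and run RTLMeter cases
  run: ./rtlmeter run --cases "${{ matrix.cases }}" --timeout 3600  # 60 minutes, unit assumed
```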
Wilson Snyder f67534069c
CI: Avoid man page builds (#6402) 2025-09-07 13:32:34 -04:00
dependabot[bot] 319ab84f90
Bump actions/checkout from 4 to 5 (#6374) 2025-09-04 08:21:26 -04:00
dependabot[bot] 2a09114dc7
Bump actions/download-artifact from 4 to 5 (#6367) 2025-09-03 18:50:12 -04:00
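Like the artifact-action bumps above, these are mechanical major-version moves tracked by Dependabot; a representative (hypothetical) step:

```yaml
- uses: actions/checkout@v5   # previously @v4; actions/download-artifact moved to @v5 the same way
```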
Geza Lore b0044da8a7
Internals: Upload published nightly RTLMeter results artifact (#6346)
First half of #6301
2025-08-30 16:08:44 +01:00
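A hedged sketch of uploading results only from scheduled (nightly) runs; the artifact name, path, and gating condition are assumptions, not taken from the commit:

```yaml
- name: Upload nightly RTLMeter results
  if: github.event_name == 'schedule'   # restrict to scheduled (nightly) runs
  uses: actions/upload-artifact@v4      # the version in use at the time; later bumped (see above)
  with:
    name: rtlmeter-results-nightly      # hypothetical artifact name
    path: results/                      # hypothetical path
```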
Wilson Snyder 1eccfa64b5
Commentary 2025-08-08 17:54:48 -04:00
Geza Lore 07394fcafa
Automatically publish scheduled RTLMeter results (#6074)
Update the RTLMeter workflow to automatically push the performance
numbers from scheduled (nightly) runs to
verilator/verilator-rtlmeter-results.
2025-06-10 13:06:05 +01:00
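A minimal sketch of such a publish step, assuming a hypothetical token secret and results layout; the commit message does not specify the mechanism:

```yaml
- name: Publish nightly results
  if: github.event_name == 'schedule'
  env:
    TOKEN: ${{ secrets.RTLMETER_RESULTS_TOKEN }}   # hypothetical secret name
  run: |
    git clone "https://x-access-token:${TOKEN}@github.com/verilator/verilator-rtlmeter-results.git"
    cp results/*.json verilator-rtlmeter-results/  # hypothetical results layout
    cd verilator-rtlmeter-results
    git config user.name  "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add -A
    git commit -m "Add nightly RTLMeter results"
    git push
```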
Wilson Snyder aee5051526
CI: Reduce action permissions per best practices 2025-05-11 17:43:48 -04:00
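The corresponding best practice is to grant each workflow's GITHUB_TOKEN only the permissions it needs, typically via a top-level read-only default:

```yaml
# Least-privilege default; individual jobs widen this only where needed.
permissions:
  contents: read
```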
Geza Lore 6ac2aa2d99
Add RTLMeter GitHub workflows (#5948)
Add the GitHub Actions workflows for running RTLMeter.

Runs start daily at 02:00 UTC on ubuntu-24.04. There are two runs
(see the sketch after this entry):
- Using GCC, with default Verilator options
- Using Clang, with "--threads 4"

Each run uses a maximum of 2 runners in parallel (so at most 4 in
total) and takes slightly over 2 hours to complete.

The jobs will fail if a benchmark is broken, so this already serves as a
regression test for the included designs.

For now, performance metrics are recorded as artifacts of the run but
are not otherwise published.

Performance metrics are always recorded for all successful jobs, even if
some cases are failing.
2025-04-19 15:42:33 +01:00
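A schematic sketch consistent with the description above (daily trigger, runner image, two compiler configurations, and case sharding across runners); the real workflow file is more elaborate, and the matrix key names are hypothetical:

```yaml
on:
  schedule:
    - cron: '0 2 * * *'          # daily at 02:00 UTC

jobs:
  rtlmeter:
    runs-on: ubuntu-24.04
    strategy:
      matrix:
        compiler: [gcc, clang]   # the clang run adds "--threads 4"
        shard: [1, 2]            # each run splits cases over 2 parallel runners
    steps:
      - uses: actions/checkout@v4
      # ...install Verilator, then compile and run the RTLMeter cases...
```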