* [2.x] feat: Add cacheVersion setting for global cache invalidation
**Problem**
There was no escape hatch to invalidate all task caches when needed.
**Solution**
Add a `Global / cacheVersion` setting that is incorporated into the cache key
hash. Changing it invalidates all caches. It defaults to the value of the
system property `sbt.cacheversion`, or `0L` when unset. With `0L` the hash is
identical to the previous behavior, so the default is backward compatible
(usage sketched below).
Fixes #8992
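A minimal usage sketch, assuming only what the description above states (the setting key, the system property name, and the `Long` default):

```scala
// build.sbt -- bump the version to invalidate every task cache at once;
// the default 0L keeps the cache key hash identical to the old behavior.
Global / cacheVersion := 1L
```

Equivalently, the property route would be something like `sbt -Dsbt.cacheversion=1`, which leaves the build definition untouched.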
* [2.x] refactor: Simplify BuildWideCacheConfiguration and add cacheVersion test
- Replace auxiliary constructors with default parameter values
- Add unit test verifying cacheVersion invalidates the cache
* [2.x] fix: Restore auxiliary constructors for binary compatibility
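For context, a sketch of why the constructors came back (the parameter list here is illustrative, not the real `BuildWideCacheConfiguration` fields): default parameter values compile to synthetic `init$default$N` methods resolved at the call site, so they do not preserve the binary signature of a previously published constructor overload.

```scala
import java.nio.file.Path

// Illustrative shape only: the auxiliary constructor keeps the exact
// single-argument signature that downstream bytecode already links against.
class BuildWideCacheConfiguration(
    val cacheDirectory: Path,
    val verbose: Boolean // hypothetical second parameter
):
  // retained for binary compatibility with the pre-refactor overload
  def this(cacheDirectory: Path) = this(cacheDirectory, false)
end BuildWideCacheConfiguration
```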
* [2.x] test: Improve cacheVersion scripted test and add release note
- Scripted test now verifies cache invalidation via a counter
  that increments only when the task body actually executes (sketched below)
- Add release note documenting the cacheVersion setting
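A sketch of the counter trick under assumed names (not the actual scripted test): the counter file changes only when the task body really runs, so a cache hit leaves it untouched and the test can assert on its value.

```scala
// build.sbt of the scripted test (illustrative)
import java.nio.file.{ Files, Paths }

val counted = taskKey[Unit]("bumps a counter file on every real execution")

counted := {
  val f = Paths.get("counter.txt")
  val n = if (Files.exists(f)) new String(Files.readAllBytes(f)).trim.toInt else 0
  Files.write(f, String.valueOf(n + 1).getBytes) // unchanged on a cache hit
  ()
}
```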
**Problem**
When the remote cache server (e.g. bazel-remote using S3 for storage) reports an AC (Action Cache)
hit but the underlying CAS (Content Addressable Storage) blob is missing or
corrupt, ActionCache.cache propagates the resulting exception (typically
java.io.FileNotFoundException) straight to the sbt task engine without any interception.
This causes a build failure instead of a graceful cache miss.
The three unguarded call sites are:
1. organicTask - syncBlobs after a successful put only caught NoSuchFileException,
missing FileNotFoundException and other IO errors.
2. getWithFailure / readFromSymlink fast-path - syncBlobs inside flatMap with no
exception handling.
3. getWithFailure main branch - both syncBlobs calls and the subsequent IO.read
were completely unguarded.
**Solution**
Guard all three call sites with NonFatal catches (sketched after this list):
- Cache read failures (getWithFailure) return Left(None) which the caller
interprets as a cache miss, triggering organic recomputation.
- Cache write failures (organicTask) are demoted to a debug-level log; the task
result that was already computed is returned successfully.
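Schematically, the guards look like this (the result and logger types are simplifications, not the actual ActionCache signatures):

```scala
import scala.util.control.NonFatal

// Read path: any non-fatal failure while syncing blobs or reading the output
// is downgraded to a cache miss, so the task is recomputed organically.
def guardedRead[A](read: => A): Option[A] =
  try Some(read)
  catch case NonFatal(_) => None // caller treats this like Left(None): a miss

// Write path: the task result is already in hand, so a failed upload is only
// worth a debug line, never a build failure.
def guardedWrite(write: => Unit, debug: String => Unit): Unit =
  try write
  catch case NonFatal(e) => debug(s"remote cache write failed: $e")
```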
Two regression tests are added to ActionCacheTest:
1. Tests the main getWithFailure branch using the default relative-path converter.
2. Tests the readFromSymlink fast-path using an absolute-path converter so the
symlink created on the first run is found by Files.isSymbolicLink on the second.
* test: Migrate FileInfoSpec.scala to verify.BasicTestSuite (#8542)
Migrate FileInfoSpec.scala from ScalaTest's AnyFlatSpec to
verify.BasicTestSuite, following the pattern established by other
test files in the util-cache module.
Changes (migration shape sketched below):
- Replace AnyFlatSpec class with BasicTestSuite object
- Convert 'it should ... in' syntax to 'test(...)' syntax
- Use Scala 3 syntax with colon indentation
- Change === to == for assertions (BasicTestSuite style)
- Add explicit Unit return types for consistency
- Add 'end FileInfoSpec' marker
Fixes #8542
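The resulting shape of such a migration, with a placeholder test body (the real FileInfoSpec assertions differ):

```scala
import verify.BasicTestSuite

// After: an object extending BasicTestSuite, test(...) blocks instead of
// `it should ... in`, == instead of ===, and an explicit end marker.
object FileInfoSpec extends BasicTestSuite:
  test("example assertion") {
    val sum = 1 + 1
    assert(sum == 2)
  }
end FileInfoSpec
```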
Migrate SingletonCacheSpec.scala from ScalaTest's AnyFlatSpec to
verify.BasicTestSuite, following the pattern established by other
test files in the util-cache module.
Changes (the Try pattern is sketched below):
- Replace AnyFlatSpec class with BasicTestSuite object
- Convert 'should ... in' syntax to 'test(...)' syntax
- Use Scala 3 syntax with colon indentation
- Change === to == for assertions (BasicTestSuite style)
- Replace intercept[Exception] with scala.util.Try pattern
- Add 'end SingletonCacheSpec' and 'end ComplexType' markers
Related to the ongoing test migration effort.
Co-authored-by: GlobalStar117 <GlobalStar117@users.noreply.github.com>
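The `intercept[Exception]` replacement mentioned above follows a pattern along these lines (illustrative):

```scala
import scala.util.Try
import verify.BasicTestSuite

object InterceptPatternExample extends BasicTestSuite:
  test("throwing code is detected via Try") {
    // Instead of ScalaTest's intercept[Exception] { ... }, run the code
    // under Try and assert that it failed.
    val result = Try(sys.error("boom"))
    assert(result.isFailure)
  }
end InterceptPatternExample
```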
When a compilation fails with CompileFailed, the failure is now cached
so that subsequent builds with the same inputs don't re-run the failed
compilation. This significantly improves the experience when using BSP
clients like Metals that may trigger many compilations in a row.
The implementation:
- Adds CachedCompileFailure, CachedProblem, and CachedPosition types
to serialize compilation failures
- Modifies ActionCache.cache to catch CompileFailed exceptions and
store them in the cache with exitCode=1
- On cache lookup, checks for cached failures first and re-throws
the cached exception if found
- Fixes DiskActionCacheStore.put to preserve exitCode from request
- Adds unit test to verify cached failure behavior
Fixes #7662
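A self-contained sketch of the mechanism (an in-memory map stands in for the action cache, and a toy exception for xsbti.CompileFailed; the Cached* names mirror the types added by this change):

```scala
import scala.collection.mutable

final case class CachedPosition(sourcePath: String, line: Int)
final case class CachedProblem(severity: String, message: String, position: Option[CachedPosition])
final case class CachedCompileFailure(problems: List[CachedProblem])

final class ToyCompileFailed(val problems: List[CachedProblem])
    extends RuntimeException("compilation failed")

object FailureCache:
  // in the real change this lives in the action cache, stored with exitCode = 1
  private val store = mutable.Map.empty[String, CachedCompileFailure]

  def cached[A](inputHash: String)(compile: => A): A =
    store.get(inputHash) match
      case Some(cached) =>
        // cache hit on a failure: replay it instead of recompiling
        throw new ToyCompileFailed(cached.problems)
      case None =>
        try compile
        catch
          case e: ToyCompileFailed =>
            store(inputHash) = CachedCompileFailure(e.problems) // serialize the failure
            throw e
```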
**Problem**
Currently `syncBlobs` deletes the existing files in the output directory whenever the remote cache kicks in.
**Solution**
1. Refactor `Digest(...)`, adding support for `Digest.apply(Path)` and `Digest.sameDigest(...)`.
2. Use `sameDigest` to compare digests and replace the existing output files only when necessary (see the sketch below).
For the details about this PR, please see the blog post https://eed3si9n.com/sbt-remote-cache/.
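A toy rendering of the two additions and how they gate the copy (the real `sameDigest` signature is elided above as `(...)`, so its arguments here are an assumption):

```scala
import java.nio.file.{ Files, Path, StandardCopyOption }
import java.security.MessageDigest

// Stand-in for sbt's Digest: SHA-256 over the file bytes.
final case class Digest(bytes: List[Byte])

object Digest:
  def apply(path: Path): Digest =
    Digest(MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(path)).toList)
  def sameDigest(path: Path, expected: Digest): Boolean =
    Files.exists(path) && apply(path) == expected

// Replace an output file only when its content actually differs from the blob.
def syncBlob(blob: Path, out: Path): Unit =
  if !Digest.sameDigest(out, Digest(blob)) then
    Files.copy(blob, out, StandardCopyOption.REPLACE_EXISTING)
```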
* Add cache basics
* Refactor Attributed to use StringAttributeMap, which is Map[StringAttributeKey, String]
* Implement disk cache
* Rename Package to Pkg
* Virtualize packageBin
* Use HashedVirtualFileRef for packageBin
* Virtualize compile task
There are just too many instances in which sbt's code relies on
the `lastModified`/`setLastModified` semantics, so instead of moving
to `get`/`setModifiedTime`, we use new IO calls that offer the new
timestamp precision, but retain the old semantics.
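Concretely, the replacement calls are presumably sbt.io.IO's `getModifiedTimeOrZero`/`setModifiedTimeOrZero`, which keep `File.lastModified`'s tolerant behavior on missing files while using the higher-precision NIO timestamps underneath:

```scala
import java.io.File
import sbt.io.IO

val f = new File("build.sbt")
// like File.lastModified: returns 0L when the file does not exist
val mtime: Long = IO.getModifiedTimeOrZero(f)
// like File.setLastModified: tolerates a missing file instead of throwing
IO.setModifiedTimeOrZero(f, mtime + 1000L)
```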
Looks like the reason that util-cache depended on util-collection was to
define the sjson-new formats (HListFormats) for util-collection's HList.
Given that util-collection already depends on sjsonnew, HListFormats can
also be defined in util-collection.
All that was left then was: (a) HListFormatSpec requires
sjsonnewScalaJson, so that was added in test scope; and (b) HListFormats
had to be dropped from sbt.util.CacheImplicits, so HListFormats will have
to be imported and/or mixed in where required downstream. For importing
convenience I defined a companion object.
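Downstream usage then becomes an explicit import or mix-in; assuming HListFormats is a trait with a companion living next to HList in `sbt.internal.util`, something like:

```scala
// import the formats where needed (this is what the companion object enables)
import sbt.internal.util.HListFormats._

// ...or mix them into a local protocol object together with the basic formats:
object MyProtocol extends sbt.internal.util.HListFormats with sjsonnew.BasicJsonProtocol
```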
sbt/util#45 implemented caching using sjson-new. Many of the functions now take a `CacheStore`, which abstracts the caching capability.
sbt/sbt#3109 demonstrates that setting up a CacheStore requires boilerplate involving concepts introduced in sbt 1.
This change adds back overloads that take a File, on the assumption that most of the time we want the standard JSON converter.
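A sketch of the restored convenience, assuming sbt 1.x's `sbt.util.CacheStore.apply(File)` entry point (which defaults to the standard sjson-new JSON converter):

```scala
import java.io.File
import sbt.util.CacheStore

// one call instead of wiring a converter and a store by hand
val store: CacheStore = CacheStore(new File("target/my-task-cache"))
```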