Problem
-------
Starting with Scala 2.13.12, Scala 2 has in-sourced the compiler bridge
implementation, which will hopefully be kept more up to date than the
ones in Zinc.
Solution
--------
This switches to using the pre-compiled compiler bridge for >=2.13.12.
In addition to ExecuteProgress, this new interface allows builds and plugins to receive events when commands start and finish, including the State before and after each command. It also makes cancellation visible to clients by making the Cancelled type top-level.
Fixes https://github.com/sbt/sbt/issues/7327
**Problem**
In builds with mixed Scala patch versions (like scalameta),
it's possible for a core subproject to be set to the latest 2.12.x,
while the compiler plugin component is cross published to 2.12.0 etc.
`++ 2.12.0` in this case does not work, since sbt 1.7.x onwards requires
the queried Scala version to be listed in `crossScalaVersions`.
**Solution**
This implements an sbt 1.6.x-like fallback mechanism,
but instead of using the queried version (e.g. 2.12.0) it sets
the Scala version to one of the listed versions that is binary compatible.
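For example (a hypothetical subproject with made-up versions), given:
```scala
// hypothetical build; versions are for illustration only
lazy val core = project
  .settings(crossScalaVersions := Seq("2.12.17", "2.13.10"))
```
running `++ 2.12.0 compile` would now fall back to 2.12.17 for `core`, the listed version that is binary compatible with the queried one, instead of failing.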
- remove unused type params
- use `withFilter` if possible
- use `collectFirst` instead of `collect` and `headOption`
- use `length` instead of `size` if `Array` or `String`
- use `foreach` instead of `map`
Problem
-------
`sbt new` (`sbt init`) hardcodes the templates, which I think is ok,
but without changing much, we can make it extensible.
Solution
--------
This adds two new keys `templateDescriptions` and `templateRunLocal`,
which can customize the behavior for in-house usage etc.
Problem
-------
The sbt init menu doesn't pick the right template for entries in the latter half of the list.
Solution
--------
This fixes the mapping between the position and the letter.
If the terminal supports ANSI control sequences,
the template list is displayed interactively:
the focused template is rendered in reverse video,
and the arrow keys can be used to move the focus up/down.
The sbt-reproducible-build fails if two artifacts point to the same file.
When packaging the artifacts of an sbt plugin,
we copy each file to avoid this issue.
**Problem**
You want to get started with sbt, but you don't know
which template to use.
**Solution**
This implements an interactive menu for the `sbt new` command:
when invoked without an argument, it lists template candidates.
The first option is `scala/toolkit.local`, which locally creates
an sbt build without calling out to Giter8 (GitHub).
Expose what the incremental compiler is doing behind the scenes. The RunProfiler interface has been part of Zinc for a while, but this allows the build itself, or an sbt plugin, to hook in their own implementation.
We expose a list of such listeners to avoid plugins stepping on each other by replacing an existing listener.
This key was added in 4061dabf4d but it is only available to sbt itself. Since ExternalHooks is a Java interface that has been defined in Zinc for a while and is fairly stable, I think this should be safe to do.
My main motivation is to allow installing an InvalidationProfiler from an sbt plugin, thus gaining access to Zinc recompilation decisions. See related PR https://github.com/sbt/zinc/pull/1181
Artifactory rejects the legacy artifacts of sbt plugins.
It is now possible to publish to Artifactory
by turning `sbtPluginPublishLegacyMavenStyle` off.
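For example, an sbt plugin build can opt out of the legacy publication like this:
```scala
// disable the legacy Maven-style publication that Artifactory rejects
sbtPluginPublishLegacyMavenStyle := false
```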
Problem
-------
In sbt 1, platform cross building is implemented in user-land
using the `%%%` operator, which cleverly handles both Scala cross building
and appending a platform suffix like `sjs1`.
However, the symbolic `%%%` is confusing in general, and hard to explain.
Solution
--------
In sbt 2, we should subsume the idea of platform cross building,
so `%%` can act as the current `%%%` operator.
This adds a new setting called `platform`, which defaults to
`Platform.jvm`.
When a subproject sets it to `Platform.sjs1`, `ModuleID`s defined using
the `%%` operator will inject the platform suffix `_sjs1` **in addition**
to the Scala binary suffix `_2.13` etc.
Note: Explicit JVM dependencies will now require `.platform(Platform.jvm)`.
See https://eed3si9n.com/simplifying-sbt-with-common-settings/
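A rough sketch of how this could look in an sbt 2 build (project and module coordinates are made up for illustration):
```scala
// prospective sbt 2 sketch; coordinates are hypothetical
lazy val core = (project in file("core"))
  .settings(
    platform := Platform.sjs1,
    // with platform set, %% appends _sjs1 in addition to the Scala binary suffix
    libraryDependencies += "com.example" %% "some-library" % "0.1.0",
    // an explicitly JVM-only dependency opts back out via .platform(Platform.jvm)
    libraryDependencies += ("com.example" %% "jvm-only-library" % "0.2.0").platform(Platform.jvm)
  )
```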
Problem
-------
The behavior of bare settings is confusing in a multi-project build.
This is partly due to the fact that to use `ThisBuild` scoping
the build user needs to be aware of the task implementation,
and know if the task is already defined at project level.
Solution
--------
This changes the interpretation of the bare settings to be common
settings, which work similarly to the way `ThisBuild` behaves in sbt 1.x,
but since this would be a simple append at project level, it should
work for any tasks or settings.
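A minimal sketch of what this means for a build (names made up for illustration):
```scala
// sketch: bare settings at the top of build.sbt become common settings for every subproject
organization := "com.example"
scalaVersion := "2.13.12"

lazy val core = project // picks up the common settings above
lazy val util = project // likewise, without any ThisBuild scoping
```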
For an sbt plugin, we publish two POM files, the legacy one, and the
new Maven-compatible one. The name of the new POM file contains the sbt
cross-version `_2.12_1.0`. The format of the new POM file is also slightly
different, because we append the sbt cross-version to all artifactIds of
sbt plugins. Hence Maven can resolve the new sbt plugin POM and its
dependencies.
When resolving an sbt plugin, we first try to resolve the new Maven POM,
and if that fails we fall back to the legacy one. When parsing the new POM
format, we remove the sbt cross-version from all artifact IDs so that
there is no mismatch between the old and new formats of dependencies.
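As an illustration (the plugin coordinates below are made up), the same plugin dependency resolves against the two layouts roughly like this:
```scala
// hypothetical plugin declaration in project/plugins.sbt
addSbtPlugin("com.example" % "sbt-something" % "1.0.0")
// legacy layout:           artifactId = sbt-something, under scala_2.12/sbt_1.0 path segments
// Maven-compatible layout: artifactId = sbt-something_2.12_1.0, resolvable by plain Maven
```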
Java diagnostics don't have a pointer but we should report them.
Copied implementation from Bloop to translate the position of an
xsbti.Problem to a BSP range.
Problem
-------
Given a `val root`, currently both `root` and the synthetic root get loaded.
This might be caused by build.sbt being virtualized, and no longer
matching the build root directory.
Solution
--------
For now, comparing the canonical paths seems to fix the issue.
Fixes https://github.com/sbt/sbt/issues/7118
Problem
-------
sbtn 1.8.1 was built using ubuntu-latest, which meant picking up a newer
glibc.
Solution
--------
This downgrades the Ubuntu machine used to build sbtn.
I believe it was just an oversight that this is not marked as true, since
sbt can handle `debugSession/start`. This change just ensures that
during the initialization process sbt reports itself as a `debugProvider` for
the same languages as `runProvider` and `testProvider`. It also
correctly marks the build targets as `canDebug`, unless they are sbt
targets.
Problem
-------
Fixes https://github.com/sbt/sbt/issues/7013
In a shared environment, multiple users will try to create the
`/tmp/.sbt/` directory, and fail.
Solution
--------
Append a deterministic hash like `/tmp/.sbt1234ABCD` based
on the user home, so the same directory is used for a given
user, but each user gets a unique runtime directory.
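A minimal sketch of the idea (illustrative only, not the exact implementation):
```scala
// derive a per-user runtime directory name deterministically from the user home
import java.nio.file.{Path, Paths}

def userRuntimeDir(tmp: String = "/tmp"): Path = {
  val home = sys.props("user.home")
  val hash = f"${home.hashCode}%08X" // same value every time for a given home directory
  Paths.get(tmp, s".sbt$hash")
}
```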
This change is a continuation of the work that was done in
https://github.com/sbt/sbt/pull/6874 to allow the Scala 3 compiler to
forward the `diagnosticCode` of an error as well as the other normal
things. This change in dotty was merged in
https://github.com/lampepfl/dotty/pull/15565 and also backported so it
will be in the 3.2.x series release. This change captures the
`diagnosticCode` and forwards it on via BSP so that tools like Metals
can use this code.
You can see this in the BSP trace now for a diagnostic:
For example with some code that contains the following:
```scala
val x: Int = "hi"
```
You'll see the code in the BSP diagnostic:
```
"diagnostics": [
  {
    "range": {
      "start": {
        "line": 9,
        "character": 15
      },
      "end": {
        "line": 9,
        "character": 19
      }
    },
    "severity": 1,
    "code": "7",
    "source": "sbt",
    "message": "Found: (\u001b[32m\"hi\"\u001b[0m : String)\nRequired: Int\n\nThe following import might make progress towards fixing the problem:\n\n import sourcecode.Text.generate\n\n"
  }
]
```
Co-authored-by: Adrien Piquerez <adrien.piquerez@gmail.com>
Refs: https://github.com/lampepfl/dotty/issues/14904
If we use the ProxyTerminal in the background jobs, the logs
would be spread across different terminals, switching from active
client to active client. We want the logs to stick
to the client that started the job.
A new context is created and closed for each state of the MainLoop.
But the context of the backgroundJob must stay alive.
So we use a context that is owned by the BackgroundJobService.
It creates a new logger for each background job and cleans it up when
the job stops.
Problem
-------
Since the sbt-doge merger, `++ <sv> <command1>` has used binary compatibility
as a test to select subprojects, but this causes surprising situations like
sbt/sbt#6915, and it blurs the responsibility of the YAML file and the build
file, as the version specified in the YAML file can override the Scala
version test on a local laptop.
Solution
--------
This removes the compatibility check (backward-only or otherwise),
and requires that `<sv>` match one of `crossScalaVersions` using the new
Semantic Version selector pattern.
Suggested by @SethTisue in https://github.com/sbt/sbt/pull/6894#issuecomment-1168042209
This will show multiple lines when different versions are selected for different subprojects.
When no subprojects have a matching Scala version you will get a
'Switch failed' exception anyway, so in that case there is no
change in behavior.
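For example, assuming a subproject lists some 2.13.x version in `crossScalaVersions`, the selector form can be used like this:
```
sbt> ++ 2.13.x compile
```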
Problem
-------
There's a bug in ipcsocket cleanup logic that effectively
tries to wipe out /tmp.
Workaround
----------
We should fix the underlying bug, but we can start by
explicitly configuring the temp directories ipcsocket uses.
Ref https://github.com/sbt/sbt/issues/6931
This removes OkHttp dependency.
This also removes an experimental feature to customize HTTP for Ivy called
CustomHttp we added in sbt 1.3.0.
Since LM was switched to Coursier in 1.3.0, I don't think we advertised
CustomHttp.
At some point the watchOnTermination callback stopped working. I'm not
exactly sure how or why that happened but it is fairly straightforward
to restore. The one tricky thing was that the callback has the signature
(Watch.Action, _, _, _) => State, which requires propagating the action
to the failWatch command. The easiest way to do this was to add a
mutable field to the ContinuousState. This is rather ugly and reflects
some poor design choices but a more comprehensive refactor is out of
the scope of this fix.
This commit adds a scripted test that ensures that the callback is
invoked both in the successful and unsuccessful watch cases. In each
case the callback deletes a file and we ensure that the file is indeed
absent after the watch exits.
Before
------
```
remote cache not found for 0.0.0-7c40144bd1c774e6
```

After
-----
```
remote cache artifact not found for org.gontard:sbt-test:0.0.0-7c40144bd1c774e6 Some(cached-compile)
```
Fixes #6720
Ref #5994
Problem
-------
Sometimes the compiler returns a fake position such as `<macro>`.
This causes an InvalidPathException on Windows if we try
to convert it into an NIO path.
Solution
--------
Look for a less-than sign in the VirtualFileRef and skip those.
Since BSP requires the diagnostic info to be associated with
files, we probably can't do much.
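A minimal sketch of the skip (illustrative, not the exact implementation):
```scala
// any source id containing '<' (e.g. "<macro>" or "<no file>") is not converted to an NIO path
def isFakeSource(sourceId: String): Boolean = sourceId.contains("<")
```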
Fixes #6592
Problem
-------
On Heroku there's a timeout.
Solution
--------
This seems to be coming from supershell closing the executor.
Extend the timeout to 30s.
Migrates TreeView.scala from scala.util.parsing.json to Contraband,
because the former is now deprecated.
The TreeView logic is used in the dependencyBrowseTree task.
`systemOut` notifications are buffered so that they are sent at most
once every 20 milliseconds. Other RPC messages are not buffered.
As a consequence, some RPC messages can pass in front of some
systemOut notifications.
That's why `sbt --client run` can exit before it receives all the logs.
In general I think it is safer to maintain the order of all messages.
To do so we can force the flush of systemOut before each RPC message.
Normally scripted tests are forked using the JVM that is running sbt.
If `scripted / javaHome` is set, they are forked using it.
```
scripted / javaHome := Some(file("/path/to/jdk-x.y.z"))
```
Or use `java++` command before scripted.
```
sbt> java++ 11!
sbt> scripted
```
Fixes https://github.com/sbt/sbt/issues/6558
Problem
-------
sbt uses the SecurityManager feature of the JDK to trap `sys.exit` calls during
`run`-like tasks, since we emulate `run` and `console` as function calls.
JDK 17 deprecated SecurityManager and it prints warnings.
Solution
--------
About 10 years ago, `exit` was a convenient way of quitting both the Scala
REPL and the sbt shell. Scala 2.11 broke this by removing `Predef.exit`.
We still need to worry about `run` potentially calling `sys.exit`,
but that can be handled using the fork feature.
In the long run, it is probably better to be JDK 17 compatible.
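For example, the standard fork setting covers the `sys.exit` case:
```scala
// if application code calls sys.exit, forking keeps it from taking down the sbt JVM
Compile / run / fork := true
```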
Currently crossSbtVersions incorrectly generates a warning that it
is an unused setting (see https://github.com/sbt/sbt/pull/5153). This
PR fixes this by adding it to the list of excluded lint keys.
Fixes #6571.
Ref #6592
When there's an issue like a timeout, currently it throws a
ClassCastException because I made some assumptions about the underlying
error. This removes the unnecessary casting.
This fixes the closed channel exception generated when running
`sbt -timings help`. This bug was introduced in sbt 1.4, where a wrapper
(Terminal.scala) was created around the terminal. This meant that the
shutdown hook which had been used previously was no longer working.
This has been fixed by avoiding the use of the JVM shutdown hook and
instead manually running the thunk at a place in the code where
the program is still able to respond.
Requests of the form buildTarget/* often take a sequence of build
targets as a parameter. So far, if there is an error on a single build
target, the entire request fails.
This is not ideal, because the client wants the results of the other
build targets anyway:
For example:
- workspace/buildTargets: if one build target has an invalid Scala
version we still want to import the other ones
- buildTarget/scalacOptions: if a dependency cannot be resolved we still
want to import the build targets that do not depend on it
- buildTarget/scalaMainClasses: if buildTarget does not compile we still
want the main classes of the other targets
...
The change is to respond to BSP requests with the successful
build targets and to ignore the failed ones.
This is how it has been implemented in Bloop since before BSP support in sbt.
In https://github.com/build-server-protocol/build-server-protocol/issues/204,
I made a proposal to also add the failed build targets in the response.
Since build.sbt is compiled/evaluated in `sbt.compiler.Eval`,
this commit introduces a `BuildServerEvalReporter` to redirect
the compiler errors to the BSP clients.
A new `finalReport` method is added in the new `EvalReporter` base class
to reset the old diagnostics.
- Restore the old type of the `bspWorkspace` key for backwards compat.
  Instead, introduce `bspFullWorkspace` that includes the
  sbt targets
- Log a warning if the client requests, e.g. `scalaMainClasses`
  for an sbt target
- Refactor the code that creates the sbt build targets so it
  doesn't depend on `bspFullWorkspace`.
- Add a setting `bspSbtEnabled` to let the user opt out of
  sbt target export (e.g. to compensate for a client that does
  not yet support them); see the example below
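A hedged example of the opt-out (the exact scoping is an assumption):
```scala
// disable export of sbt build targets for clients that do not support them yet
bspSbtEnabled := false
```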
What is the problem?
When using remote caching, the resource files are not tracked so if they
have changed, pullRemoteCache will deliver both the old resource file
as well as the changed one.
This is a problem, because it's not the behaviour that our users will
expect and it's not in keeping with the contract of this feature.
Why is this happening?
Zinc, sbt's incremental compiler, keeps track of changes that have
been made. It keeps this in what is called the Analysis file.
However, resource files are not tracked in the Analysis file, so
remote caching does not invalidate the old resource file in favor
of the latest version.
What is the solution?
pullRemoteCache deletes all of the resource files. After this,
copyResources is called by packageBin, which puts the latest
version of the resources back.
If the user runs foo/runMain in a project with multiple main classes,
sbt will still warn the user about there being multiple main classes,
even though this is a pointless warning since runMain requires the user
to name a main class anyway. The run task is also excluded, since
by default it prompts the user with a main class selector. The previous
logic for doing this filtering was flawed because it only looked at the
first command in a sequence and couldn't handle the foo/runMain case,
since it was looking for an exact match with `run` or `runMain`. This
commit relaxes those restrictions to look at all of the strings in the
command, as well as splitting the string to check if the last part of the
key ends in run or runMain. This logic could theoretically be incorrect
if the user wrote an input task that was expecting run or runMain as
user input, but even in that case the only consequence would be that they
wouldn't see the multiple main class warning, which generally isn't all
that helpful unless you are packaging a jar that expects there to be only
one main class. It seems unlikely that the user would be running a
custom input task that is both packaging a jar and expecting run or
runMain as input strings.
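A rough sketch of the relaxed check (illustrative, not the exact sbt code):
```scala
// inspect every command string and treat "foo/runMain ..." the same as "runMain ..."
def isRunLike(commands: Seq[String]): Boolean =
  commands.exists { cmd =>
    cmd.trim.split("\\s+").headOption.exists { head =>
      val lastKeyPart = head.split('/').last
      lastKeyPart == "run" || lastKeyPart == "runMain"
    }
  }
```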
Fixes #6430.
What is the problem?
As detailed in #6430, the @nowarn annotation was not suppressing
warnings, even after the first attempt to fix this in PR #6431.
That first PR fixed the problem for projects using
enablePlugins(SbtPlugin), but not for those using sbtPlugin := true.
Why is this a valuable problem to solve?
The annotation was not working as users would expect.
What is the solution?
I have moved the scalacOptions change from SbtPlugin.projectSettings
to the scalacOptions in the JvmPlugin settings.
Has this been tested?
Yes, a test has been added. Also, this branch was tested successfully
on the twinagle repo (https://github.com/soundcloud/twinagle/pull/224).
The problem was that -client behaved differently from --client, which
makes for a confusing user experience. So, this change makes
them the same and renames -client to --java-client. The value
of this is that hopefully -client and --client being the same
feels more logical to users.
Fixes #2835
Somehow the fix in #4033 stopped working due to various initialization changes.
Here's a bit more aggressive rebasing of the base directory
that should fix the `project` and `target` directories created during sbt
new.
Add a `testReportsDirectory` setting to allow the output directory for
JUnitXmlTestsListener to be configured.
Add `testReportSettings` which provides default values:
- by default this uses the build configuration name as a prefix so
`target/test-reports` for `Test` config, but `target/it-reports`
for `IntegrationTest` (previously this was hardcoded to always
use `target/test-reports`). To override this set e.g.
`Test / testReportsDirectory := target.value / "my-custom-dir"`
- the `JUnitXmlTestsListener` is now only attached to the `Test`
and `IntegrationTest` configs by default (previously it was added
to the global configuration object). Any configs which inherit
from one of these will continue to have the listener attached;
but completely custom configurations will need to re-add with:
`project.settings(testReportSettings)`
Fixes #2853
Fixes https://github.com/sbt/sbt/issues/5206
Problem
--------
Coursier uses directories-jvm to determine its default cache directory.
Currently directories-jvm shells out to Powershell to call the [Known Folders API](https://docs.microsoft.com/en-us/windows/win32/shell/knownfolderid), which doesn't work in various environments, and instead of an error, it apparently returns `null/Coursier/cache` as the directory name.
Solution
--------
With due respect to the heroic effort directories-jvm is making to comply with the directory standards on all operating systems, including that of Microsoft, I don't think the majority of sbt users care exactly where that directory is, as long as it is well-documented and calculated in a fast and predictable way.
Instead of shelling out to Powershell, or using JNI, to hit the Known Folders API, I propose we first look at the `LOCALAPPDATA` environment variable. When it is not found, it will fall back to `$HOME/AppData/Local`. Per the discussion in https://github.com/dirs-dev/directories-jvm/issues/43, the `LOCALAPPDATA` environment variable may NOT represent the one-true Known Folders API value of the AppData directory in case the user happened to have set the `LOCALAPPDATA` environment variable themselves. For the purpose of picking a directory for the Coursier cache, I don't find that to be a problem, because it will be faster, more reliable, and predictable.
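A minimal sketch of the proposed lookup order (illustrative only):
```scala
// prefer the LOCALAPPDATA environment variable, then fall back to $HOME/AppData/Local
import java.io.File

def windowsLocalAppData: File =
  sys.env.get("LOCALAPPDATA") match {
    case Some(dir) => new File(dir)
    case None      => new File(sys.props("user.home"), "AppData" + File.separator + "Local")
  }
```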
The processing log is sent when a command issued by a request is being
processed, if the request has not yet been responded to. In particular,
the processing log of sbtReportResult is filtered out if the client has
already received a response.
The processing log severity is the lowest so that it can be ignored by
the BSP client.
The inputFileChanges, inputFiles and outputFiles macros all used the
now-deprecated `in` syntax, which causes warnings for builds that use these
APIs. SlashSyntax doesn't work naively in these macros, so it was easier
to just reimplement the logic in this file.
Fixes #6330
Fixes https://github.com/sbt/sbt/issues/5934
In #5886 adpi2 reported that `Test / scalacOptions` could have unwanted duplicated flags from `Compile / scalacOptions`. So #5887 added
```diff
+ // remove duplicated semanticdbOptions
+ scalacOptions --= (Compile / semanticdbOptions).value
```
Notice that it's subtracting `(Compile / semanticdbOptions).value` regardless of the `semanticdbEnabled` value. If `semanticdbEnabled` is set to true I am guessing that it would all work out, but when it's false, it's just subtracting `-Yrangepos`.
This fixes that by checking for `semanticdbEnabled`.
Second take at https://github.com/sbt/sbt/issues/5886, which fixed the
problem for Test specifically but not for custom configurations.
* Any child of Compile (like Custom in the scripted test) had to use
testSettings, whether they were related to testing or not
* Custom configurations with grandparents (like SystemTest in the
scripted test) would get duplicated scalacOptions no matter what they used
The Scala compiler changed the implementation of the reporter in 2.12.13 such that overriding `info0` no longer increments the error count in the delegating reporter.
See https://github.com/scala/bug/issues/12317 for details.
Fixes https://github.com/sbt/sbt/issues/6235
In sbt 1.4.0 (https://github.com/sbt/sbt/pull/5344) we started wiping out the timestamps in JAR
to make the builds more repeatable.
This had an unintended consequence of breaking Play's last-modified response header (https://github.com/playframework/playframework/issues/10572).
This adds a global setting called `packageTimestamp`, which is
initialized as follows:
```scala
packageTimestamp :== Package.defaultTimestamp,
```
Here the `Package.defaultTimestamp` would pick either the value from the
`SOURCE_DATE_EPOCH` environment variable or 2010-01-01.
To opt out of this default, the user can use:
```scala
ThisBuild / packageTimestamp := Package.keepTimestamps
// or
ThisBuild / packageTimestamp := Package.gitCommitDateTimestamp
```
Before (sbt 1.4.6)
------------------
```
$ ll example
total 32
-rw-r--r-- 1 eed3si9n wheel 901 Jan 1 1970 Greeting.class
-rw-r--r-- 1 eed3si9n wheel 3079 Jan 1 1970 Hello$.class
-rw-r--r-- 1 eed3si9n wheel 738 Jan 1 1970 Hello$delayedInit$body.class
-rw-r--r-- 1 eed3si9n wheel 875 Jan 1 1970 Hello.class
```
After (using Package.gitCommitDateTimestamp)
--------------------------------------------
```
$ unzip -v target/scala-2.13/root_2.13-0.1.0-SNAPSHOT.jar
Archive: target/scala-2.13/root_2.13-0.1.0-SNAPSHOT.jar
Length Method Size Cmpr Date Time CRC-32 Name
-------- ------ ------- ---- ---------- ----- -------- ----
288 Defl:N 136 53% 01-25-2021 03:09 888682a9 META-INF/MANIFEST.MF
0 Stored 0 0% 01-25-2021 03:09 00000000 example/
901 Defl:N 601 33% 01-25-2021 03:09 3543f377 example/Greeting.class
3079 Defl:N 1279 59% 01-25-2021 03:09 848b4386 example/Hello$.class
738 Defl:N 464 37% 01-25-2021 03:09 571f4288 example/Hello$delayedInit$body.class
875 Defl:N 594 32% 01-25-2021 03:09 ad295259 example/Hello.class
-------- ------- --- -------
5881 3074 48% 6 files
```
Ref https://github.com/scala/scala-parallel-collections/issues/22
Parallel collections were split off without a source-compatible library, so apparently we need to roll our own compat hack, which causes an unused-import warning, so it needs to be paired with silencer.
The bytecode generated by the 2.13 compiler contains scala.runtime.BoxedUnit in the constant pool. To avoid referencing scala-library, port XMainConfiguration to Java.
Contrary to Scala 2, Scala 3 compiler bridges are tied to the compiler
version. There is one compiler bridge for each compiler version.
The Scala 3 compiler bridges are pure Java, so they are binary compatible with
all Scala versions. Consequently, we can fetch the binary module directly
without using the ZincCompilerBridgeProvider.
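As an illustration (the version shown is arbitrary), the bridge is an ordinary pre-compiled module per compiler version:
```scala
// fetchable like any other binary dependency
libraryDependencies += "org.scala-lang" % "scala3-sbt-bridge" % "3.1.0"
```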
Fixes https://github.com/sbt/sbt/issues/5976
Ref https://eed3si9n.com/enforcing-semver-with-sbt-strict-update
This removes the guess-based EvictionWarning, and runs EvictionError instead.
EvictionError uses the `ThisBuild / versionScheme` information supplied by the library authors in addition to `libraryDependencySchemes` that the build users could provide:
```scala
ThisBuild / libraryDependencySchemes += "com.example" %% "name" % "early-semver"
```
As the version scheme, "early-semver", "semver-spec", "pvp", "strict", or "always" may be used.
Here's an example of `update` failure:
```
[error] * org.typelevel:cats-effect_2.13:3.0.0-M4 (early-semver) is selected over {2.2.0, 2.0.0, 2.0.0, 2.2.0}
[error] +- com.example:use2_2.13:0.1.0-SNAPSHOT (depends on 3.0.0-M4)
[error] +- org.http4s:http4s-core_2.13:0.21.11 (depends on 2.2.0)
[error] +- io.chrisdavenport:vault_2.13:2.0.0 (depends on 2.0.0)
[error] +- io.chrisdavenport:unique_2.13:2.0.0 (depends on 2.0.0)
[error] +- co.fs2:fs2-core_2.13:2.4.5 (depends on 2.2.0)
[error]
[error]
[error] this can be overridden using libraryDependencySchemes or evictionErrorLevel
```
This catches the violation of the cats-effect_2.13:3.0.0-M4 version scheme (early-semver) without the user setting additional overrides. If the user wants to opt out of this, they can do so per module:
```scala
ThisBuild / libraryDependencySchemes += "org.typelevel" %% "cats-effect" % "always"
```
or globally as:
```
ThisBuild / evictionErrorLevel := Level.Info
```
Made the bspConfig task dependent on the bspConfig value.
Changed the bspConfig setting to use an AttributeKey so we can use it in the server as well.
The waitWatch command is very similar to shell in that it should
override the onFailure command to be itself. It also enqueues itself to
the remaining commands whenever it reads a new command, which made it
unnecessary to append waitWatch to the runCommand in Continuous.
When a user returns to the shell with 's' in recent versions of sbt, the
prompt is not initially displayed. This ends up being because MainLoop
was incorrectly setting the terminal prompt to Prompt.Watch when it
exited watch. I realized in debugging the issue that it didn't make
sense to restort the terminal prompt to the initial value before task
evaluation. By removing that logic, the 's' option option started
working correctly again.
There isn't yet a version of JNA available that works with the new
Apple silicon using arm64. To work around this, we can use the JNI
implementation by default on arm64 Macs. If the user wants to force the
JNI implementation for any supported platform, they can opt in with the
`sbt.ipcsocket.jni` system property and/or by setting the serverUseJni
setting.
When the sbt server is running a task, it presents all connected clients
with a message telling them that they can cancel the running task.
Unfortunately, this often didn't work and the task would keep running
after cancel was entered. The reason for this was that the exec id
passed in to NetworkChannel did not necessarily match the exec id of the
running task. Because cancel in this case is not really exec specific,
this commit adds a flag to NetworkChannel.cancel that forces it to
cancel the running task regardless of what execID is passed in.
In https://github.com/sbt/sbt/pull/5981 I tried to work around the spurious post-macro "a pure expression does nothing" warning (https://github.com/scala/bug/issues/12112) by trying to remove some pure-looking expressions from the tree.
This quickly backfired when it was reported that sbt 1.4.3 was not evaluating some code. This backs out the macro-level manipulation, and instead tries to silence the warning at the reporter level. This feels safer, and it seems to work just as well.
Fixes https://github.com/sbt/sbt/issues/6102
https://github.com/sbt/sbt/pull/6026 changed the implementation of remote cache to NOT use dependency resolution (Coursier), and directly use the Ivy resolver for efficiency. This was good, but when I made the change, I changed the cache directory to be `crossTarget.value / "remote-cache"`. This was ok for local testing purposes, but not great for real usage, since we don't want the cache to be wiped out either on CI machines or on a local laptop.
This adds a new Global key called `localCacheDirectory`. Similar to the Coursier cache, this is meant to be shared across all builds running on a machine. Also similar to the Coursier cache, this will try to follow the operating-system-specific caching directory.
### localCacheDirectory location
- Environment variable: `SBT_LOCAL_CACHE`
- System property: `sbt.global.localcache`
- Windows: %LOCALAPPDATA%\sbt\v1
- macOS: $HOME/Library/Caches/sbt/v1
- Linux: $HOME/.cache/sbt/v1
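A hedged example of overriding the default location from the build (the key comes from this change; the scoping shown is an assumption):
```scala
// point the shared local cache at an explicit directory
Global / localCacheDirectory := file(sys.props("user.home")) / ".cache" / "sbt" / "v1"
```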
In #6091, we updated the ScriptedPlugin to set scriptedBatchExecution :=
true for all 1.x versions but not 0.13. This commit further restricts
the setting so that it is only set for sbt >= 1.4, which seems necessary
based on the comments in #6094.
When using the launcher's classpath for the metabuild, the
scala-compiler jar can be missing. This is because the managedJars
method only returns the scala-library jar and not the rest of the Scala
instance. To fix this, we can always prepend the Scala instance jars to
the classpath.
In order to simulate the issue in scripted, I had to manually remove the
scala-compiler.jar from the scripted classpath or else the scripted test
that I added doesn't actually do anything because the scala-compiler.jar
would end up on the app.provider.mainClasspath.
Fixes #4452
A periodic stacktrace showed that scripted tests were still hanging in ci
trying to shutdown the background job service (I had previously thought
that I'd fixed that in 16bef0cfc8). It
appears that there is a logical bug that prevents some jobs from being
removed from the jobSet even though they have finished. If that happens,
the shutdown will never exit. That is highly undesirable and can be
avoided by adding a timeout and also only trying to shutdown the job if
it is actually running.
I discovered that the metals bsp implementation worked very badly with
continuous builds. The problem was that metals is able to trigger a bsp
compile slightly before the continuous build would trigger. This would
cause the ui to get into a bad state. The worst case was that it would
actually cause sbt (or the thin client) to exit. A less catastrophic
issue was that it was possible for the wrong count to be printed by the
continuous message.
This commit fixes the issue by more carefully managing the prompt state
and only resetting the ui when the prompt is not in the Prompt.Watch
state.
If the sbt server is launched by the remote client, it should not have a
console ui thread because there is no way to even feed input to it once
the server has launched. Having the ui thread can cause the server to
exit unexpectedly if an EOF is read from the console input stream.
Network client already supports the -bsp command (since
65ab7c94d0). This commit reworks the
BspClient.run method so that it delegates to the NetworkClient. The
advantage to doing it this way is that improvements to starting up the
sbt server by the thin client will automatically propagate to the -bsp
command. The way that it is implemented, all of the output generated
during server startup will be redirected to System.err which is useful
for debugging without messing up the bsp protocol, which relies on only
bsp messages being written to System.out.
The boot server socket was not working correctly when the sbt server was
started by the thin client. This was because it is necessary for us to
create a ConsoleTerminal in order for System.out and System.err to be
properly forwarded to the clients connected over the boot server socket.
As a result, if you started a server instance of sbt with the thin
client, you wouldn't see any output until you connected to the server.
The fix is to just make sure that we create a console terminal if sbt is
run as a subprocess.
When a user enters shutdown in the thin client console, it only exits
the thin client, it does not actually shutdown sbt. Running `sbtn
shutdown` did work to shutdown the server, however. It turned out that
this was because there was special handling for shutdown when processed
through jline. We would enqueue the shutdown command and also close the
client connection. Closing the client connection though removed all of
the enqueued commands for the client, which included the shutdown
command. To fix this, we just make sure that we don't remove the
shutdown command when clearing the client commands.
We no longer need to use the forked version of jline because they have
merged in our required changes. The latest version of jline does upgrade
jansi, however, and some of the apis we were relying on for windows were
removed so they had to be manually implemented. I verified that console
input still worked on my windows vm after this change.
The launcher embeds a fixed version of jansi above the rest of the
classpath on windows. This causes problems for the scala 2.12 console
because it tries to load methods that don't exist from the old jansi
jar. This can be fixed by excluding all jansi classes from the top
loader.
We also need to exclude jansi classes in the scala instance top class
loader to make the 2.10 console work because scala 2.10 uses a shaded
jline that requires a very old jansi version. Due to the shading, the
thin client doesn't work with the 2.10 console.
On terminals with virtual io disabled, we'd spin up a thread for each
watch iteration that performed a blocking read from the terminal input
stream. This thread could not be joined which would cause the triggered
execution to be delayed by 1 second while sbt blocked trying to join
that thread. It also meant that input probably didn't work correctly
since the user would end up with many threads polling from system in.
The fix to this problem is to poll the terminal input stream if it is
unsafe to do a blocking read, which is the case for dumb terminals or if
virtual io is disabled.
Ref https://github.com/sbt/sbt/pull/4443
Fixes https://github.com/sbt/sbt/issues/5750
In #4443 I implemented an optimization where the metabuild would no longer re-resolve numerous sbt artifacts each time, and instead use whatever JARs the launcher provides. At the time, this technique didn't work for Coursier, so I put in some workarounds for it. Now that Coursier's resolution has improved, it seems like the workaround is actually causing more harm. This removes the bandaid, and local testing shows that it seems to be working.
For instance, we no longer need to put in `ThisBuild / useCoursier := false` in sbt/sbt's `project/plugins.sbt`.
* Refactor so as to be testable
* Queue stores the _beginning_ timestamp of each GC time delta
* Message states the correct time over which the GC time was recorded
* Add heap stats from java.lang.Runtime to the message
sbt itself effectively runs its scripted test with
scriptedBatchExecution true and scriptedParallelInstances 1. The
performance is much better when this works. This can cause issues, see
https://github.com/sbt/sbt/issues/6042, but we inadvertently made this
behavior the default in 1.4.0 and it took about a month before #6042 was
reported so I think most users would benefit from this default.
If there are two sbt instances and one of them is running a server, the
other instance is presently prevented from ever starting a server. If an
sbt instance is unable to start a local server because of the presence
of another server, we can monitor the active.json file for changes and,
if it is deleted, we can then try again to start a new server instance.
Refactor remote caching to be scoped to configuration.
In addition, this avoids the use of the dependency resolver (since I'm not resolving anything) and directly invokes the Ivy resolver for the artifact, somewhat analogous to the publishing process.
This should speed up `pullRemoteCache` since it avoids the POM download as well.
For sbt-binary-remote-cache this created a bit of complication, since the (publishing) resolver doesn't act correctly as a (downloading) resolver in terms of credentials, so I had to create a new key `remoteCacheResolvers` to allow asymmetric resolvers.