Fixes https://github.com/sbt/sbt/issues/6102

https://github.com/sbt/sbt/pull/6026 changed the implementation of remote cache to NOT use dependency resolution (Coursier) and to use the Ivy resolver directly for efficiency. This was good, but when I made the change, I changed the cache directory to `crossTarget.value / "remote-cache"`. That was fine for local testing purposes, but not great for real usage, since we don't want the cache to be wiped out either on CI machines or on a local laptop.
This adds a new Global key called `localCacheDirectory`. Similar to the Coursier cache, it is meant to be shared across all builds running on a machine, and it also tries to follow the operating system specific caching directory conventions.
### localCacheDirectory location
- Environment variable: `SBT_LOCAL_CACHE`
- System property: `sbt.global.localcache`
- Windows: `%LOCALAPPDATA%\sbt\v1`
- macOS: `$HOME/Library/Caches/sbt/v1`
- Linux: `$HOME/.cache/sbt/v1`
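As a minimal sketch (assuming an sbt version where the key exists), the location can also be overridden from a build or from the user's global settings; the path below is only an example:

```scala
// Point the shared local cache at a custom location.
Global / localCacheDirectory := file(sys.props("user.home")) / ".sbt" / "custom-local-cache"
```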
In #6091, we updated the ScriptedPlugin to set scriptedBatchExecution :=
true for all 1.x versions but not 0.13. This commit further restricts
the setting so that it is only set for sbt >= 1.4, which seems necessary
based on the comments in #6094.
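A hedged sketch of the version gate, assuming the ScriptedPlugin's `scriptedSbt` key and sbt's `VersionNumber`/`SemanticSelector` utilities (the actual implementation may differ):

```scala
import sbt.librarymanagement.{ SemanticSelector, VersionNumber }

// Enable batch execution only when the scripted tests run against sbt 1.4+.
scriptedBatchExecution :=
  VersionNumber(scriptedSbt.value).matchesSemVer(SemanticSelector(">=1.4"))
```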
When using the launcher's classpath for the metabuild, the scala-compiler jar can be missing. This is because the managedJars method only returns the scala-library jar and not the rest of the scala instance. To fix this, we can always prepend the scala instance jars to the classpath.
In order to simulate the issue in scripted, I had to manually remove the scala-compiler.jar from the scripted classpath, or else the scripted test that I added wouldn't actually do anything because the scala-compiler.jar would end up on the app.provider.mainClasspath.
Fixes #4452
A periodic stacktrace showed that scripted tests were still hanging in CI trying to shut down the background job service (I had previously thought that I'd fixed that in 16bef0cfc8). It appears that there is a logic bug that prevents some jobs from being removed from the jobSet even though they have finished. If that happens, the shutdown will never exit. That is highly undesirable and can be avoided by adding a timeout and also only trying to shut down a job if it is actually running.
I discovered that the metals bsp implementation worked very badly with
continuous builds. The problem was that metals is able to trigger a bsp
compile slightly before the continuous build would trigger. This would
cause the ui to get into a bad state. The worst case was that it would
actually cause sbt (or the thin client) to exit. A less catastrophic
issue was that it was possible for the wrong count to be printed by the
continuous message.
This commit fixes the issue by more carefully managing the prompt state
and only resetting the ui when the prompt is not in the Prompt.Watch
state.
If the sbt server is launched by the remote client, it should not have a
console ui thread because there is no way to even feed input to it once
the server has launched. Having the ui thread can cause the server to
exit unexpectedly if an EOF is read from the console input stream.
The network client already supports the -bsp command (since 65ab7c94d0). This commit reworks the BspClient.run method so that it delegates to the NetworkClient. The advantage of doing it this way is that improvements to starting up the sbt server from the thin client will automatically propagate to the -bsp command. As implemented, all of the output generated during server startup is redirected to System.err, which is useful for debugging without messing up the bsp protocol, which relies on only bsp messages being written to System.out.
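A hedged sketch of the redirection idea (`startServer` is a hypothetical placeholder, not sbt's actual API): while the server starts, anything written to System.out is routed to System.err so that only bsp messages ever reach standard output.

```scala
// Temporarily route System.out to System.err during server startup.
val originalOut = System.out
System.setOut(System.err)
try startServer() // hypothetical startup call
finally System.setOut(originalOut)
```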
The boot server socket was not working correctly when the sbt server was
started by the thin client. This was because it is necessary for us to
create a ConsoleTerminal in order for System.out and System.err to be
properly forwarded to the clients connected over the boot server socket.
As a result, if you started a server instance of sbt with the thin
client, you wouldn't see any output until you connected to the server.
The fix is to just make sure that we create a console terminal if sbt is
run as a subprocess.
When a user enters shutdown in the thin client console, it only exits the thin client; it does not actually shut down sbt. Running `sbtn shutdown` did work to shut down the server, however. It turned out that this was because there was special handling for shutdown when processed through jline. We would enqueue the shutdown command and also close the client connection. Closing the client connection, though, removed all of the enqueued commands for the client, which included the shutdown command. To fix this, we just make sure that we don't remove the shutdown command when clearing the client commands.
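A hedged sketch of the filtering logic using sbt's `Exec`/`CommandSource` types (the helper name and exact predicate are illustrative, not the real implementation):

```scala
import sbt.Exec

// Drop a disconnecting client's queued commands, but keep any pending
// "shutdown" so it still reaches the command loop.
def clearClientCommands(pending: List[Exec], channelName: String): List[Exec] =
  pending.filter { e =>
    e.commandLine == "shutdown" || !e.source.exists(_.channelName == channelName)
  }
```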
We no longer need to use the forked version of jline because they have merged our required changes. The latest version of jline does upgrade jansi, however, and some of the APIs we were relying on for Windows were removed, so they had to be manually implemented. I verified that console input still worked on my Windows VM after this change.
The launcher embeds a fixed version of jansi above the rest of the classpath on Windows. This causes problems for the Scala 2.12 console because it tries to load methods that don't exist in the old jansi jar. This can be fixed by excluding all jansi classes from the top loader.
We also need to exclude jansi classes in the scala instance top class loader to make the 2.10 console work, because Scala 2.10 uses a shaded jline that requires a very old jansi version. Due to the shading, the thin client doesn't work with the 2.10 console.
On terminals with virtual IO disabled, we'd spin up a thread for each watch iteration that performed a blocking read from the terminal input stream. This thread could not be joined, which would cause the triggered execution to be delayed by 1 second while sbt blocked trying to join that thread. It also meant that input probably didn't work correctly, since the user would end up with many threads polling System.in. The fix for this problem is to poll the terminal input stream if it is unsafe to do a blocking read, which is the case for dumb terminals or if virtual IO is disabled.
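A hedged sketch of the polling strategy (names are illustrative, not sbt's internals): only read bytes that are already available, so the thread never blocks and can be interrupted and joined promptly.

```scala
// Poll an input stream without ever blocking on read.
def pollInput(in: java.io.InputStream, cancelled: () => Boolean): Option[Int] = {
  while (!cancelled()) {
    if (in.available() > 0) return Some(in.read())
    Thread.sleep(10) // back off briefly between polls
  }
  None
}
```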
Ref https://github.com/sbt/sbt/pull/4443
Fixes https://github.com/sbt/sbt/issues/5750
In #4443 I implemented an optimization where the metabuild would no longer re-resolve numerous sbt artifacts each time, and would instead use the JARs provided by the launcher. At the time, this technique didn't work for Coursier, so I put in some workarounds for it. Now that Coursier's resolution has improved, it seems like the workaround is actually causing more harm. This removes the band-aid, and local testing shows that it seems to be working.
For instance, we no longer need to put in `ThisBuild / useCoursier := false` in sbt/sbt's `project/plugins.sbt`.
* Refactor so as to be testable
* Queue stores the _beginning_ timestamp of each GC time delta
* Message states the correct time over which the GC time was recorded
* Add heap stats from java.lang.Runtime to the message
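A sketch of the heap stats drawn from java.lang.Runtime (the exact message format is illustrative):

```scala
// Compute current heap usage for inclusion in the GC message.
val rt = java.lang.Runtime.getRuntime
val usedMb = (rt.totalMemory - rt.freeMemory) / (1024L * 1024L)
val maxMb = rt.maxMemory / (1024L * 1024L)
println(s"heap usage: ${usedMb}MB used of ${maxMb}MB max")
```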
sbt itself effectively runs its scripted tests with scriptedBatchExecution true and scriptedParallelInstances 1. The performance is much better when this works. This can cause issues (see https://github.com/sbt/sbt/issues/6042), but we inadvertently made this behavior the default in 1.4.0 and it took about a month before #6042 was reported, so I think most users would benefit from this default.
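For plugin builds whose scripted tests are not batch-safe, a minimal sketch of opting back out using the existing ScriptedPlugin keys:

```scala
// Restore the previous, non-batched behavior for this build.
scriptedBatchExecution := false
scriptedParallelInstances := 1
```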
If there are two sbt instances and one of them is running a server, the
other instance is presently prevented from ever starting a server. If an
sbt instance is unable to start a local server because of the presence
of another server, we can monitor the active.json file for changes and,
if it is deleted, we can then try again to start a new server instance.
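A hedged sketch of the retry idea (the helper is illustrative, not the real implementation): poll the active.json portfile left by the other server and attempt to start a new server once it disappears.

```scala
import java.nio.file.{ Files, Path }

// Wait until the other server's portfile is deleted before retrying.
def waitForServerSlot(activeJson: Path, pollMs: Long = 500L): Unit =
  while (Files.exists(activeJson)) Thread.sleep(pollMs)
```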
Refactor remote caching to be scoped to the configuration.
In addition, this avoids the use of the dependency resolver (since we're not resolving anything) and directly invokes the Ivy resolver for the artifact, somewhat analogous to the publishing process.
This should speed up `pullRemoteCache` since it avoids the POM download as well.
For sbt-binary-remote-cache this created a bit of a complication, since the (publishing) resolver doesn't act correctly as a (downloading) resolver in terms of credentials, so I had to create a new key `remoteCacheResolvers` to allow asymmetric resolvers.
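A sketch of the asymmetric configuration, assuming the new `remoteCacheResolvers` key alongside the existing `pushRemoteCacheTo` (the resolver below is only an example):

```scala
// Push to a publishing resolver, pull from a separately configured resolver list.
ThisBuild / pushRemoteCacheTo := Some(
  MavenCache("local-remote-cache", file("/tmp/remote-cache"))
)
ThisBuild / remoteCacheResolvers := List(
  MavenCache("local-remote-cache", file("/tmp/remote-cache"))
)
```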
This test works fine locally on all platforms but there are issues in
CI. I think that it might work ok with 1.4.2 without a lot of extra
effort so I'm going to disable it for now.
This commit adds a wizard for installing sbtn along with tab completions for bash, fish, powershell and zsh. It introduces the `installSbtn` command, which installs sbtn into ~/.sbt/1.0/bin/sbtn(.exe) depending on the platform. It can also optionally install completions. The completions are installed into ~/.sbt/1.0/completions. The sbtn native executable is installed by downloading the sbt universal zip for the version (which can be provided as an input argument, with a fallback to the running sbt version) and extracting the platform-specific binary into ~/.sbt/1.0/bin. After installing the executable, it offers to set up the path and completions for the four shells. With the user's consent, it adds a line to the shell config that updates the path to include ~/.sbt/1.0/bin and another line to source the appropriate completion file for the shell from ~/.sbt/1.0/completions.
With the thin client, when running the command `exit`, it is often the case that the log message `[info] disconnected` is printed on the same line as the prompt. This is because there is a small flush delay on the network client's output stream channel that causes the disconnected info message to be logged before the newline that jline 3 echoes to the client has been printed. To fix this, we can manually flush the terminal output stream before exiting.
A user reported that the watchBeforeCommand callback was not being invoked in sbt 1.4.{0, 1}. This was an oversight that occurred when refactoring watch for the thin client; there had previously been no regression test for that callback.
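A minimal sketch of the regressed callback in a build.sbt, assuming the `watchBeforeCommand` key's `() => Unit` shape from sbt 1.3+:

```scala
// Run a no-arg callback before each watched command executes.
watchBeforeCommand := { () => println("about to run the watched command") }
```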