There may be instances where the user wishes to stop the watch if an
error occurs while running the task. To facilitate this, I add a boolean
parameter, lastStatus, to watchShouldTerminate. The value is computed by
modifying the state used to run the task so that it has a custom onFailure
command. If the task fails, the returned state will have that onFailure
command enqueued at the head of the remaining commands. The result of the
task is then true if the custom onFailure command is not present in the
remaining commands and false if it is. We never actually run this command,
so it's just implemented with the identity function.
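A rough, self-contained sketch of that computation, using stand-in types
instead of sbt's real State and Exec (MiniState and runTask below are
purely illustrative):
```
// Stand-in for sbt's State; only the fields relevant to the sketch.
final case class MiniState(onFailure: Option[String], remainingCommands: List[String])

// `runTask` stands in for executing the watched task: on failure, sbt
// enqueues the onFailure command at the head of the remaining commands.
def lastStatus(state: MiniState)(runTask: MiniState => MiniState): Boolean = {
  val sentinel = "watch-task-failed" // never executed; identity if it were
  val result   = runTask(state.copy(onFailure = Some(sentinel)))
  !result.remainingCommands.contains(sentinel) // true = the task succeeded
}
```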
I also updated Watched.watch to return an Action instead of Unit. This
enables us to return a failed state if Watched.watch returns
HandleError.
This commit reworks Watched to be more testable and extensible. It also
adds some small features. The previous implementation presented a number
of challenges:
1) It relied on external side effects to terminate the watch, which was
difficult to test
2) It exposed irrelevant implementation details to the user via the
methods that took the WatchState as a parameter.
3) It spun up two worker threads. One was to monitor System.in for user
input. The other was to poll the watch service for events and write
them to a queue. The user input thread actually broke '~console'
because nearly every console session will hit the <enter> key, which
would eventually cause the watch to stop when the user exited the
console.
To address (1), I add the shouldTerminate method to WatchConfig. It
takes the current watch iteration as input, and if the function returns
true, the watch stops.
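For example, a build could then stop watching after a fixed number of
iterations, roughly like this (the key name and signature here are
assumptions layered on the description above):
```
// Hedged sketch: terminate the watch after five triggered builds.
watchShouldTerminate := { (count: Int) => count > 5 }
```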
To address (2), I replace the triggeredMessage and watchingMessage keys
with watchTriggeredMessage and watchStartMessage. The latter two keys
hold functions that do not take the WatchState as a parameter. Both
functions take the current iteration count as a parameter, and
watchTriggeredMessage also has a parameter for the path that triggered
the build.
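A hedged sketch of configuring the replacement keys (the parameter order
and return types are assumptions):
```
import java.nio.file.Path

// Illustrative only; the actual types of these keys may differ.
watchStartMessage := { (count: Int) =>
  Some(s"$count. Waiting for source changes... (press enter to interrupt)")
}
watchTriggeredMessage := { (path: Path, count: Int) =>
  Some(s"Build $count triggered by $path")
}
```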
To address (3), I stop using the sbt.internal.io.EventMonitor and
instead use the sbt.io.FileEventMonitor. The latter class is similar to
the former except that its polling method accepts a duration (which may
be finite or infinite) and returns all of the events that occurred since
it was last polled. By adding the ability to poll for a finite amount of
time, we can interleave polling for events with polling System.in for
user input, all on the main thread. This eliminates the two extraneous
threads and fixes the '~console' use case I described before.
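A self-contained sketch of that single-threaded loop; the Action values,
the poll signature, and handleInput are stand-ins rather than the actual
Watched / FileEventMonitor API:
```
import scala.annotation.tailrec
import scala.concurrent.duration._

sealed trait Action
case object Trigger     extends Action
case object CancelWatch extends Action
case object Ignore      extends Action

def watchLoop(
    poll: FiniteDuration => Seq[String], // stand-in for FileEventMonitor polling
    handleInput: () => Action,           // stand-in for the System.in check
    runBuild: () => Unit
): Unit = {
  @tailrec def loop(): Unit = {
    val action =
      if (poll(10.millis).nonEmpty) Trigger // file events arrived since the last poll
      else handleInput()                    // nothing changed: check user input
    action match {
      case Trigger     => runBuild(); loop() // re-run the task and keep watching
      case Ignore      => loop()             // neither events nor input: poll again
      case CancelWatch => ()                 // terminate the watch
    }
  }
  loop()
}
```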
I also let the user configure the function that reads from System.in via
watchHandleInput. In fact, this function need not read from System.in at
all since it's just () => Watched.Action. The reason that it isn't
() => Boolean is that I'd like to leave open the option of triggering a
build via user input, not just terminating the watch. My initial idea
was to add the ability to type 'r' to re-build in addition to <enter> to
exit. This doesn't work without integrating jline though because the
input is buffered. Regardless, for testing purposes, it gives us the
ability to add a timeout to the watch by making handleInput return the
termination action when a deadline expires.
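Building on the stand-in Action values above, a hedged version of such a
test helper might look like:
```
import scala.concurrent.duration._

// Hypothetical handleInput for tests: cancel the watch once a deadline
// expires or when the user presses enter; otherwise do nothing.
def timedHandleInput(timeout: FiniteDuration): () => Action = {
  val deadline = timeout.fromNow
  () =>
    if (deadline.isOverdue()) CancelWatch
    else if (System.in.available() > 0 && System.in.read() == '\n') CancelWatch
    else Ignore
}
```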
The tests are a bit wonky because I still need to rely on side effects
in the logging methods to orchestrate the sequence of file events that
I'd like to test. While I could move some of this logic into a
background thread, there still needs to be coordination between the
state of the watch and the background thread. I think it's easier to
reason about when all of the work occurs on the same thread, even if it
makes these user-provided functions impure.
I deprecated all of the previous watch-related keys that are no longer
used with the new infrastructure. To avoid breaking existing builds, I
make the watchConfig task use the deprecated logging methods if they are
defined in the user's builds, but sbt will no longer set the default
values. For the vast majority of users, it should be straightforward to
migrate their builds to use the new keys. My hunch is that, of the
deprecated keys, only triggeredMessage is widely used (in conjunction
with the clear screen method) and it is dead simple to replace it with
watchTriggeredMessage.
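For that common case, the migration might look roughly like this (the
watchTriggeredMessage signature is an assumption):
```
// Before (deprecated):
// triggeredMessage := Watched.clearWhenTriggered

// After (hedged sketch):
watchTriggeredMessage := { (_: java.nio.file.Path, _: Int) =>
  Some("\u001b[2J\u001b[0;0H") // clear the screen and move the cursor home
}
```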
Note: The FileTreeViewConfig class is not really necessary for this commit.
It will become more important in a subsequent commit which introduces an
optional global file system cache.
This commit makes watch event logging work in the '~' command. The
previous design of the command made this difficult, so there is a
significant re-design of the implementation of '~'. I believe that this
redesign will allow the feature to be maintained and improved more
easily moving forward. With the redesign, it is now possible to test the
business logic of the watch command (and I add a rudimentary test that I
will build upon in subsequent commits).
A bonus of this redesign is that now if the user tries to watch an
invalid command, the watch will immediately terminate with an error
rather than get stuck waiting for events when the task can never
possibly succeed.
The previous implementation of the '~' command makes it difficult
to dynamically control the implementation arguments because it is
implemented in the command project, which means it cannot depend on
any task keys defined in the build. It works around this by
putting all of its configuration in the Watched attribute, which is
stored globally. This would not have been necessary if the function had
been defined in the main project, where it could just extract the value
of the watched task rather than relying on the global attribute value.
Moreover, because it cannot depend on tasks, it is nigh impossible
to use the logging framework within the '~' command.
Another issue with the previous implementation is that it's somewhat
difficult to reason about. The executeContinuously command has
effectively two entry points: one for the first time the command is run
and one for each subsequent invocation when a new build is triggered.
The successive invocations are triggered by prepending commands to run
to the previous state. This is made recursive by prepending the initial
command (the one that was prefixed with '~'). Which branch we're in is
determined by checking for the existence of a temporary attribute that
we must ensure is removed when the watch is stopped. This makes a lot of
behavior non-local and difficult for an outsider who is less familiar
with sbt to understand.
Broadly, this refactor does two things:
1) Move the definition of continuous from BasicCommands to BuiltInCommands
2) Re-work the implementation to be executed in code rather than using
the sbt dsl.
The first part is simple. We just add an implementation of continuous to
BuiltInCommands and remove it from the list of BasicCommands. We need to
leave in the legacy implementation for binary compatibility. I also
moved all of the actual implementation logic into Watched, which makes
maintenance easier since most of the logic is in one place.
The second part is more complicated. Rather than relying on the sbt dsl
(e.g. `(ClearOnFailure :: next :: FailureWall :: repeat :: s)`) to
parse and run the command, we manually parse the command and generate a
task of type `() => State`. We don't actually need to do anything with
the generated state because we're going to return the original state at
the end of the command no matter what. With this task, we can then
create a tail-recursive function that repeatedly executes the task until
the watch is terminated.
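The shape of that tail-recursive driver, sketched with stand-ins for
sbt's State and for the blocking wait on events or input:
```
import scala.annotation.tailrec

sealed trait WatchAction
case object Rerun     extends WatchAction
case object Terminate extends WatchAction

// `S` stands in for sbt's State; `nextAction` blocks until a file event
// or user input decides what happens next.
@tailrec
def run[S](task: () => S, nextAction: Int => WatchAction, count: Int = 1): Unit = {
  task() // execute the parsed command; the resulting state is discarded
  nextAction(count) match {
    case Rerun     => run(task, nextAction, count + 1)
    case Terminate => () // the caller returns the original State unchanged
  }
}
```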
The parsing is handled in the Watch.command method (which is where I
moved the refactored BasicCommands.continuous implementation). The
actual task running and monitoring is handled in Watched.watch. This
method has no reference to the sbt state, which makes it testable. It sets
up an event monitor and then delegates the recursive monitoring to a
small nested function, Watched.watch.impl. One nice thing about this
approach is that it is very easy to reason about the life cycle of the
EventMonitor. The recursive call is within a try { } finally { } where
the monitor and stdin are guaranteed to be cleared at the end.
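Continuing the sketch above, the lifecycle looks roughly like this
(newMonitor and the stdin drain are illustrative):
```
def watch[S](
    task: () => S,
    newMonitor: () => AutoCloseable, // stand-in for setting up the event monitor
    nextAction: Int => WatchAction
): Unit = {
  val monitor = newMonitor()
  try run(task, nextAction) // the tail-recursive loop sketched above
  finally {
    monitor.close()                                    // monitor is always closed
    while (System.in.available() > 0) System.in.read() // buffered input is drained
  }
}
```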
Adding support for a custom (and default) watch logger is trivial with
the new infrastructure and is done via the watchLogger TaskKey.
There was a small reporting race condition that was introduced by the
change to (2). Because the new implementation is able to bypass command
parsing for triggered builds, the watch message would usually end up
being printed before the task outcome was fully logged. To work around
this, I made the watch and triggered messages go through the logger
rather than being printed directly to stdout. As a result, the only
user-visible effect of this change should be that instead of seeing:
"1. Waiting for source changes in project foo... (press enter to interrupt)",
users will now see:
"[info] 1. Waiting for source changes in project foo... (press enter to interrupt)".
The State file in IntelliJ was littered with red squiggly lines wherever
the extension methods of State called a different extension method of
State. These went away when I switched to an implicit class, which has
been the preferred way of adding extension methods since Scala 2.10. As
a bonus, I was able to make the implicit class a value class, so it
should not actually allocate a new object in most use cases.
I had to re-implement the stateOps method to delegate to the implicit
class for binary compatibility.
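A generic illustration of the pattern (not the actual State code): the
implicit class is a value class, so no wrapper is allocated at most call
sites, and one extension method can call another without confusing the
IDE:
```
final case class Commands(remaining: List[String])

object CommandsSyntax {
  implicit class CommandsOps(private val c: Commands) extends AnyVal {
    def prepend(cmd: String): Commands = c.copy(remaining = cmd :: c.remaining)
    def prependAll(cmds: Seq[String]): Commands =
      cmds.foldRight(c)((cmd, acc) => acc.prepend(cmd)) // one extension calling another
  }
}
```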
Fixes https://github.com/sbt/sbt/issues/3508
This forks an instance of sbt in the background when it's not running already.
```
$ time sbt -client compile
Getting org.scala-sbt sbt 1.2.0-SNAPSHOT (this may take some time)...
:: retrieving :: org.scala-sbt#boot-app
confs: [default]
79 artifacts copied, 0 already retrieved (28214kB/130ms)
[info] entering *experimental* thin client - BEEP WHIRR
[info] server was not detected. starting an instance
[info] waiting for the server...
[info] waiting for the server...
[info] server found
> compile
[success] completed
sbt -client compile 9.25s user 2.39s system 33% cpu 34.893 total
$ time sbt -client compile
[info] entering *experimental* thin client - BEEP WHIRR
> compile
[success] completed
sbt -client compile 3.55s user 1.68s system 107% cpu 4.889 total
```
Fixes https://github.com/sbt/sbt/issues/1502
This adds the `--addPluginSbtFile=<file>` command, which adds the given .sbt file to the plugin build.
Using this mechanism, editors and IDEs can start a build with a required plugin.
```
$ cat /tmp/extra.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.7")
$ sbt --addPluginSbtFile=/tmp/extra.sbt
...
sbt:helloworld> plugins
In file:/xxxx/hellotest/
...
sbtassembly.AssemblyPlugin: enabled in root
```
A thread blocking on System.in.read() cannot be interrupted, so check
System.in.available before blocking. This is how it used to work. It
requires https://github.com/sbt/io/pull/149 or else a CPU will be
pegged by the EventMonitor user input thread spinning on
System.in.available.
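A hedged sketch of that guard:
```
// Only call read() when a byte is already buffered, so the calling thread
// can never block uninterruptibly on System.in.
def pollStdIn(): Option[Int] =
  if (System.in.available() > 0) Some(System.in.read()) // guaranteed not to block
  else None                                             // nothing buffered yet
```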
In https://github.com/sbt/io/pull/142, I add a new API for watching for
source file events. This commit updates sbt to use the new EventMonitor
based api. The EventMonitor has an anti-entropy parameter, so that
multiple events on the same file in a short window of time do not
trigger a build. I add a key to tune it.
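For illustration, tuning that window might look something like this (the
setting name and type here are assumptions, not necessarily the actual
key):
```
import scala.concurrent.duration._

// Hypothetical setting name: ignore repeated events on the same file that
// arrive within half a second of a triggered build.
watchAntiEntropy := 500.milliseconds
```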
The implementation of executeContinuously is pretty similar. The main
changes are that shouldTerminate now blocks (EventMonitor spins up a
thread to check the termination condition) and that the
EventMonitor.watch method only returns a Boolean. This is because
the event monitor contains mutable state. It does, however, have a
state() method that returns an immutable snapshot of the state.
I noticed that my custom WatchService was never cleaned up by sbt and
realized that after every build we were making a new WatchService. At
the same time, we were reusing the WatchState from the previous run,
which was using the original WatchService. This was particularly
problematic because it prevented us from registering any paths with the
new watch service. This may have prevented some of the file updates
from being seen by the watch service. Moreover, because we lost the
reference to the original WatchService, there was no way to clean it up,
which was a resource leak.
May be related to #3775, #3695
* 1.1.x:
Update mimaPreviousArtifacts/sbt.version
Introduce SBT_GLOBAL_SERVER_DIR env var to override too long paths
Handle very long socket file paths on UNIX
Conflicts:
project/build.properties
Fixes #3821
Initially I missed why #3821 was failing.
Looking at it again, the error message reads:
```
Caused by: java.lang.IllegalArgumentException: Interface (NGWin32NamedPipeLibrary) of library=kernel32 does not extend Library
at com.sun.jna.Native.loadLibrary(Native.java:566)
at sbt.internal.NGWin32NamedPipeLibrary.<clinit>(NGWin32NamedPipeLibrary.java:38)
... 7 more
```
Inside `Native.loadLibrary`, it requires the "library" interface to extend `com.sun.jna.Library`, which this adds.
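For illustration, here is the constraint sketched in Scala (the real fix
is in the Java source of NGWin32NamedPipeLibrary; Kernel32 and
GetCurrentProcessId below are only examples):
```
import com.sun.jna.{Library, Native}

// Native.loadLibrary rejects any interface that does not extend
// com.sun.jna.Library, which is exactly the error in the stack trace above.
trait Kernel32 extends Library {
  def GetCurrentProcessId(): Int // illustrative kernel32 function (Windows only)
}

val kernel32 = Native.loadLibrary("kernel32", classOf[Kernel32]).asInstanceOf[Kernel32]
```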
Fixes #3823
When you launch a second instance of sbt on a build, prior to this change it was displaying `java.io.IOException: sbt server is already running` on every command. This change makes it a bit less aggressive and just displays a warning once.
```
[warn] Is another instance of sbt is running on this build?
[warn] Running multiple instances is unsupported
```
Ref https://github.com/sbt/io/pull/96
Under RFC 8089, both u1 and u3 are legal, but many of the other platforms expect traditional u3.
This will increase the compatibility/usability of sbt server, for example to integrate with Vim.
Currently the server will try to start even if there are existing sbt sessions. This causes the second session to take over the server for the Unix domain socket.
This adds a check before the server comes up and makes sure that the socket is not taken.
Fixes #3786
To configure the log level of the server, this introduces a new task key named `serverLog`. The idea is to set this using `Global / serverLog / logLevel`. It will also check the global log level, and if all else fails, fall back to Warn.
```
lazy val level: Level.Value = (s get serverLogLevel) orElse (s get logLevel) match {
  case Some(x) => x
  case None    => Level.Warn
}
```
`NGUnixDomainSocket` throws `java.io.IOException` instead of `SocketException`, probably because `SocketException` does not expose a constructor with a `Throwable` parameter.
To allow clients to disconnect, we need to catch `IOException`.
In addition to TCP, this adds sbt server support for IPC (interprocess communication) using Unix domain socket and Windows named pipe.
The use of Unix domain socket has performance and security benefits.
This adds a new option `dev` to the `reboot` command, which deletes only the current sbt artifacts from the boot directory. `reboot dev` reads directly from `build.properties` instead of using the current state, since `reboot` can restart into another sbt version.
In general, `reboot dev` is intended for the local development of sbt.
Fixes #3590
This adds a sbt.watch.mode system property that if set to 'polling' will
use PollingWatchService instead of WatchServiceAdapter (nio).
On macOS this will default to 'polling' and on all others 'nio'.
This is a temporary workaround for users affected by #3527
This is the first cut of the Language Server Protocol on top of the sbt server; it is still a work in progress.
With this change, sbt is able to invoke `compile` task on saving files in VS Code.
This implements a JSON-based port file. Throughout the lifetime of the sbt server there will be a `cwd / "project" / "target" / "active.json"` file, which contains a `url` field.
Using this `url`, potential clients such as IDEs can find out which port to connect to.
Ref #3508
Adds JVM flag `sbt.server.autostart` to enable/disable the automatic starting of sbt server with the sbt shell.
This also adds a new command `startServer` to manually start the server.
If the read buffer contains more than 2 messages, we need to consume them all before blocking on socket read again. For that we have to loop until the buffer no longer contains the message delimiter character.
The same problem exists in the client ServerConnection code.
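A hedged sketch of that loop (the delimiter and buffer representation are
illustrative):
```
import scala.collection.mutable

val Delimiter: Byte = '\n'.toByte

// Consume every complete message already in the buffer before blocking on
// the next socket read.
def drainBuffer(buffer: mutable.ArrayBuffer[Byte])(handle: String => Unit): Unit = {
  var idx = buffer.indexOf(Delimiter)
  while (idx >= 0) {
    handle(new String(buffer.take(idx).toArray, "UTF-8"))
    buffer.remove(0, idx + 1)       // drop the message plus its delimiter
    idx = buffer.indexOf(Delimiter) // look for another complete message
  }
}
```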
This commit adapts `Watched` so that it supports the new `WatchService`
infrastructure introduced in sbt/io. The goal of this infrastructure is
to provide an API for, and several implementations of, services that
monitor changes to the file system.
The service to use to monitor the file system can be configured with the
key `watchService`.
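For example, a build might opt into a polling implementation roughly
like this (the concrete service class and its constructor here are
assumptions; only the `watchService` key comes from the text above):
```
import scala.concurrent.duration._

// Hedged example: use a polling watch service with a 500ms poll interval.
watchService := { () => new sbt.io.PollingWatchService(500.milliseconds) }
```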
In sbt 0.13.15, in addition to notifying the user about the existence of
sbt's shell, a feature was added to allow the user to switch to sbt's
shell, a more proactive approach than just displaying a message.
Unfortunately sbt is often unintentionally invoked in shell scripts in
"interactive mode" when no interaction is expected, for example by
invoking `sbt package` instead of `sbt package < /dev/null`. In that
case hitting [ENTER] would silently trigger sbt to run its shell,
easily wrecking the script. In addition to that, I was unhappy with the
implementation as it created a tight coupling between sbt's command
processing abstraction and sbt's shell command.
If you want to stay in sbt's shell after running a task like `package`
then invoke sbt like so:
sbt package shell
Fixes #3091
This is a change in strategy.
The motivation is the need to find a good balance between:
+ informing the uninformed who would benefit from this information, and
+ not spamming the already informed
Making it dependent on "compile" being present in remainingCommands will
probably make it trigger for, for example, Maven users who are used to
running "mvn compile" and always run "sbt compile", and who therefore
are unnecessarily suffering terribly slow compile speeds by starting up
the JVM and sbt every time.
Fixes #3091
Fixes #3097