It is useful to store a buffer of the lines written to each terminal. We
can use that buffer to replay the terminal log to a different client.
This is particularly nice when a remote client connects to sbt while
it's booting: we can show the remote client all the lines displayed by
the console prior to the client connecting.
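A minimal sketch of the idea, with hypothetical names (the real buffer
lives in sbt's terminal code and may be bounded differently):

    import java.util.concurrent.ConcurrentLinkedQueue

    // Bounded buffer of recent terminal lines, replayed to late-joining clients.
    class LineBuffer(capacity: Int = 1000) {
      private val lines = new ConcurrentLinkedQueue[String]
      def append(line: String): Unit = {
        lines.add(line)
        while (lines.size > capacity) lines.poll() // drop the oldest lines
      }
      // Replay everything buffered so far to a newly attached client.
      def replayTo(write: String => Unit): Unit = lines.forEach(l => write(l))
    }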
This commit upgrades sbt to jline3. The advantage of jline3 is that it
has a significantly better tab completion engine, one that is much
closer to what you get from zsh or fish.
The diff is bigger than I'd hoped because a number of behaviors differ
between jline3 and jline2 in how the library consumes input streams and
implements various features. I was also unable to remove jline2 because
we need it for older versions of the scala console to work correctly
with the thin client. As a result, the changes are largely additive.
A good amount of this commit was in adding more protocol so that the
remote client can forward its jline3 terminal information to the server.
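As an illustration only, the forwarded terminal information might look
something like the following; the message name and fields here are
hypothetical, not the actual protocol:

    // Hypothetical shape of the terminal-properties message sent by the client.
    case class TerminalPropertiesParams(
      width: Int,  // columns reported by the client terminal
      height: Int, // rows reported by the client terminal
      isAnsiSupported: Boolean,
      isColorEnabled: Boolean,
      isSupershellEnabled: Boolean
    )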
I made a number of minor changes that either fixed outstanding ui bugs
from #5620 or addressed regressions due to differences between jline3
and jline2.
The number one thing that caused problems is that the jline3 LineReader
insists on using a NonBlockingInputStream. The implementation of
NonBlockingInputStream seems buggy. Moreover, sbt internally uses a
non-blocking input stream for System.in, so jline is layering
non-blocking behavior on top of an already non-blocking stream, which is
frustrating.
A long term solution might be to consider insourcing LineReader.java
from jline3 and just adapting it to use an sbt terminal rather than
fighting with the jline3 api. This would also have the advantage of not
conflicting with other versions of jline3. Even if we don't, we may want to
shade jline3 if that is possible.
If there is no system console available, then there is no point in
making an ask user thread. An ask user thread can only be created when
the terminal prompt is in the Prompt.Running or Prompt.Loading state.
The console channel will now set itself to be in the Prompt.NoPrompt
state if it detects that there is no System.console available.
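A self-contained sketch of that detection; the Prompt states here are
stand-ins for sbt's internal definitions:

    sealed trait Prompt
    object Prompt {
      case object Running extends Prompt
      case object Loading extends Prompt
      case object NoPrompt extends Prompt

      // No System.console means no interactive user, e.g. scripted/server tests.
      def initial(): Prompt =
        if (System.console() == null) NoPrompt
        else Running

      // An ask user thread may only be created in these two states.
      def canAskUser(p: Prompt): Boolean = p match {
        case Running | Loading => true
        case _                 => false
      }
    }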
The motivation for this change is that jline was printing a lot of extra
text during scripted and server tests. Whenever a jline3 LineReader is
closed, it prints a newline, so the logs were filled with unnecessary
newlines.
It is possible for sbt to get into a weird state during a continuous
build when the auto reload feature is on and a source file and a build
file are changed within a small window of time. If sbt detects the
source file change first, it will start running the command but then
auto reload mid-command because of the build file change. This leaves
the watch in a broken state because the watch must be completely
restarted after sbt exits.
To fix this, we can aggregate the detected events in a 100ms window. The
idea is to handle bursts of file events: we poll in 5ms increments, and
as soon as a poll detects no new events, we trigger the build.
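A sketch of that aggregation loop, assuming events arrive on a queue
(names are illustrative):

    import java.util.concurrent.{ LinkedBlockingQueue, TimeUnit }
    import scala.collection.mutable.ListBuffer

    object EventAggregator {
      // Drain a burst of file events: poll every 5ms for up to 100ms total and
      // stop as soon as a poll comes back empty; the caller then triggers the
      // build with the aggregated events.
      def aggregate[E](queue: LinkedBlockingQueue[E], first: E): List[E] = {
        val events = ListBuffer(first)
        val deadline = System.nanoTime + TimeUnit.MILLISECONDS.toNanos(100)
        var quiet = false
        while (!quiet && System.nanoTime < deadline) {
          queue.poll(5, TimeUnit.MILLISECONDS) match {
            case null => quiet = true // no event within 5ms: the burst is over
            case e    => events += e
          }
        }
        events.toList
      }
    }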
Remote clients sometimes flicker when updating progress. This is especially
noticeable when there are two clients and one of them is running a command:
the other will tend to show visible flickering and character ghosting.
As an experiment, I buffered calls to flush in the NetworkChannel output
stream and the artifacts went away.
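The buffering could look roughly like this sketch, where flush only
marks the stream dirty and a periodic task performs the real flush; this
is an assumption about the approach, not the actual NetworkChannel code:

    import java.io.{ BufferedOutputStream, OutputStream }
    import java.util.concurrent.atomic.AtomicBoolean

    class CoalescingOutputStream(out: OutputStream) extends OutputStream {
      private val buffered = new BufferedOutputStream(out, 8192)
      private val dirty = new AtomicBoolean(false)
      override def write(b: Int): Unit = { buffered.write(b); dirty.set(true) }
      // Rapid flush bursts only set a flag instead of hitting the wire.
      override def flush(): Unit = dirty.set(true)
      // Invoked periodically (or on close) to perform the one real flush.
      def flushNow(): Unit = if (dirty.compareAndSet(true, false)) buffered.flush()
    }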
Running a `~` command in a local build off the latest develop branch
will cause the build to reload even if the build sources were only
touched and not actually modified.
In the situation where sbt was started in server mode, a client is
running a `~` command, and a project reload is triggered by a change to
a build source, the console terminal looks like:
sbt:foo>
[info] received remote command: ~compile
sbt:foo>
[info] welcome to sbt 1.4.0-SNAPSHOT (Azul Systems, Inc. Java 1.8.0_252)
sbt:foo>
[info] loading global plugins from ~/.sbt/1.0/plugins
sbt:foo>
[info] loading settings for project foo-build from metals.sbt ...
sbt:foo>
[info] loading project definition from
~/foo/project
sbt:foo>
[info] loading settings for project root from build.sbt ...
sbt:foo>
[info] loading settings for project macros from build.sbt ...
sbt:foo>
[info] loading settings for project main from build.sbt ...
sbt:foo>
[info] set current project to foo (in build file:~/foo)
sbt:foo>
This change fixes that by unprompting all channels during project
loading and reprompting them when it completes.
Linting unused keys was adding a significant overhead to sbt project
loading because Def.compiled is so slow. The overhead was around 4
seconds in the sbt project on my computer.
The continuous command recompiles the setting graph into a CompiledMap
data structure so that it can determine which files it needs to
transitively monitor during watch. Generating the CompiledMap can be
very slow for large projects (5 seconds or so on my computer in the sbt
project) and this startup cost is paid every time the user enters a
watch with `~`. To avoid this, we can cache the compiled map that is
generated during the initial settings evaluation.
The only real drawback I can see is that the compiled map is guaranteed
to remain in memory so long as the BuildStructure instance that holds it
is alive. Given the performance benefit, this seems like a worthwhile
tradeoff.
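A sketch of the memoization, with hypothetical types standing in for
BuildStructure and CompiledMap:

    import java.util.concurrent.ConcurrentHashMap

    // Memoizes the expensive compilation once per key; because the key is the
    // build structure itself, the compiled map lives exactly as long as the
    // structure that produced it, which is the tradeoff described above.
    class CompiledMapCache[K, V](compile: K => V) {
      private val cache = new ConcurrentHashMap[K, V]
      def get(structure: K): V =
        cache.computeIfAbsent(structure, k => compile(k))
    }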
There have been occasional failures on appveyor where an
AccessDeniedException was thrown at this point. AccessDeniedExceptions
thrown during scripted tests can often be resolved with a Retry.
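A minimal retry sketch; the attempt count and delay are illustrative:

    import java.nio.file.AccessDeniedException
    import scala.annotation.tailrec
    import scala.util.{ Failure, Success, Try }

    object Retry {
      @tailrec
      def apply[A](attempts: Int = 5, delayMs: Long = 100)(f: => A): A =
        Try(f) match {
          case Success(a) => a
          case Failure(_: AccessDeniedException) if attempts > 1 =>
            Thread.sleep(delayMs) // let the other process release its handle
            apply(attempts - 1, delayMs)(f)
          case Failure(e) => throw e
        }
    }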
The graalvm client was swallowing all -D arguments and adding them to
the process system properties. This is undesirable since there are sbt
commands whose arguments start with '-D'. It also breaks our ability to
pass system properties to the forked sbt process.
In a few places, I used this pattern in an attempt to debug some ui
issues. It is incorrect because it doesn't use System.lineSeparator
and is also pointless.
Reboot is a bit tricky for the remote client because the sbt server is
actually shut down during reboot. When sbt shuts down the client, it can
notify the client that the reason is a reboot. The client can then
connect to the recently introduced boot control socket to display the
reboot output and supply input in case the build fails to load. Once the
server has come back up, the client can reconnect. When the client
session is interactive, we're done once we reconnect. When it's a batch
session, the client needs to resend the remaining commands that it has
submitted but that the server has not yet run.
When running reboot at the console, the first character that the user
enters after the reboot has completed is lost. This is because it isn't
possible to interrupt System.in and we have a thread that is blocking on
reads to System.in in WriteableInputStream. That thread cannot be shut
down during normal sbt shutdown while it is reading. When sbt next
starts up (in the same jvm), the previous thread gets the byte but has
nowhere to write it, so the byte is lost. This commit fixes that
behavior by ensuring that we only poll System.in when there is actually
a downstream consumer.
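A sketch of the gating, under the assumption that reads are funneled
through a single daemon thread; the names are hypothetical:

    import java.util.concurrent.{ LinkedBlockingQueue, Semaphore }

    object GatedStdinReader {
      private val permits = new Semaphore(0)
      private val bytes = new LinkedBlockingQueue[Integer]
      private val thread = new Thread("sbt-stdin-reader") {
        setDaemon(true)
        override def run(): Unit = while (true) {
          permits.acquire()           // block until a consumer wants a byte
          bytes.put(System.in.read()) // only now do we block on System.in
        }
      }
      thread.start()
      // Called by the downstream consumer; without a call here no read happens,
      // so a byte typed after shutdown cannot be stolen by a dead instance.
      def read(): Int = { permits.release(); bytes.take() }
    }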
The behavior of reboot is still a little wonky if the user issues a
reboot from a network client and then tries to input commands at the
console. In that case, sbt will have been polling System.in in the ask
user thread prior to the reboot and the ask user thread will be
uninterruptible for the reason described above so the first byte will
again be swallowed by the previous sbt instance. This use case is
sufficiently pathological that it doesn't feel worth the effort to fix.
As annoying as it is, it doesn't break the sbt session. The user will
either submit an invalid command with the missing leading character or
notice the character is missing, possibly think they missed the key,
type backspace a few times and re-type the command.
Shutdown was being handled as a special case in CommandExchange. This
promotes it to a full-fledged command. It also replaces instances of
hard-coded strings with constants.
On windows, it is sometimes possible to leak an sbt process if two
processes are started simultaneously by a remote client. When this
happens, the second process is unable to create a server because of the
first process, and it also has no io streams because the client detaches
its streams. We can detect this in the shell command and prevent the
process from persisting as a zombie.
When a remote client sent the command `shutdown` through the shell, the
client would log an error and exit with a nonzero exit code because,
before shutting down, the server would notify the client that it was
being disconnected due to shutdown. In this scenario, we actually do not
want the client to log an error since it initiated the shutdown. So
before doing the full shutdown, we shut down the initiating client with
a flag that tells it not to log the shutdown or return a nonzero exit
code.
One issue with the remote client approach is that it is possible for
multiple clients to start multiple servers concurrently. I encountered
this in testing where in one tmux pane I'd start an sbt server and in
another I might run sbtc before the server had finished loading. This
can actually cause java processes to leak because the second process is
unable to start a server but it doesn't necessarily die after the client
that spawned it exits. This commit prevents this scenario by creating a
server socket before sbt loads the build and closing it once the build
is complete. The client can then receive output bytes and forward input
to the booting server.
The socket that is created during boot is always a local socket, either
a UnixDomainServerSocket or a Win32NamedPipeServerSocket. At the moment,
I don't see any reason to support TCP. This socket also has no impact at
all on the normal sbt server that is started up after the project has
loaded.
The socket is hardcoded to be located at the relative path
project/target/$SOCK_NAME or the named pipe $SOCK_NAME where SOCK_NAME
is a farm hash of the absolute path of the project base directory. There
is no portfile json because none is needed when TCP isn't supported.
After the socket is created it listens for clients to whom it relays
input to the console's input stream and relays the process output back
to the client. See the javadoc in BootServerSocket.java for further
details.
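The socket-name derivation might look like the following sketch,
assuming Guava's farmHashFingerprint64 is the farm hash in question:

    import com.google.common.hash.Hashing
    import java.nio.charset.StandardCharsets.UTF_8
    import java.nio.file.Path

    object BootSocketName {
      // Hex fingerprint of the project base directory, used as the socket
      // file name (or named pipe name on windows).
      def apply(baseDirectory: Path): String =
        Hashing.farmHashFingerprint64
          .hashString(baseDirectory.toAbsolutePath.toString, UTF_8)
          .toString
    }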
The process for forking the server is also a bit more complicated after
this change because the client will read the process output and error
streams until the socket is created and thereafter will only read output
from the socket, not the process.
Supershell does not work correctly when the sbt server is started by the
remote client on windows because it incorrectly calculates the terminal
dimensions. To work around this, we can pass in the dimensions from the
remote client as an environment variable. I tried to do this as a system
property but had all kinds of problems with windows stripping delimiters
from the command. It was much easier to get working with an environment
variable, which should really only be set by the sbtc client anyway.
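For illustration, the server side might recover the dimensions like
this; the variable name SBTC_TERMINAL_SIZE and the "<width>x<height>"
format are hypothetical, not the actual contract:

    object ClientTerminalSize {
      def fromEnv: Option[(Int, Int)] =
        Option(System.getenv("SBTC_TERMINAL_SIZE")).flatMap { s =>
          s.split("x") match {
            case Array(w, h) => scala.util.Try((w.toInt, h.toInt)).toOption
            case _           => None
          }
        }
    }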
It is inefficient to be constantly allocating and filling an
ArrayBuffer. The buffer is only used for reading headers that we mostly
discard anyway. My assumption is that since we only care about content
length, it's fine to put a fixed limit on the buffer size.
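A sketch of header reading with a fixed buffer; this shows the general
shape rather than the exact sbt code:

    import java.io.InputStream

    object HeaderReader {
      private val MaxHeaderLength = 1024
      private val buffer = new Array[Byte](MaxHeaderLength)

      // Reads one header line; bytes past the fixed limit are consumed but
      // dropped, which is fine because only Content-Length is ever parsed.
      def readLine(in: InputStream): String = {
        var i = 0
        var b = in.read()
        while (b != -1 && b != '\n') {
          if (i < MaxHeaderLength) { buffer(i) = b.toByte; i += 1 }
          b = in.read()
        }
        new String(buffer, 0, i).trim // trim drops the trailing '\r'
      }
    }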
This commit reworks TaskProgress so that it is a singleton object. By
using a singleton, we ensure that there is at most one progress thread
running at a time. With multiple threads, there can be flickering in the
progress reports.
This fixes https://github.com/sbt/sbt/issues/5547. There was also a bug
where the reference to the progress thread was not reset when the thread
itself exited. As a result, it was possible for progress reporting to
stop while tasks were still running. This seemed to primarily happen in
multi-project builds. It should be fixed by this change.
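The singleton guarantee plus the reset-on-exit fix can be sketched with
an atomic reference; these internals are hypothetical:

    import java.util.concurrent.atomic.AtomicReference

    object ProgressThread {
      private val current = new AtomicReference[Thread](null)

      def ensureRunning(report: () => Unit): Unit = {
        val t = new Thread("task-progress") {
          override def run(): Unit =
            try report()
            finally current.compareAndSet(this, null) // reset on exit (the bug fix)
        }
        if (current.compareAndSet(null, t)) t.start() // at most one thread runs
      }
    }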
It seemed like a good idea to avoid printing the progress lines if they
were the same as last time. Unfortunately there is a scenario where the
user has multiple clients open and while one of the clients is running a
command, the user presses enter in the inactive client. When this
happens the message that warns that the server is running a different
command gets overwritten. Always printing keeps the message visible.
There were three issues with the server tests on windows
1) Sometimes client connections fail on the first attempt but
will succeed with retries
2) It is possible for named pipes to leak which causes subsequent
tests to be unable to run
3) ClientTest did not handle lines with carriage returns correctly
Issue #1 is fixed with retries. Issue #2 is fixed by using a
unique temp directory for each test run. Issue #3 was fixed by
simplifying the output stream cache to only cache the bytes returned
and then letting scala do the line splitting for us in linesIterator.
After these changes, all of the server tests work on appveyor.
The server tests were very unreliable on travis ci. The problem seemed
to be that the execution context used by the calls to Future() in
readFrame was getting blocked, preventing the test from processing any
new responses. This is fixed by spawning a background thread that reads
json from the server and writes the responses to a queue. By doing this,
we also don't need to reset the connection when one of the checks times
out. As a result, we can make the Socket a val rather than a var.
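A sketch of that reader thread, with readFrame standing in for the
test's frame decoding:

    import java.util.concurrent.{ LinkedBlockingQueue, TimeUnit }

    class ResponseReader(readFrame: () => String) {
      private val responses = new LinkedBlockingQueue[String]
      private val reader = new Thread("server-response-reader") {
        setDaemon(true)
        override def run(): Unit =
          try while (true) responses.put(readFrame()) // only this thread blocks
          catch { case _: java.io.IOException => }    // socket closed: stop
      }
      reader.start()

      // A check that times out just polls again later; the connection is
      // never reset.
      def next(timeoutSeconds: Long): Option[String] =
        Option(responses.poll(timeoutSeconds, TimeUnit.SECONDS))
    }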