Ref https://github.com/sbt/zinc/pull/744
This implements `ThisBuild / usePipelining`, which configures subproject pipelining, available as of Zinc 1.4.0.
The basic idea is to start subproject compilation as soon as the pickle JARs (early output) become available. This is in part enabled by the Scala compiler's new flags `-Ypickle-java` and `-Ypickle-write`.
The other part of the magic is the use of `Def.promise`:
```
earlyOutputPing := Def.promise[Boolean],
```
This notifies the `compileEarly` task, which looks like a normal task to the rest of the tasks but is in fact promise-blocked. In other words, unless the full `compile` task is also invoked, `compileEarly` will never return, waiting forever for `earlyOutputPing`.
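For reference, the promise-blocking pattern in isolation looks roughly like this (a sketch with hypothetical task names, not the actual pipelining wiring):
```
// build.sbt sketch: midpoint, producer, and consumer are made-up names.
val midpoint = taskKey[PromiseWrap[Boolean]]("completed partway through producer")
val producer = taskKey[Unit]("long-running task that completes the promise early")
val consumer = taskKey[Unit]("blocks until the promise completes")

midpoint := Def.promise[Boolean]

producer := {
  val p = midpoint.value
  // ... first half of the work, e.g. writing the pickle JAR ...
  p.success(true) // unblocks anything awaiting the promise
  // ... second half of the work, e.g. full compilation ...
}

consumer := {
  // suspends this task until producer calls success(...)
  val ok = midpoint.await.value
  if (ok) println("early output is ready; downstream work can start")
}
```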
The EventsTest changes kept appearing; I'm not sure why the scalafmt check was allowing them before. My vim status bar warns me about trailing spaces, and I noticed the two in Keys.scala and removed them.
There were three issues with the server tests on Windows:
1) Sometimes client connections fail on the first attempt but
will succeed with retries
2) It is possible for named pipes to leak which causes subsequent
tests to be unable to run
3) ClientTest did not handle lines with carriage returns correctly
Issue #1 is fixed with retries. Issue #2 is fixed by using a
unique temp directory for each test run. Issue #3 is fixed by
simplifying the output stream cache to cache only the bytes returned
and then letting Scala do the line splitting for us in `linesIterator`.
After these changes, all of the server tests work on AppVeyor.
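The retry fix for issue #1 boils down to a helper along these lines (a sketch; the actual test code may differ):
```
// Retry a connection attempt a few times with a short delay before
// giving up. The `connect()` in the usage comment stands in for
// whatever opens the client socket in the test.
def retry[A](attempts: Int, delayMillis: Long)(f: => A): A =
  try f
  catch {
    case _: Exception if attempts > 1 =>
      Thread.sleep(delayMillis)
      retry(attempts - 1, delayMillis)(f)
  }

// usage: val client = retry(attempts = 5, delayMillis = 100)(connect())
```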
The server tests were very unreliable on Travis CI. The problem seemed
to be that the execution context used by the calls to `Future()` in
`readFrame` was getting blocked, preventing the test from processing any
new responses. This is fixed by spawning a background thread that reads
JSON from the server and writes the responses to a queue. By doing this,
we also don't need to reset the connection when one of the checks times
out. As a result, we can make the Socket a val rather than a var.
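Conceptually, the reader looks like this (hypothetical names, not the actual test code):
```
import java.util.concurrent.{ LinkedBlockingQueue, TimeUnit }

// A dedicated daemon thread reads frames from the server and enqueues
// them; checks poll the queue with a timeout instead of blocking a
// shared execution context or resetting the connection.
class ResponseReader(readFrame: () => String) {
  private val responses = new LinkedBlockingQueue[String]
  private val reader = new Thread("server-response-reader") {
    setDaemon(true)
    override def run(): Unit =
      try { while (true) responses.put(readFrame()) }
      catch { case _: Exception => () } // socket closed: stop reading
  }
  reader.start()

  // Returns None on timeout; the connection stays usable for later checks.
  def next(timeoutMillis: Long): Option[String] =
    Option(responses.poll(timeoutMillis, TimeUnit.MILLISECONDS))
}
```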
The sbtc client, when combined with tab completions, can provide a UX
very similar to the sbt shell. In fact, since some shells have a
better tab completion engine than the one provided by jline2, the
experience can be even better. To make this work, we add another entry
point to the thin client that is capable of generating completions for
an input string. It queries sbt for the completions and prints the
results to stdout, where they are consumed by the shell and fed into its
completion engine.
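A rough sketch of that entry point (the `sbt/completion` method and its params shape are my reading of the server protocol; the transport is abstracted as a parameter):
```
// Print one completion per line to stdout for the shell to consume.
// sendRequest stands in for the JSON-RPC round trip over the socket.
def printCompletions(query: String, sendRequest: String => Seq[String]): Unit = {
  val request =
    s"""{ "jsonrpc": "2.0", "id": 1, "method": "sbt/completion",
       |  "params": { "query": "$query" } }""".stripMargin
  sendRequest(request).foreach(println)
}
```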
In addition to providing tab completions, if there is no server running,
or if the user is completing `runMain`, `testOnly` or `testQuick`, the
thin client will prompt the user to ask whether they would like to start
an sbt server or whether they would like to compile to generate the main
class or test names. Neither PowerShell nor Zsh supports forwarding
input to the tab completion script. Zsh will print output to stderr, so
we opportunistically start the server or complete the test class names.
PowerShell does not print completion output at all, so we do not start a
server or fill completions in that case*. For fish and bash, we prompt
the user that they can take these actions so that they can avoid the
expensive operation if desired.
* PowerShell users can set the environment variable SBTC_AUTO_COMPLETE
if they want to automatically start a server or compile for run and test
names. No output will be displayed, so there can be a long latency
between pressing <tab> and seeing completion results if this variable is
set.
Running multi commands (input commands delimited by semicolons) did not
work with the thin client. The commands would actually run on the
server, but the thin client would exit immediately without displaying
the output. The reason was that MainLoop would report the exec complete
when all it had done was split the original command into its constituent
parts and prepend them to the state's command list. To work around this,
when we detect a multi command from a network source, we can remap its
exec id to a different id and report the original exec id only after the
constituent commands complete. We also have to keep track of whether the
command succeeded or failed so that the reporting command reports the
correct result.
It is implemented with the following steps:
1. set the terminal to the network terminal
2. stash the current onFailure so that we can properly report failures
3. add the new exec id to a map from the original exec id to the
generated id
4. actually run the command
5. if the command succeeds, add the original exec id to a result map
6. pop the onFailure
7. restore the terminal to console
8. report the result -- if the original exec id is in the result map we
report success. Otherwise we report failure.
There is also logic in NetworkChannel for finding the original exec id
when reporting one of the artificially generated exec ids, because the
client will not be aware of the generated id.
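In code, the remapping is conceptually a pair of small maps (a hypothetical sketch keyed by the generated id, which is what NetworkChannel needs to recover when reporting; the real logic lives in MainLoop and NetworkChannel):
```
import java.util.UUID
import java.util.concurrent.ConcurrentHashMap

// generated id -> original id, so the reporting side can recover the
// exec id that the client actually knows about.
val generatedToOriginal = new ConcurrentHashMap[String, String]
// original ids whose constituent commands all succeeded (step 5).
val succeeded = ConcurrentHashMap.newKeySet[String]()

def remap(originalExecId: String): String = {
  val generated = UUID.randomUUID.toString
  generatedToOriginal.put(generated, originalExecId) // step 3
  generated // run the constituent commands under this id
}

def reportSuccess(originalExecId: String): Boolean =
  succeeded.contains(originalExecId) // step 8: success iff recorded
```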
This commit makes it possible for the sbt server to render the same ui
to multiple clients. The network client ui should look nearly identical
to the console ui except for the log messages about the experimental
client.
It works by associating a ui thread with each
terminal. Whenever a command starts or completes, callbacks are invoked
on the various channels to update their ui state. For example, if there
are two clients and one of them runs compile, then the prompt is changed
from AskUser to Running for the terminal that initiated the command
while the other client remains in the AskUser state. Whenever a client
changes ui state, the existing thread is terminated if it is running and
a new thread is begun.
The UITask formalizes this process. It is based on the AskUser class
from older versions of sbt. In fact, there is an AskUserTask which is
very similar. It uses jline to read input from the terminal (which could
be a network terminal). When it gets a line, it submits it to the
CommandExchange and exits. Once the next command is run (which may or
may not be the command it submitted), the ui state will be reset.
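A simplified model of that lifecycle (illustrative only; the real UITask reads via jline and talks to the CommandExchange):
```
// Each terminal owns at most one ui thread. Changing ui state stops the
// running thread and starts a fresh one; the thread reads a single line,
// submits it, and exits.
class UIThread(name: String, readLine: () => Option[String], submit: String => Unit) {
  private val thread = new Thread(s"ui-$name") {
    setDaemon(true)
    override def run(): Unit = readLine().foreach(submit)
  }
  def start(): Unit = thread.start()
  def stop(): Unit = thread.interrupt()
}

def resetUI(current: Option[UIThread], next: UIThread): UIThread = {
  current.foreach(_.stop()) // terminate the existing thread if running
  next.start()              // begin a new one for the new ui state
  next
}
```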
The debug, info, warn and error commands should work with the
multi-client ui. When run, they set the log level globally, not just for
the client that set the level.
This commit adds support for remote clients to connect to the sbt server
and attach themselves as a virtual terminal. In order to make this work,
each connection must send a JSON-RPC request to attach to the server.
When this is received, the server will periodically query the remote
client to get the terminal properties and capabilities that allow the
remote client to act as a jline terminal proxy. There is also support
for JSON messages with the methods sbt/systemIn and sbt/systemOut that
allow io to be relayed from the remote terminal to the sbt server and
back.
Certain commands such as `exit` should be evaluated immediately. To make
this work, we add the concept of a MaintenanceTask. The CommandExchange
has a background thread that reads MaintenanceTasks and evaluates them
on demand. This allows maintenance tasks to be evaluated even while sbt
is evaluating an exec. If it weren't done this way, a user who typed
`exit` while a different remote connection was running a command
wouldn't be able to exit until that command completed.
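The maintenance loop is essentially a second, tiny command loop (a sketch with hypothetical names, not the actual CommandExchange code):
```
import java.util.concurrent.LinkedBlockingQueue

final case class MaintenanceTask(channelName: String, command: String)

// Stubs standing in for the real channel-management logic.
def disconnect(channel: String): Unit = println(s"disconnecting $channel")
def handle(task: MaintenanceTask): Unit = println(s"handling $task")

// Urgent work like `exit` goes on its own queue, drained by a dedicated
// thread, so it is evaluated even while the main loop is busy with an exec.
val maintenance = new LinkedBlockingQueue[MaintenanceTask]
val maintenanceThread = new Thread("command-exchange-maintenance") {
  setDaemon(true)
  override def run(): Unit =
    while (true) maintenance.take() match {
      case MaintenanceTask(channel, "exit") => disconnect(channel)
      case task                             => handle(task)
    }
}
maintenanceThread.start()
```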
The ServerIntents in ServerHandler did not handle
JsonRpcResponseMessage because prior to this commit, sbt clients were
primarily making requests to the server. But now the server sends
requests to the client for the terminal properties and terminal
capabilities so it was necessary to add an onResponse handler to
ServerIntent.
I had to move the network channel publishBytes method to run on a
background thread because there were scenarios in which the client
socket would get blocked: the server was trying to write on the
same thread that read the bytes from the client.
To make the console command work, it is necessary to hijack the
classloader for JLine. In MetaBuildLoader, we put a custom forked JLine
that has a setter for the TerminalFactory singleton. This allows us to
change the terminal that is used by JLine in ConsoleReader. Without this
hack, the Scala console would not work for remote clients.
Prior to this change, the server test was repeatedly performing io on
the portfile while the server was starting up. This adds a modest sleep
so that the test is just as responsive without doing needless io.
Initial draft for BSP support.
This shows two communication patterns around BSP.
First, if the request can be handled with the build knowledge readily available in `NetworkChannel`, we can reply immediately. `BuildServerImpl#onBspBuildTargets` is an example of that.
Second, if the request requires `State`, then we can forward the parameters into a custom command and reply back from the command. `BuildServerProtocol.bspBuildTargetSources` is an example of that, since it needs to invoke tasks to generate sources.
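A sketch of the dispatch between the two patterns (hypothetical signatures; the real handlers are `BuildServerImpl#onBspBuildTargets` and the `bspBuildTargetSources` command):
```
// Pattern 1: answer from NetworkChannel when no State is needed.
// Pattern 2: append a custom command; it replies later, once it has State.
def onBspRequest(method: String, respond: String => Unit, appendExec: String => Unit): Unit =
  method match {
    case "workspace/buildTargets" =>
      respond(buildTargetsJson)           // build structure already known here
    case "buildTarget/sources" =>
      appendExec("bspBuildTargetSources") // needs tasks, so go through a command
    case other =>
      respond(s"""{ "error": "unhandled method: $other" }""")
  }

def buildTargetsJson: String = """{ "targets": [] }""" // stub
```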
Fixes https://github.com/sbt/sbt/issues/3112
This unpacks Extracted as extension methods on State.
In addition, this provides a way of responding via LSP.