One issue with the remote client approach is that it is possible for
multiple clients to start multiple servers concurrently. I encountered
this in testing where in one tmux pane I'd start an sbt server and in
another I might run sbtc before the server had finished loading. This
can actually cause java processes to leak because the second process is
unable to start a server but it doesn't necessarily die after the client
that spawned it exits. This commit prevents this scenario by creating a
server socket before the build is loaded and closing it once the build is
complete. The client can then receive output bytes from the booting server
and forward input to it.
The socket that is created during boot is always a local socket, either
a UnixDomainServerSocket or a Win32NamedPipeServerSocket. At the moment,
I don't see any reason to support TCP. This socket also has no impact at
all on the normal sbt server that is started up after the project has
loaded.
The socket is hardcoded to be located at the relative path
project/target/$SOCK_NAME or the named pipe $SOCK_NAME where SOCK_NAME
is a farm hash of the absolute path of the project base directory. There
is no portfile json because it isn't needed given that we don't support TCP.
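As a rough sketch of how that location could be computed, assuming Guava's farm hash as the hashing utility (the hash implementation and the Windows pipe prefix here are illustrative, not necessarily what sbt itself uses):

```scala
import java.nio.charset.StandardCharsets
import java.nio.file.Path

import com.google.common.hash.Hashing

// Sketch only: Guava's farm hash stands in for whatever hash utility the
// build actually uses; the Windows pipe prefix is also illustrative.
object BootSocketLocation {
  private val isWindows = sys.props("os.name").toLowerCase.contains("win")

  private def sockName(baseDirectory: Path): String =
    Hashing
      .farmHashFingerprint64()
      .hashString(baseDirectory.toAbsolutePath.toString, StandardCharsets.UTF_8)
      .toString

  // Unix: a socket file at project/target/$SOCK_NAME relative to the build.
  // Windows: a named pipe identified by $SOCK_NAME alone.
  def apply(baseDirectory: Path): String = {
    val name = sockName(baseDirectory)
    if (isWindows) raw"\\.\pipe\$name"
    else baseDirectory.resolve("project").resolve("target").resolve(name).toString
  }
}
```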
After the socket is created, it listens for clients, relaying their input
to the console's input stream and relaying the process output back to
them. See the javadoc in BootServerSocket.java for further
details.
The process for forking the server is also a bit more complicated after
this change because the client will read the process output and error
streams until the socket is created and thereafter will only read output
from the socket, not the process.
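The sequencing might look roughly like the sketch below, where `bootSocketFile` and `connect` are hypothetical stand-ins for the client's actual socket discovery and connection logic:

```scala
import java.io.{ File, InputStream }
import java.net.Socket

// Sketch only: `bootSocketFile` and `connect` are hypothetical stand-ins for
// the client's real socket discovery and connection plumbing.
object BootForwarder {
  def forwardBootOutput(server: Process, bootSocketFile: File, connect: () => Socket): Unit = {
    def drain(in: InputStream): Unit =
      while (in.available > 0) System.out.write(in.read())

    // Until the boot socket exists, the only source of output is the forked
    // process, so the client drains its stdout and stderr directly.
    while (!bootSocketFile.exists && server.isAlive) {
      drain(server.getInputStream)
      drain(server.getErrorStream)
      Thread.sleep(10)
    }

    // Once the socket is up, all further output is read from the socket and
    // the process streams are left alone.
    val in = connect().getInputStream
    var b = in.read()
    while (b != -1) {
      System.out.write(b)
      b = in.read()
    }
    System.out.flush()
  }
}
```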
The existing implementation of watch did not work with the thin client.
In sbt 1.3.0, watch was changed to be a blocking command that performed
manual task evaluation. This commit makes the implementation more
similar to the pre-1.3.0 implementation, where watch modifies the state and,
after running the user-specified command(s), enters a blocking command. The new
blocking command is very similar to the shell command.
As part of this change, I also reworked some of the internals of watch
so that a number of threads are spawned for reading file and input
events. By using background threads that write to a single event queue,
we are able to block on the file events and terminal input stream rather
than polling. After this change, the cpu utilization as measured by ps
drops from roughly 2% of a cpu to 0.
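The producer/consumer shape is roughly the following sketch, with a simplified event type standing in for sbt's real watch events:

```scala
import java.io.InputStream
import java.util.concurrent.LinkedBlockingQueue

// Sketch with a simplified event type; sbt's real watch events carry more data.
sealed trait WatchEvent
final case class FileChanged(path: String) extends WatchEvent
final case class KeyPressed(key: Int) extends WatchEvent

object WatchEventQueue {
  private val events = new LinkedBlockingQueue[WatchEvent]()

  // Each source gets a daemon thread that writes into the shared queue; a file
  // watcher thread would put FileChanged events here in the same way.
  def startInputReader(in: InputStream): Thread = {
    val reader: Runnable = () => {
      var k = in.read()
      while (k != -1) { events.put(KeyPressed(k)); k = in.read() }
    }
    val t = new Thread(reader, "watch-input-reader")
    t.setDaemon(true)
    t.start()
    t
  }

  // The watch loop blocks on take() instead of polling, which is what drops
  // the idle cpu usage to roughly zero.
  def nextEvent(): WatchEvent = events.take()
}
```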
To integrate with the network client, we introduce a new UITask that is
similar to the AskUserTask but instead of reading lines and adding execs
to the command queue, it reads characters and converts them into watch
commands that we also append to the command queue.
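A simplified sketch of that character-to-command translation is below; the key bindings and action names are illustrative rather than the exact sbt API:

```scala
import java.io.InputStream

// Simplified sketch: the real UITask reads from the channel's terminal and
// appends the resulting commands to the server's command queue.
object WatchUIReader {
  sealed trait Action
  case object Trigger     extends Action // re-run the watched command(s)
  case object CancelWatch extends Action // leave the watch
  case object Ignore      extends Action

  // Illustrative key bindings only.
  def actionFor(key: Int): Action = key.toChar match {
    case '\n' | '\r' => CancelWatch // enter exits the watch
    case 'r'         => Trigger     // 'r' forces a re-run
    case _           => Ignore
  }

  def run(in: InputStream, enqueue: Action => Unit): Unit = {
    var k = in.read()
    while (k != -1) {
      actionFor(k) match {
        case Ignore => ()
        case action => enqueue(action)
      }
      k = in.read()
    }
  }
}
```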
With this new implementation, the watch task that was added in 1.3.0 no
longer works. My guess is that no one was really using it. It wasn't
documented anywhere. The motivation for the task implementation was that
it could be called within another task which would let users define a
task that monitors for file changes before running. Since this had never
been advertised and is only of limited utility anyway, I think it's fine
to break it.
I also had to disable the input-parser and symlinks tests. I'm not 100%
sure why the symlinks test was failing. It would tend to work on my
machine but fail in CI. I gave up on debugging it. The input-parser test
also fails but would be a good candidate to be moved to the client test
in the serverTestProj. At any rate, it was testing a code path that was
only exercised if the user changed the watchInputStream method which is
highly unlikely to have been done in any user builds.
The WatchSpec had become a nuisance and wasn't really preventing any
regressions, so I removed it. The scripted tests are how we test
watch.
The sbtc client can provide a UX very similar to using the sbt shell
when combined with tab completions. In fact, since some shells have a
better tab completion engine than the one provided by jline2, the
experience can be even better. To make this work, we add another entry
point to the thin client that is capable of generating completions for
an input string. It queries sbt for the completions and prints the
result to stdout, where they are consumed by the shell and fed into its
completion engine.
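Conceptually the new entry point is a single request/response exchange, roughly like the sketch below; `sendRequest` and `parseItems` are hypothetical helpers, and the `sbt/completion` method name and `query` parameter are assumptions about the server's completion API:

```scala
// Sketch only: transport and JSON parsing are elided behind hypothetical helpers.
object CompleteMain {
  def sendRequest(method: String, paramsJson: String): String = ??? // hypothetical transport
  def parseItems(responseJson: String): Seq[String] = ???           // parsing elided

  def main(args: Array[String]): Unit = {
    val query = args.mkString(" ")
    // Ask the server for completions of the partial command line...
    val response = sendRequest("sbt/completion", s"""{"query":"$query"}""")
    // ...and print one candidate per line so the shell's completion engine
    // can consume stdout directly.
    parseItems(response).foreach(println)
  }
}
```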
In addition to providing tab completions, if there is no server running
or if the user is completing `runMain`, `testOnly` or `testQuick`, the
thin client will prompt the user to ask if they would like to start an
sbt server or if they would like to compile to generate the main class
or test names. Neither powershell nor zsh supports forwarding input to
the tab completion script. Zsh will print output to stderr, so we
opportunistically start the server or complete the test class names.
Powershell does not print completion output at all, so we do not start a
server or fill completions in that case*. For fish and bash, we prompt
the user that they can take these actions so that they can avoid the
expensive operation if desired.
* Powershell users can set the environment variable SBTC_AUTO_COMPLETE
if they want to automatically start a server or compile for run and test
names. No output will be displayed so there can be a long latency
between pressing <tab> and seeing completion results if this variable is
set.
When the user presses ctrl+c, we want to cancel any running tasks that
were initiated by that client. This is a bit tricky because we may not
be sure what is running if the client is in interactive mode. To work
around this, we send a cancellation request with the special id
__CancelAll. When the NetworkChannel receives this request, it cancels
the active task if it was initiated by the client that sent the
cancellation request. The result it returns to the client indicates if
there were any tasks to be cancelled. If there were and the client was
in interactive mode, we do not exit. Otherwise we exit.
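The server-side check is roughly the following sketch, with the Exec type reduced to a command line and the name of its source channel:

```scala
// Simplified sketch: cancel the running exec only if it originated from the
// channel that sent the __CancelAll request.
object CancelAll {
  final case class Exec(commandLine: String, sourceChannel: Option[String])

  def onCancelAll(
      requestingChannel: String,
      currentExec: Option[Exec],
      cancel: Exec => Unit
  ): Boolean =
    currentExec match {
      case Some(exec) if exec.sourceChannel.contains(requestingChannel) =>
        cancel(exec)
        true // something was cancelled: an interactive client stays attached
      case _ =>
        false // nothing to cancel for this channel: the client exits
    }
}
```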
This commit adds support for remote clients to connect to the sbt server
and attach themselves as a virtual terminal. In order to make this work,
each connection must send a json rpc request to attach to the server.
When this is received, the server will periodically query the remote
client to get the terminal properties and capabilities that allow the
remote client to act as a jline terminal proxy. There is also support
for json messages with ids sbt/systemIn and sbt/systemOut that allow io
to be relayed from the remote terminal to the sbt server and back.
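A rough sketch of the io relay is below; the `send` helper is hypothetical and the byte encoding shown is an assumption, not the exact wire format:

```scala
import java.util.Base64

// Sketch only: `send` is a hypothetical transport helper and the base64 params
// are an assumed encoding.
object TerminalIORelay {
  def send(method: String, paramsJson: String): Unit = ??? // hypothetical transport

  // Bytes typed in the remote terminal are forwarded to the server as
  // sbt/systemIn messages...
  def forwardStdin(bytes: Array[Byte]): Unit = {
    val encoded = Base64.getEncoder.encodeToString(bytes)
    send("sbt/systemIn", s"""{"bytes":"$encoded"}""")
  }

  // ...and output the server produces for this channel arrives as
  // sbt/systemOut messages that the client writes straight to its own stdout.
  def onSystemOut(decoded: Array[Byte]): Unit = {
    System.out.write(decoded, 0, decoded.length)
    System.out.flush()
  }
}
```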
Certain commands such as `exit` should be evaluated immediately. To make
this work, we add the concept of a MaintenanceTask. The CommandExchange
has a background thread that reads MaintenanceTasks and evaluates them
on demand. This allows maintenance tasks to be evaluated even when sbt
is evaluating an exec. If it weren't done this way, when the user typed
exit while a different remote connection was running a command, they
wouldn't be able to exit until the command completed.
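A minimal sketch of that background loop, with the maintenance task reduced to a channel name and a thunk:

```scala
import java.util.concurrent.LinkedBlockingQueue

// Simplified sketch: the real maintenance task carries the originating channel
// and the command to run.
object MaintenanceLoop {
  final case class MaintenanceTask(channelName: String, run: () => Unit)

  private val tasks = new LinkedBlockingQueue[MaintenanceTask]()

  def submit(task: MaintenanceTask): Unit = tasks.put(task)

  // The background thread drains the queue independently of the main command
  // loop, so an `exit` from one client is handled even while another client's
  // exec is still being evaluated.
  private val loop: Runnable = () => {
    while (!Thread.currentThread.isInterrupted) tasks.take().run()
  }
  private val thread = new Thread(loop, "sbt-maintenance-thread")
  thread.setDaemon(true)
  thread.start()
}
```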
The ServerIntents in ServerHandler did not handle
JsonRpcResponseMessage because prior to this commit, sbt clients were
primarily making requests to the server. But now the server sends
requests to the client for the terminal properties and terminal
capabilities so it was necessary to add an onResponse handler to
ServerIntent.
I had to move the network channel publishBytes method to run on a
background thread because there were scenarios in which the client
socket would get blocked because the server was trying to write on the
same thread that read the bytes from the client.
To make the console command work, it is necessary to hijack the
classloader for JLine. In MetaBuildLoader, we put a custom forked JLine
that has a setter for the TerminalFactory singleton. This allows us to
change the terminal that is used by JLine in ConsoleReader. Without this
hack, the scala console would not work for remote clients.
The collectAnalysis task can be a bit slow and delays client connections
from running commands. This commit adds an option to skip the analysis
if it isn't needed. The default behavior is left as it was.
Try to parse the required semanticdbVersion in the initialization request metadata
Issue a warning if the semanticdb plugin is not enabled
Issue a warning if the semanticdb version is lower than the required version
Initial draft for bsp support.
This shows two communication patterns around BSP.
First, if the request can be handled with build knowledge that is readily available in `NetworkChannel`, we can reply immediately. `BuildServerImpl#onBspBuildTargets` is an example of that.
Second, if the request requires `State`, then we can forward the parameters into a custom command and reply from that command. `BuildServerProtocol.bspBuildTargetSources` is an example of that, since it needs to invoke tasks to generate sources.
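A simplified sketch of the second pattern; the names and types here are illustrative rather than the actual sbt API:

```scala
// Sketch only: the request is turned into a custom command so that it can be
// answered with access to State.
object BspDispatch {
  def onBspRequest(method: String, paramsJson: String, appendExec: String => Unit): Unit =
    method match {
      case "buildTarget/sources" =>
        // Needs task evaluation, so defer to a command that will reply on the
        // originating channel once it has run with the current State.
        appendExec(s"bspBuildTargetSources $paramsJson")
      case _ =>
        () // requests answerable from NetworkChannel alone are handled inline
    }
}
```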
Fixes https://github.com/sbt/sbt/issues/3112
This unpacks Extracted as State's extension methods.
In addition, this provides a way of responding via LSP.
To demonstrate [-Yno-lub](http://eed3si9n.com/stricter-scala-with-ynolub), this shows the code changes that remove lubing (not all subprojects are done).
After I made the changes, I switched Scala back to the regular 2.12.10.
This is an implementation of `textDocument/definition` request.
Supports types only, and only when the type is found in the Zinc Analysis. When source(s) are found, the editor opens the potential source(s).
This simple implementation does not use semantic data.
During the processing of `textDocument/didSave`, we will start collecting the location of Analysis files via `lspCollectAnalyses`.
Later on, when the user asks for `textDocument/definition`, sbt server will invoke a Future call to lspDefinition, which directly reads the files to locate the definition of a class.
This is the first cut of the Language Server Protocol on top of the server, and it is still a work in progress.
With this change, sbt is able to invoke `compile` task on saving files in VS Code.
This implements a JSON-based port file. Throughout the lifetime of the sbt server there will be `cwd / "project" / "target" / "active.json"`, which contains a `url` field.
Using this `url`, potential clients such as IDEs can find out which port number to hit.
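A minimal sketch of the client side, assuming only the `url` field needs to be extracted:

```scala
import java.nio.file.{ Files, Paths }

// Sketch only: the JSON is parsed naively here; a real client would use a
// proper JSON library.
object PortFile {
  def serverUrl(cwd: String): Option[String] = {
    val portfile = Paths.get(cwd, "project", "target", "active.json")
    if (!Files.exists(portfile)) None
    else {
      val text = new String(Files.readAllBytes(portfile), "UTF-8")
      // expects something like {"url":"tcp://127.0.0.1:5173"}
      """"url"\s*:\s*"([^"]+)"""".r.findFirstMatchIn(text).map(_.group(1))
    }
  }
}
```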
Ref #3508
LogManager implementation is modified to use ManagedLogger, which can swap out backing Appenders without re-creating the log instance.
The State was also changed to track `currentCommand: Option[Exec]`. `Exec` knows the origin of the command invocation, and using that we can now send the network-originated events only to the network clients.
Combined together, this implements log splitting between the sbt clients (channels).
This is the beginning of a lightweight client, which talks to the
server over Contraband-generated JSON API. Given that the server is
started on port 5173:
```
$ cd /tmp/bogus
$ sbt client localhost:5173
> compile
StatusEvent(Processing, Vector(compile, server))
StatusEvent(Ready, Vector())
StatusEvent(Processing, Vector(, server))
StatusEvent(Ready, Vector())
```