With the latest sbt snapshot, the ui would get stuck if the user entered
an empty command. They would be presented with an empty prompt and could
not input any commands. This was caused by the change in
d569abe70a that reset the prompt after a
line was read. I had tried to optimize line reading by ignoring empty
commands in UITask.readline so that we wouldn't have to create a new
thread. That optimization wasn't buying much since it only affected how
quickly the user is reprompted after entering an empty command; unless a
user is spamming the <enter> key, they shouldn't notice a difference.
This commit adds a few options to supershell:
1. Max items -- sets the maximum number of tasks to display in the progress
reports. It is hard to read more than a few items in the progress
reports, so I set the default limit to 8 and made it configurable via
the superShellMaxTasks parameter. If there are more tasks than the
limit, an additional line reports how many more tasks are running.
2. sleep -- sets how long to sleep between reports. The default is 500ms,
which still ensures that the display updates at least once per second;
the previous value of 100ms was more frequent than necessary.
3. threshold -- sets the minimum duration a task has to run before being
printed in the progress reports. The default threshold is increased
from 10ms to 100ms. This introduces a delay of threshold milliseconds
before any progress lines appear and also means that if no tasks ever
exceed the threshold, then no progress is ever displayed.
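For reference, this is roughly how the three options above end up looking in
a build.sbt. superShellMaxTasks is the key named above; superShellSleep and
superShellThreshold are my assumed names for the duration keys.
```
import scala.concurrent.duration._

// superShellMaxTasks is named above; superShellSleep and superShellThreshold
// are assumed key names for the sleep and threshold options.
Global / superShellMaxTasks := 8           // show at most 8 tasks per report
Global / superShellSleep := 500.millis     // interval between progress reports
Global / superShellThreshold := 100.millis // minimum task duration before it is shown
```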
It turns out that task progress actually introduces a fair bit of
overhead. The biggest issue is that the task progress callbacks block
the Execute main thread. This means that time in those callbacks
delays task evaluation, slowing down sbt. This was not negligible: a
significant fraction of the total time of a no-op compile in
https://github.com/jtjeferreira/sbt-multi-module-sample was spent in
TaskProgress callbacks. Prior to these changes, I ran 30 no-op compiles
in that project and the average time was about 570ms. That number kept
getting worse because of memory leaks in the TaskProgress object. After
these changes, the average dropped to 250ms, and after jit warm-up it
settled at about 200ms. I also successfully ran 5000 consecutive no-op
compiles without leaking any memory.
A lot of the overhead of task progress was in adding tasks to the
timings map in AbstractTaskProgress. Tasks were never removed, and
ConcurrentHashMap insertion time grows with the size of the map (I'm not
sure whether the growth is linear, quadratic or something else), which
is why sbt actually got slower and slower the longer it ran.
To fix this, I did something similar to what I did to manage logger
state in https://github.com/jtjeferreira/sbt-multi-module-sample. In
MainLoop, we create a new TaskProgress instance before command
evaluation and clean it up afterward. Earlier I had made TaskProgress an
object to try to ensure there was only one progress thread at a time,
and that introduced the memory leak. In addition to removing the leak, I
was able to improve performance by removing tasks from the timings map
when they complete. Unlike TaskTimings and TaskTraceEvent, TaskProgress
does not care about tasks that have already completed, so it is safe to
remove them.
In addition to fixing the memory leaks, I also reworked how the
background threads work. Instead of one thread that sleeps and prints
progress reports, we now use two single-threaded executors: one is a
scheduled executor that is used to schedule progress reports and the
other provides the thread on which the report is actually generated.
When progress starts, we schedule a recurring report that is generated
every sleep interval until task evaluation completes. Whenever we add a
new task, if we haven't previously generated a progress report, we
schedule a report in threshold milliseconds. If the task completes
before the threshold period has elapsed, we just cancel the scheduled
report. By doing things this way, we reduce the total number of reports
that are generated. Because reports effectively need to lock System.out,
the fewer we generate, the better.
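A rough sketch of that scheduling scheme, with placeholder names and values
rather than the actual sbt internals:
```
import java.util.concurrent.{ Executors, ScheduledFuture, TimeUnit }

// Illustrative sketch only; names and numbers are placeholders.
object ProgressScheduling {
  private val sleep = 500L      // ms between recurring reports
  private val threshold = 100L  // ms a task must run before the first report

  private val scheduler = Executors.newSingleThreadScheduledExecutor()
  private val reporter = Executors.newSingleThreadExecutor()

  // Rendering happens on the dedicated reporter thread so that the code that
  // locks System.out never runs on a task evaluation thread.
  private def report(): Unit =
    reporter.submit(new Runnable { def run(): Unit = println("...progress lines...") })

  // Recurring report every `sleep` ms while tasks are evaluating.
  val recurring: ScheduledFuture[_] = scheduler.scheduleAtFixedRate(
    new Runnable { def run(): Unit = report() }, sleep, sleep, TimeUnit.MILLISECONDS)

  // The first report is delayed by `threshold` ms; if the task completes
  // before then, cancelling the future means nothing is ever printed.
  val firstReport: ScheduledFuture[_] = scheduler.schedule(
    new Runnable { def run(): Unit = report() }, threshold, TimeUnit.MILLISECONDS)
  def taskCompleted(): Unit = firstReport.cancel(false)
}
```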
I also modified the internal data structures of AbstractTaskProgress so
that there is a single task map of timings instead of one map for
timings and one for active tasks.
It was a bit tricky to reason about the state of the prompt for a
terminal. To make this clearer, I reworked the flow so that the
LineReader always sets the prompt to Pending after it reads a command.
In MainLoop, we cache the prompt value and temporarily set it to Running
while the command is running, which is really how it should have always
been.
In order to make the console task work with scala 2.13 and the thin
client, we need to provide a way for the scala repl to use an sbt
provided jline3 terminal instead of the default terminal typically built
by the repl. We also need to put jline 3 higher up in the classloading
hierarchy to ensure that two versions of jline 3 are not loaded (which
would make it impossible to share the sbt terminal with the scala
terminal). One impact of this change is that the version of
jline-terminal used by the in-process scala console is decoupled from
the version of jline-terminal specified by the scala version itself. It
is possible to override this by setting the `useScalaReplJLine` flag to
true. When that is set, the scala REPL will run in a fully isolated
classloader, which ensures that the versions are consistent. It will,
however, certainly break the thin client and may interfere with the
embedded shell ui.
As part of this work, I also discovered that jline 3 Terminal.getSize is
very slow. In jline 2, the terminal attributes were automatically cached with a
timeout of, I think, 1 second so it wasn't a big deal to call
Terminal.getAttributes. The getSize method in jline 3 is not cached and
it shells out to run a tty command. This caused a significant
performance regression in sbt because when progress is enabled, we call
Terminal.getSize whenever we log any messages. I added caching of
getSize at the TerminalImpl level to address this. The timeout is 1
second, which seems responsive enough for most use cases. We could also
move the calculation onto a background thread and have it periodically
updated, but that seems like overkill.
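The cache looks roughly like this (a sketch of the idea, not the actual
TerminalImpl code):
```
import java.util.concurrent.atomic.AtomicReference
import org.jline.terminal.{ Size, Terminal }

// Sketch only: remember the last Size for up to a second so that logging a
// message does not shell out to a tty command every time.
final class CachedSize(terminal: Terminal, timeoutMillis: Long = 1000L) {
  private val cached = new AtomicReference[(Long, Size)]((0L, null))
  def getSize: Size = {
    val now = System.currentTimeMillis
    val (timestamp, size) = cached.get
    if (size != null && now - timestamp < timeoutMillis) size
    else {
      val fresh = terminal.getSize // the expensive jline 3 call
      cached.set((now, fresh))
      fresh
    }
  }
}
```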
There are cases where, if the ui state is changing rapidly, an
AskUserThread can be created and cancelled within a short time window.
This could cause problems if the AskUserThread is interrupted during
`LineReader.createReader`, which I think can shell out to run some
commands and is therefore relatively slow. If the thread was interrupted
during the call to `LineReader.createReader` and the interruption was
not handled, then the thread would go into `LineReader.readLine`, which
wouldn't exit until the user pressed enter. This ultimately caused the
ui to break until the user pressed enter, because the zombie line reader
would be holding the lock on the terminal input stream.
Prior to these changes, sbt was leaking large amounts of memory via
log4j appenders. sbt has an unusual use case for log4j because it
creates many ephemeral loggers while also having a global logger that is
supposed to work for the duration of the sbt session. There is a lot of
shared global state in log4j and properly cleaning up the ephemeral task
appenders would break global logging. This commit fixes the behavior by
introducing an alternate logging implementation. Users can still use the
old log4j logging implementation but it will be off by default. The
internal implementation is very simple: it just blocks the current
thread and writes to all of the appenders. Nevertheless, I found the
performance to be roughly identical to that of log4j in my sample
project. As an experiment, I did the appending on a thread pool and got
a significant performance improvement but I'll defer that to a later PR
since parallel io is harder to reason about.
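A minimal sketch of what that synchronous approach amounts to (these are not
the actual sbt classes):
```
// Sketch only: the logging call blocks the caller and writes the event to
// every registered appender in turn; no background logging threads.
trait MiniAppender { def appendLog(level: String, message: String): Unit }

final class MiniLogger(appenders: List[MiniAppender]) {
  def log(level: String, message: String): Unit =
    appenders.foreach(_.appendLog(level, message))
}
```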
Background: I was testing sbt performance in
https://github.com/jtjeferreira/sbt-multi-module-sample and noticed that
performance rapidly degraded after I ran compile a few times. I took a
heap dump and it became obvious that sbt was leaking console appenders.
Further investigation revealed that all of the leaking appenders in the
project were coming from task streams. This made me think that the fix
would be to track what loggers were created during task evaluation and
clear them out when task evaluation completed. That almost worked,
except that log4j has an internal append-only data structure containing
logger names. Since we create unique logger names for each run, that
internal data structure grew without bound. It looked like this could be
worked around by creating a new log4j Configuration (where that data
structure is stored), but while creating a new configuration for each
task run did fix the leak, it also broke global logging, which was using
a different configuration. At this point, I decided to write an
alternate
implementation of the appender api where I could be sure that the
appenders were cleaned up without breaking global logging.
Implementation: I made ConsoleAppender a trait that no longer extends
the log4j AbstractAppender. To do this, I had to remove the one
log4j-specific method, append(LogEvent). ConsoleAppender now has a
method toLog4J that, in most cases, returns a log4j Appender that is
almost identical to the Appenders we previously used. To manage the
loggers created during task evaluation, I introduced a new class,
LoggerContext. The LoggerContext determines which logging backend to use
and keeps track of what appenders and loggers have been created. We can
create a fresh LoggerContext before each task evaluation and clear it
out, cleaning up all of its resources after task evaluation concludes.
In order to make this work, there were many places where we needed to
either pass in a LoggerContext or create a new one. The main magic
happens in the `next(State)` method in Main: that is where we create a
new LoggerContext prior to command evaluation and clean it up after the
evaluation completes.
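Schematically, the per-command lifecycle looks something like this (the trait
and method names below are stand-ins; the real LoggerContext lives in sbt
internals and has a richer api):
```
// Stand-in types for illustration only.
trait LoggerContextLike extends AutoCloseable {
  def close(): Unit // clears out the appenders and loggers created in this context
}

def evaluateCommand[State](state: State, newContext: () => LoggerContextLike)(
    evaluate: State => State): State = {
  val context = newContext() // fresh LoggerContext before command evaluation
  try evaluate(state)        // loggers created during evaluation belong to `context`
  finally context.close()    // closing the context cleans up its resources
}
```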
Users can toggle log4j using the new useLog4J key. They can also set
the system property sbt.log.uselog4j. The global logger will use the sbt
internal implementation unless the system property is set.
There are a fairly significant number of mima issues since I changed the
type of ConsoleAppender, but all of the mima changes are in the
sbt.internal package, so I think this should be ok.
Effects: the memory leaks are gone. I successfully ran 5000 no-op
compiles in the sbt-multi-module-sample project above with no
degradation of performance. Previously, there was a noticeable
degradation after 30 no-op compiles.
During the refactor, I had to work on TestLogger, and in doing so I
also fixed https://github.com/sbt/sbt/issues/4480.
This should also fix https://github.com/sbt/sbt/issues/4773.
Zinc frequently needs to check the library classpath to ensure that
class names are defined in a given jar. There is a cost to looking up
the class names in a jar, so it's a benefit to cache this across runs so
that we don't have to redo the same work every time. More importantly,
in testing with the latest sbt HEAD, I found that sbt would crash fairly
frequently because it ran out of direct memory, which is used by nio to
read and write native memory without copying. The direct memory area
sits outside the java heap, and if it reaches its limit, the jvm crashes
hard as though kill -9 was invoked. After caching the entries, I stopped
seeing crashes.
Rather than relying on a command, I realized it makes more sense to
explicitly set the terminal for the calling channel in MainLoop. By
doing it this way, we can also ensure that we always reset it to the
previous value.
Using the scala reflect library always introduces significant
classloading overhead. We can eliminate the classloading overhead by
generating StringTypeTags at compile time instead.
This sped up average project loading time by a few hundred milliseconds
on my computer. The ManagedLoggedReporter in zinc is still using the
type-tag-based apis, but after the next sbt release we can upgrade the
zinc apis. We could also consider breaking binary compatibility.
sbt depends on scalacache (which hasn't been updated in about a year)
and we really don't need the functionality provided by scalacache. In
fact, the java api is somewhat easier to work with for our use case. The
motivation is that scalacache uses slf4j for logging which meant that it
was implicitly loading log4j. This caused some noisy logs during
shutdown when the previously unused cache was initialized just to be
cleaned up.
This commit also upgrades caffeine and moving forward we can always
upgrade caffeine (and potentially shade it) without any conflict with
the scalacache version.
Upon successful registration with a FileTreeRepository, the
FileTreeRepository returns an Observable that can be used to observe the
specific globs that were registered. The FileTreeRepository also has a
global Observable that can be used to monitor _all_ events. In order to
implement this feature, the FileTreeRepository internally needs to hold
a reference to each registered Observable so that it can forward
relevant file change events. If we do not close the Observable, it leaks
memory inside of the FileTreeRepository. There were a number of places
within sbt where we registered globs and did nothing with the returned
Observable. It was thus straightforward to fix the leak by just closing
the returned Observables.
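The fix is essentially this pattern (the repository interface here is
simplified for illustration):
```
// Simplified stand-ins; the real types live in sbt's io and watch internals.
trait ObservableLike extends AutoCloseable
trait Repository { def register(glob: String): ObservableLike }

// When we register globs only for their side effect, close the returned
// Observable so the repository stops holding a reference to it.
def registerOnly(repo: Repository, globs: Seq[String]): Unit =
  globs.foreach { glob =>
    val observable = repo.register(glob)
    observable.close() // previously this return value was silently dropped
  }
```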
This came up because I was looking at a heap dump of
https://github.com/jtjeferreira/sbt-multi-module-sample after running
1000 no-op compiles and noticed that the FileTreeRepository.observables
were taking up 75MB out of a total heap of about 300MB.
As a side note, it would be nice if sbt had a warning for unused return
values when a statement is not the last in a block. It's possible that
these leaks wouldn't have happened if we were forced to handle the
returned Observables.
Ref https://github.com/sbt/zinc/pull/744
This implements `ThisBuild / usePipelining`, which configures subproject pipelining available from Zinc 1.4.0.
The basic idea is to start subproject compilation as soon as the pickle JARs (early output) become available. This is in part enabled by the Scala compiler's new flags `-Ypickle-java` and `-Ypickle-write`.
The other part of magic is the use of `Def.promise`:
```
earlyOutputPing := Def.promise[Boolean],
```
This notifies the `compileEarly` task, which to the rest of the tasks looks like a normal task but is in fact promise-blocked. In other words, unless the full `compile` task is also invoked, `compileEarly` will never return, forever waiting for `earlyOutputPing`.
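Enabling it in a build is then a single setting:
```
ThisBuild / usePipelining := true
```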
It can easily take 2ms or more to parse a command depending on the
state's combined parser. There are some commands that sbt requires to
work that we can handle in microseconds instead of milliseconds by
special casing them.
After this change, I saw the performance of
https://github.com/eatkins/scala-build-watch-performance improve by a
consistent 4-5ms in the 3-source-file example, a drop from 120ms to
115ms. While not necessarily earth shattering, this difference
could theoretically be much worse in other projects that have a lot of
plugins and custom tasks/commands. I think it's worth the modest
maintenance cost.
Ref https://github.com/sbt/sbt/issues/5710
Ref https://github.com/sbt/librarymanagement/pull/339
This adds the `versionScheme` setting. When set, it is included in the POM and gets picked up on the other side as an extra attribute of ModuleID. That information in turn is used to inform the eviction warning.
This should reduce the false positives associated with SemVer'ed libraries showing up in the eviction warning.
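For example, a library that follows early SemVer would declare something like this (assuming "early-semver" is the scheme string sbt recognizes):
```
ThisBuild / versionScheme := Some("early-semver")
```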
The 1.4.0 implementation of watch uses a concurrent hash map to maintain
the global watch state which manages the state for an arbitrary number
of clients. Using a mutable map is not idiomatic sbt and I found it
difficult to reason about when the map was updated. This commit reworks
the feature so that the global state is instead stored in an immutable
map that is only modified during the internal watch commands, which is
easier to reason about.
The EventsTest changes kept appearing. I'm not sure why scalafmt check
was allowing it before. My vim status bar warns me about trailing spaces
and I noticed the two in Keys.scala and removed them.
In eb688c9ecd, we started buffering output
to the remote client to reduce flickering. This was causing problems
with the output for the thin client in batch mode. With the delay, it
was possible for the client to exit before all of its output had been
displayed.
Bonus: only display aggregation error message if terminal has success
enabled (the thin client displays its own timing message so the message
in aggregation ended up being a duplicate).
It is expensive to compute the hash of every jar on the classpath, so
we can try to avoid that by using the timeWrappedStamper, which only
computes the hash if the last modified time has changed.
Using managedCached introduced an unintended performance regression
because it ensured that we always computed the hash of each jar on the
dependency classpath. The backing ReadStamps only computes the stamp if
the timestamp of the jar has changed.
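The idea behind the time-wrapped stamper, roughly (not the zinc ReadStamps
api, just the gating logic):
```
import java.io.File
import java.util.concurrent.ConcurrentHashMap

// Sketch: only recompute the (expensive) content hash of a jar when its
// last-modified time has changed since the previous stamp.
final class TimeGatedHasher(hash: File => String) {
  private val cache = new ConcurrentHashMap[File, (Long, String)]()
  def stamp(jar: File): String = {
    val lastModified = jar.lastModified
    val cached = cache.get(jar)
    if (cached != null && cached._1 == lastModified) cached._2
    else {
      val hashed = hash(jar) // expensive: reads the whole jar
      cache.put(jar, (lastModified, hashed))
      hashed
    }
  }
}
```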
The sbt project load made a number of relatively inefficient
transformations of scala collections. I went through and found the slow
parts during project loading and made my best attempt at fixing them.
The most significant changes I made were in places using IMap. An IMap
is more or less a wrapper around an immutable Map. It can be much faster
to construct an IMap by creating a mutable java hashmap, wrapping it in
a scala Map that delegates to the underlying java hashmap (with a copy
on write if the map is modified), and constructing the IMap from the
wrapped map. It was also possible in many cases to parallelize some
transformations wherever the order didn't matter.
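The wrap-a-java-hashmap trick looks roughly like this (a sketch of the
cheap-build idea only; the copy-on-write wrapper in the actual change is more
involved):
```
import java.util.{ HashMap => JHashMap }
import scala.collection.JavaConverters._

// Sketch: accumulate entries in a mutable java HashMap (cheap insertion, no
// per-entry allocation of immutable map nodes), then expose the result through
// a scala Map view. A full immutable copy is only taken if a caller needs one.
def buildMap[K, V](entries: Iterator[(K, V)]): scala.collection.Map[K, V] = {
  val jmap = new JHashMap[K, V]()
  entries.foreach { case (k, v) => jmap.put(k, v) }
  jmap.asScala // call .toMap only if a genuinely immutable Map is required
}
```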
After applying all of these changes, I found that loading the sbt
project took generally between 8.5 and 9 seconds on my laptop. With
1.3.13, it hovered around 11.5 seconds. I saw a similar speedup in zinc.
The biggest specific improvement was that generating the compiled map
dropped from between 3.5-4 seconds to pretty consistently being around
1.5 seconds.
This reverts commit b1dcf031a5.
I found that b1dcf031a5 had some
unintended consequences that seemed to mess up the prompt state. The
real problem that it was trying to address was that the prompt was being
interleaved with log messages in some scenarios. There was a different
way to fix that in ProgressState that was both simpler and more
reliable.
The jline2 history file format is incompatible with jline3 and jline3
prints a very noisy warning if it detects such a file. History will also
not work with jline3 until you remove or reformat the old file.
I noticed that when reloading the build, certain errors are logged by
sbt to System.err. These were not shown to a thin client because we
weren't forwarding System.err. This change remedies that.
System.err is handled more simply than System.out. We do not put
System.err through the progress state because System.err generally
tends to be unbuffered. I had hesitated to add System.err to the
Terminal interface at all, to give users an escape hatch, but I couldn't
get project loading to work well with the thin client without it.
In db4878c786, TaskProgress became an
object which mostly made things easier to reason about. The one problem
was that it started leaking tasks with every run because the timings map
would accumulate tasks that weren't cleared. To fix this, we can clear
the timings and activeTasksMap in the TaskProgress object in the
afterAllCompleted callback. Some extra null checks needed to be added
since it's possible for the maps to not contain a previously added key
after reset has been called.
This fixes the nio/external-hooks test and also restores the performance
of the benchmarks for the latest sbt version in
https://github.com/eatkins/scala-build-watch-performance which had
regressed when the custom ExternalHooks were disabled in
7c4b01d9f7.
The main change swaps the ReadStamps object that is passed into the
compiler options for one that uses the unmanagedFileStampCache and
managedFileStampCache for source files and falls back to the default
stamper otherwise. This improves performance quite significantly since
we only hash the files once. It also means that the analysis file will
contain the source file stamps from when compilation began, rather than
when compilation ended. That is what nio/external-hooks was testing. In
the real world, what could happen was that a user modified a source file
during compilation but no incremental re-compilation would occur,
because after the initial compilation completed, zinc wrote the stamp of
the modified source file into the analysis file even though it may have
actually compiled a different version of that source file.
I noticed some RejectedExecutionExceptions in travis failures of
ClientTest. This could happen if we try to write output to the network
channel after it has been closed. To avoid this problem, we can catch
the RejectedExecutionException and do an immediate flush if the executor
has been shut down.
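The fallback is roughly this shape (illustrative only; flushNow stands in for
whatever writes the output synchronously):
```
import java.util.concurrent.{ ExecutorService, RejectedExecutionException }

// Sketch: if the channel's executor was shut down while we were enqueueing
// output, write and flush synchronously instead of dropping the output.
def writeOutput(executor: ExecutorService, flushNow: () => Unit): Unit =
  try {
    executor.submit(new Runnable { def run(): Unit = flushNow() })
    ()
  } catch {
    case _: RejectedExecutionException if executor.isShutdown => flushNow()
  }
```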
In order for sbt to function well, it needs the test interface, jansi
and forked jline jars to be provided by a classloader that is a parent
of all other sbt classloaders. To do this for just the test interface
jar, I had simply checked whether the top loader in the app
configuration had the correct name. Now that there are three jars, this
is more complicated, so I updated the launcher to create a top loader
that implements a getEarlyJars method returning the three needed jars.
This is a much more scalable design.
If sbt is started with a configuration whose top loader does not define
the getEarlyJars method, then we just fall back on constructing the
default layered classloader from the configuration classpath.
The motivation for this change is that I discovered that sbt immediately
crashed when I tried to run a non-snapshot version. After this change, I
verified that both snapshot and non-snapshot versions of the latest sbt
code could load with both an obsolete and up-to-date launcher.
The unprompt method will actually kill the ui thread if it's running.
If we don't do this, we can get into a weird state where, after watch is
exited by <enter>, the ask user thread spins up but, before it can print
the prompt, the terminal prompt is changed to Running, which has an
empty prompt. The end result is that jline3 never displays the prompt
even though the line reader is active and reading bytes. When the user
typed <enter>, a prompt would appear. They could also input a command
and it would run, but it wasn't obvious what would happen since the
prompt was missing.
The build source check is evaluated at times when we can't be
completely sure that the global logger is pointing at the terminal that
initiated the reload (which may be a passive watch client). To work
around this, we
can inspect the exec to determine which terminal initiated the check and
write any output directly to that terminal.