```
[error] (zincLmIntegrationProj / Test / executeTests) java.lang.LinkageError: loader constraint violation in interface itable initialization for class sbt.internal.inc.ZincComponentCompiler$ZincCompilerBridgeProvider: when selecting method 'xsbti.compile.ScalaInstance xsbti.compile.CompilerBridgeProvider.fetchScalaInstance(java.lang.String, xsbti.Logger)' the class loader sbt.internal.SbtInterfaceLoader @1224144a for super interface xsbti.compile.CompilerBridgeProvider, and the class loader sbt.internal.BottomClassLoader @52daa20b of the selected method's class, sbt.internal.inc.ZincComponentCompiler$ZincCompilerBridgeProvider have different Class objects for the type xsbti.Logger used in the signature (xsbti.compile.CompilerBridgeProvider is in unnamed module of loader sbt.internal.SbtInterfaceLoader @1224144a, parent loader sbt.internal.MetaBuildLoader$1 @6f36c2f0; sbt.internal.inc.ZincComponentCompiler$ZincCompilerBridgeProvider is in unnamed module of loader sbt.internal.BottomClassLoader @52daa20b, parent loader sbt.internal.ReverseLookupClassLoader @39b00393)
```
Ref https://github.com/scala/scala-parallel-collections/issues/22
Parallel collections were split off without a source-compatible library, so apparently we need to roll our own compat hack; the hack leaves an unused import on one side of the cross build, so it needs to be paired with silencer.
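A minimal sketch of one common shape of this hack (file paths and names here are illustrative, not the actual sbt sources). A 2.12-only source file supplies an empty object at the path the 2.13 import expects:
```scala
// src/main/scala-2.12/scala/collection/parallel/CollectionConverters.scala
// On 2.12, .par is built into scala-library, so this empty shim exists
// only so the shared import below compiles on both Scala versions.
package scala.collection.parallel

object CollectionConverters
```
The shared sources can then use one import for both versions:
```scala
// On 2.13 this import provides .par via scala-parallel-collections; on
// 2.12 it resolves to the empty shim and is flagged as an unused
// import, which is why it has to be paired with silencer.
import scala.collection.parallel.CollectionConverters._

object Example {
  val squares = (1 to 100).par.map(n => n * n)
}
```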
The bytecode generated by the 2.13 compiler contains scala.runtime.BoxedUnit in the constant pool. To avoid referencing scala-library, port XMainConfiguration to Java.
When using the launcher's classpath for the metabuild, the scala-compiler jar can be missing. This is because the managedJars method returns only the scala-library jar and not the rest of the Scala instance. To fix this, we can always prepend the Scala instance jars to the classpath.
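A minimal sketch of the idea, assuming access to the metabuild's xsbti.compile.ScalaInstance (the helper name is hypothetical):
```scala
import java.io.File
import xsbti.compile.ScalaInstance

// Hypothetical helper: put the full Scala instance (library, compiler,
// reflect) ahead of whatever jars the launcher happened to provide, so
// scala-compiler is always present on the metabuild classpath.
def metabuildClasspath(instance: ScalaInstance, launcherJars: Seq[File]): Seq[File] =
  (instance.allJars.toSeq ++ launcherJars).distinct
```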
In order to simulate the issue in scripted, I had to manually remove the scala-compiler.jar from the scripted classpath; otherwise the scripted test that I added wouldn't actually exercise anything, because the scala-compiler.jar would end up on the app.provider.mainClasspath.
Fixes #4452
Refactor remote caching to be scoped to configuration.
In addition, this avoids the use of the dependency resolver (since I'm not resolving anything) and directly invokes the Ivy resolver for the artifact, somewhat analogous to the publishing process.
This should speed up `pullRemoteCache` since it avoids the POM download as well.
For sbt-binary-remote-cache this created a bit of complication, since the (publishing) resolver doesn't behave correctly as a (downloading) resolver in terms of credentials, so I had to create a new key `remoteCacheResolvers` to allow asymmetric resolvers.
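A hypothetical build.sbt sketch of the asymmetric setup (the URL and resolver name are placeholders):
```scala
// Push through the publishing resolver...
ThisBuild / pushRemoteCacheTo := Some(
  "Remote Cache" at "https://example.com/remote-cache"
)
// ...but pull through a separately configured resolver, so the two
// sides can carry different credential handling.
ThisBuild / remoteCacheResolvers := List(
  "Remote Cache" at "https://example.com/remote-cache"
)
```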
I was seeing some failures in Travis CI where the build crashed because it wasn't able to open a socket. This failure was happening in the scalatest fork runner, so I decided to avoid forking entirely. An alternative approach would be to remove the scalatest framework from the server test project, but not forking is nicer anyway.
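Assuming the server tests live in a project like the hypothetical one below, the change amounts to:
```scala
// build.sbt (illustrative project name): run the tests in-process so
// the scalatest fork runner never needs to open a socket.
lazy val serverTestProj = (project in file("server-test"))
  .settings(
    Test / fork := false
  )
```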
The old sbt launcher uses jansi 1.11, which is incompatible with jline3. To work around this, we can use the jna terminal implementation for the jline system terminal. This commit also switches to using the jline TerminalBuilder for all system terminals except for the windows system terminal with the thin client. The jline TerminalBuilder uses reflection that is difficult to make work with the thin client, and it is much easier to just construct that terminal manually. This is only necessary on windows because on posix the thin client will fall back to an implementation that shells out for stty commands.
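A sketch of the workaround using jline3's TerminalBuilder (an illustrative helper, not sbt's actual code):
```scala
import org.jline.terminal.{ Terminal, TerminalBuilder }

// Prefer the jna provider and explicitly disable jansi so the
// incompatible jansi 1.11 from the old launcher is never loaded.
def systemTerminal(): Terminal =
  TerminalBuilder.builder()
    .system(true)
    .jna(true)
    .jansi(false)
    .build()
```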
It turns out that task progress actually introduces a fair bit of overhead. The biggest issue is that the task progress callbacks block the Execute main thread, which means that time spent in those callbacks delays task evaluation, slowing down sbt. This was not negligible: I was seeing that a large fraction of the total time of a no-op compile in https://github.com/jtjeferreira/sbt-multi-module-sample was spent in TaskProgress callbacks. Prior to these changes, I ran 30 no-op compiles in that project and the average time was about 570ms. This number got worse and worse because there were memory leaks in the TaskProgress object. After these changes, it dropped to 250ms, and after jit-ing it would drop to about 200ms. I also successfully ran 5000 consecutive no-op compiles without leaking any memory.
A lot of the overhead of task progress was in adding tasks to the timings map in AbstractTaskProgress. Tasks were never removed, and ConcurrentHashMap insertion time grows with the size of the map (I'm not sure if it's linear, quadratic, or something else), which is why sbt actually got slower and slower the longer it ran: much of the time was spent adding tasks to the progress timings.
To fix this, I did something similar to what I did to manage logger state in https://github.com/jtjeferreira/sbt-multi-module-sample. In MainLoop, we create a new TaskProgress instance before command evaluation and clean it up after. Earlier I had made TaskProgress an object to try to ensure there was only one progress thread at a time, and that introduced the memory leak. In addition to removing the leak, I was able to improve performance by removing tasks from the timings map when they completed. Unlike TaskTimings and TaskTraceEvent, TaskProgress doesn't care about tasks that have completed, so it is safe to remove them.
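A minimal sketch of the bookkeeping (a hypothetical class, not the actual sbt internals): a single map holds start times, and entries are dropped the moment a task completes, so the map only ever contains running tasks.
```scala
import java.util.concurrent.ConcurrentHashMap

final class ProgressTimings[Task] {
  // single map: its key set doubles as the set of active tasks
  private val timings = new ConcurrentHashMap[Task, Long]
  def taskStarted(task: Task): Unit = { timings.put(task, System.nanoTime()); () }
  // removing on completion keeps the map small, so insertion cost no
  // longer grows over the lifetime of the sbt session
  def taskCompleted(task: Task): Unit = { timings.remove(task); () }
  def activeTasks: java.util.Set[Task] = timings.keySet
}
```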
In addition to fixing the memory leaks, I also reworked how the background threads work. Instead of having one thread that sleeps and prints progress reports, we now use two single-threaded executors. One is a scheduled executor that is used to schedule progress reports; the other is the actual thread on which the report is generated. When progress starts, we schedule a recurring report that is generated every sleep interval until task evaluation completes. Whenever we add a new task, if we haven't previously generated a progress report, we schedule a report in threshold milliseconds. If the task completes before the threshold period has elapsed, we just cancel the scheduled report. Doing things this way reduces the total number of reports that are generated. Because reports need to effectively lock System.out, the fewer we generate, the better.
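A sketch of the two-executor scheme (hypothetical names and structure; the real TaskProgress is more involved):
```scala
import java.util.concurrent.{ Executors, ScheduledFuture, TimeUnit }

final class ProgressReporter(interval: Long, threshold: Long)(report: () => Unit) {
  // one thread decides *when* a report is due...
  private val scheduler = Executors.newSingleThreadScheduledExecutor()
  // ...and a second thread actually renders it, so rendering never
  // blocks the Execute main thread
  private val renderer = Executors.newSingleThreadExecutor()

  private def render(): Unit = {
    renderer.submit(new Runnable { def run(): Unit = report() }); ()
  }

  // recurring report for the lifetime of one task evaluation run
  def start(): ScheduledFuture[_] =
    scheduler.scheduleAtFixedRate(
      new Runnable { def run(): Unit = render() },
      interval, interval, TimeUnit.MILLISECONDS)

  // the first report for a new task is delayed by the threshold; if
  // the task finishes first, the caller cancels it so short tasks
  // never produce any output at all
  def onNewTask(): ScheduledFuture[_] =
    scheduler.schedule(
      new Runnable { def run(): Unit = render() },
      threshold, TimeUnit.MILLISECONDS)

  def onTaskDone(pending: ScheduledFuture[_]): Unit = { pending.cancel(false); () }

  def shutdown(): Unit = { scheduler.shutdownNow(); renderer.shutdownNow(); () }
}
```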
I also modified the internal data structures of AbstractTaskProgress so that there is a single map of task timings instead of one map for timings and one for active tasks.