The `session save` command has the side effect of modifying a "*.sbt"
file, so we don't want to warn about changes or automatically reload when
we return to the shell. Setting the hasCheckedMetaBuild attribute key to
false is sufficient to prevent this.
Ref: https://github.com/sbt/sbt/issues/4813
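A minimal sketch of the mechanism, assuming a Boolean-valued attribute key as the description above implies (the actual key definition and wiring in sbt may differ):

```scala
import sbt._

// Hypothetical stand-in for sbt's actual key definition.
val hasCheckedMetaBuild: AttributeKey[Boolean] =
  AttributeKey[Boolean]("has-checked-meta-build")

// After `session save` rewrites the *.sbt file, clear the flag so the next
// prompt neither warns about the just-saved changes nor reloads the build.
def afterSessionSave(state: State): State =
  state.put(hasCheckedMetaBuild, false)
```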
It didn't really make sense for Continuous to use the other command
parser and then reparse the results. I was able to slightly simplify
things by using the multi parser directly.
We run into issues if we naively split the command input on ';' (except
when the ';' is inside a string) and treat each part as a separate
command, because it is also valid to have ';' inside braced expressions,
e.g. `set foo := { val x = 1; x + 1 }`. There was no parser for
expressions enclosed in braces, so I added one that parses any expression
wrapped in braces as long as each opening brace is matched by a closing
brace. The parser returns the original expression, which allows the
multi parser to ignore ';' inside of '{...}'.
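The gist of the brace matching, as a standalone sketch rather than the actual parser combinator code: track the nesting depth and hand back the original `{...}` text unchanged (braces inside string literals are ignored here for brevity).

```scala
// Returns the braced expression verbatim plus the remaining input, or None if
// the input does not start with '{' or the braces are unbalanced.
def parseBraces(input: String): Option[(String, String)] = {
  if (!input.startsWith("{")) None
  else {
    var depth = 0
    var i = 0
    while (i < input.length && (depth > 0 || i == 0)) {
      input.charAt(i) match {
        case '{' => depth += 1
        case '}' => depth -= 1
        case _   => // any other character, including ';', is just carried along
      }
      i += 1
    }
    if (depth == 0) Some((input.substring(0, i), input.substring(i)))
    else None
  }
}

// parseBraces("{ val x = 1; x + 1 } more") == Some(("{ val x = 1; x + 1 }", " more"))
```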
I had to rework the scripted tests to run 'reload' and 'setUpScripted'
individually because the new parser rejects 'setUpScripted': it isn't a
valid command until 'reload' has run.
It was reported in https://github.com/sbt/sbt/issues/4808 that compared
to 1.2.8, sbt 1.3.0-RC2 will truncate the command args of an input task
that contains semicolons. This is actually intentional, but not
completely robust. For sbt >= 1.3.0, we are making ';' syntactically
meaningful. This means that it always represents a command separator
_unless_ it is inside of a quoted string. To enforce this, the multi
parser effectively splits the input on ';' and then validates each
extracted command. If any command is invalid, it throws an exception. If
the input is not a multi command, parsing fails with a normal failure.
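For example, `compile; test` is parsed as two commands; `run "a;b"` stays a single command because the ';' is inside a quoted string; and `run a;b` is rejected (assuming `b` is not itself a valid command), because the extracted command `b` fails validation.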
I removed the multi command from the state's defined commands and reworked
State.combinedParser to explicitly first try multi parsing and fall back
to the regular combined parser if it is a regular command. If the multi
parser throws an uncaught exception, parsing fails even if the regular
parser could have successfully parsed the command. The reason is so that
we do not ever allow the user to evaluate, say, 'run a;b'. Otherwise the
behavior would be inconsistent when the user runs 'compile; run a;b'.
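The shape of the change, as an illustrative sketch rather than the actual State.combinedParser code (the parser names and result types here are hypothetical):

```scala
import sbt.internal.util.complete.Parser

// Try the multi parser first; only when the input is not a multi command at
// all do we fall back to the single-command parser. An exception thrown while
// validating one of the extracted commands is deliberately not caught here,
// so 'compile; run a;b' fails instead of quietly parsing some other way.
def combinedParser(
    multiParser: Parser[List[String]],
    singleParser: Parser[String]
): Parser[List[String]] =
  multiParser | singleParser.map(_ :: Nil)
```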
I noticed that in 1.3.0-RC2, the following commands worked:
++2.11.12 ;test
;++2.11.12;test
but
++2.11.12; test
did not. This issue is fixed in the next commit.
It's very expensive to compute the hash code of a deeply nested
Incomplete. To prevent a loop, we only want to check for object
equality, which we can do with an IdentityHashMap.
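A sketch of the idea, assuming the usual Incomplete shape with a `causes` field (the traversal itself is illustrative, not the actual code):

```scala
import java.util.{ Collections, IdentityHashMap }
import sbt.Incomplete

// Track visited nodes by reference identity so a cyclic or deeply nested
// Incomplete never triggers a full recursive hashCode computation.
def foreachIncomplete(root: Incomplete)(f: Incomplete => Unit): Unit = {
  val seen = Collections.newSetFromMap(new IdentityHashMap[Incomplete, java.lang.Boolean])
  def loop(i: Incomplete): Unit =
    if (seen.add(i)) { // add returns false if this exact instance was already visited
      f(i)
      i.causes.foreach(loop)
    }
  loop(root)
}
```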
It didn't make sense to aggregate the watch start command if it was
defined in multiple sources so we previously just fell back to the
default message if multiple commands were being run. This, however,
meant that if you ran, say, ~compile in an aggregate project, it wasn't
possible to customize the start message. Someone in the sbt gitter
channel found the new message too verbose and wanted to print something
shorter, and I realized that this was an unfortunate restriction.
Instead of giving up, we can just use the
project's watchStartMessage as a default. If the watchStartMessage
setting is unset for some reason, we can fall back to the default.
I validated this change manually in the swoval project, which has an
aggregate root project, by running
set ThisBuild / watchStartMessage := { (_, _, _) => None }
and indeed nothing was printed after each task evaluation in '~compile'.
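Conversely, a build that prefers a shorter message can now set something
like (assuming the first parameter is the iteration count):
set ThisBuild / watchStartMessage := { (count, _, _) => Some(s"waiting for changes... (iteration $count)") }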
There was an incomplete pattern match that assumed that the jars in the
scala provider included one with the name "scala-library.jar". In
practice, I think this is always true, but it's safer to have a fallback
case and it also removes the compiler warning.
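Roughly the safer shape (not the actual code; the error handling here is just illustrative):

```scala
import java.io.File

def scalaLibraryJar(providerJars: Seq[File]): File =
  providerJars.find(_.getName == "scala-library.jar") match {
    case Some(lib) => lib
    case None      => // fallback case that was previously missing
      sys.error(s"expected scala-library.jar among ${providerJars.mkString(", ")}")
  }
```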
We discovered that turbo mode did not work in the sbt settings project.
I tracked this down to the dependency classloader bundle not being
thread safe.
I realized that some builds may crash if we automatically close the
classloaders. While I do think that closing the loaders by default is a
good thing in general, we should have an option for suppressing this
behavior.
I made all of the custom classloaders that we define for test and run
check this property before calling the super.close method.
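The guard, sketched (the `allowClose` check stands in for whatever setting or property controls the behavior):

```scala
import java.net.{ URL, URLClassLoader }

class ManagedClassLoader(urls: Array[URL], parent: ClassLoader, allowClose: () => Boolean)
    extends URLClassLoader(urls, parent) {
  // Only actually close when the build has not opted out of classloader closing.
  override def close(): Unit = if (allowClose()) super.close()
}
```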
We want to notify users about the new features available in sbt 1.3.0 to
increase visibility. Turbo mode especially can benefit many builds, but
we have opted to leave it off by default for now.
The banner will be displayed the first time sbt enters the shell command
on each sbt run. The banner can be disabled globally with the sbt.banner
system property. It can also be disabled for a given sbt version by
running the skipWelcomeBanner command. That command touches a file in the
~/.sbt/1.0 directory to make the setting persistent across all projects.
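For example, starting sbt with `-Dsbt.banner=false` (assuming the property
is read as a boolean flag) suppresses the banner everywhere, while running
skipWelcomeBanner
from the shell suppresses it for the current sbt version only.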
I decided creations/deletions/updates were a bit too technical rather
than descriptive. It also wasn't really correct to say Meta build
sources because the meta build is the build for the build. Instead, I
dropped Meta from the sentence. I also made the instructions that are
shown when changed sources are detected more active. I left them
capitalized since they are instructions rather than warnings:
Apply these changes by running `reload`.
Automatically reload the build when source changes are detected by setting `Global / onChangedBuildSource := ReloadOnSourceChanges`.
Disable this warning by setting `Global / onChangedBuildSource := IgnoreSourceChanges`.
Also, the indentation was wrong for the printed files when multiple files
had changed because the middle (separator) argument of mkString was " \n"
rather than "\n ".
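In a nutshell, with a throwaway list of paths:

```scala
val files = Seq("build.sbt", "project/plugins.sbt")
files.mkString(" \n") // wrong: the space comes before the newline, so nothing is indented
files.mkString("\n ") // right: each subsequent file starts on a new, indented line
```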
This creates a performance mode that enables experimental or advanced features that might require some debugging by the build user when it doesn't work.
Initially we are putting the layered ClassLoader (`ClassLoaderLayeringStrategy.AllLibraryJars`) behind this flag.
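Opting in looks like this in build.sbt (assuming the setting key is simply `turbo`):

```scala
// build.sbt: enable the experimental performance mode
ThisBuild / turbo := true
```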
We now correctly close the test classloader, which can cause this
scripted test to fail if the classloader is closed before the actor
system finishes terminating.
At some point I noticed that projects with no scala sources in the build
loaded significantly faster than projects that had even a single scala
file -- no matter how simple that file was. This didn't really make
sense to me because *.sbt files _do_ have to be compiled. I finally
realized that classloading was a likely bottleneck because *.sbt
files are compiled on the sbt classpath while *.scala files are compiled
with a different classloader generated by the classloader cache. It then
occurred to me that we could pre-fill the classloader cache with the
scala layer of the sbt metabuild classloader.
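The idea, in a deliberately generic sketch (the cache type and wiring here are illustrative, not sbt's actual classloader cache API):

```scala
import java.io.File
import java.util.concurrent.ConcurrentHashMap

// A toy loader cache keyed by the classpath jars.
final class LoaderCache {
  private[this] val loaders = new ConcurrentHashMap[Seq[File], ClassLoader]
  def apply(jars: Seq[File])(mk: => ClassLoader): ClassLoader =
    loaders.computeIfAbsent(jars, _ => mk)
}

// Seed the cache with the scala layer the metabuild already loaded so that
// loading *.scala build sources reuses it instead of creating (and warming
// up) a brand new scala classloader.
def prefill(cache: LoaderCache, scalaJars: Seq[File], metaBuildScalaLoader: ClassLoader): Unit = {
  cache(scalaJars)(metaBuildScalaLoader)
  ()
}
```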
I found that compared to 1.3.0-M5, a project with a simple scala file in
the project directory loaded about 2 seconds faster after this change.
Even if there are no scala sources in the build.sbt, there is a similar
performance improvement for running "sbt compile", which I found exited
2-3 seconds faster after this change.
My first attempt, cc8c66c66d, at making
java reflection work with a layered classloader that has a dependency jar
layer was a failure. It would generally work ok the first time the user
ran test, but was likely to fail on a second run.
There were a number of problems with the strategy:
1) It relied on the thread's context class loader to determine where to
attempt the reverse lookup.
2) It is not possible to ever reload classes loaded by a classloader.
Consider the classloading hierarchy a <- b, where
the arrow implies that a is the parent of b. I incorrectly thought
that a's loadClass method would be called every time a class loaded
by a made a call to Class.forName(name: String). This turns out not
to be the case. As a result, the second time the dependency layer
was used, where now the hierarchy is a <- c, that same Class.forName
call could return a Class from b, which causes a nasty crash.
It isn't possible to work around the limitation in (2), so the only
option that allows both caching and java reflection across layers to work
is to cache the dependency layer but invalidate it when cross-layer
reflection
occurs. This turns out to be straightforward to implement. The
performance looks very similar to the ScalaLibrary strategy when java
reflection is used, which makes sense because the scala library and
scala reflect layers are still reused when the dependency layer is
invalidated.
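A sketch of that compromise (types and names are hypothetical, not sbt's internals): keep the dependency layer cached, but drop the cached loader as soon as cross-layer reflection is observed so the next task evaluation builds a fresh layer.

```scala
import java.util.concurrent.ConcurrentHashMap

final class DependencyLayerCache[K] {
  private[this] val loaders = new ConcurrentHashMap[K, ClassLoader]

  // Called by the reverse-lookup path when a class from the dependency layer
  // is requested reflectively from above; the cached layer can no longer be
  // safely reused after this.
  def invalidate(key: K): Unit = loaders.remove(key)

  def get(key: K)(mk: => ClassLoader): ClassLoader =
    loaders.computeIfAbsent(key, _ => mk)
}
```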
I also stopped passing around the resource map to all of the layers.
Resource loading is hierarchical and the resource layer comes above the
dependency layer in the stack so there was no need for the bottom layers
to also be RawResource loaders.
While testing some classloader changes, I realized that we didn't always
close the test classloaders because they didn't necessarily extend
URLClassLoader, but instead implemented AutoCloseable.
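The fix boils down to closing on the broader interface (URLClassLoader already implements AutoCloseable via Closeable, so one case covers both):

```scala
def closeLoader(loader: ClassLoader): Unit = loader match {
  case ac: AutoCloseable => ac.close() // covers URLClassLoader and our custom loaders
  case _                 => // nothing to close
}
```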
Bonus: don't set the context classloader. It turns out that the test
framework does that anyway inside of trl.run, so it was pointless to do
it in Defaults.scala.