This commit makes some changes to the implementation with the goal of
making the code more readable. I found this rewrite necessary while
implementing the dependency lock file.
This commit has two goals:
* Simplify the `load` API endpoints, removing the unused ones to reduce
the surface of the API.
* Add documentation to the main `load` methods.
sbt has a feature to show timed logs for every operation at startup.
However, its output is cluttered: users cannot tell how much time
individual methods consume, nor whether they call other methods.
This commit improves the status quo by adding indentation.
This commit reduces the complexity around `loadPluginDefinition` et al.
`pluginDefinitionLoader` is not used anywhere in sbt, so the extra
definitions are removed.
Both the implementation of `loadPluginDefinition` and
`pluginDefinitionLoader` are reduced to a bare minimum where the
components at hand (definition classpath, dependency classpath) are
properly defined.
Documentation for the three methods has been added.
This commit mainly does three things:
* Clean up the implementation, removing unused values like
`globalPluginDefs`.
* Make the implementation more readable.
* And the big one: remove the creation of a classloader that we were
instantiating but whose value we were throwing away. This has an impact
on performance, but I have yet to benchmark how much.
The previous commit used `synchronized` to ensure that the global
reporter was not reporting errors from other parsing sessions. In
theory, though, sbt could invoke parsing in parallel, so it's better to
remove the `synchronized` block, which could also be preventing some
JVM optimizations.
This commit solves the issue by introducing a reporter id.
A reporter id is a unique identifier that is mapped to a reporter. Every
parsing session gets its own identifier, which is then reused for
recursive parsing. Error reports between recursive parses cannot collide
because the reporter is cleared in `parse`.
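For illustration, a minimal sketch of the reporter-id idea (the names
ReporterRegistry, freshId, reporterFor and clear are mine, not sbt's
actual API):
```
import scala.collection.concurrent.TrieMap
import scala.tools.nsc.reporters.StoreReporter

object ReporterRegistry {
  private val counter   = new java.util.concurrent.atomic.AtomicLong(0L)
  private val reporters = TrieMap.empty[Long, StoreReporter]

  // Each top-level parsing session asks for a fresh id; recursive parses reuse it.
  def freshId(): Long = counter.incrementAndGet()

  // Parallel sessions never share error state, so no `synchronized` is required.
  def reporterFor(id: Long): StoreReporter =
    reporters.getOrElseUpdate(id, new StoreReporter)

  // `parse` clears its own reporter up front, so reports from nested parses cannot collide.
  def clear(id: Long): Unit = reporters.get(id).foreach(_.reset())
}
```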
The previous implementation instantiated a toolbox every time it parsed
an sbt file (and even did so recursively!). This is inefficient and
translates to instantiating a `ReflectGlobal` every time we want to
parse something.
This commit takes another approach:
1. It removes the dependency on `ReflectGlobal`.
2. It reuses the same `Global` and `Run` instances for parsing.
This is as efficient as it can get without doing a whole overhaul of
the parser. I think that in the future we may want to reimplement it to
avoid the recursive parsing that works around a Scalac bug.
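As a rough sketch of the approach (not the actual sbt parser; the names
and settings here are illustrative):
```
import scala.tools.nsc.{Global, Settings}
import scala.tools.nsc.reporters.StoreReporter

object ReusableParser {
  private val settings = new Settings()
  settings.usejavacp.value = true        // assumption: compiler classpath comes from the JVM classpath

  private val reporter = new StoreReporter
  private val compiler = new Global(settings, reporter)
  private val run      = new compiler.Run()   // phases are set up once and reused

  def parse(code: String): compiler.Tree = {
    reporter.reset()                          // drop diagnostics from earlier sessions
    compiler.newUnitParser(code).parse()      // no new Global/ReflectGlobal per parse
  }
}
```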
This change was proposed by Jason in case the new parsing mechanism
implemented later on has to be reverted. It provides a good baseline,
but it's far from ideal with regard to the readability and performance
of the parser.
The previous implementation used the Scala runtime universe to check
whether or not a plugin had an `autoImport` member. This is a bad idea
for the following reasons:
* The first time you use it, you class-load the whole Scala compiler
universe, which is not efficient. Measurements say this takes about a
second.
* There is a small overhead to going through the reflection API.
A better approach is to check whether `autoImport` exists with plain
Java reflection. Since the class is already loaded, we check for:
* A class file named after the plugin FQN and ending in `autoImport$`,
which means that an object named `autoImport` exists.
* A field in the plugin class named `autoImport`.
This complies with the sbt plugin specification:
http://www.scala-sbt.org/1.0/docs/Plugins.html#Controlling+the+import+with+autoImport
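For reference, a minimal sketch of the check, assuming the plugin class
is already loaded by `loader` (the helper name hasAutoImport and the
exact name-mangling are illustrative):
```
object AutoImportCheck {
  // Assumes `pluginFqn` has no trailing `$`; an `object autoImport` nested in
  // the plugin compiles to a class file named `<plugin FQN>$autoImport$`.
  def hasAutoImport(loader: ClassLoader, pluginFqn: String): Boolean = {
    val nestedObject =
      try { Class.forName(pluginFqn + "$autoImport$", false, loader); true }
      catch { case _: ClassNotFoundException => false }

    // Or the plugin declares a field/val named `autoImport` directly.
    val declaredField =
      try { loader.loadClass(pluginFqn).getDeclaredFields.exists(_.getName == "autoImport") }
      catch { case _: ClassNotFoundException => false }

    nestedObject || declaredField
  }
}
```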
We need to communicate error states from the thread, so I added a
`Future[Unit]` called `ready`.
If something goes wrong during startup, such as the port already being
taken, this can be used to communicate the failure back to the main
thread and display the error accordingly.
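A minimal sketch of the mechanism (the `ServerThread` name and structure
are illustrative, not the actual implementation):
```
import java.net.ServerSocket
import scala.concurrent.{Future, Promise}

class ServerThread(port: Int) extends Thread {
  private val readyPromise = Promise[Unit]()
  val ready: Future[Unit] = readyPromise.future

  override def run(): Unit =
    try {
      val socket = new ServerSocket(port) // throws BindException if the port is already taken
      readyPromise.success(())            // tell the main thread that startup succeeded
      // ... accept connections using `socket` ...
    } catch {
      case e: Exception => readyPromise.failure(e) // propagate the startup error
    }
}
```
The main thread can then await `ready` and report any failure to the user.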
+ Don't notify ScriptMain users by moving the logic to xMain
+ Only trigger shell if shell is a defined command
+ Use existing Shell/BootCommand strings instead of new ones
The sbt/sbt-launcher-package doesn't invoke sbt with the "shell"
command. sbt has a mechanism for handling this in its "boot" command
that adds an "iflast shell" to the commands. Handle this when displaying
the "Executing in batch mode" warning.
Fixes #3004
Notify & enable users to stay in sbt's shell on the warm JVM by hitting
[ENTER] while sbt is running.
Looks like this; first I run 'sbt about', then I hit [ENTER]:
$ sbt about
[info] !!! Executing in batch mode !!! For better performance, hit [ENTER] to remain in the sbt shell
[info] Loading global plugins from /Users/dnw/.dotfiles/.sbt/0.13/plugins
[info] Loading project definition from /s/t/project
[info] Set current project to t (in build file:/s/t/)
[info] This is sbt 0.13.14-SNAPSHOT
[info] The current project is {file:/s/t/}t 0.1.0-SNAPSHOT
[info] The current project is built against Scala 2.12.1
[info] Available Plugins: sbt.plugins.IvyPlugin, sbt.plugins.JvmPlugin, sbt.plugins.CorePlugin, sbt.plugins.JUnitXmlReportPlugin, sbt.plugins.Giter8TemplatePlugin
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.6
>
>
Fixes #2987
Having sbt.version set in project/build.properties is a best practice
because it makes the build more deterministic and reproducible.
With this change, sbt, after ensuring that the base directory is
probably an sbt project, writes out sbt.version in
project/build.properties if it is missing.
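A rough sketch of the behaviour (names and the exact write strategy are
illustrative, not the actual implementation):
```
import java.nio.file.{Files, Path, StandardOpenOption}
import scala.collection.JavaConverters._

object EnsureSbtVersion {
  def apply(baseDir: Path, sbtVersion: String): Unit = {
    val props = baseDir.resolve("project").resolve("build.properties")
    val alreadySet =
      Files.exists(props) &&
        Files.readAllLines(props).asScala.exists(_.trim.startsWith("sbt.version"))
    if (!alreadySet) {
      Files.createDirectories(props.getParent) // create project/ if needed
      Files.write(
        props,
        List(s"sbt.version=$sbtVersion").asJava,
        StandardOpenOption.CREATE, StandardOpenOption.APPEND
      )
    }
  }
}
```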
Fixes #754
Before this commit, using dotty in your sbt project required adding:
scalaCompilerBridgeSource := ("ch.epfl.lamp" % "dotty-sbt-bridge" %
  scalaVersion.value % "component").sources()
in your build.sbt. We might as well do this automatically; this reduces
the boilerplate for using dotty in your project to:
scalaOrganization := "ch.epfl.lamp"
scalaVersion := "0.1.1-SNAPSHOT"
scalaBinaryVersion := "2.11" // dotty itself is only published as a
// 2.11 artefact currently
java.lang.Class#newInstance has been deprecated since Java 9:
http://download.java.net/java/jdk9/docs/api/java/lang/Class.html#newInstance--
```
Deprecated. This method propagates any exception thrown by the nullary constructor, including a checked exception. Use of this method effectively bypasses the compile-time exception checking that would otherwise be performed by the compiler. The Constructor.newInstance method avoids this problem by wrapping any exception thrown by the constructor in a (checked) InvocationTargetException.
The call
clazz.newInstance()
can be replaced by
clazz.getDeclaredConstructor().newInstance()
The latter sequence of calls is inferred to be able to throw the additional exception types InvocationTargetException and NoSuchMethodException. Both of these exception types are subclasses of ReflectiveOperationException.
Creates a new instance of the class represented by this Class object. The class is instantiated as if by a new expression with an empty argument list. The class is initialized if it has not already been initialized.
```
This adds a macro-level hack to support the `+=` operator for
sourceGenerators and resourceGenerators with a right-hand side of type
Initialize[Task[Seq[File]]].
When the types match up, the macro now calls `.taskValue` automatically.
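For illustration, a hypothetical build.sbt fragment showing what the
macro now accepts:
```
// With this change the explicit `.taskValue` on the right-hand side is no longer required:
sourceGenerators in Compile += Def.task {
  val file = (sourceManaged in Compile).value / "Generated.scala"
  IO.write(file, "object Generated")
  Seq(file)
}
// previously this had to be written as `... += Def.task { ... }.taskValue`
```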
This setting controls the maximum width of the ASCII graphs printed
by commands like `inspect tree`. Default value corresponds to the
previously hardcoded value of 40 characters.
Copies products to the working directory, and the rest to the serviceTempDir of this service, both wrapped in a directory named after the SHA-1 hash of the file contents. This is intended to minimize file copying and the accumulation of unused JAR files. Since the working directory is wiped when the background job ends, the product JAR is deleted too. Meanwhile, the rest of the dependencies are cached for the duration of this service.
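As a rough sketch of the SHA-1 wrapping (the helper name copyHashed is
illustrative, not the actual implementation):
```
import java.io.File
import java.nio.file.{Files, StandardCopyOption}
import java.security.MessageDigest

object HashedCopy {
  // Place each file under a directory named by the SHA-1 of its contents, so an
  // unchanged JAR is copied only once and stale copies are easy to identify.
  def copyHashed(jar: File, cacheDir: File): File = {
    val digest = MessageDigest.getInstance("SHA-1")
    val sha1   = digest.digest(Files.readAllBytes(jar.toPath)).map("%02x".format(_)).mkString
    val target = new File(new File(cacheDir, sha1), jar.getName)
    if (!target.exists) {
      target.getParentFile.mkdirs()
      Files.copy(jar.toPath, target.toPath, StandardCopyOption.REPLACE_EXISTING)
    }
    target
  }
}
```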
Fixes #2460
Fixes #2851
Ref #2707, #2708, #2469
Unlike the previous attempts at fixing the handling of build-level
keys, this change does not touch the main parsing logic, which uses
`getKey` to retrieve the key from the key map.
The fact that the shell worked pre-0.13.11 means that the parsing was fine.
What this changes is just the "example" keys supplied to the parser so
that tab completion works.
Fixes #2761
With sbt 0.13.13-RC1 we rediscovered that the dependency pulled in by
Giter8 was affecting the plugins. To avoid this, this change splits the
template resolver implementation out into another module called
sbt-giter8-resolver, which will be downloaded using Ivy into
`~/.sbt/0.13/templates/` and then launched reflectively using Java as
the interface.
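A very rough sketch of what "launched reflectively" can look like (the
class and method names below are hypothetical, not the actual
sbt-giter8-resolver API):
```
import java.io.File
import java.net.URLClassLoader

object TemplateResolverLauncher {
  def run(jars: Seq[File], arguments: Array[String]): Unit = {
    // Load the resolver from the module downloaded by Ivy, in its own classloader,
    // so Giter8 and its dependencies never end up on sbt's plugin classpath.
    val loader   = new URLClassLoader(jars.map(_.toURI.toURL).toArray, getClass.getClassLoader)
    val clazz    = loader.loadClass("sbtgiter8resolver.Giter8TemplateResolver") // hypothetical FQN
    val resolver = clazz.getDeclaredConstructor().newInstance()
    // Drive the resolver through a plain Java-visible method signature.
    clazz.getMethod("run", classOf[Array[String]]).invoke(resolver, arguments)
  }
}
```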
This adds a `new` command, which helps create a new build definition.
The `new` command is extensible via a mechanism called the template
resolver, which evaluates the arbitrary arguments passed to the command
to find and run a template.
As a reference implementation [Giter8][g8] is provided as follows:
sbt new eed3si9n/hello.g8
This will run eed3si9n/hello.g8 using Giter8.
[g8]: http://www.foundweekends.org/giter8/
The compiler interface subclasses `scala.tools.nsc.Global`,
and loading this new subclass before each `compile` task forces the
HotSpot JIT to deoptimize large swathes of compiled code. It's
a bit like sbt has rigged the dice to always slide down the longest
snake in a game of Snakes and Ladders.
The slowdown seems to be larger with Scala 2.12. There are a number
of variables at play, but I think the main factor here is that
we now rely on JIT to devirtualize calls to final methods in traits
whereas we used to emit static calls. JIT does a good job at this,
so long as classloading doesn't undo that good work.
This commit extends the existing `ClassLoaderCache` to encompass
the classloader that includes the compiler interface JAR. I've
resorted to adding a var to `AnalyzingCompiler` to inject the
dependency, in order to get the cache to the spot where I need it
without binary-incompatible changes to the intervening method
signatures.
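For reference, a simplified sketch of the caching idea (not the actual
`ClassLoaderCache` implementation):
```
import java.io.File
import java.net.URLClassLoader
import scala.collection.concurrent.TrieMap

class InterfaceLoaderCache(parent: ClassLoader) {
  // Keyed by the interface JARs; a real cache would also key on timestamps.
  private val cache = TrieMap.empty[List[File], ClassLoader]

  // Reusing one classloader per compiler-interface JAR keeps the HotSpot-compiled
  // code warm across `compile` invocations instead of deoptimizing it each time.
  def apply(jars: List[File]): ClassLoader =
    cache.getOrElseUpdate(jars, new URLClassLoader(jars.map(_.toURI.toURL).toArray, parent))
}
```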