This makes the functionality of SettingsDefinition in Project#settings
(specifying either a bare Setting[_] or a Seq[Setting[_]]) available
outside of a project's settings, for instance when defining a val.
In short, it allows:
```scala
val modelSettings = Def.settings(
  sharedSettings,
  libraryDependencies += foo
)
```
This commit introduces a mechanism that allows sbt to find the most
specific version of the compiler interface sources that exists using
Ivy.
For instance, when asked for a compiler interface for Scala 2.11.8-M2,
sbt will look for sources for:
- 2.11.8-M2;
- 2.11.8;
- 2.11;
- the default sources.
This commit also modifies the build definition by removing the
precompiled projects and configuring the compiler-interface project so
that it publishes its source artifacts in a Maven-friendly format.
This provides a convenience function for running an input task from the
extracted state. This is particularly useful for commands, release steps,
etc. that may want to run input tasks, such as scripted.
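A hedged sketch of how a command might use it; the command name and the `scripted` key are illustrative, and the exact `runInputTask(key, input, state)` signature is an assumption based on the description above:

```scala
// sketch only: run an input task from a command using the extracted state
val runScriptedTests = Command.command("runScriptedTests") { state =>
  val extracted = Project.extract(state)
  // assumed helper: runInputTask(key, input, state) returning the new state plus the result
  val (newState, _) = extracted.runInputTask(scripted, " sbt-plugin/simple", state)
  newState
}
```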
This maintains the old, existing ways of configuring project settings,
while adding the possibility to:
* Add a sequence of settings in between other individual settings
* Pass a Seq[Setting[_]] to Project.settings without having to do
"varargs expansion", i.e. ": _*"
Enumerations always need to start with the same number, otherwise
Scaladoc doesn't recognize them as an enumeration (no idea why).
Code blocks need to be surrounded by {{{ and }}}. The indentation of the
larger code block is still not displayed correctly, but I couldn't find
out why.
A TODO comment is moved out of the Scaladoc comment.
This allows attaching unmanaged sources on a per-Scala-version basis.
Sources from `src/{main,test}/scala-<scalaBinaryVersion>` are collected
in addition to `src/{main,test}/scala` by default.
Note that the treatment of `scalaBinaryVersion` varies depending on
whether `scalaVersion` is pre-2.10 or 2.10 onwards.
For example:
- with `scalaVersion` `2.9.3`, `scalaBinaryVersion` is set to `2.9.3`,
so the files in `src/{main,test}/scala-2.9.3` as well as
`src/{main,test}/scala` are available in the `sources` list.
- with `scalaVersion` `2.10.1`, `scalaBinaryVersion` is set to `2.10`,
so the files in `src/{main,test}/scala-2.10` as well as
`src/{main,test}/scala` are available in the `sources` list.
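A rough sketch of the effect in terms of existing keys (not the exact implementation):

```scala
// sketch: the version-specific directory is picked up next to the default one
unmanagedSourceDirectories in Compile +=
  (sourceDirectory in Compile).value / ("scala-" + scalaBinaryVersion.value)
```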
* Here we wire Aether into the Ivy dependency chain
* Add hooks into Aether to use Ivy's http library (so credentials are configured the same)
* Create the actual Resolver which extracts metadata information from Aether
* Deprecate old Ivy-Maven integrations
* Create hooks in existing Resolver facilities to expose a flag to enable the new behavior.
* Create notes documenting the feature.
* Create a new resolver type `MavenCache` which denotes how to read/write local maven cache metadata
correctly. We use this type for publishM2 and mavenLocal.
* Update failing -SNAPSHOT related tests to use new Aether resolver
* Create specification for expected behavior from the new resolvers.
Known to fix #1322, #321, #647, #1616
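A hedged example of declaring the new MavenCache resolver type described above (the name and path are illustrative):

```scala
// illustrative: declare the local Maven repository as a MavenCache so its
// metadata is read/written with cache semantics rather than remote-repo semantics
resolvers += MavenCache("local-maven", file(sys.props("user.home")) / ".m2" / "repository")
```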
Adds project-level dependency exclusions:

```scala
excludeDependencies += "org.apache.logging.log4j"
excludeDependencies += "com.example" %% "foo"
```
In the first example, all artifacts from the organization
`"org.apache.logging.log4j"` are excluded from the managed dependencies.
In the second example, artifacts with the organization `"com.example"`
and the name `"foo"` cross-versioned to the current `scalaVersion` are
excluded.
- Fixes cached resolution being too verbose
- Adds a new UpdateLogging value named "Default"
- When the global logLevel or logLevel in update is Debug, Default will
bump up to Full UpdateLogging.
* The IncrementalCompiler IC object now holds the actual logic to start incremental compilation (rather than AggressiveCompile or other)
* MixedAnalyzingCompiler ONLY does analysis of Java/Scala code
* Moved the AnalyzingJavaCompiler into the integration library so that necessary dependencies are visible.
* Force the CompileSetup Equiv typeclass to use Equiv relations defined locally.
* Add toString methods on many of the incremental compiler datatypes.
* Remove remaining binary compatibility issues in Defaults.scala.
* Removed as many binary incompatibilities as I could find.
* Deprecate old APIs.
* Attempt to construct new nomenclature that fits the design of the incremental compiler API.
* Add as much documentation as I was comfortable writing (from my understanding of things).
This breaks the loading/saving of the incremental compiler analysis out
into a separate task, thereby providing the necessary hooks for bytecode
enhancement tasks to enhance bytecode and update the analysis before the
analysis gets stored to disk.
* Create a new sbt.compiler.javac package
* Create new interfaces to control running `javac` and `javadoc` whether forked or local.
* Ensure new interfaces make use of `xsbti.Reporter`.
* Create new method on `xsbti.compiler.JavaCompiler` which takes a `xsbti.Reporter`
* Create a new mechanism to parse (more accurately) Warnings + Errors, to distinguish the two.
* Ensure older xsbti.Compiler implementations still succeed via catching NoSuchMethodError.
* Feed new toolchain through sbt.actions.Compiler API via dirty hackery until we can break things in sbt 1.0
* Added a set of unit tests for parsing errors from Javac/Javadoc
* Added a new integration test for hidden compilerReporter key, including testing threading of javac reports.
Fixes #875, fixes #1542. Related: #1178 could be looked into/cleaned up.
When conflicts are found for a given module, a forced one
is selected before the conflict manager kicks in.
The problem is that DependencyDescriptor seems to mark transitive
forced dependencies as forced as well,
so the actual forced dependencies are sometimes not prioritized.
To work around this, I've introduced a mixin called
SbtDefaultDependencyDescriptor, which carries around the ModuleID to detect
direct dependencies.
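A rough sketch of the mixin idea (not the exact implementation):

```scala
import org.apache.ivy.core.module.descriptor.DependencyDescriptor
import sbt.ModuleID

// sketch: descriptors created directly from the build's own ModuleIDs carry
// that ModuleID, so direct dependencies can be told apart from dependencies
// that Ivy merely marked as forced transitively
trait SbtDefaultDependencyDescriptor { self: DependencyDescriptor =>
  def dependencyModuleId: ModuleID
}
```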
* Attempt to set publication date to last modified time, if the stars align
* Issue warning about undefined resolution behavior otherwise
* Add scripted test which exercises the NPE issue in resolving -SNAPSHOTs.
* Commit scalariform style edit in Act.scala
* After parsing and transforming the pom, check for pub date.
* If we don't have a pub date, try to grab lastModified from the URL
* If we can't do anything, issue a warning about the problem artifact.
The optimization, and therefore the change in the behavior
of Relation, is now needed by the class Logic, and cannot
be reverted.
This patch (written by Josh) therefore changes the
implementation of setAll() so that _1s is no longer used.
Fixes #1568.
This is the fallout of attempting not to leak config-file classes. Since
we DO NOT have a valid incremental compiler for config-classes, we've instituted
a workaround to ensure that regular incremental compilation *AND* our own
sbt loader do not hose each other. A full solution will eventually be
to find a way for .sbt files to participate in regular compilation of a
project.
For now, we fix the tracking of generated .class files throughout an sbt
"loadProjects" call, and then clean any .class files that were not
generated for a full reload. This commit just fixes
a minor tracking issue.
#1525 removes unused *.class files created by the metabuild.
The implementation currently prints out a long list of *.class files
it's about to remove. Unless something bad happens, this information is
not very useful to the user, so I suggest we don't display it.
The Scala instance is added to the Ivy graph via autoLibraryDependency.
For metabuilds, scala-library is scoped under the "provided" configuration,
which does not seem to evict modules in the "compile" configuration.
This commit turns the overrideScalaVersion flag to true for metabuilds,
so override rules are added for the following modules:
- scala-library
- scala-compiler
- scala-reflect
Fixes #1524
* Track generated .class files from Eval
* While loading, join all classfiles throughout Load, lots of bookkeeping.
* When a given build URI is done loading, we can look at its
project/target/config-classes directory and clean out any extra items
that are lingering from previous build definitions.
* Add TODOs to handle the same thing in global directories. Right now,
given the shared nature of these projects, it's a bit too dangerous to
do so.
First of all, we revert the changes to Eval made in
a9cdd96152. That was a work-around
for a problem with broken positions set on some trees. The rest of
the commit describes a fix.
The Scala compiler's Global has a `currentRun` method which is supposed to
return the instance of Run currently in use. sbt's Eval, which wraps
Global, would create an instance of Run but not register it in Global.
Similarly, Eval would create an instance of `CompilationUnit` but not
set it in the Run.
This would result in some silent failures like assigning broken positions
to parsed trees even if the parser *has* received an instance of a compilation
unit. See https://issues.scala-lang.org/browse/SI-8794 for details.
We fix the issue by subclassing Global and making it possible to set the
current Run instance externally. Then we use this capability to set a run
instance right before we call compiler phases. We also make sure that the
current run has the current compilation unit assigned to it.
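A rough sketch of the approach (not the exact sbt code), assuming `currentRun` can be overridden:

```scala
import scala.tools.nsc.{ Global, Settings }

// sketch: a Global subclass whose current Run can be set externally, so that
// parsing and subsequent phases see a properly registered run
class EvalGlobal(settings: Settings) extends Global(settings) {
  private[this] var evalRun: Run = _
  override def currentRun: Run = evalRun
  def setCurrentRun(run: Run): Unit = { evalRun = run }
}
```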
Fixes #1181, fixes #1501, fixes #1523
Fixes #1455
* Add a mechanism to detect if a plugin clause includes/excludes
a particular plugin.
* Add a filter for our "enabling" clauses so that user-excluded
root plugins do not show up.
* Add a test to ensure this succeeds.
When the compiler reports an error back to the CompilationUnit created by Eval#mkUnit, it sometimes returns an OffsetPosition whose `source` is set to `NoSourceFile`.
This causes an ArrayIndexOutOfBoundsException. The current workaround is to pattern match on the passed-in pos and create a new one when the incoming source looks suspicious.
I have not figured out whether this is caused by our macro code or the compiler.
There are various build.sbt errors that would cause this behavior:
```scala
libraryDependencies ++= Seq(
depA
depB // missing comma
)
lazy val bob = scala.Console println
test ++
run+
```
Scope.parseScopedKey now supports full range of legal keys
described in the documentation including {.} and other
notation for ProjectRef, BuildRef, and ThisBuild.
UnresolvedWarning is moved back to IvyActions.scala where it belongs.
The mapping between ModuleID and SourcePosition is passed in as UnresolvedWarningConfiguration.
This is calculated once in Defaults using State and is cached to the filesystem.
The unresolved dependency warning is moved to the UnresolvedDependencyWarning class, including
the fail path that was added in #1467.
To display the source position, I need to access the State, so I had to move the
error processing out of IvyActions and add UnresolvedDependencyWarning, which is
aware of State.
* Ensure the ++ command stores its changes on the sbt session.
* Make sure `session clear-all` will clear out ++ changes
* Validate that `set` command doesn't undo `++` changes
Note: There is some autogenerated Setting[_] delegate optimisation
work that could be done in the future.
* Migrate weak reference into logger class so we can test clearing it.
* Ensure new state.log is propagated into settings on `set` command.
* Fix set test so that it ensures the sLog is relatively stable when
reloading on set command.
This implements all stories from https://github.com/sbt/sbt/wiki/User-Stories%3A-Conflict-Warning.
When scalaVersion is no longer effective, an eviction warning is displayed:

  Scala version was updated by one of library dependencies:
  * org.scala-lang:scala-library:2.10.2 -> 2.10.3

When there are suspected incompatibilities in directly depended-on Java libraries,
eviction warnings are displayed:

  There may be incompatibilities among your library dependencies.
  Here are some of the libraries that were evicted:
  * commons-io:commons-io:1.4 -> 2.4

When there is a suspected incompatibility in directly depended-on Scala libraries,
eviction warnings are displayed:

  There may be incompatibilities among your library dependencies.
  Here are some of the libraries that were evicted:
  * com.typesafe.akka:akka-actor_2.10:2.1.4 -> 2.3.4

This also adds an 'evicted' task, which displays more detailed eviction warnings.
Currently sbt's update task generates an UpdateReport from
Ivy's resolution report.
For each configuration there's a ConfigurationReport, which contains
just enough information on the resolved module/revision/artifact.
Speaking of modules: in Ivy, "module" means the organization and name,
while the combination of organization, name, and version is called a
module revision. In sbt, a module revision is called a Module.
This is relevant because to talk about evictions, we need
terminology for the organization-and-name combo.
In any case, ConfigurationReport is expanded to have a `details`
field, which contains a Seq[ModuleDetailReport] representing an
organization-and-name combo plus all of its modules,
just like Ivy's resolution report XML.
Furthermore, ModuleReport is expanded to include licenses,
eviction, callers, etc.
This adds a new setting key called updateOptions, which can enable
consolidated resolution for the update task.
Consolidated resolution automatically generates an artificial
module descriptor based on the SHA-1 of all external dependencies.
This consolidates the Ivy resolution of identical dependency
graphs across multiple subprojects.
This is how it's enabled:
updateOptions := updateOptions.value.withConsolidatedResolution(true)
* Expose the values PAST the Eval/sbt.compiler package.
* Find projects using the name API rather than finding them and dropping all values immediately.
* Adds a test to make sure the .sbt values are discovered and set-able
* Expose .sbt values in Set command and inside BuildUnit methods.
* Ensure `consoleProject` can see build.sbt values.
* Add notes for where we can look in the build if we want to expose .sbt values between files.
* Change detection of "default project" to accurately see
if someone has changed the organization.
* Add a flag to notify downstream consumers that a project
was autogenerated and not user specified.
Fixes #1315
Add Scala 2.11 test/build verification.
* Add 2.11 build configuration to Travis CI
* Create command which runs `safe` unit tests
* Create command to test the Scala 2.11 build
* Update scalacheck to 1.11.4
* Update specs2 to 2.3.11
* Fix various 2.11 deprecations/removals
and other changes.
Fix eval test failure in Scala 2.11 caused by XML no longer being available.
This does the following:
* Fragments loading into two stages: Discovery + Resolution
* Discovery just looks for .sbt files and Projects, while
loading/compiling them.
* Resolution is responsible for taking discovered projects and
loaded sbt files and globbing everything together. This includes
feeding the project through various manipulations, applying
AutoPlugin settings/configurations and ordering all the settings.
* Add a bunch of docs
* Add direct DSL `enablePlugins` and test
* Add direct DSL `disablePlugins` and test.
* Create new DSLEntry type for settings so we can categorize what we parse
* Use DSLEntry to help solve the Setting[_] vs. Seq[Setting[_]] implicit fun.
* Hack away any non-Setting[_] DSLEntry for now.
* Add test in build.sbt to make sure the new DSL works.
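A minimal sketch of the new direct DSL in a build.sbt (MyPlugin and OtherPlugin are hypothetical AutoPlugins):

```scala
// build.sbt; MyPlugin and OtherPlugin are hypothetical AutoPlugins
enablePlugins(MyPlugin)
disablePlugins(OtherPlugin)
```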
This migrates JunitXmlReportPlugin to work the way we desire
new sbt features/plugins to work:
* Enabling the feature is having the plugin available.
* Disabling the feature is disabling the plugin
* All code/settings for the feature are isolated to the plugin.
* Add task to determine file-name of analysis cache
* If crossPaths := true, then add scala binary version to the analysis cache name.
This makes it possible to leverage incremental compilation with the `+` command.
Fixes #1267
Taken from https://github.com/chenkelmann/junit_xml_listener and slightly improved.
Activating it is now just a matter of setting `testReportJUnitXml` to true.
This not only brings a very handy feature (Hudson/Jenkins support that format),
but gives more options for testing the sbt test infrastructure.
For instance, checking that the test output is well segregated during a parallel
test run is just a matter of checking the report output.
Conflicts:
main/src/main/scala/sbt/Keys.scala
Fixes #1223.
* Add a new key to disable forcing the garbage collector to run
after each task execution
* Add a new flag to disable forcing the garbage collector to run
after each task execution
* Add a hook into EvaluateTask to run System.gc/System.runFinalization
after each task execution
Review by @eed3si9n
Within buildPluginDefinition(), the call to setProject() can
(and usually will) return a modified structure together with
the new state.
The subsequent call to evalPluginDef() should use the updated
structure, rather than the old structure that was present before
the setProject() call ("pluginDef"); if that is not the case,
the code called by evalPluginDef() will find an inconsistent
structure/state combination, and behave in bizarre ways as
a result.
More generally, it is a bit dangerous to pass the state and
structure to routines as two separate values, as the
two may easily and inadvertently fall out of alignment, as in this
case.
This patch should be applied to both the 0.13 branch as well
as to a future 0.12.5 release (the corresponding file there
is ./main/Load.scala).
Add logging of various operations the transactional class file manager is
doing. You can pass a logger to be used by the transactional class file
manager by using an overloaded definition of the `ClassfileManager.transactional`
method. The old overload has been deprecated.
The factory methods for the class file manager in the IncOptions companion object
have been deprecated in favor of using the ClassfileManager companion object
directly. The code in Defaults.scala has been updated to use non-deprecated
methods. The logging is turned off by default.
The canonical way of enabling the transactional class file manager in an sbt
project is:
```
incOptions := incOptions.value.withNewClassfileManager(
  sbt.inc.ClassfileManager.transactional(
    crossTarget.value / "classes.bak",
    (streams in (compile, Compile)).value.log
  )
)
```
It's a bit verbose, which shows that the API for this is not the best.
However, I don't expect sbt users to need this code very often.
This patch should help debug the problem described in #1184
The deprecated method should forward to the other overloaded alternative
but it recursed instead.
This kind of mistake would easily be caught by a linter warning about an
unused `newConfig` local variable. I hope we'll get there some day.
Fixes #1251
`DefaultOptions.addResolvers` and `DefaultOptions.addPluginResolvers`
should not reset the existing value while adding the new ones. The
names are prefixed with _add_ after all.
1) When `test` is run and there are no tests available, omit logging output.
Especially useful for aggregate modules. `test-only` et al unaffected. (#1185)
2) Added a new setting `testResultLogger` to allow customisation of logging of test results.
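A heavily hedged sketch of what a customisation might look like; the exact shape of the TestResultLogger type (assumed here to be a single `run(log, results, taskName)` method) is an assumption:

```scala
// assumption: TestResultLogger has a run(log, results, taskName) method;
// this sketch only prints a short summary, and only when tests actually ran
testResultLogger := new TestResultLogger {
  def run(log: Logger, results: Tests.Output, taskName: String): Unit =
    if (results.events.nonEmpty)
      log.info(taskName + " finished with overall result " + results.overall)
}
```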
* Expose new EvaluateTaskConfig throughout all the APIs
* Create a key for cancellation configuration
* Add default values for cancellation in GlobalPlugin
* Create a test to ensure that cancellation can cancel tasks.
* Deprecate all the existing mechanisms of evaluating tasks which
use the EvaluateConfig API.
* Create a new EvaluateTaskConfig which gives us a bit more freedom
over changing config options to EvaluateTask in the future.
* Create an adapter from the old EvaluateTask API to the new one.
* Add hooks into the signals class to register/remove a signal listener
directly, rather than in an "arm" block.
* Create TaskEvaluationCancelHandler to control the strategy of
who can cancel (sbt-server vs. sbt-terminal).
* Create a null object for the "can't cancel" scenario so the
code path is exactly the same.
This commit does not wire settings into the build yet, nor does it
fix the config extraction methods.
* The AutoImport trait is subsumed by the def autoImport method on the
AutoPlugin class.
* When def autoImport is overridden by a lazy val or a val, *.sbt files
automatically import autoImport._.
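For illustration, a plugin using this mechanism might look roughly like the following (MyPlugin and greeting are made-up names; whether the autoImport member needs an explicit override of a def depends on the AutoPlugin API at this point):

```scala
import sbt._

object MyPlugin extends AutoPlugin {
  // members of autoImport become visible in *.sbt files without an explicit import
  object autoImport {
    val greeting = settingKey[String]("A greeting used by the plugin.")
  }
  import autoImport._

  override def projectSettings = Seq(greeting := "hello")
}
```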
This allows plugins to define a Plugins instance that captures both the
plugin and its required dependencies.
Also fixed up some scaladocs that were wrong.
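For example, a plugin can now express its requirements as a single Plugins value (BasePlugin and OtherPlugin are illustrative AutoPlugins):

```scala
import sbt._

object MyPlugin extends AutoPlugin {
  // one Plugins expression capturing everything this plugin requires
  override def requires: Plugins = BasePlugin && OtherPlugin
}
```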
* packageArtifacts is not cleared by defaultSettings
* Added a test for this behavior (this one test should ensure the ordering for most settings is correct.)
Fixes #1176
We cannot break existing users, but we can deprecate the improper usage.
This is part #2 of the workaround for #1156. This ensures that
users will stop using the legacy methods after 0.13.2 is out.
AddSettings should only expose coarse-grained features of AutoPlugins
or else the Logic we use to ensure safe addition completely breaks
down. Leaving it in the code as an escape hatch if we get desperate,
but we need an alternative for controlling ordering later.
* GlobalPlugin has defaults for controlling parallelism on tasks, basic command stuff.
* IvyModule has the configuration for resolving/publishing modules to ivy, assuming
each project is a single module.
* JvmModule has the configuration for compiling/running/testing/packaging Java/Scala
projects.
* Add new AutoPlugins type to AddSettings.
* Ensure any Plugins filter doesn't just automatically add
autoplugins every time.
* Load.scala can now adjust AutoPlugins ordering
Note: Adjusting autoplugin ordering is dangerous BUT doing a glob
of "put autoplugin settings here" is generally ok.
* remove the notion of Natures from Autoplugins.
* Update tests to use AutoPlugins with no selection for inclusion.
* Rename existing Natures code to Plugins/PluginsDebug.
Fixes #1155.
It seems that somehow during the 0.13.{1 -> 2 } transition, we
stopped pointing at the correct key for TaskKeys (either that or
task streams are now all associated with the `streams` key). I
think this may have been inadvertently caused from several
refactorings to enable greater control over the execution of tasks.
This points the `last*` methods at the correct key for tasks,
fixing both `last <key>` and `export <key>` commands.
In 2.11, the implicit version is named `$conforms` so as to avoid
accidental shadowing by user code, which renders methods using
views and subtype bounds inexplicably unusable.
But SBT intentionally needs to hide it to make the implicits
in this file line up.
This commit opts in to the required identifiers from Predef,
rather than opting out of conforms. This makes the same code
source-compatible with 2.10 and 2.11.
* 'plugins' displays the list of plugins available for each build along with the project IDs each is enabled on
* 'plugin <name>' displays information about a specific plugin in the context of the current project
- whether the plugin is activated on the current project and, if so, information about the keys/configurations it provides
- how the plugin could be activated if possible
* tries to detect when it is run on an aggregating project and adjusts accordingly
- indicates if an aggregated project has the plugin activated
- indicates to change to the specific project to get the right context
This is a rough implementation and needs lots of polishing and deduplicating.
The help for the commands needs to be added/expanded.
* Can provide suggestions for how to define a plugin given a context (a loaded Project in practice).
* When a user requests an undefined key at the command line, can indicate whether any (deactivated) plugins provide the key.
TODO:
* Hook up to the key parser
* Implement 'help <plugin>'
* Determine how to best provide the context (the current project is often an aggregating root, which is not typically a useful context)
This commit makes the code source compatible across Scala 2.10.3
and https://github.com/scala/scala/pull/3452, which is proposed
for inclusion in Scala 2.11.0-RC1.
We only strictly need the incremental compiler to build on Scala
2.11, as that is integrated into the IDE. But we gain valuable
insight into compiler regressions by building *all* of SBT with
2.11.
We only got there recently (the 0.13 branch of SBT now fully cross
compiles with 2.10.3 and 2.11.0-SNAPSHOT), and this aims to keep
things that way.
Once 2.10 support is dropped, SBT macros will be able to exploit
the new reflection APIs in 2.11 to avoid the need for casting
to compiler internals, which aren't governed by binary compatibility.
This has been prototyped by @xeno-by: https://github.com/sbt/sbt/pull/1121
We now have `global.Range`, so our wildcard import of `global._`
shadows `scala.Range`.
This commit fully qualifies that type so as to be compatible with
Scala 2.10 and 2.11.
- remove AutoPlugin.provides
* name comes from module name
* AutoPlugin is Nature-like via Basic
- Project.addNatures only accepts varargs of Nature values
* enforces that a user cannot explicitly enable an AutoPlugin
* drops need for && and - combinators
- Project.excludeNatures accepts varargs of AutoPlugin values
* enforces that only AutoPlugins can be excluded
* drops need for && and - combinators
Untyped trees underneath typed trees make Jack a sad boy.
And they make superaccessors a sad phase.
The recent refactoring to retain original types in the trees
representing the argument to the task macro meant that the `value`
macro was also changed to try to avoid this untyped-under-typed
problem. However, it didn't go deep enough, and left the child
trees of the placeholder tree `InputWrapper.wrap[T](key)` untyped.
This commit uses `c.typeCheck` to locally typecheck that tree fully
instead.
Fixes #1031
* Alter the TaskProgress listener key to be `State => TaskProgress` so it
can be instantiated from the current server/sbt state.
* Expose the xsbti.Reporter interface for compilation through to sbt builds.
This requires a Format[T] to be implicitly available at the call site and requires the task
to be referenced statically (not in a settingDyn call). References to previous task values
in the form of a ScopedKey[Task[T]] + Format[T] are collected at setting load time in the
'references' setting. These are used to know which tasks should be persisted (the ScopedKey)
and how to persist them (the Format).
When checking/delegating previous references, rules are slightly different.
A normal reference from a task t in scope s cannot refer to t in s unless
there is an earlier definition of t in s. However, a previous reference
does not have this restriction. This commit modifies validateReferenced
to allow this.
TODO: user documentation
TODO: stable selection of the Format when there are multiple .previous calls on the same task
TODO: make it usable in InputTasks, specifically Parsers
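A hedged sketch of the feature in use (assumes the `.previous` call described above and an implicit Format for the value type in scope):

```scala
// sketch: a task that counts how many times it has run, reading the value
// persisted from the previous execution; assumes an implicit Format[Int]
val runCount = taskKey[Int]("Number of times this task has run.")

runCount := runCount.previous.getOrElse(0) + 1
```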
This avoids an additional cause of recursion via the semicolon/multiple command, which fixes #933.
It also provides error messages on the expanded command. This fixes #598.
The fix was made possible by the very helpful information provided by @retronym.
This commit does two key things:
1. changes the owner when splicing original trees into new trees
2. ensures the synthetic trees that get spliced into original trees do not need typechecking
Given this original source (from Defaults.scala):

```scala
...
lazy val sourceConfigPaths = Seq(
  ...
  unmanagedSourceDirectories := Seq(scalaSource.value, javaSource.value),
  ...
)
...
```
After expansion of .value, this looks something like:

```scala
unmanagedSourceDirectories := Seq(
  InputWrapper.wrapInit[File](scalaSource),
  InputWrapper.wrapInit[File](javaSource)
)
```
where wrapInit is something like:

```scala
def wrapInit[T](a: Any): T
```
After expansion of := we have (approximately):

```scala
unmanagedSourceDirectories <<=
  Instance.app( (scalaSource, javaSource) ) {
    $p1: (File, File) =>
      val $q4: File = $p1._1
      val $q3: File = $p1._2
      Seq($q3, $q4)
  }
```
So,
a) `scalaSource` and `javaSource` are user trees that are spliced into a tuple constructor after being temporarily held in `InputWrapper.wrapInit`
b) the constructed tuple `(scalaSource, javaSource)` is passed as an argument to another method call (without going through a val or anything) and shouldn't need owner changing
c) the synthetic vals $q3 and $q4 need their owner properly set to the anonymous function
d) the references (Idents) $q3 and $q4 are spliced into the user tree `Seq(..., ...)` and their symbols need to be the Symbol for the referenced vals
e) generally, treeCopy needs to be used when substituting Trees in order to preserve attributes, like Types and Positions
changeOwner is called on the body `Seq($q3, $q4)` with the original owner sourceConfigPaths to be changed to the new anonymous function.
In this example, no owners are actually changed, but when the body contains vals or anonymous functions, they will be.
An example of the compiler crash seen when the symbol of the references is not that of the vals:

```
symbol value $q3 does not exist in sbt.Defaults.sourceConfigPaths$lzycompute
  at scala.reflect.internal.SymbolTable.abort(SymbolTable.scala:49)
  at scala.tools.nsc.Global.abort(Global.scala:254)
  at scala.tools.nsc.backend.icode.GenICode$ICodePhase.genLoadIdent$1(GenICode.scala:1038)
  at scala.tools.nsc.backend.icode.GenICode$ICodePhase.scala$tools$nsc$backend$icode$GenICode$ICodePhase$$genLoad(GenICode.scala:1044)
  at scala.tools.nsc.backend.icode.GenICode$ICodePhase$$anonfun$genLoadArguments$1.apply(GenICode.scala:1246)
  at scala.tools.nsc.backend.icode.GenICode$ICodePhase$$anonfun$genLoadArguments$1.apply(GenICode.scala:1244)
  ...
```
Other problems with the synthetic tree when it is spliced under the original tree often result in type mismatches or some other compiler error that doesn't result in a crash.
If the owner is not changed correctly on the original tree that gets spliced under a synthetic tree, one way it can crash the compiler is:

```
java.lang.IllegalArgumentException: Could not find proxy for val $q23: java.io.File in List(value $q23, method apply, anonymous class $anonfun$globalCore$5, value globalCore, object Defaults, package sbt, package <root>) (currentOwner= value dir )
...
while compiling: /home/mark/code/sbt/main/src/main/scala/sbt/Defaults.scala
during phase: global=lambdalift, atPhase=constructors
...
last tree to typer: term $outer
symbol: value $outer (flags: <synthetic> <paramaccessor> <triedcooking> private[this])
symbol definition: private[this] val $outer: sbt.BuildCommon
tpe: <notype>
symbol owners: value $outer -> anonymous class $anonfun$87 -> value x$298 -> method derive -> class BuildCommon$class -> package sbt
context owners: value dir -> value globalCore -> object Defaults -> package sbt
...
```
The problem here is the difference between context owners and the proxy search chain.
This feature is not activated by default. To enable it, set `testForkedParallel` to `true`.
The test agent then executes the tests in a thread pool.
For now it has a fixed size set to the number of available processors;
eventually the concurrent restrictions configuration should be used.
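For example, in a build.sbt (scoping the key to Test is an assumption):

```scala
// tests must be forked for the forked-parallel setting to have any effect
fork in Test := true
// run the forked tests in parallel inside the forked JVM
testForkedParallel in Test := true
```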
The completions command is meant for dumb terminals that cannot use
the default tab completion. It has been built for use by the emacs
sbt-mode (see https://github.com/hvesalai/sbt-mode), but is equally
useful for other code editors that can integrate with sbt.
This is a temporary workaround: it assumes nothing else uses these streams later.
This condition is ok for 'export' and test streams, since these are unlikely to
reuse these streams. However, the proper fix is for the TaskStreams methods
to be smarter: they could open in append mode if the stream was closed. The
streams associated with a task could be optimistically closed after it finishes executing.
(Any task can write to another task's streams, which is why it is an optimization only.)
If an exception is thrown when accepting a connection from a forked test
agent, currently I'm seeing that all that happens is SBT hangs with no
output. Thread dumps show that the main process is waiting for the
agent to return, while the agent is waiting for the server to send it
something.
This change logs the exception, so that at least the error can be
googled. It also cleans up the server socket.
This protects against deadlocks between the writing and reading end,
since the ObjectOutputStream constructor writes a header, but does not
flush, and the ObjectInputStream constructor reads the header, and
blocks until it's read.
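A minimal sketch of the ordering concern (illustrative code, not sbt's): the writer's header must be flushed before the reader constructs its ObjectInputStream.

```scala
import java.io.{ ObjectInputStream, ObjectOutputStream }
import java.net.Socket

// illustrative: construct the output stream first and flush so the stream
// header reaches the peer, whose ObjectInputStream constructor blocks on it
def openObjectStreams(socket: Socket): (ObjectOutputStream, ObjectInputStream) = {
  val out = new ObjectOutputStream(socket.getOutputStream)
  out.flush()
  val in = new ObjectInputStream(socket.getInputStream)
  (out, in)
}
```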
The serialized structures there aren't part of an Analysis,
so they aren't interned.
TODO: Refactor InternedAnalysisFormats so that this is more
straightforward and less error-prone. This commit is
a first-pass fix of the broken test.
Specify an Ivy resolver with ", descriptorOptional" to make Ivy
descriptor files optional for that repository or with
", skipConsistencyCheck" to disable Ivy consistency checks for
that repository.
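A hedged example based on the description above (URL and patterns are illustrative); the markers are appended to the resolver's name string:

```scala
// illustrative repository; ", descriptorOptional" and ", skipConsistencyCheck"
// are appended to the resolver name as described above
resolvers += Resolver.url(
  "my-ivy-repo, descriptorOptional, skipConsistencyCheck",
  url("https://example.com/ivy-repo/")
)(Resolver.ivyStylePatterns)
```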
Set sbt.task.timings=true to print timings for tasks.
This sample progress handler shows how to get names for tasks and
deal with flatMapped tasks. There are still some tasks that make
it through as anonymous, which needs to be investigated.
A setting to provide a custom handler should come in a subsequent commit.
* Consolidate project ID validation and normalization into Project methods
* Provide an earlier and more detailed error message when the directory
name can't be used for the project ID
The Help for these commands now needs to be cleaned up, since they were not written with
this feature in mind. In particular,
* consider adding syntax summaries in the short help strings
* alternatively, add the syntax summary data elsewhere for use specifically by this feature
* display a better message when there is no short help string, such as
"See 'help <command>' for usage." or just displaying the lower level error message, such as
"Expected whitespace"