Fix the problem with unstable names synthesized for existential
types (declared with underscore syntax) by renaming type variables
to a scheme that is guaranteed to be stable no matter where the
given existential type appears.
The scheme we use is De Bruijn-like indices that capture both the position
of a type variable declaration within a single existential type and the
nesting level of the existential type. This way we properly support nested
existential types by avoiding name clashes.
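For illustration, here is the kind of declaration this change is about (the
trait and member names below are made up):

    // Each underscore is desugared by scalac into an existential type variable
    // with a synthesized name, roughly `Map[_$1, _$2] forSome { type _$1; type _$2 }`.
    // The renaming scheme replaces those synthesized names with stable,
    // position-based ones.
    trait Example {
      def m: Map[_, _]
      def nested: List[_ <: Set[_]] // an existential nested inside another existential:
                                    // the inner variable also carries a nesting-level index
    }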
In general, we can perform renamings like this because type variables
declared in existential types are scoped to those types, so the renaming
operation is purely local.
There's a specs2 unit test covering the instability of existential type names.
The test is included in the compiler-interface project and the build
definition has been modified to enable building and executing tests
in the compiler-interface project. Some dependencies have been modified:
* the compiler-interface project depends on the api project for testing
(the test makes use of SameAPI)
* a dependency on junit has been introduced because it's needed
for the `@RunWith` annotation, which declares that the specs2 unit
test should be run with JUnitRunner
SameAPI has been modified to expose a method that allows us to
compare two definitions.
This commit also adds the `ScalaCompilerForUnitTesting` class, which makes it
possible to compile a piece of Scala code and inspect the information recorded
through the callbacks defined in the `AnalysisCallback` interface. That class
uses the existing ConsoleLogger for logging. I considered doing the same for
ConsoleReporter: there's a LoggingReporter which would fit our use case, but
it's defined in the compile subproject, which compiler-interface doesn't
depend on, so we roll our own.
ScalaCompilerForUnitTesting uses TestCallback for recording the information
passed to the callbacks. In order to be able to access TestCallback (defined
in the interface subproject) from the compiler-interface subproject, I had to
tweak the dependencies between interface and compiler-interface so that test
classes from the former are visible in the latter. I also modified TestCallback
itself to accumulate the APIs in a HashMap instead of a buffer of tuples, for
easier lookup.
An integration test has been added which tests the scenario
mentioned in #823.
This commit fixes #823.
As pointed out by @harrah in #705, we might want to merge the API
and dependency phases, so we should mention that in the comment explaining
the phase ordering constraints instead.
I'd still like to keep the old comment in the history (as a separate commit)
because it took me a while to figure out the cryptic issues related to the
continuations plugin, so it's valuable to keep the explanation around in
case somebody else in the future tries to mess around with the dependencies
defined by sbt.
This is the first step towards using a new mechanism for dependency
extraction that is based on tree walking.
We need dependency extraction in a separate phase because the code
walking the trees should run before refchecks, whereas the analyzer phase
runs at the very end of the phase pipeline.
This change also includes a work-around for a phase ordering issue with the
continuations plugin. See the included comment and SI-7217 for details.
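As a rough sketch (not the actual sbt sources; the component and phase names
are hypothetical), the ordering constraint described above can be expressed
with the standard scalac phase-ordering declarations:

    import scala.tools.nsc.plugins.PluginComponent

    abstract class DependencyExtractionComponent extends PluginComponent {
      val phaseName = "xsbt-dependency"            // hypothetical phase name
      val runsAfter = List("typer")                // trees must be fully typed
      override val runsBefore = List("refchecks")  // walk the trees before refchecks
                                                   // rewrites them
      // newPhase would walk each CompilationUnit's tree and record its dependencies
    }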
As pointed out by @harrah in #705, neither beginSource nor endSource is
used internally by sbt for anything meaningful.
We've discussed the option of deprecating those methods, but since they
don't do anything meaningful, Mark prefers to have a compile-time
error in case somebody implements or calls those methods. I agree with
that, hence the removal.
The test case compiles a project first without and then with this
setting, and checks that a warning is emitted in the first case
and not in the second.
It's a multi-project build; this bug didn't seem to turn
up in a single-project build.
This way we have a somewhat clearer separation
between the compiler phase logic and the core logic responsible for
processing each compilation unit and extracting an API for it.
As an added benefit, we have a bit less mutable state
(e.g. sourceFile doesn't need to be a var anymore).
The API extraction logic contains some internal caches that are
required for correctness. It wasn't very clear whether they have to
be maintained for the entire phase run or just for the processing of a
single compilation unit. It looks like they only have to be maintained
while a single compilation unit is processed, and the refactored code both
documents that contract and implements it in the API phase.
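A rough sketch of that contract (names are hypothetical, not the actual sbt
sources): the extractor owning the caches is instantiated per compilation
unit, so the caches cannot outlive a single unit.

    import scala.collection.mutable
    import scala.tools.nsc.Global

    class ApiExtractor[G <: Global](val global: G) {
      import global._

      // caches required for correctness; valid only for the single
      // compilation unit this instance was created for
      private val typeCache      = mutable.HashMap.empty[Type, String]   // String stands in
      private val classLikeCache = mutable.HashMap.empty[Symbol, String] // for the real API types

      def extract(unit: CompilationUnit): String = {
        // walk unit.body, populating the caches along the way
        unit.source.file.name
      }
    }

    // In the API phase, a fresh extractor (and thus fresh caches) per unit,
    // e.g. `new ApiExtractor(global).extract(unit)` inside `apply(unit)`.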
Move the collection of compatibility hacks (a class `Compat`) into a separate
file. This aids understanding of the code, as both Analyzer and API make
use of that class, and keeping it in the `Analyzer.scala` file suggested that
it's used only by Analyzer.
The incremental compiler didn't have any explicit logic to handle
cancelled compilation, so it would go into an inconsistent state.
Specifically, it would treat a cancelled
compilation as a compilation that finished normally and try to
produce a new Analysis object out of the partial information collected
in AnalysisCallback. The most obvious outcome would be that the
new Analysis would contain the latest hashes for source files. The
next time the incremental compiler was asked to recompile the same files
that it didn't recompile due to the cancelled compilation, it would think
they were already successfully compiled and would do nothing.
We fix that problem by following the same logic that handles compilation
errors: it cleans up partial results (produced class files) and makes sure
that no Analysis is created out of the broken state.
We do that by introducing a new exception, `CompileCancelled`,
and throwing it at the same spot where the exception signalling compilation
errors is thrown. We also modify `IncrementalCompile` to
catch that exception and return gracefully, as if no compilation had been
invoked.
NOTE: In case compilation errors were reported _before_
compilation cancellation was requested, we'll still report them
using the old mechanism, so partial errors are not lost in case
of cancelled compilation.
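A minimal, self-contained sketch of the control flow described above (all
names other than `CompileCancelled` are hypothetical, not the sbt API):
cancellation is surfaced as an exception right next to the one used for
compilation errors, and the incremental driver catches it and returns
without building an Analysis.

    object CancelledCompilationSketch {
      final class CompileErrors(msg: String)    extends RuntimeException(msg)
      final class CompileCancelled(msg: String) extends RuntimeException(msg)

      trait Reporter { def hasErrors: Boolean; def cancelled: Boolean }

      // Thrown at the same spot as the exception signalling compilation errors.
      def throwIfBroken(reporter: Reporter): Unit = {
        if (reporter.hasErrors) throw new CompileErrors("compilation failed")
        if (reporter.cancelled) throw new CompileCancelled("compilation cancelled")
      }

      // The driver catches cancellation and produces no Analysis at all,
      // instead of building one from the partial data in AnalysisCallback.
      def incrementalCompile(reporter: Reporter)(runCompiler: () => Unit): Option[String] =
        try { runCompiler(); throwIfBroken(reporter); Some("new Analysis") } // placeholder
        catch { case _: CompileCancelled => None }
    }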
In summary, this commit:
* drops type normalization in the API phase but keeps dealiasing
* fixes #736 and marks the corresponding test as passing
I discussed type normalization with @adriaanm and, according to him,
sbt shouldn't call that method. The purpose of the method is to
convert a type to the form that the subtyping algorithm expects. Sbt
doesn't need to call it, and it's fairly expensive in some cases.
Dropping type normalization also fixes #726 by no longer running into the
stale-cache problem in the Scala compiler described in SI-7361.
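A minimal sketch of the change (the wrapper class is hypothetical, but
`dealias` and `normalize` are the real scalac `Type` methods): when the API
phase processes a type, it now only follows type aliases instead of fully
normalizing the type.

    import scala.tools.nsc.Global

    class ApiTypeProcessing[G <: Global](val global: G) {
      import global._

      def processed(tpe: Type): Type =
        tpe.dealias // previously tpe.normalize, which is more expensive and
                    // triggers the compiler cache issue described in SI-7361
    }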
The infrastructure for resident compilation still exists,
but the actual scalac-side code that was backported is removed.
Future work on using a resident scalac will use that invalidation
code directly from scalac anyway.
While trying to determine binary dependencies, sbt looks up the class files
corresponding to symbols. It used to do that for packages too, and most of the
time the lookup would fail because packages don't have a corresponding class
file generated. However, on a case-insensitive file system, combined
with a particular nesting structure, you could get a spurious dependency.
See the added test case for an example of such a structure.
The remedy is to never even try to locate class files corresponding to
packages.
Fixes #620.
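A minimal sketch of the remedy described above (the surrounding helper names
are hypothetical): skip the class-file lookup entirely when the symbol is a
package.

    import java.io.File
    import scala.tools.nsc.Global

    class BinaryDependencies[G <: Global](val global: G) {
      import global._

      def classFileFor(sym: Symbol): Option[File] =
        if (sym.hasPackageFlag) None  // packages never map to a generated class file
        else locateClassFile(sym)     // the previously existing lookup

      private def locateClassFile(sym: Symbol): Option[File] = None // stub for the sketch
    }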
Extract the api after picklers, since that way we see the same symbol
information/structure irrespective of whether we were typechecking
from source or unpickling previously compiled classes.
Previously, the apiExtractor phase ran after typer.
Since this fix is hard to verify with a test (it's based on the
conceptual argument above, and anecdotal evidence from incremental
compilation of a big codebase), we're providing a way to restore the
old behaviour: run sbt with -Dsbt.api.phase=typer.
This fixes #609.
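A minimal sketch (not the actual sbt sources; the component name is
hypothetical) of the escape hatch described above: the phase the API extractor
runs after is read from the sbt.api.phase system property and defaults to the
pickler phase.

    import scala.tools.nsc.plugins.PluginComponent

    abstract class ApiExtractorComponent extends PluginComponent {
      val phaseName = "apiextractor"  // hypothetical phase name
      // -Dsbt.api.phase=typer restores the old behaviour of extracting after typer
      val runsAfter = List(sys.props.getOrElse("sbt.api.phase", "pickler"))
      // newPhase (omitted here) runs the actual API extraction
    }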
goal:
a representation of a type reference to a refinement class that's stable
across compilation runs (and thus insensitive to typing from source or
unpickling from bytecode)
problem:
the current representation, which corresponds to the owner chain of the
refinement:
1. is affected by pickling, so typing from source or using unpickled
symbols gives different results (because the unpickler "localizes"
owners -- this could be fixed in the compiler in the long term)
2. can't distinguish multiple refinements in the same owner (this is
a limitation of SBT's internal representation and cannot be fixed in
the compiler)
solution:
expand the reference to the corresponding refinement type: doing that
recursively may not terminate, but we can deal with that by
approximating recursive references (all we care about is being sound for
recompilation: recompile iff a dependency changes, and this will happen
as long as we have one unrolling of the reference to the refinement)
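An illustrative example (hypothetical code) of limitation 2 above: both
refinement types below have the same owner, so a representation based purely
on the owner chain cannot distinguish references to them.

    object Refinements {
      type A = AnyRef { def foo: Int }
      type B = AnyRef { def bar: String }
    }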