The main motivation behind this commit is to reify the information about
API changes that the incremental compiler considers. We introduce a new
sealed class `APIChange` that has (at the moment) two subtypes, sketched below:
* APIChangeDueToMacroDefinition - as the name suggests, this represents
the case where the incremental compiler considers an API to be changed
just because the given source file contains a macro definition
* SourceAPIChange - this represents the case of a regular API change;
at the moment it's just a simple wrapper around the value representing
the source file, but in the future it will be expanded to contain more
detailed information about API changes (e.g. a collection of changed
name hashes)
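A minimal sketch of what the hierarchy might look like (the field names,
the type parameter and the `APIChanges` wrapper shown here are assumptions
for illustration, not the actual sbt sources):

```scala
// Hypothetical sketch; the real definitions may differ in fields and typing.
sealed abstract class APIChange[T](val modified: T)

// The API is considered changed solely because the source contains a macro definition.
final case class APIChangeDueToMacroDefinition[T](source: T)
  extends APIChange[T](source)

// A regular API change; later it may also carry e.g. changed name hashes.
final case class SourceAPIChange[T](source: T)
  extends APIChange[T](source)

// APIChanges becomes just a collection of APIChange instances.
final class APIChanges[T](val apiChanges: Iterable[APIChange[T]]) {
  def allModified: Iterable[T] = apiChanges.map(_.modified)
}
```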
The `APIChanges` class becomes just a collection of `APIChange` instances.
In particular, I removed the `names` field, which seems to be dead code in
the incremental compiler. The `NameChanges` class and the methods in
`SameAPI` that refer to it have been deprecated.
Incremental.scala has been adapted to the changed signature of the APIChanges
class. The `sameSource` method now returns a representation of the APIChange
(if there is one) instead of just a simple boolean. One notable change is
that information about APIChanges is pushed deeper into the invalidation logic.
This will allow us to treat the APIChangeDueToMacroDefinition case properly
once the name hashing scheme arrives.
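A self-contained sketch of the reshaped `sameSource` check, reusing the
APIChange types from the sketch above (the `Source` trait and the two
predicates are placeholders standing in for sbt's actual API-comparison
machinery, such as SameAPI and macro detection):

```scala
// Placeholder types and predicates; only the shape of the result matters here.
trait Source
def hasMacroDefinition(api: Source): Boolean = false          // stand-in for the real check
def sameApi(oldApi: Source, newApi: Source): Boolean = true   // stand-in for SameAPI

// Instead of answering "did the API change?", answer "what changed, if anything?".
def sameSource[T](src: T, oldApi: Source, newApi: Source): Option[APIChange[T]] =
  if (hasMacroDefinition(newApi)) Some(APIChangeDueToMacroDefinition(src))
  else if (sameApi(oldApi, newApi)) None
  else Some(SourceAPIChange(src))
```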
This commit shouldn't change any behavior and is purely a refactoring.
The following events are logged:
* invalidation of source file due to macro definition
* inclusion of a dependency invalidated by inheritance; we log both
nodes of the dependency edge (the dependent and the dependency)
The second bullet helps in understanding what's going on in the case of
complex inheritance hierarchies like the one in the Scala compiler.
#958 describes a scenario where partially successful results are
produced in the form of class files written to disk. However, if compilation
fails further down the road, we do not record any new compilation results
(products) in the Analysis object. This leads to the Analysis object and the
disk contents getting out of sync.
One way to solve this problem is to use a transactional ClassfileManager that
commits changes to class files on disk only when the entire incremental
compilation session is successful. Otherwise, new class files are rolled
back to their previous state.
The other way to solve this problem is to record the time stamps of class files
in the Analysis object. This way, the incremental compiler can detect that the
class files and the Analysis object got out of sync and recover by
recompiling the corresponding sources.
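A minimal sketch of the detection this enables (the map shape and names are
assumptions for illustration, not the actual Analysis API):

```scala
import java.io.File

// For every source, the Analysis records its products (class files) together with
// the timestamp observed when they were written. If a class file is missing or its
// timestamp no longer matches, disk and Analysis are out of sync and the source
// must be recompiled.
def sourcesToRecompile(recorded: Map[File, Seq[(File, Long)]]): Set[File] =
  recorded.collect {
    case (source, products)
        if products.exists { case (classFile, stamp) =>
          !classFile.exists || classFile.lastModified != stamp
        } =>
      source
  }.toSet
```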
This commit uses the latter solution, which enables the simpler (non-transactional)
ClassfileManager to handle the scenario from #958.
Fixes #958
* Deprecate the old mainClass method on appProvider
* Create an entryPoint method which represents the class used to launch instead.
* Create PlainApplication to wrap static `main` methods. Can return
either Int, Exit or Unit.
* Detect supported 'plain' classes via reflection (see the sketch after this list).
* Add new unit tests appropriate to the feature.
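A hedged sketch of what the reflective detection could look like (the exact
check shown here is an assumption based on the bullet points above, not the
launcher's actual code):

```scala
import java.lang.reflect.Modifier

// A 'plain' application here is a class exposing a public static
// main(Array[String]) method; per the description above, the real feature
// additionally accepts Int, Exit or Unit return types.
def isPlainApplication(cls: Class[_]): Boolean =
  try {
    val main = cls.getMethod("main", classOf[Array[String]])
    Modifier.isStatic(main.getModifiers) && Modifier.isPublic(main.getModifiers)
  } catch {
    case _: NoSuchMethodException => false
  }
```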
The ticket contains a detailed description of the problem. The test case
just shows that if we set `incOptions := sbt.inc.IncOptions.Default`
then the incremental compiler doesn't recover from compiler errors
properly.
This is a temporary workaround: it assumes nothing else uses these streams later.
This assumption is acceptable for the 'export' and test streams, since they are
unlikely to be reused. However, the proper fix is for the TaskStreams methods
to be smarter: they could open in append mode if the stream was previously closed.
The streams associated with a task could then be optimistically closed after it
finishes executing. (Any task can write to another task's streams, which is why
this is only an optimization.)
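A small sketch of the "smarter" behavior suggested above (names and structure
are made up for illustration; this is not the actual TaskStreams code):

```scala
import java.io.{BufferedWriter, File, FileWriter}
import scala.collection.mutable

// If a stream for a given file has already been opened and closed once,
// reopen it in append mode instead of truncating it.
class ReopenableStreams {
  private val closedBefore = mutable.Set.empty[File]

  def text(file: File): BufferedWriter =
    new BufferedWriter(new FileWriter(file, closedBefore.contains(file)))

  def close(file: File, writer: BufferedWriter): Unit = {
    writer.close()
    closedBefore += file
  }
}
```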
If an exception is thrown when accepting a connection from a forked test
agent, currently all I'm seeing is that sbt hangs with no
output. Thread dumps show that the main process is waiting for the
agent to return, while the agent is waiting for the server to send it
something.
This change logs the exception, so that at least the error can be
googled. It also cleans up the server socket.
This protects against deadlocks between the writing and reading ends,
since the ObjectOutputStream constructor writes a header but does not
flush it, while the ObjectInputStream constructor reads that header and
blocks until it arrives.
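An illustrative sketch of the pattern this refers to (not the actual sbt code):

```scala
import java.io.{ObjectInputStream, ObjectOutputStream}
import java.net.Socket

// Create and flush the ObjectOutputStream first so its serialization header
// reaches the peer; otherwise both ends can block in the ObjectInputStream
// constructor waiting for a header that was never flushed.
def openObjectStreams(socket: Socket): (ObjectOutputStream, ObjectInputStream) = {
  val out = new ObjectOutputStream(socket.getOutputStream)
  out.flush() // push the header before the peer constructs its ObjectInputStream
  val in = new ObjectInputStream(socket.getInputStream)
  (out, in)
}
```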
Fix the problem with unstable names synthesized for existential
types (declared with underscore syntax) by renaming type variables
to a scheme that is guaranteed to be stable no matter where the
given existential type appears.
The scheme we use is De Bruijn-like indices that capture both the position
of a type variable declaration within a single existential type and the
nesting level of the existential type. This way we properly support nested
existential types by avoiding name clashes.
In general, we can perform renamings like this because type variables
declared in existential types are scoped to those types, so the renaming
operation is local.
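A hypothetical illustration of such a scheme (the exact name format is an
assumption for this sketch, not the names sbt actually synthesizes):

```scala
// The synthesized name encodes only the nesting level of the enclosing
// existential and the position of the variable within it, so it is identical
// wherever the type appears.
def stableExistentialName(nestingLevel: Int, position: Int): String =
  s"existential_${nestingLevel}_${position}"

// e.g. every occurrence of Map[_, _] in a signature would have its two
// wildcard variables renamed to existential_0_0 and existential_0_1,
// regardless of the fresh names the compiler originally generated.
```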
There's a specs2 unit test covering the instability of existential type names.
The test is included in the compiler-interface project, and the build
definition has been modified to enable building and executing tests
in the compiler-interface project. Some dependencies have been modified:
* the compiler-interface project depends on the api project for testing
(the test makes use of SameAPI)
* a dependency on junit has been introduced because it's needed
for the `@RunWith` annotation which declares that the specs2 unit
test should be run with JUnitRunner
SameAPI has been modified to expose a method that allows us to
compare two definitions.
This commit also adds the `ScalaCompilerForUnitTesting` class that allows
one to compile a piece of Scala code and inspect the information recorded
through the callbacks defined in the `AnalysisCallback` interface. That class
uses the existing ConsoleLogger for logging. I considered doing the same for
ConsoleReporter. There's a LoggingReporter defined which would fit our
use case, but it's defined in the compile subproject, which compiler-interface
doesn't depend on, so we roll our own.
ScalaCompilerForUnitTesting uses TestCallback from the compiler-interface
subproject for recording the information passed to callbacks. In order
to be able to access TestCallback from the compiler-interface
subproject, I had to tweak the dependencies between interface and
compiler-interface so that test classes from the former are visible in the
latter. I also modified TestCallback itself to accumulate APIs in
a HashMap instead of a buffer of tuples for easier lookup.
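A sketch of what that accumulation change amounts to (the class name and the
`Api` type parameter are stand-ins for illustration, not the actual
TestCallback code):

```scala
import java.io.File
import scala.collection.mutable

// Extracted APIs are accumulated in a map keyed by source file, so a test can
// look up the API for a given file directly instead of scanning a buffer of
// (file, api) tuples.
class ApiRecorder[Api] {
  val apis: mutable.Map[File, Api] = mutable.HashMap.empty

  def api(sourceFile: File, extracted: Api): Unit =
    apis(sourceFile) = extracted
}
```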
An integration test has been added which tests the scenario
mentioned in #823.
This commit fixes #823.
The serialized structures there aren't part of an Analysis,
so they aren't interned.
TODO: Refactor InternedAnalysisFormats so that this is more
straightforward and less error-prone. This commit is
a first-pass fix of the broken test.
As pointed out by @harrah in #705, we might want to merge the API
and dependency phases, so we should mention that in the comment explaining
phase ordering constraints instead.
I'd still like to keep the old comment in the history (as a separate commit)
because it took me a while to figure out cryptic issues related to the
continuations plugin, so it's valuable to keep the explanation around in
case somebody else in the future tries to mess around with the dependencies
defined by sbt.
This is the first step towards using a new mechanism for dependency
extraction that is based on tree walking.
We need dependency extraction in a separate phase because the code
walking trees should run before refchecks, whereas the analyzer phase runs
at the very end of the phase pipeline.
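A hedged sketch of how such a component can pin its place in the phase
pipeline (the phase names and the component name are assumptions based on the
description above, not sbt's actual component definition):

```scala
import scala.tools.nsc.Global
import scala.tools.nsc.plugins.PluginComponent

abstract class DependencyExtractionComponent extends PluginComponent {
  val global: Global
  val phaseName = "xsbt-dependency"
  // Walk the typed trees, but do so before refchecks runs.
  val runsAfter = List("typer")
  override val runsBefore = List("refchecks")
}
```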
This change also includes a work-around for a phase ordering issue with the
continuations plugin. See the included comment and SI-7217 for details.
As pointed out by @harrah in #705, neither beginSource nor endSource is
used in sbt internally for anything meaningful.
We've discussed the option of deprecating those methods, but since they
are not doing anything meaningful, Mark prefers to have a compile-time
error in case somebody implements or calls those methods. I agree with
that, hence the removal.