This is an attempt to provide a decent error message in some cases. However, paths that include
the Java path separator character are just fundamentally problematic and aren't always going to
be cleanly detected.
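A hedged sketch of the kind of check this refers to (the helper below is hypothetical, not the actual sbt code):

    import java.io.File

    // Hypothetical helper: warn about entries that embed the platform path
    // separator, since such paths cannot be reliably distinguished once they
    // are joined into a single classpath-like string.
    def warnOnPathSeparator(path: String, warn: String => Unit): Unit =
      if (path.indexOf(File.pathSeparatorChar) >= 0)
        warn(s"Path '$path' contains the path separator character " +
          s"'${File.pathSeparatorChar}' and may not be handled correctly.")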
Add a test case that documents the current behavior of the incremental
compiler when it comes to invalidating dependencies that arise
from inheritance and member references.
There are two categories of places in the code that need to refer to
the `nameHashing` option:
* places where an Analysis object is created, so it gets the proper
implementation of the underlying Relations object
* places with logic that is specifically designed to be enabled by
that option
This commit covers both cases.
Commit 39036e7c20 introduced the
`recompileOnMacroDef` option to IncOptions. However, not all of the necessary
logic was updated. This commit fixes that:
* the `copy` method no longer forgets the value of the `recompileOnMacroDef`
flag
* `productArity` has been increased to match the arity of the class
* `productElement` returns the value of the `recompileOnMacroDef` flag
* `hashCode` and `equals` take into account the value of the
`recompileOnMacroDef` flag
* the name of the key for the `recompileOnMacroDef` flag has been fixed
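As an illustration only (a simplified shape, not the actual IncOptions source), the fix amounts to threading the new flag through every product-related member:

    // Simplified sketch of a hand-written product class: the new flag must be
    // mentioned in copy, productArity, productElement, equals and hashCode.
    final class IncOptionsSketch(val transitiveStep: Int, val recompileOnMacroDef: Boolean)
        extends Product with Serializable {
      def copy(transitiveStep: Int = this.transitiveStep,
               recompileOnMacroDef: Boolean = this.recompileOnMacroDef): IncOptionsSketch =
        new IncOptionsSketch(transitiveStep, recompileOnMacroDef)
      override def productArity: Int = 2
      override def productElement(n: Int): Any = n match {
        case 0 => transitiveStep
        case 1 => recompileOnMacroDef
        case _ => throw new IndexOutOfBoundsException(n.toString)
      }
      override def canEqual(that: Any): Boolean = that.isInstanceOf[IncOptionsSketch]
      override def equals(that: Any): Boolean = that match {
        case o: IncOptionsSketch =>
          transitiveStep == o.transitiveStep && recompileOnMacroDef == o.recompileOnMacroDef
        case _ => false
      }
      override def hashCode: Int = (transitiveStep, recompileOnMacroDef).hashCode
    }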
Move implementation of the following methods from IncrementalCommon
to IncrementalDefaultImpl:
* invalidatedPackageObjects
* sameAPI
* invalidateByExternal
* allDeps
* invalidateSource
These are the methods that are expected to have different implementations
in the name hashing algorithm. Hence, we make them abstract in
IncrementalCommon so they can be implemented differently in subclasses.
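Schematically (the method signatures below are simplified stand-ins, not the real ones):

    // The shared driver keeps the common invalidation loop and declares the
    // strategy-specific operations as abstract members.
    abstract class IncrementalCommonSketch {
      protected def sameAPI(src: java.io.File): Boolean
      protected def invalidateSource(src: java.io.File): Set[java.io.File]
      // ...the common incremental-compilation loop is written against these...
    }

    // The default strategy supplies the behaviour known from sbt 0.13.0.
    class IncrementalDefaultImplSketch extends IncrementalCommonSketch {
      protected def sameAPI(src: java.io.File): Boolean = true // placeholder
      protected def invalidateSource(src: java.io.File): Set[java.io.File] = Set.empty
    }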
Refactor the `invalidateByExternal` method to take a single, external
API change. Introduce `invalidateByAllExternal`, which takes the whole
APIChanges object.
This way `invalidateByExternal` will have access to the APIChange object
that represents changed name hashes once name hashing is merged.
This way we'll be able to have a polymorphic implementation of this
method in the future. One implementation will use the old dependency
tracking mechanism and the other will use the new one (implemented
for name hashing).
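For illustration, the intended split can be sketched like this (types are simplified placeholders):

    trait ExternalInvalidationSketch[APIChange, File] {
      // Strategy-specific hook: what a single external API change invalidates.
      def invalidateByExternal(change: APIChange): Set[File]
      // Shared wrapper: fold over all external changes and union the results.
      def invalidateByAllExternal(changes: Iterable[APIChange]): Set[File] =
        changes.foldLeft(Set.empty[File])(_ ++ invalidateByExternal(_))
    }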
In addition to `invalidateSources` we introduce `invalidateSource`
that invalidates dependencies of a single source. This is needed
for the name hashing algorithm because its invalidation logic
depends on information about API changes of each source file
individually.
The refactoring is done in `IncrementalCommon` class so it affects
the default implementation as well. However, this refactoring does
not affect the result of invalidation in the default implementation.
Introduce an abstract `IncrementalCommon` class that holds the
implementation of the incremental compiler that was previously in the
`Incremental` class. Also, introduce `IncrementalDefaultImpl`, which
inherits from IncrementalCommon.
This is the first step towards a design where most of the incremental
compiler's logic lives in IncrementalCommon and we have two subclasses:
1. Default, which holds implementation specific to the old algorithm
known from sbt 0.13.0
2. NameHashing, which holds implementation specific to the name
hashing algorithm
This commit is purely a refactoring and does not change any behavior.
Both the Logger and IncOptions instances were passed around the Incremental
class implementation unmodified. Given that the entire implementation of
the class uses exactly the same values of those types, it makes sense
to extract them as constructor arguments so they are accessible everywhere.
This helps reduce the signatures of other methods to the essential
parameters that are specific to the given method.
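Schematically (illustrative shape only, not the real signatures):

    // With the logger and the options as constructor parameters, helper
    // methods no longer need to repeat them in their own signatures.
    class IncrementalSketch(log: String => Unit, options: Map[String, Boolean]) {
      def invalidate(changes: Set[java.io.File]): Set[java.io.File] = {
        log(s"invalidating ${changes.size} sources")
        changes // placeholder for the real invalidation logic
      }
    }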
Move most of the functionality from the `Incremental` object to its
companion class.
This commit is a preparation for making it possible to have
two different implementations of the logic in the `Incremental` object.
This entailed modifying ResolutionCache and the CustomPomParser
to reflect changes to the ResolutionCacheManager interface and
DefaultExtendsDescriptor class between Ivy 2.3.0-rc1 and
2.3.0-rc2. Specifically,
1. ResolutionCacheManager now includes two additional methods
that needed implementations in ResolutionCache:
getResolvedModuleDescriptor(mrid: ModuleRevisionId) and
saveResolvedModuleDescriptor(md: ModuleDescriptor). I adapted
the implementations for these (which are expressed primarily in
terms of other interface methods) from Ivy 2.3.0's
DefaultResolutionCacheManager.
2. Instead of taking a ModuleRevisionIdentifier and a resolved
ModuleRevisionIdentifier as its first two arguments, the
DefaultExtendsDescriptor constructor now takes a
ModuleDescriptor. This was a trivial change.
Note that ResolutionCache.getResolvedModuleDescriptor does not
appear to be exercised by Ivy as sbt uses it, so there is no
test coverage for its implementation. Also note that the
DefaultResolutionCacheManager object created in
Update.configureResolutionCache now requires a reference to an
IvySettings object; DRCM expects this to be non-null.
This requires a Format[T] to be implicitly available at the call site and requires the task
to be referenced statically (not in a settingDyn call). References to previous task values
in the form of a ScopedKey[Task[T]] + Format[T] are collected at setting load time in the
'references' setting. These are used to know which tasks should be persisted (the ScopedKey)
and how to persist them (the Format).
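For illustration, a call site might look roughly like this (a hedged sketch for a build.sbt; the key name is made up, and the Format[Int] is assumed to come from sbinary's DefaultProtocol):

    import sbinary.DefaultProtocol._   // supplies an implicit Format[Int]

    val runCount = taskKey[Int]("How many times the task has run in this session")

    // The static reference to runCount together with the implicit Format[Int]
    // is what gets collected in the 'references' setting at load time.
    runCount := runCount.previous.getOrElse(0) + 1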
When checking/delegating previous references, rules are slightly different.
A normal reference from a task t in scope s cannot refer to t in s unless
there is an earlier definition of t in s. However, a previous reference
does not have this restriction. This commit modifies validateReferenced
to allow this.
TODO: user documentation
TODO: stable selection of the Format when there are multiple .previous calls on the same task
TODO: make it usable in InputTasks, specifically Parsers
Each Relations implementation should have its own specific `toString`
implementation. For example, only the `MRelationsNameHashing` implementation
should print used names in its toString representation.
A hash for a given name in a source file is computed by combining the
hashes of all definitions with that name. When hashing a single
definition we take into account all information about it except nested
definitions. For example, if we have the following definition
class Foo[T] {
  def bar(x: Int): Int = ???
}
the hash sum for `Foo` will include the fact that we have a class with
a single type parameter, but it won't include the hash sum of the `bar` method.
Computed hash sums are location-sensitive. Each definition is hashed along
with its location so we properly detect cases where a definition's signature
stays the same but the definition is moved around within the same
compilation unit.
The location is defined as a sequence of selections. Each selection consists
of a name and a name type. The name type is either a term name or a type name.
The Scala specification (section 9.2) guarantees that each publicly visible
definition is uniquely identified by a sequence of such selections.
For example, if we have:
object Foo {
  class Bar { def abc: Int = ??? }
}
then the location of `abc` is Seq((TermName, Foo), (TypeName, Bar)).
It's worth mentioning that we track name-hash pairs separately for
regular (non-implicit) and implicit members. That's required by the name
hashing algorithm because it does not apply its heuristic when implicit
members are modified.
Another important characteristic is that we include all inherited members
when computing name hashes.
Here is the detailed list of changes made in this commit:
* HashAPI has a new parameter, `includeDefinitions`, that allows
shallow hashing of Structures (where we do not compute hashes
recursively)
* HashAPI exposes a `finalizeHash` method that allows one to capture the
current hash at any time. This is useful if you want to hash a list of
definitions and not just the whole `SourceAPI`.
* NameHashing implements the actual extraction of public definitions,
grouping them by simple name and computing hash sums for each group
using HashAPI (see the sketch after this list)
* the `Source` class (defined in the interface/other file) has been extended
to include the `_internalOnly_nameHashes` field. This field stores the
NameHashes data structure for a given source file. NameHashes stores
two separate collections of name-hash pairs, for regular and
implicit members.
The prefix `_internalOnly_` indicates that this is not an
official incremental compiler or sbt API and is for use by
incremental compiler internals only. We had to use such a prefix
because the `datatype` code generator doesn't support emitting access
modifiers.
* `AnalysisCallback` implementation has been modified to gather all
name hashes and store them in the Source object
* TestCaseGenerators has been modified to implement generation of
NameHashes
* The NameHashingSpecification contains a few unit tests that make sure
that the basic functionality works properly
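To make the grouping step concrete, here is a minimal sketch of the idea (simplified types; the real code hashes full API representations with HashAPI rather than using plain hashCode):

    final case class NameHash(name: String, hash: Int)

    // Group already-hashed definitions by their simple name and combine the
    // hashes within each group into a single hash for that name.
    def nameHashes(definitionHashes: Seq[(String, Int)]): Seq[NameHash] =
      definitionHashes
        .groupBy { case (simpleName, _) => simpleName }
        .map { case (simpleName, defs) => NameHash(simpleName, defs.map(_._2).hashCode) }
        .toSeq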
Tracking of used names is a component needed by the name hashing
algorithm. The extraction and storage of used names is active only when
the `AnalysisCallback.nameHashing` flag is enabled; it is disabled by
default.
This change consists of two parts:
1. Modification of Relations to include a new `names` relation
that allows us to track used names in Scala source files
2. Implementation of logic that extracts used names from Scala
compilation units (that correspond to Scala source files)
The first part is straightforward: add the standard set of methods in
Relations (along with their implementations) and update the logic that
serializes and deserializes Relations.
The second part is implemented as a tree walk that collects all symbols
associated with trees. For each symbol we extract the simple, decoded name
and add it to a set of extracted names. Check the documentation of
`ExtractUsedNames` for a discussion of the implementation details.
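The core idea can be sketched as follows (a simplified traverser, not the actual ExtractUsedNames code):

    import scala.tools.nsc.Global

    class UsedNamesSketch[G <: Global](val global: G) {
      import global._

      // Walk the whole compilation unit and record the decoded simple name
      // of every symbol attached to a tree node.
      def extract(unit: CompilationUnit): Set[String] = {
        val names = collection.mutable.Set.empty[String]
        new Traverser {
          override def traverse(tree: Tree): Unit = {
            val sym = tree.symbol
            if (sym != null && sym != NoSymbol)
              names += sym.name.decoded.trim
            super.traverse(tree)
          }
        }.traverse(unit.body)
        names.toSet
      }
    }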
The `ExtractUsedNames` comes with unit tests grouped in
`ExtractUsedNamesSpecification`. Check that class for details.
Given that we fork while running tests in the `compiler-interface`
subproject, and tests are run in parallel, which involves allocating
multiple Scala compiler instances, we had to bump the default memory limit.
This commit contains fixes for gkossakowski/sbt#3, gkossakowski/sbt#5 and
gkossakowski/sbt#6 issues.
Add documentation which explains how a general technique using implicit
conversions is employed in the Compat class. Previously, this explanation
was hidden inside the Compat class.
Also, I changed the `toplevelClass` implementation to call the
`sourceCompatibilityOnly` method, which is designed to serve
as a compatibility stub.
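For reference, the general trick looks roughly like this (member names are illustrative, not the actual Compat contents, apart from `sourceCompatibilityOnly`, which is mentioned above):

    // Sketch of the compatibility-shim pattern: the implicit conversion adds a
    // member whose body is only a stub. When the compiler version we build
    // against already defines that member on Symbol, the real member wins and
    // the implicit is never applied; otherwise the stub keeps the shared
    // source compiling (and must never actually be called).
    trait CompatSketch {
      type Symbol
      private def sourceCompatibilityOnly: Nothing =
        throw new RuntimeException("For source compatibility only: should not get here.")

      final class SymbolCompat(sym: Symbol) {
        def someNewerMethod: String = sourceCompatibilityOnly
      }
      implicit def symbolCompat(sym: Symbol): SymbolCompat = new SymbolCompat(sym)
    }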