JEP-201 describes the new modularized Java Standard Library in Java 9.
The java.xml.bind module is no longer resolved by default; it needs
to be added explicitly with the JVM option `--add-modules java.xml.bind`,
or with a dependency declaration in the module-info.java file if you
package your own code up as a Jigsaw module.
This commit traps the linkage error and (reflectively) uses
java.util.Base64, which has been the recommended way to encode and decode
Base64 since Java 8.
This is the start of an effort to make SBT 0.13.x compatible
with JDK 9.
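A minimal sketch of that fallback (not sbt's exact code; it assumes the old entry point was `javax.xml.bind.DatatypeConverter`):
```
// Try the legacy javax.xml.bind API first; on JDK 9 without the java.xml.bind
// module this fails to link, so fall back to java.util.Base64 via reflection
// (keeping the code compilable against pre-Java-8 class libraries).
def base64Encode(bytes: Array[Byte]): String =
  try javax.xml.bind.DatatypeConverter.printBase64Binary(bytes)
  catch {
    case _: LinkageError =>
      val encoder = Class.forName("java.util.Base64").getMethod("getEncoder").invoke(null)
      encoder.getClass
        .getMethod("encodeToString", classOf[Array[Byte]])
        .invoke(encoder, bytes)
        .asInstanceOf[String]
  }
```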
The system property java.ext.dirs no longer exists now that
JEP-220 has removed the extension and endorsed classpath
facilities. It is also forbidden to manually set this to
an empty string from the command line.
This commit treats the absence of this property as an
empty extension classpath.
Fixes #2761
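A hedged sketch of that behaviour (the helper name is illustrative, not the actual sbt code):
```
// Read the possibly-missing java.ext.dirs property; on JDK 9+ (JEP-220) it is
// absent, which we treat as an empty extension classpath.
def extensionDirs: Seq[java.io.File] =
  sys.props.get("java.ext.dirs") match {
    case Some(dirs) if dirs.nonEmpty =>
      dirs.split(java.io.File.pathSeparator).toSeq.map(new java.io.File(_))
    case _ => Nil
  }
```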
With sbt 0.13.13-RC1 we rediscovered that the dependency pulled in from
Giter8 was affecting the plugins. To avoid this, this change splits the
template resolver implementation out into another module called
sbt-giter8-resolver, which is downloaded using Ivy into
`~/.sbt/0.13/templates/` and then launched reflectively, using Java as
the interface.
The compiler interface subclasses `scala.tools.nsc.Global`,
and loading this new subclass before each `compile` task forces
HotSpot JIT to deoptimize large swathes of compiled code. It's
a bit like SBT has rigged the dice so that it always slides down the
longest snake in a game of Snakes and Ladders.
The slowdown seems to be larger with Scala 2.12. There are a number
of variables at play, but I think the main factor here is that
we now rely on JIT to devirtualize calls to final methods in traits
whereas we used to emit static calls. JIT does a good job at this,
so long as classloading doesn't undo that good work.
This commit extends the existing `ClassLoaderCache` to encompass
the classloader that includes the compiler interface JAR. I've
resorted to adding a var to `AnalyzingCompiler` to inject the
dependency, so the cache reaches the spot I need it without
binary-incompatible changes to the intervening method signatures.
This is a backport of sbt/zinc#95
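The idea, in a minimal sketch (not the actual `ClassLoaderCache`; names are illustrative):
```
import java.io.File
import java.net.URLClassLoader
import scala.collection.mutable

// Reuse one classloader per set of JARs across compile tasks, so the JIT's
// work on the compiler-interface classes is not discarded on every compile.
object InterfaceLoaderCache {
  private val cache = mutable.Map.empty[List[File], ClassLoader]

  def apply(jars: List[File], parent: ClassLoader): ClassLoader = synchronized {
    cache.getOrElseUpdate(jars, new URLClassLoader(jars.map(_.toURI.toURL).toArray, parent))
  }
}
```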
The previous approach to value class API (introduced by #2261 and
refined by #2413 and #2414) was to store both unerased and
erased signatures so that changes to value classes forced
recompilations.
This is no longer necessary thanks to #2523: if a class type is
used, then it becomes a dependency of the current class and its name is
part of the used names of the current class. Since the name hash of a
class changes if it stops or starts extending AnyVal, this is enough to
force recompilation of anything that uses a value class type. If the
underlying type of a value class changes, its name hash doesn't change,
but the name hash of `<init>` changes and since every class uses the name
`<init>`, we don't need to do anything special to trigger recompilations
either.
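An illustration of that argument with a hypothetical value class:
```
class Meters(val value: Int) extends AnyVal
// Changing `value: Int` to `value: Long` leaves the name hash of `Meters`
// unchanged, but changes the hash of `<init>`; since every class uses the
// name `<init>`, anything constructing a Meters is recompiled.
object Client { def twice(m: Meters): Meters = new Meters(m.value * 2) }
```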
This is a backport of https://github.com/sbt/zinc/pull/87
When `B2.scala` replaces `B.scala` in the new test
`types-in-used-names-a`, the name hash of `listb` does not change because
the signature of `C.listb` is still `List[B]`; however, users of
`C.listb` have to be recompiled since the subtyping relationships of its
type have changed.
This commit handles this by extending the definition of "used names" to
also include the names of the types of trees, even if these types
do not appear in the source, like `List[B]` in `D.scala` (since `B` has
been invalidated, this will force the recompilation of `D.scala`).
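A hypothetical three-file sketch of the scenario (the actual test sources may differ):
```
// B.scala -- later replaced by B2.scala, which changes B's parents and thus
// the subtyping relationships of B
class B

// C.scala
object C { val listb: List[B] = Nil }

// D.scala -- `B` never appears in this source, but the type of `C.listb` is
// `List[B]`, so with this change `B` is recorded among D's used names
object D { val xs = C.listb }
```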
This commit does not fix every issue with used types, as illustrated by
the pending test `types-in-used-names-b`: `B.scala` is not recompiled
because it uses the type `T`, whose hash has not changed, but `T` is
bounded by `S` and `S` has changed, so it should be recompiled.
This should be fixable by including the type bounds underlying a
`TypeRef` in `symbolsInType`.
The test `as-seen-from-a` that did not work before shows that we may not
have to worry about tracking prefixes in `ExtractAPI` anymore, see the
discussion in sbt/zinc#87 for more information.
`traverse(tree: Tree)` used to call `super.traverse(tree)` at the end.
sbt/sbt@0f616294c4 moved the traversal
call inside the pattern match.
For the case of MacroExpansionOf(original), this amounts to not traversing
the macro-expanded code. See
sbt/src/sbt-test/source-dependencies/macro-nonarg-dep for the repro.
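A rough structural sketch of the traversal in question (hedged; `MacroExpansionOf` below is a reconstruction of sbt's extractor over the compiler's macro-expansion attachments, not the exact source):
```
import scala.tools.nsc.Global

abstract class TraverserSketch {
  val global: Global
  import global._

  // Recover the original (pre-expansion) tree from a macro-expanded tree.
  object MacroExpansionOf {
    def unapply(tree: Tree): Option[Tree] =
      tree.attachments.all.collect {
        case att: analyzer.MacroExpansionAttachment => att.expandee
      }.headOption
  }

  class Traverse extends Traverser {
    override def traverse(tree: Tree): Unit = tree match {
      case MacroExpansionOf(original) =>
        traverse(original)   // walk the original call, not the macro-expanded code
      case _ =>
        super.traverse(tree) // previously super.traverse ran after every case
    }
  }
}
```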
Provides a workaround flag `incOptions :=
incOptions.value.withIncludeSynthToNameHashing(true)` for name hashing
not including synthetic methods. This will not be enabled by default in
sbt 0.13. It can also be enabled by passing `sbt.inc.include_synth=true`
to the JVM as a system property.
The reason for the instability is a bit tricky, so let's unpack what the
previous code, which checked whether a self type was declared, was doing. It
would check if `thisSym` of a class is equal to the symbol representing the
class. If that's true, we know that there's no self type. If it's false, then
`thisSym` represents either a self type or a self variable. The second
check (a type test) was supposed to check whether the type of `thisSym` is
different from the type of the class. However, it would always yield false
because the TypeRef of `thisSym` was compared to the ClassInfoType of the
class. So if you had a self variable, the logic would see a self type (and
that's what the API representation would give you).
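An illustration of the two cases being conflated (hypothetical classes):
```
trait Logging

// Self TYPE: `A` can only be extended or instantiated together with `Logging`,
// so it must appear in A's API representation.
class A { self: Logging => }

// Self VARIABLE only: `self` is just another name for `this` and doesn't affect
// how other classes use `B`, so it shouldn't change B's API representation.
class B { self => }
```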
Now the tricky bit: `thisSym` is not pickled when it represents just
a self variable, because a self variable doesn't affect other classes
referring to the class. If you looked at the type after unpickling, the
symbol equality test would yield true and we would not see a self type when
just a self variable was declared.
The fix is to check equality of type refs on both sides of the type equality
check. This makes the pending test pass.
Also, I added another test that checks if self types are represented in
various combinations of declaring a self variable and/or a self type.
Fixes #2504.
Add a pending test that shows a problem with instability of representing
self variables. This test covers the bug described in #2504.
In order to test the API representation of a class declared either in a source
file or unpickled from a class file, ScalaCompilerForUnitTesting has been
extended to extract APIs from multiple compiler instances sharing a
classpath.
This commit enables control of whether a compiler instance should be reused
between compiling groups of Scala source files. Check the comments in the
code for why controlling this can be useful.
For example, when the `--sourcepath` option is provided
and the refchecks phase compiles an annotation found
on a referenced symbol from the sourcepath.
`compileLate` assumes that all non-sentinel compiler
phases can be downcast to `GlobalPhase`.
This commit changes the two phases in SBT to extend
this instead of `Phase`. This has the knock-on benefit
of simplifying the phases by letting `GlobalPhase.run`
iterate over the list of compilation units and feed them
to us one by one.
I checked that the test case failed before making each
change.
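Roughly, the shape of the change is as below (a hedged sketch, not the actual sbt phases):
```
import scala.tools.nsc.{ Global, Phase }

abstract class PhaseSketch {
  val global: Global
  import global._

  // Extending GlobalPhase rather than Phase means the phase only implements
  // per-unit work: GlobalPhase.run iterates over the compilation units for us,
  // and compileLate can feed late-entered units through the same apply(unit).
  class ExamplePhase(prev: Phase) extends GlobalPhase(prev) {
    def name = "xsbt-example"
    def apply(unit: CompilationUnit): Unit = {
      // process a single compilation unit here
    }
  }
}
```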
The previous logic was wrong and included all members of traits in their
API hash, which degraded the performance of the incremental compiler.
This commit changes this logic so that only the non-public members of
traits are included in their API hash.
Fixes #2436
Add a new field in `Source` that indicates whether this `Source`
represents a package object. This field is used in the incremental
compiler to perform recompilation of package objects rather than relying
on the file name being `package.scala`.
Mark pending test `source-dependencies/package-object-name` as passing.
Modify `invalidatedPackageObjects` to look for package objects that
transitively inherit from an invalidated class instead of just inheriting
directly. This is the correct behavior that should have been implemented
in the first place. The bug comes from a subtle copy&paste mistake. The old
implementation used the `publicInherited` relation and the new one looks
identical but uses the `inheritance` relation. The difference is that
`publicInherited` was expanded transitively along the inheritance chain but
`inheritance` is not. We have to perform the transitive walk in the
`IncrementalNameHashing.invalidatedPackageObjects` implementation.
Mark `pkg-self` test as passing.
Fixes #2326
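A minimal, generic sketch of that transitive walk (`directlyInherits` and `definesPackageObject` are hypothetical stand-ins for sbt's non-transitive `inheritance` relation and its package-object check):
```
// Fixed-point expansion: collect everything that transitively inherits from an
// invalidated source, then keep only the sources defining package objects.
def invalidatedPackageObjects(
    invalidated: Set[String],
    allSources: Set[String],
    directlyInherits: (String, String) => Boolean,
    definesPackageObject: String => Boolean
): Set[String] = {
  @annotation.tailrec
  def expand(acc: Set[String]): Set[String] = {
    val next = allSources.filter(s => acc.exists(a => directlyInherits(s, a)))
    if (next.subsetOf(acc)) acc else expand(acc ++ next)
  }
  expand(invalidated).filter(definesPackageObject)
}
```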
To aid debugging, modify Relations.toString to include `inheritance` in
addition to already printed `memberRef` relation.
Modify it for both name hashing and the old implementation.
If a method's type contains a non-primitive value class then it has two
signatures: one before erasure and one after erasure. Before this
commit, we checked if this was the case using `isAnyValSubtype`, but
this is too crude since primitive value classes are also subtypes of
`AnyVal` but do not change signature after erasure.
This commit replaces `isAnyValSubtype` by `isDerivedValueClass` which
excludes primitive value classes.
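For illustration, with a hypothetical value class:
```
class Meters(val underlying: Int) extends AnyVal

class Api {
  def a(x: Int): Int = x       // Int is a primitive value class: erasure leaves it alone
  def b(x: Meters): Meters = x // erased to (Int)Int, so both signatures must be tracked
}
```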
In practice, for an empty class, this reduces the size of the output of
`DefaultShowAPI` from 65 lines to 25 lines.
Before:
https://gist.github.com/smarter/cf1d6fe58efda88d6ee6#file-old-api
After:
https://gist.github.com/smarter/cf1d6fe58efda88d6ee6#file-new-api
Before this commit, we did not do the invalidation for methods with
multiple parameter lists; the comment above `hasValueClassAsReturnType`
said:
Note: We only inspect the "outermost type" (i.e. no recursion) because
we don't need to inspect after erasure a function that would, for
instance, return a function that returns a subtype of AnyVal.
But this is wrong: a method with signature:
def foo(a: A)(b: B): C
is erased to:
def foo(a: A, b: B): C
and not, as the comment in the code suggests, to:
def foo(a: A): B => C
so we do need to inspect the final result type of methods, because it
can be a value class that will be erased to its underlying type.
This is a fixup of 0f616294c4.
That commit assumed that dealiasing is done for types referred to in the
self type. It was changed to not do that, but the test wasn't updated.
Unfortunately, that mistake slipped by during PR review because the unit tests
of compileInterface were not run (see #2358).
Further categorize debug output as api diff ([diff]) and
invalidation ([inv]). Since we log a ton of diagnostics,
make it a bit easier to find the relevant bits.
Since pickled annotated types and symbols only mention
static annotations, whereas compilation from source
sees all annotations, we must explicitly filter annotations
in the API representation using the same criteria as the pickler,
so that we generate the same API when compiling from source
as when we're loading classfiles.
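A hedged sketch of that filtering, expressed against scalac's `AnnotationInfo` API (the helper name is illustrative):
```
import scala.tools.nsc.Global

// Keep only the annotations the pickler would keep: those whose annotation
// class is a subclass of scala.annotation.StaticAnnotation.
def staticAnnotations(global: Global)(annotations: List[global.AnnotationInfo]): List[global.AnnotationInfo] =
  annotations.filter { a =>
    a.atp.typeSymbol.isNonBottomSubClass(global.definitions.StaticAnnotationClass)
  }
```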
Also a bit more complete: handle SelectFromTypeTree,
consider the self type an inheritance dependency,
and flatten any refinement types in inherited types,
to get to the symbols of their parents, instead of the
useless symbol of the refinement class.
Include inheritance dependencies in regular ones
Also, update the test to reflect that the self type is now seen as an inheritance dependency.
self types are local, so don't treat them like inherited types
note that inheritanceSymbols dealiases, whereas allSymbols is constructed differently
fix NPE in source-dependencies/macro-annotation
Specialize two implementations for each value of the `inherit` boolean argument.
Also use a more direct way of distinguishing declared and inherited members.
backwards compat for source-dependencies/inherited-dependencies
For refinement types, the Structure was already restricted
to declarations (and not inherited members), but all base types
were still included for a refinement's parents, which would
create unwieldy, and even erroneous (cyclic) types by expanding
all constituents of an intersection type to add all base types.
Since the logic already disregarded inherited members, it seems
logical to only include direct parents, and not all ancestor types.
```
class Dep {
  def bla(c: Boolean) = if (c) new Value else "bla"
}
class Value extends java.lang.Comparable[Value] { def compareTo(that: Value): Int = 1 }
```
Motivated by wanting to make it more robust & configurable.
Original motivation was to diagnose a cyclic type representation,
likely due to an f-bounded existential type, as illustrated by the following:
```
class Dep {
  // The API representation for `bla`'s result type contains a cycle
  // (an existential's type variable's bound is the existential type itself).
  // This results in a stack overflow while showing the API diff.
  // Note that the actual result type in the compiler is not cyclic
  // (the f-bounded existential for Comparable is truncated).
  def bla(c: Boolean) = if (c) new Value else "bla"
}
class Value extends java.lang.Comparable[Value] { def compareTo(that: Value): Int = 1 }
```
Limit nesting (`-Dsbt.inc.apidiff.depth=N`, where N defaults to `2`),
and the number of declarations shown for a class/structural type
(via `sbt.inc.apidiff.decls`, which defaults to `0` -- no limit).
Limiting nesting is crucial in keeping the size of api diffs of large programs
within a reasonable amount of RAM...
For example, compiling the Scala library, the API diff with nesting at `4`
exhausts 4G of RAM...
The only aspect of the self variable that's relevant for
incremental compilation is its explicitly declared type,
and only when it's different from the type of the class that declares it.
Technically, any self type that's a super type of the class could be ignored,
as it cannot affect external use (instantiation/subclassing) of the class.
Call `initialize` in case the symbol's `info` hadn't been completed
during normal compilation.
Also, normalize to the class symbol immediately.
Add a TODO regarding only looking at class symbols,
and thus ignoring the term symbol for objects,
as the corresponding class symbol has all the relevant info.
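Sketched against scalac's Symbols API (a hypothetical helper, not the exact code):
```
import scala.tools.nsc.Global

// Force the symbol's info in case its completer never ran during normal
// compilation, then normalize objects to their module class, which carries
// all the relevant information.
def normalizedClassSymbol(global: Global)(sym: global.Symbol): global.Symbol = {
  sym.initialize
  if (sym.isModule) sym.moduleClass else sym
}
```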
The NameHashing classes assumed that the extracted API data structures have
private members filtered out already. That assumption was wrong and
resulted in bug #2324.
We fix the bug by simply reusing the same logic as used by the SameAPI class.
Fixes #2324.
Mention that private members are being extracted and included in the api
structures but ignored in many other parts of the incremental compiler. I
made the mistake of assuming that private members are ignored at api
extraction time. This manifested itself as bug #2324.
Add a pending test that shows a bug in the calculation of name hashes.
By design, only non-private members should contribute to name hashes. The
test shows that name hashes are calculated for strictly private members
too.
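An illustration of the intended behaviour (hypothetical class):
```
class Foo {
  def bar: Int = secret          // non-private: should contribute a name hash
  private def secret: Int = 42   // private: by design should not, but currently does
}
```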