A source file's dependency on itself doesn't bring any value, so there's
no reason to record it. We avoided recording that kind of dependency by
performing a check in the `AnalysisCallback` implementation. However, if
we have another implementation, like the `TestCallback` used for testing,
we do not benefit from that check.
Therefore, the check has been moved to the dependency phase, where
dependencies are collected.
Add an `extractDependenciesFromSrcs` method to the
ScalaCompilerForUnitTesting class, which allows us to unit test the
dependency extraction logic. See the comment attached to the method for
the details of how it should be used.
Refactor ScalaCompilerForUnitTesting by introducing a new method,
`extractApiFromSrc`, which describes the intent better than
`compileSrc`. The `compileSrc` method becomes a private utility method.
Also, `compileSrc` changed its signature so it can take multiple
source code snippets as input. This functionality will be used in
future commits.
Previously, the incremental compiler extracted source code dependencies
by inspecting the `CompilationUnit.depends` set. This set is constructed
by the Scala compiler and contains all symbols that a given compilation
unit refers to, or even just saw (in the case of implicit search).
There are a few problems with this approach:
  * The contract for `CompilationUnit.depends` is not clearly defined
    in the Scala compiler and there are no tests around it. Read: it's
    not an official, maintained API.
  * Improvements to the incremental compiler require more context
    information about a given dependency. For example, we want to
    distinguish between depending on a class by merely selecting
    members from it and depending on it by inheriting from it. Another
    example is that we might want to know the dependencies of a given
    class instead of the whole compilation unit, to make the
    invalidation logic more precise.
That led to the idea of pushing the dependency extraction logic to the
incremental compiler side, so it can evolve independently from Scala
compiler releases and can be refined as needed. We extract the
dependencies of a compilation unit by walking its type-checked tree
and gathering the symbols attached to the nodes.
Specifically, the tree walk is implemented as a separate phase that
runs after pickler and extracts symbols from the following tree nodes:
  * `Import`, so we can track dependencies on unused imports
  * `Select`, which is used for selecting all terms
  * `Ident`, used for referring to local terms, package-local terms
    and top-level packages
  * `TypeTree`, which is used for referring to all types
Note that we do not extract just the single symbol assigned to a
`TypeTree` node, because the node might represent a complex type that
mentions several symbols. We collect all those symbols by traversing
the type with a CollectTypeTraverser. The implementation of the
traverser is inspired by `CollectTypeCollector` from Scala 2.10. The
`source-dependencies/typeref-only` test covers a scenario where the
dependency is introduced through a `TypeRef` only.
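For illustration, a sketch of such a traverser, assuming the code lives
inside the compiler cake (`import global._`); the names follow the
description above but are not guaranteed to match the implementation:

    private final class CollectTypeTraverser[T](pf: PartialFunction[Type, T]) extends TypeTraverser {
      var collected: List[T] = Nil
      def traverse(tpe: Type): Unit = {
        if (pf.isDefinedAt(tpe))
          collected = pf(tpe) :: collected
        mapOver(tpe)
      }
    }

    // Usage on a TypeTree node: gather every symbol the type mentions.
    val traverser = new CollectTypeTraverser({
      case tpe if tpe.typeSymbol != NoSymbol => tpe.typeSymbol
    })
    traverser.traverse(typeTree.tpe)
    val symbolsInType = traverser.collected.distinct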
Introduce an alternative source dependency tracking mechanism that is
needed by the upcoming name hashing algorithm. The new mechanism is
implemented by introducing two new source dependency relations called
`memberRef` and `inheritance`.
Those relations are very similar to the existing `direct` and
`publicInherited` relations but differ in some subtle ways. Those
differences are highlighted in the description below.
Dependencies between source files are tracked in two distinct
categories:
  * dependencies introduced by inheriting from a class/trait
    defined in another source file
  * dependencies introduced by referring to (selecting) a member
    defined in another source file (which covers all other
    kinds of dependencies)
Due to implementation details of the invalidation algorithm, sbt needed
to track inheritance dependencies of public classes only. Thus, we had
a relation called `publicInherited`. The name hashing algorithm, which
improves the invalidation logic, will need more precise information
about dependencies introduced by inheritance, including dependencies of
non-public classes. That's one difference between the `inheritance` and
`publicInherited` relations.
One surprising (to me) thing about `publicInherited` is that it includes
all base classes of a given class and not just its parents. In that
sense `publicInherited` is transitive. This is a bit irregular, because
everything else in Relations excludes transitive dependencies.
Since we are introducing new relations, we have an excellent chance to
make things more regular. Therefore, the `inheritance` relation is
non-transitive and includes only the extracted parent classes.
Access to the `direct`, `publicInherited`, `memberRef` and `inheritance`
relations depends on the value of the `memberRefAndInheritanceDeps`
flag. Check the documentation of that flag for details.
The two alternatives for source dependency tracking are implemented by
introducing two subclasses that implement the Relations trait and one
abstract class that contains the common logic shared between those two
subclasses. The two new subclasses are needed for the time being, while
we slowly migrate to the name hashing algorithm, which requires subtle
changes to dependency tracking as explained above. For some time we
plan to keep both algorithms side by side, with a runtime switch that
picks one, so the logic for both the old and the new dependency
tracking has to be available. That's exactly what the two subclasses of
MRelationsCommon implement. Once name hashing is proven to be stable
and reliable, we'll phase out the old algorithm and the old dependency
tracking logic.
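The resulting shape, roughly (only MRelationsCommon is named in this
commit; the subclass names and the elided Relations trait are
illustrative):

    trait Relations // elided: the full dependency-relations interface
    abstract class MRelationsCommon extends Relations // logic shared by both schemes
    final class MRelationsClassic extends MRelationsCommon     // direct + publicInherited
    final class MRelationsNameHashing extends MRelationsCommon // memberRef + inheritance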
TestCaseGenerators uses a global set to ensure that certain generated
values are unique. This is not the best design, because the more
properties you check, the harder it becomes to generate new sample
inputs due to the already accumulated values. This results in:
[info] + Analysis.Simple Merge and Split: OK, proved property.
[info] ! Analysis.Complex Merge and Split: Gave up after only 8 passed tests. 93 tests were discarded.
I don't have the ambition to reduce the scope of this global set, but
at least I wanted to make the generators work a bit harder on
generating samples.
Instead of using the `suchThat` method for filtering out non-unique
samples, we use `retryUntil`, which never gives up (and therefore might
not terminate). We had to upgrade to the latest (1.11.1) version of
ScalaCheck in order to have access to the `retryUntil` method.
Also, I overrode `identifier` to delegate to the original
`Gen.identifier` but with the minimal size set to 3. This means the
generated identifiers will be of size 3 or larger, which is needed in
order to avoid collisions.
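A sketch of both tweaks (the names of the set and generators are
illustrative):

    import org.scalacheck.Gen
    import scala.collection.mutable

    val usedIds = mutable.HashSet.empty[String]

    // Unlike `suchThat`, `retryUntil` keeps retrying until the predicate
    // holds (and therefore might not terminate). `add` returns true only
    // for values not seen before.
    def uniqueIdentifier: Gen[String] =
      identifier.retryUntil(usedIds.add)

    // Delegate to Gen.identifier but raise the generator's size
    // parameter to at least 3, making collisions less likely.
    def identifier: Gen[String] =
      Gen.sized(size => Gen.resize(math.max(size, 3), Gen.identifier))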
Reads/writes are a little faster with the text format,
and it's far more useful. E.g., it allows external manipulation
and inspection of the analysis.
We don't gzip the output. It does greatly shrink the files; however,
it makes reads and writes 1.5x-2x slower, and we're optimizing for
speed over compactness.
Not marking them as private was an omission in the original commit that
introduced them. They are purely an implementation detail and should
be hidden. We hide them now.
Introduce a new incremental compiler option that controls the
incremental compiler's treatment of macro definitions and their clients.
The current strategy is that whenever a source file containing a macro
definition is touched, it causes recompilation of all direct
dependencies of that file.
That strategy has proven to be too conservative for some projects, like
the Scala compiler or specs2, leading to too many source files being
recompiled. We make this behavior optional by introducing a new option,
`recompileOnMacroDef`, in the `IncOptions` class. The default value is
set to `true`, which preserves the previous behavior.
Add methods that allow one to set a new value for a single field of the
IncOptions class. These methods are meant to be an alternative to the
copy method, which is hard to keep binary compatible when new fields
are added to the class.
Each copying method relates to one field of the class, so when new
fields are added, the existing methods (and their signatures) are
unaffected.
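In miniature, the pattern looks like this (two fields only; the real
IncOptions has more):

    final class IncOptions(val transitiveStep: Int, val recompileAllFraction: Double) {
      // One copier per field: adding a field later adds one more method
      // and leaves all existing signatures untouched, unlike `copy`.
      def withTransitiveStep(value: Int): IncOptions =
        new IncOptions(value, recompileAllFraction)
      def withRecompileAllFraction(value: Double): IncOptions =
        new IncOptions(transitiveStep, value)
    }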
Expand the case class `IncOptions` in a binary compatible way so we
have better control over methods like `unapply` when new fields are
added.
Great precaution has been taken to ensure that this commit doesn't break
binary compatibility. I took a dump of the javap output before and after
this change, for both the class and its companion object.
The diff is presented below:
diff -u ~/inc-options-before ~/inc-options-after
--- /Users/grek/inc-options-before 2013-11-03 14:48:45.000000000 +0100
+++ /Users/grek/inc-options-after 2013-11-03 15:53:10.000000000 +0100
@@ -9,7 +9,11 @@
public static java.lang.String transitiveStepKey();
public static sbt.inc.IncOptions setTransactional(sbt.inc.IncOptions, java.io.File);
public static sbt.inc.IncOptions defaultTransactional(java.io.File);
+ public static scala.Option unapply(sbt.inc.IncOptions);
+ public static sbt.inc.IncOptions apply(int, double, boolean, boolean, int, scala.Option, scala.Function0);
public static sbt.inc.IncOptions Default();
+ public static scala.Function1 tupled();
+ public static scala.Function1 curried();
public int transitiveStep();
public double recompileAllFraction();
public boolean relationsDebug();
diff -u inc-options-module-before inc-options-module-after
--- inc-options-module-before 2013-11-03 14:48:55.000000000 +0100
+++ inc-options-module-after 2013-11-12 21:00:41.000000000 +0100
@@ -3,6 +3,9 @@
public static final sbt.inc.IncOptions$ MODULE$;
public static {};
public sbt.inc.IncOptions Default();
+ public final java.lang.String toString();
+ public sbt.inc.IncOptions apply(int, double, boolean, boolean, int, scala.Option, scala.Function0);
+ public scala.Option unapply(sbt.inc.IncOptions);
public sbt.inc.IncOptions defaultTransactional(java.io.File);
public sbt.inc.IncOptions setTransactional(sbt.inc.IncOptions, java.io.File);
public java.lang.String transitiveStepKey();
@@ -13,7 +16,5 @@
public java.lang.String apiDiffContextSize();
public sbt.inc.IncOptions fromStringMap(java.util.Map);
public java.util.Map toStringMap(sbt.inc.IncOptions);
- public sbt.inc.IncOptions apply(int, double, boolean, boolean, int, scala.Option, scala.Function0);
- public scala.Option unapply(sbt.inc.IncOptions);
}
The first diff shows that there are just more static forwarders defined
for the top-level companion object, which is a binary compatible change.
The second diff shows just a few minor differences in the order in
which `unapply`, `apply` and the bridge method for `apply` are defined.
Also, there's a new `toString` declaration. All those changes are
binary compatible.
All methods that are generated for a case class are marked as deprecated
and will be removed in the future.
The main motivation behind this commit is to reify the information about
API changes that the incremental compiler considers. We introduce a new
sealed class, `APIChange`, that has (at the moment) two subtypes:
  * APIChangeDueToMacroDefinition - as the name explains, this represents
    the case where the incremental compiler considers an API to be
    changed just because the given source file contains a macro
    definition
  * SourceAPIChange - this represents the case of a regular API change;
    at the moment it's just a simple wrapper around a value representing
    the source file, but in the future it will be expanded to contain
    more detailed information about API changes (e.g. a collection of
    changed name hashes)
APIChanges becomes just a collection of APIChange instances.
In particular, I removed the `names` field, which seems to be dead code
in the incremental compiler. The `NameChanges` class and the methods in
`SameAPI` that refer to it have been deprecated.
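A sketch of the reified hierarchy (the type parameter and members are
illustrative, based on the description above):

    sealed abstract class APIChange[T](val modified: T)
    final case class APIChangeDueToMacroDefinition[T](source: T) extends APIChange[T](source)
    final case class SourceAPIChange[T](source: T) extends APIChange[T](source)

    final class APIChanges[T](val apiChanges: Iterable[APIChange[T]]) {
      def allModified: Iterable[T] = apiChanges.map(_.modified)
    }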
Incremental.scala has been adapted to the changed signature of the
APIChanges class. The `sameSource` method returns a representation of
the APIChange (if there is one) instead of a simple boolean. One notable
change is that the information about APIChanges is pushed deeper into
the invalidation logic. This will allow us to treat the
APIChangeDueToMacroDefinition case properly once the name hashing
scheme arrives.
This commit shouldn't change any behavior and is purely a refactoring.
The following events are logged:
  * invalidation of a source file due to a macro definition
  * inclusion of a dependency invalidated by inheritance; we log both
    nodes of the dependency edge (the dependent and the dependency)
The second bullet helps in understanding what's going on in the case of
complex inheritance hierarchies, like the one in the Scala compiler.
Issue #958 describes a scenario where partially successful results are
produced in the form of class files written to disk. However, if
compilation fails down the road, we do not record any new compilation
results (products) in the Analysis object. This causes the Analysis
object and the disk contents to get out of sync.
One way to solve this problem is to use a transactional ClassfileManager
that commits changes to class files on disk only when the entire
incremental compilation session is successful. Otherwise, new class
files are rolled back to their previous state.
The other way to solve this problem is to record the time stamps of
class files in the Analysis object. This way, the incremental compiler
can detect that the class files and the Analysis object got out of
sync, and recover by recompiling the corresponding sources.
This commit uses the latter solution, which enables the simpler
(non-transactional) ClassfileManager to handle the scenario from #958.
Fixes #958
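The detection step, sketched (the two query functions are illustrative
stand-ins for what Analysis records):

    import java.io.File

    // Sources whose recorded products no longer match what's on disk
    // must be recompiled; this re-syncs Analysis with the class files.
    def invalidatedByStaleProducts(
        products: Set[File],
        recordedTimestamp: File => Long, // stamp stored in Analysis
        sourcesOf: File => Set[File]     // sources that produced a class file
    ): Set[File] =
      products
        .filter(p => p.lastModified != recordedTimestamp(p))
        .flatMap(sourcesOf)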
Fix the problem with unstable names synthesized for existential
types (declared with the underscore syntax) by renaming the type
variables to a scheme that is guaranteed to be stable no matter where
a given existential type appears.
The scheme we use is De Bruijn-like indices that capture both the
position of a type variable declaration within a single existential
type and the nesting level of nested existential types. This way we
properly support nested existential types by avoiding name clashes.
In general, we can perform renamings like this because type variables
declared in existential types are scoped to those types, so the
renaming operation is local.
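To illustrate the instability being fixed (the types below are
examples; the renamed forms are whatever the scheme assigns, not real
output):

    trait Example {
      // `Option[_]` is sugar for `Option[x] forSome { type x }`; the
      // compiler synthesizes a fresh name for `x`, and that name is not
      // stable across typechecking from source vs. unpickling.
      def a: Option[_]
      def b: Map[_, _]         // two variables declared in one existential
      def c: Option[Option[_]] // an existential nested in a type argument
    }

Renaming each variable to a name derived from its declaration position
and nesting level makes the extracted API identical in both cases.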
There's a specs2 unit test covering the instability of existential
types. The test is included in the compiler-interface project, and the
build definition has been modified to enable building and executing
tests in the compiler-interface project. Some dependencies have been
modified:
  * the compiler-interface project depends on the api project for
    testing (the test makes use of SameAPI)
  * a dependency on junit has been introduced because it's needed
    for the `@RunWith` annotation, which declares that the specs2 unit
    test should be run with JUnitRunner
SameAPI has been modified to expose a method that allows us to
compare two definitions.
This commit also adds a `ScalaCompilerForUnitTesting` class that allows
one to compile a piece of Scala code and inspect the information
recorded through the callbacks defined in the `AnalysisCallback`
interface. That class uses the existing ConsoleLogger for logging. I
considered doing the same for ConsoleReporter. There's a
LoggingReporter defined which would fit our use case, but it's defined
in the compile subproject, which compiler-interface doesn't depend on,
so we roll our own.
ScalaCompilerForUnitTesting uses TestCallback from the interface
subproject for recording the information passed to the callbacks. In
order to be able to access TestCallback from the compiler-interface
subproject, I had to tweak the dependencies between interface and
compiler-interface so that test classes from the former are visible in
the latter. I also modified TestCallback itself to accumulate APIs in
a HashMap instead of a buffer of tuples, for easier lookup.
An integration test has been added which tests the scenario
mentioned in #823.
This commit fixes #823.
As pointed out by @harrah in #705, we might want to merge the API
and dependency phases, so we should mention that in the comment
explaining the phase ordering constraints instead.
I'd still like to keep the old comment in the history (as a separate
commit), because it took me a while to figure out the cryptic issues
related to the continuations plugin, so it's valuable to keep the
explanation around in case somebody else tries to mess around with the
dependencies defined by sbt in the future.
This is the first step towards using the new mechanism for dependency
extraction that is based on tree walking.
We need dependency extraction in a separate phase because the
tree-walking code should run before refchecks, whereas the analyzer
phase runs at the very end of the phase pipeline.
This change also includes a work-around for a phase ordering issue with
the continuations plugin. See the included comment and SI-7217 for
details.
As pointed out by @harrah in #705, both beginSource and endSource are
not used internally by sbt for anything meaningful.
We've discussed the option of deprecating those methods, but since they
do not do anything meaningful, Mark prefers a compile-time error in
case somebody implements or calls those methods. I agree, hence the
removal.
Also add equals/hashCode implementations for MRelations.
Also add some comments to explain that ++ and -- are naively implemented.
Also fix some tabs-vs-spaces indentation nits.
equals/hashCode are useful for debugging/verifying/testing,
and the groupBy implementation is naive. It will be replaced
by a groupBy implementation in Analysis that will handle
external/internal dependency transitions correctly.
The test case compiles a project without and then with this
setting, and checks that a warning is and isn't emitted,
respectively.
It's a multi-project build; this bug didn't seem to turn
up in a single-project build.
This way we have a somewhat clearer separation between the compiler
phase logic and the core logic responsible for processing each
compilation unit and extracting an API for it.
As an added benefit, we have a little less mutable state
(e.g. sourceFile doesn't need to be a var anymore).
The API extraction logic contains some internal caches that are
required for correctness. It wasn't very clear whether they have to
be maintained for an entire phase run or just while processing a
single compilation unit. It looks like the latter, and the refactored
code both documents that contract and implements it in the API phase.
Move the collection of compatibility hacks (a class called `Compat`)
into a separate file. This aids understanding of the code: both
Analyzer and API make use of that class, and keeping it in the
`Analyzer.scala` file suggested that it's used only by Analyzer.
The incremental compiler didn't have any explicit logic to handle
cancelled compilation, so it would go into an inconsistent state.
Specifically, it would treat a cancelled compilation as a compilation
that finished normally and try to produce a new Analysis object out of
the partial information collected in AnalysisCallback. The most obvious
outcome would be that the new Analysis would contain the latest hashes
for the source files. The next time the incremental compiler was asked
to recompile the files that it hadn't recompiled due to the cancelled
compilation, it would think they were already successfully compiled
and would do nothing.
We fix that problem by following the same logic that handles
compilation errors: clean up the partial results (produced class files)
and make sure that no Analysis is created out of the broken state.
We do that by introducing a new exception, `CompileCancelled`, and
throwing it at the same spot as the exception signaling compilation
errors. We also modify `IncrementalCompile` to catch that exception and
return gracefully, as if no compilation had been invoked.
NOTE: In case compilation errors were reported _before_ compilation
cancellation was requested, we'll still report them using the old
mechanism, so partial errors are not lost when compilation is
cancelled.
Add an `apiDiffContextSize` option to `IncOptions`, which allows one
to control the size (in lines) of the context used when printing
diffs of the textual API representation.
The default value for `apiDiffContextSize` is 5, which seems to be
enough for most situations. This was verified by the many debugging
sessions I performed while using the API diffing functionality.
Implement displaying API changes by using the textual representation
of an API (ShowAPI) and a good, old textual diff algorithm. We are
using the java-diff-utils library, which is distributed under the
Apache 2.0 license.
Notice that we have only a soft dependency on java-diff-utils: we look
up the java-diff-utils classes through reflection and fail gracefully
if they are not found on the classpath. This way sbt does not gain any
new dependency. If a user needs to debug API diffs, it's a matter of
starting sbt with the
`-Dsbt.extraClasspath=path/to/diffutils.jar` option passed
to the sbt launcher.
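A sketch of the reflective lookup (the `difflib.DiffUtils.diff` entry
point is java-diff-utils' public API; the wrapper itself is
illustrative):

    import scala.collection.JavaConverters._

    // Returns None when java-diff-utils is absent from the classpath,
    // so API diffing degrades gracefully instead of failing.
    def diffPatch(original: List[String], revised: List[String]): Option[AnyRef] =
      try {
        val diffUtils = Class.forName("difflib.DiffUtils")
        val diff = diffUtils.getMethod("diff", classOf[java.util.List[_]], classOf[java.util.List[_]])
        Some(diff.invoke(null, original.asJava, revised.asJava))
      } catch {
        case _: ClassNotFoundException => None
      }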
1. All parents of public/exported classes/modules/packages are tracked as
'publicInherited' dependencies. These are dealiased and normalized so
that the dependency is on the actual underlying template and not the
source enclosing the alias.
2. All CompilationUnit.depends dependencies are direct dependencies. These
include inherited dependencies.
3. When invalidating changed internal sources,
a. Invalidate all inherited dependencies, transitively and include the
originally modified sources,
b. Invalidate all direct dependencies of these sources,
c. Exclude any sources that were compiled in the previous step unless they
depend on a newly invalidated source.
4. Invalidate changed external sources in the same way as #3 but remove the
external sources from the final set.
Only public inheritance dependencies need to be considered, because a
template that is not accessible outside its source file and that
inherits from a template in another file can be handled as a normal,
direct dependency: since the template isn't public, changes to its API
will not propagate outside of its source file.
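A sketch of the step 3 progression (the dependency lookups are passed
in as functions; step 3c's exclusion of already-compiled sources is
elided):

    import java.io.File

    def transitiveClosure(seed: Set[File], deps: File => Set[File]): Set[File] = {
      @annotation.tailrec
      def go(frontier: Set[File], acc: Set[File]): Set[File] = {
        val next = frontier.flatMap(deps) -- acc
        if (next.isEmpty) acc else go(next, acc ++ next)
      }
      go(seed, seed)
    }

    def invalidate(modified: Set[File],
                   inheritedDependents: File => Set[File],
                   directDependents: File => Set[File]): Set[File] = {
      // 3a: inherited dependents, transitively, plus the modified sources.
      val byInheritance = transitiveClosure(modified, inheritedDependents)
      // 3b: one step of direct dependents of everything invalidated so far.
      byInheritance ++ byInheritance.flatMap(directDependents)
    }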
Several existing tests cover the correctness, especially:
1. transitive-a covers direct, transitive dependencies with inferred
   return types
2. transitive-b covers inherited, transitive dependencies with inferred
   return types
There are two new tests: one that checks that public inherited
dependencies are tracked, and one that verifies the basic invalidation
progression.
More tests are needed to verify the improvements that this algorithm
brings:
1. Inheritance-related dependencies are processed in one step, avoiding
   the several steps that were otherwise unavoidable.
2. Only immediate direct dependencies are ever processed, which should
   in many typical cases avoid large invalidation sets.
In summary, this commit:
  * drops type normalization in the api phase but keeps dealiasing
  * fixes #736 and marks the corresponding test as passing
I discussed type normalization with @adriaanm and, according to him,
sbt shouldn't call that method. The purpose of this method is to
convert a type to a form that the subtyping algorithm expects. sbt
doesn't need to call it, and it's fairly expensive in some cases.
Dropping type normalization also fixes #726 by not running into the
stale cache problem in the Scala compiler described in SI-7361.
The infrastructure for resident compilation still exists,
but the actual scalac-side code that was backported is removed.
Future work on using a resident scalac will use that invalidation
code directly from scalac anyway.
Ideally, we wouldn't need to construct the classpath ourselves and
could instead reuse the classpath construction code from a compiler
(scalac or javac). However, we need to ensure that we only call the
compiler when needed. Because we need to construct the classpath even
when compilation might not happen, we have to duplicate the classpath
construction.
This reverts commit 9dd5f076ea, which was itself a revert, so we are
back to the state before those two reverts.
The problem that caused the revert in 9dd5f076ea has been fixed
in 88061dd262.
In `ClassToAPI`, both the `Private` and `Protected` vals had a forward
reference to `Unqualified`, so they would get `null` as a result.
Fixed by rearranging the order in which the vals are declared.
This fixes the problem described in 9dd5f076ea.
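The bug class in miniature (illustrative, not the actual ClassToAPI
code):

    object Qualifiers {
      // vals initialize in declaration order, so `Private` reads
      // `Unqualified` before it is assigned and observes null.
      val Private: String     = "private(" + Unqualified + ")" // "private(null)"
      val Unqualified: String = "unqualified"
    }

Declaring `Unqualified` before the vals that use it fixes the
initialization order.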
Showing inherited members of a structure was disabled so we would not
run into cycles. To the best of my knowledge, we can run into cycles
only if an inherited member is a `ClassLike`. We filter those out and
let anything else be shown.
As described in sbt/sbt#676, the arguments to the `mergeMap` and
`merge` methods were swapped, causing wrong structures to be created
for Java compiled class files.
This commit fixes sbt/sbt#676 and makes the pending test pass.
That's the foundation for API dumping and for tests based on the
contents of the API representation.
The specific changes introduced by this commit:
  * The `AnalysisCallback` class takes `IncOptions` as an argument so it
    can determine whether the API should be minimized in the `api`
    callback.
  * Introduce an `Incremental.apiDebug` method that determines whether
    API debugging is enabled. There are two ways to enable API
    debugging: through a system property and through incremental
    compiler options. The system property method will soon be
    deprecated. We introduce it to make it easier to enable API
    debugging until tools (like zinc and the IDE) catch up with making
    the incremental compiler configurable.
NOTE: The `apiDebug` method has been introduced in Incremental for two
reasons:
  1. It's analogous to the `incDebug` method that's already there.
  2. In another branch I need `apiDebug` to be defined in Incremental.
Once we deprecate and remove enabling debugging options through system
properties, the code will be cleaned up.
Introduce a way to configure the incremental compiler itself, instead
of the underlying Java/Scala compiler.
The specific changes in this commit:
  * Add a method to `xsbti.compile.Setup` that returns incremental
    compiler options as a `java.util.Map<String, String>`. We
    considered a static interface instead of a `Map`, but based on
    mailing list feedback we decided that it's not the best way to go,
    because a static interface is hard to evolve when new options are
    added.
  * Since passing a `java.util.Map<String, String>` around is not very
    convenient, we convert it immediately to `sbt.inc.IncOptions`.
  * Add an options argument to the various methods/classes that
    implement incremental compilation, so that in the end the options
    reach the `sbt.inc.IncOptions` object.
  * Add an `incOptions` task that allows users to configure incremental
    compiler options in their build files. The default implementation
    of that task returns just `IncOptions.Default`.
  * Both the system property `xsbt.inc.debug` and
    `IncOptions.relationsDebug` now trigger debugging of relations. In
    the near future, we should deprecate the use of `xsbt.inc.debug`.
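Hypothetical usage in a build definition (the copier method follows the
per-field convention introduced for IncOptions earlier; the exact
wiring may differ):

    // build.sbt: enable relations debugging via the new task.
    incOptions := IncOptions.Default.withRelationsDebug(true)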
Some methods take a lot of arguments, and I'm about to add one more,
which would make them too long for easy reading.
This change affects code formatting only. That's done on purpose, to
make the other changes easier to review.
Let's consider compile/inc/src/main/scala/sbt/CompileSetup.scala.
There are multiple Output types, and according to Eclipse, importing
xsbti.compile.Output within the package sbt does not work because the
import is shadowed by sbt.Output.
However, compilation proceeds just fine within sbt. Reproducing the
example, however, gives the same warning if the files are in the same
project. The problem here is probably that the shadowing Output is
declared in the same package but in another project, and that seems
to give different results in Eclipse and sbt; relying on that looks
fragile.
Reading the spec is inconclusive, since it doesn't match scalac's
behavior; see
https://groups.google.com/d/topic/scala-internals/-Rquc2HBYLk/discussion .
ForkTests has the same behavior as CompileSetup.
The resolution of https://issues.scala-lang.org/browse/SI-4695 seems to
be to deprecate inheriting from a class in a subpackage. This commit is
an alternative solution, possibly to be reverted or restricted if the
resolution of SI-4695 changes or if this proves to be too conservative
in practice.
Review by @gkossakowski. With separate inheritance/function-call
dependency tracking, this should probably only pull in package objects
with inheritance dependencies on invalidated files.
While trying to determine binary dependencies, sbt looks up the class
files corresponding to symbols. It tried to do that for packages too,
and most of the time it would fail, because packages don't have
corresponding class files generated. However, on a case-insensitive
file system, combined with a special nesting structure, you could get
a spurious dependency. See the added test case for an example of such
a structure.
The remedy is to never even try to locate class files corresponding to
packages.
Fixes #620.
We store a `Seq[xsbt.api.Compilation]` in `Analysis`. A `Compilation`
is appended to the sequence for every iteration of the incremental
compiler.
Note that `Compilation`s are never removed from the sequence unless
you start from scratch with an empty `Analysis`. You can do that by
using sbt's `clean` command.
The main use case for the `compilations` field is to determine how
many iterations it took to compile the given code. The `Compilation`
objects are also stored in `Source` objects, so there's an indirect way
to recover information about the files being recompiled in every
iteration.
Since `Analysis` is persisted, you can use this mechanism to track
entire sessions spanning multiple `compile` commands.
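An illustrative query, assuming accessors matching the description
above (names may differ):

    import sbt.inc.Analysis

    // How many incremental-compiler iterations this Analysis has seen.
    def iterationCount(analysis: Analysis): Int =
      analysis.compilations.allCompilations.size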
Extract the API after picklers, since that way we see the same symbol
information/structure irrespective of whether we were typechecking
from source or unpickling previously compiled classes.
Previously, the apiExtractor phase ran after typer.
Since this fix is hard to verify with a test (it's based on the
conceptual argument above and on anecdotal evidence from incremental
compilation of a big codebase), we're providing a way to restore the
old behaviour: run sbt with -Dsbt.api.phase=typer.
This fixes #609.
goal:
a representation of a type reference to a refinement class that's stable
across compilation runs (and thus insensitive to typing from source or
unpickling from bytecode)
problem:
the current representation, which corresponds to the owner chain of the
refinement:
1. is affected by pickling, so typing from source or using unpickled
   symbols gives different results (because the unpickler "localizes"
   owners -- this could be fixed in the compiler in the long term)
2. can't distinguish multiple refinements in the same owner (this is
   a limitation of sbt's internal representation and cannot be fixed in
   the compiler)
solution:
expand the reference to the corresponding refinement type: doing that
recursively may not terminate, but we can deal with that by
approximating recursive references (all we care about is being sound for
recompilation: recompile iff a dependency changes, and this will happen
as long as we have one unrolling of the reference to the refinement)
* must start with val, lazy val, or def (no modifiers currently)
* visible only within the same .sbt file
* multiple definitions allowed without being separated by blank lines
* no blank lines allowed within a definition
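For example, a .sbt file could contain (illustrative):

    val scalacheckVersion = "1.11.1"
    def testDeps = Seq("org.scalacheck" %% "scalacheck" % scalacheckVersion % "test")
    lazy val commonOpts = Seq("-deprecation", "-unchecked")

    libraryDependencies ++= testDeps

    scalacOptions ++= commonOpts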
Problems:
1. Without a message, users don't find 'last'.
2. Showing a message for every error clutters the output.
This tries to address these issues by:
1. Only showing the message when other feedback has not been provided
   and 'last' would usually be helpful. This will require ongoing
   tweaking. For now, all commands except 'compile' display the
   message. 'update' could omit the message as well, but perhaps
   knowing about 'last' might be useful there.
2. Including the exact command to show the output:
     last test:compile
   and not just
     last <task>
3. Highlighting the command in blue, for visibility, as an experiment.
Review by @ijuma and @retronym, please.