Commit Graph

2468 Commits

Author SHA1 Message Date
Ethan Atkins 7c2a1c858b 1.3.0
-----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJdbz/aAAoJEDeJDimNmiv6Am8IAKv23f6BPIWZFeokzJLkUt8v
 DDLyzIwzE0hTFKInCNhGDCFtACFFgoD8/7t9D5gmLttQr4F9ke94DqWBEP3kbgan
 Qb4rR8uwglPUJmOhzBj2Qs3A8fAXdg3wm/6OlllQzBwCYNxFf3MhmJc3hF4vd+jO
 93JqwbY50entqha9z299+NpLPTKWtVC5R+1pAF+LwObjLOYqlxiGvAcl7jWx1qte
 VN+BabBYT4Hw43kJCutglHu8vttG68m+fqYGxjAmZXYBAbn0NPyE7GHmqkQ5baAz
 DUbc0vU2nY6tpUFNlNfu9PTPnRwHdSjSJTa9Ug7hw24z2oTg2tapNDXIpt6n6ZA=
 =onwH
 -----END PGP SIGNATURE-----

Merge tag 'v1.3.0' into 1.3.x-merge

1.3.0
2019-09-05 10:15:41 -07:00
Ethan Atkins 7c31e03d27 Improve supershell appender management
To avoid reliance on JVM global variables, we need to share the super
shell state with each of the console appenders that write to the console
out. We only set the progress state for the console appenders for the
screen. This prevents messages that are below the global logging level
from modifying the progress state without preventing them from being
written to other appenders.

The ability to set the ProgressState for each of the console appenders
is added in a companion util PR.

I verified that the test output of io/test was correctly written to the
streams after this change (there were no progress lines in the output).
2019-09-03 15:22:34 -07:00
eugene yokota 4f2ffe9b36
Merge pull request #5018 from eatkins/output-file-stamp-cache
Use managedFileStampCache for dependency classpath
2019-09-02 23:17:27 -04:00
eugene yokota 26293640c6
Merge pull request #5022 from eatkins/supershell-no-color
Allow supershell in no color mode
2019-09-02 23:15:52 -04:00
Ethan Atkins 19ead4144d
Merge pull request #5014 from eatkins/fail-on-exception
Display only valid pages in scripted completions
2019-09-02 11:41:21 -07:00
Ethan Atkins a02a58dcfa Allow supershell in no color mode
Disabling supershell when color mode is disabled is a sensible default
(especially for piped output). However, I think it should still be
possible to use supershell in no color mode.

This requires a util change that also enables supershell in no color
mode.
2019-09-02 11:26:24 -07:00
Ethan Atkins c525fa2551 Use managedFileStampCache for dependency classpath
It is redundant and slow to restamp all of the dependency classpath
files when they have likely already been stamped by a subproject.
For the classfiles of subprojects, we fill the managedFileStampCache
with the values returned by the zinc compile analysis product stamps.
As a result, they are probably already in the managed cache and should be
up to date so long as zinc is working correctly.

I noticed that various outputFileStamps tasks were showing up in the
task timing report when I ran Test / definedTests in the main sbt project.
That task became about 400ms faster after this change.
2019-09-01 19:14:12 -07:00
Ethan Atkins 30ede13a09 Fix task timings
I noticed that the reports generated when using sbt.task.timings=true
made very little sense. They were displaying timings for tests that
couldn't possibly have been run. I tracked this down to the TaskTimings
being stored in the progressReport setting, which meant they were reused
across multiple task runs. After this change, the reports made a lot
more sense.
2019-09-01 19:13:43 -07:00
Ethan Atkins 49bcef029d Display only valid pages in scripted completions
The tab completions for scripted have long been broken. They display a
number of nonsensical pages like '*0of9' or '*1of0'. Some of the
multiparser changes seem to have caused these invalid pages to be displayed.
2019-08-31 17:32:34 -07:00
eugene yokota ea778e9a5c
Merge pull request #4819 from dwijnand/cleanup-Load.loadTransitive
Cleanup Load.loadTransitive
2019-08-29 23:24:29 -04:00
xuwei-k dfe789d7c6 avoid deprecated /: and :\
use foldLeft and foldRight

https://github.com/scala/scala/blob/v2.13.0/src/library/scala/collection/IterableOnce.scala#L682-L686
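
A minimal illustration of the replacement (generic example values, not the actual sbt code):

```
val xs = List(1, 2, 3)

// Deprecated symbolic folds in Scala 2.13:
//   (0 /: xs)(_ + _)   // fold left
//   (xs :\ 0)(_ + _)   // fold right

// Preferred named methods:
val sumLeft  = xs.foldLeft(0)(_ + _)   // 6
val sumRight = xs.foldRight(0)(_ + _)  // 6
```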
2019-08-30 11:20:53 +09:00
Ethan Atkins ebf6d5aee6 Fix performance regression in test classloader
In 5eab9df0df, I updated the
outputFileStamps task to compute all of the stamps for a directory
recursively if an output file is a directory. Prior to that, it had only
computed the stamp for the directory itself. This caused a significant
performance regression in creating the test classloader because it was
computing the last modified time for all of the classfiles in the class path.
The test for 5000 source files in
https://github.com/eatkins/scala-build-watch-performance was running roughly
400ms slower due to this regression.
2019-08-29 11:52:29 -07:00
Eugene Yokota 75e609cba2 Deprecate HTTP resolvers (take 2)
Ref https://github.com/sbt/sbt/issues/4905

This is a companion PR to https://github.com/sbt/librarymanagement/pull/318.

This will print the following warnings:

```
sbt:hello> compile
[warn] insecure HTTP request is deprecated 'Artifact(jsoup, jar, jar, None, Vector(), Some(http://jsoup.org/packages/jsoup-1.9.1.jar), Map(), None, false)'; switch to HTTPS or opt-in using from(url(...), allowInsecureProtocol = true) on ModuleID or .withAllowInsecureProtocol(true) on Artifact
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/releases/'; switch to HTTPS or opt-in as ("Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/releases/'; switch to HTTPS or opt-in as ("Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/releases/'; switch to HTTPS or opt-in as ("Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'Patterns(ivyPatterns=Vector(), artifactPatterns=Vector(http://repo.typesafe.com/typesafe/releases/[organisation]/[module](_[scalaVersion])(_[sbtVersion])/[revision]/[artifact]-[revision](-[classifier]).[ext]), isMavenCompatible=true, descriptorOptional=false, skipConsistencyCheck=false)'; switch to HTTPS or opt-in as Resolver.url("Typesafe Ivy Releases", url(...)).withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'Patterns(ivyPatterns=Vector(), artifactPatterns=Vector(http://repo.typesafe.com/typesafe/releases/[organisation]/[module](_[scalaVersion])(_[sbtVersion])/[revision]/[artifact]-[revision](-[classifier]).[ext]), isMavenCompatible=true, descriptorOptional=false, skipConsistencyCheck=false)'; switch to HTTPS or opt-in as Resolver.url("Typesafe Ivy Releases", url(...)).withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'Patterns(ivyPatterns=Vector(), artifactPatterns=Vector(http://repo.typesafe.com/typesafe/releases/[organisation]/[module](_[scalaVersion])(_[sbtVersion])/[revision]/[artifact]-[revision](-[classifier]).[ext]), isMavenCompatible=true, descriptorOptional=false, skipConsistencyCheck=false)'; switch to HTTPS or opt-in as Resolver.url("Typesafe Ivy Releases", url(...)).withAllowInsecureProtocol(true)
```
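
For reference, the opt-in forms suggested by the warnings could look like this in a build.sbt (a sketch based only on the warning text above; switching to HTTPS remains the recommended fix):

```
// Preferred: switch the resolver to HTTPS.
resolvers += ("Typesafe Releases" at "https://repo.typesafe.com/typesafe/releases/")

// Or explicitly opt in to the insecure protocol, as the warning suggests:
resolvers += ("Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases/")
  .withAllowInsecureProtocol(true)
```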
2019-08-28 23:20:09 -04:00
eugene yokota c38ce111fe
Merge pull request #4999 from eatkins/clean-directories
Restore old cleanFiles behavior
2019-08-28 20:42:23 -04:00
Ethan Atkins b3320ce1ba Restore old cleanFiles behavior
I inadvertently changed the semantics of clean so that cleanFiles would
only delete the file if it was a regular file. In older versions of sbt,
if a file in cleanFiles was a directory, it would be recursively
deleted.
2019-08-28 16:41:25 -07:00
Eugene Yokota 4086fc1213 Take dependencyOverrides into account
This tracks https://github.com/coursier/sbt-coursier/pull/106
Fixes https://github.com/sbt/sbt/issues/4895
2019-08-28 17:43:42 -04:00
Ethan Atkins 6ec9edb733 Abort early in watch multi commands
During refactoring of Continuous, I inadvertently changed the semantics
of `~` so that all multi commands were run regardless of whether or not
an earlier command had failed. I fixed the issue and added a regression
test.
2019-08-27 10:38:55 -07:00
eugene yokota 110f54a044
Merge pull request #4987 from eatkins/fix-cross-overcompilation
Store compile file stamps for each scala version
2019-08-27 08:39:37 -04:00
Ethan Atkins 6ba3afbef7 Fix settings in ScriptMain
It was reported in https://github.com/sbt/sbt/issues/4973 that the
scalaVersion setting was not being correctly set in a script running
with ScriptMain using 1.3.0-RC4. Using git bisect, I found that the
issue was introduced in
73cfd7c8bd.
That commit manipulates the classloaders passed in by the launcher, but
only for the xMain entry point. I found that the script ran correctly if
I updated the classloader for ScriptMain as well.

After these changes, the example script in #4973 correctly prints 2.13.0
for the scala version with a locally published sbt.

Bonus: rename the xMainImpl object to xMain. It was private[sbt] anyway.
2019-08-26 21:15:43 -07:00
Ethan Atkins bd4d04d131 Store compile file stamps for each scala version
https://github.com/sbt/sbt/issues/4986 reported that +compile would
always recompile everything in the project even when the sources hadn't
changed. This was because the dependency classpath was changing between
calls to compile, which caused the external hooks cache introduced in
32a6d0d5d7 to invalidate the scala
library. To fix this, I cache the file stamps on a per scala version
basis. I added a scripted test that checks that there is no
recompilation in two consecutive calls to `+compile` in a multi scala
version build. It failed prior to these changes.
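
A rough sketch of keying the stamp cache by Scala version (hypothetical structure and names, not sbt's actual cache implementation):

```
import java.nio.file.Path
import java.util.concurrent.ConcurrentHashMap

object CrossVersionStampsSketch {
  // Hypothetical: one stamp map per Scala version so that consecutive +compile
  // runs for different Scala versions do not invalidate each other's stamps.
  private val stampsByScalaVersion =
    new ConcurrentHashMap[String, ConcurrentHashMap[Path, String]]

  def cacheFor(scalaVersion: String): ConcurrentHashMap[Path, String] =
    stampsByScalaVersion.computeIfAbsent(
      scalaVersion,
      _ => new ConcurrentHashMap[Path, String]
    )
}
```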
2019-08-26 14:47:57 -07:00
eugene yokota 49afe01287
Merge pull request #4982 from eed3si9n/wip/gc
avoid force gc during load
2019-08-23 13:59:32 -04:00
Eugene Yokota fcd9dbf3dd avoid force gc during load
This initializes the lastGcCheck to the current time so it won't force GC in the first 10 minutes, avoiding unnecessary GC during load.
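
A minimal sketch of the described initialization (names other than lastGcCheck are hypothetical; this is not the actual sbt code):

```
object GcCheckSketch {
  private val gcInterval = 10 * 60 * 1000L // 10 minutes, per the description above
  // Initialized to the current time, so no forced GC can happen during load.
  private var lastGcCheck: Long = System.currentTimeMillis()

  def maybeForceGc(): Unit = {
    val now = System.currentTimeMillis()
    if (now - lastGcCheck > gcInterval) {
      lastGcCheck = now
      System.gc()
    }
  }
}
```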
2019-08-23 02:16:11 -04:00
Ethan Atkins 76ec00dc4b
Merge branch 'develop' into startup-perf 2019-08-22 21:50:21 -07:00
Ethan Atkins 1f9ea70518 Avoid intermediate collection creation during load
The allKeys method was making many intermediate collections. For akka,
this reduced startup time by about 400ms on average.
2019-08-22 20:35:12 -07:00
Ethan Atkins 3fc8817974 Add parallelism to KeyIndex.aggregate
I looked for serial bottlenecks in sbt project loading and discovered
that KeyIndex.aggregate was relatively easily parallelizable. Before
this time, it took about 1 second to run KeyIndex.aggregate in the akka
project on my computer. After this change, it took 250ms. Given that I
have 4 logical cores, the speedup is roughly linear.
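
A generic sketch of this kind of data-parallel fan-out using Scala parallel collections (hypothetical data and helper, not the actual KeyIndex.aggregate code):

```
object ParallelAggregateSketch {
  // Stand-in for the independent per-project aggregation work.
  def aggregateKeysFor(project: String): Set[String] =
    Set(s"$project / compile", s"$project / test")

  def main(args: Array[String]): Unit = {
    val projects = (1 to 100).map(i => s"project$i")
    // Each project's computation is independent, so .par spreads them across cores.
    val index: Map[String, Set[String]] =
      projects.par.map(p => p -> aggregateKeysFor(p)).seq.toMap
    println(index.size)
  }
}
```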
2019-08-22 20:35:11 -07:00
Ethan Atkins b6f05b91f6 Stop injecting file management settings for io tasks
It turns out that injecting the keys necessary for incremental tasks
causes a significant startup penalty for many larger projects. For
example, akka starts up about 3 seconds faster if we do not inject these
settings for the tasks returning `File` or `Seq[File]`. Given that all
of these apis use java.nio.file anyway, it makes sense to not backport
them to older tasks.
2019-08-22 20:34:37 -07:00
Ethan Atkins 5eab9df0df Fix clean performance
The clean task got a lot slower in 1.3.0
(https://github.com/sbt/sbt/issues/4972). The reason for this was that
sbt 1.3.0 generated many custom clean tasks for any tasks that returned
`File` or `Seq[File]`. Each of these tasks was tagged with
Tags.Clean which meant that only one of them could run at a time. As a
result, it took a long time to evaluate all of the custom tasks, even if
they were no-ops. In the akka project, a no-op clean was taking 35
seconds which is simply unacceptable. After this change, a no-op clean
takes less than a second in akka (a full clean only takes about 6
seconds after running test:compile)

To fix this, I stopped aggregating the clean task across configs and
projects. Because I removed the aggregation, I needed to manually
implement clean in the `Compile` and `Test` configurations to make
`Compile / clean` and `Test / clean` work correctly.
2019-08-22 13:01:56 -07:00
Eugene Yokota ecb0375de2 reimplement stack trace suppression
Fixes #4964

Together with https://github.com/sbt/util/pull/211, this brings back stack trace suppression for custom tasks by default.
Debug-level logs are available via `last`, and this prints a message informing the user of that fact. BLUE on a dark background is difficult to read, so I am changing the color highlight to MAGENTA.
2019-08-20 13:56:52 -04:00
Eugene Yokota 777cc39fcf Fix inter-project dependencies
Tracking https://github.com/coursier/sbt-coursier/pull/101
2019-08-15 15:40:43 -04:00
Eugene Yokota 46e92949ed use Relaxed reconciliation strategy by default
Fixes #4720
Ref https://github.com/coursier/coursier/pull/1293
Ref https://github.com/coursier/sbt-coursier/pull/112
2019-08-15 15:40:43 -04:00
Ethan Atkins 9c7acdb713 Force invalidate dependency changes
After adding the automatic lookup to external hooks for missing binary
jars, the scripted test dependency-management/invalidate-internal
started failing. This was because the previous analysis contained a jar
dependency that still existed on disk but was no longer a part of the
dependency classpath. Fundamentally the problem is that the zinc
compile analysis is not tightly coupled with the sbt build state.

To fix this, we can cache the dependency classpath file stamps in the
same way that we cache the input file stamps in external hooks and
manually diff them at the sbt level. We then force updates regardless of
the difference between the zinc state and the sbt state.
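
A small sketch of the kind of stamp-map diff described above (hypothetical types and names, not sbt's actual implementation):

```
import java.nio.file.Path

object ClasspathDiffSketch {
  // Compare the previously cached classpath stamps with the current ones and
  // report whether anything was added, removed, or modified.
  def classpathChanged(previous: Map[Path, String], current: Map[Path, String]): Boolean =
    previous.keySet != current.keySet ||
      current.exists { case (path, stamp) => !previous.get(path).contains(stamp) }
}
```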
2019-08-15 11:31:24 -07:00
Ethan Atkins 32a6d0d5d7 Update ExternalHooks to look up changed binaries
It was reported that in community builds, sometimes there was
spurious over-compilation due to invalidation of the scala library jar
(https://github.com/sbt/sbt/issues/4948). The reason for this was that
the external hooks implementation prefills the managed cache with all of the time
stamps for the project dependencies but was not looking up any jars that
weren't in the cache. I suspect I did this because I didn't realize that
zinc also includes its own classpath in the binaries, which is not
a part of the dependencyClasspath. The fix is to just add the jar to the
cache if it doesn't already exist by switching to getOrElseUpdate from
get.

I followed the steps in #4948 and published a version of sbt locally
with this change and the spurious re-builds stopped.
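
The gist of the fix, sketched with a plain mutable map (hypothetical names; the real cache type in sbt differs):

```
import java.nio.file.{ Files, Path }
import scala.collection.mutable

object StampCacheSketch {
  private val stampCache = mutable.Map.empty[Path, Long]

  // Before: a jar missing from the prefilled cache simply came back as None.
  def stampBefore(jar: Path): Option[Long] = stampCache.get(jar)

  // After: look the missing jar up lazily and remember the result.
  def stampAfter(jar: Path): Long =
    stampCache.getOrElseUpdate(jar, Files.getLastModifiedTime(jar).toMillis)
}
```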
2019-08-15 10:34:05 -07:00
Ethan Atkins 7c483909af Invalidate unmanagedFileStampCache in allOutputFiles
In the code formatting use case, the formatting task may modify the
source files in place. If the formatting task uses the nio
inputFileStamps, then it would fill the in-memory cache of source paths
to file stamps. This would cause compile to see the pre-formatted
stamps. To fix this, we can invalidate the file cache entries for the
outputs of a task. This has the side effect of some extra I/O
because the hashes may be computed three times: once for the format
inputs, once for the format outputs, and once for the compile inputs. I
think most users would understand that adding auto-formatting could
potentially slow down compilation.

To really prove this out, I implemented a poor man's scalafmt plugin in
a scripted test. It is fully incremental. Even in the case when some
files cannot be formatted it will update all of the files that can be
formatted and not re-format them until they change.
2019-08-09 13:18:29 -07:00
Ethan Atkins 6d482eb166 Set scope for fileTreeView
It makes sense to add a scope for the `fileTreeView` key where it is
available. At the moment, there is only one `fileTreeView`
implementation but, if that changes down the road, these tasks will
automatically inherit the correct view.
2019-08-09 12:18:22 -07:00
Ethan Atkins 6700d5f77a Add nio path filter settings
It makes sense for the new glob/nio based apis that we provide first
class support for filtering the results. Because it isn't possible to
scope a task within a task within a task, i.e.
`compile / fileInputs / includePathFilter`, I had to add four new
filter settings of type `PathFilter`:

fileInputIncludeFilter :== AllPassFilter.toNio,
fileInputExcludeFilter :== DirectoryFilter.toNio || HiddenFileFilter,
fileOutputIncludeFilter :== AllPassFilter.toNio,
fileOutputExcludeFilter :== NothingFilter.toNio,

Before I was effectively hard-coding the filter: RegularFileFilter &&
!HiddenFileFilter in the inputFileStamps and allInputFiles tasks. These
remain the defaults, as seen in the fileInputExcludeFilter definition
above, but can be overridden by the user.

It makes sense to exclude directories and hidden files for the input
files, but it doesn't necessarily make sense to apply any output filters
by default. For symmetry, it makes sense to have them, but they are
unlikely to be used often.

Apart from adding and defining the default values for these keys, the
only other change I had to make was to remove the hard-coded filters
from the allInputFiles and inputFileStamps tasks and also add the
filtering to the allOutputFiles task. Because we don't automatically
calculate the FileAttributes for the output files, I added logic for
bypassing the path filter application if the PathFilter is effectively
AllPass, which is the case for the default values because:
AllPassFilter.toNio == AllPass
NothingFilter.toNio == NoPass
AllPass && !NoPass == AllPass && AllPass == AllPass
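
As an example of overriding one of these defaults in a build.sbt, using only the keys and filters named above (the exact scoping is a sketch and may differ):

```
// Also treat hidden files as inputs by dropping HiddenFileFilter from the
// default exclude filter, keeping only the directory exclusion.
Compile / compile / fileInputExcludeFilter := DirectoryFilter.toNio
```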
2019-08-09 12:18:22 -07:00
Ethan Atkins 8ce2578060 Introduce FileChanges
Prior to this commit, change tracking in sbt 1.3.0 was done via the
changed(Input|Output)Files tasks, which returned
Option[ChangedFiles]. The ChangedFiles case class was defined in io as

case class ChangedFiles(created: Seq[Path], deleted: Seq[Path], updated: Seq[Path])

When no changes were found, or if there were no previous stamps, the
changed(Input|Output)Files tasks returned None. This made it impossible
to tell whether nothing had changed or if it was the first time.
Moreover, the api was awkward as it required pattern matching or folding
the result into a default value.

To address these limitations, I introduce the FileChanges class. It can
be generated regardless of whether or not previous file stamps were
available. The FileChanges value contains all of the created, deleted, modified,
and unmodified files so that the user can access each group directly
without having to pattern match.
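
A simplified stand-in for the shape described above (field names taken from the description; the real sbt class may differ in its exact API):

```
import java.nio.file.Path

final case class FileChangesSketch(
    created: Seq[Path],
    deleted: Seq[Path],
    modified: Seq[Path],
    unmodified: Seq[Path]
) {
  // Unlike Option[ChangedFiles], this can always be produced, and callers can
  // ask directly whether anything changed instead of pattern matching on None.
  def hasChanges: Boolean = created.nonEmpty || deleted.nonEmpty || modified.nonEmpty
}
```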
2019-08-09 12:18:22 -07:00
Ethan Atkins 8e9efbeaac Add extension methods for input and output files
It is tedious to write `(foo / allInputFiles).value`, so I added simple
extension method macros that expand `foo.inputFiles` to
`(foo / allInputFiles).value` and `foo.outputFiles` to
`(foo / allOutputFiles).value`.
2019-08-09 12:18:22 -07:00
Ethan Atkins f126206231 Fix incremental task evaluation semantics
While writing documentation for the new file management/incremental
task evaluation features, I realized that incremental task evaluation
did not have the correct semantics. The problem was that calls to
`.previous` are not scoped within the current task. By this, I mean that
say there are tasks foo and bar and that the definition of bar looks like

bar := {
    val current = foo.value
    foo.previous match {
        case Some(v) if v == current => // value hasn't changed
        case _ => process(current)
    }
}

The problem is that foo.previous is effectively stored in
(foo / streams).value.cacheDirectory / "previous". This
means that it is completely decoupled from foo. Now, suppose that the
user runs something like:
> set foo := 1
> bar // processes the value 1
> set foo := 2
> foo
> bar // does not process the new value 2 because foo was called, which updates the previous value

This is not an unrealistic scenario and is, in fact, common if the
incremental task evaluation is changed across multiple processing steps.
For example, in the make-clone scripted test, the linkLib task processes
the outputs of the compileLib task. If compileLib is invoked separately
from linkLib, then when we next call linkLib, it might not do anything
even if there was recompilation of objects because the objects hadn't
changed since the last time we called compileLib.

To fix this, I generalized the previous cache so that it can be keyed on
two tasks, one is the task whose value is being stored (foo in the
example above) and the other is the task in which the stored task value
is retrieved (bar in the example above). When the two tasks are the
same, the behavior is the same as before.

Currently the previous value for foo might be stored somewhere like:

base_directory/target/streams/_global/_global/foo/previous

Now, if foo is stored with respect to bar, it might be stored in

base_directory/target/streams/_global/_global/bar/previous-dependencies/_global/_global/foo/previous

By storing the files this way, it is easy to remove all of the previous
values for the dependencies of a task.

In addition to changing how the files are stored on disk, we have to store
the references in memory differently. A given task can now have multiple
previous references (if, say, two tasks bar and baz both depend on the
previous value). When we complete the results, we first have to collect
all of the successful tasks. Then for each successful task, we find all
of its references. For each of the references, we only complete the
value if the scope in which the task value is used is successful.

In the actual implementation in Previous.scala, there are a number of places
where we have to cast to ScopedKey[Task[Any]]. This is due to
limitations of ScopedKey and Task being type invariant. These casts are
all safe because we never try to get the value of anything, we only use
the portion of the apis of these types that are independent of the value
type. Existential typing, where ScopedKey[Task[_]] gets inferred to
ScopedKey[Task[x]] forSome { type x }, is a big part of why we have problems with
type inference.
2019-08-09 12:18:22 -07:00
Ethan Atkins d18cb83b3c Switch from Vector to List in Settings
Using List instead of Vector makes the code a bit more readable. We
don't need indexed access into the data structure, so it's unlikely that
Vector was providing any performance benefit.
2019-08-09 12:18:22 -07:00
Ethan Atkins fdeb6be667 Add scaladoc to FileStamp
As part of a documentation push, I noticed that these were undocumented
and that there were some public apis in FileStamp that I intended to be
private[sbt].
2019-08-09 12:18:22 -07:00
Ethan Atkins fb15065438 Move implicit FileStamp JsonFormats into object
I realized it was probably not ideal to have these implicit JsonFormats
defined directly in the FileStamp object because they might
inadvertently be brought into scope with a wildcard import.
2019-08-09 12:18:22 -07:00
Ethan Atkins 9cd88070ae Fix typo in allOutputFiles description 2019-08-09 12:18:22 -07:00
Ethan Atkins a7715e90a4 Rename cacheStoreFactory attribute
It references a CacheStoreFactoryFactory so it should have been named
accordingly.
2019-08-09 12:18:22 -07:00
eugene yokota 6865f32eae
Merge pull request #4934 from eed3si9n/wip/publishTo
Revert "don't require publishTo specified if publishArtifact is `false`"
2019-08-09 09:24:31 -04:00
Ethan Atkins 14f7177619 Fix implicit numeric widening warning 2019-08-08 16:06:13 -07:00
Ethan Atkins d86afb5745 Revert "Merge pull request #4930 from eatkins/2.12.9"
This reverts commit 053b72005d, reversing
changes made to d6b8e0388c.
2019-08-08 11:09:29 -07:00
Eugene Yokota ec6cf15f12 Revert "don't require publishTo specified if publishArtifact is `false`"
This reverts commit 4668faff7c.
Ref https://github.com/sbt/sbt/pull/3760
Ref https://github.com/sbt/sbt/pull/4931
2019-08-08 00:36:36 -04:00
eugene yokota 8365e4b189
Merge pull request #4926 from eatkins/auto-reload-fix
Improve auto-reload
2019-08-05 19:04:50 -04:00
Ethan Atkins b26ce819ca Bump default scala version to 2.12.9
I generated this automatically with:

git grep "2.12.8" | \
  cut -d ':' -f1 | uniq | xargs perl -p -i -e "s/2.12.8/2.12.9/"
2019-08-05 13:12:28 -07:00
Ethan Atkins aa62386f4d Improve auto-reload
I noticed that sometimes if I changed a build source and then ran reload
in the shell, I'd still see a warning about build sources having
changed. We can eliminate this behavior by resetting the
hasCheckedMetaBuild state attribute to false and skipping the
checkBuildSources step if the current command is 'reload'. We also now
skip the build-source check if the command is exit or reboot.
2019-08-05 07:42:24 -07:00