Commit Graph

2853 Commits

Author SHA1 Message Date
Eugene Yokota 2b24f05435 Fixes update task not invalidating
Fixes https://github.com/sbt/sbt/issues/5292
Ref https://github.com/sbt/sbt/issues/5142

The `update` task checks whether the timestamps are still the same as in the previous resolution. This no longer works since lm-coursier does not populate the timestamps in `UpdateReport`. See 2e5c8aed5e/modules/lm-coursier/src/main/scala/lmcoursier/internal/SbtUpdateReport.scala (L346-L351)

Since the stamps are empty, this caused `update` not to invalidate even when the cache is completely missing. This works around the issue by checking whether the files still exist, and adds a warning when a file is missing.
2019-12-12 22:39:05 -05:00
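A minimal sketch of the existence check described above; the object, helper name, and wiring are illustrative, not the actual sbt internals:

```
import java.io.File
import sbt.util.Logger
import sbt.librarymanagement.UpdateReport

object UpdateInvalidation {
  // With empty stamps, fall back to checking that every previously resolved
  // artifact still exists, warning about any file that has gone missing.
  def cachedReportIsStale(report: UpdateReport, log: Logger): Boolean =
    report.allFiles.exists { f =>
      val missing = !f.exists
      if (missing) log.warn(s"previously resolved file is missing: $f")
      missing
    }
}
```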
Ethan Atkins a177c386c0 Add closeClassLoader setting
There have been a number of issues that have come up because of sbt
1.3.0 aggressively closing classloaders. While these issues have been
quite useful in helping us determine some issues related to classloader
lifecycle, we should give users the option to prevent sbt from closing
the classloaders.

I also noticed that the classloader-cache/spark test has been
occasionally segfaulting on Travis, so I disabled classloader closing in
that test.
2019-12-12 17:07:40 -08:00
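A hedged build.sbt example of the new opt-out; the key name and scope are assumed from the commit title and may differ from what actually ships:

```
// build.sbt (illustrative): keep sbt from closing task classloaders
// after test/run. Key name and scope assumed from the commit title.
Global / closeClassLoader := false
```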
eugene yokota cba7442618
Merge pull request #5295 from eed3si9n/wip/new
Fixes sbt new by restoring the terminal
2019-12-11 18:24:20 -05:00
eugene yokota c03d70113c
Merge pull request #5289 from eatkins/temporary-directories
Do not use temporary directories in java.io.tmpdir
2019-12-11 13:08:23 -05:00
Eugene Yokota 1ef83e9140 Fixes sbt new by restoring the terminal
Fixes https://github.com/sbt/sbt/issues/5063

This fixes "sbt new" on Ubuntu by restoring the terminal state after supershell querying for the terminal width.
2019-12-11 13:05:20 -05:00
eugene yokota 8178673869
Merge pull request #5287 from eatkins/lint-excludes
Add onLoad and onUnload to project lint excludes
2019-12-10 18:59:42 -05:00
Ethan Atkins 283d486796 Do not use temporary directories in java.io.tmpdir
sbt should not, by default, create files in the location specified by
java.io.tmpdir (which is the default behavior of apis like
IO.createTemporaryDirectory or Files.createTempFile). Files created there
have a tendency to leak, and it isn't even guaranteed that the user has
write permissions there (though this is unlikely).

I git grepped for `createTemp` and found these apis. After this change,
the files created by sbt should largely be localized to the project and
sbt global base directories.
2019-12-10 15:05:36 -08:00
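A sketch of the general idea using plain java.nio; the object and helper names are hypothetical:

```
import java.nio.file.{ Files, Path }

object ProjectTemp {
  // Create scratch space under the build's own target directory so that
  // leaked files stay local to the project and are removed by clean.
  def createTempDirectory(target: Path): Path = {
    val scratch = target.resolve("tmp")
    Files.createDirectories(scratch)
    Files.createTempDirectory(scratch, "sbt_")
  }
}
```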
eugene yokota f0d1e075db
Merge pull request #5288 from eatkins/shutdown-hook-type-annotation
Add type annotation for shutdown hooks
2019-12-10 15:22:48 -05:00
eugene yokota c46c17b92d
Merge pull request #5278 from hvesalai/develop
No supershell for Emacs and other color supporting dumb terminals
2019-12-10 15:21:44 -05:00
Ethan Atkins 38a56358dc Add type annotation for shutdown hooks
Intellij couldn't handle this without an annotation.
2019-12-10 10:37:52 -08:00
Ethan Atkins cc09294cf3 Add onLoad and onUnload to project lint excludes 2019-12-10 10:24:02 -08:00
Heikki Vesalainen 9e72b1c520 No supershell for Emacs or other dumb terminals that support color 2019-12-10 18:14:19 +00:00
Eugene Yokota 93f1f5464c sbt-giter8-resolver 0.12.0 2019-12-09 01:17:38 -05:00
Ethan Atkins 8518c4b4fd Place scalatest framework jar in its own classloader
Closing the ManagedClassLoader generated by test can cause nonlocal
effects because the jdk shares some JarFile resources across multiple
URLClassLoaders. As a result, if one classloader is trying to load a
resource and the classloader is closed, it might cause the resource
loading to fail (see https://github.com/sbt/sbt/issues/5262). This can
be fixed by moving the scalatest framework jar (and its dependencies)
into an additional classloader layer that sits between the scala library
loader and the rest of the test dependencies.

In addition to adding the new layer, I reworked the
ReverseLookupClassLoader to use its dependent classloader to find
resources that may be below it in the classloading hierarchy rather than
constructing an entirely new classloader for resources.

After this change, I was able to run test in the repro project
(https://github.com/rjmac/sbt-5262) 1000 times with no failures. Note that
the repro is sensitive to the jdk used. I could not reproduce with jdk11
but I could typically induce a failure within 20 or so runs with jdk8.

I benchmarked this change with
https://github.com/eatkins/scala-build-watch-performance and performance
was roughly the same as 1.3.4 with turbo mode and about 200-250ms faster
in non-turbo mode (which can be explained by the time to load the
scalatest classes).
2019-12-06 11:41:44 -08:00
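A very rough sketch of the layering described above; the object, method, and parameter names are illustrative, not sbt's actual classloader-caching code:

```
import java.net.{ URL, URLClassLoader }

object TestLayering {
  // The framework jars get their own loader that sits between the scala
  // library loader and the rest of the test classpath.
  def layeredTestLoader(
      scalaLibraryLoader: ClassLoader,
      frameworkJars: Array[URL],
      testClasspath: Array[URL]
  ): ClassLoader = {
    val frameworkLoader = new URLClassLoader(frameworkJars, scalaLibraryLoader)
    new URLClassLoader(testClasspath, frameworkLoader)
  }
}
```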
Guillaume Martres 437950266f Give a more precise type to mkIvyConfiguration
This makes it possible to do mkIvyConfiguration.value.withXXX(...) for
all the methods in InlineIvyConfiguration. (I need this to remove
inter-project resolvers when fetching dotty from sbt-dotty to avoid
accidentally fetching a local project in the build of dotty itself).
2019-12-05 18:14:56 +01:00
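A hedged build.sbt example of the usage this enables, assuming the generated `resolvers`/`withResolvers` accessors on `InlineIvyConfiguration` and the conventional "inter-project" resolver name:

```
// build.sbt (illustrative): tweak the generated configuration, e.g. to
// drop the inter-project resolver.
ivyConfiguration := {
  val conf = mkIvyConfiguration.value
  conf.withResolvers(conf.resolvers.filterNot(_.name == "inter-project"))
}
```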
Ethan Atkins 1438b79378 Make ZombieClassLoader thread safe
The previous implementation of ZombieClassLoader was not thread safe.
This caused problems because it is possible for the ManagedClassLoader
in test to leak into the coursier thread pool if the test uses bouncy
castle apis. Unfortunately, these apis seem, in some cases, to assign
static variables using the Thread context class loader. Because the
bouncycastle apis are implemented by the jdk, they are found in the
system classloader and thus the static references leak out of the test
context.

I had a local repro of https://github.com/sbt/sbt/issues/5249 that is
fixed by this change.
2019-12-03 18:53:40 -08:00
Ethan Atkins 53788ba356 Support input tasks in cross (+) command 2019-12-03 10:47:15 -08:00
Ethan Atkins 0cbbee4418 Don't import fields from local variables
I found it hard to reason about where certain local variables, like
currentRef, were coming from. I also changed 'x' to 'extracted' in a few
places for clarity.
2019-12-03 10:45:52 -08:00
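An illustrative contrast of the two styles, with `Project.extract` standing in for the kind of local value involved; the object and method are made up:

```
import sbt._

object ExplicitAccess {
  // Prefer explicit member access over `import extracted._` so it stays
  // obvious where names like currentRef come from.
  def currentProject(state: State): ProjectRef = {
    val extracted = Project.extract(state)
    // rather than: import extracted._ ; currentRef
    extracted.currentRef
  }
}
```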
Ethan Atkins abb3f61ff1
Merge branch 'develop' into background-jobs 2019-12-02 08:58:24 -08:00
Ethan Atkins 1a7d6a84f5
Merge branch 'develop' into state-transform 2019-12-02 08:03:46 -08:00
eugene yokota 56af617908
Merge pull request #5258 from eatkins/lint
Make minor improvements to project setting linting
2019-11-30 23:29:26 -05:00
Ethan Atkins 73a196798f Move background job service directory location
Rather than putting the background job temporary files in whatever
java.io.tmpdir points to, this commit moves the files into a
subdirectory of target in the project root directory.

To make the directory configurable via settings, I had to move the
declaration of the bgJobService setting later in the project
initialization process. I don't think this should matter because
background jobs shouldn't be created until after the project has loaded
all of its settings.
2019-11-30 15:20:00 -08:00
Ethan Atkins 73edc8d4ff Use anonymous function instead of Runnable 2019-11-30 15:20:00 -08:00
Ethan Atkins 7426ae520c Fix background job shutdown
When a user calls sbt exit and there is an active background job, sbt
may not exit cleanly. This was primarily because the
background job service shutdown method depended on the
StandardMain.executionContext, which was closed before the background job
service was shut down. This was fixed by reordering the resource
closing in StandardMain.runManaged.
2019-11-30 15:20:00 -08:00
Ethan Atkins 8d26bc73b4 Shutdown background job on error
When running a main method, if the user inputs ctrl+c then the `run`
task will exit but the main method is not interrupted so it continues
running even once sbt has returned to the shell. If the main method is a
webserver, this prevents run from ever starting again on a fixed port.
To fix this, we can modify the waitForTry method to stop the job if an
exception is thrown (ctrl+c leads to an interrupted exception being
thrown by waitFor).

I rework the BackgroundJobService so that the default implementation of
waitForTry is now usable and no longer needs to be overridden. The side
effect of this change is that waitFor may now throw an exception. Within
sbt, waitFor was only used in one place and I reworked it to use
waitForTry instead. This could theoretically break a downstream user who
relied on waitFor not throwing an exception but I suspect that there
aren't many users of this api, if any at all.
2019-11-30 15:20:00 -08:00
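A minimal sketch of the waitFor/waitForTry relationship described above; `Job` and `waitFor` stand in for sbt's JobHandle/BackgroundJobService API rather than reproducing it:

```
import scala.util.Try

object WaitSketch {
  trait Job
  // The blocking wait may now throw, e.g. an InterruptedException after ctrl+c.
  def waitFor(job: Job): Unit = ()
  // The default Try-returning variant simply wraps the blocking wait.
  def waitForTry(job: Job): Try[Unit] = Try(waitFor(job))
}
```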
Ethan Atkins a83c280db1 Improve StateTransform
The StateTransform class introduced in
9cdeb7120e did not cleanly integrate with
logic for transforming the state using the `transformState` task
attribute. For one thing, the state transform was only applied if the
root node returned StateTransform but no transformation would occur if a
task had a dependency that returned StateTransform. Moreover, the
transformation would override all other transformations that may have
occurred during task evaluation.

This commit updates StateTransform to act very similarly to the
transformState attribute. Instead of wrapping a `State` instance, it now
wraps a transformation function from `State => State`. This function
can be applied on top of or prior to the other transformations via the
`transformState` attribute.

For binary compatibility with 1.3.0, I had to add the stateProxy
function as a constructor parameter in order to implement the `state`
method. The proxy function will generally throw an exception unless the
legacy api is used. To avoid that, I private[sbt]'d the legacy api so
that binary compatibility is preserved but any builds targeting > 1.4.x
will be required to use the new api.

Unfortunately I couldn't private[sbt] StateTransform.apply(state: State)
because mima interpreted it as a method type change because I added
StateTransform.apply(transform: State => State). This may be a mima bug
given that StateTransform.apply(state: State) would be jvm public even
when private[sbt], but I figured it was quite unlikely that any users
were using this method anyway since it was incorrectly implemented in
1.3.0 to return `state` instead of `new StateTransform(state)`.
2019-11-30 15:06:34 -08:00
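A hedged build.sbt-style sketch of the new shape: a task returns a StateTransform that wraps a `State => State` function. The task key and attribute key below are made up for illustration:

```
// build.sbt (illustrative)
val exampleFlag = AttributeKey[Boolean]("exampleFlag")
val exampleTransform = taskKey[StateTransform]("returns a state transformation")

exampleTransform := StateTransform((s: State) => s.put(exampleFlag, true))
```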
Ethan Atkins 3bb847fc72 Allow lintUnusedKeys to be disabled
The linting can take a while for large projects because `Def.compiled`
scales with the number of settings. Even for small projects (i.e. scripted
tests), it takes about 50 ms on my computer. This doesn't change the
current behavior because the default value is true.
2019-11-30 15:00:38 -08:00
Ethan Atkins 805fa002a7 Only print unused setting warning if there are any 2019-11-30 15:00:38 -08:00
Ethan Atkins 094d730b06 Bump scalafmt 2019-11-30 14:57:20 -08:00
Jason Pickens 71bc3876d9
Scope compiler bridge to consoleProject 2019-11-28 20:29:31 +13:00
eugene yokota c45e991c6b
Merge pull request #5229 from eed3si9n/wip/addPluginSbtFile
addPluginSbtFile command fixes
2019-11-21 17:02:02 -05:00
Frank S. Thomas 16860b5273 Include description and homepage in ivy.xml files
This PR includes the values of the `description` and `homepage`
settings in the `ivy.xml` files generated by the `makeIvyXml`
task. It restores the behaviour of sbt 1.2.8 and of builds where
`useCoursier` is set to `false`.

Two things are changed in this PR:
 * `IvyXml.content` now adds the `homepage` attribute to the
   `description` element if `project.info.homePage` is not empty.
 * `CoursierInputsTasks.coursierProject0` now fills the previous
   empty `CProject.info` field with the description and homepage.

Closes: #5234
2019-11-16 20:18:42 +01:00
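For reference, the settings in question are ordinary build.sbt keys; the values below are placeholders:

```
// build.sbt -- with these set, the makeIvyXml output now carries them
// into the generated ivy.xml as described above.
description := "A library for parsing widgets"
homepage := Some(url("https://example.com/widgets"))
```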
Eugene Yokota 033601c393 addPluginSbtFile command fixes
Ref #4211
Fixes #4395
Fixes #4600

This is a reimplementation of `--addPluginSbtFile`. #4211 implemented the command to load extra `*.sbt` files as part of the global plugin subproject. That had the unwanted side effect of not working when the `.sbt/1.0/plugins` directory does not exist. This changes the strategy to load the `*.sbt` files as part of the meta build.

```
$ sbt -Dsbt.global.base=/tmp/hello/global --addPluginSbtFile=/tmp/plugins/plugin.sbt
[info] Loading settings for project hello-build from plugin.sbt ...
[info] Loading project definition from /private/tmp/hello/project
sbt:hello> plugins
In file:/private/tmp/hello/
	sbt.plugins.IvyPlugin: enabled in root
	sbt.plugins.JvmPlugin: enabled in root
	sbt.plugins.CorePlugin: enabled in root
	sbt.ScriptedPlugin
	sbt.plugins.SbtPlugin
	sbt.plugins.SemanticdbPlugin: enabled in root
	sbt.plugins.JUnitXmlReportPlugin: enabled in root
	sbt.plugins.Giter8TemplatePlugin: enabled in root
	sbtvimquit.VimquitPlugin: enabled in root
```
2019-11-10 20:03:09 -05:00
Samvel Abrahamyan ff75a21d4f Sleep the current thread when we need to retry background job shutdown 2019-11-05 14:54:55 +01:00
eugene yokota e17c64dfb6
Merge pull request #5153 from eed3si9n/wip/lint
build linting to warn on unused settings during reload
2019-10-30 11:36:43 -04:00
Filipe Regadas 66da2f5926
Merge branch 'develop' into fix/5110 2019-10-19 15:27:34 +01:00
Filipe Regadas 562eae2bff
Add explicit return type to plugin settings 2019-10-19 09:38:54 +01:00
Filipe Regadas d49ced04da
Bump semanticdbVersion to 4.2.3 2019-10-19 09:09:36 +01:00
Filipe Regadas 46b6ad0171
Bump semanticdbVersion to 4.2.4 2019-10-18 21:39:13 +01:00
Filipe Regadas 0ef5b578f8
Fix MiMa 2019-10-18 18:39:39 +01:00
Filipe Regadas a451200bad
Fix #5110: allow semanticdbVersion override 2019-10-18 16:48:36 +01:00
Josh Soref c7bf1a37f2
Remove excess quotation mark 2019-10-17 14:19:20 -04:00
Ethan Atkins d698d6dcdd Don't overwrite nio build settings with injected settings
The current injection of the new nio keys will overwrite any definitions
of those keys in a build source. This is undesirable. The fix is to
create a mapping of scoped keys to settings and, for each injected setting
key that has a previous definition, put that definition after the injected
definition so that it can override it.
2019-10-08 09:47:59 -07:00
Ethan Atkins d12bb2d71e Shutdown progress thread when there are no tasks
It is still possible for progress threads to leak so shut them down if
there are no active tasks. The report0 method will start up a new thread
if a task is added.
2019-10-07 09:43:59 -07:00
Ethan Atkins 6559c3a06d Use only one progress thread during task evaluation
In some circumstances, sbt would generate a number of task progress
threads that could run concurrently. The issue was that the TaskProgress
could be shared by multiple EvaluateTaskConfigs if a dynamic task was
used. This was problematic because when a dynamic task completed, it
might call afterAllCompleted which would stop the progress thread. There
also was a race condition because multiple threads calling initial could
theoretically have created a new progress thread which would cause a
resource leak.

To fix this, we modify the shared task progress so that the `stop()`
method is a no-op. This should prevent dynamic tasks from stopping the
progress thread. We also defer the creation of the task thread until
there is at least one active task. This prevents a thread from being
created in the shell.

The motivation for this change was that I found that sometimes there was
a leaked progress thread that would make the shell not really work for
me because the progress thread would overwrite the shell prompt. This
change fixes that behavior and I was able to validate with jstack that
there was consistently either one or zero task progress threads at a
time (zero in the shell, one when tasks were actually running).
2019-10-07 09:43:59 -07:00
Ethan Atkins 367461e586 Use logger rather than ConsoleOut for TaskTimings
When running sbt -Dtask.timings=true, the task timings get printed to
the console which can overwrite the shell prompt. When we use a logger,
the timing lines are correctly separated from the prompt lines.
2019-10-07 09:43:59 -07:00
Ethan Atkins ae84e162ad Limit scripted page numbers
The completions were generating page numbers that didn't make sense if
there were a small number of scripted tests. For example, if there were
only two tests defined, it would generate *1of3, *2of3, and *3of3
completions even though there weren't even three tests.
2019-10-06 14:07:30 -07:00
Ethan Atkins 9dff18d736 Fix scripted parser crash
Working locally, I was able to induce a crash in tab completions
because the group key did not exist in pairMap.
2019-10-06 14:07:30 -07:00
Ethan Atkins 5d8b94de55 Clean ivy resolution cache before regular clean
The way clean was implemented, it was running `clean`, `ivyModule` and
`streams` concurrently. This was problematic because clean could blow
away files needed by `ivyModule` and `streams`. To fix this, move the
cleanCachedResolutionCache into a separate task and run that before the
normal clean.

Should fix https://github.com/sbt/sbt/issues/5067.
2019-10-05 16:42:16 -07:00
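A rough build.sbt-level illustration of the ordering idea (not the actual sbt implementation); `cleanResolutionCache` and the deleted path are made up:

```
// build.sbt (illustrative)
val cleanResolutionCache = taskKey[Unit]("Deletes the cached-resolution cache")

cleanResolutionCache := IO.delete(target.value / "resolution-cache")
// Make the regular clean depend on the cache cleanup so it always runs
// first and the two never run concurrently.
clean := clean.dependsOn(cleanResolutionCache).value
```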
Eugene Yokota 460d1f5aa7 Rename to lintUnused for clarification
Address other review comments
2019-10-04 09:04:43 -04:00
Eugene Yokota 3a96ffa2cf include lintBuild as part of reload command 2019-10-03 23:40:21 -04:00
Eugene Yokota 765c451832 add lintBuild task to warn on unused settings
Fixes https://github.com/sbt/sbt/issues/3183

This implements an input task `lintBuild` that checks for unused settings/tasks.
Because most settings are intermediaries to other settings/tasks, they are included in the linting by default. The notable exceptions are settings used exclusively by a command. To opt out, you can either append the key to `Global / excludeLintKeys` or set its rank to invisible.

On the other hand, many tasks are leaves (called directly by the user), so task keys are excluded from linting by default. However, there are notable tasks that trip up users, so they are opted in using `Global / includeLintKeys`.
2019-10-03 23:40:21 -04:00
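A hedged build.sbt example of the opt-out mentioned above; `myCommandOption` is a made-up key used only for illustration:

```
// build.sbt
val myCommandOption = settingKey[String]("Used only by a custom command")

// Exclude the key from the unused-settings lint.
Global / excludeLintKeys += myCommandOption
```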
eugene yokota 22a6ff5d57
Merge pull request #5148 from eatkins/supershell-console
Clear supershell lines before suppressed task
2019-10-03 20:46:28 -04:00
Ethan Atkins cce8358115 Clear supershell lines before suppressed task
I noticed that when entering the console, I'd often be left with a
supershell line at the bottom of the screen that would eventually get
interlaced with my console commands. This can be eliminated by clearing
the supershell progress before evaluating the task if it is one of the
skip tasks.
2019-10-03 15:36:32 -07:00
Eugene Yokota 9cf3243407 Fixes "Could not create directory ...classes.bak"
Fixes https://github.com/sbt/sbt/issues/1673

There have been reports of an intermittent "Could not create directory" error related to "classes.bak". retronym identified that all configurations were using the same directory, which might be the cause of a race condition.
This addresses the issue by assigning a unique directory for each configuration.
2019-10-03 17:37:50 -04:00
eugene yokota f72990123f
Merge pull request #5112 from eed3si9n/wip/root
Throw error if you run sbt from /
2019-09-30 15:04:25 -04:00
Eugene Yokota 1cfe14a877 Ignore the build ref case 2019-09-30 02:18:11 -04:00
Eugene Yokota d1993bcabb use hedgehog.Result 2019-09-30 02:09:02 -04:00
Eugene Yokota f2de61c681 check for ambiguous project names 2019-09-30 01:56:03 -04:00
Eugene Yokota 073c89059e make URI longer to avoid conflict 2019-09-30 01:56:00 -04:00
Eugene Yokota ad1596c400 increase example count 2019-09-30 01:53:50 -04:00
Charles O'Farrell 67a3eca698 Use hedgehog in ParseKey, Delegates, and ParserSpec test 2019-09-30 01:52:57 -04:00
Ethan Atkins a12bccf4a3 Use java to implement XMain classloaders
These classloaders, which are created if sbt is launched with a legacy
launcher (or one that doesn't follow the current classloading hierarchy
convention), were implemented in scala, which meant that they were
not parallel capable. I fix that by moving the implementations to java.
I also move the static method that creates a MetaBuildLoader into the
java class.
2019-09-27 13:23:42 -07:00
Ethan Atkins 8fd10bfb5f Make all test and run classloaders parallel capable
A number of users were reporting issues with deadlocking when using
1.3.2: https://github.com/sbt/sbt/issues/5116. This seems to be because
most of the sbt created classloaders were not actually parallel capable.
In order for a classloader to be registered as parallel capable, ALL
of its parent classes in the class hierarchy (except Object) must be
registered as parallel capable:
https://docs.oracle.com/javase/8/docs/api/java/lang/ClassLoader.html#registerAsParallelCapable--.
If a classloader is not registered as parallel capable, then a global
lock will be used internally for classloading and this can lead to deadlock.

It is impossible to register a scala 2 classloader as parallel capable
so I ported all of the classloaders to java.

This commit updates the java-serialization scripted test. Prior to the
port, the new version of the test would more or less always deadlock.
After this change, I haven't been able to reproduce a deadlock.

This had no significant performance impact when I reran
https://github.com/eatkins/scala-build-watch-performance
2019-09-27 13:23:42 -07:00
Eugene Yokota 563bcb93aa Throw error if you run sbt from /
Fixes #1458

Running sbt from `/` results in sbt getting stuck trying to load the directories recursively, eventually erroring with a java.lang.OutOfMemoryError (after freezing for a long time) even on an Alpine container.

To prevent this, a check is added to see whether the absolute path is `/`.

```
/ $ sbt -Dsbt.version=1.4.0-SNAPSHOT
[error] java.lang.IllegalStateException: cannot run sbt from root directory without -Dsbt.rootdir=true; see sbt/sbt#1458
[error] Use 'last' for the full log.
```
2019-09-26 17:11:29 -04:00
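A hedged sketch of the guard described above; the object, helper name, and exact wiring are illustrative:

```
import java.io.File

object RootDirCheck {
  // Refuse to load when the working directory is the filesystem root,
  // unless the user explicitly opts in with -Dsbt.rootdir=true.
  def checkNotRootDirectory(baseDir: File): Unit = {
    val isRoot = baseDir.getCanonicalFile.getParentFile == null
    if (isRoot && !java.lang.Boolean.getBoolean("sbt.rootdir"))
      sys.error(
        "cannot run sbt from root directory without -Dsbt.rootdir=true; see sbt/sbt#1458"
      )
  }
}
```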
Ignasi Marimon-Clos 7a87a9e02e Indicate `r`etry is the option applied if users just press RETURN (#4748)
The message:

```
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?
```

is not explicit about retry being the option used when pressing return.
2019-09-25 21:33:19 -04:00
Ethan Atkins edd21b0ec8 Filter out dummy tasks from progress
I don't think that dummy tasks really make sense for task progress
because they are evaluated outside of the normal task evaluation. This
came up because I was seeing streams-manager in supershell which didn't
seem useful.
2019-09-24 11:56:42 -07:00
Ethan Atkins f0bec6d9e3 Limit TaskProgress threads
I noticed some flickering in super shell progress lines and realized
that it was because there were multiple progress threads running
concurrently. This is problematic because each thread has a completely
different state so if each thread has an active task, the display will
flicker between the two tasks. I think this is caused primarily by
dynamic tasks. At least the example where I was seeing it was caused by
a dynamic task.
2019-09-24 11:56:40 -07:00
eugene yokota ccecf1e412
Merge pull request #5096 from eatkins/background-classloading
Preload a number of classes in the background
2019-09-22 23:58:28 -04:00
Ethan Atkins bb0fd5c84c Fix checkbuild sources for projects with meta-meta-build
If a project had a meta-meta build (project/project), the build sources
in the project directory were ignored. This was because the projectGlobs
method did not correctly handle recursion. It inadvertently
discarded the accumulator globs and only returned the most recently
generated globs. This commit fixes that and adds a regression test to
the nio/reload scripted test.
2019-09-21 10:44:00 -07:00
Ethan Atkins d966c40917 Preload a number of classes in the background
I was looking into sbt startup time and, in profiling, was able to
identify a number of classloading bottlenecks. To speed up
initialization, we can preload those classes in the background. I saw
average speedups of roughly 0.75 seconds after this change. Also, the `time`
command would consistently report cpu system time very close to 400%
(I have 4 cores on my laptop); with 1.3.0 it would be more like 350%.
2019-09-19 09:41:50 -07:00
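An illustrative sketch of the background preloading idea; the object name, class list, and thread name are placeholders, not sbt's actual list of hot classes:

```
object Preloader {
  // Warm a known-hot set of classes on a daemon thread during startup.
  def preloadClasses(loader: ClassLoader, names: Seq[String]): Thread = {
    val runnable: Runnable = () =>
      names.foreach { name =>
        try Class.forName(name, true, loader)
        catch { case _: Throwable => () } // best effort; ignore failures
      }
    val t = new Thread(runnable, "sbt-class-preloader")
    t.setDaemon(true)
    t.start()
    t
  }
}
```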
eugene yokota 29ea7ee6fc
Merge pull request #5094 from eed3si9n/wip/meta_resolvers
add includePluginResolvers
2019-09-19 12:41:20 -04:00
Ethan Atkins c2dc22f7dc Make allowZombieClassLoaders public
For forward binary compatibility in the 1.3.x series, this key needed to
be private[sbt], but we can make it public in 1.4.x.
2019-09-18 19:27:27 -07:00
Ethan Atkins 231d7966d0 Add the ability to resurrect closed classloaders
There have been a number of complaints about the new classloader closing
behavior. It is too aggressive about closing classloaders after test and
run. This commit softens the behavior by allowing a classloader to be
resurrected after close by creating a new zombie classloader that has
the same urls as the original classloader. After this commit, we always
close the classloaders when we are done, but they can still leak
file descriptors if a zombie is created.

To configure the behavior, I add the allowZombieClassLoaders key. If it
is false (which is the default), we will warn but still allow them. If it
is true, then we silence the warning. In a later version of sbt, we can
change the semantics to be strict.

I verified after this change that I could add a shutdown hook in `run`
and it would be evaluated so long as I set `bgCopyClasspath := false`.
Otherwise the needed jars were deleted before the hooks could run.

Bonus: delete unused ResourceLoaderImpl class
2019-09-18 19:26:11 -07:00
Eugene Yokota 6664cbe2ae add includePluginResolvers
Fixes #5070

This adds a new setting called `includePluginResolvers` (default `false`).
When set to `true`, the project will include resolvers from the metabuild.

This allows the build user to declare a resolver in one place (`project/plugins.sbt`) that gets applied to both the metabuild and all the subprojects. The scenario comes up when someone distributes software via their own repository. Ref #4103
2019-09-17 23:04:10 -04:00
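A hedged build.sbt example of the new setting; the exact scope shown is an assumption:

```
// build.sbt (illustrative): subprojects also pick up the resolvers
// declared in project/plugins.sbt.
ThisBuild / includePluginResolvers := true
```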
Ethan Atkins 48947b8283 Monitor meta build sources
We want to recursively monitor the project meta build, but we also want
to avoid listing directories that don't exist. To compromise, I rework
the buildSourceFileInputs to add the nested project directories if they
exist. Because the fileInputs are a setting, this means that adding a
new project directory and *.sbt or *.scala will not immediately trigger
a rebuild, but in most common cases, it will. I added a scripted test
for this.
2019-09-16 18:39:53 -07:00
Ethan Atkins 26e60e9b6a Monitor project build sources
In sbt 1.3.0, we only monitor build sources in the root project
directory and the root project meta build directory. This commit adds
these inputs for each project.

Fixes https://github.com/sbt/sbt/issues/5061.
2019-09-16 14:41:29 -07:00
Ethan Atkins 5d2ee701e5 Improve formatting in Continuous 2019-09-16 11:22:41 -07:00
Ethan Atkins 711dfe34d0 Skip state in task progress
In the `watch` input task, which is an alternative to `~`, super shell
would show a solitary progress line for `state` in between builds.
2019-09-16 11:22:33 -07:00
Ethan Atkins aa09a48b71 Add consoleQuick to skipReportTasks
This was an oversight that caused consoleQuick to not work with
supershell. We should probably try to figure out a way to allow custom
tasks to blacklist themselves from super shell reporting.
2019-09-15 11:46:02 -07:00
Ethan Atkins d371faf90a Manage classloader in BackgroundJobService
In https://github.com/sbt/sbt/issues/5075 we realized that sbt 1.3.0
introduces a regression where it closes the classloader used to invoke
the main method for in-process run before all of its non-daemon threads
have terminated. To fix this and still close the classloader, I add a
method, runInBackgroundWithLoader, that provides the background job
service with an optional classloader that it can close after the job
completes.

This cleanly merges and works with 1.3.x as well.
2019-09-14 14:52:18 -07:00
Eugene Yokota 5d0793fece Scala 2.12.10 2019-09-11 23:02:50 -04:00
Yusuke Yamada ae9bba4b80 Set swoval.tmpdir with absolute path via globalBasePath (#5048)
Fixes https://github.com/sbt/sbt/issues/5047

When setting swoval.tmpdir via globalBase, we now resolve globalBase to an absolute path.

`com.swoval.runtime.NativeLoader.loadPackaged` uses `java.lang.System.load`.
It requires an absolute path, so we should set `swoval.tmpdir` to an absolute path.
2019-09-09 14:13:34 -04:00
Dmitrii Naumenko e28e451431 remove duplicates from allJars when creating ScalaInstance #5052 (#5053)
Fixes #5052
2019-09-07 16:26:13 -04:00
Ethan Atkins 955547e5bd Update deprecation warnings for api changes
During refactoring, these warnings got out of date. I also added
scaladoc to the watchTriggeredMessage key.

Ref: https://github.com/sbt/sbt/issues/5051.
2019-09-06 12:10:59 -07:00
Ethan Atkins 7c2a1c858b 1.3.0
-----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJdbz/aAAoJEDeJDimNmiv6Am8IAKv23f6BPIWZFeokzJLkUt8v
 DDLyzIwzE0hTFKInCNhGDCFtACFFgoD8/7t9D5gmLttQr4F9ke94DqWBEP3kbgan
 Qb4rR8uwglPUJmOhzBj2Qs3A8fAXdg3wm/6OlllQzBwCYNxFf3MhmJc3hF4vd+jO
 93JqwbY50entqha9z299+NpLPTKWtVC5R+1pAF+LwObjLOYqlxiGvAcl7jWx1qte
 VN+BabBYT4Hw43kJCutglHu8vttG68m+fqYGxjAmZXYBAbn0NPyE7GHmqkQ5baAz
 DUbc0vU2nY6tpUFNlNfu9PTPnRwHdSjSJTa9Ug7hw24z2oTg2tapNDXIpt6n6ZA=
 =onwH
 -----END PGP SIGNATURE-----

Merge tag 'v1.3.0' into 1.3.x-merge

1.3.0
2019-09-05 10:15:41 -07:00
Ethan Atkins 7c31e03d27 Improve supershell appender management
To avoid reliance on jvm global variables, we need to share the super
shell state with each of the console appenders that write to the console
out. We only set the progress state for the console appenders for the
screen. This prevents messages that are below the global logging level
from modifying the progress state without preventing them from being
written to other appenders.

The ability to set the ProgressState for each of the console appenders
is added in a companion util PR.

I verified that the test output of io/test was correctly written to the
streams after this change (there were no progress lines in the output).
2019-09-03 15:22:34 -07:00
eugene yokota 4f2ffe9b36
Merge pull request #5018 from eatkins/output-file-stamp-cache
Use managedFileStampCache for dependency classpath
2019-09-02 23:17:27 -04:00
eugene yokota 26293640c6
Merge pull request #5022 from eatkins/supershell-no-color
Allow supershell in no color mode
2019-09-02 23:15:52 -04:00
Ethan Atkins 19ead4144d
Merge pull request #5014 from eatkins/fail-on-exception
Display only valid pages in scripted completions
2019-09-02 11:41:21 -07:00
Ethan Atkins a02a58dcfa Allow supershell in no color mode
Disabling supershell when color mode is disabled is a sensible default
(especially for piped output). However, I think it should still be
possible to use supershell in no color mode.

This requires a util change that also enables supershell in no color
mode.
2019-09-02 11:26:24 -07:00
Ethan Atkins c525fa2551 Use managedFileStampCache for dependency classpath
It is redundant and slow to restamp all of the dependency classpath
files when they have likely already been stamped by a subproject.
For the classfiles of subprojects, we fill the managedFileStampCache
with the values returned by the zinc compile analysis product stamps.
This is why they are probably already in the managed cache and should be
up to date so long as zinc is working correctly.

I noticed that various outputFileStamps tasks were showing up in the
task timing report when I ran Test / definedTests in the main sbt project.
That task became about 400ms faster after this change.
2019-09-01 19:14:12 -07:00
Ethan Atkins 30ede13a09 Fix task timings
I noticed that the reports generated when using sbt.task.timings=true
made very little sense. They were displaying timings for tests that
couldn't possibly have been run. I tracked this down to the TaskTimings
being stored in the progressReport setting, which meant they were reused
across multiple task runs. After this change, the reports made a lot
more sense.
2019-09-01 19:13:43 -07:00
Ethan Atkins 49bcef029d Display only valid pages in scripted completions
The tab completions for scripted have long been broken. They display a
number of non-sensical pages like '*0of9' or '*1of0'. Some of the
multiparser changes seem to have caused these invalid completions.
2019-08-31 17:32:34 -07:00
eugene yokota ea778e9a5c
Merge pull request #4819 from dwijnand/cleanup-Load.loadTransitive
Cleanup Load.loadTransitive
2019-08-29 23:24:29 -04:00
xuwei-k dfe789d7c6 avoid deprecated /: and :\
use foldLeft and foldRight

https://github.com/scala/scala/blob/v2.13.0/src/library/scala/collection/IterableOnce.scala#L682-L686
2019-08-30 11:20:53 +09:00
Ethan Atkins ebf6d5aee6 Fix performance regression in test classloader
In 5eab9df0df, I updated the
outputFileStamps task to compute all of the stamps for a directory
recursively if an output file is a directory. Prior to that, it had only
computed the stamp for the directory itself. This caused a significant
performance regression in creating the test classloader because it was
computing the last modified time for all of the classfiles in the class path.
The test for 5000 source files in
https://github.com/eatkins/scala-build-watch-performance was running roughly
400ms slower due to this regression.
2019-08-29 11:52:29 -07:00
Eugene Yokota 75e609cba2 Deprecate HTTP resolvers (take 2)
Ref https://github.com/sbt/sbt/issues/4905

This is a companion PR to https://github.com/sbt/librarymanagement/pull/318.

This will print the following warnings:

```
sbt:hello> compile
[warn] insecure HTTP request is deprecated 'Artifact(jsoup, jar, jar, None, Vector(), Some(http://jsoup.org/packages/jsoup-1.9.1.jar), Map(), None, false)'; switch to HTTPS or opt-in using from(url(...), allowInsecureProtocol = true) on ModuleID or .withAllowInsecureProtocol(true) on Artifact
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/releases/'; switch to HTTPS or opt-in as ("Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/releases/'; switch to HTTPS or opt-in as ("Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/releases/'; switch to HTTPS or opt-in as ("Typesafe Releases" at "http://repo.typesafe.com/typesafe/releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'Patterns(ivyPatterns=Vector(), artifactPatterns=Vector(http://repo.typesafe.com/typesafe/releases/[organisation]/[module](_[scalaVersion])(_[sbtVersion])/[revision]/[artifact]-[revision](-[classifier]).[ext]), isMavenCompatible=true, descriptorOptional=false, skipConsistencyCheck=false)'; switch to HTTPS or opt-in as Resolver.url("Typesafe Ivy Releases", url(...)).withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'Patterns(ivyPatterns=Vector(), artifactPatterns=Vector(http://repo.typesafe.com/typesafe/releases/[organisation]/[module](_[scalaVersion])(_[sbtVersion])/[revision]/[artifact]-[revision](-[classifier]).[ext]), isMavenCompatible=true, descriptorOptional=false, skipConsistencyCheck=false)'; switch to HTTPS or opt-in as Resolver.url("Typesafe Ivy Releases", url(...)).withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'Patterns(ivyPatterns=Vector(), artifactPatterns=Vector(http://repo.typesafe.com/typesafe/releases/[organisation]/[module](_[scalaVersion])(_[sbtVersion])/[revision]/[artifact]-[revision](-[classifier]).[ext]), isMavenCompatible=true, descriptorOptional=false, skipConsistencyCheck=false)'; switch to HTTPS or opt-in as Resolver.url("Typesafe Ivy Releases", url(...)).withAllowInsecureProtocol(true)
```
2019-08-28 23:20:09 -04:00
eugene yokota c38ce111fe
Merge pull request #4999 from eatkins/clean-directories
Restore old cleanFiles behavior
2019-08-28 20:42:23 -04:00
Ethan Atkins b3320ce1ba Restore old cleanFiles behavior
I inadvertently changed the semantics of clean so that cleanFiles would
only delete the file if it was a regular file. In older versions of sbt,
if a file in cleanFiles was a directory, it would be recursively
deleted.
2019-08-28 16:41:25 -07:00
Eugene Yokota 4086fc1213 Take dependencyOverrides into account
This tracks https://github.com/coursier/sbt-coursier/pull/106
Fixes https://github.com/sbt/sbt/issues/4895
2019-08-28 17:43:42 -04:00
Ethan Atkins 6ec9edb733 Abort early in watch multi commands
During refactoring of Continuous, I inadvertently changed the semantics
of `~` so that all multi commands were run regardless of whether or not
an earlier command had failed. I fixed the issue and added a regression
test.
2019-08-27 10:38:55 -07:00
eugene yokota 110f54a044
Merge pull request #4987 from eatkins/fix-cross-overcompilation
Store compile file stamps for each scala version
2019-08-27 08:39:37 -04:00
Ethan Atkins 6ba3afbef7 Fix settings in ScriptMain
It was reported in https://github.com/sbt/sbt/issues/4973 that the
scalaVersion setting was not being correctly set in a script running
with ScriptMain using 1.3.0-RC4. Using git bisect, I found that the
issue was introduced in
73cfd7c8bd.
That commit manipulates the classloaders passed in by the launcher, but
only for the xMain entry point. I found that the script ran correctly if
I updated the classloader for ScriptedMain as well.

After these changes, the example script in #4973 correctly prints 2.13.0
for the scala version with a locally published sbt.

Bonus: rename the xMainImpl object to xMain. It was private[sbt] anyway.
2019-08-26 21:15:43 -07:00
Ethan Atkins bd4d04d131 Store compile file stamps for each scala version
https://github.com/sbt/sbt/issues/4986 reported that +compile would
always recompile everything in the project even when the sources hadn't
changed. This was because the dependency classpath was changing between
calls to compile, which caused the external hooks cache introduced in
32a6d0d5d7 to invalidate the scala
library. To fix this, I cache the file stamps on a per scala version
basis. I added a scripted test that checks that there is no
recompilation in two consecutive calls to `+compile` in a multi scala
version build. It failed prior to these changes.
2019-08-26 14:47:57 -07:00
eugene yokota 49afe01287
Merge pull request #4982 from eed3si9n/wip/gc
avoid force gc during load
2019-08-23 13:59:32 -04:00
Eugene Yokota fcd9dbf3dd avoid force gc during load
This initializes the lastGcCheck to the current time so it won't force GC in the first 10 minutes, avoiding unnecessary GC during load.
2019-08-23 02:16:11 -04:00
Ethan Atkins 76ec00dc4b
Merge branch 'develop' into startup-perf 2019-08-22 21:50:21 -07:00
Ethan Atkins 1f9ea70518 Avoid intermediate collection creation during load
The allKeys method was making many intermediate collections. For akka,
avoiding them reduced startup time by about 400ms on average.
2019-08-22 20:35:12 -07:00
Ethan Atkins 3fc8817974 Add parallelism to KeyIndex.aggregate
I looked for serial bottlenecks in sbt project loading and discovered
that KeyIndex.aggregate was relatively easily parallelizable. Before
this change, it took about 1 second to run KeyIndex.aggregate in the akka
project on my computer. After this change, it took 250ms. Given that I
have 4 logical cores, the speedup is roughly linear.
2019-08-22 20:35:11 -07:00
Ethan Atkins b6f05b91f6 Stop injecting file management settings for io tasks
It turns out that injecting the keys necessary for incremental tasks
causes a significant startup penalty for many larger projects. For
example, akka starts up about 3 seconds faster if we do not inject these
settings for the tasks returning `File` or `Seq[File]`. Given that all
of these apis use java.nio.file anyway, it makes sense to not backport
them to older tasks.
2019-08-22 20:34:37 -07:00
Ethan Atkins 5eab9df0df Fix clean performance
The clean task got a lot slower in 1.3.0
(https://github.com/sbt/sbt/issues/4972). The reason for this was that
sbt 1.3.0 generated many custom clean tasks for any tasks that returned
`File` or `Seq[File]`. Each of these tasks was tagged with
Tags.Clean which meant that only one of them could run at a time. As a
result, it took a long time to evaluate all of the custom tasks, even if
they were no-ops. In the akka project, a no-op clean was taking 35
seconds which is simply unacceptable. After this change, a no-op clean
takes less than a second in akka (a full clean only takes about 6
seconds after running test:compile)

To fix this, I stopped aggregating the clean task across configs and
projects. Because I removed the aggregation, I needed to manually
implement clean in the `Compile` and `Test` configurations to make
`Compile / clean` and `Test / clean` work correctly.
2019-08-22 13:01:56 -07:00
Eugene Yokota ecb0375de2 reimplement stack trace suppression
Fixes #4964

Together with https://github.com/sbt/util/pull/211, this brings back stack trace suppression for custom tasks by default.
Debug-level logs are available in `last`, and this prints a message informing the user of the fact. BLUE on a dark background is difficult to read, so I am changing the color highlight to MAGENTA.
2019-08-20 13:56:52 -04:00
Eugene Yokota 777cc39fcf Fix inter-project dependencies
Tracking https://github.com/coursier/sbt-coursier/pull/101
2019-08-15 15:40:43 -04:00
Eugene Yokota 46e92949ed use Relaxed reconciliation strategy by default
Fixes #4720
Ref https://github.com/coursier/coursier/pull/1293
Ref https://github.com/coursier/sbt-coursier/pull/112
2019-08-15 15:40:43 -04:00
Ethan Atkins 9c7acdb713 Force invalidate dependency changes
After adding the automatic lookup to external hooks for missing binary
jars, the scripted test dependency-management/invalidate-internal
started failing. This was because the previous analysis contained a jar
dependency that still existed on disk but was no longer a part of the
dependency classpath. Fundamentally the problem is that the zinc
compile analysis is not tightly coupled with the sbt build state.

To fix this, we can cache the dependency classpath file stamps in the
same way that we cache the input file stamps in external hooks and
manually diff them at the sbt level. We then force updates regardless of
the difference between the zinc state and the sbt state.
2019-08-15 11:31:24 -07:00
Ethan Atkins 32a6d0d5d7 Update ExternalHooks to look up changed binaries
It was reported that in community builds, sometimes there was
spurious over-compilation due to invalidation of the scala library jar
(https://github.com/sbt/sbt/issues/4948). The reason for this was that
the external hooks prefill the managed cache with all of the time
stamps for the project dependencies but were not looking up any jars that
weren't in the cache. I suspect I did this because I didn't realize that
zinc also includes its own classpath in the binaries which is not
a part of the dependencyClasspath. The fix is to just add the jar to the
cache if it doesn't already exist by switching to getOrElseUpdate from
get.

I followed the steps in #4948 and published a version of sbt locally
with this change and the spurious re-builds stopped.
2019-08-15 10:34:05 -07:00
Ethan Atkins 7c483909af Invalidate unmanagedFileStampCache in allOutputFiles
In the code formatting use case, the formatting task may modify the
source files in place. If the formatting task uses the nio
inputFileStamps, then it would fill the in-memory cache of source paths
to file stamps. This would cause compile to see the pre-formatted
stamps. To fix this, we can invalidate the file cache entries for the
outputs of a task. This will cause the side-effect of some extra io
because the hashes may be computed three times: once for the format
inputs, once for the format outputs and once for the compile inputs. I
think most users would understand that adding auto-formatting would
potentially slow down compilation.

To really prove this out, I implemented a poor man's scalafmt plugin in
a scripted test. It is fully incremental. Even in the case when some
files cannot be formatted it will update all of the files that can be
formatted and not re-format them until they change.
2019-08-09 13:18:29 -07:00
Ethan Atkins 6d482eb166 Set scope for fileTreeView
It makes sense to add a scope for the `fileTreeView` key where it is
available. At the moment, there is only one `fileTreeView`
implementation but, if that changes down the road, these tasks will
automatically inherit the correct view.
2019-08-09 12:18:22 -07:00
Ethan Atkins 6700d5f77a Add nio path filter settings
It makes sense for the new glob/nio based apis that we provide first
class support for filtering the results. Because it isn't possible to
scope a task within a task within a task, i.e.
`compile / fileInputs / includePathFilter`, I had to add four new
filter settings of type `PathFilter`:

fileInputIncludeFilter :== AllPassFilter.toNio,
fileInputExcludeFilter :== DirectoryFilter.toNio || HiddenFileFilter,
fileOutputIncludeFilter :== AllPassFilter.toNio,
fileOutputExcludeFilter :== NothingFilter.toNio,

Before I was effectively hard-coding the filter: RegularFileFilter &&
!HiddenFileFilter in the inputFileStamps and allInputFiles tasks. These
remain the defaults, as seen in the fileInputExcludeFilter definition
above, but can be overridden by the user.

It makes sense to exclude directories and hidden files for the input
files, but it doesn't necessarily make sense to apply any output filters
by default. For symmetry, it makes sense to have them, but they are
unlikely to be used often.

Apart from adding and defining the default values for these keys, the
only other changes I had to make was to remove the hard-coded filters
from the allInputFiles and inputFileStamps tasks and also add the
filtering to the allOutputFiles task. Because we don't automatically
calculate the FileAttributes for the output files, I added logic for
bypassing the path filter application if the PathFilter is effectively
AllPass, which is the case for the default values because:
AllPassFilter.toNio == AllPass
NothingFilter.toNio == NoPass
AllPass && !NoPass == AllPass && AllPass == AllPass
2019-08-09 12:18:22 -07:00
Ethan Atkins 8ce2578060 Introduce FileChanges
Prior to this commit, change tracking in sbt 1.3.0 was done via the
changed(Input|Output)Files tasks which were tasks returning
Option[ChangedFiles]. The ChangedFiles case class was defined in io as

case class ChangedFiles(created: Seq[Path], deleted: Seq[Path], updated: Seq[Path])

When no changes were found, or if there were no previous stamps, the
changed(Input|Output)Files tasks returned None. This made it impossible
to tell whether nothing had changed or if it was the first time.
Moreover, the api was awkward as it required pattern matching or folding
the result into a default value.

To address these limitations, I introduce the FileChanges class. It can
be generated regardless of whether or not previous file stamps were
available. The changes contains all of the created, deleted, modified
and unmodified files so that the user can directly call these methods
without having to pattern match.
2019-08-09 12:18:22 -07:00
Ethan Atkins 8e9efbeaac Add extension methods for input and output files
It is tedious to write (foo / allInputFiles).value so I added simple
extension method macros that expand `foo.inputFiles` to
(foo / allInputFiles).value and `foo.outputFiles` to
`(foo / allOutputFiles).value`.
2019-08-09 12:18:22 -07:00
Ethan Atkins f126206231 Fix incremental task evaluation semantics
While writing documentation for the new file management/incremental
task evaluation features, I realized that incremental task evaluation
did not have the correct semantics. The problem was that calls to
`.previous` are not scoped within the current task. By this, I mean that,
say, there are tasks foo and bar and that the definition of bar looks like

bar := {
    val current = foo.value
    foo.previous match {
        case Some(v) if v == current => // value hasn't changed
        case _ => process(current)
    }
}

The problem is that foo.previous is stored in
effectively (foo / streams).value.cacheDirectory / "previous". This
means that it is completely decoupled from foo. Now, suppose that the
user runs something like:
> set foo := 1
> bar // processes the value 1
> set foo := 2
> foo
> bar // does not process the new value 2 because foo was called, which updates the previous value

This is not an unrealistic scenario and is, in fact, common if the
incremental task evaluation is changed across multiple processing steps.
For example, in the make-clone scripted test, the linkLib task processes
the outputs of the compileLib task. If compileLib is invoked separately
from linkLib, then when we next call linkLib, it might not do anything
even if there was recompilation of objects because the objects hadn't
changed since the last time we called compileLib.

To fix this, I generalized the previous cache so that it can be keyed on
two tasks, one is the task whose value is being stored (foo in the
example above) and the other is the task in which the stored task value
is retrieved (bar in the example above). When the two tasks are the
same, the behavior is the same as before.

Currently the previous value for foo might be stored somewhere like:

base_directory/target/streams/_global/_global/foo/previous

Now, if foo is stored with respect to bar, it might be stored in

base_directory/target/streams/_global/_global/bar/previous-dependencies/_global/_global/foo/previous

By storing the files this way, it is easy to remove all of the previous
values for the dependencies of a task.

In addition to changing how the files are stored on disk, we have to store
the references in memory differently. A given task can now have multiple
previous references (if, say, two tasks bar and baz both depend on the
previous value). When we complete the results, we first have to collect
all of the successful tasks. Then for each successful task, we find all
of its references. For each of the references, we only complete the
value if the scope in which the task value is used is successful.

In the actual implementation in Previous.scala, there are a number of places
where we have to cast to ScopedKey[Task[Any]]. This is due to
limitations of ScopedKey and Task being type invariant. These casts are
all safe because we never try to get the value of anything, we only use
the portion of the apis of these types that are independent of the value
type. Structural typing where ScopedKey[Task[_]] gets inferred to
ScopedKey[Task[x]] forSome x is a big part of why we have problems with
type inference.
2019-08-09 12:18:22 -07:00
Ethan Atkins d18cb83b3c Switch from Vector to List in Settings
Using List instead of Vector makes the code a bit more readable. We
don't need indexed access into the data structure, so it's unlikely that
Vector was providing any performance benefit.
2019-08-09 12:18:22 -07:00
Ethan Atkins fdeb6be667 Add scaladoc to FileStamp
As part of a documentation push, I noticed that these were undocumented
and that there were some public apis in FileStamp that I intended to be
private[sbt].
2019-08-09 12:18:22 -07:00
Ethan Atkins fb15065438 Move implicit FileStamp JsonFormats into object
I realized it was probably not ideal to have these implicit JsonFormats
defined directly in the FileStamp object because they might
inadvertently be brought into scope with a wildcard import.
2019-08-09 12:18:22 -07:00
Ethan Atkins 9cd88070ae Fix typo in allOutputFiles description 2019-08-09 12:18:22 -07:00
Ethan Atkins a7715e90a4 Rename cacheStoreFactory attribute
It references a CacheStoreFactoryFactory so it should have been named
accordingly.
2019-08-09 12:18:22 -07:00
eugene yokota 6865f32eae
Merge pull request #4934 from eed3si9n/wip/publishTo
Revert "don't require publishTo specified if publishArtifact is `false`"
2019-08-09 09:24:31 -04:00
Ethan Atkins 14f7177619 Fix implicit numeric widening warning 2019-08-08 16:06:13 -07:00
Ethan Atkins d86afb5745 Revert "Merge pull request #4930 from eatkins/2.12.9"
This reverts commit 053b72005d, reversing
changes made to d6b8e0388c.
2019-08-08 11:09:29 -07:00
Eugene Yokota ec6cf15f12 Revert "don't require publishTo specified if publishArtifact is `false`"
This reverts commit 4668faff7c.
Ref https://github.com/sbt/sbt/pull/3760
Ref https://github.com/sbt/sbt/pull/4931
2019-08-08 00:36:36 -04:00
eugene yokota 8365e4b189
Merge pull request #4926 from eatkins/auto-reload-fix
Improve auto-reload
2019-08-05 19:04:50 -04:00
Ethan Atkins b26ce819ca Bump default scala version to 2.12.9
I automatically generated this with:

git grep "2.12.8" | \
  cut -d ':' -f1 | uniq | xargs perl -p -i -e "s/2.12.8/2.12.9/"
2019-08-05 13:12:28 -07:00
Ethan Atkins aa62386f4d Improve auto-reload
I noticed that sometimes if I changed a build source and then ran reload
in the shell, I'd still see a warning about build sources having
changed. We can eliminate this behavior by resetting the
hasCheckedMetaBuild state attribute to false and skipping the
checkBuildSources step if the current command is 'reload'. We also now
skip the build source check if the command is exit or reboot.
2019-08-05 07:42:24 -07:00
Ethan Atkins 4061dabf4d Override zinc compile analysis for source changes
Zinc records all of the compile source file hashes when compilation
completes. This is problematic because it's possible that a source file
was changed during compilation. From the user perspective, this may mean
that their source change will not be recompiled even if a build is
triggered by the change.

To overcome this, I add logic in the sbt provided external hooks to
override the zinc analysis stamps. This is done by writing the source
file stamps to the previous cache after compilation completes. This
allows us to see the source differences from sbt's perspective, rather
than zinc's perspective. We then merge the combined differences in the
actual implementation of ExternalHooks. In some cases this may result in
over-compilation, but generally over-compilation is preferred to
under-compilation. Most of the time, the results should be the same.

The scripted test that I added modifies a file during compilation by
invoking a macro. It then effectively asserts that the file is
recompiled during the next test run by validating the compilation result
in the test. The test fails on the latest develop hash.
2019-07-29 20:13:41 -07:00
eugene yokota 5b0d0122af
Merge pull request #4906 from eatkins/turbo-resource-loader
Turbo resource loader
2019-07-29 16:21:17 -04:00
Ethan Atkins 6686e833b1 Sort dependency jars
I realized that it would be a good idea to sort the dependencyJars so
that they appear in the same order that they do in the fullClasspath.
2019-07-29 12:30:42 -07:00
Ethan Atkins 621789eeb2 Remove resource layer for AllDependencyJars strategy
Changed resources were causing the dependency layer to be invalidated on
resource changes in turbo mode because the resource layer sat between
the scala library layer and the dependency layer. This commit reworks the layers for the
AllDependencyJars strategy so that the top layer is able to load _all_
of the resources during a test run.

The resource layer was added to address the problem that dependencies
may need to be able to load resources from the project classpath but
wouldn't be able to do so if the dependencies were in a separate layer
from the rest of the classpath. The resource layer was a classloader
that could load any resource on the full classpath but couldn't load any
classes. When I added the resource layer, I was thinking that when
resources changed, the resource class loader needed to be invalidated.
Resources, however, are different from classes in that the same
ClassLoader can find the same resources in a different place because
getResource and getResourceAsStream just return locations but do not
actually do any loading.

Taking advantage of this, I add a proxy classloader for finding
resource locations to ReverseLookupClassLoader. We can reset the
classpath of the resource loader in
ReverseLookupClassLoaderHolder.checkout. This allows us to see the new
versions of the resources without invalidating the dependency layer.
2019-07-29 12:30:42 -07:00
Ethan Atkins f5c8b8aad5 Don't use exception for reloading
I completely forgot about the StateTransform class which allows a task
to modify the state through its return value.
2019-07-26 15:03:32 -07:00
Ethan Atkins f7f6c3edfe Use '_' instead of '$' in path names
The use of '$' in the path names for streams is a pain because, since
'$' is a special character in the shell, it makes it impossible to
directly copy and paste the paths. If we make this change, some builds
will be left with vestigial directories with $global and $build in them
until they run clean. It also would break any scripts that manually
delete these paths. That doesn't seem like a common use case, but it's
worth mentioning.
2019-07-25 14:07:44 -07:00
Eugene Yokota ef05e07cc5 Fixes credential strictness
Fixes #4882

In #4855 I inadvertently introduced `credential` strictness. This relaxes it again by ignoring the credential file if it doesn't exist.
2019-07-18 18:30:30 -04:00
Ethan Atkins a3ac4c76a6 Bump scalafmt
Intellij had issues resolving 2.0.0-RCX so it will be nice to be using
the latest.
2019-07-18 12:40:21 -07:00
kenji yoshida 534fbfffbb
fix OutOfMemoryError message
s/werecommend/we recommend/
2019-07-18 12:05:46 +09:00
Ethan Atkins 6c4e23f77c Only persist file stamps in turbo mode
The use of the persistent file stamp cache between watch runs didn't
seem to cause any issues, but there was some chance for inconsistency
between the file stamp cache and the file system so it makes sense to
put it behind the turbo flag.

After changing the default, the watch/on-change scripted test started
failing. It turns out that the reason is that the file stamp cache
managed by the watch process was not pre-filled by task evaluation. For
this reason, the first time a source file was modified, it was treated
as a creation regardless of whether or not it actually was.

To fix this, I add logic to pre-fill the watch file stamp cache if we
are _not_ persisting the file stamps between runs.

I ran a before-and-after comparison with the scala build performance benchmark
tool; setting the watchPersistFileStamps key to true reduced the median
run time by about 200ms in the non-turbo case.
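
For example, a build that wants the persistent stamp cache without enabling
turbo could opt in directly (a hedged sketch; the ThisBuild scoping is an
assumption, the key and value come from this message):

ThisBuild / watchPersistFileStamps := true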
2019-07-15 17:59:14 -07:00
Ethan Atkins 5e374a8e7d Move onEvent callback definition
It makes the file more readable to me to have this definition below the
definition of the FileEventMonitor.
2019-07-15 14:21:14 -07:00
Ethan Atkins 272508596a Use one observer for all aggregated watch tasks
There was a bug where sometimes a source file change would not trigger a
new build if the change occurred during a build. Based on the logs, it
seemed to be because a number of redundant events were generated for the
same path and they triggered the anti-entropy constraint of the file
event monitor.

To fix this, I consolidated a number of observers of the global file
tree repository into a single observer. This way, I am able to ensure
that only one event is generated per file event.

I also reworked the onEvent callback to only stamp the file once. It was
previously stamping the modified source file for all of the aggregated
tasks. In the sbt project, running `~compile` meant that we were stamping
a source file 22 times whenever the file changed.

This actually uncovered a zinc issue though as well. Zinc computes and
writes the hash of the source file to the analysis file after
compilation has completed. If a source file is modified during
compilation, then the new hash is written to the analysis file even when
the compilation may have been made against the previous version of the
file. Zinc will then refuse to re-compile that file until another change
is made.

I manually verified that in the sbt project if I ran `~compile` before
this change and modified a file during compilation, then no event was
triggered (there was a log message about the event being dropped due to
the anti-entropy constraint though). After this change, if I changed a
file during compilation, it seemed to always trigger, but due to the
zinc bug, it didn't always re-compile.
2019-07-15 14:21:14 -07:00
eugene yokota a6da4b5b90
Merge pull request #4862 from eatkins/fix-warnings
Fix warnings
2019-07-15 12:57:16 -04:00
Ethan Atkins 055d7cd626 Remove unneeded cast
This was causing an "abstract type pattern T is unchecked since it is
eliminated by erasure" warning. The cast was unneeded because store.get[T] returns
Option[(T, Long)]. I'm surprised that the compiler complained about
this.
2019-07-13 15:35:27 -07:00
Ethan Atkins a071ce8224 Handle multi-command with reload correctly
@olegych reported that sbt would silently swallow the 'compile' command
in the multi-command, 'run;compile;reload'. I tracked this down to the
build source check. When the build has
Global / onChangedBuildSource := ReloadOnSourceChanges, the check build
sources command returns a new state with "reload" prefixed. To actually
perform the reload, I returned this modified state with the prefixed
reload command.

There were two problems with this:
1) In the auto-reload case, the current command was not run after the
   reload
2) If the multi-command contained reload, the auto-reload check would
   have a false positive which triggered the bug in (1)

To fix this, I clear out the remaining commands before I run the check
command. That way, we know that if the remaining commands has a reload,
then it is an auto-reload. We then prefix the state with both the reload
and the current command.

I updated the scripted test for auto-reload to handle multi commands
containing reload.
2019-07-13 11:18:56 -07:00
Ethan Atkins 263f00f3b2 Rework watch options
In this commit, I both restore some sbt 1.2.8 behavior and enhance the
api for setting keyboard shortcuts in watch. I change the default start
message to just show the watch count, the tasks that are being monitored
and, on a new line, the instructions to terminate the watch or show more
options.

Here's what it looks like:
[info] 1. Monitoring source files for spark/compile...
[info]    Press <enter> to interrupt or '?' for more options.
?
[info] Options:
[info]   <enter>  : interrupt (exits sbt in batch mode)
[info]   <ctrl-d> : interrupt (exits sbt in batch mode)
[info]   'r'      : re-run the command
[info]   's'      : return to shell
[info]   'q'      : quit sbt
[info]   '?'      : print options

I also made it so that the new options can be added (and old options
removed) with the watchInputOptions key. For example, to add an option
to reload the build with the key 'l', you could add
ThisBuild / watchInputOptions += Watch.InputOption('l', "reload", Watch.Reload)
to your global build.sbt.

After adding that to my global ~/sbt/1.0/global.sbt file, the output of
'?' became:
[info] Options:
[info]   <ctrl-d> : interrupt (exits sbt in batch mode)
[info]   <enter>  : interrupt (exits sbt in batch mode)
[info]   '?'      : print options
[info]   'l'      : reload
[info]   'q'      : quit sbt
[info]   'r'      : re-run the command
[info]   's'      : return to shell
2019-07-12 14:10:51 -07:00
eugene yokota 680659210f
Merge pull request #4848 from eatkins/background-copy-hash
Use last modified instead of hash
2019-07-12 15:24:09 -04:00
eugene yokota 3301bce3b8
Merge pull request #4850 from eatkins/in-memory-cache-store
Add support for in memory cache store
2019-07-12 15:23:40 -04:00
eugene yokota 2af6ad5713
Merge pull request #4858 from eed3si9n/wip/thisbuild
scope the reference of useSuperShell to ThisBuild
2019-07-12 15:21:57 -04:00
Eugene Yokota 00f7d1fab5 scope the reference of useSuperShell to ThisBuild
Fixes #4800
2019-07-12 11:34:39 -04:00
Eugene Yokota 9755234a16 address review 2019-07-12 10:52:02 -04:00
Ethan Atkins 0172d118af Add parser for file size
At the suggestion of @eed3si9n, instead of specifying the file cache
size in bytes, we now specify it in a formatted string. For example,
instead of specifying 128 megabytes in bytes (134217728), we can specify
it with the string "128M".
2019-07-11 17:45:16 -07:00
Ethan Atkins cad89d17a9 Add support for in memory cache store
It can be quite slow to read and parse a large json file. Often, we are
reading and writing the same file over and over even though it isn't
actually changing. This is particularly noticeable with the
UpdateReport*. To speed this up, I introduce a global cache that can be
used to read values from a CacheStore. When using the cache, I've seen
the time for the update task drop from about 200ms to about 1ms. This
ends up being a 400ms time savings for test because update is called for
both Compile / compile and Test / compile.

The way that this works is that I add a new abstraction
CacheStoreFactoryFactory, which is the most enterprise java thing I've
ever written. We store a CacheStoreFactoryFactory in the sbt State.
When we make Streams for the task, we make the Stream's
cacheStoreFactory field using the CacheStoreFactoryFactory. The
generated CacheStoreFactory may or may not refer to a global cache.

The CacheStoreFactoryFactory may produce CacheStoreFactory instances
that delegate to a Caffeine cache with a max size parameter that is
specified in bytes by the fileCacheSize setting (which can also be set
with -Dsbt.file.cache.size). The size of the cache entry is estimated by
the size of the contents on disk. Since we are generally storing things
in the cache that are serialized as json, I figure that this should be a
reasonable estimate. I set the default max cache size to 128MB, which is
plenty of space for the previous cache entries for most projects. If the
size is set to 0, the CacheStoreFactoryFactory generates a regular
DirectoryStoreFactory.
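
For illustration, a hedged sketch of tuning this (the Global scoping and the
value format are assumptions; only the fileCacheSize key and the
sbt.file.cache.size property are named here):

Global / fileCacheSize := "128M"

or, equivalently at launch time, sbt -Dsbt.file.cache.size=128M.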

To ensure that the cache accurately reflects the disk state of the
previous cache (or other cache's using a CacheStore), the Caffeine cache
stores the last modified time of the file whose contents it should
represent. If there is a discrepancy in the last modified times (which
would happen if, say, clean has been run), then the value is read from
disk even if the value hasn't changed.
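
A minimal, self-contained sketch of that read-through idea (hypothetical names,
not sbt's actual implementation): each entry carries the file's last modified
time, and a mismatch forces a re-read from disk.

import java.nio.file.{ Files, Path }
import com.github.benmanes.caffeine.cache.{ Caffeine, Weigher }

object FileBackedCache {
  final case class Entry(lastModified: Long, bytes: Array[Byte])

  private val cache = Caffeine
    .newBuilder()
    .maximumWeight(128L * 1024 * 1024) // rough budget, estimated in bytes on disk
    .weigher[Path, Entry](new Weigher[Path, Entry] {
      def weigh(k: Path, e: Entry): Int = e.bytes.length
    })
    .build[Path, Entry]()

  // Return cached contents only if the file's last modified time still matches.
  def read(path: Path): Array[Byte] = {
    val lastModified = Files.getLastModifiedTime(path).toMillis
    val cached = cache.getIfPresent(path)
    if (cached != null && cached.lastModified == lastModified) cached.bytes
    else {
      val fresh = Entry(lastModified, Files.readAllBytes(path))
      cache.put(path, fresh)
      fresh.bytes
    }
  }
}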

* With the following build.sbt file, it takes roughly 200ms to read and
parse the update report on my computer:

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.3"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.1"

This is because spark-sql has an enormous number of dependencies and the
update report ends up being 3MB.
2019-07-11 17:45:16 -07:00
Eugene Yokota c31e0b6b55 add allCredentials to emulate credential registration
Fixes #4802

For the Ivy integration, sbt uses the credential task in a peculiar way. 9fa25de84c/main/src/main/scala/sbt/Defaults.scala (L2271-L2275)

This lets the build user put the `credential` task in various places, like the metabuild or the root project, but they all act as if they were scoped globally. This PR adds an `allCredentials` task to emulate that behavior and pass credentials into lm-coursier.
2019-07-11 14:15:33 -04:00
Ethan Atkins 9bb88cd342 Rename typesafeRelease Resolver
This Resolver had the same name as the typesafe ivy resolver specified
in the launcher boot.properties. It was creating a number of verbose
warnings about having multiple resolvers with the same name. I noticed
that the ivy pattern is slightly different for the boot resolver with
this name. It didn't seem to be causing any problems to have both
resolvers.

Fixes #4839
2019-07-08 19:22:14 -07:00
Ethan Atkins a368bf7026 Use last modified instead of hash
I noticed that for a simple spark project that evaluating the test task
was faster than running run when both tasks evaluated the same code
block. I tracked this down to the BackgroundJobService.copyClasspath
method. This method was hashing the jar contents of all of the files in
the build. On my computer, this took 600ms (for context, the total run
time of the `run` task was around 1.2 seconds, which included about
150ms of scala compiling and 350ms of time in the main method). If
instead we use the last modified time it drops down to 5-10ms. As
predicted, the total runtime of `run` in this project dropped down to
600ms which was on par with `test`.

I am not sure why a hash was used rather than last modified in the first place,
so I reworked things in such a way that, by default, sbt will use a hash
but if turbo mode is on, it will use the last modified instead. We can
revisit the default later.
2019-07-08 17:15:24 -07:00
Ethan Atkins 60b1ac7ac4 Improve multi parser performance
The multi parser had very poor performance if there were many commands.
Evaluating the expansion of something like "compile;" * 30 could cause
sbt to hang indefinitely. I believe this was due to excessive
backtracking caused by the optional `(parser <~ semi.?).?` part of the
parser in the non-leading semicolon case.

I also reworked the implementation so that the multi command now has a
name. This allows us to partition the commands into multi and non-multi
commands more easily in State while still having multi in the command
list. With this change, builds and plugins can exclude the multi parser
if they wish.

Using the partitioned parsers, I removed the high priority/low priority
distinction. Instead, I made it so that the multi command will actually
check if the first command is a named command, like '~'. If it is, it
will pass the raw command argument with the named command stripped out
into the parser for the named command. If that is parseable, then we
directly apply the effect. Otherwise we prefix each multi command to the
state.
2019-06-25 13:45:09 -07:00
Eugene Yokota 29d3894b27 add back typesafe-ivy-releases resolver
Fixes #4698
Fixes #4827
2019-06-22 09:45:15 -04:00
eugene yokota eb877757ba
Merge pull request #4811 from eatkins/strict-multi-parser
Strict multi parser
2019-06-21 16:19:24 -04:00
eugene yokota ca8381e057
Merge pull request #4814 from eatkins/session-save
Clear meta build state on session save
2019-06-20 00:08:35 -04:00
eugene yokota 6fec4d350f
Merge pull request #4781 from eatkins/improve-reload-warnings
Improve reload warnings
2019-06-19 19:22:06 -04:00
Ethan Atkins 494755f0f7 Clear meta build state on session save
The `session save` command has the side effect of modifying a "*.sbt"
file so we don't want to warn about changes or automatically reload when
we return to the shell. Setting the hasCheckedMetaBuild attribute key to
false is sufficient to prevent this.

Ref: https://github.com/sbt/sbt/issues/4813
2019-06-19 16:15:00 -07:00
Ethan Atkins 30a16d1e10 Update Continuous to directly use multi parser
It didn't really make sense for Continuous to use the other command
parser and then reparse the results. I was able to slightly simplify
things by using the multi parser directly.
2019-06-19 16:12:45 -07:00
Ethan Atkins ccfc3d7bc7 Validate commands in multiparser
It was reported in https://github.com/sbt/sbt/issues/4808 that compared
to 1.2.8, sbt 1.3.0-RC2 will truncate the command args of an input task
that contains semicolons. This is actually intentional, but not
completely robust. For sbt >= 1.3.0, we are making ';' syntactically
meaningful. This means that it always represents a command separator
_unless_ it is inside of a quoted string. To enforce this, the multi parser
will effectively split the input on ';', it will then validate that each
command that it extracted is valid. If not, it throws an exception. If
the input is not a multi command, then parsing fails with a normal
failure.

I removed the multi command from the state's defined commands and reworked
State.combinedParser to explicitly first try multi parsing and fall back
to the regular combined parser if it is a regular command. If the multi
parser throws an uncaught exception, parsing fails even if the regular
parser could have successfully parsed the command. The reason is so that
we do not ever allow the user to evaluate, say 'run a;b'. Otherwise the
behavior would be inconsistent when the user runs 'compile; run a;b'.
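
For illustration, how these semantics play out at the shell (a hypothetical
session, not verbatim sbt output):

compile; run a   (split on ';' into two commands: compile, then `run a`)
run "a;b"        (the ';' sits inside a quoted string, so it is not a separator)
run a;b          (rejected: the input splits on ';' and 'b' is not a valid command)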
2019-06-19 16:12:45 -07:00
Eugene Yokota 7b10b372f8 Fix updateClassifiers
Fixes #4816

Copied sbt-lm-coursier hacks from 9173406bb3/modules/sbt-lm-coursier/src/main/scala/sbt/hack/Foo.scala.
2019-06-19 12:20:26 -04:00
Dale Wijnand 4fb0706930
Cleanup Load.loadTransitive
The largest win is creating the helper, inner, "load" method.
2019-06-17 11:11:52 +01:00
Ethan Atkins 968e83380a Don't use Set[Incomplete]
It's very expensive to compute the hash code of a deeply nested
Incomplete. To prevent a loop, we only want to check for object equality
which we can do with IdentityHashMap
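
For illustration only (the Node type and traversal below are hypothetical, not
the Incomplete API): tracking visited nodes by reference identity with an
IdentityHashMap avoids both deep hashCode computation and infinite loops on
cyclic graphs.

import java.util.{ IdentityHashMap => JIdentityHashMap }

object IdentityTraversal {
  final class Node(var children: List[Node] = Nil)

  // Visit each node exactly once, comparing nodes by reference rather than by value.
  def foreachNode(root: Node)(f: Node => Unit): Unit = {
    val seen = new JIdentityHashMap[Node, java.lang.Boolean]
    def loop(n: Node): Unit =
      if (seen.put(n, java.lang.Boolean.TRUE) == null) { // null means "not seen before"
        f(n)
        n.children.foreach(loop)
      }
    loop(root)
  }
}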
2019-06-13 18:12:54 -07:00
eugene yokota cf9a5f283f
Merge pull request #4805 from eatkins/watch-start-message
Use project watchStartMessage for multi commands
2019-06-14 01:08:38 +02:00
Ethan Atkins 875a25c929 Use project watchStartMessage for multi commands
It didn't make sense to aggregate the watch start message if it was
defined in multiple sources so we previously just fell back to the
default message if multiple commands were being run. This, however,
meant that if you ran, say, ~compile in an aggregate project, it wasn't
possible to customize the start message. There was a message in the
sbt gitter channel where someone found the new message too verbose and
wanted to print something shorter and I realized that this was an
unfortunate restriction. Instead of giving up, we can just use the
project's watchStartMessage as a default. If the watchStartMessage
setting is unset for some reason, we can fall back to the default.

I validated this change manually in the swoval project, which has an
aggregate root project, by running
set ThisBuild / watchStartMessage := { (_, _, _) => None }
and indeed nothing was printed after each task evaluation in '~compile'.
2019-06-11 16:24:22 -07:00
Ethan Atkins 9c821b7b13 Make dependency layer threadsafe
We discovered that turbo mode did not work in the sbt settings project.
I tracked this down to the dependency classloader bundle not being
thread safe.
2019-06-11 15:52:02 -07:00
Ethan Atkins 54d79e664d Remove err.printStackTrace in MainLoop
I added this for debugging and did not mean to leave it in. It causes
massive walls of text to be printed sometimes when compilation fails.
2019-06-09 15:59:08 -07:00
Ethan Atkins a38d2669e1 Add system property for closing classloaders
I realized that some builds may crash if we automatically close the
classloaders. While I do think that is a good thing in general that we
are closing the loaders by default, we shuold have an option for
supressing this behavior.

I made all of the custom classloaders that we define for test and run
check this property before calling the super.close method.
2019-06-08 17:07:39 -07:00
Ethan Atkins a6bc7b1c76 Fix typo 2019-06-08 17:06:34 -07:00
Ethan Atkins 5e0b9a0c2f Add welcome banner to sbt shell
We want to notify users about the new features available in sbt 1.3.0 to
increase visibility. Turbo mode especially can benefit many builds, but
we have opted to leave it off by default for now.

The banner will be displayed the first time sbt enters the shell command
on each sbt run. The banner can be disabled globally with the sbt.banner
system property. It can be disabled on a per-sbt-version basis by running the
skipWelcomeBanner command. That command touches a file in the ~/.sbt/1.0
directory to make it persistent across all projects.
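
For example (the boolean value is an assumption; the property and command names
come from this message):

sbt -Dsbt.banner=false   (never show the banner)
> skipWelcomeBanner      (from the shell: skip the banner for this sbt version)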
2019-06-08 14:09:39 -07:00
Ethan Atkins cd0461e301 Improve reload warnings
I decided creations/deletions/updates were a bit too technical rather
than descriptive. It also wasn't really correct to say Meta build
sources because the meta build is the build for the build. Instead, I
dropped Meta from the sentence. I also made the instructions when
changed sources are detected more active. I left them capitalized since
they are instructions rather than warnings.

Apply these changes by running `reload`.
Automatically reload the build when source changes are detected by setting `Global / onChangedBuildSource := ReloadOnSourceChanges`.
Disable this warning by setting `Global / onChangedBuildSource := IgnoreSourceChanges`.

Also indentation was wrong for the printed files when multiple files had
changed because the mkString middle argument was "  \n" rather than "\n  ".
2019-06-08 13:55:19 -07:00
Eugene Yokota 6878fb6cdb turbo mode
This creates a performance mode that enables experimental or advanced features that might require some debugging by the build user when they don't work.

Initially we are putting the layered ClassLoader (`ClassLoaderLayeringStrategy.AllLibraryJars`) behind this flag.
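
For example, to opt in from a build (a hedged sketch; the `turbo` key name and
the ThisBuild scoping are assumptions, since this message only calls it a flag):

ThisBuild / turbo := true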
2019-06-08 21:37:50 +02:00
Eugene Yokota 006722f81c centralize system properties 2019-06-08 19:53:59 +02:00
Ethan Atkins 286e52793c Overhaul dependency layer for java reflection
My first attempt, cc8c66c66d, at making
java reflection work with a layered classloader with a dependency jar
layer was a failure. It would generally work ok the first time the user
ran test, but was likely to fail on a second run.

There were a number of problems with the strategy:
1) It relied on the thread's context class loader to determine where to
   attempt the reverse lookup.
2) It is not possible to ever reload classes loaded by a classloader.
   Consider the classloading hierarchy a <- b, where
   the arrow implies that a is the parent of b. I incorrectly thought
   that a's loadClass method would be called every time a class loaded
   by a made a call to Class.forName(name: String). This turns out to
   not be the case. As a result, the second time the dependency layer
   was used, where now the hierarchy is a <- c, that same Class.forName
   call could return a Class from b which causes a nasty crash.

It isn't possible to work around the limitation in (2), so the only option
that allows both caching and java reflection across layers is to
cache the dependency layer but invalidate it when cross-layer reflection
occurs. This turns out to be straightforward to implement. The
performance looks very similar to the ScalaLibrary strategy when java
reflection is used, which makes sense because the scala library and
scala reflect layers are still reused when the dependency layer is
invalidated.

I also stopped passing around the resource map to all of the layers.
Resource loading is hierarchical and the resource layer comes above the
dependency layer in the stack so there was no need for the bottom layers
to also be RawResource loaders.
2019-06-06 19:17:59 -07:00
Ethan Atkins 76f3bb271e Close test classloaders correctly
While testing some classloader changes, I realized that we didn't always
close the test classloaders because they didn't necessarily extend
URLClassLoader, but instead implemented AutoCloseable.

Bonus: don't set the context classloader. It turns out that the test
framework does that anyway inside of trl.run so it was pointless to do
in Defaults.scala.
2019-06-06 19:17:59 -07:00
Ethan Atkins cbf1793f51 Add final modifier to some ClassLoaders 2019-06-06 19:17:59 -07:00
Ethan Atkins 66d3d8d504 Use named loader for ScalaLibrary
I'd already made a ScalaReflect loader and it makes sense to have a
ScalaLibraryClassLoader as well.
2019-06-06 19:17:59 -07:00
Akhtyam Sakaev 3ea9f8a71b
fix typo 2019-06-06 08:05:16 +03:00
eugene yokota 331271b052
Merge pull request #4761 from dwijnand/document-BuildStructure
Document helper functions in BuildStructure
2019-06-03 23:51:26 -04:00
eugene yokota e31fd3f082
Merge pull request #4765 from eatkins/watch-docs
Watch docs
2019-06-03 22:36:46 -04:00
Ethan Atkins 1ab666daf4 Change signature of pre watch methods
It makes the method parameters more clear if we pass in the ProjectRef
rather than the project name. We also don't lose information.
2019-06-03 17:41:08 -07:00
Ethan Atkins 70899e5cad Switch private[sbt] status of Reload objects
The Reload exception that I added in the sbt package really wasn't
intended to be public. It's only meant to be used by
checkMetaBuildSources, which the users shouldn't override. I put it in
the top package though because I wanted it to be next to FullReload. I
also am not sure why the Reload object in Watch was private[sbt], but
while writing documentation, I realized that users couldn't access it.
2019-06-03 17:35:01 -07:00
Ethan Atkins 7948408368 Simplify watch callbacks
While writing documentation for the watch subsystem, I realized that
it's awkward to configure watch to clear the screen before task
evaluation. To make this easier, I added a setting watchBeforeCommand
which is an arbitrary function that will run before the watch process
evaluates the command(s).

I also added helper functions for adding clear screen functionality.

I also realized that we weren't using the watchOnEnter or
watchOnExit callbacks anywhere. I had added these to support setting up
some state before watch starts and cleaning it up before it exits for
plugin authors. It makes sense to remove that functionality for 1.3.0
and only if a need presents itself re-add it in a later version of sbt.

I also made a few apis private[sbt] that I'm not sure about. Writing
documentation made me realize that some of these are redundant and/or
not ready for general consumption.
2019-06-03 17:35:01 -07:00
Ethan Atkins 4f66b81e03 Fix parameters in watchTriggeredMessage 2019-06-03 17:35:01 -07:00
Ethan Atkins 2905373ff7 Use logger instead of directly printing to System.err 2019-06-03 17:35:01 -07:00
Ethan Atkins 67cbd6c50e Move watch default settings
I had previously set some of the watch settings at the global level and
some at the project level. While writing documentation for the new watch
subsystem, I realized that the defaults should be set globally so that
they can be overridden at the ThisBuild level.

I also moved watchTriggers to sbt.nio.Keys. It was an oversight that it
wasn't moved there in a5cefd45be.
2019-06-03 17:35:01 -07:00
Ethan Atkins 1b8f0ed20f Don't use filtered classpath
The classpath filter for test and run was added in #661 to ensure that
the classloaders were correctly isolated. That is no longer necessary
with the new layering strategies that are more precise about which jars
are exposed at each level. Using the filtered classloader was causing
the reflection used by spark to fail when using java 11.
2019-06-03 17:26:14 -07:00
Ethan Atkins 233307b696 Fix classpath filter in run
We were incorrectly building the dependency layer in the run task using
the raw jars from dependencyClasspath rather than the actual classpath
jars (which may be different if bgCopyClasspath is true -- which it is
by default). This was preventing spark from working with AllLibraryJars
because it would load its classes and resources from the coursier cache
but the classpath filter would reject the resources because they came
from the coursier cache instead of the classpath.
2019-06-03 17:26:14 -07:00
Ethan Atkins 625470cdd5 Make LayeredClassLoaders parallel capable
The docs for ClassLoader,
https://docs.oracle.com/javase/8/docs/api/java/lang/ClassLoader.html
say that all non-hierarchical custom classloaders should be registered
as parallel capable. The docs also suggest that custom classloaders
should try to only override findClass, so I reworked LayeredClassLoader to
only override findClass. I also added locking to the class loading to
make it safe for concurrent loading.

All of the custom classloaders besides LayeredClassLoader either
subclass URLClassLoader or LayeredClassLoader but don't override
loadClass. Because those two classloaders are parallel capable, the
subclasses should be as well. It isn't possible to make classloaders
that are implemented in scala parallel capable because scala 2 doesn't
support jvm static blocks (dotty does support this with an annotation).
To work around this, I re-worked some of the classloaders so that they
are either directly implemented in java or I subclassed a scala
implementation class in java.
2019-06-03 17:26:14 -07:00
Ethan Atkins cc8c66c66d Support java reflection with layered classloaders
Java reflection did not work with layered classloaders if a dependency
attempted to load a class that was below the dependency layer in the
layered classloader hierarchy. The underlying problem was (in general) a
call to Class.forName somewhere. If the classloader parameter is not
specified, then Class.forName locates the ClassLoader for the caller
using reflection. It ultimately delegates to that ClassLoader's
loadClass method. With the previous LayeredClassLoader class, there was
no way for that classloader to access a URL that was below it in the
class loading hierarchy. I reworked LayeredClassLoader so that if it
fails to load the class, it will check the Thread's context classloader
and see if there are other LayeredClassLoader instances below it. If so,
it will then check if any of those classloaders would be able to load
the class by using findResource. If the descendant loader can load the
class, then we manually load it with findClass.
2019-06-03 14:21:59 -07:00
Ethan Atkins a3cde88db4 Fix runtime scala-reflect layer
For best caching performance, we want to use the scala-reflect.jar that
is found in the scala instance. Also, in the runtime configuration,
caching didn't work correctly because we filtered the scala reflect
library from the dependency jars. We really only wanted to filter out
the library jars.

It also was problematic to use a LayeredClassLoader for the scala
reflect layer because in a subsequent commit I add the capability for a
layered classloader to load classes from its descendant loaders. This
caused problems when the scala-reflect layer was a LayeredClassLoader.
Instead, I add the ScalaReflectClassLoader class for better reporting.
2019-06-03 14:21:59 -07:00
Dale Wijnand 4e89e8ace5
Document helper functions in BuildStructure
Also define LoadedBuildUnit#projects & BuildUnit#thisRootProject, &
cleanup AttributeKey & BasicAttributeMap
2019-06-02 23:28:53 +01:00
Ethan Atkins df5f9ae3cb Support commands in continuous
I had previously not thought there was much reason to support commands in
continuous builds. This was primarily because there were a number of
questions regarding semantics. Commands cannot have fileInputs
specifically assigned to them because they don't have an associated
scope. They can also arbitrarily modify state, so what is the expected
behavior when running ~foo, where foo is a command that, for example, replaces
itself? I settled on the following semantics:

1) Commands run in a continuous build cannot modify the sbt execution
   state which is to say that the state that is returned by continuous
   is the same that was passed in (unless a reload occurred or we exited
   the build with an exception)

2) Any global watchTriggers or fileInputs apply to a watched command.
   They automatically inherit any fileInputs that are queried when
   running tasks in a command. So, for example, ~+compile does what
   you'd expect.

The implementation is fairly straightforward. If we can successfully
parse a command, but we cannot parse a scopedKey from it, we assign it a
private ScopedKey. When computing the watch settings for that key, we
will select the global settings through delegation. This is how it picks
up the global watchTriggers.

To run the command, I had to rework the task evaluation portion because
a command may return a state with additional commands to run. The cross
build command works this way. We recursively run all of the commands
starting with the original until we run out of commands to run. As part
of this work, I was able to remove the three argument version of
Command.processCommand that I'd previously added to support my old
approach to evaluating commands. This was a nice bonus.

I added scripted tests that check that global watchTriggers are picked
up and that commands that delegate to a command that uses fileInputs
automatically pick up those inputs during the watch. I also added a test
that approximates the ~+compile use case and ensures that the failure
semantics are what we expect and that the task runs for all defined
scala versions.
2019-06-01 20:10:26 -07:00
Ethan Atkins b5ff4bda94 Optimize imports 2019-06-01 18:35:11 -07:00
Ethan Atkins cc52f88030
Merge branch 'develop' into play-run 2019-06-01 18:04:11 -07:00
Ethan Atkins 6f7a824478 Reduce idle cpu usage
I noticed that sbt 1.3.0 was using more cpu when idling (either at the
shell or while waiting for file events) than 1.2.8. This was because I'd
reduced a number of timeouts to 2 milliseconds which was causing a
thread to keep waking up every 2 milliseconds to poll a queue. I thought
that this was cheaper than it actually is; it drove the cpu utilization
to O(10%) of a cpu on my mac.

To address this, I consolidated a number of queues into a single queue
in CommandExchange and Continuous. In the CommandExchange case, I
reworked CommandChannel to have a register method that passes in a Queue
of CommandChannels. Whenever it appends an exec, it adds itself to the
queue. CommandExchange can then poll that queue directly and poll the
returned CommandChannel for the actual exec. Since the main thread is
blocking on this queue, it does not need to frequently wake up and can
just poll more or less indefinitely until a message is received. This
also reduces average latency compared to older versions of sbt since
messages will be processed almost as soon as they are received.

The continuous case is slightly more complicated because we are polling
from two sources, stdin and FileEventMonitor. In my ideal world, I'd
have a reactive api for both of those sources and they would just write
events to a shared queue that we could block on. That is nontrivial to
implement, so instead I consolidated the FileEventMonitor instances into
a single FileEventMonitor. Since there is now only one FileEventMonitor
queue, we can block on that queue for 30 milliseconds and then poll
stdin. This reduces cpu utilization to O(2%) on my machine while still
having reasonably low latency for key input events (the latency of file
events should be close to zero since we are usually polling the
FileEventMonitor queue when waiting).

I actually had a TODO about the FileEventMonitor change that this
resolves.
2019-05-31 09:34:04 -07:00
Ethan Atkins c748a5583e Restore some legacy watch behavior for play
I noticed that ~run in the Play plugin relied on the presence of the
ContinuousEventMonitor key. Rather than completely break that feature, I
re-added the ContinuousEventMonitor attribute to the state in a
continuous build. That being said, the play team does need to update
their plugin because reading from the console no longer works in 1.3.0 so the
user has to Ctrl-C to exit the watch. I think the best way for them to
fix this is to override the '~' command in their plugin and if the input
is 'run', then they do their custom thing, otherwise they delegate to
the default '~' command.
2019-05-31 09:33:34 -07:00
Ethan Atkins 4158716b9a
Merge branch 'develop' into meta-reload-check 2019-05-30 21:59:02 -07:00
Ethan Atkins d48f41dcf8
Merge branch 'develop' into layer-fixes 2019-05-30 21:08:59 -07:00
Ethan Atkins 5faf78af96 Add scala reflect layer
Not caching scala reflect is extremely painful if the build uses
scalatest. It adds O(1 second) to my watch performance benchmarks. It
actually made sbt 1.3.0 much slower than 0.13.17.
2019-05-30 17:30:14 -07:00
Ethan Atkins cf73bbbafc Fix ScalaLibrary again
I was benchmarking sbt with turbo mode on and found that tests weren't
running. This was because we were inadvertently excluding all of the
dependency jars from the dynamic classpath. I have no idea why the
scripted tests didn't catch this.

The scalatest scripted test didn't catch this because 'test' just
automatically succeeds if no test frameworks are found. To guard against
regression, I had to ensure that 'test' failed for every strategy if a
bad test file was present.
2019-05-30 17:30:14 -07:00
Ethan Atkins cda4713f89 Make AllLibraryJars a case object
This improves the toString and also allows it to be used in a pattern
match.
2019-05-30 17:30:14 -07:00
Ethan Atkins 525bf8fa3d Move meta build source check into Command.processCommand
We want to check the build sources before any command runs, not just
tasks. To achieve this, I moved the logic for checking for build source
changes to Command.processCommand. Also, @smarter had noticed
that if a user modified a build file and then ran reload, a warning
would be displayed about changed build sources even though they had just
run reload. This was because running reload didn't update the previous
cache for checkBuildSources / fileInputStamps. I fixed that bug by
running 'checkBuildSources / changedInputFiles' instead of
'checkBuildSources' when the user runs reload.

I verified that after this change:

- If I changed a build file and ran 'show version' a warning was printed
  before it displayed the version. If I also set
  Global / onChangedBuildSource := ReloadOnSourceChanges, it
  automatically reloaded before displaying the version.
- If I changed a build source and ran 'reload', followed by
  'show version', no warnings were ever displayed.

As an implementation detail, I had to add the Aggregation.suppressShow
attribute key. We set this key to true before checking the build
sources. Without this, log.success is called whenever we check the build
sources which is both confusing and noisy.
2019-05-30 15:55:21 -07:00
Ethan Atkins 3de3cc15cf Don't use global execution context
Because we are sharing the scala library classloader with test and run,
it is possible that sbt will be competing for resources with the
test and run tasks when trying to get threads from the global execution
context. Also, by using our own execution context, we can shut it down
when sbt exits.

The motivation for this change is that I was looking at the active jvm
threads of an idle sbt process and noticed a bunch of global execution
context threads.
2019-05-30 14:30:31 -07:00
Ethan Atkins 9d8296fe49 Fix indefinite recompilation
@olegych reported in #4721 that play projects would get stuck in a
strange loop where modifying any source file would cause that source
file to always be recompiled every time a build was triggered regardless
of whether or not it was modified. This was because the play project
sets custom watchSources (using the legacy api) that overlap with the
fileInputs.

There were two parts to this fix:
1) When detecting an event, find if any of the dynamic inputs that cover
   the glob use a hash. If so, these are file inputs so we want to
   update the hash for the path, not the last modified time.
2) Only write hashes into the persistent file stamp cache. Computing the
   last modified time is much cheaper than the hash so it makes sense to
   avoid ever caching last modified times.

I wrote a scripted test that fails if Continuous writes a last modified
time into the file stamp cache instead of a hash. I also verified
manually that a sample play project no longer exhibits the weird
recompilation behavior.

Fixes #4721
2019-05-30 13:02:11 -07:00
Ethan Atkins b9fed2abfb Remove warning about unneeded named variable 2019-05-30 13:02:11 -07:00
Ethan Atkins 90d0c54caa Set watchTriggeredMessage by default
This allows the user to do, for example,
watchTriggeredMessage := { (count, path, commands) =>
  println(Watched.clearScreen)
  watchTriggeredMessage.value(count, path, commands)
}

Also, there was a bug where I inadvertently used the
deprecated watch message setting where I meant to use the deprecated
trigger message setting.

Fixes #4696
2019-05-30 13:02:11 -07:00
eugene yokota bbe0e62a0f
Merge pull request #4747 from eed3si9n/wip/shutdown
Create serviceTempDir lazily
2019-05-30 09:58:42 -04:00
eugene yokota bcb0294ed8
Merge pull request #4744 from eed3si9n/wip/bumpcoursier
lm-coursier-shaded 1.1.0-M14-3
2019-05-30 09:58:21 -04:00
eugene yokota 4ef0eb609f
Merge pull request #4743 from eed3si9n/wip/java
Fix Java version parsing
2019-05-30 09:58:00 -04:00
Eugene Yokota 4b10c486c4 Create serviceTempDir lazily
Ref #4741
2019-05-30 00:58:14 -04:00
Eugene Yokota a5a8c63732 Move Coursier related tasks into sbt.coursierint
Ref #4713
2019-05-30 00:24:55 -04:00
Eugene Yokota 5b96bcae06 Move to dependencyResolutionTask to Defaults 2019-05-30 00:01:00 -04:00
Eugene Yokota 81d7edb6c6 lm-coursier-shaded 1.1.0-M14-3
Fixes #4738
2019-05-29 23:48:05 -04:00
Eugene Yokota 41eca47e66 Fix Java version parsing
Fixes #4731
2019-05-29 23:11:20 -04:00
Ethan Atkins dcccd17fd2 Improve managed file management in watch
@olegych reported in https://github.com/sbt/sbt/issues/4722 that
sometimes, even when a build was triggered during watch, no
recompilation would occur. The cause of this was that we never
invalidated the file stamp cache for managed sources or output files.
The optimization of persisting the source file stamps between task
evaluations in a continuous build only really makes sense for unmanaged
sources. We make the implicit assumption that unmanaged sources are
infrequently updated and generally one at a time. That assumption does
not hold for managed sources or output files.

To fix this, I split the fileStampCache into two caches: one for
unmanaged sources and one for everything else. We only persist the
unmanagedFileStampCache during continuous builds. The
managedFileStampCache gets invalidated every time.

I added a scripted test that simulates changing a generated source file.
Prior to this change, the test would fail because the file stamp was not
invalidated for the new source file content.

Fixes #4722
2019-05-29 17:28:04 -07:00
eugene yokota 95822e0eb0
Merge pull request #4739 from dwijnand/no-TupleSyntax
Drop the remaining TupleSyntax usage
2019-05-29 13:43:21 -04:00
eugene yokota 1426a6b48e
Merge pull request #4713 from smarter/public-coursier
Make coursier-related tasks public
2019-05-29 10:51:48 -04:00
Dale Wijnand e86a63ca1b
Drop the remaining TupleSyntax usage 2019-05-29 14:43:18 +01:00
Ethan Atkins e481ddb1fc
Merge branch 'develop' into watch-alias 2019-05-28 21:06:31 -07:00
Ethan Atkins f7dd228808 Allow aliases to be used in continuous builds
@japgolly reported in #4695 that aliased commands don't work in watch
anymore. This was because we were extracting the task from the raw
command rather than the aliased command. Since the alias wasn't a valid
key, we weren't able to parse the scoped key. The fix is to find the
aliased value and try that if we fail to parse the original command.

Fixes #4695
2019-05-28 16:49:23 -07:00
Guillaume Martres 2aab767962 Replace usages of deprecated ScalaInstance#libraryJar 2019-05-29 00:10:47 +02:00
eugene yokota aefd0969b1
Merge pull request #4732 from smarter/fix-updatesbtclass
updateSbtClassifiers: use the correct scalaOrganization
2019-05-28 17:52:12 -04:00
Ethan Atkins 5d08d82f3a Use libraryJars rather than libraryJar in ClassLoaders
Dotty uses multiple library jars. It also simplifies the code to use the
libraryJars method.
2019-05-28 11:22:34 -07:00
Guillaume Martres 7a84808f74 updateSbtClassifiers: use the correct scalaOrganization 2019-05-28 19:58:05 +02:00
Ethan Atkins df51281d90 Remove dead test 2019-05-28 10:39:08 -07:00
Ethan Atkins 7b870d647a Add missing header 2019-05-28 10:36:44 -07:00
Ethan Atkins d78d8d650c Don't automatically die on OOM: metaspace
In an interactive session, it's possible for task evaluation to trigger
an OOM: Metaspace but for sbt to continue working after that failure.
Moreover, the metaspace oom can be caused by using a dependency
classloader layer. If the user changes the layering strategy, they may
be able to re-run their command successfully.
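
For example, a build hitting this could switch to a less aggressive strategy (a
hedged sketch; the classLoaderLayeringStrategy key name is an assumption, and
the ScalaLibrary strategy is named elsewhere in this log):

Test / classLoaderLayeringStrategy := ClassLoaderLayeringStrategy.ScalaLibrary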
2019-05-28 09:53:36 -07:00
Ethan Atkins 92992a8243 Use file stamps for resource loader
Instead of caching based on the classpath of the resources, we should
instead cache based on the actual resource files. This commit achieves
that by adding the classpathFiles key which just transforms the
attributed classpath to a Seq[Path]. This implicitly generates the
outputFileStamps key for classpathFiles which we can use to read the
stamps (the file stamp entries for the classpath should get filled by
the compile task so this shouldn't actually cause any additional io).
2019-05-28 09:53:36 -07:00
Ethan Atkins b6cdd60cf8 Simplify layering strategies
The ShareRuntimeDependenciesLayerWithTestDependencies strategy doesn't
really work with resources, so it makes sense to get rid of it. Without
the share layer, there is no point in having separate
RuntimeDependencies and TestDependencies layers so I consolidated them
to Dependencies.

If we really care about binary/source compatibility for the 1.3.0-RCx
series, I can restore the traits and objects and set them private[sbt].
I think it was kind of a bug that they existed at all given the issue
with resources so it makes sense to just remove them.
2019-05-28 09:53:36 -07:00
Ethan Atkins 468334f142 Don't use anonymous URLClassLoaders
This makes debugging a bit easier in the eclipse memory analyzer tool
since we get a more specific classloader type than URLClassLoader and by
giving the class a meaningful name, we can tell from where it
originated.
2019-05-28 09:53:35 -07:00
Ethan Atkins af9f665649 Use new ClassLoaderCache for layered classloaders
This commit removes the ClassLoaderCache that I'd added for the purpose
of caching layered classloaders. Instead, we will use the state's global
ClassLoaderCache. This is better both because it centralizes the
classloader caching and because the new ClassLoaderCache will evict
unused classloaders when the jvm is under memory pressure.

I also add a new layer for the resources that goes between the scala
library layer and the dependency layer. This should help in cases where
users depend on libraries that require access to resources, e.g.
logback.xml.
2019-05-28 09:53:35 -07:00
Ethan Atkins a128ddf4a6 Route all ScalaInstance creations through the cache
It was possible to make new classloaders for the scala library and other
jars with each new scala instance. To avoid this, I audited all of the
places within sbt where we make a ScalaInstance and ensured that we
instantiate them in such a way that the classloaders are retrieved
through the state's ClassLoaderCache.

After this change, I found from a heap dump that it was possible to run
test in a project that uses scala 2.12.8 and have only ONE classloader
for the scala library present in the heap dump. With older versions,
there would be up to 3 or 4 in most heap dumps.
2019-05-28 09:53:35 -07:00
Ethan Atkins 03bf539ce9 Add new ClassLoaderCache implementation
This commit adds a new ClassLoaderCache that builds on the
ClassLoaderCache that is present in zinc (and can be used to build an
instance of the zinc ClassLoaderCache to preserve compatibility). It
differs from the zinc classloader cache in that it does not use direct
SoftReferences to classloaders. Instead, we create a wrapper loader
that can't load any classes and just delegates to its parent. This
allows us to add a thread that reaps the soft reference to the wrapper
loader. Crucially, we add a custom SoftReference class that has a strong
reference to the underlying classloader. This allows us to call close on
the strong reference.
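
A minimal sketch of that shape (hypothetical names, not the actual sbt
ClassLoaderCache): the cache hands out a wrapper loader, the reference to the
wrapper is soft, and the reference object itself keeps a strong handle on the
real loader so a reaper can close it once the wrapper is collected.

import java.lang.ref.{ ReferenceQueue, SoftReference }
import java.net.URLClassLoader

// The wrapper defines no classes of its own; it only delegates to the real loader.
final class WrapperLoader(underlying: URLClassLoader) extends ClassLoader(underlying)

// Soft reference to the wrapper that also keeps a *strong* reference to the
// underlying loader, so it can still be closed after the wrapper is collected.
final class CloseableSoftReference(
    wrapper: WrapperLoader,
    val underlying: URLClassLoader,
    queue: ReferenceQueue[WrapperLoader]
) extends SoftReference[WrapperLoader](wrapper, queue)

object LoaderReaper {
  // A reaper thread would call this in a loop: block until a wrapper is collected,
  // then close the underlying loader that the reference kept alive.
  def reapOne(queue: ReferenceQueue[WrapperLoader]): Unit =
    queue.remove() match {
      case ref: CloseableSoftReference => ref.underlying.close()
      case _                           => ()
    }
}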

The one issue with this approach is that we can't
rescue the jvm from crashing with an OOM: metaspace because the jvm
doesn't give us a chance to close and dereference the underlying
classloaders before it crashes. It WILL collect classloaders under
normal memory pressure, just not metaspace pressure. To fix this, I
check if the MaxMetaspaceSize is set via an MXBean and, if it is, we
fill the cache with regular soft references. We are going to change the
bash script to not set -XX:MaxMetaspaceSize by default so most builds
should probably end up correctly closing the classloaders after this
change. But we shouldn't break existing builds that set MaxMetaspaceSize
but don't crash.

As part of this commit, I audited all of the places where we were
instantiating ClassLoaderCache instances and instead pass in the
state's ClassLoaderCache instance. This reduces the total number of
classloaders created.
2019-05-28 09:53:35 -07:00
Ethan Atkins 20f6d22439 Fix memory leak
Using a lazy val causes the log manager to hold onto a reference to the
state. These would accumulate with each task evaluation. I found that
in the beanpuree project, if I ran compile 10 times in a row,
the heap usage was 40MB lower after this change.
2019-05-28 09:53:35 -07:00
Ethan Atkins f8d729cd3b Don't set fileOutputs at the compile config level
This was problematic because it had no dependency on the compile task
which meant that any other task in the config would pick up those
fileOutputs which did not make sense. I noticed this because
(resources / outputFileStamps).value would include class files.
2019-05-28 09:53:35 -07:00
Ethan Atkins df628d4f87 Improve legacy launcher
To minimize classloading and improve consistency between sbt instances launched
with the latest launcher and those launched with old launchers, I overhauled the code
that replaces the app configuration and meta build classloader at
startup. The goals of this change for legacy launchers were:

1) Do not ever load the scala-library.jar from the app provider class loader.
2) Close the class loaders that are below the topLoader in the class
   loading hierarchy

For the new launcher, we simply want to avoid modifying the loader at
all.

I added the SbtParserInit class so that it was more straightforward to
preload the global instance using reflection. We now use reflection to
instantiate an SbtParserInit instance for both the legacy and new
launcher cases to simplify the logic.

After this change, the legacy loader still uses somewhat more metaspace
than the new loader, but the difference seems to be O(10MB), which
should only impact projects that were close their MaxMetaspaceSize to
begin with.

I verified using javap that none of the code in this class uses the
scala standard library which should help metaspace since we don't load
much of the scala standard library until we enter xMainImpl.run.
2019-05-28 09:53:35 -07:00
Ethan Atkins 22d5fbad13 Move external hooks definition
I verified manually that ExternalHooks were still applied by default but
that I could set the incOptions in the Test and Compile configs so that
they weren't used.

Fixes #4624
2019-05-26 19:15:55 -07:00
Eugene Yokota 5936bd1ff2 Revert Defaults.collectFiles
Fixes #4681
Ref #4649
2019-05-25 14:10:14 -04:00
Eugene Yokota 0ff97e4561 scalaCompilerBridgeDependencyResolution
Fixes #4712

This adds a specialized DependencyResolution instance called `scalaCompilerBridgeDependencyResolution` to download the compiler bridge. It has its own list of resolvers set by `scalaCompilerBridgeResolvers`. For backward compatibility, it will append `externalResolvers.value` as well.
2019-05-24 01:02:44 -04:00
Guillaume Martres 186693368d Make coursier-related tasks public
This follows the discussion in
https://github.com/coursier/coursier/issues/1181.
2019-05-21 19:10:38 +02:00
eugene yokota ecce47e5b7
Merge pull request #4703 from eed3si9n/wip/scalatest
Fixes layer 4 missing scala-reflect
2019-05-20 14:26:42 -04:00
Eugene Yokota 9d3b626567 Fix ScalaTest issue with ScalaLibrary classloader
Ref #4689
Ref #4671
2019-05-17 14:24:55 -04:00
Ethan Atkins 5a9f5a69d5 Fix warnings
These lines added new warnings that slipped through the cracks.
2019-05-15 17:27:07 -07:00
eugene yokota 15b4befa9c
Merge pull request #4679 from eatkins/rt-jar
Don't ever invalidate rt.jar
2019-05-14 22:13:45 -04:00
Ethan Atkins a820bb5623 Sort the supershell tasks by task name
This should make the output less jumpy.
2019-05-14 17:27:17 -07:00
Ethan Atkins 564aa7262b Fix TaskProgress
Supershell was not reliably working and I tracked it down to
TaskProgress not actually publishing updates during task execution. This
seemed to happen because the background task was only run once when the
task started up. Once that task exited, no further task reports would be
published. The fix is to start a new thread every time we enter
EvaluateTask. I verified manually that it did not seem to leak threads
because EvaluateTask always calls shutdown, which calls
afterAllCompleted, which stops the progress thread.

I also decreased the default report period to 100ms. I can't imagine
that this will have a big effect on performance. It can be tuned with
the sbt.supershell.sleep parameter.
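
For example (the millisecond unit is inferred from the 100ms default above and
is an assumption): sbt -Dsbt.supershell.sleep=500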
2019-05-14 17:27:16 -07:00
Ethan Atkins 0d9be6dd4a Don't ever invalidate rt.jar
There are issues when using jdk > 8 where the rt.jar file can be
invalidated by ExternalHooks. This causes spurious rebuilds. I think
it's fair to assume that rt.jar never changes. If a dependency is named
rt.jar, then invalidation may not work correctly but I think that this
is the more important case to handle.

I verified that before this change, it was impossible to run
akka-actor/compile twice in a row using adopt jdk 11 and, after this
change, re-compilation worked as expected.
2019-05-14 16:53:09 -07:00
Ethan Atkins e5b54a59ea Manage shutdown hooks
I discovered that some registered shutdown hooks would crash due to
67df72ab01 because they would try to load
classes from the closed classloader. To fix this, I add an internal
shutdown hooks mechanism that can be managed by sbt. Any unevaluated
shutdown hooks will be run when the sbt main method exits. This means
that they will be run when the user calls reboot. I think that is
reasonable.
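
A hedged sketch of such a mechanism (hypothetical names, not sbt's internal
API): hooks are registered with sbt instead of Runtime.addShutdownHook, and
whatever has not yet run is flushed when the main method exits.

import java.util.concurrent.ConcurrentLinkedQueue

object ManagedShutdownHooks {
  private[this] val hooks = new ConcurrentLinkedQueue[() => Unit]

  // Register a hook with sbt rather than with the JVM runtime.
  def add(hook: () => Unit): Unit = hooks.add(hook)

  // Called when the sbt main method exits (including on reboot): run whatever is left.
  def runRemaining(): Unit = {
    var hook = hooks.poll()
    while (hook != null) {
      try hook() catch { case t: Throwable => t.printStackTrace() }
      hook = hooks.poll()
    }
  }
}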
2019-05-13 17:38:56 -07:00
eugene yokota 7e5b9c521e
Merge pull request #4675 from eatkins/startup
Startup
2019-05-13 20:10:05 -04:00
eugene yokota 84bee1e440
Merge pull request #4671 from eed3si9n/wip/scalalibrary
change ClassLoaderLayeringStrategy.ScalaInstance to ScalaLibrary
2019-05-13 17:49:47 -04:00
Ethan Atkins 54412d8c59 Improve ScalaMetaBuildClassLoader construction
I realized that all of the data structures that I needed to isolate the
classpath are contained in the AppProvider interface so there was no
need to use structural reflection on the top class loader.
2019-05-13 14:40:22 -07:00
Ethan Atkins 418e7e09fd Shave O(500ms) off of sbt startup
It turns out that it can take roughly one second to instantiate a
scala.tools.nsc.Global instance for the first time. When sbt is starting
up, it also takes nearly 2 seconds to initialize logging. We can speed
up the boot time by doing these two things concurrently. On my machine,
I saw on average a 500ms decrease in startup time after this change.
2019-05-13 14:39:39 -07:00
Eugene Yokota 5c85c04e0d don't include si.allJars into the test classpath
allJars contains unwanted Scala modules.
Including them prevents the flat classloader from working correctly.

Ref #4609
2019-05-12 23:51:17 -04:00
Eugene Yokota b00c675a19 change ClassLoaderLayeringStrategy.ScalaInstance to ScalaLibrary
Fixes #4609

ScalaInstance contains unwanted Scala modules such as scala-xml and scala-parser-combinators.
2019-05-12 23:36:12 -04:00
Eugene Yokota d8c9eb90c6 exclude inter-project resolvers when resolving the compiler bridge
Fixes #4669
2019-05-12 23:03:07 -04:00
Ethan Atkins b96be5343b Support char buffered stdin on windows in continuous
I finally realized that the trick is that for non cygwin windows, the
available method on the jline wrapped input stream always returns zero.
Unlike on posix, however, the read method is interruptible which means
that we can just spin up a background thread that polls from the input
stream and writes it into a buffer.

I verified that it was no longer necessary to hit <enter> after 'r' to
rerun the continuous command on my windows vm after this change.
2019-05-11 22:01:49 -07:00
Ethan Atkins 8f54ecd536 Check meta build sources before task evaluation
This commit finally fixes #241 by adding support for sbt to either
print a warning or automatically reload the project if the metabuild
sources have changed. To facilitate this, I introduce a new key,
metaBuildSourceOption which has three options:
1) IgnoreSourceChanges
2) WarnOnSourceChanges
3) ReloadOnSourceChanges

When IgnoreSourceChanges is set, sbt will not check whether the meta build sources
have changed. Otherwise, sbt will use the buildStructure / fileInputs to
get the ChangedFiles for the metabuild. If there are any changes, it
will either warn or reload the build depending on the value of
metaBuildSourceOption.

The mechanism for diffing the files is that I add a step to EvaluateTask
where, if the project has been loaded and
metaBuildSourceOption != IgnoreSourceChanges, we evaluate the needReload
task. If we need a reload, we return an error that indicates that a
Reload is necessary. When that error is detected, the MainLoop will
prepend "reload" to the pending commands for the state. Otherwise we
just print a warning and continue.

I benchmarked the overhead of this and it wasn't too bad. I generally
saw it taking 5-20ms to perform the check. Since this is only done once
per task evaluation run, I don't think it's a big deal. When
IgnoreSourceChanges is set, there is O(10us) overhead. If performance
does become a problem, we could add a global watch service and skip the
needReload evaluation if no files have been modified.

I removed the watchTrackMetaBuild key and made it so that the continuous
builds only track the meta build when
metaBuildSourceOption == ReloadOnSourceChanges
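
For example, to opt into automatic reloads (a hedged sketch; the Global scoping
is an assumption, the key and value come from this message):

Global / metaBuildSourceOption := ReloadOnSourceChanges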
2019-05-11 22:01:49 -07:00
Ethan Atkins 4007810adb Add watchPersistFileStamps key
The persistentFileStampCache does seem to work pretty well but in case
users encounter issues, I add a boolean flag that allows the user to
turn this behavior off and always re-stamp every source file in every
task evaluation run.
2019-05-11 22:01:48 -07:00
Ethan Atkins ec09e73437 Improve cache invalidation strategy
Previously the persistent attribute map was only reset when the file
event monitor detected a change. This made it possible for the cache to
be inconsistent with the state of the file system*. To fix this, I add an
observer on the file tree repository used by the continuous build that
invalidates the cache entry for any path for which it detects a change.
Invalidating the cache does not stamp the file. That only happens either
when a task asks for the stamp for that file or when the file event
monitor reports an event and we must check if the file was updated or
not.

After this change, touching a source file will not trigger a build
unless the contents of the file actually change.

I added a test that touches one source file in a project and updates the
content of the other. If the source file that is only touched ever
triggers a build, then the test fails.

* This could lead to under-compilation because ExternalHooks would not
detect that the file had been updated.
2019-05-11 22:01:48 -07:00
Ethan Atkins 3d965799f3 Fix trigger bug
I got the if condition wrong, which was setting the fileInputs to have a
LastModified stamp.
2019-05-11 21:34:02 -07:00
Ethan Atkins 8a456aef8a Always inject input tasks
I had tried to be cute and only inject certain tasks if they're actually
used, but that made it so that dynamic tasks may not have been able to use
them.
2019-05-11 21:34:02 -07:00
Ethan Atkins 40dc3ff7b3 Move json formatters
Organizationally this was sloppy, with the FileStamp implementation
classes split up by a bunch of json formatters.
2019-05-11 21:34:02 -07:00
Ethan Atkins f60d4060dd Fix toString for Update 2019-05-11 21:34:02 -07:00
Ethan Atkins b6ad077a72 Update io
The new io version removes the PathFinder <-> Glob implicit translations.
It also has a number of small bug fixes related to directory listing via
FileTreeView.
2019-05-11 21:34:02 -07:00
Ethan Atkins 2ab8fed8fd Deprecation cleanup
The main project emits a number of deprecation warnings. I've isolated
the deprecation warnings related to Watch to the DeprecatedContinuous
file. I fixed the deprecation warnings where it was straightforward to
do so. After this change, there are three non-watch-related deprecation
warnings still emitted:

1) Defaults.scala:3760 uses the deprecated InputTask.apply. This seems
   fixable but I'm not in a hurry
2) oldLoadFailed and oldLastGrep are used by Main. I think this could
   just be fixed by removing the deprecation warnings and setting them
   private[sbt] since they will still be available in the shell.
2019-05-11 21:34:02 -07:00
Ethan Atkins b15b638632 Remove unused private[sbt] key
This slipped through by mistake.
2019-05-11 21:34:02 -07:00
Ethan Atkins f9eb631b13 Filter scala-library more safely
I previously tried to fix https://github.com/sbt/sbt/issues/4608 in
fc715cab44 by finding the instance of
xsbt.boot.BootFilteredLoader in the classloader hierarchy. This was a
risky approach since it made a lot of assumptions about the classloaders
used to invoke xMain.run. Since the point is to filter out the scala
standard library jar, I reworked things to just find all the parents of
the scala provider loader and then walk the graph from the root
classloader until it finds the classloader that contains the scala
library. If no such classloader exists, it ends up returning the parent
of the scala provider library.

I also renamed the libraryLoader parameter to scalaProviderLoader since
that is what is actually passed in. It is actually the libraryLoader that
we want to exclude.
2019-05-11 19:45:25 -07:00
eugene yokota cf932c8a13
Merge pull request #4666 from eed3si9n/wip/coursierlog
silence coursier log when supershell is off
2019-05-11 19:39:21 -04:00
Eugene Yokota b18f3e8710 Reduce the test noise by making id more realistic
Fixes #3893

This fixes the flaky ParserSpec by making the id generation produce more realistic ASCII identifiers.
2019-05-11 03:55:34 -04:00
Eugene Yokota 7433f1f4ed make Coursier cache directory configurable 2019-05-11 01:11:07 -04:00
Eugene Yokota 9137e21028 silence coursier log when supershell is off 2019-05-11 00:56:40 -04:00
Eugene Yokota 1ba195a4f5 Refactor out keepPreloaded
Ref https://github.com/sbt/sbt/issues/4661
2019-05-11 00:13:22 -04:00
Eugene Yokota bcbd29f496 exclude preloaded local repos for now
Ref https://github.com/sbt/sbt/issues/4661
local-preloaded-ivy contains dangling ivy.xml files without JARs.
We might include local-preloaded again once we have a preloaded repository in Maven layout.
2019-05-09 23:34:37 -04:00
Eugene Yokota 4b9533b124 ignore bad SDKMAN directories
Fixes #4655
2019-05-09 01:59:03 -04:00
eugene yokota f5edeec2fd
Merge pull request #4647 from eed3si9n/wip/progress
Remove State out of progressReports
2019-05-03 17:51:51 -04:00
Eugene Yokota e8a22bf805 Remove State out of progressReports 2019-05-03 16:44:42 -04:00
Dale Wijnand f5495bdd67
Fix projects help usage text 2019-05-03 08:58:56 +01:00
eugene yokota 8fa3bcf90d
Merge pull request #4644 from eatkins/close-classloaders
Properly close a number of classloaders
2019-05-02 22:12:54 -04:00
eugene yokota cfa64e12d1
Merge pull request #4645 from eatkins/unnecessary-sleep
Unnecessary sleep
2019-05-02 22:04:39 -04:00
Ethan Atkins 7f719d7233 Remove unnecessary sleep
I'm not sure what the previous purpose of this was, but syncTo is
blocking so this just seems to add 100ms to the run task startup time.
2019-05-02 14:43:00 -07:00
Ethan Atkins 67df72ab01 Properly close a number of classloaders
I discovered there were a number of places where closing a ClassLoader
didn't work correctly because I was assuming it was a URLClassLoader
when it was actually a ClasspathFilter. I also incorrectly imported the
wrong kind of URLClassLoader in Run.scala. Finally, I close the
SbtMetaBuildClassLoader when xMain exits now.
2019-05-02 14:38:33 -07:00
Ethan Atkins 507346f3f6 Simplify file management settings
I decided that there were too many settings related to file management
that had similar names but did slightly different things. To improve
this, I introduce the ChangedFiles class to sbt.nio.file and switch to
having just two tasks for file input and output retrieval:
all(Input|Output)Files and
changed(Input|Output)Files. If, for example, changedInputFiles returns
None that means that either the task has not yet been run or there were
no changes. If there have been any changes, then it will return
Some(changes) and the user can extract the relevant changes that they
are interested in.

The code may be slightly more verbose in a few places, but I think it's
worth it for the conceptual clarity.
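
A rough sketch of a task consuming the new api, assuming changedInputFiles is scoped to the task and using a hypothetical process helper for the actual work:

val myTask = taskKey[Unit]("processes its file inputs incrementally")
myTask := {
  (myTask / changedInputFiles).value match {
    case None          => ()                // per the description above: never run before, or nothing changed
    case Some(changes) => process(changes)  // process is a hypothetical helper consuming the ChangedFiles
  }
}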
2019-05-02 14:36:08 -07:00
Ethan Atkins 3319423369 Add full support for managing file task io
This commit unifies my previous work for automatically watching the
input files for a task with support for automatically tracking and
cleaning up the output files of a task. The big idea is that users may
want to define tasks that depend on the file outputs of other tasks and
we may not want to run the dependent tasks if the output files of the
parent tasks are unmodified.

For example, suppose we wanted to make a plugin for managing typescript
files. There may be, say, two tasks with the following inputs and
outputs:

compileTypescript = taskKey[Unit]("shells out to compile typescript files")
  fileInputs -- sourceDirectory / ** / "*.ts"
  fileOutputs -- target / "generated-js" / ** / "*.js"

minifyGeneratedJS = taskKey[Path]("minifies the js files generated by compileTypescript to a single combined js file.")
  dependsOn: compileTypeScript / fileOutputs
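
In plausible build.sbt syntax, this might look roughly like the sketch below (the task bodies, the compileTs/minify helpers, and the exact way minifyGeneratedJS picks up the generated js are assumptions for illustration, not taken from this commit):

val compileTypescript = taskKey[Unit]("shells out to compile typescript files")
val minifyGeneratedJS = taskKey[java.nio.file.Path]("minifies the generated js files into a single combined js file")

compileTypescript / fileInputs  += sourceDirectory.value.toGlob / ** / "*.ts"
compileTypescript / fileOutputs += target.value.toGlob / "generated-js" / ** / "*.js"

compileTypescript := compileTs((compileTypescript / allInputFiles).value)  // compileTs is a hypothetical helper
minifyGeneratedJS := {
  val generatedJs = (compileTypescript / allOutputFiles).value             // reading the parent task's outputs runs it by proxy
  minify(generatedJs, (target.value / "combined.min.js").toPath)           // minify is a hypothetical helper returning the combined Path
}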

Given a clean build, the following should happen
> minifyGeneratedJS
// compileTypescript is run
// minifyGeneratedJS is run

> minifyGeneratedJS
// no op because nothing changed

> minifyGeneratedJS / clean
// removes the file returned by minifyGeneratedJS.previous

> minifyGeneratedJS
// re-runs minifyGeneratedJS with the previously compiled js artifacts

> compileTypescript / clean
// removes the generated js files

> minifyGeneratedJS
// compileTypescript is run because the previous clean removed the generated js files
// minifyGeneratedJS runs because the artifacts have changed

> clean
// removes the generated js files and the minified js file

> minifyGeneratedJS
// compileTypescript is run because the generated js files were removed
// minifyGeneratedJS is run both because its output was removed and because its inputs changed

Moreover, if compileTypescript fails, we want minifyGeneratedJS to fail
as well.

This commit makes this all possible. It adds a number of tasks to
sbt.nio.Keys that deal with the output files. When injecting settings, I
now identify all tasks that return Seq[File], File, Seq[Path] and Path
and create a hidden special task: dynamicFileOutputs: TaskKey[Seq[Path]]
This special task runs the underlying task and converts the result to
Seq[Path]. From there, we can have the tasks like changedOutputPaths
delegate to dynamicFileOutputs which, by proxy, runs the underlying
task. If any task in the input / output chain fails, the entire sequence
fails.

Unlike the fileInputs, we do not register the dynamicFileOutputs or
fileOutputs with the continuous watch service, so these paths will not
trigger a continuous build if they are modified. Only explicit unmanaged
input sources should do that.

As part of this, I also added automatic generation of a custom clean task for
any task that returns Seq[File], File, Seq[Path] or Path. I also added
aggregation so that clean can be defined in a configuration or project
and it will automatically run clean for all of the tasks that have a
custom clean implementation in that configuration or project. The automatic clean
task will only delete files that are in the task target directory to
avoid accidentally deleting unmanaged files.
2019-05-02 14:36:08 -07:00
Ethan Atkins 7166aca0c2 Rename InputGraph to SettingsGraph 2019-05-02 14:36:08 -07:00
Ethan Atkins a5cefd45be Clean up nio apis
This commit refactors things so that the nio apis are located primarily
in the nio package. Because the nio keys are a first class sbt feature,
I had to add import sbt.nio._ and sbt.nio.Keys._ to the autoimports in
BuildUtil.scala
2019-05-02 14:36:06 -07:00
Ethan Atkins 2d1c80f916 Remove duped system in
I had some ideas for allowing the user to get a copy of System.in during
a continuous build, but I can't really see a good use case now, so I'm
going to remove it before 1.3.0.
2019-05-02 14:33:29 -07:00
Ethan Atkins 72df8f674c Add support for managed task inputs
In my recent changes to watch, I have been moving towards a world in
which sbt manages the file inputs and outputs at the task level. The
main idea is that we want to enable a user to specify the inputs and
outputs of a task and have sbt able to track those inputs across
multiple task evaluations. Sbt should be able to automatically trigger a
build when the inputs change and it also should be able to avoid task
evaluation if none of the inputs have changed.

The former case of having sbt automatically watch the file inputs of a
task has been present since watch was refactored. In this commit, I
make it possible for the user to retrieve the lists of new, modified and
deleted files. The user can then avoid task evaluation if none of the
inputs have changed.

To implement this, I inject a number of new settings during project
load if the fileInputs setting is defined for a task. The injected
settings are:

allPathsAndAttributes -- this retrieves all of the paths described by
  the fileInputs for the task along with their attributes
fileStamps -- this retrieves all of the file stamps for the files
  returned by allPathsAndAttributes

Using these two injected tasks, I also inject a number of derived tasks,
such as allFiles, which returns all of the regular files returned by
allPathsAndAttributes and changedFiles, which returns all of the regular
files that have been modified since the last run.

Using these injected settings, the user is able to write tasks that
avoid evaluation if the inputs haven't changed.

foo / fileInputs += baseDirectory.value.toGlob / ** / "*.scala"
foo := {
  foo.previous match {
    case Some(p) if (foo / changedFiles).value.isEmpty => p
    case _ => fooImpl((foo / allFiles).value)
  }
}

To make this whole mechanism work, I add a private task key:
val fileAttributeMap = taskKey[java.util.HashMap[Path, Stamp]]("...")
This keeps track of the stamps for all of the files that are managed by
sbt. The fileStamps task will first look for the stamp in the attribute
map and, only if it is not present, it will update the cache. This
allows us to ensure that a given file will only be stamped once per task
evaluation run no matter how the file inputs are specified. Moreover, in
a continuous build, I'm able to reuse the attribute map which can
significantly reduce latency because the default file stamping
implementation used by zinc is fairly expensive (it can take anywhere
between 300-1500ms to stamp 5000 8kb source files on my mac).

I also renamed some of the watch related keys to be a bit more clear.
2019-05-02 14:33:29 -07:00
Ethan Atkins ba1f690bba Make Repository private[sbt]
This trait may not even survive until 1.4.0. It should definitely not be
public. I got a little overexcited about programming with higher kinded
types when I added it.
2019-05-02 14:33:02 -07:00
Ethan Atkins 41c63c1028 Remove unneeded filters 2019-05-02 14:33:02 -07:00
Ethan Atkins 2deac62b00 Bump io
The newest version of io repackages a number of classes into the
sbt.nio.* packages. It also changes some of the semantics of glob
related apis. This commit updates all of the usages of the updated apis
within sbt but should have no functional difference.
2019-05-02 14:33:01 -07:00
Ethan Atkins 20b0ef786b Undeprecate WatchSource
Since the new watch implementation has yet to be widely deployed, we
should hold off on deprecating the old keys. They could still be
deprecated in a patch release or in 1.4.0.
2019-05-02 09:41:53 -07:00
Ethan Atkins 3a6ff8afca Use global classloader cache for scala instance
I noticed in a heap dump of sbt that there were many classloaders for
the scala instance. I then realized that we were making a new
classloader for the scala library on every test run. Even worse, the
ScalaInstanceLoader instance was never closed, which led to a metaspace
leak. I moved the scala instance classloader to the global classloader
cache. Not only will these be correctly cached, they will be closed if
evicted from the cache.
2019-04-30 12:33:43 -07:00
Eugene Yokota 788a864d83 Refactor some code 2019-04-29 10:33:08 -04:00
eugene yokota 1106422fb9
Merge pull request #4617 from dwijnand/zinc-lm-integration
In-source zinc's LM integration code
2019-04-28 22:19:43 -04:00
eugene yokota 33f4f5a49b
Merge pull request #4630 from eed3si9n/wip/cancelable
make Global / cancelable true by default
2019-04-28 18:17:41 -04:00
Eugene Yokota f5444f7715 Merge branch 'develop' into pr/4617 2019-04-28 17:22:54 -04:00
Eugene Yokota f999f6a62e always reresolve sbt artifacts when using Coursier
Ref #4589

This requires sbt server tests to resolve sbt off of local.
2019-04-27 14:31:13 -04:00
Eugene Yokota 96ad731e8c Use allExcludeDependencies 2019-04-26 18:06:10 -04:00
Eugene Yokota 8c0f13a24a manually expand ivy.home
Ref coursier/coursier#1124
2019-04-26 17:51:17 -04:00
Eugene Yokota f354a626c7 use lm-coursier-shaded
This uses lm-coursier-shaded and follows along with the changes in https://github.com/coursier/sbt-coursier/pull/58.
2019-04-26 17:33:14 -04:00
Eugene Yokota 24db77edc5 copy some tests from coursier/sbt-coursier
Copying over sbt-coursier integration tests that do not depend on Coursier-specific things but exercise sbt integration.
2019-04-26 12:27:38 -04:00
Eugene Yokota 7658f14762 Add maven-plugin and test-jar to classpathTypes
Ref https://github.com/sbt/sbt-native-packager/issues/1053
Ref https://github.com/coursier/coursier/issues/450
2019-04-26 12:27:38 -04:00
Eugene Yokota ca53934941 fix csrCachePath 2019-04-26 12:27:38 -04:00
Eugene Yokota 944e955d06 put sbtCp ahead of resolved JARs
Ref https://github.com/sbt/sbt/pull/4443
Ref https://github.com/coursier/coursier/issues/1128

This is a workaround for Coursier not excluding sbt modules.
2019-04-26 12:27:38 -04:00
Eugene Yokota 5614cfcbb6 Move log to outer task 2019-04-26 12:27:38 -04:00
Eugene Yokota e206e797fe set up specific dependencyResolution instances 2019-04-26 12:27:38 -04:00
Eugene Yokota 6a99906386 manually expand ivy.home
Ref https://github.com/coursier/coursier/issues/1124
2019-04-26 12:25:52 -04:00
Eugene Yokota 21782a51f0 write info.apiURL to ivy.xml
Ref https://github.com/coursier/coursier/issues/1123
2019-04-26 12:25:52 -04:00
Eugene Yokota 38f94a6e31 Coursier dependency resolution integration
This adds a Coursier-based implementation of dependency resolution to LM.
I had to copy-paste a bunch of code from sbt-coursier-shared to break the dependency on sbt.

`Global / useCoursier := false` or `-Dsbt.coursier=false` can be used to opt out of using Coursier for dependency resolution.
2019-04-26 12:25:52 -04:00
Eugene Yokota 3be8efc36e make Global / cancelable true by default
Fixes #3252
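
Builds that prefer the previous behavior can presumably opt back out with:

Global / cancelable := false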
2019-04-25 12:14:37 -04:00
Dale Wijnand e978357e47
In-source zinc's LM integration code 2019-04-25 11:57:37 +01:00
Eugene Yokota 6c7faf2b86 trim update and add updateFull
Fixes #4438

This slims down update's UpdateReport by removing evicted modules'
caller information. The larger the graph, the more pronounced the
effect. For example, I saw a report shrink from 5.9 MB to 1.1 MB in the JSON file.
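
A rough sketch of reaching for the full report where the trimmed one is not enough (this assumes updateFull yields the same UpdateReport shape as update):

val printCallers = taskKey[Unit]("logs the full update report, including evicted-module caller info")
printCallers := {
  val full = updateFull.value              // the untrimmed UpdateReport; update.value now returns the slimmed one
  streams.value.log.info(full.toString)
}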
2019-04-23 14:08:17 -04:00
eugene yokota 4074cb32d3
Merge pull request #4605 from eed3si9n/wip/bumplm
bump to lm 1.3.0-M3
2019-04-23 13:52:08 -04:00
eugene yokota 9b71ee1d6e
Merge pull request #4459 from alexarchambault/topic/update-classifiers-dependency-resolution
Have updateClassifiers use the dependencyResolution task
2019-04-21 19:18:01 -04:00
Eugene Yokota 465ff8e10a Make loggers synchronized
This is a workaround for "[success]" logs displaying after the prompt is displayed.
2019-04-21 04:03:22 -04:00
Eugene Yokota c4d6efe5af move super shell rendering to logger
Fixes #4583
Ref https://github.com/sbt/util/pull/196
2019-04-20 23:32:42 -04:00
Eugene Yokota 1e157b991a apply formatting 2019-04-20 03:23:54 -04:00
Dale Wijnand 546476981c
Resolve compilation warnings in test/Delegates 2019-04-18 09:21:08 +01:00
Ethan Atkins fc715cab44 Don't leak the sbt boot scala library into tests
It was reported in https://github.com/sbt/sbt/issues/4608 that there was
a regression that tests run against scala 2.11 would fail. This was
because the interface loader incorrectly contained the scala library. To
fix this, I needed to find the xsbt.boot.BootFilteredLoader in the
classloading hierarchy and put the sbt testing interface library in
between that loader and the scala library loader.
2019-04-07 15:08:52 -07:00
Ethan Atkins c9aec02d05 Improve toString for flat classloader
It can be helpful to see what jars are available to the underlying url
classloader as well as what the parent classloader is.
2019-04-07 15:08:52 -07:00
Eugene Yokota bf44a6f446 add header 2019-04-06 02:08:21 -04:00
Eugene Yokota 8790a7b45d bump to lm 1.3.0-M3
This also adds `CustomHttp.okhttpClient` and `CustomHttp.okhttpClientBuilder` settings to experimentally customize HTTP client.
2019-04-05 15:28:49 -04:00
Ethan Atkins 031e9463da Improve error reporting for classloading issues
We noticed that the community build was failing for some projects due to
some class loading issues. My initial approach for detecting the errors
didn't always work because the test framework might wrap the underlying
exception. To fix that, I add the causes to the list of throwables to
scan for class loading related exceptions. I also added
ClassNotFoundException to the list of types to check for. I additionally
added more context to the error message so that it is more clear to the
user what specifically went wrong. The error message is intended to
provide examples that the user can actually paste into the console.
There is also a lot of manual line wrapping that could be improved by
defining paragraphs and then splitting on the jline terminal width. That
could be a useful internal helper function to improve our log messages
in general.

The underlying issue could be addressed by allowing the user to specify
libraries that get excluded from the dependency classpath for layering
purposes. I'm not sure the best way to do that yet and adding that
feature wouldn't fix any existing builds so I think that would be better
handled in 1.4.0.
2019-04-03 11:02:49 -07:00
Ethan Atkins 73cfd7c8bd Don't leak the sbt metabuild classpath in run/test
Prior to this commit, it was difficult to prevent the sbt metabuild
classpath from leaking into the runtime and test classpaths. The biggest
issue is that the test-interface jar was located in the metabuild
classpath. We tried to prevent leakage using the DualClassLoader, but
this was an ugly solution that did not seem to work reliably. The fix is
to modify the actual sbt metabuild classloader provided by the sbt
launcher.

To do this, I add a new classloader SbtMetaClassLoader that isolates the
test-interface jar from the rest of the classpath. I modify xMain to
create a new AppConfiguration that uses this new classloader and
use reflection to invoke the sbt main method using the new classloader.

Not only do I think that this is a much saner solution than DualLoaders,
I accidentally fixed #4575 with this change.
2019-04-02 20:53:37 -07:00
Ethan Atkins 2c19138394 Fix classpath ordering for layered classloaders
The order of the classpath was not previously preserved because I
converted the runtime and test classpaths to a Set. I fix that in this
commit.
2019-04-02 20:53:37 -07:00
Ethan Atkins 399dd920b0 Set bgCopyClasspath false for shared layer config
It isn't possible to share the runtime and test layers correctly when
bgCopyClasspath is used because the runtime classpath uses the
dependencies copied to the boot directory while the test classpath uses
the classes in target and .ivy2. Since this is not the default and users
have to opt in to
ClassLoaderLayeringStrategy.ShareRuntimeDependenciesLayerWithTestDependencies,
I think this is fine.
2019-04-02 20:53:37 -07:00
Ethan Atkins a4f1d23d71 Close test and run classloaders
It's good practice to call close on a URLClassLoader when we're done
with it.
2019-04-02 20:53:37 -07:00
Ethan Atkins 8ef5a67b64 Add better error message if run fails
It is possible with the new layering strategies that tests may fail if a
java package-private class is accessed across classloader layers. This
will result in an IllegalAccessError that is hard to debug. With this
commit, I add an error message, displayed when run throws an
IllegalAccessError, that suggests the user try the ScalaInstance
layering strategy or the flat layering strategy.
2019-04-02 20:53:37 -07:00
Ethan Atkins cb7fbfc810 Use named parameters 2019-04-02 20:53:37 -07:00
Ethan Atkins 13cdbb5ea6 Don't make redundant ClassLoaderCache instance
I noticed that sometimes multiple ClassLoaderCache instances were
created in each configuration. I believe this was due to the use of
inConfig(...)(...) causing multiple caches to be created. Long term, I'm
not sure that taskRepository and classLoaderCache are the right
solutions so I made classLoaderCache private[sbt] as well.
2019-04-02 20:53:37 -07:00
Ethan Atkins 7f46b27143 Change default FileTree implementation
I have noticed on linux that the file cache updates aren't fast enough
for ExternalHooks. Say you have project b that depends on project a.
With a clean build, if you run b/compile, the file cache may not yet see
the changes to *.class files generated by project a. There are multiple
ways to fix this:
* don't use the file cache for binary products
* use the analysis results to invalidate the cache
* switch over to my hypothetical replacement file system

In the meantime, we should stop spamming users by default.
2019-03-31 22:15:28 -07:00
Ethan Atkins e33bb691ee Fix depth condition on GlobLister.aggregate
I wrote this check in a rush and realized that it didn't quite match the
correct glob semantics. The depth parameter is effectively the index of
the array of sorted child directories of the base. That index is
computed with getNameCount - 1, not getNameCount. It is also inclusive,
not exclusive, hence the switch from `<` to `<=`.

This change was motivated by my reviewing the initial change in the
context of the fix to https://github.com/sbt/sbt/issues/4591.
2019-03-31 09:34:06 -07:00
Ethan Atkins eb2926b004 Validate the cache by default
This commit changes the default FileTree.Repository to always use a polling file
repository but one that validates the current file system results
against the cache results. On windows, we do not validate the cache
because the cache can cause io contention in scripted tests. The
cache does seem to work ok on my VM, but not on appveyor for whatever
reason. Validating the cache by default was suggested by @smarter in a
comment in https://github.com/sbt/sbt/issues/4543.
2019-03-30 16:39:10 -07:00
Ethan Atkins 247d242008 Improve watch messages
This commit reworks the watch start message so that instead of printing
something like:

[info] [watch] 1. Waiting for source changes... (press 'r' to re-run the command, 'x' to exit sbt or 'enter' to return to the shell)

it instead prints something like:

[info] 1. Monitoring source files for updates...
[info] Project: filesJVM
[info] Command: compile
[info] Options:
[info]   <enter>: return to the shell
[info]   'r': repeat the current command
[info]   'x': exit sbt

It will also print which path triggered the build.
2019-03-30 16:39:10 -07:00
Ethan Atkins c72005fd2b Support inputs in dynamic tasks
Prior to this commit, it was necessary to add breadcrumbs for every
input that is used within a dynamic task. In this commit, I rework the
watch setup so that we can track the dynamic inputs that are used. To
simplify the discussion, I'm going to ignore aggregation and
multi-commands, but they are both supported. To implement this change, I
update the GlobLister.all method to take a second implicit argument:
DynamicInputs. This is effectively a mutable Set of Globs that is
updated every time a task looks up files from a glob. The repository.get
method should already register the glob with the repository. The set of
globs is necessary because the repository may not do any file filtering,
so the file event monitor needs to check the input globs to ensure that
the file event is for a file that was actually requested by a task during
evaluation.

* Long term, I plan to add support for lifting tasks into a dynamic task
in a way that records _all_ of the possible dependencies for the task
through each of the dynamic code paths. We should revisit this change to
determine if it's still necessary after that change.
2019-03-30 16:39:10 -07:00
Ethan Atkins 7c2607b1ae Clean up file repository management
I had needed to add proxy classes for the global FileTreeRepository so
that tasks that called the close method wouldn't actually stop the
monitoring done by the global repository. I realized that it makes a lot
more sense to just not provide direct access to the underlying file tree
repository and let registerGlobalCaches manage its life cycle
instead.
2019-03-30 16:39:10 -07:00
Ethan Atkins 9cdeb7120e Add StateTransform class
This commit cleans up the approach for transforming the sbt state upon
completion of a task returning State. I add a new approach where a task
can return an instance of StateTransform, which is just a wrapper around
State. I then update EvaluateTask to apply this stateTransform rather
than the (optional) state transformation that may be stored in the Task
info parameter. By requiring that the user return StateTransform rather
than State directly, we ensure that existing tasks that depend on the
state transformation function embedded in the Task info do not break. In sbt 2,
I could see the possibility of making this automatic (and probably
removing the state transformation function via attribute).

The problem with using the transformState attribute key is that it is
applied non-deterministically. This means that if you decorate a task
returning State, then the state transformation may or may not be
correctly applied.

I tracked this non-determinism down to the stateTransform
method in EvaluateTask. It iterates through the task result map and
chains all of the defined transformState attribute values. Because the
result is a map, this order is not specified. This chaining is arguably
a bad design because State => State does not imply commutativity. Indeed,
the problem here was that my state transformation functions were
constant functions, which are obviously non-commutative. I believe that
this logic was likely written under the assumption that there would be no
more than one of these transformations in a given result map.
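
A minimal sketch of the intended usage, read directly off the description above (the exact StateTransform constructor and where the attribute key lives are assumptions):

val marker    = AttributeKey[Boolean]("demoTransformed")  // hypothetical attribute key for illustration
val markState = taskKey[StateTransform]("tags the sbt State via a StateTransform")
markState := {
  val s = state.value
  new StateTransform(s.put(marker, true))  // wrap the new State; EvaluateTask applies it after the task completes
}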
2019-03-30 16:39:10 -07:00
Ethan Atkins 40d8d8876d Create Watch.scala
I decided that it makes sense to move all of the new watch code out of
the Watched companion object since the Watched trait itself is now
deprecated. I don't really like having the new code in Watched.scala
mixed with the legacy code, so I pulled it all out and moved it into the
Watch object. Since we have to put all of the logic for the Continuous
object in main in order to access the sbt.Keys object, it makes sense to
move the logic out of main-command and into command so that most of the
watch related logic is in the same subproject.
2019-03-30 16:39:10 -07:00
Ethan Atkins e868c43fcc Refactor Watched
This is a huge refactor of Watched. I produced this through multiple
rewrite iterations and it was too difficult to separate all of the
changes into small individual commits so I, unfortunately, had to make a
massive commit. In general, I have tried to document the source code
extensively both to facilitate reading this commit and to help with
future maintenance.

These changes are quite complicated because they provide a built-in-like
api to a feature that is implemented like a plugin. In particular,
we have to manually do a lot of parsing as well as roll our own
task/setting evaluation because we cannot infer the watch settings at
project build time because we do not know a priori what commands the
user may watch in a given session. The dynamic setting and task
evaluation is mostly confined to the WatchSettings class in Continuous.
It feels dirty to do all of this extraction by hand, but it does seem to
work correctly with scopes.

At a high level this commit does four things:
1) migrate the watch implementation to using the InputGraph to collect
   the globs that it needs to monitor during the watch
2) simplify WatchConfig to make it easier for plugin authors to write
   their own custom watch implementations
3) allow configuration of the watch settings based on the task(s) that
   is/are being run
4) adds an InputTask implementation of watch.

Point #1 is mostly handled by Point #3 since I had to overhaul how _all_
of the watch settings are generated. InputGraph already handles both
transitive inputs and triggers as well as legacy watchSources so not
much additional logic is needed beyond passing the correct scoped keys
into InputGraph.

Point #3 requires some structural changes. The watch settings cannot in
general be defined statically because we don't know a priori what tasks
the user will try and watch. To address this, I added code that will
extract the task keys for all of the commands that we are running. I
then manually extract the relevant settings for each command. Finally, I
aggregate those settings into a single WatchConfig that can be used to
actually implement the watch. The aggregation is generally
straightforward: we run all of the callbacks for each task and choose
the next watch state based on the highest priority Action that is
returned by any of the callbacks.

Because I needed Extracted to pull out the necessary settings, I was
forced to move a lot of logic out of Watched and into a new singleton,
Continuous, that exists in the main project (Watched is in the command
project). The public footprint of Continuous is tiny. Even though I want
to make the watch feature flexible for plugin authors, the
implementation and api remain a moving target so I do not want to be
limited by future binary compatibility requirements. Anyone who wants to
live dangerously can access the private[sbt] apis via reflection or by
adding custom code to the sbt package in their plugin (a technique I've
used in CloseWatch).

Point #2 is addressed by removing the count and lastStatus from the
WatchConfig callbacks. While these parameters can be useful, they are
not necessary to implement the semantics of a watch. Moreover, a status
boolean isn't really that useful and the sbt task engine makes it very
difficult to actually extract the previous result of the tasks that were
run. After this refactor, WatchConfig has a simpler api. There are fewer
callbacks to implement and the signatures are simpler. To preserve the
_functionality_ of making the count accessible to the user-specifiable
callbacks, I still provided settings like watchOnInputEvent that accept
a count parameter, but the count is actually tracked externally to
Watched.watch and incremented every time the task is run.

Moreover, there are a few parameters of the watch (the logger and
transitive globs) that cannot be provided via settings. I provide
callback settings like watchOnStart that mirror the WatchConfig
callbacks except that they return a function from Continuous.Arguments
to the needed callback. The Continuous.aggregate function will check if
the watchOnStart setting is set and if it is, will pass in the needed
arguments. Otherwise it will use the default watchOnStart implementation
which simulates the existing behavior by tracking the iteration count in
an AtomicInteger and passing the current count into the user provided
callback. In this way, we are able to provide a number of apis to the
watch process while preserving the default behavior.

To implement #4, I had to change the label of the `watch` attribute key
from "watch" to "watched". This allows `watch compile` to work at the
sbt command line even though it maps to the watchTasks key. The actual
implementation is almost trivial. The difference between an
InputTask[Unit] and a command is very small. The tricky part is that the
actual implementation requires applying mapTask to a delegate task that
overrides the Task's info.postTransform value (which is used to
transform the state after task evaluation). The actual postTransform
function can be shared by the continuous task and continuous command.
There is just a slightly different mechanism for getting to the state
transformation function.
2019-03-30 16:38:56 -07:00
Ethan Atkins ed06e18fab Add InputGraph
This commit adds functionality to traverse the settings graph to find
all of the Inputs settings values for the transitive dependencies of the
task. We can use this to build up the list of globs that we must watch
when we are in a continuous build. Because the Inputs key is a setting,
it is actually quite fast to fetch all the values once the compiled map
is generated (O(2ms) in the scripted tests, though I did find that it
took O(20ms) to generate the compiled map).

One complicating factor is that dynamic tasks do not track any of
their dynamic dependencies. To work around this, I added the
transitiveDependencies key. If one does something like:
foo := {
  val _ = bar / transitiveDependencies
  val _ = baz / transitiveDependencies
  if (System.getProperty("some.prop", "false") == "true") Def.task(bar.value)
  else Def.task(baz.value)
}
then (foo / transitiveDependencies).value will return all of the inputs
and triggers for bar and baz as well as for foo.

To implement transitiveDependencies, I did something fairly similar to
streams where if the setting is referenced, I add a default
implementation. If the default implementation is not present, I fall
back on trying to extract the key from the commandLine. This allows the
user to run `show bar / transitiveDependencies` from the command line
even if `bar / transitiveDependencies` is not defined in the project.

It might be possible to coax transitiveDependencies into a setting, but
then it would have to be eagerly evaluated at project definition time
which might increase start up time too much.  Alternatively, we could
just define this task for every task in the build, but I'm not sure how
expensive that would be. At any rate, it should be straightforward to
make that change without breaking binary compatibility if need be. This
is something to possibly explore before the 1.3 release if there is any
spare time (unlikely).
2019-03-30 16:38:44 -07:00