I realized that all of the data structures that I needed to isolate the
classpath are contained in the AppProvider interface, so there was no
need to use structural reflection on the top class loader.
It turns out that it can take roughly one second to instantiate a
scala.tools.nsc.Global instance for the first time. When sbt is starting
up, it also takes nearly 2 seconds to initialize logging. We can speed
up the boot time by doing these two things concurrently. On my machine,
I saw on average a 500ms decrease in startup time after this change.
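Roughly, the idea looks like this (a minimal, self-contained sketch with
Thread.sleep standing in for the real logging and compiler initialization):

import scala.concurrent.{ Await, Future }
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

object ConcurrentBoot {
  // stand-ins for the real, slow initialization steps
  def newGlobal(): AnyRef = { Thread.sleep(1000); new Object } // ~1s for scala.tools.nsc.Global
  def initLogging(): Unit = Thread.sleep(2000)                 // ~2s for logging setup

  def main(args: Array[String]): Unit = {
    // start instantiating Global in the background while logging initializes
    val pendingGlobal = Future(newGlobal())
    initLogging()
    val global = Await.result(pendingGlobal, Duration.Inf)
    println(s"booted with $global")
  }
}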
I finally realized that the trick is that for non-Cygwin Windows, the
available method on the JLine-wrapped input stream always returns zero.
Unlike on POSIX, however, the read method is interruptible, which means
that we can just spin up a background thread that polls from the input
stream and writes it into a buffer.
I verified that it was no longer necessary to hit <enter> after 'r' to
rerun the continuous command on my Windows VM after this change.
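The polling approach amounts to something like this sketch (not the actual
sbt implementation; the wrapper buffers bytes on a daemon thread so that
available reflects what has been read so far):

import java.io.InputStream
import java.util.concurrent.LinkedBlockingQueue

// Wraps a stream whose available() always returns 0 (JLine on non-Cygwin
// Windows) with a daemon thread that blocks on read() and buffers bytes.
class PollingInputStream(in: InputStream) extends InputStream {
  private[this] val buffer = new LinkedBlockingQueue[Int]
  private[this] val thread = new Thread("stdin-poller") {
    setDaemon(true)
    override def run(): Unit = {
      var b = in.read()
      while (b != -1) { buffer.put(b); b = in.read() }
      buffer.put(-1) // propagate EOF
    }
  }
  thread.start()
  override def available(): Int = buffer.size
  override def read(): Int = buffer.take()
}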
This commit finally fixes #241 by adding support for sbt to either
print a warning or automatically reload the project if the metabuild
sources have changed. To facilitate this, I introduce a new key,
metaBuildSourceOption, which has three options:
1) IgnoreSourceChanges
2) WarnOnSourceChanges
3) ReloadOnSourceChanges
When IgnoreSourceChanges is set, sbt will not check if the meta build sources
have changed. Otherwise, sbt will use the buildStructure / fileInputs to
get the ChangedFiles for the metabuild. If there are any changes, it
will either warn or reload the build depending on the value of
metaBuildSourceOption.
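For example, opting in to automatic reloads would look something like this
in build.sbt (a sketch; scoping the key to Global is my assumption):

// reload the build automatically when the metabuild sources change
Global / metaBuildSourceOption := ReloadOnSourceChanges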
The mechanism for diffing the files is that I add a step to EvaluateTask
where, if the project has been loaded and
metaBuildSourceOption != IgnoreSourceChanges, we evaluate the needReload
task. If we need a reload, we return an error that indicates that a
Reload is necessary. When that error is detected, the MainLoop will
prepend "reload" to the pending commands for the state. Otherwise we
just print a warning and continue.
I benchmarked the overhead of this and it wasn't too bad. I generally
saw it taking 5-20ms to perform the check. Since this is only done once
per task evaluation run, I don't think it's a big deal. When
IgnoreSourceChanges is set, there is O(10us) overhead. If performance
does become a problem, we could add a global watch service and skip the
needReload evaluation if no files have been modified.
I removed the watchTrackMetaBuild key and made it so that the continuous
builds only track the meta build when
metaBuildSourceOption == ReloadOnSourceChanges
The persistentFileStampCache does seem to work pretty well, but in case
users encounter issues, I add a boolean flag that allows the user to
turn this behavior off and always re-stamp every source file in every
task evaluation run.
Previously the persistent attribute map was only reset when the file
event monitor detected a change. This made it possible for the cache to
be inconsistent with the state of the file system*. To fix this, I add an
observer on the file tree repository used by the continuous build that
invalidates the cache entry for any path for which it detects a change.
Invalidating the cache does not stamp the file. That only happens either
when a task asks for the stamp for that file or when the file event
monitor reports an event and we must check if the file was updated or
not.
After this change, touching a source file will not trigger a build
unless the contents of the file actually changes.
I added a test that touches one source file in a project and updates the
content of another. If the file that is merely touched ever triggers a
build, the test fails.
* This could lead to under-compilation because ExternalHooks would not
detect that the file had been updated.
I had tried to be cute and only inject certain tasks if they're actually
used, but that meant that dynamic tasks might not be able to use them.
The new io version removes the PathFinder <-> Glob implicit translations.
It also has a number of small bug fixes related to directory listing via
FileTreeView.
The main project emits a number of deprecation warnings. I've isolated
the deprecation warnings related to Watch to the DeprecatedContinuous
file. I fixed the deprecation warnings where it was straightforward to
do so. After this change, there are three non-watch related deprecation
warnings emitted:
1) Defaults.scala:3760 uses the deprecated InputTask.apply. This seems
fixable, but I'm not in a hurry.
2) oldLoadFailed and oldLastGrep are used by Main. I think this could
just be fixed by removing the deprecation warnings and making them
private[sbt] since they will still be available in the shell.
I previously tried to fix https://github.com/sbt/sbt/issues/4608 in
fc715cab44 by finding the instance of
xsbt.boot.BootFilteredLoader in the classloader hierarchy. This was a
risky approach since it made a lot of assumptions about the classloaders
used to invoke xMain.run. Since the point is to filter out the scala
standard library jar, I reworked things to just find all the parents of
the scala provider loader and then walk the graph from the root
classloader until it finds the classloader that contains the scala
library. If no such classloader exists, it ends up returning the parent
of the scala provider loader.
I also renamed the libraryLoader parameter to scalaProviderLoader since
that is what is actually passed in. It is actually the libraryLoader that
we want to exclude.
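On a simple parent chain, the walk amounts to something like this sketch
(probing for a well-known scala-library class file is my stand-in for the
real detection logic):

import scala.annotation.tailrec

// Walk root-first toward the scala provider loader and return the first
// loader that can see the scala library; since getResource delegates to
// the parent, the first hit is the loader that actually contains the jar.
def scalaLibraryLoader(scalaProviderLoader: ClassLoader): ClassLoader = {
  def hasScalaLibrary(cl: ClassLoader): Boolean =
    cl.getResource("scala/Option.class") != null
  @tailrec def parents(cl: ClassLoader, acc: List[ClassLoader]): List[ClassLoader] =
    if (cl == null) acc else parents(cl.getParent, cl :: acc)
  parents(scalaProviderLoader, Nil) // root first
    .find(hasScalaLibrary)
    .getOrElse(scalaProviderLoader.getParent)
}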
Ref https://github.com/sbt/sbt/issues/4661
local-preloaded-ivy contains dangling ivy.xml without JAR files.
We might include local-preloaded again once we have a preloaded in Maven layout.
I discovered there were a number of places where closing a ClassLoader
didn't work correctly because I was assuming it was a URLClassLoader
when it was actually a ClasspathFilter. I also imported the wrong kind
of URLClassLoader in Run.scala. Finally, I now close the
SbtMetaBuildClassLoader when xMain exits.
I decided that there were too many settings related to file management
that had similar names but did slightly different things. To improve
this, I introduce the ChangedFiles class to sbt.nio.file and switch to
having just two tasks each for file input and output retrieval:
all(Input|Output)Files and
changed(Input|Output)Files. If, for example, changedInputFiles returns
None, that means that either the task has not yet been run or there were
no changes. If there have been any changes, then it will return
Some(changes) and the user can extract the relevant changes that they
are interested in.
The code may be slightly more verbose in a few places, but I think it's
worth it for the conceptual clarity.
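For example, a task that skips its work when nothing has changed might look
roughly like this (a sketch; process is a hypothetical implementation
function):

val myTask = taskKey[Seq[java.nio.file.Path]]("processes its file inputs")
myTask := {
  // None can mean "never run" as well as "no changes", so also check previous
  (myTask.previous, (myTask / changedInputFiles).value) match {
    case (Some(previous), None) => previous
    case _                      => process((myTask / allInputFiles).value)
  }
}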
This commit unifies my previous work for automatically watching the
input files for a task with support for automatically tracking and
cleaning up the output files of a task. The big idea is that users may
want to define tasks that depend on the file outputs of other tasks and
we may not want to run the dependent tasks if the output files of the
parent tasks are unmodified.
For example, suppose we wanted to make a plugin for managing typescript
files. There may be, say, two tasks with the following inputs and
outputs:
val compileTypescript = taskKey[Unit]("shells out to compile typescript files")
compileTypescript / fileInputs += sourceDirectory.value.toGlob / ** / "*.ts"
compileTypescript / fileOutputs += target.value.toGlob / "generated-js" / ** / "*.js"
val minifyGeneratedJS = taskKey[Path]("minifies the js files generated by compileTypescript into a single combined js file")
// minifyGeneratedJS depends on compileTypescript / fileOutputs
Given a clean build, the following should happen:
> minifyGeneratedJS
// compileTypescript is run
// minifyGeneratedJS is run
> minifyGeneratedJS
// no op because nothing changed
> minifyGeneratedJS / clean
// removes the file returned by minifyGeneratedJS.previous
> minifyGeneratedJS
// re-runs minifyGeneratedJS with the previously compiled js artifacts
> compileTypescript / clean
// removes the generated js files
> minifyGeneratedJS
// compileTypescript is run because the previous clean removed the generated js files
// minifyGeneratedJS runs because the artifacts have changed
> clean
// removes the generated js files and the minified js file
> minifyGeneratedJS
// compileTypescript is run because the generated js files were removed
// minifyGeneratedJS is run both because its output was removed and
// because its inputs changed
Moreover, if compileTypescript fails, we want minifyGeneratedJS to fail
as well.
This commit makes this all possible. It adds a number of tasks to
sbt.nio.Keys that deal with the output files. When injecting settings, I
now identify all tasks that return Seq[File], File, Seq[Path] and Path
and create a hidden special task: dynamicFileOutputs: TaskKey[Seq[Path]]
This special task runs the underlying task and converts the result to
Seq[Path]. From there, we can have the tasks like changedOutputPaths
delegate to dynamicFileOutputs which, by proxy, runs the underlying
task. If any task in the input / output chain fails, the entire sequence
fails.
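In spirit, the injected wiring looks something like this for a task foo
returning Seq[File] (a sketch, not the actual generated code):

val foo = taskKey[Seq[java.io.File]]("produces some files")
// hidden task: running it runs foo itself, so a failure in foo propagates
// to anything downstream that asks for its outputs
foo / dynamicFileOutputs := foo.value.map(_.toPath)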
Unlike the fileInputs, we do not register the dynamicFileOutputs or
fileOutputs with continuous watch service so these paths will not
trigger a continuous build if they are modified. Only explicit unmanaged
input sources should do that.
As part of this, I also added automatic generation of a custom clean task for
any task that returns Seq[File], File, Seq[Path] or Path. I also added
aggregation so that clean can be defined in a configuration or project
and it will automatically run clean for all of the tasks that have a
custom clean implementation in that configuration or project. The automatic clean
task will only delete files that are in the task target directory to
avoid accidentally deleting unmanaged files.
This commit refactors things so that the nio apis are located primarily
in the nio package. Because the nio keys are a first class sbt feature,
I had to add import sbt.nio._ and sbt.nio.Keys._ to the autoimports in
BuildUtil.scala
I had some ideas for allowing the user to get a copy of System.in during
a continuous build, but I can't really see a good use case now, so I'm
going to remove it before 1.3.0.
In my recent changes to watch, I have been moving towards a world in
which sbt manages the file inputs and outputs at the task level. The
main idea is that we want to enable a user to specify the inputs and
outputs of a task and have sbt able to track those inputs across
multiple task evaluations. sbt should be able to automatically trigger a
build when the inputs change, and it should also be able to avoid task
evaluation if none of the inputs have changed.
The former case of having sbt automatically watch the file inputs of a
task has been present since watch was refactored. In this commit, I
make it possible for the user to retrieve the lists of new, modified and
deleted files. The user can then avoid task evaluation if none of the
inputs have changed.
To implement this, I inject a number of new settings during project
load if the fileInputs setting is defined for a task. The injected
settings are:
allPathsAndAttributes -- this retrieves all of the paths described by
the fileInputs for the task along with their attributes
fileStamps -- this retrieves all of the file stamps for the files
returned by allPathsAndAttributes
Using these two injected tasks, I also inject a number of derived tasks,
such as allFiles, which returns all of the regular files returned by
allPathsAndAttributes and changedFiles, which returns all of the regular
files that have been modified since the last run.
Using these injected settings, the user is able to write tasks that
avoid evaluation if the inputs haven't changed.
foo / fileInputs += baseDirectory.value.toGlob / ** / "*.scala"
foo := {
  foo.previous match {
    case Some(p) if (foo / changedFiles).value.isEmpty => p
    case _ => fooImpl((foo / allFiles).value)
  }
}
To make this whole mechanism work, I add a private task key:
val fileAttributeMap = taskKey[java.util.HashMap[Path, Stamp]]("...")
This keeps track of the stamps for all of the files that are managed by
sbt. The fileStamps task will first look for the stamp in the attribute
map and, only if it is not present, it will update the cache. This
allows us to ensure that a given file will only be stamped once per task
evaluation run no matter how the file inputs are specified. Moreover, in
a continuous build, I'm able to reuse the attribute map which can
significantly reduce latency because the default file stamping
implementation used by zinc is fairly expensive (it can take anywhere
from 300-1500ms to stamp 5000 8KB source files on my Mac).
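The lookup amounts to a read-through cache, roughly like this sketch
(stampImpl stands in for the actual stamping function; using a
ConcurrentHashMap for thread safety is my choice here):

import java.nio.file.Path
import java.util.concurrent.ConcurrentHashMap

// read-through: each file is stamped at most once per task evaluation run,
// no matter how many fileInputs globs cover it
final class FileStampCache[S](stampImpl: Path => S) {
  private[this] val map = new ConcurrentHashMap[Path, S]
  def apply(path: Path): S = map.computeIfAbsent(path, p => stampImpl(p))
  def invalidate(path: Path): Unit = map.remove(path)
}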
I also renamed some of the watch related keys to be a bit more clear.
This trait may not even survive until 1.4.0. It should definitely not be
public. I got a little overexcited about programming with higher kinded
types when I added it.
The newest version of io repackages a number of classes into the
sbt.nio.* packages. It also changes some of the semantics of glob
related apis. This commit updates all of the usages of the updated apis
within sbt but should have no functional difference.
Since the new watch implementation has yet to be widely deployed, we
should hold off on deprecating the old keys. They could still be
deprecated in a patch release or in 1.4.0.
I noticed in a heap dump of sbt that there were many classloaders for
the scala instance. I then realized that we were making a new
classloader for the scala library on every test run. Even worse, the
ScalaInstanceLoader instance was never closed, which led to a metaspace
leak. I moved the scala instance classloader to the global classloader
cache. Not only will these be correctly cached, they will be closed if
evicted from the cache.
This adds a dependency resolution implementation to LM backed by Coursier.
I had to copy and paste a bunch of code from sbt-coursier-shared to break its dependency on sbt.
`Global / useCoursier := false` or `-Dsbt.coursier=false` can be used to opt out of using Coursier for dependency resolution.
Fixes #4438
This slims down update's UpdateReport by removing evicted modules'
caller information. The larger the graph, the more pronounced the
effect. For example, I saw a report shrink from 5.9MB to 1.1MB as a JSON file.
It was reported in https://github.com/sbt/sbt/issues/4608 that there was
a regression that tests run against scala 2.11 would fail. This was
because the interface loader incorrectly contained the scala library. To
fix this, I needed to find the xsbt.boot.BootFilteredLoader in the
classloading hierarchy and put the sbt testing interface library in
between that loader and the scala library loader.
We noticed that the community build was failing for some projects due to
some class loading issues. My initial approach for detecting the errors
didn't always work because the test framework might wrap the underlying
exception. To fix that, I add the causes to the list of throwables to
scan for class loading related exceptions. I also added
ClassNotFoundException to the list of types to check for. I additionally
added more context to the error message so that it is clearer to the
user what specifically went wrong. The error message is intended to
provide examples that the user can actually paste into the console.
There is also a lot of manual line wrapping that could be improved by
defining paragraphs and then splitting on the jline terminal width. That
could be a useful internal helper function to improve our log messages
in general.
The underlying issue could be addressed by allowing the user to specify
libraries that get excluded from the dependency classpath for layering
purposes. I'm not sure the best way to do that yet and adding that
feature wouldn't fix any existing builds so I think that would be better
handled in 1.4.0.
Prior to this commit, it was difficult to prevent the sbt metabuild
classpath from leaking into the runtime and test classpaths. The biggest
issue is that the test-interface jar was located in the metabuild
classpath. We tried to prevent leakage using the DualClassLoader, but
this was an ugly solution that did not seem to work reliably. The fix is
to modify the actual sbt metabuild classloader provided by the sbt
launcher.
To do this, I add a new classloader SbtMetaClassLoader that isolates the
test-interface jar from the rest of the classpath. I modify xMain to
create a new AppConfiguration that uses this new classloader and
use reflection to invoke the sbt main method using the new classloader.
Not only do I think that this is a much saner solution than DualLoaders,
I accidentally fixed #4575 with this change.
It isn't possible to share the runtime and test layers correctly when
bgCopyClasspath is used because the runtime classpath uses the
dependencies copied to the boot directory while the test classpath uses
the classes in target and .ivy2. Since this is not the default and users
have to opt in to
ClassLoaderLayeringStrategy.ShareRuntimeDependenciesLayerWithTestDependencies,
I think this is fine.
It is possible with the new layering strategies that tests may fail if a
Java package-private class is accessed across classloader layers. This
will result in an IllegalAccessError that is hard to debug. With this
commit, I add an error message that will be displayed if run throws an
IllegalAccessError that suggests that the user try the
ScalaInstance layering strategy or the flat layering strategy.
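For reference, switching strategies is just a setting change, so something
like this sketch should sidestep the layering entirely (assuming the key is
named classLoaderLayeringStrategy):

// build.sbt: use a single flat classloader for tests
Test / classLoaderLayeringStrategy := ClassLoaderLayeringStrategy.Flat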
I noticed that sometimes multiple ClassLoaderCache instances were
created in each configuration. I believe this was due to the use of
inConfig(...)(...) causing multiple caches to be created. Long term, I'm
not sure that taskRepository and classLoaderCache are the right
solutions so I made classLoaderCache private[sbt] as well.
I have noticed on Linux that the file cache updates aren't fast enough
for ExternalHooks. Say you have project b that depends on project a.
With a clean build, if you run b/compile, the file cache may not yet see
the changes to *.class files generated by project a. There are multiple
ways to fix this:
* don't use the file cache for binary products
* use the analysis results to invalidate the cache
* switch over to my hypothetical replacement file system
In the meantime, we should stop spamming users by default.
I wrote this check in a rush and realized that it didn't quite match the
correct glob semantics. The depth parameter is effectively the index of
the array of sorted child directories of the base. That index is
computed with getNameCount - 1, not getNameCount. It is also inclusive,
not exclusive, hence the switch from `<` to `<=`.
This change was motivated by my reviewing the initial change in the
context of the fix to https://github.com/sbt/sbt/issues/4591.
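A sketch of the corrected check under that interpretation:

import java.nio.file.Path

// for base = /b and path = /b/a/c.txt, relativize yields a/c.txt:
// getNameCount == 2, so the index is 1 and must satisfy <= maxDepth
def withinDepth(base: Path, path: Path, maxDepth: Int): Boolean =
  base.relativize(path).getNameCount - 1 <= maxDepth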
This commit changes the default FileTree.Repository to always use a
polling file repository, but one that validates the current file system
results against the cached results. On Windows, we do not validate the cache
because the cache can cause io contention in scripted tests. The
cache does seem to work OK on my VM, but not on AppVeyor for whatever
reason. Validating the cache by default was suggested by @smarter in a
comment in https://github.com/sbt/sbt/issues/4543.
This commit reworks the watch start message so that instead of printing
something like:
[info] [watch] 1. Waiting for source changes... (press 'r' to re-run the command, 'x' to exit sbt or 'enter' to return to the shell)
it instead prints something like:
[info] 1. Monitoring source files for updates...
[info] Project: filesJVM
[info] Command: compile
[info] Options:
[info] <enter>: return to the shell
[info] 'r': repeat the current command
[info] 'x': exit sbt
It will also print which path triggered the build.
Prior to this commit, it was necessary to add breadcrumbs for every
input that is used within a dynamic task. In this commit, I rework the
watch setup so that we can track the dynamic inputs that are used. To
simplify the discussion, I'm going to ignore aggregation and
multi-commands, but they are both supported. To implement this change, I
update the GlobLister.all method to take a second implicit argument:
DynamicInputs. This is effectively a mutable Set of Globs that is
updated every time a task looks up files from a glob. The repository.get
method should already register the glob with the repository. The set of
globs are necessary because the repository may not do any file filtering
so the file event monitor needs to check the input globs to ensure that
the file event is for a file that was actually requested by a task during
evaluation.
* Long term, I plan to add support for lifting tasks into a dynamic task
in a way that records _all_ of the possible dependencies for the task
through each of the dynamic code paths. We should revisit this change to
determine if it's still necessary after that change.
I had needed to add proxy classes for the global FileTreeRepository so
that tasks that called the close method wouldn't actually stop the
monitoring done by the global repository. I realized that it makes a lot
more sense to just not provide direct access to the underlying file tree
repository and let registerGlobalCaches manage its life cycle
instead.
This commit cleans up the approach for transforming the sbt state upon
completion of a task returning State. I add a new approach where a task
can return an instance of StateTransform, which is just a wrapper around
State. I then update EvaluateTask to apply this stateTransform rather
than the (optional) state transformation that may be stored in the Task
info parameter. By requiring that the user return StateTransform rather
than State directly, we ensure that no existing tasks that depend on the
state transformation function embedded in the Task info break. In sbt 2,
I could see the possibility of making this automatic (and probably
removing the state transformation function via attribute).
The problem with using the transformState attribute key is that it is
applied non-deterministically. This means that if you decorate a task
returning State, then the state transformation may or may not be
correctly applied.
I tracked this non-determinism down to the stateTransform
method in EvaluateTask. It iterates through the task result map and
chains all of the defined transformState attribute values. Because the
result is a map, this order is not specified. This chaining is arguably
a bad design because composing State => State functions is not
commutative. Indeed, the problem here was that my state transformation
functions were constant functions, which are obviously non-commutative. I
believe that this logic was likely written under the assumption that
there would be no more than one of these transformations in a given
result map.
I decided that it makes sense to move all of the new watch code out of
the Watched companion object since the Watched trait itself is now
deprecated. I don't really like having the new code in Watched.scala
mixed with the legacy code, so I pulled it all out and moved it into the
Watch object. Since we have to put all of the logic for the Continuous
object in main in order to access the sbt.Keys object, it makes sense to
move the logic out of main-command and into command so that most of the
watch related logic is in the same subproject.
This is a huge refactor of Watched. I produced this through multiple
rewrite iterations and it was too difficult to separate all of the
changes into small individual commits so I, unfortunately, had to make a
massive commit. In general, I have tried to document the source code
extensively both to facilitate reading this commit and to help with
future maintenance.
These changes are quite complicated because they provide a built-in-like
api to a feature that is implemented like a plugin. In particular,
we have to manually do a lot of parsing as well as roll our own
task/setting evaluation because we cannot infer the watch settings at
project build time because we do not know a priori what commands the
user may watch in a given session. The dynamic setting and task
evaluation is mostly confined to the WatchSettings class in Continuous.
It feels dirty to do all of this extraction by hand, but it does seem to
work correctly with scopes.
At a high level this commit does four things:
1) migrate the watch implementation to using the InputGraph to collect
the globs that it needs to monitor during the watch
2) simplify WatchConfig to make it easier for plugin authors to write
their own custom watch implementations
3) allow configuration of the watch settings based on the task(s) that
is/are being run
4) adds an InputTask implementation of watch.
Point #1 is mostly handled by Point #3 since I had to overhaul how _all_
of the watch settings are generated. InputGraph already handles both
transitive inputs and triggers as well as legacy watchSources so not
much additional logic is needed beyond passing the correct scoped keys
into InputGraph.
Point #3 requires some structural changes. The watch settings cannot in
general be defined statically because we don't know a priori what tasks
the user will try to watch. To address this, I added code that will
extract the task keys for all of the commands that we are running. I
then manually extract the relevant settings for each command. Finally, I
aggregate those settings into a single WatchConfig that can be used to
actually implement the watch. The aggregation is generally
straightforward: we run all of the callbacks for each task and choose
the next watch state based on the highest priority Action that is
returned by any of the callbacks.
Because I needed Extracted to pull out the necessary settings, I was
forced to move a lot of logic out of Watched and into a new singleton,
Continuous, that exists in the main project (Watched is in the command
project). The public footprint of Continuous is tiny. Even though I want
to make the watch feature flexible for plugin authors, the
implementation and api remain a moving target so I do not want to be
limited by future binary compatibility requirements. Anyone who wants to
live dangerously can access the private[sbt] apis via reflection or by
adding custom code to the sbt package in their plugin (a technique I've
used in CloseWatch).
Point #2 is addressed by removing the count and lastStatus from the
WatchConfig callbacks. While these parameters can be useful, they are
not necessary to implement the semantics of a watch. Moreover, a status
boolean isn't really that useful and the sbt task engine makes it very
difficult to actually extract the previous result of the tasks that were
run. After this refactor, WatchConfig has a simpler api. There are fewer
callbacks to implement and the signatures are simpler. To preserve the
_functionality_ of making the count accessible to the user-specified
callbacks, I still provide settings like watchOnInputEvent that accept
a count parameter, but the count is actually tracked externally to
Watched.watch and incremented every time the task is run.
Moreover, there are a few parameters of the watch, the logger and the
transitive globs, that cannot be provided via settings. I provide
callback settings like watchOnStart that mirror the WatchConfig
callbacks except that they return a function from Continuous.Arguments
to the needed callback. The Continuous.aggregate function will check if
the watchOnStart setting is set and if it is, will pass in the needed
arguments. Otherwise it will use the default watchOnStart implementation
which simulates the existing behavior by tracking the iteration count in
an AtomicInteger and passing the current count into the user provided
callback. In this way, we are able to provide a number of apis to the
watch process while preserving the default behavior.
To implement #4, I had to change the label of the `watch` attribute key
from "watch" to "watched". This allows `watch compile` to work at the
sbt command line even thought it maps to the watchTasks key. The actual
implementation is almost trivial. The difference between an
InputTask[Unit] and a command is very small. The tricky part is that the
actual implementation requires applying mapTask to a delegate task that
overrides the Task's info.postTransform value (which is used to
transform the state after task evaluation). The actual postTransform
function can be shared by the continuous task and continuous command.
There is just a slightly different mechanism for getting to the state
transformation function.
This commit adds functionality to traverse the settings graph to find
all of the Inputs settings values for the transitive dependencies of the
task. We can use this to build up the list of globs that we must watch
when we are in a continuous build. Because the Inputs key is a setting,
it is actually quite fast to fetch all the values once the compiled map
is generated (O(2ms) in the scripted tests, though I did find that it
took O(20ms) to generate the compiled map).
One complicating factor is that dynamic tasks do not track any of
their dynamic dependencies. To work around this, I added the
transitiveDependencies key. If one does something like:
foo := Def.taskDyn {
  val _ = bar / transitiveDependencies
  val _ = baz / transitiveDependencies
  if (System.getProperty("some.prop", "false") == "true") Def.task(bar.value)
  else Def.task(baz.value)
}.value
then (foo / transitiveDependencies).value will return all of the inputs
and triggers for bar and baz as well as for foo.
To implement transitiveDependencies, I did something fairly similar to
streams where if the setting is referenced, I add a default
implementation. If the default implementation is not present, I fall
back on trying to extract the key from the commandLine. This allows the
user to run `show bar / transitiveDependencies` from the command line
even if `bar / transitiveDependencies` is not defined in the project.
It might be possible to coax transitiveDependencies into a setting, but
then it would have to be eagerly evaluated at project definition time
which might increase start up time too much. Alternatively, we could
just define this task for every task in the build, but I'm not sure how
expensive that would be. At any rate, it should be straightforward to
make that change without breaking binary compatibility if need be. This
is something to possibly explore before the 1.3 release if there is any
spare time (unlikely).
In order to walk the full dependency graph of a task, we need to know
the internal class path dependency configurations. Suppose that we have
projects a and b where b depends on *->compile in a. If we want to find
all of the inputs for b, then if we find that there is a dependency on
b / Compile / internalDependencyClasspath, then we must add a / Compile /
internalDependencyClasspath to the list of dependencies for the task.
I copied the setup of one of the other scripted tests that was
introduced to test the track-internal-dependencies feature to write a
basic scripted test for this new key and implementation.
This adds two new tasks, fileInputs and watchTriggers, that will be used by sbt
both to fetch files within a task as well as to create watch sources for
continuous builds. In a subsequent commit, I will add a task for a
command that will traverse the task dependency graph to find all of the
input task dependency scopes. The idea is to make it possible to easily
and accurately specify the watch sources for a task. For example, we'd
be able to write something like:
val foo = taskKey[Unit]("print text file contents")
foo / fileInputs := baseDirectory.value ** "*.txt"
foo := {
  (foo / fileInputs).value.all.foreach(f =>
    println(s"$f:\n${new String(java.nio.file.Files.readAllBytes(f.toPath))}"))
}
If the user then runs `~foo`, then the task should trigger if the user
modifies any file with the "txt" extension in the project directory.
Today, the user would have to do something like:
val fooInputs = settingKey[Seq[Source]]("the input files for foo")
fooInputs := baseDirectory.value ** "*.txt"
val foo = taskKey[Unit]("print text file contents")
foo := {
  fooInputs.value.all.foreach(f =>
    println(s"$f:\n${new String(java.nio.file.Files.readAllBytes(f.toPath))}"))
}
watchSources ++= fooInputs.value
or even worse:
val foo = taskKey[Unit]("print text file contents")
foo := {
  (baseDirectory.value ** "*.txt").all.foreach(f =>
    println(s"$f:\n${new String(java.nio.file.Files.readAllBytes(f.toPath))}"))
}
watchSources ++= baseDirectory.value ** "*.txt"
which makes it possible for the watchSources and the task sources to get
out of sync.
For consistency, I also renamed the `outputs` key to `fileOutputs`.
The supershell output is distracting in CI. I added a system property,
sbt.ci, to explicitly set whether or not sbt is running in a CI build.
It was not at all obvious to me whether the BUILD_NUMBER or CI
environment variables were set on Travis or AppVeyor.
Fixes #4582. #4443 introduced a perf enhancement of excluding sbt from the metabuild, and instead appending the list of JARs resolved by the sbt launcher to the classpath.
This strategy worked in most cases, but it seems like some plugins explicitly depend on the IO module. In those cases, the old IO would come before the new IO in the classpath ordering, resulting in a "Symbol X is missing from classpath" error.
This fixes the issue by excluding all modules whose organization is `org.scala-sbt`.
As an escape hatch, I am adding a new key, `reresolveSbtArtifacts`, which can be used to opt out of this behavior.
The usingTerminal method synchronizes on the JLine object, which can
lead to deadlock if multiple threads call it. When we just want to read
the attributes of the terminal but not modify it, there doesn't seem to
be any reason to use a lock.