It has long been a frustration of mine that a sequence of multiple
commands must be prefixed with a ';'. In this commit, I relax that
restriction.
I had to reorder the command definitions so that multi comes before act.
This was because if the multi command did not have a leading semicolon,
then it would be handled by the action parser before the multi command
parser had a shot at it. Sadness ensued.
Presently the multi command parser doesn't work correctly if one of the
commands includes a string literal. For example, suppose that there is
an input task named "bash" that shells out and runs the input.
Then the following does not work with the current multi command parser:
; bash "rm target/classes/Foo.class; touch src/main/scala/Foo.scala"; comple
Note that this is a real use case that has caused me issues in the past.
The problem is that the semicolon inside of the quote gets interpreted
as a command separator token. To fix this, I rework the parser so that
it consumes string literals and doesn't modify them. By using
StringEscapable, I allow the string to contain quotation marks itself.
I couldn't write a scripted test for this because in a command like
`; foo "bar"; baz`, the quotes around bar seem to get stripped. This
could be fixed by adding an alternative to StringEscapable that matches
an escaped string, but that is more work than I'm willing to do right
now.
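For illustration, here is a minimal sketch, using sbt's parser
combinators, of how such a parser can treat quoted segments as opaque;
it is not the actual implementation, and the names `quoted`, `plain`,
`command`, and `multiCommand` are mine:

```scala
import sbt.internal.util.complete.DefaultParsers._
import sbt.internal.util.complete.Parser

// Sketch only: a command body is a run of quoted string literals (consumed whole,
// so an embedded ';' no longer splits the command) or of plain characters that are
// neither ';' nor '"'. Whitespace handling is elided for brevity.
val quoted: Parser[String] = StringEscapable.map(s => s""""$s"""") // re-quote so the command passes through unchanged
val plain: Parser[String] = charClass(c => c != ';' && c != '"', "command character").+.string
val command: Parser[String] = (quoted | plain).+.map(_.mkString.trim)
val multiCommand: Parser[Seq[String]] =
  token(';').? ~> (command ~ (token(';') ~> command).*).map { case (head, tail) => head +: tail }
```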
I discovered that when I ran multi-commands with '~', if there was a
space between the ';' and the command, the parsing of the command would
fail and the watch would abort. To fix this, I refactor Watched.watch to
use the multi command parser and, if that parser fails, fall back to a
single command.
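Concretely, the fallback has roughly this shape (a hedged sketch
assuming the multiCommand parser above; `parseWatchLine` is an
illustrative name, not the actual method):

```scala
// Sketch only: run the typed line through the multi command parser, which
// tolerates forms like "; clean; compile"; if that fails, fall back to treating
// the whole line as a single command rather than aborting the watch.
def parseWatchLine(line: String): Seq[String] =
  Parser.parse(line, multiCommand) match {
    case Right(commands) => commands
    case Left(_)         => Seq(line.trim)
  }
```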
Prior to this commit, there was no unit testing of the parser for
multiple commands. I wanted to make some improvements to the parser, so
I reworked the implementation to be testable. This change also allows
the multiParserImpl method to be shared with Watched.watch, which I will
also update in a subsequent commit.
There also were no explicit scripted tests for multiple commands, so I
added one that I will augment in later commits.
In #4446, @japgolly reported that in some projects, if a parent project
was broken, then '~' would immediately exit upon startup. I tracked it
down to this managed sources filter. The idea of this filter is to avoid
getting stuck in a build loop if managedSources writes into an unmanaged
source directory. If the (managedSources in ThisScope).value line
failed, however, it would cause the watchSources task, and by delegation
the watchTransitiveSources task, to fail. The fix is to only create this
filter if the managedSources task succeeds.
I'm not 100% sure if we shouldn't just get rid of this filter entirely
and just document that '~' will probably loop if a build writes the
result of managedSources into an unmanaged source directory.
Fixes #4437
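The shape of the guard is roughly the following hedged sketch;
`managedSourceFilter` is an illustrative name, and the real code builds
a watch source filter rather than a plain predicate:

```scala
import sbt._
import sbt.Keys._

// Sketch only: read managedSources through .result so a broken project yields an
// empty filter instead of failing watchSources (and, by delegation,
// watchTransitiveSources).
val managedSourceFilter = Def.task[File => Boolean] {
  (managedSources in ThisScope).result.value match {
    case Value(files) => files.toSet          // Set[File] is a File => Boolean
    case Inc(_)       => (_: File) => false   // managedSources failed: filter nothing
  }
}
```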
Until now, sbt was resolved twice: once by the launcher, and a second
time by the metabuild. This excludes sbt from the metabuild graph and
instead uses the app classpath from the launcher.
Fixes #3436
This implements the isMetaBuild setting, which is explicitly for the
meta build only, unlike the sbtPlugin setting, which can be used for
both the meta build and plugin development purposes.
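As a hedged usage sketch (the scalacOptions tweak is just an arbitrary
example, not part of this change), a build or plugin can now branch on
the new setting instead of overloading sbtPlugin:

```scala
// Sketch only: apply something to the metabuild alone without affecting builds
// that merely set sbtPlugin := true for plugin development.
Compile / scalacOptions ++= {
  if (isMetaBuild.value) Seq("-deprecation") else Seq.empty
}
```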
I noticed that when using the latest nightly, triggered execution would
fail to work if I switched projects with, e.g., ++2.10.7. This was
because the background thread that filled the file cache was incorrectly
shut down.
To fix this, the exit hook just needs to close whatever view is cached
in the globalFileTreeView attribute rather than the view created by the
method.
After making this change and publishing a local SNAPSHOT build, I was
able to switch projects with ++ and have triggeredExecution continue to
work.
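The shape of the fix is roughly the following hedged sketch with
simplified types; `GlobalFileTreeView` and its members are illustrative
names, not the actual internals:

```scala
import java.util.concurrent.atomic.AtomicReference

// Sketch only: keep the currently cached view in one mutable slot and have the
// exit hook close whatever is in that slot, not the particular view the
// enclosing method happened to create.
object GlobalFileTreeView {
  private val current = new AtomicReference[Option[AutoCloseable]](None)
  def set(view: AutoCloseable): Unit = current.set(Some(view))
  def addCloseHook(state: sbt.State): sbt.State =
    state.addExitHook(current.get().foreach(_.close()))
}
```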
On Windows* it was possible to get into a loop where the build would
continually restart because, for some reason, the build.sbt file would
get touched during the test (I did not see this behavior on OSX).
Thankfully,
the repository keeps track of the file hash and when we detect that the
build file has been updated, we check the file hash to see if it
actually changed.
Note that had this bug shipped, it would have been fixable by overriding
the watchOnEvent task in user builds.
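The check itself is simple; here is a hedged sketch independent of the
repository's actual types (`contentReallyChanged` is an illustrative
name):

```scala
import java.nio.file.{ Files, Path }
import java.security.MessageDigest

// Sketch only: when an update event arrives for the build file, compare a content
// hash against the last one recorded and only restart the build if the bytes
// really changed (a bare touch leaves the hash identical).
def contentReallyChanged(buildFile: Path, previousHash: Option[String]): Boolean = {
  val digest = MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(buildFile))
  val currentHash = digest.map("%02x".format(_)).mkString
  !previousHash.contains(currentHash)
}
```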
The loop would occur if I ran ~filesJVM/test in
https://github.com/swoval/swoval. It would not occur if I ran
test:compile, so the fact that the build file is being touched seems
to be related to the test run itself.
For whatever reason, I couldn't get jline to work on Windows, so I'm
disabling the re-run with 'r' feature. This can almost surely be fixed,
but the way I was invoking jline was blocking the continuous build from
exiting when the user pressed enter.
It was possible that on startup, when this function was first invoked,
the default boot commands were present. This was a problem because the
global file repository is instantiated using the value of this task.
When we start a continuous build, this task gets evaluated again.
When sbt is started without an explicit task list, the task is
implicitly shell, as indicated by the command "iflast shell". We can use
this to determine whether or not to use the global file system cache.
Ideally we use the FileTreeRepository for interactive sessions by
default. A continuous build is effectively interactive, so I'd like that
case to also use the file tree repository. To avoid breaking scripted
tests, many of which implicitly expect file tree changes to be
instantaneously available, we set interactive to true only if we are
not in a scripted run; a scripted run can be detected by checking
whether the commands contain "setUpScripted".
Sometimes a user may want to rerun their task even if the source files
haven't changed. Presently this is a little annoying because you have to
hit enter to stop the build and then up arrow or <ctrl+r> plus enter to
rebuild. It's more convenient to just be able to press the 'r' key to
re-run the task.
To implement this, I had to make the watch task set up a jline terminal
so that System.in would be character buffered instead of line buffered.
Furthermore, I took advantage of the NonBlockingInputStream
implementation provided by jline to wrap System.in. This was necessary
because even with the jline terminal, System.in.available doesn't return
> 0 until a newline character is entered. Instead, the
NonBlockingInputStream does provide a peek api with a timeout that will
return the next unread key off of System.in if there is one available.
This can be used to proxy available in the WrappedNonBlockingInputStream.
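A hedged sketch of the wrapper, assuming jline 2's
NonBlockingInputStream; the class name and the 10ms timeout are
illustrative:

```scala
import java.io.InputStream
import jline.internal.NonBlockingInputStream

// Sketch only: peek with a short timeout so available() reports a pending key
// (such as 'r') without waiting for a newline.
class WrappedNonBlockingInputStream(in: InputStream) extends InputStream {
  private[this] val nonBlocking = new NonBlockingInputStream(in, true)
  override def read(): Int = nonBlocking.read()
  override def available(): Int =
    if (nonBlocking.peek(10L) >= 0) 1 else 0 // peek returns the key, or a negative value on timeout/EOF
}
```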
To ensure maximum user flexibility, I also update the watchHandleInput key to
take an InputStream and return an Action. This setting will now receive
the wrapped System.in, which will allow the user to create their own
keybindings for watch actions without needing to use jline themselves.
Future work might make it more straightforward to go back to a line
buffered input if that is what the user desires.
For projects with a large number of files, zinc has to do a lot of work
to determine which source files and binaries have changed since the last
build. In a very simple project with 5000 source files, it takes roughly
750ms to do a no-op compile using the default incremental compiler
options. After this change, it takes about 200ms. Of those 200ms, 50ms
are due to the update task, which does a partial project resolution*.
The implementation is straightforward since zinc already provides an api
for overriding the built-in change detection strategy. In a previous
commit, I updated the sources task to return StampedFile rather than
regular java.io.File instances. To compute all of the source file
stamps, we simply list the sources; if a source is in fact an instance
of StampedFile, we don't need to compute its stamp, otherwise we
generate a StampedFile on the fly. After building a map of stamped files
for both the source files and all of the binary dependencies, we simply
diff these maps with the previous results in the changedSources,
changedBinaries and removedProducts methods.
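With simplified types (zinc's real hooks work in terms of its Stamp
abstractions, and `diffStamps` is an illustrative name), the diffing
amounts to:

```scala
import java.io.File

// Sketch only: reuse the stamp carried by a StampedFile when present, otherwise
// compute one on the fly, then diff the resulting map against the previous run.
def diffStamps(
    sources: Seq[File],
    previous: Map[File, String],
    stampOf: File => String // hash or lastModified, depending on configuration
): (Set[File], Set[File]) = {
  val current = sources.map(f => f -> stampOf(f)).toMap
  val changed = current.collect { case (f, s) if !previous.get(f).contains(s) => f }.toSet
  val removed = previous.keySet -- current.keySet
  (changed, removed)
}
```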
The new ExternalHooks are easily disabled by setting
`externalHooks := _ => None`
in the project build.
In the future, I could see moving ExternalHooks into the zinc project so
that other tools like bloop or mill could use them.
* I think this delay could be eliminated by caching the UpdateResult so
long as the project doesn't depend on any snapshot libraries. For a
project with a single source, the no-op compile takes O(50ms) so caching
the project resolution would make compilation start nearly
instantaneous.
I realized that using the cache has the potential to cause issues for
batch processing in CI if some tasks assume that a file created by one
task will immediately be visible to the other. With the cache, there is
typically an O(10ms) latency between a file being created and appearing
in the cache (at least on OSX). When manually running commands, that
latency doesn't matter.
It is not always possible to monitor a directory using OS file system
events. For example, inotify does not work with nfs. To work around
this, I add support for a hybrid FileTreeViewConfig that caches a
portion of the file system and monitors it with OS file system
notifications, while polling a subset of the directories. When we query
the view using list or listEntries, we will actually query the file
system for the polling directories while we will read from the cache for
the remainder. When we are not in a continuous build (~ *), there is no
polling of the pollingDirectories but the cache will continue to update
the regular directories in the background. When we are in a continuous
build, we use a PollingWatchService to poll the pollingDirectories and
continue to use the regular repository callbacks for the other
directories.
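A hedged sketch of the dispatch with simplified interfaces (the real
FileTreeViewConfig and view types are richer; `HybridFileTreeView` is an
illustrative name):

```scala
import java.nio.file.Path

// Sketch only: queries under a polling directory go straight to the file system,
// while everything else is answered from the event-driven cache.
class HybridFileTreeView(
    cachedList: Path => Seq[Path],     // backed by the OS-notification cache
    pollList: Path => Seq[Path],       // direct listing, e.g. for nfs mounts
    pollingDirectories: Seq[Path]
) {
  def list(path: Path): Seq[Path] =
    if (pollingDirectories.exists(dir => path.startsWith(dir))) pollList(path)
    else cachedList(path)
}
```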
I suspect that #4179 may be resolved by adding the directories for which
monitoring is not working to the pollingDirectories task.