Commit Graph

7332 Commits

Author SHA1 Message Date
eugene yokota d4df289f2d
Merge pull request #4867 from eatkins/watch-trigger
Watch trigger
2019-07-15 21:40:28 -04:00
Ethan Atkins 6c4e23f77c Only persist file stamps in turbo mode
The use of the persistent file stamp cache between watch runs didn't
seem to cause any issues, but there was some chance of inconsistency
between the file stamp cache and the file system, so it makes sense to
put it behind the turbo flag.

After changing the default, the watch/on-change scripted test started
failing. It turned out that the file stamp cache managed by the watch
process was not pre-filled by task evaluation, so the first time a
source file was modified, it was treated as a creation regardless of
whether or not it actually was.

To fix this, I add logic to pre-fill the watch file stamp cache if we
are _not_ persisting the file stamps between runs.

I ran a before-and-after comparison with the scala build performance
benchmark tool; setting the watchPersistFileStamps key to true reduced
the median run time by about 200ms in the non-turbo case.
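
For reference, a minimal build.sbt sketch (the key is the one named
above; scoping it to ThisBuild is an assumption, and after this commit
turbo mode implies the same behavior):

  // opt back into the persistent file stamp cache without enabling the rest of turbo
  ThisBuild / watchPersistFileStamps := true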
2019-07-15 17:59:14 -07:00
Ethan Atkins 5e374a8e7d Move onEvent callback definition
It makes the file more readable to me to have this definition below the
definition of the FileEventMonitor.
2019-07-15 14:21:14 -07:00
Ethan Atkins 272508596a Use one observer for all aggregated watch tasks
There was a bug where sometimes a source file change would not trigger a
new build if the change occurred during a build. Based on the logs, it
seemed to be because a number of redundant events were generated for the
same path and they triggered the anti-entropy constraint of the file
event monitor.

To fix this, I consolidated a number of observers of the global file
tree repository into a single observer. This way, I am able to ensure
that only one event is generated per file event.

I also reworked the onEvent callback to only stamp the file once. It
was previously stamping the modified source file once for each of the
aggregated tasks. In the sbt project, running `~compile` meant that we
were stamping a source file 22 times whenever it changed.

This also uncovered a zinc issue. Zinc computes and writes the hash of
the source file to the analysis file after compilation has completed.
If a source file is modified during compilation, the new hash is
written to the analysis file even though the compilation may have run
against the previous version of the file. Zinc will then refuse to
re-compile that file until another change is made.

I manually verified that in the sbt project, before this change, if I
ran `~compile` and modified a file during compilation, no event was
triggered (though there was a log message about the event being
dropped due to the anti-entropy constraint). After this change,
modifying a file during compilation seemed to always trigger a build,
but due to the zinc bug, it didn't always re-compile.
2019-07-15 14:21:14 -07:00
eugene yokota 9b7ed84953
Merge pull request #4866 from eed3si9n/wip/bump
bump io, util, lm, and zinc
2019-07-15 15:36:57 -04:00
Eugene Yokota 0d1ce8ddcf bump io, util, lm, and zinc
Fixes #4841
Fixes #4011
2019-07-15 14:52:35 -04:00
eugene yokota a6da4b5b90
Merge pull request #4862 from eatkins/fix-warnings
Fix warnings
2019-07-15 12:57:16 -04:00
eugene yokota 82299c89ed
Merge pull request #4864 from eed3si9n/wip/developing
Split some of the developing related docs to DEVELOPING.md
2019-07-15 12:49:50 -04:00
Eugene Yokota 55d1cbbfe2 Split some of the developing related docs to DEVELOPING.md 2019-07-15 12:39:41 -04:00
eugene yokota e2921e70d2
Merge pull request #4861 from eatkins/reload-multi-commands
Reload multi commands
2019-07-13 20:11:23 -04:00
Ethan Atkins 055d7cd626 Remove unneeded cast
This was causing an "abstract type pattern T is unchecked since it is
eliminated by erasure" warning. The cast was unneeded because
store.get[T] returns Option[(T, Long)]. I'm surprised that the
compiler complained about this.
2019-07-13 15:35:27 -07:00
Ethan Atkins f2c8d4f436 Fix implicit numeric widening warning
I noticed in CI that a warning was printed for this file about implicit
numeric widening in the GigaBytes case.
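
The fix itself isn't shown here; purely as a hypothetical illustration
of this class of warning and its usual remedy (the names below are
made up, not from this commit):

  // an Int silently widened to Long while computing a byte count triggers the warning
  val gigaBytes: Int = 2
  val bytes: Long = gigaBytes.toLong * 1024L * 1024L * 1024L // explicit .toLong avoids it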
2019-07-13 15:20:39 -07:00
Ethan Atkins a071ce8224 Handle multi-command with reload correctly
@olegych reported that sbt would silently swallow the 'compile' command
in the multi-command 'run;compile;reload'. I tracked this down to the
build source check. When the build has
Global / onChangedBuildSource := ReloadOnSourceChanges, the check build
sources command returns a new state with "reload" prefixed. To actually
perform the reload, I returned this modified state with the prefixed
reload command.

There were two problems with this:
1) In the auto-reload case, the current command was not run after the
   reload
2) If the multi-command contained reload, the auto-reload check would
   have a false positive which triggered the bug in (1)

To fix this, I clear out the remaining commands before I run the check
command. That way, we know that if the remaining commands contain a
reload, then it is an auto-reload. We then prefix the state with both
the reload and the current command.

I updated the scripted test for auto-reload to handle multi commands
containing reload.
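
For context, the reported setup and input look like this (both lines
are quoted from the description above):

  // in build.sbt
  Global / onChangedBuildSource := ReloadOnSourceChanges

  // entered at the sbt shell
  run;compile;reload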
2019-07-13 11:18:56 -07:00
Ethan Atkins 1b0159c547 Remove case in flatMap
I didn't notice that this was triggering a warning. I think at some
point I was actually doing something in the pattern match.
2019-07-13 10:46:25 -07:00
Ethan Atkins 8d20bd4c94
Merge pull request #4859 from eatkins/watch-defaults
Rework watch options
2019-07-12 15:17:54 -07:00
Ethan Atkins 263f00f3b2 Rework watch options
In this commit, I both restore some sbt 1.2.8 behavior and enhance the
API for setting keyboard shortcuts in watch. I change the default start
message to just show the watch count, the tasks being monitored, and,
on a new line, the instructions to terminate the watch or show more
options.

Here's what it looks like:
[info] 1. Monitoring source files for spark/compile...
[info]    Press <enter> to interrupt or '?' for more options.
?
[info] Options:
[info]   <enter>  : interrupt (exits sbt in batch mode)
[info]   <ctrl-d> : interrupt (exits sbt in batch mode)
[info]   'r'      : re-run the command
[info]   's'      : return to shell
[info]   'q'      : quit sbt
[info]   '?'      : print options

I also made it so that the new options can be added (and old options
removed) with the watchInputOptions key. For example, to add an option
to reload the build with the key 'l', you could add
ThisBuild / watchInputOptions += Watch.InputOption('l', "reload", Watch.Reload)
to your global build.sbt.

After adding that to my global ~/.sbt/1.0/global.sbt file, the output
of '?' became:
[info] Options:
[info]   <ctrl-d> : interrupt (exits sbt in batch mode)
[info]   <enter>  : interrupt (exits sbt in batch mode)
[info]   '?'      : print options
[info]   'l'      : reload
[info]   'q'      : quit sbt
[info]   'r'      : re-run the command
[info]   's'      : return to shell
2019-07-12 14:10:51 -07:00
eugene yokota 680659210f
Merge pull request #4848 from eatkins/background-copy-hash
Use last modified instead of hash
2019-07-12 15:24:09 -04:00
eugene yokota 3301bce3b8
Merge pull request #4850 from eatkins/in-memory-cache-store
Add support for in memory cache store
2019-07-12 15:23:40 -04:00
eugene yokota 2af6ad5713
Merge pull request #4858 from eed3si9n/wip/thisbuild
scope the reference of useSuperShell to ThisBuild
2019-07-12 15:21:57 -04:00
eugene yokota 5c3eff52d4
Merge pull request #4855 from eed3si9n/wip/credentials
add allCredentials to emulate credential registration
2019-07-12 15:21:36 -04:00
Eugene Yokota 00f7d1fab5 scope the reference of useSuperShell to ThisBuild
Fixes #4800
2019-07-12 11:34:39 -04:00
Eugene Yokota 9755234a16 address review 2019-07-12 10:52:02 -04:00
Ethan Atkins 0172d118af Add parser for file size
At the suggestion of @eed3si9n, instead of specifying the file cache
size in bytes, we now specify it in a formatted string. For example,
instead of specifying 128 megabytes in bytes (134217728), we can specify
it with the string "128M".
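
A configuration sketch (the key name fileCacheSize is taken from the
cache store commit below; scoping it to ThisBuild is an assumption):

  // equivalent to specifying 134217728 bytes
  ThisBuild / fileCacheSize := "128M"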
2019-07-11 17:45:16 -07:00
Ethan Atkins cad89d17a9 Add support for in memory cache store
It can be quite slow to read and parse a large json file. Often, we are
reading and writing the same file over and over even though it isn't
actually changing. This is particularly noticeable with the
UpdateReport*. To speed this up, I introduce a global cache that can be
used to read values from a CacheStore. When using the cache, I've seen
the time for the update task drop from about 200ms to about 1ms. This
ends up being a 400ms time savings for test because update is called for
both Compile / compile and Test / compile.

The way that this works is that I add a new abstraction,
CacheStoreFactoryFactory, which is the most enterprise java thing I've
ever written. We store a CacheStoreFactoryFactory in the sbt State.
When we create the Streams for a task, we build its cacheStoreFactory
field using the CacheStoreFactoryFactory. The generated
CacheStoreFactory may or may not refer to a global cache.

The CacheStoreFactoryFactory may produce CacheStoreFactory instances
that delegate to a Caffeine cache with a max size parameter that is
specified in bytes by the fileCacheSize setting (which can also be set
with -Dsbt.file.cache.size). The size of each cache entry is estimated
from the size of its contents on disk. Since we are generally storing
things in the cache that are serialized as json, I figure that this
should be a reasonable estimate. I set the default max cache size to
128MB, which is plenty of space for the previous cache entries of most
projects. If the size is set to 0, the CacheStoreFactoryFactory
generates a regular DirectoryStoreFactory.

To ensure that the cache accurately reflects the on-disk state of the
previous cache (or of other caches using a CacheStore), the Caffeine
cache stores the last modified time of the file whose contents it
should represent. If there is a discrepancy in the last modified times
(which would happen if, say, clean has been run), then the value is
read from disk even if it hasn't changed.
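
A rough, self-contained sketch of that validation idea (illustrative
only, not sbt's implementation; the real cache also weighs entries by
their on-disk size rather than simply capping the entry count):

  import java.nio.file.{ Files, Path }
  import com.github.benmanes.caffeine.cache.{ Cache, Caffeine }

  object LastModifiedValidatedCache {
    // cache file contents together with the last modified time they were read at
    private val cache: Cache[Path, (Long, Array[Byte])] =
      Caffeine.newBuilder().maximumSize(1024).build[Path, (Long, Array[Byte])]()

    def read(file: Path): Array[Byte] = {
      val lastModified = Files.getLastModifiedTime(file).toMillis
      cache.getIfPresent(file) match {
        case (`lastModified`, bytes) => bytes // entry still matches the file on disk
        case _ => // missing or stale (e.g. after clean): re-read from disk
          val bytes = Files.readAllBytes(file)
          cache.put(file, (lastModified, bytes))
          bytes
      }
    }
  }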

* With the following build.sbt file, it takes roughly 200ms to read and
parse the update report on my computer:

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.3"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.1"

This is because spark-sql has an enormous number of dependencies and the
update report ends up being 3MB.
2019-07-11 17:45:16 -07:00
Eugene Yokota c31e0b6b55 add allCredentials to emulate credential registration
Fixes #4802

For Ivy integration, sbt uses the credentials task in a peculiar way. 9fa25de84c/main/src/main/scala/sbt/Defaults.scala (L2271-L2275)

This lets the build user put the `credentials` task in various places, like the metabuild or the root project, but they would all act as if they were scoped globally. This PR adds an `allCredentials` task to emulate that behavior and pass the credentials into lm-coursier.
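
For illustration, a typical registration that allCredentials is meant
to aggregate (the realm and host below are placeholders, not values
from this PR):

  credentials += Credentials("Some Realm", "artifacts.example.com", "username", "password")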
2019-07-11 14:15:33 -04:00
eugene yokota d9af39c90c
Merge pull request #4851 from eatkins/more-typesafe-releases
Rename typesafe release resolver in sbt build
2019-07-09 19:14:49 -04:00
Ethan Atkins a5db9e86d7 Rename typesafe release resolver in sbt build
I was still seeing a number of warnings in the sbt project itself in
spite of 9bb88cd342. This made those go
away.
2019-07-09 14:29:44 -07:00
eugene yokota e05198a8b8
Merge pull request #4849 from eatkins/typesafe-repo-name
Rename typesafeRelease Resolver
2019-07-09 09:02:46 -04:00
Ethan Atkins 9bb88cd342 Rename typesafeRelease Resolver
This Resolver had the same name as the typesafe ivy resolver specified
in the launcher boot.properties. It was creating a number of verbose
warnings about having multiple resolvers with the same name. I noticed
that the ivy pattern is slightly different for the boot resolver with
this name. It didn't seem to be causing any problems to have both
resolvers.
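
As a sketch of the general fix (the resolver name and URL below are
illustrative, not the exact ones from sbt's build):

  // give the build-level resolver a name that no longer collides with the
  // "typesafe-ivy-releases" resolver defined in the launcher's boot.properties
  resolvers += Resolver.url(
    "typesafe-sbt-ivy-releases",
    url("https://repo.typesafe.com/typesafe/ivy-releases/")
  )(Resolver.ivyStylePatterns)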

Fixes #4839
2019-07-08 19:22:14 -07:00
Ethan Atkins a368bf7026 Use last modified instead of hash
I noticed that, for a simple spark project, evaluating the test task
was faster than running `run` when both tasks evaluated the same code
block. I tracked this down to the BackgroundJobService.copyClasspath
method. This method was hashing the contents of all of the jar files
in the build. On my computer, this took 600ms (for context, the total
run time of the `run` task was around 1.2 seconds, which included about
150ms of scala compiling and 350ms of time in the main method). If we
use the last modified time instead, it drops down to 5-10ms. As
predicted, the total runtime of `run` in this project dropped down to
600ms, which was on par with `test`.

I am not sure why a hash was used rather than last modified in the
first place, so I reworked things in such a way that, by default, sbt
will use a hash, but if turbo mode is on, it will use the last modified
time instead. We can revisit the default later.
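
A build-level sketch of opting in (assuming turbo is set for the whole
build, which is how it is typically enabled):

  // with turbo on, copyClasspath compares last modified times instead of hashing jars
  ThisBuild / turbo := true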
2019-07-08 17:15:24 -07:00
eugene yokota 8413e259e1
Merge pull request #4838 from eatkins/multi-cleanup
Multi cleanup
2019-07-08 07:06:31 -04:00
eugene yokota fd87c34cde
Merge pull request #4836 from eed3si9n/wip/runarg
quote run argument if it contains a whitespace
2019-07-08 07:06:08 -04:00
eugene yokota 0a54d8facc
Merge pull request #4845 from eed3si9n/wip/bumpcoursier
lm-coursier-shaded 2.0.0-RC2-1
2019-07-06 00:36:24 +09:00
Eugene Yokota 83d35c8ed0 lm-coursier-shaded 2.0.0-RC2-1
Fixes #4823
2019-07-04 23:13:46 +09:00
Ethan Atkins 4e2c1858f2 Don't append empty comp.append
In some cases, comp.append could be an empty string. This would happen
with a parser like `(token(foo) <~ ;).+ <~ fo.?` because there were no
completions available for the `fo` anchor. The effect of this was that
tab would never complete `foo;f` to `foo;foo`, even though that was the
only possible completion. It would, however, _display_ `foo` as a
possible completion.

This came up because the multi parser contains a parser similar to the
one described above, and this broke tab completion to the right of a
semicolon.
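
A rough reconstruction of the shape of parser involved, using sbt's
parser combinators (an illustrative sketch, not the actual multi
parser):

  import sbt.internal.util.complete.DefaultParsers._

  val foo  = token("foo")
  val semi = token(";")
  // one or more "foo;" groups, optionally followed by a dangling "fo" that
  // contributes no completions of its own
  val cmds = (foo <~ semi).+ <~ literal("fo").?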
2019-06-25 13:45:09 -07:00
Ethan Atkins 60b1ac7ac4 Improve multi parser performance
The multi parser had very poor performance if there were many commands.
Evaluating the expansion of something like "compile;" * 30 could cause
sbt to hang indefinitely. I believe this was caused by excessive
backtracking in the optional `(parser <~ semi.?).?` part of the parser
in the non-leading semicolon case.

I also reworked the implementation so that the multi command now has a
name. This allows us to partition the commands into multi and non-multi
commands more easily in State while still having multi in the command
list. With this change, builds and plugins can exclude the multi parser
if they wish.

Using the partitioned parsers, I removed the high priority/low priority
distinction. Instead, I made it so that the multi command will actually
check if the first command is a named command, like '~'. If it is, it
will pass the raw command argument, with the named command stripped
out, into the parser for the named command. If that is parseable, then
we directly apply the effect. Otherwise we prefix each multi command to
the state.
2019-06-25 13:45:09 -07:00
Eugene Yokota f0f55c38a5 quote run argument if it contains a whitespace
Fixes #4834
2019-06-24 19:32:55 -04:00
eugene yokota 9cc8913131
Merge pull request #4829 from eed3si9n/wip/crossplugin
add back typesafe-ivy-releases resolver
2019-06-23 20:44:59 -04:00
Eugene Yokota 29d3894b27 add back typesafe-ivy-releases resolver
Fixes #4698
Fixes #4827
2019-06-22 09:45:15 -04:00
eugene yokota eb877757ba
Merge pull request #4811 from eatkins/strict-multi-parser
Strict multi parser
2019-06-21 16:19:24 -04:00
eugene yokota ca8381e057
Merge pull request #4814 from eatkins/session-save
Clear meta build state on session save
2019-06-20 00:08:35 -04:00
eugene yokota 6fec4d350f
Merge pull request #4781 from eatkins/improve-reload-warnings
Improve reload warnings
2019-06-19 19:22:06 -04:00
Ethan Atkins 494755f0f7 Clear meta build state on session save
The `session save` command has the side effect of modifying a "*.sbt"
file so we don't want to warn about changes or automatically reload when
we return to the shell. Setting the hasCheckedMetaBuild attribute key to
false is sufficient to prevent this.

Ref: https://github.com/sbt/sbt/issues/4813
2019-06-19 16:15:00 -07:00
Ethan Atkins 30a16d1e10 Update Continuous to directly use multi parser
It didn't really make sense for Continuous to use the other command
parser and then reparse the results. I was able to slightly simplify
things by using the multi parser directly.
2019-06-19 16:12:45 -07:00
Ethan Atkins ff16d76ad3 Remove matchers from MultiParserSpec
We've been trying to move away from the wordy dsl and stick with bare
assertions.
2019-06-19 16:12:45 -07:00
Ethan Atkins 4c814752fb Support braces in multi command parser
We run into issues if we naively split the command input on ';'
(unless the ';' is inside of a string) and treat each part as a
separate command, because it is also valid to have ';'s inside of
braced expressions, e.g. `set foo := { val x = 1; x + 1 }`. There was
no parser for expressions enclosed in braces. I add one that should
parse any expression wrapped in braces, so long as each opening brace
is matched by a closing brace. The parser returns the original
expression. This allows the multi parser to ignore ';' inside of
'{...}'.
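
A minimal sketch of the brace-matching idea (a hypothetical helper, not
the actual parser, which also has to honor ';' and braces inside quoted
strings):

  // accept input only when every '{' is matched by a later '}'
  def bracesBalanced(s: String): Boolean = {
    var depth = 0
    var i = 0
    while (i < s.length && depth >= 0) {
      if (s(i) == '{') depth += 1
      else if (s(i) == '}') depth -= 1
      i += 1
    }
    depth == 0
  }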

I had to rework the scripted tests to run 'reload' and 'setUpScripted'
individually, because the new parser rejects setUpScripted, which isn't
a valid command until reload has run.
2019-06-19 16:12:45 -07:00
Ethan Atkins ccfc3d7bc7 Validate commands in multiparser
It was reported in https://github.com/sbt/sbt/issues/4808 that,
compared to 1.2.8, sbt 1.3.0-RC2 will truncate the command args of an
input task that contains semicolons. This is actually intentional, but
it was not completely robust. For sbt >= 1.3.0, we are making ';'
syntactically meaningful: it always represents a command separator
_unless_ it is inside of a quoted string. To enforce this, the multi
parser effectively splits the input on ';' and then validates each
command that it extracted. If any of them is invalid, it throws an
exception. If the input is not a multi command, then parsing fails with
a normal failure.

I removed the multi command from the state's defined commands and
reworked State.combinedParser to explicitly try multi parsing first and
fall back to the regular combined parser if the input is a regular
command. If the multi parser throws an uncaught exception, parsing
fails even if the regular parser could have successfully parsed the
command. The reason is so that we never allow the user to evaluate,
say, 'run a;b'. Otherwise the behavior would be inconsistent when the
user runs 'compile; run a;b'.
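
To illustrate the intended splitting (a sketch of the semantics
described above, not actual sbt output):

  compile; run a; b    // three commands: compile, "run a", and "b"; each must be valid
  run "a;b"            // the ';' inside the quoted string is not a separator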
2019-06-19 16:12:45 -07:00
Ethan Atkins 2fe26403e7 Add test for using switch version in multi command
I noticed that in 1.3.0-RC2, the following commands worked:
  ++2.11.12 ;test
  ;++2.11.12;test
but
  ++2.11.12; test
did not. This issue is fixed in the next commit.
2019-06-19 16:12:45 -07:00
eugene yokota b0bfa63911
Merge pull request #4821 from eed3si9n/wip/source
Fix updateClassifiers
2019-06-19 17:39:35 -04:00
Eugene Yokota 7b10b372f8 Fix updateClassifiers
Fixes #4816

Copied sbt-lm-coursier hacks from 9173406bb3/modules/sbt-lm-coursier/src/main/scala/sbt/hack/Foo.scala.
2019-06-19 12:20:26 -04:00