Commit Graph

2437 Commits

Ethan Atkins 9c7acdb713 Force invalidate dependency changes
After adding the automatic lookup to external hooks for missing binary
jars, the scripted test dependency-management/invalidate-internal
started failing. This was because the previous analysis contained a jar
dependency that still existed on disk but was no longer a part of the
dependency classpath. Fundamentally the problem is that the zinc
compile analysis is not tightly coupled with the sbt build state.

To fix this, we can cache the dependency classpath file stamps in the
same way that we cache the input file stamps in external hooks and
manually diff them at the sbt level. We then force updates regardless of
the difference between the zinc state and the sbt state.
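
A minimal sketch of the diff, assuming a simple Stamp type standing in
for sbt's file stamps (not the actual internals):

import java.nio.file.Path

object ClasspathDiffSketch {
  type Stamp = String // e.g. a content hash or last-modified time
  // A path is invalidated if it was added, removed, or its stamp changed.
  def changed(previous: Map[Path, Stamp], current: Map[Path, Stamp]): Set[Path] =
    (previous.keySet ++ current.keySet).filter(p => previous.get(p) != current.get(p))
}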
2019-08-15 11:31:24 -07:00
Ethan Atkins 32a6d0d5d7 Update ExternalHooks to look up changed binaries
It was reported that in community builds, sometimes there was
spurious over-compilation due to invalidation of the scala library jar
(https://github.com/sbt/sbt/issues/4948). The reason for this was that
the external hooks implementation prefills the managed cache with all
of the time stamps for the project dependencies but was not looking up
any jars that weren't in the cache. I suspect I did this because I didn't realize that
zinc also includes its own classpath in the binaries which is not
a part of the dependencyClasspath. The fix is to just add the jar to the
cache if it doesn't already exist by switching to getOrElseUpdate from
get.
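
The shape of the fix, sketched against a plain mutable map (the real
cache stores richer FileStamp values; last-modified stands in here):

import java.nio.file.Path
import scala.collection.mutable

def lookupStamp(cache: mutable.Map[Path, Long], jar: Path): Long =
  // was: cache.get(jar) -- a miss left zinc's own classpath jars unstamped
  cache.getOrElseUpdate(jar, jar.toFile.lastModified)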

I followed the steps in #4948 and published a version of sbt locally
with this change and the spurious re-builds stopped.
2019-08-15 10:34:05 -07:00
Ethan Atkins 7c483909af Invalidate unmanagedFileStampCache in allOutputFiles
In the code formatting use case, the formatting task may modify the
source files in place. If the formatting task uses the nio
inputFileStamps, then it would fill the in-memory cache of source paths
to file stamps. This would cause compile to see the pre-formatted
stamps. To fix this, we can invalidate the file cache entries for the
outputs of a task. This has the side effect of some extra I/O because
the hashes may be computed three times: once for the format inputs,
once for the format outputs and once for the compile inputs. I think
most users would accept that adding auto-formatting can potentially
slow down compilation.
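
A self-contained sketch of the eviction, with a plain map standing in
for sbt's internal unmanaged file stamp cache:

import java.nio.file.Path
import scala.collection.mutable

object OutputInvalidationSketch {
  type Stamp = String // e.g. a content hash
  val unmanagedFileStampCache = mutable.Map.empty[Path, Stamp]
  // Evict the stamps of a task's outputs so the next reader re-stamps
  // the files instead of seeing the pre-format hashes.
  def invalidateOutputs(outputs: Seq[Path]): Unit =
    outputs.foreach(unmanagedFileStampCache.remove)
}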

To really prove this out, I implemented a poor man's scalafmt plugin in
a scripted test. It is fully incremental. Even in the case when some
files cannot be formatted it will update all of the files that can be
formatted and not re-format them until they change.
2019-08-09 13:18:29 -07:00
Ethan Atkins 6d482eb166 Set scope for fileTreeView
It makes sense to add a scope for the `fileTreeView` key where it is
available. At the moment, there is only one `fileTreeView`
implementation but, if that changes down the road, these tasks will
automatically inherit the correct view.
2019-08-09 12:18:22 -07:00
Ethan Atkins 6700d5f77a Add nio path filter settings
It makes sense for the new glob/nio based apis to provide first-class
support for filtering the results. Because it isn't possible to
scope a task within a task within a task, i.e.
`compile / fileInputs / includePathFilter`, I had to add four new
filter settings of type `PathFilter`:

fileInputIncludeFilter :== AllPassFilter.toNio,
fileInputExcludeFilter :== DirectoryFilter.toNio || HiddenFileFilter,
fileOutputIncludeFilter :== AllPassFilter.toNio,
fileOutputExcludeFilter :== NothingFilter.toNio,

Before, I was effectively hard-coding the filter RegularFileFilter &&
!HiddenFileFilter in the inputFileStamps and allInputFiles tasks. These
remain the defaults, as seen in the fileInputExcludeFilter definition
above, but can be overridden by the user.
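
For example, a user could drop the hidden-file exclusion for a
hypothetical myTask that declares fileInputs (a sketch, not part of
this change):

myTask / fileInputExcludeFilter := DirectoryFilter.toNio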

It makes sense to exclude directories and hidden files for the input
files, but it doesn't necessarily make sense to apply any output filters
by default. For symmetry, it makes sense to have them, but they are
unlikely to be used often.

Apart from adding and defining the default values for these keys, the
only other changes I had to make were to remove the hard-coded filters
from the allInputFiles and inputFileStamps tasks and also add the
filtering to the allOutputFiles task. Because we don't automatically
calculate the FileAttributes for the output files, I added logic for
bypassing the path filter application if the PathFilter is effectively
AllPass, which is the case for the default values because:
AllPassFilter.toNio == AllPass
NothingFilter.toNio == NoPass
AllPass && !NoPass == AllPass && AllPass == AllPass
2019-08-09 12:18:22 -07:00
Ethan Atkins 8ce2578060 Introduce FileChanges
Prior to this commit, change tracking in sbt 1.3.0 was done via the
changed(Input|Output)Files tasks which were tasks returning
Option[ChangedFiles]. The ChangedFiles case class was defined in io as

case class ChangedFiles(created: Seq[Path], deleted: Seq[Path], updated: Seq[Path])

When no changes were found, or if there were no previous stamps, the
changed(Input|Output)Files tasks returned None. This made it impossible
to tell whether nothing had changed or if it was the first time.
Moreover, the api was awkward as it required pattern matching or folding
the result into a default value.

To address these limitations, I introduce the FileChanges class. It can
be generated regardless of whether or not previous file stamps were
available, and it contains all of the created, deleted, modified and
unmodified files so that the user can access each group directly
without having to pattern match.
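
A usage sketch, assuming changedInputFiles now returns a FileChanges
directly (myTask and process are hypothetical):

myTask := {
  val changes = (myTask / changedInputFiles).value
  val dirty = changes.created ++ changes.modified
  if (dirty.nonEmpty) process(dirty) // no pattern match or fold needed
}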
2019-08-09 12:18:22 -07:00
Ethan Atkins 8e9efbeaac Add extension methods for input and output files
It is tedious to write `(foo / allInputFiles).value`, so I added simple
extension method macros that expand `foo.inputFiles` to
`(foo / allInputFiles).value` and `foo.outputFiles` to
`(foo / allOutputFiles).value`.
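
For example, with foo standing in for any task that registers file
inputs, the two styles below are equivalent:

bar := {
  val explicit = (foo / allInputFiles).value
  val concise = foo.inputFiles // macro-expands to the line above
  assert(explicit == concise)
}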
2019-08-09 12:18:22 -07:00
Ethan Atkins f126206231 Fix incremental task evaluation semantics
While writing documentation for the new file management/incremental
task evaluation features, I realized that incremental task evaluation
did not have the correct semantics. The problem was that calls to
`.previous` are not scoped within the current task. By this, I mean:
say there are tasks foo and bar, and the definition of bar looks like

bar := {
    val current = foo.value
    foo.previous match {
        case Some(v) if v == current => // value hasn't changed
        case _ => process(current)
    }
}

The problem is that foo.previous is effectively stored in
(foo / streams).value.cacheDirectory / "previous". This
means that it is completely decoupled from foo. Now, suppose that the
user runs something like:
> set foo := 1
> bar // processes the value 1
> set foo := 2
> foo
> bar // does not process the new value 2 because foo was called, which updates the previous value

This is not an unrealistic scenario and is, in fact, common if the
incremental task evaluation is chained across multiple processing steps.
For example, in the make-clone scripted test, the linkLib task processes
the outputs of the compileLib task. If compileLib is invoked separately
from linkLib, then when we next call linkLib, it might not do anything
even if there was recompilation of objects because the objects hadn't
changed since the last time we called compileLib.

To fix this, I generalized the previous cache so that it can be keyed on
two tasks, one is the task whose value is being stored (foo in the
example above) and the other is the task in which the stored task value
is retrieved (bar in the example above). When the two tasks are the
same, the behavior is the same as before.
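
A self-contained sketch of the keying scheme (the real Previous.scala
works with ScopedKey[Task[_]], not strings):

final case class TaskId(name: String)

class PreviousCacheSketch {
  import scala.collection.mutable
  // Keyed on (stored task, reading task): bar's view of foo.previous is
  // independent of foo's own, so running foo no longer clobbers it.
  private val values = mutable.Map.empty[(TaskId, TaskId), Any]
  def read(stored: TaskId, reader: TaskId): Option[Any] =
    values.get((stored, reader))
  def write(stored: TaskId, reader: TaskId, value: Any): Unit =
    values((stored, reader)) = value
}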

Currently the previous value for foo might be stored somewhere like:

base_directory/target/streams/_global/_global/foo/previous

Now, if foo is stored with respect to bar, it might be stored in

base_directory/target/streams/_global/_global/bar/previous-dependencies/_global/_global/foo/previous

By storing the files this way, it is easy to remove all of the previous
values for the dependencies of a task.

In addition to changing how the files are stored on disk, we have to store
the references in memory differently. A given task can now have multiple
previous references (if, say, two tasks bar and baz both depend on the
previous value). When we complete the results, we first have to collect
all of the successful tasks. Then for each successful task, we find all
of its references. For each of the references, we only complete the
value if the scope in which the task value is used is successful.

In the actual implementation in Previous.scala, there are a number of
places where we have to cast to ScopedKey[Task[Any]]. This is due to
ScopedKey and Task being invariant in their type parameters. These
casts are all safe because we never try to get the value of anything;
we only use the portion of the apis of these types that are independent
of the value type. Existential typing, where ScopedKey[Task[_]] gets
inferred to ScopedKey[Task[x]] forSome { type x }, is a big part of why
we have these type inference problems.
2019-08-09 12:18:22 -07:00
Ethan Atkins d18cb83b3c Switch from Vector to List in Settings
Using List instead of Vector makes the code a bit more readable. We
don't need indexed access into the data structure, so it's unlikely that
Vector was providing any performance benefit.
2019-08-09 12:18:22 -07:00
Ethan Atkins fdeb6be667 Add scaladoc to FileStamp
As part of a documentation push, I noticed that these were undocumented
and that there were some public apis in FileStamp that I intended to be
private[sbt].
2019-08-09 12:18:22 -07:00
Ethan Atkins fb15065438 Move implicit FileStamp JsonFormats into object
I realized it was probably not ideal to have these implicit JsonFormats
defined directly in the FileStamp object because they might
inadvertently be brought into scope with a wildcard import.
2019-08-09 12:18:22 -07:00
Ethan Atkins 9cd88070ae Fix typo in allOutputFiles description 2019-08-09 12:18:22 -07:00
Ethan Atkins a7715e90a4 Rename cacheStoreFactory attribute
It references a CacheStoreFactoryFactory so it should have been named
accordingly.
2019-08-09 12:18:22 -07:00
eugene yokota 6865f32eae
Merge pull request #4934 from eed3si9n/wip/publishTo
Revert "don't require publishTo specified if publishArtifact is `false`"
2019-08-09 09:24:31 -04:00
Ethan Atkins 14f7177619 Fix implicit numeric widening warning 2019-08-08 16:06:13 -07:00
Ethan Atkins d86afb5745 Revert "Merge pull request #4930 from eatkins/2.12.9"
This reverts commit 053b72005d, reversing
changes made to d6b8e0388c.
2019-08-08 11:09:29 -07:00
Eugene Yokota ec6cf15f12 Revert "don't require publishTo specified if publishArtifact is `false`"
This reverts commit 4668faff7c.
Ref https://github.com/sbt/sbt/pull/3760
Ref https://github.com/sbt/sbt/pull/4931
2019-08-08 00:36:36 -04:00
eugene yokota 8365e4b189
Merge pull request #4926 from eatkins/auto-reload-fix
Improve auto-reload
2019-08-05 19:04:50 -04:00
Ethan Atkins b26ce819ca Bump default scala version to 2.12.9
I generated this automatically with:

git grep "2.12.8" | \
  cut -d ':' -f1 | uniq | xargs perl -p -i -e "s/2.12.8/2.12.9/"
2019-08-05 13:12:28 -07:00
Ethan Atkins aa62386f4d Improve auto-reload
I noticed that sometimes if I changed a build source and then ran reload
in the shell, I'd still see a warning about build sources having
changed. We can eliminate this behavior by resetting the
hasCheckedMetaBuild state attribute to false and skipping the
checkBuildSources step if the current command is 'reload'. We also now
skip the build source check if the command is exit or reboot.
2019-08-05 07:42:24 -07:00
Ethan Atkins 4061dabf4d Override zinc compile analysis for source changes
Zinc records all of the compile source file hashes when compilation
completes. This is problematic because it's possible that a source file
was changed during compilation. From the user perspective, this may mean
that their source change will not be recompiled even if a build is
triggered by the change.

To overcome this, I add logic in the sbt provided external hooks to
override the zinc analysis stamps. This is done by writing the source
file stamps to the previous cache after compilation completes. This
allows us to see the source differences from sbt's perspective, rather
than zinc's perspective. We then merge the combined differences in the
actual implementation of ExternalHooks. In some cases this may result in
over-compilation, but generally over-compilation is preferred to
under-compilation. Most of the time, the results should be the same.

The scripted test that I added modifies a file during compilation by
invoking a macro. It then effectively asserts that the file is
recompiled during the next test run by validating the compilation result
in the test. The test fails on the latest develop hash.
2019-07-29 20:13:41 -07:00
eugene yokota 5b0d0122af
Merge pull request #4906 from eatkins/turbo-resource-loader
Turbo resource loader
2019-07-29 16:21:17 -04:00
Ethan Atkins 6686e833b1 Sort dependency jars
I realized that it would be a good idea to sort the dependencyJars so
that they appear in the same order that they do in the fullClasspath.
2019-07-29 12:30:42 -07:00
Ethan Atkins 621789eeb2 Remove resource layer for AllDependencyJars strategy
In turbo mode, resource changes were causing the dependency layer to be
invalidated because the resource layer sat between the scala library
layer and the dependency layer. This commit reworks the layers for the
AllDependencyJars strategy so that the top layer is able to load _all_
of the resources during a test run.

The resource layer was added to address the problem that dependencies
may need to be able to load resources from the project classpath but
wouldn't be able to do so if the dependencies were in a separate layer
from the rest of the classpath. The resource layer was a classloader
that could load any resource on the full classpath but couldn't load any
classes. When I added the resource layer, I was thinking that when
resources changed, the resource class loader needed to be invalidated.
Resources, however, are different from classes in that the same
ClassLoader can find the same resources in a different place because
getResource and getResourceAsStream just return locations but do not
actually do any loading.

Taking advantage of this, I add a proxy classloader for finding
resource locations to ReverseLookupClassLoader. We can reset the
classpath of the resource loader in
ReverseLookupClassLoaderHolder.checkout. This allows us to see the new
versions of the resources without invalidating the dependency layer.
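
A minimal sketch of the proxy idea (not the actual
ReverseLookupClassLoader, whose details differ):

import java.net.{ URL, URLClassLoader }

// Classes come from the fixed parent layers; resource lookups go through
// a swappable URLClassLoader, so resource changes never invalidate a
// class layer.
final class ResourceProxyLoader(parent: ClassLoader, initial: Array[URL])
    extends ClassLoader(parent) {
  @volatile private var resources = new URLClassLoader(initial, null)
  def resetResources(classpath: Array[URL]): Unit =
    resources = new URLClassLoader(classpath, null)
  override def getResource(name: String): URL = {
    val r = resources.getResource(name) // returns a location; loads nothing
    if (r != null) r else super.getResource(name)
  }
}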
2019-07-29 12:30:42 -07:00
Ethan Atkins f5c8b8aad5 Don't use exception for reloading
I completely forgot about the StateTransform class which allows a task
to modify the state through its return value.
2019-07-26 15:03:32 -07:00
Ethan Atkins f7f6c3edfe Use '_' instead of '$' in path names
The use of '$' in the path names for streams is a pain: since '$' is a
special character in the shell, it is impossible to directly copy and
paste the paths. With this change, some builds
will be left with vestigial directories with $global and $build in them
until they run clean. It also would break any scripts that manually
delete these paths. That doesn't seem like a common use case, but it's
worth mentioning.
2019-07-25 14:07:44 -07:00
Eugene Yokota ef05e07cc5 Fixes credential strictness
Fixes #4882

In #4855 I inadvertently introduced `credential` strictness. This relaxes it again by ignoring the credential file if it doesn't exist.
2019-07-18 18:30:30 -04:00
Ethan Atkins a3ac4c76a6 Bump scalafmt
IntelliJ had issues resolving 2.0.0-RCX, so it will be nice to be using
the latest.
2019-07-18 12:40:21 -07:00
kenji yoshida 534fbfffbb
fix OutOfMemoryError message
s/werecommend/we recommend/
2019-07-18 12:05:46 +09:00
Ethan Atkins 6c4e23f77c Only persist file stamps in turbo mode
The use of the persistent file stamp cache between watch runs didn't
seem to cause any issues, but there was some chance for inconsistency
between the file stamp cache and the file system so it makes sense to
put it behind the turbo flag.

After changing the default, the watch/on-change scripted test started
failing. It turns out that the reason is that the file stamp cache
managed by the watch process was not pre-filled by task evaluation. For
this reason, the first time a source file was modified, it was treated
as a creation regardless of whether or not it actually was.

To fix this, I add logic to pre-fill the watch file stamp cache if we
are _not_ persisting the file stamps between runs.

I ran before and after measurements with the scala build performance
benchmark tool: setting the watchPersistFileStamps key to true reduced
the median run time by about 200ms in the non-turbo case.
2019-07-15 17:59:14 -07:00
Ethan Atkins 5e374a8e7d Move onEvent callback definition
It makes the file more readable to me to have this definition below the
definition of the FileEventMonitor.
2019-07-15 14:21:14 -07:00
Ethan Atkins 272508596a Use one observer for all aggregated watch tasks
There was a bug where sometimes a source file change would not trigger a
new build if the change occurred during a build. Based on the logs, it
seemed to be because a number of redundant events were generated for the
same path and they triggered the anti-entropy constraint of the file
event monitor.

To fix this, I consolidated a number of observers of the global file
tree repository into a single observer. This way, I am able to ensure
that only one event is generated per file event.

I also reworked the onEvent callback to only stamp the file once. It was
previously stamping the modified source file for all of the aggregated
tasks. In the sbt project, running `~compile` meant that we were stamping
a source file 22 times whenever the file changed.
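
A self-contained sketch of the consolidation (names are illustrative):

import java.nio.file.Path
import scala.collection.mutable

object SingleObserverSketch {
  private val callbacks = mutable.Buffer.empty[Path => Unit]
  def register(callback: Path => Unit): Unit = callbacks += callback
  // One underlying file event now produces one stamp and one notification
  // per aggregated task, instead of one observer (and stamp) per task.
  def onFileEvent(path: Path): Unit = {
    stampOnce(path)
    callbacks.foreach(_(path))
  }
  private def stampOnce(path: Path): Unit = () // stand-in for file stamping
}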

This actually uncovered a zinc issue as well. Zinc computes and
writes the hash of the source file to the analysis file after
compilation has completed. If a source file is modified during
compilation, then the new hash is written to the analysis file even when
the compilation may have been made against the previous version of the
file. Zinc will then refuse to re-compile that file until another change
is made.

I manually verified that in the sbt project if I ran `~compile` before
this change and modified a file during compilation, then no event was
triggered (there was a log message about the event being dropped due to
the anti-entropy constraint though). After this change, if I changed a
file during compilation, it seemed to always trigger, but due to the
zinc bug, it didn't always re-compile.
2019-07-15 14:21:14 -07:00
eugene yokota a6da4b5b90
Merge pull request #4862 from eatkins/fix-warnings
Fix warnings
2019-07-15 12:57:16 -04:00
Ethan Atkins 055d7cd626 Remove unneeded cast
This was causing an "abstract type pattern T is unchecked since it is
eliminated by erasure" warning. The cast was unneeded because
store.get[T] returns Option[(T, Long)]. I'm surprised that the compiler
complained about this.
2019-07-13 15:35:27 -07:00
Ethan Atkins a071ce8224 Handle multi-command with reload correctly
@olegych reported that sbt would silently swallow the 'compile' command
in the multi-command, 'run;compile;reload'. I tracked this down to the
build source check. When the build has
Global / onChangedBuildSource := ReloadOnSourceChanges, the check build
sources command return a new state with "reload" prefixed. To actually
perform the reload, I returned this modified state with the prefixed
reload command.

There were two problems with this:
1) In the auto-reload case, the current command was not run after the
   reload
2) If the multi-command contained reload, the auto-reload check would
   have a false positive which triggered the bug in (1)

To fix this, I clear out the remaining commands before I run the check
command. That way, we know that if the remaining commands contain a reload,
then it is an auto-reload. We then prefix the state with both the reload
and the current command.

I updated the scripted test for auto-reload to handle multi commands
containing reload.
2019-07-13 11:18:56 -07:00
Ethan Atkins 263f00f3b2 Rework watch options
In this commit, I both restore some sbt 1.2.8 behavior and enhance the
api for setting keyboard shortcuts in watch. I change the default start
message to just show the watch count, the tasks that are being monitored
and, on a new line, the instructions to terminate the watch or show more
options.

Here's what it looks like:
[info] 1. Monitoring source files for spark/compile...
[info]    Press <enter> to interrupt or '?' for more options.
?
[info] Options:
[info]   <enter>  : interrupt (exits sbt in batch mode)
[info]   <ctrl-d> : interrupt (exits sbt in batch mode)
[info]   'r'      : re-run the command
[info]   's'      : return to shell
[info]   'q'      : quit sbt
[info]   '?'      : print options

I also made it so that the new options can be added (and old options
removed) with the watchInputOptions key. For example, to add an option
to reload the build with the key 'l', you could add
ThisBuild / watchInputOptions += Watch.InputOption('l', "reload", Watch.Reload)
to your global build.sbt.

After adding that to my global ~/.sbt/1.0/global.sbt file, the output of
'?' became:
[info] Options:
[info]   <ctrl-d> : interrupt (exits sbt in batch mode)
[info]   <enter>  : interrupt (exits sbt in batch mode)
[info]   '?'      : print options
[info]   'l'      : reload
[info]   'q'      : quit sbt
[info]   'r'      : re-run the command
[info]   's'      : return to shell
2019-07-12 14:10:51 -07:00
eugene yokota 680659210f
Merge pull request #4848 from eatkins/background-copy-hash
Use last modified instead of hash
2019-07-12 15:24:09 -04:00
eugene yokota 3301bce3b8
Merge pull request #4850 from eatkins/in-memory-cache-store
Add support for in memory cache store
2019-07-12 15:23:40 -04:00
eugene yokota 2af6ad5713
Merge pull request #4858 from eed3si9n/wip/thisbuild
scope the reference of useSuperShell to ThisBuild
2019-07-12 15:21:57 -04:00
Eugene Yokota 00f7d1fab5 scope the reference of useSuperShell to ThisBuild
Fixes #4800
2019-07-12 11:34:39 -04:00
Eugene Yokota 9755234a16 address review 2019-07-12 10:52:02 -04:00
Ethan Atkins 0172d118af Add parser for file size
At the suggestion of @eed3si9n, instead of specifying the file cache
size in bytes, we now specify it in a formatted string. For example,
instead of specifying 128 megabytes in bytes (134217728), we can specify
it with the string "128M".
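
A sketch of the parsing logic (the exact suffix set is an assumption):

// "128M" -> 134217728; a bare number is taken as bytes.
def parseFileSize(s: String): Long = {
  val units = Map('B' -> 1L, 'K' -> 1024L, 'M' -> 1024L * 1024, 'G' -> 1024L * 1024 * 1024)
  units.get(s.last.toUpper) match {
    case Some(multiplier) => (s.dropRight(1).trim.toDouble * multiplier).toLong
    case None             => s.trim.toLong
  }
}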
2019-07-11 17:45:16 -07:00
Ethan Atkins cad89d17a9 Add support for in memory cache store
It can be quite slow to read and parse a large json file. Often, we are
reading and writing the same file over and over even though it isn't
actually changing. This is particularly noticeable with the
UpdateReport*. To speed this up, I introduce a global cache that can be
used to read values from a CacheStore. When using the cache, I've seen
the time for the update task drop from about 200ms to about 1ms. This
ends up being a 400ms time savings for test because update is called for
both Compile / compile and Test / compile.

The way that this works is that I add a new abstraction
CacheStoreFactoryFactory, which is the most enterprise java thing I've
ever written. We store a CacheStoreFactoryFactory in the sbt State.
When we make the Streams for a task, we create its
cacheStoreFactory field using the CacheStoreFactoryFactory. The
generated CacheStoreFactory may or may not refer to a global cache.

The CacheStoreFactoryFactory may produce CacheStoreFactory instances
that delegate to a Caffeine cache with a max size parameter that is
specified in bytes by the fileCacheSize setting (which can also be set
with -Dsbt.file.cache.size). The size of the cache entry is estimated by
the size of the contents on disk. Since we are generally storing things
in the cache that are serialized as json, I figure that this should be a
reasonable estimate. I set the default max cache size to 128MB, which is
plenty of space for the previous cache entries for most projects. If the
size is set to 0, the CacheStoreFactoryFactory generates a regular
DirectoryStoreFactory.

To ensure that the cache accurately reflects the disk state of the
previous cache (or other caches using a CacheStore), the Caffeine cache
stores the last modified time of the file whose contents it should
represent. If there is a discrepancy in the last modified times (which
would happen if, say, clean has been run), then the value is read from
disk even if the value hasn't changed.
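
A sketch of the validation scheme using Caffeine directly (sbt's actual
wiring goes through CacheStore, so the names here are illustrative):

import java.nio.file.{ Files, Path }
import com.github.benmanes.caffeine.cache.{ Caffeine, Weigher }

object InMemoryStoreSketch {
  private case class Entry(lastModified: Long, bytes: Array[Byte])

  private val cache = Caffeine
    .newBuilder()
    .maximumWeight(128L * 1024 * 1024) // the 128MB default
    .weigher(new Weigher[Path, Entry] {
      def weigh(key: Path, value: Entry): Int = value.bytes.length
    })
    .build[Path, Entry]()

  // Serve from memory only while the on-disk last-modified time matches;
  // a mismatch (e.g. after clean) falls back to reading from disk.
  def read(path: Path): Array[Byte] = {
    val lastModified = Files.getLastModifiedTime(path).toMillis
    Option(cache.getIfPresent(path)) match {
      case Some(entry) if entry.lastModified == lastModified => entry.bytes
      case _ =>
        val entry = Entry(lastModified, Files.readAllBytes(path))
        cache.put(path, entry)
        entry.bytes
    }
  }
}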

* With the following build.sbt file, it takes roughly 200ms to read and
parse the update report on my computer:

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.3"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.1"

This is because spark-sql has an enormous number of dependencies and the
update report ends up being 3MB.
2019-07-11 17:45:16 -07:00
Eugene Yokota c31e0b6b55 add allCredentials to emulate credential registration
Fixes #4802

For the Ivy integration, sbt uses the credentials task in a peculiar way. 9fa25de84c/main/src/main/scala/sbt/Defaults.scala (L2271-L2275)

This lets the build user put the `credentials` task in various places, like the metabuild or the root project, but they would all act as if they were scoped globally. This PR adds an `allCredentials` task to emulate that behavior and to pass credentials into lm-coursier.
2019-07-11 14:15:33 -04:00
Ethan Atkins 9bb88cd342 Rename typesafeRelease Resolver
This Resolver had the same name as the typesafe ivy resolver specified
in the launcher boot.properties. It was creating a number of verbose
warnings about having multiple resolvers with the same name. I noticed
that the ivy pattern is slightly different for the boot resolver with
this name. It didn't seem to be causing any problems to have both
resolvers.

Fixes #4839
2019-07-08 19:22:14 -07:00
Ethan Atkins a368bf7026 Use last modified instead of hash
I noticed that, for a simple spark project, evaluating the test task
was faster than running `run` when both tasks evaluated the same code
block. I tracked this down to the BackgroundJobService.copyClasspath
method. This method was hashing the jar contents of all of the files in
the build. On my computer, this took 600ms (for context, the total run
time of the `run` task was around 1.2 seconds, which included about
150ms of scala compiling and 350ms of time in the main method). If
instead we use the last modified time it drops down to 5-10ms. As
predicted, the total runtime of `run` in this project dropped down to
600ms which was on par with `test`.

I am not sure why a hash was used rather than last modified in the first place,
so I reworked things in such a way that, by default, sbt will use a hash
but if turbo mode is on, it will use the last modified instead. We can
revisit the default later.
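
The tradeoff in miniature (illustrative, not the BackgroundJobService
code):

import java.io.File
import java.nio.file.Files

// Hashing reads every byte of every classpath jar; last-modified is a
// single metadata call per file.
def contentStamp(file: File): Long =
  java.util.Arrays.hashCode(Files.readAllBytes(file.toPath)).toLong
def quickStamp(file: File): Long = file.lastModified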
2019-07-08 17:15:24 -07:00
Ethan Atkins 60b1ac7ac4 Improve multi parser performance
The multi parser had very poor performance if there were many commands.
Evaluating the expansion of something like "compile;" * 30 could cause
sbt to hang indefinitely. I believe this was due to excessive
backtracking caused by the optional `(parser <~ semi.?).?` part of the
parser in the non-leading semicolon case.

I also reworked the implementation so that the multi command now has a
name. This allows us to partition the commands into multi and non-multi
commands more easily in State while still having multi in the command
list. With this change, builds and plugins can exclude the multi parser
if they wish.

Using the partitioned parsers, I removed the high priority/low priority
distinction. Instead, I made it so that the multi command will actually
check if the first command is a named command, like '~'. If it is, it
will pass the raw command argument with the named command stripped out
into the parser for the named command. If that is parseable, then we
directly apply the effect. Otherwise we prefix each multi command to the
state.
2019-06-25 13:45:09 -07:00
Eugene Yokota 29d3894b27 add back typesafe-ivy-releases resolver
Fixes #4698
Fixes #4827
2019-06-22 09:45:15 -04:00
eugene yokota eb877757ba
Merge pull request #4811 from eatkins/strict-multi-parser
Strict multi parser
2019-06-21 16:19:24 -04:00
eugene yokota ca8381e057
Merge pull request #4814 from eatkins/session-save
Clear meta build state on session save
2019-06-20 00:08:35 -04:00