Major / important fix in dependency management handling

Version and scope from dependency management now override the ones from
the dependencies no matter what.
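
Concretely, the rule change can be sketched as follows (a minimal standalone
Scala sketch; the types and version numbers are illustrative, not coursier's
actual Dependency model):

    // Sketch only: simplified standalone types, not coursier's real model.
    final case class SimpleDep(version: String, config: String)

    // New rule: a non-empty version or scope from dependency management always wins.
    def applyMgmt(dep: SimpleDep, mgmt: SimpleDep): SimpleDep =
      SimpleDep(
        version = if (mgmt.version.nonEmpty) mgmt.version else dep.version,
        config  = if (mgmt.config.nonEmpty) mgmt.config else dep.config
      )

    // Old rule, for comparison: management only filled in fields the dependency
    // left empty, e.g. version = if (dep.version.isEmpty) mgmt.version else dep.version.

    applyMgmt(SimpleDep("2.0", ""), SimpleDep("1.0", "compile"))
    // => SimpleDep("1.0", "compile"); previously the declared "2.0" would have been kept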

For Spark 1.3 (a bit old now, but unit tested in CentralTests), this
makes spark-core depend on hadoop 1.x by default, because that is what
its dependency management specifies, instead of hadoop 2.2, whose
versions appear in its dependencies sections but are now overridden by
dependency management. Enabling the hadoop-2.2 profile reestablishes
the former hadoop 2.2 versions.
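
A hedged sketch of that spark-core case, reusing the toy applyMgmt above.
The hadoop-client version numbers (1.0.4 by default, 2.2.0 under the
hadoop-2.2 profile) and the property-driven mechanism are assumptions about
Spark 1.3's POM, not something this commit itself asserts:

    // spark-core's dependency management pins hadoop-client at ${hadoop.version};
    // the hadoop-2.2 profile presumably bumps that property (numbers are illustrative).
    def hadoopClientVersion(hadoop22Profile: Boolean): String = {
      val managedVersion = if (hadoop22Profile) "2.2.0" else "1.0.4"
      val managed  = SimpleDep(managedVersion, "compile") // from <dependencyManagement>
      val declared = SimpleDep("2.2.0", "")               // from the <dependencies> section
      applyMgmt(declared, managed).version
    }

    hadoopClientVersion(hadoop22Profile = false) // => "1.0.4": the new default resolution
    hadoopClientVersion(hadoop22Profile = true)  // => "2.2.0": the former versions restored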

This commit breaks the Spark-related test in CentralTests; it is
repaired by the next two commits.
Alexandre Archambault 2016-07-03 16:34:09 +02:00
parent ffd58b51b0
commit 5146be5c39
1 changed file with 12 additions and 2 deletions

@@ -239,13 +239,23 @@ object Resolution {
       var dep = dep0
       for ((mgmtConfig, mgmtDep) <- dict.get(DepMgmt.key(dep0))) {
-        if (dep.version.isEmpty)
+        if (mgmtDep.version.nonEmpty)
           dep = dep.copy(version = mgmtDep.version)
-        if (config.isEmpty)
+        if (mgmtConfig.nonEmpty)
           config = mgmtConfig
+        // FIXME The version and scope/config from dependency management, if any, are substituted
+        // no matter what. The same is not done for the exclusions and optionality, for a lack of
+        // way of distinguishing empty exclusions from no exclusion section and optional set to
+        // false from no optional section in the dependency management for now.
         if (dep.exclusions.isEmpty)
           dep = dep.copy(exclusions = mgmtDep.exclusions)
         if (mgmtDep.optional)
           dep = dep.copy(optional = mgmtDep.optional)
       }
       (config, dep)
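
The FIXME in the hunk is essentially a representation issue: with the current
model, an empty exclusion list cannot be told apart from an absent <exclusions>
section, nor optional set to false from a missing <optional> element. A purely
hypothetical sketch (not coursier's model) of how Option-wrapped management
fields would let the same "management wins when present" rule apply uniformly:

    // Hypothetical alternative model: None means "no such section/element in
    // dependencyManagement", so presence and emptiness stay distinguishable.
    final case class FullDep(
      version: String,
      config: String,
      exclusions: Set[(String, String)],
      optional: Boolean
    )

    final case class MgmtEntry(
      version: Option[String],
      config: Option[String],
      exclusions: Option[Set[(String, String)]], // None = no <exclusions> section at all
      optional: Option[Boolean]                  // None = no <optional> element at all
    )

    def applyMgmtEntry(dep: FullDep, mgmt: MgmtEntry): FullDep =
      dep.copy(
        version    = mgmt.version.getOrElse(dep.version),
        config     = mgmt.config.getOrElse(dep.config),
        exclusions = mgmt.exclusions.getOrElse(dep.exclusions),
        optional   = mgmt.optional.getOrElse(dep.optional)
      )

Under such a model, a present-but-empty exclusions section in dependency
management could legitimately clear the dependency's exclusions, which the
current representation cannot express.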