Revert be50ca111a2b3c73bc75a10a9d0fc712e251f6aa ... ef5a2b2b1a43ef14c0aae992aa9880e1656c09bc

cmbasics 2012-07-27 11:41:30 -07:00
parent ef5a2b2b1a
commit 9ca80c9c48
87 changed files with 0 additions and 12555 deletions

@@ -1,229 +0,0 @@
[#304]: https://github.com/harrah/xsbt/issues/304
[#315]: https://github.com/harrah/xsbt/issues/315
[#327]: https://github.com/harrah/xsbt/issues/327
[#335]: https://github.com/harrah/xsbt/issues/335
[#393]: https://github.com/harrah/xsbt/issues/393
[#396]: https://github.com/harrah/xsbt/issues/396
[#380]: https://github.com/harrah/xsbt/issues/380
[#389]: https://github.com/harrah/xsbt/issues/389
[#388]: https://github.com/harrah/xsbt/issues/388
[#387]: https://github.com/harrah/xsbt/issues/387
[#386]: https://github.com/harrah/xsbt/issues/386
[#378]: https://github.com/harrah/xsbt/issues/378
[#377]: https://github.com/harrah/xsbt/issues/377
[#368]: https://github.com/harrah/xsbt/issues/368
[#394]: https://github.com/harrah/xsbt/issues/394
[#369]: https://github.com/harrah/xsbt/issues/369
[#403]: https://github.com/harrah/xsbt/issues/403
[#412]: https://github.com/harrah/xsbt/issues/412
[#415]: https://github.com/harrah/xsbt/issues/415
[#420]: https://github.com/harrah/xsbt/issues/420
[#462]: https://github.com/harrah/xsbt/pull/462
[#472]: https://github.com/harrah/xsbt/pull/472
[Launcher]: https://github.com/harrah/xsbt/wiki/Launcher
# Plan for 0.12.0
## Changes from 0.12.0-Beta2 to 0.12.0-RC1
* Support globally overriding repositories ([#472]). Define the repositories to use by putting a standalone `[repositories]` section (see the [Launcher] page) in `~/.sbt/repositories` and pass `-Dsbt.override.build.repos=true` to sbt. Only the repositories in that file will be used by the launcher for retrieving sbt and Scala and by sbt when retrieving project dependencies. (@jsuereth)
* The launcher can launch all released sbt versions back to 0.7.0.
* A more refined hint to run 'last' is given when a stack trace is suppressed.
* Use Java 7's `Redirect.INHERIT` to inherit the input stream of a subprocess ([#462], [#327]). This should fix issues when forking interactive programs. (@vigdorchik)
* Delete a symlink and not its contents when recursively deleting a directory.
* The [Howto pages](http://www.scala-sbt.org/howto.html) on the [new site](http://www.scala-sbt.org) are at least readable now. There is more content to write and more formatting improvements are needed, so [pull requests are welcome](https://github.com/sbt/sbt.github.com).
* Use the binary version for cross-versioning even for snapshots and milestones.
Instead, rely on users not publishing the same stable version against both stable releases of Scala or sbt and snapshots/milestones.
* API for embedding incremental compilation. This interface is subject to change, but it is already being used in [a branch of the scala-maven-plugin](https://github.com/davidB/scala-maven-plugin/tree/feature/sbt-inc).
* Experimental support for keeping the Scala compiler resident. Enable by passing `-Dsbt.resident.limit=n` to sbt, where `n` is an integer indicating the maximum number of compilers to keep around.
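The globally overriding repositories feature above is driven by a standalone `[repositories]` section. A minimal sketch of `~/.sbt/repositories`, with placeholder repository names and URLs (see the [Launcher] page for the full syntax):

```
[repositories]
  local
  my-maven-proxy: http://repo.example.org/maven-releases/
  my-ivy-proxy: http://repo.example.org/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
```

With `-Dsbt.override.build.repos=true` passed to sbt, only the repositories listed here are consulted.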
## Changes from 0.12.0-M2 to 0.12.0-Beta2
* Support for forking tests ([#415])
* Force `update` to run when invoked directly ([#335])
* `projects add/remove <URI>` for temporarily working with other builds
* Added a `print-warnings` task that prints unchecked and deprecation warnings from the previous compilation without needing to recompile (Scala 2.10+ only)
* Various improvements to the `help` and `tasks` commands as well as a new `settings` command ([#315])
* Fix detection of ancestors for Java sources
* Fix the resolvers used for `update-sbt-classifiers` ([#304])
* Fix auto-imports of plugins ([#412])
* POMs for most artifacts are available via a virtual repository on repo.typesafe.com ([#420])
* Bump jsch version to 0.1.46 ([#403])
* Added support for loading an ivy settings file from a URL.
## Changes from 0.12.0-M1 to M2
* `test-quick` ([#393]) runs the tests specified as arguments (or all tests if no arguments are given) that:
1. have not been run yet OR
2. failed the last time they were run OR
3. had any transitive dependencies recompiled since the last successful run
* Argument quoting ([#396])
* `> command "arg with spaces,\n escapes interpreted"`
* `> command """arg with spaces,\n escapes not interpreted"""`
* For the first variant, note that paths on Windows use backslashes and need to be escaped (`\\`). Alternatively, use the second variant, which does not interpret escapes.
* For using either variant in batch mode, note that a shell will generally require the double quotes themselves to be escaped.
* The `help` command now accepts a regular expression to use to search the help. See `help help` for details.
* The sbt plugins repository is added by default for plugins and plugin definitions. [#380]
* Properly resets JLine after being stopped by Ctrl+z (unix only). [#394]
* `session save` overwrites settings in `build.sbt` (when appropriate). [#369]
* other fixes/improvements: [#368], [#377], [#378], [#386], [#387], [#388], [#389]
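The two argument-quoting variants above behave just like Scala's own string literals: double quotes interpret escapes, triple quotes do not. A quick illustration in plain Scala:

```scala
// Double-quoted strings interpret escape sequences such as \n;
// triple-quoted strings pass them through verbatim, mirroring the
// two command-argument quoting variants described above.
val interpreted    = "arg with spaces,\n escapes interpreted"
val notInterpreted = """arg with spaces,\n escapes not interpreted"""

assert(interpreted.contains("\n"))     // a real newline character
assert(notInterpreted.contains("\\n")) // a literal backslash followed by 'n'
```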
### Binary sbt plugin dependency declarations in 0.12.0-M2
Binary sbt plugin dependencies declared as in sbt 0.11.2 will not work in 0.12.0-M2. Instead of declaring a binary sbt plugin dependency within your plugin definition with:
```scala
addSbtPlugin("a" % "b" % "1.0")
```
You instead want to declare that binary plugin dependency with:
```scala
libraryDependencies +=
Defaults.sbtPluginExtra("a" % "b" % "1.0", "0.12.0-M2", "2.9.1")
```
This will only be an issue with binary plugin dependencies published for milestone releases of sbt going forward.
For convenience in future releases, a variant of `addSbtPlugin` will be added to support a specific sbt version with
```scala
addSbtPlugin("a" % "b" % "1.0", sbtVersion = "0.12.0-M2")
```
## Changes from 0.11.2 to 0.12.0-M1
* Plugin configuration directory precedence (see details below)
* JLine 1.0 (details below)
* Fixed source dependencies (details below)
* Enhanced control over parallel execution (details below)
* The cross building convention has changed for sbt 0.12 and Scala 2.10 and later (details below)
* Aggregation has changed to be more flexible (details below)
* Task axis syntax has changed from `key(for task)` to `task::key` (details below)
* The organization for sbt has changed to `org.scala-sbt` (was: `org.scala-tools.sbt`). This affects users of the scripted plugin in particular.
## Details of major changes from 0.11.2 to 0.12.0
## Plugin configuration directory
In 0.11.0, plugin configuration moved from `project/plugins/` to just `project/`, with `project/plugins/` being deprecated. Only 0.11.2 had a deprecation message, but in all of 0.11.x, the presence of the old style `project/plugins/` directory took precedence over the new style. In 0.12.0, the new style takes precedence. Support for the old style won't be removed until 0.13.0.
1. Ideally, a project should ensure there is never a conflict. Both styles are still supported, only the behavior when there is a conflict has changed.
2. In practice, switching from an older branch of a project to a new branch would often leave an empty `project/plugins/` directory that would cause the old style to be used, despite there being no configuration there.
3. Therefore, the intention is that this change is strictly an improvement for projects transitioning to the new style and isn't noticed by other projects.
## JLine
Move to JLine 1.0. This is a (relatively) recent release that fixes several outstanding issues with JLine but, as far as I can tell, remains binary compatible with 0.9.94, the version previously used. In particular:
1. Properly closes streams when forking stty on unix.
2. Delete key works on Linux. Please check that this works in your environment as well.
3. Line wrapping seems correct.
## Parsing task axis
There is an important change related to parsing the task axis for settings and tasks that fixes [#202](https://github.com/harrah/xsbt/issues/202).
1. The syntax before 0.12 has been `{build}project/config:key(for task)`
2. The proposed (and implemented) change for 0.12 is `{build}project/config:task::key`
3. By moving the task axis before the key, it allows for easier discovery (via tab completion) of keys in plugins.
4. It is not planned to support the old syntax. It would be ideal to deprecate it first, but this would take too much time to implement.
## Aggregation
Aggregation has been made more flexible. This is along the direction that has been previously discussed on the mailing list.
1. Before 0.12, a setting was parsed according to the current project and only the exact setting parsed was aggregated.
2. Also, tab completion did not account for aggregation.
3. This meant that if the setting/task didn't exist on the current project, parsing failed even if an aggregated project contained the setting/task.
4. Additionally, if `compile:package` existed for the current project, `*:package` existed for an aggregated project, and the user requested `package` to be run (without specifying the configuration), `*:package` wouldn't be run on the aggregated project (it isn't the same as the `compile:package` key that existed on the current project).
5. In 0.12, both of these situations result in the aggregated settings being selected. For example,
    1. Consider a project `root` that aggregates a subproject `sub`.
    2. `root` defines `*:package`.
    3. `sub` defines `compile:package` and `compile:compile`.
    4. Running `root/package` will run `root/*:package` and `sub/compile:package`.
    5. Running `root/compile` will run `sub/compile:compile`.
6. This change depends on the change to parsing the task axis.
## Parallel Execution
Fine control over parallel execution is supported, as described on the [Parallel Execution](https://github.com/harrah/xsbt/wiki/Parallel-Execution) page.
1. The default behavior should be the same as before, including the `parallelExecution` settings.
2. The new capabilities of the system should otherwise be considered experimental.
3. Therefore, `parallelExecution` won't be deprecated at this time.
## Source dependencies
A fix for issue [#329](https://github.com/harrah/xsbt/issues/329) is included. This fix ensures that only one version of a plugin is loaded across all projects. There are two parts to this.
1. The version of a plugin is fixed by the first build to load it. In particular, the plugin version used in the root build (the one in which sbt is started) always overrides the version used in dependencies.
2. Plugins from all builds are loaded in the same class loader.
Additionally, Sanjin's patches to add support for Mercurial (hg) and Subversion (svn) URIs are included.
1. sbt uses Subversion to retrieve URIs beginning with `svn` or `svn+ssh`. An optional fragment identifies a specific revision to check out.
2. Because a URI for Mercurial doesn't have a Mercurial-specific scheme, sbt requires the URI to be prefixed with `hg:` to identify it as a Mercurial repository.
3. Also, URIs that end with `.git` are now handled properly.
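As a sketch of the URI forms described above (the repository URLs are hypothetical), source dependencies in a full configuration might look like:

```scala
import sbt._

object MyBuild extends Build {
  // '#1234' pins a Subversion revision; '#tip' pins a Mercurial changeset.
  lazy val svnDep = RootProject(uri("svn+ssh://example.org/repo/trunk#1234"))
  lazy val hgDep  = RootProject(uri("hg:https://example.org/repo#tip"))
  lazy val gitDep = RootProject(uri("git://example.org/repo.git"))

  lazy val root = Project("root", file(".")) dependsOn (svnDep, hgDep, gitDep)
}
```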
## Cross building
The cross version suffix is shortened to only include the major and minor version for Scala versions starting with the 2.10 series and for sbt versions starting with the 0.12 series. For example, `sbinary_2.10` for a normal library or `sbt-plugin_2.10_0.12` for an sbt plugin. This requires forward and backward binary compatibility across incremental releases for both Scala and sbt.
1. This change has been a long time coming, but it requires everyone publishing an open source project either to switch to 0.12 when publishing for 2.10 or to adjust the cross-version suffix in their builds appropriately.
2. Obviously, using 0.12 to publish a library for 2.10 requires 0.12.0 to be released before projects publish for 2.10.
3. At the same time, sbt 0.12.0 itself should be published against 2.10.0 or else it will be stuck in 2.9.x for the 0.12.x series.
4. There is now the concept of a binary version. This is a subset of the full version string that represents binary compatibility. That is, equal binary versions imply binary compatibility. All Scala versions prior to 2.10 use the full version as the binary version to reflect previous sbt behavior. For 2.10 and later, the binary version is `<major>.<minor>`.
5. The cross version behavior for published artifacts is configured by the `crossVersion` setting. It can be configured for dependencies by using the `cross` method on `ModuleID` or by the traditional `%%` dependency construction variant. By default, a dependency has cross-versioning disabled when constructed with a single `%` and uses the binary Scala version when constructed with `%%`.
6. For snapshot/milestone versions of Scala or sbt (as determined by the presence of a '-' in the full version), dependencies use the binary Scala version by default, but any published artifacts use the full version. The purpose here is to ensure that versions published against a snapshot or milestone do not accidentally pollute the compatible universe. Note that this means that declaring a dependency on a version published against a milestone requires an explicit change to the dependency definition.
7. The `artifactName` function now accepts a `ScalaVersion` as its first argument instead of a `String`. The full type is now `(ScalaVersion, ModuleID, Artifact) => String`. `ScalaVersion` contains both the full Scala version (such as 2.10.0) and the binary Scala version (such as 2.10).
8. The flexible version mapping added by Indrajit has been merged into the `cross` method, and the `%%` variants accepting more than one argument have been deprecated. Some examples follow.
These are equivalent:
```scala
"a" % "b" % "1.0"
"a" % "b" % "1.0" cross CrossVersion.Disabled
```
These are equivalent:
```scala
"a" %% "b" % "1.0"
"a" % "b" % "1.0" cross CrossVersion.binary
```
This uses the full Scala version instead of the binary Scala version:
```scala
"a" % "b" % "1.0" cross CrossVersion.full
```
This uses a custom function to determine the Scala version to use based on the binary Scala version:
```scala
"a" % "b" % "1.0" cross CrossVersion.binaryMapped {
case "2.9.1" => "2.9.0" // remember that pre-2.10, binary=full
case x => x
}
```
This uses a custom function to determine the Scala version to use based on the full Scala version:
```scala
"a" % "b" % "1.0" cross CrossVersion.fullMapped {
case "2.9.1" => "2.9.0"
case x => x
}
```
A custom function is useful when cross-building against a dependency that isn't available for all Scala versions. This should be less necessary with the move to binary versions.
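The binary version rule described above can be sketched as a plain function (illustrative only; this is not sbt's actual implementation):

```scala
// Sketch of the binary version rule: Scala versions before 2.10 use the
// full version as their binary version; 2.10 and later use major.minor.
object BinaryVersion {
  def apply(fullVersion: String): String = {
    val parts = fullVersion.split("[.-]")
    val newScheme = parts.length >= 2 &&
      (parts(0).toInt > 2 || (parts(0).toInt == 2 && parts(1).toInt >= 10))
    if (newScheme) parts(0) + "." + parts(1) else fullVersion
  }
}
```

Under this rule, `BinaryVersion("2.10.0")` and `BinaryVersion("2.10.1")` both yield `2.10`, while `BinaryVersion("2.9.1")` stays `2.9.1`.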

@@ -1 +0,0 @@
This page contains examples submitted by the community of sbt users.

@@ -1,769 +0,0 @@
### 0.11.2 to 0.11.3
Dropping scala-tools.org:
* The sbt group ID is changed to `org.scala-sbt` (from `org.scala-tools.sbt`). This means you must use a 0.11.3 launcher to launch 0.11.3.
* The convenience objects `ScalaToolsReleases` and `ScalaToolsSnapshots` now point to `https://oss.sonatype.org/content/repositories/releases` and `.../snapshots`
* The launcher no longer includes `scala-tools.org` repositories by default and instead uses the Sonatype OSS snapshots repository for Scala snapshots.
* The `scala-tools.org` releases repository is no longer included as an application repository by default. The Sonatype OSS repository is _not_ included by default in its place.
Other fixes:
* Compiler interface works with 2.10
* `maxErrors` setting is no longer ignored
* Correct test count [#372] (Eugene)
* Fix file descriptor leak in process library (Daniel)
* Buffer the URL input stream returned by `Using` [#437]
* Jsch version bumped to 0.1.46 [#403]
* JUnit test detection handles ancestors properly (Indrajit)
* Avoid unnecessarily re-resolving plugins [#368]
* Substitute variables in explicit version strings and custom repository definitions in launcher configuration
* Support setting sbt.version from system property, which overrides setting in a properties file [#354]
* Minor improvements to command/key suggestions
[#437]: https://github.com/harrah/xsbt/issues/437
[#403]: https://github.com/harrah/xsbt/issues/403
[#372]: https://github.com/harrah/xsbt/issues/372
[#368]: https://github.com/harrah/xsbt/issues/368
[#354]: https://github.com/harrah/xsbt/issues/354
### 0.11.1 to 0.11.2
Notable behavior change:
* The local Maven repository has been removed from the launcher's list of default repositories, which is used for obtaining sbt and Scala dependencies. This is motivated by the high probability that including this repository was causing the various problems some users have with the launcher not finding some dependencies ([#217]).
Fixes:
* [#257] Fix invalid classifiers in pom generation (Indrajit)
* [#255] Fix scripted plugin descriptor (Artyom)
* Fix forking git on windows (Stefan, Josh)
* [#261] Fix whitespace handling for semicolon-separated commands
* [#263] Fix handling of dependencies with an explicit URL
* [#272] Show deprecation message for `project/plugins/`
[#217]: https://github.com/harrah/xsbt/issues/217
[#255]: https://github.com/harrah/xsbt/issues/255
[#257]: https://github.com/harrah/xsbt/issues/257
[#263]: https://github.com/harrah/xsbt/issues/263
[#261]: https://github.com/harrah/xsbt/issues/261
[#272]: https://github.com/harrah/xsbt/issues/272
### 0.11.0 to 0.11.1
Breaking change:
* The scripted plugin is now in the `sbt` package so that it can be used from a named package
Notable behavior change:
* By default, there is more logging during update: one line per dependency resolved and two lines per dependency downloaded. This is to address the appearance that sbt hangs on larger `update` runs.
Fixes and improvements:
* Show help for a key with `help <key>`
* [#21] Reduced memory and time overhead of incremental recompilation with signature hash based approach.
* Rotate global log so that only output since last prompt is displayed for `last`
* [#169] Add support for exclusions with the `excludeAll` and `exclude` methods on `ModuleID`. (Indrajit)
* [#235] Checksums configurable for launcher
* [#246] Invalidate `update` when `update` is invalidated for an internal project dependency
* [#138] Include plugin sources and docs in `update-sbt-classifiers`
* [#219] Add cleanupCommands setting to specify commands to run before interpreter exits
* [#46] Fix regression in caching missing classifiers for `update-classifiers` and `update-sbt-classifiers`.
* [#228] Set `connectInput` to true to connect standard input to forked run
* [#229] Limited task execution interruption using ctrl+c
* [#220] Properly record source dependencies from separate compilation runs in the same step.
* [#214] Better default behavior for `classpathConfiguration` for external Ivy files
* [#212] Fix transitive plugin dependencies.
* [#222] Generate the `<classifiers>` section in `make-pom`. (Jan)
* Build resolvers, loaders, and transformers.
* Allow project dependencies to be modified by a setting (buildDependencies) but with the restriction that new builds cannot be introduced.
* [#174], [#196], [#201], [#204], [#207], [#208], [#226], [#224], [#253]
[#253]: https://github.com/harrah/xsbt/issues/253
[#246]: https://github.com/harrah/xsbt/issues/246
[#235]: https://github.com/harrah/xsbt/issues/235
[#229]: https://github.com/harrah/xsbt/issues/229
[#228]: https://github.com/harrah/xsbt/issues/228
[#226]: https://github.com/harrah/xsbt/issues/226
[#224]: https://github.com/harrah/xsbt/issues/224
[#222]: https://github.com/harrah/xsbt/issues/222
[#220]: https://github.com/harrah/xsbt/issues/220
[#219]: https://github.com/harrah/xsbt/issues/219
[#214]: https://github.com/harrah/xsbt/issues/214
[#212]: https://github.com/harrah/xsbt/issues/212
[#208]: https://github.com/harrah/xsbt/issues/208
[#207]: https://github.com/harrah/xsbt/issues/207
[#204]: https://github.com/harrah/xsbt/issues/204
[#201]: https://github.com/harrah/xsbt/issues/201
[#196]: https://github.com/harrah/xsbt/issues/196
[#174]: https://github.com/harrah/xsbt/issues/174
[#169]: https://github.com/harrah/xsbt/issues/169
[#138]: https://github.com/harrah/xsbt/issues/138
[#46]: https://github.com/harrah/xsbt/issues/46
[#21]: https://github.com/harrah/xsbt/issues/21
[#114]: https://github.com/harrah/xsbt/issues/114
[#115]: https://github.com/harrah/xsbt/issues/115
[#118]: https://github.com/harrah/xsbt/issues/118
[#120]: https://github.com/harrah/xsbt/issues/120
[#121]: https://github.com/harrah/xsbt/issues/121
[#128]: https://github.com/harrah/xsbt/issues/128
[#131]: https://github.com/harrah/xsbt/issues/131
[#132]: https://github.com/harrah/xsbt/issues/132
[#135]: https://github.com/harrah/xsbt/issues/135
[#139]: https://github.com/harrah/xsbt/issues/139
[#140]: https://github.com/harrah/xsbt/issues/140
[#145]: https://github.com/harrah/xsbt/issues/145
[#156]: https://github.com/harrah/xsbt/issues/156
[#157]: https://github.com/harrah/xsbt/issues/157
[#162]: https://github.com/harrah/xsbt/issues/162
### 0.10.1 to 0.11.0
Major Improvements:
* Move to 2.9.1 for project definitions and plugins
* Drop support for 2.7
* Settings overhaul, mainly to make API documentation more usable
* Support using native libraries in `run` and `test` (but not `console`, for example)
* Automatic plugin cross-versioning. Use
```scala
addSbtPlugin("group" % "name" % "version")
```
in `project/plugins.sbt` instead of `libraryDependencies += ...`. See [[Plugins]] for details.
Fixes and Improvements:
* Display all undefined settings at once, instead of only the first one
* Deprecate separate `classpathFilter`, `defaultExcludes`, and `sourceFilter` keys in favor of `includeFilter` and `excludeFilter` explicitly scoped by `unmanagedSources`, `unmanagedResources`, or `unmanagedJars` as appropriate (Indrajit)
* Default to using shared boot directory in `~/.sbt/boot/`
* Can put the contents of `project/plugins/` directly in `project/` instead. The `plugins/` directory will likely be deprecated.
* Key display is context sensitive. For example, in a single project, the build and project axes will not be displayed
* [#114], [#118], [#121], [#132], [#135], [#157]: Various settings and error message improvements
* [#115]: Support configuring checksums separately for `publish` and `update`
* [#118]: Add `about` command
* [#118], [#131]: Improve `last` command. Aggregate `last <task>` and display all recent output for `last`
* [#120]: Support read-only external file projects (Fred)
* [#128]: Add `skip` setting to override recompilation change detection
* [#139]: Improvements to pom generation (Indrajit)
* [#140], [#145]: Add standard manifest attributes to binary and source jars (Indrajit)
* Allow sources used for `doc` generation to be different from sources for `compile`
* [#156]: Made `package` an alias for `package-bin`
* [#162]: Fix handling of optional dependencies in pom generation
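The `includeFilter`/`excludeFilter` scoping introduced above might look like this in a `build.sbt` (a sketch; the filter values are examples):

```scala
// Select which files count as unmanaged sources.
includeFilter in unmanagedSources := "*.scala" || "*.java"

// Exclude hidden files and editor backups from unmanaged resources.
excludeFilter in unmanagedResources := HiddenFileFilter || "*~"
```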
### 0.10.0 to 0.10.1
Some of the more visible changes:
* Support "provided" as a valid configuration for inter-project dependencies [#53](https://github.com/harrah/xsbt/issues/53)
* Try out some better error messages for build.sbt in a few common situations [#58](https://github.com/harrah/xsbt/issues/58)
* Drop "Incomplete tasks ..." line from error messages. [#32](https://github.com/harrah/xsbt/issues/32)
* Better handling of javac logging. [#74](https://github.com/harrah/xsbt/pull/74)
* Warn when `reload` discards session settings
* Cache failing classifiers, making `update-classifiers` a practical replacement for `withSources()`
* Global settings may be provided in `~/.sbt/build.sbt` [#52](https://github.com/harrah/xsbt/issues/52)
* No need to define `sbtPlugin := true` in `project/plugins/` or `~/.sbt/plugins/`
* Provide statistics and a list of evicted modules in `UpdateReport`
* Scope use of `transitive-classifiers` by `update-sbt-classifiers` and `update-classifiers` for separate configuration.
* Default project ID includes a hash of the base directory to avoid collisions in simple cases.
* `extra-loggers` setting to make it easier to add loggers
* Associate `ModuleID`, `Artifact`, and `Configuration` with a classpath entry (`moduleID`, `artifact`, and `configuration` keys). [#41](https://github.com/harrah/xsbt/issues/41)
* Put httpclient on Ivy's classpath, which seems to speed up `update`.
### 0.7.7 to 0.10.0
**Major redesign, only prominent changes listed.**
* Project definitions in Scala 2.8.1
* New configuration system: [[Quick Configuration Examples]], [[Full Configuration]], and [[Basic Configuration]]
* New task engine: [[Tasks]]
* New multiple project support: [[Full Configuration]]
* More aggressive incremental recompilation for both Java and Scala sources
* Merged plugins and processors into improved plugins system: [[Plugins]]
* [[Web application|https://github.com/siasia/xsbt-web-plugin]] and webstart support moved to plugins instead of core features
* Fixed all of the issues in (Google Code) issue #44
* Managed dependencies automatically updated when configuration changes
* `update-sbt-classifiers` and `update-classifiers` tasks for retrieving sources and/or javadocs for dependencies, transitively
* Improved artifact handling and configuration [[Artifacts]]
* Tab completion parser combinators for commands and input tasks: [[Commands]]
* No project creation prompts anymore
* Moved to GitHub: <http://github.com/harrah/xsbt>
### 0.7.5 to 0.7.7
* Workaround for Scala issue [[#4426|http://lampsvn.epfl.ch/trac/scala/ticket/4426]]
* Fix issue 156
### 0.7.4 to 0.7.5
* Joonas's update to work with Jetty 7.1 logging API changes.
* Updated to work with Jetty 7.2 WebAppClassLoader binary incompatibility (issue 129).
* Provide application and boot classpaths to tests and 'run'ning code according to <http://gist.github.com/404272>
* Fix `provided` configuration. It is no longer included on the classpath of dependent projects.
* Scala 2.8.1 is the default version used when starting a new project.
* Updated to [[Ivy 2.2.0|http://ant.apache.org/ivy/history/2.2.0/release-notes.html]].
* Trond's patches that allow configuring [[jetty-env.xml|http://github.com/harrah/xsbt/commit/5e41a47f50e6]] and [[webdefault.xml|http://github.com/harrah/xsbt/commit/030e2ee91bac0]]
* Doug's [[patch|http://github.com/harrah/xsbt/commit/aa75ecf7055db]] to make 'projects' command show an asterisk next to current project
* Fixed issue 122
* Implemented issue 118
* Patch from Viktor and Ross for issue 123
* (RC1) Patch from Jorge for issue 100
* (RC1) Fix `<packaging>` type
### 0.7.3 to 0.7.4
* Prefix continuous compilation with the run number for better feedback when the logging level is `warn`
* Added `pomIncludeRepository(repo: MavenRepository): Boolean` that can be overridden to exclude local repositories by default
* Added `pomPostProcess(pom: Node): Node` to make advanced manipulation of the default pom easier (`pomExtra` already covers basic cases)
* Added `reset` command to reset JLine terminal. This needs to be run after suspending and then resuming sbt.
* Installer plugin is now a proper subproject of sbt.
* Plugins can now only be Scala sources. BND should be usable in a plugin now.
* More accurate detection of invalid test names. Invalid test names now generate an error and prevent the test action from running instead of just logging a warning.
* Fix issue with using 2.8.0.RC1 compiler in tests.
* Precompile compiler interface against 2.8.0.RC2
* Add `consoleOptions` for specifying options to the console. It defaults to `compileOptions`.
* Properly support sftp/ssh repositories using key-based authentication. See the updated section of the [[Resolvers]] page.
* `def ivyUpdateLogging = UpdateLogging.DownloadOnly | Full | Quiet`. Default is `DownloadOnly`. `Full` will log metadata resolution and provide a final summary.
* `offline` property for disabling checking for newer dynamic revisions (like `-SNAPSHOT`). This allows working offline with remote snapshots. Not honored for plugins yet.
* History commands: `!!, !?string, !-n, !n, !string, !:n, !:` Run `!` to see help.
* New section in launcher configuration `[ivy]` with a single label `cache-directory`. Specify this to change the cache location used by the launcher.
* New label `classifiers` under `[app]` to specify classifiers of additional artifacts to retrieve for the application.
* Honor `-Xfatal-warnings` option added to compiler in 2.8.0.RC2.
* Make `scaladocTask` a `fileTask` so that it runs only when `index.html` is older than some input source.
* Made it easier to create default `test-*` tasks with different options
* Sort input source files for consistency, addressing scalac's issues with source file ordering.
* Derive Java source file from name of class file when no `SourceFile` attribute is present in the class file. Improves tracking when `-g:none` option is used.
* Fix `FileUtilities.unzip` to be tail-recursive again.
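The launcher configuration additions mentioned above (`cache-directory` under `[ivy]` and `classifiers` under `[app]`) might be written as follows; the path and the other `[app]` labels are placeholders for whatever your launcher configuration already contains:

```
[ivy]
  cache-directory: /path/to/shared/ivy-cache

[app]
  ...
  classifiers: sources, javadoc
```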
### 0.7.2 to 0.7.3
* Fixed issue with scala.library.jar not being on javac's classpath
* Fixed buffered logging for parallel execution
* Fixed `test-*` tab completion being permanently set on first completion
* Works with Scala 2.8 trunk again.
* Launcher: Maven local repository excluded when the Scala version is a snapshot. This should fix issues with out of date Scala snapshots.
* The compiler interface is precompiled against common Scala versions (for this release, 2.7.7 and 2.8.0.Beta1).
* Added `PathFinder.distinct`
* Running multiple commands at once at the interactive prompt is now supported. Prefix each command with ';'.
* Run and return the output of a process as a String with `!!` or as a (blocking) `Stream[String]` with `lines`.
* Java tests + Annotation detection
* Test frameworks can now specify annotation fingerprints. Specify the names of annotations and sbt discovers classes with the annotation on the class or on one of its methods. Use version 0.5 of the test-interface.
* Detect subclasses and annotations in Java sources (really, their class files)
* `Discovered` is the new root of the hierarchy representing discovered subclasses and annotations. `TestDefinition` no longer fulfills this role.
* `TestDefinition` is modified to be name+`Fingerprint` and represents a runnable test. It need not be `Discovered`, but could be file-based in the future, for example.
* Replaced the `testDefinitionClassNames` method with `fingerprints` in `CompileConfiguration`.
* Added `foundAnnotation` to `AnalysisCallback`
* Added `Runner2`, `Fingerprint`, `AnnotationFingerprint`, and `SubclassFingerprint` to the test-interface. Existing test frameworks should still work. Implement `Runner2` to use fingerprints other than `SubclassFingerprint`.
### 0.7.1 to 0.7.2
* `Process.apply` no longer uses `CommandParser`. This should fix issues with the android-plugin.
* Added `sbt.impl.Arguments` for parsing a command like a normal action (for `Processor`s)
* Arguments are passed to `javac` using an argument file (`@`)
* Added `webappUnmanaged: PathFinder` method to `DefaultWebProject`. Paths selected by this `PathFinder` will not be pruned by `prepare-webapp` and will not be packaged by `package`. For example, to exclude the GAE datastore directory:
```scala
override def webappUnmanaged =
(temporaryWarPath / "WEB-INF" / "appengine-generated" ***)
```
* Added some String generation methods to `PathFinder`: `toString` for debugging and `absString` and `relativeString` for joining the absolute (relative) paths by the platform separator.
* Made tab completors lazier to reduce startup time.
* Fixed `console-project` for custom subprojects
* `Processor` split into `Processor`/`BasicProcessor`. `Processor` provides high level of integration with command processing. `BasicProcessor` operates on a `Project` but does not affect command processing.
* Can now use `Launcher` externally, including launching `sbt` outside of the official jar. This means a `Project` can now be created from tests.
* Works with Scala 2.8 trunk
* Fixed logging level behavior on subprojects.
* All sbt code is now at <http://github.com/harrah/xsbt> in one project.
### 0.7.0 to 0.7.1
* Fixed Jetty 7 support to work with JRebel
* Fixed make-pom to generate valid dependencies section
### 0.5.6 to 0.7.0
* Unified batch and interactive commands. All commands that can be executed at the interactive prompt can be run from the command line. To run commands and then enter the interactive prompt, make the last command `shell`.
* Properly track certain types of synthetic classes, such as those generated for for-comprehensions with more than 30 clauses, during compilation.
* Jetty 7 support
* Allow launcher in the project root directory or the `lib` directory. The jar name must have the form `'*sbt-launch*.jar'` in order to be excluded from the classpath.
* Stack trace detail can be controlled with `'on'`, `'off'`, `'nosbt'`, or an integer level. `'nosbt'` means to show stack frames up to the first `sbt` method. An integer level denotes the number of frames to show for each cause. This feature is courtesy of Tony Sloane.
* New `test-run` action that is analogous to `run`, but for test classes.
* New `clean-plugins` task that clears built plugins (useful for plugin development).
* Can provide commands from a file with new command: `<filename`
* Can provide commands over loopback interface with new command: `<port`
* Scala version handling has been completely redone.
* The version of Scala used to run sbt (currently 2.7.7) is decoupled from the version used to build the project.
* Changing between Scala versions on the fly is done with the command: `++<version>`
* Cross-building is quicker. The project definition does not need to be recompiled against each version in the cross-build anymore.
* Scala versions are specified in a space-delimited list in the `build.scala.versions` property.
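For example (the versions listed here are illustrative), `project/build.properties` might contain:

```
build.scala.versions=2.7.7 2.8.0.Beta1
```

With this in place, `++2.8.0.Beta1` switches to that version for subsequent commands, and prefixing an action with `+` runs it against every listed version.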
* Dependency management:
* `make-pom` task now uses custom pom generation code instead of Ivy's pom writer.
* Basic support for writing out Maven-style repositories to the pom
* Override the `pomExtra` method to provide XML (`scala.xml.NodeSeq`) to insert directly into the generated pom.
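A hedged sketch of such an override in a project definition (the license element and values are illustrative, not prescribed by sbt):

```scala
override def pomExtra =
  <licenses>
    <license>
      <name>BSD-style</name>
      <url>http://www.opensource.org/licenses/bsd-license.php</url>
    </license>
  </licenses>
```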
* Complete control over repositories is now possible by overriding `ivyRepositories`.
* The [[interface to Ivy|Ivy-Interface]] can be used directly.
* Test framework support is now done through a uniform test interface. Implications:
* New versions of specs, ScalaCheck, and ScalaTest are supported as soon as they are released.
* Support is better, since the test framework authors provide the implementation.
* Arguments can be passed to the test framework. For example: `> test-only your.test -- -a -b -c`
* Can provide custom task start and end delimiters by defining the system properties `sbt.start.delimiter` and `sbt.end.delimiter`.
* Revamped launcher that can launch Scala applications, not just `sbt`
* Provide a configuration file to the launcher and it can download the application and its dependencies from a repository and run it.
* sbt's configuration can be customized. For example,
* The `sbt` version to use in projects can be fixed, instead of read from `project/build.properties`.
* The default values used to create a new project can be changed.
* The repositories used to fetch `sbt` and its dependencies, including Scala, can be configured.
* The location `sbt` is retrieved to is configurable. For example, `/home/user/.ivy2/sbt/` could be used instead of `project/boot/`.
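The launcher reads this configuration from a properties-style file; a rough sketch of the sections involved follows (names and values here are illustrative -- see the Launcher wiki page for the authoritative format):

```
[scala]
  version: 2.7.7
[app]
  org: org.scala-tools.sbt
  name: sbt
  version: read(sbt.version)
[repositories]
  local
  maven-central
[boot]
  directory: project/boot
```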
### 0.5.5 to 0.5.6
* Support specs specifications defined as classes
* Fix specs support for 1.6
* Support ScalaTest 1.0
* Support ScalaCheck 1.6
* Remove remaining uses of structural types
### 0.5.4 to 0.5.5
* Fixed problem with classifier support and the corresponding test
* No longer need `"->default"` in configurations (automatically mapped).
* Can specify a specific nightly of Scala 2.8 to use (for example: `2.8.0-20090910.003346-+`)
* Experimental support for searching for project (`-Dsbt.boot.search=none|only|root-first|nearest`)
* Fix issue where last path component of local repository was dropped if it did not exist.
* Added support for configuring repositories on a per-module basis.
* Unified batch-style and interactive-style commands. All commands that were previously interactive-only should be available batch-style. 'reboot' does not pick up changes to 'scala.version' properly, however.
### 0.5.2 to 0.5.4
* Many logging related changes and fixes. Added `FilterLogger` and cleaned up interaction between `Logger`, scripted testing, and the builder projects. This included removing the `recordingDepth` hack from Logger. Logger buffering is now enabled/disabled per thread.
* Fix `compileOptions` being fixed after the first compile
* Minor fixes to output directory checking
* Added `defaultLoggingLevel` method for setting the initial level of a project's `Logger`
* Cleaned up internal approach to adding extra default configurations like `plugin`
* Added `syncPathsTask` for synchronizing paths to a target directory
* Allow multiple instances of Jetty (new `jettyRunTasks` can be defined with different ports)
* `jettyRunTask` accepts configuration in a single configuration wrapper object instead of many parameters
* Fix web application class loading (issue #35) by using `jettyClasspath=testClasspath---jettyRunClasspath` for loading Jetty. A better way would be to have a `jetty` configuration and have `jettyClasspath=managedClasspath('jetty')`, but this maintains compatibility.
* Copy resources to `target/resources` and `target/test-resources` using `copyResources` and `copyTestResources` tasks. Properly include all resources in web applications and classpaths (issue #36). `mainResources` and `testResources` are now the definitive methods for getting resources.
* Updated for 2.8 (`sbt` now compiles against September 11, 2009 nightly build of Scala)
* Fixed issue with position of `^` in compile errors
* Changed order of repositories (local, shared, Maven Central, user, Scala Tools)
* Added Maven Central to resolvers used to find Scala library/compiler in launcher
* Fixed problem that prevented detecting user-specified subclasses
* Fixed exit code returned when exception thrown in main thread for `TrapExit`
* Added `javap` task to `DefaultProject`. It has tab completion on compiled project classes and the run classpath is passed to `javap` so that library classes are available. Examples:
```scala
> javap your.Clazz
> javap -c scala.List
```
* Added `exec` task. Mixin `Exec` to project definition to use. This forks the command following `exec`. Examples:
```scala
> exec echo Hi
> exec find src/main/scala -iname *.scala -exec wc -l {} ;
```
* Added `sh` task for users with a unix-style shell available (runs `/bin/sh -c <arguments>`). Mixin `Exec` to project definition to use. Example:
```scala
> sh find src/main/scala -iname *.scala | xargs cat | wc -l
```
* Proper dependency graph actions (previously was an unsupported prototype): `graph-src` and `graph-pkg` for source dependency graph and quasi-package dependency graph (based on source directories and source dependencies)
* Improved Ivy-related code to not load unnecessary default settings
* Fixed issue #39 (sources were not relative in src package)
* Implemented issue #38 (`InstallProject` with 'install' task)
* Vesa's patch for configuring the output of forked Scala/Java and processes
* Don't buffer logging of forked `run` by default
* Check `Project.terminateWatch` to determine if triggered execution should stop for a given keypress.
* Terminate triggered execution only on 'enter' by default (previously, any keypress stopped it)
* Fixed issue #41 (parent project should not declare jar artifact)
* Fixed issue #42 (search parent directories for `ivysettings.xml`)
* Added support for extra attributes with Ivy. Use `extra(key -> value)` on `ModuleIDs` and `Artifacts`. To define for a project's ID:
```scala
override def projectID = super.projectID extra(key -> value)
```
To specify in a dependency:
```scala
val dep = normalID extra(key -> value)
```
### 0.5.1 to 0.5.2
* Fixed problem where dependencies of `sbt` plugins were not on the compile classpath
* Added `execTask` that runs an `sbt.ProcessBuilder` when invoked
* Added implicit conversion from `scala.xml.Elem` to `sbt.ProcessBuilder` that takes the element's text content, trims it, and splits it around whitespace to obtain the command.
* Processes can now redirect standard input (see `run` with a Boolean argument or the `!<` operator on `ProcessBuilder`), off by default
* Made scripted framework a plugin and scripted tests now go in `src/sbt-test` by default
* Can define and use an sbt test framework extension in a project
* Fixed `run` action swallowing exceptions
* Fixed tab completion for method tasks for multi-project builds
* Check that tasks in `compoundTask` do not reference static tasks
* Make `toString` of `Path`s in subprojects relative to root project directory
* `crossScalaVersions` is now inherited from parent if not specified
* Added `scala-library.jar` to the `javac` classpath
* Project dependencies are added to published `ivy.xml`
* Added dependency tracking for Java sources using classfile parsing (with the usual limitations)
* Added `Process.cat` that will send contents of `URL`s and `File`s to standard output. Alternatively, `cat` can be used on a single `URL` or `File`. Example:
```scala
import java.net.URL
import java.io.File
val spde = new URL("http://technically.us/spde/About")
val dispatch = new URL("http://databinder.net/dispatch/About")
val build = new File("project/build.properties")
cat(spde, dispatch, build) #| "grep -i scala" !
```
### 0.4.6 to 0.5/0.5.1
* Fixed `ScalaTest` framework dropping stack traces
* Publish only public configurations by default
* Loader now adds `.m2/repository` for downloading Scala jars
* Can now fork the compiler and runner and the runner can use a different working directory.
* Maximum compiler errors shown is now configurable
* Fixed rebuilding and republishing released versions of `sbt` against new Scala versions (attempt #2)
* Fixed snapshot revision handling (Ivy needs changing pattern set on cache, apparently)
* Fixed handling of default configuration when `useMavenConfiguration` is `true`
* Cleanup on Environment, Analysis, Conditional, `MapUtilities`, and more...
* Tests for Environment, source dependencies, library dependency management, and more...
* Dependency management and multiple Scala versions
* Experimental plugin for producing project bootstrapper in a self-extracting jar
* Added ability to directly specify `URL` to use for dependency with the `from(url: URL)` method defined on `ModuleID`
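A hedged sketch of how this might look in a project definition, reusing the `normalID` style shown above (the URL is made up for illustration):

```scala
val dep = normalID from new java.net.URL("http://example.org/jars/example-lib-1.0.jar")
```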
* Fixed issue #30
* Support cross-building with `+` when running batch actions
* Additional flattening for project definitions: sources can go either in `project/build/src` (recursively) or `project/build` (flat)
* Fixed manual `reboot` not changing the version of Scala when it is manually `set`
* Fixed tab completion for cross-building
* Fixed a class loading issue with web applications
### 0.4.5 to 0.4.6
* Publishing to ssh/sftp/filesystem repository supported
* Exception traces are printed by default
* Fixed warning message about no `Class-Path` attribute from showing up for `run`
* Fixed `package-project` operation
* Fixed `Path.fromFile`
* Fixed issue with external process output being lost when sent to a `BufferedLogger` with `parallelExecution` enabled.
* Preserve history across `clean`
* Fixed issue with making relative path in jar with wrong separator
* Added cross-build functionality (prefix action with `+`).
* Added methods `scalaLibraryJar` and `scalaCompilerJar` to `FileUtilities`
* Include project dependencies for `deliver`/`publish`
* Add Scala dependencies for `make-pom`/`deliver`/`publish`, which requires these to depend on `package`
* Properly add compiler jar to run/test classpaths when main sources depend on it
* `TestFramework` root `ClassLoader` filters compiler classes used by `sbt`, which is required for projects using the compiler.
* Better access to dependencies:
* `mainDependencies` and `testDependencies` provide an analysis of the dependencies of your code as determined during compilation
* `scalaJars` is deprecated, use `mainDependencies.scalaJars` instead (provides a `PathFinder`, which is generally more useful)
* Added `jettyPort` method to `DefaultWebProject`.
* Fixed `package-project` to exclude `project/boot` and `project/build/target`
* Support specs 1.5.0 for Scala 2.7.4 version.
* Parallelization at the subtask level
* Parallel test execution at the suite/specification level.
### 0.4.3 to 0.4.5
* Sorted out repository situation in loader
* Added support for `http_proxy` environment variable
* Added `download` method from Nathan to `FileUtilities` to retrieve the contents of a URL.
* Added special support for compiler plugins, see CompilerPlugins page.
* `reload` command in scripted tests will now properly handle success/failure
* Very basic support for Java sources: Java sources under `src/main/java` and `src/test/java` will be compiled.
* `parallelExecution` defaults to value in parent project if there is one.
* Added 'console-project' that enters the Scala interpreter with the current `Project` bound to the variable `project`.
* The default Ivy cache manager is now configured with `useOrigin=true` so that it doesn't cache artifacts from the local filesystem.
* For users building from trunk, if a project specifies a version of `sbt` that ends in `-SNAPSHOT`, the loader will update `sbt` every time it starts up. The trunk version of `sbt` will always end in `-SNAPSHOT` now.
* Added automatic detection of classes with main methods for use when `mainClass` is not explicitly specified in the project definition. If exactly one main class is detected, it is used for `run` and `package`. If multiple main classes are detected, the user is prompted for which one to use for `run`. For `package`, no `Main-Class` attribute is automatically added and a warning is printed.
* Updated build to cross-compile against Scala 2.7.4.
* Fixed `proguard` task in `sbt`'s project definition
* Added `manifestClassPath` method that accepts the value for the `Class-Path` attribute
* Added `PackageOption` called `ManifestAttributes` that accepts `(java.util.jar.Attributes.Name, String)` or `(String, String)` pairs and adds them to the main manifest attributes
* Fixed some situations where characters would not be echoed at prompts other than main prompt.
* Fixed issue #20 (use `http_proxy` environment variable)
* Implemented issue #21 (native process wrapper)
* Fixed issue #22 (rebuilding and republishing released versions of `sbt` against new Scala versions, specifically Scala 2.7.4)
* Implemented issue #23 (inherit inline repositories declared in parent project)
### 0.4 to 0.4.3
* Direct dependencies on Scala libraries are checked for version equality with `scala.version`
* Transitive dependencies on `scala-library` and `scala-compiler` are filtered
* They are fixed by `scala.version` and provided on the classpath by `sbt`
* To access them, use the `scalaJars` method, `classOf[ScalaObject].getProtectionDomain.getCodeSource`, or `mainCompileConditional.analysis.allExternals`
* The configurations checked/filtered as described above are configurable. Nonstandard configurations are not checked by default.
* Version of `sbt` and Scala printed on startup
* Launcher asks if you want to try a different version if `sbt` or Scala could not be retrieved.
* After changing `scala.version` or `sbt.version` with `set`, note is printed that `reboot` is required.
* Moved managed dependency actions to `BasicManagedProject` (`update` is now available on `ParentProject`)
* Cleaned up `sbt`'s build so that you just need to do `update` and `full-build` to build from source. The trunk version of `sbt` will be available for use from the loader.
* The loader is now a subproject.
* For development, you'll still want the usual actions (such as `package`) for the main builder and `proguard` to build the loader.
* Fixed analysis plugin improperly including traits/abstract classes in subclass search
* `ScalaProject`s already had everything required to be parent projects: flipped the switch to enable it
* Proper method task support in scripted tests (`package` group tests rightly pass again)
* Improved tests in loader that check that all necessary libraries were downloaded properly
### 0.3.7 to 0.4
* Fixed issue with `build.properties` being unnecessarily updated in sub-projects when loading.
* Added method to compute the SHA-1 hash of a `String`
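The idea can be sketched in plain Scala using the JDK's `MessageDigest` (a standalone illustration, not sbt's actual helper method):

```scala
import java.security.MessageDigest

// Hash a String with SHA-1 and render the digest as lowercase hex.
def sha1(s: String): String =
  MessageDigest.getInstance("SHA-1")
    .digest(s.getBytes("UTF-8"))
    .map(b => "%02x".format(b & 0xff))
    .mkString

println(sha1("abc"))
```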
* Added pack200 methods
* Added initial process interface
* Added initial webstart support
* Added gzip methods
* Added `sleep` and `newer` commands to scripted testing.
* Scripted tests now test the version of `sbt` being built instead of the version doing the building.
* `testResources` is put on the test classpath instead of `testResourcesPath`
* Added `jetty-restart`, which does `jetty-stop` and then `jetty-run`
* Added automatic reloading of default web application
* Changed packaging behaviors (still likely to change)
* Inline configurations now allowed (can be used with configurations in inline XML)
* Split out some code related to managed dependencies from `BasicScalaProject` to new class `BasicManagedProject`
* Can specify that maven-like configurations should be automatically declared
* Fixed problem with nested modules being detected as tests
* `testResources`, `integrationTestResources`, and `mainResources` should now be added to appropriate classpaths
* Added project organization as a property that defaults to inheriting from the parent project.
* Project creation now prompts for the organization.
* Added method tasks, which are top-level actions with parameters.
* Made `help`, `actions`, and `methods` commands available to batch-style invocation.
* Applied Mikko's two fixes for webstart and fixed problem with pack200+sign. Also, fixed nonstandard behavior when gzip enabled.
* Added `control` method to `Logger` for action lifecycle logging
* Made standard logging level convenience methods final
* Made `BufferedLogger` have a per-actor buffer instead of a global buffer
* Added a `SynchronizedLogger` and a `MultiLogger` (intended to be used with the yet unwritten `FileLogger`)
* Changed method of atomic logging to be a method `logAll` accepting `List[LogEvent]` instead of `doSynchronized`
* Improved action lifecycle logging
* Parallel logging now provides immediate feedback about starting an action
* General cleanup, including removing unused classes and methods and reducing dependencies between classes
* `run` is now a method task that accepts options to pass to the `main` method (`runOptions` has been removed, `runTask` is no longer interactive, and `run` no longer starts a console if `mainClass` is undefined)
* Major task execution changes:
* Tasks automatically have implicit dependencies on tasks with the same name in dependent projects
* Implicit dependencies on interactive tasks are ignored, explicit dependencies produce an error
* Interactive tasks must be executed directly on the project on which they are defined
* Method tasks accept input arguments (`Array[String]`) and dynamically create the task to run
* Tasks can depend on tasks in other projects
* Tasks are run in parallel breadth-first style
* Added `test-only` method task, which restricts the tests to run to only those passed as arguments.
* Added `test-failed` method task, which restricts the tests to run. First, only tests passed as arguments are run. If no tests are passed, no filtering is done. Then, only tests that failed the previous run are run.
* Added `test-quick` method task, which restricts the tests to run. First, only tests passed as arguments are run. If no tests are passed, no filtering is done. Then, only tests that failed the previous run or had a dependency change are run.
* Added launcher that allows declaring version of sbt/scala to build project with.
* Added tab completion with ~
* Added basic tab completion for method tasks, including `test-*`
* Changed default pack options to be the default options of Pack200.Packer
* Fixed ~ behavior when action doesn't exist
### 0.3.6 to 0.3.7
* Improved classpath methods
* Refactored various features into separate project traits
* `ParentProject` can now specify dependencies
* Support for `optional` scope
* More API documentation
* Test resource paths provided on classpath for testing
* Added some missing read methods in `FileUtilities`
* Added scripted test framework
* Change detection using hashes of files
* Fixed problem with manifests not being generated (bug #14)
* Fixed issue with scala-tools repository not being included by default (again)
* Added option to set ivy cache location (mainly for testing)
* `trace` is no longer a logging level but a flag enabling/disabling stack traces
* `Project.loadProject` and related methods now accept a `Logger` to use
* Made hidden files and files that start with `'.'` excluded by default (`'.*'` is required because subversion seems to not mark `.svn` directories hidden on Windows)
* Implemented exit codes
* Added continuous compilation command `cc`
### 0.3.5 to 0.3.6
* Fixed bug #12.
* Compiled with 2.7.2.
### 0.3.2 to 0.3.5
* Fixed bug #11.
* Fixed problem with dependencies where source jars would be used instead of binary jars.
* Fixed scala-tools not being used by default for inline configurations.
* Small dependency management error message correction
* Slight refactoring for specifying whether scala-tools releases gets added to configured resolvers
* Separated repository/dependency overriding so that repositories can be specified inline for use with `ivy.xml` or `pom.xml` files
* Added ability to specify Ivy XML configuration in Scala.
* Added `clean-cache` action for deleting Ivy's cache
* Some initial work towards accessing a resource directory from tests
* Initial tests for `Path`
* Some additional `FileUtilities` methods, some `FileUtilities` method adjustments and some initial tests for `FileUtilities`
* A basic framework for testing `ReflectUtilities`, not run by default because of run time
* Minor cleanup to `Path` and added non-empty check to path components
* Catch additional exceptions in `TestFramework`
* Added `copyTask` task creation method.
* Added `jetty-run` action and added ability to package war files.
* Added `jetty-stop` action.
* Added `console-quick` action that is the same as `console` but doesn't compile sources first.
* Moved some custom `ClassLoader`s to `ClasspathUtilities` and improved a check.
* Added ability to specify hooks to call before `sbt` shuts down.
* Added `zip`, `unzip` methods to `FileUtilities`
* Added `append` equivalents to `write*` methods in `FileUtilities`
* Added first draft of integration testing
* Added batch command `compile-stats`
* Added methods to create tasks that have basic conditional execution based on declared sources/products of the task
* Added `newerThan` and `olderThan` methods to `Path`
* Added `reload` action to reread the project definition without losing the performance benefits of an already running jvm
* Added `help` action to tab completion
* Added handling of (effectively empty) scala source files that create no class files: they are always interpreted as modified.
* Added prompt to retry project loading if compilation fails
* `package` action now uses `fileTask` so that it only executes if files are out of date
* Fixed `ScalaTest` framework wrapper so that it fails the `test` action if tests fail
* Inline dependencies can now specify configurations
### 0.3.1 to 0.3.2
* Compiled jar with Java 1.5.
### 0.3 to 0.3.1
* Fixed bugs #8, #9, and #10.
### 0.2.3 to 0.3
* Version change only for first release.
### 0.2.2 to 0.2.3
* Added tests for `Dag`, `NameFilter`, `Version`
* Fixed handling of trailing `*`s in `GlobFilter` and added some error-checking for control characters, which `Pattern` doesn't seem to like
* Fixed `Analysis.allProducts` implementation
* It previously returned the sources instead of the generated classes
* Will only affect the count of classes (it should be correct now) and the debugging of missed classes (erroneously listed classes as missed)
* Made some implied preconditions on `BasicVersion` and `OpaqueVersion` explicit
* Made increment version behavior in `ScalaProject` easier to overload
* Added `Seq[..Option]` alternative to `...Option*` for tasks
* Documentation generation fixed to use latest value of version
* Fixed `BasicVersion.incrementMicro`
* Fixed test class loading so that `sbt` can test the version of `sbt` being developed (previously, the classes from the executing version of `sbt` were tested)
### 0.2.1 to 0.2.2
* Package name is now a call-by-name parameter for the package action
* Fixed release action calling compile multiple times
### 0.2.0 to 0.2.1
* Added some action descriptions
* jar name now comes from normalized name (lowercased and spaces to dashes)
* Some cleanups related to creating filters
* `Path` should only 'get' itself if the underlying file exists, to be consistent with other `PathFinder`s
* Added `---` operator for `PathFinder` that excludes paths from the `PathFinder` argument
* Removed `***` operator on `PathFinder`
* `**` operator on `PathFinder` matches all descendants or self that match the `NameFilter` argument
* The above should fix bug `#6`
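A hedged sketch of the revised operators in a project definition (`mainSourcePath` is assumed from the default project layout, and the `generated` subdirectory is made up for illustration):

```scala
// All .scala descendants of the main source tree, minus generated sources.
def handwrittenSources =
  (mainSourcePath ** "*.scala") --- (mainSourcePath / "generated" ** "*.scala")
```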
* Added version increment and release actions.
* Can now build sbt with sbt. Build scripts `build` and `clean` will still exist.
### 0.1.9 to 0.2.0
* Implemented typed properties and access to system properties
* Renamed `metadata` directory to `project`
* Information previously in `info` file now obtained by properties:
* `info.name --> name`
* `info.currentVersion --> version`
* Concrete `Project` subclasses should have a constructor that accepts a single argument of type `ProjectInfo` (argument `dependencies: Iterable[Project]` has been merged into `ProjectInfo`)
### 0.1.8 to 0.1.9
* Better default implementation of `allSources`.
* Generate warning if two jars on classpath have the same name.
* Upgraded to specs 1.4.0
* Upgraded to `ScalaCheck` 1.5
* Changed some update options to be final vals instead of objects.
* Added some more API documentation.
* Removed release action.
* Split compilation into separate main and test compilations.
* A failure in a `ScalaTest` run now fails the test action.
* Implemented reporters for `compile/scaladoc`, `ScalaTest`, `ScalaCheck`, and `specs` that delegate to the appropriate `sbt.Logger`.
### 0.1.7 to 0.1.8
* Improved configuring of tests to exclude.
* Simplified version handling.
* Task `&&` operator properly handles dependencies of tasks it combines.
* Changed method of inline library dependency declarations to be simpler.
* Better handling of errors in parallel execution.
### 0.1.6 to 0.1.7
* Added `graph` action to generate dot files (for graphviz) from dependency information (work in progress).
* Options are now passed to tasks as varargs.
* Redesigned `Path` properly, including `PathFinder` returning a `Set[Path]` now instead of `Iterable[Path]`.
* Moved paths out of `ScalaProject` and into `BasicProjectPaths` to keep path definitions separate from task definitions.
* Added initial support for managing third-party libraries through the `update` task, which must be explicitly called (it is not a dependency of compile or any other task). This is experimental, undocumented, and known to be incomplete.
* Parallel execution implementation at the project level, disabled by default. To enable, add:
```scala
override def parallelExecution = true
```
to your project definition. In order for logging to make sense, all project logging is buffered until the project is finished executing. Still to be done is some sort of notification of project execution (which ones are currently executing, how many remain).
* `run` and `console` are now specified as "interactive" actions, which means they are only executed on the project in which they are defined when called directly, and not on all dependencies. Their dependencies are still run on dependent projects.
* Generalized conditional tasks a bit. Of note is that analysis is no longer required to be in metadata/analysis, but is now in target/analysis by default.
* Message now displayed when project definition is recompiled on startup
* Project no longer inherits from Logger, but now has a log member.
* Dependencies passed to `project` are checked for null (may help with errors related to initialization/circular dependencies)
* Task dependencies are checked for null
* Projects in a multi-project configuration are checked to ensure that output paths are different (check can be disabled)
* Made `update` task globally synchronized because Ivy is not thread-safe.
* Generalized test framework, directly invoking frameworks now (used reflection before).
* Moved license files to licenses/
* Added support for `specs` and some support for `ScalaTest` (the test action doesn't fail if `ScalaTest` tests fail).
* Added `specs`, `ScalaCheck`, `ScalaTest` jars to lib/
* These are now required for compilation, but are optional at runtime.
* Added the appropriate licenses and notices.
* Options for `update` action are now taken from updateOptions member.
* Fixed `SbtManager` inline dependency manager to work properly.
* Improved Ivy configuration handling (not compiled with test dependencies yet though).
* Added case class implementation of `SbtManager` called `SimpleManager`.
* Project definitions not specifying dependencies can now use just a single argument constructor.
### 0.1.5 to 0.1.6
* `run` and `console` handle `System.exit` and multiple threads in user code under certain circumstances (see RunningProjectCode).
### 0.1.4 to 0.1.5
* Generalized interface with plugin (see `AnalysisCallback`)
* Split out task implementations and paths from `Project` to `ScalaProject`
* Subproject support (changed required project constructor signature: see `sbt/DefaultProject.scala`)
* Can specify dependencies between projects
* Execute tasks across multiple projects
* Classpath of all dependencies included when compiling
* Proper inter-project source dependency handling
* Can change to a project in an interactive session to work only on that project (and its dependencies)
* External dependency handling
* Tracks non-source dependencies (compiled classes and jars)
* Requires each class to be provided by exactly one classpath element (This means you cannot have two versions of the same class on the classpath, e.g. from two versions of a library)
* Changes in a project propagate the right source recompilations in dependent projects
* Consequences:
* Recompilation when changing java/scala version
* Recompilation when upgrading libraries (again, as indicated in the second point, situations where you have library-1.0.jar and library-2.0.jar on the classpath at the same time are not handled predictably. Replacing library-1.0.jar with library-2.0.jar should work as expected.)
* Changing sbt version will recompile project definitions
### 0.1.3 to 0.1.4
* Autodetection of Project definitions.
* Simple tab completion/history in an interactive session with JLine
* Added descriptions for most actions
### 0.1.2 to 0.1.3
* Dependency management between tasks and auto-discovery tasks.
* Should work on Windows.
### 0.1.1 to 0.1.2
* Should compile/build on Java 1.5
* Fixed run action implementation to include scala library on classpath
* Made project configuration easier
### 0.1 to 0.1.1
* Fixed handling of source files without a package
* Added easy project setup

@ -1,9 +0,0 @@
# Community
This part of the wiki has project "meta-information" such as where
to find source code and how to contribute. Check out the sidebar
on the right for links.
The mailing list is at
<http://groups.google.com/group/simple-build-tool/topics>. Please
use it for questions and comments!

@ -1,35 +0,0 @@
# Credits
The following people have contributed ideas, documentation, or code to sbt:
* Trond Bjerkestrand
* Steven Blundy
* Josh Cough
* Nolan Darilek
* Fred Dubois
* Nathan Hamblen
* Mark Harrah
* Joonas Javanainen
* Ismael Juma
* Viktor Klang
* David R. MacIver
* Ross McDonald
* Simon Olofsson
* Artyom Olshevskiy
* Andrew O'Malley
* Jorge Ortiz
* Mikko Peltonen
* Paul Phillips
* Ray Racine
* Indrajit Raychaudhuri
* Stuart Roebuck
* Harshad RJ
* Sanjin Šehić
* Tony Sloane
* Doug Tangren
* Seth Tisue
* Francisco Treacy
* Aaron D. Valade
* Eugene Vigdorchik
* Vesa Vilhonen
* Jason Zaugg

@ -1,15 +0,0 @@
[sbt-launch]: http://repo.typesafe.com/typesafe/ivy-snapshots/org.scala-sbt/sbt-launch/
# Nightly Builds
Nightly builds are currently being published to <http://repo.typesafe.com/typesafe/ivy-snapshots/>.
To use a nightly build, follow the instructions for normal [[Setup|Getting Started Setup]], except:
1. Download the launcher jar from one of the subdirectories of [sbt-launch]. They should be listed in chronological order, so the most recent one will be last.
2. Call your script something like `sbt-nightly` to retain access to a stable `sbt` launcher.
3. The version number is the name of the subdirectory and is of the form `0.13.x-yyyyMMdd-HHmmss`. Use this in a `build.properties` file.
Related to the third point, remember that an `sbt.version` setting in `<build-base>/project/build.properties` determines the version of sbt to use in a project. If it is not present, the default version associated with the launcher is used. This means that you must set `sbt.version` to the full nightly version (of the form `0.13.x-yyyyMMdd-HHmmss`) in an existing `<build-base>/project/build.properties`. You can verify that the right version of sbt is being used to build a project by running `sbt-version`.
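As a sketch, pinning a nightly in `project/build.properties` is a single line (the version shown is hypothetical; use the actual subdirectory name from the repository):

```text
sbt.version=0.13.0-20120801-052211
```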
To reduce problems, it is recommended to not use a launcher jar for one nightly version to launch a different nightly version of sbt.

@ -1,60 +0,0 @@
[API]: https://github.com/harrah/xsbt/tree/0.11/interface
[the email thread]: https://groups.google.com/group/simple-build-tool/browse_thread/thread/7761f8b2ce51f02c/129064ea836c9baf
[advanced test interface and runner]: https://groups.google.com/group/simple-build-tool/browse_thread/thread/f5a5fe06bbf3f006/d771009d407d5765
# Opportunities (Round 2)
Below is a running list of potential areas of contribution. This list may become out of date quickly, so you may want to check on the mailing list if you are interested in a specific topic.
1. There are plenty of possible visualization and analysis opportunities.
* 'compile' produces an Analysis of the source code containing
- Source dependencies
- Inter-project source dependencies
- Binary dependencies (jars + class files)
- data structure representing the [API] of the source code
There is some code already for generating dot files that isn't hooked up, but graphing dependencies and inheritance relationships is a general area of work.
* 'update' produces an [[Update Report]] mapping `Configuration/ModuleID/Artifact` to the retrieved `File`
* Ivy produces more detailed XML reports on dependencies. These come with an XSL stylesheet to view them, but this does not scale to large numbers of dependencies. Working on this is pretty straightforward: the XML files are created in `~/.ivy2` and the `.xsl` and `.css` are there as well, so you don't even need to work with sbt. Other approaches described in [the email thread]
* Tasks are a combination of static and dynamic graphs and it would be useful to view the graph of a run
* Settings are a static graph and there is code to generate the dot files, but it isn't hooked up anywhere.
2. If you really like testing and bigger projects, a long term, involved project is a more [advanced test interface and runner] that can handle testing JNI code and forking tests.
3. There is support for dependencies on external projects, like on GitHub. To be more useful, this probably needs to support being able to update the dependencies. It is also easy to extend this to svn or other ways of retrieving projects.
4. Dependency management is a general area. Working on Apache Ivy itself is another way to help. For example, I'm pretty sure Ivy is fundamentally single threaded. Either a) it's not and you can fix sbt to take advantage of this or b) make Ivy multi-threaded and faster at resolving dependencies.
5. If you like parsers, sbt commands and input tasks are written using custom parser combinators that provide tab completion and error handling. Among other things, the efficiency could be improved.
6. The javap task hasn't been reintegrated.
7. Implement enhanced 0.11-style warn/debug/info/error/trace commands. Currently, you set it like any other setting:
```scala
set logLevel := Level.Warn
// or, scoped to a configuration:
set logLevel in Test := Level.Warn
```
You could make commands that wrap this, like:
```text
warn test:run
```
Also, trace is currently an integer, but should really be an abstract data type.
8. There is more aggressive incremental compilation in sbt 0.11. I expect it to be more difficult to reproduce bugs. It would be helpful to have a mode that generates a diff between successive compilations and records the options passed to scalac. This could be replayed or inspected to try to find the cause.
9. Take the webstart support from 0.7 and make it a 0.11 plugin
10. Take ownership of the 0.7 installer plugin and make it an independent 0.11 plugin
# Documentation
1. There's a lot to do with this wiki. If you check the wiki out
from git, there's a directory called Dormant with some content
that needs going through.
2. the [[Home]] page mentions external project references (e.g. to a
git repo) but doesn't have anything to link to that explains how
to use those.
3. the [[Configurations]] page is missing a list of the built-in
configurations and the purpose of each.
4. grep the wiki's git checkout for "Wiki Maintenance Note" and
work on some of those
5. API docs are much needed.
6. Find useful answers or types/methods/values in the other docs, and pull references to them up into [[FAQ]] or [[Index]] so people can find the docs. In general the [[FAQ]] should feel a bit more like a bunch of pointers into the regular docs, rather than an alternative to the docs.
7. A lot of the pages could probably have better names, and/or little 2-4 word blurbs to the right of them in the sidebar.

@ -1,19 +0,0 @@
* [[Home]] - Overview of sbt
* [[Getting Started Guide|Getting Started Welcome]] - START HERE
* [[FAQ]] - Questions, answered.
* [[Index]] - Find types, values, and methods
* [[Community]] - source, forums, releases
* [[Change history|Changes]]
* [[Credits]]
* [[License|https://github.com/harrah/xsbt/blob/0.11/LICENSE]]
* [[Source code (github)|https://github.com/harrah/xsbt/tree/0.11]]
* [[Source code (SXR)|http://harrah.github.com/xsbt/latest/sxr/index.html]]
* [[API Documentation|http://harrah.github.com/xsbt/latest/api/index.html]]
* [[Places to help|Opportunities]]
* [[Nightly Builds]]
* [[Plugins list|sbt-0.10-plugins-list]]
* [[Resources]]
* [[Examples|Community-Examples]]
* [[Examples]]
* [[Detailed Topics]] - deep dive docs
* [[Extending sbt|Extending]] - internals docs

@ -1,128 +0,0 @@
The purpose of this page is to help developers find plugins that work with sbt 0.10+ and to let plugin developers promote their plugins, possibly with a brief description.
## Plugins
### Plugins for IDEs:
* IntelliJ IDEA
* SBT Plugin to generate IDEA project configuration: https://github.com/mpeltonen/sbt-idea
* IDEA Plugin to embed an SBT Console into the IDE: https://github.com/orfjackal/idea-sbt-plugin
* Netbeans: https://github.com/remeniuk/sbt-netbeans-plugin
* Eclipse: https://github.com/typesafehub/sbteclipse
### Web Plugins
* xsbt-web-plugin: https://github.com/siasia/xsbt-web-plugin
* xsbt-webstart: https://github.com/ritschwumm/xsbt-webstart
* sbt-appengine: https://github.com/sbt/sbt-appengine
* sbt-gwt-plugin: https://github.com/thunderklaus/sbt-gwt-plugin
* sbt-cloudbees-plugin: https://github.com/timperrett/sbt-cloudbees-plugin
### Test plugins
* junit_xml_listener: https://github.com/ijuma/junit_xml_listener
* sbt-growl-plugin: https://github.com/softprops/sbt-growl-plugin
* sbt-teamcity-test-reporting-plugin: https://github.com/guardian/sbt-teamcity-test-reporting-plugin
* xsbt-cucumber-plugin: https://github.com/skipoleschris/xsbt-cucumber-plugin
### Static Code Analysis plugins
* cpd4sbt: https://bitbucket.org/jmhofer/cpd4sbt (copy/paste detection, works for Scala, too)
* findbugs4sbt: https://bitbucket.org/jmhofer/findbugs4sbt (FindBugs only supports Java projects atm)
### One jar plugins
* sbt-assembly: https://github.com/sbt/sbt-assembly
* xsbt-proguard-plugin: https://github.com/siasia/xsbt-proguard-plugin
* sbt-deploy: https://github.com/reaktor/sbt-deploy
* sbt-appbundle (os x standalone): https://github.com/sbt/sbt-appbundle
### Frontend development plugins
* coffeescripted-sbt: https://github.com/softprops/coffeescripted-sbt
* less-sbt (for less-1.3.0): https://github.com/softprops/less-sbt
* sbt-less-plugin (it uses less-1.3.0): https://github.com/btd/sbt-less-plugin
* sbt-emberjs: https://github.com/stefri/sbt-emberjs
* sbt-closure: https://github.com/eltimn/sbt-closure
* sbt-yui-compressor: https://github.com/indrajitr/sbt-yui-compressor
* sbt-requirejs: https://github.com/scalatra/sbt-requirejs
### LWJGL (Light Weight Java Game Library) Plugin
* sbt-lwjgl-plugin: https://github.com/philcali/sbt-lwjgl-plugin
### Release plugins
* posterous-sbt: https://github.com/n8han/posterous-sbt
* sbt-signer-plugin: https://github.com/rossabaker/sbt-signer-plugin
* sbt-izpack (generates an IzPack installer): http://software.clapper.org/sbt-izpack/
* sbt-ghpages-plugin (publishes generated site and api): https://github.com/jsuereth/xsbt-ghpages-plugin
* sbt-gpg-plugin (PGP signing plugin, can generate keys too): https://github.com/sbt/xsbt-gpg-plugin
* sbt-release (customizable release process): https://github.com/gseitz/sbt-release
* sbt-unique-version (emulates unique snapshots): https://github.com/sbt/sbt-unique-version
### System plugins
* sbt-sh (executes shell commands): https://github.com/steppenwells/sbt-sh
* cronish-sbt (interval sbt / shell command execution): https://github.com/philcali/cronish-sbt
* git (executes git commands): https://github.com/sbt/sbt-git-plugin
* svn (execute svn commands): https://github.com/xuwei-k/sbtsvn
### Code generator plugins
* xsbt-fmpp-plugin (FreeMarker Scala/Java Templating): https://github.com/aloiscochard/xsbt-fmpp-plugin
* sbt-scalaxb (XSD and WSDL binding): https://github.com/eed3si9n/scalaxb
* sbt-protobuf (Google Protocol Buffers): https://github.com/gseitz/sbt-protobuf
* sbt-avro (Apache Avro): https://github.com/cavorite/sbt-avro
* sbt-xjc (XSD binding, using [JAXB XJC](http://download.oracle.com/javase/6/docs/technotes/tools/share/xjc.html)): https://github.com/retronym/sbt-xjc
* xsbt-scalate-generate (Generate/Precompile Scalate Templates): https://github.com/backchatio/xsbt-scalate-generate
* sbt-antlr (Generate Java source code based on ANTLR3 grammars): https://github.com/stefri/sbt-antlr
* xsbt-reflect (Generate Scala source code for project name and version): https://github.com/ritschwumm/xsbt-reflect
* sbt-buildinfo (Generate Scala source for any settings): https://github.com/sbt/sbt-buildinfo
* lifty (Brings scaffolding to SBT): https://github.com/lifty/lifty
* sbt-thrift (Thrift Code Generation): https://github.com/bigtoast/sbt-thrift
* xsbt-hginfo (Generate Scala source code for Mercurial repository information): https://bitbucket.org/pustina/xsbt-hginfo
* sbt-scalashim (Generate Scala shim like `sys.error`): https://github.com/sbt/sbt-scalashim
* sbtend (Generate Java source code from [xtend](http://www.eclipse.org/xtend/) ): https://github.com/xuwei-k/sbtend
### Database plugins
* sbt-liquibase (Liquibase RDBMS database migrations): https://github.com/bigtoast/sbt-liquibase
* sbt-dbdeploy (dbdeploy, a database change management tool): https://github.com/mr-ken/sbt-dbdeploy
### Documentation plugins
* sbt-lwm (Convert lightweight markup files, e.g., Markdown and Textile, to HTML): http://software.clapper.org/sbt-lwm/
### Utility plugins
* jot (Write down your ideas lest you forget them): https://github.com/softprops/jot
* ls-sbt (An sbt interface for ls.implicit.ly): https://github.com/softprops/ls
* np (Dead simple new project directory generation): https://github.com/softprops/np
* sbt-editsource (A poor man's *sed*(1), for SBT): http://software.clapper.org/sbt-editsource/
* sbt-dirty-money (Cleans Ivy2 cache): https://github.com/sbt/sbt-dirty-money
* sbt-dependency-graph (Creates a graphml file of the dependency tree): https://github.com/jrudolph/sbt-dependency-graph
* sbt-cross-building (Simplifies building your plugins for multiple versions of sbt): https://github.com/jrudolph/sbt-cross-building
* sbt-inspectr (Displays settings dependency tree): https://github.com/eed3si9n/sbt-inspectr
* sbt-revolver (Triggered restart, hot reloading): https://github.com/spray/sbt-revolver
* sbt-scalaedit (Open and upgrade ScalaEdit (text editor)): https://github.com/kjellwinblad/sbt-scalaedit-plugin
* sbt-man (Looks up scaladoc): https://github.com/sbt/sbt-man
* sbt-taglist (Looks for TODO-tags in the sources): https://github.com/johanandren/sbt-taglist
### Code coverage plugins
* sbt-scct: https://github.com/dvc94ch/sbt-scct
* jacoco4sbt: https://bitbucket.org/jmhofer/jacoco4sbt
### Android plugin
* android-plugin: https://github.com/jberkel/android-plugin
* android-sdk-plugin: https://github.com/pfn/android-sdk-plugin
### Build interoperability plugins
* ant4sbt: https://bitbucket.org/jmhofer/ant4sbt
### OSGi plugin
* sbtosgi: https://github.com/typesafehub/sbtosgi

@ -1,167 +0,0 @@
[Ivy documentation]: http://ant.apache.org/ivy/history/2.2.0/ivyfile/dependency-artifact.html
[Artifact API]: http://harrah.github.com/xsbt/latest/api/sbt/Artifact$.html
[SettingsDefinition]: http://harrah.github.com/xsbt/latest/api/#sbt.Init$SettingsDefinition
# Artifacts
# Selecting default artifacts
By default, the published artifacts are the main binary jar, a jar containing the main sources and resources, and a jar containing the API documentation. You can add artifacts for the test classes, sources, or API, or you can disable some of the main artifacts.
To add all test artifacts:
```scala
publishArtifact in Test := true
```
To add them individually:
```scala
// enable publishing the jar produced by `test:package`
publishArtifact in (Test, packageBin) := true
// enable publishing the test API jar
publishArtifact in (Test, packageDoc) := true
// enable publishing the test sources jar
publishArtifact in (Test, packageSrc) := true
```
To disable main artifacts individually:
```scala
// disable publishing the main jar produced by `package`
publishArtifact in (Compile, packageBin) := false
// disable publishing the main API jar
publishArtifact in (Compile, packageDoc) := false
// disable publishing the main sources jar
publishArtifact in (Compile, packageSrc) := false
```
# Modifying default artifacts
Each built-in artifact has several configurable settings in addition to `publish-artifact`.
The basic ones are `artifact` (of type `SettingKey[Artifact]`), `mappings` (of type `TaskKey[Seq[(File,String)]]`), and `artifactPath` (of type `SettingKey[File]`).
They are scoped by `(<config>, <task>)` as indicated in the previous section.
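For example, to change where the main binary jar is written, reassign its `artifactPath` (a sketch in 0.12 syntax; the `dist` directory name is illustrative):

```scala
// write the main jar to <target>/dist/<name>.jar instead of the default location
artifactPath in (Compile, packageBin) <<= (target, name) { (t, n) => t / "dist" / (n + ".jar") }
```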
To modify the type of the main artifact, for example:
```scala
artifact in (Compile, packageBin) ~= { (art: Artifact) =>
art.copy(`type` = "bundle")
}
```
The generated artifact name is determined by the `artifact-name` setting. This setting is of type `(String, ModuleID, Artifact) => String`, where the String argument is the configuration and the String result is the name of the file to produce. The default implementation is `Artifact.artifactName _`. The function may be modified to produce different local names for artifacts without affecting the published name, which is determined by the `artifact` definition combined with the repository pattern.
For example, to produce a minimal name without a classifier or cross path:
```scala
artifactName := { (config: String, module: ModuleID, artifact: Artifact) =>
artifact.name + "-" + module.revision + "." + artifact.extension
}
```
(Note that in practice you rarely want to drop the classifier.)
Finally, you can get the `(Artifact, File)` pair for the artifact by mapping the `packaged-artifact` task. Note that if you don't need the `Artifact`, you can get just the File from the package task (`package`, `package-doc`, or `package-src`). In both cases, mapping the task to get the file ensures that the artifact is generated first and so the file is guaranteed to be up-to-date.
For example:
```scala
myTask <<= packagedArtifact in (Compile, packageBin) map { case (art: Artifact, file: File) =>
println("Artifact definition: " + art)
println("Packaged file: " + file.getAbsolutePath)
}
```
where `val myTask = TaskKey[Unit](...)`.
# Defining custom artifacts
In addition to configuring the built-in artifacts, you can declare other artifacts to publish. Multiple artifacts are allowed when using Ivy metadata, but a Maven POM file only supports distinguishing artifacts based on classifiers and these are not recorded in the POM.
Basic `Artifact` construction looks like:
```scala
Artifact("name", "type", "extension")
Artifact("name", "classifier")
Artifact("name", url: URL)
Artifact("name", Map("extra1" -> "value1", "extra2" -> "value2"))
```
For example:
```scala
Artifact("myproject", "zip", "zip")
Artifact("myproject", "image", "jpg")
Artifact("myproject", "jdk15")
```
See the [Ivy documentation] for more details on artifacts. See the [Artifact API] for combining the parameters above and specifying [Configurations] and extra attributes.
To declare these artifacts for publishing, map them to the task that generates the artifact:
```scala
myImageTask := {
val artifact: File = makeArtifact(...)
artifact
}
addArtifact( Artifact("myproject", "image", "jpg"), myImageTask )
```
where `val myImageTask = TaskKey[File](...)`.
`addArtifact` returns a sequence of settings (wrapped in a [SettingsDefinition]). In a full build configuration, usage looks like:
```scala
...
lazy val proj = Project(...)
.settings( addArtifact(...).settings : _* )
...
```
# Publishing .war files
A common use case for web applications is to publish the `.war` file instead of the `.jar` file.
```scala
// disable .jar publishing
publishArtifact in (Compile, packageBin) := false
// create an Artifact for publishing the .war file
artifact in (Compile, packageWar) ~= { (art: Artifact) =>
art.copy(`type` = "war", extension = "war")
}
// add the .war file to what gets published
addArtifact(artifact in (Compile, packageWar), packageWar)
```
# Using dependencies with artifacts
To specify the artifacts to use from a dependency that has custom or multiple artifacts, use the `artifacts` method on your dependencies. For example:
```scala
libraryDependencies += "org" % "name" % "rev" artifacts(Artifact("name", "type", "ext"))
```
The `from` and `classifier` methods (described on the [[Library Management]] page) are actually convenience methods that translate to `artifacts`:
```scala
def from(url: String) = artifacts( Artifact(name, new URL(url)) )
def classifier(c: String) = artifacts( Artifact(name, c) )
```
That is, the following two dependency declarations are equivalent:
```scala
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
libraryDependencies += "org.testng" % "testng" % "5.7" artifacts( Artifact("testng", "jdk15") )
```

@ -1,130 +0,0 @@
# Best Practices
This page describes best practices for working with sbt.
Nontrivial additions and changes should generally be discussed on the [mailing list](http://groups.google.com/group/simple-build-tool/topics) first.
(Because there isn't built-in support for discussing GitHub wiki edits like normal commits, a subpar suggestion can only be reverted in its entirety without comment.)
### `project/` vs. `~/.sbt/`
Anything that is necessary for building the project should go in `project/`.
This includes things like the web plugin.
`~/.sbt/` should contain local customizations and commands that are useful for working with a build but are not necessary for the build itself.
An example is an IDE plugin.
### Local settings
There are two options for settings that are specific to a user. An example of such a setting is inserting the local Maven repository at the beginning of the resolvers list:
```scala
resolvers <<= resolvers {rs =>
val localMaven = "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
localMaven +: rs
}
```
1. Put settings specific to a user in a global `.sbt` file, such as `~/.sbt/local.sbt`. These settings will be applied to all projects.
2. Put settings in a `.sbt` file in a project that isn't checked into version control, such as `<project>/local.sbt`. sbt combines the settings from multiple `.sbt` files, so you can still have the standard `<project>/build.sbt` and check that into version control.
### .sbtrc
Put commands to be executed when sbt starts up in a `.sbtrc` file, one per line.
These commands run before a project is loaded and are useful for defining aliases, for example.
sbt executes commands in `$HOME/.sbtrc` (if it exists) and then `<project>/.sbtrc` (if it exists).
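For example, a hypothetical `~/.sbtrc` defining two aliases might contain:

```text
alias t=test
alias to=test-only
```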
### Generated files
Write any generated files to a subdirectory of the output directory, which is specified by the `target` setting.
This makes it easy to clean up after a build and provides a single location to organize generated files.
Any generated files that are specific to a Scala version should go in `crossTarget` for efficient cross-building.
For generating sources and resources, see [[Common Tasks]].
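As a sketch (0.12 syntax; the `demo` subdirectory and file name are illustrative), a task that writes the project version to a resource file under `resourceManaged` would look like:

```scala
// generate <resource-managed>/demo/version.txt; because it lives under
// resourceManaged (inside target), `clean` removes it automatically
resourceGenerators in Compile <+= (resourceManaged in Compile, version) map { (dir, v) =>
  val file = dir / "demo" / "version.txt"
  IO.write(file, v)
  Seq(file)
}
```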
### Don't hard code
Don't hard code constants, like the output directory `target/`.
This is especially important for plugins.
A user might change the `target` setting to point to `build/`, for example, and the plugin needs to respect that.
Instead, use the setting, like:
```scala
myDirectory <<= target(_ / "sub-directory")
```
### Don't "mutate" files
A build naturally consists of a lot of file manipulation.
How can we reconcile this with the task system, which otherwise helps us avoid mutable state?
One approach, which is the recommended approach and the approach used by sbt's default tasks, is to only write to any given file once and only from a single task.
A build product (or by-product) should be written exactly once by only one task.
The task should then, at a minimum, provide the Files created as its result.
Another task that wants to use Files should map the task, simultaneously obtaining the File reference and ensuring that the task has run (and thus the file is constructed).
Obviously you cannot do much about the user or other processes modifying the files, but you can make the I/O that is under the build's control more predictable by treating file contents as immutable at the level of Tasks.
For example:
```scala
lazy val makeFile = TaskKey[File]("make-file")
// define a task that creates a file,
// writes some content, and returns the File
// The write happens exactly once, by this task alone
makeFile := {
val f: File = file("/tmp/data.txt")
IO.write(f, "Some content")
f
}
// The result of makeFile is the constructed File,
// so useFile can map makeFile and simultaneously
// get the File and declare the dependency on makeFile
useFile <<= makeFile map { (f: File) =>
doSomething( f )
}
```
This arrangement is not always possible, but it should be the rule and not the exception.
### Use absolute paths
Construct only absolute Files.
Either specify an absolute path
```scala
file("/home/user/A.scala")
```
or construct the file from an absolute base:
```scala
base / "A.scala"
```
This is related to the no hard coding best practice because the proper way involves referencing the `baseDirectory` setting.
For example, the following defines the myPath setting to be the `<base>/licenses/` directory.
```scala
myPath <<= baseDirectory(_ / "licenses")
```
In Java (and thus in Scala), a relative File is relative to the current working directory.
The working directory is not always the same as the build root directory for a number of reasons.
The only exception to this rule is when specifying the base directory for a Project.
Here, sbt will resolve a relative File against the build root directory for you for convenience.
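This behavior can be seen with plain Scala (nothing sbt-specific): a relative `File` only acquires a location when resolved against the JVM's current working directory:

```scala
object RelativeFileDemo extends App {
  val rel = new java.io.File("A.scala")
  println(rel.isAbsolute)       // prints false
  // getAbsolutePath resolves against the current working directory,
  // which is not necessarily the build root
  println(rel.getAbsolutePath)
}
```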
### Parser combinators
1. Use `token` everywhere to clearly delimit tab completion boundaries.
2. Don't overlap or nest tokens. The behavior here is unspecified and will likely generate an error in the future.
3. Use `flatMap` for general recursion. sbt's combinators are strict to limit the number of classes generated, so use `flatMap` like:
```scala
lazy val parser: Parser[Int] = token(IntBasic) flatMap { i =>
if(i <= 0)
success(i)
else
token(Space ~> parser)
}
```
This example defines a parser for a whitespace-delimited list of integers, ending with a negative number, and returns that final, negative number.

@ -1,122 +0,0 @@
[Attributed]: http://harrah.github.com/xsbt/latest/api/sbt/Attributed.html
# Classpaths, sources, and resources
This page discusses how sbt builds up classpaths for different actions, like `compile`, `run`, and `test` and how to override or augment these classpaths.
# Basics
In sbt 0.10 and later, classpaths now include the Scala library and (when declared as a dependency) the Scala compiler. Classpath-related settings and tasks typically provide a value of type `Classpath`. This is an alias for `Seq[Attributed[File]]`. [Attributed] is a type that associates a heterogeneous map with each classpath entry. Currently, this allows sbt to associate the `Analysis` resulting from compilation with the corresponding classpath entry and for managed entries, the `ModuleID` and `Artifact` that defined the dependency.
To explicitly extract the raw `Seq[File]`, use the `files` method implicitly added to `Classpath`:
```scala
val cp: Classpath = ...
val raw: Seq[File] = cp.files
```
To create a `Classpath` from a `Seq[File]`, use `classpath` and to create an `Attributed[File]` from a `File`, use `Attributed.blank`:
```scala
val raw: Seq[File] = ...
val cp: Classpath = raw.classpath
val rawFile: File = ...
val af: Attributed[File] = Attributed.blank(rawFile)
```
## Unmanaged v. managed
Classpaths, sources, and resources are separated into two main categories: unmanaged and managed.
Unmanaged files are manually created files that are outside of the control of the build.
They are the inputs to the build.
Managed files are under the control of the build.
These include generated sources and resources as well as resolved and retrieved dependencies and compiled classes.
Tasks that produce managed files should be inserted as follows:
```scala
sourceGenerators in Compile <+= sourceManaged in Compile map { out =>
generate(out / "some_directory")
}
```
In this example, `generate` is some function of type `File => Seq[File]` that actually does the work.
The `<+=` method is like `+=`, but allows the right hand side to have inputs (like the difference between `:=` and `<<=`).
So, we are appending a new task to the list of main source generators (`sourceGenerators in Compile`).
To insert a named task, which is the better approach for plugins:
```scala
sourceGenerators in Compile <+= (mySourceGenerator in Compile).task
mySourceGenerator in Compile <<= sourceManaged in Compile map { out =>
generate(out / "some_directory")
}
```
where `mySourceGenerator` is defined as:
```scala
val mySourceGenerator = TaskKey[Seq[File]](...)
```
The `task` method is used to refer to the actual task instead of the result of the task.
For resources, there are similar keys `resourceGenerators` and `resourceManaged`.
### Excluding source files by name
The project base directory is by default a source directory in addition to `src/main/scala`. You can exclude source files by name (`butler.scala` in the example below) like:
```scala
excludeFilter in unmanagedSources := "butler.scala"
```
Read more on [How to exclude .scala source file in project folder - Google Groups](http://groups.google.com/group/simple-build-tool/browse_thread/thread/cd5332a164405568?hl=en)
## External v. internal
Classpaths are also divided into internal and external dependencies.
The internal dependencies are inter-project dependencies.
These effectively put the outputs of one project on the classpath of another project.
External classpaths are the union of the unmanaged and managed classpaths.
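For instance, in a full build definition an internal dependency is declared with `dependsOn`, which places the output of `core` on the classpath of `app` (project names are illustrative):

```scala
import sbt._

object MyBuild extends Build {
  lazy val core = Project("core", file("core"))
  lazy val app  = Project("app",  file("app")) dependsOn(core)
}
```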
## Keys
For classpaths, the relevant keys are:
* `unmanaged-classpath`
* `managed-classpath`
* `external-dependency-classpath`
* `internal-dependency-classpath`
For sources:
* `unmanaged-sources` These are by default built up from `unmanaged-source-directories`, which consists of `scala-source` and `java-source`.
* `managed-sources` These are generated sources.
* `sources` Combines `managed-sources` and `unmanaged-sources`.
* `source-generators` These are tasks that generate source files. Typically, these tasks will put sources in the directory provided by `source-managed`.
For resources
* `unmanaged-resources` These are by default built up from `unmanaged-resource-directories`, which by default is `resource-directory`, excluding files matched by `default-excludes`.
* `managed-resources` By default, this is empty for standard projects. sbt plugins will have a generated descriptor file here.
* `resource-generators` These are tasks that generate resource files. Typically, these tasks will put resources in the directory provided by `resource-managed`.
Use the [[inspect command|Inspecting Settings]] for more details.
See also a related [StackOverflow answer](http://stackoverflow.com/a/7862872/850196).
## Example
Suppose you have a standalone project that uses a library which loads `xxx.properties` from the classpath at run time, and you put `xxx.properties` inside a directory named `config`. When you run `sbt run`, you want that directory to be on the classpath.
```scala
unmanagedClasspath in Runtime <<= (unmanagedClasspath in Runtime, baseDirectory) map { (cp, bd) => cp :+ Attributed.blank(bd / "config") }
```
Or shorter:
```scala
unmanagedClasspath in Runtime <+= (baseDirectory) map { bd => Attributed.blank(bd / "config") }
```

@ -1,180 +0,0 @@
# Command Line Reference
This page is a relatively complete list of command line options,
commands, and tasks you can use from the sbt interactive prompt or
in batch mode. See [[Running|Getting Started Running]] in the
Getting Started Guide for an intro to the basics, while this page
has a lot more detail.
## Notes on the command line
* There is a technical distinction in sbt between _tasks_, which
are "inside" the build definition, and _commands_, which
manipulate the build definition itself. If you're interested in
creating a command, see [[Commands]]. This specific sbt meaning of
"command" means there's no good general term for "thing you can
type at the sbt prompt", which may be a setting, task, or command.
* Some tasks produce useful values. The `toString` representation of these values can be shown using `show <task>` to run the task instead of just `<task>`.
* In a multi-project build, execution dependencies and the
`aggregate` setting control which tasks from which projects are
executed. See
[[multi-project builds|Getting Started Multi-Project]].
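For example, to print the report produced by the `update` task instead of only running it, enter at the sbt prompt:

```text
> show update
```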
## Project-level tasks
* `clean`
Deletes all generated files (the `target` directory).
* `publish-local`
Publishes artifacts (such as jars) to the local Ivy repository as described in [[Publishing]].
* `publish`
Publishes artifacts (such as jars) to the repository defined by the `publish-to` setting, described in [[Publishing]].
* `update`
Resolves and retrieves external dependencies as described in
[[library dependencies|Getting Started Library Dependencies]].
## Configuration-level tasks
Configuration-level tasks are tasks associated with a configuration. For example, `compile`, which is equivalent to `compile:compile`, compiles the main source code (the `compile` configuration). `test:compile` compiles the test source code (the `test` configuration). Most tasks for the `compile` configuration have an equivalent in the `test` configuration that can be run using a `test:` prefix.
* `compile`
Compiles the main sources (in the `src/main/scala` directory). `test:compile` compiles test sources (in the `src/test/scala/` directory).
* `console`
Starts the Scala interpreter with a classpath including the compiled sources, all jars in the `lib` directory, and managed libraries. To return to sbt, type `:quit`, Ctrl+D (Unix), or Ctrl+Z (Windows). Similarly, `test:console` starts the interpreter with the test classes and classpath.
* `console-quick`
Starts the Scala interpreter with the project's compile-time dependencies on the classpath. `test:console-quick` uses the test dependencies. This task differs from `console` in that it does not force compilation of the current project's sources.
* `console-project`
Enters an interactive session with sbt and the build definition on the classpath. The build definition and related values are bound to variables and common packages and values are imported. See [[Console Project]] for more information.
* `doc`
Generates API documentation for Scala source files in `src/main/scala` using scaladoc. `test:doc` generates API documentation for source files in `src/test/scala`.
* `package`
Creates a jar file containing the files in `src/main/resources` and the classes compiled from `src/main/scala`.
`test:package` creates a jar containing the files in `src/test/resources` and the classes compiled from `src/test/scala`.
* `package-doc`
Creates a jar file containing API documentation generated from Scala source files in `src/main/scala`.
`test:package-doc` creates a jar containing API documentation for test source files in `src/test/scala`.
* `package-src`
Creates a jar file containing all main source files and resources. The packaged paths are relative to `src/main/scala` and `src/main/resources`.
Similarly, `test:package-src` operates on test source files and resources.
* `run <argument>*`
Runs the main class for the project in the same virtual machine as `sbt`. The main class is passed the `argument`s provided. Please see [[Running Project Code]] for details on the use of `System.exit` and multithreading (including GUIs) in code run by this action.
`test:run` runs a main class in the test code.
* `run-main <main-class> <argument>*`
Runs the specified main class for the project in the same virtual machine as `sbt`. The main class is passed the `argument`s provided. Please see [[Running Project Code]] for details on the use of `System.exit` and multithreading (including GUIs) in code run by this action.
`test:run-main` runs the specified main class in the test code.
* `test`
Runs all tests detected during test compilation. See [[Testing]] for details.
* `test-only <test>*`
Runs the tests provided as arguments. `*` is interpreted as a wildcard in the test name. See [[Testing]] for details.
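For example, a hypothetical invocation selecting tests by wildcard (the test names are illustrative):

```text
> test-only org.example.MySpec
> test-only org.example.*Spec
```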
## General commands
* `exit` or `quit`
End the current interactive session or build. Additionally, `Ctrl+D` (Unix) or `Ctrl+Z` (Windows) will exit the interactive prompt.
* `help <command>`
Displays detailed help for the specified command. If no command is provided, displays brief descriptions of all commands.
* `projects`
List all available projects. (See [[Full Configuration]] for details on multi-project builds.)
* `project <project-id>`
Change the current project to the project with ID `<project-id>`. Further operations will be done in the context of the given project. (See [[Full Configuration]] for details on multi-project builds.)
* `~ <command>`
Executes the specified command or task whenever source files change. See [[Triggered Execution]] for details.
* `< filename`
Executes the commands in the given file. Each command should be on its own line. Empty lines and lines beginning with '#' are ignored.
* `+ <command>`
Executes the specified command or task for all versions of Scala defined in the `cross-scala-versions` setting.
* `++ <version> <command>`
Temporarily changes the version of Scala building the project and executes the provided command. `<command>` is optional. The specified version of Scala is used until the project is reloaded, settings are modified (such as by the `set` or `session` commands), or `++` is run again. `<version>` does not need to be listed in the build definition, but it must be available in a repository.
* `; A ; B`
Execute A and if it succeeds, run B. Note that the leading semicolon is required.
* `eval <Scala-expression>`
Evaluates the given Scala expression and returns the result and inferred type. This can be used to set system properties, perform quick calculations, fork processes, and so on.
For example:
```scala
> eval System.setProperty("demo", "true")
> eval 1+1
> eval "ls -l" !
```
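For example, the `~` and `;` commands described above can be combined with ordinary tasks:

```text
> ; clean ; compile
> ~ test
```

The first line runs `clean` and, if it succeeds, `compile`; the second re-runs `test` whenever source files change.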
## Commands for managing the build definition
* `reload [plugins|return]`
If no argument is specified, reloads the build, recompiling any build or plugin definitions as necessary.
`reload plugins` changes the current project to the build definition project (in `project/`). This can be useful to directly manipulate the build definition. For example, running `clean` on the build definition project will force snapshots to be updated and the build definition to be recompiled.
`reload return` changes back to the main project.
* `set <setting-expression>`
Evaluates and applies the given setting definition. The setting
applies until sbt is restarted, the build is reloaded, or the
setting is overridden by another `set` command or removed by the
`session` command. See
[[.sbt build definition|Getting Started Basic Def]] and [[Inspecting Settings]] for details.
* `session <command>`
Manages session settings defined by the `set` command. See [[Inspecting Settings]] for details.
* `inspect <setting-key>`
Displays information about settings, such as the value, description, defining scope, dependencies, delegation chain, and related settings. See [[Inspecting Settings]] for details.
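For example, a hypothetical session that modifies a setting and then persists the session settings (see [[Inspecting Settings]] for the full `session` command):

```text
> set scalacOptions += "-deprecation"
> session save
```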
## Command Line Options
System properties can be provided either as JVM options or as sbt arguments, in both cases as `-Dprop=value`. The following properties influence sbt execution. Also see [[Launcher]].
<table>
<thead>
<tr>
<td>_Property_</td>
<td>_Values_</td>
<td>_Default_</td>
<td>_Meaning_</td>
</tr>
</thead>
<tbody>
<tr>
<td>`sbt.log.noformat`</td>
<td>Boolean</td>
<td>false</td>
<td>If true, disable ANSI color codes. Useful on build servers or terminals that don't support color.</td>
</tr>
<tr>
<td>`sbt.global.base`</td>
<td>Directory</td>
<td>`~/.sbt`</td>
<td>The directory containing global settings and plugins</td>
</tr>
<tr>
<td>`sbt.ivy.home`</td>
<td>Directory</td>
<td>`~/.ivy2`</td>
<td>The directory containing the local Ivy repository and artifact cache</td>
</tr>
<tr>
<td>`sbt.boot.directory`</td>
<td>Directory</td>
<td>`~/.sbt/boot`</td>
<td>Path to shared boot directory</td>
</tr>
<tr>
<td>`sbt.main.class`</td>
<td>String</td>
<td></td>
<td></td>
</tr>
<tr>
<td>`xsbt.inc.debug`</td>
<td>Boolean</td>
<td>false</td>
<td></td>
</tr>
<tr>
<td>`sbt.version`</td>
<td>Version</td>
<td>0.11.3</td>
<td>sbt version to use, usually taken from project/build.properties</td>
</tr>
<tr>
<td>`sbt.boot.properties`</td>
<td>File</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
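For example, properties can be passed on the command line when starting sbt (the paths are illustrative):

```text
$ sbt -Dsbt.log.noformat=true -Dsbt.ivy.home=/home/user/.ivy2-alt
```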

# Compiler Plugin Support
There is some special support for using compiler plugins. You can set `auto-compiler-plugins` to `true` to enable this functionality.
```scala
autoCompilerPlugins := true
```
To use a compiler plugin, you either put it in your unmanaged library directory (`lib/` by default) or add it as managed dependency in the `plugin` configuration. `addCompilerPlugin` is a convenience method for specifying `plugin` as the configuration for a dependency:
```scala
addCompilerPlugin("org.scala-tools.sxr" %% "sxr" % "0.2.7")
```
The `compile` and `test-compile` actions will use any compiler plugins found in the `lib` directory or in the `plugin` configuration. You are responsible for configuring the plugins as necessary. For example, Scala X-Ray requires the extra option:
```scala
// declare the main Scala source directory as the base directory
scalacOptions <<= (scalacOptions, scalaSource in Compile) { (options, base) =>
options :+ ("-Psxr:base-directory:" + base.getAbsolutePath)
}
```
You can still specify compiler plugins manually. For example:
```scala
scalacOptions += "-Xplugin:<path-to-sxr>/sxr-0.2.7.jar"
```
# Continuations Plugin Example
Support for continuations in Scala 2.8 is implemented as a compiler plugin. You can use the compiler plugin support for this, as shown here.
```scala
autoCompilerPlugins := true
addCompilerPlugin("org.scala-lang.plugins" % "continuations" % "2.8.1")
scalacOptions += "-P:continuations:enable"
```
# Version-specific Compiler Plugin Example
Adding a version-specific compiler plugin can be done as follows:
```scala
autoCompilerPlugins := true
libraryDependencies <<= (scalaVersion, libraryDependencies) { (ver, deps) =>
deps :+ compilerPlugin("org.scala-lang.plugins" % "continuations" % ver)
}
scalacOptions += "-P:continuations:enable"
```

# Console Project
# Description
The `console-project` task starts the Scala interpreter with access to your project definition and to `sbt`. Specifically, the interpreter is started up with these commands already executed:
```scala
import sbt._
import Process._
import Keys._
import <your-project-definition>._
import currentState._
import extracted._
```
For example, running external processes with sbt's process library (included in the standard library as of Scala 2.9):
```scala
> "tar -zcvf project-src.tar.gz src" !
> "find project -name *.jar" !
> "cat build.sbt" #| "grep version" #> new File("sbt-version") !
> "grep -r null src" #|| "echo null-free" !
> uri("http://databinder.net/dispatch/About").toURL #> file("About.html") !
```
`console-project` can be useful for creating and modifying your build in the same way that the Scala interpreter is normally used to explore writing code. Note that this gives you raw access to your build. Think about what you pass to `IO.delete`, for example.
This task was especially useful in prior versions of sbt for showing the value of settings. It is less useful for this now that `show <setting>` prints the result of a setting or task and `set` can define an anonymous task at the command line.
# Accessing settings
To get a particular setting, use the form:
```scala
> val value = get(<key> in <scope>)
```
## Examples
```scala
> IO.delete( get(classesDirectory in Compile) )
```
Show current compile options:
```scala
> get(scalacOptions in Compile) foreach println
```
Show additionally configured repositories.
```scala
> get( resolvers ) foreach println
```
# Evaluating tasks
To evaluate a task, use the form:
```scala
> val value = evalTask(<key> in <scope>, currentState)
```
## Examples
Show all repositories, including defaults.
```scala
> evalTask( fullResolvers, currentState ) foreach println
```
Show the classpaths used for compilation and testing:
```scala
> evalTask( fullClasspath in Compile, currentState ).files foreach println
> evalTask( fullClasspath in Test, currentState ).files foreach println
```
Show the remaining commands to be executed in the build (more interesting if you invoke `console-project` like `; console-project ; clean ; compile`):
```scala
> remainingCommands
```
Show the number of currently registered commands:
```scala
> definedCommands.size
```

# Cross-building
# Introduction
Different versions of Scala can be binary incompatible, despite maintaining source compatibility. This page describes how to use `sbt` to build and publish your project against multiple versions of Scala and how to use libraries that have done the same.
# Publishing Conventions
The underlying mechanism used to indicate which version of Scala a library was compiled against is to append `_<scala-version>` to the library's name. For example, `dispatch` becomes `dispatch_2.8.1` for the variant compiled against Scala 2.8.1. This fairly simple approach allows interoperability with users of Maven, Ant and other build tools.
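The convention itself is just a string suffix. As a minimal sketch (illustrative only, not sbt's actual implementation):

```scala
// Append the Scala version to a module name, following the cross-building convention.
def crossName(name: String, scalaVersion: String): String =
  name + "_" + scalaVersion

println(crossName("dispatch", "2.8.1"))  // dispatch_2.8.1
```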
The rest of this page describes how `sbt` handles this for you as part of cross-building.
# Using Cross-Built Libraries
To use a library built against multiple versions of Scala, double the first `%` in an inline dependency to be `%%`. This tells `sbt` that it should append the current version of Scala being used to build the library to the dependency's name. For example:
```scala
libraryDependencies += "net.databinder" %% "dispatch" % "0.8.0"
```
A nearly equivalent, manual alternative for a fixed version of Scala is:
```scala
libraryDependencies += "net.databinder" % "dispatch_2.8.1" % "0.8.0"
```
# Cross-Building a Project
Define the versions of Scala to build against in the `cross-scala-versions` setting. Versions of Scala 2.8.0 or later are allowed. For example, in a `.sbt` build definition:
```scala
crossScalaVersions := Seq("2.8.0", "2.8.1", "2.9.1")
```
To build against all versions listed in `build.scala.versions`, prefix the action to run with `+`. For example:
```text
> + package
```
A typical way to use this feature is to do development on a single Scala version (no `+` prefix) and then cross-build (using `+`) occasionally and when releasing. The ultimate purpose of `+` is to cross-publish your project. That is, by doing:
```text
> + publish
```
you make your project available to users for different versions of Scala. See [[Publishing]] for more details on publishing your project.
In order to make this process as quick as possible, different output and managed dependency directories are used for different versions of Scala. For example, when building against Scala 2.8.1,
* `./target/` becomes `./target/scala_2.8.1/`
* `./lib_managed/` becomes `./lib_managed/scala_2.8.1/`
Packaged jars, wars, and other artifacts have `_<scala-version>` appended to the normal artifact ID as mentioned in the Publishing Conventions section above.
This means that the outputs of each build against each version of Scala are independent of the others. `sbt` will resolve your dependencies for each version separately. This way, for example, you get the version of Dispatch compiled against 2.8.1 for your 2.8.1 build, the version compiled against 2.8.0 for your 2.8.0 build, and so on. In fact, you can control your dependencies for different Scala versions. For example:
```scala
libraryDependencies <<= (scalaVersion, libraryDependencies) { (sv, deps) =>
// select the ScalaCheck version based on the Scala version
val versionMap = Map("2.8.0" -> "1.7", "2.8.1" -> "1.8")
val testVersion = versionMap.getOrElse(sv, sys.error("Unsupported Scala version " + sv))
// append the ScalaCheck dependency to the existing dependencies
deps :+ ("org.scala-tools.testing" % "scalacheck" % testVersion)
}
```
This works because your project definition is reloaded for each version of Scala you are building against. `scalaVersion` contains the current version of Scala being used to build the project.
As a final note, you can use `++ <version>` to temporarily switch the Scala version currently being used to build (see [[Running|Getting Started Running]] for details).

# Detailed Topic Pages
This part of the wiki has pages documenting particular sbt topics.
Before reading anything in here, you will need the information in
the [[Getting Started Guide|Getting Started Welcome]] as a
foundation.
Other resources include the [[Examples]] and
[[extending sbt|Extending]] areas on the wiki, and the
[[API Documentation|http://harrah.github.com/xsbt/latest/api/index.html]].
See the sidebar on the right for an index of topics.

[Fork API]: http://harrah.github.com/xsbt/latest/api/sbt/Fork$.html
[ForkJava]: http://harrah.github.com/xsbt/latest/api/sbt/Fork$.ForkJava.html
[ForkScala]: http://harrah.github.com/xsbt/latest/api/sbt/Fork$.ForkScala.html
[OutputStrategy]: http://harrah.github.com/xsbt/latest/api/sbt/OutputStrategy.html
# Forking
By default, the `run` task runs in the same JVM as sbt. Forking is required under [[certain circumstances|Running Project Code]], however, and you might also want to fork Java processes when implementing new tasks.
By default, a forked process uses the same Java and Scala versions being used for the build and the working directory and JVM options of the current process. This page discusses how to enable and configure forking. Note that sbt cannot fork tests, only the `run` tasks.
# Enable forking
The following examples demonstrate forking the `run` action and changing the working directory or arguments.
To enable forking all `run`-like tasks (`run`, `run-main`, `test:run`, and `test:run-main`), set `fork` to `true`.
```scala
fork in run := true
```
To only fork `test:run` and `test:run-main`:
```scala
fork in (Test,run) := true
```
Similarly, set `fork in Compile := true` to only fork the main `run` tasks. `run` and `run-main` share the same configuration and cannot be configured separately.
# Change working directory
To change the working directory when forked, set `baseDirectory in run` or `baseDirectory in (Test, run)`:
```scala
// sets the working directory for all `run`-like tasks
baseDirectory in run := file("/path/to/working/directory/")
// sets the working directory for `run` and `run-main` only
baseDirectory in (Compile,run) := file("/path/to/working/directory/")
// sets the working directory for `test:run` and `test:run-main` only
baseDirectory in (Test,run) := file("/path/to/working/directory/")
```
# Forked JVM options
To specify options to be provided to the forked JVM, set `javaOptions`:
```scala
javaOptions in run += "-Xmx8G"
```
or specify the configuration to affect only the main or test `run` tasks:
```scala
javaOptions in (Test,run) += "-Xmx8G"
```
# Java Home
Select the Java installation to use by setting the `java-home` directory:
```scala
javaHome := file("/path/to/jre/")
```
Note that if this is set globally, it also sets the Java installation used to compile Java sources. You can restrict it to running only by setting it in the `run` scope:
```scala
javaHome in run := file("/path/to/jre/")
```
As with the other settings, you can specify the configuration to affect only the main or test `run` tasks.
# Configuring output
By default, forked output is sent to the Logger, with standard output logged at the `Info` level and standard error at the `Error` level.
This can be configured with the `output-strategy` setting, which is of type [OutputStrategy].
```scala
// send output to the build's standard output and error
outputStrategy := Some(StdoutOutput)
// send output to the provided OutputStream `someStream`
outputStrategy := Some(CustomOutput(someStream: OutputStream))
// send output to the provided Logger `log` (unbuffered)
outputStrategy := Some(LoggedOutput(log: Logger))
// send output to the provided Logger `log` after the process terminates
outputStrategy := Some(BufferedOutput(log: Logger))
```
As with other settings, this can be configured individually for main or test `run` tasks.
# Configuring Input
By default, the standard input of the sbt process is not forwarded to the forked process. To enable this, configure the `connectInput` setting:
```scala
connectInput in run := true
```
# Direct Usage
To fork a new Java process, use the [Fork API]. The methods of interest are `Fork.java`, `Fork.javac`, `Fork.scala`, and `Fork.scalac`. See the [ForkJava] and [ForkScala] classes for the arguments and types.

# Global Settings
## Basic global configuration file
Settings that should be applied to all projects can go in `~/.sbt/global.sbt` (or any file in `~/.sbt/` with a `.sbt` extension). Plugins that are defined globally in `~/.sbt/plugins` are available to these settings. For example, to change the default `shellPrompt` for your projects:
`~/.sbt/global.sbt`
```scala
shellPrompt := { state =>
"sbt (%s)> ".format(Project.extract(state).currentProject.id)
}
```
## Global Settings using a Global Plugin
The `~/.sbt/plugins` directory is a global plugin project. This can be used to provide global commands, plugins, or other code.
To add a plugin globally, create `~/.sbt/plugins/build.sbt` containing the dependency definitions. For example:
```scala
addSbtPlugin("org.example" % "plugin" % "1.0")
```
To change the default `shellPrompt` for every project using this approach, create a local plugin `~/.sbt/plugins/ShellPrompt.scala`:
```scala
import sbt._
import Keys._
object ShellPrompt extends Plugin {
override def settings = Seq(
shellPrompt := { state =>
"sbt (%s)> ".format(Project.extract(state).currentProject.id) }
)
}
```
The `~/.sbt/plugins` directory is a full project that is included as an external dependency of every plugin project.
In practice, settings and code defined here effectively work as if they were defined in a project's `project/` directory.
This means that `~/.sbt/plugins` can be used to try out ideas for plugins such as shown in the shellPrompt example.

# Using the Configuration System
Central to sbt is the new configuration system, which is designed to enable extensive customization.
The goal of this page is to explain the general model behind the configuration system and how to work with it.
The Getting Started Guide (see [[.sbt files|Getting Started Basic Def]]) describes how to define settings; this page describes interacting with them and exploring them at the command line.
# Selecting commands, tasks, and settings
A fully-qualified reference to a setting or task looks like:
```text
{<build-uri>}<project-id>/config:key(for key2)
```
This "scoped key" reference is used by commands like `last` and `inspect` and when selecting a task to run.
Only `key` is required by the parser; the remaining optional pieces select the scope.
These optional pieces are individually referred to as scope axes.
In the above description, `{<build-uri>}` and `<project-id>/` specify the project axis, `config:` is the configuration axis, and `(for key2)` is the task-specific axis.
Unspecified components are taken to be the current project (project axis), the `Global` context (task axis), or auto-detected (configuration axis).
An asterisk (`*`) is used to explicitly refer to the `Global` context, as in `*/*:key`.
## Selecting the configuration
In the case of an unspecified configuration (that is, when the `config:` part is omitted), if the key is defined in `Global`, that is selected.
Otherwise, the first configuration defining the key is selected, where order is determined by the project definition's `configurations` member.
By default, this ordering is `compile, test, ...`
For example, the following are equivalent when run in a project `root` in the build in `/home/user/sample/`:
```text
> compile
> compile:compile
> root/compile
> root/compile:compile
> {file:/home/user/sample/}root/compile:compile
```
As another example, `run` by itself refers to `compile:run` because there is no global `run` task and the first configuration searched, `compile`, defines a `run`.
Therefore, to reference the `run` task for the `test` configuration, the configuration axis must be specified like `test:run`.
Some other examples that require the explicit `test:` axis:
```text
> test:console-quick
> test:console
> test:doc
> test:package
```
## Task-specific Settings
Some settings are defined per-task.
This is used when there are several related tasks, such as `package`, `package-src`, and `package-doc`, in the same configuration (such as `compile` or `test`).
For package tasks, their settings are the files to package, the options to use, and the output file to produce.
Each package task should be able to have different values for these settings.
This is done with the task axis, which selects the task to apply a setting to.
For example, the following prints the output jar for the different package tasks.
```text
> artifact-path(for package)
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1.jar
> artifact-path(for package-src)
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1-src.jar
> artifact-path(for package-doc)
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1-doc.jar
> test:artifact-path(for package)
[info] /home/user/sample/target/scala-2.8.1.final/root_2.8.1-0.1-test.jar
```
# Discovering Settings and Tasks
This section discusses the `inspect` command, which is useful for exploring relationships between settings.
It can be used to determine which setting should be modified in order to affect another setting, for example.
## Value and Provided By
The first piece of information provided by `inspect` is the value of a setting or the type of task if it is a task.
The following section of output is labeled "Provided by".
This shows the actual scope where the setting is defined.
For example,
```text
> inspect library-dependencies
[info] Value:
[info] List(org.scalaz:scalaz-core:6.0-SNAPSHOT, org.scala-tools.testing:scalacheck:1.8:test)
[info] Provided by:
[info] {file:/home/user/sample/}root/*:library-dependencies
...
```
This shows that `library-dependencies` has been defined on the current project (`{file:/home/user/sample/}root`) in the global configuration (`*:`).
For a task like `update`, the output looks like:
```text
> inspect update
[info] Task
[info] Provided by:
[info] {file:/home/user/sample/}root/*:update
...
```
## Related Settings
The "Related" section of `inspect` output lists all of the definitions of a key.
For example,
```text
> inspect compile
...
[info] Related:
[info] {file:/home/user/sample/}root/test:compile
```
This shows that in addition to the requested `compile:compile` task, there is also a `test:compile` task.
## Dependencies
Forward dependencies show the other settings (or tasks) used to define a setting (or task).
Reverse dependencies go the other direction, showing what uses a given setting.
`inspect` provides this information based on either the requested dependencies or the actual dependencies.
Requested dependencies are those that a setting directly specifies.
Actual dependencies are what those requested dependencies resolve to.
This distinction is explained in more detail in the following sections.
### Requested Dependencies
As an example, we'll look at `console`:
```text
> inspect console
...
[info] Dependencies:
[info] {file:/home/user/sample/}root/full-classpath
[info] {file:/home/user/sample/}root/scalac-options(for console)
[info] {file:/home/user/sample/}root/streams(for console)
[info] {file:/home/user/sample/}root/initial-commands(for console)
[info] {file:/home/user/sample/}root/compilers
...
```
This shows the inputs to the `console` task.
We can see that it gets its classpath and options from `full-classpath` and `scalac-options(for console)`.
The information provided by the `inspect` command can thus assist in finding the right setting to change.
The convention for keys, like `console` and `full-classpath`, is that the Scala identifier is camel case, while the String representation is lowercase and separated by dashes.
The Scala identifier for a configuration is uppercase to distinguish it from tasks like `compile` and `test`.
For example, we can infer from the previous example how to add code to be run when the Scala interpreter starts up:
```console
> set initialCommands in Compile in console := "import mypackage._"
> console
...
import mypackage._
...
```
`inspect` showed that `console` used the setting `initial-commands(for console)`.
Translating the `initial-commands` string to the Scala identifier gives us `initialCommands`.
No configuration is specified, so we know it is in the default `compile` configuration.
`(for console)` indicates that the setting is specific to `console`.
Because of this, we can set the initial commands on the `console` task without affecting the `console-quick` task, for example.
### Actual Dependencies
`inspect actual <scoped-key>` shows the actual dependency used.
This is useful because delegation means that the dependency can come from a scope other than the requested one.
Using `inspect actual`, we see exactly which scope is providing a value for a setting.
Combining `inspect actual` with plain `inspect`, we can see the range of scopes that will affect a setting.
Returning to the example in Requested Dependencies,
```text
> inspect actual console
...
[info] Dependencies:
[info] {file:/home/user/sample/}default/*:compilers
[info] {file:/home/user/sample/}default/full-classpath
[info] */*:scalac-options
[info] */*:initial-commands
[info] {file:/home/user/sample/}default/streams(for console)
...
```
For `initial-commands`, we see that it comes from the global scope (`*/*:`).
Combining this with the relevant output from `inspect console`:
```text
{file:/home/user/sample/}root/initial-commands(for console)
```
we know that we can set `initial-commands` as generally as the global scope, as specific as the current project's `console` task scope, or anything in between.
This means that we can, for example, set `initial-commands` for the whole project and it will affect `console`:
```console
> set initialCommands := "import mypackage._"
...
```
The reason we might want to set it here is that other console tasks will now use this value as well.
We can see which ones use our new setting by looking at the reverse dependencies output of `inspect actual`:
```text
> inspect actual initial-commands
...
[info] Reverse dependencies:
[info] {file:/home/user/sample/}root/*:console-project
[info] {file:/home/user/sample/}root/test:console-quick
[info] {file:/home/user/sample/}root/test:console
[info] {file:/home/user/sample/}root/console
[info] {file:/home/user/sample/}root/console-quick
...
```
We now know that by setting `initial-commands` on the whole project, we affect all console tasks in all configurations in that project.
If we didn't want the initial commands to apply for `console-project`, which doesn't have our project's classpath available, we could use the more specific task axis:
```console
> set initialCommands in console := "import mypackage._"
> set initialCommands in consoleQuick := "import mypackage._"
```
or configuration axis:
```console
> set initialCommands in Compile := "import mypackage._"
> set initialCommands in Test := "import mypackage._"
```
The next part describes the Delegates section, which shows the chain of delegation for scopes.
## Delegates
A setting has a key and a scope.
A request for a key in a scope A may be delegated to another scope if A doesn't define a value for the key.
The delegation chain is well-defined and is displayed in the Delegates section of the `inspect` command.
The Delegates section shows the order in which scopes are searched when a value is not defined for the requested key.
As an example, consider the initial commands for `console` again:
```text
> inspect initial-commands(for console)
...
[info] Delegates:
[info] {file:/home/user/sample/}root/*:initial-commands(for console)
[info] {file:/home/user/sample/}root/*:initial-commands
[info] {file:/home/user/sample/}/*:initial-commands(for console)
[info] {file:/home/user/sample/}/*:initial-commands
[info] */*:initial-commands(for console)
[info] */*:initial-commands
...
```
This means that if there is no value specifically for `{file:/home/user/sample/}root/*:initial-commands(for console)`, the scopes listed under Delegates will be searched in order until a value is defined.
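For instance, given the chain above, defining the key only at the project level still satisfies a request scoped to the `console` task via delegation:

```text
> set initialCommands := "import mypackage._"
> inspect initial-commands(for console)
```

`inspect` would then report the project-level scope as the providing definition.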

# Java Sources
sbt supports compiling Java sources, with the limitation that dependency tracking relies on the dependencies present in compiled class files.
# Usage
* `compile` will compile the sources under `src/main/java` by default.
* `test-compile` will compile the sources under `src/test/java` by default.
Pass options to the Java compiler by setting `javac-options`:
```scala
javacOptions += "-g:none"
```
As with options for the Scala compiler, the arguments are not parsed by sbt. Multi-element options, such as `-source 1.5`, are specified like:
```scala
javacOptions ++= Seq("-source", "1.5")
```
You can specify the order in which Scala and Java sources are built with the `compile-order` setting. Possible values are from the `CompileOrder` enumeration: `Mixed`, `JavaThenScala`, and `ScalaThenJava`. If you have circular dependencies between Scala and Java sources, you need the default, `Mixed`, which passes both Java and Scala sources to `scalac` and then compiles the Java sources with `javac`. If you do not have circular dependencies, you can use one of the other two options to speed up your build by not passing the Java sources to `scalac`. For example, if your Scala sources depend on your Java sources, but your Java sources do not depend on your Scala sources, you can do:
```scala
compileOrder := CompileOrder.JavaThenScala
```
To specify different orders for main and test sources, scope the setting by configuration:
```scala
// Java then Scala for main sources
compileOrder in Compile := CompileOrder.JavaThenScala
// allow circular dependencies for test sources
compileOrder in Test := CompileOrder.Mixed
```
Note that in an incremental compilation setting, it is not practical to ensure complete isolation between Java sources and Scala sources because they share the same output directory. So, previously compiled classes not involved in the current recompilation may be picked up. A clean compile will always provide full checking, however.
By default, sbt includes `src/main/scala` and `src/main/java` in its list of unmanaged source directories. For Java-only projects, the unnecessary Scala directories can be ignored by modifying `unmanagedSourceDirectories`:
```scala
// Include only src/main/java in the compile configuration
unmanagedSourceDirectories in Compile <<= Seq(javaSource in Compile).join
// Include only src/test/java in the test configuration
unmanagedSourceDirectories in Test <<= Seq(javaSource in Test).join
```

@ -1,223 +0,0 @@
# Launcher Specification
The sbt launcher component is a self-contained jar that boots a Scala application without Scala or the application already existing on the system. The only prerequisites are the launcher jar itself, an optional configuration file, and a Java runtime of version 1.6 or greater.
# Overview
A user downloads the launcher jar and creates a script to run it. In this documentation, the script will be assumed to be called `launch`. For unix, the script would look like:
```
java -jar sbt-launcher.jar "$@"
```
The user then downloads the configuration file for the application (call it `my.app.configuration`) and creates a script to launch it (call it `myapp`):
```
launch @my.app.configuration "$@"
```
The user can then launch the application using
```
myapp arg1 arg2 ...
```
Like the launcher used to distribute `sbt`, the downloaded launcher jar will retrieve Scala and the application according to the provided configuration file. The versions may be fixed or read from a different configuration file (the location of which is also configurable). The location to which the Scala and application jars are downloaded is configurable as well. The repositories searched are configurable. Optional initialization of a properties file is configurable.
Once the launcher has downloaded the necessary jars, it loads the application and calls its entry point. The application is passed information about how it was called: command line arguments, current working directory, Scala version, and application ID (organization, name, version). In addition, the application can ask the launcher to perform operations such as obtaining the Scala jars and a `ClassLoader` for any version of Scala retrievable from the repositories specified in the configuration file. It can request that other applications be downloaded and run. When the application completes, it can tell the launcher to exit with a specific exit code or to reload the application with a different version of Scala, a different version of the application, or different arguments.
There are some other options for setup, such as putting the configuration file inside the launcher jar and distributing that as a single download. The rest of this documentation describes the details of configuring, writing, distributing, and running the application.
## Configuration
The launcher may be configured in one of the following ways in increasing order of precedence:
* Replace the `/sbt/sbt.boot.properties` file in the jar
* Put a configuration file named `sbt.boot.properties` on the classpath. Put it in the classpath root without the `/sbt` prefix.
* Specify the location of an alternate configuration on the command line. This can be done by either specifying the location as the system property `sbt.boot.properties` or as the first argument to the launcher prefixed by `'@'`. The system property has lower precedence. Resolution of a relative path is first attempted against the current working directory, then against the user's home directory, and then against the directory containing the launcher jar. An error is generated if none of these attempts succeed.
The configuration file is line-based, read as UTF-8 encoded, and defined by the following grammar. `'nl'` is a newline or end of file and `'text'` is plain text without newlines or the surrounding delimiters (such as parentheses or square brackets):
```
configuration ::= scala app repositories boot log app-properties
scala ::= '[' 'scala' ']' nl version nl classifiers nl
app ::= '[' 'app' ']' nl org nl name nl version nl components nl class nl cross-versioned nl resources nl classifiers nl
repositories ::= '[' 'repositories' ']' nl (repository nl)*
boot ::= '[' 'boot' ']' nl directory nl bootProperties nl search nl promptCreate nl promptFill nl quickOption nl
log ::= '[' 'log' ']' nl logLevel nl
app-properties ::= '[' 'app-properties' ']' nl property*
ivy ::= '[' 'ivy' ']' nl homeDirectory nl checksums
directory ::= 'directory' ':' path
bootProperties ::= 'properties' ':' path
search ::= 'search' ':' ('none'|'nearest'|'root-first'|'only') (',' path)*
logLevel ::= 'log-level' ':' ('debug' | 'info' | 'warn' | 'error')
promptCreate ::= 'prompt-create' ':' label
promptFill ::= 'prompt-fill' ':' boolean
quickOption ::= 'quick-option' ':' boolean
version ::= 'version' ':' versionSpecification
versionSpecification ::= readProperty | fixedVersion
readProperty ::= 'read' '(' propertyName ')' '[' default ']'
fixedVersion ::= text
classifiers ::= 'classifiers' ':' text (',' text)*
homeDirectory ::= 'ivy-home' ':' path
checksums ::= 'checksums' ':' checksum (',' checksum)*
org ::= 'org' ':' text
name ::= 'name' ':' text
class ::= 'class' ':' text
components ::= 'components' ':' component (',' component)*
cross-versioned ::= 'cross-versioned' ':' boolean
resources ::= 'resources' ':' path (',' path)*
repository ::= ( predefinedRepository | customRepository ) nl
predefinedRepository ::= 'local' | 'maven-local' | 'maven-central'
customRepository ::= label ':' url [ [',' ivy-pattern] ',' artifact-pattern]
property ::= label ':' propertyDefinition (',' propertyDefinition)* nl
propertyDefinition ::= mode '=' (set | prompt)
mode ::= 'quick' | 'new' | 'fill'
set ::= 'set' '(' value ')'
prompt ::= 'prompt' '(' label ')' ('[' default ']')?
boolean ::= 'true' | 'false'
path, propertyName, label, default, checksum ::= text
```
The default configuration file for sbt looks like:
```
[scala]
version: 2.9.1
[app]
org: ${sbt.organization-org.scala-sbt}
name: sbt
version: ${sbt.version-read(sbt.version)[0.11.3]}
class: ${sbt.main.class-sbt.xMain}
components: xsbti,extra
cross-versioned: ${sbt.cross.versioned-true}
[repositories]
local
typesafe-ivy-releases: http://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
maven-central
sonatype-snapshots: https://oss.sonatype.org/content/repositories/snapshots
[boot]
directory: ${sbt.boot.directory-${sbt.global.base-${user.home}/.sbt}/boot/}
[ivy]
ivy-home: ${sbt.ivy.home-${user.home}/.ivy2/}
checksums: ${sbt.checksums-sha1,md5}
```
The `scala.version` property specifies the version of Scala used to run the application. If specified, the `scala.classifiers` property defines classifiers, such as 'sources', of extra Scala artifacts to retrieve. The `app.org`, `app.name`, and `app.version` properties specify the organization, module ID, and version of the application, respectively. These are used to resolve and retrieve the application from the repositories listed in `[repositories]`. If `app.cross-versioned` is true, the resolved module ID is `{app.name+'_'+scala.version}`. The paths given in `app.resources` are added to the application's classpath. If the path is relative, it is resolved against the application's working directory. If specified, the `app.classifiers` property defines classifiers, like 'sources', of extra artifacts to retrieve for the application.
Jars are retrieved to the directory given by `boot.directory`. You can make this an absolute path to be shared by all sbt instances on the machine. If multiple versions access it simultaneously, you might see messages like:
```
Waiting for lock on <lock-file> to be available...
```
The `boot.properties` property specifies the location of the properties file to use if `app.version` or `scala.version` is specified as `read`. The `prompt-create`, `prompt-fill`, and `quick-option` properties together with the property definitions in `[app.properties]` can be used to initialize the `boot.properties` file.
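As a hedged illustration of the grammar above (the property name, label, and values here are hypothetical), an `[app-properties]` section might look like:
```
[app-properties]
user: quick=set(guest), new=prompt(Username)[admin]
```
Under the grammar, this sets the `user` property to `guest` in `quick` mode and, when the properties file is first created (`new` mode), prompts with the label `Username` and a default of `admin`.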
The `app.class` property specifies the name of the entry point to the application. An application entry point must be a public class with a no-argument constructor that implements `xsbti.AppMain`. The `AppMain` interface specifies the entry method signature `run`. The `run` method is passed an instance of `AppConfiguration`, which provides access to the startup environment. `AppConfiguration` also provides an interface to retrieve other versions of Scala or other applications. Finally, the return type of the `run` method is `xsbti.MainResult`, which has two subtypes: `xsbti.Reboot` and `xsbti.Exit`. To exit with a specific code, return an instance of `xsbti.Exit` with the requested code. To restart the application, return an instance of `Reboot`. You can change some aspects of the configuration with a reboot, such as the version of Scala, the application ID, and the arguments.
The `ivy.cache-directory` property provides an alternative location for the Ivy cache used by the launcher. This does not set the Ivy cache for the application.
## Execution
On startup, the launcher searches for its configuration in the order described in the Configuration section and then parses it. If either the Scala version or the application version is specified as `read`, the launcher determines it in the following manner. The file given by the `boot.properties` property is read as a Java properties file to obtain the version. The expected property names are `${app.name}.version` for the application version (where `${app.name}` is replaced with the value of the `app.name` property from the boot configuration file) and `scala.version` for the Scala version. If the properties file does not exist, the default value provided is used. If no default was provided, an error is generated.
Once the final configuration is resolved, the launcher proceeds to obtain the necessary jars to launch the application. The `boot.directory` property is used as a base directory to retrieve jars to. No locking is done on the directory, so it should not be shared system-wide. The launcher retrieves the requested version of Scala to
```
${boot.directory}/${scala.version}/lib/
```
If this directory already exists, the launcher takes a shortcut for startup performance and assumes that the jars have already been downloaded. If the directory does not exist, the launcher uses Apache Ivy to resolve and retrieve the jars. A similar process occurs for the application itself. It and its dependencies are retrieved to
```
${boot.directory}/${scala.version}/${app.org}/${app.name}/.
```
Once all required code is downloaded, the class loaders are set up. The launcher creates a class loader for the requested version of Scala. It then creates a child class loader containing the jars for the requested 'app.components' and with the paths specified in `app.resources`. An application that does not use components will have all of its jars in this class loader.
The main class for the application is then instantiated. It must be a public class with a public no-argument constructor and must conform to `xsbti.AppMain`. The `run` method is invoked and execution passes to the application. The argument to the `run` method provides configuration information and a callback to obtain a class loader for any version of Scala that can be obtained from a repository in `[repositories]`. The return value of the `run` method determines what is done after the application executes. It can specify that the launcher should restart the application or that it should exit with the provided exit code.
## Creating a Launched Application
This section shows how to make an application that is launched by this launcher. First, declare a dependency on the launcher-interface. Do not declare a dependency on the launcher itself. The launcher interface consists strictly of Java interfaces in order to avoid binary incompatibility between the version of Scala used to compile the launcher and the version used to compile your application. The launcher interface class will be provided by the launcher, so it is only a compile-time dependency. If you are building with sbt, your dependency definition would be:
```scala
libraryDependencies += "org.scala-sbt" %% "launcher-interface" % "0.11.3" % "provided"
resolvers <+= sbtResolver
```
Make the entry point of your application implement `xsbti.AppMain`. An example that uses some of the provided information:
```scala
package xsbt.test
class Main extends xsbti.AppMain
{
def run(configuration: xsbti.AppConfiguration) =
{
// get the version of Scala used to launch the application
val scalaVersion = configuration.provider.scalaProvider.version
// Print a message and the arguments to the application
println("Hello world! Running Scala " + scalaVersion)
configuration.arguments.foreach(println)
// demonstrate the ability to reboot the application into different versions of Scala
// and how to return the code to exit with
scalaVersion match
{
case "2.8.1" =>
new xsbti.Reboot {
def arguments = configuration.arguments
def baseDirectory = configuration.baseDirectory
def scalaVersion = "2.9.1"
def app = configuration.provider.id
}
case "2.9.1" => new Exit(1)
case _ => new Exit(0)
}
}
class Exit(val code: Int) extends xsbti.Exit
}
```
Next, define a configuration file for the launcher. For the above class, it might look like:
```
[scala]
version: 2.9.1
[app]
org: org.scala-sbt
name: xsbt-test
version: 0.11.3
class: xsbt.test.Main
cross-versioned: true
[repositories]
local
maven-central
[boot]
directory: boot
```
Then, `publish-local` or `+publish-local` the application to make it available.
## Running an Application
As mentioned above, there are a few options for actually running the application. The first involves providing a modified jar for download; the other two require providing a configuration file for download.
* Replace the `/sbt/sbt.boot.properties` file in the launcher jar and distribute the modified jar. The user needs a script to run `java -jar your-launcher.jar arg1 arg2 ...`.
* The user downloads the launcher jar and you provide the configuration file. The user runs `java -Dsbt.boot.properties=your.boot.properties -jar launcher.jar`.
* The user already has a script to run the launcher (call it `launch`). The user runs
```
launch @your.boot.properties your-arg-1 your-arg-2
```

@ -1,371 +0,0 @@
[Apache Ivy]: http://ant.apache.org/ivy/
[Ivy revisions]: http://ant.apache.org/ivy/history/2.2.0/ivyfile/dependency.html#revision
[Extra attributes]: http://ant.apache.org/ivy/history/2.2.0/concept.html#extra
[through Ivy]: http://ant.apache.org/ivy/history/latest-milestone/concept.html#checksum
[ModuleID]: http://harrah.github.com/xsbt/latest/api/sbt/ModuleID.html
# Library Management
There's now a
[[getting started page|Getting Started Library Dependencies]]
about library management, which you may want to read first.
_Wiki Maintenance Note:_ it would be nice to remove the overlap
between this page and the getting started page, leaving this page
with the more advanced topics such as checksums and external Ivy
files.
# Introduction
There are two ways for you to manage libraries with sbt: manually
or automatically. These two ways can be mixed as well. This page
discusses the two approaches. All configurations shown here are
settings that go either directly in a
[[.sbt file|Getting Started Basic Def]] or are appended to the
`settings` of a Project in a [[.scala file|Getting Started Full Def]].
# Manual Dependency Management
Manually managing dependencies involves copying any jars that you want to use to the `lib` directory. sbt will put these jars on the classpath during compilation, testing, running, and when using the interpreter. You are responsible for adding, removing, updating, and otherwise managing the jars in this directory. No modifications to your project definition are required to use this method unless you would like to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the `unmanaged-base` setting in your project definition. For example, to use `custom_lib/`:
```scala
unmanagedBase <<= baseDirectory { base => base / "custom_lib" }
```
If you want more control and flexibility, override the `unmanaged-jars` task, which ultimately provides the manual dependencies to sbt. The default implementation is roughly:
```scala
unmanagedJars in Compile <<= baseDirectory map { base => (base ** "*.jar").classpath }
```
If you want to add jars from multiple directories in addition to the default directory, you can do:
```scala
unmanagedJars in Compile <++= baseDirectory map { base =>
val baseDirectories = (base / "libA") +++ (base / "b" / "lib") +++ (base / "libC")
val customJars = (baseDirectories ** "*.jar") +++ (base / "d" / "my.jar")
customJars.classpath
}
```
See [[Paths]] for more information on building up paths.
# Automatic Dependency Management
This method of dependency management involves specifying the direct dependencies of your project and letting sbt handle retrieving and updating your dependencies. sbt supports three ways of specifying these dependencies:
* Declarations in your project definition
* Maven POM files (dependency definitions only: no repositories)
* Ivy configuration and settings files
sbt uses [Apache Ivy] to implement dependency management in all three cases. The default is to use inline declarations, but external configuration can be explicitly selected. The following sections describe how to use each method of automatic dependency management.
## Inline Declarations
Inline declarations are a basic way of specifying the dependencies to be automatically retrieved. They are intended as a lightweight alternative to a full configuration using Ivy.
### Dependencies
Declaring a dependency looks like:
```scala
libraryDependencies += groupID % artifactID % revision
```
or
```scala
libraryDependencies += groupID % artifactID % revision % configuration
```
See [[Configurations]] for details on configuration mappings. Also, several dependencies can be declared together:
```scala
libraryDependencies ++= Seq(
groupID %% artifactID % revision,
groupID %% otherID % otherRevision
)
```
If you are using a dependency that was built with sbt, double the first `%` to be `%%`:
```scala
libraryDependencies += groupID %% artifactID % revision
```
This will use the right jar for the dependency built with the version of Scala that you are currently using. If you get an error while resolving this kind of dependency, that dependency probably wasn't published for the version of Scala you are using. See [[Cross Build]] for details.
Ivy can select the latest revision of a module according to constraints you specify. Instead of a fixed revision like `"1.6.1"`, you specify `"latest.integration"`, `"2.9.+"`, or `"[1.0,)"`. See the [Ivy revisions] documentation for details.
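For example, each of the following declarations (with hypothetical coordinates) uses a dynamic revision:
```scala
// Highest available revision, including integration (snapshot) revisions
libraryDependencies += "org.example" % "util" % "latest.integration"

// Highest available 2.9.x revision
libraryDependencies += "org.example" % "util" % "2.9.+"

// Any revision of at least 1.0 (Ivy version-range syntax)
libraryDependencies += "org.example" % "util" % "[1.0,)"
```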
### Resolvers
sbt uses the standard Maven2 repository by default.
Declare additional repositories with the form:
```scala
resolvers += name at location
```
For example:
```scala
libraryDependencies ++= Seq(
"org.apache.derby" % "derby" % "10.4.1.3",
"org.specs" % "specs" % "1.6.1"
)
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
```
sbt can search your local Maven repository if you add it as a repository:
```scala
resolvers += "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
```
See [[Resolvers]] for details on defining other types of repositories.
### Override default resolvers
`resolvers` configures additional, inline user resolvers. By default, `sbt` combines these resolvers with default repositories (Maven Central and the local Ivy repository) to form `external-resolvers`. To have more control over repositories, set `external-resolvers` directly. To only specify repositories in addition to the usual defaults, configure `resolvers`.
For example, to use the Sonatype OSS Snapshots repository in addition to the default repositories,
```scala
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
```
To use the local repository, but not the Maven Central repository:
```scala
externalResolvers <<= resolvers map { rs =>
Resolver.withDefaultResolvers(rs, mavenCentral = false)
}
```
For complete control, configure `full-resolvers`. This should rarely be modified, however, because `full-resolvers` combines `project-resolver` with `external-resolvers`. `project-resolver` is used for inter-project dependency management and should (almost) always be included.
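As a sketch of what direct configuration could look like (rarely needed, and assuming the `projectResolver` and `externalResolvers` keys described above), `full-resolvers` can be rebuilt while keeping the inter-project resolver first:
```scala
// Rebuild full-resolvers by hand; projectResolver must stay included
// so inter-project dependencies continue to resolve.
fullResolvers <<= (projectResolver, externalResolvers) map { (pr, ext) =>
  pr +: ext
}
```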
### Explicit URL
If your project requires a dependency that is not present in a repository, a direct URL to its jar can be specified as follows:
```scala
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
```
The URL is only used as a fallback if the dependency cannot be found through the configured repositories. Also, the explicit URL is not included in published metadata (that is, the pom or ivy.xml).
### Disable Transitivity
By default, these declarations fetch all project dependencies, transitively. In some instances, you may find that the dependencies listed for a project aren't necessary for it to build. Projects using the Felix OSGi framework, for instance, only explicitly require its main jar to compile and run. Avoid fetching artifact dependencies with either `intransitive()` or `notTransitive()`, as in this example:
```scala
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()
```
### Classifiers
You can specify the classifier for a dependency using the `classifier` method. For example, to get the jdk15 version of TestNG:
```scala
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
```
For multiple classifiers, use multiple `classifier` calls:
```scala
libraryDependencies +=
"org.lwjgl.lwjgl" % "lwjgl-platform" % lwjglVersion classifier "natives-windows" classifier "natives-linux" classifier "natives-osx"
```
To obtain particular classifiers for all dependencies transitively, run the `update-classifiers` task. By default, this resolves all artifacts with the `sources` or `javadoc` classifier. Select the classifiers to obtain by configuring the `transitive-classifiers` setting. For example, to only retrieve sources:
```scala
transitiveClassifiers := Seq("sources")
```
### Exclude Transitive Dependencies
To exclude certain transitive dependencies of a dependency, use the `excludeAll` or `exclude` methods. The `exclude` method should be used when a pom will be published for the project. It requires the organization and module name to exclude. For example,
```scala
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")
```
The `excludeAll` method is more flexible, but because it cannot be represented in a pom.xml, it should only be used when a pom doesn't need to be generated. For example,
```scala
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" excludeAll(
ExclusionRule(organization = "com.sun.jdmk"),
ExclusionRule(organization = "com.sun.jmx"),
ExclusionRule(organization = "javax.jms")
)
```
See [ModuleID] for API details.
### Download Sources
Downloading source and API documentation jars is usually handled by an IDE plugin. These plugins use the `update-classifiers` and `update-sbt-classifiers` tasks, which produce an [[Update Report]] referencing these jars.
To have sbt download the dependency's sources without using an IDE plugin, add `withSources()` to the dependency definition. For API jars, add `withJavadoc()`. For example:
```scala
libraryDependencies +=
"org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()
```
### Extra Attributes
[Extra attributes] can be specified by passing key/value pairs to the `extra` method.
To select dependencies by extra attributes:
```scala
libraryDependencies += "org" % "name" % "rev" extra("color" -> "blue")
```
To define extra attributes on the current project:
```scala
projectID <<= projectID { id =>
id extra("color" -> "blue", "component" -> "compiler-interface")
}
```
### Inline Ivy XML
sbt additionally supports directly specifying the configurations or dependencies sections of an Ivy configuration file inline. You can mix this with inline Scala dependency and repository declarations.
For example:
```scala
ivyXML :=
<dependencies>
<dependency org="javax.mail" name="mail" rev="1.4.2">
<exclude module="activation"/>
</dependency>
</dependencies>
```
### Ivy Home Directory
By default, sbt uses the standard Ivy home directory location `${user.home}/.ivy2/`.
This can be configured machine-wide, for use by both the sbt launcher and by projects, by setting the system property `sbt.ivy.home` in the sbt startup script (described in [[Setup|Getting Started Setup]]).
For example:
```text
java -Dsbt.ivy.home=/tmp/.ivy2/ ...
```
### Checksums
sbt ([through Ivy]) verifies the checksums of downloaded files by default. It also publishes checksums of artifacts by default. The checksums to use are specified by the _checksums_ setting.
To disable checksum checking during update:
```scala
checksums in update := Nil
```
To disable checksum creation during artifact publishing:
```scala
checksums in publishLocal := Nil
checksums in publish := Nil
```
The default value is:
```scala
checksums := Seq("sha1", "md5")
```
### Publishing
Finally, see [[Publishing]] for how to publish your project.
## Maven/Ivy
For this method, create the configuration files as you would for Maven (`pom.xml`) or Ivy (`ivy.xml` and optionally `ivysettings.xml`).
External configuration is selected by using one of the following expressions.
### Ivy settings (resolver configuration)
```scala
externalIvySettings()
```
or
```scala
externalIvySettings(baseDirectory(_ / "custom-settings-name.xml"))
```
or **(sbt 0.12.0 or later only)**
```scala
externalIvySettings(url("your_url_here"))
```
### Ivy file (dependency configuration)
```scala
externalIvyFile()
```
or
```scala
externalIvyFile(baseDirectory(_ / "custom-name.xml"))
```
Because Ivy files specify their own configurations, sbt needs to know which configurations to use for the compile, runtime, and test classpaths. For example, to specify that the Compile classpath should use the 'default' configuration:
```scala
classpathConfiguration in Compile := config("default")
```
### Maven pom (dependencies only)
```scala
externalPom()
```
or
```scala
externalPom(baseDirectory(_ / "custom-name.xml"))
```
### Full Ivy Example
For example, a `build.sbt` using external Ivy files might look like:
```scala
externalIvySettings()
externalIvyFile( baseDirectory { base => base / "ivyA.xml"} )
classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime
```
### Known limitations
Maven support is dependent on Ivy's support for Maven POMs.
Known issues with this support:
* Specifying `relativePath` in the `parent` section of a POM will produce an error.
* Ivy ignores repositories specified in the POM. A workaround is to specify repositories inline or in an Ivy `ivysettings.xml` file.

@ -1,18 +0,0 @@
# Local Scala
To use a locally built Scala version, define the `scala-home` setting, which is of type `Option[File]`.
This Scala version will only be used for the build and not for sbt, which will still use the version it was compiled against.
Example:
```scala
scalaHome := Some(file("/path/to/scala"))
```
Using a local Scala version will override the `scala-version` setting and will not work with [[cross building|Cross Build]].
sbt reuses the class loader for the local Scala version. If you recompile your local Scala version and you are using sbt interactively, run
```text
> reload
```
to use the new compilation results.

@ -1,95 +0,0 @@
[Path]: http://harrah.github.com/xsbt/latest/api/sbt/Path$.html
[PathFinder]: http://harrah.github.com/xsbt/latest/api/sbt/PathFinder.html
# Mapping Files
Tasks like `package`, `packageSrc`, and `packageDoc` accept mappings of type `Seq[(File, String)]` from an input file to the path to use in the resulting artifact (jar). Similarly, tasks that copy files accept mappings of type `Seq[(File, File)]` from an input file to the destination file. There are some methods on [PathFinder] and [Path] that can be useful for constructing the `Seq[(File, String)]` or `Seq[(File, File)]` sequences.
A common way of making this sequence is to start with a `PathFinder` or `Seq[File]` (which is implicitly convertible to `PathFinder`) and then call the `x` method. See the [PathFinder] API for details, but essentially this method accepts a function `File => Option[String]` or `File => Option[File]` that is used to generate mappings.
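As an illustrative sketch, a custom function can be passed to `x` directly; here only `.scala` files produce a mapping, and other files are dropped:
```scala
val files: Seq[File] = file("/a/B.scala") :: file("/a/C.java") :: Nil
// Map each .scala file to its simple name; returning None excludes the file
val mappings: Seq[(File, String)] = files x { f =>
  if (f.getName.endsWith(".scala")) Some(f.getName) else None
}
```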
## Relative to a directory
The `Path.relativeTo` method is used to map a `File` to its path `String` relative to a base directory or directories. The `relativeTo` method accepts a base directory or sequence of base directories to relativize an input file against. The first directory that is an ancestor of the file is used in the case of a sequence of base directories.
For example:
```scala
import Path.relativeTo
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files x relativeTo(baseDirectories)
val expected = (file("/a/b/C.scala") -> "b/C.scala") :: Nil
assert( mappings == expected )
```
## Rebase
The `Path.rebase` method relativizes an input file against one or more base directories (the first argument) and then prepends a base String or File (the second argument) to the result. As with `relativeTo`, the first base directory that is an ancestor of the input file is used in the case of multiple base directories.
For example, the following demonstrates building a `Seq[(File, String)]` using `rebase`:
```scala
import Path.rebase
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files x rebase(baseDirectories, "pre/")
val expected = (file("/a/b/C.scala") -> "pre/b/C.scala" ) :: Nil
assert( mappings == expected )
```
Or, to build a `Seq[(File, File)]`:
```scala
import Path.rebase
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val newBase: File = file("/new/base")
val mappings: Seq[(File,File)] = files x rebase(baseDirectories, newBase)
val expected = (file("/a/b/C.scala") -> file("/new/base/b/C.scala") ) :: Nil
assert( mappings == expected )
```
## Flatten
The `Path.flat` method provides a function that maps a file to the last component of the path (its name). For a File to File mapping, the input file is mapped to a file with the same name in a given target directory. For example:
```scala
import Path.flat
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val mappings: Seq[(File,String)] = files x flat
val expected = (file("/a/b/C.scala") -> "C.scala" ) :: Nil
assert( mappings == expected )
```
To build a `Seq[(File, File)]` using `flat`:
```scala
import Path.flat
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val newBase: File = file("/new/base")
val mappings: Seq[(File,File)] = files x flat(newBase)
val expected = (file("/a/b/C.scala") -> file("/new/base/C.scala") ) :: Nil
assert( mappings == expected )
```
## Alternatives
To try to apply several alternative mappings for a file, use `|`, which is implicitly added to a function of type `A => Option[B]`. For example, to try to relativize a file against some base directories but fall back to flattening:
```scala
import Path.{ flat, relativeTo }
val files: Seq[File] = file("/a/b/C.scala") :: file("/zzz/D.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files x ( relativeTo(baseDirectories) | flat )
val expected =
  (file("/a/b/C.scala") -> "b/C.scala") ::
  (file("/zzz/D.scala") -> "D.scala") ::
Nil
assert( mappings == expected )
```

The assumption here is that you are familiar with sbt 0.7 but new to 0.11.
sbt 0.11's many new capabilities can be a bit overwhelming, but this page should help you migrate to 0.11 with a minimum of fuss.
## Why move to 0.11?
1. Faster builds (because it is smarter at re-compiling only what it must)
1. Easier configuration. For simple projects a single `build.sbt` file in your root directory is easier to create than `project/build/MyProject.scala` was.
1. No more `lib_managed` directory, reducing disk usage and avoiding backup and version control hassles.
1. `update` is now much faster and it's invoked automatically by sbt.
1. Terser output. (Yet you can ask for more details if something goes wrong.)
# Step 1: Read the Getting Started Guide for sbt 0.11
Reading the [[Getting Started Guide|Getting Started Welcome]] will
probably save you a lot of confusion.
# Step 2: Install sbt 0.11.3
Download sbt 0.11 as described on [[the setup page|Getting Started Setup]].
You can run 0.11 the same way that you run 0.7.x, either simply:
```text
java -jar sbt-launch.jar
```
Or (as most users do) with a shell script, as described on
[[the setup page|Getting Started Setup]].
If you like, rename `sbt-launch.jar` and the script itself to
support multiple versions. For example you could have scripts for
`sbt7` and `sbt11`.
For more details see [[the setup page|Getting Started Setup]].
# Step 3: A technique for switching an existing project
Here is a technique for switching an existing project to 0.11 while retaining the ability to switch back again at will. Some builds, such as those with subprojects, are not suited for this technique, but if you learn how to transition a simple project it will help you do a more complex one next.
## Preserve `project/` for 0.7.x project
Rename your `project/` directory to something like `project-old`. This will hide it from sbt 0.11 but keep it in case you want to switch back to 0.7.x.
## Create `build.sbt` for 0.11
Create a `build.sbt` file in the root directory of your
project. See [[.sbt build definition|Getting Started Basic Def]]
in the Getting Started Guide, and for simple examples [[Quick-Configuration-Examples]]. If you have a simple project then converting your existing project file to this format is largely a matter of re-writing your dependencies and maven archive declarations in a modified yet familiar syntax.
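For instance, a dependency and resolver declared in a 0.7 project definition class typically become settings in `build.sbt`. The coordinates below are illustrative, not taken from this page:

```scala
// sbt 0.7 (inside project/build/MyProject.scala):
//   val dispatch = "net.databinder" %% "dispatch-http" % "0.8.8"
//   val sonatype = "Sonatype" at "https://oss.sonatype.org/content/repositories/releases"

// sbt 0.11 (in build.sbt; settings are separated by blank lines):
libraryDependencies += "net.databinder" %% "dispatch-http" % "0.8.8"

resolvers += "Sonatype" at "https://oss.sonatype.org/content/repositories/releases"
```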
This `build.sbt` file combines aspects of the old `project/build/ProjectName.scala` and `build.properties` files. It looks like a property file, yet contains Scala code in a special format.
A `build.properties` file like:
```text
#Project properties
#Fri Jan 07 15:34:00 GMT 2011
project.organization=org.myproject
project.name=My Project
sbt.version=0.7.7
project.version=1.0
def.scala.version=2.7.7
build.scala.versions=2.8.1
project.initialize=false
```
Now becomes part of your `build.sbt` file with lines like:
```scala
name := "My Project"
version := "1.0"
organization := "org.myproject"
scalaVersion := "2.9.1"
```
Currently, a `project/build.properties` is still needed to explicitly select the sbt version. For example:
```text
sbt.version=0.11.3
```
## Run sbt 0.11
Now launch sbt. If you're lucky it works and you're done. For help debugging, see below.
## Switching back to sbt 0.7.x
If you get stuck and want to switch back, you can leave your `build.sbt` file alone. sbt 0.7.x will not understand or notice it. Just rename your 0.11.x `project` directory to something like `project10` and rename the backup of your old project from `project-old` to `project` again.
# FAQs
There's a section in the [[FAQ]] about migration from 0.7 that
covers several other important points.

[sbt.ConcurrentRestrictions]: https://github.com/harrah/xsbt/blob/0.11/tasks/ConcurrentRestrictions.scala
Note: This page describes a feature in an unreleased version of sbt. The feature is currently expected to be included in version 0.12.0.
# Task ordering
Task ordering is specified by declaring a task's inputs.
Correctness of execution requires correct input declarations.
For example, the following two tasks do not have an ordering specified:
```scala
write := IO.write(file("/tmp/sample.txt"), "Some content.")
read := IO.read(file("/tmp/sample.txt"))
```
sbt is free to execute `write` first and then `read`, `read` first and then `write`, or `read` and `write` simultaneously.
Execution of these tasks is non-deterministic because they share a file.
A correct declaration of the tasks would be:
```scala
write := {
val f = file("/tmp/sample.txt")
IO.write(f, "Some content.")
f
}
read <<= write map { f => IO.read(f) }
```
This establishes an ordering: `read` must run after `write`.
We've also guaranteed that `read` will read from the same file that `write` created.
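The same pattern extends to further tasks. As a sketch, a hypothetical `lineCount` task that depends on `read` (the key name is invented for illustration):

```scala
// Hypothetical key; not defined elsewhere on this page
val lineCount = TaskKey[Int]("line-count")

// Runs after `read` and receives its result
lineCount <<= read map { (content: String) => content.split("\n").length }
```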
# Practical constraints
Note: The feature described in this section is experimental.
In particular, its default configuration is subject to change.
## Background
Declaring inputs and dependencies of a task ensures the task is properly ordered and that code executes correctly.
In practice, tasks share finite hardware and software resources and can require control over utilization of these resources.
By default, sbt executes tasks in parallel (subject to the ordering constraints already described) in an effort to utilize all available processors.
Also by default, each test class is mapped to its own task to enable executing tests in parallel.
Prior to sbt 0.12, user control over this process was restricted to:
1. Enabling or disabling all parallel execution (`parallelExecution := false`, for example).
2. Enabling or disabling mapping tests to their own tasks (`parallelExecution in Test := false`, for example).
(Although never exposed as a setting, the maximum number of tasks running at a given time was internally configurable as well.)
The second configuration mechanism described above only selected between running all of a project's tests in the same task or in separate tasks.
Each project still had a separate task for running its tests and so test tasks in separate projects could still run in parallel if overall execution was parallel.
There was no way to restrict execution such that only a single test out of all projects executed.
## Configuration
sbt 0.12 contains a general infrastructure for restricting task concurrency beyond the usual ordering declarations.
There are two parts to these restrictions.
1. A task is tagged in order to classify its purpose and resource utilization. For example, the `compile` task may be tagged as `Tags.Compile` and `Tags.CPU`.
2. A list of rules restrict the tasks that may execute concurrently. For example, `Tags.limit(Tags.CPU, 4)` would allow up to four computation-heavy tasks to run at a time.
The system is thus dependent on proper tagging of tasks and then on a good set of rules.
### Tagging Tasks
In general, a tag is associated with a weight that represents the task's relative utilization of the resource represented by the tag.
Currently, this weight is an integer, but it may become a floating-point value in the future.
`Initialize[Task[T]]` defines two methods for tagging the constructed Task: `tag` and `tagw`.
The first method, `tag`, fixes the weight to be 1 for the tags provided to it as arguments.
The second method, `tagw`, accepts pairs of tags and weights.
For example, the following associates the `CPU` and `Compile` tags with the `compile` task (with a weight of 1).
```scala
compile <<= myCompileTask tag(Tags.CPU, Tags.Compile)
```
Different weights may be specified by passing tag/weight pairs to `tagw`:
```scala
download <<= downloadImpl.tagw(Tags.Network -> 3)
```
### Defining Restrictions
Once tasks are tagged, the `concurrentRestrictions` setting sets restrictions on the tasks that may be concurrently executed based on the weighted tags of those tasks.
For example,
```scala
concurrentRestrictions := Seq(
Tags.limit(Tags.CPU, 2),
Tags.limit(Tags.Network, 10),
Tags.limit(Tags.Test, 1),
Tags.limitAll( 15 )
)
```
The example limits:
* the number of CPU-using tasks to be no more than 2
* the number of tasks using the network to be no more than 10
* test execution to only one test at a time across all projects
* the total number of tasks to be less than or equal to 15
Note that these restrictions rely on proper tagging of tasks.
Also, the value provided as the limit must be at least 1 to ensure every task is able to be executed.
sbt will generate an error if this condition is not met.
Most tasks won't be tagged because they are very short-lived.
These tasks are automatically assigned the label `Untagged`.
You may want to include these tasks in the CPU rule by using the `limitSum` method.
For example:
```scala
...
Tags.limitSum(2, Tags.CPU, Tags.Untagged)
...
```
Note that the limit is the first argument so that tags can be provided as varargs.
Finally, for the most flexibility, you can specify a custom function of type `Map[Tag,Int] => Boolean`.
The `Map[Tag,Int]` represents the weighted tags of a set of tasks.
If the function returns `true`, it indicates that the set of tasks is allowed to execute concurrently.
If the return value is `false`, the set of tasks will not be allowed to execute concurrently.
For example, you might define a custom tag `Exclusive` and create a rule that ensures that a task tagged with `Exclusive` executes only when no other tasks execute.
```scala
...
Tags.customLimit { (tags: Map[Tag,Int]) =>
val exclusive = tags.getOrElse(Exclusive, 0)
// the total number of tasks in the group
val all = tags.getOrElse(Tags.All, 0)
// if there are no exclusive tasks in this group, this rule adds no restrictions
exclusive == 0 ||
// If there is only one task, allow it to execute.
all == 1
}
...
```
There are some basic rules that custom functions must follow, but the main one to be aware of in practice is that if there is only one task, it must be allowed to execute.
sbt will generate a warning if the user defines restrictions that prevent a task from executing at all and will then execute the task anyway.
### Built-in Tags and Rules
Built-in tags are defined in the `Tags` object.
All tags listed below must be qualified by this object.
For example, `CPU` refers to the `Tags.CPU` value.
The built-in semantic tags are:
* `Compile` - describes a task that compiles sources.
* `Test` - describes a task that performs a test.
* `Publish`
* `Update`
* `Untagged` - automatically added when a task doesn't explicitly define any tags.
* `All` - automatically added to every task.
The built-in resource tags are:
* `Network` - describes a task's network utilization.
* `Disk` - describes a task's filesystem utilization.
* `CPU` - describes a task's computational utilization.
The tasks that are currently tagged by default are:
* `compile`: `Compile`, `CPU`
* `test`: `Test`
* `update`: `Update`, `Network`
* `publish`, `publish-local`: `Publish`, `Network`
Of additional note is that the default `test` task will propagate its tags to each child task created for each test class.
The default rules provide the same behavior as previous versions of sbt:
```scala
concurrentRestrictions <<= parallelExecution { par =>
val max = Runtime.getRuntime.availableProcessors
Tags.limitAll(if(par) max else 1) :: Nil
}
```
As before, `parallelExecution in Test` controls whether tests are mapped to separate tasks.
To restrict the number of concurrently executing tests in all projects, use:
```scala
concurrentRestrictions += Tags.limit(Tags.Test, 1)
```
## Custom Tags
To define a new tag, pass a String to the `Tags.Tag` method. For example:
```scala
val Custom = Tags.Tag("custom")
```
Then, use this tag as any other tag. For example:
```scala
aCustomTask <<= aCustomTask.tag(Custom)
concurrentRestrictions +=
Tags.limit(Custom, 1)
```
## Future work
This is an experimental feature and there are several aspects that may change or require further work.
### Tagging Tasks
Currently, a tag applies only to the immediate computation it is defined on.
For example, in the following, the second compile definition has no tags applied to it.
Only the first computation is labeled.
```scala
compile <<= myCompileTask tag(Tags.CPU, Tags.Compile)
compile ~= { ... do some post processing ... }
```
Is this desirable? Expected? If not, what would be a better alternative behavior?
### Fractional weighting
Weights are currently `Int`s, but could be changed to `Double`s if fractional weights would be useful.
It is important to preserve a consistent notion of what a weight of 1 means so that built-in and custom tasks share this definition and useful rules can be written.
### Default Behavior
User feedback on what custom rules work for what workloads will help determine a good set of default tags and rules.
### Adjustments to Defaults
Rules should be easier to remove or redefine, perhaps by giving them names.
As it is, rules must be appended or all rules must be completely redefined.
Redefining the tags of a task looks like:
```scala
compile <<= compile.tag(Tags.Network)
```
This will overwrite the previous weight if the tag (Network) was already defined.
For removing tags, an implementation of `removeTag` should follow from the implementation of `tag` in a straightforward manner.
### Other characteristics
The system of a tag with a weight was selected as being reasonably powerful and flexible without being too complicated.
This selection is not fundamental and could be enhanced, simplified, or replaced if necessary.
The fundamental interface that describes the constraints the system must work within is `sbt.ConcurrentRestrictions`.
This interface is used to provide an intermediate scheduling queue between task execution (`sbt.Execute`) and the underlying thread-based parallel execution service (`java.util.concurrent.CompletionService`).
This intermediate queue restricts new tasks from being forwarded to the `j.u.c.CompletionService` according to the `sbt.ConcurrentRestrictions` implementation.
See the [sbt.ConcurrentRestrictions] API documentation for details.

# Parsing and tab completion
This page describes the parser combinators in sbt.
These parser combinators are typically used to parse user input and provide tab completion for [[Input Tasks]] and [[Commands]].
If you are already familiar with Scala's parser combinators, the methods are mostly the same except that their arguments are strict.
There are two additional methods for controlling tab completion that are discussed at the end of the section.
Parser combinators build up a parser from smaller parsers.
A `Parser[T]` in its most basic usage is a function `String => Option[T]`.
It accepts a `String` to parse and produces a value wrapped in `Some` if parsing succeeds or `None` if it fails.
Error handling and tab completion make this picture more complicated, but we'll stick with Option for this discussion.
The following examples assume the imports:
```scala
import sbt._
import complete.DefaultParsers._
```
## Basic parsers
The simplest parser combinators match exact inputs:
```scala
// A parser that succeeds if the input is 'x', returning the Char 'x'
// and failing otherwise
val singleChar: Parser[Char] = 'x'
// A parser that succeeds if the input is "blue", returning the String "blue"
// and failing otherwise
val litString: Parser[String] = "blue"
```
In these examples, implicit conversions produce a literal `Parser` from a `Char` or `String`.
Other basic parser constructors are the `charClass`, `success` and `failure` methods:
```scala
// A parser that succeeds if the character is a digit, returning the matched Char
// The second argument, "digit", describes the parser and is used in error messages
val digit: Parser[Char] = charClass( (c: Char) => c.isDigit, "digit")
// A parser that produces the value 3 for an empty input string, fails otherwise
val alwaysSucceed: Parser[Int] = success( 3 )
// Represents failure (always returns None for an input String).
// The argument is the error message.
val alwaysFail: Parser[Nothing] = failure("Invalid input.")
```
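A parser can be applied to a complete input string with `Parser.parse` (assuming the `Parser` object from sbt's `complete` package), which returns either the parsed value or an error message. A sketch using the `digit` parser defined above:

```scala
// Right(value) if the entire input parses, Left(errorMessage) otherwise
val ok: Either[String, Char]  = Parser.parse("5", digit)
val bad: Either[String, Char] = Parser.parse("x", digit)
```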
## Combining parsers
We build on these basic parsers to construct more interesting parsers.
We can combine parsers in a sequence, choose between parsers, or repeat a parser.
```scala
// A parser that succeeds if the input is "blue" or "green",
// returning the matched input
val color: Parser[String] = "blue" | "green"
// A parser that matches either "fg" or "bg"
val select: Parser[String] = "fg" | "bg"
// A parser that matches "fg" or "bg", a space, and then the color, returning the matched values.
// ~ is an alias for Tuple2.
val setColor: Parser[String ~ Char ~ String] =
select ~ ' ' ~ color
// Often, we don't care about the value matched by a parser, such as the space above
// For this, we can use ~> or <~, which keep the result of
// the parser on the right or left, respectively
val setColor2: Parser[String ~ String] = select ~ (' ' ~> color)
// Match one or more digits, returning a list of the matched characters
val digits: Parser[Seq[Char]] = charClass(_.isDigit, "digit").+
// Match zero or more digits, returning a list of the matched characters
val digits0: Parser[Seq[Char]] = charClass(_.isDigit, "digit").*
// Optionally match a digit
val optDigit: Parser[Option[Char]] = charClass(_.isDigit, "digit").?
```
## Transforming results
A key aspect of parser combinators is transforming results along the way into more useful data structures.
The fundamental methods for this are `map` and `flatMap`.
Here are examples of `map` and some convenience methods implemented on top of `map`.
```scala
// Apply the `digits` parser and apply the provided function to the matched
// character sequence
val num: Parser[Int] = digits map { (chars: Seq[Char]) => chars.mkString.toInt }
// Match a digit character, returning the matched character or return '0' if the input is not a digit
val digitWithDefault: Parser[Char] = charClass(_.isDigit, "digit") ?? '0'
// The previous example is equivalent to:
val digitDefault: Parser[Char] =
charClass(_.isDigit, "digit").? map { (d: Option[Char]) => d getOrElse '0' }
// Succeed if the input is "blue" and return the value 4
val blue = "blue" ^^^ 4
// The above is equivalent to:
val blueM = "blue" map { (s: String) => 4 }
```
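Although `flatMap` is mentioned above, the examples only use `map`. With `flatMap`, the parser applied next can depend on the value already parsed; a minimal sketch using combinators introduced on this page:

```scala
// Parse the selector first, then build the rest of the parser
// from its result; flatMap makes the dependency explicit
val setColor3: Parser[(String, String)] =
  (("fg" | "bg") <~ ' ') flatMap { key =>
    ("green" | "blue") map { color => (key, color) }
  }
```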
## Controlling tab completion
Most parsers have reasonable default tab completion behavior.
For example, the string and character literal parsers will suggest the underlying literal for an empty input string.
However, it is impractical to determine the valid completions for `charClass`, since it accepts an arbitrary predicate.
The `examples` method defines explicit completions for such a parser:
```scala
val digit = charClass(_.isDigit, "digit").examples("0", "1", "2")
```
Tab completion will use the examples as suggestions.
The other method controlling tab completion is `token`.
The main purpose of `token` is to determine the boundaries for suggestions.
For example, if your parser is:
```scala
("fg" | "bg") ~ ' ' ~ ("green" | "blue")
```
then the potential completions on empty input are:
```console
fg green
fg blue
bg green
bg blue
```
Typically, you want to suggest smaller segments or the number of suggestions becomes unmanageable.
A better parser is:
```scala
token( ("fg" | "bg") ~ ' ') ~ token("green" | "blue")
```
Now, the initial suggestions would be (with _ representing a space):
```console
fg_
bg_
```
Be careful not to overlap or nest tokens, as in `token("green" ~ token("blue"))`. The behavior is unspecified (and should generate an error in the future), but typically the outermost token definition will be used.

[java.io.File]: http://download.oracle.com/javase/6/docs/api/java/io/File.html
[java.io.FileFilter]: http://download.oracle.com/javase/6/docs/api/java/io/FileFilter.html
[RichFile]: http://harrah.github.com/xsbt/latest/api/sbt/RichFile.html
[PathFinder]: http://harrah.github.com/xsbt/latest/api/sbt/PathFinder.html
[Path]: http://harrah.github.com/xsbt/latest/api/sbt/Path$.html
[IO]: http://harrah.github.com/xsbt/latest/api/sbt/IO$.html
# Paths
This page describes files, sequences of files, and file filters. The base type used is [java.io.File], but several methods are augmented through implicits:
* [RichFile] adds methods to `File`
* [PathFinder] adds methods to `File` and `Seq[File]`
* [Path] and [IO] provide general methods related to files and I/O.
## Constructing a File
sbt 0.10+ uses [java.io.File] to represent a file instead of the custom `sbt.Path` class that was in sbt 0.7 and earlier.
sbt defines the alias `File` for `java.io.File` so that an extra import is not necessary.
The `file` method is an alias for the single-argument `File` constructor to simplify constructing a new file from a String:
```scala
val source: File = file("/home/user/code/A.scala")
```
Additionally, sbt augments File with a `/` method, which is an alias for the two-argument `File` constructor for building up a path:
```scala
def readme(base: File): File = base / "README"
```
Relative files should only be used when defining the base directory of a `Project`, where they will be resolved properly.
```scala
val root = Project("root", file("."))
```
Elsewhere, files should be absolute or be built up from an absolute base `File`. The `baseDirectory` setting defines the base directory of the build or project depending on the scope.
For example, the following setting sets the unmanaged library directory to be the "custom_lib" directory in a project's base directory:
```scala
unmanagedBase <<= baseDirectory( (base: File) => base / "custom_lib" )
```
Or, more concisely:
```scala
unmanagedBase <<= baseDirectory( _ / "custom_lib" )
```
This setting sets the location of the shell history to be in the base directory of the build, irrespective of the project the setting is defined in:
```scala
historyPath <<= (baseDirectory in ThisBuild)(t => Some(t / ".history"))
```
## Path Finders
A `PathFinder` computes a `Seq[File]` on demand. It is a way to build a sequence of files. There are several methods that augment `File` and `Seq[File]` to construct a `PathFinder`. Ultimately, call `get` on the resulting `PathFinder` to evaluate it and get back a `Seq[File]`.
### Selecting descendants
The `**` method accepts a `java.io.FileFilter` and selects all files matching that filter.
```scala
def scalaSources(base: File): PathFinder = (base / "src") ** "*.scala"
```
### get
The example above selects all files that end in `.scala` in `src` or a descendant directory. The list of files is not actually evaluated until `get` is called:
```scala
def scalaSources(base: File): Seq[File] = {
val finder: PathFinder = (base / "src") ** "*.scala"
finder.get
}
```
If the filesystem changes, a second call to `get` on the same `PathFinder` object will reflect the changes. That is, the `get` method reconstructs the list of files each time. Also, `get` only returns `File`s that existed at the time it was called.
### Selecting children
Selecting files that are immediate children of a subdirectory is done with a single `*`:
```scala
def scalaSources(base: File): PathFinder = (base / "src") * "*.scala"
```
This selects all files that end in `.scala` that are in the `src` directory.
### Existing files only
If a selector, such as `/`, `**`, or `*`, is used on a path that does not represent a directory, the path list will be empty:
```scala
def emptyFinder(base: File) = (base / "lib" / "ivy.jar") * "not_possible"
```
### Name Filter
The argument to the child and descendant selectors `*` and `**` is actually a `NameFilter`. An implicit is used to convert a `String` to a `NameFilter` that interprets `*` to represent zero or more characters of any value. See the File Filters section below for more information.
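The implicit conversion can also be written explicitly with `GlobFilter`, which builds a `NameFilter` from a glob pattern:

```scala
// Equivalent to passing the String "*.scala" directly
val scalaFilter: NameFilter = GlobFilter("*.scala")

def scalaSources(base: File): PathFinder = (base / "src") ** scalaFilter
```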
### Combining PathFinders
Another operation is concatenation of `PathFinder`s:
```scala
def multiPath(base: File): PathFinder =
(base / "src" / "main") +++
(base / "lib") +++
(base / "target" / "classes")
```
When evaluated using `get`, this will return `src/main/`, `lib/`, and `target/classes/`. The concatenated finder supports all standard methods. For example,
```scala
def jars(base: File): PathFinder =
(base / "lib" +++ base / "target") * "*.jar"
```
selects all jars directly in the "lib" and "target" directories.
A common problem is excluding version control directories. This can be accomplished as follows:
```scala
def sources(base: File) =
( (base / "src") ** "*.scala") --- ( (base / "src") ** ".svn" ** "*.scala")
```
The first selector selects all Scala sources and the second selects all sources that are a descendant of a `.svn` directory. The `---` method removes all files returned by the second selector from the sequence of files returned by the first selector.
### Filtering
There is a `filter` method that accepts a predicate of type `File => Boolean` and is non-strict:
```scala
// selects all directories under "src"
def srcDirs(base: File) = ( (base / "src") ** "*") filter { _.isDirectory }
// selects archives (.zip or .jar) that are selected by 'somePathFinder'
def archivesOnly(base: PathFinder) = base filter ClasspathUtilities.isArchive
```
### Empty PathFinder
`PathFinder.empty` is a `PathFinder` that returns the empty sequence when `get` is called:
```scala
assert( PathFinder.empty.get == Seq[File]() )
```
### PathFinder to String conversions
Convert a `PathFinder` to a String using one of the following methods:
* `toString` is for debugging. It puts the absolute path of each component on its own line.
* `absString` gets the absolute paths of each component and separates them by the platform's path separator.
* `getPaths` produces a `Seq[String]` containing the absolute paths of each component
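For example (the paths in the comment are illustrative):

```scala
val jars: PathFinder = file("lib") * "*.jar"
// e.g. "/abs/lib/a.jar:/abs/lib/b.jar" on Unix-like systems
val classpath: String = jars.absString
val paths: Seq[String] = jars.getPaths
```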
### Mappings
The packaging and file copying methods in sbt expect values of type `Seq[(File,String)]` and `Seq[(File,File)]`, respectively.
These are mappings from the input file to its (String) path in the jar or its (File) destination.
This approach replaces the relative path approach (using the `##` method) from earlier versions of sbt.
Mappings are discussed in detail on the [[Mapping Files]] page.
## File Filters
The argument to `*` and `**` is of type [java.io.FileFilter].
sbt provides combinators for constructing `FileFilter`s.
First, a String may be implicitly converted to a `FileFilter`.
The resulting filter selects files with a name matching the string, with a `*` in the string interpreted as a wildcard.
For example, the following selects all Scala sources with the word "Test" in them:
```scala
def testSrcs(base: File): PathFinder = (base / "src") * "*Test*.scala"
```
There are some useful combinators added to `FileFilter`. The `||` method declares alternative `FileFilter`s. The following example selects all Java or Scala source files under "src":
```scala
def sources(base: File): PathFinder = (base / "src") ** ("*.scala" || "*.java")
```
The `--` method excludes files matching a second filter from the files matched by the first:
```scala
def imageResources(base: File): PathFinder =
(base/"src"/"main"/"resources") * ("*.png" -- "logo.png")
```
This will get `right.png` and `left.png`, but not `logo.png`, for example.

[ProcessBuilder API]: http://harrah.github.com/xsbt/latest/api/sbt/ProcessBuilder.html
# External Processes
# Usage
`sbt` includes a process library to simplify working with external processes. The library is available without import in build definitions and at the interpreter started by the [[console-project|Console Project]] task.
To run an external command, follow it with an exclamation mark `!`:
```scala
"find project -name *.jar" !
```
An implicit converts the `String` to `sbt.ProcessBuilder`, which defines the `!` method. This method runs the constructed command, waits until the command completes, and returns the exit code. Alternatively, the `run` method defined on `ProcessBuilder` runs the command and returns an instance of `sbt.Process`, which can be used to `destroy` the process before it completes. With no arguments, the `!` method sends output to standard output and standard error. You can pass a `Logger` to the `!` method to send output to the `Logger`:
```scala
"find project -name *.jar" ! log
```
Two alternative implicit conversions are from `scala.xml.Elem` or `List[String]` to `sbt.ProcessBuilder`. These are useful for constructing commands. An example of the first variant from the android plugin:
```scala
<x> {dxPath.absolutePath} --dex --output={classesDexPath.absolutePath} {classesMinJarPath.absolutePath}</x> !
```
If you need to set the working directory or modify the environment, call `sbt.Process` explicitly, passing the command sequence (command and argument list) or command string first and the working directory second. Any environment variables can be passed as a vararg list of key/value String pairs.
```scala
Process("ls" :: "-l" :: Nil, Path.userHome, "key1" -> value1, "key2" -> value2) ! log
```
Operators are defined to combine commands. These operators start with `#` in order to keep the precedence the same and to separate them from the operators defined elsewhere in `sbt` for filters. In the following operator definitions, `a` and `b` are subcommands.
* `a #&& b` Execute `a`. If the exit code is nonzero, return that exit code and do not execute `b`. If the exit code is zero, execute `b` and return its exit code.
* `a #|| b` Execute `a`. If the exit code is zero, return zero for the exit code and do not execute `b`. If the exit code is nonzero, execute `b` and return its exit code.
* `a #| b` Execute `a` and `b`, piping the output of `a` to the input of `b`.
There are also operators defined for redirecting output to `File`s and input from `File`s and `URL`s. In the following definitions, `url` is an instance of `URL` and `file` is an instance of `File`.
* `a #< url` or `url #> a` Use `url` as the input to `a`. `a` may be a `File` or a command.
* `a #< file` or `file #> a` Use `file` as the input to `a`. `a` may be a `File` or a command.
* `a #> file` or `file #< a` Write the output of `a` to `file`. `a` may be a `File`, `URL`, or a command.
* `a #>> file` or `file #<< a` Append the output of `a` to `file`. `a` may be a `File`, `URL`, or a command.
There are some additional methods to get the output from a forked process into a `String` or the output lines as a `Stream[String]`. Here are some examples, but see the [ProcessBuilder API] for details.
```scala
val listed: String = "ls" !!
val lines2: Stream[String] = "ls" lines_!
```
Finally, there is a `cat` method to send the contents of `File`s and `URL`s to standard output.
## Examples
Download a `URL` to a `File`:
```scala
url("http://databinder.net/dispatch/About") #> file("About.html") !
// or
file("About.html") #< url("http://databinder.net/dispatch/About") !
```
Copy a `File`:
```scala
file("About.html") #> file("About_copy.html") !
// or
file("About_copy.html") #< file("About.html") !
```
Append the contents of a `URL` to a `File` after filtering through `grep`:
```scala
url("http://databinder.net/dispatch/About") #> "grep JSON" #>> file("About_JSON") !
// or
file("About_JSON") #<< ( "grep JSON" #< url("http://databinder.net/dispatch/About") ) !
```
Search for uses of `null` in the source directory:
```scala
"find src -name *.scala -exec grep null {} ;" #| "xargs test -z" #&& "echo null-free" #|| "echo null detected" !
```
Use `cat`:
```scala
val spde = url("http://technically.us/spde/About")
val dispatch = url("http://databinder.net/dispatch/About")
val build = file("project/build.properties")
cat(spde, dispatch, build) #| "grep -i scala" !
```

# Publish
This page describes how to publish your project. Publishing consists of uploading a descriptor, such as an Ivy file or Maven POM, and artifacts, such as a jar or war, to a repository so that other projects can specify your project as a dependency.
The `publish` action is used to publish your project to a remote repository. To use publishing, you need to specify the repository to publish to and the credentials to use. Once these are set up, you can run `publish`.
The `publish-local` action is used to publish your project to a local Ivy repository. You can then use this project from other projects on the same machine.
## Define the repository
To specify the repository, assign a repository to `publishTo` and optionally set the publishing style. For example, to upload to Nexus:
```scala
publishTo := Some("Sonatype Snapshots Nexus" at "https://oss.sonatype.org/content/repositories/snapshots")
```
To publish to a local repository:
```scala
publishTo := Some(Resolver.file("file", new File( "path/to/my/maven-repo/releases" )) )
```
Publishing to the user's local Maven repository:
```scala
publishTo := Some(Resolver.file("file", new File(Path.userHome.absolutePath+"/.m2/repository")))
```
If you are publishing to Maven repositories, you will also have to select the right repository depending on your artifacts: SNAPSHOT versions go to the `/snapshots` repository, while other versions go to the `/releases` repository. You can make this selection using the value of the `version` setting:
```scala
publishTo <<= version { (v: String) =>
  val nexus = "https://oss.sonatype.org/"
  if (v.trim.endsWith("SNAPSHOT"))
    Some("snapshots" at nexus + "content/repositories/snapshots")
  else
    Some("releases" at nexus + "service/local/staging/deploy/maven2")
}
```
## Credentials
There are two ways to specify credentials for such a repository. The first is to specify them inline:
```scala
credentials += Credentials("Sonatype Nexus Repository Manager", "nexus.scala-tools.org", "admin", "admin123")
```
The second and better way is to load them from a file, for example:
```scala
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
```
The credentials file is a properties file with keys `realm`, `host`, `user`, and `password`. For example:
```text
realm=Sonatype Nexus Repository Manager
host=nexus.scala-tools.org
user=admin
password=admin123
```
## Cross-publishing
To support multiple incompatible Scala versions, enable cross building and do `+ publish` (see [[Cross Build]]). See [[Resolvers]] for other supported repository types.
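As a sketch, cross building relies on declaring the Scala versions to build against in the build definition; the versions below are illustrative, not a recommendation:

```scala
// Illustrative versions only; `+ publish` then publishes once per Scala version
crossScalaVersions := Seq("2.9.1", "2.9.2")
```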
## Published artifacts
By default, the main binary jar, a sources jar, and an API documentation jar are published. You can declare other types of artifacts to publish and disable or modify the default artifacts. See the [[Artifacts]] page for details.
## Modifying the generated POM
When `publish-maven-style` is `true`, a POM is generated by the `make-pom` action and published to the repository instead of an Ivy file. This POM file may be altered by changing a few settings. Set `pom-extra` to provide XML (`scala.xml.NodeSeq`) to insert directly into the generated POM. For example:
```scala
pomExtra :=
  <licenses>
    <license>
      <name>Apache 2</name>
      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
      <distribution>repo</distribution>
    </license>
  </licenses>
```
`make-pom` adds to the POM any Maven-style repositories you have declared. You can filter these by modifying `pom-include-repository`, which by default excludes local repositories. To instead only include local repositories:
```scala
pomIncludeRepository := { (repo: MavenRepository) =>
  repo.root.startsWith("file:")
}
```
There is also a `pom-post-process` setting that can be used to manipulate the final XML before it is written. Its type is `Node => Node`.
```scala
pomPostProcess := { (node: Node) =>
  ...
}
```
## Publishing Locally
The `publish-local` command will publish to the local Ivy repository. By default, this is in `${user.home}/.ivy2/local`. Other projects on the same machine can then list the project as a dependency. For example, if the sbt project you are publishing has configuration parameters like:
```scala
name := "My Project"
organization := "org.me"
version := "0.1-SNAPSHOT"
```
Then another project can depend on it:
```scala
libraryDependencies += "org.me" %% "my-project" % "0.1-SNAPSHOT"
```
The version number you select must end with `SNAPSHOT`, or you must change the version number each time you publish. Ivy maintains a cache, and it stores even local projects in that cache. If Ivy already has a version cached, it will not check the local repository for updates, unless the version number matches a [changing pattern](http://ant.apache.org/ivy/history/2.0.0/concept.html#change), and `SNAPSHOT` is one such pattern.

[patterns]: http://ant.apache.org/ivy/history/latest-milestone/concept.html#patterns
[Patterns API]: http://harrah.github.com/xsbt/latest/api/sbt/Patterns$.html
[Ivy filesystem]: http://ant.apache.org/ivy/history/latest-milestone/resolver/filesystem.html (Ivy)
[filesystem factory]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver$$file$.html
[FileRepository API]: http://harrah.github.com/xsbt/latest/api/sbt/FileRepository.html
[Ivy sftp]: http://ant.apache.org/ivy/history/latest-milestone/resolver/sftp.html
[sftp factory]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver$$Define.html
[SftpRepository API]: http://harrah.github.com/xsbt/latest/api/sbt/SftpRepository.html
[Ivy ssh]: http://ant.apache.org/ivy/history/latest-milestone/resolver/ssh.html
[ssh factory]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver$$Define.html
[SshRepository API]: http://harrah.github.com/xsbt/latest/api/sbt/SshRepository.html
[Ivy url]: http://ant.apache.org/ivy/history/latest-milestone/resolver/url.html
[url factory]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver$$url$.html
[URLRepository API]: http://harrah.github.com/xsbt/latest/api/sbt/URLRepository.html
# Resolvers
## Maven
Resolvers for Maven2 repositories are added as follows:
```scala
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
```
This is the most common kind of user-defined resolver. The rest of this page describes how to define other types of repositories.
## Predefined
A few predefined repositories are available and are listed below:
* `DefaultMavenRepository`
This is the main Maven repository at [[http://repo1.maven.org/maven2/]] and is included by default.
* `JavaNet1Repository`
This is the Maven 1 repository at [[http://download.java.net/maven/1/]].
For example, to use the `java.net` repository, use the following setting in your build definition:
```scala
resolvers += JavaNet1Repository
```
Going forward, predefined repositories will live under the `Resolver` object so they are in one place:
```scala
Resolver.sonatypeRepo("releases") // Or "snapshots"
```
See: [[https://github.com/harrah/xsbt/blob/e9bfcdfc5895a8fbde89179289430d4ffccfb7ed/ivy/IvyInterface.scala#L209]]
## Custom
sbt provides an interface to the repository types available in Ivy: file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using [patterns] to configure repositories.
Construct a repository definition using the factory in `sbt.Resolver` for the desired type. This factory creates a `Repository` object that can be further configured. The following table contains links to the Ivy documentation for the repository type and the API documentation for the factory and repository class. The SSH and SFTP repositories are configured identically except for the name of the factory. Use `Resolver.ssh` for SSH and `Resolver.sftp` for SFTP.
Type | Factory | Ivy Docs | Factory API | Repository Class API
-----|---------|----------|-------------|---------------------
Filesystem | `Resolver.file` | [Ivy filesystem] | [filesystem factory] | [FileRepository API]
SFTP | `Resolver.sftp` | [Ivy sftp] | [sftp factory] | [SftpRepository API]
SSH | `Resolver.ssh` | [Ivy ssh] | [ssh factory] | [SshRepository API]
URL | `Resolver.url` | [Ivy url] | [url factory] | [URLRepository API]
### Basic Examples
These are basic examples that use the default Maven-style repository layout.
#### Filesystem
Define a filesystem repository in the `test` directory of the current working directory and declare that publishing to this repository must be atomic.
```scala
resolvers += Resolver.file("my-test-repo", file("test")) transactional()
```
#### URL
Define a URL repository at `"http://example.org/repo-releases/"`.
```scala
resolvers += Resolver.url("my-test-repo", url("http://example.org/repo-releases/"))
```
To specify an Ivy repository, use:
```scala
resolvers += Resolver.url("my-test-repo", url)(Resolver.ivyStylePatterns)
```
or customize the layout pattern described in the Custom Layout section below.
#### SFTP and SSH Repositories
The following defines a repository that is served by SFTP from host `"example.org"`:
```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org")
```
To explicitly specify the port:
```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org", 22)
```
To specify a base path:
```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org", "maven2/repo-releases/")
```
Authentication for the repositories returned by `sftp` and `ssh` can be configured by the `as` methods.
To use password authentication:
```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password")
```
or to be prompted for the password:
```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user")
```
To use key authentication:
```scala
resolvers += {
  val keyFile: File = ...
  Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile, "keyFilePassword")
}
```
or if no keyfile password is required or if you want to be prompted for it:
```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile)
```
To specify the permissions used when publishing to the server:
```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") withPermissions("0644")
```
This is a chmod-like mode specification.
### Custom Layout
These examples specify custom repository layouts using patterns. The factory methods accept a `Patterns` instance that defines the patterns to use. The patterns are first resolved against the base file or URL. The default patterns give the default Maven-style layout. Provide a different `Patterns` object to use a different layout. For example:
```scala
resolvers += Resolver.url("my-test-repo", url)( Patterns("[organisation]/[module]/[revision]/[artifact].[ext]") )
```
You can specify multiple patterns or separate patterns for the metadata and artifacts. You can also specify whether the repository should be Maven compatible (as defined by Ivy). See the [Patterns API] for the methods to use.
For filesystem and URL repositories, you can specify absolute patterns by omitting the base URL, passing an empty `Patterns` instance, and using `ivys` and `artifacts`:
```scala
resolvers += Resolver.url("my-test-repo") artifacts
        "http://example.org/[organisation]/[module]/[revision]/[artifact].[ext]"
```

# Running Project Code
The `run` and `console` actions provide a means for running user code in the same virtual machine as sbt. This page describes the problems with doing so, how sbt handles these problems, what types of code can use this feature, and what types of code must use a [[forked jvm|Forking]]. Skip to User Code if you just want to see when you should use a [[forked jvm|Forking]].
# Problems
## System.exit
User code can call `System.exit`, which normally shuts down the JVM. Because the `run` and `console` actions run inside the same JVM as sbt, this also ends the build and requires restarting sbt.
## Threads
User code can also start other threads. Threads can be left running after the main method returns. In particular, creating a GUI creates several threads, some of which may not terminate until the JVM terminates. The program is not completed until either `System.exit` is called or all non-daemon threads terminate.
# sbt's Solutions
## System.exit
User code is run with a custom `SecurityManager` that throws a custom `SecurityException` when `System.exit` is called. This exception is caught by sbt. sbt then disposes of all top-level windows, interrupts (not stops) all user-created threads, and handles the exit code. If the exit code is nonzero, `run` and `console` complete unsuccessfully. If the exit code is zero, they complete normally.
## Threads
sbt makes a list of all threads running before executing user code. After the user code returns, sbt can then determine the threads created by the user code. For each user-created thread, sbt replaces the uncaught exception handler with a custom one that handles the custom `SecurityException` thrown by calls to `System.exit` and delegates to the original handler for everything else. sbt then waits for each created thread to exit or for `System.exit` to be called. sbt handles a call to `System.exit` as described above.
A user-created thread is one that is not in the `system` thread group and is not an `AWT` implementation thread (e.g. `AWT-XAWT`, `AWT-Windows`). User-created threads include the `AWT-EventQueue-*` thread(s).
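The before/after bookkeeping can be sketched in plain Scala. This is an illustration of the snapshot-and-diff technique only, not sbt's actual code (which additionally filters by thread group as described above):

```scala
// Illustrative sketch: find threads created by a block of code by diffing
// live-thread snapshots taken before and after it runs.
object ThreadDiff {
  // Best-effort snapshot of the live threads visible from this thread's group
  def liveThreads(): Set[Thread] = {
    val arr = new Array[Thread](Thread.activeCount * 2)
    Thread.enumerate(arr)
    arr.filter(_ != null).toSet
  }
}
```

A caller would take one snapshot, run the user code, take a second snapshot, and treat the set difference as the user-created threads to wait on.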
# User Code
Given the above, when can user code be run with the `run` and `console` actions?
The user code cannot rely on shutdown hooks and at least one of the following situations must apply for user code to run in the same JVM:
1. User code creates no threads.
2. User code creates a GUI and no other threads.
3. The program ends when user-created threads terminate on their own.
4. `System.exit` is used to end the program and user-created threads terminate when interrupted.
The requirements on threading and shutdown hooks are required because the JVM does not actually shut down. So, shutdown hooks cannot be run and threads are not terminated unless they stop when interrupted. If these requirements are not met, code must run in a [[forked jvm|Forking]].
The feature of allowing `System.exit` and multiple threads to be used cannot completely emulate the situation of running in a separate JVM and is intended for development. Program execution should be checked in a [[forked jvm|Forking]] when using multiple threads or `System.exit`.
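When forking is needed, it can be enabled for `run` with a one-line setting (sketched for this sbt version; see [[Forking]] for details):

```scala
// build.sbt: run the main class in a separate, forked JVM
fork in run := true
```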

[IvyConsole]: http://harrah.github.com/xsbt/latest/sxr/IvyConsole.scala.html
[conscript]: https://github.com/n8han/conscript
[setup script]: https://github.com/paulp/xsbtscript
# Scripts, REPL, and Dependencies
sbt has two alternative entry points that may be used to:
* Compile and execute a Scala script containing dependency declarations or other sbt settings
* Start up the Scala REPL, defining the dependencies that should be on the classpath
These entry points should be considered experimental. A notable disadvantage of these approaches is the startup time involved.
# Setup
To set up these entry points, you can either use [conscript] or manually construct the startup scripts.
In addition, there is a [setup script] for the script mode that only requires a JRE installed.
## Setup with Conscript
Install [conscript].
```
cs harrah/xsbt --branch 0.11.3
```
This will create two scripts: `screpl` and `scalas`.
## Manual Setup
Duplicate your standard `sbt` script, which was set up according to [[Setup|Getting Started Setup]], as `scalas` and `screpl` (or whatever names you like).
`scalas` is the script runner and should use `sbt.ScriptMain` as the main class, selected by adding the `-Dsbt.main.class=sbt.ScriptMain` parameter to the `java` command. Its command line should look like:
```text
java -Dsbt.main.class=sbt.ScriptMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
```
For the REPL runner `screpl`, use `sbt.ConsoleMain` as the main class:
```text
java -Dsbt.main.class=sbt.ConsoleMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
```
In each case, `/home/user/.sbt/boot` should be replaced with wherever you want sbt's boot directory to be; you might also need to give more memory to the JVM via `-Xms512M -Xmx1536M` or similar options, as shown in [[Setup|Getting Started Setup]].
# Usage
## sbt Script runner
The script runner can run a standard Scala script, but with the additional ability to configure sbt.
sbt settings may be embedded in the script in a comment block that opens with `/***`.
### Example
Copy the following script and make it executable.
You may need to adjust the first line depending on your script name and operating system.
When run, the example should retrieve Scala and the required dependencies, compile the script, and run it directly.
For example, if you name it `dispatch_example.scala`, you would do on Unix:
```
chmod u+x dispatch_example.scala
./dispatch_example.scala
```
```scala
#!/usr/bin/env scalas
!#
/***
scalaVersion := "2.9.0-1"
libraryDependencies ++= Seq(
  "net.databinder" %% "dispatch-twitter" % "0.8.3",
  "net.databinder" %% "dispatch-http" % "0.8.3"
)
*/
import dispatch.{ json, Http, Request }
import dispatch.twitter.Search
import json.{ Js, JsObject }
def process(param: JsObject) = {
  val Search.text(txt) = param
  val Search.from_user(usr) = param
  val Search.created_at(time) = param
  "(" + time + ")" + usr + ": " + txt
}
Http.x((Search("#scala") lang "en") ~> (_ map process foreach println))
```
## sbt REPL with dependencies
The arguments to the REPL mode configure the dependencies to use when starting up the REPL.
An argument may be either a jar to include on the classpath, a dependency definition to retrieve and put on the classpath, or a resolver to use when retrieving dependencies.
A dependency definition looks like:
```text
organization%module%revision
```
Or, for a cross-built dependency:
```text
organization%%module%revision
```
A repository argument looks like:
```text
"id at url"
```
### Example:
To add the Sonatype snapshots repository and put Scalaz 7.0-SNAPSHOT on the REPL classpath:
```text
screpl "sonatype-releases at https://oss.sonatype.org/content/repositories/snapshots/" "org.scalaz%%scalaz-core%7.0-SNAPSHOT"
```
This syntax was a quick hack. Feel free to improve it. The relevant class is [IvyConsole].

# Setup Notes
Some notes on how to set up your `sbt` script.
## Do not put `sbt-launch.jar` on your classpath.
Do _not_ put `sbt-launch.jar` in your `$SCALA_HOME/lib` directory, your project's `lib` directory, or anywhere it will be put on a classpath. It isn't a library.
## Terminal encoding
The character encoding used by your terminal may differ from Java's default encoding for your platform. In this case, you will need to add the option `-Dfile.encoding=<encoding>` in your `sbt` script to set the encoding, which might look like:
```text
java -Dfile.encoding=UTF8
```
## JVM heap, permgen, and stack sizes
If you find yourself running out of permgen space or your workstation is low
on memory, adjust the JVM configuration as you would for any application. For example,
a common set of memory-related options is:
```text
java -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m
```
## Boot directory
`sbt-launch.jar` is just a bootstrap; the actual meat of sbt, and the Scala
compiler and standard library, are downloaded to the shared directory `$HOME/.sbt/boot/`.
To change the location of this directory, set the `sbt.boot.directory` system property in your `sbt` script. A relative path will be resolved against the current working directory, which can be useful if you want to avoid sharing the boot directory between projects. For example, the following uses the pre-0.11 style of putting the boot directory in `project/boot/`:
```text
java -Dsbt.boot.directory=project/boot/
```
## HTTP Proxy
On Unix, sbt will pick up any HTTP proxy settings from the standard `http_proxy` environment variable. If you are behind a proxy requiring authentication, your `sbt` script must also pass flags to set the `http.proxyUser` and `http.proxyPassword` properties:
```text
java -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword
```
On Windows, your script should set properties for proxy host, port, and if applicable, username and password:
```text
java -Dhttp.proxyHost=myproxy -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword
```

# Task Inputs/Dependencies
Tasks with dependencies are now introduced in the
[[getting started guide|Getting Started More About Settings]],
which you may wish to read first. This older page may have some
additional detail.
_Wiki Maintenance Note:_ This page should have its overlap with
the getting started guide cleaned up, and just have any advanced
or additional notes. It should maybe also be consolidated with
[[Tasks]].
An important aspect of the task system introduced in sbt 0.10 is that it combines two common, related steps in a build:
1. Ensure some other task is performed.
2. Use some result from that task.
Previous versions of sbt configured these steps separately using
1. Dependency declarations
2. Some form of shared state
To see why it is advantageous to combine them, compare the situation to that of deferring initialization of a variable in Scala.
This Scala code is a bad way to expose a value whose initialization is deferred:
```scala
// Define a variable that will be initialized at some point
// We don't want to do it right away, because it might be expensive
var foo: Foo = _
// Define a function to initialize the variable
def makeFoo(): Unit = ... initialize foo ...
```
Typical usage would be:
```scala
makeFoo()
doSomething( foo )
```
This example is rather exaggerated in its badness, but I claim it is nearly the same situation as our two step task definitions.
Particular reasons this is bad include:
1. A client needs to know to call `makeFoo()` first.
2. `foo` could be changed by other code. There could be a `def makeFoo2()`, for example.
3. Access to `foo` is not thread-safe.
The first point is like declaring a task dependency, the second is like two tasks modifying the same state (either project variables or files), and the third is a consequence of unsynchronized, shared state.
In Scala, we have the built-in functionality to easily fix this: `lazy val`.
```scala
lazy val foo: Foo = ... initialize foo ...
```
with the example usage:
```scala
doSomething( foo )
```
Here, `lazy val` gives us thread safety, guaranteed initialization before access, and immutability all in one, DRY construct.
The task system in sbt does the same thing for tasks (and more, but we won't go into that here) that `lazy val` did for our bad example.
A task definition must declare its inputs and the type of its output.
sbt will ensure that the input tasks have run and will then provide their results to the function that implements the task, which will generate its own result.
Other tasks can use this result and be assured that the task has run (once) and be thread-safe and typesafe in the process.
The general form of a task definition looks like:
```scala
myTask <<= (aTask, bTask) map { (a: A, b: B) =>
... do something with a, b and generate a result ...
}
```
(This is only intended to be a discussion of the ideas behind tasks, so see the [sbt Tasks](https://github.com/harrah/xsbt/wiki/Tasks) page for details on usage.)
Basically, `myTask` is defined by declaring `aTask` and `bTask` as inputs and by defining the function to apply to the results of these tasks.
Here, `aTask` is assumed to produce a result of type `A` and `bTask` is assumed to produce a result of type `B`.
## Application
To apply this in practice:
1. Determine the tasks that produce the values you need
2. `map` the tasks with the function that implements your task.
As an example, consider generating a zip file containing the binary jar, source jar, and documentation jar for your project.
First, determine what tasks produce the jars.
In this case, the input tasks are `packageBin`, `packageSrc`, and `packageDoc` in the main `Compile` scope.
The result of each of these tasks is the File for the jar that they generated.
Our zip file task is defined by mapping these package tasks and including their outputs in a zip file.
As good practice, we then return the File for this zip so that other tasks can map on the zip task.
```scala
zip <<= (packageBin in Compile, packageSrc in Compile, packageDoc in Compile, zipPath) map {
  (bin: File, src: File, doc: File, out: File) =>
    val inputs: Seq[(File, String)] = Seq(bin, src, doc) x Path.flat
    IO.zip(inputs, out)
    out
}
```
The `val inputs` line defines how the input files are mapped to paths in the zip.
See [Mapping Files](https://github.com/harrah/xsbt/wiki/Mapping-Files) for details.
The explicit types are not required, but are included for clarity.
The `zipPath` input would be a custom task to define the location of the zip file.
For example:
```scala
zipPath <<= target map {
  (t: File) =>
    t / "out.zip"
}
```

[TaskStreams]: http://harrah.github.com/xsbt/latest/api/sbt/std/TaskStreams.html
[Logger]: http://harrah.github.com/xsbt/latest/api/sbt/Logger.html
[Incomplete]: https://github.com/harrah/xsbt/latest/api/sbt/Incomplete.html
[Result]: https://github.com/harrah/xsbt/latest/api/sbt/Result.html
# Tasks
Tasks and settings are now introduced in the
[[getting started guide|Getting Started Basic Def]], which you may
wish to read first. This older page has some additional detail.
_Wiki Maintenance Note:_ This page should have its overlap with
the getting started guide cleaned up, and just have any advanced
or additional notes. It should maybe also be consolidated with
[[TaskInputs]].
# Introduction
sbt 0.10+ has a new task system that integrates with the new settings system.
Both settings and tasks produce values, but there are two major differences between them:
1. Settings are evaluated at project load time. Tasks are executed on demand, often in response to a command from the user.
2. At the beginning of project loading, settings and their dependencies are fixed. Tasks can introduce new tasks during execution, however. (Tasks have flatMap, but Settings do not.)
# Features
There are several features of the task system:
1. By integrating with the settings system, tasks can be added, removed, and modified as easily and flexibly as settings.
2. [[Input Tasks]], the successor to method tasks, use [[parser combinators|Parsing Input]] to define the syntax for their arguments. This allows flexible syntax and tab-completions in the same way as [[Commands]].
3. Tasks produce values. Other tasks can access a task's value with the `map` and `flatMap` methods.
4. The `flatMap` method allows dynamically changing the structure of the task graph. Tasks can be injected into the execution graph based on the result of another task.
5. There are ways to handle task failure, similar to `try/catch/finally`.
6. Each task has access to its own Logger that by default persists the logging for that task at a more verbose level than is initially printed to the screen.
These features are discussed in detail in the following sections.
The context for the code snippets will be either the body of a
`Build` object in a [[.scala file|Getting Started Full Def]] or an
expression in a [[build.sbt|Getting Started Basic Def]].
# Defining a New Task
## Hello World example (sbt)
build.sbt
```scala
TaskKey[Unit]("hello") := println("hello world!")
```
## Hello World example (scala)
project/Build.scala
```scala
import sbt._
import Keys._
object HelloBuild extends Build {
  val hwsettings = Defaults.defaultSettings ++ Seq(
    organization := "hello",
    name := "world",
    version := "1.0-SNAPSHOT",
    scalaVersion := "2.9.0-1"
  )

  val hello = TaskKey[Unit]("hello", "Prints 'Hello World'")

  val helloTask = hello := {
    println("Hello World")
  }

  lazy val project = Project(
    "project",
    file("."),
    settings = hwsettings ++ Seq(helloTask)
  )
}
```
Run `sbt hello` from the command line to invoke the task. Run `sbt tasks` to see this task listed.
## Define the key
To declare a new task, define a `TaskKey` in your [[Full Configuration]]:
```scala
val sampleTask = TaskKey[Int]("sample-task")
```
The name of the `val` is used when referring to the task in Scala code.
The string passed to the `TaskKey` method is used at runtime, such as at the command line.
By convention, the Scala identifier is camelCase and the runtime identifier uses hyphens.
The type parameter passed to `TaskKey` (here, `Int`) is the type of value produced by the task.
We'll define a couple of other tasks for the examples:
```scala
val intTask = TaskKey[Int]("int-task")
val stringTask = TaskKey[String]("string-task")
```
The examples themselves are valid entries in a `build.sbt` or can be provided as part of a sequence to `Project.settings` (see [[Full Configuration]]).
## Implement the task
There are three main parts to implementing a task once its key is defined:
1. Determine the settings and other tasks needed by the task. They are the task's inputs.
2. Define a function that takes these inputs and produces a value.
3. Determine the scope the task will go in.
These parts are then combined like the parts of a setting are combined.
### Tasks without inputs
A task that takes no arguments can be defined using `:=`
```scala
intTask := 1 + 2
stringTask := System.getProperty("user.name")
sampleTask := {
  val sum = 1 + 2
  println("sum: " + sum)
  sum
}
```
As mentioned in the introduction, a task is evaluated on demand.
Each time `sample-task` is invoked, for example, it will print the sum.
If the username changes between runs, `string-task` will take different values in those separate runs.
(Within a run, each task is evaluated at most once.)
In contrast, settings are evaluated once on project load and are fixed until the next reload.
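A sketch of the contrast, using hypothetical keys (in this sbt version the key `val`s would be declared in a `Build` object, with the assignments in `build.sbt` or a settings sequence):

```scala
// Hypothetical keys for illustration
val loadTime = SettingKey[Long]("load-time") // fixed when the project loads
val callTime = TaskKey[Long]("call-time")    // recomputed on each invocation

// Evaluated once, at project load; the value is then fixed until reload:
loadTime := System.currentTimeMillis
// Evaluated every time the task is invoked, so the value can differ per run:
callTime := System.currentTimeMillis
```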
### Tasks with inputs
Tasks with other tasks or settings as inputs are defined using `<<=`.
The right hand side will typically call `map` or `flatMap` on other settings or tasks.
(Contrast this with the `apply` method that is used for settings.)
The function argument to `map` or `flatMap` is the task body.
The following are equivalent ways of defining a task that adds one to the value produced by `int-task` and returns the result.
```scala
sampleTask <<= intTask map { (count: Int) => count + 1 }
sampleTask <<= intTask map { _ + 1 }
```
Multiple inputs are handled as with settings.
The `map` and `flatMap` are done on a tuple of inputs:
```scala
stringTask <<= (sampleTask, intTask) map { (sample: Int, intValue: Int) =>
  "Sample: " + sample + ", int: " + intValue
}
```
### Task Scope
As with settings, tasks can be defined in a specific scope.
For example, there are separate `compile` tasks for the `compile` and `test` scopes.
The scope of a task is defined the same as for a setting.
In the following example, `test:sample-task` uses the result of `compile:int-task`.
```scala
sampleTask.in(Test) <<= intTask.in(Compile).map { (intValue: Int) =>
  intValue * 3
}
// more succinctly:
sampleTask in Test <<= intTask in Compile map { _ * 3 }
```
### Inline task keys
Although generally not recommended, it is possible to specify the task key inline:
```scala
TaskKey[Int]("sample-task") in Test <<= TaskKey[Int]("int-task") in Compile map { _ * 3 }
```
The type argument to `TaskKey` must be explicitly specified because of `SI-4653`. It is not recommended because:
1. Tasks are no longer referenced by Scala identifiers (like `sampleTask`), but by Strings (like `"sample-task"`)
2. The type information must be repeated.
3. Keys should come with a description, which would need to be repeated as well.
### On precedence
As a reminder, method precedence is by the name of the method.
1. Assignment methods have the lowest precedence. These are methods with names ending in `=`, except for `!=`, `<=`, `>=`, and names that start with `=`.
2. Methods starting with a letter have the next highest precedence.
3. Methods with names that start with a symbol and aren't included in 1. have the highest precedence. (This category is divided further according to the specific character it starts with. See the Scala specification for details.)
Therefore, the second variant in the previous example is equivalent to the following:
```scala
(sampleTask in Test) <<= (intTask in Compile map { _ * 3 })
```
# Modifying an Existing Task
The examples in this section use the following key definitions, which would go in a `Build` object in a [[Full Configuration]]. Alternatively, the keys may be specified inline, as discussed above.
```scala
val unitTask = TaskKey[Unit]("unit-task")
val intTask = TaskKey[Int]("int-task")
val stringTask = TaskKey[String]("string-task")
```
The examples themselves are valid settings in a `build.sbt` file or as part of a sequence provided to `Project.settings`.
In the general case, modify a task by declaring the previous task as an input.
```scala
// initial definition
intTask := 3
// overriding definition that references the previous definition
intTask <<= intTask map { (value: Int) => value + 1 }
```
Completely override a task by not declaring the previous task as an input.
Each of the definitions in the following example completely overrides the previous one.
That is, when `int-task` is run, it will only print `#3`.
```scala
intTask := {
println("#1")
3
}
intTask := {
println("#2")
5
}
intTask <<= sampleTask map { (value: Int) =>
println("#3")
value - 3
}
```
To apply a transformation to a single task, without using additional tasks as inputs, use `~=`.
This accepts the function to apply to the task's result:
```scala
intTask := 3
// increment the value returned by intTask
intTask ~= { (x: Int) => x + 1 }
```
# Task Operations
The previous sections used the `map` method to define a task in terms of the results of other tasks.
This is the most common method, but there are several others.
The examples in this section use the task keys defined in the previous section.
## Dependencies
To depend on the side effect of some tasks without using their values and without doing additional work, use `dependOn` on a sequence of tasks.
The defining task key (the part on the left side of `<<=`) must be of type `Unit`, since no value is returned.
```scala
unitTask <<= Seq(stringTask, sampleTask).dependOn
```
To add dependencies to an existing task without using their values, call `dependsOn` on the task and provide the tasks to depend on.
For example, the second task definition here modifies the original to require that `string-task` and `sample-task` run first:
```scala
intTask := 4
intTask <<= intTask.dependsOn(stringTask, sampleTask)
```
## Streams: Per-task logging
New in sbt 0.10+ are per-task loggers, which are part of a more general system for task-specific data called Streams. This allows controlling the verbosity of stack traces and logging individually for tasks as well as recalling the last logging for a task. Tasks also have access to their own persisted binary or text data.
To use Streams, `map` or `flatMap` the `streams` task. This is a special task that provides an instance of [TaskStreams] for the defining task. This type provides access to named binary and text streams, named loggers, and a default logger. The default [Logger], which is the most commonly used aspect, is obtained by the `log` method:
```scala
myTask <<= streams map { (s: TaskStreams) =>
s.log.debug("Saying hi...")
s.log.info("Hello!")
}
```
You can scope logging settings by the specific task's scope:
```scala
logLevel in myTask := Level.Debug
traceLevel in myTask := 5
```
To obtain the last logging output from a task, use the `last` command:
```scala
> last my-task
[debug] Saying hi...
[info] Hello!
```
The verbosity with which logging is persisted is controlled using the `persist-log-level` and `persist-trace-level` settings.
The `last` command displays what was logged according to these levels.
The levels do not affect already logged information.
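For example, the persisted verbosity can be raised for a single task. This is a sketch assuming the camelCase keys `persistLogLevel` and `persistTraceLevel` correspond to these settings, as with other sbt keys:

```scala
// persist debug-level logging and full stack traces for my-task only
persistLogLevel in myTask := Level.Debug
persistTraceLevel in myTask := Int.MaxValue
```

After running `my-task`, `last my-task` would then show the debug output even if the interactive log level was higher.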
## Handling Failure
This section discusses the `andFinally`, `mapFailure`, and `mapR` methods, which are used to handle failure of other tasks.
### andFinally
The `andFinally` method defines a new task that runs the original task and evaluates a side effect regardless of whether the original task succeeded.
The result of the task is the result of the original task.
For example:
```scala
intTask := error("I didn't succeed.")
intTask <<= intTask andFinally { println("andFinally") }
```
This modifies the original `intTask` to always print "andFinally" even if the task fails.
Note that `andFinally` constructs a new task.
This means that the new task has to be invoked in order for the extra block to run.
This is important when calling `andFinally` on another task instead of overriding a task as in the previous example.
For example, consider this code:
```scala
intTask := error("I didn't succeed.")
otherIntTask <<= intTask andFinally { println("andFinally") }
```
If `int-task` is run directly, `other-int-task` is never involved in execution.
This case is similar to the following plain Scala code:
```scala
def intTask: Int =
error("I didn't succeed.")
def otherIntTask: Int =
try { intTask }
finally { println("finally") }
intTask
```
It is clear here that evaluating `intTask` will never result in "finally" being printed.
### mapFailure
`mapFailure` accepts a function of type `Incomplete => T`, where `T` is a type parameter.
In the case of multiple inputs, the function has type `Seq[Incomplete] => T`.
[Incomplete] is an exception with information about any tasks that caused the failure and any underlying exceptions thrown during task execution.
The resulting task defined by `mapFailure` fails if its input succeeds and evaluates the provided function if it fails.
For example:
```scala
intTask := error("Failed.")
intTask <<= intTask mapFailure { (inc: Incomplete) =>
println("Ignoring failure: " + inc)
3
}
```
This overrides the `int-task` so that the original exception is printed and the constant `3` is returned.
`mapFailure` does not prevent other tasks that depend on the target from failing.
Consider the following example:
```scala
intTask := if(shouldSucceed) 5 else error("Failed.")
// return 3 if int-task fails. if it succeeds, this task will fail
aTask <<= intTask mapFailure { (inc: Incomplete) => 3 }
// a new task that increments the result of int-task
bTask <<= intTask map { _ + 1 }
cTask <<= (aTask, bTask) map { (a,b) => a + b }
```
The following table lists the results of each task depending on the initially invoked task:
<table>
<tr><th>invoked task</th> <th>int-task result</th> <th>a-task result</th> <th>b-task result</th> <th>c-task result</th> <th>overall result</th></tr>
<tr><td>int-task</td> <td>failure</td> <td>not run</td> <td>not run</td> <td>not run</td> <td>failure</td></tr>
<tr><td>a-task</td> <td>failure</td> <td>success</td> <td>not run</td> <td>not run</td> <td>success</td></tr>
<tr><td>b-task</td> <td>failure</td> <td>not run</td> <td>failure</td> <td>not run</td> <td>failure</td></tr>
<tr><td>c-task</td> <td>failure</td> <td>success</td> <td>failure</td> <td>failure</td> <td>failure</td></tr>
<tr><td>int-task</td> <td>success</td> <td>not run</td> <td>not run</td> <td>not run</td> <td>success</td></tr>
<tr><td>a-task</td> <td>success</td> <td>failure</td> <td>not run</td> <td>not run</td> <td>failure</td></tr>
<tr><td>b-task</td> <td>success</td> <td>not run</td> <td>success</td> <td>not run</td> <td>success</td></tr>
<tr><td>c-task</td> <td>success</td> <td>failure</td> <td>success</td> <td>failure</td> <td>failure</td></tr>
</table>
The overall result is always the same as the root task (the directly invoked task).
A `mapFailure` turns a success into a failure, and a failure into whatever the result of evaluating the supplied function is.
A `map` fails when the input fails and applies the supplied function to a successfully completed input.
In the case of more than one input, `mapFailure` fails if all inputs succeed.
If at least one input fails, the supplied function is provided with the list of `Incomplete`s.
For example:
```scala
cTask <<= (aTask, bTask) mapFailure { (incs: Seq[Incomplete]) => 3 }
```
The following table lists the results of invoking `c-task`, depending on the success of `aTask` and `bTask`:
<table>
<tr> <th>a-task result</th> <th>b-task result</th> <th>c-task result</th> </tr>
<tr> <td>failure</td> <td>failure</td> <td>success</td> </tr>
<tr> <td>failure</td> <td>success</td> <td>success</td> </tr>
<tr> <td>success</td> <td>failure</td> <td>success</td> </tr>
<tr> <td>success</td> <td>success</td> <td>failure</td> </tr>
</table>
### mapR
`mapR` accepts a function of type `Result[S] => T`, where `S` is the type of the task being mapped and `T` is a type parameter.
In the case of multiple inputs, the function has type `(Result[A], Result[B], ...) => T`.
[Result] has the same structure as `Either[Incomplete, S]` for a task result of type `S`.
That is, it has two subtypes:
* `Inc`, which wraps `Incomplete` in case of failure
* `Value`, which wraps a task's result in case of success.
Thus, `mapR` is always invoked, whether the original task succeeds or fails.
For example:
```scala
intTask := error("Failed.")
intTask <<= intTask mapR {
case Inc(inc: Incomplete) =>
println("Ignoring failure: " + inc)
3
case Value(v) =>
println("Using successful result: " + v)
v
}
```
This overrides the original `int-task` definition so that if the original task fails, the exception is printed and the constant `3` is returned.
If it succeeds, the value is printed and returned.

@ -1,394 +0,0 @@
[uniform test interface]: http://github.com/harrah/test-interface
[TestReportListener]: http://harrah.github.com/xsbt/latest/api/sbt/TestReportListener.html
[TestsListener]: http://harrah.github.com/xsbt/latest/api/sbt/TestsListener.html
[junit-interface]: https://github.com/szeiger/junit-interface
[ScalaCheck]: http://code.google.com/p/scalacheck/
[specs2]: http://etorreborre.github.com/specs2/
[ScalaTest]: http://www.artima.com/scalatest/
# Testing
# Basics
The standard source locations for testing are:
* Scala sources in `src/test/scala/`
* Java sources in `src/test/java/`
* Resources for the test classpath in `src/test/resources/`
The resources may be accessed from tests by using the `getResource` methods of `java.lang.Class` or `java.lang.ClassLoader`.
The main Scala testing frameworks ([specs2], [ScalaCheck], and [ScalaTest]) provide an implementation of the common test interface and only need to be added to the classpath to work with sbt. For example, ScalaCheck may be used by declaring it as a [[managed dependency|Library Management]]:
```scala
libraryDependencies += "org.scala-tools.testing" %% "scalacheck" % "1.9" % "test"
```
The fourth component, `"test"`, is the [[configuration|Configurations]] and means that ScalaCheck will only be on the test classpath and that it isn't needed by the main sources.
This is generally good practice for libraries because your users don't typically need your test dependencies to use your library.
With the library dependency defined, you can then add test sources in the locations listed above and compile and run tests.
The tasks for running tests are `test` and `test-only`.
The `test` task accepts no command line arguments and runs all tests:
```text
> test
```
## test-only
The `test-only` task accepts a whitespace separated list of test names to run. For example:
```text
> test-only org.example.MyTest1 org.example.MyTest2
```
It supports wildcards as well:
```text
> test-only org.example.*Slow org.example.MyTest1
```
## test-quick
The `test-quick` task, like `test-only`, accepts specific test names or wildcards (using the same syntax) to filter the tests to run. In addition to matching any explicit filter, a test is only run if it satisfies one of the following conditions:
* The tests that failed in the previous run
* The tests that were not run before
* The tests that have one or more transitive dependencies, possibly in a different project, that were recompiled
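For example, to rerun only the stale or previously failing tests matching a pattern:

```text
> test-quick org.example.*Test
```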
### Tab completion
Tab completion is provided for test names based on the results of the last `test:compile`. This means that new sources aren't available for tab completion until they are compiled, and deleted sources won't be removed from tab completion until a recompile. A new test source can still be manually written out and run using `test-only`.
## Other tasks
Tasks that are available for main sources are generally available for test sources, but are prefixed with `test:` on the command line and are referenced in Scala code with `in Test`. These tasks include:
* `test:compile`
* `test:console`
* `test:console-quick`
* `test:run`
* `test:run-main`
See [[Running|Getting Started Running]] for details on these tasks.
# Output
By default, logging is buffered for each test source file until all tests for that file complete.
This can be disabled with:
```scala
logBuffered in Test := false
```
# Options
## Test Framework Arguments
Arguments to the test framework may be provided on the command line to the `test-only` tasks following a `--` separator. For example:
```text
> test-only org.example.MyTest -- -d -S
```
To specify test framework arguments as part of the build, add options constructed by `Tests.Argument`:
```scala
testOptions in Test += Tests.Argument("-d", "-g")
```
To specify them for a specific test framework only:
```scala
testOptions in Test += Tests.Argument(TestFrameworks.ScalaCheck, "-d", "-g")
```
## Setup and Cleanup
Specify setup and cleanup actions using `Tests.Setup` and `Tests.Cleanup`.
These accept either a function of type `() => Unit` or a function of type `ClassLoader => Unit`.
The variant that accepts a ClassLoader is passed the class loader that is (or was) used for running the tests.
It provides access to the test classes as well as the test framework classes.
Examples:
```scala
testOptions in Test += Tests.Setup( () => println("Setup") )
testOptions in Test += Tests.Cleanup( () => println("Cleanup") )
testOptions in Test += Tests.Setup( loader => ... )
testOptions in Test += Tests.Cleanup( loader => ... )
```
## Disable Parallel Execution of Tests
By default, sbt runs all tasks in parallel. Because each test is mapped to a task, tests are also run in parallel by default. To disable parallel execution of tests:
```scala
parallelExecution in Test := false
```
`Test` can be replaced with `IntegrationTest` to only execute integration tests serially.
## Filter classes
If you want to only run test classes whose name ends with "Test", use `Tests.Filter`:
```scala
testOptions in Test := Seq(Tests.Filter(s => s.endsWith("Test")))
```
## Forking tests
sbt 0.12 adds the ability to run tests in a separate JVM. The setting
```scala
fork in Test := true
```
specifies that all tests are executed in a single external JVM. More control over how tests are assigned to JVMs and which options are passed to them is available with the `testGrouping` key. For example:
```scala
import sbt._
import Tests._
...
testGrouping <<= definedTests in Test map groupByFirst
...
def groupByFirst(tests: Seq[TestDefinition]) =
tests groupBy (_.name(0)) map {
case (letter, tests) => new Group(letter.toString, tests, SubProcess(Seq("-Dfirst.letter=" + letter)))
} toSeq
```
The tests in a single group are run sequentially. The number of forked JVMs allowed to run concurrently is controlled by the limit on the `Tags.ForkedTestGroup` tag, which defaults to 1. `Setup` and `Cleanup` actions are not supported when a group is forked.
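For example, to allow two forked test JVMs to run at a time, the tag limit can be raised via the standard concurrent-restrictions mechanism (a sketch; the limit value `2` is arbitrary):

```scala
concurrentRestrictions in Global += Tags.limit(Tags.ForkedTestGroup, 2)
```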
# Additional test configurations
You can add an additional test configuration to have a separate set of test sources and associated compilation, packaging, and testing tasks and settings.
The steps are:
* Define the configuration
* Add the tasks and settings
* Declare library dependencies
* Create sources
* Run tasks
The following two examples demonstrate this.
The first example shows how to enable integration tests.
The second shows how to define a customized test configuration.
This allows you to define multiple types of tests per project.
## Integration Tests
The following full build configuration demonstrates integration tests.
```scala
import sbt._
import Keys._
object B extends Build
{
lazy val root =
Project("root", file("."))
.configs( IntegrationTest )
.settings( Defaults.itSettings : _*)
.settings( libraryDependencies += specs )
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "it,test"
}
```
* `configs(IntegrationTest)` adds the predefined integration test configuration. This configuration is referred to by the name `it`.
* `settings( Defaults.itSettings : _* )` adds compilation, packaging, and testing actions and settings in the `IntegrationTest` configuration.
* `settings( libraryDependencies += specs )` adds specs to both the standard `test` configuration and the integration test configuration `it`. To define a dependency only for integration tests, use `"it"` as the configuration instead of `"it,test"`.
The standard source hierarchy is used:
* `src/it/scala` for Scala sources
* `src/it/java` for Java sources
* `src/it/resources` for resources that should go on the integration test classpath
The standard testing tasks are available, but must be prefixed with `it:`. For example,
```text
> it:test-only org.example.AnIntegrationTest
```
Similarly the standard settings may be configured for the `IntegrationTest` configuration.
If not specified directly, most `IntegrationTest` settings delegate to `Test` settings by default.
For example, if test options are specified as:
```scala
testOptions in Test += ...
```
then these will be picked up by the `Test` configuration and in turn by the `IntegrationTest` configuration.
Options can be added specifically for integration tests by putting them in the `IntegrationTest` configuration:
```scala
testOptions in IntegrationTest += ...
```
Or, use `:=` to overwrite any existing options, declaring these to be the definitive integration test options:
```scala
testOptions in IntegrationTest := Seq(...)
```
## Custom test configuration
The previous example may be generalized to a custom test configuration.
```scala
import sbt._
import Keys._
object B extends Build
{
lazy val root =
Project("root", file("."))
.configs( FunTest )
.settings( inConfig(FunTest)(Defaults.testSettings) : _*)
.settings( libraryDependencies += specs )
lazy val FunTest = config("fun") extend(Test)
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "fun"
}
```
Instead of using the built-in configuration, we defined a new one:
```scala
lazy val FunTest = config("fun") extend(Test)
```
The `extend(Test)` part means to delegate to `Test` for undefined `FunTest` settings.
The line that adds the tasks and settings for the new test configuration is:
```scala
settings( inConfig(FunTest)(Defaults.testSettings) : _*)
```
This says to add the test tasks and settings in the `FunTest` configuration.
We could have done it this way for integration tests as well.
In fact, `Defaults.itSettings` is a convenience definition: `val itSettings = inConfig(IntegrationTest)(Defaults.testSettings)`.
The comments in the integration test section hold, except with `IntegrationTest` replaced with `FunTest` and `"it"` replaced with `"fun"`. For example, test options can be configured specifically for `FunTest`:
```scala
testOptions in FunTest += ...
```
Test tasks are run by prefixing them with `fun:`
```text
> fun:test
```
## Additional test configurations with shared sources
An alternative to adding separate sets of test sources (and compilations) is to share sources.
In this approach, the sources are compiled together using the same classpath and are packaged together.
However, different tests are run depending on the configuration.
```scala
import sbt._
import Keys._
object B extends Build {
lazy val root =
Project("root", file("."))
.configs( FunTest )
.settings( inConfig(FunTest)(Defaults.testTasks) : _*)
.settings(
libraryDependencies += specs,
testOptions in Test := Seq(Tests.Filter(unitFilter)),
testOptions in FunTest := Seq(Tests.Filter(itFilter))
)
def itFilter(name: String): Boolean = name endsWith "ITest"
def unitFilter(name: String): Boolean = (name endsWith "Test") && !itFilter(name)
lazy val FunTest = config("fun") extend(Test)
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "test"
}
```
The key differences are:
* We are now only adding the test tasks (`inConfig(FunTest)(Defaults.testTasks)`) and not compilation and packaging tasks and settings.
* We filter the tests to be run for each configuration.
To run standard unit tests, run `test` (or equivalently, `test:test`):
```text
> test
```
To run tests for the added configuration (here, `"fun"`), prefix it with the configuration name as before:
```text
> fun:test
> fun:test-only org.example.AFunTest
```
### Application to parallel execution
One use for this shared-source approach is to separate tests that can run in parallel from those that must execute serially.
Apply the procedure described in this section for an additional configuration.
Let's call the configuration `serial`:
```scala
lazy val Serial = config("serial") extend(Test)
```
Then, we can disable parallel execution in just that configuration using:
```scala
parallelExecution in Serial := false
```
The tests to run in parallel would be run with `test` and the ones to run in serial would be run with `serial:test`.
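Putting the pieces together, a full configuration might look like this sketch (the names `Serial`, `serialFilter`, `parFilter`, and the `"SerialTest"` suffix are hypothetical conventions, not sbt requirements):

```scala
import sbt._
import Keys._
object B extends Build {
  lazy val root =
    Project("root", file("."))
      .configs( Serial )
      .settings( inConfig(Serial)(Defaults.testTasks) : _*)
      .settings(
        parallelExecution in Serial := false,
        // run only the serial tests in serial:test, everything else in test
        testOptions in Test := Seq(Tests.Filter(parFilter)),
        testOptions in Serial := Seq(Tests.Filter(serialFilter))
      )
  def serialFilter(name: String): Boolean = name endsWith "SerialTest"
  def parFilter(name: String): Boolean = !serialFilter(name)
  lazy val Serial = config("serial") extend(Test)
}
```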
# JUnit
Support for JUnit is provided by [junit-interface]. To add JUnit support into your project, add the junit-interface dependency in your project's main build.sbt file.
```scala
libraryDependencies += "com.novocode" % "junit-interface" % "0.8" % "test->default"
```
# Extensions
This page describes adding support for additional testing libraries and defining additional test reporters. You do this by implementing `sbt` interfaces (described below). If you are the author of the testing framework, you can depend on the test interface as a provided dependency. Alternatively, anyone can provide support for a test framework by implementing the interfaces in a separate project and packaging the project as an sbt [[Plugin|Plugins]].
## Custom Test Framework
`sbt` contains built-in support for the three main Scala testing libraries (specs 1 and 2, ScalaTest, and ScalaCheck). To add support for a different framework, implement the [uniform test interface].
## Custom Test Reporters
Test frameworks report status and results to test reporters. You can create a new test reporter by implementing either [TestReportListener] or [TestsListener].
## Using Extensions
To use your extensions in a project definition:
Modify the `testFrameworks` setting to reference your test framework:
```scala
testFrameworks += new TestFramework("custom.framework.ClassName")
```
Specify the test reporters you want to use by modifying the `testListeners` setting in your build definition.
```scala
testListeners += customTestListener
```
where `customTestListener` is of type `sbt.TestReportListener`.

@ -1,41 +0,0 @@
[web plugin]: https://github.com/siasia/xsbt-web-plugin
# Triggered Execution
You can make a command run when certain files change by prefixing the command with `~`. Monitoring is terminated when `enter` is pressed. This triggered execution is configured by the `watch` setting, but typically the basic settings `watch-sources` and `poll-interval` are modified.
* `watch-sources` defines the files for a single project that are monitored for changes. By default, a project watches resources and Scala and Java sources.
* `watch-transitive-sources` then combines the `watch-sources` for the current project and all execution and classpath dependencies (see [[Full Configuration]] for details on inter-project dependencies).
* `poll-interval` selects the interval between polling for changes in milliseconds. The default value is `500 ms`.
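These settings can be adjusted in the build definition. A sketch, assuming `pollInterval` and `watchSources` are the camelCase key names for the settings above, and with a hypothetical extra watched file:

```scala
// poll once per second instead of every 500 ms
pollInterval := 1000

// additionally watch a file not covered by the defaults
watchSources <+= baseDirectory map { base => base / "config" / "app.conf" }
```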
Some example usages are described below.
# Compile
The original use-case was continuous compilation:
```text
> ~ test:compile
> ~ compile
```
# Testing
You can use the triggered execution feature to run any command or task. One use is for test driven development, as suggested by Erick on the mailing list.
The following will poll for changes to your source code (main or test) and run `test-only` for the specified test.
```text
> ~ test-only example.TestA
```
# Running Multiple Commands
Occasionally, you may need to trigger the execution of multiple commands. You can use semicolons to separate the commands to be triggered.
The following will poll for source changes and run `clean` and `test`.
```text
> ~; clean; test
```

@ -1,154 +0,0 @@
[sbt.UpdateReport]: http://harrah.github.com/xsbt/latest/api/sbt/UpdateReport.html
[DependencyFilter]: http://harrah.github.com/xsbt/latest/api/sbt/DependencyFilter.html
[ConfigurationFilter]: http://harrah.github.com/xsbt/latest/api/sbt/ConfigurationFilter.html
[ModuleFilter]: http://harrah.github.com/xsbt/latest/api/sbt/ModuleFilter.html
[ArtifactFilter]: http://harrah.github.com/xsbt/latest/api/sbt/ArtifactFilter.html
# Update Report
`update` and related tasks produce a value of type [sbt.UpdateReport].
This data structure provides information about the resolved configurations, modules, and artifacts.
At the top level, `UpdateReport` provides reports of type `ConfigurationReport` for each resolved configuration.
A `ConfigurationReport` supplies reports (of type `ModuleReport`) for each module resolved for a given configuration.
Finally, a `ModuleReport` lists each successfully retrieved `Artifact` and the `File` it was retrieved to as well as the `Artifact`s that couldn't be downloaded.
This missing `Artifact` list is always empty for `update`, which fails if it would be non-empty.
However, it may be non-empty for `update-classifiers` and `update-sbt-classifiers`.
# Filtering a Report and Getting Artifacts
A typical use of `UpdateReport` is to retrieve a list of files matching a filter.
A conversion of type `UpdateReport => RichUpdateReport` implicitly provides these methods for `UpdateReport`.
The filters are defined by the [DependencyFilter], [ConfigurationFilter], [ModuleFilter], and [ArtifactFilter] types.
Using these filter types, you can filter by the configuration name, the module organization, name, or revision, and the artifact name, type, extension, or classifier.
The relevant methods (implicitly on `UpdateReport`) are:
```scala
def matching(f: DependencyFilter): Seq[File]
def select(configuration: ConfigurationFilter = ..., module: ModuleFilter = ..., artifact: ArtifactFilter = ...): Seq[File]
```
Any argument to `select` may be omitted, in which case all values are allowed for the corresponding component.
For example, if the `ConfigurationFilter` is not specified, all configurations are accepted.
The individual filter types are discussed below.
## Filter Basics
Configuration, module, and artifact filters are typically built by applying a `NameFilter` to each component of a `Configuration`, `ModuleID`, or `Artifact`.
A basic `NameFilter` is implicitly constructed from a String, with `*` interpreted as a wildcard.
```scala
import sbt._
// each argument is of type NameFilter
val mf: ModuleFilter = moduleFilter(organization = "*sbt*", name = "main" | "actions", revision = "1.*" - "1.0")
// unspecified arguments match everything by default
val mf2: ModuleFilter = moduleFilter(organization = "net.databinder")
// specifying "*" is the same as omitting the argument
val af: ArtifactFilter = artifactFilter(name = "*", `type` = "source", extension = "jar", classifier = "sources")
val cf: ConfigurationFilter = configurationFilter(name = "compile" | "test")
```
Alternatively, these filters, including a `NameFilter`, may be directly defined by an appropriate predicate (a single-argument function returning a Boolean).
```scala
import sbt._
// here the function value of type String => Boolean is implicitly converted to a NameFilter
val nf: NameFilter = (s: String) => s.startsWith("dispatch-")
// a Set[String] is a function String => Boolean
val acceptConfigs: Set[String] = Set("compile", "test")
// implicitly converted to a ConfigurationFilter
val cf: ConfigurationFilter = acceptConfigs
val mf: ModuleFilter = (m: ModuleID) => m.organization contains "sbt"
val af: ArtifactFilter = (a: Artifact) => a.classifier.isEmpty
```
## ConfigurationFilter
A configuration filter essentially wraps a `NameFilter` and is explicitly constructed by the `configurationFilter` method:
```scala
def configurationFilter(name: NameFilter = ...): ConfigurationFilter
```
If the argument is omitted, the filter matches all configurations.
Functions of type `String => Boolean` are implicitly convertible to a `ConfigurationFilter`.
As with `ModuleFilter`, `ArtifactFilter`, and `NameFilter`, the `&`, `|`, and `-` methods may be used to combine `ConfigurationFilter`s.
```scala
import sbt._
val a: ConfigurationFilter = Set("compile", "test")
val b: ConfigurationFilter = (c: String) => c.startsWith("r")
val c: ConfigurationFilter = a | b
```
(The explicit types are optional here.)
## ModuleFilter
A module filter is defined by three `NameFilter`s: one for the organization, one for the module name, and one for the revision.
Each component filter must match for the whole module filter to match.
A module filter is explicitly constructed by the `moduleFilter` method:
```scala
def moduleFilter(organization: NameFilter = ..., name: NameFilter = ..., revision: NameFilter = ...): ModuleFilter
```
An omitted argument does not contribute to the match. If all arguments are omitted, the filter matches all `ModuleID`s.
Functions of type `ModuleID => Boolean` are implicitly convertible to a `ModuleFilter`.
As with `ConfigurationFilter`, `ArtifactFilter`, and `NameFilter`, the `&`, `|`, and `-` methods may be used to combine `ModuleFilter`s:
```scala
import sbt._
val a: ModuleFilter = moduleFilter(name = "dispatch-twitter", revision = "0.7.8")
val b: ModuleFilter = moduleFilter(name = "dispatch-*")
val c: ModuleFilter = b - a
```
(The explicit types are optional here.)
## ArtifactFilter
An artifact filter is defined by four `NameFilter`s: one for the name, one for the type, one for the extension, and one for the classifier.
Each component filter must match for the whole artifact filter to match.
An artifact filter is explicitly constructed by the `artifactFilter` method:
```scala
def artifactFilter(name: NameFilter = ..., `type`: NameFilter = ..., extension: NameFilter = ..., classifier: NameFilter = ...): ArtifactFilter
```
Functions of type `Artifact => Boolean` are implicitly convertible to an `ArtifactFilter`.
As with `ConfigurationFilter`, `ModuleFilter`, and `NameFilter`, the `&`, `|`, and `-` methods may be used to combine `ArtifactFilter`s:
```scala
import sbt._
val a: ArtifactFilter = artifactFilter(classifier = "javadoc")
val b: ArtifactFilter = artifactFilter(`type` = "jar")
val c: ArtifactFilter = b - a
```
(The explicit types are optional here.)
## DependencyFilter
A `DependencyFilter` is typically constructed by combining other `DependencyFilter`s together using `&&`, `||`, and `--`.
Configuration, module, and artifact filters are `DependencyFilter`s themselves and can be used directly as a `DependencyFilter` or they can build up a `DependencyFilter`.
Note that the symbols for the `DependencyFilter` combining methods are doubled up to distinguish them from the combinators of the more specific filters for configurations, modules, and artifacts.
These double-character methods will always return a `DependencyFilter`, whereas the single character methods preserve the more specific filter type.
For example:
```scala
import sbt._
val df: DependencyFilter =
configurationFilter(name = "compile" | "test") && artifactFilter(`type` = "jar") || moduleFilter(name = "dispatch-*")
```
Here, we used `&&` and `||` to combine individual component filters into a dependency filter, which can then be provided to the `UpdateReport.matching` method. Alternatively, the `UpdateReport.select` method may be used, which is equivalent to calling `matching` with its arguments combined with `&&`.
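As a sketch of putting this to use in a task (the `jarFiles` key is hypothetical):

```scala
val jarFiles = TaskKey[Seq[File]]("jar-files")

// collect the jar artifacts resolved for the compile configuration
jarFiles <<= update map { (report: UpdateReport) =>
  report.matching( configurationFilter(name = "compile") && artifactFilter(`type` = "jar") )
}
```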

@ -1,38 +0,0 @@
* [[Home]] - Overview of sbt
* [[Getting Started Guide|Getting Started Welcome]] - START HERE
* [[FAQ]] - Questions, answered.
* [[Index]] - Find types, values, and methods
* [[Community]] - source, forums, releases
* [[Examples]]
* [[Detailed Topics]] - deep dive docs
* [[Artifacts]] what to publish
* [[Best Practices]]
* [[Classpaths]]
* [[Command Line Reference]]
* [[Compiler Plugins]]
* [[Console Project]]
* [[Cross Build]]
* [[Forking]]
* [[Global Settings]]
* [[Inspecting Settings]]
* [[Java Sources]]
* [[Launcher]]
* [[Library Management]]
* [[Local Scala]]
* [[Mapping Files]]
* [[Migrating to 0.10+|Migrating from SBT 0.7.x to 0.10.x]]
* [[Parallel Execution]]
* [[Parsing Input]]
* [[Paths]]
* [[Process]]
* [[Publishing]]
* [[Resolvers]]
* [[Running Project Code]]
* [[Scripts]]
* [[Setup Notes]]
* [[Tasks]]
* [[TaskInputs]]
* [[Testing]]
* [[Triggered Execution]]
* [[Update Report]]
* [[Extending sbt|Extending]] - internals docs

@ -1,3 +0,0 @@
Why could I?
Unfortunately, the GitHub wiki only provides two roles. One can't modify anything while the other can edit, delete, or create new pages. The delete page link doesn't ask for confirmation and so we get pages accidentally deleted. We have to live with it if we want to allow users to edit the wiki (and we do). Don't worry about it and thanks for promptly reverting.

@ -1,188 +0,0 @@
[sbt.Keys]: http://harrah.github.com/xsbt/latest/api/sbt/Keys$.html
[Scoped]: http://harrah.github.com/xsbt/latest/api/sbt/Scoped$.html
[Scope]: http://harrah.github.com/xsbt/latest/api/sbt/Scope$.html
[Settings]: http://harrah.github.com/xsbt/latest/sxr/Settings.scala.html
[Attributes]: http://harrah.github.com/xsbt/latest/sxr/Attributes.scala.html
[Defaults]: http://harrah.github.com/xsbt/latest/sxr/Defaults.scala.html
[Keys]: http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html
_Wiki Maintenance Note:_ This page has been replaced a couple of times; first by
[[Settings]] and most recently by [[Getting Started Basic Def]] and
[[Getting Started More About Settings]]. It has some obsolete
terminology:
- we now avoid referring to build definition as "configuration"
to avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full
configuration, in favor of ".sbt build definition files" and
".scala build definition files"
However, it may still be worth combing this page for examples or
points that are not made in new pages. After doing so, this page
could simply be a redirect (delete the content, link to the new
pages about build definition).
# Configuration
A build definition is written in Scala. There are two types of definitions: light and full. A light definition is a quick way of configuring a build. It consists of a list of Scala expressions describing project settings in one or more ".sbt" files located in the base directory of the project. This also applies to sub-projects.
A full definition is made up of one or more Scala source files that describe relationships between projects, introduce new configurations and settings, and define more complex aspects of the build. The capabilities of a light definition are a proper subset of those of a full definition.
Light configuration and full configuration can co-exist. Settings defined in the light configuration are appended to the settings defined in the full configuration for the corresponding project.
# Light Configuration
## By Example
Create a file with extension `.sbt` in your root project directory (such as `<your-project>/build.sbt`). This file contains Scala expressions of type `Setting[T]` that are separated by blank lines. Built-in settings typically have reasonable defaults (an exception is `publishTo`). A project typically redefines at least `name` and `version` and often `libraryDependencies`. All built-in settings are listed in [Keys].
A sample `build.sbt`:
```scala
// Set the project name to the string 'My Project'
name := "My Project"
// The := method used in Name and Version is one of two fundamental methods.
// The other method is <<=
// All other initialization methods are implemented in terms of these.
version := "1.0"
// Add a single dependency
libraryDependencies += "junit" % "junit" % "4.8" % "test"
// Add multiple dependencies
libraryDependencies ++= Seq(
  "net.databinder" %% "dispatch-google" % "0.7.8",
  "net.databinder" %% "dispatch-meetup" % "0.7.8"
)
// Exclude backup files by default. This uses ~=, which accepts a function of
// type T => T (here T = FileFilter) that is applied to the existing value.
// A similar idea is overriding a member and applying a function to the super value:
// override lazy val defaultExcludes = f(super.defaultExcludes)
//
defaultExcludes ~= (filter => filter || "*~")
/* Some equivalent ways of writing this:
defaultExcludes ~= (_ || "*~")
defaultExcludes ~= ( (_: FileFilter) || "*~")
defaultExcludes ~= ( (filter: FileFilter) => filter || "*~")
*/
// Use the project version to determine the repository to publish to.
publishTo <<= version { (v: String) =>
  if(v endsWith "-SNAPSHOT")
    Some(ScalaToolsSnapshots)
  else
    Some(ScalaToolsReleases)
}
```
## Notes
* Because everything is parsed as an expression, no semicolons are allowed at the ends of lines.
* All initialization methods end with `=` so that they have the lowest possible precedence. Except when passing a function literal to `~=`, you do not need to use parentheses for either side of the method.
Ok:
```scala
libraryDependencies += "junit" % "junit" % "4.8" % "test"
libraryDependencies.+=("junit" % "junit" % "4.8" % "test")
defaultExcludes ~= (_ || "*~")
defaultExcludes ~= (filter => filter || "*~")
```
Error:
```console
defaultExcludes ~= _ || "*~"
error: missing parameter type for expanded function ((x$1) => defaultExcludes.$colon$tilde(x$1).$bar("*~"))
defaultExcludes ~= _ || "*~"
                   ^
error: value | is not a member of sbt.Project.Setting[sbt.FileFilter]
defaultExcludes ~= _ || "*~"
                     ^
```
* A block is an expression, with the last statement in the block being the result. For example, the following is an expression:
```scala
{
  val x = 3
  def y = 2
  x + y
}
```
An example of using a block to construct a Setting:
```scala
version := {
  // Define a regular expression to match the current branch
  val current = """\*\s+(\w+)""".r
  // Process the output of 'git branch' to get the current branch
  val branch = "git branch --no-color".lines_!.collect { case current(name) => "-" + name }
  // Append the current branch to the version.
  "1.0" + branch.mkString
}
```
* Remember that blank lines are used to clearly delineate expressions. This happens before the expression is sent to the Scala compiler, so no blank lines are allowed within a block.
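To illustrate, the following would fail to parse in a `.sbt` file: the blank line splits what was meant to be a single block into two separate expressions before it ever reaches the Scala compiler (the setting shown is illustrative):

```scala
version := {
  val base = "1.0"

  base + "-SNAPSHOT"
}
```

Removing the blank line inside the braces makes it a single valid expression.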
## More Information
* A `Setting[T]` describes how to initialize a value of type T. The entries shown in the example are expressions, not statements. In particular, there is no hidden mutable map being modified. Each `Setting[T]` describes an update to a map. The actual map is rarely referenced directly by user code. It is not the final map that is important, but the operations on the map.
* There are fundamentally two types of initializations, `:=` and `<<=`. The methods `+=`, `++=`, and `~=` are defined in terms of these. `:=` assigns a value, overwriting any existing value. `<<=` uses existing values to initialize a setting.
* `key ~= f` is equivalent to `key <<= key(f)`
* `key += value` is equivalent to `key ~= (_ :+ value)` or `key <<= key(_ :+ value)`
* `key ++= value` is equivalent to `key ~= (_ ++ value)` or `key <<= key(_ ++ value)`
* There can be multiple `.sbt` files per project. This feature can be used, for example, to put user-specific configurations in a separate file.
* Import clauses are allowed at the beginning of a `.sbt` file. Since they are clauses, no semicolons are allowed. They need not be separated by blank lines, but each import must be on one line. For example,
```scala
import scala.xml.NodeSeq
import math.{abs, pow}
```
* These imports are defined by default in a `.sbt` file:
```scala
import sbt._
import Process._
import Keys._
```
In addition, the contents of all public `Build` and `Plugin` objects from the full definition are imported.
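The equivalences listed above can be illustrated directly; each pair below defines the same setting (reusing the `libraryDependencies` and `defaultExcludes` examples from earlier):

```scala
// key += value  is shorthand for appending to the existing value:
libraryDependencies += "junit" % "junit" % "4.8" % "test"
libraryDependencies <<= libraryDependencies(_ :+ ("junit" % "junit" % "4.8" % "test"))

// key ~= f  is shorthand for transforming the existing value:
defaultExcludes ~= (_ || "*~")
defaultExcludes <<= defaultExcludes(_ || "*~")
```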
sbt uses the blank lines to separate the expressions and then sends them off to the Scala compiler. Each expression is parsed, compiled, and loaded independently. The settings are combined into a `Seq[Setting[_]]` and passed to the settings engine. The engine groups the settings by key (preserving per-key order) and then computes the order in which each setting needs to be evaluated. Cycles and references to uninitialized settings are detected here and dead settings are dropped. Finally, the settings are transformed into a function that is applied to an initially empty map.
Because the expressions can be separated without involving the compiler, sbt only needs to recompile expressions that change. So, the work needed to respond to changes is proportional to the number of settings that changed, not the number of settings defined in the build. If the imports change, all expressions in the `.sbt` file need to be recompiled.
## Implementation Details (even more information)
Each expression describes an initialization operation. The simplest operation is context-free assignment using `:=`. That is, no outside information is used to determine the setting value. Operations other than `:=` are implemented in terms of `<<=`. The `<<=` method specifies an operation that requires other settings to be initialized and uses their values to define a new setting.
The target (left side value) of a method like `:=` identifies one of the constructs in sbt: settings, tasks, and input tasks. It is not an actual setting or task, but a key representing a setting or task. A setting is a value assigned when a project is loaded. A task is a unit of work that is run on-demand zero or more times after a project is loaded and also produces a value. An input task, previously known as a Method Task in 0.7 and earlier, accepts an input string and produces a task to be run. The renaming is because it can accept arbitrary input in 0.10 and not just a space-delimited sequence of arguments like in 0.7.
A construct (setting, task, or input task) is identified by a scoped key, which is a pair `(Scope, AttributeKey[T])`. An `AttributeKey` associates a name with a type and is a typesafe key for use in an `AttributeMap`. Attributes are best illustrated by the `get` and `put` methods on `AttributeMap`:
```scala
def get[T](key: AttributeKey[T]): Option[T]
def put[T](key: AttributeKey[T], value: T): AttributeMap
```
For example, given a value `k: AttributeKey[String]` and a value `m: AttributeMap`, `m.get(k)` has type `Option[String]`.
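A minimal sketch of this typesafe map in use (the key name here is made up for illustration):

```scala
import sbt._

// An AttributeKey ties the name "example-name" to the type String.
val k: AttributeKey[String] = AttributeKey[String]("example-name")

// put returns a new map; get recovers a value at the key's declared type.
val m: AttributeMap = AttributeMap.empty.put(k, "hello")
val v: Option[String] = m.get(k)   // Some("hello")
```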
In sbt, a `Scope` is mainly defined by a project reference and a configuration (such as `test` or `compile`). Project data is stored in a `Map[Scope, AttributeMap]`. Each `Scope` identifies a map. Loosely, a `Scope` can be compared to a reference to an object, and an `AttributeMap` to the object's data.
In order to provide appropriate convenience methods for constructing an initialization operation for each construct, an AttributeKey is constructed through either a SettingKey, TaskKey, or InputKey:
```scala
// underlying key: AttributeKey[String]
val name = SettingKey[String]("name")
// underlying key: AttributeKey[Task[String]]
val hello = TaskKey[String]("hello")
// underlying key: AttributeKey[InputTask[String]]
val helloArgs = InputKey[String]("hello-with-args")
```
In the basic expression `name := "asdf"`, the `:=` method is implicitly available for a `SettingKey` and accepts an argument that conforms to the type parameter of name, which is String.
The high-level API for constructing settings is defined in [Scoped]. Scopes are defined in [Scope]. The underlying engine is in [Settings] and the heterogeneous map is in [Attributes].
Built-in keys are in [Keys] and default settings are defined in [Defaults].

@ -1,52 +0,0 @@
[Ivy documentation]: http://ant.apache.org/ivy/history/2.2.0/tutorial/conf.html
[Maven Scopes]: http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope
_Wiki Maintenance Note:_ Most of what's on this page is now covered in
[[Getting Started Library Dependencies]]. This page should be
analyzed for any points that aren't covered on the new page, and
those points moved somewhere (maybe the [[FAQ]] or an "advanced
library deps" page). Then this page could become a redirect with
no content except a link pointing to the new page(s).
_Wiki Maintenance Note 2:_ There probably should be a page called
Configurations that's less about library dependency management and
more about listing all the configurations that exist and
describing what they are used for. This would complement the way
this page is linked, for example in [[Index]].
# Configurations
Ivy configurations are a useful feature for your build when you use managed dependencies. They are essentially named sets of dependencies. You can read the [Ivy documentation] for details. Their use in sbt is described on this page.
# Usage
The built-in use of configurations in sbt is similar to scopes in Maven. sbt adds dependencies to different classpaths by the configuration that they are defined in. See the description of [Maven Scopes] for details.
You put a dependency in a configuration by selecting one or more of its configurations to map to one or more of your project's configurations. The most common case is to have one of your configurations `A` use a dependency's configuration `B`. The mapping for this looks like `"A->B"`. To apply this mapping to a dependency, add it to the end of your dependency definition:
```scala
libraryDependencies += "org.scalatest" % "scalatest" % "1.2" % "test->compile"
```
This says that your project's `test` configuration uses ScalaTest's `compile` configuration. Again, see the [Ivy documentation] for more advanced mappings. Most projects published to Maven repositories will use the `default` or `compile` configuration.
A useful application of configurations is to group dependencies that are not used on normal classpaths. For example, your project might use a `"js"` configuration to automatically download jQuery and then include it in your jar by modifying `resources`:
```scala
ivyConfigurations += config("js") hide
libraryDependencies += "jquery" % "jquery" % "1.3.2" % "js->default" from "http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"
resources <<= (resources, update) { (rs, report) =>
  rs ++ report.select( configurationFilter("js") )
}
```
The `config` method defines a new configuration with name `"js"` and makes it private to the project so that it is not used for publishing.
See [[Update Report]] for more information on selecting managed artifacts.
A configuration without a mapping (no `"->"`) is mapped to `default` or `compile`. The `->` is only needed when mapping to a different configuration than those. The ScalaTest dependency above can then be shortened to:
```scala
libraryDependencies += "org.scalatest" % "scalatest" % "1.2" % "test"
```

@ -1,18 +0,0 @@
# Dormant Pages
If you check out the wiki as a git repository, there's a `Dormant`
directory (this one) which contains:
- "redirect" pages (empty pages that point to some new page).
If you want to rename a page and think it has lots of incoming
links from outside the wiki, you could leave the old page name
in here. The directory name is not part of the link so it's
safe to move the old page into the `Dormant` directory.
- "clipboard" pages that contain some amount of useful text, that
needs to be extracted and organized, maybe moved to existing
pages or the FAQ or maybe there's a new page that should exist.
Basically content that may be good but needs massaging into the
big picture.
Ideally, pages in here have a note at the top pointing to
alternative content and explaining the status of the page.

@ -1,260 +0,0 @@
[#35]: https://github.com/harrah/xsbt/issues/35
_Wiki Maintenance Note:_ This page has been _mostly_ replaced by
[[Getting Started Full Def]] and other pages. It has some obsolete
terminology:
- we now avoid referring to build definition as "configuration"
to avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full
configuration, in favor of ".sbt build definition files" and
".scala build definition files"
However, it may still be worth combing this page for examples or
points that are not made in new pages. Some stuff that may not be
elsewhere:
- discussion of cycles
- discussion of build-level settings
- discussion of omitting or augmenting defaults
Also, the discussion of configuration delegation which is teased
here, needs to exist somewhere.
After extracting useful content, this page could simply be a
redirect (delete the content, link to the new pages about build
definition).
There is a related page [[Introduction to Full Configurations]]
which could benefit from cleanup at the same time.
# Full Configuration (Draft)
A full configuration consists of one or more Scala source files that define concrete Builds.
A Build defines project relationships and configurations.
## By Example
Create a file with extension `.scala` in your `project/` directory (such as `<your-project>/project/Build.scala`).
A sample `project/Build.scala`:
```scala
import sbt._
object MyBuild extends Build {
  // Declare a project in the root directory of the build with ID "root".
  // Declare an execution dependency on sub1.
  lazy val root = Project("root", file(".")) aggregate(sub1)

  // Declare a project with ID 'sub1' in directory 'a'.
  // Declare a classpath dependency on sub2 in the 'test' configuration.
  lazy val sub1: Project = Project("sub1", file("a")) dependsOn(sub2 % "test")

  // Declare a project with ID 'sub2' in directory 'b'.
  // Declare a configuration dependency on the root project.
  lazy val sub2 = Project("sub2", file("b"), delegates = root :: Nil)
}
```
### Cycles
(It is probably best to skip this section and come back after reading about project relationships. It is near the example for easier reference.)
The configuration dependency `sub2 -> root` is specified as an argument to the `delegates` parameter of `Project`, which is by-name and of type `Seq[ProjectReference]` because by-name repeated parameters are not allowed in Scala.
There are also corresponding by-name parameters `aggregate` and `dependencies` for execution and classpath dependencies.
By-name parameters, being non-strict, are useful when there are cycles between the projects, as is the case for `root` and `sub2`.
In the example, there is a *configuration* dependency `sub2 -> root`, a *classpath* dependency `sub1 -> sub2`, and an *execution* dependency `root -> sub1`.
This causes cycles at the Scala level, but there are no cycles within any single dependency type; cycles within a single type are not allowed.
## Defining Projects
An internal project is defined by constructing an instance of `Project`. The minimum information for a new project is its ID string and base directory. For example:
```scala
import sbt._
object MyBuild extends Build {
  lazy val projectA = Project("a", file("subA"))
}
```
This constructs a project definition for a project with ID 'a' and located in the `<project root>/subA` directory.
Here, `file(...)` is equivalent to `new File(...)` and is resolved relative to the build's base directory.
There are additional optional parameters to the Project constructor.
These parameters configure the project and declare project relationships, as discussed in the next sections.
## Project Settings
A full build definition can configure settings for a project, just like a light configuration.
Unlike a light configuration, a full definition can replace or manipulate the default settings, and it can manipulate sequences of settings as a whole.
In addition, a light configuration has default imports defined. A full definition needs to import these explicitly.
In particular, all keys (like `name` and `version`) need to be imported from `sbt.Keys`.
### No defaults
For example, to define a build from scratch (with no default settings or tasks):
```scala
import sbt._
import Keys._
object MyBuild extends Build {
  lazy val projectA = Project("a", file("subA"), settings = Seq(name := "From Scratch"))
}
```
### Augment Defaults
To augment the default settings, the following Project definitions are equivalent:
```scala
lazy val a1 = Project("a", file("subA")) settings(name := "Additional", version := "1.0")
lazy val a2 = Project("a", file("subA"),
  settings = Defaults.defaultSettings ++ Seq(name := "Additional", version := "1.0")
)
```
### Select Defaults
Web support is now split out into a plugin.
With the plugin declared, its settings can be selected like:
```scala
import sbt._
import Keys._
object MyBuild extends Build {
  lazy val projectA = Project("a", file("subA"), settings = Web.webSettings)
}
```
Settings defined in `.sbt` files are appended to the settings for each `Project` definition.
### Build-level Settings
Lastly, settings can be defined for the entire build.
In general, these are used when a setting is not defined for a project.
These settings are declared either by augmenting `Build.settings` or defining settings in the scope of the current build.
For example, to set the shell prompt to be the id for the current project, the following setting can be added to a `.sbt` file:
```scala
shellPrompt in ThisBuild := { s => Project.extract(s).currentProject.id + "> " }
```
(The value is a function `State => String`. `State` contains everything about the build and will be discussed elsewhere.)
Alternatively, the setting can be defined in `Build.settings`:
```scala
import sbt._
import Keys._
object MyBuild extends Build {
  override lazy val settings = super.settings :+
    (shellPrompt := { s => Project.extract(s).currentProject.id + "> " })
  ...
}
```
## Project Relationships
There are three kinds of project relationships in sbt. These are described by execution, classpath, and configuration dependencies.
### Project References
When defining a dependency on another project, you provide a `ProjectReference`.
In the simplest case, this is a `Project` object. (Technically, there is an implicit conversion `Project => ProjectReference`.)
This indicates a dependency on a project within the same build.
It is possible to declare a dependency on a project in a directory separate from the current build, in a git repository, or in a project packaged into a jar and accessible via http/https.
These are referred to as external builds and projects. You can reference the root project in an external build with `RootProject`:
```scala
RootProject( file("/home/user/a-project") )
RootProject( uri("git://github.com/dragos/dupcheck.git") )
```
or a specific project within the external build can be referenced using a `ProjectRef`:
```scala
ProjectRef( uri("git://github.com/dragos/dupcheck.git"), "project-id")
```
The fragment part of the git URI can be used to specify a specific branch or tag. For example:
```scala
RootProject( uri("git://github.com/typesafehub/sbteclipse.git#v1.2") )
```
Ultimately, a `RootProject` is resolved to a `ProjectRef` once the external project is loaded.
Additionally, there are implicit conversions `URI => RootProject` and `File => RootProject` so that URIs and Files can be used directly.
External, remote builds are retrieved or checked out to a staging directory in the user's `.sbt` directory so that they can be manipulated like local builds.
Examples of using project references follow in the next sections.
When using external projects, the `sbt.boot.directory` should be set (see [[Setup|Getting Started Setup]]) so that unnecessary recompilations do not occur (see [#35]).
### Execution Dependency
If project A has an execution dependency on project B, then when you execute a task on project A, it will also be run on project B. No ordering of these tasks is implied.
An execution dependency is declared using the `aggregate` method on `Project`. For example:
```scala
lazy val root = Project(...) aggregate(sub1)
lazy val sub1 = Project(...) aggregate(sub2)
lazy val sub2 = Project(...) aggregate(ext)
lazy val ext = uri("git://github.com/dragos/dupcheck.git")
```
If 'clean' is executed on `sub2`, it will also be executed on `ext` (the locally checked out version).
If 'clean' is executed on `root`, it will also be executed on `sub1`, `sub2`, and `ext`.
Aggregation can be controlled more finely by configuring the `aggregate` setting. This setting is of type `Aggregation`:
```scala
sealed trait Aggregation
final case class Implicit(enabled: Boolean) extends Aggregation
final class Explicit(val deps: Seq[ProjectReference], val transitive: Boolean) extends Aggregation
```
This key can be set in any scope, including per-task scopes. By default, aggregation is disabled for `run`, `console-quick`, `console`, and `console-project`. Re-enabling it from the command line for the current project for `run` would look like:
```text
> set aggregate in run := true
```
(There is an implicit `Boolean => Implicit` where `true` translates to `Implicit(true)` and `false` translates to `Implicit(false)`). Similarly, aggregation can be disabled for the current project using:
```text
> set aggregate in clean := false
```
`Explicit` allows finer control over the execution dependencies and transitivity. An instance is normally constructed using `Aggregation.apply`. No new projects may be introduced here (that is, internal references have to be defined already in the Build's `projects` and externals must be a dependency in the Build definition). For example, to declare that `root/clean` aggregates `sub1/clean` and `sub2/clean` intransitively (that is, excluding `ext` even though `sub2` aggregates it):
```text
> set aggregate in clean := Aggregation(Seq(sub1, sub2), transitive = false)
```
### Classpath Dependencies
A classpath dependency declares that a project needs the full classpath of another project on its classpath.
Typically, this implies that the dependency will ensure its classpath is up-to-date, such as by fetching dependencies and recompiling modified sources.
A classpath dependency declaration consists of a project reference and an optional configuration mapping.
For example, to use project b's `compile` configuration from project a's `test` configuration:
```scala
lazy val a = Project(...) dependsOn(b % "test->compile")
lazy val b = Project(...)
```
`"test->compile"` may be shortened to `"test"` in this case. The `%` call may be omitted, in which case the mapping is `"compile->compile"` by default.
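For example, the following declarations are equivalent in pairs (`b` is assumed to be a previously defined project; the project IDs and directories are illustrative):

```scala
// "test->compile" may be shortened to "test":
lazy val a1 = Project("a1", file("dirA1")) dependsOn(b % "test->compile")
lazy val a2 = Project("a2", file("dirA2")) dependsOn(b % "test")

// Omitting the % mapping entirely defaults to "compile->compile":
lazy val a3 = Project("a3", file("dirA3")) dependsOn(b % "compile->compile")
lazy val a4 = Project("a4", file("dirA4")) dependsOn(b)
```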
A useful configuration declaration is `test->test`. This means to use a dependency's test classes on the dependent's test classpath.
Multiple declarations may be separated by a semicolon. For example, the following says to use the main classes of `b` for the compile classpath of `a` as well as the test classes of `b` for the test classpath of `a`:
```scala
lazy val a = Project(...) dependsOn(b % "compile;test->test")
lazy val b = Project(...)
```
### Configuration Dependencies
Suppose project A has a configuration dependency on project B.
If a setting is not found on project A, it will be looked up in project B.
This is one aspect of delegation and will be described in detail elsewhere.

@ -1,102 +0,0 @@
_Wiki Maintenance Note:_ This page has been _mostly_ replaced by
[[Getting Started Full Def]] and other pages. See the note at the
top of [[Full Configuration]] for details. If we can establish
(or cause to be true) that everything in here is covered
elsewhere, this page can be empty except for links to the new pages.
There are two types of file for configuring a build: a `build.sbt` file in your project's root directory, or a `Build.scala` file in your `project/` directory. The former is often referred to as a "light", "quick", or "basic" configuration and the latter as a "full" configuration. This page is about "full" configuration.
# Naming the Scala build file
`Build.scala` is the typical name for this build file but in reality it can be called anything that ends with `.scala` as it is a standard Scala source file and sbt will detect and use it regardless of its name.
# Overview of what goes in the file
The most basic form of this file defines one object which extends `sbt.Build` e.g.:
```scala
import sbt._
object AnyName extends Build {
  val anyName = Project("anyname", file("."))
  // Declarations go here
}
```
At least one `sbt.Project` must be defined. In this case, we give it an arbitrary name and say that it can be found in the root of this project. In other words, this is a build file for building the current project.
The declarations define any number of objects which can be used by sbt to determine what to build and how to build it.
Most of the time you are not telling sbt what to do; you are simply declaring the dependencies of your project and the particular settings you require. sbt then uses this information to determine how to carry out the tasks you give it when you interact with sbt on the command line. For this reason, the order of declarations tends to be unimportant.
When you define something and assign it to a val, the name of the val is often irrelevant. By defining it and making it part of an object, you let sbt interrogate the object and extract the information it requires. So, for example, the line:
```scala
val apachenet = "commons-net" % "commons-net" % "2.0"
```
defines a dependency and assigns it to the val `apachenet` but, unless you refer to that val again in the build file, the name of it is of no significance to sbt. sbt simply sees that the dependency object exists and uses it when it needs it.
# Combining "light" and "full" configuration files
It is worth noting at this stage that you can have both a `build.sbt` file and a `Build.scala` file for the same project. If you do this, sbt will append the configurations in `build.sbt` to those in the `Build.scala` file. In fact you can also have multiple ".sbt" files in your root directory and they are all appended together.
# A simple example comparing a "light" and "full" configuration of the same project
Here is a short "light" `build.sbt` file which defines a build project with a single test dependency on "scalacheck":
```scala
name := "My Project"
version := "1.0"
organization := "org.myproject"
scalaVersion := "2.9.0-1"
libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
```
Here is an equivalent "full" `Build.scala` file which defines exactly the same thing:
```scala
import sbt._
import Keys._
object MyProjectBuild extends Build {
  val mySettings = Defaults.defaultSettings ++ Seq(
    name := "My Project",
    version := "1.0",
    organization := "org.myproject",
    scalaVersion := "2.9.0-1",
    libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
  )
  val myProject = Project("MyProject", file("."), settings = mySettings)
}
```
Note that we have to explicitly declare the build and project and we have to explicitly append our settings to the default settings. All of this work is done for us when we use a "light" build file.
To understand what is really going on you may find it helpful to see this `Build.scala` without the imports and associated implicit conversions:
```scala
object MyProjectBuild extends sbt.Build {
  val mySettings = sbt.Defaults.defaultSettings ++ scala.Seq(
    sbt.Keys.name := "My Project",
    sbt.Keys.version := "1.0",
    sbt.Keys.organization := "org.myproject",
    sbt.Keys.scalaVersion := "2.9.0-1",
    sbt.Keys.libraryDependencies += sbt.toGroupID("org.scalatest").%("scalatest_2.9.0").%("1.4.1").%("test")
  )
  val myProject = sbt.Project("MyProject", new java.io.File("."), settings = mySettings)
}
```

@ -1,269 +0,0 @@
_Wiki Maintenance Note:_ This page is a dumping ground for little
bits of text, examples, and information that needs to find a new
home somewhere else on the wiki.
# Snippets of docs that need to move to another page
Temporarily change the logging level and configure how stack traces are displayed by modifying the `log-level` or `trace-level` settings:
```text
> set logLevel := Level.Warn
```
Valid `Level` values are `Debug, Info, Warn, Error`.
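The trace level can be adjusted in the same way; `traceLevel` takes an integer (the value shown here is arbitrary, for illustration):

```text
> set traceLevel := 5
```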
You can run an action for multiple versions of Scala by prefixing the action with `+`. See [[Cross Build]] for details. You can temporarily switch to another version of Scala using `++ <version>`. This version does not have to be listed in your build definition, but it does have to be available in a repository. You can also include the initial command to run after switching to that version. For example:
```text
> ++2.9.1 console-quick
...
Welcome to Scala version 2.9.1.final (Java HotSpot(TM) Server VM, Java 1.6.0).
...
scala>
...
> ++2.8.1 console-quick
...
Welcome to Scala version 2.8.1 (Java HotSpot(TM) Server VM, Java 1.6.0).
...
scala>
```
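The `+` prefix mentioned above is used the same way at the prompt. For example, to run `compile` against every Scala version listed in the build definition (a sketch):
```text
> + compile
```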
# Manual Dependency Management
Manually managing dependencies involves copying any jars that you want to use to the `lib` directory. sbt will put these jars on the classpath during compilation, testing, running, and when using the interpreter. You are responsible for adding, removing, updating, and otherwise managing the jars in this directory. No modifications to your project definition are required to use this method unless you would like to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the `unmanaged-base` setting in your project definition. For example, to use `custom_lib/`:
```scala
unmanagedBase <<= baseDirectory { base => base / "custom_lib" }
```
If you want more control and flexibility, override the `unmanaged-jars` task, which ultimately provides the manual dependencies to sbt. The default implementation is roughly:
```scala
unmanagedJars in Compile <<= baseDirectory map { base => (base ** "*.jar").classpath }
```
If you want to add jars from multiple directories in addition to the default directory, you can do:
```scala
unmanagedJars in Compile <++= baseDirectory map { base =>
val baseDirectories = (base / "libA") +++ (base / "b" / "lib") +++ (base / "libC")
val customJars = (baseDirectories ** "*.jar") +++ (base / "d" / "my.jar")
customJars.classpath
}
```
See [[Paths]] for more information on building up paths.
### Resolver.withDefaultResolvers method
To use the local and Maven Central repositories, but not the Scala Tools releases repository:
```scala
externalResolvers <<= resolvers map { rs =>
Resolver.withDefaultResolvers(rs, mavenCentral = true, scalaTools = false)
}
```
### Explicit URL
If your project requires a dependency that is not present in a repository, a
direct URL to its jar can be specified with the `from` method as follows:
```scala
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
```
The URL is only used as a fallback if the dependency cannot be found through
the configured repositories. Also, when you publish a project, a pom or
ivy.xml is created listing your dependencies; the explicit URL is not
included in this published metadata.
### Disable Transitivity
By default, sbt fetches all dependencies, transitively. (That is, it downloads
the dependencies of the dependencies you list.)
In some instances, you may find that the dependencies listed for a project
aren't necessary for it to build. Avoid fetching artifact dependencies with
`intransitive()`, as in this example:
```scala
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()
```
### Classifiers
You can specify the classifier for a dependency using the `classifier` method. For example, to get the jdk15 version of TestNG:
```scala
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
```
To obtain particular classifiers for all dependencies transitively, run the `update-classifiers` task. By default, this resolves all artifacts with the `sources` or `javadoc` classifier. Select the classifiers to obtain by configuring the `transitive-classifiers` setting. For example, to only retrieve sources:
```scala
transitiveClassifiers := Seq("sources")
```
### Extra Attributes
[Extra attributes] can be specified by passing key/value pairs to the `extra` method.
To select dependencies by extra attributes:
```scala
libraryDependencies += "org" % "name" % "rev" extra("color" -> "blue")
```
To define extra attributes on the current project:
```scala
projectID <<= projectID { id =>
id extra("color" -> "blue", "component" -> "compiler-interface")
}
```
### Inline Ivy XML
sbt additionally supports directly specifying the configurations or dependencies sections of an Ivy configuration file inline. You can mix this with inline Scala dependency and repository declarations.
For example:
```scala
ivyXML :=
<dependencies>
<dependency org="javax.mail" name="mail" rev="1.4.2">
<exclude module="activation"/>
</dependency>
</dependencies>
```
### Ivy Home Directory
By default, sbt uses the standard Ivy home directory location `${user.home}/.ivy2/`.
This can be configured machine-wide, for use by both the sbt launcher and by projects, by setting the system property `sbt.ivy.home` in the sbt startup script (described in [[Setup|Getting Started Setup]]).
For example:
```text
java -Dsbt.ivy.home=/tmp/.ivy2/ ...
```
### Checksums
sbt (through Ivy) verifies the checksums of downloaded files by default. It also publishes checksums of artifacts by default. The checksums to use are specified by the _checksums_ setting.
To disable checksum checking during update:
```scala
checksums in update := Nil
```
To disable checksum creation during artifact publishing:
```scala
checksums in publishLocal := Nil
checksums in publish := Nil
```
The default value is:
```scala
checksums := Seq("sha1", "md5")
```
### Publishing
Finally, see [[Publishing]] for how to publish your project.
## Maven/Ivy
For this method, create the configuration files as you would for Maven (`pom.xml`) or Ivy (`ivy.xml` and optionally `ivysettings.xml`).
External configuration is selected by using one of the following expressions.
### Ivy settings (resolver configuration)
```scala
externalIvySettings()
```
or
```scala
externalIvySettings(baseDirectory(_ / "custom-settings-name.xml"))
```
### Ivy file (dependency configuration)
```scala
externalIvyFile()
```
or
```scala
externalIvyFile(baseDirectory(_ / "custom-name.xml"))
```
Because Ivy files specify their own configurations, sbt needs to know which configurations to use for the compile, runtime, and test classpaths. For example, to specify that the Compile classpath should use the 'default' configuration:
```scala
classpathConfiguration in Compile := config("default")
```
### Maven pom (dependencies only)
```scala
externalPom()
```
or
```scala
externalPom(baseDirectory(_ / "custom-name.xml"))
```
### Full Ivy Example
For example, a `build.sbt` using external Ivy files might look like:
```scala
externalIvySettings()
externalIvyFile( baseDirectory { base => base / "ivyA.xml"} )
classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime
```
### Known limitations
Maven support is dependent on Ivy's support for Maven POMs.
Known issues with this support:
* Specifying `relativePath` in the `parent` section of a POM will produce an error.
* Ivy ignores repositories specified in the POM. A workaround is to specify repositories inline or in an Ivy `ivysettings.xml` file.
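As a sketch of the inline workaround, a repository that the POM would otherwise supply can be repeated in the build definition (the name and URL below are placeholders):
```scala
resolvers += "Example Releases" at "http://repo.example.org/releases"
```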
### Configuration dependencies
The GSG on multi-project builds doesn't describe delegation among
configurations. The FAQ entry about porting multi-project build
from 0.7 mentions "configuration dependencies" but there's nothing
really to link to that explains them.
### These should be FAQs (maybe just pointing to topic pages)
* Run your program in its own VM
* Run your program with a particular version of Scala
* Run your webapp within an embedded jetty server
* Create a WAR that can be deployed to an external app server

@ -1,324 +0,0 @@
[light definition]: https://github.com/harrah/xsbt/wiki/Basic-Configuration
[full definition]: https://github.com/harrah/xsbt/wiki/Full-Configuration
[ScopedSetting]: http://harrah.github.com/xsbt/latest/api/sbt/ScopedSetting.html
[Scope]: http://harrah.github.com/xsbt/latest/api/sbt/Scope$.html
[Initialize]: http://harrah.github.com/xsbt/latest/api/sbt/Init$Initialize.html
[SettingKey]: http://harrah.github.com/xsbt/latest/api/sbt/SettingKey.html
[Keys]: http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html "Keys.scala"
[InputKey]: http://harrah.github.com/xsbt/latest/api/sbt/InputKey.html
[TaskKey]: http://harrah.github.com/xsbt/latest/api/sbt/TaskKey.html
[Append]: http://harrah.github.com/xsbt/latest/api/sbt/Append$.html
_Wiki Maintenance Note:_ This page has been partly replaced by [[Getting Started Basic Def]] and
[[Getting Started More About Settings]]. It has some obsolete
terminology:
- we now avoid referring to build definition as "configuration"
to avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full
configuration, in favor of ".sbt build definition files" and
".scala build definition files"
However, it may still be worth combing this page for examples or
points that are not made in new pages. We may want to add FAQs or
topic pages to supplement the Getting Started pages with some of
that information. After doing so, this page could simply be a
redirect (delete the content, link to the new pages about build
definition).
## Introduction
A build definition is written in Scala.
There are two types of definitions: light and full.
A [light definition] is a quick way of configuring a build, consisting of a list of Scala expressions describing project settings.
A [full definition] is made up of one or more Scala source files that describe relationships between projects and introduce new configurations and settings.
This page introduces the `Setting` type, which is used by light and full definitions for general configuration.
### Introductory Examples
Basic examples of each type of definition are shown below for the purpose of getting an idea of what they look like, not for full comprehension of details, which are described at [light definition] and [full definition].
`<base>/build.sbt` (light)
```scala
name := "My Project"
libraryDependencies += "junit" % "junit" % "4.8" % "test"
```
`<base>/project/Build.scala` (full)
```scala
import sbt._
import Keys._
object MyBuild extends Build
{
lazy val root = Project("root", file(".")) dependsOn(sub)
lazy val sub = Project("sub", file("sub")) settings(
name := "My Project",
libraryDependencies += "junit" % "junit" % "4.8" % "test"
)
}
```
## Important Settings Background
The fundamental type of a configurable in sbt is a `Setting[T]`.
Each line in the `build.sbt` example above is of this type.
The arguments to the `settings` method in the `Build.scala` example are of type `Setting[T]`.
Specifically, the `name` setting has type `Setting[String]` and the `libraryDependencies` setting has type `Setting[Seq[ModuleID]]`, where `ModuleID` represents a dependency.
Throughout the documentation, many examples show a setting, such as:
```scala
libraryDependencies += "junit" % "junit" % "4.8" % "test"
```
This setting expression either goes in a [light definition] `(build.sbt)` as is or in the `settings` of a `Project` instance in a [full definition] `(Build.scala)` as shown in the example.
This is an important point to understanding the context of examples in the documentation.
(That is, you now know where to copy and paste examples.)
A `Setting[T]` describes how to initialize a setting of type `T`.
The settings shown in the examples are expressions, not statements.
In particular, there is no hidden mutable map that is being modified.
Each `Setting[T]` is a value that describes an update to a map.
The actual map is rarely directly referenced by user code.
It is not the final map that is usually important, but the operations on the map.
To emphasize this, the setting in the following `Build.scala` fragment *is ignored* because it is a mere value: to take effect, it needs to be included in the `settings` of a `Project`.
(Unfortunately, Scala will discard non-Unit values to get Unit, which is why there is no compile error.)
```scala
object Bad extends Build {
libraryDependencies += "junit" % "junit" % "4.8" % "test"
}
```
```scala
object Good extends Build
{
lazy val root = Project("root", file(".")) settings(
libraryDependencies += "junit" % "junit" % "4.8" % "test"
)
}
```
## Declaring a Setting
There is fundamentally one type of initialization, represented by the `<<=` method.
The other initialization methods `:=`, `+=`, `++=`, `<+=`, `<++=`, and `~=` are convenience methods that can be defined in terms of `<<=`.
The motivation behind the method names is:
* All methods end with `=` to obtain the lowest possible infix precedence.
* A method starting with `<` indicates that the initialization uses other settings.
* A single `+` means a single value is expected and will be appended to the current sequence.
* `++` means a `Seq[T]` is expected. The sequence will be appended to the current sequence.
The following sections include descriptions and examples of each initialization method.
The descriptions use "will initialize" or "will append" to emphasize that they construct a value describing an update and do not mutate anything.
Each setting may be directly included in a light configuration (build.sbt), appropriately separated by blank lines.
For a full configuration (Build.scala), the setting must go in a settings Seq as described in the previous section.
Information about the types of the left and right hand sides of the methods follows this section.
### :=
`:=` is used to define a setting that overwrites any previous value without referring to other settings.
For example, the following defines a setting that will set _name_ to "My Project" regardless of whether _name_ has already been initialized.
```scala
name := "My Project"
```
No other settings are used. The value assigned is just a constant.
### += and ++=
`+=` is used to define a setting that will append a single value to the current sequence without referring to other settings.
For example, the following defines a setting that will append a JUnit dependency to _libraryDependencies_.
No other settings are referenced.
```scala
libraryDependencies += "junit" % "junit" % "4.8" % "test"
```
The related method `++=` appends a sequence to the current sequence, also without using other settings.
For example, the following defines a setting that will add dependencies on ScalaCheck and specs to the current list of dependencies.
Because it will append a `Seq`, it uses `++=` instead of `+=`.
```scala
libraryDependencies ++= Seq(
"org.scala-tools.testing" %% "scalacheck" % "1.9" % "test",
"org.scala-tools.testing" %% "specs" % "1.6.8" % "test"
)
```
The types involved in `+=` and `++=` are constrained by the existence of an implicit parameter of type `Append.Value[A, B]` in the case of `+=` or `Append.Values[A, B]` in the case of `++=`.
Here, `B` is the type of the value being appended and `A` is the type of the setting that the value is being appended to.
See [Append] for the provided instances.
### ~=
`~=` is used to transform the current value of a setting.
For example, the following defines a setting that will remove `-Y` compiler options from the current list of compiler options.
```scala
scalacOptions in Compile ~= { (options: Seq[String]) =>
options filterNot ( _ startsWith "-Y" )
}
```
The earlier declaration of JUnit as a library dependency using `+=` could also be written as:
```scala
libraryDependencies ~= { (deps: Seq[ModuleID]) =>
deps :+ ("junit" % "junit" % "4.8" % "test")
}
```
### <<=
The most general method is `<<=`.
All other methods can be implemented in terms of `<<=`.
`<<=` defines a setting using other settings, possibly including the previous value of the setting being defined.
For example, declaring JUnit as a dependency using `<<=` would look like:
```scala
libraryDependencies <<= libraryDependencies apply { (deps: Seq[ModuleID]) =>
// Note that :+ is a method on Seq that appends a single value
deps :+ ("junit" % "junit" % "4.8" % "test")
}
```
This defines a setting that will apply the provided function to the previous value of _libraryDependencies_.
`apply` and `Seq[ModuleID]` are explicit for demonstration only and may be omitted.
### <+= and <++=
The `<+=` method is a hybrid of the `+=` and `<<=` methods.
Similarly, `<++=` is a hybrid of the `++=` and `<<=` methods.
These methods are convenience methods for using other settings to append to the current value of a setting.
For example, the following will add a dependency on the Scala compiler to the current list of dependencies.
Because the _scalaVersion_ setting is used, the method is `<+=` instead of `+=`.
```scala
libraryDependencies <+= scalaVersion( "org.scala-lang" % "scala-compiler" % _ )
```
This next example adds a dependency on the Scala compiler to the current list of dependencies.
Because another setting (_scalaVersion_) is used and a `Seq` is appended, the method is `<++=`.
```scala
libraryDependencies <++= scalaVersion { sv =>
("org.scala-lang" % "scala-compiler" % sv) ::
("org.scala-lang" % "scala-swing" % sv) ::
Nil
}
```
The types involved in `<+=` and `<++=`, like `+=` and `++=`, are constrained by the existence of an implicit parameter of type `Append.Value[A, B]` in the case of `<+=` or `Append.Values[A, B]` in the case of `<++=`.
Here, `B` is the type of the value being appended and `A` is the type of the setting that the value is being appended to.
See [Append] for the provided instances.
## Setting types
This section provides information about the types of the left and right-hand sides of the initialization methods. It is currently incomplete.
### Setting Keys
The left hand side of a setting definition is of type [ScopedSetting].
This type has two parts: a key (of type [SettingKey]) and a scope (of type [Scope]).
An unspecified scope is like using `this` to refer to the current context.
The previous examples on this page have not defined an explicit scope. See [[Inspecting Settings]] for details on the axes that make up scopes.
The target (the value on the left) of a method like `:=` identifies one of the main constructs in sbt: a setting, a task, or an input task.
It is not an actual setting or task, but a key representing a setting or task.
A setting is a value assigned when a project is loaded.
A task is a unit of work that is run on-demand after a project is loaded and produces a value.
An input task, previously known as a method task in sbt 0.7 and earlier, accepts an input string and produces a task to be run.
(The renaming is because it can accept arbitrary input in 0.10+ and not just a space-delimited sequence of arguments like in 0.7.)
A setting key has type [SettingKey], a task key has type [TaskKey], and an input task has type [InputKey].
The remainder of this section only discusses settings.
See [[Tasks]] and [[Input Tasks]] for details on the other types (those pages assume an understanding of this page).
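For illustration only (the keys below are hypothetical, not part of sbt), a custom key of each kind is declared by supplying a name and a description:
```scala
import sbt._

// A setting key: its value is computed once, when the project is loaded.
val greeting = SettingKey[String]("greeting", "The message to display.")

// A task key: its body runs on demand, producing a value each time it is invoked.
val sayHello = TaskKey[Unit]("say-hello", "Prints the greeting.")

// An input task key: parses user input and produces a task to run.
val sayTo = InputKey[Unit]("say-to", "Greets the named person.")
```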
To construct a [ScopedSetting], select the key and then scope it using the `in` method (see the [ScopedSetting] for API details).
For example, the setting for compiler options for the test sources is referenced using the _scalacOptions_ key and the `Test` configuration in the current project.
```scala
val ref: ScopedSetting[Seq[String]] = scalacOptions in Test
```
The current project doesn't need to be explicitly specified, since that is the default in most cases.
Some settings are specific to a task, in which case the task should be specified as part of the scope as well.
For example, the compiler options used for the _console_ task for test sources is referenced like:
```scala
val ref: ScopedSetting[Seq[String]] = scalacOptions in Test in console
```
In these examples, the type of the setting reference key is given explicitly and the key is assigned to a value to emphasize that it is a normal (immutable) Scala value and can be manipulated and passed around as such.
### Computing the value for a setting
The right hand side of a setting definition varies by the initialization method used.
In the case of `:=`, `+=`, `++=`, and `~=`, the type of the argument is straightforward (see the [ScopedSetting] API).
For `<<=`, `<+=`, and `<++=`, the type is `Initialize[T]` (for `<<=` and `<+=`) or `Initialize[Seq[T]]` (for `<++=`).
This section discusses the [Initialize] type.
A value of type `Initialize[T]` represents a computation that takes the values of other settings as inputs.
For example, in the following setting, the argument to `<<=` is of type `Initialize[File]`:
```scala
scalaSource in Compile <<= baseDirectory {
(base: File) => base / "src"
}
```
This example can be written more explicitly as:
```scala
{
val key: ScopedSetting[File] = scalaSource.in(Compile)
val init: Initialize[File] = baseDirectory.apply( (base: File) => base / "src" )
key.<<=(init)
}
```
To construct a value of type `Initialize`, construct a tuple of up to nine input `ScopedSetting`s.
Then, define the function that will compute the value of the setting given the values for these input settings.
```scala
val path: Initialize[File] =
(baseDirectory, name, version).apply( (base: File, n: String, v: String) =>
base / (n + "-" + v + ".jar")
)
```
This example takes the base directory, project name, and project version as inputs.
The keys for these settings are defined in [Keys], along with all other built-in keys.
The argument to the `apply` method is a function that takes the values of those settings and computes a new value.
In this case, that value is the path of a jar.
### Initialize[Task[T]]
To initialize tasks, the procedure is similar.
There are a few differences.
First, the inputs are of type [ScopedTaskable].
This means that either settings ([ScopedSetting]) or tasks ([ScopedTask]) may be used as the input to a task.
Second, the name of the method used is `map` instead of `apply` and the resulting value is of type `Initialize[Task[T]]`.
In the following example, the inputs are the [[report|Update Report]] produced by the _update_ task and the context _configuration_.
The function computes the locations of the dependencies for that configuration.
```scala
val mainDeps: Initialize[Task[Seq[File]]] =
(update, configuration).map( (report: UpdateReport, config: Configuration) =>
report.select(configuration = config.name)
)
```
As before, _update_ and _configuration_ are defined in [Keys].
_update_ is of type `TaskKey[UpdateReport]` and _configuration_ is of type `SettingKey[Configuration]`.

@ -1,55 +0,0 @@
# Advanced Command Example
This is an advanced example showing some of the power of the new settings system. It shows how to temporarily modify all declared dependencies in the build, regardless of where they are defined. It directly operates on the final `Seq[Setting[_]]` produced from every setting involved in the build.
The modifications are applied by running _canonicalize_. A _reload_ or using _set_ reverts the modifications, requiring _canonicalize_ to be run again.
This particular example shows how to transform all declared dependencies on ScalaCheck to use version 1.8. As an exercise, you might try transforming other dependencies, the repositories used, or the scalac options used. It is possible to add or remove settings as well.
This kind of transformation is possible directly on the settings of a `Project`, but that would not include settings automatically added from plugins or `build.sbt` files. This example instead operates unconditionally on all settings in all projects in all builds, including external builds.
```scala
import sbt._
import Keys._
object Canon extends Plugin
{
// Registers the canonicalize command in every project
override def settings = Seq(commands += canonicalize)
// Define the command. This takes the existing settings (including any session settings)
// and applies 'f' to each Setting[_]
def canonicalize = Command.command("canonicalize") { (state: State) =>
val extracted = Project.extract(state)
import extracted._
val transformed = session.mergeSettings map ( s => f(s) )
val newStructure = Load.reapply(transformed, structure)
Project.setProject(session, newStructure, state)
}
// Transforms a Setting[_].
def f(s: Setting[_]): Setting[_] = s.key.key match {
// transform all settings that modify libraryDependencies
case Keys.libraryDependencies.key =>
// hey scalac. T == Seq[ModuleID]
s.asInstanceOf[Setting[Seq[ModuleID]]].mapInit(mapLibraryDependencies)
// preserve other settings
case _ => s
}
// This must be idempotent because it gets applied after every transformation.
// That is, if the user does:
// libraryDependencies += a
// libraryDependencies += b
// then this method will be called for Seq(a) and Seq(a,b)
def mapLibraryDependencies(key: ScopedKey[Seq[ModuleID]], value: Seq[ModuleID]): Seq[ModuleID] =
value map mapSingle
// This is the fundamental transformation.
// Here we map all declared ScalaCheck dependencies to be version 1.8
def mapSingle(module: ModuleID): ModuleID =
if(module.name == "scalacheck")
module.copy(revision = "1.8")
else
module
}
```

@ -1,67 +0,0 @@
## Advanced Configurations Example
This is an example [[full build definition|Full Configuration]] that demonstrates using Ivy configurations to group dependencies.
The `utils` module provides utilities for other modules. It uses Ivy configurations to
group dependencies so that a dependent project doesn't have to pull in all dependencies
if it only uses a subset of functionality. This can be an alternative to having multiple
utilities modules (and consequently, multiple utilities jars).
In this example, consider a `utils` project that provides utilities related to both Scalate and Saxon.
It therefore needs both Scalate and Saxon on the compilation classpath and a project that uses
all of the functionality of 'utils' will need these dependencies as well.
However, project `a` only needs the utilities related to Scalate, so it doesn't need Saxon.
By depending only on the `scalate` configuration of `utils`, it only gets the Scalate-related dependencies.
```scala
import sbt._
import Keys._
object B extends Build
{
/********** Projects ************/
// An example project that only uses the Scalate utilities.
lazy val a = Project("a", file("a")) dependsOn(utils % "compile->scalate")
// An example project that uses the Scalate and Saxon utilities.
// For the configurations defined here, this is equivalent to doing dependsOn(utils),
// but if there were more configurations, it would select only the Scalate and Saxon
// dependencies.
lazy val b = Project("b", file("b")) dependsOn(utils % "compile->scalate,saxon")
// Defines the utilities project
lazy val utils = Project("utils", file("utils")) settings(utilsSettings : _*)
def utilsSettings: Seq[Setting[_]] =
// Add the src/common/scala/ compilation configuration.
inConfig(Common)(Defaults.configSettings) ++
// Publish the common artifact
addArtifact(artifact in (Common, packageBin), packageBin in Common) ++ Seq(
// We want our Common sources to have access to all of the dependencies on the classpaths
// for compile and test, but when depended on, it should only require dependencies in 'common'
classpathConfiguration in Common := CustomCompile,
// Modify the default Ivy configurations.
// 'overrideConfigs' ensures that Compile is replaced by CustomCompile
ivyConfigurations ~= overrideConfigs(Scalate, Saxon, Common, CustomCompile),
// Put all dependencies without an explicit configuration into Common (optional)
defaultConfiguration := Some(Common),
// Declare dependencies in the appropriate configurations
libraryDependencies ++= Seq(
"org.fusesource.scalate" % "scalate-core" % "1.5.0" % "scalate",
"org.squeryl" %% "squeryl" % "0.9.4" % "scalate",
"net.sf.saxon" % "saxon" % "8.7" % "saxon"
)
)
/********* Configurations *******/
lazy val Scalate = config("scalate") extend(Common) describedAs("Dependencies for using Scalate utilities.")
lazy val Common = config("common") describedAs("Dependencies required in all configurations.")
lazy val Saxon = config("saxon") extend(Common) describedAs("Dependencies for using Saxon utilities.")
// Define a customized compile configuration that includes
// dependencies defined in our other custom configurations
lazy val CustomCompile = config("compile") extend(Saxon, Common, Scalate)
}
```

@ -1,10 +0,0 @@
# Examples
This section of the wiki has example sbt build definitions and
code. Contributions are welcome!
You may want to read the [[Getting Started Guide|Getting Started Welcome]] as a
foundation for understanding the examples.
See the sidebar on the right for an index of available examples.

@ -1,149 +0,0 @@
## Full Configuration Example
Full configurations are written in Scala, so this example would be placed in `project/Build.scala`, not `build.sbt`. The build can be split into multiple files.
```scala
import sbt._
import Keys._
object BuildSettings {
val buildOrganization = "odp"
val buildVersion = "2.0.29"
val buildScalaVersion = "2.9.0-1"
val buildSettings = Defaults.defaultSettings ++ Seq (
organization := buildOrganization,
version := buildVersion,
scalaVersion := buildScalaVersion,
shellPrompt := ShellPrompt.buildShellPrompt
)
}
// Shell prompt which shows the current project,
// git branch and build version
object ShellPrompt {
object devnull extends ProcessLogger {
def info (s: => String) {}
def error (s: => String) { }
def buffer[T] (f: => T): T = f
}
def currBranch = (
("git status -sb" lines_! devnull headOption)
getOrElse "-" stripPrefix "## "
)
val buildShellPrompt = {
(state: State) => {
val currProject = Project.extract (state).currentProject.id
"%s:%s:%s> ".format (
currProject, currBranch, BuildSettings.buildVersion
)
}
}
}
object Resolvers {
val sunrepo = "Sun Maven2 Repo" at "http://download.java.net/maven/2"
val sunrepoGF = "Sun GF Maven2 Repo" at "http://download.java.net/maven/glassfish"
val oraclerepo = "Oracle Maven2 Repo" at "http://download.oracle.com/maven"
val oracleResolvers = Seq (sunrepo, sunrepoGF, oraclerepo)
}
object Dependencies {
val logbackVer = "0.9.16"
val grizzlyVer = "1.9.19"
val logbackcore = "ch.qos.logback" % "logback-core" % logbackVer
val logbackclassic = "ch.qos.logback" % "logback-classic" % logbackVer
val jacksonjson = "org.codehaus.jackson" % "jackson-core-lgpl" % "1.7.2"
val grizzlyframwork = "com.sun.grizzly" % "grizzly-framework" % grizzlyVer
val grizzlyhttp = "com.sun.grizzly" % "grizzly-http" % grizzlyVer
val grizzlyrcm = "com.sun.grizzly" % "grizzly-rcm" % grizzlyVer
val grizzlyutils = "com.sun.grizzly" % "grizzly-utils" % grizzlyVer
val grizzlyportunif = "com.sun.grizzly" % "grizzly-portunif" % grizzlyVer
val sleepycat = "com.sleepycat" % "je" % "4.0.92"
val apachenet = "commons-net" % "commons-net" % "2.0"
val apachecodec = "commons-codec" % "commons-codec" % "1.4"
val scalatest = "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
}
object CDAP2Build extends Build {
import Resolvers._
import Dependencies._
import BuildSettings._
// Sub-project specific dependencies
val commonDeps = Seq (
logbackcore,
logbackclassic,
jacksonjson,
scalatest
)
val serverDeps = Seq (
grizzlyframwork,
grizzlyhttp,
grizzlyrcm,
grizzlyutils,
grizzlyportunif,
sleepycat,
scalatest
)
val pricingDeps = Seq (apachenet, apachecodec, scalatest)
lazy val cdap2 = Project (
"cdap2",
file ("."),
settings = buildSettings
) aggregate (common, server, compact, pricing, pricing_service)
lazy val common = Project (
"common",
file ("cdap2-common"),
settings = buildSettings ++ Seq (libraryDependencies ++= commonDeps)
)
lazy val server = Project (
"server",
file ("cdap2-server"),
settings = buildSettings ++ Seq (resolvers := oracleResolvers,
libraryDependencies ++= serverDeps)
) dependsOn (common)
lazy val pricing = Project (
"pricing",
file ("cdap2-pricing"),
settings = buildSettings ++ Seq (libraryDependencies ++= pricingDeps)
) dependsOn (common, compact, server)
lazy val pricing_service = Project (
"pricing-service",
file ("cdap2-pricing-service"),
settings = buildSettings
) dependsOn (pricing, server)
lazy val compact = Project (
"compact",
file ("compact-hashmap"),
settings = buildSettings
)
}
```
## External Builds
* [Mojolly Backchat Build](http://gist.github.com/1021873)
* [Scalaz Build](https://github.com/scalaz/scalaz/blob/master/project/ScalazBuild.scala)
* Source Code Generation
* Generates Scaladoc and Scala X-Ray HTML Sources, with a unified view of source from all sub-projects
* Builds an archive with the artifacts from all modules
* "Roll your own" approach to appending the Scala version to the module id of dependencies to allow using snapshot releases of Scala.

@ -1,192 +0,0 @@
[sbt.SettingDefinition]: http://harrah.github.com/xsbt/latest/api/sbt/Init$SettingsDefinition.html
Listed here are some examples of settings (each setting is
independent). See [[.sbt build definition|Getting Started Basic Def]] for details.
_Please note_ that blank lines are used to separate individual settings. Avoid using blank lines within a single multiline expression. As explained in [[.sbt build definition|Getting Started Basic Def]], each setting is otherwise a normal Scala expression with expected type [sbt.SettingDefinition].
```scala
// set the name of the project
name := "My Project"
version := "1.0"
organization := "org.myproject"
// set the Scala version used for the project
scalaVersion := "2.9.0-SNAPSHOT"
// set the main Scala source directory to be <base>/src
scalaSource in Compile <<= baseDirectory(_ / "src")
// set the Scala test source directory to be <base>/test
scalaSource in Test <<= baseDirectory(_ / "test")
// add a test dependency on ScalaCheck
libraryDependencies += "org.scala-tools.testing" %% "scalacheck" % "1.8" % "test"
// add compile dependencies on some dispatch modules
libraryDependencies ++= Seq(
"net.databinder" %% "dispatch-meetup" % "0.7.8",
"net.databinder" %% "dispatch-twitter" % "0.7.8"
)
// Set a dependency based partially on a val.
{
val libosmVersion = "2.5.2-RC1"
libraryDependencies += "net.sf.travelingsales" % "osmlib" % libosmVersion from "http://downloads.sourceforge.net/project/travelingsales/libosm/"+libosmVersion+"/libosm-"+libosmVersion+".jar"
}
// reduce the maximum number of errors shown by the Scala compiler
maxErrors := 20
// increase the time between polling for file changes when using continuous execution
pollInterval := 1000
// append several options to the list of options passed to the Java compiler
javacOptions ++= Seq("-source", "1.5", "-target", "1.5")
// append -deprecation to the options passed to the Scala compiler
scalacOptions += "-deprecation"
// define the statements initially evaluated when entering 'console', 'console-quick', or 'console-project'
initialCommands := """
import System.{currentTimeMillis => now}
def time[T](f: => T): T = {
val start = now
try { f } finally { println("Elapsed: " + (now - start)/1000.0 + " s") }
}
"""
// set the initial commands when entering 'console' or 'console-quick', but not 'console-project'
initialCommands in console := "import myproject._"
// set the main class for packaging the main jar
// 'run' will still auto-detect and prompt
// change Compile to Test to set it for the test jar
mainClass in (Compile, packageBin) := Some("myproject.MyMain")
// set the main class for the main 'run' task
// change Compile to Test to set it for 'test:run'
mainClass in (Compile, run) := Some("myproject.MyMain")
// add <base>/input to the files that '~' triggers on
watchSources <+= baseDirectory map { _ / "input" }
// add a maven-style repository
resolvers += "name" at "url"
// add a sequence of maven-style repositories
resolvers ++= Seq("name" at "url")
// define the repository to publish to
publishTo := Some("name" at "url")
// set Ivy logging to be at the highest level
ivyLoggingLevel := UpdateLogging.Full
// disable updating dynamic revisions (including -SNAPSHOT versions)
offline := true
// set the prompt (for this build) to include the project id.
shellPrompt in ThisBuild := { state => Project.extract(state).currentRef.project + "> " }
// set the prompt (for the current project) to include the username
shellPrompt := { state => System.getProperty("user.name") + "> " }
// disable printing timing information, but still print [success]
showTiming := false
// disable printing a message indicating the success or failure of running a task
showSuccess := false
// change the format used for printing task completion time
timingFormat := {
import java.text.DateFormat
DateFormat.getDateTimeInstance(DateFormat.SHORT, DateFormat.SHORT)
}
// disable using the Scala version in output paths and artifacts
crossPaths := false
// fork a new JVM for 'run' and 'test:run'
fork := true
// fork a new JVM for 'test:run', but not 'run'
fork in Test := true
// add a JVM option to use when forking a JVM for 'run'
javaOptions += "-Xmx2G"
// only use a single thread for building
parallelExecution := false
// Execute tests in the current project serially
// Tests from other projects may still run concurrently.
parallelExecution in Test := false
// set the location of the JDK to use for compiling Java code.
// if 'fork' is true, this is used for 'run' as well
javaHome := Some(file("/usr/lib/jvm/sun-jdk-1.6"))
// Use Scala from a directory on the filesystem instead of retrieving from a repository
scalaHome := Some(file("/home/user/scala/trunk/"))
// don't aggregate clean (See FullConfiguration for aggregation details)
aggregate in clean := false
// only show warnings and errors on the screen for compilations.
// this applies to both test:compile and compile and is Info by default
logLevel in compile := Level.Warn
// only show warnings and errors on the screen for all tasks (the default is Info)
// individual tasks can then be more verbose using the previous setting
logLevel := Level.Warn
// only store messages at info and above (the default is Debug)
// this is the logging level for replaying logging with 'last'
persistLogLevel := Level.Debug
// only show 10 lines of stack traces
traceLevel := 10
// only show stack traces up to the first sbt stack frame
traceLevel := 0
// add SWT to the unmanaged classpath
unmanagedJars in Compile += Attributed.blank(file("/usr/share/java/swt.jar"))
// publish test jar, sources, and docs
publishArtifact in Test := true
// disable publishing of main docs
publishArtifact in (Compile, packageDoc) := false
// change the classifier for the docs artifact
artifactClassifier in packageDoc := Some("doc")
// Copy all managed dependencies to <build-root>/lib_managed/
// This is essentially a project-local cache and is different
// from the lib_managed/ in sbt 0.7.x. There is only one
// lib_managed/ in the build root (not per-project).
retrieveManaged := true
/* Specify a file containing credentials for publishing. The format is:
realm=Sonatype Nexus Repository Manager
host=nexus.scala-tools.org
user=admin
password=admin123
*/
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
// Directly specify credentials for publishing.
credentials += Credentials("Sonatype Nexus Repository Manager", "nexus.scala-tools.org", "admin", "admin123")
// Exclude transitive dependencies, e.g., include log4j without including logging via jdmk, jmx, or jms.
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" excludeAll(
ExclusionRule(organization = "com.sun.jdmk"),
ExclusionRule(organization = "com.sun.jmx"),
ExclusionRule(organization = "javax.jms")
)
```

@ -1,14 +0,0 @@
* [[Home]] - Overview of sbt
* [[Getting Started Guide|Getting Started Welcome]] - START HERE
* [[FAQ]] - Questions, answered.
* [[Index]] - Find types, values, and methods
* [[Community]] - source, forums, releases
* [[Examples]]
* [[.scala examples|Full Configuration Example]]
* [[.sbt examples|Quick Configuration Examples]]
* [[Simple project using plugins]]
* [[Advanced Command Example]]
* [[Advanced Configurations Example]]
* [[Community Examples]]
* [[Detailed Topics]] - deep dive docs
* [[Extending sbt|Extending]] - internals docs

@ -1,256 +0,0 @@
[BuildDependencies]: http://harrah.github.com/xsbt/latest/api/sbt/BuildDependencies.html
[TransformInfo]: http://harrah.github.com/xsbt/latest/api/index.html#sbt.BuildLoader$$TransformInfo
[ResolveInfo]: http://harrah.github.com/xsbt/latest/api/index.html#sbt.BuildLoader$$ResolveInfo
[BuildLoader]: http://harrah.github.com/xsbt/latest/api/sbt/BuildLoader$.html
[BuildInfo]: http://harrah.github.com/xsbt/latest/api/sbt/BuildLoader$$BuildInfo.html
[BuildUnit]: http://harrah.github.com/xsbt/latest/api/index.html#sbt.Load$$BuildUnit
[ProjectRef]: http://harrah.github.com/xsbt/latest/api/sbt/ProjectRef.html
[ClasspathDep]: http://harrah.github.com/xsbt/latest/api/sbt/ClasspathDep.html
# Build Loaders
Build loaders are the means by which sbt resolves, builds, and transforms build definitions.
Each aspect of loading may be customized for special applications.
Customizations are specified by overriding the _buildLoaders_ methods of your build definition's Build object.
These customizations apply to external projects loaded by the build, but not the (already loaded) Build in which they are defined.
Also documented on this page is how to manipulate inter-project dependencies from a setting.
## Custom Resolver
The first type of customization introduces a new resolver.
A resolver provides support for taking a build URI and retrieving it to a local directory on the filesystem.
For example, the built-in resolver can check out a build using git based on a git URI, use a build in an existing local directory, or download and extract a build packaged in a jar file.
A resolver has type:
```scala
ResolveInfo => Option[() => File]
```
The resolver should return None if it cannot handle the URI or Some containing a function that will retrieve the build.
The ResolveInfo provides a staging directory that can be used or the resolver can determine its own target directory.
Whichever is used, it should be returned by the loading function.
A resolver is registered by passing it to _BuildLoader.resolve_ and overriding _Build.buildLoaders_ with the result:
```scala
...
object Demo extends Build {
...
override def buildLoaders =
BuildLoader.resolve(demoResolver) ::
Nil
def demoResolver: BuildLoader.ResolveInfo => Option[() => File] = ...
}
```
### API Documentation
Relevant API documentation for custom resolvers:
* [ResolveInfo]
* [BuildLoader]
### Full Example
```scala
import sbt._
import Keys._
object Demo extends Build
{
// Define a project that depends on an external project with a custom URI
lazy val root = Project("root", file(".")).dependsOn(
uri("demo:a")
)
// Register the custom resolver
override def buildLoaders =
BuildLoader.resolve(demoResolver) ::
Nil
// Define the custom resolver, which handles the 'demo' scheme.
// The resolver's job is to produce a directory containing the project to load.
// A subdirectory of info.staging can be used to create new local
// directories, such as when doing 'git clone ...'
def demoResolver(info: BuildLoader.ResolveInfo): Option[() => File] =
if(info.uri.getScheme != "demo")
None
else
{
// Use a subdirectory of the staging directory for the new local build.
// The subdirectory name is derived from a hash of the URI,
// and so identical URIs will resolve to the same directory (as desired).
val base = RetrieveUnit.temporary(info.staging, info.uri)
// Return a closure that will do the actual resolution when requested.
Some(() => resolveDemo(base, info.uri.getSchemeSpecificPart))
}
// Construct a sample project on the fly with the name specified in the URI.
def resolveDemo(base: File, ssp: String): File =
{
// Only create the project if it hasn't already been created.
if(!base.exists)
IO.write(base / "build.sbt", template.format(ssp))
base
}
def template = """
name := "%s"
version := "1.0"
"""
}
```
## Custom Builder
Once a project is resolved, it needs to be built and then presented to sbt as an instance of `sbt.BuildUnit`.
A custom builder has type:
```scala
BuildInfo => Option[() => BuildUnit]
```
A builder returns None if it does not want to handle the build identified by the `BuildInfo`.
Otherwise, it provides a function that will load the build when evaluated.
Register a builder by passing it to _BuildLoader.build_ and overriding _Build.buildLoaders_ with the result:
```scala
...
object Demo extends Build {
...
override def buildLoaders =
BuildLoader.build(demoBuilder) ::
Nil
def demoBuilder: BuildLoader.BuildInfo => Option[() => BuildUnit] = ...
}
```
### API Documentation
Relevant API documentation for custom builders:
* [BuildInfo]
* [BuildLoader]
* [BuildUnit]
### Example
This example demonstrates the structure of how a custom builder could read configuration from a pom.xml instead of the standard .sbt files and project/ directory.
```scala
... imports ...
object Demo extends Build
{
lazy val root = Project("root", file(".")) dependsOn( file("basic-pom-project") )
override def buildLoaders =
build(demoBuilder) ::
Nil
def demoBuilder: BuildInfo => Option[() => BuildUnit] = info =>
if(pomFile(info).exists)
Some(() => pomBuild(info))
else
None
def pomBuild(info: BuildInfo): BuildUnit =
{
val pom = pomFile(info)
val model = readPom(pom)
val n = StringUtilities.normalize(model.getName)
val base = Option(model.getProjectDirectory) getOrElse info.base
val root = Project(n, base) settings( pomSettings(model) : _*)
val build = new Build { override def projects = Seq(root) }
val loader = this.getClass.getClassLoader
val definitions = new LoadedDefinitions(info.base, Nil, loader, build :: Nil, Nil)
val plugins = new LoadedPlugins(info.base / "project", Nil, loader, Nil, Nil)
new BuildUnit(info.uri, info.base, definitions, plugins)
}
def readPom(file: File): Model = ...
def pomSettings(m: Model): Seq[Setting[_]] = ...
def pomFile(info: BuildInfo): File = info.base / "pom.xml"
}
```
## Custom Transformer
Once a project has been loaded into an `sbt.BuildUnit`, it is transformed by all registered transformers.
A custom transformer has type:
```scala
TransformInfo => BuildUnit
```
A transformer is registered by passing it to _BuildLoader.transform_ and overriding _Build.buildLoaders_ with the result:
```scala
...
object Demo extends Build {
...
override def buildLoaders =
BuildLoader.transform(demoTransformer) ::
Nil
def demoTransformer: BuildLoader.TransformInfo => BuildUnit = ...
}
```
### API Documentation
Relevant API documentation for custom transformers:
* [TransformInfo]
* [BuildLoader]
* [BuildUnit]
# Manipulating Project Dependencies in Settings
The `buildDependencies` setting, in the Global scope, defines the aggregation and classpath dependencies between projects.
By default, this information comes from the dependencies defined by `Project` instances by the `aggregate` and `dependsOn` methods.
Because `buildDependencies` is a setting and is used everywhere dependencies need to be known (once all projects are loaded), plugins and build definitions can transform it to manipulate inter-project dependencies at setting evaluation time.
The only requirement is that no new projects are introduced because all projects are loaded before settings get evaluated.
That is, all Projects must have been declared directly in a Build or referenced as the argument to `Project.aggregate` or `Project.dependsOn`.
## The BuildDependencies type
The type of the `buildDependencies` setting is [BuildDependencies].
`BuildDependencies` provides mappings from a project to its aggregate or classpath dependencies.
For classpath dependencies, a dependency has type `ClasspathDep[ProjectRef]`, which combines a `ProjectRef` with a configuration (see [ClasspathDep] and [ProjectRef]).
For aggregate dependencies, the type of a dependency is just `ProjectRef`.
The API for `BuildDependencies` is not extensive, covering only a little more than the minimum required, and related APIs have more of an internal, unpolished feel.
Most manipulations consist of modifying the relevant map (classpath or aggregate) manually and creating a new `BuildDependencies` instance.
### Example
As an example, the following replaces a reference to a specific build URI with a new URI.
This could be used to translate all references to a certain git repository to a different one or to a different mechanism, like a local directory.
```scala
buildDependencies in Global ~= { deps =>
val oldURI = uri("...") // the URI to replace
val newURI = uri("...") // the URI replacing oldURI
def substitute(dep: ClasspathDep[ProjectRef]): ClasspathDep[ProjectRef] =
if(dep.project.build == oldURI)
ResolvedClasspathDependency(ProjectRef(newURI, dep.project.project), dep.configuration)
else
dep
val newcp =
for( (proj, deps) <- deps.cp) yield
(proj, deps map substitute)
new BuildDependencies(newcp, deps.aggregate)
}
```
It is not limited to such basic translations, however.
The configuration a dependency is defined in may be modified and dependencies may be added or removed.
Modifying `buildDependencies` can be combined with modifying `libraryDependencies` to convert binary dependencies to and from source dependencies, for example.
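As a concrete sketch of adding a dependency (the build URI and the project IDs `app` and `util` here are hypothetical), the classpath map can be updated directly:
```scala
// Add a classpath dependency from the hypothetical project "app" to "util"
// at setting evaluation time. Both projects must already be loaded.
buildDependencies in Global ~= { deps =>
  val base  = uri("file:/path/to/build") // hypothetical build URI
  val app   = ProjectRef(base, "app")
  val util  = ProjectRef(base, "util")
  val extra = ResolvedClasspathDependency(util, Some("compile"))
  // append the new dependency to any existing classpath dependencies of `app`
  val newcp = deps.cp.updated(app, deps.cp.getOrElse(app, Nil) :+ extra)
  new BuildDependencies(newcp, deps.aggregate)
}
```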

@ -1,195 +0,0 @@
[State]: http://harrah.github.com/xsbt/latest/api/sbt/State$.html
[Extracted]: http://harrah.github.com/xsbt/latest/api/sbt/Extracted.html
[Keys]: http://harrah.github.com/xsbt/latest/api/sbt/Keys$.html
[Eval]: http://harrah.github.com/xsbt/latest/api/sbt/compiler/Eval.html
[Scope]: http://harrah.github.com/xsbt/latest/api/sbt/Scope.html
[BuildStructure]: http://harrah.github.com/xsbt/latest/api/sbt/Load$$BuildStructure.html
[LoadedBuildUnit]: http://harrah.github.com/xsbt/latest/api/sbt/Load$$LoadedBuildUnit.html
[Structure.scala]: http://harrah.github.com/xsbt/latest/sxr/Structure.scala.html
[ResolvedProject]: http://harrah.github.com/xsbt/latest/api/sbt/ResolvedProject.html
[ProjectReferences]: http://harrah.github.com/xsbt/latest/api/sbt/ProjectReference.html
# State and actions
[State] is the entry point to all available information in sbt.
The key methods are:
* `definedCommands: Seq[Command]` returns all registered Command definitions
* `remainingCommands: Seq[String]` returns the remaining commands to be run
* `attributes: AttributeMap` contains generic data.
The action part of a command performs work and transforms `State`.
The following sections discuss `State => State` transformations.
As mentioned previously, a command will typically handle a parsed value as well: `(State, T) => State`.
## Command-related data
A Command can modify the currently registered commands or the commands to be executed.
This is done in the action part by transforming the (immutable) State provided to the command.
A function that registers additional power commands might look like:
```scala
val powerCommands: Seq[Command] = ...
val addPower: State => State =
(state: State) =>
state.copy(definedCommands =
(state.definedCommands ++ powerCommands).distinct
)
```
This takes the current commands, appends new commands, and drops duplicates.
Alternatively, State has a convenience method for doing the above:
```scala
val addPower2 = (state: State) => state ++ powerCommands
```
Some examples of functions that modify the remaining commands to execute:
```scala
val appendCommand: State => State =
(state: State) =>
state.copy(remainingCommands = state.remainingCommands :+ "cleanup")
val insertCommand: State => State =
(state: State) =>
state.copy(remainingCommands = "next-command" +: state.remainingCommands)
```
The first adds a command that will run after all currently specified commands run.
The second inserts a command that will run next.
The remaining commands will run after the inserted command completes.
To indicate that a command has failed and execution should not continue, return `state.fail`.
```scala
(state: State) => {
val success: Boolean = ...
if(success) state else state.fail
}
```
## Project-related data
Project-related information is stored in `attributes`.
Typically, commands won't access this directly but will instead use a convenience method to extract the most useful information:
```scala
val state: State
val extracted: Extracted = Project.extract(state)
import extracted._
```
[Extracted] provides:
* Access to the current build and project (`currentRef`)
* Access to initialized project setting data (`structure.data`)
* Access to session `Setting`s and the original, permanent settings from `.sbt` and `.scala` files (`session.append` and `session.original`, respectively)
* Access to the current [Eval] instance for evaluating Scala expressions in the build context.
## Project data
All project data is stored in `structure.data`, which is of type `sbt.Settings[Scope]`.
Typically, one gets information of type `T` in the following way:
```scala
val key: SettingKey[T]
val scope: Scope
val value: Option[T] = key in scope get structure.data
```
Here, a `SettingKey[T]` is typically obtained from [Keys] and is the same type that is used to define settings in `.sbt` files, for example.
[Scope] selects the scope the key is obtained for.
There are convenience overloads of `in` that can be used to specify only the required scope axes. See [Structure.scala] for where `in` and other parts of the settings interface are defined.
Some examples:
```scala
import Keys._
val extracted: Extracted
import extracted._
// get name of current project
val nameOpt: Option[String] = name in currentRef get structure.data
// get the package options for the `test:package-src` task or Nil if none are defined
val pkgOpts: Seq[PackageOption] = packageOptions in (currentRef, Test, packageSrc) get structure.data getOrElse Nil
```
[BuildStructure] contains information about build and project relationships.
Key members are:
```scala
units: Map[URI, LoadedBuildUnit]
root: URI
```
A `URI` identifies a build and `root` identifies the initial build loaded.
[LoadedBuildUnit] provides information about a single build.
The key members of `LoadedBuildUnit` are:
```scala
// Defines the base directory for the build
localBase: File
// maps the project ID to the Project definition
defined: Map[String, ResolvedProject]
```
[ResolvedProject] has the same information as the `Project` used in a `project/Build.scala` except that [ProjectReferences] are resolved to `ProjectRef`s.
## Classpaths
Classpaths in sbt 0.10+ are of type `Seq[Attributed[File]]`.
This allows tagging arbitrary information to classpath entries.
sbt currently uses this to associate an `Analysis` with an entry.
This is how it manages the information needed for multi-project incremental recompilation.
It also associates the ModuleID and Artifact with managed entries (those obtained by dependency management).
When you only want the underlying `Seq[File]`, use `files`:
```scala
val attributedClasspath: Seq[Attributed[File]] = ...
val classpath: Seq[File] = attributedClasspath.files
```
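The attached information can also be read back from an entry. For example, a sketch of collecting the `ModuleID`s of managed entries (this assumes the ID is attached under `Keys.moduleID.key`; unmanaged entries carry no such attribute):
```scala
val attributedClasspath: Seq[Attributed[File]] = ...
// flatMap drops entries without an attached ModuleID (e.g. unmanaged jars)
val modules: Seq[ModuleID] =
  attributedClasspath.flatMap(_.get(Keys.moduleID.key))
```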
## Running tasks
It can be useful to run a specific project task from a [[command|Commands]] (*not from another task*) and get its result.
For example, an IDE-related command might want to get the classpath from a project or a task might analyze the results of a compilation.
The relevant method is `Project.evaluateTask`, which has the following signature:
```scala
def evaluateTask[T](taskKey: ScopedKey[Task[T]], state: State,
checkCycles: Boolean = false, maxWorkers: Int = ...): Option[Result[T]]
```
For example,
```scala
val eval: State => State = (state: State) => {
// This selects the main 'compile' task for the current project.
// The value produced by 'compile' is of type inc.Analysis,
// which contains information about the compiled code.
val taskKey = Keys.compile in Compile
// Evaluate the task
// None if the key is not defined
// Some(Inc) if the task does not complete successfully (Inc for incomplete)
// Some(Value(v)) with the resulting value
val result: Option[Result[inc.Analysis]] = Project.evaluateTask(taskKey, state)
// handle the result
result match
{
case None => // Key wasn't defined.
case Some(Inc(inc)) => // error detail, inc is of type Incomplete, use Incomplete.show(inc.tpe) to get an error message
case Some(Value(v)) => // do something with v: inc.Analysis
}
}
```
For getting the test classpath of a specific project, use this key:
```scala
val projectRef: ProjectRef = ...
val taskKey: TaskKey[Seq[Attributed[File]]] =
Keys.fullClasspath in (projectRef, Test)
```
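Combining this key with `Project.evaluateTask` from the previous example, a sketch of reading the resulting classpath (assuming a `state` value is in scope):
```scala
// Evaluate the test classpath task, falling back to Nil
// if the key is undefined or the task fails.
val classpath: Seq[Attributed[File]] =
  Project.evaluateTask(taskKey, state) match {
    case Some(Value(cp)) => cp
    case _               => Nil
  }
```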

@ -1,113 +0,0 @@
[xsbti.AppMain]: http://harrah.github.com/xsbt/latest/api/xsbti/AppMain.html
# Creating Command Line Applications Using sbt
*Note:* This page applies to sbt 0.12.0 and later.
There are several components of sbt that may be used to create a command line application.
The [[launcher|Launcher]] and the [[command system|Commands]] are the two main ones illustrated here.
As described on the [[launcher page|Launcher]], a launched application implements the xsbti.AppMain interface and defines a brief configuration file that users pass to the launcher to run the application.
To use the command system, an application sets up a [[State|Build State]] instance that provides [[command implementations|Commands]] and the initial commands to run.
A minimal hello world example is given below.
# Hello World Example
There are three files in this example:
1. build.sbt
2. Main.scala
3. hello.build.properties
To try out this example:
1. Put the first two files in a new directory
2. Run 'sbt publish-local' in that directory
3. Run 'sbt @path/to/hello.build.properties' to run the application.
As with sbt itself, you can specify commands on the command line (batch mode) or run them at a prompt (interactive mode).
### Build Definition: build.sbt
The build.sbt file should define the standard settings: name, version, and organization. To use the sbt command system, a dependency on the `command` module is needed. To use the task system, add a dependency on the `task-system` module as well.
```scala
organization := "org.example"
name := "hello"
version := "0.1-SNAPSHOT"
libraryDependencies += "org.scala-sbt" %% "command" % "0.12.0"
```
### Application: Main.scala
The application itself is defined by implementing [xsbti.AppMain]. The basic steps are
1. Provide command definitions. These are the commands that are available for users to run.
2. Define initial commands. These are the commands that are initially scheduled to run. For example, an application will typically add anything specified on the command line (what sbt calls batch mode) and if no commands are defined, enter interactive mode by running the 'shell' command.
3. Set up logging. The default setup in the example rotates the log file after each user interaction and sends brief logging to the console and verbose logging to the log file.
```scala
package org.example
import sbt._
import java.io.{File, PrintWriter}
final class Main extends xsbti.AppMain
{
/** Defines the entry point for the application.
* The call to `initialState` sets up the application.
* The call to runLogged starts command processing. */
def run(configuration: xsbti.AppConfiguration): xsbti.MainResult =
MainLoop.runLogged( initialState(configuration) )
/** Sets up the application by constructing an initial State instance with the supported commands
* and initial commands to run. See the State API documentation for details. */
def initialState(configuration: xsbti.AppConfiguration): State =
{
val commandDefinitions = hello +: BasicCommands.allBasicCommands
val commandsToRun = Hello +: "iflast shell" +: configuration.arguments.map(_.trim)
State( configuration, commandDefinitions, Set.empty, None, commandsToRun, State.newHistory,
AttributeMap.empty, initialGlobalLogging, State.Continue )
}
// defines an example command. see the Commands page for details.
val Hello = "hello"
val hello = Command.command(Hello) { s =>
s.log.info("Hello!")
s
}
/** Configures logging to log to a temporary backing file as well as to the console.
* An application would need to do more here to customize the logging level and
* provide access to the backing file (like sbt's last command and logLevel setting).*/
def initialGlobalLogging: GlobalLogging =
GlobalLogging.initial(MainLogging.globalDefault _, File.createTempFile("hello", "log"))
}
```
### Launcher configuration file: hello.build.properties
The launcher needs a configuration file in order to retrieve and run an application.
`hello.build.properties`
```
[scala]
version: 2.9.1
[app]
org: org.example
name: hello
version: 0.1-SNAPSHOT
class: org.example.Main
components: xsbti
cross-versioned: true
[repositories]
local
maven-central
typesafe-ivy-releases: http://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
```

@ -1,165 +0,0 @@
[State]: http://harrah.github.com/xsbt/latest/api/sbt/State.html
[Command.scala]: http://harrah.github.com/xsbt/latest/sxr/Command.scala.html#10761
# Commands
# Introduction
There are three main aspects to commands:
1. The syntax used by the user to invoke the command, including:
* Tab completion for the syntax
* The parser to turn input into an appropriate data structure
2. The action to perform using the parsed data structure. This action transforms the build [State].
3. Help provided to the user
In sbt, the syntax part, including tab completion, is specified with parser combinators.
If you are familiar with the parser combinators in Scala's standard library, these are very similar.
The action part is a function `(State, T) => State`, where `T` is the data structure produced by the parser.
See the [[Parsing Input]] page for how to use the parser combinators.
[State] provides access to the build state, such as all registered `Command`s, the remaining commands to execute, and all project-related information. See [[Build State]] for details on State.
Finally, basic help information may be provided that is used by the `help` command to display command help.
# Defining a Command
A command combines a function `State => Parser[T]` with an action `(State, T) => State`.
The reason for `State => Parser[T]` and not simply `Parser[T]` is that often the current `State` is used to build the parser.
For example, the currently loaded projects (provided by `State`) determine valid completions for the `project` command.
Examples for the general and specific cases are shown in the following sections.
See [Command.scala] for the source API details for constructing commands.
## General commands
General command construction looks like:
```scala
val action: (State, T) => State = ...
val parser: State => Parser[T] = ...
val command: Command = Command("name")(parser)(action)
```
## No-argument commands
There is a convenience method for constructing commands that do not accept any arguments.
```scala
val action: State => State = ...
val command: Command = Command.command("name")(action)
```
## Single-argument command
There is a convenience method for constructing commands that accept a single argument with arbitrary content.
```scala
// accepts the state and the single argument
val action: (State, String) => State = ...
val command: Command = Command.single("name")(action)
```
## Multi-argument command
There is a convenience method for constructing commands that accept multiple arguments separated by spaces.
```scala
val action: (State, Seq[String]) => State = ...
// <arg> is the suggestion printed for tab completion on an argument
val command: Command = Command.args("name", "<arg>")(action)
```
# Full Example
The following example is a valid `project/Build.scala` that adds commands to a project.
To try it out:
1. Copy the following build definition into `project/Build.scala` for a new project.
2. Run sbt on the project.
3. Try out the `hello`, `hello-all`, `fail-if-true`, `color`, and `print-state` commands.
4. Use tab-completion and the code below as guidance.
```scala
import sbt._
import Keys._
// imports standard command parsing functionality
import complete.DefaultParsers._
object CommandExample extends Build
{
// Declare a single project, adding several new commands, which are discussed below.
lazy override val projects = Seq(root)
lazy val root = Project("root", file(".")) settings(
commands ++= Seq(hello, helloAll, failIfTrue, changeColor, printState)
)
// A simple, no-argument command that prints "Hi",
// leaving the current state unchanged.
def hello = Command.command("hello") { state =>
println("Hi!")
state
}
// A simple, multiple-argument command that prints "Hi" followed by the arguments.
// Again, it leaves the current state unchanged.
def helloAll = Command.args("hello-all", "<name>") { (state, args) =>
println("Hi " + args.mkString(" "))
state
}
// A command that demonstrates failing or succeeding based on the input
def failIfTrue = Command.single("fail-if-true") {
case (state, "true") => state.fail
case (state, _) => state
}
// Demonstration of a custom parser.
// The command changes the foreground or background terminal color
// according to the input.
lazy val change = Space ~> (reset | setColor)
lazy val reset = token("reset" ^^^ "\033[0m")
lazy val color = token( Space ~> ("blue" ^^^ "4" | "green" ^^^ "2") )
lazy val select = token( "fg" ^^^ "3" | "bg" ^^^ "4" )
lazy val setColor = (select ~ color) map { case (g, c) => "\033[" + g + c + "m" }
def changeColor = Command("color")(_ => change) { (state, ansicode) =>
print(ansicode)
state
}
// A command that demonstrates getting information out of State.
def printState = Command.command("print-state") { state =>
import state._
println(definedCommands.size + " registered commands")
println("commands to run: " + show(remainingCommands))
println()
println("original arguments: " + show(configuration.arguments))
println("base directory: " + configuration.baseDirectory)
println()
println("sbt version: " + configuration.provider.id.version)
println("Scala version (for sbt): " + configuration.provider.scalaProvider.version)
println()
val extracted = Project.extract(state)
import extracted._
println("Current build: " + currentRef.build)
println("Current project: " + currentRef.project)
println("Original setting count: " + session.original.size)
println("Session setting count: " + session.append.size)
state
}
def show[T](s: Seq[T]) =
s.map("'" + _ + "'").mkString("[", ", ", "]")
}
```

@ -1,9 +0,0 @@
# Extending sbt
This part of the wiki has pages documenting sbt "internals,"
and how to extend them with plugins and commands.
To understand the pages in here, you'll probably need the
[[Getting Started Guide|Getting Started Welcome]] as a foundation.
See the sidebar on the right for an index of topics.

@ -1,86 +0,0 @@
[InputTask.apply]: http://harrah.github.com/xsbt/latest/api/sbt/InputTask$.html
# Input Tasks
Input Tasks parse user input and produce a task to run. [[Parsing Input]] describes how to use the parser combinators that define the input syntax and tab completion. This page describes how to hook those parser combinators into the input task system.
# Input Keys
A key for an input task is of type `InputKey` and represents the input task like a `SettingKey` represents a setting or a `TaskKey` represents a task. Define a new input task key using the `InputKey.apply` factory method:
```scala
// goes in <base>/project/Build.scala
val demo = InputKey[Unit]("demo")
```
# Basic Input Task Definition
The simplest input task accepts a space-delimited sequence of arguments. It does not provide useful tab completion and parsing is basic. Such a task may be defined using the `inputTask` method, which accepts a single function of type `TaskKey[Seq[String]] => Initialize[Task[O]]` for some parse result type `O`. The input to this function is a `TaskKey` for a task that will provide the parsed `Seq[String]`. The function should return a task that uses that parsed input. For example:
```scala
demo <<= inputTask { (argTask: TaskKey[Seq[String]]) =>
// Here, we map the argument task `argTask`
// and a normal setting `scalaVersion`
(argTask, scalaVersion) map { (args: Seq[String], sv: String) =>
println("The current Scala version is " + sv)
println("The arguments to demo were:")
args foreach println
}
}
```
# Input Task using Parsers
The `inputTask` method does not provide any flexibility in defining the input syntax. To use an arbitrary `Parser` described on the [[Parsing Input]] page for parsing your input task's command line, use the more advanced [InputTask.apply] factory method. This method accepts two arguments, which will be described in the following two sections.
## Constructing the Parser
The first step is to construct the actual `Parser` by defining a value of type `Initialize[State => Parser[I]]` for some parse result type `I` that you decide on. `Initialize` is the type that results from using other settings and the `State => Parser[I]` function provides access to the [[Build State]] when constructing the parser. As an example, the following defines a contrived `Parser` that uses the project's Scala and sbt version settings as well as the state.
```scala
import complete.DefaultParsers._
val parser: Initialize[State => Parser[(String,String)]] =
(scalaVersion, sbtVersion) { (scalaV: String, sbtV: String) =>
(state: State) =>
( token("scala" <~ Space) ~ token(scalaV) ) |
( token("sbt" <~ Space) ~ token(sbtV) ) |
( token("commands" <~ Space) ~
token(state.remainingCommands.size.toString) )
}
```
This Parser definition will produce a value of type `(String,String)`.
The input syntax isn't very flexible; it is just a demonstration.
It will produce one of the following values for a successful parse (assuming the current Scala version is 2.9.1, the current sbt version is 0.11.3, and there are 3 commands left to run):
```scala
("scala", "2.9.1")
("sbt", "0.11.3")
("commands", "3")
```
## Constructing the Task
Next, we construct the actual task to execute from the result of the `Parser`. For this, we construct a value of type `TaskKey[I] => Initialize[Task[O]]`, where `I` is the type returned by the `Parser` we just defined and `O` is the type of the `Task` we will produce. The `TaskKey[I]` provides a task that will provide the result of parsing.
The following contrived example uses the previous example's output (of type `(String,String)`) and the result of the `package` task to print some information to the screen.
```scala
val taskDef = (parsedTask: TaskKey[(String,String)]) => {
// we are making a task, so use 'map'
(parsedTask, packageBin) map { case ( (tpe: String, value: String), pkg: File) =>
println("Type: " + tpe)
println("Value: " + value)
println("Packaged: " + pkg.getAbsolutePath)
}
}
```
## Putting it together
To construct the input task, combine the key, the parser, and the task definition in a setting that goes in `build.sbt` or in the `settings` member of a `Project` in `project/Build.scala`:
```scala
demo <<= InputTask(parser)(taskDef)
```
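With these settings in place, an interactive session might look like the following sketch (the project name and packaged jar path are illustrative, and the packaging output is abridged):

```console
> demo scala 2.9.1
...
Type: scala
Value: 2.9.1
Packaged: /home/user/demo/target/scala-2.9.1/demo_2.9.1-0.1.jar
```

Pressing tab after `demo ` offers `scala`, `sbt`, and `commands` as completions, because the parser was built with `token`.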

@ -1,203 +0,0 @@
# Plugins Best Practices
_This page is intended primarily for SBT plugin authors._
A plugin developer should strive for consistency and ease of use. Specifically:
* Plugins should play well with other plugins. Avoiding namespace clashes (in both SBT and Scala) is paramount.
* Plugins should follow consistent conventions. The experience of an SBT _user_ should be consistent, no matter
what plugins are pulled in.
Here are some current plugin best practices. **NOTE:** Best practices are evolving, so check back frequently.
## Avoid overriding `settings`
SBT will automatically load your plugin's `settings` into the build. Overriding `val settings` should only be done by plugins intending to provide commands. Regular plugins defining tasks and settings should provide a sequence named after the plugin like so:
```scala
val obfuscateSettings = Seq(...)
```
This allows the build user to choose the subprojects the plugin will be used in. See the later sections for how the settings should be scoped.
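For example, a build user can then add the plugin's settings to just the subprojects that need them (the project names here are illustrative):

```scala
// in project/Build.scala; `obfuscateSettings` is provided by the plugin
lazy val core = Project("core", file("core")) settings(obfuscateSettings: _*)
lazy val util = Project("util", file("util")) // plugin not used here
```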
## Reuse existing keys
SBT has a number of [predefined keys](http://harrah.github.com/xsbt/latest/api/sbt/Keys%24.html). Where possible, reuse them in your plugin. For instance, don't define:
```scala
val sourceFiles = SettingKey[Seq[File]]("source-files")
```
Instead, simply reuse SBT's existing `sources` key.
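For example, the existing key can be scoped by the plugin's main task instead of introducing a new key (a sketch; `obfuscate` is an assumed task key):

```scala
val obfuscate = TaskKey[Seq[File]]("obfuscate")
// reuse sbt's existing `sources` key, scoped to the plugin's task
sources in obfuscate <<= (sources).identity
```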
## Avoid namespace clashes
Sometimes, you need a new key, because there is no existing SBT key. In this case, use a plugin-specific prefix, both in the (string) key name used in the SBT namespace and in the Scala `val`. There are two acceptable ways to accomplish this goal.
### Just use a `val` prefix
```scala
package sbtobfuscate
object Plugin extends sbt.Plugin {
val obfuscateStylesheet = SettingKey[File]("obfuscate-stylesheet")
}
```
In this approach, every `val` starts with `obfuscate`. A user of the plugin would refer to the settings like this:
```scala
obfuscateStylesheet <<= ...
```
### Use a nested object
```scala
package sbtobfuscate
object Plugin extends sbt.Plugin {
object ObfuscateKeys {
val stylesheet = SettingKey[File]("obfuscate-stylesheet")
}
}
```
In this approach, all non-common settings are in a nested object. A user of the plugin would refer to the settings like this:
```scala
import ObfuscateKeys._ // place this at the top of build.sbt
stylesheet <<= ...
```
## Configuration Advice
If your plugin makes heavy use of the shell, you may want to opt out of the task-scoping described in this section for usability reasons.
With configuration-scoping, the user can discover your tasks using tab completion:
```
coffee:[tab]
```
This method no longer works with per-task keys, but there is a pending issue, so hopefully it will be addressed in the future.
### When to define your own configuration
If your plugin introduces a new concept (even if that concept reuses an existing key), you want your own configuration. For instance, suppose you've built a plugin that produces PDF files from some kind of markup, and your plugin defines a target directory to receive the resulting PDFs. That target directory is scoped in its own configuration, so it is distinct from other target directories. Thus, these two definitions use the same _key_, but they represent distinct _values_. So, in a user's `build.sbt`, we might see:
```scala
target in PDFPlugin <<= baseDirectory(_ / "mytarget" / "pdf")
target in Compile <<= baseDirectory(_ / "mytarget")
```
In the PDF plugin, this is achieved with an `inConfig` definition:
```scala
val settings: Seq[sbt.Project.Setting[_]] = inConfig(PDFPlugin)(Seq(
  target <<= baseDirectory(_ / "target" / "docs") // the default value
))
```
### When _not_ to define your own configuration.
If you're merely adding to existing definitions, don't define your own configuration. Instead, reuse an existing one _or_ scope by the main task (see below).
```scala
val akka = config("akka") // This isn't needed.
val akkaStartCluster = TaskKey[Unit]("akka-start-cluster")
target in akkaStartCluster <<= ... // This is ok.
akkaStartCluster in akka <<= ... // BAD. No need for a Config for plugin-specific task.
```
### Configuration Cat says "Configuration is for configuration"
A new type of configuration, e.g.
```scala
val Config = config("profile")
```
should be used to create a "cross-task" configuration. The task definitions don't change in this case, but the default configuration does. For example, the `profile` configuration can extend the test configuration with additional settings and changes to allow profiling in SBT. Plugins should not create arbitrary Configurations, but utilize them for specific purposes and builds.
Configurations actually tie into dependency resolution (with Ivy) and can alter generated pom files.
Configurations should *not* be used to namespace keys for a plugin. e.g.
```scala
val Config = config("my-plugin")
val pluginKey = SettingKey[String]("plugin-specific-key")
val settings = pluginKey in Config // DON'T DO THIS!
```
### Playing nice with configurations
Whether you ship with a configuration or not, a plugin should strive to support multiple configurations, including those created by the build user. Some tasks that are tied to a particular configuration can be re-used in other configurations. While you may not see the need immediately in your plugin, some projects will ask you for that flexibility.
#### Provide raw settings and configured settings
Split your settings by the configuration axis like so:
```scala
val obfuscate = TaskKey[Seq[File]]("obfuscate")
val obfuscateSettings = inConfig(Compile)(baseObfuscateSettings)
val baseObfuscateSettings: Seq[Setting[_]] = Seq(
obfuscate <<= (sources in obfuscate) map { s => ... },
sources in obfuscate <<= (sources).identity
)
```
The `baseObfuscateSettings` value provides base configuration for the plugin's tasks. This can be re-used in other configurations if projects require it. The `obfuscateSettings` value provides the default `Compile` scoped settings for projects to use directly. This gives the greatest flexibility in using features provided by a plugin. Here's how the raw settings may be reused:
```scala
seq(Project.inConfig(Test)(sbtobfuscate.Plugin.baseObfuscateSettings): _*)
```
Alternatively, one could provide a utility method to load settings in a given configuration:
```scala
def obfuscateSettingsIn(c: Configuration): Seq[Project.Setting[_]] =
inConfig(c)(baseObfuscateSettings)
```
This could be used as follows:
```scala
seq(obfuscateSettingsIn(Test): _*)
```
#### Using a 'main' task scope for settings
Sometimes you want to define some settings for a particular 'main' task in your plugin. In this instance, you can scope your settings using the task itself.
```scala
val obfuscate = TaskKey[Seq[File]]("obfuscate")
val obfuscateSettings = inConfig(Compile)(baseObfuscateSettings)
val baseObfuscateSettings: Seq[Setting[_]] = Seq(
obfuscate <<= (sources in obfuscate) map { s => ... },
sources in obfuscate <<= (sources).identity
)
```
In the above example, `sources in obfuscate` is scoped under the main task, `obfuscate`.
## Mucking with Global build state
There may be times when you need to muck with global build state. The general rule is *be careful what you touch*.
First, make sure your users do not include global build configuration in *every* project, but rather in the build itself. For example:
```scala
object MyBuild extends Build {
override lazy val settings = super.settings ++ MyPlugin.globalSettings
	val main = Project("root", file(".")) settings(MyPlugin.globalSettings:_*) // BAD!
}
```
Global settings should *not* be placed into a `build.sbt` file.
When overriding global settings, care should be taken to ensure previous settings from other plugins are not ignored. e.g. when creating a new `onLoad` handler, ensure that the previous `onLoad` handler is not removed.
```scala
object MyPlugin extends Plugin {
	val globalSettings: Seq[Setting[_]] = Seq(
onLoad in Global <<= onLoad in Global apply (_ andThen { state =>
... return new state ...
})
)
}
```

@ -1,302 +0,0 @@
# Plugins
# Introduction
A plugin is essentially a way to use external code in a build definition.
A plugin can be a library used to implement a task. For example, you might use [Knockoff](http://tristanhunt.com/projects/knockoff/) to write a markdown processing task.
A plugin can define a sequence of sbt Settings that are automatically added to all projects or that are explicitly declared for selected projects.
For example, a plugin might add a 'proguard' task and associated (overridable) settings.
Because [[Commands]] can be added with the `commands` setting, a plugin can also fulfill the role that processors did in 0.7.x.
The [[Plugin Best Practices|Plugins Best Practices]] page describes the currently evolving guidelines to writing sbt plugins. See also the general [[Best Practices]].
# Using a binary sbt plugin
A common situation is using a binary plugin published to a repository.
Create `project/plugins.sbt` with the desired sbt plugins, any general dependencies, and any necessary repositories:
```scala
addSbtPlugin("org.example" % "plugin" % "1.0")
addSbtPlugin("org.example" % "another-plugin" % "2.0")
// plain library (not an sbt plugin) for use in the build definition
libraryDependencies += "org.example" % "utilities" % "1.3"
resolvers += "Example Plugin Repository" at "http://example.org/repo/"
```
See the rest of the page for more information on creating and using plugins.
# By Description
A plugin definition is a project in `<main-project>/project/`.
This project's classpath is the classpath used for build definitions in `<main-project>/project/` and any `.sbt` files in the project's base directory. It is also used for the `eval` and `set` commands.
Specifically,
1. Managed dependencies declared by the `project/` project are retrieved and are available on the build definition classpath, just like for a normal project.
2. Unmanaged dependencies in `project/lib/` are available to the build definition, just like for a normal project.
3. Sources in the `project/` project are the build definition files and are compiled using the classpath built from the managed and unmanaged dependencies.
4. Project dependencies can be declared in `project/project/Build.scala` and will be available to the build definition sources. Think of `project/project/` as the build definition for the build definition.
The build definition classpath is searched for `sbt/sbt.plugins` descriptor files containing the names of Plugin implementations.
A Plugin is a module that defines settings to automatically inject to projects.
Additionally, all Plugin modules are wildcard imported for the `eval` and `set` commands and `.sbt` files.
A Plugin implementation is not required to produce a plugin, however.
It is a convenience for plugin consumers, and because of its automatic nature, it is not always appropriate.
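For reference, such a descriptor is a plain text file on the plugin's classpath listing fully qualified `Plugin` module names, one per line; sbt generates it automatically when `sbtPlugin := true`. The name below is illustrative:

```
sbtobfuscate.Plugin
```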
The `reload plugins` command changes the current build to `<current-build>/project/`.
This allows manipulating the build definition project like a normal project.
`reload return` changes back to the original build.
Any session settings for the plugin definition project that have not been saved are dropped.
### Global plugins
In sbt 0.7.x, a processor was a way to add new commands to sbt and distribute them to users. A key feature was the ability to have per-user processors so that once declared, it could be used in all projects for that user. In sbt 0.10+, plugins and processors are unified. Specifically, a plugin can add commands and plugins can be declared globally for a user.
The `~/.sbt/plugins/` directory is treated as a global plugin definition project. It is a normal sbt project whose classpath is available to all sbt project definitions for that user as described above for per-project plugins.
# By Example
## Using a library in a build definition
As an example, we'll add the Grizzled Scala library as a plugin. Although this does not provide sbt-specific functionality, it demonstrates how to declare plugins.
### 1a) Manually managed
1. Download the jar manually from [[https://oss.sonatype.org/content/repositories/releases/org/clapper/grizzled-scala_2.8.1/1.0.4/grizzled-scala_2.8.1-1.0.4.jar]]
2. Put it in `project/lib/`
### 1b) Automatically managed: direct editing approach
Edit `project/plugins.sbt` to contain:
```scala
libraryDependencies += "org.clapper" %% "grizzled-scala" % "1.0.4"
```
If sbt is running, do `reload`.
### 1c) Automatically managed: command line approach
We can change to the plugins project in `project/` using `reload plugins`.
```console
$ xsbt
> reload plugins
[info] Set current project to default (in build file:/Users/harrah/demo2/project/)
>
```
Then, we can add dependencies like usual and save them to `project/plugins.sbt`.
It is useful, but not required, to run `update` to verify that the dependencies are correct.
```console
> set libraryDependencies += "org.clapper" %% "grizzled-scala" % "1.0.4"
...
> update
...
> session save
...
```
To switch back to the main project:
```console
> reload return
[info] Set current project to root (in build file:/Users/harrah/demo2/)
```
### 1d) Project dependency
This variant shows how to use the external project support in sbt 0.10 to declare a source dependency on a plugin.
This means that the plugin will be built from source and used on the classpath.
Edit `project/project/Build.scala`
```scala
import sbt._
object PluginDef extends Build {
lazy val projects = Seq(root)
lazy val root = Project("plugins", file(".")) dependsOn( webPlugin )
lazy val webPlugin = uri("git://github.com/siasia/xsbt-web-plugin")
}
```
If sbt is running, run `reload`.
Note that this approach can be useful when developing a plugin.
A project that uses the plugin will rebuild the plugin on `reload`.
This saves the intermediate steps of `publish-local` and `clean-plugins` required in 0.7.
It can also be used to work with the development version of a plugin from its repository.
It is recommended to explicitly specify the commit or tag by appending it to the repository as a fragment:
```scala
lazy val webPlugin = uri("git://github.com/siasia/xsbt-web-plugin#0.9.7")
```
### 2) Use the library
Grizzled Scala is ready to be used in build definitions.
This includes the `eval` and `set` commands and `.sbt` and `project/*.scala` files.
```console
> eval grizzled.sys.os
```
In a `build.sbt` file:
```scala
import grizzled.sys._
import OperatingSystem._
libraryDependencies ++=
    if(os == Windows)
("org.example" % "windows-only" % "1.0") :: Nil
else
Nil
```
# Creating a plugin
## Introduction
A minimal plugin is a Scala library that is built against the version of Scala that sbt runs on (currently 2.9.1) or a Java library.
Nothing special needs to be done for this type of library, as shown in the previous section.
A more typical plugin will provide sbt tasks, commands, or settings. This kind of plugin may provide these settings automatically or make them available for the user to explicitly integrate.
## Description
To make a plugin, create a project and configure `sbtPlugin` to `true`.
Then, write the plugin code and publish your project to a repository.
The plugin can be used as described in the previous section.
A plugin can implement `sbt.Plugin`.
The contents of a Plugin singleton, declared like `object MyPlugin extends Plugin`, are wildcard imported in `set`, `eval`, and `.sbt` files.
Typically, this is used to provide new keys (SettingKey, TaskKey, or InputKey) or core methods without requiring an import or qualification.
In addition, the `settings` member of the `Plugin` is automatically appended to each project's settings.
This allows a plugin to automatically provide new functionality or new defaults.
One main use of this feature is to globally add commands, like a processor in sbt 0.7.x.
These features should be used judiciously because the automatic activation removes control from the build author (the user of the plugin).
## Example Plugin
An example of a typical plugin:
`build.sbt`:
```scala
sbtPlugin := true
name := "example-plugin"
organization := "org.example"
```
`MyPlugin.scala`:
```scala
import sbt._
object MyPlugin extends Plugin
{
// configuration points, like the built in `version`, `libraryDependencies`, or `compile`
// by implementing Plugin, these are automatically imported in a user's `build.sbt`
val newTask = TaskKey[Unit]("new-task")
val newSetting = SettingKey[String]("new-setting")
// a group of settings ready to be added to a Project
// to automatically add them, do
val newSettings = Seq(
newSetting := "test",
newTask <<= newSetting map { str => println(str) }
)
// alternatively, by overriding `settings`, they could be automatically added to a Project
// override val settings = Seq(...)
}
```
## Usage example
A light build definition that uses the plugin might look like:
```scala
seq( MyPlugin.newSettings : _*)
newSetting := "light"
```
A full build definition that uses this plugin might look like:
```scala
object MyBuild extends Build
{
lazy val projects = Seq(root)
lazy val root = Project("root", file(".")) settings( MyPlugin.newSettings : _*) settings(
MyPlugin.newSetting := "full"
)
}
```
Individual settings could be defined in `MyBuild.scala` above or in a `build.sbt` file:
```scala
newSetting := "overridden"
```
## Example command plugin
A basic plugin that adds commands looks like:
`build.sbt`
```scala
sbtPlugin := true
name := "example-plugin"
organization := "org.example"
```
`MyPlugin.scala`
```scala
import sbt._
import Keys._
object MyPlugin extends Plugin
{
override lazy val settings = Seq(commands += myCommand)
lazy val myCommand =
Command.command("hello") { (state: State) =>
println("Hi!")
state
}
}
```
This example demonstrates how to take a Command (here, `myCommand`) and distribute it in a plugin. Note that multiple commands can be included in one plugin (for example, use `commands ++= Seq(a,b)`). See [[Commands]] for defining more useful commands, including ones that accept arguments and affect the execution state.
## Global plugins example
The simplest global plugin definition is declaring a library or plugin in `~/.sbt/plugins/build.sbt`:
```scala
libraryDependencies += "org.example" %% "example-plugin" % "0.1"
```
This plugin will be available for every sbt project for the current user.
In addition:
1. Jars may be placed directly in `~/.sbt/plugins/lib/` and will be available to every build definition for the current user.
2. Dependencies on plugins built from source may be declared in `~/.sbt/plugins/project/Build.scala` as described at [[FullConfiguration]].
3. A Plugin may be directly defined in Scala source files in `~/.sbt/plugins/`, such as `~/.sbt/plugins/MyPlugin.scala`. `~/.sbt/plugins/build.sbt` should contain `sbtPlugin := true`. This can be used for quicker turnaround when developing a plugin initially:
1. Edit the global plugin code
2. `reload` the project you want to use the modified plugin in
3. sbt will rebuild the plugin and use it for the project. Additionally, the plugin will be available in other projects on the machine without recompiling again.
This approach skips the overhead of `publish-local` and cleaning the plugins directory of the project using the plugin.
These are all consequences of `~/.sbt/plugins/` being a standard project whose classpath is added to every sbt project's build definition.
# Best Practices
If you're a plugin writer, please consult the [[Plugins Best Practices]] page; it contains a set of guidelines to help you ensure that your plugin is consistent with and plays well with other plugins.

@ -1,167 +0,0 @@
[Global]: http://harrah.github.com/xsbt/latest/api/sbt/Global$.html
[This]: http://harrah.github.com/xsbt/latest/api/sbt/This$.html
[Select]: http://harrah.github.com/xsbt/latest/api/sbt/Select.html
# Settings Core
This page describes the core settings engine a bit. This may be useful for using it outside of sbt. It may also be useful for understanding how sbt 0.11 works internally.
The documentation is comprised of two parts. The first part shows an example settings system built on top of the settings engine. The second part comments on how sbt's settings system is built on top of the settings engine. This may help illuminate what exactly the core settings engine provides and what is needed to build something like the sbt settings system.
## Example
### Setting up
To run this example, first create a new project with the following build.sbt file:
```scala
libraryDependencies <+= sbtVersion("org.scala-sbt" %% "collections" % _)
resolvers <+= sbtResolver
```
Then, put the following examples in source files `SettingsExample.scala` and `SettingsUsage.scala`. Finally, run sbt and enter the REPL using `console`. To see the output described below, enter `SettingsUsage`.
### Example Settings System
The first part of the example defines the custom settings system. There are three main parts:
1. Define the Scope type.
2. Define a function that converts that Scope (plus an AttributeKey) to a String.
3. Define a delegation function that defines the sequence of Scopes in which to look up a value.
There is also a fourth, but its usage is likely to be specific to sbt at this time. The example uses a trivial implementation for this part.
`SettingsExample.scala`
```scala
import sbt._
/** Define our settings system */
// A basic scope indexed by an integer.
final case class Scope(index: Int)
// Extend the Init trait.
// (It is done this way because the Scope type parameter is used everywhere in Init.
// Lots of type constructors would become binary, which as you may know requires lots of type lambdas
// when you want a type function with only one parameter.
// That would be a general pain.)
object SettingsExample extends Init[Scope]
{
// Provides a way of showing a Scope+AttributeKey[_]
val showFullKey: Show[ScopedKey[_]] = new Show[ScopedKey[_]] {
def apply(key: ScopedKey[_]) = key.scope.index + "/" + key.key.label
}
// A sample delegation function that delegates to a Scope with a lower index.
val delegates: Scope => Seq[Scope] = { case s @ Scope(index) =>
s +: (if(index <= 0) Nil else delegates(Scope(index-1)) )
}
// Not using this feature in this example.
val scopeLocal: ScopeLocal = _ => Nil
// These three functions + a scope (here, Scope) are sufficient for defining our settings system.
}
```
### Example Usage
This part shows how to use the system we just defined. The end result is a `Settings[Scope]` value. This type is basically a mapping `Scope -> AttributeKey[T] -> Option[T]`. See the [Settings API documentation](http://harrah.github.com/xsbt/latest/api/sbt/Settings.html) for details.
`SettingsUsage.scala`
```scala
/** Usage Example **/
import sbt._
import SettingsExample._
import Types._
object SettingsUsage
{
// Define some keys
val a = AttributeKey[Int]("a")
val b = AttributeKey[Int]("b")
// Scope these keys
val a3 = ScopedKey(Scope(3), a)
val a4 = ScopedKey(Scope(4), a)
val a5 = ScopedKey(Scope(5), a)
val b4 = ScopedKey(Scope(4), b)
// Define some settings
val mySettings: Seq[Setting[_]] = Seq(
setting( a3, value( 3 ) ),
setting( b4, app(a4 :^: KNil) { case av :+: HNil => av * 3 } ),
update(a5)(_ + 1)
)
// "compiles" and applies the settings.
// This can be split into multiple steps to access intermediate results if desired.
// The 'inspect' command operates on the output of 'compile', for example.
val applied: Settings[Scope] = make(mySettings)(delegates, scopeLocal, showFullKey)
// Show results.
for(i <- 0 to 5; k <- Seq(a, b)) {
println( k.label + i + " = " + applied.get( Scope(i), k) )
   }
}
```
This produces the following output when run:
```
a0 = None
b0 = None
a1 = None
b1 = None
a2 = None
b2 = None
a3 = Some(3)
b3 = None
a4 = Some(3)
b4 = Some(9)
a5 = Some(4)
b5 = Some(9)
```
* For the None results, we never defined the value and there was no value to delegate to.
* For a3, we explicitly defined it to be 3.
* a4 wasn't defined, so it delegates to a3 according to our delegates function.
* b4 gets the value for a4 (which delegates to a3, so it is 3) and multiplies it by 3.
* a5 is defined as the previous value of a5 + 1 and, since no previous value of a5 was defined, it delegates to a4, resulting in 3+1=4.
* b5 isn't defined explicitly, so it delegates to b4 and is therefore equal to 9 as well.
## sbt Settings Discussion
### Scopes
sbt defines a more complicated scope than the one shown here for the standard usage of settings in a build. This scope has four components: the project axis, the configuration axis, the task axis, and the extra axis. Each component may be [Global] (no specific value), [This] (current context), or [Select] (containing a specific value). sbt resolves This to either [Global] or [Select] depending on the context.
For example, in a project, a [This] project axis becomes a [Select] referring to the defining project. All other axes that are [This] are translated to [Global]. Functions like inConfig and inTask transform This into a [Select] for a specific value. For example, `inConfig(Compile)(someSettings)` translates the configuration axis for all settings in _someSettings_ to be `Select(Compile)` if the axis value is [This].
So, from the example and from sbt's scopes, you can see that the core settings engine does not impose much on the structure of a scope. All it requires is a delegates function `Scope => Seq[Scope]` and a `display` function. You can choose a scope type that makes sense for your situation.
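For comparison, sbt's own scope is roughly a case class with one field per axis, each wrapped in a `ScopeAxis` that is [Global], [This], or [Select]. This is a simplified sketch of sbt's internals, not the exact definition:

```scala
// simplified sketch; see Scope.scala in sbt's sources for the real definition
final case class Scope(
  project: ScopeAxis[Reference],
  config: ScopeAxis[ConfigKey],
  task: ScopeAxis[AttributeKey[_]],
  extra: ScopeAxis[AttributeMap]
)
```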
### Constructing settings
The _app_, _value_, _update_, and related methods are the core methods for constructing settings.
This example obviously looks rather different from sbt's interface because these methods are not typically used directly, but are wrapped in a higher-level abstraction.
With the core settings engine, you work with HLists to access other settings. In sbt's higher-level system, there are wrappers around HList for TupleN and FunctionN for N = 1-9 (except Tuple1 isn't actually used). When working with arbitrary arity, it is useful to make these wrappers at the highest level possible. This is because once wrappers are defined, code must be duplicated for every N. By making the wrappers at the top-level, this requires only one level of duplication.
Additionally, sbt uniformly integrates its task engine into the settings system.
The underlying settings engine has no notion of tasks.
This is why sbt uses a `SettingKey` type and a `TaskKey` type.
Methods on an underlying `TaskKey[T]` are basically translated to operating on an underlying `SettingKey[Task[T]]` (and they both wrap an underlying `AttributeKey`).
For example, `a := 3` for a SettingKey _a_ will very roughly translate to `setting(a, value(3))`.
For a TaskKey _a_, it will roughly translate to `setting(a, value( task { 3 } ) )`.
See [main/Structure.scala](https://github.com/harrah/xsbt/blob/0.11/main/Structure.scala) for details.
### Settings definitions
sbt also provides a way to define these settings in a file (build.sbt and Build.scala).
This is done for build.sbt using basic parsing and then passing the resulting chunks of code to `compile/Eval.scala`.
For all definitions, sbt manages the classpaths and recompilation process to obtain the settings.
It also provides a way for users to define project, task, and configuration delegation, which ends up being used by the delegates function.

@ -1,17 +0,0 @@
* [[Home]] - Overview of sbt
* [[Getting Started Guide|Getting Started Welcome]] - START HERE
* [[FAQ]] - Questions, answered.
* [[Index]] - Find types, values, and methods
* [[Community]] - source, forums, releases
* [[Examples]]
* [[Detailed Topics]] - deep dive docs
* [[Extending sbt|Extending]] - internals docs
* [[API Documentation|http://harrah.github.com/xsbt/latest/api/index.html]]
* [[Build Loaders]]
* [[Commands]]
* [[Input Tasks]]
* [[Plugins Best Practices]]
* [[Plugins]]
* [[Settings engine|Settings Core]]
* [[Command Line Applications]]
* [[State objects|Build State]]

765
FAQ.md

@ -1,765 +0,0 @@
[API Documentation]: http://harrah.github.com/xsbt/latest/api/index.html
[ChangeReport]: http://harrah.github.com/xsbt/latest/api/sbt/ChangeReport.html
[FileFunction.cached]: http://harrah.github.com/xsbt/latest/api/sbt/FileFunction$.html
[FileUtilities]: http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/FileUtilities$object.html
[FilesInfo API]: http://harrah.github.com/xsbt/latest/api/sbt/FilesInfo$.html
[IO]: http://harrah.github.com/xsbt/latest/api/sbt/IO$.html
[Path 0.11]: http://harrah.github.com/xsbt/latest/api/sbt/Path$.html
[Path 0.7]: http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/Path.html
[Path object]: http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/Path$.html
[PathFinder 0.11]: http://harrah.github.com/xsbt/latest/api/sbt/PathFinder.html
[PathFinder 0.7]: http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/PathFinder.html
[PathFinder object]: http://harrah.github.com/xsbt/latest/api/sbt/PathFinder$.html
[RichFile]: http://harrah.github.com/xsbt/latest/api/sbt/RichFile.html
[State]: http://harrah.github.com/xsbt/latest/api/sbt/State$.html
[checksum report]: https://issues.sonatype.org/browse/MVNCENTRAL-46
[hyperlinked sources]: http://harrah.github.com/xsbt/latest/sxr/index.html
[issue tracker]: https://github.com/harrah/xsbt/issues
[mailing list]: http://groups.google.com/group/simple-build-tool/
[migration page]: https://github.com/harrah/xsbt/wiki/Migrating-from-SBT-0.7.x-to-0.10.x
[original proposal]: https://gist.github.com/404272
[sbt-launch.jar]: http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.11.3-2/sbt-launch.jar
[xsbt-web-plugin]: https://github.com/siasia/xsbt-web-plugin
[xsbt-webstart]: https://github.com/ritschwumm/xsbt-webstart
[xsbti.ComponentProvider]: http://harrah.github.com/xsbt/latest/api/xsbti/ComponentProvider.html
# Frequently Asked Questions
## Project Information
### How do I get help?
Please use the [mailing list] for questions, comments, and discussions.
* Please state the problem or question clearly and provide enough context. Code examples and build transcripts are often useful when appropriately edited.
* Providing small, reproducible examples is a good way to get help quickly.
* Include relevant information such as the version of sbt and Scala being used.
### How do I report a bug?
Please use the [issue tracker] to report confirmed bugs. Do not use it to ask questions. If you are uncertain whether something is a bug, please ask on the [mailing list] first.
### How can I help?
* Fix mistakes that you notice on the wiki.
* Make [bug reports][issue tracker] that are clear and reproducible.
* Answer questions on the [mailing list].
* Fix issues that affect you. [Fork, fix, and submit a pull request](http://help.github.com/fork-a-repo/).
* Implement features that are important to you. There is an [[Opportunities]] page for some ideas, but the most useful contributions are usually ones you want yourself.
For more details on developing sbt, see [Developing.pdf](http://harrah.github.com/xsbt/Developing.pdf)
## 0.7 to 0.10+ Migration
### How do I migrate from 0.7 to 0.10+?
See the [[migration page|Migrating-from-SBT-0.7.x-to-0.10.x]]
first, then the following questions.
### Where has 0.7's `lib_managed` gone?
By default, sbt 0.11 loads managed libraries from your ivy cache without copying them to a `lib_managed` directory. This fixes some bugs with the previous solution and keeps your project directory small. If you want to insulate your builds from the ivy cache being cleared, set `retrieveManaged := true` and the dependencies will be copied to `lib_managed` as a build-local cache (while avoiding the issues of `lib_managed` in 0.7.x).
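For example, a one-line `build.sbt` entry enables this build-local cache:

```scala
// Copy managed dependencies into lib_managed so the build
// does not depend on the shared Ivy cache staying populated.
retrieveManaged := true
```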
This does mean that existing solutions for sharing libraries with your favoured IDE may not work. There are 0.11.x plugins for IDEs being developed:
* IntelliJ IDEA: [[https://github.com/mpeltonen/sbt-idea]]
* Netbeans: [[https://github.com/remeniuk/sbt-netbeans-plugin]]
* Eclipse: [[https://github.com/typesafehub/sbteclipse]]
### What are the commands I can use in 0.11 vs. 0.7?
For a list of commands, run `help`. For details on a specific
command, run `help <command>`. To view a list of tasks defined on
the current project, run `tasks`. Alternatively, see the
[[Running|Getting Started Running]] page in the Getting Started Guide for descriptions of common commands and tasks.
If in doubt, start by trying the old command; it may just work. The built-in TAB completion will also assist you: press TAB at the beginning of a line to see what is available.
The following commands work pretty much as in 0.7 out of the box:
```text
reload
update
compile
test
test-only
publish-local
exit
```
### Why have the resolved dependencies in a multi-module project changed since 0.7?
sbt 0.10 fixes a flaw in how dependencies get resolved in multi-module projects. This change ensures that only one version of a library appears on a classpath.
Use `last update` to view the debugging output for the last `update` run. Use `show update` to view a summary of files comprising managed classpaths.
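For example, entered at the sbt prompt after a resolution run, `last update` replays the full debug log of the previous resolution and `show update` prints the resolved report:

```text
> last update
> show update
```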
### My tests all run really fast but some are broken that weren't in 0.7!
Be aware that compilation and tests run in parallel by default in sbt 0.11. If your test code isn't thread-safe then you may want to change this behaviour by adding one of the following to your `build.sbt`:
```scala
// Execute tests in the current project serially.
// Tests from other projects may still run concurrently.
parallelExecution in Test := false
// Execute everything serially (including compilation and tests)
parallelExecution := false
```
### How do I set log levels in 0.11 vs. 0.7?
`warn`, `info`, `debug` and `error` don't work any more.
The new syntax in the sbt 0.11.x shell is:
```text
> set logLevel := Level.Warn
```
Or in your `build.sbt` file write:
```scala
logLevel := Level.Warn
```
### What happened to the web development and Web Start support since 0.7?
Web application support was split out into a plugin. See the [xsbt-web-plugin] project.
For an early version of an xsbt Web Start plugin, visit the [xsbt-webstart] project.
### How are inter-project dependencies different in 0.11 vs. 0.7?
In 0.11, there are three types of project dependencies (classpath, execution, and configuration) and they are independently defined. These were combined in a single dependency type in 0.7.x. A declaration like:
```scala
lazy val a = project("a", "A")
lazy val b = project("b", "B", a)
```
meant that the `B` project had a classpath and execution dependency on `A` and `A` had a configuration dependency on `B`. Specifically, in 0.7.x:
1. Classpath: Classpaths for `A` were available on the appropriate classpath for `B`.
1. Execution: A task executed on `B` would be executed on `A` first.
1. Configuration: For some settings, if they were not overridden in `A`, they would default to the value provided in `B`.
In 0.11, declare the specific type of dependency you want. Read
about [[multi-project builds|Getting Started Multi-Project]] in
the Getting Started Guide for details.
### Where did class/object X go since 0.7?
| 0.7 | 0.11 |
| --- | --- |
| [FileUtilities] | [IO] |
| [Path class][Path 0.7] and [object][Path object] | [Path object][Path 0.11], `File`, [RichFile] |
| [PathFinder class][PathFinder 0.7] | `Seq[File]`, [PathFinder class][PathFinder 0.11], [PathFinder object][PathFinder object] |
### Where can I find plugins for 0.11?
See [[sbt 0.10 plugins list]] for a list of currently available plugins.
## Usage
### My last command didn't work but I can't see an explanation. Why?
sbt 0.11 by default suppresses most stack traces and debugging information. It has the nice side effect of giving you less noise on screen, but as a newcomer it can leave you lost for explanation. To see the previous output of a command at a higher verbosity, type `last <task>` where `<task>` is the task that failed or that you want to view detailed output for. For example, if you find that your `update` fails to load all the dependencies as you expect you can enter:
```text
> last update
```
and it will display the full output from the last run of the `update` command.
### How do I disable ANSI codes in the output?
Sometimes sbt doesn't detect that ANSI codes aren't supported and you get output that looks like:
```
[0m[ [0minfo [0m] [0mSet current project to root
```
or ANSI codes are supported but you want to disable colored output. To completely disable ANSI codes, set the `sbt.log.noformat` system property to `true`. For example,
```
sbt -Dsbt.log.noformat=true
```
### How can I start a Scala interpreter (REPL) with sbt project configuration (dependencies, etc.)?
You may run `sbt console`.
## Build definitions
### What are the `:=`, `~=`, `<<=`, `+=`, `++=`, `<+=`, and `<++=` methods?
These are methods on keys used to construct a `Setting`. The Getting Started Guide covers all these methods, see [[.sbt build definition|Getting Started Basic Def]] and
[[more about settings|Getting Started More About Settings]] for example.
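As a hedged sketch (the `"com.example" % "demo"` coordinates below are made up for illustration), each of these methods constructs a `Setting` from a key:

```scala
name := "demo"                        // := assigns a constant value
name ~= (_.capitalize)                // ~= transforms the current value
name <<= organization(_ + "-demo")    // <<= initializes from other keys
libraryDependencies += "junit" % "junit" % "4.10" % "test"        // += appends one value
libraryDependencies ++= Seq("junit" % "junit" % "4.10" % "test")  // ++= appends a sequence
libraryDependencies <+= version(v => "com.example" % "demo" % v)  // <+= appends a computed value
libraryDependencies <++= version(v => Seq("com.example" % "demo" % v)) // <++= appends computed values
```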
### What is the `%` method?
It's used to create a `ModuleID` from strings, when specifying
managed dependencies. Read the Getting Started Guide about
[[library dependencies|Getting Started Library Dependencies]].
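A minimal illustration; the derby coordinate is the standard example from the Getting Started Guide, while the `com.example` one is hypothetical. The `%%` variant additionally appends the Scala version to the artifact name:

```scala
// groupID % artifactID % revision builds a ModuleID
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3"
// %% appends the Scala version to the artifact name, e.g. demo_2.9.1
libraryDependencies += "com.example" %% "demo" % "0.1.0"
```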
### What is `ModuleID`, `Project`, ...?
To figure out an unknown type or method, have a look at the
[[Getting Started Guide|Getting Started Welcome]] if you have
not. Also try the [[Index]] of commonly used methods, values, and
types, the [API Documentation], and the [hyperlinked sources].
### How can one key depend on multiple other keys?
See [[More About Settings|Getting Started More About Settings]] in the Getting Started Guide,
scroll down to the discussion of `<<=` with multiple keys.
Briefly: you need to use a tuple rather than a single key by itself. Scala's syntax for a tuple uses parentheses, like `(a, b, c)`.
If you're creating a value for a task key, then you'll use `map`:
```scala
packageBin in Compile <<= (name, organization, version) map { (n, o, v) => file(o + "-" + n + "-" + v + ".jar") }
```
If you're creating a value for a setting key, then you'll use `apply`:
```scala
name <<= (name, organization, version) apply { (n, o, v) => "project " + n + " from " + o + " version " + v }
```
Typing `apply` is optional in that code, since Scala treats any object with an `apply` method as a function. See [[More About Settings|Getting Started More About Settings]] for a longer explanation.
To learn about task keys vs. setting keys, read [[.sbt build definition|Getting Started Basic Def]].
### How do I add files to a jar package?
The files included in an artifact are configured by default by a task `mappings` that is scoped by the relevant package task. The `mappings` task returns a sequence `Seq[(File,String)]` of mappings from the file to include to the path within the jar. See [[Mapping Files]] for details on creating these mappings.
For example, to add generated sources to the packaged source artifact:
```scala
mappings in (Compile, packageSrc) <++=
  (sourceManaged in Compile, managedSources in Compile) map { (base, srcs) =>
    import Path.{flat, relativeTo}
    srcs x (relativeTo(base) | flat)
  }
```
This takes sources from the `managedSources` task and relativizes them against the `sourceManaged` base directory, falling back to a flattened mapping. If a source generation task doesn't write the sources to the `sourceManaged` directory, the mapping function would have to be adjusted to try relativizing against additional directories or something more appropriate for the generator.
### How can I generate source code or resources?
sbt provides standard hooks for adding source or resource generation tasks. A generation task should generate sources in a subdirectory of `sourceManaged` for sources or `resourceManaged` for resources and return a sequence of files generated. The key to add the task to is called `sourceGenerators` for sources and `resourceGenerators` for resources. It should be scoped according to whether the generated files are main (`Compile`) or test (`Test`) sources or resources. This basic structure looks like:
```scala
sourceGenerators in Compile <+= <your Task[Seq[File]] here>
```
For example, assuming a method `def makeSomeSources(base: File): Seq[File]`,
```scala
sourceGenerators in Compile <+= sourceManaged in Compile map { outDir: File =>
  makeSomeSources(outDir / "demo")
}
```
As a specific example, the following generates a hello world source file:
```scala
sourceGenerators in Compile <+= sourceManaged in Compile map { dir =>
  val file = dir / "demo" / "Test.scala"
  IO.write(file, """object Test extends App { println("Hi") }""")
  Seq(file)
}
```
Executing 'run' will print "Hi". Change `Compile` to `Test` to make it a test source. To generate resources, change `sourceGenerators` to `resourceGenerators` and `sourceManaged` to `resourceManaged`. Normally, you would only want to generate sources when necessary and not every run.
By default, generated sources and resources are not included in the packaged source artifact. To do so, add them as you would other mappings. See the `Adding files to a package` section.
### How can a task avoid redoing work if the input files are unchanged?
There is basic support for only doing work when input files have changed or when the outputs haven't been generated yet. This support is primitive and subject to change.
The relevant methods are two overloaded methods called [FileFunction.cached]. Each requires a directory in which to store cached data. Sample usage is:
```scala
// define a task that takes some inputs
// and generates files in an output directory
myTask <<= (cacheDirectory, inputs, target) map {
  (cache: File, inFiles: Seq[File], outDir: File) =>
    // wraps a function taskImpl in an up-to-date check;
    // taskImpl takes the input files and the output directory,
    // generates the output files, and returns the set of generated files
    val cachedFun = FileFunction.cached(cache / "my-task") { (in: Set[File]) =>
      taskImpl(in, outDir): Set[File]
    }
    // apply the cached function to the input files
    cachedFun(inFiles)
}
```
There are two additional arguments for the first parameter list that allow the file tracking style to be explicitly specified. By default, the input tracking style is `FilesInfo.lastModified`, based on a file's last modified time, and the output tracking style is `FilesInfo.exists`, based only on whether the file exists. The other available style is `FilesInfo.hash`, which tracks a file based on a hash of its contents. See the [FilesInfo API] for details.
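For example, a hedged sketch reusing `cache`, `taskImpl`, and `outDir` from the example above, passing the tracking styles explicitly in the first parameter list:

```scala
// Hash the contents of input files instead of comparing
// last-modified times; track outputs by existence only.
val cachedFun = FileFunction.cached(cache / "my-task",
    FilesInfo.hash, FilesInfo.exists) { (in: Set[File]) =>
  taskImpl(in, outDir): Set[File]
}
```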
A more advanced version of `FileFunction.cached` passes a data structure of type [ChangeReport] describing the changes to input and output files since the last evaluation. This version of `cached` also expects the set of files generated as output to be the result of the evaluated function.
## Extending sbt
### How can I add a new configuration?
The following example demonstrates adding a new set of compilation settings and tasks to a new configuration called `samples`. The sources for this configuration go in `src/samples/scala/`. Unspecified settings delegate to those defined for the `compile` configuration. For example, if `scalacOptions` are not overridden for `samples`, the options for the main sources are used.
Options specific to `samples` may be declared like:
```scala
scalacOptions in Samples += "-deprecation"
```
This uses the main options as base options because of `+=`. Use `:=` to ignore the main options:
```scala
scalacOptions in Samples := "-deprecation" :: Nil
```
The example adds all of the usual compilation related settings and tasks to `samples`:
```text
samples:run
samples:run-main
samples:compile
samples:console
samples:console-quick
samples:scalac-options
samples:full-classpath
samples:package
samples:package-src
...
```
#### Example of adding a new configuration
`project/Sample.scala`
```scala
import sbt._
import Keys._

object Sample extends Build {
  // defines a new configuration "samples" that will delegate to "compile"
  lazy val Samples = config("samples") extend(Compile)

  // defines the project to have the "samples" configuration
  lazy val p = Project("p", file("."))
    .configs(Samples)
    .settings(sampleSettings : _*)

  def sampleSettings =
    // adds the default compile/run/... tasks in "samples"
    inConfig(Samples)(Defaults.configSettings) ++
    Seq(
      // (optional) makes "test:compile" depend on "samples:compile"
      compile in Test <<= compile in Test dependsOn (compile in Samples)
    ) ++
    // (optional) declare that the samples binary and
    // source jars should be published
    publishArtifact(packageBin) ++
    publishArtifact(packageSrc)

  def publishArtifact(task: TaskKey[File]): Seq[Setting[_]] =
    addArtifact(artifact in (Samples, task), task in Samples).settings
}
```
### How do I add a test configuration?
See the `Additional test configurations` section of [[Testing]].
### How can I create a custom run task, in addition to `run`?
This answer is extracted from a [mailing list discussion](http://groups.google.com/group/simple-build-tool/browse_thread/thread/4c28ee5b7e18b46a/).
Read the Getting Started Guide up to
[[custom settings|Getting Started Custom Settings]] for background.
A basic run task is created by:
```scala
// this lazy val has to go in a full configuration
lazy val myRunTask = TaskKey[Unit]("my-run-task")
// this can go either in a `build.sbt` or the settings member
// of a Project in a full configuration
fullRunTask(myRunTask, Test, "foo.Foo", "arg1", "arg2")
```
or, if you really want to define it inline (as in a basic `build.sbt` file):
```scala
fullRunTask(TaskKey[Unit]("my-run-task"), Test, "foo.Foo", "arg1", "arg2")
```
If you want to be able to supply arguments on the command line, replace `TaskKey` with `InputKey` and `fullRunTask` with `fullRunInputTask`.
The `Test` part can be replaced with another configuration, such as `Compile`, to use that configuration's classpath.
This run task can be configured individually by specifying the task key in the scope. For example:
```scala
fork in myRunTask := true
javaOptions in myRunTask += "-Xmx6144m"
```
### How can I delegate settings from one task to another task?
Settings [[scoped|Getting Started Scopes]] to one task can fall
back to another task if undefined in the first task. This is
called delegation.
The following key definitions specify that settings for `myRun` delegate to `aRun`
```scala
val aRun = TaskKey[Unit]("a-run", "A run task.")
// The last parameter to TaskKey.apply here is a repeated one
val myRun = TaskKey[Unit]("my-run", "Custom run task.", aRun)
```
In use, this looks like:
```scala
// Make the run task as before.
fullRunTask(myRun, Compile, "pkg.Main", "arg1", "arg2")
// If fork in myRun is not explicitly set,
// then this also configures myRun to fork.
// If fork in myRun is set, it overrides this setting
// because it is more specific.
fork in aRun := true
// Appends "-Xmx2G" to the current options for myRun.
// Because we haven't defined them explicitly,
// the current options are delegated to aRun.
// So, this says to use the same options as aRun
// plus -Xmx2G.
javaOptions in myRun += "-Xmx2G"
```
### How should I express a dependency on an outside tool such as proguard?
Tool dependencies are used to implement a task and are not needed by project source code. These dependencies can be declared in their own configuration and classpaths. These are the steps:
1. Define a new [[configuration|Configurations]].
2. Declare the tool [[dependencies|Library Management]] in that configuration.
3. Define a classpath that pulls the dependencies from the [[Update Report]] produced by `update`.
4. Use the classpath to implement the task.
As an example, consider a `proguard` task. This task needs the ProGuard jars in order to run the tool. Assuming a new configuration defined in the full build definition (#1):
```scala
val ProguardConfig = config("proguard") hide
```
the following are settings that implement #2-#4:
```scala
// Add proguard as a dependency in the custom configuration.
// This keeps it separate from project dependencies.
libraryDependencies +=
  "net.sf.proguard" % "proguard" % "4.4" % ProguardConfig.name

// Extract the dependencies from the UpdateReport.
managedClasspath in proguard <<=
  (classpathTypes in proguard, update) map { (ct, report) =>
    Classpaths.managedJars(ProguardConfig, ct, report)
  }

// Use the dependencies in a task, typically by putting them
// in a ClassLoader and reflectively calling an appropriate
// method.
proguard <<= (managedClasspath in proguard) map { (cp: Classpath) =>
  // ... do something with 'cp', which includes proguard ...
}
```
### How would I change sbt's classpath dynamically?
It is possible to register additional jars that will be placed on sbt's classpath (since version 0.10.1).
Through [State], it is possible to obtain a [xsbti.ComponentProvider], which manages application components.
Components are groups of files in the `~/.sbt/boot/` directory and, in this case, the application is sbt.
In addition to the base classpath, components in the "extra" component are included on sbt's classpath.
(Note: the additional components on an application's classpath are declared by the `components` property in the `[main]` section of the launcher configuration file `boot.properties`.)
Because these components are added to the `~/.sbt/boot/` directory and `~/.sbt/boot/` may be read-only, this can fail.
In this case, the user has generally set sbt up this way intentionally, so error recovery is not typically necessary (just a short error message explaining the situation).
#### Example of dynamic classpath augmentation
The following code can be used where a `State => State` is required, such as in the `onLoad` setting (described below) or in a [[command|Commands]].
It adds some files to the "extra" component and reloads sbt if they were not already added.
Note that reloading will drop the user's session state.
```scala
def augment(extra: Seq[File])(s: State): State =
  {
    // Get the component provider
    val cs: xsbti.ComponentProvider = s.configuration.provider.components()

    // Adds the files in 'extra' to the "extra" component
    // under an exclusive machine-wide lock.
    // The returned value is 'true' if files were actually copied and 'false'
    // if the target files already exist (based on name only).
    val copied: Boolean = s.locked(cs.lockFile, cs.addToComponent("extra", extra.toArray))

    // If files were copied, reload so that we use the new classpath.
    if(copied) s.reload else s
  }
```
### How can I take action when the project is loaded or unloaded?
The single, global setting `onLoad` is of type `State => State` (see [[Build State]]) and is executed once, after all projects are built and loaded. There is a similar hook `onUnload` for when a project is unloaded. Project unloading typically occurs as a result of a `reload` command or a `set` command. Because the `onLoad` and `onUnload` hooks are global, modifying this setting typically involves composing a new function with the previous value. The following example shows the basic structure of defining `onLoad`:
```scala
// Compose our new function 'f' with the existing transformation.
{
  val f: State => State = ...
  onLoad in Global ~= (f compose _)
}
```
#### Example of project load/unload hooks
The following example maintains a count of the number of times a project has been loaded and prints that number:
```scala
{
  // the key for the current count
  val key = AttributeKey[Int]("load-count")
  // the State transformer
  val f = (s: State) => {
    val previous = s get key getOrElse 0
    println("Project load count: " + previous)
    s.put(key, previous + 1)
  }
  onLoad in Global ~= (f compose _)
}
```
## Errors
### Type error, found: `Initialize[Task[String]]`, required: `Initialize[String]` or found: `TaskKey[String]` required: `Initialize[String]`
This means that you are trying to supply a task when defining a
setting key. See
[[.sbt build definition|Getting Started Basic Def]] for the
difference between task and setting keys, and
[[more about settings|Getting Started More About Settings]] for
more on how to define one key in terms of other keys.
Setting keys are only evaluated once, on project load, while tasks
are evaluated repeatedly. Defining a setting in terms of a task does
not make sense because tasks must be re-evaluated every time.
One way to get a task when you didn't want one is to use the `map`
method instead of the `apply`
method. [[More about settings|Getting Started More About Settings]]
covers this topic as well.
Suppose we define these keys in `./project/Build.scala` (for details, see [[.scala build definition|Getting Started Full Def]]).
```
val baseSetting = SettingKey[String]("base-setting")
val derivedSetting = SettingKey[String]("derived-setting")
val baseTask = TaskKey[Long]("base-task")
val derivedTask = TaskKey[String]("derived-task")
```
Let's define an initialization for `base-setting` and `base-task`. We will then use these as inputs to other setting and task initializations.
```scala
baseSetting := "base setting"
baseTask := { System.currentTimeMillis() }
```
Then this will not work:
```
// error: found: Initialize[Task[String]], required: Initialize[String]
derivedSetting <<= baseSetting.map(_.toString)
derivedSetting <<= baseTask.map(_.toString)
derivedSetting <<= (baseSetting, baseTask).map((a, b) => a.toString + b.toString)
```
One or more settings can be used as inputs to initialize another setting, using the `apply` method.
```
derivedSetting <<= baseSetting.apply(_.toString)
derivedSetting <<= baseSetting(_.toString)
derivedSetting <<= (baseSetting, baseSetting)((a, b) => a.toString + b.toString)
```
Both settings and tasks can be used to initialize a task, using the `map` method.
```
derivedTask <<= baseSetting.map(_.toString)
derivedTask <<= baseTask.map(_.toString)
derivedTask <<= (baseSetting, baseTask).map((a, b) => a.toString + b.toString)
```
As the first example above shows, it is a compile-time error to use `map` to initialize a setting.
It is not allowed to use a task as input to a settings initialization with `apply`:
```
// error: value apply is not a member of TaskKey[Long]
derivedSetting <<= baseTask.apply(_.toString)
// error: value apply is not a member of TaskKey[Long]
derivedTask <<= baseTask.apply(_.toString)
// error: value apply is not a member of (sbt.SettingKey[String], sbt.TaskKey[Long])
derivedTask <<= (baseSetting, baseTask).apply((a, b) => a.toString + b.toString)
```
Finally, it is not directly possible to use `apply` to initialize a task.
```
// error: found String, required Task[String]
derivedTask <<= baseSetting.apply(_.toString)
```
### On project load, "Reference to uninitialized setting"
Setting initializers are executed in order. If the initialization
of a setting depends on other settings that have not been
initialized, sbt will stop loading. This can happen using `+=`,
`++=`, `<<=`, `<+=`, `<++=`, and `~=`. (To understand those
methods, [[read this|Getting Started More About Settings]].)
In this example, we try to append a library to `libraryDependencies` before it is initialized with an empty sequence.
```
object MyBuild extends Build {
  val root = Project(id = "root", base = file("."),
    settings = Seq(
      libraryDependencies += "commons-io" % "commons-io" % "1.4" % "test"
    )
  )
}
```
To correct this, include the default settings, which includes `libraryDependencies := Seq()`.
```
settings = Defaults.defaultSettings ++ Seq(
  libraryDependencies += "commons-io" % "commons-io" % "1.4" % "test"
)
```
A more subtle variation of this error occurs when using
[[scoped settings|Getting Started Scopes]].
```
// error: Reference to uninitialized setting
settings = Defaults.defaultSettings ++ Seq(
  libraryDependencies += "commons-io" % "commons-io" % "1.2" % "test",
  fullClasspath ~= (_.filterNot(_.data.name.contains("commons-io")))
)
```
Generally, all of the update operators can be expressed in terms of `<<=`. To better understand the error, we can rewrite the setting as:
```
// error: Reference to uninitialized setting
fullClasspath <<= (fullClasspath).map(_.filterNot(_.data.name.contains("commons-io")))
```
This setting differs between the Test and Compile configurations. The solution is to use the scoped setting, both as the input to the initializer and as the setting that we update.
```
fullClasspath in Compile <<= (fullClasspath in Compile).map(_.filterNot(_.data.name.contains("commons-io")))
// or equivalently
fullClasspath in Compile ~= (_.filterNot(_.data.name.contains("commons-io")))
```
## Dependency Management
### How do I resolve a checksum error?
This error occurs when the published checksum, such as a sha1 or md5 hash, differs from the checksum computed for a downloaded artifact, such as a jar or pom.xml. An example of such an error is:
```
[warn] problem while downloading module descriptor:
http://repo1.maven.org/maven2/commons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.pom:
invalid sha1: expected=ad3fda4adc95eb0d061341228cc94845ddb9a6fe computed=0ce5d4a03b07c8b00ab60252e5cacdc708a4e6d8 (1070ms)
```
The invalid checksum should generally be reported to the repository owner (as [was done][checksum report] for the above error). In the meantime, you can temporarily disable checking with the following setting:
```scala
checksums in update := Nil
```
See [[Library Management]] for details.
### I've added a plugin, and now my cross-compilations fail!
This problem crops up frequently. Plugins are only published for the Scala version that sbt uses (currently, 2.9.1). You can still _use_ plugins during cross-compilation, because sbt only looks for a 2.9.1 version of the plugin.
**... unless you specify the plugin in the wrong place!**
A typical mistake is to put global plugin definitions in `~/.sbt/plugins.sbt`. **THIS IS WRONG.** `.sbt` files in `~/.sbt` are loaded for _each_ build--that is, for _each_ cross-compilation. So, if you build for Scala 2.9.0, sbt will try to find a version of the plugin that's compiled for 2.9.0--and it usually won't find one. That's because sbt doesn't _know_ the dependency is a plugin.
To tell sbt that the dependency is an sbt plugin, make sure you define your global plugins in a `.sbt` file in `~/.sbt/plugins/`. sbt knows that files in `~/.sbt/plugins` are only to be used by sbt itself, not as part of the general build definition. If you define your plugins in a file under _that_ directory, they won't foul up your cross-compilations. Any file name ending in `.sbt` will do, but most people use `~/.sbt/plugins/build.sbt` or `~/.sbt/plugins/plugins.sbt`.
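For example, a global plugin declaration in `~/.sbt/plugins/build.sbt` might look like this (the plugin coordinates here are hypothetical, for illustration only):

```scala
// ~/.sbt/plugins/build.sbt -- hypothetical plugin coordinates
addSbtPlugin("com.example" % "example-sbt-plugin" % "0.1.0")
```

Because this file lives under `~/.sbt/plugins/`, sbt knows to resolve it against the sbt plugin repository layout rather than treating it as an ordinary project dependency.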
## Miscellaneous
### How do I use the Scala interpreter in my code?
sbt runs tests in the same JVM as sbt itself, and the Scala classes are not in the same class loader as the application classes. Therefore, when using the Scala interpreter, it is important to set it up properly to avoid an error message like:
```
Failed to initialize compiler: class scala.runtime.VolatileBooleanRef not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.
```
The key is to initialize the Settings for the interpreter using _embeddedDefaults_. For example:
```scala
val settings = new Settings
settings.embeddedDefaults[MyType]
val interpreter = new Interpreter(settings, ...)
```
Here, `MyType` is a representative class that should be included on the interpreter's classpath and in its application class loader. For more background, see the [original proposal] that resulted in _embeddedDefaults_ being added.
Similarly, use a representative class as the type argument when using the _break_ and _breakIf_ methods of _ILoop_, as in the following example:
```scala
def x(a: Int, b: Int) = {
  import scala.tools.nsc.interpreter.ILoop
  ILoop.breakIf[MyType](a != b, "a" -> a, "b" -> b)
}
```

@ -1,272 +0,0 @@
[Keys]: http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html
# `.sbt` Build Definition
[[Previous|Getting Started Running]] _Getting Started Guide page 6 of 14._ [[Next|Getting Started Scopes]]
This page describes sbt build definitions, including some "theory" and the
syntax of `build.sbt`. It assumes you know how to [[use sbt|Getting Started Running]] and
have read the previous pages in the Getting Started Guide.
## `.sbt` vs. `.scala` Definition
An sbt build definition can contain files ending in `.sbt`,
located in the base directory, and files ending in `.scala`,
located in the `project` subdirectory of the base directory.
You can use either one exclusively, or use both. A good approach
is to use `.sbt` files for most purposes, and use `.scala` files
only to contain what can't be done in `.sbt`:
- to customize sbt (add new settings or tasks)
- to define nested sub-projects
This page discusses `.sbt` files. See
[[.scala build definition|Getting Started Full Def]] (later in
Getting Started) for more on `.scala` files and how they relate to
`.sbt` files.
## What is a build definition?
**PLEASE READ THIS SECTION**
After examining a project and processing any build definition files, sbt
will end up with an immutable map (set of key-value pairs) describing the
build.
For example, one key is `name` and it maps to a string value, the name of
your project.
_Build definition files do not affect sbt's map directly._
Instead, the build definition creates a huge list of objects with type
`Setting[T]` where `T` is the type of the value in the map. (Scala's
`Setting[T]` is like `Setting<T>` in Java.) A `Setting` describes a
_transformation to the map_, such as adding a new key-value pair or
appending to an existing value. (In the spirit of functional programming, a
transformation returns a new map; it does not update the old map in place.)
In `build.sbt`, you might create a `Setting[String]` for the name of your
project like this:
```scala
name := "hello"
```
This `Setting[String]` transforms the map by adding (or replacing) the
`name` key, giving it the value `"hello"`. The transformed map becomes sbt's
new map.
To create its map, sbt first sorts the list of settings so that
all changes to the same key are made together, and values that depend on
other keys are processed after the keys they depend on. Then sbt walks over
the sorted list of `Setting` and applies each one to the map in turn.
Summary: _A build definition defines a list of `Setting[T]`, where a
`Setting[T]` is a transformation affecting sbt's map of key-value pairs and
`T` is the type of each value_.
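The map-transformation idea above can be sketched in plain Scala. This is only an analogy: the `BuildMap`, `SettingLike`, and `assign` names are invented for illustration and are not sbt's real API.

```scala
// A toy model of sbt's settings (an analogy, not sbt's real implementation):
// each "setting" is a pure transformation from one immutable map to the next.
object SettingModel {
  type BuildMap = Map[String, Any]
  type SettingLike = BuildMap => BuildMap

  // A toy analogue of `:=`: add or replace a key's value.
  def assign(key: String, value: Any): SettingLike =
    m => m.updated(key, value)

  val settingsList: List[SettingLike] =
    List(assign("name", "hello"), assign("version", "1.0"))

  // sbt applies each transformation in order to produce the final map.
  val buildMap: BuildMap =
    settingsList.foldLeft(Map.empty: BuildMap)((m, s) => s(m))

  def main(args: Array[String]): Unit =
    println(buildMap)
}
```

Real sbt settings are richer than this (they carry scopes and dependency information so the list can be sorted before application), but the "list of transformations folded over an immutable map" picture holds.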
## How `build.sbt` defines settings
`build.sbt` defines a `Seq[Setting[_]]`; it's a list of Scala expressions, separated by blank lines, where each one becomes one element in the sequence. If you put `Seq(` in front of the `.sbt` file and `)` at the end and replace the blank lines with commas, you'd be looking at the equivalent `.scala` code.
Here's an example:
```scala
name := "hello"
version := "1.0"
scalaVersion := "2.9.1"
```
A `build.sbt` file is a list of `Setting`, separated by blank lines. Each
`Setting` is defined with a Scala expression.
The expressions in `build.sbt` are independent of one another, and
they are expressions, rather than complete Scala statements. An
implication of this is that you can't define a top-level `val`,
`object`, class, or method in `build.sbt`.
On the left, `name`, `version`, and `scalaVersion` are _keys_. A
key is an instance of `SettingKey[T]`, `TaskKey[T]`, or
`InputKey[T]` where `T` is the expected value type. The kinds of
key are explained more below.
Keys have a method called `:=`, which returns a `Setting[T]`. You could
use a Java-like syntax to call the method:
```scala
name.:=("hello")
```
But Scala allows `name := "hello"` instead (in Scala, any method can use either syntax).
The `:=` method on key `name` returns a `Setting`, specifically a
`Setting[String]`. `String` also appears in the type of `name` itself, which
is `SettingKey[String]`. In this case, the returned `Setting[String]` is
a transformation to add or replace the `name` key in sbt's map, giving it
the value `"hello"`.
If you use the wrong value type, the build definition will not compile:
```scala
name := 42 // will not compile
```
### Settings are separated by blank lines
You can't write a `build.sbt` like this:
```scala
// will NOT work, no blank lines
name := "hello"
version := "1.0"
scalaVersion := "2.9.1"
```
sbt needs some kind of delimiter to tell where one expression stops and the next begins.
`.sbt` files contain a list of Scala expressions, not a single Scala program. These expressions have to be split up and passed to the compiler individually.
If you want a single Scala program, use [[.scala files|Getting Started Full Def]] rather than `.sbt` files; `.sbt` files are optional. [[Later on|Getting Started Full Def]] this guide explains how to use `.scala` files. (Preview: the same settings expressions found in a `.sbt` file can always be listed in a `Seq[Setting]` in a `.scala` file instead.)
## Keys are defined in the Keys object
The built-in keys are just fields in an object called [Keys]. A
`build.sbt` implicitly has an `import sbt.Keys._`, so
`sbt.Keys.name` can be referred to as `name`.
Custom keys may be defined in a
[[.scala file|Getting Started Full Def]] or a [[plugin|Getting Started Using Plugins]].
## Other ways to transform settings
Replacement with `:=` is the simplest transformation, but there are several
others. For example you can append to a list value with `+=`.
The other transformations require an understanding of [[scopes|Getting Started Scopes]], so the
[[next page|Getting Started Scopes]] is about scopes and the
[[page after that|Getting Started More About Settings]] goes into more detail about settings.
## Task Keys
There are three flavors of key:
- `SettingKey[T]`: a key with a value computed once (the value is
computed one time when loading the project, and kept around).
- `TaskKey[T]`: a key with a value that has to be recomputed each time,
potentially creating side effects.
- `InputKey[T]`: a task key which has command line arguments as
input. The Getting Started Guide doesn't cover `InputKey`,
but when you finish this guide, check out [[Input Tasks]] for more.
A `TaskKey[T]` is said to define a _task_. Tasks are operations such as
`compile` or `package`. They may return `Unit` (`Unit` is Scala for `void`),
or they may return a value related to the task, for example `package` is a
`TaskKey[File]` and its value is the jar file it creates.
Each time you start a task execution, for example by typing `compile` at the
interactive sbt prompt, sbt will re-run any tasks involved exactly once.
sbt's map describing the project can keep around a fixed string value for a setting such
as `name`, but it has to keep around some executable code for a task such as
`compile` -- even if that executable code eventually returns a string, it
has to be re-run every time.
_A given key always refers to either a task or a plain setting._ That is,
"taskiness" (whether to re-run each time) is a property of the key, not the
value.
Using `:=`, you can assign a computation to a task, and that computation will be
re-run each time:
```scala
hello := { println("Hello!") }
```
From a type-system perspective, the `Setting` created from a task key is
slightly different from the one created from a setting key. `taskKey := 42`
results in a `Setting[Task[T]]` while `settingKey := 42` results in a
`Setting[T]`. For most purposes this makes no difference; the task key still
creates a value of type `T` when the task executes.
The `T` vs. `Task[T]` type difference has this implication: a setting key
can't depend on a task key, because a setting key is evaluated only once on project load, and not
re-run. More on this in [[more about settings|Getting Started More About Settings]], coming up soon.
## Keys in sbt interactive mode
In sbt's interactive mode, you can type the name of any task to
execute that task. This is why typing `compile` runs the compile
task. `compile` is a task key.
If you type the name of a setting key rather than a task key, the
value of the setting key will be displayed. Typing a task key name
executes the task but doesn't display the resulting value; to see
a task's result, use `show <task name>` rather than plain `<task
name>`.
In build definition files, keys are named with `camelCase` following Scala
convention, but the sbt command line uses `hyphen-separated-words`
instead. The hyphen-separated string used in sbt comes from the definition of the key (see
[Keys]). For example, in `Keys.scala`, there's this key:
```scala
val scalacOptions = TaskKey[Seq[String]]("scalac-options", "Options for the Scala compiler.")
```
In sbt you type `scalac-options` but in a build definition file you use `scalacOptions`.
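For example, a `build.sbt` line using this key might append compiler flags (the specific flags here are just examples):

```scala
// build.sbt: append options to the Scala compiler invocation
scalacOptions ++= Seq("-deprecation", "-unchecked")
```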
To learn more about any key, type `inspect <keyname>` at the sbt interactive
prompt. Some of the information `inspect` displays won't make sense yet, but
at the top it shows you the setting's value type and a brief description of
the setting.
## Imports in `build.sbt`
You can place import statements at the top of `build.sbt`; they need not be
separated by blank lines.
There are some implied default imports, as follows:
```scala
import sbt._
import Process._
import Keys._
```
(In addition, if you have [[.scala files|Getting Started Full Def]],
the contents of any `Build` or `Plugin` objects in those files will be
imported. More on that when we get to
[[.scala build definitions|Getting Started Full Def]].)
## Adding library dependencies
To depend on third-party libraries, there are two options. The
first is to drop jars in `lib/` (unmanaged dependencies) and the
other is to add managed dependencies, which will look like this in
`build.sbt`:
```scala
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3"
```
This is how you add a managed dependency on the Apache Derby
library, version 10.4.1.3.
The `libraryDependencies` key involves two complexities: `+=`
rather than `:=`, and the `%` method. `+=` appends to the key's
old value rather than replacing it; this is explained in
[[more about settings|Getting Started More About Settings]]. The
`%` method is used to construct an Ivy module ID from strings,
explained in
[[library dependencies|Getting Started Library Dependencies]].
We'll skip over the details of library dependencies until later in
the Getting Started Guide. There's a
[[whole page|Getting Started Library Dependencies]] covering it
later on.
## Next
Move on to [[learn about scopes|Getting Started Scopes]].

@ -1,104 +0,0 @@
[Keys]: http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html "Keys.scala"
[Defaults]: http://harrah.github.com/xsbt/latest/sxr/Defaults.scala.html "Defaults.scala"
[IO]: http://harrah.github.com/xsbt/latest/api/index.html#sbt.IO$ "IO object"
# Custom Settings and Tasks
[[Previous|Getting Started Multi-Project]] _Getting Started Guide page
13 of 14._ [[Next|Getting Started Summary]]
This page gets you started creating your own settings and tasks.
To understand this page, be sure you've read earlier pages in the
Getting Started Guide, especially
[[build.sbt|Getting Started Basic Def]] and
[[more about settings|Getting Started More About Settings]].
## Defining a key
[Keys] is packed with examples illustrating how to define
keys. Most of the keys are implemented in [Defaults].
Keys have one of three types. `SettingKey` and `TaskKey` are described in
[[.sbt build definition|Getting Started Basic Def]]. Read about `InputKey` on the [[Input Tasks]]
page.
Some examples from [Keys]:
```scala
val scalaVersion = SettingKey[String]("scala-version", "The version of Scala used for building.")
val clean = TaskKey[Unit]("clean", "Deletes files produced by the build, such as generated sources, compiled classes, and task caches.")
```
The key constructors have two string parameters: the name of the key
(`"scala-version"`) and a documentation string (`"The version of Scala used for
building."`).
Remember from [[.sbt build definition|Getting Started Basic Def]] that the type parameter `T` in `SettingKey[T]`
indicates the type of value a setting has. `T` in `TaskKey[T]` indicates the
type of the task's result. Also remember from [[.sbt build definition|Getting Started Basic Def]]
that a setting has a fixed value until project reload, while a task is re-computed
for every "task execution" (every time someone types a command at the sbt
interactive prompt or in batch mode).
Keys may be defined in a `.scala` file (as described in
[[.scala build definition|Getting Started Full Def]]), or in a plugin (as described in
[[using plugins|Getting Started Using Plugins]]). Any `val` found in a `Build` object in your `.scala` build definition files, or any `val` found in a `Plugin` object from a plugin, will be imported automatically into your `.sbt` files.
## Implementing a task
Once you've defined a key, you'll need to use it in some task. You could be
defining your own task, or you could be planning to redefine an existing
task. Either way looks the same; if the task has no dependencies on other
settings or tasks, use `:=` to associate some code with the task key:
```scala
sampleStringTask := System.getProperty("user.home")
sampleIntTask := {
  val sum = 1 + 2
  println("sum: " + sum)
  sum
}
```
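The `sampleStringTask` and `sampleIntTask` keys above are not built-in; a plausible sketch of their definitions (hypothetical names and descriptions, placed for example in a `Build` object) would be:

```scala
// Hypothetical definitions for the keys used above
val sampleStringTask = TaskKey[String]("sample-string-task", "A sample string task.")
val sampleIntTask = TaskKey[Int]("sample-int-task", "A sample int task.")
```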
If the task has dependencies, you'd use `<<=` instead of course, as
discussed in [[more about settings|Getting Started More About Settings]].
The hardest part about implementing tasks is often not sbt-specific; tasks
are just Scala code. The hard part could be writing the "meat" of your task
that does whatever you're trying to do. For example, maybe you're trying to
format HTML in which case you might want to use an HTML library (you would
[[add a library dependency to your build definition|Getting Started Using Plugins]] and
write code based on the HTML library, perhaps).
sbt has some utility libraries and convenience functions, in particular you
can often use the convenient APIs in [IO] to manipulate files and directories.
## Extending but not replacing a task
If you want to run an existing task while also taking another action, use
`~=` or `<<=` to take the existing task as input (which will imply running
that task), and then do whatever else you like after the previous
implementation completes.
```scala
// These two settings are equivalent
intTask <<= intTask map { (value: Int) => value + 1 }
intTask ~= { (value: Int) => value + 1 }
```
## Use plugins!
If you find you have a lot of custom code in `.scala` files, consider moving
it to a plugin for re-use across multiple projects.
It's very easy to create a plugin, as [[teased earlier|Getting Started Using Plugins]] and
[[discussed at more length here|Plugins]].
## Next
This page has been a quick taste; there's much much more about custom tasks
on the [[Tasks]] page.
You're at the end of Getting Started! There's a [[brief recap|Getting Started Summary]].

@ -1,83 +0,0 @@
[Maven]: http://maven.apache.org/
# Directory structure
[[Previous|Getting Started Hello]] _Getting Started Guide page 4 of 14._ [[Next|Getting Started Running]]
This page assumes you've [[installed sbt|Getting Started Setup]] and seen the [[Hello, World|Getting Started Hello]] example.
## Base directory
In sbt's terminology, the "base directory" is the directory containing the
project. So if you created a project `hello` containing `hello/build.sbt`
and `hello/hw.scala` as in the [[Hello, World|Getting Started Hello]] example, `hello`
is your base directory.
## Source code
Source code can be placed in the project's base directory as with
`hello/hw.scala`. However, most people don't do this for real projects; too
much clutter.
sbt uses the same directory structure as [Maven] for source files by default
(all paths are relative to the base directory):
```text
src/
  main/
    resources/
      <files to include in main jar here>
    scala/
      <main Scala sources>
    java/
      <main Java sources>
  test/
    resources/
      <files to include in test jar here>
    scala/
      <test Scala sources>
    java/
      <test Java sources>
```
Other directories in `src/` will be ignored. Additionally, all hidden directories will be ignored.
## sbt build definition files
You've already seen `build.sbt` in the project's base directory. Other sbt
files appear in a `project` subdirectory.
`project` can contain `.scala` files, which are combined with
`.sbt` files to form the complete build definition. See
[[.scala build definitions|Getting Started Full Def]] for more.
```text
build.sbt
project/
  Build.scala
```
You may see `.sbt` files inside `project/` but they are not equivalent to
`.sbt` files in the project's base directory. Explaining this will
[[come later|Getting Started Full Def]], since you'll need some background
information first.
## Build products
Generated files (compiled classes, packaged jars, managed files, caches, and documentation) will be written to the `target` directory by default.
## Configuring version control
Your `.gitignore` (or equivalent for other version control systems) should contain:
```text
target/
```
Note that this deliberately has a trailing `/` (to match only
directories) and it deliberately has no leading `/` (to match
`project/target/` in addition to plain `target/`).
## Next
Learn about [[running sbt|Getting Started Running]].

@ -1,269 +0,0 @@
# `.scala` Build Definition
[[Previous|Getting Started Library Dependencies]] _Getting Started Guide page
10 of 14._ [[Next|Getting Started Using Plugins]]
This page assumes you've read previous pages in the Getting
Started Guide, _especially_
[[.sbt build definition|Getting Started Basic Def]] and
[[more about settings|Getting Started More About Settings]].
## sbt is recursive
`build.sbt` is so simple, it conceals how sbt really works. sbt builds are
defined with Scala code. That code, itself, has to be built. What better way
than with sbt?
The `project` directory _is another project inside your project_ which knows
how to build your project. The project inside `project` can (in theory) do
anything any other project can do. _Your build definition is an sbt
project._
And the turtles go all the way down. If you like, you can tweak the build
definition of the build definition project, by creating a `project/project/`
directory.
Here's an illustration.
```text
hello/                  # your project's base directory
    Hello.scala         # a source file in your project (could be in
                        #   src/main/scala too)
    build.sbt           # build.sbt is part of the source code for the
                        #   build definition project inside project/
    project/            # base directory of the build definition project
        Build.scala     # a source file in the project/ project,
                        #   that is, a source file in the build definition
        build.sbt       # this is part of a build definition for a project
                        #   in project/project ; build definition's build
                        #   definition
        project/        # base directory of the build definition project
                        #   for the build definition
            Build.scala # source file in the project/project/ project
```
_Don't worry!_ Most of the time you are not going to need all that. But
understanding the principle can be helpful.
By the way: any time files ending in `.scala` or `.sbt` are used, the names
`build.sbt` and `Build.scala` are conventions only. This also means that
multiple files are allowed.
## `.scala` source files in the build definition project
`.sbt` files are merged into their sibling `project`
directory. Looking back at the project layout:
```text
hello/                  # your project's base directory
    build.sbt           # build.sbt is part of the source code for the
                        #   build definition project inside project/
    project/            # base directory of the build definition project
        Build.scala     # a source file in the project/ project,
                        #   that is, a source file in the build definition
```
The Scala expressions in `build.sbt` are compiled alongside and merged with
`Build.scala` (or any other `.scala` files in the `project/` directory).
_`.sbt` files in the base directory for a project become part of the
`project` build definition project also located in that base directory._
The `.sbt` file format is a convenient shorthand for adding
settings to the build definition project.
## Relating `build.sbt` to `Build.scala`
To mix `.sbt` and `.scala` files in your build definition, you need to
understand how they relate.
The following two files illustrate. First, if your project is in `hello`,
create `hello/project/Build.scala` as follows:
```scala
import sbt._
import Keys._

object HelloBuild extends Build {
  val sampleKeyA = SettingKey[String]("sample-a", "demo key A")
  val sampleKeyB = SettingKey[String]("sample-b", "demo key B")
  val sampleKeyC = SettingKey[String]("sample-c", "demo key C")
  val sampleKeyD = SettingKey[String]("sample-d", "demo key D")

  override lazy val settings = super.settings ++
    Seq(sampleKeyA := "A: in Build.settings in Build.scala", resolvers := Seq())

  lazy val root = Project(id = "hello",
    base = file("."),
    settings = Project.defaultSettings ++ Seq(sampleKeyB := "B: in the root project settings in Build.scala"))
}
```
Now, create `hello/build.sbt` as follows:
```scala
sampleKeyC in ThisBuild := "C: in build.sbt scoped to ThisBuild"
sampleKeyD := "D: in build.sbt"
```
Start up the sbt interactive prompt. Type `inspect sample-a` and you should
see (among other things):
```text
[info] Setting: java.lang.String = A: in Build.settings in Build.scala
[info] Provided by:
[info] {file:/home/hp/checkout/hello/}/*:sample-a
```
and then `inspect sample-c` and you should see:
```text
[info] Setting: java.lang.String = C: in build.sbt scoped to ThisBuild
[info] Provided by:
[info] {file:/home/hp/checkout/hello/}/*:sample-c
```
Note that the "Provided by" shows the same scope for the two values. That
is, `sampleKeyC in ThisBuild` in a `.sbt` file is equivalent to placing a
setting in the `Build.settings` list in a `.scala` file. sbt takes
build-scoped settings from both places to create the build definition.
Now, `inspect sample-b`:
```text
[info] Setting: java.lang.String = B: in the root project settings in Build.scala
[info] Provided by:
[info] {file:/home/hp/checkout/hello/}hello/*:sample-b
```
Note that `sample-b` is scoped to the project
(`{file:/home/hp/checkout/hello/}hello`) rather than the entire build
(`{file:/home/hp/checkout/hello/}`).
As you've probably guessed, `inspect sample-d` matches `sample-b`:
```text
[info] Setting: java.lang.String = D: in build.sbt
[info] Provided by:
[info] {file:/home/hp/checkout/hello/}hello/*:sample-d
```
sbt _appends_ the settings from `.sbt` files to the settings from
`Build.settings` and `Project.settings` which means `.sbt` settings take
precedence. Try changing `Build.scala` so it sets key `sample-c` or
`sample-d`, which are also set in `build.sbt`. The setting in `build.sbt`
should "win" over the one in `Build.scala`.
One other thing you may have noticed: `sampleKeyC` and `sampleKeyD` were
available inside `build.sbt`. That's because sbt imports the contents of
your `Build` object into your `.sbt` files. In this case `import
HelloBuild._` was implicitly done for the `build.sbt` file.
In summary:
- In `.scala` files, you can add settings to `Build.settings` for sbt to
find, and they are automatically build-scoped.
- In `.scala` files, you can add settings to `Project.settings` for sbt to
find, and they are automatically project-scoped.
- Any `Build` object you write in a `.scala` file will have its contents
imported and available to `.sbt` files.
- The settings in `.sbt` files are _appended_ to the settings in `.scala`
files.
- The settings in `.sbt` files are project-scoped unless you explicitly
specify another scope.
## When to use `.scala` files
In `.scala` files, you are not limited to a series of settings
expressions. You can write any Scala code including `val`, `object`, and
method definitions.
_One recommended approach is to define settings in `.sbt` files, using
`.scala` files when you need to factor out a `val` or `object` or method
definition._
Because the `.sbt` format allows only single expressions, it doesn't give
you a way to share code among expressions. When you need to share code, you
need a `.scala` file so you can set common variables or define methods.
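As a sketch of that approach, a hypothetical `project/Common.scala` could factor out values and settings shared across `.sbt` expressions (all names here are invented for illustration):

```scala
// project/Common.scala -- a hypothetical helper object shared by the build
import sbt._
import Keys._

object Common {
  // a value reused by several settings
  val defaultVersion = "0.1-SNAPSHOT"

  // a settings sequence that several projects could append
  val commonSettings: Seq[Setting[_]] =
    Seq(organization := "org.example", version := defaultVersion)
}
```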
There's one build definition, which is a nested project inside
your main project. `.sbt` and `.scala` files are compiled
together to create that single definition.
`.scala` files are also required to define multiple projects in a single
build. More on that is coming up in
[[Multi-Project Builds|Getting Started Multi-Project]].
(A disadvantage of using `.sbt` files in a
[[multi-project build|Getting Started Multi-Project]] is that
they'll be spread around in different directories; for that
reason, some people prefer to put settings in their `.scala` files
if they have sub-projects. This will be clearer after you see
how [[multi-project builds|Getting Started Multi-Project]] work.)
## The build definition project in interactive mode
You can switch the sbt interactive prompt to have the build definition
project in `project/` as the current project. To do so, type `reload
plugins`.
```text
> reload plugins
[info] Set current project to default-a0e8e4 (in build file:/home/hp/checkout/hello/project/)
> show sources
[info] ArrayBuffer(/home/hp/checkout/hello/project/Build.scala)
> reload return
[info] Loading project definition from /home/hp/checkout/hello/project
[info] Set current project to hello (in build file:/home/hp/checkout/hello/)
> show sources
[info] ArrayBuffer(/home/hp/checkout/hello/hw.scala)
>
```
As shown above, you use `reload return` to leave the build definition
project and return to your regular project.
## Reminder: it's all immutable
It would be wrong to think that the settings in `build.sbt` are added to the
`settings` fields in `Build` and `Project` objects. Instead, the settings
list from `Build` and `Project`, and the settings from `build.sbt`, are
concatenated into another immutable list which is then used by sbt. The
`Build` and `Project` objects are "immutable configuration" forming only
part of the complete build definition.
In fact, there are other sources of settings as well. They are appended in
this order:
- Settings from `Build.settings` and `Project.settings` in your `.scala` files.
- Your user-global settings; for example in `~/.sbt/build.sbt` you can
define settings affecting _all_ your projects.
- Settings injected by plugins, see [[using plugins|Getting Started Using Plugins]] coming up next.
- Settings from `.sbt` files in the project.
- Build definition projects (i.e. projects inside `project`) have
settings from global plugins (`~/.sbt/plugins`) added.
[[Using plugins|Getting Started Using Plugins]] explains this
more.
Later settings override earlier ones. The entire list of settings forms the
build definition.
## Next
Move on to [[using plugins|Getting Started Using Plugins]].

@ -1,81 +0,0 @@
# Hello, World
[[Previous|Getting Started Setup]] _Getting Started Guide page 3 of 14._ [[Next|Getting Started Directories]]
This page assumes you've [[installed sbt|Getting Started Setup]].
## Create a project directory with source code
A valid sbt project can be a directory containing a single source file. Try creating a directory `hello` with a file `hw.scala`, containing the following:
```scala
object Hi {
  def main(args: Array[String]) = println("Hi!")
}
```
Now from inside the `hello` directory, start sbt and type `run` at the sbt interactive console. On Linux or OS X the commands might look like this:
```text
$ mkdir hello
$ cd hello
$ echo 'object Hi { def main(args: Array[String]) = println("Hi!") }' > hw.scala
$ sbt
...
> run
...
Hi!
```
In this case, sbt works purely by convention. sbt will find the
following automatically:
- Sources in the base directory
- Sources in `src/main/scala` or `src/main/java`
- Tests in `src/test/scala` or `src/test/java`
- Data files in `src/main/resources` or `src/test/resources`
- jars in `lib`
By default, sbt will build projects with the same version of Scala used to run sbt itself.
You can run the project with `sbt run` or enter the [Scala REPL](http://www.scala-lang.org/node/2097)
with `sbt console`. `sbt console` sets up your project's classpath so you can
try out live Scala examples based on your project's code.
## Build definition
Most projects will need some manual setup. Basic build settings go
in a file called `build.sbt`, located in the project's base directory.
For example, if your project is in the directory `hello`, in `hello/build.sbt` you might write:
```scala
name := "hello"
version := "1.0"
scalaVersion := "2.9.1"
```
Notice the blank lines between the items. These aren't just for show; they're required to separate the items. In [[.sbt build definition|Getting Started Basic Def]] you'll learn more about how to write a `build.sbt` file.
If you plan to package your project in a jar, you will want to set at least
the name and version in a `build.sbt`.
## Setting the sbt version
You can force a particular version of sbt by creating a file `hello/project/build.properties`.
In this file, write:
```text
sbt.version=0.11.3
```
From 0.10 onwards, sbt is 99% source compatible from release to release. Still,
setting the sbt version in `project/build.properties` avoids any potential
confusion.
## Next
Learn about the [[file and directory layout|Getting Started Directories]] of an sbt project.

@ -1,231 +0,0 @@
[Keys]: http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html "Keys.scala"
[Apache Ivy]: http://ant.apache.org/ivy/
[Ivy revisions]: http://ant.apache.org/ivy/history/2.2.0/ivyfile/dependency.html#revision
[Extra attributes]: http://ant.apache.org/ivy/history/2.2.0/concept.html#extra
[through Ivy]: http://ant.apache.org/ivy/history/latest-milestone/concept.html#checksum
[ScalaCheck]: https://github.com/rickynils/scalacheck
[specs]: http://code.google.com/p/specs/
[ScalaTest]: http://www.scalatest.org/
# Library Dependencies
[[Previous|Getting Started More About Settings]] _Getting Started Guide page
9 of 14._ [[Next|Getting Started Full Def]]
This page assumes you've read the earlier Getting Started pages, in particular
[[.sbt build definition|Getting Started Basic Def]],
[[scopes|Getting Started Scopes]], and [[more about settings|Getting Started More About Settings]].
Library dependencies can be added in two ways:
- _unmanaged dependencies_ are jars dropped into the `lib` directory
- _managed dependencies_ are configured in the build definition and
downloaded automatically from repositories
## Unmanaged dependencies
Most people use managed dependencies instead of unmanaged. But unmanaged can
be simpler when starting out.
Unmanaged dependencies work like this: add jars to `lib` and they will be
placed on the project classpath. Not much else to it!
You can place test jars such as [ScalaCheck], [specs], and [ScalaTest] in
`lib` as well.
Dependencies in `lib` go on all the classpaths (for `compile`, `test`,
`run`, and `console`). If you wanted to change the classpath for just one of
those, you would adjust `dependencyClasspath in Compile` or
`dependencyClasspath in Runtime` for example. You could use `~=` to get the
previous classpath value, filter some entries out, and return a new
classpath value. See [[more about settings|Getting Started More About Settings]] for details of `~=`.
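As a hedged sketch of that approach (the `"scalacheck"` name filter here is purely illustrative), filtering the runtime classpath with `~=` might look like:

```scala
// keep everything except jars whose file name contains "scalacheck"
dependencyClasspath in Runtime ~= { cp =>
  cp filterNot { entry => entry.data.getName contains "scalacheck" }
}
```

Each classpath entry is an `Attributed[File]`, so `entry.data` is the underlying jar file.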
There's nothing to add to `build.sbt` to use unmanaged dependencies, though
you could change the `unmanaged-base` key if you'd like to use a different
directory rather than `lib`.
To use `custom_lib` instead of `lib`:
```scala
unmanagedBase <<= baseDirectory { base => base / "custom_lib" }
```
`baseDirectory` is the project's root directory, so here you're changing
`unmanagedBase` depending on `baseDirectory`, using `<<=` as explained in
[[more about settings|Getting Started More About Settings]].
There's also an `unmanaged-jars` task which lists the jars from the
`unmanaged-base` directory. If you wanted to use multiple directories or do
something else complex, you might need to replace the whole `unmanaged-jars`
task with one that does something else.
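A sketch of such a replacement (the directory names are invented for illustration) might gather jars from several directories:

```scala
// gather jars from two custom directories instead of the default lib
unmanagedJars in Compile <<= baseDirectory map { base =>
  val dirs = (base / "custom_lib") +++ (base / "extra_lib")
  (dirs ** "*.jar").classpath
}
```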
## Managed Dependencies
sbt uses [Apache Ivy] to implement managed dependencies, so if you're
familiar with Maven or Ivy, you won't have much trouble.
### The `libraryDependencies` key
Most of the time, you can simply list your dependencies in the setting
`libraryDependencies`. It's also possible to write a Maven POM file or Ivy
configuration file to externally configure your dependencies, and have sbt
use those external configuration files. You can learn more about that
[[here|Library Management]].
Declaring a dependency looks like this, where `groupID`, `artifactID`, and
`revision` are strings:
```scala
libraryDependencies += groupID % artifactID % revision
```
or like this, where `configuration` is also a string:
```scala
libraryDependencies += groupID % artifactID % revision % configuration
```
`libraryDependencies` is declared in [Keys] like this:
```scala
val libraryDependencies = SettingKey[Seq[ModuleID]]("library-dependencies", "Declares managed dependencies.")
```
The `%` methods create `ModuleID` objects from strings, then you add those
`ModuleID` to `libraryDependencies`.
Of course, sbt (via Ivy) has to know where to download the module. If
your module is in one of the default repositories sbt comes with, this will
just work. For example, Apache Derby is in a default repository:
```scala
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3"
```
If you type that in `build.sbt` and then `update`, sbt should download
Derby to `~/.ivy2/cache/org.apache.derby/`. (By the way, `update` is a
dependency of `compile` so there's no need to manually type `update` most of
the time.)
Of course, you can also use `++=` to add a list of dependencies all at once:
```scala
libraryDependencies ++= Seq(
groupID % artifactID % revision,
groupID % otherID % otherRevision
)
```
And in rare cases you might find reasons to use `:=`, `<<=`, `<+=`,
etc. with `libraryDependencies` as well.
### Getting the right Scala version with `%%`
If you use `groupID %% artifactID % revision` rather than `groupID %
artifactID % revision` (the difference is the double `%%` after the
groupID), sbt will add your project's Scala version to the artifact name.
This is just a shortcut. You could write this without the `%%`:
```scala
libraryDependencies += "org.scala-tools" % "scala-stm_2.9.1" % "0.3"
```
Assuming the `scalaVersion` for your build is `2.9.1`, the following is
identical:
```scala
libraryDependencies += "org.scala-tools" %% "scala-stm" % "0.3"
```
The idea is that many dependencies are compiled for multiple Scala versions,
and you'd like to get the one that matches your project.
In practice, a dependency built for a slightly different Scala version will often still work; but `%%` is not smart about that. So if
the dependency is only available for `2.9.0` and you're using `scalaVersion :=
"2.9.1"`, you won't be able to use `%%` even though the `2.9.0` dependency
likely works. If `%%` stops working, check which Scala versions the
dependency is actually built for, and hardcode the one you think will work
(assuming there is one).
See [[Cross Build]] for some more detail on this.
### Ivy revisions
The `revision` in `groupID % artifactID % revision` does not have to be a
single fixed version. Ivy can select the latest revision of a module
according to constraints you specify. Instead of a fixed revision like
`"1.6.1"`, you specify `"latest.integration"`, `"2.9.+"`, or
`"[1.0,)"`. See the [Ivy revisions] documentation for details.
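For example, building on the Derby dependency above (illustrative constraints; check which revisions are actually published before relying on them):

```scala
// any 10.4.x release of Derby
libraryDependencies += "org.apache.derby" % "derby" % "10.4.+"

// the latest revision Ivy can find, including integration builds
libraryDependencies += "org.apache.derby" % "derby" % "latest.integration"
```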
### Resolvers
Not all packages live on the same server; sbt uses the standard Maven2
repository by default. If your dependency isn't on one of the default
repositories, you'll have to add a _resolver_ to help Ivy find it.
To add an additional repository, use
```scala
resolvers += name at location
```
For example:
```scala
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
```
The `resolvers` key is defined in [Keys] like this:
```scala
val resolvers = SettingKey[Seq[Resolver]]("resolvers", "The user-defined additional resolvers for automatically managed dependencies.")
```
The `at` method creates a `Resolver` object from two strings.
sbt can search your local Maven repository if you add it as a repository:
```scala
resolvers += "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
```
See [[Resolvers]] for details on defining other types of repositories.
### Overriding default resolvers
`resolvers` does not contain the default resolvers; only additional ones
added by your build definition.
`sbt` combines `resolvers` with some default repositories to form
`external-resolvers`.
Therefore, to change or remove the default resolvers, you would need to
override `external-resolvers` instead of `resolvers`.
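A hedged sketch, assuming the `Resolver.withDefaultResolvers` helper is available in your sbt version:

```scala
// use your resolvers plus sbt's defaults, but drop Maven Central
externalResolvers <<= resolvers map { rs =>
  Resolver.withDefaultResolvers(rs, mavenCentral = false)
}
```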
### Per-configuration dependencies
Often a dependency is used by your test code (in `src/test/scala`, which is
compiled by the `Test` configuration) but not your main code.
If you want a dependency to show up in the classpath only for the `Test`
configuration and not the `Compile` configuration, add `% "test"` like this:
```scala
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3" % "test"
```
Now, if you type `show compile:dependency-classpath` at the sbt interactive
prompt, you should not see derby. But if you type `show
test:dependency-classpath`, you should see the derby jar in the list.
Typically, test-related dependencies such as [ScalaCheck], [specs], and
[ScalaTest] would be defined with `% "test"`.
# Next
There are some more details and tips-and-tricks related to library
dependencies [[on this page|Library-Management]], if you didn't find an
answer on this introductory page.
If you're reading Getting Started in order, for now, you might move on to
read [[.scala build definition|Getting Started Full Def]].

[Keys]: http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html "Keys.scala"
[ScopedSetting]: http://harrah.github.com/xsbt/latest/api/sbt/ScopedSetting.html
# More Kinds of Setting
[[Previous|Getting Started Scopes]] _Getting Started Guide page
8 of 14._ [[Next|Getting Started Library Dependencies]]
This page explains other ways to create a `Setting`, beyond the basic `:=`
method. It assumes you've read [[.sbt build definition|Getting Started Basic Def]] and [[scopes|Getting Started Scopes]].
## Refresher: Settings
[[Remember|Getting Started Basic Def]], a build definition creates a list of
`Setting`, which is then used to transform sbt's description of the build
(which is a map of key-value pairs). A `Setting` is a transformation with
sbt's earlier map as input and a new map as output. The new map becomes
sbt's new state.
Different settings transform the map in different
ways. [[Earlier|Getting Started Basic Def]], you read about the `:=` method.
The `Setting` which `:=` creates puts a fixed, constant value in the new,
transformed map. For example, if you transform a map with the setting
`name := "hello"` the new map has the string `"hello"` stored under the key
`name`.
Settings must end up in the master list of settings to do any good (all
lines in a `build.sbt` automatically end up in the list, but in a
[[.scala file|Getting Started Full Def]] you can get it wrong by
creating a `Setting` without putting it where sbt will find it).
## Appending to previous values: `+=` and `++=`
Replacement with `:=` is the simplest transformation, but keys have other
methods as well. If the `T` in `SettingKey[T]` is a sequence, i.e. the key's value
type is a sequence, you can append to the sequence rather than replacing it.
- `+=` will append a single element to the sequence.
- `++=` will concatenate another sequence.
For example, the key `sourceDirectories in Compile` has a `Seq[File]` as its
value. By default this key's value would include `src/main/scala`.
If you wanted to also compile source code in a directory called `source`
(since you just have to be nonstandard), you could add that directory:
```scala
sourceDirectories in Compile += new File("source")
```
Or, using the `file()` function from the sbt package for convenience:
```scala
sourceDirectories in Compile += file("source")
```
(`file()` just creates a new `File`.)
You could use `++=` to add more than one directory at a time:
```scala
sourceDirectories in Compile ++= Seq(file("sources1"), file("sources2"))
```
`Seq(a, b, c, ...)` is standard Scala syntax to construct a sequence.
To replace the default source directories entirely, you use `:=` of
course:
```scala
sourceDirectories in Compile := Seq(file("sources1"), file("sources2"))
```
## Transforming a value: `~=`
What happens if you want to _prepend_ to `sourceDirectories in Compile`, or
filter out one of the default directories?
You can create a `Setting` that depends on the previous value of a key.
- `~=` applies a function to the setting's previous value, producing a new
value of the same type.
To modify `sourceDirectories in Compile`, you could use `~=` as follows:
```scala
// filter out src/main/scala
sourceDirectories in Compile ~= { srcDirs => srcDirs filter(!_.getAbsolutePath.endsWith("src/main/scala")) }
```
Here, `srcDirs` is a parameter to an anonymous function, and the old value
of `sourceDirectories in Compile` gets passed in to the anonymous
function. The result of this function becomes the new value of
`sourceDirectories in Compile`.
Or a simpler example:
```scala
// make the project name upper case
name ~= { _.toUpperCase }
```
The function you pass to the `~=` method will always have type `T
=> T`, if the key has type `SettingKey[T]` or `TaskKey[T]`. The
function transforms the key's value into another value of the same
type.
## Computing a value based on other keys' values: `<<=`
`~=` defines a new value in terms of a key's previously-associated
value. But what if you want to define a value in terms of _other_ keys'
values?
- `<<=` lets you compute a new value using the value(s) of arbitrary other keys.
`<<=` has one argument, of type `Initialize[T]`. An `Initialize[T]` instance
is a computation which takes the values associated with a set of keys as
input, and returns a value of type `T` based on those other values. It
initializes a value of type `T`.
Given an `Initialize[T]`, `<<=` returns a `Setting[T]`, of course (just like
`:=`, `+=`, `~=`, etc.).
### Trivial `Initialize[T]`: depending on one other key with `<<=`
All keys extend the `Initialize` trait already. So the simplest `Initialize`
is just a key:
```scala
// useless but valid
name <<= name
```
When treated as an `Initialize[T]`, a `SettingKey[T]` computes its
current value. So `name <<= name` sets the value of `name` to the
value that `name` already had.
It gets a little more useful if you set a key to a _different_ key. The keys
must have identical value types, though.
```scala
// name our organization after our project (both are SettingKey[String])
organization <<= name
```
(Note: this is how you alias one key to another.)
If the value types are not identical, you'll need to convert from
`Initialize[T]` to another type, like `Initialize[S]`. This is done with the
`apply` method on `Initialize`, like this:
```scala
// name is a Key[String], baseDirectory is a Key[File]
// name the project after the directory it's inside
name <<= baseDirectory.apply(_.getName)
```
`apply` is special in Scala and means you can invoke the object with
function syntax; so you could also write this:
```scala
name <<= baseDirectory(_.getName)
```
That transforms the value of `baseDirectory` using the function `_.getName`,
where the function `_.getName` takes a `File` and returns a
`String`. `getName` is a method on the standard `java.io.File` object.
### Settings with dependencies
In the setting `name <<= baseDirectory(_.getName)`, `name` will have a
_dependency_ on `baseDirectory`. If you place the above in `build.sbt` and
run the sbt interactive console, then type `inspect name`, you should see
(in part):
```text
[info] Dependencies:
[info] *:base-directory
```
This is how sbt knows which settings depend on which other
settings. Remember that some settings describe tasks, so this approach also
creates dependencies between tasks.
For example, if you `inspect compile` you'll see it depends on another key
`compile-inputs`, and if you inspect `compile-inputs` it in turn depends on
other keys. Keep following the dependency chains and magic happens. When
you type `compile` sbt automatically performs an `update`, for example. It
Just Works because the values required as inputs to the `compile`
computation require sbt to do the `update` computation first.
In this way, all build dependencies in sbt are _automatic_ rather than
explicitly declared. If you use a key's value in another computation, then
the computation depends on that key. It just works!
### Complex `Initialize[T]`: depending on multiple keys with `<<=`
To support dependencies on multiple other keys, sbt adds `apply` and
`identity` methods to tuples of `Initialize` objects. In Scala, you write a
tuple like `(1, "a")` (that one has type `(Int, String)`).
So say you have a tuple of three `Initialize` objects; its type would be
`(Initialize[A], Initialize[B], Initialize[C])`. The `Initialize` objects
could be keys, since all `SettingKey[T]` are also instances of `Initialize[T]`.
Here's a simple example, in this case all three keys are strings:
```scala
// a tuple of three SettingKey[String], also a tuple of three Initialize[String]
(name, organization, version)
```
The `apply` method on a tuple of `Initialize` takes a function as its
argument. Using each `Initialize` in the tuple, sbt computes a corresponding
value (the current value of the key). These values are passed in to the
function. The function then returns _one_ value, which is wrapped up in a
new `Initialize`. If you wrote it out with explicit types (Scala does not
require this), it would look like:
```scala
val tuple: (Initialize[String], Initialize[String], Initialize[String]) = (name, organization, version)
val combined: Initialize[String] = tuple.apply({ (n, o, v) =>
"project " + n + " from " + o + " version " + v })
val setting: Setting[String] = name <<= combined
```
So each key is already an `Initialize`; but you can combine up to nine
simple `Initialize` (such as keys) into one composite `Initialize` by
placing them in tuples, and invoking the `apply` method.
The `<<=` method on `SettingKey[T]` is expecting an `Initialize[T]`, so you can use
this technique to create an `Initialize[T]` with multiple dependencies on
arbitrary keys.
Because function syntax in Scala just calls the `apply` method, you
could write the code like this, omitting the explicit `.apply` and just
treating `tuple` as a function:
```scala
val tuple: (Initialize[String], Initialize[String], Initialize[String]) = (name, organization, version)
val combined: Initialize[String] = tuple({ (n, o, v) =>
"project " + n + " from " + o + " version " + v })
val setting: Setting[String] = name <<= combined
```
In a `build.sbt`, this code using intermediate `val` will not work, since you
can only write single expressions in a `.sbt` file, not multiple statements.
You can use a more concise syntax in `build.sbt`, like this:
```scala
name <<= (name, organization, version) { (n, o, v) => "project " + n + " from " + o + " version " + v }
```
Here the tuple of `Initialize` (also a tuple of `SettingKey`) works as a function,
taking the anonymous function delimited by `{}` as its argument, and returning an
`Initialize[T]` where `T` is the result type of the anonymous function.
Tuples of `Initialize` have one other method, `identity`, which simply
returns an `Initialize` with a tuple value.
`(a: Initialize[A], b: Initialize[B]).identity`
would result in a value of type
`Initialize[(A, B)]`. `identity` combines two `Initialize` into one, without
losing or modifying any of the values.
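For example, in a `.scala` build definition (the `nameAndBase` val is hypothetical):

```scala
// an Initialize[(String, File)]: both values, untransformed,
// which could later be passed to apply or map
val nameAndBase = (name, baseDirectory).identity
```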
### When settings are undefined
Whenever a setting uses `~=` or `<<=` to create a dependency on itself or
another key's value, the value it depends on must exist. If it does not,
sbt will complain. It might say _"Reference to undefined setting"_, for
example. When this happens, be sure you're using the key in the
[[scope|Getting Started Scopes]] that defines it.
It's possible to create cycles, which is an error; sbt will tell you if you
do this.
### Tasks with dependencies
As noted in [[.sbt build definition|Getting Started Basic Def]], task keys create a
`Setting[Task[T]]` rather than a `Setting[T]` when you build a setting with
`:=`, `<<=`, etc. Similarly, task keys are instances of
`Initialize[Task[T]]` rather than `Initialize[T]`, and `<<=` on a task key
takes an `Initialize[Task[T]]` parameter.
The practical importance of this is that you can't have tasks as
dependencies for a non-task setting.
Take these two keys (from [Keys]):
```scala
val scalacOptions = TaskKey[Seq[String]]("scalac-options", "Options for the Scala compiler.")
val checksums = SettingKey[Seq[String]]("checksums", "The list of checksums to generate and to verify for dependencies.")
```
(`scalacOptions` and `checksums` have nothing to do with each other, they
are just two keys with the same value type, where one is a task.)
You cannot compile a `build.sbt` that tries to alias one of these to the
other like this:
```scala
scalacOptions <<= checksums
checksums <<= scalacOptions
```
The issue is that `scalacOptions.<<=` expects an
`Initialize[Task[Seq[String]]]` and `checksums.<<=` expects an
`Initialize[Seq[String]]`. There is, however, a way to convert an
`Initialize[T]` to an `Initialize[Task[T]]`, called `map`:
```scala
scalacOptions <<= checksums map identity
```
(`identity` is a standard Scala function that returns its input as its result.)
There is no way to go the _other_ direction, that is, a setting
key can't depend on a task key. That's because a setting key is
only computed once on project load, so the task would not be
re-run every time, and tasks expect to re-run every time.
A task can depend on both settings and other tasks, though, just use `map`
rather than `apply` to build an `Initialize[Task[T]]` rather than an `Initialize[T]`.
Remember the usage of `apply` with a non-task setting looks like this:
```scala
name <<= (name, organization, version) { (n, o, v) => "project " + n + " from " + o + " version " + v }
```
(`(name, organization, version)` has an apply method and is thus a function,
taking the anonymous function in `{}` braces as a parameter.)
To create an `Initialize[Task[T]]` you need a `map` in there rather than `apply`:
```scala
// this WON'T compile because name (on the left of <<=) is not a task and we used map
name <<= (name, organization, version) map { (n, o, v) => "project " + n + " from " + o + " version " + v }
// this WILL compile because packageBin is a task and we used map
packageBin in Compile <<= (name, organization, version) map { (n, o, v) => file(o + "-" + n + "-" + v + ".jar") }
// this WILL compile because name is not a task and we used apply
name <<= (name, organization, version) { (n, o, v) => "project " + n + " from " + o + " version " + v }
// this WON'T compile because packageBin is a task and we used apply
packageBin in Compile <<= (name, organization, version) { (n, o, v) => file(o + "-" + n + "-" + v + ".jar") }
```
_Bottom line:_ when converting a tuple of keys into an
`Initialize[Task[T]]`, use `map`; when converting a tuple of keys into an
`Initialize[T]` use `apply`; and you need the `Initialize[Task[T]]` if the
key on the left side of `<<=` is a `TaskKey[T]` rather than a
`SettingKey[T]`.
### Remember, aliases use `<<=` not `:=`
If you want one key to be an alias for another, you might be tempted to
use `:=` to create the following nonsense alias:
```scala
// doesn't work, and not useful
packageBin in Compile := packageDoc in Compile
```
The problem is that `:=`'s argument must be a value (or for tasks, a
function returning a value). For `packageBin` which
is a `TaskKey[File]`, it must be a `File` or a function `=> File`.
`packageDoc` is not a `File`, it's a key.
The proper way to do this is with `<<=`, which takes a key (really an
`Initialize`, but keys are instances of `Initialize`):
```scala
// works, still not useful
packageBin in Compile <<= packageDoc in Compile
```
Here, `<<=` expects an `Initialize[Task[File]]`, which is a computation that
will return a file later, when sbt runs the task. Which is what you want:
you want to alias a task by making it run another task, not by setting it
one time when sbt loads the project.
(By the way: the `in Compile` scope is needed to avoid "undefined" errors,
because the packaging tasks like `packageBin` are per-configuration, not
global.)
## Appending with dependencies: `<+=` and `<++=`
There are a couple more methods for appending to lists, which combine `+=`
and `++=` with `<<=`. That is, they let you compute a new list element or
new list to concatenate, using dependencies on other keys in order to do so.
These methods work exactly like `<<=`, but for `<++=`, the function you
write to convert the dependencies' values into a new value should create a
`Seq[T]` instead of a `T`.
Unlike `<<=` of course, `<+=` and `<++=` will append to the previous value
of the key on the left, rather than replacing it.
For example, say you have a coverage report named after the project, and you
want to add it to the files removed by `clean`:
```scala
cleanFiles <+= (name) { n => file("coverage-report-" + n + ".txt") }
```
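`<++=` works the same way, except the function returns a sequence to concatenate. A sketch (the report file names are invented for illustration):

```scala
// append two report files at once, computed from the name and target keys
cleanFiles <++= (name, target) { (n, t) =>
  Seq(t / ("coverage-report-" + n + ".txt"), t / ("style-report-" + n + ".txt"))
}
```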
## Next
At this point you know how to get things done with settings, so we can move
on to a specific key that comes up often: `libraryDependencies`.
[[Learn about library dependencies|Getting Started Library Dependencies]].

# Multi-Project Builds
[[Previous|Getting Started Using Plugins]] _Getting Started Guide page
12 of 14._ [[Next|Getting Started Custom Settings]]
This page introduces multiple projects in a single build.
Please read the earlier pages in the Getting Started Guide first,
in particular you need to understand
[[build.sbt|Getting Started Basic Def]] and
[[.scala build definition|Getting Started Full Def]] before reading
this page.
## Multiple projects
It can be useful to keep multiple related projects in a single build,
especially if they depend on one another and you tend to modify them
together.
Each sub-project in a build has its own `src/main/scala`, generates its own
jar file when you run `package`, and in general works like any other
project.
## Defining projects in a `.scala` file
To have multiple projects, you must declare each project and how they relate
in a `.scala` file; there's no way to do it in a `.sbt` file. However, you
can define settings for each project in `.sbt` files. Here's an example of a
`.scala` file which defines a root project `hello`, where the root project
aggregates two sub-projects, `hello-foo` and `hello-bar`:
```scala
import sbt._
import Keys._
object HelloBuild extends Build {
lazy val root = Project(id = "hello",
base = file(".")) aggregate(foo, bar)
lazy val foo = Project(id = "hello-foo",
base = file("foo"))
lazy val bar = Project(id = "hello-bar",
base = file("bar"))
}
```
sbt finds the list of `Project` objects using reflection, looking for fields
with type `Project` in the `Build` object.
Because project `hello-foo` is defined with `base = file("foo")`, it will be
contained in the subdirectory `foo`. Its sources could be directly under
`foo`, like `foo/Foo.scala`, or in `foo/src/main/scala`. The usual sbt
[[directory structure|Getting Started Directories]] applies underneath `foo` with
the exception of build definition files.
Any `.sbt` files in `foo`, say `foo/build.sbt`, will be merged with the
build definition for the entire build, but scoped to the `hello-foo`
project.
If your whole project is in `hello`, try defining a different version
(such as `version := "0.6"`) in each of `hello/build.sbt`, `hello/foo/build.sbt`, and
`hello/bar/build.sbt`. Then run `show version` at the sbt interactive
prompt. You should get something like this (with whatever versions you
defined):
```text
> show version
[info] hello-foo/*:version
[info] 0.7
[info] hello-bar/*:version
[info] 0.9
[info] hello/*:version
[info] 0.5
```
`hello-foo/*:version` was defined in `hello/foo/build.sbt`,
`hello-bar/*:version` was defined in `hello/bar/build.sbt`, and
`hello/*:version` was defined in `hello/build.sbt`. Remember the
[[syntax for scoped keys|Getting Started Scopes]]. Each `version` key is scoped to a
project, based on the location of the `build.sbt`. But all three `build.sbt`
are part of the same build definition.
_Each project's settings can go in `.sbt` files in the base
directory of that project_, while the `.scala` file can be as simple as the
one shown above, listing the projects and base directories. _There is no need
to put settings in the `.scala` file._
You may find it cleaner to put everything including settings in
`.scala` files in order to keep all build definition under a
single `project` directory, however. It's up to you.
You cannot have a `project` subdirectory or `project/*.scala` files in the
sub-projects. `foo/project/Build.scala` would be ignored.
## Aggregation
Projects in the build can be completely independent of one another, if you
want.
In the above example, however, you can see the method call `aggregate(foo, bar)`.
This aggregates `hello-foo` and `hello-bar` underneath the root project.
Aggregation means that running a task on the aggregate project will also run
it on the aggregated projects. Start up sbt with two subprojects as in the
example, and try `compile`. You should see that all three projects are
compiled.
_In the project doing the aggregating_, the root `hello` project in this
case, you can control aggregation per-task. So for example in
`hello/build.sbt` you could avoid aggregating the `update` task:
```scala
aggregate in update := false
```
`aggregate in update` is the `aggregate` key scoped to the `update` task,
see [[scopes|Getting Started Scopes]].
Note: aggregation will run the aggregated tasks in parallel and with no defined
ordering.
## Classpath dependencies
A project may depend on code in another project. This is done by adding a
`dependsOn` method call. For example, if `hello-foo` needed `hello-bar` on its classpath,
you would write in your `Build.scala`:
```scala
lazy val foo = Project(id = "hello-foo",
base = file("foo")) dependsOn(bar)
```
Now code in `hello-foo` can use classes from `hello-bar`. This also creates
an ordering between the projects when compiling them; `hello-bar` must be
updated and compiled before `hello-foo` can be compiled.
To depend on multiple projects, use multiple arguments to `dependsOn`, like
`dependsOn(bar, baz)`.
### Per-configuration classpath dependencies
`foo dependsOn(bar)` means that the `Compile` configuration in `foo` depends
on the `Compile` configuration in `bar`. You could write this explicitly as
`dependsOn(bar % "compile->compile")`.
The `->` in `"compile->compile"` means "depends on" so `"test->compile"`
means the `Test` configuration in `foo` would depend on the `Compile`
configuration in `bar`.
Omitting the `->config` part implies `->compile`, so `dependsOn(bar %
"test")` means that the `Test` configuration in `foo` depends on the
`Compile` configuration in `bar`.
A useful declaration is `"test->test"` which means `Test` depends on
`Test`. This allows you to put utility code for testing in
`bar/src/test/scala` and then use that code in `foo/src/test/scala`, for
example.
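Continuing the `Build.scala` example above, such a declaration might look like:

```scala
// hello-foo's test code can now use test helpers from hello-bar
lazy val foo = Project(id = "hello-foo",
                       base = file("foo")) dependsOn(bar % "test->test")
```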
You can have multiple configurations for a dependency, separated by
semicolons. For example, `dependsOn(bar % "test->test;compile->compile")`.
## Navigating projects interactively
At the sbt interactive prompt, type `projects` to list your projects and
`project <projectname>` to select a current project. When you run a task
like `compile`, it runs on the current project. So you don't necessarily
have to compile the root project, you could compile only a subproject.
## Sharing settings
With a single `.scala` file defining the different projects, it's easy to reuse settings across them. But even when using separate `build.sbt` files, you can still share a setting across all projects in the build by using the `ThisBuild` scope to make it apply build-wide. For instance, when a main project depends on a subproject, these two projects must typically be compiled with the same Scala version. To set it only once, it is enough to write, in the main `build.sbt` file, the following line:
```scala
scalaVersion in ThisBuild := "2.10.0"
```
replacing `2.10.0` with the desired Scala version. This setting will propagate across all subprojects. For more information on the `ThisBuild` scope, go back to the [[page on scopes|Getting Started Scopes]].
## Next
Move on to create [[custom settings|Getting Started Custom Settings]].

# Running
[[Previous|Getting Started Directories]] _Getting Started Guide page 5 of 14._ [[Next|Getting Started Basic Def]]
This page describes how to use `sbt` once you have set up your project. It
assumes you've [[installed sbt|Getting Started Setup]] and created a [[Hello, World|Getting Started Hello]] or other project.
## Interactive mode
Run sbt in your project directory with no arguments:
```text
$ sbt
```
Running sbt with no command line arguments starts it in interactive mode.
Interactive mode has a command prompt (with tab completion and
history!).
For example, you could type `compile` at the sbt prompt:
```text
> compile
```
To `compile` again, press up arrow and then enter.
To run your program, type `run`.
To leave interactive mode, type `exit` or use Ctrl+D (Unix) or Ctrl+Z (Windows).
## Batch mode
You can also run sbt in batch mode, specifying a space-separated list of
sbt commands as arguments. For sbt commands that take arguments, pass the command and arguments as one argument to `sbt` by enclosing them in quotes. For example,
```text
$ sbt clean compile "test-only TestA TestB"
```
In this example, `test-only` takes two arguments, `TestA` and `TestB`. The commands run
in sequence (`clean`, `compile`, then `test-only`).
## Continuous build and test
To speed up your edit-compile-test cycle, you can ask sbt to automatically
recompile or run tests whenever you save a source file.
Make a command run when one or more source files change by prefixing the
command with `~`. For example, in interactive mode try:
```text
> ~ compile
```
Press enter to stop watching for changes.
You can use the `~` prefix with either interactive mode or batch mode.
See [[Triggered Execution]] for more details.
## Common commands
Here are some of the most common sbt commands. For a more complete
list, see [[Command Line Reference]].
* `clean`
Deletes all generated files (in the `target` directory).
* `compile`
Compiles the main sources (in `src/main/scala` and `src/main/java` directories).
* `test`
Compiles and runs all tests.
* `console`
Starts the Scala interpreter with a classpath including the compiled
sources and all dependencies. To return to sbt, type `:quit`, Ctrl+D
(Unix), or Ctrl+Z (Windows).
* `run <argument>*`
Runs the main class for the project in the same virtual machine as `sbt`.
* `package`
Creates a jar file containing the files in `src/main/resources` and the classes compiled from `src/main/scala` and `src/main/java`.
* `help <command>`
Displays detailed help for the specified command. If no command is
provided, displays brief descriptions of all commands.
* `reload`
Reloads the build definition (`build.sbt`, `project/*.scala`,
`project/*.sbt` files). Needed if you change the build definition.
## Tab completion
Interactive mode has tab completion, including at an empty
prompt. A special sbt convention is that pressing tab once may
show only a subset of the most likely completions, while pressing it
more times shows more verbose choices.
## History Commands
Interactive mode remembers history, even if you exit sbt and restart it.
The simplest way to access history is with the up arrow key. The following
commands are also supported:
* `!`
Show history command help.
* `!!`
Execute the previous command again.
* `!:`
Show all previous commands.
* `!:n`
Show the last n commands.
* `!n`
Execute the command with index `n`, as shown by the `!:` command.
* `!-n`
Execute the nth command before this one.
* `!string`
Execute the most recent command starting with 'string'.
* `!?string`
Execute the most recent command containing 'string'.
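For example, a brief session using history commands (the exact listing format printed by `!:` may vary):

```text
> compile
> test
> !:
> !1
> !co
```

Here `!:` lists the earlier commands with their indices, `!1` re-runs `compile`, and `!co` re-runs the most recent command starting with `co`.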
## Next
Move on to [[understanding build.sbt|Getting Started Basic Def]].

@ -1,336 +0,0 @@
[MavenScopes]:
http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope
"Maven scopes"
# Scopes
[[Previous|Getting Started Basic Def]] _Getting Started Guide page
7 of 14._ [[Next|Getting Started More About Settings]]
This page describes scopes. It assumes you've read and understood the
previous page, [[.sbt build definition|Getting Started Basic Def]].
## The whole story about keys
[[Previously|Getting Started Basic Def]] we pretended that a key like `name`
corresponded to one entry in sbt's map of key-value pairs. This was a
simplification.
In truth, each key can have an associated value in more than one context,
called a "scope."
Some concrete examples:
- if you have multiple projects in your build definition, a key can have
a different value in each project.
- the `compile` key may have a different value for your main sources and
your test sources, if you want to compile them differently.
- the `package-options` key (which contains options for creating jar
packages) may have different values when packaging class files
(`package-bin`) or packaging source code (`package-src`).
_There is no single value for a given key name_, because the value may differ
according to scope.
However, there is a single value for a given _scoped_ key.
If you think about sbt processing a list of settings to generate a key-value
map describing the project, as [[discussed earlier|Getting Started Basic Def]],
the keys in that key-value map are _scoped_ keys. Each setting defined in
the build definition (for example in `build.sbt`) applies to a scoped key as
well.
Often the scope is implied or has a default, but if the defaults are wrong,
you'll need to mention the desired scope in `build.sbt`.
## Scope axes
A _scope axis_ is a type, where each instance of the type can define its own
scope (that is, each instance can have its own unique values for keys).
There are three scope axes:
- Projects
- Configurations
- Tasks
### Scoping by project axis
If you [[put multiple projects in a single build|Getting Started Multi-Project]], each
project needs its own settings. That is, keys can be scoped according to the
project.
The project axis can also be set to "entire build", so a setting applies to
the entire build rather than a single project. Build-level settings are
often used as a fallback when a project doesn't define a project-specific
setting.
### Scoping by configuration axis
A _configuration_ defines a flavor of build, potentially with its own
classpath, sources, generated packages, etc. The configuration concept comes
from Ivy, which sbt uses for [[managed dependencies|Getting Started Library Dependencies]], and
from [MavenScopes].
Some configurations you'll see in sbt:
- `Compile` which defines the main build (`src/main/scala`).
- `Test` which defines how to build tests (`src/test/scala`).
- `Runtime` which defines the classpath for the `run` task.
By default, all the keys associated with compiling, packaging, and running
are scoped to a configuration and therefore may work differently in each
configuration. The most obvious examples are the task keys `compile`,
`package`, and `run`; but all the keys which _affect_ those keys (such as
`source-directories` or `scalac-options` or `full-classpath`) are also
scoped to the configuration.
### Scoping by task axis
Settings can affect how a task works. For example, the `package-src` task is
affected by the `package-options` setting.
To support this, a task key (such as `package-src`) can be a scope for
another key (such as `package-options`).
The various tasks that build a package (`package-src`, `package-bin`,
`package-doc`) can share keys related to packaging, such as `artifact-name`
and `package-options`. Those keys can have distinct values for each
packaging task.
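In `build.sbt` this looks like the following sketch (using the Scala identifiers `packageOptions` and `packageBin`; the manifest attribute is an invented example):

```scala
// applies only when packageBin runs in the Compile configuration;
// packageSrc and packageDoc keep their own packageOptions values
packageOptions in (Compile, packageBin) +=
  Package.ManifestAttributes(java.util.jar.Attributes.Name.SEALED -> "true")
```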
## Global scope
Each scope axis can be filled in with an instance of the axis type (for
example the task axis can be filled in with a task), or the axis can be
filled in with the special value `Global`.
`Global` means what you would expect: the setting's value applies to all
instances of that axis. For example if the task axis is `Global`, then the
setting would apply to all tasks.
## Delegation
A scoped key may be undefined, if it has no value associated with it in its scope.
For each scope, sbt has a fallback search path made up of other scopes.
Typically, if a key has no associated value in a more-specific scope, sbt
will try to get a value from a more general scope, such as the `Global`
scope or the entire-build scope.
This feature allows you to set a value once in a more general scope,
so that multiple more-specific scopes can inherit the value.
You can see the fallback search path, or "delegates", for a key using the
`inspect` command, as described below.
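As a sketch of delegation in `build.sbt` (the organization values are invented for illustration):

```scala
// set once at the build level...
organization in ThisBuild := "com.example"

// ...every project without its own organization setting delegates to the
// build-level value above; a project can still define the key to shadow it
```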
## Referring to scoped keys when running sbt
On the command line and in interactive mode, sbt displays (and parses)
scoped keys like this:
```text
{<build-uri>}<project-id>/config:key(for task-key)
```
- `{<build-uri>}<project-id>` identifies the project axis. The `<project-id>`
part will be missing if the project axis has "entire build" scope.
- `config` identifies the configuration axis.
- `(for task-key)` identifies the task axis.
- `key` identifies the key being scoped.
`*` can appear for each axis, referring to the `Global` scope.
If you omit part of the scoped key, it will be inferred as follows:
- the current project will be used if you omit the project.
- a key-dependent configuration will be auto-detected if you omit the
configuration.
- the `Global` task scope will be used if you omit the task.
For more details, see [[Inspecting Settings]].
### Examples of scoped key notation
- `full-classpath`: just a key, so the default scopes are used: current project, a key-dependent configuration, and global task scope.
- `test:full-classpath`: specifies the configuration, so this is `full-classpath` in the `test` configuration, with defaults for the other two scope axes.
- `*:full-classpath`: specifies `Global` for the configuration, rather than the default configuration.
- `full-classpath(for doc)`: specifies the `full-classpath` key scoped to the `doc` task, with the defaults for the project and configuration axes.
- `{file:/home/hp/checkout/hello/}default-aea33a/test:full-classpath` specifies a project, `{file:/home/hp/checkout/hello/}default-aea33a`, where the project is identified with the build `{file:/home/hp/checkout/hello/}` and then a project id inside that build `default-aea33a`. Also specifies configuration `test`, but leaves the default task axis.
- `{file:/home/hp/checkout/hello/}/test:full-classpath` sets the project axis to "entire build" where the build is `{file:/home/hp/checkout/hello/}`
- `{.}/test:full-classpath` sets the project axis to "entire build" where the build is `{.}`. `{.}` can be written `ThisBuild` in Scala code.
- `{file:/home/hp/checkout/hello/}/compile:full-classpath(for doc)` sets all three scope axes.
## Inspecting scopes
In sbt's interactive mode, you can use the `inspect` command to understand
keys and their scopes. Try `inspect test:full-classpath`:
```text
$ sbt
> inspect test:full-classpath
[info] Task: scala.collection.Seq[sbt.Attributed[java.io.File]]
[info] Description:
[info] The exported classpath, consisting of build products and unmanaged and managed, internal and external dependencies.
[info] Provided by:
[info] {file:/home/hp/checkout/hello/}default-aea33a/test:full-classpath
[info] Dependencies:
[info] test:exported-products
[info] test:dependency-classpath
[info] Reverse dependencies:
[info] test:run-main
[info] test:run
[info] test:test-loader
[info] test:console
[info] Delegates:
[info] test:full-classpath
[info] runtime:full-classpath
[info] compile:full-classpath
[info] *:full-classpath
[info] {.}/test:full-classpath
[info] {.}/runtime:full-classpath
[info] {.}/compile:full-classpath
[info] {.}/*:full-classpath
[info] */test:full-classpath
[info] */runtime:full-classpath
[info] */compile:full-classpath
[info] */*:full-classpath
[info] Related:
[info] compile:full-classpath
[info] compile:full-classpath(for doc)
[info] test:full-classpath(for doc)
[info] runtime:full-classpath
```
On the first line, you can see this is a task (as opposed to a setting, as
explained in [[.sbt build definition|Getting Started Basic Def]]). The value resulting from the task
will have type `scala.collection.Seq[sbt.Attributed[java.io.File]]`.
"Provided by" points you to the scoped key that defines the value, in this
case `{file:/home/hp/checkout/hello/}default-aea33a/test:full-classpath` (which
is the `full-classpath` key scoped to the `test` configuration and the
`{file:/home/hp/checkout/hello/}default-aea33a` project).
"Dependencies" may not make sense yet; stay tuned for the
[[next page|Getting Started More About Settings]].
You can also see the delegates; if the value were not defined, sbt would
search through:
- two other configurations (`runtime:full-classpath`,
`compile:full-classpath`). In these scoped keys, the project is unspecified, meaning "current
project", and the task is unspecified, meaning `Global`
- configuration set to `Global` (`*:full-classpath`); the project is
still unspecified, so it's "current project", and the task is still unspecified, so
`Global`
- project set to `{.}` or `ThisBuild` (meaning the entire build, no
specific project)
- project axis set to `Global` (`*/test:full-classpath`) (remember,
an unspecified project means current, so searching `Global` here is new;
i.e. `*` and "no project shown" are different for the project axis;
i.e. `*/test:full-classpath` is not the same as `test:full-classpath`)
- both project and configuration set to `Global` (`*/*:full-classpath`)
(remember that unspecified task means `Global` already, so
`*/*:full-classpath` uses `Global` for all three axes)
Try `inspect full-classpath` (as opposed to the above example, `inspect
test:full-classpath`) to get a sense of the difference. Because the
configuration is omitted, it is autodetected as `compile`.
`inspect compile:full-classpath` should therefore look the same as
`inspect full-classpath`.
Try `inspect *:full-classpath` for another contrast. `full-classpath`
is not defined in the `Global` configuration by default.
Again, for more details, see [[Inspecting Settings]].
## Referring to scopes in a build definition
If you create a setting in `build.sbt` with a bare key, it will be scoped to
the current project, configuration `Global` and task `Global`:
```scala
name := "hello"
```
Run sbt and `inspect name` to see that it's provided by
`{file:/home/hp/checkout/hello/}default-aea33a/*:name`, that is, the project is
`{file:/home/hp/checkout/hello/}default-aea33a`, the configuration is `*`
(meaning global), and the task is not shown (which also means global).
`build.sbt` always defines settings for a single project, so the "current
project" is the project you're defining in that particular `build.sbt`.
(For [[multi-project builds|Getting Started Multi-Project]], each project has its own
`build.sbt`.)
Keys have an overloaded method called `in` used to set the scope. The
argument to `in` can be an instance of any of the scope axes. So for
example, though there's no real reason to do this,
you could set the name scoped to the `Compile` configuration:
```scala
name in Compile := "hello"
```
or you could set the name scoped to the `package-bin` task (pointless! just
an example):
```scala
name in packageBin := "hello"
```
or you could set the name with multiple scope axes, for example in the
`packageBin` task in the `Compile` configuration:
```scala
name in (Compile, packageBin) := "hello"
```
or you could use `Global` for all axes:
```scala
name in Global := "hello"
```
(`name in Global` implicitly converts the scope axis `Global` to a scope
with all axes set to `Global`; the task and configuration are already
`Global` by default, so here the effect is to make the project `Global`,
that is, define `*/*:name` rather than `{file:/home/hp/checkout/hello/}default-aea33a/*:name`)
If you aren't used to Scala, a reminder: it's important to understand that
`in` and `:=` are just methods, not magic. Scala lets you write them in a
nicer way, but you could also use the Java style:
```scala
name.in(Compile).:=("hello")
```
There's no reason to use this ugly syntax, but it illustrates that these are
in fact methods.
## When to specify a scope
You need to specify the scope if the key in question is normally scoped.
For example, the `compile` task, by default, is scoped to `Compile` and
`Test` configurations, and does not exist outside of those scopes.
To change the value associated with the `compile` key, you need to write
`compile in Compile` or `compile in Test`. Using plain `compile` would
define a new compile task scoped to the current project, rather than
overriding the standard compile tasks which are scoped to a configuration.
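A hedged sketch of overriding the standard task in its configuration scope (0.12 syntax; the printed message is invented):

```scala
// redefines compile in the Compile configuration in terms of its previous
// definition, instead of shadowing it with a new, unscoped compile key
compile in Compile <<= (compile in Compile) map { analysis =>
  println("compiled main sources")
  analysis
}
```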
If you get an error like _"Reference to undefined setting"_, often
you've failed to specify a scope, or you've specified the wrong
scope. The key you're using may be defined in some other
scope. sbt will try to suggest what you meant as part of the error
message; look for "Did you mean compile:compile?"
One way to think of it is that a name is only _part_ of a key. In reality,
all keys consist of both a name, and a scope (where the scope has three
axes). The entire expression `packageOptions in (Compile, packageBin)` is a
key name, in other words. Simply `packageOptions` is also a key name, but a
different one (for keys with no `in`, a scope is implicitly assumed: current
project, global config, global task).
## Next
Now that you understand scopes, you can [[learn more about settings|Getting Started More About Settings]].

@ -1,113 +0,0 @@
[sbt-launch.jar]: http://typesafe.artifactoryonline.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.11.3-2/sbt-launch.jar
# Setup
[[Previous|Getting Started Welcome]] _Getting Started Guide page 2 of 14._ [[Next|Getting Started Hello]]
# Overview
To create an sbt project, you'll need to take these steps:
- Install sbt and create a script to launch it.
- Set up a simple [[hello world|Getting Started Hello]] project
- Create a project directory with source files in it.
- Create your build definition.
- Move on to [[running|Getting Started Running]] to learn how to run sbt.
- Then move on to [[.sbt build definition|Getting Started Basic Def]] to learn more about build definitions.
# Installing sbt
You need two files: [sbt-launch.jar] and a script to run it.
*Note: Relevant information is moving to the [download page](http://www.scala-sbt.org/download.html)*
## Yum
The sbt package is available from the [Typesafe Yum Repository](http://rpm.typesafe.com). Please install [this rpm](http://rpm.typesafe.com/typesafe-repo-2.0.0-1.noarch.rpm) to add the typesafe yum repository to your list of approved sources. Then run:
```text
yum install sbt
```
to grab the latest release of sbt.
*Note: please make sure to report any issues you may find [here](https://github.com/sbt/sbt-launcher-package/issues).*
## Apt
The sbt package is available from the [Typesafe Debian Repository](http://apt.typesafe.com). Please install [this deb](http://apt.typesafe.com/repo-deb-build-0002.deb) to add the typesafe debian repository to your list of approved sources. Then run:
```text
apt-get install sbt
```
to grab the latest release of sbt.
If sbt cannot be found, don't forget to update your list of repositories. To do so, run:
```text
apt-get update
```
*Note: please make sure to report any issues you may find [here](https://github.com/sbt/sbt-launcher-package/issues).*
## Gentoo
There is no ebuild for sbt in the official tree, but there are ebuilds that install sbt from binaries: https://github.com/whiter4bbit/overlays/tree/master/dev-java/sbt-bin. To install sbt using these ebuilds, run:
```text
mkdir -p /usr/local/portage && cd /usr/local/portage
git clone git://github.com/whiter4bbit/overlays.git
echo 'PORTDIR_OVERLAY="$PORTDIR_OVERLAY /usr/local/portage/overlays"' >> /etc/make.conf
emerge sbt-bin
```
## Mac
Use either [MacPorts](http://macports.org/):
```text
$ sudo port install sbt
```
Or [HomeBrew](http://mxcl.github.com/homebrew/):
```text
$ brew install sbt
```
There is no need to download the sbt-launch.jar separately with either approach.
## Windows
You can download the [msi](http://scalasbt.artifactoryonline.com/scalasbt/sbt-native-packages/org/scala-sbt/sbt-launcher/0.11.3/sbt.msi)
*or*
Create a batch file `sbt.bat`:
```text
set SCRIPT_DIR=%~dp0
java -Xmx512M -jar "%SCRIPT_DIR%sbt-launch.jar" %*
```
and put [sbt-launch.jar] in the same directory as the batch file. Put `sbt.bat` on your path so that you can launch `sbt` in any directory by typing `sbt` at the command prompt.
## Unix
Download [sbt-launch.jar] and place it in `~/bin`.
Create a script to run the jar, by placing this in a file called `sbt` in your `~/bin` directory:
```text
java -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=384M -jar `dirname $0`/sbt-launch.jar "$@"
```
Make the script executable:
```text
$ chmod u+x ~/bin/sbt
```
## Tips and Notes
If you have any trouble running `sbt`, see [[Setup Notes]] on terminal encodings, HTTP proxies, and JVM options.
To install sbt, you could also use the fairly elaborate shell script at https://github.com/paulp/sbt-extras (see the `sbt` file in the root directory). It has the same purpose as the simple shell script above, but it will also install sbt if necessary. It knows all recent versions of sbt and comes with many useful command line options.
## Next
Move on to [[create a simple project|Getting Started Hello]].

@ -1,67 +0,0 @@
# Getting Started Summary
[[Previous|Getting Started Custom Settings]] _Getting Started Guide page
14 of 14._
This page wraps up the Getting Started Guide.
To use sbt, there are a small number of concepts you must understand. These
have some learning curve, but on the positive side, there isn't much to sbt
_except_ these concepts. sbt uses a small core of powerful concepts to do
everything it does.
If you've read the whole Getting Started series, now you know what you need
to know.
## sbt: The Core Concepts
- the basics of Scala. It's undeniably helpful to be familiar with Scala
syntax. [Programming in Scala](http://www.artima.com/shop/programming_in_scala_2ed)
written by the creator of Scala is a great introduction.
- [[.sbt build definition|Getting Started Basic Def]]
- your build definition is one big list of `Setting` objects, where a
`Setting` transforms the set of key-value pairs sbt uses to perform tasks.
- to create a `Setting`, call one of a few methods on a key (the `:=` and
`<<=` methods are particularly important).
- there is no mutable state, only transformation; for example, a `Setting`
transforms sbt's collection of key-value pairs into a new collection. It
doesn't change anything in-place.
- each setting has a value of a particular type, determined by the key.
- _tasks_ are special settings where the computation to produce
the key's value will be re-run each time you kick off a
task. Non-tasks compute the value once, when first loading the build
definition.
- [[Scopes|Getting Started Scopes]]
- each key may have multiple values, in distinct scopes.
- scoping may use three axes: configuration, project, and task.
- scoping allows you to have different behaviors per-project,
per-task, or per-configuration.
- a configuration is a kind of build, such as the main one (`Compile`) or
the test one (`Test`).
- the per-project axis also supports "entire build" scope.
- scopes fall back to or _delegate_ to more general scopes.
- [[.sbt|Getting Started Basic Def]] vs. [[.scala|Getting Started Full Def]] build definition
- put most of your settings in `build.sbt`, but use `.scala`
build definition files to
[[define multiple subprojects|Getting Started Multi-Project]], and to factor out
common values, objects, and methods.
- the build definition is an sbt project in its own right,
rooted in the `project` directory.
- [[Plugins|Getting Started Using Plugins]] are extensions to the build definition
- add plugins with the `addSbtPlugin` method in `project/build.sbt` (NOT
`build.sbt` in the project's base directory).
If any of this leaves you wondering rather than nodding, please ask for help
on the
[mailing list](http://groups.google.com/group/simple-build-tool/topics),
go back and re-read, or try some experiments in sbt's interactive mode.
Good luck!
## Advanced Notes
The rest of this wiki consists of deeper dives and less-commonly-needed
information.
Since sbt is open source, don't forget you can check out the source code
too!

@ -1,240 +0,0 @@
# Using Plugins
[[Previous|Getting Started Full Def]] _Getting Started Guide page
11 of 14._ [[Next|Getting Started Multi-Project]]
Please read the earlier pages in the Getting Started Guide first,
in particular you need to understand
[[build.sbt|Getting Started Basic Def]],
[[library dependencies|Getting Started Library Dependencies]], and
[[.scala build definition|Getting Started Full Def]] before reading
this page.
## What is a plugin?
A plugin extends the build definition, most commonly by adding new
settings. The new settings could be new tasks. For example, a plugin could
add a `code-coverage` task which would generate a test coverage report.
## Adding a plugin
### The short answer
If your project is in directory `hello`, edit `hello/project/build.sbt` and
add the plugin location as a resolver, then call `addSbtPlugin` with the
plugin's Ivy module ID:
```scala
resolvers += Classpaths.typesafeResolver
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.0.0")
```
If the plugin were located on one of the default repositories, you wouldn't
have to add a resolver, of course.
So that's how you do it... read on to understand what's going on.
### How it works
Be sure you understand the
[[recursive nature of sbt projects|Getting Started Full Def]] described
earlier and how to add a [[managed dependency|Getting Started Library Dependencies]].
#### Dependencies for the build definition
Adding a plugin means _adding a library dependency to the build
definition_. To do that, you edit the build definition for the build
definition.
Recall that for a project `hello`, its build definition project lives in
`hello/*.sbt` and `hello/project/*.scala`:
```text
hello/ # your project's base directory
build.sbt # build.sbt is part of the source code for the
# build definition project inside project/
project/ # base directory of the build definition project
Build.scala # a source file in the project/ project,
# that is, a source file in the build definition
```
If you wanted to add a managed dependency to project `hello`, you would add
to the `libraryDependencies` setting either in `hello/*.sbt` or
`hello/project/*.scala`.
You could add this in `hello/build.sbt`:
```scala
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3" % "test"
```
If you add that and start up the sbt interactive mode and type `show
dependency-classpath`, you should see the derby jar on your classpath.
To add a plugin, do the same thing but recursed one level. We want the
_build definition project_ to have a new dependency. That means changing the
`libraryDependencies` setting for the build definition of the build
definition.
The build definition of the build definition, if your project is `hello`,
would be in `hello/project/*.sbt` and `hello/project/project/*.scala`.
The simplest "plugin" has no special sbt support; it's just a jar file.
For example, edit `hello/project/build.sbt` and add this line:
```scala
libraryDependencies += "net.liftweb" % "lift-json" % "2.0"
```
Now, at the sbt interactive prompt, `reload plugins` to enter the build
definition project, and try `show dependency-classpath`. You should see the
lift-json jar on the classpath. This means: you could use classes from
lift-json in your `Build.scala` or `build.sbt` to implement a task. You
could parse a JSON file and generate other files based on it, for example.
Remember, use `reload return` to leave the build definition project and go
back to the parent project.
(Stupid sbt trick: type `reload plugins` over and over. You'll find yourself
in the project rooted in
`project/project/project/project/project/project/`. Don't worry, it isn't
useful. Also, it creates `target` directories all the way down, which you'll
have to clean up.)
#### `addSbtPlugin`
`addSbtPlugin` is just a convenience method. Here's its definition:
```scala
def addSbtPlugin(dependency: ModuleID): Setting[Seq[ModuleID]] =
libraryDependencies <+= (sbtVersion in update,scalaVersion) { (sbtV, scalaV) =>
sbtPluginExtra(dependency, sbtV, scalaV)
}
```
Remember from [[more about settings|Getting Started More About Settings]] that `<+=` combines `<<=` and `+=`, so
this builds a value based on other settings, and then appends it to
`libraryDependencies`. The value is based on `sbtVersion in update` (sbt's
version scoped to the `update` task) and `scalaVersion` (the version of
scala used to compile the project, in this case used to compile the build
definition). `sbtPluginExtra` adds the sbt and Scala version information to
the module ID.
#### `plugins.sbt`
Some people like to list plugin dependencies (for a project `hello`) in
`hello/project/plugins.sbt` to avoid confusion with `hello/build.sbt`. sbt
does not care what `.sbt` files are called, so both `build.sbt` and
`project/plugins.sbt` are conventions. sbt _does_ of course care where
the sbt files are _located_. `hello/*.sbt` would contain dependencies for
`hello` and `hello/project/*.sbt` would contain dependencies for `hello`'s
build definition.
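Under this convention, the sbteclipse example from earlier on this page would live in `hello/project/plugins.sbt` with exactly the same content:

```scala
// hello/project/plugins.sbt -- identical to the build.sbt version;
// only the file name differs
resolvers += Classpaths.typesafeResolver

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.0.0")
```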
## Plugins can add settings and imports automatically
In one sense a plugin is just a jar added to `libraryDependencies` for the
build definition; you can then use the jar from build definition code as in
the lift-json example above.
However, jars intended for use as sbt plugins can do more.
If you download a plugin jar
([here's one for sbteclipse](http://repo.typesafe.com/typesafe/ivy-releases/com.typesafe.sbteclipse/sbteclipse/scala_2.9.1/sbt_0.11.0/1.4.0/jars/sbteclipse.jar))
and unpack it with `jar xf`, you'll see that it contains a text file `sbt/sbt.plugins`. In `sbt/sbt.plugins`
there's an object name on each line like this:
```text
com.typesafe.sbteclipse.SbtEclipsePlugin
```
`com.typesafe.sbteclipse.SbtEclipsePlugin` is the name of an object that
extends `sbt.Plugin`. The `sbt.Plugin` trait is very simple:
```scala
trait Plugin {
def settings: Seq[Setting[_]] = Nil
}
```
sbt looks for objects listed in `sbt/sbt.plugins`. When it finds
`com.typesafe.sbteclipse.SbtEclipsePlugin`, it adds
`com.typesafe.sbteclipse.SbtEclipsePlugin.settings` to the settings for the
project. It also does `import com.typesafe.sbteclipse.SbtEclipsePlugin._`
for any `.sbt` files, allowing a plugin to provide values, objects, and
methods to `.sbt` files in the build definition.
## Adding settings manually from a plugin
If a plugin defines settings in the `settings` field of a `Plugin` object,
you don't have to do anything to add them.
However, plugins often avoid doing so, because then you could not control which
projects in a [[multi-project build|Getting Started Multi-Project]] use the plugin's settings.
sbt provides a method called `seq` which adds a whole batch of settings at
once. So if a plugin has something like this:
```scala
object MyPlugin extends Plugin {
val myPluginSettings = Seq(settings in here)
}
```
You could add all those settings in `build.sbt` with this syntax:
```scala
seq(myPluginSettings: _*)
```
If you aren't familiar with the `_*` syntax:
- `seq` is defined with a variable number of arguments: `def seq(settings: Setting[_]*)`
- `_*` converts a sequence into a variable argument list
Short version: `seq(myPluginSettings: _*)` in a `build.sbt` adds all the
settings in `myPluginSettings` to the project.
## Creating a plugin
After reading this far, you pretty much know how to _create_ an
sbt plugin as well. There's one trick to know; set `sbtPlugin :=
true` in `build.sbt`. If `sbtPlugin` is true, the project will
scan its compiled classes for instances of `Plugin`, and list them
in `sbt/sbt.plugins` when it packages a jar. `sbtPlugin := true`
also adds sbt to the project's classpath, so you can use sbt APIs
to implement your plugin.
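Putting the pieces together, a minimal plugin might look like the following sketch (the object, keys, and message are invented for illustration; 0.12-era syntax):

```scala
import sbt._
import Keys._

// hypothetical plugin, built in a project with sbtPlugin := true
object GreetingPlugin extends Plugin {
  val greeting = SettingKey[String]("greeting", "The message printed by hello.")
  val hello = TaskKey[Unit]("hello", "Prints the greeting.")

  // builds opt in with: seq(GreetingPlugin.greetingSettings: _*)
  val greetingSettings: Seq[Setting[_]] = Seq(
    greeting := "Hello from the plugin!",
    hello <<= (greeting, streams) map { (g, s) => s.log.info(g) }
  )
}
```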
Learn more about creating a plugin at [[Plugins]] and [[Plugins Best Practices]].
## Global plugins
Plugins can be installed for all your projects at once by dropping them in
`~/.sbt/plugins/`. `~/.sbt/plugins/` is an sbt project whose classpath is
exported to all sbt build definition projects. Roughly speaking, any `.sbt`
files in `~/.sbt/plugins/` behave as if they were in the
`project/` directory for all projects, and any `.scala` files in
`~/.sbt/plugins/project/` behave as if they were in the `project/project/`
directory for all projects.
You can create `~/.sbt/plugins/build.sbt` and put `addSbtPlugin()`
expressions in there to add plugins to all your projects at once.
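For example, a `~/.sbt/plugins/build.sbt` that makes the sbteclipse plugin from earlier available to every project on your machine:

```scala
// ~/.sbt/plugins/build.sbt
resolvers += Classpaths.typesafeResolver

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.0.0")
```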
## Available Plugins
There's [[a list of available plugins|sbt 0.10 plugins list]].
Some especially popular plugins are:
- those for IDEs (to import an sbt project into your IDE)
- those supporting web frameworks, such as [xsbt-web-plugin](https://github.com/siasia/xsbt-web-plugin).
[[Check out the list.|sbt 0.10 plugins list]]
## Next
Move on to [[multi-project builds|Getting Started Multi-Project]].

@ -1,21 +0,0 @@
* [[Home]] - Overview of sbt
* [[Getting Started Guide|Getting Started Welcome]] - START HERE
* [[Setup|Getting Started Setup]] - Install sbt
* [[Hello, World|Getting Started Hello]] - Create a simple project
* [[Directory Layout|Getting Started Directories]] - Basic project layout
* [[Running|Getting Started Running]] - The command line and interactive mode
* [[.sbt Build Definition|Getting Started Basic Def]] - Understanding build.sbt settings
* [[Scopes|Getting Started Scopes]] - Put settings in context
* [[More About Settings|Getting Started More About Settings]] - Settings based on other settings
* [[Library Dependencies|Getting Started Library Dependencies]] - Adding jars or managed dependencies
* [[.scala Build Definition|Getting Started Full Def]] - When build.sbt is not enough
* [[Using Plugins|Getting Started Using Plugins]] - Adding plugins to the build
* [[Multi-Project Builds|Getting Started Multi-Project]] - Adding sub-projects to the build
* [[Custom Settings and Tasks|Getting Started Custom Settings]] - Intro to extending sbt
* [[Summary|Getting Started Summary]] - What you should know now
* [[FAQ]] - Questions, answered.
* [[Index]] - Find types, values, and methods
* [[Community]] - source, forums, releases
* [[Examples]]
* [[Detailed Topics]] - deep dive docs
* [[Extending sbt|Extending]] - internals docs

Home.md

@ -1,41 +0,0 @@
sbt is a build tool for Scala and Java projects that aims to do the basics well. It requires Java 1.6 or later.
## Install
See the [[install instructions|Getting Started Setup]].
## Features
* Easy to set up for simple projects
* [[.sbt build definition|Getting Started Basic Def]] uses a Scala-based "domain-specific language" (DSL)
* More advanced [[.scala build definitions|Getting Started Full Def]] and [[extensions|Getting Started Custom Settings]] use the full flexibility of unrestricted Scala code
* Accurate incremental recompilation using information extracted from the compiler
* Continuous compilation and testing with [[triggered execution|Triggered Execution]]
* Packages and publishes jars
* Generates documentation with scaladoc
* Supports mixed Scala/[[Java|Java Sources]] projects
* Supports [[Testing|testing]] with ScalaCheck, specs, and ScalaTest (JUnit is supported by a plugin)
* Starts the Scala REPL with project classes and dependencies on the classpath
* [[Sub-project|Getting Started Multi-Project]] support (put multiple packages in one project)
* External project support (list a git repository as a dependency!)
* Parallel task execution, including parallel test execution
* [[Library management support|Getting Started Library Dependencies]]: inline declarations, external Ivy or Maven configuration files, or manual management
## Getting Started
To get started, read the
[[Getting Started Guide|Getting Started Welcome]].
_Please read the
[[Getting Started Guide|Getting Started Welcome]]._ You will save
yourself a _lot_ of time if you have the right understanding of
the big picture up-front.
If you are familiar with 0.7.x, please see the
[[migration page|Migrating from sbt 0.7.x to 0.10.x]]. Documentation for
0.7.x is still available on the
[Google Code Site](http://code.google.com/p/simple-build-tool/wiki/DocumentationHome).
This wiki applies to sbt 0.10 and later.
The mailing list is at <http://groups.google.com/group/simple-build-tool/topics>. Please use it for questions and comments!
This wiki is editable if you have a GitHub account. Feel free to make corrections and add documentation. Use the mailing list if you have questions or comments.

Index.md

@ -1,138 +0,0 @@
[Initialize]: http://harrah.github.com/xsbt/latest/api/sbt/Init$Initialize.html
[dependency]: http://harrah.github.com/xsbt/latest/api/sbt/ModuleID.html
[Process]: http://harrah.github.com/xsbt/latest/api/sbt/Process.html
[Process companion object]: http://harrah.github.com/xsbt/latest/api/sbt/Process$.html
[ProcessBuilder]: http://harrah.github.com/xsbt/latest/api/sbt/ProcessBuilder.html
[Parser]: http://harrah.github.com/xsbt/latest/api/sbt/complete/Parser.html
[Keys]: http://harrah.github.com/xsbt/latest/api/sbt/Keys$.html
[Scope]: http://harrah.github.com/xsbt/latest/api/sbt/Scope.html
[ModuleID]: http://harrah.github.com/xsbt/latest/api/sbt/ModuleID.html
[ModuleConfiguration]: http://harrah.github.com/xsbt/latest/api/sbt/ModuleConfiguration.html
[Configuration]: http://harrah.github.com/xsbt/latest/api/sbt/Configuration.html
[Artifact]: http://harrah.github.com/xsbt/latest/api/sbt/Artifact.html
[Resolver]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver.html
[NameFilter]: http://harrah.github.com/xsbt/latest/api/sbt/NameFilter.html
[FileFilter]: http://harrah.github.com/xsbt/latest/api/sbt/FileFilter.html
[Setting]: http://harrah.github.com/xsbt/latest/api/sbt/Init$Setting.html
[SettingList]: http://harrah.github.com/xsbt/latest/api/sbt/Init$SettingList.html
[SettingsDefinition]: http://harrah.github.com/xsbt/latest/api/sbt/Init$SettingsDefinition.html
[Build]: http://harrah.github.com/xsbt/latest/api/sbt/Build.html
[Plugin]: http://harrah.github.com/xsbt/latest/api/sbt/Plugin.html
[Project]: http://harrah.github.com/xsbt/latest/api/sbt/Project.html
[RichFile]: http://harrah.github.com/xsbt/latest/api/sbt/RichFile.html
[PathFinder]: http://harrah.github.com/xsbt/latest/api/sbt/PathFinder.html
[SettingKey]: http://harrah.github.com/xsbt/latest/api/sbt/SettingKey.html
[InputKey]: http://harrah.github.com/xsbt/latest/api/sbt/InputKey.html
[TaskKey]: http://harrah.github.com/xsbt/latest/api/sbt/TaskKey.html
[ScopedSetting]: http://harrah.github.com/xsbt/latest/api/sbt/ScopedSetting.html
[ScopedInput]: http://harrah.github.com/xsbt/latest/api/sbt/ScopedInput.html
[ScopedTask]: http://harrah.github.com/xsbt/latest/api/sbt/ScopedTask.html
[InputTask]: http://harrah.github.com/xsbt/latest/api/sbt/InputTask.html
[State]: http://harrah.github.com/xsbt/latest/api/sbt/State.html
[Task]: http://harrah.github.com/xsbt/latest/api/sbt/Task.html
# Index
This is an index of common methods, types, and values you might find in an sbt build definition.
For command names, see [[Running|Getting Started Running]].
For available plugins, see [[sbt 0.10 plugins list]].
## Values and Types
### Dependency Management
* [ModuleID] is the type of a dependency definition. See [[Library Management]].
* [Artifact] represents a single artifact (such as a jar or a pom) to be built and published. See [[Library Management]] and [[Artifacts]].
* A [Resolver] can resolve and retrieve dependencies. Many types of Resolvers can publish dependencies as well. A repository is a closely linked idea that typically refers to the actual location of the dependencies. However, sbt is not entirely consistent with this terminology, and the terms repository and resolver are occasionally used interchangeably.
* A [ModuleConfiguration] defines a specific resolver to use for a group of dependencies.
* A [Configuration] is a useful Ivy construct for grouping
dependencies. See [[Configurations]]. It is also used for
[[scoping settings|Getting Started Scopes]].
* `Compile`, `Test`, `Runtime`, `Provided`, and `Optional` are predefined [[Configurations]].
### Settings and Tasks
* A [Setting] describes how to initialize a specific setting in the build. It can use the values of other settings or the previous value of the setting being initialized.
* A [SettingsDefinition] is the actual type of an expression in a build.sbt. This allows either a single [Setting] or a sequence of settings ([SettingList]) to be defined at once. The types in a [[Full Configuration]] always use just a plain [Setting].
* [Initialize] describes how to initialize a setting using other settings, but isn't bound to a particular setting yet. Combined with an initialization method and a setting to initialize, it produces a full [Setting].
* [TaskKey], [SettingKey], and [InputKey] are keys that represent a task or setting. These are not the actual tasks, but keys that are used to refer to them. They can be scoped to produce [ScopedTask], [ScopedSetting], and [ScopedInput]. These form the base types that the [[Settings]] implicits add methods to.
* [InputTask] parses and tab completes user input, producing a task to run.
* [Task] is the type of a task. A task is an action that runs on demand. This is in contrast to a setting, which is run once at project initialization.
### Process
* A [ProcessBuilder] is the type used to define a process. It provides combinators for building up processes from smaller processes.
* A [Process] represents the actual forked process.
* The [Process companion object] provides methods for constructing primitive processes.
### Build Structure
* [Build] is the trait implemented for a [[Full Configuration]], which defines project relationships and settings.
* [Plugin] is the trait implemented for sbt [[Plugins]].
* [Project] is both a trait and a companion object that declares a single module in a build. See [[Full Configuration]].
* [Keys] is an object that provides all of the built-in keys for settings and tasks.
* [State] contains the full state for a build. It is mainly used by [[Commands]] and sometimes [[Input Tasks]]. See also [[Build State]].
## Methods
### Settings and Tasks
See the [[Getting Started Guide|Getting Started Basic Def]] for details.
* `:=`, `<<=`, `+=`, `++=`, `~=`, `<+=`, `<++=` These construct a
[Setting], which is the fundamental type in the
[[settings|Getting Started Basic Def]] system.
* `map` This defines a task initialization that uses other tasks
or settings. See
[[more about settings|Getting Started More About Settings]]. It is a common name used for many other types in Scala, such as collections.
* `apply` This defines a setting initialization using other settings. It is not typically written out. See [[more about settings|Getting Started More About Settings]]. This is a common name in Scala.
* `in` specifies the [Scope] or part of the [Scope] of a setting
being referenced. See [[scopes|Getting Started Scopes]].
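As a sketch, here is how a few of these operators look in a `build.sbt` (the key names come from the built-in [Keys]; the values are illustrative):

```scala
// := assigns a value; += appends one element to a sequence setting
name := "demo"

scalacOptions += "-deprecation"

// <<= initializes a setting from another setting, via apply on the key
version <<= name(n => "0.1-" + n)

// ~= transforms the previous value of a setting
name ~= (_.toUpperCase)
```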
### File and IO
See [RichFile], [PathFinder], and [[Paths]] for the full documentation.
* `/` When called on a single File, this is `new File(x,y)`. For `Seq[File]`, this is applied to each member of the sequence.
* `*` and `**` are methods for selecting children (`*`) or descendants (`**`) of a `File` or `Seq[File]` that match a filter.
* `|`, `||`, `&&`, `&`, `-`, and `--` are methods for combining filters, which are often used for selecting `File`s. See [NameFilter] and [FileFilter]. Note that methods with these names also exist for other types, such as collections (like `Seq`) and [Parser] (see [[Parsing Input]]).
* `x` Used to construct mappings from a `File` to another `File` or to a `String`. See [[Mapping Files]].
* `get` forces a [PathFinder] (a call-by-name data structure) to a strict `Seq[File]` representation. This is a common name in Scala, used by types like `Option`.
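A small sketch of these methods, as they might appear in a task body in a `.scala` build definition (the paths are illustrative):

```scala
import sbt._

val base: File = file("src") / "main" / "scala"  // `/` composes Files: new File(...)
val direct: Seq[File] = (base * "*.scala").get   // direct children matching the filter
val all: Seq[File] = (base ** "*.scala").get     // all descendants matching the filter
```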
### Dependency Management
See [[Library Management]] for full documentation.
* `%` This is used to build up a [ModuleID].
* `%%` This is similar to `%` except that it identifies a dependency that has been [[cross built|Cross Build]].
* `from` Used to specify the fallback URL for a dependency
* `classifier` Used to specify the classifier for a dependency.
* `at` Used to define a Maven-style resolver.
* `intransitive` Marks a [dependency] or [Configuration] as being intransitive.
* `hide` Marks a [Configuration] as internal and not to be included in the published metadata.
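For instance, in a `build.sbt` (the coordinates and repository URL are illustrative):

```scala
// %% appends the Scala cross-version suffix to the artifact name
libraryDependencies += "com.example" %% "example-lib" % "1.0.0" % "test"

// at defines a Maven-style resolver
resolvers += "Example Repo" at "http://repo.example.com/releases/"

// intransitive excludes the dependency's own transitive dependencies
libraryDependencies += "com.example" % "example-core" % "2.1" intransitive()
```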
### Parsing
These methods are used to build up [Parser]s from smaller [Parser]s. They closely follow the names of the standard library's parser combinators. See [[Parsing Input]] for the full documentation. These are used for [[Input Tasks]] and [[Commands]].
* `~`, `~>`, `<~` Sequencing methods.
* `??`, `?` Methods for making a Parser optional. `?` is postfix.
* `id` Used for turning a Char or String literal into a Parser. It is generally used to trigger an implicit conversion to a Parser.
* `|`, `||` Choice methods. These are common method names in Scala.
* `^^^` Produces a constant value when a Parser matches.
* `+`, `*` Postfix repetition methods. These are common method names in Scala.
* `map`, `flatMap` Transforms the result of a Parser. These are common method names in Scala.
* `filter` Restricts the inputs that a Parser matches on. This is a common method name in Scala.
* `-` Prefix negation. Only matches the input when the original parser doesn't match the input.
* `examples`, `token` Tab completion
* `!!!` Provides an error message to use when the original parser doesn't match the input.
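As a sketch, an [[Input Task|Input Tasks]] parser might be built from these combinators like this (this uses sbt's `DefaultParsers`; the completion examples are illustrative):

```scala
import sbt.complete.DefaultParsers._

// Parses a space followed by a word, offering tab completion over two sample values.
val nameParser = Space ~> token(NotSpace examples ("alice", "bob"))

// ?? supplies a default when the input is absent; map transforms the parsed result.
val greeting = (nameParser ?? "world") map ("Hello, " + _)
```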
### Processes
These methods are used to [[fork external processes|Process]]. Note that this API has been included in the Scala standard library since version 2.9.
[ProcessBuilder] is the builder type and [Process] is the type representing the actual forked process.
The methods to combine processes start with `#` so that they share the same precedence.
* `run`, `!`, `!!`, `!<`, `lines`, `lines_!` are different ways to start a process once it has been defined. The `lines` variants produce a `Stream[String]` to obtain the output lines.
* `#<`, `#<<`, `#>` are used to get input for a process from a source or send the output of a process to a sink.
* `#|` is used to pipe output from one process into the input of another.
* `#||`, `#&&`, `###` sequence processes in different ways.
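Since this API is also available in the Scala standard library from 2.9 on, here is a small runnable sketch (it assumes a Unix-like environment with `echo` and `tr` on the path):

```scala
import scala.sys.process._

// #| pipes the output of one process into another; !! runs the pipeline
// and captures its standard output as a String.
val shouted: String = ("echo hello" #| "tr a-z A-Z").!!

// ! runs a process and returns its exit code.
val exitCode: Int = "true".!
```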

@ -1,4 +0,0 @@
Here are gathered a few other places with sbt information:
* Josh Suereth's [SBT Introduction & Cookbook](http://www.youtube.com/watch?v=vED2LMbdFDc) and [slides](https://docs.google.com/present/view?id=dfqn4jb_115x89dq2dg&pli=)
* [an unofficial guide to sbt 0.10 v2.0](http://eed3si9n.com/sbt-010-guide)

@ -1 +0,0 @@

@ -1 +0,0 @@
If you are looking for a sample project using sbt, go to https://github.com/emcastro/sbt-sample, download it, and run sbt on it.

@ -1,102 +0,0 @@
Compiling Scala code is slow, and SBT often makes it faster. By understanding how, you can also understand how to make compilation faster still. Modifying a source file that many others depend on might require recompiling only that source file&mdash;which might take, say, 5 seconds&mdash;instead of all its dependents&mdash;which might take, say, 2 minutes. Often you can control which case applies, and make development much faster, with some simple coding practices.
In fact, improving Scala compilation times is one major goal of SBT, and conversely the speedups it gives are one of the major motivations to use it. A significant portion of SBT sources and development efforts deals with strategies for speeding up compilation.
To reduce compile times, SBT uses two strategies:
1. reduce the overhead for restarting Scalac;
2. implement smart and transparent strategies for incremental recompilation, so that only modified files and the needed dependencies are recompiled.
1. SBT always runs Scalac in the same virtual machine. If one compiles source code using SBT, keeps SBT alive, modifies source code and triggers a new compilation, this compilation will be faster because (part of) Scalac will have already been JIT-compiled. In the future, SBT will reintroduce support for reusing the same compiler instance, similarly to FSC.
2. When a source file `A.scala` is modified, SBT goes to great effort to recompile other source files depending on `A.scala` only if required - that is, only if the interface of `A.scala` was modified.
With other build management tools (especially for Java, like Ant), when a developer changes a source file in a non-binary-compatible way, he needs to manually ensure that dependents are also recompiled&mdash;often by running the `clean` command to remove existing compilation output; otherwise compilation might succeed even though dependent class files should have been recompiled. Worse, the change to one source might make dependent code incorrect, but this is not discovered automatically: one might get a successful compilation of incorrect source code. Since Scala compile times are so high, running `clean` is particularly undesirable.
By organizing your source code appropriately, you can minimize the amount of code affected by a change. SBT cannot determine precisely which dependents have to be recompiled; the goal is to compute a conservative approximation, so that whenever a file must be recompiled, it is, even at the cost of recompiling some extra files.
## SBT heuristics
SBT tracks source dependencies at the granularity of source files. For each source file, SBT tracks the files which depend on it directly; if the **interface** of classes, objects or traits in a file changes, all files depending on that source must be recompiled. This currently extends transitively: dependents of dependents, and so on to arbitrary depth, are recompiled as well.
SBT does not instead track dependencies to source code at the granularity of individual output `.class` files, as one might hope. Doing so would be incorrect, because of some problems with sealed classes (see below for discussion).
Dependencies on binary files are different - they are tracked both on the `.class` level and on the source file level. Adding a new implementation of a sealed trait to source file `A` affects all clients of that sealed trait, and such dependencies are tracked at the source file level.
Different sources are moreover recompiled together; hence a compile error in one source implies that no bytecode is generated for any of them. When many files need to be recompiled and the fix for the compile error is not clear, it might be best to comment out the offending location (if possible) to allow the other sources to compile, and then figure out how to fix it&mdash;this way, trying out a possible solution to the compile error takes less time, say 5 seconds instead of 2 minutes.
## What is included in the interface of a Scala class
It is surprisingly tricky to understand which changes to a class require recompiling its clients. The rules valid for Java are much simpler (even if they include some subtle points as well); trying to apply them to Scala will prove frustrating.
Here is a list of a few surprising points, just to illustrate the ideas; this list is not intended to be complete.
1. Since Scala supports named arguments in method invocations, the names of method arguments are part of its interface.
2. Adding a method to a trait requires recompiling all implementing classes. The same is true for most changes to a method signature in a trait.
3. Calls to `super.methodName` in traits are resolved to calls to an abstract method called `fullyQualifiedTraitName$$super$methodName`; such methods only exist if they are used. Hence, adding the first call to `super.methodName` for a specific `methodName` changes the interface. At present, this is not yet handled&mdash;see [issue #466](https://github.com/harrah/xsbt/issues/466).
4. `sealed` hierarchies of case classes allow checking the exhaustiveness of pattern matching. Hence pattern matches using case classes must depend on the complete hierarchy&mdash;this is one reason why dependencies cannot be easily tracked at the class level (see Scala issue [SI-2559](https://issues.scala-lang.org/browse/SI-2559) for an example.)
## How to take advantage of SBT heuristics
The heuristics used by SBT imply the following user-visible consequences, which determine whether a change to a class affects other classes.
XXX Please note that this part of the documentation is a first draft; part of the strategy might be unsound, part of it might be not yet implemented.
1. Adding, removing, modifying `private` methods does not require recompilation of client classes. Therefore, suppose you add a method to a class with a lot of dependencies, and that this method is only used in the declaring class; marking it `private` will prevent recompilation of clients. However, this only applies to methods which are not accessible to other classes, hence methods marked with `private` or `private[this]`; methods which are private to a package, marked with `private[name]`, are part of the API.
2. Modifying the interface of a non-private method requires recompiling all clients, even if the method is not used.
3. Modifying one class does require recompiling dependents of other classes defined in the same file (contrary to what a previous version of this guide said). Hence separating classes into different source files might reduce recompilations.
4. Adding a method which did not exist requires recompiling all clients, counterintuitively, due to complex scenarios with implicit conversions. Hence in some cases you might want to start implementing a new method in a separate, new class, complete the implementation, and then cut-n-paste the complete implementation back into the original source.
5. Changing the implementation of a method should _not_ affect its clients, unless the return type is inferred, and the new implementation leads to a slightly different type being inferred. Hence, annotating the return type of a non-private method explicitly, if it is more general than the type actually returned, can reduce the code to be recompiled when the implementation of such a method changes. (Explicitly annotating return types of a public API is a good practice in general.)
All the above discussion about methods also applies to fields and members in general; similarly, references to classes also extend to objects and traits.
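A small runnable example of point 1, with illustrative names:

```scala
class Counter {
  // Non-private: part of the interface SBT tracks for clients of Counter.
  def next(): Int = { advance(); value }

  // private (not private[pkg]): invisible to other classes, so these members
  // can be added, changed, or removed without recompiling clients of Counter.
  private var value = 0
  private def advance(): Unit = { value += 1 }
}
```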
### Why changing the implementation of a method might affect clients, and why type annotations help ###
This section explains why relying on type inference for the return types of public methods is not always appropriate. However, this is an important design issue, so we cannot give fixed rules. Moreover, this change is often invasive, and reducing compilation times alone is not always a good enough motivation. That is why we also discuss some of the implications from the point of view of binary compatibility and software engineering.
Consider the following source file `A.scala`:
```scala
import java.io._
object A {
  def openFiles(list: List[File]) = list.map(name => new FileWriter(name))
}
```
Let us now consider the public interface of object `A`. Note that the return type of method `openFiles` is not specified explicitly, but computed by type inference to be `List[FileWriter]`.
Suppose that after writing this source code, we introduce client code and then modify `A.scala` as follows:
```scala
import java.io._
object A {
  def openFiles(list: List[File]) = Vector(list.map(name => new BufferedWriter(new FileWriter(name))): _*)
}
```
Type inference will now compute `Vector[BufferedWriter]` as the result type; in other words, changing the implementation led to a change of the public interface, with several undesirable consequences:
1. Concerning our topic, client code needs to be recompiled, since changing the return type of a method, in the JVM, is a binary-incompatible interface change.
2. If our component is a released library, using our new version requires recompiling all client code, changing the version number, and so on. This is often unacceptable if you distribute a library for which binary compatibility is an issue.
3. More generally, client code might now even be invalid. The following code, for instance, will become invalid after the change:
```scala
val res: List[FileWriter] = A.openFiles(List(new File("foo.input")))
```
Also the following code will break:
```scala
val a: Seq[Writer] = new BufferedWriter(new FileWriter("bar.input")) :: A.openFiles(List(new File("foo.input")))
```
How can we avoid these problems?
Of course, we cannot solve them in general: if we want to alter the interface of a module, breakage might result. However, often we can remove _implementation details_ from the interface of a module. In the example above, for instance, it might well be that the intended return type is more general&mdash;namely `Seq[Writer]`. It might also not be; this is a design choice to be decided on a case-by-case basis. In this example, however, I will assume that the designer chooses `Seq[Writer]`, since it is a reasonable choice both in the simplified example above and in a real-world extension of that code.
The client snippets above will now become
```scala
val res: Seq[Writer] = A.openFiles(List(new File("foo.input")))
val a: Seq[Writer] = new BufferedWriter(new FileWriter("bar.input")) +: A.openFiles(List(new File("foo.input")))
```
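With that choice, the annotated definition of `A` might look like this; the explicit return type hides the implementation details from the interface:

```scala
import java.io._

object A {
  // The explicit Seq[Writer] annotation keeps the interface stable even if the
  // implementation later switches between List/Vector or FileWriter/BufferedWriter,
  // so clients need not be recompiled when only the implementation changes.
  def openFiles(list: List[File]): Seq[Writer] =
    Vector(list.map(name => new BufferedWriter(new FileWriter(name))): _*)
}
```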
XXX the rest of the section must be reintegrated or dropped:
In general, changing the return type of a method might be source-compatible, for instance if the new type is more specific, or if it is less specific but still more specific than the type required by clients. (Note, however, that making the type more specific might still invalidate clients in non-trivial scenarios involving, for instance, type inference or implicit conversions: with a more specific type, too many implicit conversions might be available, leading to ambiguity.) Either way, the bytecode for a method call includes the return type of the invoked method, hence the client code needs to be recompiled.
Hence, adding explicit return types on classes with many dependents might reduce the occasions where client code needs to be recompiled. Moreover, this is in general a good development practice when the interfaces between different modules become important&mdash;specifying such an interface documents the intended behavior and helps ensure binary compatibility, which is especially important when the exposed interface is used by other software components.
### Why adding a member requires recompiling existing clients
In Java adding a member does not require recompiling existing valid source code. The same should seemingly hold also in Scala, but this is not the case: implicit conversions might enrich class `Foo` with method `bar` without modifying class `Foo` itself through the [pimp-my-library pattern](http://www.artima.com/weblogs/viewpost.jsp?thread=179766) (see discussion in issue [#288](https://github.com/harrah/xsbt/issues/288) - XXX integrate more). However, if another method `bar` is introduced in class `Foo`, this method should be used in preference to the one added through implicit conversions. Therefore any class depending on `Foo` should be recompiled. One can imagine more fine-grained tracking of dependencies, but this is currently not implemented.
## Further references
The incremental compilation logic is implemented in https://github.com/harrah/xsbt/blob/0.13/compile/inc/Incremental.scala. Some related documentation for SBT 0.7 is available at: https://code.google.com/p/simple-build-tool/wiki/ChangeDetectionAndTesting.
Some discussion on the incremental recompilation policies is available in issue [#322](https://github.com/harrah/xsbt/issues/322) and [#288](https://github.com/harrah/xsbt/issues/288).

@ -1,19 +0,0 @@
* [[Home]] - Overview of sbt
* [[Getting Started Guide|Getting Started Welcome]] - START HERE
* [[FAQ]] - Questions, answered.
* [[Index]] - Find types, values, and methods
* [[Community]] - source, forums, releases
* [[Change history|Changes]]
* [[Credits]]
* [[License|https://github.com/harrah/xsbt/blob/0.11/LICENSE]]
* [[Source code (github)|https://github.com/harrah/xsbt/tree/0.11]]
* [[Source code (SXR)|http://harrah.github.com/xsbt/latest/sxr/index.html]]
* [[API Documentation|http://harrah.github.com/xsbt/latest/api/index.html]]
* [[Places to help|Opportunities]]
* [[Nightly Builds]]
* [[Plugins list|sbt-0.10-plugins-list]]
* [[Resources]]
* [[Examples|Community-Examples]]
* [[Examples]]
* [[Detailed Topics]] - deep dive docs
* [[Extending sbt|Extending]] - internals docs