first step in md -> rst conversion

This commit is contained in:
Mark Harrah 2012-09-14 18:08:35 -04:00
parent 9b9a09f9af
commit b98e12e9dd
173 changed files with 16609 additions and 12588 deletions


@@ -56,3 +56,19 @@ This is the 0.13.x series of sbt.
4. After each `publish-local`, clean the `~/.sbt/boot/` directory. Alternatively, if sbt is running and the launcher hasn't changed, run `reboot full` to have sbt do this for you.
5. If a project has `project/build.properties` defined, either delete the file or change `sbt.version` to `0.13.0-SNAPSHOT`.
## Building Documentation
Documentation is built using Jekyll and Sphinx and requires some external programs and libraries to be installed manually first:
```text
$ pip install pygments
$ pip install sphinx
$ pip install sphinxcontrib-issuetracker
$ gem install rdiscount
$ gem install jekyll
```
To build the full site, run the `make-site` task, which will generate the manual, API, SXR, and other site pages in `target/site/`.
Individual pieces of the site may be generated using `xsbt/sphinx:mappings`, `xsbt/jekyll:mappings`, `xsbt/doc`, or `xsbt/sxr`. The output directories will be under `target/`, such as `target/sphinx`.
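For example, a typical documentation build session might look like the following (a sketch; it assumes the sbt launcher is started from the repository root with the tools above already installed):
```text
$ sbt
> make-site
> xsbt/sphinx:mappings
```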


@@ -1,182 +0,0 @@
[#274]: https://github.com/harrah/xsbt/pull/274
[#304]: https://github.com/harrah/xsbt/issues/304
[#315]: https://github.com/harrah/xsbt/issues/315
[#327]: https://github.com/harrah/xsbt/issues/327
[#335]: https://github.com/harrah/xsbt/issues/335
[#361]: https://github.com/harrah/xsbt/issues/361
[#393]: https://github.com/harrah/xsbt/issues/393
[#396]: https://github.com/harrah/xsbt/issues/396
[#380]: https://github.com/harrah/xsbt/issues/380
[#389]: https://github.com/harrah/xsbt/issues/389
[#388]: https://github.com/harrah/xsbt/issues/388
[#387]: https://github.com/harrah/xsbt/issues/387
[#386]: https://github.com/harrah/xsbt/issues/386
[#378]: https://github.com/harrah/xsbt/issues/378
[#377]: https://github.com/harrah/xsbt/issues/377
[#368]: https://github.com/harrah/xsbt/issues/368
[#394]: https://github.com/harrah/xsbt/issues/394
[#369]: https://github.com/harrah/xsbt/issues/369
[#403]: https://github.com/harrah/xsbt/issues/403
[#412]: https://github.com/harrah/xsbt/issues/412
[#415]: https://github.com/harrah/xsbt/issues/415
[#420]: https://github.com/harrah/xsbt/issues/420
[#462]: https://github.com/harrah/xsbt/pull/462
[#472]: https://github.com/harrah/xsbt/pull/472
[Launcher]: https://github.com/harrah/xsbt/wiki/Launcher
# 0.12.0 Changes
## Features, fixes, changes with compatibility implications (incomplete, please help)
* The cross versioning convention has changed for Scala versions 2.10 and later as well as for sbt plugins.
* When invoked directly, 'update' will always perform an update ([#335])
* The sbt plugins repository is added by default for plugins and plugin definitions. [#380]
* Plugin configuration directory precedence has changed (see details section below)
* Source dependencies have been fixed, but the fix required changes (see details section below)
* Aggregation has changed to be more flexible (see details section below)
* Task axis syntax has changed from key(for task) to task::key (see details section below)
* The organization for sbt has changed to `org.scala-sbt` (was: `org.scala-tools.sbt`). This affects users of the scripted plugin in particular.
* `artifactName` type has changed to `(ScalaVersion, ModuleID, Artifact) => String`
* `javacOptions` is now a task
* `session save` overwrites settings in `build.sbt` (when appropriate). [#369]
* scala-library.jar is now required to be on the classpath in order to compile Scala code. See the `scala-library.jar` section at the bottom of the page for details.
## Features
* Support for forking tests ([#415])
* `test-quick` (see details section below)
* Support globally overriding repositories ([#472]).
* Added `print-warnings` task that will print unchecked and deprecation warnings from the previous compilation without needing to recompile (Scala 2.10+ only)
* Support for loading an ivy settings file from a URL.
* `projects add/remove <URI>` for temporarily working with other builds
* Enhanced control over parallel execution (see details section below)
* `inspect tree <key>` for calling `inspect` command recursively ([#274])
## Fixes
* Delete a symlink and not its contents when recursively deleting a directory.
* Fix detection of ancestors for java sources
* Fix the resolvers used for `update-sbt-classifiers` ([#304])
* Fix auto-imports of plugins ([#412])
* Argument quoting (see details section below)
* Properly reset JLine after being stopped by Ctrl+z (unix only). [#394]
## Improvements
* The launcher can launch all released sbt versions back to 0.7.0.
* A more refined hint to run 'last' is given when a stack trace is suppressed.
* Use Java 7 `Redirect.INHERIT` to inherit the input stream of a subprocess ([#462],[#327]). This should fix issues when forking interactive programs. (@vigdorchik)
* Mirror ivy 'force' attribute ([#361])
* Various improvements to `help` and `tasks` commands as well as new `settings` command ([#315])
* Bump jsch version to 0.1.46. ([#403])
* Improved help commands: `help`, `tasks`, `settings`.
* Bump to JLine 1.0 (see details section below)
* Global repository setting (see details section below)
* Other fixes/improvements: [#368], [#377], [#378], [#386], [#387], [#388], [#389]
## Experimental or In-progress
* API for embedding incremental compilation. This interface is subject to change, but already being used in [a branch of the scala-maven-plugin](https://github.com/davidB/scala-maven-plugin/tree/feature/sbt-inc).
* Experimental support for keeping the Scala compiler resident. Enable by passing `-Dsbt.resident.limit=n` to sbt, where `n` is an integer indicating the maximum number of compilers to keep around.
* The [Howto pages](http://www.scala-sbt.org/howto.html) on the [new site](http://www.scala-sbt.org) are at least readable now. There is more content to write and more formatting improvements are needed, so [pull requests are welcome](https://github.com/sbt/sbt.github.com).
## Details of major changes from 0.11.2 to 0.12.0
## Plugin configuration directory
In 0.11.0, plugin configuration moved from `project/plugins/` to just `project/`, with `project/plugins/` being deprecated. Only 0.11.2 had a deprecation message, but in all of 0.11.x, the presence of the old style `project/plugins/` directory took precedence over the new style. In 0.12.0, the new style takes precedence. Support for the old style won't be removed until 0.13.0.
1. Ideally, a project should ensure there is never a conflict. Both styles are still supported; only the behavior when there is a conflict has changed.
2. In practice, switching from an older branch of a project to a new branch would often leave an empty `project/plugins/` directory that would cause the old style to be used, despite there being no configuration there.
3. Therefore, the intention is that this change is strictly an improvement for projects transitioning to the new style and isn't noticed by other projects.
## Parsing task axis
There is an important change related to parsing the task axis for settings and tasks that fixes [#202](https://github.com/harrah/xsbt/issues/202).
1. The syntax before 0.12 has been `{build}project/config:key(for task)`
2. The proposed (and implemented) change for 0.12 is `{build}project/config:task::key`
3. By moving the task axis before the key, it allows for easier discovery (via tab completion) of keys in plugins.
4. It is not planned to support the old syntax.
## Aggregation
Aggregation has been made more flexible. This is along the direction that has been previously discussed on the mailing list.
1. Before 0.12, a setting was parsed according to the current project and only the exact setting parsed was aggregated.
2. Also, tab completion did not account for aggregation.
3. This meant that if the setting/task didn't exist on the current project, parsing failed even if an aggregated project contained the setting/task.
4. Additionally, if compile:package existed for the current project, *:package existed for an aggregated project, and the user requested 'package' to run (without specifying the configuration), *:package wouldn't be run on the aggregated project (because it isn't the same as the compile:package key that existed on the current project).
5. In 0.12, both of these situations result in the aggregated settings being selected. For example,
1. Consider a project `root` that aggregates a subproject `sub`.
2. `root` defines `*:package`.
3. `sub` defines `compile:package` and `compile:compile`.
4. Running `root/package` will run `root/*:package` and `sub/compile:package`
5. Running `root/compile` will run `sub/compile:compile`
6. This change was made possible in part by the change to task axis parsing.
## Parallel Execution
Fine control over parallel execution is supported as described here: https://github.com/harrah/xsbt/wiki/Parallel-Execution
1. The default behavior should be the same as before, including the `parallelExecution` settings.
2. The new capabilities of the system should otherwise be considered experimental.
3. Therefore, `parallelExecution` won't be deprecated at this time.
## Source dependencies
A fix for issue [#329](https://github.com/harrah/xsbt/issues/329) is included in 0.12.0. This fix ensures that only one version of a plugin is loaded across all projects. There are two parts to this.
1. The version of a plugin is fixed by the first build to load it. In particular, the plugin version used in the root build (the one in which sbt is started) always overrides the version used in dependencies.
2. Plugins from all builds are loaded in the same class loader.
Additionally, Sanjin's patches to add support for hg and svn URIs are included.
1. sbt uses Subversion to retrieve URIs beginning with `svn` or `svn+ssh`. An optional fragment identifies a specific revision to check out.
2. Because a URI for mercurial doesn't have a mercurial-specific scheme, sbt requires the URI to be prefixed with `hg:` to identify it as a mercurial repository.
3. Also, URIs that end with `.git` are now handled properly.
## Cross building
The cross version suffix is shortened to only include the major and minor version for Scala versions starting with the 2.10 series and for sbt versions starting with the 0.12 series. For example, `sbinary_2.10` for a normal library or `sbt-plugin_2.10_0.12` for an sbt plugin. This requires forward and backward binary compatibility across incremental releases for both Scala and sbt.
1. This change has been a long time coming, but it requires everyone publishing an open source project to switch to 0.12 to publish for 2.10 or adjust the cross versioned prefix in their builds appropriately.
2. Obviously, using 0.12 to publish a library for 2.10 requires 0.12.0 to be released before projects publish for 2.10.
3. There is now the concept of a binary version. This is a subset of the full version string that represents binary compatibility. That is, equal binary versions imply binary compatibility. All Scala versions prior to 2.10 use the full version for the binary version to reflect previous sbt behavior. For 2.10 and later, the binary version is `<major>.<minor>`.
4. The cross version behavior for published artifacts is configured by the `crossVersion` setting. It can be configured for dependencies by using the `cross` method on `ModuleID` or by the traditional `%%` dependency construction variant. By default, a dependency has cross versioning disabled when constructed with a single `%` and uses the binary Scala version when constructed with `%%`.
5. The `artifactName` function now accepts a `ScalaVersion` as its first argument instead of a `String`. The full type is now `(ScalaVersion, ModuleID, Artifact) => String`. `ScalaVersion` contains both the full Scala version (such as 2.10.0) as well as the binary Scala version (such as 2.10).
6. The flexible version mapping added by Indrajit has been merged into the `cross` method and the %% variants accepting more than one argument have been deprecated. See [[Cross Build]] for details.
## Global repository setting
Define the repositories to use by putting a standalone `[repositories]` section (see the [Launcher] page) in `~/.sbt/repositories` and pass `-Dsbt.override.build.repos=true` to sbt. Only the repositories in that file will be used by the launcher for retrieving sbt and Scala and by sbt when retrieving project dependencies. (@jsuereth)
## test-quick
`test-quick` ([#393]) runs the tests specified as arguments (or all tests if no arguments are given) that:
1. have not been run yet OR
2. failed the last time they were run OR
3. had any transitive dependencies recompiled since the last successful run
## Argument quoting
Argument quoting ([#396]) from the interactive mode works like Scala string literals.
1. `> command "arg with spaces,\n escapes interpreted"`
2. `> command """arg with spaces,\n escapes not interpreted"""`
3. For the first variant, note that paths on Windows use backslashes and need to be escaped (`\\`). Alternatively, use the second variant, which does not interpret escapes.
4. For using either variant in batch mode, note that a shell will generally require the double quotes themselves to be escaped.
## scala-library.jar
sbt versions prior to 0.12.0 provided the location of scala-library.jar to scalac even if scala-library.jar wasn't on the classpath. This allowed compiling Scala code without scala-library as a dependency, for example, but this was a misfeature. Instead, the Scala library should be declared as `provided`:
```scala
// Don't automatically add the scala-library dependency
// in the 'compile' configuration
autoScalaLibrary := false
libraryDependencies +=
"org.scala-lang" % "scala-library" % "2.9.2" % "provided"
```


@@ -1 +0,0 @@
This page contains examples submitted by the community of SBT users.


@@ -0,0 +1,275 @@
==============
0.12.0 Changes
==============
Features, fixes, changes with compatibility implications (incomplete, please help)
----------------------------------------------------------------------------------
- The cross versioning convention has changed for Scala versions 2.10
and later as well as for sbt plugins.
- When invoked directly, 'update' will always perform an update (gh-335)
- The sbt plugins repository is added by default for plugins and plugin definitions. gh-380
- Plugin configuration directory precedence has changed (see details
section below)
- Source dependencies have been fixed, but the fix required changes
(see details section below)
- Aggregation has changed to be more flexible (see details section
below)
- Task axis syntax has changed from key(for task) to task::key (see
details section below)
- The organization for sbt has changed to ``org.scala-sbt`` (was:
``org.scala-tools.sbt``). This affects users of the scripted plugin in
particular.
- ``artifactName`` type has changed to
``(ScalaVersion, ModuleID, Artifact) => String``
- ``javacOptions`` is now a task
- ``session save`` overwrites settings in ``build.sbt`` (when appropriate). gh-369
- scala-library.jar is now required to be on the classpath in order to
compile Scala code. See the ``scala-library.jar`` section at the
bottom of the page for details.
Features
--------
- Support for forking tests (gh-415)
- ``test-quick`` (see details section below)
- Support globally overriding repositories (gh-472)
- Added ``print-warnings`` task that will print unchecked and
deprecation warnings from the previous compilation without needing to
recompile (Scala 2.10+ only)
- Support for loading an ivy settings file from a URL.
- ``projects add/remove <URI>`` for temporarily working with other builds
- Enhanced control over parallel execution (see details section below)
- ``inspect tree <key>`` for calling ``inspect`` command recursively (gh-274)
Fixes
-----
- Delete a symlink and not its contents when recursively deleting a directory.
- Fix detection of ancestors for java sources
- Fix the resolvers used for ``update-sbt-classifiers`` (gh-304)
- Fix auto-imports of plugins (gh-412)
- Argument quoting (see details section below)
- Properly reset JLine after being stopped by Ctrl+z (unix only). gh-394
Improvements
------------
- The launcher can launch all released sbt versions back to 0.7.0.
- A more refined hint to run 'last' is given when a stack trace is suppressed.
- Use Java 7 ``Redirect.INHERIT`` to inherit the input stream of a subprocess (gh-462,\ gh-327).
This should fix issues when forking interactive programs. (@vigdorchik)
- Mirror ivy 'force' attribute (gh-361)
- Various improvements to ``help`` and ``tasks`` commands as well as
new ``settings`` command (gh-315)
- Bump jsch version to 0.1.46. (gh-403)
- Improved help commands: ``help``, ``tasks``, ``settings``.
- Bump to JLine 1.0 (see details section below)
- Global repository setting (see details section below)
- Other fixes/improvements: gh-368, gh-377, gh-378, gh-386, gh-387, gh-388, gh-389
Experimental or In-progress
---------------------------
- API for embedding incremental compilation. This interface is subject
to change, but already being used in `a branch of the
scala-maven-plugin <https://github.com/davidB/scala-maven-plugin/tree/feature/sbt-inc>`_.
- Experimental support for keeping the Scala compiler resident. Enable
by passing ``-Dsbt.resident.limit=n`` to sbt, where ``n`` is an
integer indicating the maximum number of compilers to keep around.
- The `Howto pages <http://www.scala-sbt.org/howto.html>`_ on the `new
site <http://www.scala-sbt.org>`_ are at least readable now. There is
more content to write and more formatting improvements are needed, so
`pull requests are welcome <https://github.com/sbt/sbt.github.com>`_.
Details of major changes from 0.11.2 to 0.12.0
----------------------------------------------
Plugin configuration directory
------------------------------
In 0.11.0, plugin configuration moved from ``project/plugins/`` to just
``project/``, with ``project/plugins/`` being deprecated. Only 0.11.2
had a deprecation message, but in all of 0.11.x, the presence of the old
style ``project/plugins/`` directory took precedence over the new style.
In 0.12.0, the new style takes precedence. Support for the old style
won't be removed until 0.13.0.
1. Ideally, a project should ensure there is never a conflict. Both
styles are still supported; only the behavior when there is a
conflict has changed.
2. In practice, switching from an older branch of a project to a new
branch would often leave an empty ``project/plugins/`` directory that
would cause the old style to be used, despite there being no
configuration there.
3. Therefore, the intention is that this change is strictly an
improvement for projects transitioning to the new style and isn't
noticed by other projects.
Parsing task axis
-----------------
There is an important change related to parsing the task axis for
settings and tasks that fixes gh-202.
1. The syntax before 0.12 has been
``{build}project/config:key(for task)``
2. The proposed (and implemented) change for 0.12 is
``{build}project/config:task::key``
3. By moving the task axis before the key, it allows for easier
discovery (via tab completion) of keys in plugins.
4. It is not planned to support the old syntax.
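The change can be sketched with a hypothetical key reference
(``sources`` scoped to the ``doc`` task in the ``compile``
configuration):

::

    compile:sources(for doc)   -- before 0.12
    compile:doc::sources       -- 0.12 and later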
Aggregation
-----------
Aggregation has been made more flexible. This is along the direction
that has been previously discussed on the mailing list.
1. Before 0.12, a setting was parsed according to the current project
and only the exact setting parsed was aggregated.
2. Also, tab completion did not account for aggregation.
3. This meant that if the setting/task didn't exist on the current
project, parsing failed even if an aggregated project contained the
setting/task.
4. Additionally, if compile:package existed for the current project,
\*:package existed for an aggregated project, and the user requested
'package' to run (without specifying the configuration), \*:package
wouldn't be run on the aggregated project (because it isn't the same
as the compile:package key that existed on the current project).
5. In 0.12, both of these situations result in the aggregated settings
being selected. For example,
1. Consider a project ``root`` that aggregates a subproject ``sub``.
2. ``root`` defines ``*:package``.
3. ``sub`` defines ``compile:package`` and ``compile:compile``.
4. Running ``root/package`` will run ``root/*:package`` and
``sub/compile:package``
5. Running ``root/compile`` will run ``sub/compile:compile``
6. This change was made possible in part by the change to task axis
parsing.
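The ``root``/``sub`` example above can be sketched as a full build
definition (the object and file names are illustrative):

::

    import sbt._
    import Keys._

    object AggregationBuild extends Build {
      // tasks requested on root also run on the aggregated project sub
      lazy val root = Project("root", file(".")) aggregate(sub)
      lazy val sub  = Project("sub", file("sub"))
    }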
Parallel Execution
------------------
Fine control over parallel execution is supported as described here:
:doc:`/Detailed-Topics/Parallel-Execution`
1. The default behavior should be the same as before, including the
``parallelExecution`` settings.
2. The new capabilities of the system should otherwise be considered
experimental.
3. Therefore, ``parallelExecution`` won't be deprecated at this time.
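For example, the existing setting continues to work as before; a
minimal ``build.sbt`` sketch:

::

    // run tests sequentially while leaving other tasks parallel
    parallelExecution in Test := false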
Source dependencies
-------------------
A fix for issue gh-329 is included in 0.12.0. This fix ensures that only one version of a plugin
is loaded across all projects. There are two parts to this.
1. The version of a plugin is fixed by the first build to load it. In
particular, the plugin version used in the root build (the one in
which sbt is started) always overrides the version used in
dependencies.
2. Plugins from all builds are loaded in the same class loader.
Additionally, Sanjin's patches to add support for hg and svn URIs are
included.
1. sbt uses Subversion to retrieve URIs beginning with ``svn`` or
``svn+ssh``. An optional fragment identifies a specific revision to
check out.
2. Because a URI for mercurial doesn't have a mercurial-specific scheme,
sbt requires the URI to be prefixed with ``hg:`` to identify it as a
mercurial repository.
3. Also, URIs that end with ``.git`` are now handled properly.
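A sketch of declaring such source dependencies in a build definition
(the URIs are hypothetical):

::

    // revision 1234 from a subversion repository
    lazy val svnDep = RootProject(uri("svn+ssh://example.org/repo/trunk#1234"))
    // the hg: prefix marks a mercurial repository
    lazy val hgDep = RootProject(uri("hg:https://example.org/repo"))
    // a URI ending in .git is now handled properly
    lazy val gitDep = RootProject(uri("git://example.org/repo.git"))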
Cross building
--------------
The cross version suffix is shortened to only include the major and
minor version for Scala versions starting with the 2.10 series and for
sbt versions starting with the 0.12 series. For example,
``sbinary_2.10`` for a normal library or ``sbt-plugin_2.10_0.12`` for an
sbt plugin. This requires forward and backward binary compatibility
across incremental releases for both Scala and sbt.
1. This change has been a long time coming, but it requires everyone
publishing an open source project to switch to 0.12 to publish for
2.10 or adjust the cross versioned prefix in their builds
appropriately.
2. Obviously, using 0.12 to publish a library for 2.10 requires 0.12.0
to be released before projects publish for 2.10.
3. There is now the concept of a binary version. This is a subset of the
full version string that represents binary compatibility. That is,
equal binary versions imply binary compatibility. All Scala
versions prior to 2.10 use the full version for the binary version to
reflect previous sbt behavior. For 2.10 and later, the binary version
is ``<major>.<minor>``.
4. The cross version behavior for published artifacts is configured by
the ``crossVersion`` setting. It can be configured for dependencies
by using the ``cross`` method on ``ModuleID`` or by the traditional
``%%`` dependency construction variant. By default, a dependency has
cross versioning disabled when constructed with a single ``%`` and
uses the binary Scala version when constructed with ``%%``.
5. The ``artifactName`` function now accepts a ``ScalaVersion`` as its
first argument instead of a ``String``. The full type is now
``(ScalaVersion, ModuleID, Artifact) => String``. ``ScalaVersion``
contains both the full Scala version (such as 2.10.0) as well as the
binary Scala version (such as 2.10).
6. The flexible version mapping added by Indrajit has been merged into
the ``cross`` method and the %% variants accepting more than one
argument have been deprecated. See :doc:`/Detailed-Topics/Cross-Build` for details.
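The effect of ``%`` versus ``%%`` can be sketched as follows (the
versions are illustrative):

::

    // '%%' appends the binary Scala version to the artifact ID,
    // resolving sbinary_2.10 when scalaVersion is any 2.10.x release
    libraryDependencies += "org.scala-tools.sbinary" %% "sbinary" % "0.4.0"

    // a plain '%' disables cross versioning entirely
    libraryDependencies += "junit" % "junit" % "4.10" % "test"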
Global repository setting
-------------------------
Define the repositories to use by putting a standalone
``[repositories]`` section (see the
:doc:`/Detailed-Topics/Launcher` page) in
``~/.sbt/repositories`` and pass ``-Dsbt.override.build.repos=true`` to
sbt. Only the repositories in that file will be used by the launcher for
retrieving sbt and Scala and by sbt when retrieving project
dependencies. (@jsuereth)
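A minimal ``~/.sbt/repositories`` file might look like this (the proxy
entry is hypothetical; ``local`` and ``maven-central`` are predefined
names):

::

    [repositories]
      local
      my-company-proxy: http://repo.example.org/maven-releases/
      maven-central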
test-quick
----------
``test-quick`` (gh-393) runs the tests specified as arguments (or all tests if no arguments are
given) that:
1. have not been run yet OR
2. failed the last time they were run OR
3. had any transitive dependencies recompiled since the last successful
run
Argument quoting
----------------
Argument quoting (gh-396) from the interactive mode works like Scala string literals.
1. ``> command "arg with spaces,\n escapes interpreted"``
2. ``> command """arg with spaces,\n escapes not interpreted"""``
3. For the first variant, note that paths on Windows use backslashes and
need to be escaped (``\\``). Alternatively, use the second variant,
which does not interpret escapes.
4. For using either variant in batch mode, note that a shell will
generally require the double quotes themselves to be escaped.
scala-library.jar
-----------------
sbt versions prior to 0.12.0 provided the location of scala-library.jar
to scalac even if scala-library.jar wasn't on the classpath. This
allowed compiling Scala code without scala-library as a dependency, for
example, but this was a misfeature. Instead, the Scala library should be
declared as ``provided``:
::

    // Don't automatically add the scala-library dependency
    // in the 'compile' configuration
    autoScalaLibrary := false
    libraryDependencies += "org.scala-lang" % "scala-library" % "2.9.2" % "provided"


@@ -1,834 +0,0 @@
### 0.12.0 to 0.12.1 (unreleased)
Dependency management fixes:
* Merge multiple dependency definitions for the same ID. Workaround for [#468], [#285], [#419], [#480].
* Don't write the `<scope>` section of the pom if the scope is 'compile'.
* Ability to properly match on artifact type. Fixes [#507] (Thomas).
* Force `update` to run on changes to last modified time of artifacts or cached descriptor (part of fix for [#532]). It may also fix issues when working with multiple local projects via 'publish-local' and binary dependencies.
* Per-project resolution cache that deletes cached files before `update`. Notes:
- The resolution cache differs from the repository cache and does not contain dependency metadata or artifacts.
- The resolution cache contains the generated ivy files, properties, and resolve reports for the project.
- There will no longer be individual files directly in `~/.ivy2/cache/`
- Resolve reports are now in `target/resolution-cache/reports/`, viewable with a browser.
- Cache location includes extra attributes so that cross builds of a plugin do not overwrite each other. [#532]
Three stage incremental compilation:
* As before, the first step recompiles sources that were edited (or otherwise directly invalidated).
* The second step recompiles sources from the first step whose API has changed, their direct dependencies, and sources forming a cycle with these sources.
* The third step recompiles transitive dependencies of sources from the second step whose API changed.
* Code relying mainly on composition should see decreased compilation times with this approach.
* Code with deep inheritance hierarchies and large cycles between sources may take longer to compile.
* `last compile` will show cycles that were processed in step 2. Reducing large cycles of sources shown here may decrease compile times.
Miscellaneous fixes and improvements:
* Various test forking fixes. Fixes [#512], [#515].
* Proper isolation of build definition classes. Fixes [#536], [#511].
* `orbit` packaging should be handled like a standard jar. Fixes [#499].
* In `IO.copyFile`, limit maximum size transferred via NIO. Fixes [#491].
* Add OSX JNI library extension in `includeFilter` by default. Fixes [#500]. (Indrajit)
* Translate `show x y` into `;show x ;show y` . Fixes [#495].
* Clean up temporary directory on exit. Fixes [#502].
* `set` prints the scopes+keys it defines and affects.
* Tab completion for `set` (experimental).
* Report file name when an error occurs while opening a corrupt zip file in incremental compilation code. (James)
* Defer opening logging output files until an actual write. Helps reduce number of open file descriptors.
* Back all console loggers by a common console interface that merges (overwrites) consecutive `Resolving xxxx ...` lines when ansi codes are enabled (as first done by Play).
Forward-compatible-only change (not present in 0.12.0):
* `sourcesInBase` setting controls whether sources in base directory are included. Fixes [#494].
[#285]: https://github.com/harrah/xsbt/issues/285
[#419]: https://github.com/harrah/xsbt/issues/419
[#468]: https://github.com/harrah/xsbt/issues/468
[#480]: https://github.com/harrah/xsbt/issues/480
[#491]: https://github.com/harrah/xsbt/issues/491
[#494]: https://github.com/harrah/xsbt/issues/494
[#495]: https://github.com/harrah/xsbt/issues/495
[#499]: https://github.com/harrah/xsbt/issues/499
[#500]: https://github.com/harrah/xsbt/issues/500
[#502]: https://github.com/harrah/xsbt/issues/502
[#507]: https://github.com/harrah/xsbt/issues/507
[#511]: https://github.com/harrah/xsbt/issues/511
[#512]: https://github.com/harrah/xsbt/issues/512
[#515]: https://github.com/harrah/xsbt/issues/515
[#532]: https://github.com/harrah/xsbt/issues/532
[#536]: https://github.com/harrah/xsbt/issues/536
### 0.11.3 to 0.12.0
The changes for 0.12.0 are listed on a separate page. See [[ChangeSummary_0.12.0]].
### 0.11.2 to 0.11.3
Dropping scala-tools.org:
* The sbt group ID is changed to `org.scala-sbt` (from org.scala-tools.sbt). This means you must use a 0.11.3 launcher to launch 0.11.3.
* The convenience objects `ScalaToolsReleases` and `ScalaToolsSnapshots` now point to `https://oss.sonatype.org/content/repositories/releases` and `.../snapshots`
* The launcher no longer includes `scala-tools.org` repositories by default and instead uses the Sonatype OSS snapshots repository for Scala snapshots.
* The `scala-tools.org` releases repository is no longer included as an application repository by default. The Sonatype OSS repository is _not_ included by default in its place.
Other fixes:
* Compiler interface works with 2.10
* `maxErrors` setting is no longer ignored
* Correct test count [#372] \(Eugene)
* Fix file descriptor leak in process library (Daniel)
* Buffer url input stream returned by Using [#437]
* Jsch version bumped to 0.1.46 [#403]
* JUnit test detection handles ancestors properly (Indrajit)
* Avoid unnecessarily re-resolving plugins [#368]
* Substitute variables in explicit version strings and custom repository definitions in launcher configuration
* Support setting sbt.version from system property, which overrides setting in a properties file [#354]
* Minor improvements to command/key suggestions
[#437]: https://github.com/harrah/xsbt/issues/437
[#403]: https://github.com/harrah/xsbt/issues/403
[#372]: https://github.com/harrah/xsbt/issues/372
[#368]: https://github.com/harrah/xsbt/issues/368
[#354]: https://github.com/harrah/xsbt/issues/354
### 0.11.1 to 0.11.2
Notable behavior change:
* The local Maven repository has been removed from the launcher's list of default repositories, which is used for obtaining sbt and Scala dependencies. This is motivated by the high probability that including this repository was causing the various problems some users have with the launcher not finding some dependencies ([#217]).
Fixes:
* [#257] Fix invalid classifiers in pom generation (Indrajit)
* [#255] Fix scripted plugin descriptor (Artyom)
* Fix forking git on windows (Stefan, Josh)
* [#261] Fix whitespace handling for semicolon-separated commands
* [#263] Fix handling of dependencies with an explicit URL
* [#272] Show deprecation message for `project/plugins/`
[#217]: https://github.com/harrah/xsbt/issues/217
[#255]: https://github.com/harrah/xsbt/issues/255
[#257]: https://github.com/harrah/xsbt/issues/257
[#263]: https://github.com/harrah/xsbt/issues/263
[#261]: https://github.com/harrah/xsbt/issues/261
[#272]: https://github.com/harrah/xsbt/issues/272
### 0.11.0 to 0.11.1
Breaking change:
* The scripted plugin is now in the `sbt` package so that it can be used from a named package
Notable behavior change:
* By default, there is more logging during update: one line per dependency resolved and two lines per dependency downloaded. This is to address the appearance that sbt hangs on larger `update` runs.
Fixes and improvements:
* Show help for a key with `help <key>`
* [#21] Reduced memory and time overhead of incremental recompilation with signature hash based approach.
* Rotate global log so that only output since last prompt is displayed for `last`
* [#169] Add support for exclusions with the `excludeAll` and `exclude` methods on `ModuleID`. (Indrajit)
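A minimal sketch of the new exclusion methods (the module coordinates and excluded organizations below are illustrative, not taken from the release notes):

```scala
libraryDependencies += "log4j" % "log4j" % "1.2.16" excludeAll(
  ExclusionRule(organization = "com.sun.jdmk"),
  ExclusionRule(organization = "com.sun.jmx")
)
```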
* [#235] Checksums configurable for launcher
* [#246] Invalidate `update` when `update` is invalidated for an internal project dependency
* [#138] Include plugin sources and docs in `update-sbt-classifiers`
* [#219] Add `cleanupCommands` setting to specify commands to run before the interpreter exits
* [#46] Fix regression in caching missing classifiers for `update-classifiers` and `update-sbt-classifiers`.
* [#228] Set `connectInput` to true to connect standard input to forked run
* [#229] Limited task execution interruption using ctrl+c
* [#220] Properly record source dependencies from separate compilation runs in the same step.
* [#214] Better default behavior for `classpathConfiguration` for external Ivy files
* [#212] Fix transitive plugin dependencies.
* [#222] Generate `<classifiers>` section in `make-pom`. (Jan)
* Build resolvers, loaders, and transformers.
* Allow project dependencies to be modified by a setting (`buildDependencies`), but with the restriction that new builds cannot be introduced.
* [#174], [#196], [#201], [#204], [#207], [#208], [#226], [#224], [#253]
[#253]: https://github.com/harrah/xsbt/issues/253
[#246]: https://github.com/harrah/xsbt/issues/246
[#235]: https://github.com/harrah/xsbt/issues/235
[#229]: https://github.com/harrah/xsbt/issues/229
[#228]: https://github.com/harrah/xsbt/issues/228
[#226]: https://github.com/harrah/xsbt/issues/226
[#224]: https://github.com/harrah/xsbt/issues/224
[#222]: https://github.com/harrah/xsbt/issues/222
[#220]: https://github.com/harrah/xsbt/issues/220
[#219]: https://github.com/harrah/xsbt/issues/219
[#214]: https://github.com/harrah/xsbt/issues/214
[#212]: https://github.com/harrah/xsbt/issues/212
[#208]: https://github.com/harrah/xsbt/issues/208
[#207]: https://github.com/harrah/xsbt/issues/207
[#204]: https://github.com/harrah/xsbt/issues/204
[#201]: https://github.com/harrah/xsbt/issues/201
[#196]: https://github.com/harrah/xsbt/issues/196
[#174]: https://github.com/harrah/xsbt/issues/174
[#169]: https://github.com/harrah/xsbt/issues/169
[#138]: https://github.com/harrah/xsbt/issues/138
[#46]: https://github.com/harrah/xsbt/issues/46
[#21]: https://github.com/harrah/xsbt/issues/21
[#114]: https://github.com/harrah/xsbt/issues/114
[#115]: https://github.com/harrah/xsbt/issues/115
[#118]: https://github.com/harrah/xsbt/issues/118
[#120]: https://github.com/harrah/xsbt/issues/120
[#121]: https://github.com/harrah/xsbt/issues/121
[#128]: https://github.com/harrah/xsbt/issues/128
[#131]: https://github.com/harrah/xsbt/issues/131
[#132]: https://github.com/harrah/xsbt/issues/132
[#135]: https://github.com/harrah/xsbt/issues/135
[#139]: https://github.com/harrah/xsbt/issues/139
[#140]: https://github.com/harrah/xsbt/issues/140
[#145]: https://github.com/harrah/xsbt/issues/145
[#156]: https://github.com/harrah/xsbt/issues/156
[#157]: https://github.com/harrah/xsbt/issues/157
[#162]: https://github.com/harrah/xsbt/issues/162
### 0.10.1 to 0.11.0
Major Improvements:
* Move to 2.9.1 for project definitions and plugins
* Drop support for 2.7
* Settings overhaul, mainly to make API documentation more usable
* Support using native libraries in `run` and `test` (but not `console`, for example)
* Automatic plugin cross-versioning. Use
```scala
addSbtPlugin("group" % "name" % "version")
```
in `project/plugins.sbt` instead of `libraryDependencies += ...`. See [[Plugins]] for details.
Fixes and Improvements:
* Display all undefined settings at once, instead of only the first one
* Deprecate separate `classpathFilter`, `defaultExcludes`, and `sourceFilter` keys in favor of `includeFilter` and `excludeFilter` explicitly scoped by `unmanagedSources`, `unmanagedResources`, or `unmanagedJars` as appropriate (Indrajit)
* Default to using shared boot directory in `~/.sbt/boot/`
* Can put contents of `project/plugins/` directly in `project/` instead. The `plugins/` directory will likely be deprecated.
* Key display is context sensitive. For example, in a single project, the build and project axes will not be displayed
* [#114], [#118], [#121], [#132], [#135], [#157]: Various settings and error message improvements
* [#115]: Support configuring checksums separately for `publish` and `update`
* [#118]: Add `about` command
* [#118], [#131]: Improve `last` command. Aggregate `last <task>` and display all recent output for `last`
* [#120]: Support read-only external file projects (Fred)
* [#128]: Add `skip` setting to override recompilation change detection
* [#139]: Improvements to pom generation (Indrajit)
* [#140], [#145]: Add standard manifest attributes to binary and source jars (Indrajit)
* Allow sources used for `doc` generation to be different from sources for `compile`
* [#156]: Made `package` an alias for `package-bin`
* [#162]: Improve handling of optional dependencies in pom generation
### 0.10.0 to 0.10.1
Some of the more visible changes:
* Support "provided" as a valid configuration for inter-project dependencies [#53](https://github.com/harrah/xsbt/issues/53)
* Try out some better error messages for build.sbt in a few common situations [#58](https://github.com/harrah/xsbt/issues/58)
* Drop "Incomplete tasks ..." line from error messages. [#32](https://github.com/harrah/xsbt/issues/32)
* Better handling of javac logging. [#74](https://github.com/harrah/xsbt/pull/74)
* Warn when reload discards session settings
* Cache failing classifiers, making `update-classifiers` a practical replacement for `withSources()`
* Global settings may be provided in `~/.sbt/build.sbt` [#52](https://github.com/harrah/xsbt/issues/52)
* No need to define `sbtPlugin := true` in `project/plugins/` or `~/.sbt/plugins/`
* Provide statistics and list of evicted modules in `UpdateReport`
* Scope use of `transitive-classifiers` by `update-sbt-classifiers` and `update-classifiers` for separate configuration.
* Default project ID includes a hash of the base directory to avoid collisions in simple cases.
* `extra-loggers` setting to make it easier to add loggers
* Associate `ModuleID`, `Artifact`, and `Configuration` with a classpath entry (`moduleID`, `artifact`, and `configuration` keys). [#41](https://github.com/harrah/xsbt/issues/41)
* Put `httpclient` on Ivy's classpath, which seems to speed up `update`.
### 0.7.7 to 0.10.0
**Major redesign, only prominent changes listed.**
* Project definitions in Scala 2.8.1
* New configuration system: [[Quick Configuration Examples]], [[Full Configuration]], and [[Basic Configuration]]
* New task engine: [[Tasks]]
* New multiple project support: [[Full Configuration]]
* More aggressive incremental recompilation for both Java and Scala sources
* Merged plugins and processors into improved plugins system: [[Plugins]]
* [[Web application|https://github.com/siasia/xsbt-web-plugin]] and webstart support moved to plugins instead of core features
* Fixed all of the issues in Google Code issue #44
* Managed dependencies automatically updated when configuration changes
* `update-sbt-classifiers` and `update-classifiers` tasks for retrieving sources and/or javadocs for dependencies, transitively
* Improved artifact handling and configuration [[Artifacts]]
* Tab completion parser combinators for commands and input tasks: [[Commands]]
* No project creation prompts anymore
* Moved to GitHub: <http://github.com/harrah/xsbt>
### 0.7.5 to 0.7.7
* Workaround for Scala issue [[#4426|http://lampsvn.epfl.ch/trac/scala/ticket/4426]]
* Fixed issue 156
### 0.7.4 to 0.7.5
* Joonas's update to work with Jetty 7.1 logging API changes.
* Updated to work with Jetty 7.2 WebAppClassLoader binary incompatibility (issue 129).
* Provide application and boot classpaths to tests and to code run via the `run` action, according to <http://gist.github.com/404272>
* Fix `provided` configuration. It is no longer included on the classpath of dependent projects.
* Scala 2.8.1 is the default version used when starting a new project.
* Updated to [[Ivy 2.2.0|http://ant.apache.org/ivy/history/2.2.0/release-notes.html]].
* Trond's patches that allow configuring [[jetty-env.xml|http://github.com/harrah/xsbt/commit/5e41a47f50e6]] and [[webdefault.xml|http://github.com/harrah/xsbt/commit/030e2ee91bac0]]
* Doug's [[patch|http://github.com/harrah/xsbt/commit/aa75ecf7055db]] to make 'projects' command show an asterisk next to current project
* Fixed issue 122
* Implemented issue 118
* Patch from Viktor and Ross for issue 123
* (RC1) Patch from Jorge for issue 100
* (RC1) Fix `<packaging>` type
### 0.7.3 to 0.7.4
* Prefix continuous compilation with the run number for better feedback when the logging level is 'warn'
* Added `pomIncludeRepository(repo: MavenRepository): Boolean` that can be overridden to exclude local repositories by default
* Added `pomPostProcess(pom: Node): Node` to make advanced manipulation of the default pom easier (`pomExtra` already covers basic cases)
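A sketch of how these two hooks might be overridden in a project definition (the class name and the repository filter are illustrative, not from the release notes):

```scala
import scala.xml.Node

class MyProject(info: ProjectInfo) extends DefaultProject(info) {
  // exclude local file-based repositories from the generated pom
  override def pomIncludeRepository(repo: MavenRepository) =
    !repo.root.startsWith("file:")

  // identity transform here; rewrite or filter nodes as needed
  override def pomPostProcess(pom: Node): Node = pom
}
```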
* Added `reset` command to reset JLine terminal. This needs to be run after suspending and then resuming sbt.
* Installer plugin is now a proper subproject of sbt.
* Plugins can now only be Scala sources. BND should be usable in a plugin now.
* More accurate detection of invalid test names. Invalid test names now generate an error and prevent the test action from running instead of just logging a warning.
* Fix issue with using 2.8.0.RC1 compiler in tests.
* Precompile compiler interface against 2.8.0.RC2
* Add `consoleOptions` for specifying options to the console. It defaults to `compileOptions`.
* Properly support sftp/ssh repositories using key-based authentication. See the updated section of the [[Resolvers]] page.
* `def ivyUpdateLogging = UpdateLogging.DownloadOnly | Full | Quiet`. Default is `DownloadOnly`. `Full` will log metadata resolution and provide a final summary.
* `offline` property for disabling checking for newer dynamic revisions (like `-SNAPSHOT`). This allows working offline with remote snapshots. Not honored for plugins yet.
* History commands: `!!`, `!?string`, `!-n`, `!n`, `!string`, `!:n`, `!:`. Run `!` to see help.
* New section in launcher configuration `[ivy]` with a single label `cache-directory`. Specify this to change the cache location used by the launcher.
* New label `classifiers` under `[app]` to specify classifiers of additional artifacts to retrieve for the application.
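Together, the two new launcher configuration entries above might look like this (the cache path and classifier values are illustrative):

```text
[ivy]
  cache-directory: /home/user/.sbt-launcher-cache

[app]
  classifiers: sources, javadoc
```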
* Honor `-Xfatal-warnings` option added to compiler in 2.8.0.RC2.
* Make `scaladocTask` a `fileTask` so that it runs only when `index.html` is older than some input source.
* Made it easier to create default `test-*` tasks with different options
* Sort input source files for consistency, addressing scalac's issues with source file ordering.
* Derive Java source file from name of class file when no `SourceFile` attribute is present in the class file. Improves tracking when `-g:none` option is used.
* Fix `FileUtilities.unzip` to be tail-recursive again.
### 0.7.2 to 0.7.3
* Fixed issue with scala.library.jar not being on javac's classpath
* Fixed buffered logging for parallel execution
* Fixed `test-*` tab completion being permanently set on first completion
* Works with Scala 2.8 trunk again.
* Launcher: Maven local repository excluded when the Scala version is a snapshot. This should fix issues with out of date Scala snapshots.
* The compiler interface is precompiled against common Scala versions (for this release, 2.7.7 and 2.8.0.Beta1).
* Added `PathFinder.distinct`
* Running multiple commands at once at the interactive prompt is now supported. Prefix each command with ';'.
* Run and return the output of a process as a String with `!!` or as a (blocking) `Stream[String]` with `lines`.
* Java tests + Annotation detection
* Test frameworks can now specify annotation fingerprints. Specify the names of annotations and sbt discovers classes with the annotations on it or one of its methods. Use version 0.5 of the test-interface.
* Detect subclasses and annotations in Java sources (really, their class files)
* `Discovered` is the new root of the hierarchy representing discovered subclasses and annotations. `TestDefinition` no longer fulfills this role.
* `TestDefinition` is modified to be a name plus a `Fingerprint` and represents a runnable test. It need not be `Discovered`, but could be file-based in the future, for example.
* Replaced the `testDefinitionClassNames` method with `fingerprints` in `CompileConfiguration`.
* Added `foundAnnotation` to `AnalysisCallback`
* Added `Runner2`, `Fingerprint`, `AnnotationFingerprint`, and `SubclassFingerprint` to the test-interface. Existing test frameworks should still work. Implement `Runner2` to use fingerprints other than `SubclassFingerprint`.
### 0.7.1 to 0.7.2
* `Process.apply` no longer uses `CommandParser`. This should fix issues with the android-plugin.
* Added `sbt.impl.Arguments` for parsing a command like a normal action (for `Processor`s)
* Arguments are passed to `javac` using an argument file (`@`)
* Added `webappUnmanaged: PathFinder` method to `DefaultWebProject`. Paths selected by this `PathFinder` will not be pruned by `prepare-webapp` and will not be packaged by `package`. For example, to exclude the GAE datastore directory:
```scala
override def webappUnmanaged =
(temporaryWarPath / "WEB-INF" / "appengine-generated" ***)
```
* Added some String generation methods to `PathFinder`: `toString` for debugging and `absString` and `relativeString` for joining the absolute (relative) paths by the platform separator.
* Made tab completors lazier to reduce startup time.
* Fixed `console-project` for custom subprojects
* `Processor` split into `Processor`/`BasicProcessor`. `Processor` provides high level of integration with command processing. `BasicProcessor` operates on a `Project` but does not affect command processing.
* Can now use `Launcher` externally, including launching `sbt` outside of the official jar. This means a `Project` can now be created from tests.
* Works with Scala 2.8 trunk
* Fixed logging level behavior on subprojects.
* All sbt code is now at <http://github.com/harrah/xsbt> in one project.
### 0.7.0 to 0.7.1
* Fixed Jetty 7 support to work with JRebel
* Fixed make-pom to generate valid dependencies section
### 0.5.6 to 0.7.0
* Unified batch and interactive commands. All commands that can be executed at the interactive prompt can be run from the command line. To run commands and then enter the interactive prompt, make the last command `shell`.
* Properly track certain types of synthetic classes, such as those generated for a `for` comprehension with more than 30 clauses, during compilation.
* Jetty 7 support
* Allow the launcher in the project root directory or the `lib` directory. The jar name must have the form `*sbt-launch*.jar` in order to be excluded from the classpath.
* Stack trace detail can be controlled with `on`, `off`, `nosbt`, or an integer level. `nosbt` means to show stack frames up to the first `sbt` method. An integer level denotes the number of frames to show for each cause. This feature is courtesy of Tony Sloane.
* New `test-run` method task, analogous to `run`, but for test classes.
* New `clean-plugins` task that clears built plugins (useful for plugin development).
* Can provide commands from a file with new command: `<filename`
* Can provide commands over loopback interface with new command: `<port`
* Scala version handling has been completely redone.
* The version of Scala used to run sbt (currently 2.7.7) is decoupled from the version used to build the project.
* Changing between Scala versions on the fly is done with the command: `++<version>`
* Cross-building is quicker. The project definition does not need to be recompiled against each version in the cross-build anymore.
* Scala versions are specified in a space-delimited list in the `build.scala.versions` property.
* Dependency management:
* `make-pom` task now uses custom pom generation code instead of Ivy's pom writer.
* Basic support for writing out Maven-style repositories to the pom
* Override the 'pomExtra' method to provide XML (`scala.xml.NodeSeq`) to insert directly into the generated pom.
* Complete control over repositories is now possible by overriding `ivyRepositories`.
* The [[interface to Ivy|Ivy-Interface]] can be used directly.
* Test framework support is now done through a uniform test interface. Implications:
* New versions of specs, ScalaCheck, and ScalaTest are supported as soon as they are released.
* Support is better, since the test framework authors provide the implementation.
* Arguments can be passed to the test framework. For example: `> test-only your.test -- -a -b -c`
* Can provide custom task start and end delimiters by defining the system properties `sbt.start.delimiter` and `sbt.end.delimiter`.
* Revamped launcher that can launch Scala applications, not just `sbt`
* Provide a configuration file to the launcher and it can download the application and its dependencies from a repository and run it.
* sbt's configuration can be customized. For example,
* The `sbt` version to use in projects can be fixed, instead of read from `project/build.properties`.
* The default values used to create a new project can be changed.
* The repositories used to fetch `sbt` and its dependencies, including Scala, can be configured.
* The location `sbt` is retrieved to is configurable. For example, `/home/user/.ivy2/sbt/` could be used instead of `project/boot/`.
### 0.5.5 to 0.5.6
* Support specs specifications defined as classes
* Fix specs support for 1.6
* Support ScalaTest 1.0
* Support ScalaCheck 1.6
* Remove remaining uses of structural types
### 0.5.4 to 0.5.5
* Fixed problem with classifier support and the corresponding test
* No longer need `"->default"` in configurations (automatically mapped).
* Can specify a specific nightly of Scala 2.8 to use (for example: `2.8.0-20090910.003346-+`)
* Experimental support for searching for project (`-Dsbt.boot.search=none|only|root-first|nearest`)
* Fix issue where last path component of local repository was dropped if it did not exist.
* Added support for configuring repositories on a per-module basis.
* Unified batch-style and interactive-style commands. All commands that were previously interactive-only should be available batch-style. `reboot` does not pick up changes to `scala.version` properly, however.
### 0.5.2 to 0.5.4
* Many logging related changes and fixes. Added `FilterLogger` and cleaned up interaction between `Logger`, scripted testing, and the builder projects. This included removing the `recordingDepth` hack from Logger. Logger buffering is now enabled/disabled per thread.
* Fix `compileOptions` being fixed after the first compile
* Minor fixes to output directory checking
* Added `defaultLoggingLevel` method for setting the initial level of a project's `Logger`
* Cleaned up internal approach to adding extra default configurations like `plugin`
* Added `syncPathsTask` for synchronizing paths to a target directory
* Allow multiple instances of Jetty (new `jettyRunTasks` can be defined with different ports)
* `jettyRunTask` accepts configuration in a single configuration wrapper object instead of many parameters
* Fix web application class loading (issue #35) by using `jettyClasspath=testClasspath---jettyRunClasspath` for loading Jetty. A better way would be to have a `jetty` configuration and have `jettyClasspath=managedClasspath('jetty')`, but this maintains compatibility.
* Copy resources to `target/resources` and `target/test-resources` using `copyResources` and `copyTestResources` tasks. Properly include all resources in web applications and classpaths (issue #36). `mainResources` and `testResources` are now the definitive methods for getting resources.
* Updated for 2.8 (`sbt` now compiles against September 11, 2009 nightly build of Scala)
* Fixed issue with position of `^` in compile errors
* Changed order of repositories (local, shared, Maven Central, user, Scala Tools)
* Added Maven Central to resolvers used to find Scala library/compiler in launcher
* Fixed problem that prevented detecting user-specified subclasses
* Fixed exit code returned when exception thrown in main thread for `TrapExit`
* Added `javap` task to `DefaultProject`. It has tab completion on compiled project classes and the run classpath is passed to `javap` so that library classes are available. Examples:
```text
> javap your.Clazz
> javap -c scala.List
```
* Added `exec` task. Mixin `Exec` to project definition to use. This forks the command following `exec`. Examples:
```text
> exec echo Hi
> exec find src/main/scala -iname *.scala -exec wc -l {} ;
```
* Added `sh` task for users with a unix-style shell available (runs `/bin/sh -c <arguments>`). Mixin `Exec` to project definition to use. Example:
```text
> sh find src/main/scala -iname *.scala | xargs cat | wc -l
```
* Proper dependency graph actions (previously was an unsupported prototype): `graph-src` and `graph-pkg` for source dependency graph and quasi-package dependency graph (based on source directories and source dependencies)
* Improved Ivy-related code to not load unnecessary default settings
* Fixed issue #39 (sources were not relative in src package)
* Implemented issue #38 (`InstallProject` with 'install' task)
* Vesa's patch for configuring the output of forked Scala/Java and processes
* Don't buffer logging of forked `run` by default
* Check `Project.terminateWatch` to determine if triggered execution should stop for a given keypress.
* Terminate triggered execution only on 'enter' by default (previously, any keypress stopped it)
* Fixed issue #41 (parent project should not declare jar artifact)
* Fixed issue #42 (search parent directories for `ivysettings.xml`)
* Added support for extra attributes with Ivy. Use `extra(key -> value)` on `ModuleIDs` and `Artifacts`. To define for a project's ID:
```scala
override def projectID = super.projectID extra(key -> value)
```
To specify in a dependency:
```scala
val dep = normalID extra(key -> value)
```
### 0.5.1 to 0.5.2
* Fixed problem where dependencies of `sbt` plugins were not on the compile classpath
* Added `execTask` that runs an `sbt.ProcessBuilder` when invoked
* Added implicit conversion from `scala.xml.Elem` to `sbt.ProcessBuilder` that takes the element's text content, trims it, and splits it around whitespace to obtain the command.
* Processes can now redirect standard input (see `run` with a Boolean argument or the `!<` operator on `ProcessBuilder`); off by default
* Made scripted framework a plugin and scripted tests now go in `src/sbt-test` by default
* Can define and use an sbt test framework extension in a project
* Fixed `run` action swallowing exceptions
* Fixed tab completion for method tasks for multi-project builds
* Check that tasks in `compoundTask` do not reference static tasks
* Make `toString` of `Path`s in subprojects relative to root project directory
* `crossScalaVersions` is now inherited from parent if not specified
* Added `scala-library.jar` to the `javac` classpath
* Project dependencies are added to published `ivy.xml`
* Added dependency tracking for Java sources using classfile parsing (with the usual limitations)
* Added `Process.cat` that will send contents of `URL`s and `File`s to standard output. Alternatively, `cat` can be used on a single `URL` or `File`. Example:
```scala
import java.net.URL
import java.io.File
val spde = new URL("http://technically.us/spde/About")
val dispatch = new URL("http://databinder.net/dispatch/About")
val build = new File("project/build.properties")
cat(spde, dispatch, build) #| "grep -i scala" !
```
### 0.4.6 to 0.5/0.5.1
* Fixed `ScalaTest` framework dropping stack traces
* Publish only public configurations by default
* Loader now adds `.m2/repository` for downloading Scala jars
* Can now fork the compiler and runner and the runner can use a different working directory.
* Maximum compiler errors shown is now configurable
* Fixed rebuilding and republishing released versions of `sbt` against new Scala versions (attempt #2)
* Fixed snapshot revision handling (Ivy needs a changing pattern set on the cache, apparently)
* Fixed handling of default configuration when `useMavenConfiguration` is `true`
* Cleanup on Environment, Analysis, Conditional, `MapUtilities`, and more...
* Tests for Environment, source dependencies, library dependency management, and more...
* Dependency management and multiple Scala versions
* Experimental plugin for producing project bootstrapper in a self-extracting jar
* Added ability to directly specify `URL` to use for dependency with the `from(url: URL)` method defined on `ModuleID`
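A sketch of the `from(url: URL)` method on a dependency (the coordinates and URL are illustrative):

```scala
import java.net.URL

val dep = "org.example" % "lib" % "1.0" from new URL("http://example.org/jars/lib-1.0.jar")
```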
* Fixed issue #30
* Support cross-building with `+` when running batch actions
* Additional flattening for project definitions: sources can go either in `project/build/src` (recursively) or `project/build` (flat)
* Fixed manual `reboot` not changing the version of Scala when it is manually `set`
* Fixed tab completion for cross-building
* Fixed a class loading issue with web applications
### 0.4.5 to 0.4.6
* Publishing to ssh/sftp/filesystem repository supported
* Exception traces are printed by default
* Fixed warning message about no `Class-Path` attribute from showing up for `run`
* Fixed `package-project` operation
* Fixed `Path.fromFile`
* Fixed issue with external process output being lost when sent to a `BufferedLogger` with `parallelExecution` enabled.
* Preserve history across `clean`
* Fixed issue with making relative path in jar with wrong separator
* Added cross-build functionality (prefix action with `+`).
* Added methods `scalaLibraryJar` and `scalaCompilerJar` to `FileUtilities`
* Include project dependencies for `deliver`/`publish`
* Add Scala dependencies for `make-pom`/`deliver`/`publish`, which requires these to depend on `package`
* Properly add compiler jar to run/test classpaths when main sources depend on it
* `TestFramework` root `ClassLoader` filters compiler classes used by `sbt`, which is required for projects using the compiler.
* Better access to dependencies:
* `mainDependencies` and `testDependencies` provide an analysis of the dependencies of your code as determined during compilation
* `scalaJars` is deprecated, use `mainDependencies.scalaJars` instead (provides a `PathFinder`, which is generally more useful)
* Added `jettyPort` method to `DefaultWebProject`.
* Fixed `package-project` to exclude `project/boot` and `project/build/target`
* Support specs 1.5.0 for the Scala 2.7.4 version.
* Parallelization at the subtask level
* Parallel test execution at the suite/specification level.
### 0.4.3 to 0.4.5
* Sorted out repository situation in loader
* Added support for `http_proxy` environment variable
* Added `download` method from Nathan to `FileUtilities` to retrieve the contents of a URL.
* Added special support for compiler plugins; see the CompilerPlugins page.
* `reload` command in scripted tests will now properly handle success/failure
* Very basic support for Java sources: Java sources under `src/main/java` and `src/test/java` will be compiled.
* `parallelExecution` defaults to value in parent project if there is one.
* Added `console-project` that enters the Scala interpreter with the current `Project` bound to the variable `project`.
* The default Ivy cache manager is now configured with `useOrigin=true` so that it doesn't cache artifacts from the local filesystem.
* For users building from trunk, if a project specifies a version of `sbt` that ends in `-SNAPSHOT`, the loader will update `sbt` every time it starts up. The trunk version of `sbt` will always end in `-SNAPSHOT` now.
* Added automatic detection of classes with main methods for use when `mainClass` is not explicitly specified in the project definition. If exactly one main class is detected, it is used for `run` and `package`. If multiple main classes are detected, the user is prompted for which one to use for `run`. For `package`, no `Main-Class` attribute is automatically added and a warning is printed.
* Updated build to cross-compile against Scala 2.7.4.
* Fixed `proguard` task in `sbt`'s project definition
* Added `manifestClassPath` method that accepts the value for the `Class-Path` attribute
* Added `PackageOption` called `ManifestAttributes` that accepts `(java.util.jar.Attributes.Name, String)` or `(String, String)` pairs and adds them to the main manifest attributes
* Fixed some situations where characters would not be echoed at prompts other than main prompt.
* Fixed issue #20 (use `http_proxy` environment variable)
* Implemented issue #21 (native process wrapper)
* Fixed issue #22 (rebuilding and republishing released versions of `sbt` against new Scala versions, specifically Scala 2.7.4)
* Implemented issue #23 (inherit inline repositories declared in parent project)
### 0.4 to 0.4.3
* Direct dependencies on Scala libraries are checked for version equality with `scala.version`
* Transitive dependencies on `scala-library` and `scala-compiler` are filtered
* They are fixed by `scala.version` and provided on the classpath by `sbt`
* To access them, use the `scalaJars` method, `classOf[ScalaObject].getProtectionDomain.getCodeSource`, or `mainCompileConditional.analysis.allExternals`
* The configurations checked/filtered as described above are configurable. Nonstandard configurations are not checked by default.
* Version of `sbt` and Scala printed on startup
* Launcher asks if you want to try a different version if `sbt` or Scala could not be retrieved.
* After changing `scala.version` or `sbt.version` with `set`, note is printed that `reboot` is required.
* Moved managed dependency actions to `BasicManagedProject` (`update` is now available on `ParentProject`)
* Cleaned up `sbt`'s build so that you just need to do `update` and `full-build` to build from source. The trunk version of `sbt` will be available for use from the loader.
* The loader is now a subproject.
* For development, you'll still want the usual actions (such as `package`) for the main builder and `proguard` to build the loader.
* Fixed analysis plugin improperly including traits/abstract classes in subclass search
* `ScalaProject`s already had everything required to be parent projects: flipped the switch to enable it
* Proper method task support in scripted tests (`package` group tests rightly pass again)
* Improved tests in loader that check that all necessary libraries were downloaded properly
### 0.3.7 to 0.4
* Fixed issue with `build.properties` being unnecessarily updated in sub-projects when loading.
* Added method to compute the SHA-1 hash of a `String`
* Added pack200 methods
* Added initial process interface
* Added initial webstart support
* Added gzip methods
* Added `sleep` and `newer` commands to scripted testing.
* Scripted tests now test the version of `sbt` being built instead of the version doing the building.
* `testResources` is put on the test classpath instead of `testResourcesPath`
* Added `jetty-restart`, which does `jetty-stop` and then `jetty-run`
* Added automatic reloading of default web application
* Changed packaging behaviors (still likely to change)
* Inline configurations now allowed (can be used with configurations in inline XML)
* Split out some code related to managed dependencies from `BasicScalaProject` to new class `BasicManagedProject`
* Can specify that maven-like configurations should be automatically declared
* Fixed problem with nested modules being detected as tests
* `testResources`, `integrationTestResources`, and `mainResources` should now be added to appropriate classpaths
* Added project organization as a property that defaults to inheriting from the parent project.
* Project creation now prompts for the organization.
* Added method tasks, which are top-level actions with parameters.
* Made `help`, `actions`, and `methods` commands available to batch-style invocation.
* Applied Mikko's two fixes for webstart and fixed problem with pack200+sign. Also, fixed nonstandard behavior when gzip enabled.
* Added `control` method to `Logger` for action lifecycle logging
* Made standard logging level convenience methods final
* Made `BufferedLogger` have a per-actor buffer instead of a global buffer
* Added a `SynchronizedLogger` and a `MultiLogger` (intended to be used with the yet unwritten `FileLogger`)
* Changed method of atomic logging to be a method `logAll` accepting `List[LogEvent]` instead of `doSynchronized`
* Improved action lifecycle logging
* Parallel logging now provides immediate feedback about starting an action
* General cleanup, including removing unused classes and methods and reducing dependencies between classes
* `run` is now a method task that accepts options to pass to the `main` method (`runOptions` has been removed, `runTask` is no longer interactive, and `run` no longer starts a console if `mainClass` is undefined)
* Major task execution changes:
* Tasks automatically have implicit dependencies on tasks with the same name in dependent projects
* Implicit dependencies on interactive tasks are ignored, explicit dependencies produce an error
* Interactive tasks must be executed directly on the project on which they are defined
* Method tasks accept input arguments (`Array[String]`) and dynamically create the task to run
* Tasks can depend on tasks in other projects
* Tasks are run in parallel breadth-first style
* Added `test-only` method task, which restricts the tests to run to only those passed as arguments.
* Added `test-failed` method task, which restricts the tests to run. First, only tests passed as arguments are run. If no tests are passed, no filtering is done. Then, only tests that failed the previous run are run.
* Added `test-quick` method task, which restricts the tests to run. First, only tests passed as arguments are run. If no tests are passed, no filtering is done. Then, only tests that failed the previous run or had a dependency change are run.
* Added launcher that allows declaring the version of sbt/Scala to build a project with.
* Added tab completion with ~
* Added basic tab completion for method tasks, including `test-*`
* Changed default pack options to be the default options of Pack200.Packer
* Fixed ~ behavior when action doesn't exist
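As a sketch of how the `test-*` method tasks above are invoked from the interactive prompt (the test class names are hypothetical):

```text
> test-only org.example.ServerSpec org.example.ClientSpec
> test-failed
> test-quick
```

With no arguments, `test-failed` reruns only the tests that failed in the previous run, and `test-quick` also includes tests affected by a dependency change.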
### 0.3.6 to 0.3.7
* Improved classpath methods
* Refactored various features into separate project traits
* `ParentProject` can now specify dependencies
* Support for `optional` scope
* More API documentation
* Test resource paths provided on classpath for testing
* Added some missing read methods in `FileUtilities`
* Added scripted test framework
* Change detection using hashes of files
* Fixed problem with manifests not being generated (bug #14)
* Fixed issue with scala-tools repository not being included by default (again)
* Added option to set ivy cache location (mainly for testing)
* trace is no longer a logging level but a flag enabling/disabling stack traces
* Project.loadProject and related methods now accept a Logger to use
* Made hidden files and files that start with `'.'` excluded by default (`'.*'` is required because subversion seems to not mark `.svn` directories hidden on Windows)
* Implemented exit codes
* Added continuous compilation command `cc`
### 0.3.5 to 0.3.6
* Fixed bug #12.
* Compiled with 2.7.2.
### 0.3.2 to 0.3.5
* Fixed bug #11.
* Fixed problem with dependencies where source jars would be used instead of binary jars.
* Fixed scala-tools not being used by default for inline configurations.
* Small dependency management error message correction
* Slight refactoring for specifying whether scala-tools releases get added to the configured resolvers
* Separated repository/dependency overriding so that repositories can be specified inline for use with `ivy.xml` or `pom.xml` files
* Added ability to specify Ivy XML configuration in Scala.
* Added `clean-cache` action for deleting Ivy's cache
* Some initial work towards accessing a resource directory from tests
* Initial tests for `Path`
* Some additional `FileUtilities` methods, some `FileUtilities` method adjustments and some initial tests for `FileUtilities`
* A basic framework for testing `ReflectUtilities`, not run by default because of run time
* Minor cleanup to `Path` and added non-empty check to path components
* Catch additional exceptions in `TestFramework`
* Added `copyTask` task creation method.
* Added `jetty-run` action and added ability to package war files.
* Added `jetty-stop` action.
* Added `console-quick` action that is the same as `console` but doesn't compile sources first.
* Moved some custom `ClassLoader`s to `ClasspathUtilities` and improved a check.
* Added ability to specify hooks to call before `sbt` shuts down.
* Added `zip`, `unzip` methods to `FileUtilities`
* Added `append` equivalents to `write*` methods in `FileUtilities`
* Added first draft of integration testing
* Added batch command `compile-stats`
* Added methods to create tasks that have basic conditional execution based on declared sources/products of the task
* Added `newerThan` and `olderThan` methods to `Path`
* Added `reload` action to reread the project definition without losing the performance benefits of an already running jvm
* Added `help` action to tab completion
* Added handling of (effectively empty) scala source files that create no class files: they are always interpreted as modified.
* Added prompt to retry project loading if compilation fails
* `package` action now uses `fileTask` so that it only executes if files are out of date
* Fixed `ScalaTest` framework wrapper so that it fails the `test` action if tests fail
* Inline dependencies can now specify configurations
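As an illustration of the last point, an inline dependency limited to the test configuration could be declared in a project definition roughly like this (the coordinates are hypothetical):

```scala
val scalatest = "org.scalatest" % "scalatest" % "1.0" % "test"
```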
### 0.3.1 to 0.3.2
* Compiled jar with Java 1.5.
### 0.3 to 0.3.1
* Fixed bugs #8, #9, and #10.
### 0.2.3 to 0.3
* Version change only for first release.
### 0.2.2 to 0.2.3
* Added tests for `Dag`, `NameFilter`, `Version`
* Fixed handling of trailing `*`s in `GlobFilter` and added some error-checking for control characters, which `Pattern` doesn't seem to like
* Fixed `Analysis.allProducts` implementation
* It previously returned the sources instead of the generated classes
* Will only affect the count of classes (it should be correct now) and the debugging of missed classes (erroneously listed classes as missed)
* Made some implied preconditions on `BasicVersion` and `OpaqueVersion` explicit
* Made increment version behavior in `ScalaProject` easier to overload
* Added `Seq[..Option]` alternative to `...Option*` for tasks
* Documentation generation fixed to use latest value of version
* Fixed `BasicVersion.incrementMicro`
* Fixed test class loading so that `sbt` can test the version of `sbt` being developed (previously, the classes from the executing version of `sbt` were tested)
### 0.2.1 to 0.2.2
* Package name is now a call-by-name parameter for the package action
* Fixed release action calling compile multiple times
### 0.2.0 to 0.2.1
* Added some action descriptions
* jar name now comes from normalized name (lowercased and spaces to dashes)
* Some cleanups related to creating filters
* Path should only 'get' itself if the underlying file exists to be consistent with other `PathFinders`
* Added `---` operator for `PathFinder` that excludes paths from the `PathFinder` argument
* Removed `***` operator on `PathFinder`
* `**` operator on `PathFinder` matches all descendents or self that match the `NameFilter` argument
* The above should fix bug `#6`
* Added version increment and release actions.
* Can now build sbt with sbt. Build scripts `build` and `clean` will still exist.
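A sketch of how the `PathFinder` operators above combine, assuming this era's `path` and `/` helpers inside a project definition (the directory and filter names are hypothetical):

```scala
// all Scala sources under src/, excluding anything under src/test/
val testSources = path("src") / "test" ** "*.scala"
val mainSources = (path("src") ** "*.scala") --- testSources
```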
### 0.1.9 to 0.2.0
* Implemented typed properties and access to system properties
* Renamed `metadata` directory to `project`
* Information previously in `info` file now obtained by properties:
* `info.name --> name`
* `info.currentVersion --> version`
* Concrete `Project` subclasses should have a constructor that accepts a single argument of type `ProjectInfo` (argument `dependencies: Iterable[Project]` has been merged into `ProjectInfo`)
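The constructor requirement above means a minimal project definition looks roughly like this (the class name is hypothetical):

```scala
import sbt._

class MyProject(info: ProjectInfo) extends DefaultProject(info)
```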
### 0.1.8 to 0.1.9
* Better default implementation of `allSources`.
* Generate warning if two jars on classpath have the same name.
* Upgraded to specs 1.4.0
* Upgraded to `ScalaCheck` 1.5
* Changed some update options to be final vals instead of objects.
* Added some more API documentation.
* Removed release action.
* Split compilation into separate main and test compilations.
* A failure in a `ScalaTest` run now fails the test action.
* Implemented reporters for `compile/scaladoc`, `ScalaTest`, `ScalaCheck`, and `specs` that delegate to the appropriate `sbt.Logger`.
### 0.1.7 to 0.1.8
* Improved configuring of tests to exclude.
* Simplified version handling.
* Task `&&` operator properly handles dependencies of tasks it combines.
* Changed method of inline library dependency declarations to be simpler.
* Better handling of errors in parallel execution.
### 0.1.6 to 0.1.7
* Added graph action to generate dot files (for Graphviz) from dependency information (work in progress).
* Options are now passed to tasks as varargs.
* Redesigned `Path` properly, including `PathFinder` returning a `Set[Path]` now instead of `Iterable[Path]`.
* Moved paths out of `ScalaProject` and into `BasicProjectPaths` to keep path definitions separate from task definitions.
* Added initial support for managing third-party libraries through the `update` task, which must be explicitly called (it is not a dependency of compile or any other task). This is experimental, undocumented, and known to be incomplete.
* Parallel execution implementation at the project level, disabled by default. To enable, add:
```scala
override def parallelExecution = true
```
to your project definition. In order for logging to make sense, all project logging is buffered until the project is finished executing. Still to be done is some sort of notification of project execution (which ones are currently executing, how many remain).
* `run` and `console` are now specified as "interactive" actions, which means they are only executed on the project in which they are defined when called directly, and not on all dependencies. Their dependencies are still run on dependent projects.
* Generalized conditional tasks a bit. Of note is that analysis is no longer required to be in metadata/analysis, but is now in target/analysis by default.
* Message now displayed when project definition is recompiled on startup
* Project no longer inherits from Logger, but now has a log member.
* Dependencies passed to `project` are checked for null (may help with errors related to initialization/circular dependencies)
* Task dependencies are checked for null
* Projects in a multi-project configuration are checked to ensure that output paths are different (check can be disabled)
* Made `update` task globally synchronized because Ivy is not thread-safe.
* Generalized test framework, directly invoking frameworks now (used reflection before).
* Moved license files to licenses/
* Added support for `specs` and some support for `ScalaTest` (the test action doesn't fail if `ScalaTest` tests fail).
* Added `specs`, `ScalaCheck`, `ScalaTest` jars to lib/
* These are now required for compilation, but are optional at runtime.
* Added the appropriate licenses and notices.
* Options for `update` action are now taken from updateOptions member.
* Fixed `SbtManager` inline dependency manager to work properly.
* Improved Ivy configuration handling (not compiled with test dependencies yet though).
* Added case class implementation of `SbtManager` called `SimpleManager`.
* Project definitions not specifying dependencies can now use just a single argument constructor.
### 0.1.5 to 0.1.6
* `run` and `console` handle `System.exit` and multiple threads in user code under certain circumstances (see RunningProjectCode).
### 0.1.4 to 0.1.5
* Generalized interface with plugin (see `AnalysisCallback`)
* Split out task implementations and paths from `Project` to `ScalaProject`
* Subproject support (changed required project constructor signature: see `sbt/DefaultProject.scala`)
* Can specify dependencies between projects
* Execute tasks across multiple projects
* Classpath of all dependencies included when compiling
* Proper inter-project source dependency handling
* Can change to a project in an interactive session to work only on that project (and its dependencies)
* External dependency handling
* Tracks non-source dependencies (compiled classes and jars)
* Requires each class to be provided by exactly one classpath element (This means you cannot have two versions of the same class on the classpath, e.g. from two versions of a library)
* Changes in a project propagate the right source recompilations in dependent projects
* Consequences:
* Recompilation when changing java/scala version
* Recompilation when upgrading libraries (again, as indicated in the second point, situations where you have library-1.0.jar and library-2.0.jar on the classpath at the same time are not handled predictably. Replacing library-1.0.jar with library-2.0.jar should work as expected.)
* Changing sbt version will recompile project definitions
### 0.1.3 to 0.1.4
* Autodetection of Project definitions.
* Simple tab completion/history in an interactive session with JLine
* Added descriptions for most actions
### 0.1.2 to 0.1.3
* Dependency management between tasks and auto-discovery tasks.
* Should work on Windows.
### 0.1.1 to 0.1.2
* Should compile/build on Java 1.5
* Fixed run action implementation to include scala library on classpath
* Made project configuration easier
### 0.1 to 0.1.1
* Fixed handling of source files without a package
* Added easy project setup

File diff suppressed because it is too large


@@ -1,9 +0,0 @@
# Community
This part of the wiki has project "meta-information" such as where
to find source code and how to contribute. Check out the sidebar
on the right for links.
The mailing list is at
<http://groups.google.com/group/simple-build-tool/topics>. Please
use it for questions and comments!


@@ -0,0 +1,11 @@
=========
Community
=========
This part of the wiki has project "meta-information" such as where to
find source code and how to contribute. Check out the sidebar on the
right for links.
The mailing list is at
http://groups.google.com/group/simple-build-tool/topics. Please use it
for questions and comments!


@@ -1,35 +0,0 @@
# Credits
The following people have contributed ideas, documentation, or code to sbt:
* Trond Bjerkestrand
* Steven Blundy
* Josh Cough
* Nolan Darilek
* Fred Dubois
* Nathan Hamblen
* Mark Harrah
* Joonas Javanainen
* Ismael Juma
* Viktor Klang
* David R. MacIver
* Ross McDonald
* Simon Olofsson
* Artyom Olshevskiy
* Andrew O'Malley
* Jorge Ortiz
* Mikko Peltonen
* Paul Phillips
* Ray Racine
* Indrajit Raychaudhuri
* Stuart Roebuck
* Harshad RJ
* Sanjin Šehić
* Tony Sloane
* Doug Tangren
* Seth Tisue
* Francisco Treacy
* Aaron D. Valade
* Eugene Vigdorchik
* Vesa Vilhonen
* Jason Zaugg


@@ -0,0 +1,39 @@
=======
Credits
=======
The following people have contributed ideas, documentation, or code to
sbt:
- Trond Bjerkestrand
- Steven Blundy
- Josh Cough
- Nolan Darilek
- Fred Dubois
- Nathan Hamblen
- Mark Harrah
- Joonas Javanainen
- Ismael Juma
- Viktor Klang
- David R. MacIver
- Ross McDonald
- Simon Olofsson
- Artyom Olshevskiy
- Andrew O'Malley
- Jorge Ortiz
- Mikko Peltonen
- Paul Phillips
- Ray Racine
- Indrajit Raychaudhuri
- Stuart Roebuck
- Harshad RJ
- Sanjin Šehić
- Tony Sloane
- Doug Tangren
- Seth Tisue
- Francisco Treacy
- Aaron D. Valade
- Eugene Vigdorchik
- Vesa Vilhonen
- Jason Zaugg


@@ -1,15 +0,0 @@
[sbt-launch]: http://repo.typesafe.com/typesafe/ivy-snapshots/org.scala-sbt/sbt-launch/
# Nightly Builds
Nightly builds are currently being published to <http://repo.typesafe.com/typesafe/ivy-snapshots/>.
To use a nightly build, follow the instructions for normal [[Setup|Getting Started Setup]], except:
1. Download the launcher jar from one of the subdirectories of [sbt-launch]. They should be listed in chronological order, so the most recent one will be last.
2. Call your script something like `sbt-nightly` to retain access to a stable `sbt` launcher.
3. The version number is the name of the subdirectory and is of the form `0.13.x-yyyyMMdd-HHmmss`. Use this in a `build.properties` file.
Related to the third point, remember that an `sbt.version` setting in `<build-base>/project/build.properties` determines the version of sbt to use in a project. If it is not present, the default version associated with the launcher is used. This means that you must set `sbt.version=yyyyMMdd-HHmmss` in an existing `<build-base>/project/build.properties`. You can verify the right version of sbt is being used to build a project by running `sbt-version`.
To reduce problems, it is recommended to not use a launcher jar for one nightly version to launch a different nightly version of sbt.


@@ -0,0 +1,27 @@
==============
Nightly Builds
==============
Nightly builds are currently being published to |typesafe-snapshots|_.
To use a nightly build, follow the instructions for normal
:doc:`Setup </Getting-Started/Setup>`, except:
1. Download the launcher jar from one of the subdirectories of |nightly-launcher|.
They should be listed in chronological order, so the most recent one will be last.
2. Call your script something like ``sbt-nightly`` to retain access to a
stable ``sbt`` launcher.
3. The version number is the name of the subdirectory and is of the form
``0.13.x-yyyyMMdd-HHmmss``. Use this in a ``build.properties`` file.
Related to the third point, remember that an ``sbt.version`` setting in
``<build-base>/project/build.properties`` determines the version of sbt
to use in a project. If it is not present, the default version
associated with the launcher is used. This means that you must set
``sbt.version=yyyyMMdd-HHmmss`` in an existing
``<build-base>/project/build.properties``. You can verify the right
version of sbt is being used to build a project by running
``sbt-version``.
To reduce problems, it is recommended to not use a launcher jar for one
nightly version to launch a different nightly version of sbt.


@@ -1,61 +0,0 @@
[API]: https://github.com/harrah/xsbt/tree/0.11/interface
[the email thread]: https://groups.google.com/group/simple-build-tool/browse_thread/thread/7761f8b2ce51f02c/129064ea836c9baf
[advanced test interface and runner]: https://groups.google.com/group/simple-build-tool/browse_thread/thread/f5a5fe06bbf3f006/d771009d407d5765
# Opportunities (Round 2)
Below is a running list of potential areas of contribution. This list may become out of date quickly, so you may want to check on the mailing list if you are interested in a specific topic.
1. There are plenty of possible visualization and analysis opportunities.
* 'compile' produces an Analysis of the source code containing
- Source dependencies
- Inter-project source dependencies
- Binary dependencies (jars + class files)
- data structure representing the [API] of the source code
There is some code already for generating dot files that isn't hooked up, but graphing dependencies and inheritance relationships is a general area of work.
* 'update' produces an [[Update Report]] mapping `Configuration/ModuleID/Artifact` to the retrieved `File`
* Ivy produces more detailed XML reports on dependencies. These come with an XSL stylesheet to view them, but this does not scale to large numbers of dependencies. Working on this is pretty straightforward: the XML files are created in `~/.ivy2` and the `.xsl` and `.css` are there as well, so you don't even need to work with sbt. Other approaches described in [the email thread]
* Tasks are a combination of static and dynamic graphs and it would be useful to view the graph of a run
* Settings are a static graph and there is code to generate the dot files, but isn't hooked up anywhere.
2. There is support for dependencies on external projects, like on GitHub. To be more useful, this should support being able to update the dependencies. It is also easy to extend this to other ways of retrieving projects. Support for svn and hg was a recent contribution, for example.
3. Dependency management is a general area. Working on Apache Ivy itself is another way to help. For example, I'm pretty sure Ivy is fundamentally single threaded. Either a) it's not and you can fix sbt to take advantage of this or b) make Ivy multi-threaded and faster at resolving dependencies.
4. If you like parsers, sbt commands and input tasks are written using custom parser combinators that provide tab completion and error handling. Among other things, the efficiency could be improved.
5. The javap task hasn't been reintegrated
6. Implement enhanced 0.11-style warn/debug/info/error/trace commands. Currently, you set it like any other setting:
```scala
set logLevel := Level.Warn
```
or
```scala
set logLevel in Test := Level.Warn
```
You could make commands that wrap this, like:
```text
warn test:run
```
Also, trace is currently an integer, but should really be an abstract data type.
7. There is more aggressive incremental compilation in sbt 0.12. I expect it to be more difficult to reproduce bugs. It would be helpful to have a mode that generates a diff between successive compilations and records the options passed to scalac. This could be replayed or inspected to try to find the cause.
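One possible shape for the abstract trace data type suggested in item 6, as a hypothetical sketch rather than sbt's actual API:

```scala
// Hypothetical replacement for the integer `trace` setting:
// each case states how much of a stack trace to display.
sealed trait TraceLevel
object TraceLevel {
  case object Suppress extends TraceLevel                 // show no stack traces
  case object Full extends TraceLevel                     // show complete traces
  final case class Limit(frames: Int) extends TraceLevel  // show only the first `frames` frames
}
```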
# Documentation
1. There's a lot to do with this wiki. If you check the wiki out
from git, there's a directory called Dormant with some content
that needs going through.
2. the [[Home]] page mentions external project references (e.g. to a
git repo) but doesn't have anything to link to that explains how
to use those.
3. the [[Configurations]] page is missing a list of the built-in
configurations and the purpose of each.
4. grep the wiki's git checkout for "Wiki Maintenance Note" and
work on some of those
5. API docs are much needed.
6. Find useful answers or types/methods/values in the other docs, and pull references to them up into [[FAQ]] or [[Index]] so people can find the docs. In general the [[FAQ]] should feel a bit more like a bunch of pointers into the regular docs, rather than an alternative to the docs.
7. A lot of the pages could probably have better names, and/or little 2-4 word blurbs to the right of them in the sidebar.


@@ -0,0 +1,101 @@
=============
Opportunities
=============
Below is a running list of potential areas of contribution. This list
may become out of date quickly, so you may want to check on the mailing
list if you are interested in a specific topic.
1. There are plenty of possible visualization and analysis
opportunities.
- 'compile' produces an Analysis of the source code containing
- Source dependencies
- Inter-project source dependencies
- Binary dependencies (jars + class files)
- data structure representing the
`API <https://github.com/harrah/xsbt/tree/0.13/interface>`_ of
the source code There is some code already for generating dot
files that isn't hooked up, but graphing dependencies and
inheritance relationships is a general area of work.
- 'update' produces an :doc:`/Detailed-Topics/Update-Report` mapping
``Configuration/ModuleID/Artifact`` to the retrieved ``File``
- Ivy produces more detailed XML reports on dependencies. These come
with an XSL stylesheet to view them, but this does not scale to
large numbers of dependencies. Working on this is pretty
straightforward: the XML files are created in ``~/.ivy2`` and the
``.xsl`` and ``.css`` are there as well, so you don't even need to
work with sbt. Other approaches described in `the email
thread <https://groups.google.com/group/simple-build-tool/browse_thread/thread/7761f8b2ce51f02c/129064ea836c9baf>`_
- Tasks are a combination of static and dynamic graphs and it would
be useful to view the graph of a run
- Settings are a static graph and there is code to generate the dot
files, but isn't hooked up anywhere.
2. There is support for dependencies on external projects, like on
GitHub. To be more useful, this should support being able to update
the dependencies. It is also easy to extend this to other ways of
retrieving projects. Support for svn and hg was a recent
contribution, for example.
3. Dependency management is a general area. Working on Apache Ivy itself
is another way to help. For example, I'm pretty sure Ivy is
fundamentally single threaded. Either a) it's not and you can fix sbt
to take advantage of this or b) make Ivy multi-threaded and faster at
resolving dependencies.
4. If you like parsers, sbt commands and input tasks are written using
custom parser combinators that provide tab completion and error
handling. Among other things, the efficiency could be improved.
5. The javap task hasn't been reintegrated
6. Implement enhanced 0.11-style warn/debug/info/error/trace commands.
Currently, you set it like any other setting:
::
set logLevel := Level.Warn
or
set logLevel in Test := Level.Warn
You could make commands that wrap this, like:
::
warn test:run
Also, trace is currently an integer, but should really be an abstract
data type.
7. There is more aggressive incremental compilation in sbt 0.12. I
expect it to be more difficult to reproduce bugs. It would be helpful
to have a mode that generates a diff between successive compilations
and records the options passed to scalac. This could be replayed or
inspected to try to find the cause.
Documentation
=============
1. There's a lot to do with this wiki. If you check the wiki out from
git, there's a directory called Dormant with some content that needs
going through.
2. the :doc:`main </index>` page mentions external project references (e.g. to a git
repo) but doesn't have anything to link to that explains how to use
those.
3. the :doc:`/Dormant/Configurations` page is missing a list of the built-in
configurations and the purpose of each.
4. grep the wiki's git checkout for "Wiki Maintenance Note" and work on
some of those
5. API docs are much needed.
6. Find useful answers or types/methods/values in the other docs, and
pull references to them up into :doc:`/faq` or :doc:`/Name-Index` so people can
find the docs. In general the :doc:`/faq` should feel a bit more like a
bunch of pointers into the regular docs, rather than an alternative
to the docs.
7. A lot of the pages could probably have better names, and/or little
2-4 word blurbs to the right of them in the sidebar.


@@ -1,19 +0,0 @@
* [[Home]] - Overview of sbt
* [[Getting Started Guide|Getting Started Welcome]] - START HERE
* [[FAQ]] - Questions, answered.
* [[Index]] - Find types, values, and methods
* [[Community]] - source, forums, releases
* [[Change history|Changes]]
* [[Credits]]
* [[License|https://github.com/harrah/xsbt/blob/0.11/LICENSE]]
* [[Source code (github)|https://github.com/harrah/xsbt/tree/0.11]]
* [[Source code (SXR)|http://harrah.github.com/xsbt/latest/sxr/index.html]]
* [[API Documentation|http://harrah.github.com/xsbt/latest/api/index.html]]
* [[Places to help|Opportunities]]
* [[Nightly Builds]]
* [[Plugins list|sbt-0.10-plugins-list]]
* [[Resources]]
* [[Examples|Community-Examples]]
* [[Examples]]
* [[Detailed Topics]] - deep dive docs
* [[Extending sbt|Extending]] - internals docs


@@ -0,0 +1,10 @@
.. toctree::
:maxdepth: 2
Changes
ChangeSummary_0.12.0
Community
Credits
Nightly-Builds
Opportunities
sbt-0.10-plugins-list


@@ -1,130 +0,0 @@
The purpose of this page is to aid developers in finding plugins that work with sbt 0.10+ and to let plugin developers promote their plugins, possibly by adding a brief description.
## Plugins
### Plugins for IDEs:
* IntelliJ IDEA
* SBT Plugin to generate IDEA project configuration: https://github.com/mpeltonen/sbt-idea
* IDEA Plugin to embed an SBT Console into the IDE: https://github.com/orfjackal/idea-sbt-plugin
* Netbeans: https://github.com/remeniuk/sbt-netbeans-plugin
* Eclipse: https://github.com/typesafehub/sbteclipse
### Web Plugins
* xsbt-web-plugin: https://github.com/siasia/xsbt-web-plugin
* xsbt-webstart: https://github.com/ritschwumm/xsbt-webstart
* sbt-appengine: https://github.com/sbt/sbt-appengine
* sbt-gwt-plugin: https://github.com/thunderklaus/sbt-gwt-plugin
* sbt-cloudbees-plugin: https://github.com/timperrett/sbt-cloudbees-plugin
* sbt-jelastic-deploy: https://github.com/casualjim/sbt-jelastic-deploy
### Test plugins
* junit_xml_listener: https://github.com/ijuma/junit_xml_listener
* sbt-growl-plugin: https://github.com/softprops/sbt-growl-plugin
* sbt-teamcity-test-reporting-plugin: https://github.com/guardian/sbt-teamcity-test-reporting-plugin
* xsbt-cucumber-plugin: https://github.com/skipoleschris/xsbt-cucumber-plugin
### Static Code Analysis plugins
* cpd4sbt: https://bitbucket.org/jmhofer/cpd4sbt (copy/paste detection, works for Scala, too)
* findbugs4sbt: https://bitbucket.org/jmhofer/findbugs4sbt (FindBugs only supports Java projects atm)
### One jar plugins
* sbt-assembly: https://github.com/sbt/sbt-assembly
* xsbt-proguard-plugin: https://github.com/siasia/xsbt-proguard-plugin
* sbt-deploy: https://github.com/reaktor/sbt-deploy
* sbt-appbundle (os x standalone): https://github.com/sbt/sbt-appbundle
### Frontend development plugins
* coffeescripted-sbt: https://github.com/softprops/coffeescripted-sbt
* less-sbt (for less-1.3.0): https://github.com/softprops/less-sbt
* sbt-less-plugin (it uses less-1.3.0): https://github.com/btd/sbt-less-plugin
* sbt-emberjs: https://github.com/stefri/sbt-emberjs
* sbt-closure: https://github.com/eltimn/sbt-closure
* sbt-yui-compressor: https://github.com/indrajitr/sbt-yui-compressor
* sbt-requirejs: https://github.com/scalatra/sbt-requirejs
### LWJGL (Light Weight Java Game Library) Plugin
* sbt-lwjgl-plugin: https://github.com/philcali/sbt-lwjgl-plugin
### Release plugins
* sbt-aether-plugin (Published artifacts using Sonatype Aether): https://github.com/arktekk/sbt-aether-deploy
* posterous-sbt: https://github.com/n8han/posterous-sbt
* sbt-signer-plugin: https://github.com/rossabaker/sbt-signer-plugin
* sbt-izpack (generates an IzPack installer): http://software.clapper.org/sbt-izpack/
* sbt-ghpages-plugin (publishes generated site and api): https://github.com/jsuereth/xsbt-ghpages-plugin
* sbt-gpg-plugin (PGP signing plugin, can generate keys too): https://github.com/sbt/xsbt-gpg-plugin
* sbt-release (customizable release process): https://github.com/gseitz/sbt-release
* sbt-unique-version (emulates unique snapshots): https://github.com/sbt/sbt-unique-version
### System plugins
* sbt-sh (executes shell commands): https://github.com/steppenwells/sbt-sh
* cronish-sbt (interval sbt / shell command execution): https://github.com/philcali/cronish-sbt
* git (executes git commands): https://github.com/sbt/sbt-git-plugin
* svn (execute svn commands): https://github.com/xuwei-k/sbtsvn
### Code generator plugins
* xsbt-fmpp-plugin (FreeMarker Scala/Java Templating): https://github.com/aloiscochard/xsbt-fmpp-plugin
* sbt-scalaxb (XSD and WSDL binding): https://github.com/eed3si9n/scalaxb
* sbt-protobuf (Google Protocol Buffers): https://github.com/gseitz/sbt-protobuf
* sbt-avro (Apache Avro): https://github.com/cavorite/sbt-avro
* sbt-xjc (XSD binding, using [JAXB XJC](http://download.oracle.com/javase/6/docs/technotes/tools/share/xjc.html)): https://github.com/retronym/sbt-xjc
* xsbt-scalate-generate (Generate/Precompile Scalate Templates): https://github.com/backchatio/xsbt-scalate-generate
* sbt-antlr (Generate Java source code based on ANTLR3 grammars): https://github.com/stefri/sbt-antlr
* xsbt-reflect (Generate Scala source code for project name and version): https://github.com/ritschwumm/xsbt-reflect
* sbt-buildinfo (Generate Scala source for any settings): https://github.com/sbt/sbt-buildinfo
* lifty (Brings scaffolding to SBT): https://github.com/lifty/lifty
* sbt-thrift (Thrift Code Generation): https://github.com/bigtoast/sbt-thrift
* xsbt-hginfo (Generate Scala source code for Mercurial repository information): https://bitbucket.org/lukas_pustina/xsbt-hginfo
* sbt-scalashim (Generate Scala shim like `sys.error`): https://github.com/sbt/sbt-scalashim
* sbtend (Generate Java source code from [xtend](http://www.eclipse.org/xtend/) ): https://github.com/xuwei-k/sbtend
### Database plugins
* sbt-liquibase (Liquibase RDBMS database migrations): https://github.com/bigtoast/sbt-liquibase
* sbt-dbdeploy (dbdeploy, a database change management tool): https://github.com/mr-ken/sbt-dbdeploy
### Documentation plugins
* sbt-lwm (Convert lightweight markup files, e.g., Markdown and Textile, to HTML): http://software.clapper.org/sbt-lwm/
### Utility plugins
* jot (Write down your ideas lest you forget them): https://github.com/softprops/jot
* ls-sbt (An sbt interface for ls.implicit.ly): https://github.com/softprops/ls
* np (Dead simple new project directory generation): https://github.com/softprops/np
* sbt-editsource (A poor man's *sed*(1), for SBT): http://software.clapper.org/sbt-editsource/
* sbt-dirty-money (Cleans Ivy2 cache): https://github.com/sbt/sbt-dirty-money
* sbt-dependency-graph (Creates a graphml file of the dependency tree): https://github.com/jrudolph/sbt-dependency-graph
* sbt-cross-building (Simplifies building your plugins for multiple versions of sbt): https://github.com/jrudolph/sbt-cross-building
* sbt-inspectr (Displays settings dependency tree): https://github.com/eed3si9n/sbt-inspectr
* sbt-revolver (Triggered restart, hot reloading): https://github.com/spray/sbt-revolver
* sbt-scalaedit (Open and upgrade ScalaEdit (text editor)): https://github.com/kjellwinblad/sbt-scalaedit-plugin
* sbt-man (Looks up scaladoc): https://github.com/sbt/sbt-man
* sbt-taglist (Looks for TODO-tags in the sources): https://github.com/johanandren/sbt-taglist
### Code coverage plugins
* sbt-scct: https://github.com/dvc94ch/sbt-scct
* jacoco4sbt: https://bitbucket.org/jmhofer/jacoco4sbt
### Android plugin
* android-plugin: https://github.com/jberkel/android-plugin
* android-sdk-plugin: https://github.com/pfn/android-sdk-plugin
### Build interoperability plugins
* ant4sbt: https://bitbucket.org/jmhofer/ant4sbt
### OSGi plugin
* sbtosgi: https://github.com/typesafehub/sbtosgi

View File

@ -0,0 +1,199 @@
===========
Plugin List
===========
The purpose of this page is to help developers find plugins that work
with sbt 0.10+, and to let plugin developers promote their plugins,
possibly by adding a brief description.
Plugins
-------
Plugins for IDEs:
~~~~~~~~~~~~~~~~~
- IntelliJ IDEA
- SBT Plugin to generate IDEA project configuration:
https://github.com/mpeltonen/sbt-idea
- IDEA Plugin to embed an SBT Console into the IDE:
https://github.com/orfjackal/idea-sbt-plugin
- Netbeans: https://github.com/remeniuk/sbt-netbeans-plugin
- Eclipse: https://github.com/typesafehub/sbteclipse
Web Plugins
~~~~~~~~~~~
- xsbt-web-plugin: https://github.com/siasia/xsbt-web-plugin
- xsbt-webstart: https://github.com/ritschwumm/xsbt-webstart
- sbt-appengine: https://github.com/sbt/sbt-appengine
- sbt-gwt-plugin: https://github.com/thunderklaus/sbt-gwt-plugin
- sbt-cloudbees-plugin:
https://github.com/timperrett/sbt-cloudbees-plugin
- sbt-jelastic-deploy: https://github.com/casualjim/sbt-jelastic-deploy
Test plugins
~~~~~~~~~~~~
- junit\_xml\_listener: https://github.com/ijuma/junit\_xml\_listener
- sbt-growl-plugin: https://github.com/softprops/sbt-growl-plugin
- sbt-teamcity-test-reporting-plugin:
https://github.com/guardian/sbt-teamcity-test-reporting-plugin
- xsbt-cucumber-plugin:
https://github.com/skipoleschris/xsbt-cucumber-plugin
Static Code Analysis plugins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- cpd4sbt: https://bitbucket.org/jmhofer/cpd4sbt (copy/paste detection,
works for Scala, too)
- findbugs4sbt: https://bitbucket.org/jmhofer/findbugs4sbt (FindBugs
only supports Java projects atm)
One jar plugins
~~~~~~~~~~~~~~~
- sbt-assembly: https://github.com/sbt/sbt-assembly
- xsbt-proguard-plugin: https://github.com/siasia/xsbt-proguard-plugin
- sbt-deploy: https://github.com/reaktor/sbt-deploy
- sbt-appbundle (os x standalone): https://github.com/sbt/sbt-appbundle
Frontend development plugins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- coffeescripted-sbt: https://github.com/softprops/coffeescripted-sbt
- less-sbt (for less-1.3.0): https://github.com/softprops/less-sbt
- sbt-less-plugin (it uses less-1.3.0):
https://github.com/btd/sbt-less-plugin
- sbt-emberjs: https://github.com/stefri/sbt-emberjs
- sbt-closure: https://github.com/eltimn/sbt-closure
- sbt-yui-compressor: https://github.com/indrajitr/sbt-yui-compressor
- sbt-requirejs: https://github.com/scalatra/sbt-requirejs
LWJGL (Light Weight Java Game Library) Plugin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- sbt-lwjgl-plugin: https://github.com/philcali/sbt-lwjgl-plugin
Release plugins
~~~~~~~~~~~~~~~
- sbt-aether-plugin (Published artifacts using Sonatype Aether):
https://github.com/arktekk/sbt-aether-deploy
- posterous-sbt: https://github.com/n8han/posterous-sbt
- sbt-signer-plugin: https://github.com/rossabaker/sbt-signer-plugin
-  sbt-izpack (generates an IzPack installer):
   http://software.clapper.org/sbt-izpack/
- sbt-ghpages-plugin (publishes generated site and api):
https://github.com/jsuereth/xsbt-ghpages-plugin
- sbt-gpg-plugin (PGP signing plugin, can generate keys too):
https://github.com/sbt/xsbt-gpg-plugin
- sbt-release (customizable release process):
https://github.com/gseitz/sbt-release
- sbt-unique-version (emulates unique snapshots):
https://github.com/sbt/sbt-unique-version
System plugins
~~~~~~~~~~~~~~
- sbt-sh (executes shell commands):
https://github.com/steppenwells/sbt-sh
- cronish-sbt (interval sbt / shell command execution):
https://github.com/philcali/cronish-sbt
- git (executes git commands): https://github.com/sbt/sbt-git-plugin
- svn (execute svn commands): https://github.com/xuwei-k/sbtsvn
Code generator plugins
~~~~~~~~~~~~~~~~~~~~~~
- xsbt-fmpp-plugin (FreeMarker Scala/Java Templating):
https://github.com/aloiscochard/xsbt-fmpp-plugin
- sbt-scalaxb (XSD and WSDL binding):
https://github.com/eed3si9n/scalaxb
- sbt-protobuf (Google Protocol Buffers):
https://github.com/gseitz/sbt-protobuf
- sbt-avro (Apache Avro): https://github.com/cavorite/sbt-avro
- sbt-xjc (XSD binding, using `JAXB
XJC <http://download.oracle.com/javase/6/docs/technotes/tools/share/xjc.html>`_):
https://github.com/retronym/sbt-xjc
- xsbt-scalate-generate (Generate/Precompile Scalate Templates):
https://github.com/backchatio/xsbt-scalate-generate
- sbt-antlr (Generate Java source code based on ANTLR3 grammars):
https://github.com/stefri/sbt-antlr
- xsbt-reflect (Generate Scala source code for project name and
version): https://github.com/ritschwumm/xsbt-reflect
- sbt-buildinfo (Generate Scala source for any settings):
https://github.com/sbt/sbt-buildinfo
- lifty (Brings scaffolding to SBT): https://github.com/lifty/lifty
- sbt-thrift (Thrift Code Generation):
https://github.com/bigtoast/sbt-thrift
- xsbt-hginfo (Generate Scala source code for Mercurial repository
information): https://bitbucket.org/lukas\_pustina/xsbt-hginfo
- sbt-scalashim (Generate Scala shim like ``sys.error``):
https://github.com/sbt/sbt-scalashim
- sbtend (Generate Java source code from
`xtend <http://www.eclipse.org/xtend/>`_ ):
https://github.com/xuwei-k/sbtend
Database plugins
~~~~~~~~~~~~~~~~
- sbt-liquibase (Liquibase RDBMS database migrations):
https://github.com/bigtoast/sbt-liquibase
- sbt-dbdeploy (dbdeploy, a database change management tool):
https://github.com/mr-ken/sbt-dbdeploy
Documentation plugins
~~~~~~~~~~~~~~~~~~~~~
- sbt-lwm (Convert lightweight markup files, e.g., Markdown and
Textile, to HTML): http://software.clapper.org/sbt-lwm/
Utility plugins
~~~~~~~~~~~~~~~
-  jot (Write down your ideas lest you forget them):
   https://github.com/softprops/jot
- ls-sbt (An sbt interface for ls.implicit.ly):
https://github.com/softprops/ls
- np (Dead simple new project directory generation):
https://github.com/softprops/np
- sbt-editsource (A poor man's *sed*\ (1), for SBT):
http://software.clapper.org/sbt-editsource/
- sbt-dirty-money (Cleans Ivy2 cache):
https://github.com/sbt/sbt-dirty-money
- sbt-dependency-graph (Creates a graphml file of the dependency tree):
https://github.com/jrudolph/sbt-dependency-graph
- sbt-cross-building (Simplifies building your plugins for multiple
versions of sbt): https://github.com/jrudolph/sbt-cross-building
- sbt-inspectr (Displays settings dependency tree):
https://github.com/eed3si9n/sbt-inspectr
- sbt-revolver (Triggered restart, hot reloading):
https://github.com/spray/sbt-revolver
- sbt-scalaedit (Open and upgrade ScalaEdit (text editor)):
https://github.com/kjellwinblad/sbt-scalaedit-plugin
- sbt-man (Looks up scaladoc): https://github.com/sbt/sbt-man
- sbt-taglist (Looks for TODO-tags in the sources):
https://github.com/johanandren/sbt-taglist
Code coverage plugins
~~~~~~~~~~~~~~~~~~~~~
- sbt-scct: https://github.com/dvc94ch/sbt-scct
- jacoco4sbt: https://bitbucket.org/jmhofer/jacoco4sbt
Android plugin
~~~~~~~~~~~~~~
- android-plugin: https://github.com/jberkel/android-plugin
- android-sdk-plugin: https://github.com/pfn/android-sdk-plugin
Build interoperability plugins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- ant4sbt: https://bitbucket.org/jmhofer/ant4sbt
OSGi plugin
~~~~~~~~~~~
- sbtosgi: https://github.com/typesafehub/sbtosgi

View File

@ -1,167 +0,0 @@
[Ivy documentation]: http://ant.apache.org/ivy/history/2.2.0/ivyfile/dependency-artifact.html
[Artifact API]: http://harrah.github.com/xsbt/latest/api/sbt/Artifact$.html
[SettingsDefinition]: http://harrah.github.com/xsbt/latest/api/#sbt.Init$SettingsDefinition
# Artifacts
# Selecting default artifacts
By default, the published artifacts are the main binary jar, a jar containing the main sources and resources, and a jar containing the API documentation. You can add artifacts for the test classes, sources, or API or you can disable some of the main artifacts.
To add all test artifacts:
```scala
publishArtifact in Test := true
```
To add them individually:
```scala
// enable publishing the jar produced by `test:package`
publishArtifact in (Test, packageBin) := true
// enable publishing the test API jar
publishArtifact in (Test, packageDoc) := true
// enable publishing the test sources jar
publishArtifact in (Test, packageSrc) := true
```
To disable main artifacts individually:
```scala
// disable publishing the main jar produced by `package`
publishArtifact in (Compile, packageBin) := false
// disable publishing the main API jar
publishArtifact in (Compile, packageDoc) := false
// disable publishing the main sources jar
publishArtifact in (Compile, packageSrc) := false
```
# Modifying default artifacts
Each built-in artifact has several configurable settings in addition to `publish-artifact`.
The basic ones are `artifact` (of type `SettingKey[Artifact]`), `mappings` (of type `TaskKey[Seq[(File, String)]]`), and `artifactPath` (of type `SettingKey[File]`).
They are scoped by `(<config>, <task>)` as indicated in the previous section.
To modify the type of the main artifact, for example:
```scala
artifact in (Compile, packageBin) ~= { (art: Artifact) =>
art.copy(`type` = "bundle")
}
```
The generated artifact name is determined by the `artifact-name` setting. This setting is of type `(ScalaVersion, ModuleID, Artifact) => String`. The ScalaVersion argument provides the full Scala version String and the binary compatible part of the version String. The String result is the name of the file to produce. The default implementation is `Artifact.artifactName _`. The function may be modified to produce different local names for artifacts without affecting the published name, which is determined by the `artifact` definition combined with the repository pattern.
For example, to produce a minimal name without a classifier or cross path:
```scala
artifactName := { (sv: ScalaVersion, module: ModuleID, artifact: Artifact) =>
artifact.name + "-" + module.revision + "." + artifact.extension
}
```
(Note that in practice you rarely want to drop the classifier.)
Finally, you can get the `(Artifact, File)` pair for the artifact by mapping the `packaged-artifact` task. Note that if you don't need the `Artifact`, you can get just the File from the package task (`package`, `package-doc`, or `package-src`). In both cases, mapping the task to get the file ensures that the artifact is generated first and so the file is guaranteed to be up-to-date.
For example:
```scala
myTask <<= packagedArtifact in (Compile, packageBin) map { case (art: Artifact, file: File) =>
println("Artifact definition: " + art)
println("Packaged file: " + file.getAbsolutePath)
}
```
where `val myTask = TaskKey[Unit]`.
# Defining custom artifacts
In addition to configuring the built-in artifacts, you can declare other artifacts to publish. Multiple artifacts are allowed when using Ivy metadata, but a Maven POM file only supports distinguishing artifacts based on classifiers and these are not recorded in the POM.
Basic `Artifact` construction looks like:
```scala
Artifact("name", "type", "extension")
Artifact("name", "classifier")
Artifact("name", url: URL)
Artifact("name", Map("extra1" -> "value1", "extra2" -> "value2"))
```
For example:
```scala
Artifact("myproject", "zip", "zip")
Artifact("myproject", "image", "jpg")
Artifact("myproject", "jdk15")
```
See the [Ivy documentation] for more details on artifacts. See the [Artifact API] for combining the parameters above and specifying [Configurations] and extra attributes.
To declare these artifacts for publishing, map them to the task that generates the artifact:
```scala
myImageTask := {
val artifact: File = makeArtifact(...)
artifact
}
addArtifact( Artifact("myproject", "image", "jpg"), myImageTask )
```
where `val myImageTask = TaskKey[File](...)`.
`addArtifact` returns a sequence of settings (wrapped in a [SettingsDefinition]). In a full build configuration, usage looks like:
```scala
...
lazy val proj = Project(...)
.settings( addArtifact(...).settings : _* )
...
```
# Publishing .war files
A common use case for web applications is to publish the `.war` file instead of the `.jar` file.
```scala
// disable .jar publishing
publishArtifact in (Compile, packageBin) := false
// create an Artifact for publishing the .war file
artifact in (Compile, packageWar) ~= { (art: Artifact) =>
art.copy(`type` = "war", extension = "war")
}
// add the .war file to what gets published
addArtifact(artifact in (Compile, packageWar), packageWar)
```
# Using dependencies with artifacts
To specify the artifacts to use from a dependency that has custom or multiple artifacts, use the `artifacts` method on your dependencies. For example:
```scala
libraryDependencies += "org" % "name" % "rev" artifacts(Artifact("name", "type", "ext"))
```
The `from` and `classifier` methods (described on the [[Library Management]] page) are actually convenience methods that translate to `artifacts`:
```scala
def from(url: String) = artifacts( Artifact(name, new URL(url)) )
def classifier(c: String) = artifacts( Artifact(name, c) )
```
That is, the following two dependency declarations are equivalent:
```scala
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
libraryDependencies += "org.testng" % "testng" % "5.7" artifacts( Artifact("testng", "jdk15") )
```

View File

@ -0,0 +1,204 @@
=========
Artifacts
=========
Selecting default artifacts
===========================
By default, the published artifacts are the main binary jar, a jar
containing the main sources and resources, and a jar containing the API
documentation. You can add artifacts for the test classes, sources, or
API or you can disable some of the main artifacts.
To add all test artifacts:
::
publishArtifact in Test := true
To add them individually:
::
// enable publishing the jar produced by `test:package`
publishArtifact in (Test, packageBin) := true
// enable publishing the test API jar
publishArtifact in (Test, packageDoc) := true
// enable publishing the test sources jar
publishArtifact in (Test, packageSrc) := true
To disable main artifacts individually:
::
// disable publishing the main jar produced by `package`
publishArtifact in (Compile, packageBin) := false
// disable publishing the main API jar
publishArtifact in (Compile, packageDoc) := false
// disable publishing the main sources jar
publishArtifact in (Compile, packageSrc) := false
Modifying default artifacts
===========================
Each built-in artifact has several configurable settings in addition to
``publish-artifact``. The basic ones are ``artifact`` (of type
``SettingKey[Artifact]``), ``mappings`` (of type
``TaskKey[Seq[(File, String)]]``), and ``artifactPath`` (of type
``SettingKey[File]``). They are scoped by ``(<config>, <task>)`` as
indicated in the previous section.
To modify the type of the main artifact, for example:
::
artifact in (Compile, packageBin) ~= { (art: Artifact) =>
art.copy(`type` = "bundle")
}
The generated artifact name is determined by the ``artifact-name``
setting. This setting is of type
``(ScalaVersion, ModuleID, Artifact) => String``. The ScalaVersion
argument provides the full Scala version String and the binary
compatible part of the version String. The String result is the name of
the file to produce. The default implementation is
``Artifact.artifactName _``. The function may be modified to produce
different local names for artifacts without affecting the published
name, which is determined by the ``artifact`` definition combined with
the repository pattern.
For example, to produce a minimal name without a classifier or cross
path:
::
artifactName := { (sv: ScalaVersion, module: ModuleID, artifact: Artifact) =>
artifact.name + "-" + module.revision + "." + artifact.extension
}
(Note that in practice you rarely want to drop the classifier.)
Finally, you can get the ``(Artifact, File)`` pair for the artifact by
mapping the ``packaged-artifact`` task. Note that if you don't need the
``Artifact``, you can get just the File from the package task
(``package``, ``package-doc``, or ``package-src``). In both cases,
mapping the task to get the file ensures that the artifact is generated
first and so the file is guaranteed to be up-to-date.
For example:
::
myTask <<= packagedArtifact in (Compile, packageBin) map { case (art: Artifact, file: File) =>
println("Artifact definition: " + art)
println("Packaged file: " + file.getAbsolutePath)
}
where ``val myTask = TaskKey[Unit]``.
Defining custom artifacts
=========================
In addition to configuring the built-in artifacts, you can declare other
artifacts to publish. Multiple artifacts are allowed when using Ivy
metadata, but a Maven POM file only supports distinguishing artifacts
based on classifiers and these are not recorded in the POM.
Basic ``Artifact`` construction looks like:
::
Artifact("name", "type", "extension")
Artifact("name", "classifier")
Artifact("name", url: URL)
Artifact("name", Map("extra1" -> "value1", "extra2" -> "value2"))
For example:
::
Artifact("myproject", "zip", "zip")
Artifact("myproject", "image", "jpg")
Artifact("myproject", "jdk15")
See the `Ivy
documentation <http://ant.apache.org/ivy/history/2.2.0/ivyfile/dependency-artifact.html>`_
for more details on artifacts. See the `Artifact
API <../../api/sbt/Artifact$.html>`_ for
combining the parameters above and specifying [Configurations] and extra
attributes.
To declare these artifacts for publishing, map them to the task that
generates the artifact:
::
myImageTask := {
val artifact: File = makeArtifact(...)
artifact
}
addArtifact( Artifact("myproject", "image", "jpg"), myImageTask )
where ``val myImageTask = TaskKey[File](...)``.
``addArtifact`` returns a sequence of settings (wrapped in a
`SettingsDefinition <../../api/#sbt.Init$SettingsDefinition>`_).
In a full build configuration, usage looks like:
::
...
lazy val proj = Project(...)
.settings( addArtifact(...).settings : _* )
...
Publishing .war files
=====================
A common use case for web applications is to publish the ``.war`` file
instead of the ``.jar`` file.
::
// disable .jar publishing
publishArtifact in (Compile, packageBin) := false
// create an Artifact for publishing the .war file
artifact in (Compile, packageWar) ~= { (art: Artifact) =>
art.copy(`type` = "war", extension = "war")
}
// add the .war file to what gets published
addArtifact(artifact in (Compile, packageWar), packageWar)
Using dependencies with artifacts
=================================
To specify the artifacts to use from a dependency that has custom or
multiple artifacts, use the ``artifacts`` method on your dependencies.
For example:
::
libraryDependencies += "org" % "name" % "rev" artifacts(Artifact("name", "type", "ext"))
The ``from`` and ``classifier`` methods (described on the :doc:`Library Management <Library-Management>`
page) are actually convenience methods that translate to ``artifacts``:
::
def from(url: String) = artifacts( Artifact(name, new URL(url)) )
def classifier(c: String) = artifacts( Artifact(name, c) )
That is, the following two dependency declarations are equivalent:
::

    libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"

    libraryDependencies += "org.testng" % "testng" % "5.7" artifacts( Artifact("testng", "jdk15") )

View File

@ -1,130 +0,0 @@
# Best Practices
This page describes best practices for working with sbt.
Nontrivial additions and changes should generally be discussed on the [mailing list](http://groups.google.com/group/simple-build-tool/topics) first.
(Because there isn't built-in support for discussing GitHub wiki edits like normal commits, a subpar suggestion can only be reverted in its entirety without comment.)
### `project/` vs. `~/.sbt/`
Anything that is necessary for building the project should go in `project/`.
This includes things like the web plugin.
`~/.sbt/` should contain local customizations and commands for working with a build, but are not necessary.
An example is an IDE plugin.
### Local settings
There are two options for settings that are specific to a user. An example of such a setting is inserting the local Maven repository at the beginning of the resolvers list:
```scala
resolvers <<= resolvers {rs =>
val localMaven = "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
localMaven +: rs
}
```
1. Put settings specific to a user in a global `.sbt` file, such as `~/.sbt/local.sbt`. These settings will be applied to all projects.
2. Put settings in a `.sbt` file in a project that isn't checked into version control, such as `<project>/local.sbt`. sbt combines the settings from multiple `.sbt` files, so you can still have the standard `<project>/build.sbt` and check that into version control.
### .sbtrc
Put commands to be executed when sbt starts up in a `.sbtrc` file, one per line.
These commands run before a project is loaded and are useful for defining aliases, for example.
sbt executes commands in `$HOME/.sbtrc` (if it exists) and then `<project>/.sbtrc` (if it exists).
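For example, a hypothetical `~/.sbtrc` defining two aliases (the alias names here are illustrative) might contain:

```text
alias bc = ~compile
alias rt = ;reload ;test
```

Each line is an sbt command; `alias` is sbt's built-in command for defining shortcuts.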
### Generated files
Write any generated files to a subdirectory of the output directory, which is specified by the `target` setting.
This makes it easy to clean up after a build and provides a single location to organize generated files.
Any generated files that are specific to a Scala version should go in `crossTarget` for efficient cross-building.
For generating sources and resources, see [[Common Tasks]].
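As a sketch of a source generator (using the 0.12-style syntax of the other examples on this page; the file name and contents are illustrative):

```scala
// register a generator that writes one Scala source file under
// sourceManaged, which defaults to a directory under crossTarget
sourceGenerators in Compile <+= (sourceManaged in Compile, version) map {
  (dir: File, v: String) =>
    val file = dir / "Info.scala"
    IO.write(file, """object Info { val version = "%s" }""" format v)
    Seq(file)
}
```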
### Don't hard code
Don't hard code constants, like the output directory `target/`.
This is especially important for plugins.
A user might change the `target` setting to point to `build/`, for example, and the plugin needs to respect that.
Instead, use the setting, like:
```scala
myDirectory <<= target(_ / "sub-directory")
```
### Don't "mutate" files
A build naturally consists of a lot of file manipulation.
How can we reconcile this with the task system, which otherwise helps us avoid mutable state?
One approach, which is the recommended approach and the approach used by sbt's default tasks, is to only write to any given file once and only from a single task.
A build product (or by-product) should be written exactly once by only one task.
The task should then, at a minimum, provide the Files created as its result.
Another task that wants to use Files should map the task, simultaneously obtaining the File reference and ensuring that the task has run (and thus the file is constructed).
Obviously you cannot do much about the user or other processes modifying the files, but you can make the I/O that is under the build's control more predictable by treating file contents as immutable at the level of Tasks.
For example:
```scala
lazy val makeFile = TaskKey[File]("make-file")
// define a task that creates a file,
// writes some content, and returns the File
// The write is completely
makeFile := {
val f: File = file("/tmp/data.txt")
IO.write(f, "Some content")
f
}
// The result of makeFile is the constructed File,
// so useFile can map makeFile and simultaneously
// get the File and declare the dependency on makeFile
useFile <<= makeFile map { (f: File) =>
doSomething( f )
}
```
This arrangement is not always possible, but it should be the rule and not the exception.
### Use absolute paths
Construct only absolute Files.
Either specify an absolute path
```scala
file("/home/user/A.scala")
```
or construct the file from an absolute base:
```scala
base / "A.scala"
```
This is related to the no hard coding best practice because the proper way involves referencing the `baseDirectory` setting.
For example, the following defines the myPath setting to be the `<base>/licenses/` directory.
```scala
myPath <<= baseDirectory(_ / "licenses")
```
In Java (and thus in Scala), a relative File is relative to the current working directory.
The working directory is not always the same as the build root directory for a number of reasons.
The only exception to this rule is when specifying the base directory for a Project.
Here, sbt will resolve a relative File against the build root directory for you for convenience.
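For instance (project and path names are illustrative), a relative path given as a Project's base directory is safe because sbt resolves it for you:

```scala
// file("sub") is relative, but as a Project base directory
// sbt resolves it against the build root
lazy val sub = Project("sub", file("sub"))
```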
### Parser combinators
1. Use `token` everywhere to clearly delimit tab completion boundaries.
2. Don't overlap or nest tokens. The behavior here is unspecified and will likely generate an error in the future.
3. Use `flatMap` for general recursion. sbt's combinators are strict to limit the number of classes generated, so use `flatMap` like:
```scala
lazy val parser: Parser[Int] = token(IntBasic) flatMap { i =>
if(i <= 0)
success(i)
else
token(Space ~> parser)
}
```
This example defines a parser for a whitespace-delimited list of integers, ending with a negative number, and returning that final, negative number.

View File

@ -0,0 +1,163 @@
==============
Best Practices
==============
This page describes best practices for working with sbt. Nontrivial
additions and changes should generally be discussed on the `mailing
list <http://groups.google.com/group/simple-build-tool/topics>`_ first.
(Because there isn't built-in support for discussing GitHub wiki edits
like normal commits, a subpar suggestion can only be reverted in its
entirety without comment.)
``project/`` vs. ``~/.sbt/``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Anything that is necessary for building the project should go in
``project/``. This includes things like the web plugin. ``~/.sbt/``
should contain local customizations and commands for working with a
build, but are not necessary. An example is an IDE plugin.
Local settings
~~~~~~~~~~~~~~
There are two options for settings that are specific to a user. An
example of such a setting is inserting the local Maven repository at the
beginning of the resolvers list:
::
resolvers <<= resolvers {rs =>
val localMaven = "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
localMaven +: rs
}
1. Put settings specific to a user in a global ``.sbt`` file, such as
``~/.sbt/local.sbt``. These settings will be applied to all projects.
2. Put settings in a ``.sbt`` file in a project that isn't checked into
version control, such as ``<project>/local.sbt``. sbt combines the
settings from multiple ``.sbt`` files, so you can still have the
standard ``<project>/build.sbt`` and check that into version control.
.sbtrc
~~~~~~
Put commands to be executed when sbt starts up in a ``.sbtrc`` file, one
per line. These commands run before a project is loaded and are useful
for defining aliases, for example. sbt executes commands in
``$HOME/.sbtrc`` (if it exists) and then ``<project>/.sbtrc`` (if it
exists).
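For example, a hypothetical ``~/.sbtrc`` defining two aliases (the
alias names here are illustrative) might contain:

::

    alias bc = ~compile
    alias rt = ;reload ;test

Each line is an sbt command; ``alias`` is sbt's built-in command for
defining shortcuts.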
Generated files
~~~~~~~~~~~~~~~
Write any generated files to a subdirectory of the output directory,
which is specified by the ``target`` setting. This makes it easy to
clean up after a build and provides a single location to organize
generated files. Any generated files that are specific to a Scala
version should go in ``crossTarget`` for efficient cross-building.
For generating sources and resources, see :ref:`the faq entry <generate-sources-resources>`.
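As a sketch of a source generator (using the 0.12-style syntax of the
other examples on this page; the file name and contents are
illustrative):

::

    // register a generator that writes one Scala source file under
    // sourceManaged, which defaults to a directory under crossTarget
    sourceGenerators in Compile <+= (sourceManaged in Compile, version) map {
      (dir: File, v: String) =>
        val file = dir / "Info.scala"
        IO.write(file, """object Info { val version = "%s" }""" format v)
        Seq(file)
    }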
Don't hard code
~~~~~~~~~~~~~~~
Don't hard code constants, like the output directory ``target/``. This
is especially important for plugins. A user might change the ``target``
setting to point to ``build/``, for example, and the plugin needs to
respect that. Instead, use the setting, like:
::
myDirectory <<= target(_ / "sub-directory")
Don't "mutate" files
~~~~~~~~~~~~~~~~~~~~
A build naturally consists of a lot of file manipulation. How can we
reconcile this with the task system, which otherwise helps us avoid
mutable state? One approach, which is the recommended approach and the
approach used by sbt's default tasks, is to only write to any given file
once and only from a single task.
A build product (or by-product) should be written exactly once by only
one task. The task should then, at a minimum, provide the Files created
as its result. Another task that wants to use Files should map the task,
simultaneously obtaining the File reference and ensuring that the task
has run (and thus the file is constructed). Obviously you cannot do much
about the user or other processes modifying the files, but you can make
the I/O that is under the build's control more predictable by treating
file contents as immutable at the level of Tasks.
For example:
::
lazy val makeFile = TaskKey[File]("make-file")
// define a task that creates a file,
// writes some content, and returns the File
// The write is completely
makeFile := {
val f: File = file("/tmp/data.txt")
IO.write(f, "Some content")
f
}
// The result of makeFile is the constructed File,
// so useFile can map makeFile and simultaneously
// get the File and declare the dependency on makeFile
useFile <<= makeFile map { (f: File) =>
doSomething( f )
}
This arrangement is not always possible, but it should be the rule and
not the exception.
Use absolute paths
~~~~~~~~~~~~~~~~~~
Construct only absolute Files. Either specify an absolute path
::
file("/home/user/A.scala")
or construct the file from an absolute base:
::
base / "A.scala"
This is related to the no hard coding best practice because the proper
way involves referencing the ``baseDirectory`` setting. For example, the
following defines the myPath setting to be the ``<base>/licenses/``
directory.
::
myPath <<= baseDirectory(_ / "licenses")
In Java (and thus in Scala), a relative File is relative to the current
working directory. The working directory is not always the same as the
build root directory for a number of reasons.
The only exception to this rule is when specifying the base directory
for a Project. Here, sbt will resolve a relative File against the build
root directory for you for convenience.
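For example, a sketch of a project defined with a relative base
directory (the build object and project names here are illustrative):

::

    import sbt._

    object MyBuild extends Build {
      // file("core") is relative, but sbt resolves it against
      // the build root directory, not the JVM working directory
      lazy val core = Project("core", file("core"))
    }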
Parser combinators
~~~~~~~~~~~~~~~~~~
1. Use ``token`` everywhere to clearly delimit tab completion
boundaries.
2. Don't overlap or nest tokens. The behavior here is unspecified and
will likely generate an error in the future.
3. Use ``flatMap`` for general recursion. sbt's combinators are strict
to limit the number of classes generated, so use ``flatMap`` like:
   ::

       lazy val parser: Parser[Int] =
         token(IntBasic) flatMap { i =>
           if(i <= 0) success(i) else token(Space ~> parser)
         }

   This example defines a parser for a whitespace-delimited list of
   integers, ending with a non-positive number, and returning that
   final number.
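As a small sketch of the first two guidelines (the alternatives shown
are illustrative):

::

    import complete.DefaultParsers._

    // each alternative is wrapped in token, and tokens do not
    // overlap or nest, so tab completion suggests whole alternatives
    val action = token("compile") | token("test") | token("doc")
    val actionParser = token(Space) ~> action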
@ -1,122 +0,0 @@
[Attributed]: http://harrah.github.com/xsbt/latest/api/sbt/Attributed.html
# Classpaths, sources, and resources
This page discusses how sbt builds up classpaths for different actions, like `compile`, `run`, and `test` and how to override or augment these classpaths.
# Basics
In sbt 0.10 and later, classpaths now include the Scala library and (when declared as a dependency) the Scala compiler. Classpath-related settings and tasks typically provide a value of type `Classpath`. This is an alias for `Seq[Attributed[File]]`. [Attributed] is a type that associates a heterogeneous map with each classpath entry. Currently, this allows sbt to associate the `Analysis` resulting from compilation with the corresponding classpath entry and for managed entries, the `ModuleID` and `Artifact` that defined the dependency.
To explicitly extract the raw `Seq[File]`, use the `files` method implicitly added to `Classpath`:
```scala
val cp: Classpath = ...
val raw: Seq[File] = cp.files
```
To create a `Classpath` from a `Seq[File]`, use `classpath` and to create an `Attributed[File]` from a `File`, use `Attributed.blank`:
```scala
val raw: Seq[File] = ...
val cp: Classpath = raw.classpath
val rawFile: File = ...
val af: Attributed[File] = Attributed.blank(rawFile)
```
## Unmanaged v. managed
Classpaths, sources, and resources are separated into two main categories: unmanaged and managed.
Unmanaged files are manually created files that are outside of the control of the build.
They are the inputs to the build.
Managed files are under the control of the build.
These include generated sources and resources as well as resolved and retrieved dependencies and compiled classes.
Tasks that produce managed files should be inserted as follows:
```scala
sourceGenerators in Compile <+= sourceManaged in Compile map { out =>
generate(out / "some_directory")
}
```
In this example, `generate` is some function of type `File => Seq[File]` that actually does the work.
The `<+=` method is like `+=`, but allows the right hand side to have inputs (like the difference between `:=` and `<<=`).
So, we are appending a new task to the list of main source generators (`sourceGenerators in Compile`).
To insert a named task, which is the better approach for plugins:
```scala
sourceGenerators in Compile <+= (mySourceGenerator in Compile).task
mySourceGenerator in Compile <<= sourceManaged in Compile map { out =>
generate(out / "some_directory")
}
```
where `mySourceGenerator` is defined as:
```scala
val mySourceGenerator = TaskKey[Seq[File]](...)
```
The `task` method is used to refer to the actual task instead of the result of the task.
For resources, there are similar keys `resourceGenerators` and `resourceManaged`.
### Excluding source files by name
The project base directory is by default a source directory in addition to `src/main/scala`. You can exclude source files by name (`butler.scala` in the example below) like:
excludeFilter in unmanagedSources := "butler.scala"
Read more on [How to exclude .scala source file in project folder - Google Groups](http://groups.google.com/group/simple-build-tool/browse_thread/thread/cd5332a164405568?hl=en)
## External v. internal
Classpaths are also divided into internal and external dependencies.
The internal dependencies are inter-project dependencies.
These effectively put the outputs of one project on the classpath of another project.
External classpaths are the union of the unmanaged and managed classpaths.
## Keys
For classpaths, the relevant keys are:
* `unmanaged-classpath`
* `managed-classpath`
* `external-dependency-classpath`
* `internal-dependency-classpath`
For sources:
* `unmanaged-sources` These are by default built up from `unmanaged-source-directories`, which consists of `scala-source` and `java-source`.
* `managed-sources` These are generated sources.
* `sources` Combines `managed-sources` and `unmanaged-sources`.
* `source-generators` These are tasks that generate source files. Typically, these tasks will put sources in the directory provided by `source-managed`.
For resources
* `unmanaged-resources` These are by default built up from `unmanaged-resource-directories`, which by default is `resource-directory`, excluding files matched by `default-excludes`.
* `managed-resources` By default, this is empty for standard projects. sbt plugins will have a generated descriptor file here.
* `resource-generators` These are tasks that generate resource files. Typically, these tasks will put resources in the directory provided by `resource-managed`.
Use the [[inspect command|Inspecting Settings]] for more details.
See also a related [StackOverflow answer](http://stackoverflow.com/a/7862872/850196).
## Example
You have a standalone project which uses a library that loads xxx.properties from classpath at run time. You put xxx.properties inside directory "config". When you run "sbt run", you want the directory to be in classpath.
```scala
unmanagedClasspath in Runtime <<= (unmanagedClasspath in Runtime, baseDirectory) map { (cp, bd) => cp :+ Attributed.blank(bd / "config") }
```
Or shorter:
```scala
unmanagedClasspath in Runtime <+= (baseDirectory) map { bd => Attributed.blank(bd / "config") }
```
@ -0,0 +1,165 @@
==================================
Classpaths, sources, and resources
==================================
This page discusses how sbt builds up classpaths for different actions,
like ``compile``, ``run``, and ``test`` and how to override or augment
these classpaths.
Basics
======
In sbt 0.10 and later, classpaths now include the Scala library and
(when declared as a dependency) the Scala compiler. Classpath-related
settings and tasks typically provide a value of type ``Classpath``. This
is an alias for ``Seq[Attributed[File]]``.
`Attributed <../../api/sbt/Attributed.html>`_
is a type that associates a heterogeneous map with each classpath entry.
Currently, this allows sbt to associate the ``Analysis`` resulting from
compilation with the corresponding classpath entry and for managed
entries, the ``ModuleID`` and ``Artifact`` that defined the dependency.
To explicitly extract the raw ``Seq[File]``, use the ``files`` method
implicitly added to ``Classpath``:
::
val cp: Classpath = ...
val raw: Seq[File] = cp.files
To create a ``Classpath`` from a ``Seq[File]``, use ``classpath`` and to
create an ``Attributed[File]`` from a ``File``, use
``Attributed.blank``:
::
val raw: Seq[File] = ...
val cp: Classpath = raw.classpath
val rawFile: File = ...
val af: Attributed[File] = Attributed.blank(rawFile)
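The attached metadata can be queried entry by entry. As a sketch, this
reads the ``ModuleID`` recorded for a managed classpath entry:

::

    val entry: Attributed[File] = ...
    // present only for managed entries, which record their ModuleID
    val mid: Option[ModuleID] = entry.get(Keys.moduleID.key)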
Unmanaged v. managed
--------------------
Classpaths, sources, and resources are separated into two main
categories: unmanaged and managed. Unmanaged files are manually created
files that are outside of the control of the build. They are the inputs
to the build. Managed files are under the control of the build. These
include generated sources and resources as well as resolved and
retrieved dependencies and compiled classes.
Tasks that produce managed files should be inserted as follows:
::
sourceGenerators in Compile <+= sourceManaged in Compile map { out =>
generate(out / "some_directory")
}
In this example, ``generate`` is some function of type
``File => Seq[File]`` that actually does the work. The ``<+=`` method is
like ``+=``, but allows the right hand side to have inputs (like the
difference between ``:=`` and ``<<=``). So, we are appending a new task
to the list of main source generators (``sourceGenerators in Compile``).
To insert a named task, which is the better approach for plugins:
::
sourceGenerators in Compile <+= (mySourceGenerator in Compile).task
mySourceGenerator in Compile <<= sourceManaged in Compile map { out =>
generate(out / "some_directory")
}
where ``mySourceGenerator`` is defined as:
::
val mySourceGenerator = TaskKey[Seq[File]](...)
The ``task`` method is used to refer to the actual task instead of the
result of the task.
For resources, there are similar keys ``resourceGenerators`` and
``resourceManaged``.
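A resource generator is registered the same way as a source generator;
in this sketch, ``generateConfig`` stands for any
``File => Seq[File]`` function:

::

    resourceGenerators in Compile <+= resourceManaged in Compile map { out =>
      generateConfig(out / "config")
    }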
Excluding source files by name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The project base directory is by default a source directory in addition
to ``src/main/scala``. You can exclude source files by name
(``butler.scala`` in the example below) like:
::
excludeFilter in unmanagedSources := "butler.scala"
Read more on `How to exclude .scala source file in project folder -
Google
Groups <http://groups.google.com/group/simple-build-tool/browse_thread/thread/cd5332a164405568?hl=en>`_
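Filters compose with ``||``, so several names or patterns can be
excluded at once (the patterns here are illustrative):

::

    excludeFilter in unmanagedSources :=
      HiddenFileFilter || "butler.scala" || "*-disabled.scala"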
External v. internal
--------------------
Classpaths are also divided into internal and external dependencies. The
internal dependencies are inter-project dependencies. These effectively
put the outputs of one project on the classpath of another project.
External classpaths are the union of the unmanaged and managed
classpaths.
Keys
----
For classpaths, the relevant keys are:
- ``unmanaged-classpath``
- ``managed-classpath``
- ``external-dependency-classpath``
- ``internal-dependency-classpath``
For sources:
- ``unmanaged-sources`` These are by default built up from
``unmanaged-source-directories``, which consists of ``scala-source``
and ``java-source``.
- ``managed-sources`` These are generated sources.
- ``sources`` Combines ``managed-sources`` and ``unmanaged-sources``.
- ``source-generators`` These are tasks that generate source files.
Typically, these tasks will put sources in the directory provided by
``source-managed``.
For resources:
- ``unmanaged-resources`` These are by default built up from
``unmanaged-resource-directories``, which by default is
``resource-directory``, excluding files matched by
``default-excludes``.
- ``managed-resources`` By default, this is empty for standard
projects. sbt plugins will have a generated descriptor file here.
- ``resource-generators`` These are tasks that generate resource files.
Typically, these tasks will put resources in the directory provided
by ``resource-managed``.
Use the :doc:`inspect command </Detailed-Topics/Inspecting-Settings>` for more details.
See also a related `StackOverflow
answer <http://stackoverflow.com/a/7862872/850196>`_.
Example
-------
You have a standalone project which uses a library that loads
xxx.properties from the classpath at run time. You put xxx.properties
inside the directory "config". When you run ``sbt run``, you want the
directory to be on the classpath.
::
unmanagedClasspath in Runtime <<= (unmanagedClasspath in Runtime, baseDirectory) map { (cp, bd) => cp :+ Attributed.blank(bd / "config") }
Or shorter:
::

    unmanagedClasspath in Runtime <+= (baseDirectory) map { bd => Attributed.blank(bd / "config") }
@ -1,199 +0,0 @@
# Command Line Reference
This page is a relatively complete list of command line options,
commands, and tasks you can use from the sbt interactive prompt or
in batch mode. See [[Running|Getting Started Running]] in the
Getting Started Guide for an intro to the basics, while this page
has a lot more detail.
## Notes on the command line
* There is a technical distinction in sbt between _tasks_, which
are "inside" the build definition, and _commands_, which
manipulate the build definition itself. If you're interested in
creating a command, see [[Commands]]. This specific sbt meaning of
"command" means there's no good general term for "thing you can
type at the sbt prompt", which may be a setting, task, or command.
* Some tasks produce useful values. The `toString` representation of these values can be shown using `show <task>` to run the task instead of just `<task>`.
* In a multi-project build, execution dependencies and the
`aggregate` setting control which tasks from which projects are
executed. See
[[multi-project builds|Getting Started Multi-Project]].
## Project-level tasks
* `clean`
Deletes all generated files (the `target` directory).
* `publish-local`
Publishes artifacts (such as jars) to the local Ivy repository as described in [[Publishing]].
* `publish`
Publishes artifacts (such as jars) to the repository defined by the `publish-to` setting, described in [[Publishing]].
* `update`
Resolves and retrieves external dependencies as described in
[[library dependencies|Getting Started Library Dependencies]].
## Configuration-level tasks
Configuration-level tasks are tasks associated with a configuration. For example, `compile`, which is equivalent to `compile:compile`, compiles the main source code (the `compile` configuration). `test:compile` compiles the test source code (the `test` configuration). Most tasks for the `compile` configuration have an equivalent in the `test` configuration that can be run using a `test:` prefix.
* `compile`
Compiles the main sources (in the `src/main/scala` directory). `test:compile` compiles test sources (in the `src/test/scala/` directory).
* `console`
Starts the Scala interpreter with a classpath including the compiled sources, all jars in the `lib` directory, and managed libraries. To return to sbt, type `:quit`, Ctrl+D (Unix), or Ctrl+Z (Windows). Similarly, `test:console` starts the interpreter with the test classes and classpath.
* `console-quick`
Starts the Scala interpreter with the project's compile-time dependencies on the classpath. `test:console-quick` uses the test dependencies. This task differs from `console` in that it does not force compilation of the current project's sources.
* `console-project`
Enters an interactive session with sbt and the build definition on the classpath. The build definition and related values are bound to variables and common packages and values are imported. See [[Console Project]] for more information.
* `doc`
Generates API documentation for Scala source files in `src/main/scala` using scaladoc. `test:doc` generates API documentation for source files in `src/test/scala`.
* `package`
Creates a jar file containing the files in `src/main/resources` and the classes compiled from `src/main/scala`.
`test:package` creates a jar containing the files in `src/test/resources` and the classes compiled from `src/test/scala`.
* `package-doc`
Creates a jar file containing API documentation generated from Scala source files in `src/main/scala`.
`test:package-doc` creates a jar containing API documentation for test source files in `src/test/scala`.
* `package-src`:
Creates a jar file containing all main source files and resources. The packaged paths are relative to `src/main/scala` and `src/main/resources`.
Similarly, `test:package-src` operates on test source files and resources.
* `run <argument>*`
Runs the main class for the project in the same virtual machine as `sbt`. The main class is passed the `argument`s provided. Please see [[Running Project Code]] for details on the use of `System.exit` and multithreading (including GUIs) in code run by this action.
`test:run` runs a main class in the test code.
* `run-main <main-class> <argument>*`
Runs the specified main class for the project in the same virtual machine as `sbt`. The main class is passed the `argument`s provided. Please see [[Running Project Code]] for details on the use of `System.exit` and multithreading (including GUIs) in code run by this action.
`test:run-main` runs the specified main class in the test code.
* `test`
Runs all tests detected during test compilation. See [[Testing]] for details.
* `test-only <test>*`
Runs the tests provided as arguments. `*` (will be) interpreted as a wildcard in the test name. See [[Testing]] for details.
* `test-quick <test>*`
Runs the tests specified as arguments (or all tests if no arguments are given) that:
1. have not been run yet OR
2. failed the last time they were run OR
3. had any transitive dependencies recompiled since the last successful run
`*` (will be) interpreted as a wildcard in the test name. See [[Testing]] for details.
## General commands
* `exit` or `quit`
End the current interactive session or build. Additionally, `Ctrl+D` (Unix) or `Ctrl+Z` (Windows) will exit the interactive prompt.
* `help <command>`
Displays detailed help for the specified command. If the command does not exist, `help` lists detailed help for commands whose name or description match the argument, which is interpreted as a regular expression. If no command is provided, displays brief descriptions of the main commands. Related commands are `tasks` and `settings`.
* `projects [add|remove <URI>]`
List all available projects if no arguments provided or adds/removes the build at the provided URI. (See [[Full Configuration]] for details on multi-project builds.)
* `project <project-id>`
Change the current project to the project with ID `<project-id>`. Further operations will be done in the context of the given project. (See [[Full Configuration]] for details on multiple project builds.)
* `~ <command>`
Executes the project specified action or method whenever source files change. See [[Triggered Execution]] for details.
* `< filename`
Executes the commands in the given file. Each command should be on its own line. Empty lines and lines beginning with '#' are ignored.
* `+ <command>`
Executes the project specified action or method for all versions of Scala defined in the `cross-scala-versions` setting.
* `++ <version> <command>`
Temporarily changes the version of Scala building the project and executes the provided command. `<command>` is optional. The specified version of Scala is used until the project is reloaded, settings are modified (such as by the `set` or `session` commands), or `++` is run again. `<version>` does not need to be listed in the build definition, but it must be available in a repository.
* `; A ; B`
Execute A and if it succeeds, run B. Note that the leading semicolon is required.
* `eval <Scala-expression>`
Evaluates the given Scala expression and returns the result and inferred type. This can be used to set system properties, as a calculator, to fork processes, etc ...
For example:
```scala
> eval System.setProperty("demo", "true")
> eval 1+1
> eval "ls -l" !
```
## Commands for managing the build definition
* `reload [plugins|return]`
If no argument is specified, reloads the build, recompiling any build or plugin definitions as necessary.
`reload plugins` changes the current project to the build definition project (in `project/`). This can be useful to directly manipulate the build definition. For example, running `clean` on the build definition project will force snapshots to be updated and the build definition to be recompiled.
`reload return` changes back to the main project.
* `set <setting-expression>`
Evaluates and applies the given setting definition. The setting
applies until sbt is restarted, the build is reloaded, or the
setting is overridden by another `set` command or removed by the
`session` command. See
[[.sbt build definition|Getting Started Basic Def]] and [[Inspecting Settings]] for details.
* `session <command>`
Manages session settings defined by the `set` command. It can persist settings configured at the prompt. See [[Inspecting Settings]] for details.
* `inspect <setting-key>`
Displays information about settings, such as the value, description, defining scope, dependencies, delegation chain, and related settings. See [[Inspecting Settings]] for details.
## Command Line Options
System properties can be provided either as JVM options, or as SBT arguments, in both cases as `-Dprop=value`. The following properties influence SBT execution. Also see [[Launcher]]
<table>
<thead>
<tr>
<td>_Property_</td>
<td>_Values_</td>
<td>_Default_</td>
<td>_Meaning_</td>
</tr>
</thead>
<tbody>
<tr>
<td>`sbt.log.noformat`</td>
<td>Boolean</td>
<td>false</td>
<td>If true, disable ANSI color codes. Useful on build servers or terminals that don't support color.</td>
</tr>
<tr>
<td>`sbt.global.base`</td>
<td>Directory</td>
<td>`~/.sbt`</td>
<td>The directory containing global settings and plugins</td>
</tr>
<tr>
<td>`sbt.ivy.home`</td>
<td>Directory</td>
<td>`~/.ivy2`</td>
<td>The directory containing the local Ivy repository and artifact cache</td>
</tr>
<tr>
<td>`sbt.boot.directory`</td>
<td>Directory</td>
<td>`~/.sbt/boot`</td>
<td>Path to shared boot directory</td>
</tr>
<tr>
<td>`sbt.main.class`</td>
<td>String</td>
<td></td>
<td></td>
</tr>
<tr>
<td>`xsbt.inc.debug`</td>
<td>Boolean</td>
<td>false</td>
<td></td>
</tr>
<tr>
<td>`sbt.version`</td>
<td>Version</td>
<td>0.11.3</td>
<td>sbt version to use, usually taken from project/build.properties</td>
</tr>
<tr>
<td>`sbt.boot.properties`</td>
<td>File</td>
<td></td>
<td></td>
</tr>
<tr>
<td>`sbt.override.build.repos`</td>
<td>Boolean</td>
<td>false</td>
<td>If true, repositories configured in a build definition are ignored and the repositories configured for the launcher are used instead. See `sbt.repository.config` and the [[Launcher]] documentation. </td>
</tr>
<tr>
<td>`sbt.repository.config`</td>
<td>File</td>
<td>~/.sbt/repositories</td>
<td>A file containing the repositories to use for the launcher. The format is the same as a `[repositories]` section for a [[Launcher]] configuration file. This setting is typically used in conjunction with setting `sbt.override.build.repos` to true (see previous row and the [[Launcher]] documentation). </td>
</tr>
</tbody>
</table>
@ -0,0 +1,504 @@
======================
Command Line Reference
======================
This page is a relatively complete list of command line options,
commands, and tasks you can use from the sbt interactive prompt or in
batch mode. See :doc:`Running </Getting-Started/Running>` in the Getting
Started Guide for an intro to the basics, while this page has a lot more
detail.
Notes on the command line
-------------------------
- There is a technical distinction in sbt between *tasks*, which are
"inside" the build definition, and *commands*, which manipulate the
build definition itself. If you're interested in creating a command,
see :doc:`/Extending/Commands`. This specific sbt meaning of "command" means
there's no good general term for "thing you can type at the sbt
prompt", which may be a setting, task, or command.
- Some tasks produce useful values. The ``toString`` representation of
these values can be shown using ``show <task>`` to run the task
instead of just ``<task>``.
- In a multi-project build, execution dependencies and the
``aggregate`` setting control which tasks from which projects are
executed. See :doc:`multi-project builds </Getting-Started/Multi-Project>`.
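For example, aggregation can be switched off for a single task; this
sketch stops ``update`` in the root project from also running in the
aggregated subprojects:

::

    aggregate in update := false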
Project-level tasks
-------------------
- ``clean`` Deletes all generated files (the ``target`` directory).
- ``publish-local`` Publishes artifacts (such as jars) to the local Ivy
repository as described in :doc:`Publishing`.
- ``publish`` Publishes artifacts (such as jars) to the repository
defined by the ``publish-to`` setting, described in :doc:`Publishing`.
- ``update`` Resolves and retrieves external dependencies as described
in :doc:`library dependencies </Getting-Started/Library-Dependencies>`.
Configuration-level tasks
-------------------------
Configuration-level tasks are tasks associated with a configuration. For
example, ``compile``, which is equivalent to ``compile:compile``,
compiles the main source code (the ``compile`` configuration).
``test:compile`` compiles the test source code (the ``test``
configuration). Most tasks for the ``compile`` configuration have an
equivalent in the ``test`` configuration that can be run using a
``test:`` prefix.
- ``compile`` Compiles the main sources (in the ``src/main/scala``
directory). ``test:compile`` compiles test sources (in the
``src/test/scala/`` directory).
- ``console`` Starts the Scala interpreter with a classpath including
the compiled sources, all jars in the ``lib`` directory, and managed
libraries. To return to sbt, type ``:quit``, Ctrl+D (Unix), or Ctrl+Z
(Windows). Similarly, ``test:console`` starts the interpreter with
the test classes and classpath.
- ``console-quick`` Starts the Scala interpreter with the project's
compile-time dependencies on the classpath. ``test:console-quick``
uses the test dependencies. This task differs from ``console`` in
that it does not force compilation of the current project's sources.
- ``console-project`` Enters an interactive session with sbt and the
build definition on the classpath. The build definition and related
values are bound to variables and common packages and values are
imported. See the :doc:`console-project documentation <Console-Project>` for more information.
- ``doc`` Generates API documentation for Scala source files in
``src/main/scala`` using scaladoc. ``test:doc`` generates API
documentation for source files in ``src/test/scala``.
- ``package`` Creates a jar file containing the files in
``src/main/resources`` and the classes compiled from
``src/main/scala``. ``test:package`` creates a jar containing the
files in ``src/test/resources`` and the classes compiled from
``src/test/scala``.
- ``package-doc`` Creates a jar file containing API documentation
generated from Scala source files in ``src/main/scala``.
``test:package-doc`` creates a jar containing API documentation for
test source files in ``src/test/scala``.
- ``package-src``: Creates a jar file containing all main source files
and resources. The packaged paths are relative to ``src/main/scala``
and ``src/main/resources``. Similarly, ``test:package-src`` operates
on test source files and resources.
- ``run <argument>*`` Runs the main class for the project in the same
virtual machine as ``sbt``. The main class is passed the
``argument``\ s provided. Please see :doc:`Running-Project-Code` for
details on the use of ``System.exit`` and multithreading (including
GUIs) in code run by this action. ``test:run`` runs a main class in
the test code.
- ``run-main <main-class> <argument>*`` Runs the specified main class
for the project in the same virtual machine as ``sbt``. The main
class is passed the ``argument``\ s provided. Please see :doc:`Running-Project-Code`
for details on the use of ``System.exit`` and
multithreading (including GUIs) in code run by this action.
``test:run-main`` runs the specified main class in the test code.
- ``test`` Runs all tests detected during test compilation. See
:doc:`Testing` for details.
- ``test-only <test>*`` Runs the tests provided as arguments. ``*``
(will be) interpreted as a wildcard in the test name. See :doc:`Testing`
for details.
- ``test-quick <test>*`` Runs the tests specified as arguments (or all
tests if no arguments are given) that:
1. have not been run yet OR
2. failed the last time they were run OR
  3. had any transitive dependencies recompiled since the last
     successful run

  ``*`` (will be) interpreted as a wildcard in the test name. See
  :doc:`Testing` for details.
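For example (the test names are illustrative):

::

    > test-only org.example.MySuite
    > test-only *MySuite
    > test-quick org.example.*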
General commands
----------------
- ``exit`` or ``quit`` End the current interactive session or build.
Additionally, ``Ctrl+D`` (Unix) or ``Ctrl+Z`` (Windows) will exit the
interactive prompt.
- ``help <command>`` Displays detailed help for the specified command.
If the command does not exist, ``help`` lists detailed help for
commands whose name or description match the argument, which is
interpreted as a regular expression. If no command is provided,
displays brief descriptions of the main commands. Related commands
are ``tasks`` and ``settings``.
- ``projects [add|remove <URI>]`` List all available projects if no
arguments provided or adds/removes the build at the provided URI.
(See :doc:`/Getting-Started/Full-Def` for details on multi-project builds.)
- ``project <project-id>`` Change the current project to the project
with ID ``<project-id>``. Further operations will be done in the
context of the given project. (See :doc:`/Getting-Started/Full-Def` for details
on multiple project builds.)
- ``~ <command>`` Executes the project specified action or method
whenever source files change. See :doc:`/Detailed-Topics/Triggered-Execution` for
details.
- ``< filename`` Executes the commands in the given file. Each command
should be on its own line. Empty lines and lines beginning with '#'
are ignored.
- ``+ <command>`` Executes the project specified action or method for
all versions of Scala defined in the ``cross-scala-versions``
setting.
- ``++ <version> <command>`` Temporarily changes the version of Scala
building the project and executes the provided command. ``<command>``
is optional. The specified version of Scala is used until the project
is reloaded, settings are modified (such as by the ``set`` or
``session`` commands), or ``++`` is run again. ``<version>`` does not
need to be listed in the build definition, but it must be available
in a repository.
- ``; A ; B`` Execute A and if it succeeds, run B. Note that the
leading semicolon is required.
- ``eval <Scala-expression>`` Evaluates the given Scala expression and
returns the result and inferred type. This can be used to set system
properties, as a calculator, to fork processes, etc ... For example:
::
> eval System.setProperty("demo", "true")
> eval 1+1
> eval "ls -l" !
Commands for managing the build definition
------------------------------------------
- ``reload [plugins|return]`` If no argument is specified, reloads the
build, recompiling any build or plugin definitions as necessary.
``reload plugins`` changes the current project to the build
definition project (in ``project/``). This can be useful to directly
manipulate the build definition. For example, running ``clean`` on
the build definition project will force snapshots to be updated and
the build definition to be recompiled. ``reload return`` changes back
to the main project.
- ``set <setting-expression>`` Evaluates and applies the given setting
definition. The setting applies until sbt is restarted, the build is
reloaded, or the setting is overridden by another ``set`` command or
removed by the ``session`` command. See
:doc:`.sbt build definition </Getting-Started/Basic-Def>` and
:doc:`Inspecting-Settings` for details.
- ``session <command>`` Manages session settings defined by the ``set``
command. It can persist settings configured at the prompt. See
:doc:`Inspecting-Settings` for details.
- ``inspect <setting-key>`` Displays information about settings, such
as the value, description, defining scope, dependencies, delegation
chain, and related settings. See :doc:`Inspecting-Settings` for details.
Command Line Options
--------------------
System properties can be provided either as JVM options or as sbt
arguments, in both cases as ``-Dprop=value``. The following properties
influence sbt execution. Also see :doc:`Launcher`.
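For example, assuming a launcher script that forwards ``-D`` options to
the JVM, properties can be set when starting sbt:

::

    $ sbt -Dsbt.log.noformat=true -Dsbt.global.base=/tmp/.sbt-global compile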
.. raw:: html
<table>
<thead>
<tr>
<td>
*Property*
.. raw:: html
</td>
<td>
*Values*
.. raw:: html
</td>
<td>
*Default*
.. raw:: html
</td>
<td>
*Meaning*
.. raw:: html
</td>
</tr>
</thead>
<tbody>
<tr>
<td>
``sbt.log.noformat``
.. raw:: html
</td>
<td>
Boolean
.. raw:: html
</td>
<td>
false
.. raw:: html
</td>
<td>
If true, disable ANSI color codes. Useful on build servers or terminals
that don't support color.
.. raw:: html
</td>
</tr>
<tr>
<td>
``sbt.global.base``
.. raw:: html
</td>
<td>
Directory
.. raw:: html
</td>
<td>
``~/.sbt``
.. raw:: html
</td>
<td>
The directory containing global settings and plugins
.. raw:: html
</td>
</tr>
<tr>
<td>
``sbt.ivy.home``
.. raw:: html
</td>
<td>
Directory
.. raw:: html
</td>
<td>
``~/.ivy2``
.. raw:: html
</td>
<td>
The directory containing the local Ivy repository and artifact cache
.. raw:: html
</td>
</tr>
<tr>
<td>
``sbt.boot.directory``
.. raw:: html
</td>
<td>
Directory
.. raw:: html
</td>
<td>
``~/.sbt/boot``
.. raw:: html
</td>
<td>
Path to shared boot directory
.. raw:: html
</td>
</tr>
<tr>
<td>
``sbt.main.class``
.. raw:: html
</td>
<td>
String
.. raw:: html
</td>
<td></td>
<td></td>
</tr>
<tr>
<td>
``xsbt.inc.debug``
.. raw:: html
</td>
<td>
Boolean
.. raw:: html
</td>
<td>
false
.. raw:: html
</td>
<td></td>
</tr>
<tr>
<td>
``sbt.version``
.. raw:: html
</td>
<td>
Version
.. raw:: html
</td>
<td>
0.11.3
.. raw:: html
</td>
<td>
sbt version to use, usually taken from project/build.properties
.. raw:: html
</td>
</tr>
<tr>
<td>
``sbt.boot.properties``
.. raw:: html
</td>
<td>
File
.. raw:: html
</td>
<td></td>
<td></td>
</tr>
<tr>
<td>
``sbt.override.build.repos``
.. raw:: html
</td>
<td>
Boolean
.. raw:: html
</td>
<td>
false
.. raw:: html
</td>
<td>
If true, repositories configured in a build definition are ignored and
the repositories configured for the launcher are used instead. See
``sbt.repository.config`` and the :doc:`Launcher` documentation.
.. raw:: html
</td>
</tr>
<tr>
<td>
``sbt.repository.config``
.. raw:: html
</td>
<td>
File
.. raw:: html
</td>
<td>
~/.sbt/repositories
.. raw:: html
</td>
<td>
A file containing the repositories to use for the launcher. The format
is the same as a ``[repositories]`` section for a :doc:`Launcher`
configuration file. This setting is typically used in conjunction with
setting ``sbt.override.build.repos`` to true (see previous row and the
:doc:`Launcher` documentation).
.. raw:: html
</td>
</tr>
.. raw:: html
</tbody>
</table>
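For example, to disable ANSI color codes when running on a build server, the property can be passed either to the JVM or as an sbt argument (illustrative invocations; the exact launcher script or jar name may differ on your system):

```text
$ sbt -Dsbt.log.noformat=true compile
$ java -Dsbt.log.noformat=true -jar sbt-launch.jar compile
```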

@ -1,54 +0,0 @@
# Compiler Plugin Support
There is some special support for using compiler plugins. You can set `auto-compiler-plugins` to `true` to enable this functionality.
```scala
autoCompilerPlugins := true
```
To use a compiler plugin, you either put it in your unmanaged library directory (`lib/` by default) or add it as managed dependency in the `plugin` configuration. `addCompilerPlugin` is a convenience method for specifying `plugin` as the configuration for a dependency:
```scala
addCompilerPlugin("org.scala-tools.sxr" %% "sxr" % "0.2.7")
```
The `compile` and `test-compile` actions will use any compiler plugins found in the `lib` directory or in the `plugin` configuration. You are responsible for configuring the plugins as necessary. For example, Scala X-Ray requires the extra option:
```scala
// declare the main Scala source directory as the base directory
scalacOptions <<= (scalacOptions, scalaSource in Compile) { (options, base) =>
options :+ ("-Psxr:base-directory:" + base.getAbsolutePath)
}
```
You can still specify compiler plugins manually. For example:
```scala
scalacOptions += "-Xplugin:<path-to-sxr>/sxr-0.2.7.jar"
```
# Continuations Plugin Example
Support for continuations in Scala 2.8 is implemented as a compiler plugin. You can use the compiler plugin support for this, as shown here.
```scala
autoCompilerPlugins := true
addCompilerPlugin("org.scala-lang.plugins" % "continuations" % "2.8.1")
scalacOptions += "-P:continuations:enable"
```
# Version-specific Compiler Plugin Example
Adding a version-specific compiler plugin can be done as follows:
```scala
autoCompilerPlugins := true
libraryDependencies <<= (scalaVersion, libraryDependencies) { (ver, deps) =>
deps :+ compilerPlugin("org.scala-lang.plugins" % "continuations" % ver)
}
scalacOptions += "-P:continuations:enable"
```

@ -0,0 +1,64 @@
=======================
Compiler Plugin Support
=======================
There is some special support for using compiler plugins. You can set
``auto-compiler-plugins`` to ``true`` to enable this functionality.
::
autoCompilerPlugins := true
To use a compiler plugin, you either put it in your unmanaged library
directory (``lib/`` by default) or add it as managed dependency in the
``plugin`` configuration. ``addCompilerPlugin`` is a convenience method
for specifying ``plugin`` as the configuration for a dependency:
::
addCompilerPlugin("org.scala-tools.sxr" %% "sxr" % "0.2.7")
The ``compile`` and ``test-compile`` actions will use any compiler
plugins found in the ``lib`` directory or in the ``plugin``
configuration. You are responsible for configuring the plugins as
necessary. For example, Scala X-Ray requires the extra option:
::
// declare the main Scala source directory as the base directory
scalacOptions <<= (scalacOptions, scalaSource in Compile) { (options, base) =>
options :+ ("-Psxr:base-directory:" + base.getAbsolutePath)
}
You can still specify compiler plugins manually. For example:
::
scalacOptions += "-Xplugin:<path-to-sxr>/sxr-0.2.7.jar"
Continuations Plugin Example
============================
Support for continuations in Scala 2.8 is implemented as a compiler
plugin. You can use the compiler plugin support for this, as shown here.
::
autoCompilerPlugins := true
addCompilerPlugin("org.scala-lang.plugins" % "continuations" % "2.8.1")
scalacOptions += "-P:continuations:enable"
Version-specific Compiler Plugin Example
========================================
Adding a version-specific compiler plugin can be done as follows:
::
autoCompilerPlugins := true
libraryDependencies <<= (scalaVersion, libraryDependencies) { (ver, deps) =>
deps :+ compilerPlugin("org.scala-lang.plugins" % "continuations" % ver)
}
scalacOptions += "-P:continuations:enable"

@ -1,89 +0,0 @@
# Console Project
# Description
The `console-project` task starts the Scala interpreter with access to your project definition and to `sbt`. Specifically, the interpreter is started up with these commands already executed:
```scala
import sbt._
import Process._
import Keys._
import <your-project-definition>._
import currentState._
import extracted._
```
For example, running external processes with sbt's process library (to be included in the standard library in Scala 2.9):
```scala
> "tar -zcvf project-src.tar.gz src" !
> "find project -name *.jar" !
> "cat build.sbt" #| "grep version" #> new File("sbt-version") !
> "grep -r null src" #|| "echo null-free" !
> uri("http://databinder.net/dispatch/About").toURL #> file("About.html") !
```
`console-project` can be useful for creating and modifying your build in the same way that the Scala interpreter is normally used to explore writing code. Note that this gives you raw access to your build. Think about what you pass to `IO.delete`, for example.
This task was especially useful in prior versions of sbt for showing the value of settings. It is less useful for this now that `show <setting>` prints the result of a setting or task and `set` can define an anonymous task at the command line.
# Accessing settings
To get a particular setting, use the form:
```scala
> val value = get(<key> in <scope>)
```
## Examples
```scala
> IO.delete( get(classesDirectory in Compile) )
```
Show current compile options:
```scala
> get(scalacOptions in Compile) foreach println
```
Show additionally configured repositories.
```scala
> get( resolvers ) foreach println
```
# Evaluating tasks
To evaluate a task, use the form:
```scala
> val value = evalTask(<key> in <scope>, currentState)
```
## Examples
Show all repositories, including defaults.
```scala
> evalTask( fullResolvers, currentState ) foreach println
```
Show the classpaths used for compilation and testing:
```scala
> evalTask( fullClasspath in Compile, currentState ).files foreach println
> evalTask( fullClasspath in Test, currentState ).files foreach println
```
Show the remaining commands to be executed in the build (more interesting if you invoke `console-project` like `; console-project ; clean ; compile`):
```scala
> remainingCommands
```
Show the number of currently registered commands:
```scala
> definedCommands.size
```

@ -0,0 +1,105 @@
===============
Console Project
===============
Description
===========
The ``console-project`` task starts the Scala interpreter with access to
your project definition and to ``sbt``. Specifically, the interpreter is
started up with these commands already executed:
::
import sbt._
import Process._
import Keys._
import <your-project-definition>._
import currentState._
import extracted._
For example, running external processes with sbt's process library (to
be included in the standard library in Scala 2.9):
::
> "tar -zcvf project-src.tar.gz src" !
> "find project -name *.jar" !
> "cat build.sbt" #| "grep version" #> new File("sbt-version") !
> "grep -r null src" #|| "echo null-free" !
> uri("http://databinder.net/dispatch/About").toURL #> file("About.html") !
``console-project`` can be useful for creating and modifying your build
in the same way that the Scala interpreter is normally used to explore
writing code. Note that this gives you raw access to your build. Think
about what you pass to ``IO.delete``, for example.
This task was especially useful in prior versions of sbt for showing the
value of settings. It is less useful for this now that
``show <setting>`` prints the result of a setting or task and ``set``
can define an anonymous task at the command line.
Accessing settings
==================
To get a particular setting, use the form:
::
> val value = get(<key> in <scope>)
Examples
--------
::
> IO.delete( get(classesDirectory in Compile) )
Show current compile options:
::
> get(scalacOptions in Compile) foreach println
Show additionally configured repositories.
::
> get( resolvers ) foreach println
Evaluating tasks
================
To evaluate a task, use the form:
::
> val value = evalTask(<key> in <scope>, currentState)
Examples
--------
Show all repositories, including defaults.
::
> evalTask( fullResolvers, currentState ) foreach println
Show the classpaths used for compilation and testing:
::
> evalTask( fullClasspath in Compile, currentState ).files foreach println
> evalTask( fullClasspath in Test, currentState ).files foreach println
Show the remaining commands to be executed in the build (more
interesting if you invoke ``console-project`` like
``; console-project ; clean ; compile``):
::
> remainingCommands
Show the number of currently registered commands:
::
> definedCommands.size

@ -1,103 +0,0 @@
# Cross-building
# Introduction
Different versions of Scala can be binary incompatible, despite maintaining source compatibility. This page describes how to use `sbt` to build and publish your project against multiple versions of Scala and how to use libraries that have done the same.
# Publishing Conventions
The underlying mechanism used to indicate which version of Scala a library was compiled against is to append `_<scala-version>` to the library's name. For Scala 2.10.0 and later, the binary version is used. For example, `dispatch` becomes `dispatch_2.8.1` for the variant compiled against Scala 2.8.1 and `dispatch_2.10` when compiled against 2.10.0, 2.10.0-M1 or any 2.10.x version. This fairly simple approach allows interoperability with users of Maven, Ant and other build tools.
The rest of this page describes how `sbt` handles this for you as part of cross-building.
# Using Cross-Built Libraries
To use a library built against multiple versions of Scala, double the first `%` in an inline dependency to be `%%`. This tells `sbt` that it should append the current version of Scala being used to build the library to the dependency's name. For example:
```scala
libraryDependencies += "net.databinder" %% "dispatch" % "0.8.0"
```
A nearly equivalent, manual alternative for a fixed version of Scala is:
```scala
libraryDependencies += "net.databinder" % "dispatch_2.10" % "0.8.0"
```
or for Scala versions before 2.10:
```scala
libraryDependencies += "net.databinder" % "dispatch_2.8.1" % "0.8.0"
```
# Cross-Building a Project
Define the versions of Scala to build against in the `cross-scala-versions` setting. Versions of Scala 2.8.0 or later are allowed. For example, in a `.sbt` build definition:
```scala
crossScalaVersions := Seq("2.8.2", "2.9.2", "2.10.0")
```
To build against all versions listed in `cross-scala-versions`, prefix the action to run with `+`. For example:
```text
> + package
```
A typical way to use this feature is to do development on a single Scala version (no `+` prefix) and then cross-build (using `+`) occasionally and when releasing. The ultimate purpose of `+` is to cross-publish your project. That is, by doing:
```text
> + publish
```
you make your project available to users for different versions of Scala. See [[Publishing]] for more details on publishing your project.
In order to make this process as quick as possible, different output and managed dependency directories are used for different versions of Scala. For example, when building against Scala 2.10.0,
* `./target/` becomes `./target/scala_2.10/`
* `./lib_managed/` becomes `./lib_managed/scala_2.10/`
Packaged jars, wars, and other artifacts have `_<scala-version>` appended to the normal artifact ID as mentioned in the Publishing Conventions section above.
This means that the outputs of each build against each version of Scala are independent of the others. `sbt` will resolve your dependencies for each version separately. This way, for example, you get the version of Dispatch compiled against 2.8.1 for your 2.8.1 build, the version compiled against 2.10 for your 2.10.x builds, and so on. You can have fine-grained control over the behavior for different Scala versions by using the `cross` method on `ModuleID`. These are equivalent:
```scala
"a" % "b" % "1.0"
"a" % "b" % "1.0" cross CrossVersion.Disabled
```
These are equivalent:
```scala
"a" %% "b" % "1.0"
"a" % "b" % "1.0" cross CrossVersion.binary
```
This overrides the defaults to always use the full Scala version instead of the binary Scala version:
```scala
"a" % "b" % "1.0" cross CrossVersion.full
```
This uses a custom function to determine the Scala version to use based on the binary Scala version:
```scala
"a" % "b" % "1.0" cross CrossVersion.binaryMapped {
case "2.9.1" => "2.9.0" // remember that pre-2.10, binary=full
case "2.10" => "2.10.0" // useful if a%b was released with the old style
case x => x
}
```
This uses a custom function to determine the Scala version to use based on the full Scala version:
```scala
"a" % "b" % "1.0" cross CrossVersion.fullMapped {
case "2.9.1" => "2.9.0"
case x => x
}
```
A custom function is mainly used when cross-building and a dependency isn't available for all Scala versions or it uses a different convention than the default.
As a final note, you can use `++ <version>` to temporarily switch the Scala version currently being used to build (see [[Running|Getting Started Running]] for details).

@ -0,0 +1,146 @@
==============
Cross-building
==============
Introduction
============
Different versions of Scala can be binary incompatible, despite
maintaining source compatibility. This page describes how to use ``sbt``
to build and publish your project against multiple versions of Scala and
how to use libraries that have done the same.
Publishing Conventions
======================
The underlying mechanism used to indicate which version of Scala a
library was compiled against is to append ``_<scala-version>`` to the
library's name. For Scala 2.10.0 and later, the binary version is used.
For example, ``dispatch`` becomes ``dispatch_2.8.1`` for the variant
compiled against Scala 2.8.1 and ``dispatch_2.10`` when compiled against
2.10.0, 2.10.0-M1 or any 2.10.x version. This fairly simple approach
allows interoperability with users of Maven, Ant and other build tools.
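The convention can be sketched in plain Scala (a simplified illustration only, not sbt's actual implementation):

```scala
// Sketch of the cross-version naming convention described above:
// for Scala 2.10 and later the binary version ("2.10") is appended,
// for earlier versions the full version is appended.
def binarySuffix(scalaVersion: String): String = {
  val parts = scalaVersion.split('.')
  val (major, minor) = (parts(0).toInt, parts(1).toInt)
  if (major > 2 || (major == 2 && minor >= 10)) major + "." + minor
  else scalaVersion
}

def crossName(artifact: String, scalaVersion: String): String =
  artifact + "_" + binarySuffix(scalaVersion)

println(crossName("dispatch", "2.10.0")) // dispatch_2.10
println(crossName("dispatch", "2.8.1"))  // dispatch_2.8.1
```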
The rest of this page describes how ``sbt`` handles this for you as part
of cross-building.
Using Cross-Built Libraries
===========================
To use a library built against multiple versions of Scala, double the
first ``%`` in an inline dependency to be ``%%``. This tells ``sbt``
that it should append the current version of Scala being used to build
the library to the dependency's name. For example:
::
libraryDependencies += "net.databinder" %% "dispatch" % "0.8.0"
A nearly equivalent, manual alternative for a fixed version of Scala is:
::
libraryDependencies += "net.databinder" % "dispatch_2.10" % "0.8.0"
or for Scala versions before 2.10:
::
libraryDependencies += "net.databinder" % "dispatch_2.8.1" % "0.8.0"
Cross-Building a Project
========================
Define the versions of Scala to build against in the
``cross-scala-versions`` setting. Versions of Scala 2.8.0 or later are
allowed. For example, in a ``.sbt`` build definition:
::
crossScalaVersions := Seq("2.8.2", "2.9.2", "2.10.0")
To build against all versions listed in ``cross-scala-versions``, prefix
the action to run with ``+``. For example:
::
> + package
A typical way to use this feature is to do development on a single Scala
version (no ``+`` prefix) and then cross-build (using ``+``)
occasionally and when releasing. The ultimate purpose of ``+`` is to
cross-publish your project. That is, by doing:
::
> + publish
you make your project available to users for different versions of
Scala. See :doc:`Publishing` for more details on publishing your project.
In order to make this process as quick as possible, different output and
managed dependency directories are used for different versions of Scala.
For example, when building against Scala 2.10.0,
- ``./target/`` becomes ``./target/scala_2.10/``
- ``./lib_managed/`` becomes ``./lib_managed/scala_2.10/``
Packaged jars, wars, and other artifacts have ``_<scala-version>``
appended to the normal artifact ID as mentioned in the Publishing
Conventions section above.
This means that the outputs of each build against each version of Scala
are independent of the others. ``sbt`` will resolve your dependencies
for each version separately. This way, for example, you get the version
of Dispatch compiled against 2.8.1 for your 2.8.1 build, the version
compiled against 2.10 for your 2.10.x builds, and so on. You can have
fine-grained control over the behavior for different Scala versions
by using the ``cross`` method on ``ModuleID``. These are equivalent:
::
"a" % "b" % "1.0"
"a" % "b" % "1.0" cross CrossVersion.Disabled
These are equivalent:
::
"a" %% "b" % "1.0"
"a" % "b" % "1.0" cross CrossVersion.binary
This overrides the defaults to always use the full Scala version instead
of the binary Scala version:
::
"a" % "b" % "1.0" cross CrossVersion.full
This uses a custom function to determine the Scala version to use based
on the binary Scala version:
::
"a" % "b" % "1.0" cross CrossVersion.binaryMapped {
case "2.9.1" => "2.9.0" // remember that pre-2.10, binary=full
case "2.10" => "2.10.0" // useful if a%b was released with the old style
case x => x
}
This uses a custom function to determine the Scala version to use based
on the full Scala version:
::
"a" % "b" % "1.0" cross CrossVersion.fullMapped {
case "2.9.1" => "2.9.0"
case x => x
}
A custom function is mainly used when cross-building and a dependency
isn't available for all Scala versions or it uses a different convention
than the default.
As a final note, you can use ``++ <version>`` to temporarily switch the
Scala version currently being used to build (see
:doc:`Running </Getting-Started/Running>` for details).
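For example (an illustrative session), to compile once against 2.9.2 and then run the tests against 2.10.0:

```text
> ++ 2.9.2
> compile
> ++ 2.10.0
> test
```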

@ -1,13 +0,0 @@
# Detailed Topic Pages
This part of the wiki has pages documenting particular sbt topics.
Before reading anything in here, you will need the information in
the [[Getting Started Guide|Getting Started Welcome]] as a
foundation.
Other resources include the [[Examples]] and
[[extending sbt|Extending]] areas on the wiki, and the
[[API Documentation|http://harrah.github.com/xsbt/latest/api/index.html]].
See the sidebar on the right for an index of topics.

@ -0,0 +1,14 @@
====================
Detailed Topic Pages
====================
This part of the wiki has pages documenting particular sbt topics.
Before reading anything in here, you will need the information in the
:doc:`Getting Started Guide </Getting-Started/Welcome>` as a foundation.
Other resources include the :doc:`Examples </Examples/Examples>` and
:doc:`extending sbt </Extending/Extending>` areas on the wiki, and the
`API Documentation <../../api/index.html>`_
See the sidebar on the right for an index of topics.

@ -1,129 +0,0 @@
[Fork API]: http://harrah.github.com/xsbt/latest/api/sbt/Fork$.html
[ForkJava]: http://harrah.github.com/xsbt/latest/api/sbt/Fork$.ForkJava.html
[ForkScala]: http://harrah.github.com/xsbt/latest/api/sbt/Fork$.ForkScala.html
[OutputStrategy]: http://harrah.github.com/xsbt/latest/api/sbt/OutputStrategy.html
# Forking
By default, the `run` task runs in the same JVM as sbt. Forking is required under [[certain circumstances|Running Project Code]], however. Or, you might want to fork Java processes when implementing new tasks.
By default, a forked process uses the same Java and Scala versions being used for the build and the working directory and JVM options of the current process. This page discusses how to enable and configure forking for both `run` and `test` tasks. Each kind of task may be configured separately by scoping the relevant keys as explained below.
# Enable forking
The `fork` setting controls whether forking is enabled (true) or not (false). It can be set in the `run` scope to only fork `run` commands or in the `test` scope to only fork `test` commands.
To fork all test tasks (`test`, `test-only`, and `test-quick`) and run tasks (`run`, `run-main`, `test:run`, and `test:run-main`),
```scala
fork := true
```
To enable forking `run` tasks only, set `fork` to `true` in the `run` scope.
```scala
fork in run := true
```
To only fork `test:run` and `test:run-main`:
```scala
fork in (Test,run) := true
```
Similarly, set `fork in (Compile,run) := true` to only fork the main `run` tasks. `run` and `run-main` share the same configuration and cannot be configured separately.
To enable forking all `test` tasks only, set `fork` to `true` in the `test` scope:
```scala
fork in test := true
```
See [[Testing]] for more control over how tests are assigned to JVMs and what options to pass to each group.
# Change working directory
To change the working directory when forked, set `baseDirectory in run` or `baseDirectory in test`:
```scala
// sets the working directory for all `run`-like tasks
baseDirectory in run := file("/path/to/working/directory/")
// sets the working directory for `run` and `run-main` only
baseDirectory in (Compile,run) := file("/path/to/working/directory/")
// sets the working directory for `test:run` and `test:run-main` only
baseDirectory in (Test,run) := file("/path/to/working/directory/")
// sets the working directory for `test`, `test-quick`, and `test-only`
baseDirectory in test := file("/path/to/working/directory/")
```
# Forked JVM options
To specify options to be provided to the forked JVM, set `javaOptions`:
```scala
javaOptions in run += "-Xmx8G"
```
or specify the configuration to affect only the main or test `run` tasks:
```scala
javaOptions in (Test,run) += "-Xmx8G"
```
or only affect the `test` tasks:
```scala
javaOptions in test += "-Xmx8G"
```
# Java Home
Select the Java installation to use by setting the `java-home` directory:
```scala
javaHome := file("/path/to/jre/")
```
Note that if this is set globally, it also sets the Java installation used to compile Java sources. You can restrict it to running only by setting it in the `run` scope:
```scala
javaHome in run := file("/path/to/jre/")
```
As with the other settings, you can specify the configuration to affect only the main or test `run` tasks or just the `test` tasks.
# Configuring output
By default, forked output is sent to the Logger, with standard output logged at the `Info` level and standard error at the `Error` level.
This can be configured with the `output-strategy` setting, which is of type [OutputStrategy].
```scala
// send output to the build's standard output and error
outputStrategy := Some(StdoutOutput)
// send output to the provided OutputStream `someStream`
outputStrategy := Some(CustomOutput(someStream: OutputStream))
// send output to the provided Logger `log` (unbuffered)
outputStrategy := Some(LoggedOutput(log: Logger))
// send output to the provided Logger `log` after the process terminates
outputStrategy := Some(BufferedOutput(log: Logger))
```
As with other settings, this can be configured individually for main or test `run` tasks or for `test` tasks.
# Configuring Input
By default, the standard input of the sbt process is not forwarded to the forked process. To enable this, configure the `connectInput` setting:
```scala
connectInput in run := true
```
# Direct Usage
To fork a new Java process, use the [Fork API]. The methods of interest are `Fork.java`, `Fork.javac`, `Fork.scala`, and `Fork.scalac`. See the [ForkJava] and [ForkScala] classes for the arguments and types.

@ -0,0 +1,167 @@
=======
Forking
=======
By default, the ``run`` task runs in the same JVM as sbt. Forking is
required under :doc:`certain circumstances <Running-Project-Code>`, however.
Or, you might want to fork Java processes when implementing new tasks.
By default, a forked process uses the same Java and Scala versions being
used for the build and the working directory and JVM options of the
current process. This page discusses how to enable and configure forking
for both ``run`` and ``test`` tasks. Each kind of task may be configured
separately by scoping the relevant keys as explained below.
Enable forking
==============
The ``fork`` setting controls whether forking is enabled (true) or not
(false). It can be set in the ``run`` scope to only fork ``run``
commands or in the ``test`` scope to only fork ``test`` commands.
To fork all test tasks (``test``, ``test-only``, and ``test-quick``) and
run tasks (``run``, ``run-main``, ``test:run``, and ``test:run-main``),
::
fork := true
To enable forking ``run`` tasks only, set ``fork`` to ``true`` in the
``run`` scope.
::
fork in run := true
To only fork ``test:run`` and ``test:run-main``:
::
fork in (Test,run) := true
Similarly, set ``fork in (Compile,run) := true`` to only fork the main
``run`` tasks. ``run`` and ``run-main`` share the same configuration and
cannot be configured separately.
To enable forking all ``test`` tasks only, set ``fork`` to ``true`` in
the ``test`` scope:
::
fork in test := true
See :doc:`Testing` for more control over how tests are assigned to JVMs and
what options to pass to each group.
Change working directory
========================
To change the working directory when forked, set
``baseDirectory in run`` or ``baseDirectory in test``:
::
// sets the working directory for all `run`-like tasks
baseDirectory in run := file("/path/to/working/directory/")
// sets the working directory for `run` and `run-main` only
baseDirectory in (Compile,run) := file("/path/to/working/directory/")
// sets the working directory for `test:run` and `test:run-main` only
baseDirectory in (Test,run) := file("/path/to/working/directory/")
// sets the working directory for `test`, `test-quick`, and `test-only`
baseDirectory in test := file("/path/to/working/directory/")
Forked JVM options
==================
To specify options to be provided to the forked JVM, set
``javaOptions``:
::
javaOptions in run += "-Xmx8G"
or specify the configuration to affect only the main or test ``run``
tasks:
::
javaOptions in (Test,run) += "-Xmx8G"
or only affect the ``test`` tasks:
::
javaOptions in test += "-Xmx8G"
Java Home
=========
Select the Java installation to use by setting the ``java-home``
directory:
::
javaHome := file("/path/to/jre/")
Note that if this is set globally, it also sets the Java installation
used to compile Java sources. You can restrict it to running only by
setting it in the ``run`` scope:
::
javaHome in run := file("/path/to/jre/")
As with the other settings, you can specify the configuration to affect
only the main or test ``run`` tasks or just the ``test`` tasks.
Configuring output
==================
By default, forked output is sent to the Logger, with standard output
logged at the ``Info`` level and standard error at the ``Error`` level.
This can be configured with the ``output-strategy`` setting, which is of
type
`OutputStrategy <../../api/sbt/OutputStrategy.html>`_.
::
// send output to the build's standard output and error
outputStrategy := Some(StdoutOutput)
// send output to the provided OutputStream `someStream`
outputStrategy := Some(CustomOutput(someStream: OutputStream))
// send output to the provided Logger `log` (unbuffered)
outputStrategy := Some(LoggedOutput(log: Logger))
// send output to the provided Logger `log` after the process terminates
outputStrategy := Some(BufferedOutput(log: Logger))
As with other settings, this can be configured individually for main or
test ``run`` tasks or for ``test`` tasks.
Configuring Input
=================
By default, the standard input of the sbt process is not forwarded to
the forked process. To enable this, configure the ``connectInput``
setting:
::
connectInput in run := true
Direct Usage
============
To fork a new Java process, use the `Fork
API <../../api/sbt/Fork$.html>`_. The
methods of interest are ``Fork.java``, ``Fork.javac``, ``Fork.scala``,
and ``Fork.scalac``. See the
`ForkJava <../../api/sbt/Fork$.ForkJava.html>`_
and
`ForkScala <../../api/sbt/Fork$.ForkScala.html>`_
classes for the arguments and types.

@ -1,41 +0,0 @@
# Global Settings
## Basic global configuration file
Settings that should be applied to all projects can go in `~/.sbt/global.sbt` (or any file in `~/.sbt/` with a `.sbt` extension). Plugins that are defined globally in `~/.sbt/plugins` are available to these settings. For example, to change the default `shellPrompt` for your projects:
`~/.sbt/global.sbt`
```scala
shellPrompt := { state =>
"sbt (%s)> ".format(Project.extract(state).currentProject.id)
}
```
## Global Settings using a Global Plugin
The `~/.sbt/plugins` directory is a global plugin project. This can be used to provide global commands, plugins, or other code.
To add a plugin globally, create `~/.sbt/plugins/build.sbt` containing the dependency definitions. For example:
```
addSbtPlugin("org.example" % "plugin" % "1.0")
```
To change the default `shellPrompt` for every project using this approach, create a local plugin `~/.sbt/plugins/ShellPrompt.scala`:
```scala
import sbt._
import Keys._
object ShellPrompt extends Plugin {
override def settings = Seq(
shellPrompt := { state =>
"sbt (%s)> ".format(Project.extract(state).currentProject.id) }
)
}
```
The `~/.sbt/plugins` directory is a full project that is included as an external dependency of every plugin project.
In practice, settings and code defined here effectively work as if they were defined in a project's `project/` directory.
This means that `~/.sbt/plugins` can be used to try out ideas for plugins such as shown in the shellPrompt example.

@ -0,0 +1,55 @@
===============
Global Settings
===============
Basic global configuration file
-------------------------------
Settings that should be applied to all projects can go in
``~/.sbt/global.sbt`` (or any file in ``~/.sbt/`` with a ``.sbt``
extension). Plugins that are defined globally in ``~/.sbt/plugins`` are
available to these settings. For example, to change the default
``shellPrompt`` for your projects:
``~/.sbt/global.sbt``
::
shellPrompt := { state =>
"sbt (%s)> ".format(Project.extract(state).currentProject.id)
}
Global Settings using a Global Plugin
-------------------------------------
The ``~/.sbt/plugins`` directory is a global plugin project. This can be
used to provide global commands, plugins, or other code.
To add a plugin globally, create ``~/.sbt/plugins/build.sbt`` containing
the dependency definitions. For example:
::
addSbtPlugin("org.example" % "plugin" % "1.0")
To change the default ``shellPrompt`` for every project using this
approach, create a local plugin ``~/.sbt/plugins/ShellPrompt.scala``:
::
import sbt._
import Keys._
object ShellPrompt extends Plugin {
override def settings = Seq(
shellPrompt := { state =>
"sbt (%s)> ".format(Project.extract(state).currentProject.id) }
)
}
The ``~/.sbt/plugins`` directory is a full project that is included as
an external dependency of every plugin project. In practice, settings
and code defined here effectively work as if they were defined in a
project's ``project/`` directory. This means that ``~/.sbt/plugins`` can
be used to try out ideas for plugins, such as the ``shellPrompt``
example shown above.


@ -1,261 +0,0 @@
# Using the Configuration System
Central to sbt is the new configuration system, which is designed to enable extensive customization.
The goal of this page is to explain the general model behind the configuration system and how to work with it.
The Getting Started Guide (see [[.sbt files|Getting Started Basic Def]]) describes how to define settings; this page describes interacting with them and exploring them at the command line.
# Selecting commands, tasks, and settings
A fully-qualified reference to a setting or task looks like:
```text
{<build-uri>}<project-id>/config:inkey::key
```
This "scoped key" reference is used by commands like `last` and `inspect` and when selecting a task to run.
Only `key` is usually required by the parser; the remaining optional pieces select the scope.
These optional pieces are individually referred to as scope axes.
In the above description, `{<build-uri>}` and `<project-id>/` specify the project axis, `config:` is the configuration axis, and `inkey` is the task-specific axis.
Unspecified components are taken to be the current project (project axis) or auto-detected (configuration and task axes).
An asterisk (`*`) is used to explicitly refer to the `Global` context, as in `*/*:key`.
## Selecting the configuration
In the case of an unspecified configuration (that is, when the `config:` part is omitted), if the key is defined in `Global`, that is selected.
Otherwise, the first configuration defining the key is selected, where order is determined by the project definition's `configurations` member.
By default, this ordering is `compile, test, ...`
For example, the following are equivalent when run in a project `root` in the build in `/home/user/sample/`:
```text
> compile
> compile:compile
> root/compile
> root/compile:compile
> {file:/home/user/sample/}root/compile:compile
```
As another example, `run` by itself refers to `compile:run` because there is no global `run` task and the first configuration searched, `compile`, defines a `run`.
Therefore, to reference the `run` task for the `test` configuration, the configuration axis must be specified like `test:run`.
Some other examples that require the explicit `test:` axis:
```text
> test:console-quick
> test:console
> test:doc
> test:package
```
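The selection rule described above can be sketched as a search over `Global` followed by the project's configurations in order. This is an illustrative sketch only, not sbt's implementation:

```scala
// Illustrative sketch: Global ("*") is checked first, then each
// configuration in the order given by the project definition's
// `configurations` member; the first scope defining the key wins.
def selectConfiguration(key: String,
                        defined: Set[(String, String)], // (config, key) pairs
                        configurations: List[String]): Option[String] =
  ("*" :: configurations).find(c => defined((c, key)))
```

Under this sketch, looking up `run` with `configurations = List("compile", "test")` selects `compile`, matching the `compile:run` behavior described below.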
## Task-specific Settings
Some settings are defined per-task.
This is used when there are several related tasks, such as `package`, `package-src`, and `package-doc`, in the same configuration (such as `compile` or `test`).
For package tasks, their settings are the files to package, the options to use, and the output file to produce.
Each package task should be able to have different values for these settings.
This is done with the task axis, which selects the task to apply a setting to.
For example, the following prints the output jar for the different package tasks.
```text
> package::artifact-path
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1.jar
> package-src::artifact-path
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1-src.jar
> package-doc::artifact-path
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1-doc.jar
> test:package::artifact-path
[info] /home/user/sample/target/scala-2.8.1.final/root_2.8.1-0.1-test.jar
```
Note that a single colon `:` follows a configuration axis and a double colon `::` follows a task axis.
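The axis syntax above can be sketched as a small parser. This is illustrative only, not sbt's actual parser: it ignores the `{<build-uri>}` form and assumes a well-formed reference.

```scala
// Illustrative sketch, not sbt's parser: splits a reference of the form
// [project/][config:][inkey::]key into its axes.
case class ScopedKeyRef(project: Option[String], config: Option[String],
                        task: Option[String], key: String)

def parseRef(s: String): ScopedKeyRef = {
  val slash = s.indexOf('/')
  val (project, r) =
    if (slash < 0) (None, s) else (Some(s.take(slash)), s.drop(slash + 1))
  r.indexOf("::") match {
    case -1 => // no task axis: the configuration uses a single colon
      val colon = r.indexOf(':')
      if (colon < 0) ScopedKeyRef(project, None, None, r)
      else ScopedKeyRef(project, Some(r.take(colon)), None, r.drop(colon + 1))
    case i => // task axis present: split on the double colon
      val (left, key) = (r.take(i), r.drop(i + 2))
      val colon = left.indexOf(':')
      if (colon < 0) ScopedKeyRef(project, None, Some(left), key)
      else ScopedKeyRef(project, Some(left.take(colon)),
                        Some(left.drop(colon + 1)), key)
  }
}
```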
# Discovering Settings and Tasks
This section discusses the `inspect` command, which is useful for exploring relationships between settings.
It can be used to determine which setting should be modified in order to affect another setting, for example.
## Value and Provided By
The first piece of information provided by `inspect` is the type of a task or the value and type of a setting.
The following section of output is labeled "Provided by".
This shows the actual scope where the setting is defined.
For example,
```text
> inspect library-dependencies
[info] Setting: scala.collection.Seq[sbt.ModuleID] = List(org.scalaz:scalaz-core:6.0-SNAPSHOT, org.scala-tools.testing:scalacheck:1.8:test)
[info] Provided by:
[info] {file:/home/user/sample/}root/*:library-dependencies
...
```
This shows that `library-dependencies` has been defined on the current project (`{file:/home/user/sample/}root`) in the global configuration (`*:`).
For a task like `update`, the output looks like:
```text
> inspect update
[info] Task: sbt.UpdateReport
[info] Provided by:
[info] {file:/home/user/sample/}root/*:update
...
```
## Related Settings
The "Related" section of `inspect` output lists all of the definitions of a key.
For example,
```text
> inspect compile
...
[info] Related:
[info] test:compile
```
This shows that in addition to the requested `compile:compile` task, there is also a `test:compile` task.
## Dependencies
Forward dependencies show the other settings (or tasks) used to define a setting (or task).
Reverse dependencies go the other direction, showing what uses a given setting.
`inspect` provides this information based on either the requested dependencies or the actual dependencies.
Requested dependencies are those that a setting directly specifies.
Actual dependencies are what those requested dependencies resolve to.
This distinction is explained in more detail in the following sections.
### Requested Dependencies
As an example, we'll look at `console`:
```text
> inspect console
...
[info] Dependencies:
[info] compile:console::full-classpath
[info] compile:console::scalac-options
[info] compile:console::initial-commands
[info] compile:console::cleanup-commands
[info] compile:console::compilers
[info] compile:console::task-temporary-directory
[info] compile:console::scala-instance
[info] compile:console::streams
...
```
This shows the inputs to the `console` task.
We can see that it gets its classpath and options from `full-classpath` and `scalac-options(for console)`.
The information provided by the `inspect` command can thus assist in finding the right setting to change.
The convention for keys, like `console` and `full-classpath`, is that the Scala identifier is camel case, while the String representation is lowercase and separated by dashes.
The Scala identifier for a configuration is uppercase to distinguish it from tasks like `compile` and `test`.
For example, we can infer from the previous example how to add code to be run when the Scala interpreter starts up:
```console
> set initialCommands in Compile in console := "import mypackage._"
> console
...
import mypackage._
...
```
`inspect` showed that `console` used the setting `compile:console::initial-commands`.
Translating the `initial-commands` string to the Scala identifier gives us `initialCommands`.
`compile` indicates that this is for the main sources.
`console::` indicates that the setting is specific to `console`.
Because of this, we can set the initial commands on the `console` task without affecting the `console-quick` task, for example.
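The camel-case to hyphenated naming convention used above is mechanical, so it can be sketched in a couple of lines (an illustrative helper only, not part of sbt's API):

```scala
// Illustrative helper, not part of sbt: converts a camel-case Scala key
// identifier to the hyphenated form used on the command line.
def toCommandName(scalaId: String): String =
  scalaId.map(c => if (c.isUpper) "-" + c.toLower else c.toString).mkString
```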
### Actual Dependencies
`inspect actual <scoped-key>` shows the actual dependency used.
This is useful because delegation means that the dependency can come from a scope other than the requested one.
Using `inspect actual`, we see exactly which scope is providing a value for a setting.
Combining `inspect actual` with plain `inspect`, we can see the range of scopes that will affect a setting.
Returning to the example in Requested Dependencies,
```text
> inspect actual console
...
[info] Dependencies:
[info] compile:scalac-options
[info] compile:full-classpath
[info] *:scala-instance
[info] */*:initial-commands
[info] */*:cleanup-commands
[info] */*:task-temporary-directory
[info] *:console::compilers
[info] compile:console::streams
...
```
For `initial-commands`, we see that it comes from the global scope (`*/*:`).
Combining this with the relevant output from `inspect console`:
```text
compile:console::initial-commands
```
we know that we can set `initial-commands` as generally as the global scope, as specific as the current project's `console` task scope, or anything in between.
This means that we can, for example, set `initial-commands` for the whole project and it will affect `console`:
```console
> set initialCommands := "import mypackage._"
...
```
The reason we might want to set it here is that other console tasks will now use this value.
We can see which ones use our new setting by looking at the reverse dependencies output of `inspect actual`:
```text
> inspect actual initial-commands
...
[info] Reverse dependencies:
[info] test:console
[info] compile:console-quick
[info] compile:console
[info] test:console-quick
[info] *:console-project
...
```
We now know that by setting `initial-commands` on the whole project, we affect all console tasks in all configurations in that project.
If we didn't want the initial commands to apply for `console-project`, which doesn't have our project's classpath available, we could use the more specific task axis:
```console
> set initialCommands in console := "import mypackage._"
> set initialCommands in consoleQuick := "import mypackage._"
```
or configuration axis:
```console
> set initialCommands in Compile := "import mypackage._"
> set initialCommands in Test := "import mypackage._"
```
The next part describes the Delegates section, which shows the chain of delegation for scopes.
## Delegates
A setting has a key and a scope.
A request for a key in a scope A may be delegated to another scope if A doesn't define a value for the key.
The delegation chain is well-defined and is displayed in the Delegates section of the `inspect` command.
The Delegates section shows the order in which scopes are searched when a value is not defined for the requested key.
As an example, consider the initial commands for `console` again:
```text
> inspect console::initial-commands
...
[info] Delegates:
[info] *:console::initial-commands
[info] *:initial-commands
[info] {.}/*:console::initial-commands
[info] {.}/*:initial-commands
[info] */*:console::initial-commands
[info] */*:initial-commands
...
```
This means that if there is no value specifically for `*:console::initial-commands`, the scopes listed under Delegates will be searched in order until a defined value is found.
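The search just described amounts to taking the value from the first delegate scope that defines the key. A minimal sketch, with illustrative names rather than sbt's implementation:

```scala
// Illustrative sketch: scan the delegate chain in order and return the
// value from the first scope that defines the key.
def resolve[A](delegates: List[String], defined: Map[String, A]): Option[A] =
  delegates.find(defined.contains).map(defined)

// The delegate chain for *:console::initial-commands shown above.
val delegates = List(
  "*:console::initial-commands",
  "*:initial-commands",
  "{.}/*:console::initial-commands",
  "{.}/*:initial-commands",
  "*/*:console::initial-commands",
  "*/*:initial-commands")
```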


@ -0,0 +1,315 @@
=========================================
Interacting with the Configuration System
=========================================
Central to sbt is the new configuration system, which is designed to
enable extensive customization. The goal of this page is to explain the
general model behind the configuration system and how to work with it.
The Getting Started Guide (see :doc:`.sbt files </Getting-Started/Basic-Def>`)
describes how to define settings; this page describes interacting
with them and exploring them at the command line.
Selecting commands, tasks, and settings
=======================================
A fully-qualified reference to a setting or task looks like:
::
{<build-uri>}<project-id>/config:inkey::key
This "scoped key" reference is used by commands like ``last`` and
``inspect`` and when selecting a task to run. Only ``key`` is usually
required by the parser; the remaining optional pieces select the scope.
These optional pieces are individually referred to as scope axes. In the
above description, ``{<build-uri>}`` and ``<project-id>/`` specify the
project axis, ``config:`` is the configuration axis, and ``inkey`` is
the task-specific axis. Unspecified components are taken to be the
current project (project axis) or auto-detected (configuration and task
axes). An asterisk (``*``) is used to explicitly refer to the ``Global``
context, as in ``*/*:key``.
Selecting the configuration
---------------------------
In the case of an unspecified configuration (that is, when the
``config:`` part is omitted), if the key is defined in ``Global``, that
is selected. Otherwise, the first configuration defining the key is
selected, where order is determined by the project definition's
``configurations`` member. By default, this ordering is
``compile, test, ...``
For example, the following are equivalent when run in a project ``root``
in the build in ``/home/user/sample/``:
::
> compile
> compile:compile
> root/compile
> root/compile:compile
> {file:/home/user/sample/}root/compile:compile
As another example, ``run`` by itself refers to ``compile:run`` because
there is no global ``run`` task and the first configuration searched,
``compile``, defines a ``run``. Therefore, to reference the ``run`` task
for the ``test`` configuration, the configuration axis must be specified
like ``test:run``. Some other examples that require the explicit
``test:`` axis:
::
> test:console-quick
> test:console
> test:doc
> test:package
Task-specific Settings
----------------------
Some settings are defined per-task. This is used when there are several
related tasks, such as ``package``, ``package-src``, and
``package-doc``, in the same configuration (such as ``compile`` or
``test``). For package tasks, their settings are the files to package,
the options to use, and the output file to produce. Each package task
should be able to have different values for these settings.
This is done with the task axis, which selects the task to apply a
setting to. For example, the following prints the output jar for the
different package tasks.
::
> package::artifact-path
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1.jar
> package-src::artifact-path
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1-src.jar
> package-doc::artifact-path
[info] /home/user/sample/target/scala-2.8.1.final/demo_2.8.1-0.1-doc.jar
> test:package::artifact-path
[info] /home/user/sample/target/scala-2.8.1.final/root_2.8.1-0.1-test.jar
Note that a single colon ``:`` follows a configuration axis and a double
colon ``::`` follows a task axis.
Discovering Settings and Tasks
==============================
This section discusses the ``inspect`` command, which is useful for
exploring relationships between settings. It can be used to determine
which setting should be modified in order to affect another setting, for
example.
Value and Provided By
---------------------
The first piece of information provided by ``inspect`` is the type of a
task or the value and type of a setting. The following section of output
is labeled "Provided by". This shows the actual scope where the setting
is defined. For example,
::
> inspect library-dependencies
[info] Setting: scala.collection.Seq[sbt.ModuleID] = List(org.scalaz:scalaz-core:6.0-SNAPSHOT, org.scala-tools.testing:scalacheck:1.8:test)
[info] Provided by:
[info] {file:/home/user/sample/}root/*:library-dependencies
...
This shows that ``library-dependencies`` has been defined on the current
project (``{file:/home/user/sample/}root``) in the global configuration
(``*:``). For a task like ``update``, the output looks like:
::
> inspect update
[info] Task: sbt.UpdateReport
[info] Provided by:
[info] {file:/home/user/sample/}root/*:update
...
Related Settings
----------------
The "Related" section of ``inspect`` output lists all of the definitions
of a key. For example,
::
> inspect compile
...
[info] Related:
[info] test:compile
This shows that in addition to the requested ``compile:compile`` task,
there is also a ``test:compile`` task.
Dependencies
------------
Forward dependencies show the other settings (or tasks) used to define a
setting (or task). Reverse dependencies go the other direction, showing
what uses a given setting. ``inspect`` provides this information based
on either the requested dependencies or the actual dependencies.
Requested dependencies are those that a setting directly specifies.
Actual dependencies are what those requested dependencies resolve to. This
distinction is explained in more detail in the following sections.
Requested Dependencies
~~~~~~~~~~~~~~~~~~~~~~
As an example, we'll look at ``console``:
::
> inspect console
...
[info] Dependencies:
[info] compile:console::full-classpath
[info] compile:console::scalac-options
[info] compile:console::initial-commands
[info] compile:console::cleanup-commands
[info] compile:console::compilers
[info] compile:console::task-temporary-directory
[info] compile:console::scala-instance
[info] compile:console::streams
...
This shows the inputs to the ``console`` task. We can see that it gets
its classpath and options from ``full-classpath`` and
``scalac-options(for console)``. The information provided by the
``inspect`` command can thus assist in finding the right setting to
change. The convention for keys, like ``console`` and
``full-classpath``, is that the Scala identifier is camel case, while
the String representation is lowercase and separated by dashes. The
Scala identifier for a configuration is uppercase to distinguish it from
tasks like ``compile`` and ``test``. For example, we can infer from the
previous example how to add code to be run when the Scala interpreter
starts up:
::
> set initialCommands in Compile in console := "import mypackage._"
> console
...
import mypackage._
...
``inspect`` showed that ``console`` used the setting
``compile:console::initial-commands``. Translating the
``initial-commands`` string to the Scala identifier gives us
``initialCommands``. ``compile`` indicates that this is for the main
sources. ``console::`` indicates that the setting is specific to
``console``. Because of this, we can set the initial commands on the
``console`` task without affecting the ``console-quick`` task, for
example.
Actual Dependencies
~~~~~~~~~~~~~~~~~~~
``inspect actual <scoped-key>`` shows the actual dependency used. This
is useful because delegation means that the dependency can come from a
scope other than the requested one. Using ``inspect actual``, we see
exactly which scope is providing a value for a setting. Combining
``inspect actual`` with plain ``inspect``, we can see the range of
scopes that will affect a setting. Returning to the example in Requested
Dependencies,
::
> inspect actual console
...
[info] Dependencies:
[info] compile:scalac-options
[info] compile:full-classpath
[info] *:scala-instance
[info] */*:initial-commands
[info] */*:cleanup-commands
[info] */*:task-temporary-directory
[info] *:console::compilers
[info] compile:console::streams
...
For ``initial-commands``, we see that it comes from the global scope
(``*/*:``). Combining this with the relevant output from
``inspect console``:
::
compile:console::initial-commands
we know that we can set ``initial-commands`` as generally as the global
scope, as specific as the current project's ``console`` task scope, or
anything in between. This means that we can, for example, set
``initial-commands`` for the whole project and it will affect ``console``:
::
> set initialCommands := "import mypackage._"
...
The reason we might want to set it here is that other console tasks
will now use this value. We can see which ones use our new setting by
looking at the reverse dependencies output of ``inspect actual``:
::
> inspect actual initial-commands
...
[info] Reverse dependencies:
[info] test:console
[info] compile:console-quick
[info] compile:console
[info] test:console-quick
[info] *:console-project
...
We now know that by setting ``initial-commands`` on the whole project,
we affect all console tasks in all configurations in that project. If we
didn't want the initial commands to apply for ``console-project``, which
doesn't have our project's classpath available, we could use the more
specific task axis:
::

    > set initialCommands in console := "import mypackage._"
    > set initialCommands in consoleQuick := "import mypackage._"
or configuration axis:
::
> set initialCommands in Compile := "import mypackage._"
> set initialCommands in Test := "import mypackage._"
The next part describes the Delegates section, which shows the chain of
delegation for scopes.
Delegates
---------
A setting has a key and a scope. A request for a key in a scope A may be
delegated to another scope if A doesn't define a value for the key. The
delegation chain is well-defined and is displayed in the Delegates
section of the ``inspect`` command. The Delegates section shows the
order in which scopes are searched when a value is not defined for the
requested key.
As an example, consider the initial commands for ``console`` again:
::
> inspect console::initial-commands
...
[info] Delegates:
[info] *:console::initial-commands
[info] *:initial-commands
[info] {.}/*:console::initial-commands
[info] {.}/*:initial-commands
[info] */*:console::initial-commands
[info] */*:initial-commands
...
This means that if there is no value specifically for
``*:console::initial-commands``, the scopes listed under Delegates will
be searched in order until a defined value is found.


@ -1,50 +0,0 @@
# Java Sources
sbt has support for compiling Java sources with the limitation that dependency tracking is limited to the dependencies present in compiled class files.
# Usage
* `compile` will compile the sources under `src/main/java` by default.
* `test-compile` will compile the sources under `src/test/java` by default.
Pass options to the Java compiler by setting `javac-options`:
```scala
javacOptions += "-g:none"
```
As with options for the Scala compiler, the arguments are not parsed by sbt. Multi-element options, such as `-source 1.5`, are specified like:
```scala
javacOptions ++= Seq("-source", "1.5")
```
You can specify the order in which Scala and Java sources are built with the `compile-order` setting. Possible values are from the `CompileOrder` enumeration: `Mixed`, `JavaThenScala`, and `ScalaThenJava`. If you have circular dependencies between Scala and Java sources, you need the default, `Mixed`, which passes both Java and Scala sources to `scalac` and then compiles the Java sources with `javac`. If you do not have circular dependencies, you can use one of the other two options to speed up your build by not passing the Java sources to `scalac`. For example, if your Scala sources depend on your Java sources, but your Java sources do not depend on your Scala sources, you can do:
```scala
compileOrder := CompileOrder.JavaThenScala
```
To specify different orders for main and test sources, scope the setting by configuration:
```scala
// Java then Scala for main sources
compileOrder in Compile := CompileOrder.JavaThenScala
// allow circular dependencies for test sources
compileOrder in Test := CompileOrder.Mixed
```
Note that in an incremental compilation setting, it is not practical to ensure complete isolation between Java sources and Scala sources because they share the same output directory. So, previously compiled classes not involved in the current recompilation may be picked up. A clean compile will always provide full checking, however.
By default, sbt includes `src/main/scala` and `src/main/java` in its list of unmanaged source directories. For Java-only projects, the unnecessary Scala directories can be ignored by modifying `unmanagedSourceDirectories`:
```scala
// Include only src/main/java in the compile configuration
unmanagedSourceDirectories in Compile <<= Seq(javaSource in Compile).join
// Include only src/test/java in the test configuration
unmanagedSourceDirectories in Test <<= Seq(javaSource in Test).join
```
However, there should not be any harm in leaving the Scala directories if they are empty.


@ -0,0 +1,77 @@
============
Java Sources
============
sbt has support for compiling Java sources with the limitation that
dependency tracking is limited to the dependencies present in compiled
class files.
Usage
=====
- ``compile`` will compile the sources under ``src/main/java`` by
default.
- ``test-compile`` will compile the sources under ``src/test/java`` by
default.
Pass options to the Java compiler by setting ``javac-options``:
::
javacOptions += "-g:none"
As with options for the Scala compiler, the arguments are not parsed by
sbt. Multi-element options, such as ``-source 1.5``, are specified like:
::
javacOptions ++= Seq("-source", "1.5")
You can specify the order in which Scala and Java sources are built with
the ``compile-order`` setting. Possible values are from the
``CompileOrder`` enumeration: ``Mixed``, ``JavaThenScala``, and
``ScalaThenJava``. If you have circular dependencies between Scala and
Java sources, you need the default, ``Mixed``, which passes both Java
and Scala sources to ``scalac`` and then compiles the Java sources with
``javac``. If you do not have circular dependencies, you can use one of
the other two options to speed up your build by not passing the Java
sources to ``scalac``. For example, if your Scala sources depend on your
Java sources, but your Java sources do not depend on your Scala sources,
you can do:
::
compileOrder := CompileOrder.JavaThenScala
To specify different orders for main and test sources, scope the setting
by configuration:
::
// Java then Scala for main sources
compileOrder in Compile := CompileOrder.JavaThenScala
// allow circular dependencies for test sources
compileOrder in Test := CompileOrder.Mixed
Note that in an incremental compilation setting, it is not practical to
ensure complete isolation between Java sources and Scala sources because
they share the same output directory. So, previously compiled classes
not involved in the current recompilation may be picked up. A clean
compile will always provide full checking, however.
By default, sbt includes ``src/main/scala`` and ``src/main/java`` in its
list of unmanaged source directories. For Java-only projects, the
unnecessary Scala directories can be ignored by modifying
``unmanagedSourceDirectories``:
::
// Include only src/main/java in the compile configuration
unmanagedSourceDirectories in Compile <<= Seq(javaSource in Compile).join
// Include only src/test/java in the test configuration
unmanagedSourceDirectories in Test <<= Seq(javaSource in Test).join
However, there should not be any harm in leaving the Scala directories
if they are empty.


@ -1,248 +0,0 @@
# Launcher Specification
The sbt launcher component is a self-contained jar that boots a Scala application without Scala or the application already existing on the system. The only prerequisites are the launcher jar itself, an optional configuration file, and a Java runtime, version 1.6 or greater.
# Overview
A user downloads the launcher jar and creates a script to run it. In this documentation, the script will be assumed to be called `launch`. For unix, the script would look like:
```
java -jar sbt-launcher.jar "$@"
```
The user then downloads the configuration file for the application (call it `my.app.configuration`) and creates a script to launch it (call it `myapp`):
```
launch @my.app.configuration "$@"
```
The user can then launch the application using
```
myapp arg1 arg2 ...
```
Like the launcher used to distribute `sbt`, the downloaded launcher jar will retrieve Scala and the application according to the provided configuration file. The versions may be fixed or read from a different configuration file (the location of which is also configurable). The location to which the Scala and application jars are downloaded is configurable as well. The repositories searched are configurable. Optional initialization of a properties file on launch is configurable.
Once the launcher has downloaded the necessary jars, it loads the application and calls its entry point. The application is passed information about how it was called: command line arguments, current working directory, Scala version, and application ID (organization, name, version). In addition, the application can ask the launcher to perform operations such as obtaining the Scala jars and a `ClassLoader` for any version of Scala retrievable from the repositories specified in the configuration file. It can request that other applications be downloaded and run. When the application completes, it can tell the launcher to exit with a specific exit code or to reload the application with a different version of Scala, a different version of the application, or different arguments.
There are some other options for setup, such as putting the configuration file inside the launcher jar and distributing that as a single download. The rest of this documentation describes the details of configuring, writing, distributing, and running the application.
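The exit-or-reload behavior described above is essentially a loop around the application's entry point. A hedged sketch follows; the types here are illustrative stand-ins, not the actual `xsbti` interfaces:

```scala
// Illustrative stand-ins for the launcher's result types.
sealed trait MainResult
final case class Exit(code: Int) extends MainResult
final case class Reload(args: Seq[String]) extends MainResult

// Sketch of the launch loop: run the application, then either exit with
// its requested code or reload it with the new arguments it requested.
@annotation.tailrec
def launchLoop(app: Seq[String] => MainResult, args: Seq[String]): Int =
  app(args) match {
    case Exit(code)      => code
    case Reload(newArgs) => launchLoop(app, newArgs)
  }
```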
## Configuration
The launcher may be configured in one of the following ways in increasing order of precedence:
* Replace the `/sbt/sbt.boot.properties` file in the jar
* Put a configuration file named `sbt.boot.properties` on the classpath. Put it in the classpath root without the `/sbt` prefix.
* Specify the location of an alternate configuration on the command line. This can be done by either specifying the location as the system property `sbt.boot.properties` or as the first argument to the launcher prefixed by `'@'`. The system property has lower precedence. Resolution of a relative path is first attempted against the current working directory, then against the user's home directory, and then against the directory containing the launcher jar. An error is generated if none of these attempts succeed.
### Syntax
The configuration file is line-based, read as UTF-8 encoded, and defined by the following grammar. `'nl'` is a newline or end of file and `'text'` is plain text without newlines or the surrounding delimiters (such as parentheses or square brackets):
```
configuration ::= scala app repositories boot log app-properties
scala ::= '[' 'scala' ']' nl version nl classifiers nl
app ::= '[' 'app' ']' nl org nl name nl version nl components nl class nl cross-versioned nl resources nl classifiers nl
repositories ::= '[' 'repositories' ']' nl (repository nl)*
boot ::= '[' 'boot' ']' nl directory nl bootProperties nl search nl promptCreate nl promptFill nl quickOption nl
log ::= '[' 'log' ']' nl logLevel nl
app-properties ::= '[' 'app-properties' ']' nl (property nl)*
ivy ::= '[' 'ivy' ']' nl homeDirectory nl checksums nl overrideRepos nl repoConfig nl
directory ::= 'directory' ':' path
bootProperties ::= 'properties' ':' path
search ::= 'search' ':' ('none'|'nearest'|'root-first'|'only') (',' path)*
logLevel ::= 'log-level' ':' ('debug' | 'info' | 'warn' | 'error')
promptCreate ::= 'prompt-create' ':' label
promptFill ::= 'prompt-fill' ':' boolean
quickOption ::= 'quick-option' ':' boolean
version ::= 'version' ':' versionSpecification
versionSpecification ::= readProperty | fixedVersion
readProperty ::= 'read' '(' propertyName ')' '[' default ']'
fixedVersion ::= text
classifiers ::= 'classifiers' ':' text (',' text)*
homeDirectory ::= 'ivy-home' ':' path
checksums ::= 'checksums' ':' checksum (',' checksum)*
overrideRepos ::= 'override-build-repos' ':' boolean
repoConfig ::= 'repository-config' ':' path
org ::= 'org' ':' text
name ::= 'name' ':' text
class ::= 'class' ':' text
components ::= 'components' ':' component (',' component)*
cross-versioned ::= 'cross-versioned' ':' boolean
resources ::= 'resources' ':' path (',' path)*
repository ::= ( predefinedRepository | customRepository ) nl
predefinedRepository ::= 'local' | 'maven-local' | 'maven-central'
customRepository ::= label ':' url [ [',' ivy-pattern] ',' artifact-pattern]
property ::= label ':' propertyDefinition (',' propertyDefinition)*
propertyDefinition ::= mode '=' (set | prompt)
mode ::= 'quick' | 'new' | 'fill'
set ::= 'set' '(' value ')'
prompt ::= 'prompt' '(' label ')' ('[' default ']')?
boolean ::= 'true' | 'false'
path, propertyName, label, default, checksum ::= text
```
In addition to the grammar specified here, property values may include variable substitutions.
A variable substitution has one of these forms:
* `${variable.name}`
* `${variable.name-default}`
where `variable.name` is the name of a system property.
If a system property by that name exists, the value is substituted.
If it does not exist and a default is specified, the default is substituted after recursively substituting variables in it.
If the system property does not exist and no default is specified, the original string is not substituted.
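The substitution rule above can be sketched as a small recursive function. This is illustrative only (the object and method names are hypothetical) and is not the launcher's actual implementation:

```scala
object Substitute {
  // Substitute every ${name} / ${name-default} in `s` using `props`,
  // per the rules above: missing property with a default => substitute
  // the default (recursively); missing property with no default =>
  // leave the original text unchanged. The first '-' in the body
  // separates the property name from the default.
  def apply(s: String, props: Map[String, String]): String = {
    val start = s.indexOf("${")
    if (start < 0) s
    else {
      val end = closing(s, start + 2)
      if (end < 0) s
      else {
        val body = s.substring(start + 2, end)
        val resolved = body.indexOf('-') match {
          case -1 => props.getOrElse(body, s.substring(start, end + 1))
          case i  => props.getOrElse(body.take(i), apply(body.drop(i + 1), props))
        }
        s.substring(0, start) + resolved + apply(s.substring(end + 1), props)
      }
    }
  }

  // Index of the '}' that closes the "${" whose body begins at `from`,
  // tracking nesting so defaults may themselves contain substitutions.
  private def closing(s: String, from: Int): Int = {
    var depth = 1
    var i = from
    while (i < s.length && depth > 0) {
      if (s.startsWith("${", i)) { depth += 1; i += 2 }
      else {
        if (s.charAt(i) == '}') depth -= 1
        i += 1
      }
    }
    if (depth == 0) i - 1 else -1
  }
}
```

Nested defaults such as `${sbt.global.base-${user.home}/.sbt}/boot/` resolve inner substitutions only when the outer property is missing, matching the rule above.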
### Example
The default configuration file for sbt looks like:
```
[scala]
version: ${sbt.scala.version-auto}
[app]
org: ${sbt.organization-org.scala-sbt}
name: sbt
version: ${sbt.version-read(sbt.version)[0.12.0]}
class: ${sbt.main.class-sbt.xMain}
components: xsbti,extra
cross-versioned: ${sbt.cross.versioned-false}
[repositories]
local
typesafe-ivy-releases: http://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
maven-central
sonatype-snapshots: https://oss.sonatype.org/content/repositories/snapshots
[boot]
directory: ${sbt.boot.directory-${sbt.global.base-${user.home}/.sbt}/boot/}
[ivy]
ivy-home: ${sbt.ivy.home-${user.home}/.ivy2/}
checksums: ${sbt.checksums-sha1,md5}
override-build-repos: ${sbt.override.build.repos-false}
repository-config: ${sbt.repository.config-${sbt.global.base-${user.home}/.sbt}/repositories}
```
### Semantics
The `scala.version` property specifies the version of Scala used to run the application. If the application is not cross-built, this may be set to `auto` and it will be auto-detected from the application's dependencies. If specified, the `scala.classifiers` property defines classifiers, such as 'sources', of extra Scala artifacts to retrieve.
The `app.org`, `app.name`, and `app.version` properties specify the organization, module ID, and version of the application, respectively. These are used to resolve and retrieve the application from the repositories listed in `[repositories]`. If `app.cross-versioned` is true, the resolved module ID is `{app.name+'_'+scala.version}`. The `scala.version` property must be specified and cannot be `auto` when cross-versioned. The paths given in `app.resources` are added to the application's classpath. If the path is relative, it is resolved against the application's working directory. If specified, the `app.classifiers` property defines classifiers, like 'sources', of extra artifacts to retrieve for the application.
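The cross-versioning rule amounts to a simple suffix on the module name; a minimal sketch (a hypothetical helper, not launcher API):

```scala
object CrossVersion {
  // Resolved module ID per the rule above: app.name gains a
  // '_' + scala.version suffix when cross-versioned.
  def moduleId(appName: String, scalaVersion: String, crossVersioned: Boolean): String =
    if (crossVersioned) appName + "_" + scalaVersion
    else appName
}
```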
Jars are retrieved to the directory given by `boot.directory`. By default, this is an absolute path that is shared by all launched instances on the machine. If multiple instances access it simultaneously, you might see messages like:
```
Waiting for lock on <lock-file> to be available...
```
This boot directory may be relative to the current directory instead. In this case, the launched application will have a separate boot directory for each directory it is launched in.
The `boot.properties` property specifies the location of the properties file to use if `app.version` or `scala.version` is specified as `read`. The `prompt-create`, `prompt-fill`, and `quick-option` properties together with the property definitions in `[app.properties]` can be used to initialize the `boot.properties` file.
The `app.class` property specifies the name of the entry point to the application. An application entry point must be a public class with a no-argument constructor that implements `xsbti.AppMain`. The `AppMain` interface specifies the entry method signature `run`. The `run` method is passed an instance of `AppConfiguration`, which provides access to the startup environment. `AppConfiguration` also provides an interface to retrieve other versions of Scala or other applications. Finally, the return type of the `run` method is `xsbti.MainResult`, which has two subtypes: `xsbti.Reboot` and `xsbti.Exit`. To exit with a specific code, return an instance of `xsbti.Exit` with the requested code. To restart the application, return an instance of `xsbti.Reboot`. You can change some aspects of the configuration with a reboot, such as the version of Scala, the application ID, and the arguments.
The `ivy.cache-directory` property provides an alternative location for the Ivy cache used by the launcher. This does not automatically set the Ivy cache for the application, but the application is provided this location through the `AppConfiguration` instance. The `checksums` property selects the checksum algorithms (`sha1` or `md5`) that are used to verify artifacts downloaded by the launcher. `override-build-repos` is a flag that can inform the application that the repositories configured for the launcher should be used in the application. If `repository-config` is defined, the file it specifies should contain a `[repositories]` section that is used in place of the section in the original configuration file.
## Execution
On startup, the launcher searches for its configuration in the order described in the Configuration section and then parses it. If either the Scala version or the application version is specified as `read`, the launcher determines them in the following manner. The file given by the `boot.properties` property is read as a Java properties file to obtain the version. The expected property names are `${app.name}.version` for the application version (where `${app.name}` is replaced with the value of the `app.name` property from the boot configuration file) and `scala.version` for the Scala version. If the properties file does not exist, the default value provided is used. If no default was provided, an error is generated.
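The lookup just described can be sketched with `java.util.Properties`. This is a simplified illustration; the helper name and the exact error handling are hypothetical:

```scala
import java.io.{File, FileInputStream}
import java.util.Properties

object BootVersion {
  // Read `key` (e.g. "myapp.version" or "scala.version") from the
  // boot properties file. Fall back to `default` when the file or the
  // property is missing; fail when no default exists either.
  def read(propsFile: File, key: String, default: Option[String]): String = {
    val fromFile =
      if (propsFile.isFile) {
        val p = new Properties
        val in = new FileInputStream(propsFile)
        try p.load(in) finally in.close()
        Option(p.getProperty(key))
      } else None
    fromFile.orElse(default).getOrElse(
      sys.error("No value or default for '" + key + "' in " + propsFile))
  }
}
```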
Once the final configuration is resolved, the launcher proceeds to obtain the necessary jars to launch the application. The `boot.directory` property is used as a base directory to retrieve jars to. Locking is done on the directory, so it can be shared system-wide. The launcher retrieves the requested version of Scala to
```
${boot.directory}/${scala.version}/lib/
```
If this directory already exists, the launcher takes a shortcut for startup performance and assumes that the jars have already been downloaded. If the directory does not exist, the launcher uses Apache Ivy to resolve and retrieve the jars. A similar process occurs for the application itself. It and its dependencies are retrieved to
```
${boot.directory}/${scala.version}/${app.org}/${app.name}/.
```
Once all required code is downloaded, the class loaders are set up. The launcher creates a class loader for the requested version of Scala. It then creates a child class loader containing the jars for the requested 'app.components' and with the paths specified in `app.resources`. An application that does not use components will have all of its jars in this class loader.
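The resulting loader hierarchy can be sketched with plain `URLClassLoader`s. This is illustrative only; the actual launcher adds more structure than shown here:

```scala
import java.io.File
import java.net.URLClassLoader

object Loaders {
  // Parent loader holds the Scala jars; a child loader holds the
  // application component jars and resources, as described above.
  def forApp(scalaJars: Seq[File], appJars: Seq[File]): (ClassLoader, ClassLoader) = {
    val scalaLoader = new URLClassLoader(scalaJars.map(_.toURI.toURL).toArray, null)
    val appLoader   = new URLClassLoader(appJars.map(_.toURI.toURL).toArray, scalaLoader)
    (scalaLoader, appLoader)
  }
}
```

Because the application loader delegates to the Scala loader, application classes see exactly one copy of the Scala classes.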
The main class for the application is then instantiated. It must be a public class with a public no-argument constructor and must conform to `xsbti.AppMain`. The `run` method is invoked and execution passes to the application. The argument to the `run` method provides configuration information and a callback to obtain a class loader for any version of Scala that can be obtained from a repository in `[repositories]`. The return value of the `run` method determines what is done after the application executes. It can specify that the launcher should restart the application or that it should exit with the provided exit code.
## Creating a Launched Application
This section shows how to make an application that is launched by this launcher. First, declare a dependency on the launcher-interface. Do not declare a dependency on the launcher itself. The launcher interface consists strictly of Java interfaces in order to avoid binary incompatibility between the version of Scala used to compile the launcher and the version used to compile your application. The launcher interface class will be provided by the launcher, so it is only a compile-time dependency. If you are building with sbt, your dependency definition would be:
```scala
libraryDependencies += "org.scala-sbt" % "launcher-interface" % "0.12.0" % "provided"
resolvers <+= sbtResolver
```
Make the entry point of your application implement `xsbti.AppMain`. An example that uses some of the provided information:
```scala
package xsbt.test
class Main extends xsbti.AppMain
{
def run(configuration: xsbti.AppConfiguration) =
{
// get the version of Scala used to launch the application
val scalaVersion = configuration.provider.scalaProvider.version
// Print a message and the arguments to the application
println("Hello world! Running Scala " + scalaVersion)
configuration.arguments.foreach(println)
// demonstrate the ability to reboot the application into different versions of Scala
// and how to return the code to exit with
scalaVersion match
{
case "2.8.2" =>
new xsbti.Reboot {
def arguments = configuration.arguments
def baseDirectory = configuration.baseDirectory
					def scalaVersion = "2.9.2"
def app = configuration.provider.id
}
case "2.9.2" => new Exit(1)
case _ => new Exit(0)
}
}
class Exit(val code: Int) extends xsbti.Exit
}
```
Next, define a configuration file for the launcher. For the above class, it might look like:
```
[scala]
version: 2.9.2
[app]
org: org.scala-sbt
name: xsbt-test
version: 0.12.0
class: xsbt.test.Main
cross-versioned: true
[repositories]
local
maven-central
[boot]
directory: ${user.home}/.myapp/boot
```
Then, `publish-local` or `+publish-local` the application to make it available.
## Running an Application
As mentioned above, there are a few options to actually run the application. The first involves providing a modified jar for download; the other two require providing a configuration file for download.
* Replace the `/sbt/sbt.boot.properties` file in the launcher jar and distribute the modified jar. The user needs a script to run `java -jar your-launcher.jar arg1 arg2 ...`.
* The user downloads the launcher jar and you provide the configuration file. The user then runs `java -Dsbt.boot.properties=your.boot.properties -jar launcher.jar`.
* The user already has a script to run the launcher (call it `launch`). The user runs
```
launch @your.boot.properties your-arg-1 your-arg-2
```

[Apache Ivy]: http://ant.apache.org/ivy/
[Ivy revisions]: http://ant.apache.org/ivy/history/2.2.0/ivyfile/dependency.html#revision
[Extra attributes]: http://ant.apache.org/ivy/history/2.2.0/concept.html#extra
[through Ivy]: http://ant.apache.org/ivy/history/latest-milestone/concept.html#checksum
[ModuleID]: http://harrah.github.com/xsbt/latest/api/sbt/ModuleID.html
# Library Management
There's now a
[[getting started page|Getting Started Library Dependencies]]
about library management, which you may want to read first.
_Wiki Maintenance Note:_ it would be nice to remove the overlap
between this page and the getting started page, leaving this page
with the more advanced topics such as checksums and external Ivy
files.
# Introduction
There are two ways for you to manage libraries with sbt: manually
or automatically. These two ways can be mixed as well. This page
discusses the two approaches. All configurations shown here are
settings that go either directly in a
[[.sbt file|Getting Started Basic Def]] or are appended to the
`settings` of a Project in a [[.scala file|Getting Started Full Def]].
# Manual Dependency Management
Manually managing dependencies involves copying any jars that you want to use to the `lib` directory. sbt will put these jars on the classpath during compilation, testing, running, and when using the interpreter. You are responsible for adding, removing, updating, and otherwise managing the jars in this directory. No modifications to your project definition are required to use this method unless you would like to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the `unmanaged-base` setting in your project definition. For example, to use `custom_lib/`:
```scala
unmanagedBase <<= baseDirectory { base => base / "custom_lib" }
```
If you want more control and flexibility, override the `unmanaged-jars` task, which ultimately provides the manual dependencies to sbt. The default implementation is roughly:
```scala
unmanagedJars in Compile <<= baseDirectory map { base => (base ** "*.jar").classpath }
```
If you want to add jars from multiple directories in addition to the default directory, you can do:
```scala
unmanagedJars in Compile <++= baseDirectory map { base =>
val baseDirectories = (base / "libA") +++ (base / "b" / "lib") +++ (base / "libC")
val customJars = (baseDirectories ** "*.jar") +++ (base / "d" / "my.jar")
customJars.classpath
}
```
See [[Paths]] for more information on building up paths.
# Automatic Dependency Management
This method of dependency management involves specifying the direct dependencies of your project and letting sbt handle retrieving and updating your dependencies. sbt supports three ways of specifying these dependencies:
* Declarations in your project definition
* Maven POM files (dependency definitions only: no repositories)
* Ivy configuration and settings files
sbt uses [Apache Ivy] to implement dependency management in all three cases. The default is to use inline declarations, but external configuration can be explicitly selected. The following sections describe how to use each method of automatic dependency management.
## Inline Declarations
Inline declarations are a basic way of specifying the dependencies to be automatically retrieved. They are intended as a lightweight alternative to a full configuration using Ivy.
### Dependencies
Declaring a dependency looks like:
```scala
libraryDependencies += groupID % artifactID % revision
```
or
```scala
libraryDependencies += groupID % artifactID % revision % configuration
```
See [[Configurations]] for details on configuration mappings. Also, several dependencies can be declared together:
```scala
libraryDependencies ++= Seq(
groupID %% artifactID % revision,
groupID %% otherID % otherRevision
)
```
If you are using a dependency that was built with sbt, double the first `%` to be `%%`:
```scala
libraryDependencies += groupID %% artifactID % revision
```
This will use the right jar for the dependency built with the version of Scala that you are currently using. If you get an error while resolving this kind of dependency, that dependency probably wasn't published for the version of Scala you are using. See [[Cross Build]] for details.
Ivy can select the latest revision of a module according to constraints you specify. Instead of a fixed revision like `"1.6.1"`, you specify `"latest.integration"`, `"2.9.+"`, or `"[1.0,)"`. See the [Ivy revisions] documentation for details.
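For example, a constraint in place of a fixed revision (the coordinates here are purely illustrative):

```scala
// Illustrative only: select the newest available 2.9.x revision
libraryDependencies += "org.example" % "demo-lib" % "2.9.+"
```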
### Resolvers
sbt uses the standard Maven2 repository by default.
Declare additional repositories with the form:
```scala
resolvers += name at location
```
For example:
```scala
libraryDependencies ++= Seq(
"org.apache.derby" % "derby" % "10.4.1.3",
"org.specs" % "specs" % "1.6.1"
)
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
```
sbt can search your local Maven repository if you add it as a repository:
```scala
resolvers += "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
```
See [[Resolvers]] for details on defining other types of repositories.
### Override default resolvers
`resolvers` configures additional, inline user resolvers. By default, `sbt` combines these resolvers with default repositories (Maven Central and the local Ivy repository) to form `external-resolvers`. To have more control over repositories, set `external-resolvers` directly. To only specify repositories in addition to the usual defaults, configure `resolvers`.
For example, to use the Sonatype OSS Snapshots repository in addition to the default repositories,
```scala
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
```
To use the local repository, but not the Maven Central repository:
```scala
externalResolvers <<= resolvers map { rs =>
Resolver.withDefaultResolvers(rs, mavenCentral = false)
}
```
### Override all resolvers for all builds
The repositories used to retrieve sbt, Scala, plugins, and application dependencies can be configured globally and declared to override the resolvers configured in a build or plugin definition.
There are two parts:
1. Define the repositories used by the launcher.
2. Specify that these repositories should override those in build definitions.
The repositories used by the launcher can be overridden by defining `~/.sbt/repositories`, which must contain a `[repositories]` section with the same format as the [[Launcher]] configuration file. For example:
```text
[repositories]
local
my-maven-repo: http://example.org/repo
my-ivy-repo: http://example.org/ivy-repo/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
```
A different location for the repositories file may be specified by the `sbt.repository.config` system property in the sbt startup script.
The final step is to set `sbt.override.build.repos` to true to use these repositories for dependency resolution and retrieval.
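For example, both system properties can be passed on the command line used to start sbt (the repositories path shown is illustrative):

```text
java -Dsbt.repository.config=/etc/sbt/repositories -Dsbt.override.build.repos=true -jar sbt-launch.jar
```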
### Explicit URL
If your project requires a dependency that is not present in a repository, a direct URL to its jar can be specified as follows:
```scala
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
```
The URL is only used as a fallback if the dependency cannot be found through the configured repositories. Also, the explicit URL is not included in published metadata (that is, the pom or ivy.xml).
### Disable Transitivity
By default, these declarations fetch all project dependencies, transitively. In some instances, you may find that the dependencies listed for a project aren't necessary for it to build. Projects using the Felix OSGi framework, for instance, only explicitly require its main jar to compile and run. Avoid fetching artifact dependencies with either `intransitive()` or `notTransitive()`, as in this example:
```scala
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()
```
### Classifiers
You can specify the classifier for a dependency using the `classifier` method. For example, to get the jdk15 version of TestNG:
```scala
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
```
For multiple classifiers, use multiple `classifier` calls:
```scala
libraryDependencies +=
"org.lwjgl.lwjgl" % "lwjgl-platform" % lwjglVersion classifier "natives-windows" classifier "natives-linux" classifier "natives-osx"
```
To obtain particular classifiers for all dependencies transitively, run the `update-classifiers` task. By default, this resolves all artifacts with the `sources` or `javadoc` classifier. Select the classifiers to obtain by configuring the `transitive-classifiers` setting. For example, to only retrieve sources:
```scala
transitiveClassifiers := Seq("sources")
```
### Exclude Transitive Dependencies
To exclude certain transitive dependencies of a dependency, use the `excludeAll` or `exclude` methods. The `exclude` method should be used when a pom will be published for the project. It requires the organization and module name to exclude. For example,
```scala
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")
```
The `excludeAll` method is more flexible, but because it cannot be represented in a pom.xml, it should only be used when a pom doesn't need to be generated. For example,
```scala
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" excludeAll(
ExclusionRule(organization = "com.sun.jdmk"),
ExclusionRule(organization = "com.sun.jmx"),
ExclusionRule(organization = "javax.jms")
)
```
See [ModuleID] for API details.
### Download Sources
Downloading source and API documentation jars is usually handled by an IDE plugin. These plugins use the `update-classifiers` and `update-sbt-classifiers` tasks, which produce an [[Update Report]] referencing these jars.
To have sbt download the dependency's sources without using an IDE plugin, add `withSources()` to the dependency definition. For API jars, add `withJavadoc()`. For example:
```scala
libraryDependencies +=
"org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()
```
Note that this is not transitive. Use the `update-*classifiers` tasks for that.
### Extra Attributes
[Extra attributes] can be specified by passing key/value pairs to the `extra` method.
To select dependencies by extra attributes:
```scala
libraryDependencies += "org" % "name" % "rev" extra("color" -> "blue")
```
To define extra attributes on the current project:
```scala
projectID <<= projectID { id =>
id extra("color" -> "blue", "component" -> "compiler-interface")
}
```
### Inline Ivy XML
sbt additionally supports directly specifying the configurations or dependencies sections of an Ivy configuration file inline. You can mix this with inline Scala dependency and repository declarations.
For example:
```scala
ivyXML :=
<dependencies>
<dependency org="javax.mail" name="mail" rev="1.4.2">
<exclude module="activation"/>
</dependency>
</dependencies>
```
### Ivy Home Directory
By default, sbt uses the standard Ivy home directory location `${user.home}/.ivy2/`.
This can be configured machine-wide, for use by both the sbt launcher and by projects, by setting the system property `sbt.ivy.home` in the sbt startup script (described in [[Setup|Getting Started Setup]]).
For example:
```text
java -Dsbt.ivy.home=/tmp/.ivy2/ ...
```
### Checksums
sbt ([through Ivy]) verifies the checksums of downloaded files by default. It also publishes checksums of artifacts by default. The checksums to use are specified by the _checksums_ setting.
To disable checksum checking during update:
```scala
checksums in update := Nil
```
To disable checksum creation during artifact publishing:
```scala
checksums in publishLocal := Nil
checksums in publish := Nil
```
The default value is:
```scala
checksums := Seq("sha1", "md5")
```
### Publishing
Finally, see [[Publishing]] for how to publish your project.
## Maven/Ivy
For this method, create the configuration files as you would for Maven (`pom.xml`) or Ivy (`ivy.xml` and optionally `ivysettings.xml`).
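As a sketch, a minimal `ivy.xml` might declare a single dependency like this (the organisation and module names are illustrative):

```xml
<ivy-module version="2.0">
  <info organisation="org.example" module="my-project" revision="1.0"/>
  <dependencies>
    <dependency org="javax.mail" name="mail" rev="1.4.2"/>
  </dependencies>
</ivy-module>
```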
External configuration is selected by using one of the following expressions.
### Ivy settings (resolver configuration)
```scala
externalIvySettings()
```
or
```scala
externalIvySettings(baseDirectory(_ / "custom-settings-name.xml"))
```
or
```scala
externalIvySettings(url("your_url_here"))
```
### Ivy file (dependency configuration)
```scala
externalIvyFile()
```
or
```scala
externalIvyFile(baseDirectory(_ / "custom-name.xml"))
```
Because Ivy files specify their own configurations, sbt needs to know which configurations to use for the compile, runtime, and test classpaths. For example, to specify that the Compile classpath should use the 'default' configuration:
```scala
classpathConfiguration in Compile := config("default")
```
### Maven pom (dependencies only)
```scala
externalPom()
```
or
```scala
externalPom(baseDirectory(_ / "custom-name.xml"))
```
### Full Ivy Example
For example, a `build.sbt` using external Ivy files might look like:
```scala
externalIvySettings()
externalIvyFile( baseDirectory { base => base / "ivyA.xml"} )
classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime
```
### Known limitations
Maven support is dependent on Ivy's support for Maven POMs.
Known issues with this support:
* Specifying `relativePath` in the `parent` section of a POM will produce an error.
* Ivy ignores repositories specified in the POM. A workaround is to specify repositories inline or in an Ivy `ivysettings.xml` file.
@@ -0,0 +1,498 @@
==================
Library Management
==================
There's now a :doc:`getting started page </Getting-Started/Library-Dependencies>`
about library management, which you may want to read first.
*Wiki Maintenance Note:* it would be nice to remove the overlap between
this page and the getting started page, leaving this page with the more
advanced topics such as checksums and external Ivy files.
Introduction
============
There are two ways for you to manage libraries with sbt: manually or
automatically. These two ways can be mixed as well. This page discusses
the two approaches. All configurations shown here are settings that go
either directly in a :doc:`.sbt file </Getting-Started/Basic-Def>` or are
appended to the ``settings`` of a Project in a :doc:`.scala file </Getting-Started/Full-Def>`.
Manual Dependency Management
============================
Manually managing dependencies involves copying any jars that you want
to use to the ``lib`` directory. sbt will put these jars on the
classpath during compilation, testing, running, and when using the
interpreter. You are responsible for adding, removing, updating, and
otherwise managing the jars in this directory. No modifications to your
project definition are required to use this method unless you would like
to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the
``unmanaged-base`` setting in your project definition. For example, to
use ``custom_lib/``:
::
unmanagedBase <<= baseDirectory { base => base / "custom_lib" }
If you want more control and flexibility, override the
``unmanaged-jars`` task, which ultimately provides the manual
dependencies to sbt. The default implementation is roughly:
::
unmanagedJars in Compile <<= baseDirectory map { base => (base ** "*.jar").classpath }
If you want to add jars from multiple directories in addition to the
default directory, you can do:
::
unmanagedJars in Compile <++= baseDirectory map { base =>
val baseDirectories = (base / "libA") +++ (base / "b" / "lib") +++ (base / "libC")
val customJars = (baseDirectories ** "*.jar") +++ (base / "d" / "my.jar")
customJars.classpath
}
See :doc:`Paths` for more information on building up paths.
Automatic Dependency Management
===============================
This method of dependency management involves specifying the direct
dependencies of your project and letting sbt handle retrieving and
updating your dependencies. sbt supports three ways of specifying these
dependencies:
- Declarations in your project definition
- Maven POM files (dependency definitions only: no repositories)
- Ivy configuration and settings files
sbt uses `Apache Ivy <http://ant.apache.org/ivy/>`_ to implement
dependency management in all three cases. The default is to use inline
declarations, but external configuration can be explicitly selected. The
following sections describe how to use each method of automatic
dependency management.
Inline Declarations
-------------------
Inline declarations are a basic way of specifying the dependencies to be
automatically retrieved. They are intended as a lightweight alternative
to a full configuration using Ivy.
Dependencies
~~~~~~~~~~~~
Declaring a dependency looks like:
::
libraryDependencies += groupID % artifactID % revision
or
::
libraryDependencies += groupID % artifactID % revision % configuration
See :doc:`/Dormant/Configurations` for details on configuration mappings. Also,
several dependencies can be declared together:
::
libraryDependencies ++= Seq(
groupID %% artifactID % revision,
groupID %% otherID % otherRevision
)
If you are using a dependency that was built with sbt, double the first
``%`` to be ``%%``:
::
libraryDependencies += groupID %% artifactID % revision
This will use the right jar for the dependency built with the version of
Scala that you are currently using. If you get an error while resolving
this kind of dependency, that dependency probably wasn't published for
the version of Scala you are using. See :doc:`Cross-Build` for details.
Ivy can select the latest revision of a module according to constraints
you specify. Instead of a fixed revision like ``"1.6.1"``, you specify
``"latest.integration"``, ``"2.9.+"``, or ``"[1.0,)"``. See the `Ivy
revisions <http://ant.apache.org/ivy/history/2.2.0/ivyfile/dependency.html#revision>`_
documentation for details.
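For example, a dynamic revision is declared in the same way as a fixed
one (the module shown is just illustrative):

::

    libraryDependencies += "org.specs" % "specs" % "latest.integration"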
Resolvers
~~~~~~~~~
sbt uses the standard Maven2 repository by default.
Declare additional repositories with the form:
::
resolvers += name at location
For example:
::
libraryDependencies ++= Seq(
"org.apache.derby" % "derby" % "10.4.1.3",
"org.specs" % "specs" % "1.6.1"
)
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
sbt can search your local Maven repository if you add it as a
repository:
::
resolvers += "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
See :doc:`Resolvers` for details on defining other types of repositories.
Override default resolvers
~~~~~~~~~~~~~~~~~~~~~~~~~~
``resolvers`` configures additional, inline user resolvers. By default,
``sbt`` combines these resolvers with default repositories (Maven
Central and the local Ivy repository) to form ``external-resolvers``. To
have more control over repositories, set ``external-resolvers``
directly. To only specify repositories in addition to the usual
defaults, configure ``resolvers``.
For example, to use the Sonatype OSS Snapshots repository in addition to
the default repositories,
::
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
To use the local repository, but not the Maven Central repository:
::
externalResolvers <<= resolvers map { rs =>
Resolver.withDefaultResolvers(rs, mavenCentral = false)
}
Override all resolvers for all builds
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The repositories used to retrieve sbt, Scala, plugins, and application
dependencies can be configured globally and declared to override the
resolvers configured in a build or plugin definition. There are two
parts:
1. Define the repositories used by the launcher.
2. Specify that these repositories should override those in build
definitions.
The repositories used by the launcher can be overridden by defining
``~/.sbt/repositories``, which must contain a ``[repositories]`` section
with the same format as the :doc:`Launcher` configuration file. For
example:
::
[repositories]
local
my-maven-repo: http://example.org/repo
my-ivy-repo: http://example.org/ivy-repo/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
A different location for the repositories file may be specified by the
``sbt.repository.config`` system property in the sbt startup script. The
final step is to set ``sbt.override.build.repos`` to true to use these
repositories for dependency resolution and retrieval.
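For example, both system properties can be passed on the command line
used to start sbt (the repositories path shown is illustrative):

::

    java -Dsbt.repository.config=/etc/sbt/repositories -Dsbt.override.build.repos=true -jar sbt-launch.jar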
Explicit URL
~~~~~~~~~~~~
If your project requires a dependency that is not present in a
repository, a direct URL to its jar can be specified as follows:
::
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
The URL is only used as a fallback if the dependency cannot be found
through the configured repositories. Also, the explicit URL is not
included in published metadata (that is, the pom or ivy.xml).
Disable Transitivity
~~~~~~~~~~~~~~~~~~~~
By default, these declarations fetch all project dependencies,
transitively. In some instances, you may find that the dependencies
listed for a project aren't necessary for it to build. Projects using
the Felix OSGi framework, for instance, only explicitly require its main
jar to compile and run. Avoid fetching artifact dependencies with either
``intransitive()`` or ``notTransitive()``, as in this example:
::
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()
Classifiers
~~~~~~~~~~~
You can specify the classifier for a dependency using the ``classifier``
method. For example, to get the jdk15 version of TestNG:
::
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
For multiple classifiers, use multiple ``classifier`` calls:
::
libraryDependencies +=
"org.lwjgl.lwjgl" % "lwjgl-platform" % lwjglVersion classifier "natives-windows" classifier "natives-linux" classifier "natives-osx"
To obtain particular classifiers for all dependencies transitively, run
the ``update-classifiers`` task. By default, this resolves all artifacts
with the ``sources`` or ``javadoc`` classifier. Select the classifiers
to obtain by configuring the ``transitive-classifiers`` setting. For
example, to only retrieve sources:
::
transitiveClassifiers := Seq("sources")
Exclude Transitive Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To exclude certain transitive dependencies of a dependency, use the
``excludeAll`` or ``exclude`` methods. The ``exclude`` method should be
used when a pom will be published for the project. It requires the
organization and module name to exclude. For example,
::
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")
The ``excludeAll`` method is more flexible, but because it cannot be
represented in a pom.xml, it should only be used when a pom doesn't need
to be generated. For example,
::
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" excludeAll(
ExclusionRule(organization = "com.sun.jdmk"),
ExclusionRule(organization = "com.sun.jmx"),
ExclusionRule(organization = "javax.jms")
)
See
`ModuleID <../../api/sbt/ModuleID.html>`_
for API details.
Download Sources
~~~~~~~~~~~~~~~~
Downloading source and API documentation jars is usually handled by an
IDE plugin. These plugins use the ``update-classifiers`` and
``update-sbt-classifiers`` tasks, which produce an :doc:`Update-Report`
referencing these jars.
To have sbt download the dependency's sources without using an IDE
plugin, add ``withSources()`` to the dependency definition. For API
jars, add ``withJavadoc()``. For example:
::
libraryDependencies +=
"org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()
Note that this is not transitive. Use the ``update-*classifiers`` tasks
for that.
Extra Attributes
~~~~~~~~~~~~~~~~
`Extra
attributes <http://ant.apache.org/ivy/history/2.2.0/concept.html#extra>`_
can be specified by passing key/value pairs to the ``extra`` method.
To select dependencies by extra attributes:
::
libraryDependencies += "org" % "name" % "rev" extra("color" -> "blue")
To define extra attributes on the current project:
::
projectID <<= projectID { id =>
id extra("color" -> "blue", "component" -> "compiler-interface")
}
Inline Ivy XML
~~~~~~~~~~~~~~
sbt additionally supports directly specifying the configurations or
dependencies sections of an Ivy configuration file inline. You can mix
this with inline Scala dependency and repository declarations.
For example:
::
ivyXML :=
<dependencies>
<dependency org="javax.mail" name="mail" rev="1.4.2">
<exclude module="activation"/>
</dependency>
</dependencies>
Ivy Home Directory
~~~~~~~~~~~~~~~~~~
By default, sbt uses the standard Ivy home directory location
``${user.home}/.ivy2/``. This can be configured machine-wide, for use by
both the sbt launcher and by projects, by setting the system property
``sbt.ivy.home`` in the sbt startup script (described in
:doc:`Setup </Getting-Started/Setup>`).
For example:
::
java -Dsbt.ivy.home=/tmp/.ivy2/ ...
Checksums
~~~~~~~~~
sbt (`through
Ivy <http://ant.apache.org/ivy/history/latest-milestone/concept.html#checksum>`_)
verifies the checksums of downloaded files by default. It also publishes
checksums of artifacts by default. The checksums to use are specified by
the *checksums* setting.
To disable checksum checking during update:
::
checksums in update := Nil
To disable checksum creation during artifact publishing:
::
checksums in publishLocal := Nil
checksums in publish := Nil
The default value is:
::
checksums := Seq("sha1", "md5")
Publishing
~~~~~~~~~~
Finally, see :doc:`Publishing` for how to publish your project.
.. _external-maven-ivy:
Maven/Ivy
---------
For this method, create the configuration files as you would for Maven
(``pom.xml``) or Ivy (``ivy.xml`` and optionally ``ivysettings.xml``).
External configuration is selected by using one of the following
expressions.
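As a sketch, a minimal ``ivy.xml`` might declare a single dependency
like this (the organisation and module names are illustrative):

::

    <ivy-module version="2.0">
      <info organisation="org.example" module="my-project" revision="1.0"/>
      <dependencies>
        <dependency org="javax.mail" name="mail" rev="1.4.2"/>
      </dependencies>
    </ivy-module>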
Ivy settings (resolver configuration)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
externalIvySettings()
or
::
externalIvySettings(baseDirectory(_ / "custom-settings-name.xml"))
or
::
externalIvySettings(url("your_url_here"))
Ivy file (dependency configuration)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
externalIvyFile()
or
::
externalIvyFile(baseDirectory(_ / "custom-name.xml"))
Because Ivy files specify their own configurations, sbt needs to know
which configurations to use for the compile, runtime, and test
classpaths. For example, to specify that the Compile classpath should
use the 'default' configuration:
::
classpathConfiguration in Compile := config("default")
Maven pom (dependencies only)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
externalPom()
or
::
externalPom(baseDirectory(_ / "custom-name.xml"))
Full Ivy Example
~~~~~~~~~~~~~~~~
For example, a ``build.sbt`` using external Ivy files might look like:
::
externalIvySettings()
externalIvyFile( baseDirectory { base => base / "ivyA.xml"} )
classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime
Known limitations
~~~~~~~~~~~~~~~~~
Maven support is dependent on Ivy's support for Maven POMs. Known issues
with this support:
- Specifying ``relativePath`` in the ``parent`` section of a POM will
produce an error.
- Ivy ignores repositories specified in the POM. A workaround is to
specify repositories inline or in an Ivy ``ivysettings.xml`` file.
@@ -1,18 +0,0 @@
# Local Scala
To use a locally built Scala version, define the `scala-home` setting, which is of type `Option[File]`.
This Scala version will only be used for the build and not for sbt, which will still use the version it was compiled against.
Example:
```scala
scalaHome := Some(file("/path/to/scala"))
```
Using a local Scala version will override the `scala-version` setting and will not work with [[cross building|Cross Build]].
sbt reuses the class loader for the local Scala version. If you recompile your local Scala version and you are using sbt interactively, run
```text
> reload
```
to use the new compilation results.
@@ -0,0 +1,19 @@
===========
Local Scala
===========
To use a locally built Scala version, define the ``scala-home`` setting,
which is of type ``Option[File]``. This Scala version will only be used
for the build and not for sbt, which will still use the version it was
compiled against.
Example:

::

    scalaHome := Some(file("/path/to/scala"))
Using a local Scala version will override the ``scala-version`` setting
and will not work with :doc:`cross building <Cross-Build>`.
sbt reuses the class loader for the local Scala version. If you
recompile your local Scala version and you are using sbt interactively,
run

::

    > reload
to use the new compilation results.
@@ -1,95 +0,0 @@
[Path]: http://harrah.github.com/xsbt/latest/api/sbt/Path$.html
[PathFinder]: http://harrah.github.com/xsbt/latest/api/sbt/PathFinder.html
# Mapping Files
Tasks like `package`, `packageSrc`, and `packageDoc` accept mappings of type `Seq[(File, String)]` from an input file to the path to use in the resulting artifact (jar). Similarly, tasks that copy files accept mappings of type `Seq[(File, File)]` from an input file to the destination file. There are some methods on [PathFinder] and [Path] that can be useful for constructing the `Seq[(File, String)]` or `Seq[(File, File)]` sequences.
A common way of making this sequence is to start with a `PathFinder` or `Seq[File]` (which is implicitly convertible to `PathFinder`) and then call the `x` method. See the [PathFinder] API for details, but essentially this method accepts a function `File => Option[String]` or `File => Option[File]` that is used to generate mappings.
## Relative to a directory
The `Path.relativeTo` method is used to map a `File` to its path `String` relative to a base directory or directories. The `relativeTo` method accepts a base directory or sequence of base directories to relativize an input file against. The first directory that is an ancestor of the file is used in the case of a sequence of base directories.
For example:
```scala
import Path.relativeTo
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files x relativeTo(baseDirectories)
val expected = (file("/a/b/C.scala") -> "b/C.scala") :: Nil
assert( mappings == expected )
```
## Rebase
The `Path.rebase` method relativizes an input file against one or more base directories (the first argument) and then prepends a base String or File (the second argument) to the result. As with `relativeTo`, the first base directory that is an ancestor of the input file is used in the case of multiple base directories.
For example, the following demonstrates building a `Seq[(File, String)]` using `rebase`:
```scala
import Path.rebase
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files x rebase(baseDirectories, "pre/")
val expected = (file("/a/b/C.scala") -> "pre/b/C.scala" ) :: Nil
assert( mappings == expected )
```
Or, to build a `Seq[(File, File)]`:
```scala
import Path.rebase
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val newBase: File = file("/new/base")
val mappings: Seq[(File,File)] = files x rebase(baseDirectories, newBase)
val expected = (file("/a/b/C.scala") -> file("/new/base/b/C.scala") ) :: Nil
assert( mappings == expected )
```
## Flatten
The `Path.flat` method provides a function that maps a file to the last component of the path (its name). For a File to File mapping, the input file is mapped to a file with the same name in a given target directory. For example:
```scala
import Path.flat
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val mappings: Seq[(File,String)] = files x flat
val expected = (file("/a/b/C.scala") -> "C.scala" ) :: Nil
assert( mappings == expected )
```
To build a `Seq[(File, File)]` using `flat`:
```scala
import Path.flat
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val newBase: File = file("/new/base")
val mappings: Seq[(File,File)] = files x flat(newBase)
val expected = (file("/a/b/C.scala") -> file("/new/base/C.scala") ) :: Nil
assert( mappings == expected )
```
## Alternatives
To try to apply several alternative mappings for a file, use `|`, which is implicitly added to a function of type `A => Option[B]`. For example, to try to relativize a file against some base directories but fall back to flattening:
```scala
import Path.{flat, relativeTo}
val files: Seq[File] = file("/a/b/C.scala") :: file("/zzz/D.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files x ( relativeTo(baseDirectories) | flat )
val expected =
  (file("/a/b/C.scala") -> "b/C.scala") ::
  (file("/zzz/D.scala") -> "D.scala") ::
  Nil
assert( mappings == expected )
```
@@ -0,0 +1,124 @@
=============
Mapping Files
=============
Tasks like ``package``, ``packageSrc``, and ``packageDoc`` accept
mappings of type ``Seq[(File, String)]`` from an input file to the path
to use in the resulting artifact (jar). Similarly, tasks that copy files
accept mappings of type ``Seq[(File, File)]`` from an input file to the
destination file. There are some methods on
`PathFinder <../../api/sbt/PathFinder.html>`_
and `Path <../../api/sbt/Path$.html>`_
that can be useful for constructing the ``Seq[(File, String)]`` or
``Seq[(File, File)]`` sequences.
A common way of making this sequence is to start with a ``PathFinder``
or ``Seq[File]`` (which is implicitly convertible to ``PathFinder``) and
then call the ``x`` method. See the
`PathFinder <../../api/sbt/PathFinder.html>`_
API for details, but essentially this method accepts a function
``File => Option[String]`` or ``File => Option[File]`` that is used to
generate mappings.
Relative to a directory
-----------------------
The ``Path.relativeTo`` method is used to map a ``File`` to its path
``String`` relative to a base directory or directories. The
``relativeTo`` method accepts a base directory or sequence of base
directories to relativize an input file against. The first directory
that is an ancestor of the file is used in the case of a sequence of
base directories.
For example:
::
import Path.relativeTo
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files x relativeTo(baseDirectories)
val expected = (file("/a/b/C.scala") -> "b/C.scala") :: Nil
assert( mappings == expected )
Rebase
------
The ``Path.rebase`` method relativizes an input file against one or more
base directories (the first argument) and then prepends a base String or
File (the second argument) to the result. As with ``relativeTo``, the
first base directory that is an ancestor of the input file is used in
the case of multiple base directories.
For example, the following demonstrates building a
``Seq[(File, String)]`` using ``rebase``:
::
import Path.rebase
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val mappings: Seq[(File,String)] = files x rebase(baseDirectories, "pre/")
val expected = (file("/a/b/C.scala") -> "pre/b/C.scala" ) :: Nil
assert( mappings == expected )
Or, to build a ``Seq[(File, File)]``:
::
import Path.rebase
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val baseDirectories: Seq[File] = file("/a") :: Nil
val newBase: File = file("/new/base")
val mappings: Seq[(File,File)] = files x rebase(baseDirectories, newBase)
val expected = (file("/a/b/C.scala") -> file("/new/base/b/C.scala") ) :: Nil
assert( mappings == expected )
Flatten
-------
The ``Path.flat`` method provides a function that maps a file to the
last component of the path (its name). For a File to File mapping, the
input file is mapped to a file with the same name in a given target
directory. For example:
::
import Path.flat
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val mappings: Seq[(File,String)] = files x flat
val expected = (file("/a/b/C.scala") -> "C.scala" ) :: Nil
assert( mappings == expected )
To build a ``Seq[(File, File)]`` using ``flat``:
::
import Path.flat
val files: Seq[File] = file("/a/b/C.scala") :: Nil
val newBase: File = file("/new/base")
val mappings: Seq[(File,File)] = files x flat(newBase)
val expected = (file("/a/b/C.scala") -> file("/new/base/C.scala") ) :: Nil
assert( mappings == expected )
Alternatives
------------
To try to apply several alternative mappings for a file, use ``|``,
which is implicitly added to a function of type ``A => Option[B]``. For
example, to try to relativize a file against some base directories but
fall back to flattening:
::

    import Path.{flat, relativeTo}
    val files: Seq[File] = file("/a/b/C.scala") :: file("/zzz/D.scala") :: Nil
    val baseDirectories: Seq[File] = file("/a") :: Nil
    val mappings: Seq[(File,String)] = files x ( relativeTo(baseDirectories) | flat )
    val expected =
      (file("/a/b/C.scala") -> "b/C.scala") ::
      (file("/zzz/D.scala") -> "D.scala") ::
      Nil
    assert( mappings == expected )
@@ -1,92 +0,0 @@
The assumption here is that you are familiar with sbt 0.7 but new to 0.12.
sbt 0.12's many new capabilities can be a bit overwhelming, but this page should help you migrate to 0.12 with a minimum of fuss.
## Why move to 0.12?
1. Faster builds (because it is smarter at re-compiling only what it must)
1. Easier configuration. For simple projects a single `build.sbt` file in your root directory is easier to create than `project/build/MyProject.scala` was.
1. No more `lib_managed` directory, reducing disk usage and avoiding backup and version control hassles.
1. `update` is now much faster and it's invoked automatically by sbt.
1. Terser output. (Yet you can ask for more details if something goes wrong.)
# Step 1: Read the Getting Started Guide for sbt 0.12
Reading the [[Getting Started Guide|Getting Started Welcome]] will
probably save you a lot of confusion.
# Step 2: Install sbt 0.12.0
Download sbt 0.12 as described on [[the setup page|Getting Started Setup]].
You can run 0.12 the same way that you run 0.7.x, either simply:
java -jar sbt-launch.jar
Or (as most users do) with a shell script, as described on
[[the setup page|Getting Started Setup]].
If you like, rename `sbt-launch.jar` and the script itself to
support multiple versions. For example you could have scripts for
`sbt7` and `sbt12`.
For more details see [[the setup page|Getting Started Setup]].
# Step 3: A technique for switching an existing project
Here is a technique for switching an existing project to 0.12 while retaining the ability to switch back again at will. Some builds, such as those with subprojects, are not suited for this technique, but if you learn how to transition a simple project it will help you do a more complex one next.
## Preserve `project/` for 0.7.x project
Rename your `project/` directory to something like `project-old`. This will hide it from sbt 0.12 but keep it in case you want to switch back to 0.7.x.
## Create `build.sbt` for 0.12
Create a `build.sbt` file in the root directory of your
project. See [[.sbt build definition|Getting Started Basic Def]]
in the Getting Started Guide, and for simple examples [[Quick-Configuration-Examples]]. If you have a simple project then converting your existing project file to this format is largely a matter of re-writing your dependencies and maven archive declarations in a modified yet familiar syntax.
This `build.sbt` file combines aspects of the old `project/build/ProjectName.scala` and `build.properties` files. It looks like a property file, yet contains Scala code in a special format.
A `build.properties` file like:
#Project properties
#Fri Jan 07 15:34:00 GMT 2011
project.organization=org.myproject
project.name=My Project
sbt.version=0.7.7
project.version=1.0
def.scala.version=2.7.7
build.scala.versions=2.8.1
project.initialize=false
Now becomes part of your `build.sbt` file with lines like:
```scala
name := "My Project"
version := "1.0"
organization := "org.myproject"
scalaVersion := "2.9.2"
```
Currently, a `project/build.properties` is still needed to explicitly select the sbt version. For example:
```text
sbt.version=0.12.0
```
## Run sbt 0.12
Now launch sbt. If you're lucky it works and you're done. For help debugging, see below.
## Switching back to sbt 0.7.x
If you get stuck and want to switch back, you can leave your `build.sbt` file alone. sbt 0.7.x will not understand or notice it. Just rename your 0.12.x `project` directory to something like `project10` and rename the backup of your old project from `project-old` to `project` again.
# FAQs
There's a section in the [[FAQ]] about migration from 0.7 that
covers several other important points.

@@ -0,0 +1,128 @@
===========================
Migrating from 0.7 to 0.10+
===========================
The assumption here is that you are familiar with sbt 0.7 but new to sbt |version|.
sbt |version|'s many new capabilities can be a bit overwhelming, but this
page should help you migrate to |version| with a minimum of fuss.
Why move to |version|?
----------------------
1. Faster builds (because it is smarter at re-compiling only what it
must)
2. Easier configuration. For simple projects a single ``build.sbt`` file
in your root directory is easier to create than
``project/build/MyProject.scala`` was.
3. No more ``lib_managed`` directory, reducing disk usage and avoiding
backup and version control hassles.
4. ``update`` is now much faster and it's invoked automatically by sbt.
5. Terser output. (Yet you can ask for more details if something goes
wrong.)
Step 1: Read the Getting Started Guide for sbt |version|
========================================================
Reading the :doc:`Getting Started Guide </Getting-Started/Welcome>` will
probably save you a lot of confusion.
Step 2: Install sbt |release|
=============================
Download sbt |version| as described on :doc:`the setup page </Getting-Started/Setup>`.
You can run |version| the same way that you run 0.7.x, either simply:
::
java -jar sbt-launch.jar
Or (as most users do) with a shell script, as described on
:doc:`the setup page </Getting-Started/Setup>`.
For more details see :doc:`the setup page </Getting-Started/Setup>`.
Step 3: A technique for switching an existing project
=====================================================
Here is a technique for switching an existing project to |version| while
retaining the ability to switch back again at will. Some builds, such as
those with subprojects, are not suited for this technique, but if you
learn how to transition a simple project it will help you do a more
complex one next.
Preserve ``project/`` for 0.7.x project
---------------------------------------
Rename your ``project/`` directory to something like ``project-old``.
This will hide it from sbt |version| but keep it in case you want to switch
back to 0.7.x.
Create ``build.sbt`` for |version|
----------------------------------
Create a ``build.sbt`` file in the root directory of your project. See
:doc:`.sbt build definition </Getting-Started/Basic-Def>` in the Getting
Started Guide, and for simple examples :doc:`/Examples/Quick-Configuration-Examples`.
If you have a simple project then converting your existing project file
to this format is largely a matter of re-writing your dependencies and
maven archive declarations in a modified yet familiar syntax.
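As a concrete sketch of that rewrite (the dependency and repository shown here are hypothetical; substitute your own), a 0.7 project definition along these lines:

::

    // project/build/MyProject.scala (sbt 0.7)
    import sbt._
    class MyProject(info: ProjectInfo) extends DefaultProject(info) {
      // a hypothetical dependency and repository
      val dispatch  = "net.databinder" %% "dispatch-http" % "0.8.5"
      val snapshots = "Sonatype Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/"
    }

is rewritten as individual settings in ``build.sbt`` (note the blank line between settings):

::

    libraryDependencies += "net.databinder" %% "dispatch-http" % "0.8.5"

    resolvers += "Sonatype Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/"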
This ``build.sbt`` file combines aspects of the old
``project/build/ProjectName.scala`` and ``build.properties`` files. It
looks like a property file, yet contains Scala code in a special format.
A ``build.properties`` file like:
::
#Project properties
#Fri Jan 07 15:34:00 GMT 2011
project.organization=org.myproject
project.name=My Project
sbt.version=0.7.7
project.version=1.0
def.scala.version=2.7.7
build.scala.versions=2.8.1
project.initialize=false
Now becomes part of your ``build.sbt`` file with lines like:
::
name := "My Project"
version := "1.0"
organization := "org.myproject"
scalaVersion := "2.9.2"
Currently, a ``project/build.properties`` is still needed to explicitly
select the sbt version. For example:
::
sbt.version=|release|
Run sbt |version|
-----------------
Now launch sbt. If you're lucky it works and you're done. For help
debugging, see below.
Switching back to sbt 0.7.x
---------------------------
If you get stuck and want to switch back, you can leave your
``build.sbt`` file alone. sbt 0.7.x will not understand or notice it.
Just rename your |version| ``project`` directory to something like
``project10`` and rename the backup of your old project from
``project-old`` to ``project`` again.
FAQs
====
There's a section in the :doc:`FAQ </faq>` about migration from 0.7 that covers
several other important points.

@@ -1,268 +0,0 @@
[sbt.ConcurrentRestrictions]: https://github.com/harrah/xsbt/blob/v0.12.0/tasks/ConcurrentRestrictions.scala
# Task ordering
Task ordering is specified by declaring a task's inputs.
Correctness of execution requires correct input declarations.
For example, the following two tasks do not have an ordering specified:
```scala
write := IO.write(file("/tmp/sample.txt"), "Some content.")
read := IO.read(file("/tmp/sample.txt"))
```
sbt is free to execute `write` first and then `read`, `read` first and then `write`, or `read` and `write` simultaneously.
Execution of these tasks is non-deterministic because they share a file.
A correct declaration of the tasks would be:
```scala
write := {
val f = file("/tmp/sample.txt")
IO.write(f, "Some content.")
f
}
read <<= write map { f => IO.read(f) }
```
This establishes an ordering: `read` must run after `write`.
We've also guaranteed that `read` will read from the same file that `write` created.
# Practical constraints
Note: The feature described in this section is experimental.
In particular, its default configuration is subject to change.
## Background
Declaring inputs and dependencies of a task ensures the task is properly ordered and that code executes correctly.
In practice, tasks share finite hardware and software resources and can require control over utilization of these resources.
By default, sbt executes tasks in parallel (subject to the ordering constraints already described) in an effort to utilize all available processors.
Also by default, each test class is mapped to its own task to enable executing tests in parallel.
Prior to sbt 0.12, user control over this process was restricted to:
1. Enabling or disabling all parallel execution (`parallelExecution := false`, for example).
2. Enabling or disabling mapping tests to their own tasks (`parallelExecution in Test := false`, for example).
(Although never exposed as a setting, the maximum number of tasks running at a given time was internally configurable as well.)
The second configuration mechanism described above only selected between running all of a project's tests in the same task or in separate tasks.
Each project still had a separate task for running its tests and so test tasks in separate projects could still run in parallel if overall execution was parallel.
There was no way to restrict execution such that only a single test out of all projects executed.
## Configuration
sbt 0.12 contains a general infrastructure for restricting task concurrency beyond the usual ordering declarations.
There are two parts to these restrictions.
1. A task is tagged in order to classify its purpose and resource utilization. For example, the `compile` task may be tagged as `Tags.Compile` and `Tags.CPU`.
2. A list of rules restrict the tasks that may execute concurrently. For example, `Tags.limit(Tags.CPU, 4)` would allow up to four computation-heavy tasks to run at a time.
The system is thus dependent on proper tagging of tasks and then on a good set of rules.
### Tagging Tasks
In general, a tag is associated with a weight that represents the task's relative utilization of the resource represented by the tag.
Currently, this weight is an integer, but it may be a floating point in the future.
`Initialize[Task[T]]` defines two methods for tagging the constructed Task: `tag` and `tagw`.
The first method, `tag`, fixes the weight to be 1 for the tags provided to it as arguments.
The second method, `tagw`, accepts pairs of tags and weights.
For example, the following associates the `CPU` and `Compile` tags with the `compile` task (with a weight of 1).
```scala
compile <<= myCompileTask tag(Tags.CPU, Tags.Compile)
```
Different weights may be specified by passing tag/weight pairs to `tagw`:
```scala
download <<= downloadImpl.tagw(Tags.Network -> 3)
```
### Defining Restrictions
Once tasks are tagged, the `concurrentRestrictions` setting sets restrictions on the tasks that may be concurrently executed based on the weighted tags of those tasks.
For example,
```scala
concurrentRestrictions := Seq(
Tags.limit(Tags.CPU, 2),
Tags.limit(Tags.Network, 10),
Tags.limit(Tags.Test, 1),
Tags.limitAll( 15 )
)
```
The example limits:
* the number of CPU-using tasks to be no more than 2
* the number of tasks using the network to be no more than 10
* test execution to only one test at a time across all projects
* the total number of tasks to be less than or equal to 15
Note that these restrictions rely on proper tagging of tasks.
Also, the value provided as the limit must be at least 1 to ensure every task is able to be executed.
sbt will generate an error if this condition is not met.
Most tasks won't be tagged because they are very short-lived.
These tasks are automatically assigned the label `Untagged`.
You may want to include these tasks in the CPU rule by using the `limitSum` method.
For example:
```scala
...
Tags.limitSum(2, Tags.CPU, Tags.Untagged)
...
```
Note that the limit is the first argument so that tags can be provided as varargs.
Another useful convenience function is `Tags.exclusive`.
This specifies that a task with the given tag should execute in isolation.
It starts executing only when no other tasks are running (even if they have the exclusive tag) and no other tasks may start execution until it completes.
For example, a task could be tagged with a custom tag `Benchmark` and a rule configured to ensure such a task is executed by itself:
```scala
...
Tags.exclusive(Benchmark)
...
```
Finally, for the most flexibility, you can specify a custom function of type `Map[Tag,Int] => Boolean`.
The `Map[Tag,Int]` represents the weighted tags of a set of tasks.
If the function returns `true`, it indicates that the set of tasks is allowed to execute concurrently.
If the return value is `false`, the set of tasks will not be allowed to execute concurrently.
For example, `Tags.exclusive(Benchmark)` is equivalent to the following:
```scala
...
Tags.customLimit { (tags: Map[Tag,Int]) =>
val exclusive = tags.getOrElse(Benchmark, 0)
// the total number of tasks in the group
val all = tags.getOrElse(Tags.All, 0)
// if there are no exclusive tasks in this group, this rule adds no restrictions
exclusive == 0 ||
// If there is only one task, allow it to execute.
all == 1
}
...
```
There are some basic rules that custom functions must follow, but the main one to be aware of in practice is that if there is only one task, it must be allowed to execute.
sbt will generate a warning if the user defines restrictions that prevent a task from executing at all and will then execute the task anyway.
### Built-in Tags and Rules
Built-in tags are defined in the `Tags` object.
All tags listed below must be qualified by this object.
For example, `CPU` refers to the `Tags.CPU` value.
The built-in semantic tags are:
* `Compile` - describes a task that compiles sources.
* `Test` - describes a task that performs a test.
* `Publish`
* `Update`
* `Untagged` - automatically added when a task doesn't explicitly define any tags.
* `All` - automatically added to every task.
The built-in resource tags are:
* `Network` - describes a task's network utilization.
* `Disk` - describes a task's filesystem utilization.
* `CPU` - describes a task's computational utilization.
The tasks that are currently tagged by default are:
* `compile`: `Compile`, `CPU`
* `test`: `Test`
* `update`: `Update`, `Network`
* `publish`, `publish-local`: `Publish`, `Network`
Note also that the default `test` task propagates its tags to each child task created for each test class.
The default rules provide the same behavior as previous versions of sbt:
```scala
concurrentRestrictions <<= parallelExecution { par =>
val max = Runtime.getRuntime.availableProcessors
Tags.limitAll(if(par) max else 1) :: Nil
}
```
As before, `parallelExecution in Test` controls whether tests are mapped to separate tasks.
To restrict the number of concurrently executing tests in all projects, use:
```scala
concurrentRestrictions += Tags.limit(Tags.Test, 1)
```
## Custom Tags
To define a new tag, pass a String to the `Tags.Tag` method. For example:
```scala
val Custom = Tags.Tag("custom")
```
Then, use this tag as any other tag. For example:
```scala
aCustomTask <<= aCustomTask.tag(Custom)
concurrentRestrictions +=
Tags.limit(Custom, 1)
```
## Future work
This is an experimental feature and there are several aspects that may change or require further work.
### Tagging Tasks
Currently, a tag applies only to the immediate computation it is defined on.
For example, in the following, the second compile definition has no tags applied to it.
Only the first computation is labeled.
```scala
compile <<= myCompileTask tag(Tags.CPU, Tags.Compile)
compile ~= { ... do some post processing ... }
```
Is this desirable? Expected? If not, what is a better alternative behavior?
### Fractional weighting
Weights are currently `Int`s, but could be changed to `Double`s if fractional weights would be useful.
It is important to preserve a consistent notion of what a weight of 1 means so that built-in and custom tasks share this definition and useful rules can be written.
### Default Behavior
User feedback on what custom rules work for what workloads will help determine a good set of default tags and rules.
### Adjustments to Defaults
Rules should be easier to remove or redefine, perhaps by giving them names.
As it is, rules must be appended or all rules must be completely redefined.
Redefining the tags of a task looks like:
```scala
compile <<= compile.tag(Tags.Network)
```
This will overwrite the previous weight if the tag (Network) was already defined.
For removing tags, an implementation of `removeTag` should follow from the implementation of `tag` in a straightforward manner.
### Other characteristics
The system of a tag with a weight was selected as being reasonably powerful and flexible without being too complicated.
This selection is not fundamental and could be enhanced, simplified, or replaced if necessary.
The fundamental interface that describes the constraints the system must work within is `sbt.ConcurrentRestrictions`.
This interface is used to provide an intermediate scheduling queue between task execution (`sbt.Execute`) and the underlying thread-based parallel execution service (`java.util.concurrent.CompletionService`).
This intermediate queue restricts new tasks from being forwarded to the `j.u.c.CompletionService` according to the `sbt.ConcurrentRestrictions` implementation.
See the [sbt.ConcurrentRestrictions] API documentation for details.

@@ -0,0 +1,336 @@
==================
Parallel Execution
==================
Task ordering
=============
Task ordering is specified by declaring a task's inputs. Correctness of
execution requires correct input declarations. For example, the
following two tasks do not have an ordering specified:
::
write := IO.write(file("/tmp/sample.txt"), "Some content.")
read := IO.read(file("/tmp/sample.txt"))
sbt is free to execute ``write`` first and then ``read``, ``read`` first
and then ``write``, or ``read`` and ``write`` simultaneously. Execution
of these tasks is non-deterministic because they share a file. A correct
declaration of the tasks would be:
::
write := {
val f = file("/tmp/sample.txt")
IO.write(f, "Some content.")
f
}
read <<= write map { f => IO.read(f) }
This establishes an ordering: ``read`` must run after ``write``. We've
also guaranteed that ``read`` will read from the same file that
``write`` created.
Practical constraints
=====================
Note: The feature described in this section is experimental. In
particular, its default configuration is subject to change.
Background
----------
Declaring inputs and dependencies of a task ensures the task is properly
ordered and that code executes correctly. In practice, tasks share
finite hardware and software resources and can require control over
utilization of these resources. By default, sbt executes tasks in
parallel (subject to the ordering constraints already described) in an
effort to utilize all available processors. Also by default, each test
class is mapped to its own task to enable executing tests in parallel.
Prior to sbt 0.12, user control over this process was restricted to:
1. Enabling or disabling all parallel execution
(``parallelExecution := false``, for example).
2. Enabling or disabling mapping tests to their own tasks
(``parallelExecution in Test := false``, for example).
(Although never exposed as a setting, the maximum number of tasks
running at a given time was internally configurable as well.)
The second configuration mechanism described above only selected between
running all of a project's tests in the same task or in separate tasks.
Each project still had a separate task for running its tests and so test
tasks in separate projects could still run in parallel if overall
execution was parallel. There was no way to restrict execution such
that only a single test out of all projects executed.
Configuration
-------------
sbt 0.12.0 introduces a general infrastructure for restricting task
concurrency beyond the usual ordering declarations. There are two parts
to these restrictions.
1. A task is tagged in order to classify its purpose and resource
utilization. For example, the ``compile`` task may be tagged as
``Tags.Compile`` and ``Tags.CPU``.
2. A list of rules restrict the tasks that may execute concurrently. For
example, ``Tags.limit(Tags.CPU, 4)`` would allow up to four
computation-heavy tasks to run at a time.
The system is thus dependent on proper tagging of tasks and then on a
good set of rules.
Tagging Tasks
~~~~~~~~~~~~~
In general, a tag is associated with a weight that represents the task's
relative utilization of the resource represented by the tag. Currently,
this weight is an integer, but it may be a floating point in the future.
``Initialize[Task[T]]`` defines two methods for tagging the constructed
Task: ``tag`` and ``tagw``. The first method, ``tag``, fixes the weight
to be 1 for the tags provided to it as arguments. The second method,
``tagw``, accepts pairs of tags and weights. For example, the following
associates the ``CPU`` and ``Compile`` tags with the ``compile`` task
(with a weight of 1).
::
compile <<= myCompileTask tag(Tags.CPU, Tags.Compile)
Different weights may be specified by passing tag/weight pairs to
``tagw``:
::
download <<= downloadImpl.tagw(Tags.Network -> 3)
Defining Restrictions
~~~~~~~~~~~~~~~~~~~~~
Once tasks are tagged, the ``concurrentRestrictions`` setting sets
restrictions on the tasks that may be concurrently executed based on the
weighted tags of those tasks. For example,
::
concurrentRestrictions := Seq(
Tags.limit(Tags.CPU, 2),
Tags.limit(Tags.Network, 10),
Tags.limit(Tags.Test, 1),
Tags.limitAll( 15 )
)
The example limits:
- the number of CPU-using tasks to be no more than 2
- the number of tasks using the network to be no more than 10
- test execution to only one test at a time across all projects
- the total number of tasks to be less than or equal to 15
Note that these restrictions rely on proper tagging of tasks. Also, the
value provided as the limit must be at least 1 to ensure every task is
able to be executed. sbt will generate an error if this condition is not
met.
Most tasks won't be tagged because they are very short-lived. These
tasks are automatically assigned the label ``Untagged``. You may want to
include these tasks in the CPU rule by using the ``limitSum`` method.
For example:
::
...
Tags.limitSum(2, Tags.CPU, Tags.Untagged)
...
Note that the limit is the first argument so that tags can be provided
as varargs.
Another useful convenience function is ``Tags.exclusive``. This
specifies that a task with the given tag should execute in isolation. It
starts executing only when no other tasks are running (even if they have
the exclusive tag) and no other tasks may start execution until it
completes. For example, a task could be tagged with a custom tag
``Benchmark`` and a rule configured to ensure such a task is executed by
itself:
::
...
Tags.exclusive(Benchmark)
...
Finally, for the most flexibility, you can specify a custom function of
type ``Map[Tag,Int] => Boolean``. The ``Map[Tag,Int]`` represents the
weighted tags of a set of tasks. If the function returns ``true``, it
indicates that the set of tasks is allowed to execute concurrently. If
the return value is ``false``, the set of tasks will not be allowed to
execute concurrently. For example, ``Tags.exclusive(Benchmark)`` is
equivalent to the following:
::
...
Tags.customLimit { (tags: Map[Tag,Int]) =>
val exclusive = tags.getOrElse(Benchmark, 0)
// the total number of tasks in the group
val all = tags.getOrElse(Tags.All, 0)
// if there are no exclusive tasks in this group, this rule adds no restrictions
exclusive == 0 ||
// If there is only one task, allow it to execute.
all == 1
}
...
There are some basic rules that custom functions must follow, but the
main one to be aware of in practice is that if there is only one task,
it must be allowed to execute. sbt will generate a warning if the user
defines restrictions that prevent a task from executing at all and will
then execute the task anyway.
Built-in Tags and Rules
~~~~~~~~~~~~~~~~~~~~~~~
Built-in tags are defined in the ``Tags`` object. All tags listed below
must be qualified by this object. For example, ``CPU`` refers to the
``Tags.CPU`` value.
The built-in semantic tags are:
- ``Compile`` - describes a task that compiles sources.
- ``Test`` - describes a task that performs a test.
- ``Publish``
- ``Update``
- ``Untagged`` - automatically added when a task doesn't explicitly
define any tags.
- ``All`` - automatically added to every task.
The built-in resource tags are:
- ``Network`` - describes a task's network utilization.
- ``Disk`` - describes a task's filesystem utilization.
- ``CPU`` - describes a task's computational utilization.
The tasks that are currently tagged by default are:
- ``compile``: ``Compile``, ``CPU``
- ``test``: ``Test``
- ``update``: ``Update``, ``Network``
- ``publish``, ``publish-local``: ``Publish``, ``Network``
Note also that the default ``test`` task propagates its tags to
each child task created for each test class.
The default rules provide the same behavior as previous versions of sbt:
::
concurrentRestrictions <<= parallelExecution { par =>
val max = Runtime.getRuntime.availableProcessors
Tags.limitAll(if(par) max else 1) :: Nil
}
As before, ``parallelExecution in Test`` controls whether tests are
mapped to separate tasks. To restrict the number of concurrently
executing tests in all projects, use:
::
concurrentRestrictions += Tags.limit(Tags.Test, 1)
Custom Tags
-----------
To define a new tag, pass a String to the ``Tags.Tag`` method. For
example:
::
val Custom = Tags.Tag("custom")
Then, use this tag as any other tag. For example:
::
aCustomTask <<= aCustomTask.tag(Custom)
concurrentRestrictions +=
Tags.limit(Custom, 1)
Future work
-----------
This is an experimental feature and there are several aspects that may
change or require further work.
Tagging Tasks
~~~~~~~~~~~~~
Currently, a tag applies only to the immediate computation it is defined
on. For example, in the following, the second compile definition has no
tags applied to it. Only the first computation is labeled.
::
compile <<= myCompileTask tag(Tags.CPU, Tags.Compile)
compile ~= { ... do some post processing ... }
Is this desirable? Expected? If not, what is a better alternative
behavior?
Fractional weighting
~~~~~~~~~~~~~~~~~~~~
Weights are currently ``Int``\ s, but could be changed to
``Double``\ s if fractional weights would be useful. It is important to
preserve a consistent notion of what a weight of 1 means so that
built-in and custom tasks share this definition and useful rules can be
written.
Default Behavior
~~~~~~~~~~~~~~~~
User feedback on what custom rules work for what workloads will help
determine a good set of default tags and rules.
Adjustments to Defaults
~~~~~~~~~~~~~~~~~~~~~~~
Rules should be easier to remove or redefine, perhaps by giving them
names. As it is, rules must be appended or all rules must be completely
redefined.
Redefining the tags of a task looks like:
::
compile <<= compile.tag(Tags.Network)
This will overwrite the previous weight if the tag (Network) was already
defined.
For removing tags, an implementation of ``removeTag`` should follow from
the implementation of ``tag`` in a straightforward manner.
Other characteristics
~~~~~~~~~~~~~~~~~~~~~
The system of a tag with a weight was selected as being reasonably
powerful and flexible without being too complicated. This selection is
not fundamental and could be enhanced, simplified, or replaced if
necessary. The fundamental interface that describes the constraints the
system must work within is ``sbt.ConcurrentRestrictions``. This
interface is used to provide an intermediate scheduling queue between
task execution (``sbt.Execute``) and the underlying thread-based
parallel execution service (``java.util.concurrent.CompletionService``).
This intermediate queue restricts new tasks from being forwarded to the
``j.u.c.CompletionService`` according to the
``sbt.ConcurrentRestrictions`` implementation. See the
`sbt.ConcurrentRestrictions <https://github.com/harrah/xsbt/blob/v0.12.0/tasks/ConcurrentRestrictions.scala>`_
API documentation for details.

@@ -1,148 +0,0 @@
# Parsing and tab completion
This page describes the parser combinators in sbt.
These parser combinators are typically used to parse user input and provide tab completion for [[Input Tasks]] and [[Commands]].
If you are already familiar with Scala's parser combinators, the methods are mostly the same except that their arguments are strict.
There are two additional methods for controlling tab completion that are discussed at the end of the section.
Parser combinators build up a parser from smaller parsers.
A `Parser[T]` in its most basic usage is a function `String => Option[T]`.
It accepts a `String` to parse and produces a value wrapped in `Some` if parsing succeeds or `None` if it fails.
Error handling and tab completion make this picture more complicated, but we'll stick with Option for this discussion.
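To make the `String => Option[T]` intuition concrete, here is a toy model. (This sketch is for intuition only; sbt's actual `Parser` is a richer type that also supports error reporting and tab completion.)

```scala
// A toy stand-in for Parser[T], not sbt's real definition.
type SimpleParser[T] = String => Option[T]

// Succeeds only when the entire input is one or more digits.
val parseInt: SimpleParser[Int] =
  (s: String) => if (s.nonEmpty && s.forall(_.isDigit)) Some(s.toInt) else None

assert(parseInt("42") == Some(42))
assert(parseInt("4x") == None)
```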
The following examples assume the imports:
```scala
import sbt._
import complete.DefaultParsers._
```
## Basic parsers
The simplest parser combinators match exact inputs:
```scala
// A parser that succeeds if the input is 'x', returning the Char 'x'
// and failing otherwise
val singleChar: Parser[Char] = 'x'
// A parser that succeeds if the input is "blue", returning the String "blue"
// and failing otherwise
val litString: Parser[String] = "blue"
```
In these examples, implicit conversions produce a literal `Parser` from a `Char` or `String`.
Other basic parser constructors are the `charClass`, `success` and `failure` methods:
```scala
// A parser that succeeds if the character is a digit, returning the matched Char
// The second argument, "digit", describes the parser and is used in error messages
val digit: Parser[Char] = charClass( (c: Char) => c.isDigit, "digit")
// A parser that produces the value 3 for an empty input string, fails otherwise
val alwaysSucceed: Parser[Int] = success( 3 )
// Represents failure (always returns None for an input String).
// The argument is the error message.
val alwaysFail: Parser[Nothing] = failure("Invalid input.")
```
## Combining parsers
We build on these basic parsers to construct more interesting parsers.
We can combine parsers in a sequence, choose between parsers, or repeat a parser.
```scala
// A parser that succeeds if the input is "blue" or "green",
// returning the matched input
val color: Parser[String] = "blue" | "green"
// A parser that matches either "fg" or "bg"
val select: Parser[String] = "fg" | "bg"
// A parser that matches "fg" or "bg", a space, and then the color, returning the matched values.
// ~ is an alias for Tuple2.
val setColor: Parser[String ~ Char ~ String] =
select ~ ' ' ~ color
// Often, we don't care about the value matched by a parser, such as the space above
// For this, we can use ~> or <~, which keep the result of
// the parser on the right or left, respectively
val setColor2: Parser[String ~ String] = select ~ (' ' ~> color)
// Match one or more digits, returning a list of the matched characters
val digits: Parser[Seq[Char]] = charClass(_.isDigit, "digit").+
// Match zero or more digits, returning a list of the matched characters
val digits0: Parser[Seq[Char]] = charClass(_.isDigit, "digit").*
// Optionally match a digit
val optDigit: Parser[Option[Char]] = charClass(_.isDigit, "digit").?
```
## Transforming results
A key aspect of parser combinators is transforming results along the way into more useful data structures.
The fundamental methods for this are `map` and `flatMap`.
Here are examples of `map` and some convenience methods implemented on top of `map`.
```scala
// Apply the `digits` parser and apply the provided function to the matched
// character sequence
val num: Parser[Int] = digits map { (chars: Seq[Char]) => chars.mkString.toInt }
// Match a digit character, returning the matched character or return '0' if the input is not a digit
val digitWithDefault: Parser[Char] = charClass(_.isDigit, "digit") ?? '0'
// The previous example is equivalent to:
val digitDefault: Parser[Char] =
charClass(_.isDigit, "digit").? map { (d: Option[Char]) => d getOrElse '0' }
// Succeed if the input is "blue" and return the value 4
val blue = "blue" ^^^ 4
// The above is equivalent to:
val blueM = "blue" map { (s: String) => 4 }
```
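The other fundamental method, `flatMap`, makes the next parser depend on a previously parsed value. A minimal sketch (the pairing of prefixes to colors here is purely illustrative):

```scala
// After matching "fg" or "bg", accept only a specific color for each:
// "green" may follow "fg" and "blue" may follow "bg".
val dependentColor: Parser[String] = ("fg" | "bg") flatMap {
  case "fg" => ' ' ~> "green"
  case _    => ' ' ~> "blue"
}
```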
## Controlling tab completion
Most parsers have reasonable default tab completion behavior.
For example, the string and character literal parsers will suggest the underlying literal for an empty input string.
However, it is impractical to determine the valid completions for `charClass`, since it accepts an arbitrary predicate.
The `examples` method defines explicit completions for such a parser:
```scala
val digit = charClass(_.isDigit, "digit").examples("0", "1", "2")
```
Tab completion will use the examples as suggestions.
The other method controlling tab completion is `token`.
The main purpose of `token` is to determine the boundaries for suggestions.
For example, if your parser is:
```scala
("fg" | "bg") ~ ' ' ~ ("green" | "blue")
```
then the potential completions on empty input are:
```console
fg green
fg blue
bg green
bg blue
```
Typically, you want to suggest smaller segments; otherwise, the number of suggestions becomes unmanageable.
A better parser is:
```scala
token( ("fg" | "bg") ~ ' ') ~ token("green" | "blue")
```
Now, the initial suggestions would be (with _ representing a space):
```console
fg_
bg_
```
Be careful not to overlap or nest tokens, as in `token("green" ~ token("blue"))`. The behavior is unspecified (and should generate an error in the future), but typically the outermost token definition will be used.
==========================
Parsing and tab completion
==========================
This page describes the parser combinators in sbt. These parser
combinators are typically used to parse user input and provide tab
completion for :doc:`/Extending/Input-Tasks` and :doc:`/Extending/Commands`. If you are already
familiar with Scala's parser combinators, the methods are mostly the
same except that their arguments are strict. There are two additional
methods for controlling tab completion that are discussed at the end of
the section.
Parser combinators build up a parser from smaller parsers. A
``Parser[T]`` in its most basic usage is a function
``String => Option[T]``. It accepts a ``String`` to parse and produces a
value wrapped in ``Some`` if parsing succeeds or ``None`` if it fails.
Error handling and tab completion make this picture more complicated,
but we'll stick with Option for this discussion.
The following examples assume the imports:
::
import sbt._
import complete.DefaultParsers._
Basic parsers
-------------
The simplest parser combinators match exact inputs:
::
// A parser that succeeds if the input is 'x', returning the Char 'x'
// and failing otherwise
val singleChar: Parser[Char] = 'x'
// A parser that succeeds if the input is "blue", returning the String "blue"
// and failing otherwise
val litString: Parser[String] = "blue"
In these examples, implicit conversions produce a literal ``Parser``
from a ``Char`` or ``String``. Other basic parser constructors are the
``charClass``, ``success`` and ``failure`` methods:
::
// A parser that succeeds if the character is a digit, returning the matched Char
// The second argument, "digit", describes the parser and is used in error messages
val digit: Parser[Char] = charClass( (c: Char) => c.isDigit, "digit")
// A parser that produces the value 3 for an empty input string, fails otherwise
val alwaysSucceed: Parser[Int] = success( 3 )
// Represents failure (always returns None for an input String).
// The argument is the error message.
val alwaysFail: Parser[Nothing] = failure("Invalid input.")
Combining parsers
-----------------
We build on these basic parsers to construct more interesting parsers.
We can combine parsers in a sequence, choose between parsers, or repeat
a parser.
::
// A parser that succeeds if the input is "blue" or "green",
// returning the matched input
val color: Parser[String] = "blue" | "green"
// A parser that matches either "fg" or "bg"
val select: Parser[String] = "fg" | "bg"
// A parser that matches "fg" or "bg", a space, and then the color, returning the matched values.
// ~ is an alias for Tuple2.
val setColor: Parser[String ~ Char ~ String] =
select ~ ' ' ~ color
// Often, we don't care about the value matched by a parser, such as the space above
// For this, we can use ~> or <~, which keep the result of
// the parser on the right or left, respectively
val setColor2: Parser[String ~ String] = select ~ (' ' ~> color)
// Match one or more digits, returning a list of the matched characters
val digits: Parser[Seq[Char]] = charClass(_.isDigit, "digit").+
// Match zero or more digits, returning a list of the matched characters
val digits0: Parser[Seq[Char]] = charClass(_.isDigit, "digit").*
// Optionally match a digit
val optDigit: Parser[Option[Char]] = charClass(_.isDigit, "digit").?
Transforming results
--------------------
A key aspect of parser combinators is transforming results along the way
into more useful data structures. The fundamental methods for this are
``map`` and ``flatMap``. Here are examples of ``map`` and some
convenience methods implemented on top of ``map``.
::
// Apply the `digits` parser and apply the provided function to the matched
// character sequence
val num: Parser[Int] = digits map { (chars: Seq[Char]) => chars.mkString.toInt }
// Match a digit character, returning the matched character or return '0' if the input is not a digit
val digitWithDefault: Parser[Char] = charClass(_.isDigit, "digit") ?? '0'
// The previous example is equivalent to:
val digitDefault: Parser[Char] =
charClass(_.isDigit, "digit").? map { (d: Option[Char]) => d getOrElse '0' }
// Succeed if the input is "blue" and return the value 4
val blue = "blue" ^^^ 4
// The above is equivalent to:
val blueM = "blue" map { (s: String) => 4 }
Controlling tab completion
--------------------------
Most parsers have reasonable default tab completion behavior. For
example, the string and character literal parsers will suggest the
underlying literal for an empty input string. However, it is impractical
to determine the valid completions for ``charClass``, since it accepts
an arbitrary predicate. The ``examples`` method defines explicit
completions for such a parser:
::
val digit = charClass(_.isDigit, "digit").examples("0", "1", "2")
Tab completion will use the examples as suggestions. The other method
controlling tab completion is ``token``. The main purpose of ``token``
is to determine the boundaries for suggestions. For example, if your
parser is:
::
("fg" | "bg") ~ ' ' ~ ("green" | "blue")
then the potential completions on empty input are:
::
fg green
fg blue
bg green
bg blue
Typically, you want to suggest smaller segments; otherwise, the
number of suggestions becomes unmanageable. A better parser is:
::
token( ("fg" | "bg") ~ ' ') ~ token("green" | "blue")
Now, the initial suggestions would be (with \_ representing a space):
::
fg_
bg_
Be careful not to overlap or nest tokens, as in
``token("green" ~ token("blue"))``. The behavior is unspecified (and
should generate an error in the future), but typically the outermost
token definition will be used.
[java.io.File]: http://download.oracle.com/javase/6/docs/api/java/io/File.html
[java.io.FileFilter]: http://download.oracle.com/javase/6/docs/api/java/io/FileFilter.html
[RichFile]: http://harrah.github.com/xsbt/latest/api/sbt/RichFile.html
[PathFinder]: http://harrah.github.com/xsbt/latest/api/sbt/PathFinder.html
[Path]: http://harrah.github.com/xsbt/latest/api/sbt/Path$.html
[IO]: http://harrah.github.com/xsbt/latest/api/sbt/IO$.html
# Paths
This page describes files, sequences of files, and file filters. The base type used is [java.io.File], but several methods are augmented through implicits:
* [RichFile] adds methods to `File`
* [PathFinder] adds methods to `File` and `Seq[File]`
* [Path] and [IO] provide general methods related to files and I/O.
## Constructing a File
sbt 0.10+ uses [java.io.File] to represent a file instead of the custom `sbt.Path` class that was in sbt 0.7 and earlier.
sbt defines the alias `File` for `java.io.File` so that an extra import is not necessary.
The `file` method is an alias for the single-argument `File` constructor to simplify constructing a new file from a String:
```scala
val source: File = file("/home/user/code/A.scala")
```
Additionally, sbt augments File with a `/` method, which is an alias for the two-argument `File` constructor for building up a path:
```scala
def readme(base: File): File = base / "README"
```
Relative files should only be used when defining the base directory of a `Project`, where they will be resolved properly.
```scala
val root = Project("root", file("."))
```
Elsewhere, files should be absolute or be built up from an absolute base `File`. The `baseDirectory` setting defines the base directory of the build or project depending on the scope.
For example, the following setting sets the unmanaged library directory to be the "custom_lib" directory in a project's base directory:
```scala
unmanagedBase <<= baseDirectory( (base: File) => base /"custom_lib" )
```
Or, more concisely:
```scala
unmanagedBase <<= baseDirectory( _ /"custom_lib" )
```
This setting sets the location of the shell history to be in the base directory of the build, irrespective of the project the setting is defined in:
```scala
historyPath <<= (baseDirectory in ThisBuild)(t => Some(t / ".history")),
```
## Path Finders
A `PathFinder` computes a `Seq[File]` on demand. It is a way to build a sequence of files. There are several methods that augment `File` and `Seq[File]` to construct a `PathFinder`. Ultimately, call `get` on the resulting `PathFinder` to evaluate it and get back a `Seq[File]`.
### Selecting descendants
The `**` method accepts a `java.io.FileFilter` and selects all files matching that filter.
```scala
def scalaSources(base: File): PathFinder = (base / "src") ** "*.scala"
```
### get
This selects all files that end in `.scala` that are in `src` or a descendant directory. The list of files is not actually evaluated until `get` is called:
```scala
def scalaSources(base: File): Seq[File] = {
val finder: PathFinder = (base / "src") ** "*.scala"
finder.get
}
```
If the filesystem changes, a second call to `get` on the same `PathFinder` object will reflect the changes. That is, the `get` method reconstructs the list of files each time. Also, `get` only returns `File`s that existed at the time it was called.
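This re-evaluation can be sketched as follows (the directory name is illustrative):

```scala
val finder: PathFinder = file("src") ** "*.scala"
val before: Seq[File] = finder.get
// ... Scala sources are created or deleted on disk here ...
val after: Seq[File] = finder.get // recomputed: reflects the current filesystem state
```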
### Selecting children
Selecting files that are immediate children of a subdirectory is done with a single `*`:
```scala
def scalaSources(base: File): PathFinder = (base / "src") * "*.scala"
```
This selects all files that end in `.scala` that are in the `src` directory.
### Existing files only
If a selector, such as `/`, `**`, or `*`, is used on a path that does not represent a directory, the path list will be empty:
```scala
def emptyFinder(base: File) = (base / "lib" / "ivy.jar") * "not_possible"
```
### Name Filter
The argument to the child and descendant selectors `*` and `**` is actually a `NameFilter`. An implicit is used to convert a `String` to a `NameFilter` that interprets `*` to represent zero or more characters of any value. See the Name Filters section below for more information.
### Combining PathFinders
Another operation is concatenation of `PathFinder`s:
```scala
def multiPath(base: File): PathFinder =
(base / "src" / "main") +++
(base / "lib") +++
(base / "target" / "classes")
```
When evaluated using `get`, this will return `src/main/`, `lib/`, and `target/classes/`. The concatenated finder supports all standard methods. For example,
```scala
def jars(base: File): PathFinder =
(base / "lib" +++ base / "target") * "*.jar"
```
selects all jars directly in the "lib" and "target" directories.
A common problem is excluding version control directories. This can be accomplished as follows:
```scala
def sources(base: File) =
( (base / "src") ** "*.scala") --- ( (base / "src") ** ".svn" ** "*.scala")
```
The first selector selects all Scala sources and the second selects all sources that are a descendant of a `.svn` directory. The `---` method removes all files returned by the second selector from the sequence of files returned by the first selector.
### Filtering
There is a `filter` method that accepts a predicate of type `File => Boolean` and is non-strict:
```scala
// selects all directories under "src"
def srcDirs(base: File) = ( (base / "src") ** "*") filter { _.isDirectory }
// selects archives (.zip or .jar) that are selected by 'somePathFinder'
def archivesOnly(base: PathFinder) = base filter ClasspathUtilities.isArchive
```
### Empty PathFinder
`PathFinder.empty` is a `PathFinder` that returns the empty sequence when `get` is called:
```scala
assert( PathFinder.empty.get == Seq[File]() )
```
### PathFinder to String conversions
Convert a `PathFinder` to a String using one of the following methods:
* `toString` is for debugging. It puts the absolute path of each component on its own line.
* `absString` gets the absolute paths of each component and separates them by the platform's path separator.
* `getPaths` produces a `Seq[String]` containing the absolute paths of each component
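For example (a sketch; the `lib` directory is hypothetical):

```scala
val jars: PathFinder = file("lib") * "*.jar"
// Absolute paths joined by the platform's path separator,
// suitable for something like a -classpath argument
val classpath: String = jars.absString
// The individual absolute paths
val paths: Seq[String] = jars.getPaths
```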
### Mappings
The packaging and file copying methods in sbt expect values of type `Seq[(File,String)]` and `Seq[(File,File)]`, respectively.
These are mappings from the input file to its (String) path in the jar or its (File) destination.
This approach replaces the relative path approach (using the `##` method) from earlier versions of sbt.
Mappings are discussed in detail on the [[Mapping Files]] page.
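As a rough sketch of the shape these mappings take (the `res` directory and the use of `x` with `relativeTo` here are illustrative; see the [[Mapping Files]] page for the supported methods):

```scala
// Pair each file under "res" with its path relative to "res",
// producing the Seq[(File, String)] expected by packaging tasks
def resourceMappings(base: File): Seq[(File, String)] = {
  val files: PathFinder = (base / "res") ** "*"
  files x relativeTo(base / "res")
}
```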
## File Filters
The argument to `*` and `**` is of type [java.io.FileFilter].
sbt provides combinators for constructing `FileFilter`s.
First, a String may be implicitly converted to a `FileFilter`.
The resulting filter selects files with a name matching the string, with a `*` in the string interpreted as a wildcard.
For example, the following selects all Scala sources with the word "Test" in them:
```scala
def testSrcs(base: File): PathFinder = (base / "src") * "*Test*.scala"
```
There are some useful combinators added to `FileFilter`. The `||` method declares alternative `FileFilter`s. The following example selects all Java or Scala source files under "src":
```scala
def sources(base: File): PathFinder = (base / "src") ** ("*.scala" || "*.java")
```
The `--` method excludes files matching a second filter from the files matched by the first:
```scala
def imageResources(base: File): PathFinder =
(base/"src"/"main"/"resources") * ("*.png" -- "logo.png")
```
This will get `right.png` and `left.png`, but not `logo.png`, for example.
=====
Paths
=====
This page describes files, sequences of files, and file filters. The
base type used is
`java.io.File <http://download.oracle.com/javase/6/docs/api/java/io/File.html>`_,
but several methods are augmented through implicits:
- `RichFile <../../api/sbt/RichFile.html>`_
adds methods to ``File``
- `PathFinder <../../api/sbt/PathFinder.html>`_
adds methods to ``File`` and ``Seq[File]``
- `Path <../../api/sbt/Path$.html>`_ and
`IO <../../api/sbt/IO$.html>`_ provide
general methods related to files and I/O.
Constructing a File
-------------------
sbt 0.10+ uses
`java.io.File <http://download.oracle.com/javase/6/docs/api/java/io/File.html>`_
to represent a file instead of the custom ``sbt.Path`` class that was in
sbt 0.7 and earlier. sbt defines the alias ``File`` for ``java.io.File``
so that an extra import is not necessary. The ``file`` method is an
alias for the single-argument ``File`` constructor to simplify
constructing a new file from a String:
::
val source: File = file("/home/user/code/A.scala")
Additionally, sbt augments File with a ``/`` method, which is an alias
for the two-argument ``File`` constructor for building up a path:
::
def readme(base: File): File = base / "README"
Relative files should only be used when defining the base directory of a
``Project``, where they will be resolved properly.
::
val root = Project("root", file("."))
Elsewhere, files should be absolute or be built up from an absolute base
``File``. The ``baseDirectory`` setting defines the base directory of
the build or project depending on the scope.
For example, the following setting sets the unmanaged library directory
to be the "custom\_lib" directory in a project's base directory:
::
unmanagedBase <<= baseDirectory( (base: File) => base /"custom_lib" )
Or, more concisely:
::
unmanagedBase <<= baseDirectory( _ /"custom_lib" )
This setting sets the location of the shell history to be in the base
directory of the build, irrespective of the project the setting is
defined in:
::
historyPath <<= (baseDirectory in ThisBuild)(t => Some(t / ".history")),
Path Finders
------------
A ``PathFinder`` computes a ``Seq[File]`` on demand. It is a way to
build a sequence of files. There are several methods that augment
``File`` and ``Seq[File]`` to construct a ``PathFinder``. Ultimately,
call ``get`` on the resulting ``PathFinder`` to evaluate it and get back
a ``Seq[File]``.
Selecting descendants
~~~~~~~~~~~~~~~~~~~~~
The ``**`` method accepts a ``java.io.FileFilter`` and selects all files
matching that filter.
::
def scalaSources(base: File): PathFinder = (base / "src") ** "*.scala"
get
~~~
This selects all files that end in ``.scala`` that are in ``src`` or a
descendant directory. The list of files is not actually evaluated until
``get`` is called:
::
def scalaSources(base: File): Seq[File] = {
val finder: PathFinder = (base / "src") ** "*.scala"
finder.get
}
If the filesystem changes, a second call to ``get`` on the same
``PathFinder`` object will reflect the changes. That is, the ``get``
method reconstructs the list of files each time. Also, ``get`` only
returns ``File``\ s that existed at the time it was called.
Selecting children
~~~~~~~~~~~~~~~~~~
Selecting files that are immediate children of a subdirectory is done
with a single ``*``:
::
def scalaSources(base: File): PathFinder = (base / "src") * "*.scala"
This selects all files that end in ``.scala`` that are in the ``src``
directory.
Existing files only
~~~~~~~~~~~~~~~~~~~
If a selector, such as ``/``, ``**``, or ``*``, is used on a path that
does not represent a directory, the path list will be empty:
::
def emptyFinder(base: File) = (base / "lib" / "ivy.jar") * "not_possible"
Name Filter
~~~~~~~~~~~
The argument to the child and descendant selectors ``*`` and ``**`` is
actually a ``NameFilter``. An implicit is used to convert a ``String``
to a ``NameFilter`` that interprets ``*`` to represent zero or more
characters of any value. See the Name Filters section below for more
information.
Combining PathFinders
~~~~~~~~~~~~~~~~~~~~~
Another operation is concatenation of ``PathFinder``\ s:
::
def multiPath(base: File): PathFinder =
(base / "src" / "main") +++
(base / "lib") +++
(base / "target" / "classes")
When evaluated using ``get``, this will return ``src/main/``, ``lib/``,
and ``target/classes/``. The concatenated finder supports all standard
methods. For example,
::
def jars(base: File): PathFinder =
(base / "lib" +++ base / "target") * "*.jar"
selects all jars directly in the "lib" and "target" directories.
A common problem is excluding version control directories. This can be
accomplished as follows:
::
def sources(base: File) =
( (base / "src") ** "*.scala") --- ( (base / "src") ** ".svn" ** "*.scala")
The first selector selects all Scala sources and the second selects all
sources that are a descendant of a ``.svn`` directory. The ``---``
method removes all files returned by the second selector from the
sequence of files returned by the first selector.
Filtering
~~~~~~~~~
There is a ``filter`` method that accepts a predicate of type
``File => Boolean`` and is non-strict:
::
// selects all directories under "src"
def srcDirs(base: File) = ( (base / "src") ** "*") filter { _.isDirectory }
// selects archives (.zip or .jar) that are selected by 'somePathFinder'
def archivesOnly(base: PathFinder) = base filter ClasspathUtilities.isArchive
Empty PathFinder
~~~~~~~~~~~~~~~~
``PathFinder.empty`` is a ``PathFinder`` that returns the empty sequence
when ``get`` is called:
::
assert( PathFinder.empty.get == Seq[File]() )
PathFinder to String conversions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Convert a ``PathFinder`` to a String using one of the following methods:
- ``toString`` is for debugging. It puts the absolute path of each
component on its own line.
- ``absString`` gets the absolute paths of each component and separates
them by the platform's path separator.
- ``getPaths`` produces a ``Seq[String]`` containing the absolute paths
of each component
Mappings
~~~~~~~~
The packaging and file copying methods in sbt expect values of type
``Seq[(File,String)]`` and ``Seq[(File,File)]``, respectively. These are
mappings from the input file to its (String) path in the jar or its
(File) destination. This approach replaces the relative path approach
(using the ``##`` method) from earlier versions of sbt.
Mappings are discussed in detail on the :doc:`Mapping-Files` page.
File Filters
------------
The argument to ``*`` and ``**`` is of type
`java.io.FileFilter <http://download.oracle.com/javase/6/docs/api/java/io/FileFilter.html>`_.
sbt provides combinators for constructing ``FileFilter``\ s.
First, a String may be implicitly converted to a ``FileFilter``. The
resulting filter selects files with a name matching the string, with a
``*`` in the string interpreted as a wildcard. For example, the
following selects all Scala sources with the word "Test" in them:
::
def testSrcs(base: File): PathFinder = (base / "src") * "*Test*.scala"
There are some useful combinators added to ``FileFilter``. The ``||``
method declares alternative ``FileFilter``\ s. The following example
selects all Java or Scala source files under "src":
::
def sources(base: File): PathFinder = (base / "src") ** ("*.scala" || "*.java")
The ``--`` method excludes files matching a second filter from the
files matched by the first:
::
def imageResources(base: File): PathFinder =
(base/"src"/"main"/"resources") * ("*.png" -- "logo.png")
This will get ``right.png`` and ``left.png``, but not ``logo.png``, for
example.
[ProcessBuilder API]: http://harrah.github.com/xsbt/latest/api/sbt/ProcessBuilder.html
# External Processes
# Usage
`sbt` includes a process library to simplify working with external processes. The library is available without import in build definitions and at the interpreter started by the [[console-project|Console Project]] task.
To run an external command, follow it with an exclamation mark `!`:
```scala
"find project -name *.jar" !
```
An implicit converts the `String` to `sbt.ProcessBuilder`, which defines the `!` method. This method runs the constructed command, waits until the command completes, and returns the exit code. Alternatively, the `run` method defined on `ProcessBuilder` runs the command and returns an instance of `sbt.Process`, which can be used to `destroy` the process before it completes. With no arguments, the `!` method sends output to standard output and standard error. You can pass a `Logger` to the `!` method to send output to the `Logger`:
```scala
"find project -name *.jar" ! log
```
Two alternative implicit conversions are from `scala.xml.Elem` or `List[String]` to `sbt.ProcessBuilder`. These are useful for constructing commands. An example of the first variant from the android plugin:
```scala
<x> {dxPath.absolutePath} --dex --output={classesDexPath.absolutePath} {classesMinJarPath.absolutePath}</x> !
```
If you need to set the working directory or modify the environment, call `sbt.Process` explicitly, passing the command sequence (command and argument list) or command string first and the working directory second. Any environment variables can be passed as a vararg list of key/value String pairs.
```scala
Process("ls" :: "-l" :: Nil, Path.userHome, "key1" -> value1, "key2" -> value2) ! log
```
Operators are defined to combine commands. These operators start with `#` in order to keep the precedence the same and to separate them from the operators defined elsewhere in `sbt` for filters. In the following operator definitions, `a` and `b` are subcommands.
* `a #&& b` Execute `a`. If the exit code is nonzero, return that exit code and do not execute `b`. If the exit code is zero, execute `b` and return its exit code.
* `a #|| b` Execute `a`. If the exit code is zero, return zero for the exit code and do not execute `b`. If the exit code is nonzero, execute `b` and return its exit code.
* `a #| b` Execute `a` and `b`, piping the output of `a` to the input of `b`.
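A small sketch combining these operators (the commands are illustrative):

```scala
// Run the build; only if it succeeds, run the tests; report failure otherwise
"make" #&& "make test" #|| "echo build failed" !
```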
There are also operators defined for redirecting output to `File`s and input from `File`s and `URL`s. In the following definitions, `url` is an instance of `URL` and `file` is an instance of `File`.
* `a #< url` or `url #> a` Use `url` as the input to `a`. `a` may be a `File` or a command.
* `a #< file` or `file #> a` Use `file` as the input to `a`. `a` may be a `File` or a command.
* `a #> file` or `file #< a` Write the output of `a` to `file`. `a` may be a `File`, `URL`, or a command.
* `a #>> file` or `file #<< a` Append the output of `a` to `file`. `a` may be a `File`, `URL`, or a command.
There are some additional methods to get the output from a forked process into a `String` or the output lines as a `Stream[String]`. Here are some examples, but see the [ProcessBuilder API] for details.
```scala
val listed: String = "ls" !!
val lines2: Stream[String] = "ls" lines_!
```
Finally, there is a `cat` method to send the contents of `File`s and `URL`s to standard output.
## Examples
Download a `URL` to a `File`:
```scala
url("http://databinder.net/dispatch/About") #> file("About.html") !
or
file("About.html") #< url("http://databinder.net/dispatch/About") !
```
Copy a `File`:
```scala
file("About.html") #> file("About_copy.html") !
or
file("About_copy.html") #< file("About.html") !
```
Append the contents of a `URL` to a `File` after filtering through `grep`:
```scala
url("http://databinder.net/dispatch/About") #> "grep JSON" #>> file("About_JSON") !
or
file("About_JSON") #<< ( "grep JSON" #< url("http://databinder.net/dispatch/About") ) !
```
Search for uses of `null` in the source directory:
```scala
"find src -name *.scala -exec grep null {} ;" #| "xargs test -z" #&& "echo null-free" #|| "echo null detected" !
```
Use `cat`:
```scala
val spde = url("http://technically.us/spde/About")
val dispatch = url("http://databinder.net/dispatch/About")
val build = file("project/build.properties")
cat(spde, dispatch, build) #| "grep -i scala" !
```
==================
External Processes
==================
Usage
=====
``sbt`` includes a process library to simplify working with external
processes. The library is available without import in build definitions
and at the interpreter started by the :doc:`console-project <Console-Project>` task.
To run an external command, follow it with an exclamation mark ``!``:
::
"find project -name *.jar" !
An implicit converts the ``String`` to ``sbt.ProcessBuilder``, which
defines the ``!`` method. This method runs the constructed command,
waits until the command completes, and returns the exit code.
Alternatively, the ``run`` method defined on ``ProcessBuilder`` runs the
command and returns an instance of ``sbt.Process``, which can be used to
``destroy`` the process before it completes. With no arguments, the
``!`` method sends output to standard output and standard error. You can
pass a ``Logger`` to the ``!`` method to send output to the ``Logger``:
::
"find project -name *.jar" ! log
Two alternative implicit conversions are from ``scala.xml.Elem`` or
``List[String]`` to ``sbt.ProcessBuilder``. These are useful for
constructing commands. An example of the first variant from the android
plugin:
::
<x> {dxPath.absolutePath} --dex --output={classesDexPath.absolutePath} {classesMinJarPath.absolutePath}</x> !
If you need to set the working directory or modify the environment, call
``sbt.Process`` explicitly, passing the command sequence (command and
argument list) or command string first and the working directory second.
Any environment variables can be passed as a vararg list of key/value
String pairs.
::
Process("ls" :: "-l" :: Nil, Path.userHome, "key1" -> value1, "key2" -> value2) ! log
Operators are defined to combine commands. These operators start with
``#`` in order to keep the precedence the same and to separate them from
the operators defined elsewhere in ``sbt`` for filters. In the following
operator definitions, ``a`` and ``b`` are subcommands.
- ``a #&& b`` Execute ``a``. If the exit code is nonzero, return that
exit code and do not execute ``b``. If the exit code is zero, execute
``b`` and return its exit code.
- ``a #|| b`` Execute ``a``. If the exit code is zero, return zero for
the exit code and do not execute ``b``. If the exit code is nonzero,
execute ``b`` and return its exit code.
- ``a #| b`` Execute ``a`` and ``b``, piping the output of ``a`` to the
input of ``b``.
There are also operators defined for redirecting output to ``File``\ s
and input from ``File``\ s and ``URL``\ s. In the following definitions,
``url`` is an instance of ``URL`` and ``file`` is an instance of
``File``.
- ``a #< url`` or ``url #> a`` Use ``url`` as the input to ``a``. ``a``
may be a ``File`` or a command.
- ``a #< file`` or ``file #> a`` Use ``file`` as the input to ``a``.
``a`` may be a ``File`` or a command.
- ``a #> file`` or ``file #< a`` Write the output of ``a`` to ``file``.
``a`` may be a ``File``, ``URL``, or a command.
- ``a #>> file`` or ``file #<< a`` Append the output of ``a`` to
``file``. ``a`` may be a ``File``, ``URL``, or a command.
There are some additional methods to get the output from a forked
process into a ``String`` or the output lines as a ``Stream[String]``.
Here are some examples, but see the `ProcessBuilder
API <../../api/sbt/ProcessBuilder.html>`_
for details.
::
val listed: String = "ls" !!
val lines2: Stream[String] = "ls" lines_!
Finally, there is a ``cat`` method to send the contents of ``File``\ s
and ``URL``\ s to standard output.
Examples
--------
Download a ``URL`` to a ``File``:
::
url("http://databinder.net/dispatch/About") #> file("About.html") !
or
file("About.html") #< url("http://databinder.net/dispatch/About") !
Copy a ``File``:
::
file("About.html") #> file("About_copy.html") !
or
file("About_copy.html") #< file("About.html") !
Append the contents of a ``URL`` to a ``File`` after filtering through
``grep``:
::
url("http://databinder.net/dispatch/About") #> "grep JSON" #>> file("About_JSON") !
or
file("About_JSON") #<< ( "grep JSON" #< url("http://databinder.net/dispatch/About") ) !
Search for uses of ``null`` in the source directory:
::
"find src -name *.scala -exec grep null {} ;" #| "xargs test -z" #&& "echo null-free" #|| "echo null detected" !
Use ``cat``:
::
val spde = url("http://technically.us/spde/About")
val dispatch = url("http://databinder.net/dispatch/About")
val build = file("project/build.properties")
cat(spde, dispatch, build) #| "grep -i scala" !
# Publish
This page describes how to publish your project. Publishing consists of uploading a descriptor, such as an Ivy file or Maven POM, and artifacts, such as a jar or war, to a repository so that other projects can specify your project as a dependency.
The `publish` action is used to publish your project to a remote repository. To use publishing, you need to specify the repository to publish to and the credentials to use. Once these are set up, you can run `publish`.
The `publish-local` action is used to publish your project to a local Ivy repository. You can then use this project from other projects on the same machine.
## Define the repository
To specify the repository, assign a repository to `publishTo` and optionally set the publishing style. For example, to upload to Nexus:
```scala
publishTo := Some("Sonatype Snapshots Nexus" at "https://oss.sonatype.org/content/repositories/snapshots")
```
To publish to a local repository:
```scala
publishTo := Some(Resolver.file("file", new File( "path/to/my/maven-repo/releases" )) )
```
Publishing to the user's local Maven repository:
```scala
publishTo := Some(Resolver.file("file", new File(Path.userHome.absolutePath+"/.m2/repository")))
```
If you're using Maven repositories, you will also have to select the right repository depending on your artifacts: SNAPSHOT versions go to the `/snapshots` repository, while other versions go to the `/releases` repository. This selection can be made using the value of the `version` SettingKey:
```scala
publishTo <<= version { (v: String) =>
val nexus = "https://oss.sonatype.org/"
if (v.trim.endsWith("SNAPSHOT"))
Some("snapshots" at nexus + "content/repositories/snapshots")
else
Some("releases" at nexus + "service/local/staging/deploy/maven2")
}
```
## Credentials
There are two ways to specify credentials for such a repository. The first is to specify them inline:
```scala
credentials += Credentials("Sonatype Nexus Repository Manager", "nexus.scala-tools.org", "admin", "admin123")
```
The second and better way is to load them from a file, for example:
```scala
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
```
The credentials file is a properties file with keys `realm`, `host`, `user`, and `password`. For example:
```text
realm=Sonatype Nexus Repository Manager
host=nexus.scala-tools.org
user=admin
password=admin123
```
## Cross-publishing
To support multiple incompatible Scala versions, enable cross building and do `+ publish` (see [[Cross Build]]). See [[Resolvers]] for other supported repository types.
## Published artifacts
By default, the main binary jar, a sources jar, and an API documentation jar are published. You can declare other types of artifacts to publish and disable or modify the default artifacts. See the [[Artifacts]] page for details.
## Modifying the generated POM
When `publish-maven-style` is `true`, a POM is generated by the `make-pom` action and published to the repository instead of an Ivy file. This POM file may be altered by changing a few settings. Set `pom-extra` to provide XML (`scala.xml.NodeSeq`) to insert directly into the generated POM. For example:
```scala
pomExtra :=
<licenses>
<license>
<name>Apache 2</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>
```
`make-pom` adds to the POM any Maven-style repositories you have declared. You can filter these by modifying `pom-repository-filter`, which by default excludes local repositories. To instead only include local repositories:
```scala
pomIncludeRepository := { (repo: MavenRepository) =>
repo.root.startsWith("file:")
}
```
There is also a `pom-post-process` setting that can be used to manipulate the final XML before it is written. Its type is `Node => Node`.
```scala
pomPostProcess := { (node: Node) =>
...
}
```
## Publishing Locally
The `publish-local` command will publish to the local Ivy repository. By default, this is in `${user.home}/.ivy2/local`. Other projects on the same machine can then list the project as a dependency. For example, if the SBT project you are publishing has configuration parameters like:
```
name := "My Project"
organization := "org.me"
version := "0.1-SNAPSHOT"
```
Then another project can depend on it:
```
libraryDependencies += "org.me" %% "my-project" % "0.1-SNAPSHOT"
```
The version number you select must end with `SNAPSHOT`, or you must change the version number each time you publish. Ivy maintains a cache, and it stores even local projects in that cache. If Ivy already has a version cached, it will not check the local repository for updates, unless the version number matches a [changing pattern](http://ant.apache.org/ivy/history/2.0.0/concept.html#change), and `SNAPSHOT` is one such pattern.
@ -0,0 +1,167 @@
==========
Publishing
==========
This page describes how to publish your project. Publishing consists of
uploading a descriptor, such as an Ivy file or Maven POM, and artifacts,
such as a jar or war, to a repository so that other projects can specify
your project as a dependency.
The ``publish`` action is used to publish your project to a remote
repository. To use publishing, you need to specify the repository to
publish to and the credentials to use. Once these are set up, you can
run ``publish``.
The ``publish-local`` action is used to publish your project to a local
Ivy repository. You can then use this project from other projects on the
same machine.
Define the repository
---------------------
To specify the repository, assign a repository to ``publishTo`` and
optionally set the publishing style. For example, to upload to Nexus:
::
publishTo := Some("Sonatype Snapshots Nexus" at "https://oss.sonatype.org/content/repositories/snapshots")
To publish to a local repository:
::
publishTo := Some(Resolver.file("file", new File( "path/to/my/maven-repo/releases" )) )
Publishing to the user's local Maven repository:
::
publishTo := Some(Resolver.file("file", new File(Path.userHome.absolutePath+"/.m2/repository")))
If you're using Maven repositories, you will also have to select the
right repository depending on your artifacts: SNAPSHOT versions go to
the ``/snapshots`` repository, while other versions go to the
``/releases`` repository. This selection can be made using the value of
the ``version`` SettingKey:
::
publishTo <<= version { (v: String) =>
val nexus = "https://oss.sonatype.org/"
if (v.trim.endsWith("SNAPSHOT"))
Some("snapshots" at nexus + "content/repositories/snapshots")
else
Some("releases" at nexus + "service/local/staging/deploy/maven2")
}
Credentials
-----------
There are two ways to specify credentials for such a repository. The
first is to specify them inline:
::
credentials += Credentials("Sonatype Nexus Repository Manager", "nexus.scala-tools.org", "admin", "admin123")
The second and better way is to load them from a file, for example:
::
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
The credentials file is a properties file with keys ``realm``, ``host``,
``user``, and ``password``. For example:
::
realm=Sonatype Nexus Repository Manager
host=nexus.scala-tools.org
user=admin
password=admin123
Cross-publishing
----------------
To support multiple incompatible Scala versions, enable cross building
and do ``+ publish`` (see :doc:`Cross-Build`). See :doc:`Resolvers` for other
supported repository types.
Published artifacts
-------------------
By default, the main binary jar, a sources jar, and an API documentation
jar are published. You can declare other types of artifacts to publish
and disable or modify the default artifacts. See the :doc:`Artifacts` page
for details.
Modifying the generated POM
---------------------------
When ``publish-maven-style`` is ``true``, a POM is generated by the
``make-pom`` action and published to the repository instead of an Ivy
file. This POM file may be altered by changing a few settings. Set
``pom-extra`` to provide XML (``scala.xml.NodeSeq``) to insert directly
into the generated POM. For example:
::
pomExtra :=
<licenses>
<license>
<name>Apache 2</name>
<url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
<distribution>repo</distribution>
</license>
</licenses>
``make-pom`` adds to the POM any Maven-style repositories you have
declared. You can filter these by modifying ``pom-repository-filter``,
which by default excludes local repositories. To instead only include
local repositories:
::
pomIncludeRepository := { (repo: MavenRepository) =>
repo.root.startsWith("file:")
}
There is also a ``pom-post-process`` setting that can be used to
manipulate the final XML before it is written. Its type is
``Node => Node``.
::
pomPostProcess := { (node: Node) =>
...
}
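As a concrete (hypothetical) example, the transformation below drops any
``<repositories>`` section from the generated POM. It is a sketch using
the standard ``scala.xml.transform`` API; the decision to strip that
section is invented for illustration, not a recommendation from sbt:

::

    import scala.xml.{ Node, NodeSeq }
    import scala.xml.transform.{ RewriteRule, RuleTransformer }

    // Hypothetical example: remove any <repositories> section from the POM.
    pomPostProcess := { (node: Node) =>
      new RuleTransformer(new RewriteRule {
        override def transform(n: Node): NodeSeq =
          if (n.label == "repositories") NodeSeq.Empty else n
      }).transform(node).head
    }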
Publishing Locally
------------------
The ``publish-local`` command will publish to the local Ivy repository.
By default, this is in ``${user.home}/.ivy2/local``. Other projects on
the same machine can then list the project as a dependency. For example,
if the sbt project you are publishing has configuration parameters like:
::
name := "My Project"
organization := "org.me"
version := "0.1-SNAPSHOT"
Then another project can depend on it:
::
libraryDependencies += "org.me" %% "my-project" % "0.1-SNAPSHOT"
The version number you select must end with ``SNAPSHOT``, or you must
change the version number each time you publish. Ivy maintains a cache,
and it stores even local projects in that cache. If Ivy already has a
version cached, it will not check the local repository for updates,
unless the version number matches a `changing
pattern <http://ant.apache.org/ivy/history/2.0.0/concept.html#change>`_,
and ``SNAPSHOT`` is one such pattern.
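If you must repeatedly republish a fixed (non-``SNAPSHOT``) version
during development, an alternative sketch is to mark the dependency as
changing on the consuming side so that Ivy re-checks it. The
``changing()`` method shown here should be verified against your sbt
version:

::

    // Ask Ivy to re-check this cached dependency for updates.
    libraryDependencies += "org.me" %% "my-project" % "0.1" changing()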
@ -1,163 +0,0 @@
[patterns]: http://ant.apache.org/ivy/history/latest-milestone/concept.html#patterns
[Patterns API]: http://harrah.github.com/xsbt/latest/api/sbt/Patterns$.html
[Ivy filesystem]: http://ant.apache.org/ivy/history/latest-milestone/resolver/filesystem.html (Ivy)
[filesystem factory]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver$$file$.html
[FileRepository API]: http://harrah.github.com/xsbt/latest/api/sbt/FileRepository.html
[Ivy sftp]: http://ant.apache.org/ivy/history/latest-milestone/resolver/sftp.html
[sftp factory]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver$$Define.html
[SftpRepository API]: http://harrah.github.com/xsbt/latest/api/sbt/SftpRepository.html
[Ivy ssh]: http://ant.apache.org/ivy/history/latest-milestone/resolver/ssh.html
[ssh factory]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver$$Define.html
[SshRepository API]: http://harrah.github.com/xsbt/latest/api/sbt/SshRepository.html
[Ivy url]: http://ant.apache.org/ivy/history/latest-milestone/resolver/url.html
[url factory]: http://harrah.github.com/xsbt/latest/api/sbt/Resolver$$url$.html
[URLRepository API]: http://harrah.github.com/xsbt/latest/api/sbt/URLRepository.html
# Resolvers
## Maven
Resolvers for Maven2 repositories are added as follows:
```scala
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
```
This is the most common kind of user-defined resolver. The rest of this page describes how to define other types of repositories.
## Predefined
A few predefined repositories are available and are listed below
* `DefaultMavenRepository`
This is the main Maven repository at [[http://repo1.maven.org/maven2/]] and is included by default
* `JavaNet1Repository`
This is the Maven 1 repository at [[http://download.java.net/maven/1/]]
For example, to use the `java.net` repository, use the following setting in your build definition:
```scala
resolvers += JavaNet1Repository
```
Predefined repositories will go under Resolver going forward so they are in one place:
```scala
Resolver.sonatypeRepo("releases") // Or "snapshots"
```
See: [[https://github.com/harrah/xsbt/blob/e9bfcdfc5895a8fbde89179289430d4ffccfb7ed/ivy/IvyInterface.scala#L209]]
## Custom
sbt provides an interface to the repository types available in Ivy: file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using [patterns] to configure repositories.
Construct a repository definition using the factory in `sbt.Resolver` for the desired type. This factory creates a `Repository` object that can be further configured. The following table contains links to the Ivy documentation for the repository type and the API documentation for the factory and repository class. The SSH and SFTP repositories are configured identically except for the name of the factory. Use `Resolver.ssh` for SSH and `Resolver.sftp` for SFTP.
Type | Factory | Ivy Docs | Factory API | Repository Class API
-----|---------|----------|-------------|---------------------:
Filesystem | `Resolver.file` | [Ivy filesystem] | [filesystem factory] | [FileRepository API]
SFTP | `Resolver.sftp` | [Ivy sftp] | [sftp factory] | [SftpRepository API]
SSH | `Resolver.ssh` | [Ivy ssh] | [ssh factory] | [SshRepository API]
URL | `Resolver.url` | [Ivy url] | [url factory] | [URLRepository API]
### Basic Examples
These are basic examples that use the default Maven-style repository layout.
#### Filesystem
Define a filesystem repository in the `test` directory of the current working directory and declare that publishing to this repository must be atomic.
```scala
resolvers += Resolver.file("my-test-repo", file("test")) transactional()
```
#### URL
Define a URL repository at `"http://example.org/repo-releases/"`.
```scala
resolvers += Resolver.url("my-test-repo", url("http://example.org/repo-releases/"))
```
To specify an Ivy repository, use:
```scala
resolvers += Resolver.url("my-test-repo", url)(Resolver.ivyStylePatterns)
```
or customize the layout pattern described in the Custom Layout section below.
#### SFTP and SSH Repositories
The following defines a repository that is served by SFTP from host `"example.org"`:
```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org")
```
To explicitly specify the port:
```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org", 22)
```
To specify a base path:
```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org", "maven2/repo-releases/")
```
Authentication for the repositories returned by `sftp` and `ssh` can be configured by the `as` methods.
To use password authentication:
```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password")
```
or to be prompted for the password:
```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user")
```
To use key authentication:
```scala
resolvers += {
val keyFile: File = ...
Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile, "keyFilePassword")
}
```
or if no keyfile password is required or if you want to be prompted for it:
```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile)
```
To specify the permissions used when publishing to the server:
```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") withPermissions("0644")
```
This is a chmod-like mode specification.
### Custom Layout
These examples specify custom repository layouts using patterns. The factory methods accept a `Patterns` instance that defines the patterns to use. The patterns are first resolved against the base file or URL. The default patterns give the default Maven-style layout. Provide a different `Patterns` object to use a different layout. For example:
```scala
resolvers += Resolver.url("my-test-repo", url)( Patterns("[organisation]/[module]/[revision]/[artifact].[ext]") )
```
You can specify multiple patterns or patterns for the metadata and artifacts separately. You can also specify whether the repository should be Maven compatible (as defined by Ivy). See the [patterns API] for the methods to use.
For filesystem and URL repositories, you can specify absolute patterns by omitting the base URL, passing an empty `Patterns` instance, and using `ivys` and `artifacts`:
```scala
resolvers += Resolver.url("my-test-repo") artifacts
"http://example.org/[organisation]/[module]/[revision]/[artifact].[ext]"
```
@ -0,0 +1,200 @@
=========
Resolvers
=========
Maven
-----
Resolvers for Maven2 repositories are added as follows:
::

    resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
This is the most common kind of user-defined resolver. The rest of this
page describes how to define other types of repositories.
Predefined
----------
A few predefined repositories are available and are listed below:

- ``DefaultMavenRepository`` This is the main Maven repository at
  http://repo1.maven.org/maven2/ and is included by default.
- ``JavaNet1Repository`` This is the Maven 1 repository at
  http://download.java.net/maven/1/.
For example, to use the ``java.net`` repository, use the following
setting in your build definition:
::
resolvers += JavaNet1Repository
Going forward, predefined repositories will be collected under the
``Resolver`` object so they are in one place:
::
Resolver.sonatypeRepo("releases") // Or "snapshots"
Custom
------
sbt provides an interface to the repository types available in Ivy:
file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using
`patterns <http://ant.apache.org/ivy/history/latest-milestone/concept.html#patterns>`_
to configure repositories.
Construct a repository definition using the factory in ``sbt.Resolver``
for the desired type. This factory creates a ``Repository`` object that
can be further configured. The following table contains links to the Ivy
documentation for the repository type and the API documentation for the
factory and repository class. The SSH and SFTP repositories are
configured identically except for the name of the factory. Use
``Resolver.ssh`` for SSH and ``Resolver.sftp`` for SFTP.
.. _Ivy filesystem: http://ant.apache.org/ivy/history/latest-milestone/resolver/filesystem.html
.. _filesystem factory: ../../api/sbt/Resolver$$file$.html
.. _Ivy sftp: http://ant.apache.org/ivy/history/latest-milestone/resolver/sftp.html
.. _FileRepository API: ../../api/sbt/FileRepository.html
.. _sftp factory: ../../api/sbt/Resolver$$Define.html
.. _SftpRepository API: ../../api/sbt/SftpRepository.html
.. _Ivy ssh: http://ant.apache.org/ivy/history/latest-milestone/resolver/ssh.html
.. _ssh factory: ../../api/sbt/Resolver$$Define.html
.. _SshRepository API: ../../api/sbt/SshRepository.html
.. _Ivy url: http://ant.apache.org/ivy/history/latest-milestone/resolver/url.html
.. _url factory: ../../api/sbt/Resolver$$url$.html
.. _URLRepository API: ../../api/sbt/URLRepository.html
========== ================= ================= ===================== =====================
Type Factory Ivy Docs Factory API Repository Class API
========== ================= ================= ===================== =====================
Filesystem ``Resolver.file`` `Ivy filesystem`_ `filesystem factory`_ `FileRepository API`_
SFTP ``Resolver.sftp`` `Ivy sftp`_ `sftp factory`_ `SftpRepository API`_
SSH ``Resolver.ssh`` `Ivy ssh`_ `ssh factory`_ `SshRepository API`_
URL ``Resolver.url`` `Ivy url`_ `url factory`_ `URLRepository API`_
========== ================= ================= ===================== =====================
Basic Examples
~~~~~~~~~~~~~~
These are basic examples that use the default Maven-style repository
layout.
Filesystem
^^^^^^^^^^
Define a filesystem repository in the ``test`` directory of the current
working directory and declare that publishing to this repository must be
atomic.
::
resolvers += Resolver.file("my-test-repo", file("test")) transactional()
URL
^^^
Define a URL repository at ``"http://example.org/repo-releases/"``.
::
resolvers += Resolver.url("my-test-repo", url("http://example.org/repo-releases/"))
To specify an Ivy repository, use:
::
resolvers += Resolver.url("my-test-repo", url)(Resolver.ivyStylePatterns)
or customize the layout pattern described in the Custom Layout section
below.
SFTP and SSH Repositories
^^^^^^^^^^^^^^^^^^^^^^^^^
The following defines a repository that is served by SFTP from host
``"example.org"``:
::
resolvers += Resolver.sftp("my-sftp-repo", "example.org")
To explicitly specify the port:
::
resolvers += Resolver.sftp("my-sftp-repo", "example.org", 22)
To specify a base path:
::
resolvers += Resolver.sftp("my-sftp-repo", "example.org", "maven2/repo-releases/")
Authentication for the repositories returned by ``sftp`` and ``ssh`` can
be configured by the ``as`` methods.
To use password authentication:
::
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password")
or to be prompted for the password:
::
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user")
To use key authentication:
::
resolvers += {
val keyFile: File = ...
Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile, "keyFilePassword")
}
or if no keyfile password is required or if you want to be prompted for
it:
::
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile)
To specify the permissions used when publishing to the server:
::
resolvers += Resolver.ssh("my-ssh-repo", "example.org") withPermissions("0644")
This is a chmod-like mode specification.
Custom Layout
~~~~~~~~~~~~~
These examples specify custom repository layouts using patterns. The
factory methods accept a ``Patterns`` instance that defines the
patterns to use. The patterns are first resolved against the base file
or URL. The default patterns give the default Maven-style layout.
Provide a different ``Patterns`` object to use a different layout. For
example:
::
resolvers += Resolver.url("my-test-repo", url)( Patterns("[organisation]/[module]/[revision]/[artifact].[ext]") )
You can specify multiple patterns or patterns for the metadata and
artifacts separately. You can also specify whether the repository should
be Maven compatible (as defined by Ivy). See the `patterns
API <../../api/sbt/Patterns$.html>`_ for
the methods to use.
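For instance, here is a sketch of a layout that resolves Ivy metadata
and artifacts with different patterns. The exact ``Patterns``
constructor arguments (metadata patterns, artifact patterns, and the
Maven-compatibility flag) should be checked against the Patterns API
for your sbt version:

::

    resolvers += Resolver.url("my-test-repo", url("http://example.org/repo/"))(
      Patterns(
        Seq("[organisation]/[module]/[revision]/ivy.xml"),
        Seq("[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"),
        false
      )
    )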
For filesystem and URL repositories, you can specify absolute patterns
by omitting the base URL, passing an empty ``Patterns`` instance, and
using ``ivys`` and ``artifacts``:
::
resolvers += Resolver.url("my-test-repo") artifacts
"http://example.org/[organisation]/[module]/[revision]/[artifact].[ext]"
@ -1,46 +0,0 @@
# Running Project Code
The `run` and `console` actions provide a means for running user code in the same virtual machine as sbt. This page describes the problems with doing so, how sbt handles these problems, what types of code can use this feature, and what types of code must use a [[forked jvm|Forking]]. Skip to User Code if you just want to see when you should use a [[forked jvm|Forking]].
# Problems
## System.exit
User code can call `System.exit`, which normally shuts down the JVM. Because the `run` and `console` actions run inside the same JVM as sbt, this also ends the build and requires restarting sbt.
## Threads
User code can also start other threads. Threads can be left running after the main method returns. In particular, creating a GUI creates several threads, some of which may not terminate until the JVM terminates. The program is not completed until either `System.exit` is called or all non-daemon threads terminate.
## Deserialization and class loading
During deserialization, the wrong class loader might be used for various complex reasons. This can happen in many scenarios, and running under SBT is just one of them. This is discussed for instance in [issue #163](https://github.com/harrah/xsbt/issues/163), [#136](https://github.com/harrah/xsbt/issues/136). The reason is explained [here](http://jira.codehaus.org/browse/GROOVY-1627?focusedCommentId=85900#comment-85900).
# sbt's Solutions
## System.exit
User code is run with a custom `SecurityManager` that throws a custom `SecurityException` when `System.exit` is called. This exception is caught by sbt. sbt then disposes of all top-level windows, interrupts (not stops) all user-created threads, and handles the exit code. If the exit code is nonzero, `run` and `console` complete unsuccessfully. If the exit code is zero, they complete normally.
## Threads
sbt makes a list of all threads running before executing user code. After the user code returns, sbt can then determine the threads created by the user code. For each user-created thread, sbt replaces the uncaught exception handler with a custom one that handles the custom `SecurityException` thrown by calls to `System.exit` and delegates to the original handler for everything else. sbt then waits for each created thread to exit or for `System.exit` to be called. sbt handles a call to `System.exit` as described above.
A user-created thread is one that is not in the `system` thread group and is not an `AWT` implementation thread (e.g. `AWT-XAWT`, `AWT-Windows`). User-created threads include the `AWT-EventQueue-*` thread(s).
# User Code
Given the above, when can user code be run with the `run` and `console` actions?
The user code cannot rely on shutdown hooks and at least one of the following situations must apply for user code to run in the same JVM:
1. User code creates no threads.
2. User code creates a GUI and no other threads.
3. The program ends when user-created threads terminate on their own.
4. `System.exit` is used to end the program and user-created threads terminate when interrupted.
5. No deserialization is done, or the deserialization code ensures that the right class loader is used, as in
https://github.com/NetLogo/NetLogo/blob/master/src/main/org/nlogo/util/ClassLoaderObjectInputStream.scala or
https://github.com/scala/scala/blob/master/src/actors/scala/actors/remote/JavaSerializer.scala#L20.
The requirements on threading and shutdown hooks are required because the JVM does not actually shut down. So, shutdown hooks cannot be run and threads are not terminated unless they stop when interrupted. If these requirements are not met, code must run in a [[forked jvm|Forking]].
The feature of allowing `System.exit` and multiple threads to be used cannot completely emulate the situation of running in a separate JVM and is intended for development. Program execution should be checked in a [[forked jvm|Forking]] when using multiple threads or `System.exit`.
@ -0,0 +1,100 @@
====================
Running Project Code
====================
The ``run`` and ``console`` actions provide a means for running user
code in the same virtual machine as sbt. This page describes the
problems with doing so, how sbt handles these problems, what types of
code can use this feature, and what types of code must use a
:doc:`forked JVM <Forking>`. Skip to User Code if you just want to see
when you should use a :doc:`forked JVM <Forking>`.
Problems
========
System.exit
-----------
User code can call ``System.exit``, which normally shuts down the JVM.
Because the ``run`` and ``console`` actions run inside the same JVM as
sbt, this also ends the build and requires restarting sbt.
Threads
-------
User code can also start other threads. Threads can be left running
after the main method returns. In particular, creating a GUI creates
several threads, some of which may not terminate until the JVM
terminates. The program is not completed until either ``System.exit`` is
called or all non-daemon threads terminate.
Deserialization and class loading
---------------------------------
During deserialization, the wrong class loader might be used for various
complex reasons. This can happen in many scenarios, and running under
sbt is just one of them. This is discussed for instance in issues :issue:`163` and
:issue:`136`. The reason is
explained
`here <http://jira.codehaus.org/browse/GROOVY-1627?focusedCommentId=85900#comment-85900>`_.
sbt's Solutions
===============
System.exit
-----------
User code is run with a custom ``SecurityManager`` that throws a custom
``SecurityException`` when ``System.exit`` is called. This exception is
caught by sbt. sbt then disposes of all top-level windows, interrupts
(not stops) all user-created threads, and handles the exit code. If the
exit code is nonzero, ``run`` and ``console`` complete unsuccessfully.
If the exit code is zero, they complete normally.
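The trap-exit mechanism can be sketched as follows. This is
illustrative only, not sbt's actual implementation; the class and
method names are invented for the example:

::

    // Illustrative only: convert System.exit into a catchable exception.
    class TrapExitException(val code: Int) extends SecurityException

    class TrapExitSecurityManager extends SecurityManager {
      override def checkExit(status: Int): Unit = throw new TrapExitException(status)
      // Permit everything else so user code runs unhindered.
      override def checkPermission(perm: java.security.Permission): Unit = ()
    }

    def runTrappingExit(body: => Unit): Int = {
      val previous = System.getSecurityManager
      System.setSecurityManager(new TrapExitSecurityManager)
      try { body; 0 }
      catch { case e: TrapExitException => e.code }
      finally System.setSecurityManager(previous)
    }

With this sketch, ``runTrappingExit { System.exit(3) }`` returns ``3``
instead of shutting down the JVM.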
Threads
-------
sbt makes a list of all threads running before executing user code.
After the user code returns, sbt can then determine the threads created
by the user code. For each user-created thread, sbt replaces the
uncaught exception handler with a custom one that handles the custom
``SecurityException`` thrown by calls to ``System.exit`` and delegates
to the original handler for everything else. sbt then waits for each
created thread to exit or for ``System.exit`` to be called. sbt handles
a call to ``System.exit`` as described above.
A user-created thread is one that is not in the ``system`` thread group
and is not an ``AWT`` implementation thread (e.g. ``AWT-XAWT``,
``AWT-Windows``). User-created threads include the ``AWT-EventQueue-*``
thread(s).
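The before/after bookkeeping described here can be sketched as follows
(again illustrative, not sbt's code):

::

    import scala.collection.JavaConverters._

    // Snapshot of all live threads, taken from the JVM's stack trace map.
    def liveThreads(): Set[Thread] =
      Thread.getAllStackTraces.keySet.asScala.toSet

    val before = liveThreads()
    // ... user code would run here ...
    val createdByUserCode = liveThreads() -- before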
User Code
=========
Given the above, when can user code be run with the ``run`` and
``console`` actions?
The user code cannot rely on shutdown hooks and at least one of the
following situations must apply for user code to run in the same JVM:
1. User code creates no threads.
2. User code creates a GUI and no other threads.
3. The program ends when user-created threads terminate on their own.
4. ``System.exit`` is used to end the program and user-created threads
terminate when interrupted.
5. No deserialization is done, or the deserialization code ensures
   that the right class loader is used, as in
https://github.com/NetLogo/NetLogo/blob/master/src/main/org/nlogo/util/ClassLoaderObjectInputStream.scala
or
https://github.com/scala/scala/blob/master/src/actors/scala/actors/remote/JavaSerializer.scala#L20.
These threading and shutdown hook requirements exist because the JVM
does not actually shut down: shutdown hooks cannot be run, and threads
are not terminated unless they stop when interrupted. If these
requirements are not met, code must run in a :doc:`forked JVM <Forking>`.
The feature of allowing ``System.exit`` and multiple threads to be used
cannot completely emulate the situation of running in a separate JVM and
is intended for development. Program execution should be checked in a
:doc:`forked JVM <Forking>` when using multiple threads or ``System.exit``.
@ -1,125 +0,0 @@
[IvyConsole]: http://harrah.github.com/xsbt/latest/sxr/IvyConsole.scala.html
[conscript]: https://github.com/n8han/conscript
[setup script]: https://github.com/paulp/xsbtscript
# Scripts, REPL, and Dependencies
sbt has two alternative entry points that may be used to:
* Compile and execute a Scala script containing dependency declarations or other sbt settings
* Start up the Scala REPL, defining the dependencies that should be on the classpath
These entry points should be considered experimental. A notable disadvantage of these approaches is the startup time involved.
# Setup
To set up these entry points, you can either use [conscript] or manually construct the startup scripts.
In addition, there is a [setup script] for the script mode that only requires a JRE installed.
## Setup with Conscript
Install [conscript].
```
cs harrah/xsbt --branch 0.12.0
```
This will create two scripts: `screpl` and `scalas`.
## Manual Setup
Duplicate your standard `sbt` script, which was set up according to [[Setup|Getting Started Setup]], as `scalas` and `screpl` (or whatever names you like).
`scalas` is the script runner and should use `sbt.ScriptMain` as the main class, by adding the `-Dsbt.main.class=sbt.ScriptMain` parameter to the `java` command. Its command line should look like:
```text
java -Dsbt.main.class=sbt.ScriptMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
```
For the REPL runner `screpl`, use `sbt.ConsoleMain` as the main class:
```text
java -Dsbt.main.class=sbt.ConsoleMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
```
In each case, `/home/user/.sbt/boot` should be replaced with wherever you want sbt's boot directory to be; you might also need to give more memory to the JVM via `-Xms512M -Xmx1536M` or similar options, just like shown in [[Setup|Getting Started Setup]].
# Usage
## sbt Script runner
The script runner can run a standard Scala script, but with the additional ability to configure sbt.
sbt settings may be embedded in the script in a comment block that opens with `/***`.
### Example
Copy the following script and make it executable.
You may need to adjust the first line depending on your script name and operating system.
When run, the example should retrieve Scala, the required dependencies, compile the script, and run it directly.
For example, if you name it `dispatch_example.scala`, you would do on Unix:
```
chmod u+x dispatch_example.scala
./dispatch_example.scala
```
```scala
#!/usr/bin/env scalas
!#
/***
scalaVersion := "2.9.0-1"

libraryDependencies ++= Seq(
  "net.databinder" %% "dispatch-twitter" % "0.8.3",
  "net.databinder" %% "dispatch-http" % "0.8.3"
)
*/
import dispatch.{ json, Http, Request }
import dispatch.twitter.Search
import json.{ Js, JsObject }
def process(param: JsObject) = {
  val Search.text(txt) = param
  val Search.from_user(usr) = param
  val Search.created_at(time) = param
  "(" + time + ")" + usr + ": " + txt
}
Http.x((Search("#scala") lang "en") ~> (_ map process foreach println))
```
## sbt REPL with dependencies
The arguments to the REPL mode configure the dependencies to use when starting up the REPL.
An argument may be either a jar to include on the classpath, a dependency definition to retrieve and put on the classpath, or a resolver to use when retrieving dependencies.
A dependency definition looks like:
```text
organization%module%revision
```
Or, for a cross-built dependency:
```text
organization%%module%revision
```
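The two forms differ only in the `%%` separator, which marks a cross-built module (sbt appends the Scala version to the module name). As a rough illustration of the syntax, a hypothetical parser (not part of sbt) might look like:

```scala
// Hypothetical helper, not part of sbt: splits the dependency syntax above.
// A double % yields an empty field when splitting, marking a cross-built module.
def parseDep(s: String): (String, String, String, Boolean) =
  s.split("%") match {
    case Array(org, "", module, rev) => (org, module, rev, true)  // organization%%module%revision
    case Array(org, module, rev)     => (org, module, rev, false) // organization%module%revision
    case _                           => sys.error("malformed dependency: " + s)
  }

assert(parseDep("org.scalaz%%scalaz-core%7.0-SNAPSHOT") ==
  ("org.scalaz", "scalaz-core", "7.0-SNAPSHOT", true))
```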
A repository argument looks like:
```text
"id at url"
```
### Example:
To add the Sonatype snapshots repository and add Scalaz 7.0-SNAPSHOT to REPL classpath:
```text
screpl "sonatype-snapshots at https://oss.sonatype.org/content/repositories/snapshots/" "org.scalaz%%scalaz-core%7.0-SNAPSHOT"
```
This syntax was a quick hack. Feel free to improve it. The relevant class is [IvyConsole].


@ -0,0 +1,152 @@
===============================
Scripts, REPL, and Dependencies
===============================
sbt has two alternative entry points that may be used to:
- Compile and execute a Scala script containing dependency declarations
or other sbt settings
- Start up the Scala REPL, defining the dependencies that should be on
the classpath
These entry points should be considered experimental. A notable
disadvantage of these approaches is the startup time involved.
Setup
=====
To set up these entry points, you can either use
`conscript <https://github.com/n8han/conscript>`_ or manually construct
the startup scripts. In addition, there is a `setup
script <https://github.com/paulp/xsbtscript>`_ for the script mode that
only requires a JRE installed.
Setup with Conscript
--------------------
Install `conscript <https://github.com/n8han/conscript>`_.
::

    cs harrah/xsbt --branch 0.12.0
This will create two scripts: ``screpl`` and ``scalas``.
Manual Setup
------------
Duplicate your standard ``sbt`` script, which was set up according to
:doc:`Setup </Getting-Started/Setup>`, as ``scalas`` and ``screpl`` (or
whatever names you like).
``scalas`` is the script runner and should use ``sbt.ScriptMain`` as
the main class, selected by adding the ``-Dsbt.main.class=sbt.ScriptMain``
parameter to the ``java`` command. Its command line should look like:
::

    java -Dsbt.main.class=sbt.ScriptMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
For the REPL runner ``screpl``, use ``sbt.ConsoleMain`` as the main
class:
::

    java -Dsbt.main.class=sbt.ConsoleMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
In each case, ``/home/user/.sbt/boot`` should be replaced with wherever
you want sbt's boot directory to be; you might also need to give more
memory to the JVM via ``-Xms512M -Xmx1536M`` or similar options, just
like shown in :doc:`Setup </Getting-Started/Setup>`.
Usage
=====
sbt Script runner
-----------------
The script runner can run a standard Scala script, but with the
additional ability to configure sbt. sbt settings may be embedded in the
script in a comment block that opens with ``/***``.
Example
~~~~~~~
Copy the following script and make it executable. You may need to adjust
the first line depending on your script name and operating system. When
run, the example should retrieve Scala, the required dependencies,
compile the script, and run it directly. For example, if you name it
``dispatch_example.scala``, on Unix you would run:
::

    chmod u+x dispatch_example.scala
    ./dispatch_example.scala
::

    #!/usr/bin/env scalas
    !#
    /***
    scalaVersion := "2.9.0-1"

    libraryDependencies ++= Seq(
      "net.databinder" %% "dispatch-twitter" % "0.8.3",
      "net.databinder" %% "dispatch-http" % "0.8.3"
    )
    */
    import dispatch.{ json, Http, Request }
    import dispatch.twitter.Search
    import json.{ Js, JsObject }

    def process(param: JsObject) = {
      val Search.text(txt) = param
      val Search.from_user(usr) = param
      val Search.created_at(time) = param
      "(" + time + ")" + usr + ": " + txt
    }

    Http.x((Search("#scala") lang "en") ~> (_ map process foreach println))
sbt REPL with dependencies
--------------------------
The arguments to the REPL mode configure the dependencies to use when
starting up the REPL. An argument may be either a jar to include on the
classpath, a dependency definition to retrieve and put on the classpath,
or a resolver to use when retrieving dependencies.
A dependency definition looks like:

::

    organization%module%revision

Or, for a cross-built dependency:

::

    organization%%module%revision

A repository argument looks like:

::

    "id at url"
Example:
~~~~~~~~
To add the Sonatype snapshots repository and add Scalaz 7.0-SNAPSHOT to
REPL classpath:
::

    screpl "sonatype-snapshots at https://oss.sonatype.org/content/repositories/snapshots/" "org.scalaz%%scalaz-core%7.0-SNAPSHOT"
This syntax was a quick hack. Feel free to improve it. The relevant
class is
`IvyConsole <../../sxr/IvyConsole.scala.html>`_.


@ -1,49 +0,0 @@
# Setup Notes
Some notes on how to set up your `sbt` script.
## Do not put `sbt-launch.jar` on your classpath.
Do _not_ put `sbt-launch.jar` in your `$SCALA_HOME/lib` directory, your project's `lib` directory, or anywhere it will be put on a classpath. It isn't a library.
## Terminal encoding
The character encoding used by your terminal may differ from Java's default encoding for your platform. In this case, you will need to add the option `-Dfile.encoding=<encoding>` in your `sbt` script to set the encoding, which might look like:
```text
java -Dfile.encoding=UTF8
```
## JVM heap, permgen, and stack sizes
If you find yourself running out of permgen space or your workstation is low
on memory, adjust the JVM configuration as you would for any application. For example,
a common set of memory-related options is:
```text
java -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m
```
## Boot directory
`sbt-launch.jar` is just a bootstrap; the actual meat of sbt, and the Scala
compiler and standard library, are downloaded to the shared directory `$HOME/.sbt/boot/`.
To change the location of this directory, set the `sbt.boot.directory` system property in your `sbt` script. A relative path will be resolved against the current working directory, which can be useful if you want to avoid sharing the boot directory between projects. For example, the following uses the pre-0.11 style of putting the boot directory in `project/boot/`:
```text
java -Dsbt.boot.directory=project/boot/
```
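To see what "resolved against the current working directory" means, the following plain-Scala sketch (illustration only, not sbt code) mirrors how the JVM resolves such a relative path:

```scala
import java.io.File

// Illustration only: a relative path like project/boot/ has no fixed
// location until it is resolved against the current working directory.
val boot = new File("project/boot")
assert(!boot.isAbsolute)

// Resolving against the working directory yields the per-project location.
val resolved = boot.getAbsoluteFile
assert(resolved.isAbsolute)
println("boot directory would be: " + resolved)
```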
## HTTP Proxy
On Unix, sbt will pick up any HTTP proxy settings from the `http.proxy` environment variable. If you are behind a proxy requiring authentication, your `sbt` script must also pass flags to set the `http.proxyUser` and `http.proxyPassword` properties:
```text
java -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword
```
On Windows, your script should set properties for proxy host, port, and if applicable, username and password:
```text
java -Dhttp.proxyHost=myproxy -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword
```


@ -0,0 +1,69 @@
===========
Setup Notes
===========
Some notes on how to set up your ``sbt`` script.
Do not put ``sbt-launch.jar`` on your classpath.
------------------------------------------------
Do *not* put ``sbt-launch.jar`` in your ``$SCALA_HOME/lib`` directory,
your project's ``lib`` directory, or anywhere it will be put on a
classpath. It isn't a library.
Terminal encoding
-----------------
The character encoding used by your terminal may differ from Java's
default encoding for your platform. In this case, you will need to add
the option ``-Dfile.encoding=<encoding>`` in your ``sbt`` script to set
the encoding, which might look like:
::

    java -Dfile.encoding=UTF8
JVM heap, permgen, and stack sizes
----------------------------------
If you find yourself running out of permgen space or your workstation is
low on memory, adjust the JVM configuration as you would for any
application. For example, a common set of memory-related options is:

::

    java -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m

Boot directory
--------------
``sbt-launch.jar`` is just a bootstrap; the actual meat of sbt, and the
Scala compiler and standard library, are downloaded to the shared
directory ``$HOME/.sbt/boot/``.
To change the location of this directory, set the ``sbt.boot.directory``
system property in your ``sbt`` script. A relative path will be resolved
against the current working directory, which can be useful if you want
to avoid sharing the boot directory between projects. For example, the
following uses the pre-0.11 style of putting the boot directory in
``project/boot/``:
::

    java -Dsbt.boot.directory=project/boot/
HTTP Proxy
----------
On Unix, sbt will pick up any HTTP proxy settings from the
``http.proxy`` environment variable. If you are behind a proxy requiring
authentication, your ``sbt`` script must also pass flags to set the
``http.proxyUser`` and ``http.proxyPassword`` properties:
::

    java -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword
On Windows, your script should set properties for proxy host, port, and
if applicable, username and password:
::

    java -Dhttp.proxyHost=myproxy -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword


@ -1,118 +0,0 @@
# Task Inputs/Dependencies
Tasks with dependencies are now introduced in the
[[getting started guide|Getting Started More About Settings]],
which you may wish to read first. This older page may have some
additional detail.
_Wiki Maintenance Note:_ This page should have its overlap with
the getting started guide cleaned up, and just have any advanced
or additional notes. It should maybe also be consolidated with
[[Tasks]].
An important aspect of the task system introduced in sbt 0.10 is to combine two common, related steps in a build:
1. Ensure some other task is performed.
2. Use some result from that task.
Previous versions of sbt configured these steps separately using
1. Dependency declarations
2. Some form of shared state
To see why it is advantageous to combine them, compare the situation to that of deferring initialization of a variable in Scala.
This Scala code is a bad way to expose a value whose initialization is deferred:
```scala
// Define a variable that will be initialized at some point
// We don't want to do it right away, because it might be expensive
var foo: Foo = _
// Define a function to initialize the variable
def makeFoo(): Unit = ... initialize foo ...
```
Typical usage would be:
```scala
makeFoo()
doSomething( foo )
```
This example is rather exaggerated in its badness, but I claim it is nearly the same situation as our two step task definitions.
Particular reasons this is bad include:
1. A client needs to know to call `makeFoo()` first.
2. `foo` could be changed by other code. There could be a `def makeFoo2()`, for example.
3. Access to foo is not thread safe.
The first point is like declaring a task dependency, the second is like two tasks modifying the same state (either project variables or files), and the third is a consequence of unsynchronized, shared state.
In Scala, we have the built-in functionality to easily fix this: `lazy val`.
```scala
lazy val foo: Foo = ... initialize foo ...
```
with the example usage:
```scala
doSomething( foo )
```
Here, `lazy val` gives us thread safety, guaranteed initialization before access, and immutability all in one, DRY construct.
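A runnable sketch of these guarantees (plain Scala, independent of sbt):

```scala
// lazy val defers initialization until first access and runs it at most once.
var initCount = 0

lazy val foo: Int = {
  initCount += 1 // side effect lets us observe when initialization happens
  42
}

assert(initCount == 0) // declared, but not yet initialized
val a = foo
val b = foo
assert(a == 42 && b == 42)
assert(initCount == 1) // initialized exactly once, despite two accesses
```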
The task system in sbt does the same thing for tasks (and more, but we won't go into that here) that `lazy val` did for our bad example.
A task definition must declare its inputs and the type of its output.
sbt will ensure that the input tasks have run and will then provide their results to the function that implements the task, which will generate its own result.
Other tasks can use this result and be assured that the task has run (once) and be thread-safe and typesafe in the process.
The general form of a task definition looks like:
```scala
myTask <<= (aTask, bTask) map { (a: A, b: B) =>
  ... do something with a, b and generate a result ...
}
```
(This is only intended to be a discussion of the ideas behind tasks, so see the [sbt Tasks](https://github.com/harrah/xsbt/wiki/Tasks) page for details on usage.)
Basically, `myTask` is defined by declaring `aTask` and `bTask` as inputs and by defining the function to apply to the results of these tasks.
Here, `aTask` is assumed to produce a result of type `A` and `bTask` is assumed to produce a result of type `B`.
## Application
Apply this in practice:
1. Determine the tasks that produce the values you need
2. `map` the tasks with the function that implements your task.
As an example, consider generating a zip file containing the binary jar, source jar, and documentation jar for your project.
First, determine what tasks produce the jars.
In this case, the input tasks are `packageBin`, `packageSrc`, and `packageDoc` in the main `Compile` scope.
The result of each of these tasks is the File for the jar that they generated.
Our zip file task is defined by mapping these package tasks and including their outputs in a zip file.
As good practice, we then return the File for this zip so that other tasks can map on the zip task.
```scala
zip <<= (packageBin in Compile, packageSrc in Compile, packageDoc in Compile, zipPath) map {
  (bin: File, src: File, doc: File, out: File) =>
    val inputs: Seq[(File,String)] = Seq(bin, src, doc) x Path.flat
    IO.zip(inputs, out)
    out
}
```
The `val inputs` line defines how the input files are mapped to paths in the zip.
See [Mapping Files](https://github.com/harrah/xsbt/wiki/Mapping-Files) for details.
The explicit types are not required, but are included for clarity.
The `zipPath` input would be a custom task to define the location of the zip file.
For example:
```scala
zipPath <<= target map {
  (t: File) =>
    t / "out.zip"
}
```


@ -0,0 +1,138 @@
========================
Task Inputs/Dependencies
========================
Tasks with dependencies are now introduced in the
:doc:`getting started guide </Getting-Started/More-About-Settings>`,
which you may wish to read first. This older page may have some additional detail.
*Wiki Maintenance Note:* This page should have its overlap with the
getting started guide cleaned up, and just have any advanced or
additional notes. It should maybe also be consolidated with :doc:`Tasks`.
An important aspect of the task system introduced in sbt 0.10 is to
combine two common, related steps in a build:
1. Ensure some other task is performed.
2. Use some result from that task.
Previous versions of sbt configured these steps separately using
1. Dependency declarations
2. Some form of shared state
To see why it is advantageous to combine them, compare the situation to
that of deferring initialization of a variable in Scala. This Scala code
is a bad way to expose a value whose initialization is deferred:
::

    // Define a variable that will be initialized at some point
    // We don't want to do it right away, because it might be expensive
    var foo: Foo = _
    // Define a function to initialize the variable
    def makeFoo(): Unit = ... initialize foo ...
Typical usage would be:
::

    makeFoo()
    doSomething( foo )
This example is rather exaggerated in its badness, but I claim it is
nearly the same situation as our two step task definitions. Particular
reasons this is bad include:
1. A client needs to know to call ``makeFoo()`` first.
2. ``foo`` could be changed by other code. There could be a
``def makeFoo2()``, for example.
3. Access to foo is not thread safe.
The first point is like declaring a task dependency, the second is like
two tasks modifying the same state (either project variables or files),
and the third is a consequence of unsynchronized, shared state.
In Scala, we have the built-in functionality to easily fix this:
``lazy val``.
::

    lazy val foo: Foo = ... initialize foo ...
with the example usage:
::

    doSomething( foo )
Here, ``lazy val`` gives us thread safety, guaranteed initialization
before access, and immutability all in one, DRY construct. The task
system in sbt does the same thing for tasks (and more, but we won't go
into that here) that ``lazy val`` did for our bad example.
A task definition must declare its inputs and the type of its output.
sbt will ensure that the input tasks have run and will then provide
their results to the function that implements the task, which will
generate its own result. Other tasks can use this result and be assured
that the task has run (once) and be thread-safe and typesafe in the
process.
The general form of a task definition looks like:
::

    myTask <<= (aTask, bTask) map { (a: A, b: B) =>
      ... do something with a, b and generate a result ...
    }
(This is only intended to be a discussion of the ideas behind tasks, so
see the :doc:`sbt Tasks </Detailed-Topics/Tasks>` page
for details on usage.) Basically, ``myTask`` is defined by declaring
``aTask`` and ``bTask`` as inputs and by defining the function to apply
to the results of these tasks. Here, ``aTask`` is assumed to produce a
result of type ``A`` and ``bTask`` is assumed to produce a result of
type ``B``.
Application
-----------
Apply this in practice:
1. Determine the tasks that produce the values you need
2. ``map`` the tasks with the function that implements your task.
As an example, consider generating a zip file containing the binary jar,
source jar, and documentation jar for your project. First, determine
what tasks produce the jars. In this case, the input tasks are
``packageBin``, ``packageSrc``, and ``packageDoc`` in the main
``Compile`` scope. The result of each of these tasks is the File for the
jar that they generated. Our zip file task is defined by mapping these
package tasks and including their outputs in a zip file. As good
practice, we then return the File for this zip so that other tasks can
map on the zip task.
::

    zip <<= (packageBin in Compile, packageSrc in Compile, packageDoc in Compile, zipPath) map {
      (bin: File, src: File, doc: File, out: File) =>
        val inputs: Seq[(File,String)] = Seq(bin, src, doc) x Path.flat
        IO.zip(inputs, out)
        out
    }
The ``val inputs`` line defines how the input files are mapped to paths
in the zip. See :doc:`/Detailed-Topics/Mapping-Files` for details.
The explicit types are not required, but are included for clarity.
The ``zipPath`` input would be a custom task to define the location of
the zip file. For example:
::

    zipPath <<= target map {
      (t: File) =>
        t / "out.zip"
    }


@ -1,457 +0,0 @@
[TaskStreams]: http://harrah.github.com/xsbt/latest/api/sbt/std/TaskStreams.html
[Logger]: http://harrah.github.com/xsbt/latest/api/sbt/Logger.html
[Incomplete]: http://harrah.github.com/xsbt/latest/api/sbt/Incomplete.html
[Result]: http://harrah.github.com/xsbt/latest/api/sbt/Result.html
# Tasks
Tasks and settings are now introduced in the
[[getting started guide|Getting Started Basic Def]], which you may
wish to read first. This older page has some additional detail.
_Wiki Maintenance Note:_ This page should have its overlap with
the getting started guide cleaned up, and just have any advanced
or additional notes. It should maybe also be consolidated with
[[TaskInputs]].
# Introduction
sbt 0.10+ has a new task system that integrates with the new settings system.
Both settings and tasks produce values, but there are two major differences between them:
1. Settings are evaluated at project load time. Tasks are executed on demand, often in response to a command from the user.
2. At the beginning of project loading, settings and their dependencies are fixed. Tasks can introduce new tasks during execution, however. (Tasks have flatMap, but Settings do not.)
# Features
There are several features of the task system:
1. By integrating with the settings system, tasks can be added, removed, and modified as easily and flexibly as settings.
2. [[Input Tasks]], the successor to method tasks, use [[parser combinators|Parsing Input]] to define the syntax for their arguments. This allows flexible syntax and tab-completions in the same way as [[Commands]].
3. Tasks produce values. Other tasks can access a task's value with the `map` and `flatMap` methods.
4. The `flatMap` method allows dynamically changing the structure of the task graph. Tasks can be injected into the execution graph based on the result of another task.
5. There are ways to handle task failure, similar to `try/catch/finally`.
6. Each task has access to its own Logger that by default persists the logging for that task at a more verbose level than is initially printed to the screen.
These features are discussed in detail in the following sections.
The context for the code snippets will be either the body of a
`Build` object in a [[.scala file|Getting Started Full Def]] or an
expression in a [[build.sbt|Getting Started Basic Def]].
# Defining a New Task
## Hello World example (sbt)
build.sbt
```scala
TaskKey[Unit]("hello") := println("hello world!")
```
## Hello World example (scala)
project/Build.scala
```scala
import sbt._
import Keys._
object HelloBuild extends Build {
  val hwsettings = Defaults.defaultSettings ++ Seq(
    organization := "hello",
    name := "world",
    version := "1.0-SNAPSHOT",
    scalaVersion := "2.9.0-1"
  )

  val hello = TaskKey[Unit]("hello", "Prints 'Hello World'")

  val helloTask = hello := {
    println("Hello World")
  }

  lazy val project = Project (
    "project",
    file ("."),
    settings = hwsettings ++ Seq(helloTask)
  )
}
```
Run `sbt hello` from the command line to invoke the task. Run `sbt tasks` to see this task listed.
## Define the key
To declare a new task, define a `TaskKey` in your [[Full Configuration]]:
```scala
val sampleTask = TaskKey[Int]("sample-task")
```
The name of the `val` is used when referring to the task in Scala code.
The string passed to the `TaskKey` method is used at runtime, such as at the command line.
By convention, the Scala identifier is camelCase and the runtime identifier uses hyphens.
The type parameter passed to `TaskKey` (here, `Int`) is the type of value produced by the task.
We'll define a couple of other tasks for the examples:
```scala
val intTask = TaskKey[Int]("int-task")
val stringTask = TaskKey[String]("string-task")
```
The examples themselves are valid entries in a `build.sbt` or can be provided as part of a sequence to `Project.settings` (see [[Full Configuration]]).
## Implement the task
There are three main parts to implementing a task once its key is defined:
1. Determine the settings and other tasks needed by the task. They are the task's inputs.
2. Define a function that takes these inputs and produces a value.
3. Determine the scope the task will go in.
These parts are then combined like the parts of a setting are combined.
### Tasks without inputs
A task that takes no arguments can be defined using `:=`
```scala
intTask := 1 + 2
stringTask := System.getProperty("user.name")
sampleTask := {
  val sum = 1 + 2
  println("sum: " + sum)
  sum
}
```
As mentioned in the introduction, a task is evaluated on demand.
Each time `sample-task` is invoked, for example, it will print the sum.
If the username changes between runs, `string-task` will take different values in those separate runs.
(Within a run, each task is evaluated at most once.)
In contrast, settings are evaluated once on project load and are fixed until the next reload.
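The difference is analogous to `val` versus `def` in plain Scala (an analogy only, not the sbt API): a `val` is computed once when its object loads, while a `def` is re-evaluated on each call:

```scala
// Analogy only, not sbt API: settings behave like a val (fixed at load),
// tasks behave like a def (re-evaluated on demand).
object Demo {
  var userName = "alice"
  val settingLike: String = userName // captured once, when Demo initializes
  def taskLike: String = userName    // read again on every invocation
}

val initial = Demo.settingLike // forces Demo to initialize with "alice"
Demo.userName = "bob"

assert(Demo.settingLike == "alice") // still the load-time value
assert(Demo.taskLike == "bob")      // reflects the current value
```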
### Tasks with inputs
Tasks with other tasks or settings as inputs are defined using `<<=`.
The right hand side will typically call `map` or `flatMap` on other settings or tasks.
(Contrast this with the `apply` method that is used for settings.)
The function argument to `map` or `flatMap` is the task body.
The following are equivalent ways of defining a task that adds one to value produced by `int-task` and returns the result.
```scala
sampleTask <<= intTask map { (count: Int) => count + 1 }
sampleTask <<= intTask map { _ + 1 }
```
Multiple inputs are handled as with settings.
The `map` and `flatMap` are done on a tuple of inputs:
```scala
stringTask <<= (sampleTask, intTask) map { (sample: Int, intValue: Int) =>
  "Sample: " + sample + ", int: " + intValue
}
```
### Task Scope
As with settings, tasks can be defined in a specific scope.
For example, there are separate `compile` tasks for the `compile` and `test` scopes.
The scope of a task is defined the same as for a setting.
In the following example, `test:sample-task` uses the result of `compile:int-task`.
```scala
sampleTask.in(Test) <<= intTask.in(Compile).map { (intValue: Int) =>
  intValue * 3
}
// more succinctly:
sampleTask in Test <<= intTask in Compile map { _ * 3 }
```
### Inline task keys
Although generally not recommended, it is possible to specify the task key inline:
```scala
TaskKey[Int]("sample-task") in Test <<= TaskKey[Int]("int-task") in Compile map { _ * 3 }
```
The type argument to `TaskKey` must be explicitly specified because of `SI-4653`. It is not recommended because:
1. Tasks are no longer referenced by Scala identifiers (like `sampleTask`), but by Strings (like `"sample-task"`)
2. The type information must be repeated.
3. Keys should come with a description, which would need to be repeated as well.
### On precedence
As a reminder, method precedence is by the name of the method.
1. Assignment methods have the lowest precedence. These are methods with names ending in `=`, except for `!=`, `<=`, `>=`, and names that start with `=`.
2. Methods starting with a letter have the next highest precedence.
3. Methods with names that start with a symbol and aren't included in 1. have the highest precedence. (This category is divided further according to the specific character it starts with. See the Scala specification for details.)
Therefore, the second variant in the previous example is equivalent to the following:
```scala
(sampleTask in Test) <<= (intTask in Compile map { _ * 3 })
```
# Modifying an Existing Task
The examples in this section use the following key definitions, which would go in a `Build` object in a [[Full Configuration]]. Alternatively, the keys may be specified inline, as discussed above.
```scala
val unitTask = TaskKey[Unit]("unit-task")
val intTask = TaskKey[Int]("int-task")
val stringTask = TaskKey[String]("string-task")
```
The examples themselves are valid settings in a `build.sbt` file or as part of a sequence provided to `Project.settings`.
In the general case, modify a task by declaring the previous task as an input.
```scala
// initial definition
intTask := 3
// overriding definition that references the previous definition
intTask <<= intTask map { (value: Int) => value + 1 }
```
Completely override a task by not declaring the previous task as an input.
Each of the definitions in the following example completely overrides the previous one.
That is, when `int-task` is run, it will only print `#3`.
```scala
intTask := {
  println("#1")
  3
}

intTask := {
  println("#2")
  5
}

intTask <<= sampleTask map { (value: Int) =>
  println("#3")
  value - 3
}
```
To apply a transformation to a single task, without using additional tasks as inputs, use `~=`.
This accepts the function to apply to the task's result:
```scala
intTask := 3
// increment the value returned by intTask
intTask ~= { (x: Int) => x + 1 }
```
# Task Operations
The previous sections used the `map` method to define a task in terms of the results of other tasks.
This is the most common method, but there are several others.
The examples in this section use the task keys defined in the previous section.
## Dependencies
To depend on the side effect of some tasks without using their values and without doing additional work, use `dependOn` on a sequence of tasks.
The defining task key (the part on the left side of `<<=`) must be of type `Unit`, since no value is returned.
```scala
unitTask <<= Seq(stringTask, sampleTask).dependOn
```
To add dependencies to an existing task without using their values, call `dependsOn` on the task and provide the tasks to depend on.
For example, the second task definition here modifies the original to require that `string-task` and `sample-task` run first:
```scala
intTask := 4
intTask <<= intTask.dependsOn(stringTask, sampleTask)
```
## Streams: Per-task logging
New in sbt 0.10+ are per-task loggers, which are part of a more general system for task-specific data called Streams. This allows controlling the verbosity of stack traces and logging individually for tasks as well as recalling the last logging for a task. Tasks also have access to their own persisted binary or text data.
To use Streams, `map` or `flatMap` the `streams` task. This is a special task that provides an instance of [TaskStreams] for the defining task. This type provides access to named binary and text streams, named loggers, and a default logger. The default [Logger], which is the most commonly used aspect, is obtained by the `log` method:
```scala
myTask <<= streams map { (s: TaskStreams) =>
  s.log.debug("Saying hi...")
  s.log.info("Hello!")
}
```
You can scope logging settings by the specific task's scope:
```scala
logLevel in myTask := Level.Debug
traceLevel in myTask := 5
```
To obtain the last logging output from a task, use the `last` command:
```text
$ last my-task
[debug] Saying hi...
[info] Hello!
```
The verbosity with which logging is persisted is controlled using the `persist-log-level` and `persist-trace-level` settings.
The `last` command displays what was logged according to these levels.
The levels do not affect already logged information.
## Handling Failure
This section discusses the `andFinally`, `mapFailure`, and `mapR` methods, which are used to handle failure of other tasks.
### andFinally
The `andFinally` method defines a new task that runs the original task and evaluates a side effect regardless of whether the original task succeeded.
The result of the task is the result of the original task.
For example:
```scala
intTask := error("I didn't succeed.")
intTask <<= intTask andFinally { println("andFinally") }
```
This modifies the original `intTask` to always print "andFinally" even if the task fails.
Note that `andFinally` constructs a new task.
This means that the new task has to be invoked in order for the extra block to run.
This is important when calling andFinally on another task instead of overriding a task like in the previous example.
For example, consider this code:
```scala
intTask := error("I didn't succeed.")
otherIntTask <<= intTask andFinally { println("andFinally") }
```
If `int-task` is run directly, `other-int-task` is never involved in execution.
This case is similar to the following plain Scala code:
```scala
def intTask: Int =
  error("I didn't succeed.")

def otherIntTask: Int =
  try { intTask }
  finally { println("finally") }

intTask
```
It is obvious here that calling `intTask` directly will never result in "finally" being printed.
### mapFailure
`mapFailure` accepts a function of type `Incomplete => T`, where `T` is a type parameter.
In the case of multiple inputs, the function has type `Seq[Incomplete] => T`.
[Incomplete] is an exception with information about any tasks that caused the failure and any underlying exceptions thrown during task execution.
The resulting task defined by `mapFailure` fails if its input succeeds and evaluates the provided function if it fails.
For example:
```scala
intTask := error("Failed.")
intTask <<= intTask mapFailure { (inc: Incomplete) =>
  println("Ignoring failure: " + inc)
  3
}
```
This overrides the `int-task` so that the original exception is printed and the constant `3` is returned.
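The success/failure inversion can be sketched with `scala.util.Try` (an analogy only; `Try` is not the sbt task type):

```scala
import scala.util.{Failure, Success, Try}

// Analogy only: like mapFailure, this combinator fails when its input
// succeeds and applies the handler when its input fails.
def mapFailureLike[T](input: Try[Int])(handler: Throwable => T): Try[T] =
  input match {
    case Success(_)   => Failure(new RuntimeException("input succeeded, so this fails"))
    case Failure(inc) => Try(handler(inc))
  }

val failing: Try[Int]    = Failure(new RuntimeException("Failed."))
val succeeding: Try[Int] = Success(5)

assert(mapFailureLike(failing)(_ => 3) == Success(3)) // failure handled, constant returned
assert(mapFailureLike(succeeding)(_ => 3).isFailure)  // success turned into failure
```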
`mapFailure` does not prevent other tasks that depend on the target from failing.
Consider the following example:
```scala
intTask := if(shouldSucceed) 5 else error("Failed.")
// return 3 if int-task fails. if it succeeds, this task will fail
aTask <<= intTask mapFailure { (inc: Incomplete) => 3 }
// a new task that increments the result of int-task
bTask <<= intTask map { _ + 1 }
cTask <<= (aTask, bTask) map { (a,b) => a + b }
```
The following table lists the results of each task depending on the initially invoked task:
<table>
<th>invoked task</th> <th>int-task result</th> <th>a-task result</th> <th>b-task result</th> <th>c-task result</th> <th>overall result</th>
<tr><td>int-task</td> <td>failure</td> <td>not run</td> <td>not run</td> <td>not run</td> <td>failure</td></tr>
<tr><td>a-task</td> <td>failure</td> <td>success</td> <td>not run</td> <td>not run</td> <td>success</td></tr>
<tr><td>b-task</td> <td>failure</td> <td>not run</td> <td>failure</td> <td>not run</td> <td>failure</td></tr>
<tr><td>c-task</td> <td>failure</td> <td>success</td> <td>failure</td> <td>failure</td> <td>failure</td></tr>
<tr><td>int-task</td> <td>success</td> <td>not run</td> <td>not run</td> <td>not run</td> <td>success</td></tr>
<tr><td>a-task</td> <td>success</td> <td>failure</td> <td>not run</td> <td>not run</td> <td>failure</td></tr>
<tr><td>b-task</td> <td>success</td> <td>not run</td> <td>success</td> <td>not run</td> <td>success</td></tr>
<tr><td>c-task</td> <td>success</td> <td>failure</td> <td>success</td> <td>failure</td> <td>failure</td></tr>
</table>
The overall result is always the same as the root task (the directly invoked task).
A `mapFailure` turns a success into a failure, and a failure into whatever the result of evaluating the supplied function is.
A `map` fails when the input fails and applies the supplied function to a successfully completed input.
In the case of more than one input, `mapFailure` fails if all inputs succeed.
If at least one input fails, the supplied function is provided with the list of `Incomplete`s.
For example:
```scala
cTask <<= (aTask, bTask) mapFailure { (incs: Seq[Incomplete]) => 3 }
```
The following table lists the results of invoking `c-task`, depending on the success of `aTask` and `bTask`:
<table>
<th>a-task result</th> <th>b-task result</th> <th>c-task result</th>
<tr> <td>failure</td> <td>failure</td> <td>success</td> </tr>
<tr> <td>failure</td> <td>success</td> <td>success</td> </tr>
<tr> <td>success</td> <td>failure</td> <td>success</td> </tr>
<tr> <td>success</td> <td>success</td> <td>failure</td> </tr>
</table>
### mapR
`mapR` accepts a function of type `Result[S] => T`, where `S` is the type of the task being mapped and `T` is a type parameter.
In the case of multiple inputs, the function has type `(Result[A], Result[B], ...) => T`.
[Result] has the same structure as `Either[Incomplete, S]` for a task result of type `S`.
That is, it has two subtypes:
* `Inc`, which wraps `Incomplete` in case of failure
* `Value`, which wraps a task's result in case of success.
Thus, `mapR` is always invoked, whether the original task succeeds or fails.
For example:
```scala
intTask := error("Failed.")
intTask <<= intTask mapR {
case Inc(inc: Incomplete) =>
println("Ignoring failure: " + inc)
3
case Value(v) =>
println("Using successful result: " + v)
v
}
```
This overrides the original `int-task` definition so that if the original task fails, the exception is printed and the constant `3` is returned.
If it succeeds, the value is printed and returned.
@ -0,0 +1,941 @@
=====
Tasks
=====
Tasks and settings are now introduced in the :doc:`getting started guide </Getting-Started/Basic-Def>`,
which you may wish to read first. This older page has some additional detail.
*Wiki Maintenance Note:* This page should have its overlap with the
getting started guide cleaned up, and just have any advanced or
additional notes. It should maybe also be consolidated with
:doc:`TaskInputs`.
Introduction
============
sbt 0.10+ has a new task system that integrates with the new settings
system. Both settings and tasks produce values, but there are two major
differences between them:
1. Settings are evaluated at project load time. Tasks are executed on
demand, often in response to a command from the user.
2. At the beginning of project loading, settings and their dependencies
are fixed. Tasks can introduce new tasks during execution, however.
(Tasks have flatMap, but Settings do not.)
Features
========
There are several features of the task system:
1. By integrating with the settings system, tasks can be added, removed,
and modified as easily and flexibly as settings.
2. :doc:`Input Tasks <TaskInputs>`, the successor to method tasks, use
:doc:`parser combinators <Parsing-Input>` to define the syntax for their
arguments. This allows flexible syntax and tab-completions in the
same way as :doc:`/Extending/Commands`.
3. Tasks produce values. Other tasks can access a task's value with the
``map`` and ``flatMap`` methods.
4. The ``flatMap`` method allows dynamically changing the structure of
the task graph. Tasks can be injected into the execution graph based
on the result of another task.
5. There are ways to handle task failure, similar to
``try/catch/finally``.
6. Each task has access to its own Logger that by default persists the
logging for that task at a more verbose level than is initially
printed to the screen.
These features are discussed in detail in the following sections. The
context for the code snippets will be either the body of a ``Build``
object in a :doc:`.scala file </Getting-Started/Full-Def>` or an expression
in a :doc:`build.sbt </Getting-Started/Basic-Def>`.
Defining a New Task
===================
Hello World example (sbt)
-------------------------
build.sbt
::
TaskKey[Unit]("hello") := println("hello world!")
Hello World example (scala)
---------------------------
project/Build.scala
::
import sbt._
import Keys._
object HelloBuild extends Build {
val hwsettings = Defaults.defaultSettings ++ Seq(
organization := "hello",
name := "world",
version := "1.0-SNAPSHOT",
scalaVersion := "2.9.0-1"
)
val hello = TaskKey[Unit]("hello", "Prints 'Hello World'")
val helloTask = hello := {
println("Hello World")
}
lazy val project = Project (
"project",
file ("."),
settings = hwsettings ++ Seq(helloTask)
)
}
Run ``sbt hello`` from the command line to invoke the task. Run ``sbt tasks`` to
see this task listed.
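Invoking it from a shell looks roughly like this (the surrounding ``[info]``
output varies by sbt version, so this transcript is illustrative only):

::

    $ sbt hello
    Hello World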
Define the key
--------------
To declare a new task, define a ``TaskKey`` in your
:doc:`Full Configuration </Getting-Started/Full-Def>`:
::
val sampleTask = TaskKey[Int]("sample-task")
The name of the ``val`` is used when referring to the task in Scala
code. The string passed to the ``TaskKey`` method is used at runtime,
such as at the command line. By convention, the Scala identifier is
camelCase and the runtime identifier uses hyphens. The type parameter
passed to ``TaskKey`` (here, ``Int``) is the type of value produced by
the task.
We'll define a couple of other tasks for the examples:
::
val intTask = TaskKey[Int]("int-task")
val stringTask = TaskKey[String]("string-task")
The examples themselves are valid entries in a ``build.sbt`` or can be
provided as part of a sequence to ``Project.settings`` (see
:doc:`Full Configuration </Getting-Started/Full-Def>`).
Implement the task
------------------
There are three main parts to implementing a task once its key is
defined:
1. Determine the settings and other tasks needed by the task. They are
the task's inputs.
2. Define a function that takes these inputs and produces a value.
3. Determine the scope the task will go in.
These parts are then combined like the parts of a setting are combined.
Tasks without inputs
~~~~~~~~~~~~~~~~~~~~
A task that takes no arguments can be defined using ``:=``:

::

    intTask := 1 + 2

    stringTask := System.getProperty("user.name")

    sampleTask := {
      val sum = 1 + 2
      println("sum: " + sum)
      sum
    }

As mentioned in the introduction, a task is evaluated on demand. Each
time ``sample-task`` is invoked, for example, it will print the sum. If
the username changes between runs, ``string-task`` will take different
values in those separate runs. (Within a run, each task is evaluated at
most once.) In contrast, settings are evaluated once on project load and
are fixed until the next reload.
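The evaluate-once vs. evaluate-on-demand distinction can be loosely
illustrated with plain Scala; this is only an analogy, not sbt API:

::

    // analogy only: a setting is like a val (evaluated once),
    // a task is like a def (evaluated on each demand)
    var evaluations = 0
    val setting = { evaluations += 1; 1 + 2 }
    def task = { evaluations += 1; 1 + 2 }

Referencing ``setting`` repeatedly does not re-run its body, while each
call to ``task`` does.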
Tasks with inputs
~~~~~~~~~~~~~~~~~
Tasks with other tasks or settings as inputs are defined using ``<<=``.
The right hand side will typically call ``map`` or ``flatMap`` on other
settings or tasks. (Contrast this with the ``apply`` method that is used
for settings.) The function argument to ``map`` or ``flatMap`` is the
task body. The following are equivalent ways of defining a task that
adds one to the value produced by ``int-task`` and returns the result.
::
sampleTask <<= intTask map { (count: Int) => count + 1 }
sampleTask <<= intTask map { _ + 1 }
Multiple inputs are handled as with settings. The ``map`` and
``flatMap`` are done on a tuple of inputs:
::
stringTask <<= (sampleTask, intTask) map { (sample: Int, intValue: Int) =>
"Sample: " + sample + ", int: " + intValue
}
Task Scope
~~~~~~~~~~
As with settings, tasks can be defined in a specific scope. For example,
there are separate ``compile`` tasks for the ``compile`` and ``test``
scopes. The scope of a task is defined the same as for a setting. In the
following example, ``test:sample-task`` uses the result of
``compile:int-task``.
::
sampleTask.in(Test) <<= intTask.in(Compile).map { (intValue: Int) =>
intValue * 3
}
// more succinctly:
sampleTask in Test <<= intTask in Compile map { _ * 3 }
Inline task keys
~~~~~~~~~~~~~~~~
Although generally not recommended, it is possible to specify the task
key inline:
::
TaskKey[Int]("sample-task") in Test <<= TaskKey[Int]("int-task") in Compile map { _ * 3 }
The type argument to ``TaskKey`` must be explicitly specified because of
``SI-4653``. It is not recommended because:
1. Tasks are no longer referenced by Scala identifiers (like
``sampleTask``), but by Strings (like ``"sample-task"``)
2. The type information must be repeated.
3. Keys should come with a description, which would need to be repeated
as well.
On precedence
~~~~~~~~~~~~~
As a reminder, method precedence is by the name of the method.
1. Assignment methods have the lowest precedence. These are methods with
names ending in ``=``, except for ``!=``, ``<=``, ``>=``, and names
that start with ``=``.
2. Methods starting with a letter have the next highest precedence.
3. Methods with names that start with a symbol and aren't included in 1.
have the highest precedence. (This category is divided further
according to the specific character it starts with. See the Scala
specification for details.)
Therefore, the second variant in the previous example is equivalent to
the following:
::
(sampleTask in Test) <<= (intTask in Compile map { _ * 3 })
Modifying an Existing Task
==========================
The examples in this section use the following key definitions, which
would go in a ``Build`` object in a :doc:`Full Configuration </Getting-Started/Full-Def>`.
Alternatively, the keys may be specified inline, as discussed above.
::

    val unitTask = TaskKey[Unit]("unit-task")
    val intTask = TaskKey[Int]("int-task")
    val stringTask = TaskKey[String]("string-task")
The examples themselves are valid settings in a ``build.sbt`` file or as
part of a sequence provided to ``Project.settings``.
In the general case, modify a task by declaring the previous task as an
input.
::
// initial definition
intTask := 3
// overriding definition that references the previous definition
intTask <<= intTask map { (value: Int) => value + 1 }
Completely override a task by not declaring the previous task as an
input. Each of the definitions in the following example completely
overrides the previous one. That is, when ``int-task`` is run, it will
only print ``#3``.
::
intTask := {
println("#1")
3
}
intTask := {
println("#2")
5
}
intTask <<= sampleTask map { (value: Int) =>
println("#3")
value - 3
}
To apply a transformation to a single task, without using additional
tasks as inputs, use ``~=``. This accepts the function to apply to the
task's result:
::
intTask := 3
// increment the value returned by intTask
intTask ~= { (x: Int) => x + 1 }
Task Operations
===============
The previous sections used the ``map`` method to define a task in terms
of the results of other tasks. This is the most common method, but there
are several others. The examples in this section use the task keys
defined in the previous section.
Dependencies
------------
To depend on the side effect of some tasks without using their values
and without doing additional work, use ``dependOn`` on a sequence of
tasks. The defining task key (the part on the left side of ``<<=``) must
be of type ``Unit``, since no value is returned.
::
unitTask <<= Seq(stringTask, sampleTask).dependOn
To add dependencies to an existing task without using their values, call
``dependsOn`` on the task and provide the tasks to depend on. For
example, the second task definition here modifies the original to
require that ``string-task`` and ``sample-task`` run first:
::
intTask := 4
intTask <<= intTask.dependsOn(stringTask, sampleTask)
Streams: Per-task logging
-------------------------
New in sbt 0.10+ are per-task loggers, which are part of a more general
system for task-specific data called Streams. This allows controlling
the verbosity of stack traces and logging individually for tasks as well
as recalling the last logging for a task. Tasks also have access to
their own persisted binary or text data.
To use Streams, ``map`` or ``flatMap`` the ``streams`` task. This is a
special task that provides an instance of
`TaskStreams <../../api/sbt/std/TaskStreams.html>`_
for the defining task. This type provides access to named binary and
text streams, named loggers, and a default logger. The default
`Logger <../../api/sbt/Logger.html>`_,
which is the most commonly used aspect, is obtained by the ``log``
method:
::
myTask <<= streams map { (s: TaskStreams) =>
s.log.debug("Saying hi...")
s.log.info("Hello!")
}
You can scope logging settings by the specific task's scope:
::
logLevel in myTask := Level.Debug
traceLevel in myTask := 5
To obtain the last logging output from a task, use the ``last`` command:
::
$ last my-task
[debug] Saying hi...
[info] Hello!
The verbosity with which logging is persisted is controlled using the
``persist-log-level`` and ``persist-trace-level`` settings. The ``last``
command displays what was logged according to these levels. The levels
do not affect already logged information.
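As a sketch, the persisted verbosity can be raised for a single task
(assuming the ``persistLogLevel`` and ``persistTraceLevel`` keys are
available in your sbt version):

::

    persistLogLevel in myTask := Level.Debug
    persistTraceLevel in myTask := 10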
Handling Failure
----------------
This section discusses the ``andFinally``, ``mapFailure``, and ``mapR``
methods, which are used to handle failure of other tasks.
andFinally
~~~~~~~~~~
The ``andFinally`` method defines a new task that runs the original task
and evaluates a side effect regardless of whether the original task
succeeded. The result of the task is the result of the original task.
For example:
::
intTask := error("I didn't succeed.")
intTask <<= intTask andFinally { println("andFinally") }
This modifies the original ``intTask`` to always print "andFinally" even
if the task fails.
Note that ``andFinally`` constructs a new task. This means that the new
task has to be invoked in order for the extra block to run. This is
important when calling andFinally on another task instead of overriding
a task like in the previous example. For example, consider this code:
::
intTask := error("I didn't succeed.")
otherIntTask <<= intTask andFinally { println("andFinally") }
If ``int-task`` is run directly, ``other-int-task`` is never involved in
execution. This case is similar to the following plain Scala code:
::
def intTask: Int =
error("I didn't succeed.")
def otherIntTask: Int =
try { intTask }
finally { println("finally") }
intTask

It is obvious here that calling ``intTask`` will never result in "finally"
being printed.
mapFailure
~~~~~~~~~~
``mapFailure`` accepts a function of type ``Incomplete => T``, where
``T`` is a type parameter. In the case of multiple inputs, the function
has type ``Seq[Incomplete] => T``.
`Incomplete <https://github.com/harrah/xsbt/latest/api/sbt/Incomplete.html>`_
is an exception with information about any tasks that caused the failure
and any underlying exceptions thrown during task execution. The
resulting task defined by ``mapFailure`` fails if its input succeeds and
evaluates the provided function if it fails.
For example:
::

    intTask := error("Failed.")

    intTask <<= intTask mapFailure { (inc: Incomplete) =>
      println("Ignoring failure: " + inc)
      3
    }

This overrides the ``int-task`` so that the original exception is printed
and the constant ``3`` is returned.
``mapFailure`` does not prevent other tasks that depend on the target
from failing. Consider the following example:
::

    intTask := if(shouldSucceed) 5 else error("Failed.")

    // return 3 if int-task fails. if it succeeds, this task will fail
    aTask <<= intTask mapFailure { (inc: Incomplete) => 3 }

    // a new task that increments the result of int-task
    bTask <<= intTask map { _ + 1 }

    cTask <<= (aTask, bTask) map { (a,b) => a + b }

The following table lists the results of each task depending on the
initially invoked task:
============ =============== ============= ============= ============= ==============
invoked task int-task result a-task result b-task result c-task result overall result
============ =============== ============= ============= ============= ==============
int-task     failure         not run       not run       not run       failure
a-task       failure         success       not run       not run       success
b-task       failure         not run       failure       not run       failure
c-task       failure         success       failure       failure       failure
int-task     success         not run       not run       not run       success
a-task       success         failure       not run       not run       failure
b-task       success         not run       success       not run       success
c-task       success         failure       success       failure       failure
============ =============== ============= ============= ============= ==============
The overall result is always the same as the root task (the directly
invoked task). A ``mapFailure`` turns a success into a failure, and a
failure into whatever the result of evaluating the supplied function is.
A ``map`` fails when the input fails and applies the supplied function
to a successfully completed input.
In the case of more than one input, ``mapFailure`` fails if all inputs
succeed. If at least one input fails, the supplied function is provided
with the list of ``Incomplete``\ s. For example:
::
cTask <<= (aTask, bTask) mapFailure { (incs: Seq[Incomplete]) => 3 }
The following table lists the results of invoking ``c-task``, depending
on the success of ``aTask`` and ``bTask``:
============= ============= =============
a-task result b-task result c-task result
============= ============= =============
failure       failure       success
failure       success       success
success       failure       success
success       success       failure
============= ============= =============
mapR
~~~~
``mapR`` accepts a function of type ``Result[S] => T``, where ``S`` is
the type of the task being mapped and ``T`` is a type parameter. In the
case of multiple inputs, the function has type
``(Result[A], Result[B], ...) => T``.
`Result <https://github.com/harrah/xsbt/latest/api/sbt/Result.html>`_
has the same structure as ``Either[Incomplete, S]`` for a task result of
type ``S``. That is, it has two subtypes:
- ``Inc``, which wraps ``Incomplete`` in case of failure
- ``Value``, which wraps a task's result in case of success.
Thus, ``mapR`` is always invoked, whether the original task succeeds or
fails.
For example:
::

    intTask := error("Failed.")

    intTask <<= intTask mapR {
      case Inc(inc: Incomplete) =>
        println("Ignoring failure: " + inc)
        3
      case Value(v) =>
        println("Using successful result: " + v)
        v
    }

This overrides the original ``int-task`` definition so that if the
original task fails, the exception is printed and the constant ``3`` is
returned. If it succeeds, the value is printed and returned.
@ -1,394 +0,0 @@
[uniform test interface]: http://github.com/harrah/test-interface
[TestReportListener]: http://harrah.github.com/xsbt/latest/api/sbt/TestReportListener.html
[TestsListener]: http://harrah.github.com/xsbt/latest/api/sbt/TestsListener.html
[junit-interface]: https://github.com/szeiger/junit-interface
[ScalaCheck]: http://code.google.com/p/scalacheck/
[specs2]: http://etorreborre.github.com/specs2/
[ScalaTest]: http://www.artima.com/scalatest/
# Testing
# Basics
The standard source locations for testing are:
* Scala sources in `src/test/scala/`
* Java sources in `src/test/java/`
* Resources for the test classpath in `src/test/resources/`
The resources may be accessed from tests by using the `getResource` methods of `java.lang.Class` or `java.lang.ClassLoader`.
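For example, a test can resolve a resource through the context class loader; the resource name here is hypothetical, and a missing resource yields `null`, so wrapping the lookup in `Option` is a common precaution:

```scala
// Sketch: resolve a file placed under src/test/resources/.
// "sample-data.txt" is a hypothetical resource name.
val loader = Thread.currentThread.getContextClassLoader
val maybeUrl = Option(loader.getResource("sample-data.txt"))
maybeUrl match {
  case Some(url) => println("found resource at " + url)
  case None      => println("resource not on classpath")
}
```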
The main Scala testing frameworks ([specs2], [ScalaCheck], and [ScalaTest]) provide an implementation of the common test interface and only need to be added to the classpath to work with sbt. For example, ScalaCheck may be used by declaring it as a [[managed dependency|Library Management]]:
```scala
libraryDependencies += "org.scala-tools.testing" %% "scalacheck" % "1.9" % "test"
```
The fourth component `"test"` is the [[configuration|Configurations]] and means that ScalaCheck will only be on the test classpath and it isn't needed by the main sources.
This is generally good practice for libraries because your users don't typically need your test dependencies to use your library.
With the library dependency defined, you can then add test sources in the locations listed above and compile and run tests.
The tasks for running tests are `test` and `test-only`.
The `test` task accepts no command line arguments and runs all tests:
```text
> test
```
## test-only
The `test-only` task accepts a whitespace separated list of test names to run. For example:
```text
> test-only org.example.MyTest1 org.example.MyTest2
```
It supports wildcards as well:
```text
> test-only org.example.*Slow org.example.MyTest1
```
## test-quick
The `test-quick` task, like `test-only`, allows you to filter the tests to run by name or wildcard, using the same syntax for the filters. In addition to any explicit filter, only the tests that satisfy one of the following conditions are run:
* The tests that failed in the previous run
* The tests that were not run before
* The tests that have one or more transitive dependencies, possibly in a different project, that were recompiled.
### Tab completion
Tab completion is provided for test names based on the results of the last `test:compile`. This means that new sources aren't available for tab completion until they are compiled, and deleted sources won't be removed from tab completion until a recompile. A new test source can still be manually written out and run using `test-only`.
## Other tasks
Tasks that are available for main sources are generally available for test sources, but are prefixed with `test:` on the command line and are referenced in Scala code with `in Test`. These tasks include:
* `test:compile`
* `test:console`
* `test:console-quick`
* `test:run`
* `test:run-main`
See [[Running|Getting Started Running]] for details on these tasks.
# Output
By default, logging is buffered for each test source file until all tests for that file complete.
This can be disabled with:
```scala
logBuffered in Test := false
```
# Options
## Test Framework Arguments
Arguments to the test framework may be provided on the command line to the `test-only` tasks following a `--` separator. For example:
```text
> test-only org.example.MyTest -- -d -S
```
To specify test framework arguments as part of the build, add options constructed by `Tests.Argument`:
```scala
testOptions in Test += Tests.Argument("-d", "-g")
```
To specify them for a specific test framework only:
```scala
testOptions in Test += Tests.Argument(TestFrameworks.ScalaCheck, "-d", "-g")
```
## Setup and Cleanup
Specify setup and cleanup actions using `Tests.Setup` and `Tests.Cleanup`.
These accept either a function of type `() => Unit` or a function of type `ClassLoader => Unit`.
The variant that accepts a ClassLoader is passed the class loader that is (or was) used for running the tests.
It provides access to the test classes as well as the test framework classes.
Examples:
```scala
testOptions in Test += Tests.Setup( () => println("Setup") )
testOptions in Test += Tests.Cleanup( () => println("Cleanup") )
testOptions in Test += Tests.Setup( loader => ... )
testOptions in Test += Tests.Cleanup( loader => ... )
```
## Disable Parallel Execution of Tests
By default, sbt runs all tasks in parallel. Because each test is mapped to a task, tests are also run in parallel by default. To make tests within a given project execute serially:
```scala
parallelExecution in Test := false
```
`Test` can be replaced with `IntegrationTest` to only execute integration tests serially. Note that tests from different projects may still execute concurrently.
## Filter classes
If you want to only run test classes whose name ends with "Test", use `Tests.Filter`:
```scala
testOptions in Test := Seq(Tests.Filter(s => s.endsWith("Test")))
```
## Forking tests
Version 0.12 added the facility to run tests in a separate JVM. The setting
```scala
fork in Test := true
```
specifies that all tests will be executed in a single external JVM.
See [[Forking]] for configuring standard options for forking.
More control over how tests are assigned to JVMs and what options to pass to them is available with the `testGrouping` key. For example:
```scala
import Tests._
{
def groupByFirst(tests: Seq[TestDefinition]) =
tests groupBy (_.name(0)) map {
      case (letter, tests) => new Group(letter.toString, tests, SubProcess(Seq("-Dfirst.letter=" + letter)))
} toSeq;
testGrouping <<= definedTests in Test map groupByFirst
}
```
The tests in a single group are run sequentially. The number of forked JVMs allowed to run at the same time is controlled by the limit on the `Tags.ForkedTestGroup` tag, which defaults to 1. `Setup` and `Cleanup` actions are not supported when a group is forked.
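The forked-group limit itself can be raised; as a sketch, assuming the standard `Tags.limit` API for concurrency restrictions:

```scala
// allow up to 4 forked test groups to run concurrently
concurrentRestrictions in Global += Tags.limit(Tags.ForkedTestGroup, 4)
```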
# Additional test configurations
You can add an additional test configuration to have a separate set of test sources and associated compilation, packaging, and testing tasks and settings.
The steps are:
* Define the configuration
* Add the tasks and settings
* Declare library dependencies
* Create sources
* Run tasks
The following two examples demonstrate this.
The first example shows how to enable integration tests.
The second shows how to define a customized test configuration.
This allows you to define multiple types of tests per project.
## Integration Tests
The following full build configuration demonstrates integration tests.
```scala
import sbt._
import Keys._
object B extends Build
{
lazy val root =
Project("root", file("."))
.configs( IntegrationTest )
.settings( Defaults.itSettings : _*)
.settings( libraryDependencies += specs )
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "it,test"
}
```
* `configs(IntegrationTest)` adds the predefined integration test configuration. This configuration is referred to by the name `it`.
* `settings( Defaults.itSettings : _* )` adds compilation, packaging, and testing actions and settings in the `IntegrationTest` configuration.
* `settings( libraryDependencies += specs )` adds specs to both the standard `test` configuration and the integration test configuration `it`. To define a dependency only for integration tests, use `"it"` as the configuration instead of `"it,test"`.
The standard source hierarchy is used:
* `src/it/scala` for Scala sources
* `src/it/java` for Java sources
* `src/it/resources` for resources that should go on the integration test classpath
The standard testing tasks are available, but must be prefixed with `it:`. For example,
```text
> it:test-only org.example.AnIntegrationTest
```
Similarly the standard settings may be configured for the `IntegrationTest` configuration.
If not specified directly, most `IntegrationTest` settings delegate to `Test` settings by default.
For example, if test options are specified as:
```scala
testOptions in Test += ...
```
then these will be picked up by the `Test` configuration and in turn by the `IntegrationTest` configuration.
Options can be added specifically for integration tests by putting them in the `IntegrationTest` configuration:
```scala
testOptions in IntegrationTest += ...
```
Or, use `:=` to overwrite any existing options, declaring these to be the definitive integration test options:
```scala
testOptions in IntegrationTest := Seq(...)
```
## Custom test configuration
The previous example may be generalized to a custom test configuration.
```scala
import sbt._
import Keys._
object B extends Build
{
lazy val root =
Project("root", file("."))
.configs( FunTest )
.settings( inConfig(FunTest)(Defaults.testSettings) : _*)
.settings( libraryDependencies += specs )
lazy val FunTest = config("fun") extend(Test)
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "fun"
}
```
Instead of using the built-in configuration, we defined a new one:
```scala
lazy val FunTest = config("fun") extend(Test)
```
The `extend(Test)` part means to delegate to `Test` for undefined `FunTest` settings.
The line that adds the tasks and settings for the new test configuration is:
```scala
settings( inConfig(FunTest)(Defaults.testSettings) : _*)
```
This says to add test tasks and settings in the `FunTest` configuration.
We could have done it this way for integration tests as well.
In fact, `Defaults.itSettings` is a convenience definition: `val itSettings = inConfig(IntegrationTest)(Defaults.testSettings)`.
The comments in the integration test section hold, except with `IntegrationTest` replaced with `FunTest` and `"it"` replaced with `"fun"`. For example, test options can be configured specifically for `FunTest`:
```scala
testOptions in FunTest += ...
```
Test tasks are run by prefixing them with `fun:`
```text
> fun:test
```
## Additional test configurations with shared sources
An alternative to adding separate sets of test sources (and compilations) is to share sources.
In this approach, the sources are compiled together using the same classpath and are packaged together.
However, different tests are run depending on the configuration.
```scala
import sbt._
import Keys._
object B extends Build {
lazy val root =
Project("root", file("."))
.configs( FunTest )
.settings( inConfig(FunTest)(Defaults.testTasks) : _*)
.settings(
libraryDependencies += specs,
testOptions in Test := Seq(Tests.Filter(itFilter)),
testOptions in FunTest := Seq(Tests.Filter(unitFilter))
)
def itFilter(name: String): Boolean = name endsWith "ITest"
def unitFilter(name: String): Boolean = (name endsWith "Test") && !itFilter(name)
lazy val FunTest = config("fun") extend(Test)
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "test"
}
```
The key differences are:
* We are now only adding the test tasks (`inConfig(FunTest)(Defaults.testTasks)`) and not compilation and packaging tasks and settings.
* We filter the tests to be run for each configuration.
To run standard unit tests, run `test` (or equivalently, `test:test`):
```text
> test
```
To run tests for the added configuration (here, `"fun"`), prefix it with the configuration name as before:
```text
> fun:test
> fun:test-only org.example.AFunTest
```
### Application to parallel execution
One use for this shared-source approach is to separate tests that can run in parallel from those that must execute serially.
Apply the procedure described in this section for an additional configuration.
Let's call the configuration `serial`:
```scala
lazy val Serial = config("serial") extend(Test)
```
Then, we can disable parallel execution in just that configuration using:
```text
parallelExecution in Serial := false
```
The tests to run in parallel would be run with `test` and the ones to run in serial would be run with `serial:test`.
# JUnit
Support for JUnit is provided by [junit-interface]. To add JUnit support into your project, add the junit-interface dependency in your project's main build.sbt file.
```scala
libraryDependencies += "com.novocode" % "junit-interface" % "0.8" % "test->default"
```
# Extensions
This page describes adding support for additional testing libraries and defining additional test reporters. You do this by implementing `sbt` interfaces (described below). If you are the author of the testing framework, you can depend on the test interface as a provided dependency. Alternatively, anyone can provide support for a test framework by implementing the interfaces in a separate project and packaging the project as an sbt [[Plugin|Plugins]].
## Custom Test Framework
`sbt` contains built-in support for the three main Scala testing libraries (specs 1 and 2, ScalaTest, and ScalaCheck). To add support for a different framework, implement the [uniform test interface].
## Custom Test Reporters
Test frameworks report status and results to test reporters. You can create a new test reporter by implementing either [TestReportListener] or [TestsListener].
## Using Extensions
To use your extensions in a project definition:
Modify the `testFrameworks` setting to reference your test framework:
```scala
testFrameworks += new TestFramework("custom.framework.ClassName")
```
Specify the test reporters you want to use by modifying the `testListeners` setting in your project definition.
```scala
testListeners += customTestListener
```
where `customTestListener` is of type `sbt.TestReportListener`.

@@ -0,0 +1,480 @@
=======
Testing
=======
Basics
======
The standard source locations for testing are:
- Scala sources in ``src/test/scala/``
- Java sources in ``src/test/java/``
- Resources for the test classpath in ``src/test/resources/``
The resources may be accessed from tests by using the ``getResource``
methods of ``java.lang.Class`` or ``java.lang.ClassLoader``.
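For example, a test could read a file placed at ``src/test/resources/data.txt``
(a hypothetical resource name) from the test classpath:

::

    // looks up the resource on the test classpath; returns null if it is absent
    val url = getClass.getResource("/data.txt")
    val contents = io.Source.fromURL(url).mkString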
The main Scala testing frameworks
(`specs2 <http://etorreborre.github.com/specs2/>`_,
`ScalaCheck <http://code.google.com/p/scalacheck/>`_, and
`ScalaTest <http://www.artima.com/scalatest/>`_) provide an
implementation of the common test interface and only need to be added to
the classpath to work with sbt. For example, ScalaCheck may be used by
declaring it as a :doc:`managed dependency <Library-Management>`:
::
libraryDependencies += "org.scala-tools.testing" %% "scalacheck" % "1.9" % "test"
The fourth component ``"test"`` is the :doc:`configuration </Dormant/Configurations>`
and means that ScalaCheck will only be on the test classpath and it
isn't needed by the main sources. This is generally good practice for
libraries because your users don't typically need your test dependencies
to use your library.
With the library dependency defined, you can then add test sources in
the locations listed above and compile and run tests. The tasks for
running tests are ``test`` and ``test-only``. The ``test`` task accepts
no command line arguments and runs all tests:
::
> test
test-only
---------
The ``test-only`` task accepts a whitespace separated list of test names
to run. For example:
::
> test-only org.example.MyTest1 org.example.MyTest2
It supports wildcards as well:

::

    > test-only org.example.*Slow org.example.MyTest1

test-quick
----------
The ``test-quick`` task, like ``test-only``, accepts specific test names
or wildcards to select the tests to run, using the same filter syntax.
In addition to matching any explicit filters, only the tests that
satisfy one of the following conditions are run:
- The tests that failed in the previous run
- The tests that were not run before
- The tests that have one or more transitive dependencies, possibly in
  a different project, that were recompiled.
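For example, to rerun only the affected tests whose names match a
wildcard (using a hypothetical package name):

::

    > test-quick org.example.*Test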
Tab completion
~~~~~~~~~~~~~~
Tab completion is provided for test names based on the results of the
last ``test:compile``. This means that new sources aren't available for
tab completion until they are compiled, and deleted sources won't be
removed from tab completion until a recompile. A new test source can
still be manually written out and run using ``test-only``.
Other tasks
-----------
Tasks that are available for main sources are generally available for
test sources, but are prefixed with ``test:`` on the command line and
are referenced in Scala code with ``in Test``. These tasks include:
- ``test:compile``
- ``test:console``
- ``test:console-quick``
- ``test:run``
- ``test:run-main``
See :doc:`Running </Getting-Started/Running>` for details on these tasks.
Output
======
By default, logging is buffered for each test source file until all
tests for that file complete. This can be disabled with:
::
logBuffered in Test := false
Options
=======
Test Framework Arguments
------------------------
Arguments to the test framework may be provided on the command line to
the ``test-only`` tasks following a ``--`` separator. For example:
::
> test-only org.example.MyTest -- -d -S
To specify test framework arguments as part of the build, add options
constructed by ``Tests.Argument``:
::
testOptions in Test += Tests.Argument("-d", "-g")
To specify them for a specific test framework only:
::
testOptions in Test += Tests.Argument(TestFrameworks.ScalaCheck, "-d", "-g")
Setup and Cleanup
-----------------
Specify setup and cleanup actions using ``Tests.Setup`` and
``Tests.Cleanup``. These accept either a function of type ``() => Unit``
or a function of type ``ClassLoader => Unit``. The variant that accepts
a ClassLoader is passed the class loader that is (or was) used for
running the tests. It provides access to the test classes as well as the
test framework classes.
Examples:
::
testOptions in Test += Tests.Setup( () => println("Setup") )
testOptions in Test += Tests.Cleanup( () => println("Cleanup") )
testOptions in Test += Tests.Setup( loader => ... )
testOptions in Test += Tests.Cleanup( loader => ... )
Disable Parallel Execution of Tests
-----------------------------------
By default, sbt runs all tasks in parallel. Because each test is mapped
to a task, tests are also run in parallel by default. To make tests
within a given project execute serially:
::

    parallelExecution in Test := false

``Test`` can be replaced with ``IntegrationTest`` to only execute
integration tests serially.
Note that tests from different projects may still execute concurrently.
Filter classes
--------------
If you want to only run test classes whose name ends with "Test", use
``Tests.Filter``:
::
testOptions in Test := Seq(Tests.Filter(s => s.endsWith("Test")))
Forking tests
-------------
sbt 0.12.0 added the ability to run tests in a separate JVM. The setting
::
fork in Test := true
specifies that all tests will be executed in a single external JVM. See
:doc:`Forking` for configuring standard options for forking. More control
over how tests are assigned to JVMs and what options to pass to them is
available with the ``testGrouping`` key. For example:
::
import Tests._
{
// group tests by the first letter of their name; each group runs in its
// own JVM, with a system property identifying the group
def groupByFirst(tests: Seq[TestDefinition]) =
  tests groupBy (_.name(0)) map {
    case (letter, tests) => new Group(letter.toString, tests, SubProcess(Seq("-Dfirst.letter=" + letter)))
  } toSeq;
testGrouping <<= definedTests in Test map groupByFirst
}
The tests in a single group are run sequentially. The number of forked
JVMs allowed to run at the same time is controlled by the limit on the
``Tags.ForkedTestGroup`` tag, which defaults to 1. ``Setup`` and
``Cleanup`` actions are not supported when a group is forked.
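For example, to allow up to four forked test JVMs to run concurrently,
raise the limit on the ``Tags.ForkedTestGroup`` tag (a sketch; pick a
limit suited to your machine):

::

    concurrentRestrictions in Global += Tags.limit(Tags.ForkedTestGroup, 4)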
Additional test configurations
==============================
You can add an additional test configuration to have a separate set of
test sources and associated compilation, packaging, and testing tasks
and settings. The steps are:
- Define the configuration
- Add the tasks and settings
- Declare library dependencies
- Create sources
- Run tasks
The following two examples demonstrate this. The first example shows how
to enable integration tests. The second shows how to define a customized
test configuration. This allows you to define multiple types of tests
per project.
Integration Tests
-----------------
The following full build configuration demonstrates integration tests.
::
import sbt._
import Keys._
object B extends Build
{
lazy val root =
Project("root", file("."))
.configs( IntegrationTest )
.settings( Defaults.itSettings : _*)
.settings( libraryDependencies += specs )
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "it,test"
}
- ``configs(IntegrationTest)`` adds the predefined integration test
configuration. This configuration is referred to by the name ``it``.
- ``settings( Defaults.itSettings : _* )`` adds compilation, packaging,
and testing actions and settings in the ``IntegrationTest``
configuration.
- ``settings( libraryDependencies += specs )`` adds specs to both the
standard ``test`` configuration and the integration test
configuration ``it``. To define a dependency only for integration
tests, use ``"it"`` as the configuration instead of ``"it,test"``.
The standard source hierarchy is used:
- ``src/it/scala`` for Scala sources
- ``src/it/java`` for Java sources
- ``src/it/resources`` for resources that should go on the integration
test classpath
The standard testing tasks are available, but must be prefixed with
``it:``. For example,
::
> it:test-only org.example.AnIntegrationTest
Similarly the standard settings may be configured for the
``IntegrationTest`` configuration. If not specified directly, most
``IntegrationTest`` settings delegate to ``Test`` settings by default.
For example, if test options are specified as:
::
testOptions in Test += ...
then these will be picked up by the ``Test`` configuration and in turn
by the ``IntegrationTest`` configuration. Options can be added
specifically for integration tests by putting them in the
``IntegrationTest`` configuration:
::
testOptions in IntegrationTest += ...
Or, use ``:=`` to overwrite any existing options, declaring these to be
the definitive integration test options:
::
testOptions in IntegrationTest := Seq(...)
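For instance, a (hypothetical) framework argument could be passed only
to integration tests:

::

    testOptions in IntegrationTest := Seq(Tests.Argument("-verbosity", "1"))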
Custom test configuration
-------------------------
The previous example may be generalized to a custom test configuration.
::
import sbt._
import Keys._
object B extends Build
{
lazy val root =
Project("root", file("."))
.configs( FunTest )
.settings( inConfig(FunTest)(Defaults.testSettings) : _*)
.settings( libraryDependencies += specs )
lazy val FunTest = config("fun") extend(Test)
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "fun"
}
Instead of using the built-in configuration, we defined a new one:
::
lazy val FunTest = config("fun") extend(Test)
The ``extend(Test)`` part means to delegate to ``Test`` for undefined
``FunTest`` settings. The line that adds the tasks and settings for
the new test configuration is:
::
settings( inConfig(FunTest)(Defaults.testSettings) : _*)
This says to add test and settings tasks in the ``FunTest``
configuration. We could have done it this way for integration tests as
well. In fact, ``Defaults.itSettings`` is a convenience definition:
``val itSettings = inConfig(IntegrationTest)(Defaults.testSettings)``.
The comments in the integration test section hold, except with
``IntegrationTest`` replaced with ``FunTest`` and ``"it"`` replaced with
``"fun"``. For example, test options can be configured specifically for
``FunTest``:
::
testOptions in FunTest += ...
Test tasks are run by prefixing them with ``fun:``
::
> fun:test
Additional test configurations with shared sources
--------------------------------------------------
An alternative to adding separate sets of test sources (and
compilations) is to share sources. In this approach, the sources are
compiled together using the same classpath and are packaged together.
However, different tests are run depending on the configuration.
::
import sbt._
import Keys._
object B extends Build {
lazy val root =
Project("root", file("."))
.configs( FunTest )
.settings( inConfig(FunTest)(Defaults.testTasks) : _*)
.settings(
libraryDependencies += specs,
testOptions in Test := Seq(Tests.Filter(itFilter)),
testOptions in FunTest := Seq(Tests.Filter(unitFilter))
)
def itFilter(name: String): Boolean = name endsWith "ITest"
def unitFilter(name: String): Boolean = (name endsWith "Test") && !itFilter(name)
lazy val FunTest = config("fun") extend(Test)
lazy val specs = "org.scala-tools.testing" %% "specs" % "1.6.8" % "test"
}
The key differences are:
- We are now only adding the test tasks
(``inConfig(FunTest)(Defaults.testTasks)``) and not compilation and
packaging tasks and settings.
- We filter the tests to be run for each configuration.
To run standard unit tests, run ``test`` (or equivalently,
``test:test``):
::
> test
To run tests for the added configuration (here, ``"fun"``), prefix it
with the configuration name as before:
::
> fun:test
> fun:test-only org.example.AFunTest
Application to parallel execution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
One use for this shared-source approach is to separate tests that can
run in parallel from those that must execute serially. Apply the
procedure described in this section for an additional configuration.
Let's call the configuration ``serial``:
::
lazy val Serial = config("serial") extend(Test)
Then, we can disable parallel execution in just that configuration
using:
::
parallelExecution in Serial := false
The tests to run in parallel would be run with ``test`` and the ones to
run in serial would be run with ``serial:test``.
JUnit
=====
Support for JUnit is provided by
`junit-interface <https://github.com/szeiger/junit-interface>`_. To add
JUnit support into your project, add the junit-interface dependency in
your project's main build.sbt file.
::
libraryDependencies += "com.novocode" % "junit-interface" % "0.8" % "test->default"
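With the dependency declared, a JUnit test written in Scala (a minimal
sketch) is picked up by ``test`` like any other test:

::

    import org.junit.Test
    import org.junit.Assert._

    class ExampleSuite {
      @Test def addition() {
        assertEquals(2, 1 + 1)
      }
    }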
Extensions
==========
This page describes adding support for additional testing libraries and
defining additional test reporters. You do this by implementing ``sbt``
interfaces (described below). If you are the author of the testing
framework, you can depend on the test interface as a provided
dependency. Alternatively, anyone can provide support for a test
framework by implementing the interfaces in a separate project and
packaging the project as an sbt :doc:`Plugin </Extending/Plugins>`.
Custom Test Framework
---------------------
``sbt`` contains built-in support for the three main Scala testing
libraries (specs 1 and 2, ScalaTest, and ScalaCheck). To add support for
a different framework, implement the `uniform test
interface <http://github.com/harrah/test-interface>`_.
Custom Test Reporters
---------------------
Test frameworks report status and results to test reporters. You can
create a new test reporter by implementing either
`TestReportListener <../../api/sbt/TestReportListener.html>`_
or
`TestsListener <../../api/sbt/TestsListener.html>`_.
Using Extensions
----------------
To use your extensions in a project definition:
Modify the ``testFrameworks`` setting to reference your test framework:
::
testFrameworks += new TestFramework("custom.framework.ClassName")
Specify the test reporters you want to use by modifying the
``testListeners`` setting in your project definition.
::
testListeners += customTestListener
where ``customTestListener`` is of type ``sbt.TestReportListener``.

@@ -1,41 +0,0 @@
[web plugin]: https://github.com/siasia/xsbt-web-plugin
# Triggered Execution
You can make a command run when certain files change by prefixing the command with `~`. Monitoring is terminated when `enter` is pressed. This triggered execution is configured by the `watch` setting, but typically the basic settings `watch-sources` and `poll-interval` are modified.
* `watch-sources` defines the files for a single project that are monitored for changes. By default, a project watches resources and Scala and Java sources.
* `watch-transitive-sources` then combines the `watch-sources` for the current project and all execution and classpath dependencies (see [[Full Configuration]] for details on inter-project dependencies).
* `poll-interval` selects the interval between polling for changes in milliseconds. The default value is `500 ms`.
Some example usages are described below.
# Compile
The original use-case was continuous compilation:
```text
> ~ test:compile
> ~ compile
```
# Testing
You can use the triggered execution feature to run any command or task. One use is for test driven development, as suggested by Erick on the mailing list.
The following will poll for changes to your source code (main or test) and run `test-only` for the specified test.
```text
> ~ test-only example.TestA
```
# Running Multiple Commands
Occasionally, you may need to trigger the execution of multiple commands. You can use semicolons to separate the commands to be triggered.
The following will poll for source changes and run `clean` and `test`.
```text
> ~; clean; test
```

@@ -0,0 +1,57 @@
===================
Triggered Execution
===================
You can make a command run when certain files change by prefixing the
command with ``~``. Monitoring is terminated when ``enter`` is pressed.
This triggered execution is configured by the ``watch`` setting, but
typically the basic settings ``watch-sources`` and ``poll-interval`` are
modified.
- ``watch-sources`` defines the files for a single project that are
monitored for changes. By default, a project watches resources and
Scala and Java sources.
- ``watch-transitive-sources`` then combines the ``watch-sources`` for
the current project and all execution and classpath dependencies (see
:doc:`Full Configuration </Getting-Started/Full-Def>` for details on inter-project dependencies).
- ``poll-interval`` selects the interval between polling for changes in
milliseconds. The default value is ``500 ms``.
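For example, the polling interval can be raised in the build definition
(the value is in milliseconds):

::

    pollInterval := 1000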
Some example usages are described below.
Compile
=======
The original use-case was continuous compilation:
::
> ~ test:compile
> ~ compile
Testing
=======
You can use the triggered execution feature to run any command or task.
One use is for test driven development, as suggested by Erick on the
mailing list.
The following will poll for changes to your source code (main or test)
and run ``test-only`` for the specified test.
::
> ~ test-only example.TestA
Running Multiple Commands
=========================
Occasionally, you may need to trigger the execution of multiple
commands. You can use semicolons to separate the commands to be
triggered.
The following will poll for source changes and run ``clean`` and
``test``.
::

    > ~; clean; test

@@ -1,154 +0,0 @@
[sbt.UpdateReport]: http://harrah.github.com/xsbt/latest/api/sbt/UpdateReport.html
[DependencyFilter]: http://harrah.github.com/xsbt/latest/api/sbt/DependencyFilter.html
[ConfigurationFilter]: http://harrah.github.com/xsbt/latest/api/sbt/ConfigurationFilter.html
[ModuleFilter]: http://harrah.github.com/xsbt/latest/api/sbt/ModuleFilter.html
[ArtifactFilter]: http://harrah.github.com/xsbt/latest/api/sbt/ArtifactFilter.html
# Update Report
`update` and related tasks produce a value of type [sbt.UpdateReport].
This data structure provides information about the resolved configurations, modules, and artifacts.
At the top level, `UpdateReport` provides reports of type `ConfigurationReport` for each resolved configuration.
A `ConfigurationReport` supplies reports (of type `ModuleReport`) for each module resolved for a given configuration.
Finally, a `ModuleReport` lists each successfully retrieved `Artifact` and the `File` it was retrieved to as well as the `Artifact`s that couldn't be downloaded.
This missing `Artifact` list is always empty for `update`, which will fail if it is non-empty.
However, it may be non-empty for `update-classifiers` and `update-sbt-classifiers`.
# Filtering a Report and Getting Artifacts
A typical use of `UpdateReport` is to retrieve a list of files matching a filter.
A conversion of type `UpdateReport => RichUpdateReport` implicitly provides these methods for `UpdateReport`.
The filters are defined by the [DependencyFilter], [ConfigurationFilter], [ModuleFilter], and [ArtifactFilter] types.
Using these filter types, you can filter by the configuration name, the module organization, name, or revision, and the artifact name, type, extension, or classifier.
The relevant methods (implicitly on `UpdateReport`) are:
```scala
def matching(f: DependencyFilter): Seq[File]
def select(configuration: ConfigurationFilter = ..., module: ModuleFilter = ..., artifact: ArtifactFilter = ...): Seq[File]
```
Any argument to `select` may be omitted, in which case all values are allowed for the corresponding component.
For example, if the `ConfigurationFilter` is not specified, all configurations are accepted.
The individual filter types are discussed below.
## Filter Basics
Configuration, module, and artifact filters are typically built by applying a `NameFilter` to each component of a `Configuration`, `ModuleID`, or `Artifact`.
A basic `NameFilter` is implicitly constructed from a String, with `*` interpreted as a wildcard.
```scala
import sbt._
// each argument is of type NameFilter
val mf: ModuleFilter = moduleFilter(organization = "*sbt*", name = "main" | "actions", revision = "1.*" - "1.0")
// unspecified arguments match everything by default
val mf: ModuleFilter = moduleFilter(organization = "net.databinder")
// specifying "*" is the same as omitting the argument
val af: ArtifactFilter = artifactFilter(name = "*", `type` = "source", extension = "jar", classifier = "sources")
val cf: ConfigurationFilter = configurationFilter(name = "compile" | "test")
```
Alternatively, these filters, including a `NameFilter`, may be directly defined by an appropriate predicate (a single-argument function returning a Boolean).
```scala
import sbt._
// here the function value of type String => Boolean is implicitly converted to a NameFilter
val nf: NameFilter = (s: String) => s.startsWith("dispatch-")
// a Set[String] is a function String => Boolean
val acceptConfigs: Set[String] = Set("compile", "test")
// implicitly converted to a ConfigurationFilter
val cf: ConfigurationFilter = acceptConfigs
val mf: ModuleFilter = (m: ModuleID) => m.organization contains "sbt"
val af: ArtifactFilter = (a: Artifact) => a.classifier.isEmpty
```
## ConfigurationFilter
A configuration filter essentially wraps a `NameFilter` and is explicitly constructed by the `configurationFilter` method:
```scala
def configurationFilter(name: NameFilter = ...): ConfigurationFilter
```
If the argument is omitted, the filter matches all configurations.
Functions of type `String => Boolean` are implicitly convertible to a `ConfigurationFilter`.
As with `ModuleFilter`, `ArtifactFilter`, and `NameFilter`, the `&`, `|`, and `-` methods may be used to combine `ConfigurationFilter`s.
```scala
import sbt._
val a: ConfigurationFilter = Set("compile", "test")
val b: ConfigurationFilter = (c: String) => c.startsWith("r")
val c: ConfigurationFilter = a | b
```
(The explicit types are optional here.)
## ModuleFilter
A module filter is defined by three `NameFilter`s: one for the organization, one for the module name, and one for the revision.
Each component filter must match for the whole module filter to match.
A module filter is explicitly constructed by the `moduleFilter` method:
```scala
def moduleFilter(organization: NameFilter = ..., name: NameFilter = ..., revision: NameFilter = ...): ModuleFilter
```
An omitted argument does not contribute to the match. If all arguments are omitted, the filter matches all `ModuleID`s.
Functions of type `ModuleID => Boolean` are implicitly convertible to a `ModuleFilter`.
As with `ConfigurationFilter`, `ArtifactFilter`, and `NameFilter`, the `&`, `|`, and `-` methods may be used to combine `ModuleFilter`s:
```scala
import sbt._
val a: ModuleFilter = moduleFilter(name = "dispatch-twitter", revision = "0.7.8")
val b: ModuleFilter = moduleFilter(name = "dispatch-*")
val c: ModuleFilter = b - a
```
(The explicit types are optional here.)
## ArtifactFilter
An artifact filter is defined by four `NameFilter`s: one for the name, one for the type, one for the extension, and one for the classifier.
Each component filter must match for the whole artifact filter to match.
An artifact filter is explicitly constructed by the `artifactFilter` method:
```scala
def artifactFilter(name: NameFilter = ..., `type`: NameFilter = ..., extension: NameFilter = ..., classifier: NameFilter = ...): ArtifactFilter
```
Functions of type `Artifact => Boolean` are implicitly convertible to an `ArtifactFilter`.
As with `ConfigurationFilter`, `ModuleFilter`, and `NameFilter`, the `&`, `|`, and `-` methods may be used to combine `ArtifactFilter`s:
```scala
import sbt._
val a: ArtifactFilter = artifactFilter(classifier = "javadoc")
val b: ArtifactFilter = artifactFilter(`type` = "jar")
val c: ArtifactFilter = b - a
```
(The explicit types are optional here.)
## DependencyFilter
A `DependencyFilter` is typically constructed by combining other `DependencyFilter`s together using `&&`, `||`, and `--`.
Configuration, module, and artifact filters are `DependencyFilter`s themselves and can be used directly as a `DependencyFilter` or they can build up a `DependencyFilter`.
Note that the symbols for the `DependencyFilter` combining methods are doubled up to distinguish them from the combinators of the more specific filters for configurations, modules, and artifacts.
These double-character methods will always return a `DependencyFilter`, whereas the single character methods preserve the more specific filter type.
For example:
```scala
import sbt._
val df: DependencyFilter =
configurationFilter(name = "compile" | "test") && artifactFilter(`type` = "jar") || moduleFilter(name = "dispatch-*")
```
Here, we used `&&` and `||` to combine individual component filters into a dependency filter, which can then be provided to the `UpdateReport.matching` method. Alternatively, the `UpdateReport.select` method may be used, which is equivalent to calling `matching` with its arguments combined with `&&`.

@@ -0,0 +1,195 @@
=============
Update Report
=============
``update`` and related tasks produce a value of type
`sbt.UpdateReport <../../api/sbt/UpdateReport.html>`_.
This data structure provides information about the resolved
configurations, modules, and artifacts. At the top level,
``UpdateReport`` provides reports of type ``ConfigurationReport`` for
each resolved configuration. A ``ConfigurationReport`` supplies reports
(of type ``ModuleReport``) for each module resolved for a given
configuration. Finally, a ``ModuleReport`` lists each successfully
retrieved ``Artifact`` and the ``File`` it was retrieved to as well as
the ``Artifact``\ s that couldn't be downloaded. This missing
``Artifact`` list is always empty for ``update``, which will fail if it is
non-empty. However, it may be non-empty for ``update-classifiers`` and
``update-sbt-classifiers``.
Filtering a Report and Getting Artifacts
========================================
A typical use of ``UpdateReport`` is to retrieve a list of files
matching a filter. A conversion of type
``UpdateReport => RichUpdateReport`` implicitly provides these methods
for ``UpdateReport``. The filters are defined by the
`DependencyFilter <../../api/sbt/DependencyFilter.html>`_,
`ConfigurationFilter <../../api/sbt/ConfigurationFilter.html>`_,
`ModuleFilter <../../api/sbt/ModuleFilter.html>`_,
and
`ArtifactFilter <../../api/sbt/ArtifactFilter.html>`_
types. Using these filter types, you can filter by the configuration
name, the module organization, name, or revision, and the artifact name,
type, extension, or classifier.
The relevant methods (implicitly on ``UpdateReport``) are:
::
def matching(f: DependencyFilter): Seq[File]
def select(configuration: ConfigurationFilter = ..., module: ModuleFilter = ..., artifact: ArtifactFilter = ...): Seq[File]
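For example, given an ``UpdateReport`` named ``report`` (an assumption
for illustration), the jar files resolved for the ``compile``
configuration could be selected with:

::

    val jars: Seq[File] = report.select(
        configuration = configurationFilter("compile"),
        artifact = artifactFilter(`type` = "jar"))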
Any argument to ``select`` may be omitted, in which case all values are
allowed for the corresponding component. For example, if the
``ConfigurationFilter`` is not specified, all configurations are
accepted. The individual filter types are discussed below.
Filter Basics
-------------
Configuration, module, and artifact filters are typically built by
applying a ``NameFilter`` to each component of a ``Configuration``,
``ModuleID``, or ``Artifact``. A basic ``NameFilter`` is implicitly
constructed from a String, with ``*`` interpreted as a wildcard.
::
import sbt._
// each argument is of type NameFilter
val mf: ModuleFilter = moduleFilter(organization = "*sbt*", name = "main" | "actions", revision = "1.*" - "1.0")
// unspecified arguments match everything by default
val mf: ModuleFilter = moduleFilter(organization = "net.databinder")
// specifying "*" is the same as omitting the argument
val af: ArtifactFilter = artifactFilter(name = "*", `type` = "source", extension = "jar", classifier = "sources")
val cf: ConfigurationFilter = configurationFilter(name = "compile" | "test")
Alternatively, these filters, including a ``NameFilter``, may be
directly defined by an appropriate predicate (a single-argument function
returning a Boolean).
::
import sbt._
// here the function value of type String => Boolean is implicitly converted to a NameFilter
val nf: NameFilter = (s: String) => s.startsWith("dispatch-")
// a Set[String] is a function String => Boolean
val acceptConfigs: Set[String] = Set("compile", "test")
// implicitly converted to a ConfigurationFilter
val cf: ConfigurationFilter = acceptConfigs
val mf: ModuleFilter = (m: ModuleID) => m.organization contains "sbt"
val af: ArtifactFilter = (a: Artifact) => a.classifier.isEmpty
ConfigurationFilter
-------------------
A configuration filter essentially wraps a ``NameFilter`` and is
explicitly constructed by the ``configurationFilter`` method:
::
def configurationFilter(name: NameFilter = ...): ConfigurationFilter
If the argument is omitted, the filter matches all configurations.
Functions of type ``String => Boolean`` are implicitly convertible to a
``ConfigurationFilter``. As with ``ModuleFilter``, ``ArtifactFilter``,
and ``NameFilter``, the ``&``, ``|``, and ``-`` methods may be used to
combine ``ConfigurationFilter``\ s.
::
import sbt._
val a: ConfigurationFilter = Set("compile", "test")
val b: ConfigurationFilter = (c: String) => c.startsWith("r")
val c: ConfigurationFilter = a | b
(The explicit types are optional here.)
ModuleFilter
------------
A module filter is defined by three ``NameFilter``\ s: one for the
organization, one for the module name, and one for the revision. Each
component filter must match for the whole module filter to match. A
module filter is explicitly constructed by the ``moduleFilter`` method:
::
def moduleFilter(organization: NameFilter = ..., name: NameFilter = ..., revision: NameFilter = ...): ModuleFilter
An omitted argument does not contribute to the match. If all arguments
are omitted, the filter matches all ``ModuleID``\ s. Functions of type
``ModuleID => Boolean`` are implicitly convertible to a
``ModuleFilter``. As with ``ConfigurationFilter``, ``ArtifactFilter``,
and ``NameFilter``, the ``&``, ``|``, and ``-`` methods may be used to
combine ``ModuleFilter``\ s:
::
import sbt._
val a: ModuleFilter = moduleFilter(name = "dispatch-twitter", revision = "0.7.8")
val b: ModuleFilter = moduleFilter(name = "dispatch-*")
val c: ModuleFilter = b - a
(The explicit types are optional here.)
ArtifactFilter
--------------
An artifact filter is defined by four ``NameFilter``\ s: one for the
name, one for the type, one for the extension, and one for the
classifier. Each component filter must match for the whole artifact
filter to match. An artifact filter is explicitly constructed by the
``artifactFilter`` method:
::
def artifactFilter(name: NameFilter = ..., `type`: NameFilter = ..., extension: NameFilter = ..., classifier: NameFilter = ...): ArtifactFilter
Functions of type ``Artifact => Boolean`` are implicitly convertible to
an ``ArtifactFilter``. As with ``ConfigurationFilter``,
``ModuleFilter``, and ``NameFilter``, the ``&``, ``|``, and ``-``
methods may be used to combine ``ArtifactFilter``\ s:
::
import sbt._
val a: ArtifactFilter = artifactFilter(classifier = "javadoc")
val b: ArtifactFilter = artifactFilter(`type` = "jar")
val c: ArtifactFilter = b - a
(The explicit types are optional here.)
DependencyFilter
----------------
A ``DependencyFilter`` is typically constructed by combining other
``DependencyFilter``\ s together using ``&&``, ``||``, and ``--``.
Configuration, module, and artifact filters are ``DependencyFilter``\ s
themselves and can be used directly as a ``DependencyFilter`` or they
can build up a ``DependencyFilter``. Note that the symbols for the
``DependencyFilter`` combining methods are doubled up to distinguish
them from the combinators of the more specific filters for
configurations, modules, and artifacts. These double-character methods
will always return a ``DependencyFilter``, whereas the single character
methods preserve the more specific filter type. For example:
::
import sbt._
val df: DependencyFilter =
configurationFilter(name = "compile" | "test") && artifactFilter(`type` = "jar") || moduleFilter(name = "dispatch-*")
Here, we used ``&&`` and ``||`` to combine individual component filters
into a dependency filter, which can then be provided to the
``UpdateReport.matches`` method. Alternatively, the
``UpdateReport.select`` method may be used, which is equivalent to
calling ``matches`` with its arguments combined with ``&&``.
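For example, a sketch (not from the original page) of the equivalence: it assumes an ``UpdateReport`` value named ``report`` is in scope (such as the result of the ``update`` task) and that ``select``'s named parameters are ``configuration``, ``module``, and ``artifact``:

```scala
// Sketch: two equivalent ways to pick out the compile-configuration jars
// from an update report. `report` is assumed to be an sbt UpdateReport.
val viaMatches: Seq[File] =
  report.matches(configurationFilter("compile") && artifactFilter(`type` = "jar"))

// select combines its component filter arguments with && internally
val viaSelect: Seq[File] =
  report.select(
    configuration = configurationFilter("compile"),
    artifact = artifactFilter(`type` = "jar"))
```

This is sbt build code and only runs inside an sbt task, so it is shown as a fragment rather than a standalone program.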


@ -1,38 +0,0 @@
* [[Home]] - Overview of sbt
* [[Getting Started Guide|Getting Started Welcome]] - START HERE
* [[FAQ]] - Questions, answered.
* [[Index]] - Find types, values, and methods
* [[Community]] - source, forums, releases
* [[Examples]]
* [[Detailed Topics]] - deep dive docs
* [[Artifacts]] what to publish
* [[Best Practices]]
* [[Classpaths]]
* [[Command Line Reference]]
* [[Compiler Plugins]]
* [[Console Project]]
* [[Cross Build]]
* [[Forking]]
* [[Global Settings]]
* [[Inspecting Settings]]
* [[Java Sources]]
* [[Launcher]]
* [[Library Management]]
* [[Local Scala]]
* [[Mapping Files]]
* [[Migrating to 0.10+|Migrating from SBT 0.7.x to 0.10.x]]
* [[Parallel Execution]]
* [[Parsing Input]]
* [[Paths]]
* [[Process]]
* [[Publishing]]
* [[Resolvers]]
* [[Running Project Code]]
* [[Scripts]]
* [[Setup Notes]]
* [[Tasks]]
* [[TaskInputs]]
* [[Testing]]
* [[Triggered Execution]]
* [[Update Report]]
* [[Extending sbt|Extending]] - internals docs


@ -0,0 +1,35 @@
.. toctree::
:maxdepth: 2
Artifacts
Best-Practices
Classpaths
Command-Line-Reference
Compiler-Plugins
Console-Project
Cross-Build
Detailed-Topics
Forking
Global-Settings
Inspecting-Settings
Java-Sources
Launcher
Library-Management
Local-Scala
Mapping-Files
Migrating-from-sbt-0.7.x-to-0.10.x
Parallel-Execution
Parsing-Input
Paths
Process
Publishing
Resolvers
Running-Project-Code
Scripts
Setup-Notes
TaskInputs
Tasks
Testing
Triggered-Execution
Update-Report


@ -1,3 +0,0 @@
Why could I?
Unfortunately, the GitHub wiki only provides two roles. One can't modify anything while the other can edit, delete, or create new pages. The delete page link doesn't ask for confirmation and so we get pages accidentally deleted. We have to live with it if we want to allow users to edit the wiki (and we do). Don't worry about it and thanks for promptly reverting.


@ -1,188 +0,0 @@
[sbt.Keys]: http://harrah.github.com/xsbt/latest/api/sbt/Keys$.html
[Scoped]: http://harrah.github.com/xsbt/latest/api/sbt/Scoped$.html
[Scope]: http://harrah.github.com/xsbt/latest/api/sbt/Scope$.html
[Settings]: http://harrah.github.com/xsbt/latest/sxr/Settings.scala.html
[Attributes]: http://harrah.github.com/xsbt/latest/sxr/Attributes.scala.html
[Defaults]: http://harrah.github.com/xsbt/latest/sxr/Defaults.scala.html
[Keys]: http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html
_Wiki Maintenance Note:_ This page has been replaced a couple of times; first by
[[Settings]] and most recently by [[Getting Started Basic Def]] and
[[Getting Started More About Settings]]. It has some obsolete
terminology:
- we now avoid referring to build definition as "configuration"
to avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full
configuration, in favor of ".sbt build definition files" and
".scala build definition files"
However, it may still be worth combing this page for examples or
points that are not made in new pages. After doing so, this page
could simply be a redirect (delete the content, link to the new
pages about build definition).
# Configuration
A build definition is written in Scala. There are two types of definitions: light and full. A light definition is a quick way of configuring a build. It consists of a list of Scala expressions describing project settings in one or more ".sbt" files located in the base directory of the project. This also applies to sub-projects.
A full definition is made up of one or more Scala source files that describe relationships between projects, introduce new configurations and settings, and define more complex aspects of the build. The capabilities of a light definition are a proper subset of those of a full definition.
Light configuration and full configuration can co-exist. Settings defined in the light configuration are appended to the settings defined in the full configuration for the corresponding project.
# Light Configuration
## By Example
Create a file with extension `.sbt` in your root project directory (such as `<your-project>/build.sbt`). This file contains Scala expressions of type `Setting[T]` that are separated by blank lines. Built-in settings typically have reasonable defaults (an exception is `publishTo`). A project typically redefines at least `name` and `version` and often `libraryDependencies`. All built-in settings are listed in [Keys].
A sample `build.sbt`:
```scala
// Set the project name to the string 'My Project'
name := "My Project"
// The := method used in Name and Version is one of two fundamental methods.
// The other method is <<=
// All other initialization methods are implemented in terms of these.
version := "1.0"
// Add a single dependency
libraryDependencies += "junit" % "junit" % "4.8" % "test"
// Add multiple dependencies
libraryDependencies ++= Seq(
"net.databinder" %% "dispatch-google" % "0.7.8",
"net.databinder" %% "dispatch-meetup" % "0.7.8"
)
// Exclude backup files by default. This uses ~=, which accepts a function of
// type T => T (here T = FileFilter) that is applied to the existing value.
// A similar idea is overriding a member and applying a function to the super value:
// override lazy val defaultExcludes = f(super.defaultExcludes)
//
defaultExcludes ~= (filter => filter || "*~")
/* Some equivalent ways of writing this:
defaultExcludes ~= (_ || "*~")
defaultExcludes ~= ( (_: FileFilter) || "*~")
defaultExcludes ~= ( (filter: FileFilter) => filter || "*~")
*/
// Use the project version to determine the repository to publish to.
publishTo <<= version { (v: String) =>
if(v endsWith "-SNAPSHOT")
Some(ScalaToolsSnapshots)
else
Some(ScalaToolsReleases)
}
```
## Notes
* Because everything is parsed as an expression, no semicolons are allowed at the ends of lines.
* All initialization methods end with `=` so that they have the lowest possible precedence. Except when passing a function literal to `~=`, you do not need to use parentheses for either side of the method.
Ok:
```scala
libraryDependencies += "junit" % "junit" % "4.8" % "test"
libraryDependencies.+=("junit" % "junit" % "4.8" % "test")
defaultExcludes ~= (_ || "*~")
defaultExcludes ~= (filter => filter || "*~")
```
Error:
```console
defaultExcludes ~= _ || "*~"
error: missing parameter type for expanded function ((x$1) => defaultExcludes.$colon$tilde(x$1).$bar("*~"))
defaultExcludes ~= _ || "*~"
^
error: value | is not a member of sbt.Project.Setting[sbt.FileFilter]
defaultExcludes ~= _ || "*~"
^
```
* A block is an expression, with the last statement in the block being the result. For example, the following is an expression:
```scala
{
val x = 3
def y = 2
x + y
}
```
An example of using a block to construct a Setting:
```scala
version := {
// Define a regular expression to match the current branch
val current = """\*\s+(\w+)""".r
// Process the output of 'git branch' to get the current branch
val branch = "git branch --no-color".lines_!.collect { case current(name) => "-" + name }
// Append the current branch to the version.
"1.0" + branch.mkString
}
```
* Remember that blank lines are used to clearly delineate expressions. This happens before the expression is sent to the Scala compiler, so no blank lines are allowed within a block.
## More Information
* A `Setting[T]` describes how to initialize a value of type T. The expressions shown in the example are expressions, not statements. In particular, there is no hidden mutable map that is being modified. Each `Setting[T]` describes an update to a map. The actual map is rarely directly referenced by user code. It is not the final map that is important, but the operations on the map.
* There are fundamentally two types of initializations, `:=` and `<<=`. The methods `+=`, `++=`, and `~=` are defined in terms of these. `:=` assigns a value, overwriting any existing value. `<<=` uses existing values to initialize a setting.
* `key ~= f` is equivalent to `key <<= key(f)`
* `key += value` is equivalent to `key ~= (_ :+ value)` or `key <<= key(_ :+ value)`
* `key ++= value` is equivalent to `key ~= (_ ++ value)` or `key <<= key(_ ++ value)`
* There can be multiple `.sbt` files per project. This feature can be used, for example, to put user-specific configurations in a separate file.
* Import clauses are allowed at the beginning of a `.sbt` file. Since they are clauses, no semicolons are allowed. They need not be separated by blank lines, but each import must be on one line. For example,
```scala
import scala.xml.NodeSeq
import math.{abs, pow}
```
* These imports are defined by default in a `.sbt` file:
```scala
import sbt._
import Process._
import Keys._
```
In addition, the contents of all public `Build` and `Plugin` objects from the full definition are imported.
sbt uses the blank lines to separate the expressions and then it sends them off to the Scala compiler. Each expression is parsed, compiled, and loaded independently. The settings are combined into a `Seq[Setting[_]]` and passed to the settings engine. The engine groups the settings by key, preserving order per key though, and then computes the order in which each setting needs to be evaluated. Cycles and references to uninitialized settings are detected here and dead settings are dropped. Finally, the settings are transformed into a function that is applied to an initially empty map.
Because the expressions can be separated before they reach the compiler, sbt only needs to recompile expressions that change. So, the work to respond to changes is proportional to the number of settings that changed, not the number of settings defined in the build. If imports change, all expressions in the `.sbt` file need to be recompiled.
## Implementation Details (even more information)
Each expression describes an initialization operation. The simplest operation is context-free assignment using `:=`. That is, no outside information is used to determine the setting value. Operations other than `:=` are implemented in terms of `<<=`. The `<<=` method specifies an operation that requires other settings to be initialized and uses their values to define a new setting.
The target (left side value) of a method like `:=` identifies one of the constructs in sbt: settings, tasks, and input tasks. It is not an actual setting or task, but a key representing a setting or task. A setting is a value assigned when a project is loaded. A task is a unit of work that is run on-demand zero or more times after a project is loaded and also produces a value. An input task, previously known as a Method Task in 0.7 and earlier, accepts an input string and produces a task to be run. The renaming is because it can accept arbitrary input in 0.10 and not just a space-delimited sequence of arguments like in 0.7.
A construct (setting, task, or input task) is identified by a scoped key, which is a pair `(Scope, AttributeKey[T])`. An `AttributeKey` associates a name with a type and is a typesafe key for use in an `AttributeMap`. Attributes are best illustrated by the `get` and `put` methods on `AttributeMap`:
```scala
def get[T](key: AttributeKey[T]): Option[T]
def put[T](key: AttributeKey[T], value: T): AttributeMap
```
For example, given a value `k: AttributeKey[String]` and a value `m: AttributeMap`, `m.get(k)` has type `Option[String]`.
In sbt, a Scope is mainly defined by a project reference and a configuration (such as 'test' or 'compile'). Project data is stored in a Map[Scope, AttributeMap]. Each Scope identifies a map. You can sort of compare a Scope to a reference to an object and an AttributeMap to the object's data.
In order to provide appropriate convenience methods for constructing an initialization operation for each construct, an AttributeKey is constructed through either a SettingKey, TaskKey, or InputKey:
```scala
// underlying key: AttributeKey[String]
val name = SettingKey[String]("name")
// underlying key: AttributeKey[Task[String]]
val hello = TaskKey[String]("hello")
// underlying key: AttributeKey[InputTask[String]]
val helloArgs = InputKey[String]("hello-with-args")
```
In the basic expression `name := "asdf"`, the `:=` method is implicitly available for a `SettingKey` and accepts an argument that conforms to the type parameter of name, which is String.
The high-level API for constructing settings is defined in [Scoped]. Scopes are defined in [Scope]. The underlying engine is in [Settings] and the heterogeneous map is in [Attributes].
Built-in keys are in [Keys] and default settings are defined in [Defaults].


@ -0,0 +1,239 @@
*Wiki Maintenance Note:* This page has been replaced a couple of times;
first by
`Settings <../../sxr/Settings.scala.html>`_
and most recently by :doc:`/Getting-Started/Basic-Def` and
:doc:`/Getting-Started/More-About-Settings`. It has some obsolete
terminology:
- we now avoid referring to build definition as "configuration" to
avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full configuration,
in favor of ".sbt build definition files" and ".scala build
definition files"
However, it may still be worth combing this page for examples or points
that are not made in new pages. After doing so, this page could simply
be a redirect (delete the content, link to the new pages about build
definition).
Configuration
=============
A build definition is written in Scala. There are two types of
definitions: light and full. A light definition is a quick way of
configuring a build. It consists of a list of Scala expressions
describing project settings in one or more ".sbt" files located in the
base directory of the project. This also applies to sub-projects.
A full definition is made up of one or more Scala source files that
describe relationships between projects, introduce new configurations
and settings, and define more complex aspects of the build. The
capabilities of a light definition are a proper subset of those of a
full definition.
Light configuration and full configuration can co-exist. Settings
defined in the light configuration are appended to the settings defined
in the full configuration for the corresponding project.
Light Configuration
===================
By Example
----------
Create a file with extension ``.sbt`` in your root project directory
(such as ``<your-project>/build.sbt``). This file contains Scala
expressions of type ``Setting[T]`` that are separated by blank lines.
Built-in settings typically have reasonable defaults (an exception is
``publishTo``). A project typically redefines at least ``name`` and
``version`` and often ``libraryDependencies``. All built-in settings are
listed in
`Keys <../../sxr/Keys.scala.html>`_.
A sample ``build.sbt``:
::
// Set the project name to the string 'My Project'
name := "My Project"
// The := method used in Name and Version is one of two fundamental methods.
// The other method is <<=
// All other initialization methods are implemented in terms of these.
version := "1.0"
// Add a single dependency
libraryDependencies += "junit" % "junit" % "4.8" % "test"
// Add multiple dependencies
libraryDependencies ++= Seq(
"net.databinder" %% "dispatch-google" % "0.7.8",
"net.databinder" %% "dispatch-meetup" % "0.7.8"
)
// Exclude backup files by default. This uses ~=, which accepts a function of
// type T => T (here T = FileFilter) that is applied to the existing value.
// A similar idea is overriding a member and applying a function to the super value:
// override lazy val defaultExcludes = f(super.defaultExcludes)
//
defaultExcludes ~= (filter => filter || "*~")
/* Some equivalent ways of writing this:
defaultExcludes ~= (_ || "*~")
defaultExcludes ~= ( (_: FileFilter) || "*~")
defaultExcludes ~= ( (filter: FileFilter) => filter || "*~")
*/
// Use the project version to determine the repository to publish to.
publishTo <<= version { (v: String) =>
if(v endsWith "-SNAPSHOT")
Some(ScalaToolsSnapshots)
else
Some(ScalaToolsReleases)
}
Notes
-----
- Because everything is parsed as an expression, no semicolons are
allowed at the ends of lines.
- All initialization methods end with ``=`` so that they have the
lowest possible precedence. Except when passing a function literal to
``~=``, you do not need to use parentheses for either side of the
   method.

   Ok:

   ::

       libraryDependencies += "junit" % "junit" % "4.8" % "test"

       libraryDependencies.+=("junit" % "junit" % "4.8" % "test")

       defaultExcludes ~= (_ || "*~")

       defaultExcludes ~= (filter => filter || "*~")
Error:
   ::

       defaultExcludes ~= _ || "*~"
       error: missing parameter type for expanded function ((x$1) => defaultExcludes.$colon$tilde(x$1).$bar("*~"))
       defaultExcludes ~= _ || "*~"
                          ^
       error: value | is not a member of sbt.Project.Setting[sbt.FileFilter]
       defaultExcludes ~= _ || "*~"
                               ^

-  A block is an expression, with the last statement in the block being
   the result. For example, the following is an expression:

   ::

       {
         val x = 3
         def y = 2
         x + y
       }

   An example of using a block to construct a Setting:

   ::

       version := {
         // Define a regular expression to match the current branch
         val current = """\*\s+(\w+)""".r
         // Process the output of 'git branch' to get the current branch
         val branch = "git branch --no-color".lines_!.collect { case current(name) => "-" + name }
         // Append the current branch to the version.
         "1.0" + branch.mkString
       }

-  Remember that blank lines are used to clearly delineate expressions.
   This happens before the expression is sent to the Scala compiler, so
   no blank lines are allowed within a block.
More Information
----------------
- A ``Setting[T]`` describes how to initialize a value of type T. The
expressions shown in the example are expressions, not statements. In
particular, there is no hidden mutable map that is being modified.
Each ``Setting[T]`` describes an update to a map. The actual map is
rarely directly referenced by user code. It is not the final map that
is important, but the operations on the map.
- There are fundamentally two types of initializations, ``:=`` and
``<<=``. The methods ``+=``, ``++=``, and ``~=`` are defined in terms
of these. ``:=`` assigns a value, overwriting any existing value.
``<<=`` uses existing values to initialize a setting.
- ``key ~= f`` is equivalent to ``key <<= key(f)``
- ``key += value`` is equivalent to ``key ~= (_ :+ value)`` or
``key <<= key(_ :+ value)``
- ``key ++= value`` is equivalent to ``key ~= (_ ++ value)`` or
``key <<= key(_ ++ value)``
- There can be multiple ``.sbt`` files per project. This feature can be
used, for example, to put user-specific configurations in a separate
file.
- Import clauses are allowed at the beginning of a ``.sbt`` file. Since
they are clauses, no semicolons are allowed. They need not be
separated by blank lines, but each import must be on one line. For
example,
   ::

       import scala.xml.NodeSeq
       import math.{abs, pow}

-  These imports are defined by default in a ``.sbt`` file:

   ::

       import sbt._
       import Process._
       import Keys._

   In addition, the contents of all public ``Build`` and ``Plugin``
   objects from the full definition are imported.
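The ``+=``/``~=``/``<<=`` equivalences listed above can be illustrated concretely. A sketch of hypothetical ``build.sbt`` content (0.10+ settings API; in a real file each expression must be separated by blank lines):

```scala
// Three equivalent ways to append one dependency; the last two are what
// += expands to, per the equivalences above.
libraryDependencies += "junit" % "junit" % "4.8" % "test"

libraryDependencies ~= (deps => deps :+ ("junit" % "junit" % "4.8" % "test"))

libraryDependencies <<= libraryDependencies(deps => deps :+ ("junit" % "junit" % "4.8" % "test"))
```

Applying all three in one build would append the dependency three times; they are shown together only to make the expansion visible.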
sbt uses the blank lines to separate the expressions and then it sends
them off to the Scala compiler. Each expression is parsed, compiled, and
loaded independently. The settings are combined into a
``Seq[Setting[_]]`` and passed to the settings engine. The engine groups
the settings by key, preserving order per key though, and then computes
the order in which each setting needs to be evaluated. Cycles and
references to uninitialized settings are detected here and dead settings
are dropped. Finally, the settings are transformed into a function that
is applied to an initially empty map.
Because the expressions can be separated before they reach the compiler,
sbt only needs to recompile expressions that change. So, the work to
respond to changes is proportional to the number of settings that
changed, not the number of settings defined in the build. If imports
change, all expressions in the ``.sbt`` file need to be recompiled.
Implementation Details (even more information)
----------------------------------------------
Each expression describes an initialization operation. The simplest
operation is context-free assignment using ``:=``. That is, no outside
information is used to determine the setting value. Operations other
than ``:=`` are implemented in terms of ``<<=``. The ``<<=`` method
specifies an operation that requires other settings to be initialized
and uses their values to define a new setting.
The target (left side value) of a method like ``:=`` identifies one of
the constructs in sbt: settings, tasks, and input tasks. It is not an
actual setting or task, but a key representing a setting or task. A
setting is a value assigned when a project is loaded. A task is a unit
of work that is run on-demand zero or more times after a project is
loaded and also produces a value. An input task, previously known as a
Method Task in 0.7 and earlier, accepts an input string and produces a
task to be run. The renaming is because it can accept arbitrary input in
0.10 and not just a space-delimited sequence of arguments like in 0.7.
A construct (setting, task, or input task) is identified by a scoped
key, which is a pair ``(Scope, AttributeKey[T])``. An ``AttributeKey``
associates a name with a type and is a typesafe key for use in an
``AttributeMap``. Attributes are best illustrated by the ``get`` and
``put`` methods on ``AttributeMap``:
::
def get[T](key: AttributeKey[T]): Option[T]
def put[T](key: AttributeKey[T], value: T): AttributeMap
For example, given a value ``k: AttributeKey[String]`` and a value
``m: AttributeMap``, ``m.get(k)`` has type ``Option[String]``.
In sbt, a Scope is mainly defined by a project reference and a
configuration (such as 'test' or 'compile'). Project data is stored in a
Map[Scope, AttributeMap]. Each Scope identifies a map. You can sort of
compare a Scope to a reference to an object and an AttributeMap to the
object's data.
In order to provide appropriate convenience methods for constructing an
initialization operation for each construct, an AttributeKey is
constructed through either a SettingKey, TaskKey, or InputKey:
::
// underlying key: AttributeKey[String]
val name = SettingKey[String]("name")
// underlying key: AttributeKey[Task[String]]
val hello = TaskKey[String]("hello")
// underlying key: AttributeKey[InputTask[String]]
val helloArgs = InputKey[String]("hello-with-args")
In the basic expression ``name := "asdf"``, the ``:=`` method is
implicitly available for a ``SettingKey`` and accepts an argument that
conforms to the type parameter of name, which is String.
The high-level API for constructing settings is defined in
`Scoped <../../api/sbt/Scoped$.html>`_. Scopes are defined in `Scope <../../api/sbt/Scope$.html>`_.
The underlying engine is in `Settings <../../sxr/Settings.scala.html>`_
and the heterogeneous map is in `Attributes <../../sxr/Attributes.scala.html>`_.
Built-in keys are in `Keys <../../sxr/Keys.scala.html>`_ and
default settings are defined in `Defaults <../../sxr/Defaults.scala.html>`_.


@ -1,52 +0,0 @@
[Ivy documentation]: http://ant.apache.org/ivy/history/2.2.0/tutorial/conf.html
[Maven Scopes]: http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope
_Wiki Maintenance Note:_ Most of what's on this page is now covered in
[[Getting Started Library Dependencies]]. This page should be
analyzed for any points that aren't covered on the new page, and
those points moved somewhere (maybe the [[FAQ]] or an "advanced
library deps" page). Then this page could become a redirect with
no content except a link pointing to the new page(s).
_Wiki Maintenance Note 2:_ There probably should be a page called
Configurations that's less about library dependency management and
more about listing all the configurations that exist and
describing what they are used for. This would complement the way
this page is linked, for example in [[Index]].
# Configurations
Ivy configurations are a useful feature for your build when you use managed dependencies. They are essentially named sets of dependencies. You can read the [Ivy documentation] for details. Their use in sbt is described on this page.
# Usage
The built-in use of configurations in sbt is similar to scopes in Maven. sbt adds dependencies to different classpaths by the configuration that they are defined in. See the description of [Maven Scopes] for details.
You put a dependency in a configuration by selecting one or more of its configurations to map to one or more of your project's configurations. The most common case is to have one of your configurations `A` use a dependency's configuration `B`. The mapping for this looks like `"A->B"`. To apply this mapping to a dependency, add it to the end of your dependency definition:
```scala
libraryDependencies += "org.scalatest" % "scalatest" % "1.2" % "test->compile"
```
This says that your project's `test` configuration uses `ScalaTest`'s `default` configuration. Again, see the [Ivy documentation] for more advanced mappings. Most projects published to Maven repositories will use the `default` or `compile` configuration.
A useful application of configurations is to group dependencies that are not used on normal classpaths. For example, your project might use a `"js"` configuration to automatically download jQuery and then include it in your jar by modifying `resources`. For example:
```scala
ivyConfigurations += config("js") hide
libraryDependencies += "jquery" % "jquery" % "1.3.2" % "js->default" from "http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"
resources <<= (resources, update) { (rs, report) =>
rs ++ report.select( configurationFilter("js") )
}
```
The `config` method defines a new configuration with name `"js"` and makes it private to the project so that it is not used for publishing.
See [[Update Report]] for more information on selecting managed artifacts.
A configuration without a mapping (no `"->"`) is mapped to `default` or `compile`. The `->` is only needed when mapping to a different configuration than those. The ScalaTest dependency above can then be shortened to:
```scala
libraryDependencies += "org.scala-tools.testing" % "scalatest" % "1.0" % "test"
```


@ -0,0 +1,81 @@
==============
Configurations
==============
*Wiki Maintenance Note:* Most of what's on this page is now covered in
:doc:`/Getting-Started/Library-Dependencies`. This page should be analyzed
for any points that aren't covered on the new page, and those points
moved somewhere (maybe the :doc:`/faq` or an "advanced library deps" page).
Then this page could become a redirect with no content except a link
pointing to the new page(s).
*Wiki Maintenance Note 2:* There probably should be a page called
Configurations that's less about library dependency management and more
about listing all the configurations that exist and describing what they
are used for. This would complement the way this page is linked, for
example in :doc:`/Name-Index`.
Configurations
==============
Ivy configurations are a useful feature for your build when you use
managed dependencies. They are essentially named sets of dependencies.
You can read the `Ivy
documentation <http://ant.apache.org/ivy/history/2.2.0/tutorial/conf.html>`_
for details. Their use in sbt is described on this page.
Usage
=====
The built-in use of configurations in sbt is similar to scopes in Maven.
sbt adds dependencies to different classpaths by the configuration that
they are defined in. See the description of `Maven
Scopes <http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Dependency_Scope>`_
for details.
You put a dependency in a configuration by selecting one or more of its
configurations to map to one or more of your project's configurations.
The most common case is to have one of your configurations ``A`` use a
dependency's configuration ``B``. The mapping for this looks like
``"A->B"``. To apply this mapping to a dependency, add it to the end of
your dependency definition:
::
libraryDependencies += "org.scalatest" % "scalatest" % "1.2" % "test->compile"
This says that your project's ``test`` configuration uses
``ScalaTest``'s ``default`` configuration. Again, see the `Ivy
documentation <http://ant.apache.org/ivy/history/2.2.0/tutorial/conf.html>`_
for more advanced mappings. Most projects published to Maven
repositories will use the ``default`` or ``compile`` configuration.
A useful application of configurations is to group dependencies that are
not used on normal classpaths. For example, your project might use a
``"js"`` configuration to automatically download jQuery and then include
it in your jar by modifying ``resources``. For example:
::
ivyConfigurations += config("js") hide
libraryDependencies += "jquery" % "jquery" % "1.3.2" % "js->default" from "http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"
resources <<= (resources, update) { (rs, report) =>
rs ++ report.select( configurationFilter("js") )
}
The ``config`` method defines a new configuration with name ``"js"`` and
makes it private to the project so that it is not used for publishing.
See :doc:`/Detailed-Topics/Update-Report` for more information on selecting managed
artifacts.
A configuration without a mapping (no ``"->"``) is mapped to ``default``
or ``compile``. The ``->`` is only needed when mapping to a different
configuration than those. The ScalaTest dependency above can then be
shortened to:
::
libraryDependencies += "org.scalatest" % "scalatest" % "1.2" % "test"
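As a sketch, the shortened form is equivalent to writing the default mapping explicitly (this assumes the dependency publishes a ``default`` configuration, as most Maven-published artifacts do):

```scala
// These two declarations are equivalent: a configuration name with the
// "->" omitted maps to the dependency's default (or compile) configuration.
libraryDependencies += "org.scalatest" % "scalatest" % "1.2" % "test"
libraryDependencies += "org.scalatest" % "scalatest" % "1.2" % "test->default"
```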
@ -1,18 +0,0 @@
# Dormant Pages
If you check out the wiki as a git repository, there's a `Dormant`
directory (this one) which contains:
- "redirect" pages (empty pages that point to some new page).
If you want to rename a page and think it has lots of incoming
links from outside the wiki, you could leave the old page name
in here. The directory name is not part of the link so it's
safe to move the old page into the `Dormant` directory.
- "clipboard" pages that contain some amount of useful text, that
needs to be extracted and organized, maybe moved to existing
pages or the FAQ or maybe there's a new page that should exist.
Basically content that may be good but needs massaging into the
big picture.
Ideally, pages in here have a note at the top pointing to
alternative content and explaining the status of the page.
@ -0,0 +1,28 @@
Dormant Pages
=============
If you check out the wiki as a git repository, there's a ``Dormant``
directory (this one) which contains:
- "redirect" pages (empty pages that point to some new page). If you
want to rename a page and think it has lots of incoming links from
outside the wiki, you could leave the old page name in here. The
directory name is not part of the link so it's safe to move the old
page into the ``Dormant`` directory.
- "clipboard" pages that contain some amount of useful text, that needs
to be extracted and organized, maybe moved to existing pages or the
FAQ or maybe there's a new page that should exist. Basically content
that may be good but needs massaging into the big picture.
Ideally, pages in here have a note at the top pointing to alternative
content and explaining the status of the page.
.. toctree::
:maxdepth: 2
Basic-Configuration
Configurations
Full-Configuration
Introduction-to-Full-Configurations
Needs-New-Home
Settings
@ -1,260 +0,0 @@
[#35]: https://github.com/harrah/xsbt/issues/35
_Wiki Maintenance Note:_ This page has been _mostly_ replaced by
[[Getting Started Full Def]] and other pages. It has some obsolete
terminology:
- we now avoid referring to build definition as "configuration"
to avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full
configuration, in favor of ".sbt build definition files" and
".scala build definition files"
However, it may still be worth combing this page for examples or
points that are not made in new pages. Some stuff that may not be
elsewhere:
- discussion of cycles
- discussion of build-level settings
- discussion of omitting or augmenting defaults
Also, the discussion of configuration delegation which is teased
here, needs to exist somewhere.
After extracting useful content, this page could simply be a
redirect (delete the content, link to the new pages about build
definition).
There is a related page [[Introduction to Full Configurations]]
which could benefit from cleanup at the same time.
# Full Configuration (Draft)
A full configuration consists of one or more Scala source files that define concrete Builds.
A Build defines project relationships and configurations.
## By Example
Create a file with extension `.scala` in your `project/` directory (such as `<your-project>/project/Build.scala`).
A sample `project/Build.scala`:
```scala
import sbt._
object MyBuild extends Build {
// Declare a project in the root directory of the build with ID "root".
// Declare an execution dependency on sub1.
lazy val root = Project("root", file(".")) aggregate(sub1)
// Declare a project with ID 'sub1' in directory 'a'.
// Declare a classpath dependency on sub2 in the 'test' configuration.
lazy val sub1: Project = Project("sub1", file("a")) dependsOn(sub2 % "test")
// Declare a project with ID 'sub2' in directory 'b'.
// Declare a configuration dependency on the root project.
lazy val sub2 = Project("sub2", file("b"), delegates = root :: Nil)
}
```
### Cycles
(It is probably best to skip this section and come back after reading about project relationships. It is near the example for easier reference.)
The configuration dependency `sub2 -> root` is specified as an argument to the `delegates` parameter of `Project`, which is by-name and of type `Seq[ProjectReference]` because by-name repeated parameters are not allowed in Scala.
There are also corresponding by-name parameters `aggregate` and `dependencies` for execution and classpath dependencies.
By-name parameters, being non-strict, are useful when there are cycles between the projects, as is the case for `root` and `sub2`.
In the example, there is a *configuration* dependency `sub2 -> root`, a *classpath* dependency `sub1 -> sub2`, and an *execution* dependency `root -> sub1`.
This causes cycles at the Scala level, but not within a particular dependency type; cycles within a single dependency type are not allowed.
## Defining Projects
An internal project is defined by constructing an instance of `Project`. The minimum information for a new project is its ID string and base directory. For example:
```scala
import sbt._
object MyBuild extends Build {
lazy val projectA = Project("a", file("subA"))
}
```
This constructs a project definition for a project with ID 'a' and located in the `<project root>/subA` directory.
Here, `file(...)` is equivalent to `new File(...)` and is resolved relative to the build's base directory.
There are additional optional parameters to the Project constructor.
These parameters configure the project and declare project relationships, as discussed in the next sections.
## Project Settings
A full build definition can configure settings for a project, just like a light configuration.
Unlike a light configuration, the default settings can be replaced or manipulated and sequences of settings can be manipulated.
In addition, a light configuration has default imports defined. A full definition needs to import these explicitly.
In particular, all keys (like `name` and `version`) need to be imported from `sbt.Keys`.
### No defaults
For example, to define a build from scratch (with no default settings or tasks):
```scala
import sbt._
import Keys._
object MyBuild extends Build {
lazy val projectA = Project("a", file("subA"), settings = Seq(name := "From Scratch"))
}
```
### Augment Defaults
To augment the default settings, the following Project definitions are equivalent:
```scala
lazy val a1 = Project("a", file("subA")) settings(name := "Additional", version := "1.0")
lazy val a2 = Project("a", file("subA"),
settings = Defaults.defaultSettings ++ Seq(name := "Additional", version := "1.0")
)
```
### Select Defaults
Web support is now split out into a plugin.
With the plugin declared, its settings can be selected like:
```scala
import sbt._
import Keys._
object MyBuild extends Build {
lazy val projectA = Project("a", file("subA"), settings = Web.webSettings)
}
```
Settings defined in `.sbt` files are appended to the settings for each `Project` definition.
### Build-level Settings
Lastly, settings can be defined for the entire build.
In general, these are used when a setting is not defined for a project.
These settings are declared either by augmenting `Build.settings` or defining settings in the scope of the current build.
For example, to set the shell prompt to be the id for the current project, the following setting can be added to a `.sbt` file:
```scala
shellPrompt in ThisBuild := { s => Project.extract(s).currentProject.id + "> " }
```
(The value is a function `State => String`. `State` contains everything about the build and will be discussed elsewhere.)
Alternatively, the setting can be defined in `Build.settings`:
```scala
import sbt._
import Keys._
object MyBuild extends Build {
override lazy val settings = super.settings :+
(shellPrompt := { s => Project.extract(s).currentProject.id + "> " })
...
}
```
## Project Relationships
There are three kinds of project relationships in sbt. These are described by execution, classpath, and configuration dependencies.
### Project References
When defining a dependency on another project, you provide a `ProjectReference`.
In the simplest case, this is a `Project` object. (Technically, there is an implicit conversion `Project => ProjectReference`)
This indicates a dependency on a project within the same build.
It is possible to declare a dependency on a project in a directory separate from the current build, in a git repository, or in a project packaged into a jar and accessible via http/https.
These are referred to as external builds and projects. You can reference the root project in an external build with `RootProject`:
```scala
RootProject( file("/home/user/a-project") )
RootProject( uri("git://github.com/dragos/dupcheck.git") )
```
or a specific project within the external build can be referenced using a `ProjectRef`:
```scala
ProjectRef( uri("git://github.com/dragos/dupcheck.git"), "project-id")
```
The fragment part of the git URI can be used to specify a specific branch or tag. For example:
```scala
RootProject( uri("git://github.com/typesafehub/sbteclipse.git#v1.2") )
```
Ultimately, a `RootProject` is resolved to a `ProjectRef` once the external project is loaded.
Additionally, there are implicit conversions `URI => RootProject` and `File => RootProject` so that URIs and Files can be used directly.
External, remote builds are retrieved or checked out to a staging directory in the user's `.sbt` directory so that they can be manipulated like local builds.
Examples of using project references follow in the next sections.
When using external projects, the `sbt.boot.directory` should be set (see [[Setup|Getting Started Setup]]) so that unnecessary recompilations do not occur (see [#35]).
### Execution Dependency
If project A has an execution dependency on project B, then when you execute a task on project A, it will also be run on project B. No ordering of these tasks is implied.
An execution dependency is declared using the `aggregate` method on `Project`. For example:
```scala
lazy val root = Project(...) aggregate(sub1)
lazy val sub1 = Project(...) aggregate(sub2)
lazy val sub2 = Project(...) aggregate(ext)
lazy val ext = uri("git://github.com/dragos/dupcheck.git")
```
If 'clean' is executed on `sub2`, it will also be executed on `ext` (the locally checked out version).
If 'clean' is executed on `root`, it will also be executed on `sub1`, `sub2`, and `ext`.
Aggregation can be controlled more finely by configuring the `aggregate` setting. This setting is of type `Aggregation`:
```scala
sealed trait Aggregation
final case class Implicit(enabled: Boolean) extends Aggregation
final class Explicit(val deps: Seq[ProjectReference], val transitive: Boolean) extends Aggregation
```
This key can be set in any scope, including per-task scopes. By default, aggregation is disabled for `run`, `console-quick`, `console`, and `console-project`. Re-enabling it from the command line for the current project for `run` would look like:
```scala
> set aggregate in run := true
```
(There is an implicit `Boolean => Implicit` where `true` translates to `Implicit(true)` and `false` translates to `Implicit(false)`). Similarly, aggregation can be disabled for the current project using:
```scala
> set aggregate in clean := false
```
`Explicit` allows finer control over the execution dependencies and transitivity. An instance is normally constructed using `Aggregation.apply`. No new projects may be introduced here (that is, internal references have to be defined already in the Build's `projects` and externals must be a dependency in the Build definition). For example, to declare that `root/clean` aggregates `sub1/clean` and `sub2/clean` intransitively (that is, excluding `ext` even though `sub2` aggregates it):
```scala
> set aggregate in clean := Aggregation(Seq(sub1, sub2), transitive = false)
```
### Classpath Dependencies
A classpath dependency declares that a project needs the full classpath of another project on its classpath.
Typically, this implies that the dependency will ensure its classpath is up-to-date, such as by fetching dependencies and recompiling modified sources.
A classpath dependency declaration consists of a project reference and an optional configuration mapping.
For example, to use project b's `compile` configuration from project a's `test` configuration:
```scala
lazy val a = Project(...) dependsOn(b % "test->compile")
lazy val b = Project(...)
```
`"test->compile"` may be shortened to `"test"` in this case. The `%` call may be omitted, in which case the mapping is `"compile->compile"` by default.
A useful configuration declaration is `test->test`. This means to use a dependency's test classes on the dependent's test classpath.
Multiple declarations may be separated by a semicolon. For example, the following says to use the main classes of `b` for the compile classpath of `a` as well as the test classes of `b` for the test classpath of `a`:
```scala
lazy val a = Project(...) dependsOn(b % "compile;test->test")
lazy val b = Project(...)
```
### Configuration Dependencies
Suppose project A has a configuration dependency on project B.
If a setting is not found on project A, it will be looked up in project B.
This is one aspect of delegation and will be described in detail elsewhere.
@ -0,0 +1,325 @@
*Wiki Maintenance Note:* This page has been *mostly* replaced by
[[Getting Started Full Def]] and other pages. It has some obsolete
terminology:
- we now avoid referring to build definition as "configuration" to
avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full configuration,
in favor of ".sbt build definition files" and ".scala build
definition files"
However, it may still be worth combing this page for examples or points
that are not made in new pages. Some stuff that may not be elsewhere:
- discussion of cycles
- discussion of build-level settings
- discussion of omitting or augmenting defaults
Also, the discussion of configuration delegation which is teased here,
needs to exist somewhere.
After extracting useful content, this page could simply be a redirect
(delete the content, link to the new pages about build definition).
There is a related page [[Introduction to Full Configurations]] which
could benefit from cleanup at the same time.
Full Configuration (Draft)
==========================
A full configuration consists of one or more Scala source files that
define concrete Builds. A Build defines project relationships and
configurations.
By Example
----------
Create a file with extension ``.scala`` in your ``project/`` directory
(such as ``<your-project>/project/Build.scala``).
A sample ``project/Build.scala``:
::
import sbt._
object MyBuild extends Build {
// Declare a project in the root directory of the build with ID "root".
// Declare an execution dependency on sub1.
lazy val root = Project("root", file(".")) aggregate(sub1)
// Declare a project with ID 'sub1' in directory 'a'.
// Declare a classpath dependency on sub2 in the 'test' configuration.
lazy val sub1: Project = Project("sub1", file("a")) dependsOn(sub2 % "test")
// Declare a project with ID 'sub2' in directory 'b'.
// Declare a configuration dependency on the root project.
lazy val sub2 = Project("sub2", file("b"), delegates = root :: Nil)
}
Cycles
~~~~~~
(It is probably best to skip this section and come back after reading
about project relationships. It is near the example for easier
reference.)
The configuration dependency ``sub2 -> root`` is specified as an
argument to the ``delegates`` parameter of ``Project``, which is by-name
and of type ``Seq[ProjectReference]`` because by-name repeated
parameters are not allowed in Scala. There are also corresponding
by-name parameters ``aggregate`` and ``dependencies`` for execution and
classpath dependencies. By-name parameters, being non-strict, are useful
when there are cycles between the projects, as is the case for ``root``
and ``sub2``. In the example, there is a *configuration* dependency
``sub2 -> root``, a *classpath* dependency ``sub1 -> sub2``, and an
*execution* dependency ``root -> sub1``. This causes cycles at the
Scala level, but not within a particular dependency type; cycles within
a single dependency type are not allowed.
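The role of laziness here can be sketched outside sbt with a minimal model (hypothetical types, not sbt's own):

```scala
// Minimal model of why `lazy val` definitions permit the root <-> sub2
// cycle: neither value is evaluated until first use, so each definition
// may freely mention the other.
object CycleSketch {
  final case class Node(id: String, deps: () => Seq[Node])

  lazy val root: Node = Node("root", () => Seq(sub1))
  lazy val sub1: Node = Node("sub1", () => Seq(sub2))
  lazy val sub2: Node = Node("sub2", () => Seq(root)) // cycles back to root

  def main(args: Array[String]): Unit =
    println(root.deps().map(_.id)) // prints List(sub1)
}
```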
Defining Projects
-----------------
An internal project is defined by constructing an instance of
``Project``. The minimum information for a new project is its ID string
and base directory. For example:
::

    import sbt._

    object MyBuild extends Build {
      lazy val projectA = Project("a", file("subA"))
    }

This constructs a project definition for a project with ID 'a' and
located in the ``<project root>/subA`` directory. Here, ``file(...)`` is
equivalent to ``new File(...)`` and is resolved relative to the build's
base directory.
There are additional optional parameters to the Project constructor.
These parameters configure the project and declare project
relationships, as discussed in the next sections.
Project Settings
----------------
A full build definition can configure settings for a project, just like
a light configuration. Unlike a light configuration, the default
settings can be replaced or manipulated and sequences of settings can be
manipulated. In addition, a light configuration has default imports
defined. A full definition needs to import these explicitly. In
particular, all keys (like ``name`` and ``version``) need to be imported
from ``sbt.Keys``.
No defaults
~~~~~~~~~~~
For example, to define a build from scratch (with no default settings or
tasks):
::
import sbt._
import Keys._
object MyBuild extends Build {
lazy val projectA = Project("a", file("subA"), settings = Seq(name := "From Scratch"))
}
Augment Defaults
~~~~~~~~~~~~~~~~
To augment the default settings, the following Project definitions are
equivalent:
::
lazy val a1 = Project("a", file("subA")) settings(name := "Additional", version := "1.0")
lazy val a2 = Project("a", file("subA"),
settings = Defaults.defaultSettings ++ Seq(name := "Additional", version := "1.0")
)
Select Defaults
~~~~~~~~~~~~~~~
Web support is now split out into a plugin. With the plugin declared,
its settings can be selected like:
::
import sbt._
import Keys._
object MyBuild extends Build {
lazy val projectA = Project("a", file("subA"), settings = Web.webSettings)
}
Settings defined in ``.sbt`` files are appended to the settings for each
``Project`` definition.
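For example (a sketch, assuming the ``projectA`` definition above), a ``build.sbt`` in the project's base directory could contribute an additional setting, which sbt appends after the settings passed to ``Project(...)``:

```scala
// build.sbt -- appended to projectA's settings by sbt
version := "0.1-SNAPSHOT"
```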
Build-level Settings
~~~~~~~~~~~~~~~~~~~~
Lastly, settings can be defined for the entire build. In general, these
are used when a setting is not defined for a project. These settings are
declared either by augmenting ``Build.settings`` or defining settings in
the scope of the current build. For example, to set the shell prompt to
be the id for the current project, the following setting can be added to
a ``.sbt`` file:
::
shellPrompt in ThisBuild := { s => Project.extract(s).currentProject.id + "> " }
(The value is a function ``State => String``. ``State`` contains
everything about the build and will be discussed elsewhere.)
Alternatively, the setting can be defined in ``Build.settings``:
::
import sbt._
import Keys._
object MyBuild extends Build {
override lazy val settings = super.settings :+
(shellPrompt := { s => Project.extract(s).currentProject.id + "> " })
...
}
Project Relationships
---------------------
There are three kinds of project relationships in sbt. These are
described by execution, classpath, and configuration dependencies.
Project References
~~~~~~~~~~~~~~~~~~
When defining a dependency on another project, you provide a
``ProjectReference``. In the simplest case, this is a ``Project``
object. (Technically, there is an implicit conversion
``Project => ProjectReference``) This indicates a dependency on a
project within the same build. It is possible to declare a dependency on
a project in a directory separate from the current build, in a git
repository, or in a project packaged into a jar and accessible via
http/https. These are referred to as external builds and projects. You
can reference the root project in an external build with
``RootProject``:
::

    RootProject( file("/home/user/a-project") )
    RootProject( uri("git://github.com/dragos/dupcheck.git") )
or a specific project within the external build can be referenced using
a ``ProjectRef``:
::
ProjectRef( uri("git://github.com/dragos/dupcheck.git"), "project-id")
The fragment part of the git URI can be used to specify a specific
branch or tag. For example:
::
RootProject( uri("git://github.com/typesafehub/sbteclipse.git#v1.2") )
Ultimately, a ``RootProject`` is resolved to a ``ProjectRef`` once the
external project is loaded. Additionally, there are implicit conversions
``URI => RootProject`` and ``File => RootProject`` so that URIs and
Files can be used directly. External, remote builds are retrieved or
checked out to a staging directory in the user's ``.sbt`` directory so
that they can be manipulated like local builds. Examples of using
project references follow in the next sections.
When using external projects, the ``sbt.boot.directory`` should be set
(see [[Setup\|Getting Started Setup]]) so that unnecessary
recompilations do not occur (see gh-35).
Execution Dependency
~~~~~~~~~~~~~~~~~~~~
If project A has an execution dependency on project B, then when you
execute a task on project A, it will also be run on project B. No
ordering of these tasks is implied. An execution dependency is declared
using the ``aggregate`` method on ``Project``. For example:
::
lazy val root = Project(...) aggregate(sub1)
lazy val sub1 = Project(...) aggregate(sub2)
lazy val sub2 = Project(...) aggregate(ext)
lazy val ext = uri("git://github.com/dragos/dupcheck.git")
If 'clean' is executed on ``sub2``, it will also be executed on ``ext``
(the locally checked out version). If 'clean' is executed on ``root``,
it will also be executed on ``sub1``, ``sub2``, and ``ext``.
Aggregation can be controlled more finely by configuring the
``aggregate`` setting. This setting is of type ``Aggregation``:
::
sealed trait Aggregation
final case class Implicit(enabled: Boolean) extends Aggregation
final class Explicit(val deps: Seq[ProjectReference], val transitive: Boolean) extends Aggregation
This key can be set in any scope, including per-task scopes. By default,
aggregation is disabled for ``run``, ``console-quick``, ``console``, and
``console-project``. Re-enabling it from the command line for the
current project for ``run`` would look like:
::
> set aggregate in run := true
(There is an implicit ``Boolean => Implicit`` where ``true`` translates
to ``Implicit(true)`` and ``false`` translates to ``Implicit(false)``).
Similarly, aggregation can be disabled for the current project using:
::
> set aggregate in clean := false
``Explicit`` allows finer control over the execution dependencies and
transitivity. An instance is normally constructed using
``Aggregation.apply``. No new projects may be introduced here (that is,
internal references have to be defined already in the Build's
``projects`` and externals must be a dependency in the Build
definition). For example, to declare that ``root/clean`` aggregates
``sub1/clean`` and ``sub2/clean`` intransitively (that is, excluding
``ext`` even though ``sub2`` aggregates it):
::
> set aggregate in clean := Aggregation(Seq(sub1, sub2), transitive = false)
Classpath Dependencies
~~~~~~~~~~~~~~~~~~~~~~
A classpath dependency declares that a project needs the full classpath
of another project on its classpath. Typically, this implies that the
dependency will ensure its classpath is up-to-date, such as by fetching
dependencies and recompiling modified sources.
A classpath dependency declaration consists of a project reference and
an optional configuration mapping. For example, to use project b's
``compile`` configuration from project a's ``test`` configuration:
::

    lazy val a = Project(...) dependsOn(b % "test->compile")
    lazy val b = Project(...)
``"test->compile"`` may be shortened to ``"test"`` in this case. The
``%`` call may be omitted, in which case the mapping is
``"compile->compile"`` by default.
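For illustration, these declarations are therefore equivalent (a sketch; ``Project(...)`` stands in for a full project definition, as elsewhere on this page):

```scala
lazy val a1 = Project(...) dependsOn(b)
lazy val a2 = Project(...) dependsOn(b % "compile->compile")
```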
A useful configuration declaration is ``test->test``. This means to use
a dependency's test classes on the dependent's test classpath.
Multiple declarations may be separated by a semicolon. For example, the
following says to use the main classes of ``b`` for the compile
classpath of ``a`` as well as the test classes of ``b`` for the test
classpath of ``a``:
::
lazy val a = Project(...) dependsOn(b % "compile;test->test")
lazy val b = Project(...)
Configuration Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~
Suppose project A has a configuration dependency on project B. If a
setting is not found on project A, it will be looked up in project B.
This is one aspect of delegation and will be described in detail
elsewhere.
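In terms of the example build at the top of this page, that delegation can be sketched as:

```scala
// sub2 declares root as a delegate:
lazy val sub2 = Project("sub2", file("b"), delegates = root :: Nil)

// A setting such as `version`, if not defined for sub2, is then looked
// up on root before falling back to sbt's defaults.
```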
@ -1,102 +0,0 @@
_Wiki Maintenance Note:_ This page has been _mostly_ replaced by
[[Getting Started Full Def]] and other pages. See the note at the
top of [[Full Configuration]] for details. If we can establish
(or cause to be true) that everything in here is covered
elsewhere, this page can be empty except for links to the new pages.
There are two types of file for configuring a build: a `build.sbt` file in your project root directory, or a `Build.scala` file in your `project/` directory. The former is often referred to as a "light", "quick" or "basic" configuration and the latter is often referred to as "full" configuration. This page is about "full" configuration.
# Naming the Scala build file
`Build.scala` is the typical name for this build file but in reality it can be called anything that ends with `.scala` as it is a standard Scala source file and sbt will detect and use it regardless of its name.
# Overview of what goes in the file
The most basic form of this file defines one object which extends `sbt.Build` e.g.:
```scala
import sbt._
object AnyName extends Build {
val anyName = Project("anyname", file("."))
// Declarations go here
}
```
There needs to be at least one `sbt.Project` defined and in this case we are giving it an arbitrary name and saying that it can be found in the root of this project. In other words we are saying that this is a build file to build the current project.
The declarations define any number of objects which can be used by sbt to determine what to build and how to build it.
Most of the time you are not telling sbt what to do, you are simply declaring the dependencies of your project and the particular settings you require. sbt then uses this information to determine how to carry out the tasks you give it when you interact with sbt on the command line. For this reason the order of declarations tends to be unimportant.
When you define something and assign it to a val the name of the val is often irrelevant. By defining it and making it part of an object, sbt can then interrogate it and extract the information it requires. So, for example, the line:
```scala
val apachenet = "commons-net" % "commons-net" % "2.0"
```
defines a dependency and assigns it to the val `apachenet` but, unless you refer to that val again in the build file, the name of it is of no significance to sbt. sbt simply sees that the dependency object exists and uses it when it needs it.
# Combining "light" and "full" configuration files
It is worth noting at this stage that you can have both a `build.sbt` file and a `Build.scala` file for the same project. If you do this, sbt will append the configurations in `build.sbt` to those in the `Build.scala` file. In fact you can also have multiple ".sbt" files in your root directory and they are all appended together.
# A simple example comparing a "light" and "full" configuration of the same project
Here is a short "light" `build.sbt` file which defines a build project with a single test dependency on "scalatest":
```scala
name := "My Project"
version := "1.0"
organization := "org.myproject"
scalaVersion := "2.9.0-1"
libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
```
Here is an equivalent "full" `Build.scala` file which defines exactly the same thing:
```scala
import sbt._
import Keys._
object MyProjectBuild extends Build {
val mySettings = Defaults.defaultSettings ++ Seq(
name := "My Project",
version := "1.0",
organization := "org.myproject",
scalaVersion := "2.9.0-1",
libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
)
val myProject = Project("MyProject", file("."), settings = mySettings)
}
```
Note that we have to explicitly declare the build and project and we have to explicitly append our settings to the default settings. All of this work is done for us when we use a "light" build file.
To understand what is really going on you may find it helpful to see this `Build.scala` without the imports and associated implicit conversions:
```scala
object MyProjectBuild extends sbt.Build {
val mySettings = sbt.Defaults.defaultSettings ++ scala.Seq(
sbt.Keys.name := "My Project",
sbt.Keys.version := "1.0",
sbt.Keys.organization := "org.myproject",
sbt.Keys.scalaVersion := "2.9.0-1",
sbt.Keys.libraryDependencies += sbt.toGroupID("org.scalatest").%("scalatest_2.9.0").%("1.4.1").%("test")
)
val myProject = sbt.Project("MyProject", new java.io.File("."), settings = mySettings)
}
```
@ -0,0 +1,137 @@
*Wiki Maintenance Note:* This page has been *mostly* replaced by
[[Getting Started Full Def]] and other pages. See the note at the top of
[[Full Configuration]] for details. If we can establish (or cause to be
true) that everything in here is covered elsewhere, this page can be
empty except for links to the new pages.
There are two types of file for configuring a build: a ``build.sbt``
file in your project root directory, or a ``Build.scala`` file in your
``project/`` directory. The former is often referred to as a "light",
"quick" or "basic" configuration and the latter is often referred to as
"full" configuration. This page is about "full" configuration.
Naming the Scala build file
===========================
``Build.scala`` is the typical name for this build file but in reality
it can be called anything that ends with ``.scala`` as it is a standard
Scala source file and sbt will detect and use it regardless of its name.
Overview of what goes in the file
=================================
The most basic form of this file defines one object which extends
``sbt.Build`` e.g.:
::
import sbt._
object AnyName extends Build {
val anyName = Project("anyname", file("."))
// Declarations go here
}
There needs to be at least one ``sbt.Project`` defined and in this case
we are giving it an arbitrary name and saying that it can be found in
the root of this project. In other words we are saying that this is a
build file to build the current project.
The declarations define any number of objects which can be used by sbt
to determine what to build and how to build it.
Most of the time you are not telling sbt what to do, you are simply
declaring the dependencies of your project and the particular settings
you require. sbt then uses this information to determine how to carry
out the tasks you give it when you interact with sbt on the command
line. For this reason the order of declarations tends to be unimportant.
When you define something and assign it to a val the name of the val is
often irrelevant. By defining it and making it part of an object, sbt
can then interrogate it and extract the information it requires. So, for
example, the line:
::
val apachenet = "commons-net" % "commons-net" % "2.0"
defines a dependency and assigns it to the val ``apachenet`` but, unless
you refer to that val again in the build file, the name of it is of no
significance to sbt. sbt simply sees that the dependency object exists
and uses it when it needs it.
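Conversely, the val's name matters only when you reuse it yourself, for example (a sketch) to add the dependency to ``libraryDependencies`` explicitly:

```scala
val apachenet = "commons-net" % "commons-net" % "2.0"

// referring to the val by name in our own build code:
libraryDependencies += apachenet
```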
Combining "light" and "full" configuration files
================================================
It is worth noting at this stage that you can have both a ``build.sbt``
file and a ``Build.scala`` file for the same project. If you do this,
sbt will append the configurations in ``build.sbt`` to those in the
``Build.scala`` file. In fact you can also have multiple ".sbt" files in
your root directory and they are all appended together.
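For example, the project declaration can live in ``Build.scala`` while
frequently-tuned settings stay in ``build.sbt`` (a sketch; the file
contents are illustrative):

::

    // project/Build.scala
    import sbt._
    object MyBuild extends Build {
      val root = Project("root", file("."))
    }

::

    // build.sbt -- these settings are appended to those in Build.scala
    version := "1.0"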
A simple example comparing a "light" and "full" configuration of the same project
=================================================================================
Here is a short "light" ``build.sbt`` file which defines a build project
with a single test dependency on ScalaTest:
::
name := "My Project"
version := "1.0"
organization := "org.myproject"
scalaVersion := "2.9.0-1"
libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
Here is an equivalent "full" ``Build.scala`` file which defines exactly
the same thing:
::
import sbt._
import Keys._
object MyProjectBuild extends Build {
val mySettings = Defaults.defaultSettings ++ Seq(
name := "My Project",
version := "1.0",
organization := "org.myproject",
scalaVersion := "2.9.0-1",
libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
)
val myProject = Project("MyProject", file("."), settings = mySettings)
}
Note that we have to explicitly declare the build and project and we
have to explicitly append our settings to the default settings. All of
this work is done for us when we use a "light" build file.
To understand what is really going on you may find it helpful to see
this ``Build.scala`` without the imports and associated implicit
conversions:
::
object MyProjectBuild extends sbt.Build {
val mySettings = sbt.Defaults.defaultSettings ++ scala.Seq(
sbt.Keys.name := "My Project",
sbt.Keys.version := "1.0",
sbt.Keys.organization := "org.myproject",
sbt.Keys.scalaVersion := "2.9.0-1",
sbt.Keys.libraryDependencies += sbt.toGroupID("org.scalatest").%("scalatest_2.9.0").%("1.4.1").%("test")
)
val myProject = sbt.Project("MyProject", new java.io.File("."), settings = mySettings)
}
@ -1,269 +0,0 @@
_Wiki Maintenance Note:_ This page is a dumping ground for little
bits of text, examples, and information that needs to find a new
home somewhere else on the wiki.
# Snippets of docs that need to move to another page
Temporarily change the logging level and configure how stack traces are displayed by modifying the `log-level` or `trace-level` settings:
```text
> set logLevel := Level.Warn
```
Valid `Level` values are `Debug, Info, Warn, Error`.
You can run an action for multiple versions of Scala by prefixing the action with `+`. See [[Cross Build]] for details. You can temporarily switch to another version of Scala using `++ <version>`. This version does not have to be listed in your build definition, but it does have to be available in a repository. You can also include the initial command to run after switching to that version. For example:
```text
> ++2.9.1 console-quick
...
Welcome to Scala version 2.9.1.final (Java HotSpot(TM) Server VM, Java 1.6.0).
...
scala>
...
> ++2.8.1 console-quick
...
Welcome to Scala version 2.8.1 (Java HotSpot(TM) Server VM, Java 1.6.0).
...
scala>
```
# Manual Dependency Management
Manually managing dependencies involves copying any jars that you want to use to the `lib` directory. sbt will put these jars on the classpath during compilation, testing, running, and when using the interpreter. You are responsible for adding, removing, updating, and otherwise managing the jars in this directory. No modifications to your project definition are required to use this method unless you would like to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the `unmanaged-base` setting in your project definition. For example, to use `custom_lib/`:
```scala
unmanagedBase <<= baseDirectory { base => base / "custom_lib" }
```
If you want more control and flexibility, override the `unmanaged-jars` task, which ultimately provides the manual dependencies to sbt. The default implementation is roughly:
```scala
unmanagedJars in Compile <<= baseDirectory map { base => (base ** "*.jar").classpath }
```
If you want to add jars from multiple directories in addition to the default directory, you can do:
```scala
unmanagedJars in Compile <++= baseDirectory map { base =>
val baseDirectories = (base / "libA") +++ (base / "b" / "lib") +++ (base / "libC")
val customJars = (baseDirectories ** "*.jar") +++ (base / "d" / "my.jar")
customJars.classpath
}
```
See [[Paths]] for more information on building up paths.
### Resolver.withDefaultResolvers method
To use the local and Maven Central repositories, but not the Scala Tools releases repository:
```scala
externalResolvers <<= resolvers map { rs =>
Resolver.withDefaultResolvers(rs, mavenCentral = true, scalaTools = false)
}
```
### Explicit URL
If your project requires a dependency that is not present in a repository, a
direct URL to its jar can be specified with the `from` method as follows:
```scala
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
```
The URL is only used as a fallback if the dependency cannot be found through
the configured repositories. Also, when you publish a project, a pom or
ivy.xml is created listing your dependencies; the explicit URL is not
included in this published metadata.
### Disable Transitivity
By default, sbt fetches all dependencies, transitively. (That is, it downloads
the dependencies of the dependencies you list.)
In some instances, you may find that the dependencies listed for a project
aren't necessary for it to build. Avoid fetching artifact dependencies with
`intransitive()`, as in this example:
```scala
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()
```
### Classifiers
You can specify the classifier for a dependency using the `classifier` method. For example, to get the jdk15 version of TestNG:
```scala
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
```
To obtain particular classifiers for all dependencies transitively, run the `update-classifiers` task. By default, this resolves all artifacts with the `sources` or `javadoc` classifier. Select the classifiers to obtain by configuring the `transitive-classifiers` setting. For example, to only retrieve sources:
```scala
transitiveClassifiers := Seq("sources")
```
### Extra Attributes
[Extra attributes] can be specified by passing key/value pairs to the `extra` method.
To select dependencies by extra attributes:
```scala
libraryDependencies += "org" % "name" % "rev" extra("color" -> "blue")
```
To define extra attributes on the current project:
```scala
projectID <<= projectID { id =>
id extra("color" -> "blue", "component" -> "compiler-interface")
}
```
### Inline Ivy XML
sbt additionally supports directly specifying the configurations or dependencies sections of an Ivy configuration file inline. You can mix this with inline Scala dependency and repository declarations.
For example:
```scala
ivyXML :=
<dependencies>
<dependency org="javax.mail" name="mail" rev="1.4.2">
<exclude module="activation"/>
</dependency>
</dependencies>
```
### Ivy Home Directory
By default, sbt uses the standard Ivy home directory location `${user.home}/.ivy2/`.
This can be configured machine-wide, for use by both the sbt launcher and by projects, by setting the system property `sbt.ivy.home` in the sbt startup script (described in [[Setup|Getting Started Setup]]).
For example:
```text
java -Dsbt.ivy.home=/tmp/.ivy2/ ...
```
### Checksums
sbt ([through Ivy]) verifies the checksums of downloaded files by default. It also publishes checksums of artifacts by default. The checksums to use are specified by the _checksums_ setting.
To disable checksum checking during update:
```scala
checksums in update := Nil
```
To disable checksum creation during artifact publishing:
```scala
checksums in publishLocal := Nil
checksums in publish := Nil
```
The default value is:
```scala
checksums := Seq("sha1", "md5")
```
### Publishing
Finally, see [[Publishing]] for how to publish your project.
## Maven/Ivy
For this method, create the configuration files as you would for Maven (`pom.xml`) or Ivy (`ivy.xml` and optionally `ivysettings.xml`).
External configuration is selected by using one of the following expressions.
### Ivy settings (resolver configuration)
```scala
externalIvySettings()
```
or
```scala
externalIvySettings(baseDirectory(_ / "custom-settings-name.xml"))
```
### Ivy file (dependency configuration)
```scala
externalIvyFile()
```
or
```scala
externalIvyFile(baseDirectory(_ / "custom-name.xml"))
```
Because Ivy files specify their own configurations, sbt needs to know which configurations to use for the compile, runtime, and test classpaths. For example, to specify that the Compile classpath should use the 'default' configuration:
```scala
classpathConfiguration in Compile := config("default")
```
### Maven pom (dependencies only)
```scala
externalPom()
```
or
```scala
externalPom(baseDirectory(_ / "custom-name.xml"))
```
### Full Ivy Example
For example, a `build.sbt` using external Ivy files might look like:
```scala
externalIvySettings()
externalIvyFile( baseDirectory { base => base / "ivyA.xml"} )
classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime
```
### Known limitations
Maven support is dependent on Ivy's support for Maven POMs.
Known issues with this support:
* Specifying `relativePath` in the `parent` section of a POM will produce an error.
* Ivy ignores repositories specified in the POM. A workaround is to specify repositories inline or in an Ivy `ivysettings.xml` file.
### Configuration dependencies
The GSG on multi-project builds doesn't describe delegation among
configurations. The FAQ entry about porting multi-project build
from 0.7 mentions "configuration dependencies" but there's nothing
really to link to that explains them.
### These should be FAQs (maybe just pointing to topic pages)
* Run your program in its own VM
* Run your program with a particular version of Scala
* Run your webapp within an embedded jetty server
* Create a WAR that can be deployed to an external app server
@ -0,0 +1,324 @@
*Wiki Maintenance Note:* This page is a dumping ground for little bits
of text, examples, and information that needs to find a new home
somewhere else on the wiki.
Snippets of docs that need to move to another page
==================================================
Temporarily change the logging level and configure how stack traces are
displayed by modifying the ``log-level`` or ``trace-level`` settings:
::
> set logLevel := Level.Warn
Valid ``Level`` values are ``Debug, Info, Warn, Error``.
You can run an action for multiple versions of Scala by prefixing the
action with ``+``. See [[Cross Build]] for details. You can temporarily
switch to another version of Scala using ``++ <version>``. This version
does not have to be listed in your build definition, but it does have to
be available in a repository. You can also include the initial command
to run after switching to that version. For example:
::
> ++2.9.1 console-quick
...
Welcome to Scala version 2.9.1.final (Java HotSpot(TM) Server VM, Java 1.6.0).
...
scala>
...
> ++2.8.1 console-quick
...
Welcome to Scala version 2.8.1 (Java HotSpot(TM) Server VM, Java 1.6.0).
...
scala>
Manual Dependency Management
============================
Manually managing dependencies involves copying any jars that you want
to use to the ``lib`` directory. sbt will put these jars on the
classpath during compilation, testing, running, and when using the
interpreter. You are responsible for adding, removing, updating, and
otherwise managing the jars in this directory. No modifications to your
project definition are required to use this method unless you would like
to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the
``unmanaged-base`` setting in your project definition. For example, to
use ``custom_lib/``:
::
unmanagedBase <<= baseDirectory { base => base / "custom_lib" }
If you want more control and flexibility, override the
``unmanaged-jars`` task, which ultimately provides the manual
dependencies to sbt. The default implementation is roughly:
::
unmanagedJars in Compile <<= baseDirectory map { base => (base ** "*.jar").classpath }
If you want to add jars from multiple directories in addition to the
default directory, you can do:
::
unmanagedJars in Compile <++= baseDirectory map { base =>
val baseDirectories = (base / "libA") +++ (base / "b" / "lib") +++ (base / "libC")
val customJars = (baseDirectories ** "*.jar") +++ (base / "d" / "my.jar")
customJars.classpath
}
See [[Paths]] for more information on building up paths.
Resolver.withDefaultResolvers method
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To use the local and Maven Central repositories, but not the Scala Tools
releases repository:
::
externalResolvers <<= resolvers map { rs =>
Resolver.withDefaultResolvers(rs, mavenCentral = true, scalaTools = false)
}
Explicit URL
~~~~~~~~~~~~
If your project requires a dependency that is not present in a
repository, a direct URL to its jar can be specified with the ``from``
method as follows:
::
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
The URL is only used as a fallback if the dependency cannot be found
through the configured repositories. Also, when you publish a project, a
pom or ivy.xml is created listing your dependencies; the explicit URL is
not included in this published metadata.
Disable Transitivity
~~~~~~~~~~~~~~~~~~~~
By default, sbt fetches all dependencies, transitively. (That is, it
downloads the dependencies of the dependencies you list.)
In some instances, you may find that the dependencies listed for a
project aren't necessary for it to build. Avoid fetching artifact
dependencies with ``intransitive()``, as in this example:
::
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()
Classifiers
~~~~~~~~~~~
You can specify the classifier for a dependency using the ``classifier``
method. For example, to get the jdk15 version of TestNG:
::
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
To obtain particular classifiers for all dependencies transitively, run
the ``update-classifiers`` task. By default, this resolves all artifacts
with the ``sources`` or ``javadoc`` classifier. Select the classifiers to
obtain by configuring the ``transitive-classifiers`` setting. For
example, to only retrieve sources:
::
transitiveClassifiers := Seq("sources")
Extra Attributes
~~~~~~~~~~~~~~~~
[Extra attributes] can be specified by passing key/value pairs to the
``extra`` method.
To select dependencies by extra attributes:
::
libraryDependencies += "org" % "name" % "rev" extra("color" -> "blue")
To define extra attributes on the current project:
::
projectID <<= projectID { id =>
id extra("color" -> "blue", "component" -> "compiler-interface")
}
Inline Ivy XML
~~~~~~~~~~~~~~
sbt additionally supports directly specifying the configurations or
dependencies sections of an Ivy configuration file inline. You can mix
this with inline Scala dependency and repository declarations.
For example:
::
ivyXML :=
<dependencies>
<dependency org="javax.mail" name="mail" rev="1.4.2">
<exclude module="activation"/>
</dependency>
</dependencies>
Ivy Home Directory
~~~~~~~~~~~~~~~~~~
By default, sbt uses the standard Ivy home directory location
``${user.home}/.ivy2/``. This can be configured machine-wide, for use by
both the sbt launcher and by projects, by setting the system property
``sbt.ivy.home`` in the sbt startup script (described in
[[Setup\|Getting Started Setup]]).
For example:
::
java -Dsbt.ivy.home=/tmp/.ivy2/ ...
Checksums
~~~~~~~~~
sbt ([through Ivy]) verifies the checksums of downloaded files by
default. It also publishes checksums of artifacts by default. The
checksums to use are specified by the *checksums* setting.
To disable checksum checking during update:
::
checksums in update := Nil
To disable checksum creation during artifact publishing:
::
checksums in publishLocal := Nil
checksums in publish := Nil
The default value is:
::
checksums := Seq("sha1", "md5")
Publishing
~~~~~~~~~~
Finally, see [[Publishing]] for how to publish your project.
Maven/Ivy
---------
For this method, create the configuration files as you would for Maven
(``pom.xml``) or Ivy (``ivy.xml`` and optionally ``ivysettings.xml``).
External configuration is selected by using one of the following
expressions.
Ivy settings (resolver configuration)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
externalIvySettings()
or
::
externalIvySettings(baseDirectory(_ / "custom-settings-name.xml"))
Ivy file (dependency configuration)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
externalIvyFile()
or
::
externalIvyFile(baseDirectory(_ / "custom-name.xml"))
Because Ivy files specify their own configurations, sbt needs to know
which configurations to use for the compile, runtime, and test
classpaths. For example, to specify that the Compile classpath should
use the 'default' configuration:
::
classpathConfiguration in Compile := config("default")
Maven pom (dependencies only)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
externalPom()
or
::
externalPom(baseDirectory(_ / "custom-name.xml"))
Full Ivy Example
~~~~~~~~~~~~~~~~
For example, a ``build.sbt`` using external Ivy files might look like:
::
externalIvySettings()
externalIvyFile( baseDirectory { base => base / "ivyA.xml"} )
classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime
Known limitations
~~~~~~~~~~~~~~~~~
Maven support is dependent on Ivy's support for Maven POMs. Known issues
with this support:
- Specifying ``relativePath`` in the ``parent`` section of a POM will
produce an error.
- Ivy ignores repositories specified in the POM. A workaround is to
specify repositories inline or in an Ivy ``ivysettings.xml`` file.
Configuration dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~
The GSG on multi-project builds doesn't describe delegation among
configurations. The FAQ entry about porting multi-project build from 0.7
mentions "configuration dependencies" but there's nothing really to link
to that explains them.
These should be FAQs (maybe just pointing to topic pages)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Run your program in its own VM
- Run your program with a particular version of Scala
- Run your webapp within an embedded jetty server
- Create a WAR that can be deployed to an external app server
@ -1,324 +0,0 @@
[light definition]: https://github.com/harrah/xsbt/wiki/Basic-Configuration
[full definition]: https://github.com/harrah/xsbt/wiki/Full-Configuration
[ScopedSetting]: http://harrah.github.com/xsbt/latest/api/sbt/ScopedSetting.html
[Scope]: http://harrah.github.com/xsbt/latest/api/sbt/Scope$.html
[Initialize]: http://harrah.github.com/xsbt/latest/api/sbt/Init$Initialize.html
[SettingKey]: http://harrah.github.com/xsbt/latest/api/sbt/SettingKey.html
[Keys]: http://harrah.github.com/xsbt/latest/sxr/Keys.scala.html "Keys.scala"
[InputKey]: http://harrah.github.com/xsbt/latest/api/sbt/InputKey.html
[TaskKey]: http://harrah.github.com/xsbt/latest/api/sbt/TaskKey.html
[Append]: http://harrah.github.com/xsbt/latest/api/sbt/Append$.html
_Wiki Maintenance Note:_ This page has been partly replaced by [[Getting Started Basic Def]] and
[[Getting Started More About Settings]]. It has some obsolete
terminology:
- we now avoid referring to build definition as "configuration"
to avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full
configuration, in favor of ".sbt build definition files" and
".scala build definition files"
However, it may still be worth combing this page for examples or
points that are not made in new pages. We may want to add FAQs or
topic pages to supplement the Getting Started pages with some of
that information. After doing so, this page could simply be a
redirect (delete the content, link to the new pages about build
definition).
## Introduction
A build definition is written in Scala.
There are two types of definitions: light and full.
A [light definition] is a quick way of configuring a build, consisting of a list of Scala expressions describing project settings.
A [full definition] is made up of one or more Scala source files that describe relationships between projects and introduce new configurations and settings.
This page introduces the `Setting` type, which is used by light and full definitions for general configuration.
### Introductory Examples
Basic examples of each type of definition are shown below for the purpose of getting an idea of what they look like, not for full comprehension of details, which are described at [light definition] and [full definition].
`<base>/build.sbt` (light)
```scala
name := "My Project"
libraryDependencies += "junit" % "junit" % "4.8" % "test"
```
`<base>/project/Build.scala` (full)
```scala
import sbt._
import Keys._
object MyBuild extends Build
{
lazy val root = Project("root", file(".")) dependsOn(sub)
lazy val sub = Project("sub", file("sub")) settings(
name := "My Project",
libraryDependencies += "junit" % "junit" % "4.8" % "test"
)
}
```
## Important Settings Background
The fundamental type of a configurable in sbt is a `Setting[T]`.
Each line in the `build.sbt` example above is of this type.
The arguments to the `settings` method in the `Build.scala` example are of type `Setting[T]`.
Specifically, the `name` setting has type `Setting[String]` and the `libraryDependencies` setting has type `Setting[Seq[ModuleID]]`, where `ModuleID` represents a dependency.
Throughout the documentation, many examples show a setting, such as:
```scala
libraryDependencies += "junit" % "junit" % "4.8" % "test"
```
This setting expression either goes in a [light definition] `(build.sbt)` as is or in the `settings` of a `Project` instance in a [full definition] `(Build.scala)` as shown in the example.
This is an important point to understanding the context of examples in the documentation.
(That is, you now know where to copy and paste examples.)
A `Setting[T]` describes how to initialize a setting of type `T`.
The settings shown in the examples are expressions, not statements.
In particular, there is no hidden mutable map that is being modified.
Each `Setting[T]` is a value that describes an update to a map.
The actual map is rarely directly referenced by user code.
It is not the final map that is usually important, but the operations on the map.
To emphasize this, the setting in the following `Build.scala` fragment *is ignored* because it is a value that needs to be included in the `settings` of a `Project`.
(Unfortunately, Scala will discard non-Unit values to get Unit, which is why there is no compile error.)
```scala
object Bad extends Build {
libraryDependencies += "junit" % "junit" % "4.8" % "test"
}
```
```scala
object Good extends Build
{
lazy val root = Project("root", file(".")) settings(
libraryDependencies += "junit" % "junit" % "4.8" % "test"
)
}
```
## Declaring a Setting
There is fundamentally one type of initialization, represented by the `<<=` method.
The other initialization methods `:=`, `+=`, `++=`, `<+=`, `<++=`, and `~=` are convenience methods that can be defined in terms of `<<=`.
The motivation behind the method names is:
* All methods end with `=` to obtain the lowest possible infix precedence.
* A method starting with `<` indicates that the initialization uses other settings.
* A single `+` means a single value is expected and will be appended to the current sequence.
* `++` means a `Seq[T]` is expected. The sequence will be appended to the current sequence.
The following sections include descriptions and examples of each initialization method.
The descriptions use "will initialize" or "will append" to emphasize that they construct a value describing an update and do not mutate anything.
Each setting may be directly included in a light configuration (build.sbt), appropriately separated by blank lines.
For a full configuration (Build.scala), the setting must go in a settings Seq as described in the previous section.
Information about the types of the left and right hand sides of the methods follows this section.
### :=
`:=` is used to define a setting that overwrites any previous value without referring to other settings.
For example, the following defines a setting that will set _name_ to "My Project" regardless of whether _name_ has already been initialized.
```scala
name := "My Project"
```
No other settings are used. The value assigned is just a constant.
### += and ++=
`+=` is used to define a setting that will append a single value to the current sequence without referring to other settings.
For example, the following defines a setting that will append a JUnit dependency to _libraryDependencies_.
No other settings are referenced.
```scala
libraryDependencies += "junit" % "junit" % "4.8" % "test"
```
The related method `++=` appends a sequence to the current sequence, also without using other settings.
For example, the following defines a setting that will add dependencies on ScalaCheck and specs to the current list of dependencies.
Because it will append a `Seq`, it uses ++= instead of +=.
```scala
libraryDependencies ++= Seq(
"org.scala-tools.testing" %% "scalacheck" % "1.9" % "test",
"org.scala-tools.testing" %% "specs" % "1.6.8" % "test"
)
```
The types involved in += and ++= are constrained by the existence of an implicit parameter of type Append.Value[A,B] in the case of += or Append.Values[A,B] in the case of ++=.
Here, B is the type of the value being appended and A is the type of the setting that the value is being appended to.
See [Append] for the provided instances.
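To make this concrete, appending a single flag to the `Seq[String]`-valued _scalacOptions_ relies on an `Append.Value[Seq[String], String]` instance, while appending several flags at once relies on `Append.Values[Seq[String], String]`:

```scala
// += appends one value: requires Append.Value[Seq[String], String]
scalacOptions += "-deprecation"

// ++= appends a sequence: requires Append.Values[Seq[String], String]
scalacOptions ++= Seq("-unchecked", "-deprecation")
```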
### ~=
`~=` is used to transform the current value of a setting.
For example, the following defines a setting that will remove `-Y` compiler options from the current list of compiler options.
```scala
scalacOptions in Compile ~= { (options: Seq[String]) =>
options filterNot ( _ startsWith "-Y" )
}
```
The earlier declaration of JUnit as a library dependency using `+=` could also be written as:
```scala
libraryDependencies ~= { (deps: Seq[ModuleID]) =>
deps :+ ("junit" % "junit" % "4.8" % "test")
}
```
### <<=
The most general method is <<=.
All other methods can be implemented in terms of <<=.
<<= defines a setting using other settings, possibly including the previous value of the setting being defined.
For example, declaring JUnit as a dependency using <<= would look like:
```scala
libraryDependencies <<= libraryDependencies apply { (deps: Seq[ModuleID]) =>
// Note that :+ is a method on Seq that appends a single value
deps :+ ("junit" % "junit" % "4.8" % "test")
}
```
This defines a setting that will apply the provided function to the previous value of _libraryDependencies_.
`apply` and `Seq[ModuleID]` are explicit for demonstration only and may be omitted.
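Omitting the explicit `apply` and the type annotation, the same setting reads:

```scala
libraryDependencies <<= libraryDependencies { deps =>
  deps :+ ("junit" % "junit" % "4.8" % "test")
}
```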
### <+= and <++=
The <+= method is a hybrid of the += and <<= methods.
Similarly, <++= is a hybrid of the ++= and <<= methods.
These methods are convenience methods for using other settings to append to the current value of a setting.
For example, the following will add a dependency on the Scala compiler to the current list of dependencies.
Because the _scalaVersion_ setting is used, the method is <+= instead of +=.
```scala
libraryDependencies <+= scalaVersion( "org.scala-lang" % "scala-compiler" % _ )
```
This next example adds a dependency on the Scala compiler to the current list of dependencies.
Because another setting (_scalaVersion_) is used and a Seq is appended, the method is <++=.
```scala
libraryDependencies <++= scalaVersion { sv =>
("org.scala-lang" % "scala-compiler" % sv) ::
("org.scala-lang" % "scala-swing" % sv) ::
Nil
}
```
The types involved in <+= and <++=, like += and ++=, are constrained by the existence of an implicit parameter of type Append.Value[A,B] in the case of <+= or Append.Values[A,B] in the case of <++=.
Here, B is the type of the value being appended and A is the type of the setting that the value is being appended to.
See [Append] for the provided instances.
## Setting types
This section provides information about the types of the left and right-hand sides of the initialization methods. It is currently incomplete.
### Setting Keys
The left hand side of a setting definition is of type [ScopedSetting].
This type has two parts: a key (of type [SettingKey]) and a scope (of type [Scope]).
An unspecified scope is like using `this` to refer to the current context.
The previous examples on this page have not defined an explicit scope. See [[Inspecting Settings]] for details on the axes that make up scopes.
The target (the value on the left) of a method like `:=` identifies one of the main constructs in sbt: a setting, a task, or an input task.
It is not an actual setting or task, but a key representing a setting or task.
A setting is a value assigned when a project is loaded.
A task is a unit of work that is run on-demand after a project is loaded and produces a value.
An input task, previously known as a method task in sbt 0.7 and earlier, accepts an input string and produces a task to be run.
(The renaming is because it can accept arbitrary input in 0.10+ and not just a space-delimited sequence of arguments like in 0.7.)
A setting key has type [SettingKey], a task key has type [TaskKey], and an input task has type [InputKey].
The remainder of this section only discusses settings.
See [[Tasks]] and [[Input Tasks]] for details on the other types (those pages assume an understanding of this page).
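Keys of each kind are created with the corresponding factory, which takes a name and a description (the keys below are illustrative examples, not built-in keys):

```scala
val greeting = SettingKey[String]("greeting", "A friendly message.")
val hello = TaskKey[Unit]("hello", "Prints the greeting.")
val runWithArgs = InputKey[Unit]("run-with-args", "Runs the program with the given arguments.")
```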
To construct a [ScopedSetting], select the key and then scope it using the `in` method (see the [ScopedSetting] for API details).
For example, the setting for compiler options for the test sources is referenced using the _scalacOptions_ key and the `Test` configuration in the current project.
```scala
val ref: ScopedSetting[Seq[String]] = scalacOptions in Test
```
The current project doesn't need to be explicitly specified, since that is the default in most cases.
Some settings are specific to a task, in which case the task should be specified as part of the scope as well.
For example, the compiler options used for the _console_ task for test sources is referenced like:
```scala
val ref: ScopedSetting[Seq[String]] = scalacOptions in Test in console
```
In these examples, the type of the setting reference key is given explicitly and the key is assigned to a value to emphasize that it is a normal (immutable) Scala value and can be manipulated and passed around as such.
### Computing the value for a setting
The right hand side of a setting definition varies by the initialization method used.
In the case of :=, +=, ++=, and ~=, the type of the argument is straightforward (see the [ScopedSetting] API).
For <<=, <+=, and <++=, the type is `Initialize[T]` (for <<= and <+=) or `Initialize[Seq[T]]` (for <++=).
This section discusses the [Initialize] type.
A value of type `Initialize[T]` represents a computation that takes the values of other settings as inputs.
For example, in the following setting, the argument to <<= is of type `Initialize[File]`:
```scala
scalaSource in Compile <<= baseDirectory {
(base: File) => base / "src"
}
```
This example can be written more explicitly as:
```scala
{
val key: ScopedSetting[File] = scalaSource.in(Compile)
val init: Initialize[File] = baseDirectory.apply( (base: File) => base / "src" )
key.<<=(init)
}
```
To construct a value of type `Initialize`, construct a tuple of up to nine input `ScopedSetting`s.
Then, define the function that will compute the value of the setting given the values for these input settings.
```scala
val path: Initialize[File] =
(baseDirectory, name, version).apply( (base: File, n: String, v: String) =>
base / (n + "-" + v + ".jar")
)
```
This example takes the base directory, project name, and project version as inputs.
The keys for these settings are defined in [sbt.Keys], along with all other built-in keys.
The argument to the `apply` method is a function that takes the values of those settings and computes a new value.
In this case, that value is the path of a jar.
### Initialize[Task[T]]
To initialize tasks, the procedure is similar.
There are a few differences.
First, the inputs are of type [ScopedTaskable].
This means that either settings ([ScopedSetting]) or tasks ([ScopedTask]) may be used as the input to a task.
Second, the name of the method used is `map` instead of `apply` and the resulting value is of type `Initialize[Task[T]]`.
In the following example, the inputs are the [report|Update-Report] produced by the _update_ task and the context _configuration_.
The function computes the locations of the dependencies for that configuration.
```scala
val mainDeps: Initialize[Task[Seq[File]]] =
  (update, configuration).map( (report: UpdateReport, config: Configuration) =>
    report.select(configuration = config.name)
  )
```
As before, _update_ and _configuration_ are defined in [Keys].
_update_ is of type `TaskKey[UpdateReport]` and _configuration_ is of type `SettingKey[Configuration]`.


@ -0,0 +1,407 @@
*Wiki Maintenance Note:* This page has been partly replaced by
:doc:`Getting Started Basic Def <Getting-Started-Basic-Def>` and
:doc:`Getting Started More About Settings <Getting-Started-More-About-Settings>`.
It has some obsolete terminology:
- we now avoid referring to build definition as "configuration" to
avoid confusion with compile configurations
- we now avoid referring to basic/light/quick vs. full configuration,
in favor of ".sbt build definition files" and ".scala build
definition files"
However, it may still be worth combing this page for examples or points
that are not made in new pages. We may want to add FAQs or topic pages
to supplement the Getting Started pages with some of that information.
After doing so, this page could simply be a redirect (delete the
content, link to the new pages about build definition).
Introduction
------------
A build definition is written in Scala. There are two types of
definitions: light and full. A :doc:`light definition <Basic-Configuration>`
is a quick way of configuring a build, consisting of a list of Scala
expressions describing project settings. A :doc:`full definition <Full-Configuration>` is
made up of one or more Scala source files that describe relationships
between projects and introduce new configurations and settings. This
page introduces the ``Setting`` type, which is used by light and full
definitions for general configuration.
Introductory Examples
~~~~~~~~~~~~~~~~~~~~~
Basic examples of each type of definition are shown below for the
purpose of getting an idea of what they look like, not for full
comprehension of details, which are described at :doc:`light definition <Basic-Configuration>`
and :doc:`full definition <Full-Configuration>`.
``<base>/build.sbt`` (light)
::
name := "My Project"
libraryDependencies += "junit" % "junit" % "4.8" % "test"
``<base>/project/Build.scala`` (full)
::
import sbt._
import Keys._
object MyBuild extends Build
{
lazy val root = Project("root", file(".")) dependsOn(sub)
lazy val sub = Project("sub", file("sub")) settings(
name := "My Project",
libraryDependencies += "junit" % "junit" % "4.8" % "test"
)
}
Important Settings Background
-----------------------------
The fundamental type of a configurable in sbt is a ``Setting[T]``. Each
line in the ``build.sbt`` example above is of this type. The arguments
to the ``settings`` method in the ``Build.scala`` example are of type
``Setting[T]``. Specifically, the ``name`` setting has type
``Setting[String]`` and the ``libraryDependencies`` setting has type
``Setting[Seq[ModuleID]]``, where ``ModuleID`` represents a dependency.
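To make these types concrete, the expressions can be assigned to
values (a sketch; the ``val`` names are illustrative only):

::

    val n: Setting[String] = name := "My Project"
    val deps: Setting[Seq[ModuleID]] =
      libraryDependencies += "junit" % "junit" % "4.8" % "test"

Both values could then be passed to the ``settings`` method of a
``Project``.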
Throughout the documentation, many examples show a setting, such as:
::
libraryDependencies += "junit" % "junit" % "4.8" % "test"
This setting expression either goes in a :doc:`light definition <Basic-Configuration>`
``(build.sbt)`` as is or in the ``settings`` of a ``Project`` instance
in a :doc:`full definition <Full-Configuration>`
``(Build.scala)`` as shown in the example. This is an important point
for understanding the context of examples in the documentation. (That
is, you now know where to copy and paste examples.)
A ``Setting[T]`` describes how to initialize a setting of type ``T``.
The settings shown in the examples are expressions, not statements. In
particular, there is no hidden mutable map that is being modified. Each
``Setting[T]`` is a value that describes an update to a map. The actual
map is rarely referenced directly by user code. It is usually not the
final map that is important, but the operations on it.
To emphasize this, the setting in the following ``Build.scala`` fragment
*is ignored* because it is a value that needs to be included in the
``settings`` of a ``Project``. (Unfortunately, Scala will discard
non-Unit values to get Unit, which is why there is no compile error.)
::
object Bad extends Build {
libraryDependencies += "junit" % "junit" % "4.8" % "test"
}
::
object Good extends Build
{
lazy val root = Project("root", file(".")) settings(
libraryDependencies += "junit" % "junit" % "4.8" % "test"
)
}
Declaring a Setting
-------------------
There is fundamentally one type of initialization, represented by the
``<<=`` method. The other initialization methods ``:=``, ``+=``,
``++=``, ``<+=``, ``<++=``, and ``~=`` are convenience methods that can
be defined in terms of ``<<=``.
The motivation behind the method names is:
- All methods end with ``=`` to obtain the lowest possible infix
precedence.
- A method starting with ``<`` indicates that the initialization uses
other settings.
- A single ``+`` means a single value is expected and will be appended
to the current sequence.
- ``++`` means a ``Seq[T]`` is expected. The sequence will be appended
to the current sequence.
The following sections include descriptions and examples of each
initialization method. The descriptions use "will initialize" or "will
append" to emphasize that they construct a value describing an update
and do not mutate anything. Each setting may be directly included in a
light configuration (build.sbt), appropriately separated by blank lines.
For a full configuration (Build.scala), the setting must go in a
settings Seq as described in the previous section. Information about the
types of the left and right hand sides of the methods follows this
section.
:=
~~
``:=`` is used to define a setting that overwrites any previous value
without referring to other settings. For example, the following defines
a setting that will set *name* to "My Project" regardless of whether
*name* has already been initialized.
::
name := "My Project"
No other settings are used. The value assigned is just a constant.
+= and ++=
~~~~~~~~~~
``+=`` is used to define a setting that will append a single value to
the current sequence without referring to other settings. For example,
the following defines a setting that will append a JUnit dependency to
*libraryDependencies*. No other settings are referenced.
::
libraryDependencies += "junit" % "junit" % "4.8" % "test"
The related method ``++=`` appends a sequence to the current sequence,
also without using other settings. For example, the following defines a
setting that will add dependencies on ScalaCheck and specs to the
current list of dependencies. Because it will append a ``Seq``, it uses
``++=`` instead of ``+=``.
::
libraryDependencies ++= Seq(
"org.scala-tools.testing" %% "scalacheck" % "1.9" % "test",
"org.scala-tools.testing" %% "specs" % "1.6.8" % "test"
)
The types involved in ``+=`` and ``++=`` are constrained by the existence of an
implicit parameter of type ``Append.Value[A,B]`` in the case of ``+=`` or
``Append.Values[A,B]`` in the case of ``++=``. Here, ``B`` is the type of the
value being appended and ``A`` is the type of the setting that the value is
being appended to. See
`Append <../../api/sbt/Append$.html>`_
for the provided instances.
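For example, both of the following compile against the built-in
*javaOptions* key, which holds a ``Seq[String]``: the first appends a
single ``String`` (an ``Append.Value`` instance exists), the second a
``Seq[String]`` (an ``Append.Values`` instance exists). The options
shown are illustrative only:

::

    javaOptions += "-Xmx512m"

    javaOptions ++= Seq("-Xms64m", "-Xss2m")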
~=
~~
``~=`` is used to transform the current value of a setting. For example,
the following defines a setting that will remove ``-Y`` compiler options
from the current list of compiler options.
::
scalacOptions in Compile ~= { (options: Seq[String]) =>
options filterNot ( _ startsWith "-Y" )
}
The earlier declaration of JUnit as a library dependency using ``+=``
could also be written as:
::
libraryDependencies ~= { (deps: Seq[ModuleID]) =>
deps :+ ("junit" % "junit" % "4.8" % "test")
}
<<=
~~~
The most general method is ``<<=``. All other methods can be implemented in
terms of ``<<=``. ``<<=`` defines a setting using other settings, possibly
including the previous value of the setting being defined. For example,
declaring JUnit as a dependency using ``<<=`` would look like:
::
libraryDependencies <<= libraryDependencies apply { (deps: Seq[ModuleID]) =>
// Note that :+ is a method on Seq that appends a single value
deps :+ ("junit" % "junit" % "4.8" % "test")
}
This defines a setting that will apply the provided function to the
previous value of *libraryDependencies*. ``apply`` and ``Seq[ModuleID]``
are explicit for demonstration only and may be omitted.
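With those annotations omitted, the same setting reads more concisely:

::

    libraryDependencies <<= libraryDependencies { deps =>
      deps :+ ("junit" % "junit" % "4.8" % "test")
    }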
<+= and <++=
~~~~~~~~~~~~
The ``<+=`` method is a hybrid of the ``+=`` and ``<<=`` methods. Similarly, ``<++=`` is
a hybrid of the ``++=`` and ``<<=`` methods. These methods are convenience
methods for using other settings to append to the current value of a
setting.
For example, the following will add a dependency on the Scala compiler
to the current list of dependencies. Because the *scalaVersion* setting
is used, the method is ``<+=`` instead of ``+=``.
::
libraryDependencies <+= scalaVersion( "org.scala-lang" % "scala-compiler" % _ )
This next example adds a dependency on the Scala compiler to the current
list of dependencies. Because another setting (*scalaVersion*) is used
and a ``Seq`` is appended, the method is ``<++=``.
::
libraryDependencies <++= scalaVersion { sv =>
("org.scala-lang" % "scala-compiler" % sv) ::
("org.scala-lang" % "scala-swing" % sv) ::
Nil
}
The types involved in ``<+=`` and ``<++=``, like ``+=`` and ``++=``, are constrained
by the existence of an implicit parameter of type ``Append.Value[A,B]`` in
the case of ``<+=`` or ``Append.Values[A,B]`` in the case of ``<++=``. Here, ``B``
is the type of the value being appended and ``A`` is the type of the setting
that the value is being appended to. See
`Append <../../api/sbt/Append$.html>`_
for the provided instances.
Setting types
-------------
This section provides information about the types of the left and
right-hand sides of the initialization methods. It is currently
incomplete.
Setting Keys
~~~~~~~~~~~~
The left hand side of a setting definition is of type
`ScopedSetting <../../api/sbt/ScopedSetting.html>`_.
This type has two parts: a key (of type
`SettingKey <../../api/sbt/SettingKey.html>`_)
and a scope (of type
`Scope <../../api/sbt/Scope$.html>`_). An
unspecified scope is like using ``this`` to refer to the current
context. The previous examples on this page have not defined an explicit
scope. See :doc:`Inspecting Settings <Inspecting-Settings>` for details on the axes that make up
scopes.
The target (the value on the left) of a method like ``:=`` identifies
one of the main constructs in sbt: a setting, a task, or an input task.
It is not an actual setting or task, but a key representing a setting or
task. A setting is a value assigned when a project is loaded. A task is
a unit of work that is run on-demand after a project is loaded and
produces a value. An input task, previously known as a method task in
sbt 0.7 and earlier, accepts an input string and produces a task to be
run. (The renaming is because it can accept arbitrary input in 0.10+ and
not just a space-delimited sequence of arguments like in 0.7.)
A setting key has type
`SettingKey <../../api/sbt/SettingKey.html>`_,
a task key has type
`TaskKey <../../api/sbt/TaskKey.html>`_,
and an input task key has type
`InputKey <../../api/sbt/InputKey.html>`_.
The remainder of this section only discusses settings. See :doc:`Tasks <Tasks>` and
:doc:`Input Tasks <Input-Tasks>` for details on the other types (those
pages assume an understanding of this page).
To construct a
`ScopedSetting <../../api/sbt/ScopedSetting.html>`_,
select the key and then scope it using the ``in`` method (see the
`ScopedSetting <../../api/sbt/ScopedSetting.html>`_
API for details). For example, the setting for compiler options for the
test sources is referenced using the *scalacOptions* key and the
``Test`` configuration in the current project.
::
val ref: ScopedSetting[Seq[String]] = scalacOptions in Test
The current project doesn't need to be explicitly specified, since that
is the default in most cases. Some settings are specific to a task, in
which case the task should be specified as part of the scope as well.
For example, the compiler options used for the *console* task for test
sources is referenced like:
::
val ref: ScopedSetting[Seq[String]] = scalacOptions in Test in console
In these examples, the type of the setting reference key is given
explicitly and the key is assigned to a value to emphasize that it is a
normal (immutable) Scala value and can be manipulated and passed around
as such.
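Because the reference is an ordinary value, it can also be produced by a
helper method and reused; for example (the helper is hypothetical):

::

    // Hypothetical helper: scope scalacOptions to a given configuration
    def optionsIn(config: Configuration): ScopedSetting[Seq[String]] =
      scalacOptions in config

    val compileOpts = optionsIn(Compile)
    val testOpts = optionsIn(Test)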
Computing the value for a setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The right hand side of a setting definition varies by the initialization
method used. In the case of ``:=``, ``+=``, ``++=``, and ``~=``, the type of the
argument is straightforward (see the
`ScopedSetting <../../api/sbt/ScopedSetting.html>`_
API). For ``<<=``, ``<+=``, and ``<++=``, the type is ``Initialize[T]`` (for ``<<=``
and ``<+=``) or ``Initialize[Seq[T]]`` (for ``<++=``). This section discusses the
`Initialize <../../api/sbt/Init$Initialize.html>`_
type.
A value of type ``Initialize[T]`` represents a computation that takes
the values of other settings as inputs. For example, in the following
setting, the argument to ``<<=`` is of type ``Initialize[File]``:
::
scalaSource in Compile <<= baseDirectory {
(base: File) => base / "src"
}
This example can be written more explicitly as:
::
{
val key: ScopedSetting[File] = scalaSource.in(Compile)
val init: Initialize[File] = baseDirectory.apply( (base: File) => base / "src" )
key.<<=(init)
}
To construct a value of type ``Initialize``, construct a tuple of up to
nine input ``ScopedSetting``\ s. Then, define the function that will
compute the value of the setting given the values for these input
settings.
::
val path: Initialize[File] =
(baseDirectory, name, version).apply( (base: File, n: String, v: String) =>
base / (n + "-" + v + ".jar")
)
This example takes the base directory, project name, and project version
as inputs. The keys for these settings are defined in `sbt.Keys <../../sxr/Keys.scala.html>`_, along
with all other built-in keys. The argument to the ``apply`` method is a
function that takes the values of those settings and computes a new
value. In this case, that value is the path of a jar.
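An ``Initialize[File]`` such as *path* becomes an actual setting only
when assigned to a scoped key with ``<<=``; a sketch using a
hypothetical ``SettingKey``:

::

    // Hypothetical key to receive the computed location
    val jarPath = SettingKey[File]("jar-path", "Location of the jar to build.")

    jarPath <<= path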
Initialize[Task[T]]
~~~~~~~~~~~~~~~~~~~
To initialize tasks, the procedure is similar. There are a few
differences. First, the inputs are of type
`ScopedTaskable <../../api/sbt/ScopedTaskable.html>`_. This means
that either settings
(`ScopedSetting <../../api/sbt/ScopedSetting.html>`_)
or tasks (`ScopedTask <../../api/sbt/ScopedTask.html>`_) may be used as
the input to a task. Second, the name of the method used is ``map``
instead of ``apply`` and the resulting value is of type
``Initialize[Task[T]]``. In the following example, the inputs are the
:doc:`report <Update-Report>` produced by the *update* task and the
context *configuration*. The function computes the locations of the
dependencies for that configuration.
::

    val mainDeps: Initialize[Task[Seq[File]]] =
      (update, configuration).map( (report: UpdateReport, config: Configuration) =>
        report.select(configuration = config.name)
      )
As before, *update* and *configuration* are defined in
`Keys <../../sxr/Keys.scala.html>`_.
*update* is of type ``TaskKey[UpdateReport]`` and *configuration* is of
type ``SettingKey[Configuration]``.
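As with settings, the ``Initialize[Task[...]]`` value is used by
assigning it to a task key with ``<<=``; a sketch with a hypothetical
``TaskKey``:

::

    // Hypothetical key exposing the computed dependency locations
    val dependencyFiles = TaskKey[Seq[File]]("dependency-files",
      "Locations of dependencies in the current configuration.")

    dependencyFiles <<= (update, configuration) map { (report, config) =>
      report.select(configuration = config.name)
    }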


@ -1,55 +0,0 @@
# Advanced Command Example
This is an advanced example showing some of the power of the new settings system. It shows how to temporarily modify all declared dependencies in the build, regardless of where they are defined. It directly operates on the final Seq[Setting[_]] produced from every setting involved in the build.
The modifications are applied by running _canonicalize_. A _reload_ or using _set_ reverts the modifications, requiring _canonicalize_ to be run again.
This particular example shows how to transform all declared dependencies on ScalaCheck to use version 1.8. As an exercise, you might try transforming other dependencies, the repositories used, or the scalac options used. It is possible to add or remove settings as well.
This kind of transformation is possible directly on the settings of Project, but it would not include settings automatically added from plugins or build.sbt files. What this example shows is doing it unconditionally on all settings in all projects in all builds, including external builds.
```scala
import sbt._
import Keys._
object Canon extends Plugin
{
// Registers the canonicalize command in every project
override def settings = Seq(commands += canonicalize)
// Define the command. This takes the existing settings (including any session settings)
// and applies 'f' to each Setting[_]
def canonicalize = Command.command("canonicalize") { (state: State) =>
val extracted = Project.extract(state)
import extracted._
val transformed = session.mergeSettings map ( s => f(s) )
val newStructure = Load.reapply(transformed, structure)
Project.setProject(session, newStructure, state)
}
// Transforms a Setting[_].
def f(s: Setting[_]): Setting[_] = s.key.key match {
// transform all settings that modify libraryDependencies
case Keys.libraryDependencies.key =>
// hey scalac. T == Seq[ModuleID]
s.asInstanceOf[Setting[Seq[ModuleID]]].mapInit(mapLibraryDependencies)
// preserve other settings
case _ => s
}
// This must be idempotent because it gets applied after every transformation.
// That is, if the user does:
// libraryDependencies += a
// libraryDependencies += b
// then this method will be called for Seq(a) and Seq(a,b)
def mapLibraryDependencies(key: ScopedKey[Seq[ModuleID]], value: Seq[ModuleID]): Seq[ModuleID] =
value map mapSingle
// This is the fundamental transformation.
// Here we map all declared ScalaCheck dependencies to be version 1.8
def mapSingle(module: ModuleID): ModuleID =
if(module.name == "scalacheck")
module.copy(revision = "1.8")
else
module
}
```


@ -0,0 +1,71 @@
========================
Advanced Command Example
========================
This is an advanced example showing some of the power of the new
settings system. It shows how to temporarily modify all declared
dependencies in the build, regardless of where they are defined. It
directly operates on the final ``Seq[Setting[_]]`` produced from every
setting involved in the build.
The modifications are applied by running *canonicalize*. A *reload* or
using *set* reverts the modifications, requiring *canonicalize* to be
run again.
This particular example shows how to transform all declared dependencies
on ScalaCheck to use version 1.8. As an exercise, you might try
transforming other dependencies, the repositories used, or the scalac
options used. It is possible to add or remove settings as well.
This kind of transformation is possible directly on the settings of
Project, but it would not include settings automatically added from
plugins or build.sbt files. What this example shows is doing it
unconditionally on all settings in all projects in all builds, including
external builds.
::
import sbt._
import Keys._
object Canon extends Plugin
{
// Registers the canonicalize command in every project
override def settings = Seq(commands += canonicalize)
// Define the command. This takes the existing settings (including any session settings)
// and applies 'f' to each Setting[_]
def canonicalize = Command.command("canonicalize") { (state: State) =>
val extracted = Project.extract(state)
import extracted._
val transformed = session.mergeSettings map ( s => f(s) )
val newStructure = Load.reapply(transformed, structure)
Project.setProject(session, newStructure, state)
}
// Transforms a Setting[_].
def f(s: Setting[_]): Setting[_] = s.key.key match {
// transform all settings that modify libraryDependencies
case Keys.libraryDependencies.key =>
// hey scalac. T == Seq[ModuleID]
s.asInstanceOf[Setting[Seq[ModuleID]]].mapInit(mapLibraryDependencies)
// preserve other settings
case _ => s
}
// This must be idempotent because it gets applied after every transformation.
// That is, if the user does:
// libraryDependencies += a
// libraryDependencies += b
// then this method will be called for Seq(a) and Seq(a,b)
def mapLibraryDependencies(key: ScopedKey[Seq[ModuleID]], value: Seq[ModuleID]): Seq[ModuleID] =
value map mapSingle
// This is the fundamental transformation.
// Here we map all declared ScalaCheck dependencies to be version 1.8
def mapSingle(module: ModuleID): ModuleID =
if(module.name == "scalacheck")
module.copy(revision = "1.8")
else
module
}
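Once the plugin is available to a build, the transformation is applied
from the sbt prompt (session sketch; output omitted):

::

    > canonicalize
    > show library-dependencies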


@ -1,67 +0,0 @@
## Advanced Configurations Example
This is an example [[full build definition|Full Configuration]] that demonstrates using Ivy configurations to group dependencies.
The `utils` module provides utilities for other modules. It uses Ivy configurations to
group dependencies so that a dependent project doesn't have to pull in all dependencies
if it only uses a subset of functionality. This can be an alternative to having multiple
utilities modules (and consequently, multiple utilities jars).
In this example, consider a `utils` project that provides utilities related to both Scalate and Saxon.
It therefore needs both Scalate and Saxon on the compilation classpath and a project that uses
all of the functionality of `utils` will need these dependencies as well.
However, project `a` only needs the utilities related to Scalate, so it doesn't need Saxon.
By depending only on the `scalate` configuration of `utils`, it only gets the Scalate-related dependencies.
```scala
import sbt._
import Keys._
object B extends Build
{
/********** Projects ************/
// An example project that only uses the Scalate utilities.
lazy val a = Project("a", file("a")) dependsOn(utils % "compile->scalate")
// An example project that uses the Scalate and Saxon utilities.
// For the configurations defined here, this is equivalent to doing dependsOn(utils),
// but if there were more configurations, it would select only the Scalate and Saxon
// dependencies.
lazy val b = Project("b", file("b")) dependsOn(utils % "compile->scalate,saxon")
// Defines the utilities project
lazy val utils = Project("utils", file("utils")) settings(utilsSettings : _*)
def utilsSettings: Seq[Setting[_]] =
// Add the src/common/scala/ compilation configuration.
inConfig(Common)(Defaults.configSettings) ++
// Publish the common artifact
addArtifact(artifact in (Common, packageBin), packageBin in Common) ++ Seq(
// We want our Common sources to have access to all of the dependencies on the classpaths
// for compile and test, but when depended on, it should only require dependencies in 'common'
classpathConfiguration in Common := CustomCompile,
// Modify the default Ivy configurations.
// 'overrideConfigs' ensures that Compile is replaced by CustomCompile
ivyConfigurations ~= overrideConfigs(Scalate, Saxon, Common, CustomCompile),
// Put all dependencies without an explicit configuration into Common (optional)
defaultConfiguration := Some(Common),
// Declare dependencies in the appropriate configurations
libraryDependencies ++= Seq(
"org.fusesource.scalate" % "scalate-core" % "1.5.0" % "scalate",
"org.squeryl" %% "squeryl" % "0.9.4" % "scalate",
"net.sf.saxon" % "saxon" % "8.7" % "saxon"
)
)
/********* Configurations *******/
lazy val Scalate = config("scalate") extend(Common) describedAs("Dependencies for using Scalate utilities.")
lazy val Common = config("common") describedAs("Dependencies required in all configurations.")
lazy val Saxon = config("saxon") extend(Common) describedAs("Dependencies for using Saxon utilities.")
// Define a customized compile configuration that includes
// dependencies defined in our other custom configurations
lazy val CustomCompile = config("compile") extend(Saxon, Common, Scalate)
}
```

Some files were not shown because too many files have changed in this diff.