Docs: custom role 'codeliteral' to allow substitutions in inline code

The built-in 'literal' role implements ``.  It tokenizes the
literal string to attempt some particular wrapping behavior.
This puts parts of the string in different spans, each with
class='pre' and the previous docs.css had a padding for that
class.  This led to excessive space within the literal string.

The custom role does not do this tokenization, but nested
inline parsing can result in multiple nodes, repeating the
problem.  So, the padding for class='pre' is dropped.  Ideally,
the sequence of nodes would simply be wrapped in an inline element
with class='pre' or some other proper solution that avoids having
multiple nodes.

To avoid wrapping in the middle of a literal, the 'pre' class
now has the style `white-space: pre;`.

The default role is now 'codeliteral' and the previous
uses of the built-in `` are converted to `.  Some incorrectly
converted code blocks were fixed in the process.

Finally, the Global-Settings page is updated with the new location
for the global sbt directory.  Due to the above changes, this could
be done without hardcoding the version.
This commit is contained in:
Mark Harrah 2013-07-29 07:27:17 -04:00
parent f29b37e9d3
commit c14179c358
85 changed files with 2854 additions and 2743 deletions


@@ -17,37 +17,37 @@ Features, fixes, changes with compatibility implications (incomplete, please help below)
- Task axis syntax has changed from key(for task) to task::key (see
details section below)
- The organization for sbt has changed to ``org.scala-sbt`` (was:
- The organization for sbt has changed to `org.scala-sbt` (was:
org.scala-tools.sbt). This affects users of the scripted plugin in
particular.
- ``artifactName`` type has changed to
``(ScalaVersion, Artifact, ModuleID) => String``
- ``javacOptions`` is now a task
- ``session save`` overwrites settings in ``build.sbt`` (when appropriate). gh-369
- `artifactName` type has changed to
`(ScalaVersion, Artifact, ModuleID) => String`
- `javacOptions` is now a task
- `session save` overwrites settings in `build.sbt` (when appropriate). gh-369
- scala-library.jar is now required to be on the classpath in order to
compile Scala code. See the ``scala-library.jar`` section at the
compile Scala code. See the `scala-library.jar` section at the
bottom of the page for details.
Features
--------
- Support for forking tests (gh-415)
- ``test-quick`` (see details section below)
- `test-quick` (see details section below)
- Support globally overriding repositories (gh-472)
- Added ``print-warnings`` task that will print unchecked and
- Added `print-warnings` task that will print unchecked and
deprecation warnings from the previous compilation without needing to
recompile (Scala 2.10+ only)
- Support for loading an ivy settings file from a URL.
- ``projects add/remove <URI>`` for temporarily working with other builds
- `projects add/remove <URI>` for temporarily working with other builds
- Enhanced control over parallel execution (see details section below)
- ``inspect tree <key>`` for calling ``inspect`` command recursively (gh-274)
- `inspect tree <key>` for calling `inspect` command recursively (gh-274)
Fixes
-----
- Delete a symlink and not its contents when recursively deleting a directory.
- Fix detection of ancestors for java sources
- Fix the resolvers used for ``update-sbt-classifiers`` (gh-304)
- Fix the resolvers used for `update-sbt-classifiers` (gh-304)
- Fix auto-imports of plugins (gh-412)
- Argument quoting (see details section below)
- Properly reset JLine after being stopped by Ctrl+z (unix only). gh-394
@@ -60,10 +60,10 @@ Improvements
- Use java 7 Redirect.INHERIT to inherit input stream of subprocess (gh-462,\ gh-327).
This should fix issues when forking interactive programs. (@vigdorchik)
- Mirror ivy 'force' attribute (gh-361)
- Various improvements to ``help`` and ``tasks`` commands as well as
new ``settings`` command (gh-315)
- Various improvements to `help` and `tasks` commands as well as
new `settings` command (gh-315)
- Bump jsch version to 0.1.46. (gh-403)
- Improved help commands: ``help``, ``tasks``, ``settings``.
- Improved help commands: `help`, `tasks`, `settings`.
- Bump to JLine 1.0 (see details section below)
- Global repository setting (see details section below)
- Other fixes/improvements: gh-368, gh-377, gh-378, gh-386, gh-387, gh-388, gh-389
@@ -75,7 +75,7 @@ Experimental or In-progress
to change, but already being used in `a branch of the
scala-maven-plugin <https://github.com/davidB/scala-maven-plugin/tree/feature/sbt-inc>`_.
- Experimental support for keeping the Scala compiler resident. Enable
by passing ``-Dsbt.resident.limit=n`` to sbt, where ``n`` is an
by passing `-Dsbt.resident.limit=n` to sbt, where `n` is an
integer indicating the maximum number of compilers to keep around.
- The `Howto pages <http://www.scala-sbt.org/howto.html>`_ on the `new
site <http://www.scala-sbt.org>`_ are at least readable now. There is
@@ -88,10 +88,10 @@ Details of major changes from 0.11.2 to 0.12.0
Plugin configuration directory
------------------------------
In 0.11.0, plugin configuration moved from ``project/plugins/`` to just
``project/``, with ``project/plugins/`` being deprecated. Only 0.11.2
In 0.11.0, plugin configuration moved from `project/plugins/` to just
`project/`, with `project/plugins/` being deprecated. Only 0.11.2
had a deprecation message, but in all of 0.11.x, the presence of the old
style ``project/plugins/`` directory took precedence over the new style.
style `project/plugins/` directory took precedence over the new style.
In 0.12.0, the new style takes precedence. Support for the old style
won't be removed until 0.13.0.
@@ -99,7 +99,7 @@ won't be removed until 0.13.0.
styles are still supported; only the behavior when there is a
conflict has changed.
2. In practice, switching from an older branch of a project to a new
branch would often leave an empty ``project/plugins/`` directory that
branch would often leave an empty `project/plugins/` directory that
would cause the old style to be used, despite there being no
configuration there.
3. Therefore, the intention is that this change is strictly an
@@ -113,9 +113,9 @@ There is an important change related to parsing the task axis for
settings and tasks that fixes gh-202
1. The syntax before 0.12 has been
``{build}project/config:key(for task)``
`{build}project/config:key(for task)`
2. The proposed (and implemented) change for 0.12 is
``{build}project/config:task::key``
`{build}project/config:task::key`
3. By moving the task axis before the key, it allows for easier
discovery (via tab completion) of keys in plugins.
4. It is not planned to support the old syntax.
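As a concrete illustration of the two syntaxes (the key and task names here are hypothetical, chosen only to show the shape of the change):

```
before 0.12:  compile:streams(for doc)
since 0.12:   compile:doc::streams
```

With the task axis in front of the key, typing `compile:doc::` and pressing tab can complete the available keys for that task.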
@@ -140,12 +140,12 @@ that has been previously discussed on the mailing list.
5. In 0.12, both of these situations result in the aggregated settings
being selected. For example,
1. Consider a project ``root`` that aggregates a subproject ``sub``.
2. ``root`` defines ``*:package``.
3. ``sub`` defines ``compile:package`` and ``compile:compile``.
4. Running ``root/package`` will run ``root/*:package`` and
``sub/compile:package``
5. Running ``root/compile`` will run ``sub/compile:compile``
1. Consider a project `root` that aggregates a subproject `sub`.
2. `root` defines `*:package`.
3. `sub` defines `compile:package` and `compile:compile`.
4. Running `root/package` will run `root/*:package` and
`sub/compile:package`
5. Running `root/compile` will run `sub/compile:compile`
6. This change was made possible in part by the change to task axis
parsing.
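In build-definition terms, the aggregation in the example above might be declared like this (a sketch in 0.12-style `Build.scala`; project names and paths are illustrative):

```scala
import sbt._
import Keys._

object MyBuild extends Build {
  // 'root' aggregates 'sub': tasks run on root are also run on sub
  lazy val sub  = Project("sub", file("sub"))
  lazy val root = Project("root", file(".")).aggregate(sub)
}
```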
@@ -157,10 +157,10 @@ Fine control over parallel execution is supported as described here:
:doc:`/Detailed-Topics/Parallel-Execution`
1. The default behavior should be the same as before, including the
``parallelExecution`` settings.
`parallelExecution` settings.
2. The new capabilities of the system should otherwise be considered
experimental.
3. Therefore, ``parallelExecution`` won't be deprecated at this time.
3. Therefore, `parallelExecution` won't be deprecated at this time.
Source dependencies
-------------------
@@ -177,13 +177,13 @@ is loaded across all projects. There are two parts to this.
Additionally, Sanjin's patches to add support for hg and svn URIs are
included.
1. sbt uses subversion to retrieve URIs beginning with ``svn`` or
``svn+ssh``. An optional fragment identifies a specific revision to
1. sbt uses subversion to retrieve URIs beginning with `svn` or
`svn+ssh`. An optional fragment identifies a specific revision to
checkout.
2. Because a URI for mercurial doesn't have a mercurial-specific scheme,
sbt requires the URI to be prefixed with ``hg:`` to identify it as a
sbt requires the URI to be prefixed with `hg:` to identify it as a
mercurial repository.
3. Also, URIs that end with ``.git`` are now handled properly.
3. Also, URIs that end with `.git` are now handled properly.
Cross building
--------------
@@ -191,7 +191,7 @@ Cross building
The cross version suffix is shortened to only include the major and
minor version for Scala versions starting with the 2.10 series and for
sbt versions starting with the 0.12 series. For example,
``sbinary_2.10`` for a normal library or ``sbt-plugin_2.10_0.12`` for an
`sbinary_2.10` for a normal library or `sbt-plugin_2.10_0.12` for an
sbt plugin. This requires forward and backward binary compatibility
across incremental releases for both Scala and sbt.
@@ -206,29 +206,29 @@ across incremental releases for both Scala and sbt.
equal binary versions implies binary compatibility. All Scala
versions prior to 2.10 use the full version for the binary version to
reflect previous sbt behavior. For 2.10 and later, the binary version
is ``<major>.<minor>``.
is `<major>.<minor>`.
4. The cross version behavior for published artifacts is configured by
the crossVersion setting. It can be configured for dependencies by
using the ``cross`` method on ``ModuleID`` or by the traditional %%
using the `cross` method on `ModuleID` or by the traditional %%
dependency construction variant. By default, a dependency has cross
versioning disabled when constructed with a single % and uses the
binary Scala version when constructed with %%.
5. The artifactName function now accepts a type ScalaVersion as its
first argument instead of a String. The full type is now
``(ScalaVersion, ModuleID, Artifact) => String``. ScalaVersion
`(ScalaVersion, ModuleID, Artifact) => String`. ScalaVersion
contains both the full Scala version (such as 2.10.0) as well as the
binary Scala version (such as 2.10).
6. The flexible version mapping added by Indrajit has been merged into
the ``cross`` method and the %% variants accepting more than one
the `cross` method and the %% variants accepting more than one
argument have been deprecated. See :doc:`/Detailed-Topics/Cross-Build` for details.
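For instance (module coordinates are illustrative), these two dependency declarations are equivalent under the new scheme:

```scala
// %% appends the binary Scala version (e.g. _2.10) to the artifact name
libraryDependencies += "org.example" %% "mylib" % "1.0"

// the same dependency, spelled out with the cross method
libraryDependencies += ("org.example" % "mylib" % "1.0").cross(CrossVersion.binary)
```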
Global repository setting
-------------------------
Define the repositories to use by putting a standalone
``[repositories]`` section (see the
`[repositories]` section (see the
:doc:`/Detailed-Topics/Launcher` page) in
``~/.sbt/repositories`` and pass ``-Dsbt.override.build.repos=true`` to
`~/.sbt/repositories` and pass `-Dsbt.override.build.repos=true` to
sbt. Only the repositories in that file will be used by the launcher for
retrieving sbt and Scala and by sbt when retrieving project
dependencies. (@jsuereth)
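A minimal `~/.sbt/repositories` file in the launcher format might look like the following (the proxy name and URL are placeholders):

```
[repositories]
  local
  my-proxy: http://repo.example.com/releases/
```

With `-Dsbt.override.build.repos=true`, these entries replace, rather than supplement, the resolvers defined in the build.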
@@ -236,7 +236,7 @@ dependencies. (@jsuereth)
test-quick
----------
``test-quick`` (gh-393) runs the tests specified as arguments (or all tests if no arguments are
`test-quick` (gh-393) runs the tests specified as arguments (or all tests if no arguments are
given) that:
1. have not been run yet OR
@@ -249,10 +249,10 @@ Argument quoting
Argument quoting (gh-396) from the interactive mode works like Scala string literals.
1. ``> command "arg with spaces,\n escapes interpreted"``
2. ``> command """arg with spaces,\n escapes not interpreted"""``
1. `> command "arg with spaces,\n escapes interpreted"`
2. `> command """arg with spaces,\n escapes not interpreted"""`
3. For the first variant, note that paths on Windows use backslashes and
need to be escaped (``\\``). Alternatively, use the second variant,
need to be escaped (`\\`). Alternatively, use the second variant,
which does not interpret escapes.
4. For using either variant in batch mode, note that a shell will
generally require the double quotes themselves to be escaped.
@@ -264,7 +264,7 @@ sbt versions prior to 0.12.0 provided the location of scala-library.jar
to scalac even if scala-library.jar wasn't on the classpath. This
allowed compiling Scala code without scala-library as a dependency, for
example, but this was a misfeature. Instead, the Scala library should be
declared as ``provided``:
declared as `provided`:
::
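The literal block that follows is elided by the diff view; the declaration would look something like this (the version shown is illustrative):

```scala
libraryDependencies += "org.scala-lang" % "scala-library" % "2.9.2" % "provided"
```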


@@ -10,36 +10,36 @@ Features, fixes, changes with compatibility implications (incomplete, please help
- Moved to Scala 2.10 for sbt and build definitions.
- Support for plugin configuration in ``project/plugins/`` has been removed. It was deprecated since 0.11.2.
- Dropped support for tab completing the right side of a setting for the ``set`` command. The new task macros make this tab completion obsolete.
- Support for plugin configuration in `project/plugins/` has been removed. It was deprecated since 0.11.2.
- Dropped support for tab completing the right side of a setting for the `set` command. The new task macros make this tab completion obsolete.
- The convention for keys is now camelCase only. Details below.
- Fixed the default classifier for tests to be ``tests`` for proper Maven compatibility.
- The global settings and plugins directories are now versioned. Global settings go in ``~/.sbt/0.13/`` and global plugins in ``~/.sbt/0.13/plugins/`` by default. Explicit overrides, such as via the ``sbt.global.base`` system property, are still respected. (gh-735)
- Fixed the default classifier for tests to be `tests` for proper Maven compatibility.
- The global settings and plugins directories are now versioned. Global settings go in `~/.sbt/0.13/` and global plugins in `~/.sbt/0.13/plugins/` by default. Explicit overrides, such as via the `sbt.global.base` system property, are still respected. (gh-735)
- sbt no longer canonicalizes files passed to scalac. (gh-723)
- sbt now enforces that each project must have a unique ``target`` directory.
- sbt no longer overrides the Scala version in dependencies. This allows independent configurations to depend on different Scala versions and treats Scala dependencies other than scala-library as normal dependencies. However, it can result in resolved versions other than ``scalaVersion`` for those other Scala libraries.
- sbt now enforces that each project must have a unique `target` directory.
- sbt no longer overrides the Scala version in dependencies. This allows independent configurations to depend on different Scala versions and treats Scala dependencies other than scala-library as normal dependencies. However, it can result in resolved versions other than `scalaVersion` for those other Scala libraries.
- JLine is now configured differently for Cygwin. See :doc:`/Getting-Started/Setup`.
- Jline and Ansi codes work better on Windows now. CI servers might have to explicitly disable Ansi codes via ``-Dsbt.log.format=false``.
- Jline and Ansi codes work better on Windows now. CI servers might have to explicitly disable Ansi codes via `-Dsbt.log.format=false`.
- Forked tests and runs now use the project's base directory as the current working directory.
- ``compileInputs`` is now defined in ``(Compile,compile)`` instead of just ``Compile``
- `compileInputs` is now defined in `(Compile,compile)` instead of just `Compile`
- The result of running tests is now `Tests.Output <../../api/#sbt.Tests$$Output>`_.
Features
--------
- Use the repositories in boot.properties as the default project resolvers. Add ``bootOnly`` to a repository in boot.properties to specify that it should not be used by projects by default. (Josh S., gh-608)
- Use the repositories in boot.properties as the default project resolvers. Add `bootOnly` to a repository in boot.properties to specify that it should not be used by projects by default. (Josh S., gh-608)
- Support vals and defs in .sbt files. Details below.
- Support defining Projects in .sbt files: vals of type Project are added to the Build. Details below.
- New syntax for settings, tasks, and input tasks. Details below.
- Automatically link to external API scaladocs of dependencies by setting ``autoAPIMappings := true``. This requires at least Scala 2.10.1 and for dependencies to define ``apiURL`` for their scaladoc location. Mappings may be manually added to the ``apiMappings`` task as well.
- Support setting the Scala home directory temporarily using the switch command: ``++ scala-version=/path/to/scala/home``. The scala-version part is optional, but is used as the version for any managed dependencies.
- Add ``publishM2`` task for publishing to ``~/.m2/repository``. (gh-485)
- Automatically link to external API scaladocs of dependencies by setting `autoAPIMappings := true`. This requires at least Scala 2.10.1 and for dependencies to define `apiURL` for their scaladoc location. Mappings may be manually added to the `apiMappings` task as well.
- Support setting the Scala home directory temporarily using the switch command: `++ scala-version=/path/to/scala/home`. The scala-version part is optional, but is used as the version for any managed dependencies.
- Add `publishM2` task for publishing to `~/.m2/repository`. (gh-485)
- Use a default root project aggregating all projects if no root is defined. (gh-697)
- New API for getting tasks and settings from multiple projects and configurations. See the new section :ref:`getting values from multiple scopes <multiple-scopes>`.
- Enhanced test interface for better support of test framework features. (Details pending.)
- ``export`` command
- `export` command
* For tasks, prints the contents of the 'export' stream. By convention, this should be the equivalent command line(s) representation. ``compile``, ``doc``, and ``console`` show the approximate command lines for their execution. Classpath tasks print the classpath string suitable for passing as an option.
* For tasks, prints the contents of the 'export' stream. By convention, this should be the equivalent command line(s) representation. `compile`, `doc`, and `console` show the approximate command lines for their execution. Classpath tasks print the classpath string suitable for passing as an option.
* For settings, directly prints the value of a setting instead of going through the logger
Fixes
@@ -51,19 +51,19 @@ Fixes
Improvements
------------
- Run the API extraction phase after the compiler's ``pickler`` phase instead of ``typer`` to allow compiler plugins after ``typer``. (Adriaan M., gh-609)
- Record defining source position of settings. ``inspect`` shows the definition location of all settings contributing to a defined value.
- Allow the root project to be specified explicitly in ``Build.rootProject``.
- Tasks that need a directory for storing cache information can now use the ``cacheDirectory`` method on ``streams``. This supersedes the ``cacheDirectory`` setting.
- The environment variables used when forking ``run`` and ``test`` may be set via ``envVars``, which is a ``Task[Map[String,String]]``. (gh-665)
- Run the API extraction phase after the compiler's `pickler` phase instead of `typer` to allow compiler plugins after `typer`. (Adriaan M., gh-609)
- Record defining source position of settings. `inspect` shows the definition location of all settings contributing to a defined value.
- Allow the root project to be specified explicitly in `Build.rootProject`.
- Tasks that need a directory for storing cache information can now use the `cacheDirectory` method on `streams`. This supersedes the `cacheDirectory` setting.
- The environment variables used when forking `run` and `test` may be set via `envVars`, which is a `Task[Map[String,String]]`. (gh-665)
- Restore class files after an unsuccessful compilation. This is useful when an error occurs in a later incremental step that requires a fix in the originally changed files.
- Better auto-generated IDs for default projects. (gh-554)
- Fork run directly with 'java' to avoid additional class loader from 'scala' command. (gh-702)
- Make autoCompilerPlugins support compiler plugins defined in an internal dependency (only if ``exportJars := true`` due to scalac limitations)
- Make autoCompilerPlugins support compiler plugins defined in an internal dependency (only if `exportJars := true` due to scalac limitations)
- Track ancestors of non-private templates and use this information to require fewer, smaller intermediate incremental compilation steps.
- ``autoCompilerPlugins`` now supports compiler plugins defined in an internal dependency. The plugin project must define ``exportJars := true``. Depend on the plugin with ``...dependsOn(... % Configurations.CompilerPlugin)``.
- `autoCompilerPlugins` now supports compiler plugins defined in an internal dependency. The plugin project must define `exportJars := true`. Depend on the plugin with `...dependsOn(... % Configurations.CompilerPlugin)`.
- Add utilities for debugging API representation extracted by the incremental compiler. (Grzegorz K., gh-677, gh-793)
- ``consoleProject`` unifies the syntax for getting the value of a setting and executing a task. See :doc:`/Detailed-Topics/Console-Project`.
- `consoleProject` unifies the syntax for getting the value of a setting and executing a task. See :doc:`/Detailed-Topics/Console-Project`.
Other
-----
@@ -87,22 +87,22 @@ There are new methods that help avoid duplicating key names by declaring keys as
val myTask = taskKey[Int]("A (required) description of myTask.")
The name will be picked up from the val identifier by the implementation of the taskKey macro so there is no reflection needed or runtime overhead. Note that a description is mandatory and the method ``taskKey`` begins with a lowercase ``t``. Similar methods exist for keys for settings and input tasks: ``settingKey`` and ``inputKey``.
The name will be picked up from the val identifier by the implementation of the taskKey macro so there is no reflection needed or runtime overhead. Note that a description is mandatory and the method `taskKey` begins with a lowercase `t`. Similar methods exist for keys for settings and input tasks: `settingKey` and `inputKey`.
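The companion methods mentioned above follow the same pattern (the descriptions here are placeholders):

```scala
val myTask    = taskKey[Int]("A (required) description of myTask.")
val mySetting = settingKey[String]("A (required) description of mySetting.")
val myInput   = inputKey[Unit]("A (required) description of myInput.")
```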
New task/setting syntax
-----------------------
First, the old syntax is still supported with the intention of allowing conversion to the new syntax at your leisure. There may be some incompatibilities and some may be unavoidable, but please report any issues you have with an existing build.
The new syntax is implemented by making ``:=``, ``+=``, and ``++=`` macros and making these the only required assignment methods. To refer to the value of other settings or tasks, use the ``value`` method on settings and tasks. This method is a stub that is removed at compile time by the macro, which will translate the implementation of the task/setting to the old syntax.
The new syntax is implemented by making `:=`, `+=`, and `++=` macros and making these the only required assignment methods. To refer to the value of other settings or tasks, use the `value` method on settings and tasks. This method is a stub that is removed at compile time by the macro, which will translate the implementation of the task/setting to the old syntax.
For example, the following declares a dependency on ``scala-reflect`` using the value of the ``scalaVersion`` setting:
For example, the following declares a dependency on `scala-reflect` using the value of the `scalaVersion` setting:
::
libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value
The ``value`` method is only allowed within a call to ``:=``, ``+=``, or ``++=``. To construct a setting or task outside of these methods, use ``Def.task`` or ``Def.setting``. For example,
The `value` method is only allowed within a call to `:=`, `+=`, or `++=`. To construct a setting or task outside of these methods, use `Def.task` or `Def.setting`. For example,
::
@@ -110,7 +110,7 @@ The ``value`` method is only allowed within a call to ``:=``, ``+=``, or ``++=``
libraryDependencies += reflectDep.value
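The definition of `reflectDep` is cut off by the hunk boundary above; it would be a `Def.setting` along these lines (a sketch, not the exact original):

```scala
// a setting constructed outside of := / += / ++= must use Def.setting
lazy val reflectDep = Def.setting {
  "org.scala-lang" % "scala-reflect" % scalaVersion.value
}

libraryDependencies += reflectDep.value
```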
A similar method ``parsed`` is defined on ``Parser[T]``, ``Initialize[Parser[T]]`` (a setting that provides a parser), and ``Initialize[State => Parser[T]]`` (a setting that uses the current ``State`` to provide a ``Parser[T]``). This method can be used when defining an input task to get the result of user input.
A similar method `parsed` is defined on `Parser[T]`, `Initialize[Parser[T]]` (a setting that provides a parser), and `Initialize[State => Parser[T]]` (a setting that uses the current `State` to provide a `Parser[T]`). This method can be used when defining an input task to get the result of user input.
::
@@ -126,9 +126,9 @@ A similar method ``parsed`` is defined on ``Parser[T]``, ``Initialize[Parser[T]]
For details, see :doc:`/Extending/Input-Tasks`.
To expect a task to fail and get the failing exception, use the ``failure`` method instead of ``value``. This provides an ``Incomplete`` value, which wraps the exception. To get the result of a task whether or not it succeeds, use ``result``, which provides a ``Result[T]``.
To expect a task to fail and get the failing exception, use the `failure` method instead of `value`. This provides an `Incomplete` value, which wraps the exception. To get the result of a task whether or not it succeeds, use `result`, which provides a `Result[T]`.
Dynamic settings and tasks (``flatMap``) have been cleaned up. Use the ``Def.taskDyn`` and ``Def.settingDyn`` methods to define them (better name suggestions welcome). These methods expect the result to be a task and setting, respectively.
Dynamic settings and tasks (`flatMap`) have been cleaned up. Use the `Def.taskDyn` and `Def.settingDyn` methods to define them (better name suggestions welcome). These methods expect the result to be a task and setting, respectively.
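A minimal sketch of `Def.taskDyn`, choosing between two existing tasks based on a setting (the key `myDynamicTask` is hypothetical; `isSnapshot`, `publish`, and `publishLocal` are standard keys):

```scala
val myDynamicTask = taskKey[Unit]("Illustrative dynamic task.")

// the branch chosen at runtime supplies the task's implementation
myDynamicTask := Def.taskDyn {
  if (isSnapshot.value) Def.task { publishLocal.value }
  else                  Def.task { publish.value }
}.value
```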
.sbt format enhancements
------------------------
@@ -146,10 +146,10 @@ vals and defs are now allowed in .sbt files. They must follow the same rules as
All definitions are compiled before settings, but it will probably be best practice to put definitions together.
Currently, the visibility of definitions is restricted to the .sbt file in which they are defined.
They are not visible in ``consoleProject`` or the ``set`` command at this time, either.
Use Scala files in ``project/`` for visibility in all .sbt files.
They are not visible in `consoleProject` or the `set` command at this time, either.
Use Scala files in `project/` for visibility in all .sbt files.
vals of type ``Project`` are added to the ``Build`` so that multi-project builds can be defined entirely in .sbt files now.
vals of type `Project` are added to the `Build` so that multi-project builds can be defined entirely in .sbt files now.
For example,
::
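The example block here is elided by the hunk boundary; a multi-project .sbt file would contain vals like these (project names are illustrative):

```scala
lazy val util = project                  // picked up as a subproject of the Build
lazy val core = project.dependsOn(util)  // also added, with a classpath dependency on util
```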
@@ -181,20 +181,20 @@ This macro is also available for use in Scala files.
Control over automatically added settings
-----------------------------------------
sbt loads settings from a few places in addition to the settings explicitly defined by the ``Project.settings`` field.
sbt loads settings from a few places in addition to the settings explicitly defined by the `Project.settings` field.
These include plugins, global settings, and .sbt files.
The new ``Project.autoSettings`` method configures these sources: whether to include them for the project and in what order.
The new `Project.autoSettings` method configures these sources: whether to include them for the project and in what order.
``Project.autoSettings`` accepts a sequence of values of type ``AddSettings``.
Instances of ``AddSettings`` are constructed from methods in the ``AddSettings`` companion object.
`Project.autoSettings` accepts a sequence of values of type `AddSettings`.
Instances of `AddSettings` are constructed from methods in the `AddSettings` companion object.
The configurable settings are per-user settings (from ~/.sbt, for example), settings from .sbt files, and plugin settings (project-level only).
The order in which these instances are provided to ``autoSettings`` determines the order in which they are appended to the settings explicitly provided in ``Project.settings``.
The order in which these instances are provided to `autoSettings` determines the order in which they are appended to the settings explicitly provided in `Project.settings`.
For .sbt files, ``AddSettings.defaultSbtFiles`` adds the settings from all .sbt files in the project's base directory as usual.
The alternative method ``AddSettings.sbtFiles`` accepts a sequence of ``Files`` that will be loaded according to the standard .sbt format.
For .sbt files, `AddSettings.defaultSbtFiles` adds the settings from all .sbt files in the project's base directory as usual.
The alternative method `AddSettings.sbtFiles` accepts a sequence of `Files` that will be loaded according to the standard .sbt format.
Relative files are resolved against the project's base directory.
Plugin settings may be included on a per-Plugin basis by using the ``AddSettings.plugins`` method and passing a ``Plugin => Boolean``.
Plugin settings may be included on a per-Plugin basis by using the `AddSettings.plugins` method and passing a `Plugin => Boolean`.
The settings controlled here are only the automatic per-project settings.
Per-build and global settings will always be included.
Settings that plugins require to be manually added still need to be added manually.
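A sketch of `Project.autoSettings` (the particular selection and ordering shown are illustrative; the method names come from the `AddSettings` companion described above):

```scala
import AddSettings._

// append per-user settings, then all plugin settings, then the default .sbt files
lazy val root = Project("root", file("."))
  .autoSettings(userSettings, allPlugins, defaultSbtFiles)
```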
@@ -219,19 +219,19 @@ For example,
Resolving Scala dependencies
----------------------------
Scala dependencies (like scala-library and scala-compiler) are now resolved via the normal ``update`` task. This means:
Scala dependencies (like scala-library and scala-compiler) are now resolved via the normal `update` task. This means:
1. Scala jars won't be copied to the boot directory, except for those needed to run sbt.
2. Scala SNAPSHOTs behave like normal SNAPSHOTs. In particular, running ``update`` will properly re-resolve the dynamic revision.
2. Scala SNAPSHOTs behave like normal SNAPSHOTs. In particular, running `update` will properly re-resolve the dynamic revision.
3. Scala jars are resolved using the same repositories and configuration as other dependencies.
4. Scala dependencies are not resolved via ``update`` when ``scalaHome`` is set, but are instead obtained from the configured directory.
4. Scala dependencies are not resolved via `update` when `scalaHome` is set, but are instead obtained from the configured directory.
5. The Scala version for sbt will still be resolved via the repositories configured for the launcher.
sbt still needs access to the compiler and its dependencies in order to run ``compile``, ``console``, and other Scala-based tasks. So, the Scala compiler jar and dependencies (like scala-reflect.jar and scala-library.jar) are defined and resolved in the ``scala-tool`` configuration (unless ``scalaHome`` is defined). By default, this configuration and the dependencies in it are automatically added by sbt. This occurs even when dependencies are configured in a ``pom.xml`` or ``ivy.xml`` and so it means that the version of Scala defined for your project must be resolvable by the resolvers configured for your project.
sbt still needs access to the compiler and its dependencies in order to run `compile`, `console`, and other Scala-based tasks. So, the Scala compiler jar and dependencies (like scala-reflect.jar and scala-library.jar) are defined and resolved in the `scala-tool` configuration (unless `scalaHome` is defined). By default, this configuration and the dependencies in it are automatically added by sbt. This occurs even when dependencies are configured in a `pom.xml` or `ivy.xml` and so it means that the version of Scala defined for your project must be resolvable by the resolvers configured for your project.
If you need to manually configure where sbt gets the Scala compiler and library used for compilation, the REPL, and other Scala tasks, do one of the following:
1. Set `scalaHome` to use the existing Scala jars in a specific directory. If `autoScalaLibrary` is true, the library jar found here will be added to the (unmanaged) classpath.
2. Set `managedScalaInstance := false` and explicitly define `scalaInstance`, which is of type `ScalaInstance`. This defines the compiler, library, and other jars comprising Scala. If `autoScalaLibrary` is true, the library jar from the defined `ScalaInstance` will be added to the (unmanaged) classpath.
The :doc:`/Detailed-Topics/Configuring-Scala` page provides full details.
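As a sketch, the first option might look like the following in `build.sbt`; the installation path shown is a hypothetical example:

```scala
// Use the Scala jars from an existing local installation instead of
// resolving them via update. The path is a hypothetical example.
scalaHome := Some(file("/opt/scala-2.10.2"))
```

For the second option, you would set `managedScalaInstance := false` and assign a `ScalaInstance` to `scalaInstance` yourself.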



Community Ivy Repository
========================
`Typesafe <http://www.typesafe.com>`_ has provided a freely available `Ivy Repository <http://repo.scala-sbt.org/scalasbt>`_ for sbt projects to use.
If you would like to publish your project to this Ivy repository, first contact `sbt-repo-admins <http://groups.google.com/group/sbt-repo-admins?hl=en>`_ and request privileges (we have to verify code ownership, rights to publish, etc.). Afterwards, you can deploy your plugins using the following configuration:
::
publishMavenStyle := false
You'll also need to add your credentials somewhere. For example, you might use a `~/.sbt/pluginpublish.sbt` file:
::
credentials += Credentials("Artifactory Realm",
"scalasbt.artifactoryonline.com", "@user name@", "@my encrypted password@")
Where `@my encrypted password@` is actually obtained using the following `instructions <http://wiki.jfrog.org/confluence/display/RTF/Centrally+Secure+Passwords>`_.
*Note: Your code must abide by the* `repository policies <Repository-Rules>`_.
Code generator plugins
https://github.com/bigtoast/sbt-thrift
- xsbt-hginfo (Generate Scala source code for Mercurial repository
information): https://bitbucket.org/lukas\_pustina/xsbt-hginfo
- sbt-scalashim (Generate Scala shim like `sys.error`):
https://github.com/sbt/sbt-scalashim
- sbtend (Generate Java source code from
`xtend <http://www.eclipse.org/xtend/>`_ ):


1. Download the launcher jar from one of the subdirectories of |nightly-launcher|.
They should be listed in chronological order, so the most recent one will be last.
2. The version number is the name of the subdirectory and is of the form
`|version|.x-yyyyMMdd-HHmmss`. Use this in a `build.properties` file.
3. Call your script something like `sbt-nightly` to retain access to a
stable `sbt` launcher. The documentation will refer to the script as `sbt`, however.
Related to the third point, remember that an `sbt.version` setting in
`<build-base>/project/build.properties` determines the version of sbt
to use in a project. If it is not present, the default version
associated with the launcher is used. This means that you must set
`sbt.version=yyyyMMdd-HHmmss` in an existing
`<build-base>/project/build.properties`. You can verify the right
version of sbt is being used to build a project by running
`about`.
To reduce problems, it is recommended to not use a launcher jar for one
nightly version to launch a different nightly version of sbt.


inheritance relationships is a general area of work.
- 'update' produces an :doc:`/Detailed-Topics/Update-Report` mapping
`Configuration/ModuleID/Artifact` to the retrieved `File`
- Ivy produces more detailed XML reports on dependencies. These come
with an XSL stylesheet to view them, but this does not scale to
large numbers of dependencies. Working on this is pretty
straightforward: the XML files are created in `~/.ivy2` and the
`.xsl` and `.css` are there as well, so you don't even need to
work with sbt. Other approaches described in `the email
thread <https://groups.google.com/group/simple-build-tool/browse_thread/thread/7761f8b2ce51f02c/129064ea836c9baf>`_
- Tasks are a combination of static and dynamic graphs and it would


Another good idea is to not publish your test artifacts (this is the default):
Third - POM Metadata
--------------------
Now, we want to control what's available in the `pom.xml` file. This
file describes our project in the Maven repository and is used by
indexing services for search and discovery. This means it's important
that `pom.xml` should have all the information we wish to advertise as
well as the required information.
First, let's make sure no repositories show up in the POM file. To
pomIncludeRepository := { _ => false }
Next, the POM metadata that isn't generated by sbt must be added. This
is done through the `pomExtra` configuration option:
::
</developer>
</developers>)
Specifically, the `url`, `license`, `scm.url`, `scm.connection`
and `developer` sections are required. The above is an example from
the `scala-arm <http://jsuereth.com/scala-arm>`_ project.
*Note* that sbt will automatically inject `licenses` and `url` nodes
if they are already present in your build file. Thus an alternative to
the above `pomExtra` is to include the following entries:
::
homepage := Some(url("http://jsuereth.com/scala-arm"))
This might be advantageous if those keys are used also by other plugins
(e.g. `ls`). You **cannot use both** the sbt `licenses` key and the
`licenses` section in `pomExtra` at the same time, as this will
produce duplicate entries in the final POM file, leading to a rejection
in Sonatype's staging process.
Fourth - Adding credentials
---------------------------
The credentials for your Sonatype OSSRH account need to be added
somewhere. Common convention is a `~/.sbt/sonatype.sbt` file with the
following:
::
"<your password>")
*Note: The first two strings must be
`"Sonatype Nexus Repository Manager"` and `"oss.sonatype.org"` for
Ivy to use the credentials.*
Finally - Publish
-----------------
In sbt, run `publish-signed` and you should see something like the following:
.. code-block:: console
independent releases before pushing the full project.*
\ *Note:* An error message of
`PGPException: checksum mismatch at 0 of 20` indicates that you got
the passphrase wrong. We have found at least on OS X that there may be
issues with characters outside the 7-bit ASCII range (e.g. Umlauts). If
you are absolutely sure that you typed the right phrase and the error
You'll need to:
- Have a GPG key pair with a published public key,
- An sbt file with your Sonatype credentials *that is not pushed to the VCS*,
- Add the `sbt-pgp plugin <http://scala-sbt.org/sbt-pgp>`_ to sign the artefacts,
- Modify `build.sbt` with the required elements in the generated POM.
Starting with a project that is not being published, you'll need to
install GPG, and generate and publish your key. Switching to sbt, you'll
^^^^^^^^^^^^^^^^^^^^^^
The `sbt-pgp plugin <http://scala-sbt.org/sbt-pgp>`_ allows you to
sign and publish your artefacts by running `publish-signed` in sbt:
::
build.sbt
^^^^^^^^^
Finally, you'll need to tweak the generated POM in your `build.sbt`.
The tweaks include specifying the project's authors, URL, SCM and many
others:


Modifying default artifacts
===========================
Each built-in artifact has several configurable settings in addition to
`publishArtifact`. The basic ones are `artifact` (of type
`SettingKey[Artifact]`), `mappings` (of type
`TaskKey[(File,String)]`), and `artifactPath` (of type
`SettingKey[File]`). They are scoped by `(<config>, <task>)` as
indicated in the previous section.
To modify the type of the main artifact, for example:
art.copy(`type` = "bundle")
}
The generated artifact name is determined by the `artifactName`
setting. This setting is of type
`(ScalaVersion, ModuleID, Artifact) => String`. The ScalaVersion
argument provides the full Scala version String and the binary
compatible part of the version String. The String result is the name of
the file to produce. The default implementation is
`Artifact.artifactName _`. The function may be modified to produce
different local names for artifacts without affecting the published
name, which is determined by the `artifact` definition combined with
the repository pattern.
For example, to produce a minimal name without a classifier or cross
path:
(Note that in practice you rarely want to drop the classifier.)
Finally, you can get the `(Artifact, File)` pair for the artifact by
mapping the `packagedArtifact` task. Note that if you don't need the
`Artifact`, you can get just the File from the package task
(`package`, `packageDoc`, or `packageSrc`). In both cases,
mapping the task to get the file ensures that the artifact is generated
first and so the file is guaranteed to be up-to-date.
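For example, the pair could be consumed from another task along these lines (a sketch; the `showPackage` key is hypothetical):

```scala
// Hypothetical task that maps packagedArtifact for the main jar, so
// packaging runs first and the file is up-to-date when we read it.
val showPackage = taskKey[Unit]("Shows the packaged artifact and its file.")

showPackage := {
  val (art, file) = (packagedArtifact in (Compile, packageBin)).value
  streams.value.log.info(art.name + " -> " + file.getAbsolutePath)
}
```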
artifacts to publish. Multiple artifacts are allowed when using Ivy
metadata, but a Maven POM file only supports distinguishing artifacts
based on classifiers and these are not recorded in the POM.
Basic `Artifact` construction looks like:
::
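A sketch of the basic constructor forms, matching the `Artifact("name", "type", "ext")` usage that appears later on this page (the comments describe common conventions, not requirements):

```scala
Artifact("name")                 // defaults suitable for a plain jar
Artifact("name", "type", "ext")  // explicit type and extension
Artifact("name", "classifier")   // classified, e.g. "sources" or "javadoc"
```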
generates the artifact:
addArtifact( Artifact("myproject", "image", "jpg"), myImageTask )
`addArtifact` returns a sequence of settings (wrapped in a
`SettingsDefinition <../../api/#sbt.Init$SettingsDefinition>`_).
In a full build configuration, usage looks like:
Publishing .war files
=====================
A common use case for web applications is to publish the `.war` file
instead of the `.jar` file.
::
Using dependencies with artifacts
=================================
To specify the artifacts to use from a dependency that has custom or
multiple artifacts, use the `artifacts` method on your dependencies.
For example:
::
libraryDependencies += "org" % "name" % "rev" artifacts(Artifact("name", "type", "ext"))
The `from` and `classifier` methods (described on the :doc:`Library Management <Library-Management>`
page) are actually convenience methods that translate to `artifacts`:
::
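As a sketch, a `classifier` call and a roughly equivalent explicit `artifacts` form might look like:

```scala
// Convenience form:
libraryDependencies += "org" % "name" % "rev" classifier "sources"

// Roughly equivalent explicit form using a classified Artifact
// (constructed as shown earlier on this page):
libraryDependencies += "org" % "name" % "rev" artifacts Artifact("name", "sources")
```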


Best Practices
This page describes best practices for working with sbt.
`project/` vs. `~/.sbt/`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Anything that is necessary for building the project should go in
`project/`. This includes things like the web plugin. `~/.sbt/`
should contain local customizations and commands for working with a
build that are not strictly necessary. An example is an IDE plugin.
beginning of the resolvers list:
localMaven +: resolvers.value
}
1. Put settings specific to a user in a global `.sbt` file, such as
`~/.sbt/local.sbt`. These settings will be applied to all projects.
2. Put settings in a `.sbt` file in a project that isn't checked into
version control, such as `<project>/local.sbt`. sbt combines the
settings from multiple `.sbt` files, so you can still have the
standard `<project>/build.sbt` and check that into version control.
.sbtrc
~~~~~~
Put commands to be executed when sbt starts up in a `.sbtrc` file, one
per line. These commands run before a project is loaded and are useful
for defining aliases, for example. sbt executes commands in
`$HOME/.sbtrc` (if it exists) and then `<project>/.sbtrc` (if it
exists).
Generated files
~~~~~~~~~~~~~~~
Write any generated files to a subdirectory of the output directory,
which is specified by the `target` setting. This makes it easy to
clean up after a build and provides a single location to organize
generated files. Any generated files that are specific to a Scala
version should go in `crossTarget` for efficient cross-building.
For generating sources and resources, see :doc:`/Howto/generatefiles`.
Don't hard code
~~~~~~~~~~~~~~~
Don't hard code constants, like the output directory `target/`. This
is especially important for plugins. A user might change the `target`
setting to point to `build/`, for example, and the plugin needs to
respect that. Instead, use the setting, like:
::
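For instance, a plugin writing under the output directory would derive its path from the setting (a sketch; `myOutputDir` is a hypothetical key):

```scala
val myOutputDir = settingKey[File]("Hypothetical output location for a plugin.")

// Derived from target, so it respects a user's redefinition of target.
myOutputDir := target.value / "my-plugin"
```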
or construct the file from an absolute base:
base / "A.scala"
This is related to the no hard coding best practice because the proper
way involves referencing the `baseDirectory` setting. For example, the
following defines the myPath setting to be the `<base>/licenses/`
directory.
::
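A sketch of that definition:

```scala
val myPath = settingKey[File]("The licenses directory.")

// <base>/licenses/, derived from baseDirectory rather than hard coded.
myPath := baseDirectory.value / "licenses"
```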
Parser combinators
~~~~~~~~~~~~~~~~~~
1. Use `token` everywhere to clearly delimit tab completion
boundaries.
2. Don't overlap or nest tokens. The behavior here is unspecified and
will likely generate an error in the future.
3. Use `flatMap` for general recursion. sbt's combinators are strict
to limit the number of classes generated, so use `flatMap` like:
.. code-block:: scala
lazy val parser: Parser[Int] =
token(IntBasic) flatMap { i =>
if(i <= 0)
success(i)
else
token(Space ~> parser)
}
This example defines a parser for a whitespace-delimited list of
integers, ending with a negative number, and returning that final,
negative number.


Classpaths, sources, and resources
==================================
This page discusses how sbt builds up classpaths for different actions,
like `compile`, `run`, and `test` and how to override or augment
these classpaths.
Basics
In sbt 0.10 and later, classpaths now include the Scala library and
(when declared as a dependency) the Scala compiler. Classpath-related
settings and tasks typically provide a value of type `Classpath`. This
is an alias for `Seq[Attributed[File]]`.
`Attributed <../../api/sbt/Attributed.html>`_
is a type that associates a heterogeneous map with each classpath entry.
Currently, this allows sbt to associate the `Analysis` resulting from
compilation with the corresponding classpath entry and for managed
entries, the `ModuleID` and `Artifact` that defined the dependency.
To explicitly extract the raw `Seq[File]`, use the `files` method
implicitly added to `Classpath`:
::
val cp: Classpath = ...
val raw: Seq[File] = cp.files
To create a `Classpath` from a `Seq[File]`, use `classpath` and to
create an `Attributed[File]` from a `File`, use
`Attributed.blank`:
::
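A sketch of these conversions:

```scala
val files: Seq[File] = Seq(file("lib/a.jar"), file("lib/b.jar"))

// Wrap a single File with an empty attribute map:
val entry: Attributed[File] = Attributed.blank(file("lib/a.jar"))

// Build a Classpath (Seq[Attributed[File]]) from a Seq[File]:
val cp: Classpath = Attributed.blankSeq(files)
```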
Tasks that produce managed files should be inserted as follows:
sourceGenerators in Compile +=
generate( (sourceManaged in Compile).value / "some_directory")
In this example, `generate` is some function of type
`File => Seq[File]` that actually does the work. So, we are appending a new task
to the list of main source generators (`sourceGenerators in Compile`).
To insert a named task, which is the better approach for plugins:
sourceGenerators in Compile += (mySourceGenerator in Compile).task
The `task` method is used to refer to the actual task instead of the
result of the task.
For resources, there are similar keys `resourceGenerators` and
`resourceManaged`.
Excluding source files by name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The project base directory is by default a source directory in addition
to `src/main/scala`. You can exclude source files by name
(`butler.scala` in the example below) like:
::
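The exclusion is expressed with `excludeFilter` (a sketch; `HiddenFileFilter` preserves the default exclusion of hidden files):

```scala
// Exclude butler.scala from unmanaged sources by name.
excludeFilter in unmanagedSources := HiddenFileFilter || "butler.scala"
```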
Keys
For classpaths, the relevant keys are:
- `unmanagedClasspath`
- `managedClasspath`
- `externalDependencyClasspath`
- `internalDependencyClasspath`
For sources:
- `unmanagedSources` These are by default built up from
`unmanagedSourceDirectories`, which consists of `scalaSource`
and `javaSource`.
- `managedSources` These are generated sources.
- `sources` Combines `managedSources` and `unmanagedSources`.
- `sourceGenerators` These are tasks that generate source files.
Typically, these tasks will put sources in the directory provided by
`sourceManaged`.
For resources:
- `unmanagedResources` These are by default built up from
`unmanagedResourceDirectories`, which by default is
`resourceDirectory`, excluding files matched by
`defaultExcludes`.
- `managedResources` By default, this is empty for standard
projects. sbt plugins will have a generated descriptor file here.
- `resourceGenerators` These are tasks that generate resource files.
Typically, these tasks will put resources in the directory provided
by `resourceManaged`.
Use the :doc:`inspect command </Detailed-Topics/Inspecting-Settings>` for more details.


Notes on the command line
see :doc:`/Extending/Commands`. This specific sbt meaning of "command" means
there's no good general term for "thing you can type at the sbt
prompt", which may be a setting, task, or command.
- Some tasks produce useful values. The `toString` representation of
these values can be shown using `show <task>` to run the task
instead of just `<task>`.
- In a multi-project build, execution dependencies and the
`aggregate` setting control which tasks from which projects are
executed. See :doc:`multi-project builds </Getting-Started/Multi-Project>`.
Project-level tasks
-------------------
- `clean` Deletes all generated files (the `target` directory).
- `publishLocal` Publishes artifacts (such as jars) to the local Ivy
repository as described in :doc:`Publishing`.
- `publish` Publishes artifacts (such as jars) to the repository
defined by the `publishTo` setting, described in :doc:`Publishing`.
- `update` Resolves and retrieves external dependencies as described
in :doc:`library dependencies </Getting-Started/Library-Dependencies>`.
Configuration-level tasks
-------------------------
Configuration-level tasks are tasks associated with a configuration. For
example, `compile`, which is equivalent to `compile:compile`,
compiles the main source code (the `compile` configuration).
`test:compile` compiles the test source code (the `test`
configuration). Most tasks for the `compile` configuration have an
equivalent in the `test` configuration that can be run using a
`test:` prefix.
- `compile` Compiles the main sources (in the `src/main/scala`
directory). `test:compile` compiles test sources (in the
`src/test/scala/` directory).
- `console` Starts the Scala interpreter with a classpath including
the compiled sources, all jars in the `lib` directory, and managed
libraries. To return to sbt, type `:quit`, Ctrl+D (Unix), or Ctrl+Z
(Windows). Similarly, `test:console` starts the interpreter with
the test classes and classpath.
- `consoleQuick` Starts the Scala interpreter with the project's
compile-time dependencies on the classpath. `test:consoleQuick`
uses the test dependencies. This task differs from `console` in
that it does not force compilation of the current project's sources.
- `consoleProject` Enters an interactive session with sbt and the
build definition on the classpath. The build definition and related
values are bound to variables and common packages and values are
imported. See the :doc:`consoleProject documentation <Console-Project>` for more information.
- `doc` Generates API documentation for Scala source files in
`src/main/scala` using scaladoc. `test:doc` generates API
documentation for source files in `src/test/scala`.
- `package` Creates a jar file containing the files in
`src/main/resources` and the classes compiled from
`src/main/scala`. `test:package` creates a jar containing the
files in `src/test/resources` and the classes compiled from
`src/test/scala`.
- `packageDoc` Creates a jar file containing API documentation
generated from Scala source files in `src/main/scala`.
`test:packageDoc` creates a jar containing API documentation for
test source files in `src/test/scala`.
- `packageSrc`: Creates a jar file containing all main source files
and resources. The packaged paths are relative to `src/main/scala`
and `src/main/resources`. Similarly, `test:packageSrc` operates
on test source files and resources.
- `run <argument>*` Runs the main class for the project in the same
virtual machine as `sbt`. The main class is passed the
`argument`\ s provided. Please see :doc:`Running-Project-Code` for
details on the use of `System.exit` and multithreading (including
GUIs) in code run by this action. `test:run` runs a main class in
the test code.
- `runMain <main-class> <argument>*` Runs the specified main class
for the project in the same virtual machine as `sbt`. The main
class is passed the `argument`\ s provided. Please see :doc:`Running-Project-Code`
for details on the use of `System.exit` and
multithreading (including GUIs) in code run by this action.
`test:runMain` runs the specified main class in the test code.
- `test` Runs all tests detected during test compilation. See
:doc:`Testing` for details.
- `testOnly <test>*` Runs the tests provided as arguments. `*`
(will be) interpreted as a wildcard in the test name. See :doc:`Testing`
for details.
- `testQuick <test>*` Runs the tests specified as arguments (or all
tests if no arguments are given) that:
1. have not been run yet OR
2. failed the last time they were run OR
3. had any transitive dependencies recompiled since the last
   successful run

`*` (will be) interpreted as a wildcard in the
test name. See :doc:`Testing` for details.
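The `run` and `runMain` tasks above consult the project's configured main class. As a hedged sketch (the class name `demo.Main` is a hypothetical placeholder, not from this page), a `build.sbt` fragment selecting the default main class might look like:

```scala
// Choose the default main class used by `run` (hypothetical class name).
// `runMain demo.Other` can still select any other detected main class.
mainClass in (Compile, run) := Some("demo.Main")
```

When no main class is configured and several are detected, sbt will prompt for a choice when `run` is invoked.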
General commands
----------------
- ``exit`` or ``quit`` End the current interactive session or build.
Additionally, ``Ctrl+D`` (Unix) or ``Ctrl+Z`` (Windows) will exit the
- `exit` or `quit` End the current interactive session or build.
Additionally, `Ctrl+D` (Unix) or `Ctrl+Z` (Windows) will exit the
interactive prompt.
- ``help <command>`` Displays detailed help for the specified command.
If the command does not exist, ``help`` lists detailed help for
- `help <command>` Displays detailed help for the specified command.
If the command does not exist, `help` lists detailed help for
commands whose name or description match the argument, which is
interpreted as a regular expression. If no command is provided,
displays brief descriptions of the main commands. Related commands
are ``tasks`` and ``settings``.
- ``projects [add|remove <URI>]`` List all available projects if no
are `tasks` and `settings`.
- `projects [add|remove <URI>]` List all available projects if no
arguments provided or adds/removes the build at the provided URI.
(See :doc:`/Getting-Started/Full-Def/` for details on multi-project builds.)
- ``project <project-id>`` Change the current project to the project
with ID ``<project-id>``. Further operations will be done in the
- `project <project-id>` Change the current project to the project
with ID `<project-id>`. Further operations will be done in the
context of the given project. (See :doc:`/Getting-Started/Full-Def/` for details
on multiple project builds.)
- ``~ <command>`` Executes the project specified action or method
- `~ <command>` Executes the project specified action or method
whenever source files change. See :doc:`/Detailed-Topics/Triggered-Execution` for
details.
- ``< filename`` Executes the commands in the given file. Each command
- `< filename` Executes the commands in the given file. Each command
should be on its own line. Empty lines and lines beginning with '#'
are ignored.
- ``+ <command>`` Executes the project specified action or method for
all versions of Scala defined in the ``crossScalaVersions``
- `+ <command>` Executes the project specified action or method for
all versions of Scala defined in the `crossScalaVersions`
setting.
- ``++ <version|home-directory> <command>`` Temporarily changes the version of Scala
building the project and executes the provided command. ``<command>``
- `++ <version|home-directory> <command>` Temporarily changes the version of Scala
building the project and executes the provided command. `<command>`
is optional. The specified version of Scala is used until the project
is reloaded, settings are modified (such as by the ``set`` or
``session`` commands), or ``++`` is run again. ``<version>`` does not
is reloaded, settings are modified (such as by the `set` or
`session` commands), or `++` is run again. `<version>` does not
need to be listed in the build definition, but it must be available
in a repository. Alternatively, specify the path to a Scala installation.
- ``; A ; B`` Execute A and if it succeeds, run B. Note that the
- `; A ; B` Execute A and if it succeeds, run B. Note that the
leading semicolon is required.
- ``eval <Scala-expression>`` Evaluates the given Scala expression and
- `eval <Scala-expression>` Evaluates the given Scala expression and
returns the result and inferred type. This can be used to set system
properties, as a calculator, to fork processes, etc ... For example:
@ -154,24 +154,24 @@ General commands
Commands for managing the build definition
------------------------------------------
- ``reload [plugins|return]`` If no argument is specified, reloads the
- `reload [plugins|return]` If no argument is specified, reloads the
build, recompiling any build or plugin definitions as necessary.
``reload plugins`` changes the current project to the build
definition project (in ``project/``). This can be useful to directly
manipulate the build definition. For example, running ``clean`` on
`reload plugins` changes the current project to the build
definition project (in `project/`). This can be useful to directly
manipulate the build definition. For example, running `clean` on
the build definition project will force snapshots to be updated and
the build definition to be recompiled. ``reload return`` changes back
the build definition to be recompiled. `reload return` changes back
to the main project.
- ``set <setting-expression>`` Evaluates and applies the given setting
- `set <setting-expression>` Evaluates and applies the given setting
definition. The setting applies until sbt is restarted, the build is
reloaded, or the setting is overridden by another ``set`` command or
removed by the ``session`` command. See
reloaded, or the setting is overridden by another `set` command or
removed by the `session` command. See
:doc:`.sbt build definition </Getting-Started/Basic-Def>` and
:doc:`Inspecting-Settings` for details.
- ``session <command>`` Manages session settings defined by the ``set``
- `session <command>` Manages session settings defined by the `set`
command. It can persist settings configured at the prompt. See
:doc:`Inspecting-Settings` for details.
- ``inspect <setting-key>`` Displays information about settings, such
- `inspect <setting-key>` Displays information about settings, such
as the value, description, defining scope, dependencies, delegation
chain, and related settings. See :doc:`Inspecting-Settings` for details.
@ -179,50 +179,50 @@ Command Line Options
--------------------
System properties can be provided either as JVM options or as sbt
arguments, in both cases as ``-Dprop=value``. The following properties
arguments, in both cases as `-Dprop=value`. The following properties
influence sbt execution. Also see :doc:`Launcher`.
+------------------------------+-----------+---------------------+----------------------------------------------------+
| Property | Values | Default | Meaning |
+==============================+===========+=====================+====================================================+
| ``sbt.log.noformat`` | Boolean | false | If true, disable ANSI color codes. Useful on build |
| `sbt.log.noformat` | Boolean | false | If true, disable ANSI color codes. Useful on build |
| | | | servers or terminals that don't support color. |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.global.base`` | Directory | ~/.sbt | The directory containing global settings and |
| `sbt.global.base` | Directory | ~/.sbt | The directory containing global settings and |
| | | | plugins |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.ivy.home`` | Directory | ~/.ivy2 | The directory containing the local Ivy repository |
| `sbt.ivy.home` | Directory | ~/.ivy2 | The directory containing the local Ivy repository |
| | | | and artifact cache |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.boot.directory`` | Directory | ~/.sbt/boot | Path to shared boot directory |
| `sbt.boot.directory` | Directory | ~/.sbt/boot | Path to shared boot directory |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.main.class`` | String | | |
| `sbt.main.class` | String | | |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``xsbt.inc.debug`` | Boolean | false | |
| `xsbt.inc.debug` | Boolean | false | |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.extraClasspath`` | Classpath | | A list of classpath entries (jar files or |
| `sbt.extraClasspath` | Classpath | | A list of classpath entries (jar files or |
| | Entries | | directories) that are added to sbt's classpath. |
|                              |           |                     | Note that the entries are delimited by commas,     |
| | | | e.g.: ``entry1, entry2,..``. See also |
| | | | ``resources`` in the :doc:`Launcher` |
| | | | e.g.: `entry1, entry2,..`. See also |
| | | | `resources` in the :doc:`Launcher` |
| | | | documentation. |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.version`` | Version | 0.11.3 | sbt version to use, usually taken from |
| `sbt.version` | Version | 0.11.3 | sbt version to use, usually taken from |
| | | | project/build.properties |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.boot.properties`` | File | | |
| `sbt.boot.properties` | File | | |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.override.build.repos`` | Boolean | false | If true, repositories configured in a build |
| `sbt.override.build.repos` | Boolean | false | If true, repositories configured in a build |
| | | | definition are ignored and the repositories |
| | | | configured for the launcher are used instead. See |
| | | | ``sbt.repository.config`` and the :doc:`Launcher` |
| | | | `sbt.repository.config` and the :doc:`Launcher` |
| | | | documentation. |
+------------------------------+-----------+---------------------+----------------------------------------------------+
| ``sbt.repository.config`` | File | ~/.sbt/repositories | A file containing the repositories to use for the |
| `sbt.repository.config` | File | ~/.sbt/repositories | A file containing the repositories to use for the |
| | | | launcher. The format is the same as a |
| | | | ``[repositories]`` section for a :doc:`Launcher` |
| | | | `[repositories]` section for a :doc:`Launcher` |
| | | | configuration file. This setting is typically used |
|                              |           |                     | in conjunction with setting                        |
| | | | ``sbt.override.build.repos`` to true (see previous |
| | | | `sbt.override.build.repos` to true (see previous |
| | | | row and the :doc:`Launcher` documentation). |
+------------------------------+-----------+---------------------+----------------------------------------------------+
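The entries in the table above are plain JVM system properties. As an illustrative sketch (not sbt's actual implementation), any JVM code can read them the same way the launcher would:

```scala
// Read an sbt-style system property with a default value (illustrative sketch).
val noFormat = sys.props.getOrElse("sbt.log.noformat", "false").toBoolean
println(s"ANSI color disabled: $noFormat")
```

Starting the JVM with `-Dsbt.log.noformat=true` would make `noFormat` evaluate to `true`.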


@ -3,23 +3,23 @@ Compiler Plugin Support
=======================
There is some special support for using compiler plugins. You can set
``autoCompilerPlugins`` to ``true`` to enable this functionality.
`autoCompilerPlugins` to `true` to enable this functionality.
::
autoCompilerPlugins := true
To use a compiler plugin, you either put it in your unmanaged library
directory (``lib/`` by default) or add it as managed dependency in the
``plugin`` configuration. ``addCompilerPlugin`` is a convenience method
for specifying ``plugin`` as the configuration for a dependency:
directory (`lib/` by default) or add it as a managed dependency in the
`plugin` configuration. `addCompilerPlugin` is a convenience method
for specifying `plugin` as the configuration for a dependency:
::
addCompilerPlugin("org.scala-tools.sxr" %% "sxr" % "0.2.7")
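Since `addCompilerPlugin` is described above as a convenience for specifying `plugin` as the configuration, the line above should be roughly equivalent to the following direct declaration (a sketch based on that description):

```scala
// Declare the compiler plugin as an ordinary dependency scoped to the
// "plugin" configuration, which is what addCompilerPlugin does for you.
libraryDependencies += "org.scala-tools.sxr" %% "sxr" % "0.2.7" % "plugin"
```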
The ``compile`` and ``testCompile`` actions will use any compiler
plugins found in the ``lib`` directory or in the ``plugin``
The `compile` and `testCompile` actions will use any compiler
plugins found in the `lib` directory or in the `plugin`
configuration. You are responsible for configuring the plugins as
necessary. For example, Scala X-Ray requires the extra option:


@ -17,7 +17,7 @@ For example,
scalaVersion := "2.10.0"
This will retrieve Scala from the repositories configured via the ``resolvers`` setting.
This will retrieve Scala from the repositories configured via the `resolvers` setting.
It will use this version for building your project: compiling, running, scaladoc, and the REPL.
Configuring the scala-library dependency
@ -31,7 +31,7 @@ If you want to configure it differently than the default or you have a project w
autoScalaLibrary := false
In order to compile Scala sources, the Scala library needs to be on the classpath.
When ``autoScalaLibrary`` is true, the Scala library will be on all classpaths: test, runtime, and compile.
When `autoScalaLibrary` is true, the Scala library will be on all classpaths: test, runtime, and compile.
Otherwise, you need to add it like any other dependency.
For example, the following dependency definition uses Scala only for tests:
@ -51,28 +51,28 @@ For example, to depend on the Scala compiler,
libraryDependencies += "org.scala-lang" % "scala-compiler" % scalaVersion.value
Note that this is necessary regardless of the value of the ``autoScalaLibrary`` setting described in the previous section.
Note that this is necessary regardless of the value of the `autoScalaLibrary` setting described in the previous section.
Configuring Scala tool dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to compile Scala code, run scaladoc, and provide a Scala REPL, sbt needs the ``scala-compiler`` jar.
This should not be a normal dependency of the project, so sbt adds a dependency on ``scala-compiler`` in the special, private ``scala-tool`` configuration.
In order to compile Scala code, run scaladoc, and provide a Scala REPL, sbt needs the `scala-compiler` jar.
This should not be a normal dependency of the project, so sbt adds a dependency on `scala-compiler` in the special, private `scala-tool` configuration.
It may be desirable to have more control over this in some situations.
Disable this automatic behavior with the ``managedScalaInstance`` key:
Disable this automatic behavior with the `managedScalaInstance` key:
::
managedScalaInstance := false
This will also disable the automatic dependency on ``scala-library``.
This will also disable the automatic dependency on `scala-library`.
If you do not need the Scala compiler for anything (compiling, the REPL, scaladoc, etc...), you can stop here.
sbt does not need an instance of Scala for your project in that case.
Otherwise, sbt will still need access to the jars for the Scala compiler for compilation and other tasks.
You can provide them by either declaring a dependency in the ``scala-tool`` configuration or by explicitly defining ``scalaInstance``.
You can provide them by either declaring a dependency in the `scala-tool` configuration or by explicitly defining `scalaInstance`.
In the first case, add the ``scala-tool`` configuration and add a dependency on ``scala-compiler`` in this configuration.
The organization is not important, but sbt needs the module name to be ``scala-compiler`` and ``scala-library`` in order to handle those jars appropriately.
In the first case, add the `scala-tool` configuration and add a dependency on `scala-compiler` in this configuration.
The organization is not important, but sbt needs the module name to be `scala-compiler` and `scala-library` in order to handle those jars appropriately.
For example,
::
@ -91,8 +91,8 @@ For example,
"org.scala-lang" % "scala-compiler" % scalaVersion.value % "scala-tool"
)
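Pieced together with the fragment above, a complete `build.sbt` sketch might read as follows; the `ivyConfigurations` line is an assumption about how the `scala-tool` configuration is added, not the original example:

```scala
// Add the (normally private) scala-tool configuration to the build...
ivyConfigurations += config("scala-tool").hide

// ...and declare the compiler in it; sbt keys on the module name
// scala-compiler (and scala-library) to handle the jars appropriately.
libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-compiler" % scalaVersion.value % "scala-tool"
)
```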
In the second case, directly construct a value of type `ScalaInstance <../../api/sbt/ScalaInstance.html>`_, typically using a method in the `companion object <../../api/sbt/ScalaInstance$.html>`_, and assign it to ``scalaInstance``.
You will also need to add the ``scala-library`` jar to the classpath to compile and run Scala sources.
In the second case, directly construct a value of type `ScalaInstance <../../api/sbt/ScalaInstance.html>`_, typically using a method in the `companion object <../../api/sbt/ScalaInstance$.html>`_, and assign it to `scalaInstance`.
You will also need to add the `scala-library` jar to the classpath to compile and run Scala sources.
For example,
::
@ -113,24 +113,24 @@ Scala will still be resolved as before, but the jars will come from the configur
Using Scala from a local directory
==================================
The result of building Scala from source is a Scala home directory ``<base>/build/pack/`` that contains a subdirectory ``lib/`` containing the Scala library, compiler, and other jars.
The result of building Scala from source is a Scala home directory `<base>/build/pack/` that contains a subdirectory `lib/` containing the Scala library, compiler, and other jars.
The same directory layout is obtained by downloading and extracting a Scala distribution.
Such a Scala home directory may be used as the source for jars by setting ``scalaHome``.
Such a Scala home directory may be used as the source for jars by setting `scalaHome`.
For example,
::
scalaHome := Some(file("/home/user/scala-2.10/"))
By default, ``lib/scala-library.jar`` will be added to the unmanaged classpath and ``lib/scala-compiler.jar`` will be used to compile Scala sources and provide a Scala REPL.
No managed dependency is recorded on ``scala-library``.
By default, `lib/scala-library.jar` will be added to the unmanaged classpath and `lib/scala-compiler.jar` will be used to compile Scala sources and provide a Scala REPL.
No managed dependency is recorded on `scala-library`.
This means that Scala will only be resolved from a repository if you explicitly define a dependency on Scala or if Scala is depended on indirectly via a dependency.
In these cases, the artifacts for the resolved dependencies will be substituted with jars in the Scala home ``lib/`` directory.
In these cases, the artifacts for the resolved dependencies will be substituted with jars in the Scala home `lib/` directory.
Mixing with managed dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As an example, consider adding a dependency on ``scala-reflect`` when ``scalaHome`` is configured:
As an example, consider adding a dependency on `scala-reflect` when `scalaHome` is configured:
::
@ -138,15 +138,15 @@ As an example, consider adding a dependency on ``scala-reflect`` when ``scalaHom
libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value
This will be resolved as normal, except that sbt will see if ``/home/user/scala-2.10/lib/scala-reflect.jar`` exists.
This will be resolved as normal, except that sbt will see if `/home/user/scala-2.10/lib/scala-reflect.jar` exists.
If it does, that file will be used in place of the artifact from the managed dependency.
Using unmanaged dependencies only
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Instead of adding managed dependencies on Scala jars, you can directly add them.
The ``scalaInstance`` task provides structured access to the Scala distribution.
For example, to add all jars in the Scala home ``lib/`` directory,
The `scalaInstance` task provides structured access to the Scala distribution.
For example, to add all jars in the Scala home `lib/` directory,
::
@ -154,7 +154,7 @@ For example, to add all jars in the Scala home ``lib/`` directory,
unmanagedJars in Compile ++= scalaInstance.value.jars
To add only some jars, filter the jars from ``scalaInstance`` before adding them.
To add only some jars, filter the jars from `scalaInstance` before adding them.
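A hedged sketch of that filtering, keeping a single jar by file name (the chosen jar name is illustrative):

```scala
// Add only scala-reflect.jar from the Scala home lib/ directory,
// instead of every jar reported by scalaInstance.
unmanagedJars in Compile ++= scalaInstance.value.jars.filter(_.getName == "scala-reflect.jar")
```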
sbt's Scala version
===================


@ -5,8 +5,8 @@ Console Project
Description
===========
The ``consoleProject`` task starts the Scala interpreter with access to
your project definition and to ``sbt``. Specifically, the interpreter is
The `consoleProject` task starts the Scala interpreter with access to
your project definition and to `sbt`. Specifically, the interpreter is
started up with these commands already executed:
::
@ -30,10 +30,10 @@ be included in the standard library in Scala 2.9):
> "grep -r null src" #|| "echo null-free" !
> uri("http://databinder.net/dispatch/About").toURL #> file("About.html") !
``consoleProject`` can be useful for creating and modifying your build
`consoleProject` can be useful for creating and modifying your build
in the same way that the Scala interpreter is normally used to explore
writing code. Note that this gives you raw access to your build. Think
about what you pass to ``IO.delete``, for example.
about what you pass to `IO.delete`, for example.
Accessing settings
==================
@ -91,15 +91,15 @@ Show the classpaths used for compilation and testing:
State
=====
The current :doc:`build State </Extending/Build-State>` is available as ``currentState``.
The contents of ``currentState`` are imported by default and can be used without qualification.
The current :doc:`build State </Extending/Build-State>` is available as `currentState`.
The contents of `currentState` are imported by default and can be used without qualification.
Examples
--------
Show the remaining commands to be executed in the build (more
interesting if you invoke ``consoleProject`` like
``; consoleProject ; clean ; compile``):
interesting if you invoke `consoleProject` like
`; consoleProject ; clean ; compile`):
.. code-block:: scala


@ -6,7 +6,7 @@ Introduction
============
Different versions of Scala can be binary incompatible, despite
maintaining source compatibility. This page describes how to use ``sbt``
maintaining source compatibility. This page describes how to use `sbt`
to build and publish your project against multiple versions of Scala and
how to use libraries that have done the same.
@ -14,21 +14,21 @@ Publishing Conventions
======================
The underlying mechanism used to indicate which version of Scala a
library was compiled against is to append ``_<scala-version>`` to the
library was compiled against is to append `_<scala-version>` to the
library's name. For Scala 2.10.0 and later, the binary version is used.
For example, ``dispatch`` becomes ``dispatch_2.8.1`` for the variant
compiled against Scala 2.8.1 and ``dispatch_2.10`` when compiled against
For example, `dispatch` becomes `dispatch_2.8.1` for the variant
compiled against Scala 2.8.1 and `dispatch_2.10` when compiled against
2.10.0, 2.10.0-M1 or any 2.10.x version. This fairly simple approach
allows interoperability with users of Maven, Ant and other build tools.
The rest of this page describes how ``sbt`` handles this for you as part
The rest of this page describes how `sbt` handles this for you as part
of cross-building.
Using Cross-Built Libraries
===========================
To use a library built against multiple versions of Scala, double the
first ``%`` in an inline dependency to be ``%%``. This tells ``sbt``
first `%` in an inline dependency to be `%%`. This tells `sbt`
that it should append the current version of Scala being used to build
the library to the dependency's name. For example:
@ -52,23 +52,23 @@ Cross-Building a Project
========================
Define the versions of Scala to build against in the
``cross-scala-versions`` setting. Versions of Scala 2.8.0 or later are
allowed. For example, in a ``.sbt`` build definition:
`crossScalaVersions` setting. Versions of Scala 2.8.0 or later are
allowed. For example, in a `.sbt` build definition:
::
crossScalaVersions := Seq("2.8.2", "2.9.2", "2.10.0")
To build against all versions listed in ``build.scala.versions``, prefix
the action to run with ``+``. For example:
To build against all versions listed in `crossScalaVersions`, prefix
the action to run with `+`. For example:
::
> + package
A typical way to use this feature is to do development on a single Scala
version (no ``+`` prefix) and then cross-build (using ``+``)
occasionally and when releasing. The ultimate purpose of ``+`` is to
version (no `+` prefix) and then cross-build (using `+`)
occasionally and when releasing. The ultimate purpose of `+` is to
cross-publish your project. That is, by doing:
.. code-block:: console
@ -82,20 +82,20 @@ In order to make this process as quick as possible, different output and
managed dependency directories are used for different versions of Scala.
For example, when building against Scala 2.10.0,
- ``./target/`` becomes ``./target/scala_2.1.0/``
- ``./lib_managed/`` becomes ``./lib_managed/scala_2.10/``
- `./target/` becomes `./target/scala_2.10/`
- `./lib_managed/` becomes `./lib_managed/scala_2.10/`
Packaged jars, wars, and other artifacts have ``_<scala-version>``
Packaged jars, wars, and other artifacts have `_<scala-version>`
appended to the normal artifact ID as mentioned in the Publishing
Conventions section above.
This means that the outputs of each build against each version of Scala
are independent of the others. ``sbt`` will resolve your dependencies
are independent of the others. `sbt` will resolve your dependencies
for each version separately. This way, for example, you get the version
of Dispatch compiled against 2.8.1 for your 2.8.1 build, the version
compiled against 2.10 for your 2.10.x builds, and so on. You can have
fine-grained control over the behavior for different Scala versions
by using the ``cross`` method on ``ModuleID`` These are equivalent:
by using the `cross` method on `ModuleID`. These are equivalent:
::
@ -141,6 +141,6 @@ A custom function is mainly used when cross-building and a dependency
isn't available for all Scala versions or it uses a different convention
than the default.
As a final note, you can use ``++ <version>`` to temporarily switch the
Scala version currently being used to build. ``<version>`` should be either a version for Scala published to a repository, as in ``++ 2.10.0`` or the path to a Scala home directory, as in ``++ /path/to/scala/home``. See
As a final note, you can use `++ <version>` to temporarily switch the
Scala version currently being used to build. `<version>` should be either a version for Scala published to a repository, as in `++ 2.10.0` or the path to a Scala home directory, as in `++ /path/to/scala/home`. See
:doc:`/Detailed-Topics/Command-Line-Reference` for details.
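Tying this back to the Publishing Conventions section above: `%%` only appends the Scala binary version to the artifact name, so for a Scala 2.10.x build the two declarations below should resolve identically (the exact coordinates are illustrative):

```scala
// Cross-built dependency: sbt appends _2.10 for a Scala 2.10.x build.
libraryDependencies += "net.databinder" %% "dispatch-http" % "0.8.10"

// The same dependency with the binary-version suffix written out by hand.
libraryDependencies += "net.databinder" % "dispatch-http_2.10" % "0.8.10"
```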


@ -10,14 +10,14 @@ the current work flow with dependency management in sbt follows.
Background
==========
``update`` resolves dependencies according to the settings in a build
file, such as ``libraryDependencies`` and ``resolvers``. Other tasks use
the output of ``update`` (an ``UpdateReport``) to form various
classpaths. Tasks that in turn use these classpaths, such as ``compile``
or ``run``, thus indirectly depend on ``update``. This means that before
``compile`` can run, the ``update`` task needs to run. However,
resolving dependencies on every ``compile`` would be unnecessarily slow
and so ``update`` must be particular about when it actually performs a
`update` resolves dependencies according to the settings in a build
file, such as `libraryDependencies` and `resolvers`. Other tasks use
the output of `update` (an `UpdateReport`) to form various
classpaths. Tasks that in turn use these classpaths, such as `compile`
or `run`, thus indirectly depend on `update`. This means that before
`compile` can run, the `update` task needs to run. However,
resolving dependencies on every `compile` would be unnecessarily slow
and so `update` must be particular about when it actually performs a
resolution.
Caching and Configuration
@ -30,55 +30,55 @@ Caching and Configuration
or changing the version or other attributes of a dependency, will
automatically cause resolution to be performed. Updates to locally
published dependencies should be detected in sbt 0.12.1 and later and
will force an ``update``. Dependent tasks like ``compile`` and
``run`` will get updated classpaths.
3. Directly running the ``update`` task (as opposed to a task that
will force an `update`. Dependent tasks like `compile` and
`run` will get updated classpaths.
3. Directly running the `update` task (as opposed to a task that
depends on it) will force resolution to run, whether or not
configuration changed. This should be done in order to refresh remote
SNAPSHOT dependencies.
4. When ``offline := true``, remote SNAPSHOTs will not be updated by a
resolution, even an explicitly requested ``update``. This should
4. When `offline := true`, remote SNAPSHOTs will not be updated by a
resolution, even an explicitly requested `update`. This should
effectively support working without a connection to remote
repositories. Reproducible examples demonstrating otherwise are
appreciated. Obviously, ``update`` must have successfully run before
appreciated. Obviously, `update` must have successfully run before
going offline.
5. Overriding all of the above, ``skip in update := true`` will tell sbt
5. Overriding all of the above, `skip in update := true` will tell sbt
to never perform resolution. Note that this can cause dependent tasks
to fail. For example, compilation may fail if jars have been deleted
from the cache (and so needed classes are missing) or a dependency
has been added (but will not be resolved because skip is true). Also,
``update`` itself will immediately fail if resolution has not been
allowed to run since the last ``clean``.
`update` itself will immediately fail if resolution has not been
allowed to run since the last `clean`.
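Points 4 and 5 above are controlled by ordinary settings; a minimal sketch:

```scala
// Point 4: do not refresh remote SNAPSHOTs, even on an explicit update.
offline := true

// Point 5: never perform resolution at all; dependent tasks such as
// compile may fail if needed jars are missing from the cache.
// skip in update := true
```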
General troubleshooting steps
=============================
A. Run ``update`` explicitly. This will typically fix problems with out
A. Run `update` explicitly. This will typically fix problems with out
of date SNAPSHOTs or locally published artifacts.
B. If a file cannot be
found, look at the output of ``update`` to see where Ivy is looking for
found, look at the output of `update` to see where Ivy is looking for
the file. This may help diagnose an incorrectly defined dependency or a
dependency that is actually not present in a repository.
C. ``last update`` contains more information about the most recent
C. `last update` contains more information about the most recent
resolution and download. The amount of debugging output from Ivy is
high, so you may want to use ``lastGrep`` (run ``help lastGrep`` for
high, so you may want to use `lastGrep` (run `help lastGrep` for
usage).
D. Run ``clean`` and then ``update``. If this works, it could
D. Run `clean` and then `update`. If this works, it could
indicate a bug in sbt, but the problem would need to be reproduced in
order to diagnose and fix it.
E. Before deleting all of the Ivy cache,
first try deleting files in ``~/.ivy2/cache`` related to problematic
first try deleting files in `~/.ivy2/cache` related to problematic
dependencies. For example, if there are problems with dependency
``"org.example" % "demo" % "1.0"``, delete
``~/.ivy2/cache/org.example/demo/1.0/`` and retry ``update``. This
`"org.example" % "demo" % "1.0"`, delete
`~/.ivy2/cache/org.example/demo/1.0/` and retry `update`. This
avoids needing to redownload all dependencies.
F. Normal sbt usage
should not require deleting files from ``~/.ivy2/cache``, especially if
should not require deleting files from `~/.ivy2/cache`, especially if
the first four steps have been followed. If deleting the cache fixes a
dependency management issue, please try to reproduce the issue and
submit a test case.
@ -100,22 +100,22 @@ Notes
=====
A. Configure offline behavior for all projects on a machine by putting
``offline := true`` in ``~/.sbt/global.sbt``. A command that does this
`offline := true` in `~/.sbt/global.sbt`. A command that does this
for the user would make a nice pull request. Perhaps the setting of
offline should go into the output of ``about`` or should it be a warning
in the output of ``update`` or both?
offline should appear in the output of `about`, be flagged as a warning
in the output of `update`, or both?
B. The cache improvements in 0.12.1 address issues in the change detection
for ``update`` so that it will correctly re-resolve automatically in more
for `update` so that it will correctly re-resolve automatically in more
situations. A problem with an out of date cache can usually be attributed
to a bug in that change detection if explicitly running ``update`` fixes
to a bug in that change detection if explicitly running `update` fixes
the problem.
C. A common solution to dependency management problems in sbt has been to
remove ``~/.ivy2/cache``. Before doing this with 0.12.1, be sure to
remove `~/.ivy2/cache`. Before doing this with 0.12.1, be sure to
follow the steps in the troubleshooting section first. In particular,
verify that a ``clean`` and an explicit ``update`` do not solve the
verify that a `clean` and an explicit `update` do not solve the
issue.
D. There is no need to mark SNAPSHOT dependencies as ``changing()``
D. There is no need to mark SNAPSHOT dependencies as `changing()`
because sbt configures Ivy to know this already.
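For example, a plain declaration is enough; the explicit `changing()` call is redundant (a sketch, using a hypothetical `org.example` module):

::

    // redundant: sbt already treats SNAPSHOT revisions as changing
    libraryDependencies += "org.example" % "demo" % "1.0-SNAPSHOT" changing()

    // sufficient
    libraryDependencies += "org.example" % "demo" % "1.0-SNAPSHOT"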


@ -2,49 +2,49 @@
Forking
=======
By default, the ``run`` task runs in the same JVM as sbt. Forking is
By default, the `run` task runs in the same JVM as sbt. Forking is
required under :doc:`certain circumstances <Running-Project-Code>`, however.
Or, you might want to fork Java processes when implementing new tasks.
By default, a forked process uses the same Java and Scala versions being
used for the build and the working directory and JVM options of the
current process. This page discusses how to enable and configure forking
for both ``run`` and ``test`` tasks. Each kind of task may be configured
for both `run` and `test` tasks. Each kind of task may be configured
separately by scoping the relevant keys as explained below.
Enable forking
==============
The ``fork`` setting controls whether forking is enabled (true) or not
(false). It can be set in the ``run`` scope to only fork ``run``
commands or in the ``test`` scope to only fork ``test`` commands.
The `fork` setting controls whether forking is enabled (true) or not
(false). It can be set in the `run` scope to only fork `run`
commands or in the `test` scope to only fork `test` commands.
To fork all test tasks (``test``, ``testOnly``, and ``testQuick``) and
run tasks (``run``, ``runMain``, ``test:run``, and ``test:runMain``),
To fork all test tasks (`test`, `testOnly`, and `testQuick`) and
run tasks (`run`, `runMain`, `test:run`, and `test:runMain`),
::
fork := true
To enable forking ``run`` tasks only, set ``fork`` to ``true`` in the
``run`` scope.
To enable forking `run` tasks only, set `fork` to `true` in the
`run` scope.
::
fork in run := true
To only fork ``test:run`` and ``test:runMain``:
To only fork `test:run` and `test:runMain`:
::
fork in (Test,run) := true
Similarly, set ``fork in (Compile,run) := true`` to only fork the main
``run`` tasks. ``run`` and ``runMain`` share the same configuration and
Similarly, set `fork in (Compile,run) := true` to only fork the main
`run` tasks. `run` and `runMain` share the same configuration and
cannot be configured separately.
To enable forking all ``test`` tasks only, set ``fork`` to ``true`` in
the ``test`` scope:
To enable forking all `test` tasks only, set `fork` to `true` in
the `test` scope:
::
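
    // sketch: fork all test tasks only, mirroring `fork in run := true` above
    fork in test := true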
@ -57,7 +57,7 @@ Change working directory
========================
To change the working directory when forked, set
``baseDirectory in run`` or ``baseDirectory in test``:
`baseDirectory in run` or `baseDirectory in test`:
::
@ -77,20 +77,20 @@ Forked JVM options
==================
To specify options to be provided to the forked JVM, set
``javaOptions``:
`javaOptions`:
::
javaOptions in run += "-Xmx8G"
or specify the configuration to affect only the main or test ``run``
or specify the configuration to affect only the main or test `run`
tasks:
::
javaOptions in (Test,run) += "-Xmx8G"
or only affect the ``test`` tasks:
or only affect the `test` tasks:
::
@ -99,7 +99,7 @@ or only affect the ``test`` tasks:
Java Home
=========
Select the Java installation to use by setting the ``javaHome``
Select the Java installation to use by setting the `javaHome`
directory:
::
@ -108,21 +108,21 @@ directory:
Note that if this is set globally, it also sets the Java installation
used to compile Java sources. You can restrict it to running only by
setting it in the ``run`` scope:
setting it in the `run` scope:
::
javaHome in run := file("/path/to/jre/")
As with the other settings, you can specify the configuration to affect
only the main or test ``run`` tasks or just the ``test`` tasks.
only the main or test `run` tasks or just the `test` tasks.
Configuring output
==================
By default, forked output is sent to the Logger, with standard output
logged at the ``Info`` level and standard error at the ``Error`` level.
This can be configured with the ``outputStrategy`` setting, which is of
logged at the `Info` level and standard error at the `Error` level.
This can be configured with the `outputStrategy` setting, which is of
type
`OutputStrategy <../../api/sbt/OutputStrategy.html>`_.
@ -141,13 +141,13 @@ type
outputStrategy := Some(BufferedOutput(log: Logger))
As with other settings, this can be configured individually for main or
test ``run`` tasks or for ``test`` tasks.
test `run` tasks or for `test` tasks.
Configuring Input
=================
By default, the standard input of the sbt process is not forwarded to
the forked process. To enable this, configure the ``connectInput``
the forked process. To enable this, configure the `connectInput`
setting:
::
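
    // sketch: forward sbt's standard input to forked run processes
    connectInput in run := true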
@ -159,8 +159,8 @@ Direct Usage
To fork a new Java process, use the `Fork
API <../../api/sbt/Fork$.html>`_. The
methods of interest are ``Fork.java``, ``Fork.javac``, ``Fork.scala``,
and ``Fork.scalac``. See the
methods of interest are `Fork.java`, `Fork.javac`, `Fork.scala`,
and `Fork.scalac`. See the
`ForkJava <../../api/sbt/Fork$.ForkJava.html>`_
and
`ForkScala <../../api/sbt/Fork$.ForkScala.html>`_


@ -5,13 +5,13 @@ Global Settings
Basic global configuration file
-------------------------------
Settings that should be applied to all projects can go in
``~/.sbt/global.sbt`` (or any file in ``~/.sbt/`` with a ``.sbt``
extension). Plugins that are defined globally in ``~/.sbt/plugins`` are
Settings that should be applied to all projects can go in `|globalSbtFile|`
(or any file in `|globalBase|` with a `.sbt` extension).
Plugins that are defined globally in `|globalPluginsBase|` are
available to these settings. For example, to change the default
``shellPrompt`` for your projects:
`shellPrompt` for your projects:
``~/.sbt/global.sbt``
`|globalSbtFile|`
::
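
    // sketch: customize the prompt using the current project's id
    shellPrompt := { state =>
      "sbt (%s)> ".format(Project.extract(state).currentProject.id)
    }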
@ -22,18 +22,18 @@ available to these settings. For example, to change the default
Global Settings using a Global Plugin
-------------------------------------
The ``~/.sbt/plugins`` directory is a global plugin project. This can be
The `|globalPluginsBase|` directory is a global plugin project. This can be
used to provide global commands, plugins, or other code.
To add a plugin globally, create ``~/.sbt/plugins/build.sbt`` containing
To add a plugin globally, create `|globalPluginSbtFile|` containing
the dependency definitions. For example:
::
addSbtPlugin("org.example" % "plugin" % "1.0")
To change the default ``shellPrompt`` for every project using this
approach, create a local plugin ``~/.sbt/plugins/ShellPrompt.scala``:
To change the default `shellPrompt` for every project using this
approach, create a local plugin `|globalShellPromptScala|`:
::
@ -47,9 +47,15 @@ approach, create a local plugin ``~/.sbt/plugins/ShellPrompt.scala``:
)
}
The ``~/.sbt/plugins`` directory is a full project that is included as
The `|globalPluginsBase|` directory is a full project that is included as
an external dependency of every plugin project. In practice, settings
and code defined here effectively work as if they were defined in a
project's ``project/`` directory. This means that ``~/.sbt/plugins`` can
be used to try out ideas for plugins such as shown in the shellPrompt
project's `project/` directory. This means that `|globalPluginsBase|` can
be used to try out ideas for plugins such as shown in the `shellPrompt`
example.
.. |globalBase| replace:: ~/.sbt/|version|/
.. |globalPluginsBase| replace:: |globalBase|\ plugins/
.. |globalSbtFile| replace:: |globalBase|\ global.sbt
.. |globalPluginSbtFile| replace:: |globalPluginsBase|\ build.sbt
.. |globalShellPromptScala| replace:: |globalPluginsBase|\ ShellPrompt.scala


@ -18,29 +18,29 @@ A fully-qualified reference to a setting or task looks like:
{<build-uri>}<project-id>/config:inkey::key
This "scoped key" reference is used by commands like ``last`` and
``inspect`` and when selecting a task to run. Only ``key`` is usually
This "scoped key" reference is used by commands like `last` and
`inspect` and when selecting a task to run. Only `key` is usually
required by the parser; the remaining optional pieces select the scope.
These optional pieces are individually referred to as scope axes. In the
above description, ``{<build-uri>}`` and ``<project-id>/`` specify the
project axis, ``config:`` is the configuration axis, and ``inkey`` is
above description, `{<build-uri>}` and `<project-id>/` specify the
project axis, `config:` is the configuration axis, and `inkey` is
the task-specific axis. Unspecified components are taken to be the
current project (project axis) or auto-detected (configuration and task
axes). An asterisk (``*``) is used to explicitly refer to the ``Global``
context, as in ``*/*:key``.
axes). An asterisk (`*`) is used to explicitly refer to the `Global`
context, as in `*/*:key`.
Selecting the configuration
---------------------------
In the case of an unspecified configuration (that is, when the
``config:`` part is omitted), if the key is defined in ``Global``, that
`config:` part is omitted), if the key is defined in `Global`, that
is selected. Otherwise, the first configuration defining the key is
selected, where order is determined by the project definition's
``configurations`` member. By default, this ordering is
``compile, test, ...``
`configurations` member. By default, this ordering is
`compile, test, ...`
For example, the following are equivalent when run in a project ``root``
in the build in ``/home/user/sample/``:
For example, the following are equivalent when run in a project `root`
in the build in `/home/user/sample/`:
.. code-block:: console
@ -50,12 +50,12 @@ in the build in ``/home/user/sample/``:
> root/compile:compile
> {file:/home/user/sample/}root/compile:compile
As another example, ``run`` by itself refers to ``compile:run`` because
there is no global ``run`` task and the first configuration searched,
``compile``, defines a ``run``. Therefore, to reference the ``run`` task
for the ``test`` configuration, the configuration axis must be specified
like ``test:run``. Some other examples that require the explicit
``test:`` axis:
As another example, `run` by itself refers to `compile:run` because
there is no global `run` task and the first configuration searched,
`compile`, defines a `run`. Therefore, to reference the `run` task
for the `test` configuration, the configuration axis must be specified
like `test:run`. Some other examples that require the explicit
`test:` axis:
.. code-block:: console
@ -68,9 +68,9 @@ Task-specific Settings
----------------------
Some settings are defined per-task. This is used when there are several
related tasks, such as ``package``, ``packageSrc``, and
``packageDoc``, in the same configuration (such as ``compile`` or
``test``). For package tasks, their settings are the files to package,
related tasks, such as `package`, `packageSrc`, and
`packageDoc`, in the same configuration (such as `compile` or
`test`). For package tasks, their settings are the files to package,
the options to use, and the output file to produce. Each package task
should be able to have different values for these settings.
@ -92,13 +92,13 @@ different package tasks.
> test:package::artifactPath
[info] /home/user/sample/target/scala-2.8.1.final/root_2.8.1-0.1-test.jar
Note that a single colon ``:`` follows a configuration axis and a double
colon ``::`` follows a task axis.
Note that a single colon `:` follows a configuration axis and a double
colon `::` follows a task axis.
Discovering Settings and Tasks
==============================
This section discusses the ``inspect`` command, which is useful for
This section discusses the `inspect` command, which is useful for
exploring relationships between settings. It can be used to determine
which setting should be modified in order to affect another setting, for
example.
@ -106,7 +106,7 @@ example.
Value and Provided By
---------------------
The first piece of information provided by ``inspect`` is the type of a
The first piece of information provided by `inspect` is the type of a
task or the value and type of a setting. The following section of output
is labeled "Provided by". This shows the actual scope where the setting
is defined. For example,
@ -119,9 +119,9 @@ is defined. For example,
[info] {file:/home/user/sample/}root/*:libraryDependencies
...
This shows that ``libraryDependencies`` has been defined on the current
project (``{file:/home/user/sample/}root``) in the global configuration
(``*:``). For a task like ``update``, the output looks like:
This shows that `libraryDependencies` has been defined on the current
project (`{file:/home/user/sample/}root`) in the global configuration
(`*:`). For a task like `update`, the output looks like:
.. code-block:: console
@ -134,7 +134,7 @@ project (``{file:/home/user/sample/}root``) in the global configuration
Related Settings
----------------
The "Related" section of ``inspect`` output lists all of the definitions
The "Related" section of `inspect` output lists all of the definitions
of a key. For example,
.. code-block:: console
@ -144,15 +144,15 @@ of a key. For example,
[info] Related:
[info] test:compile
This shows that in addition to the requested ``compile:compile`` task,
there is also a ``test:compile`` task.
This shows that in addition to the requested `compile:compile` task,
there is also a `test:compile` task.
Dependencies
------------
Forward dependencies show the other settings (or tasks) used to define a
setting (or task). Reverse dependencies go the other direction, showing
what uses a given setting. ``inspect`` provides this information based
what uses a given setting. `inspect` provides this information based
on either the requested dependencies or the actual dependencies.
Requested dependencies are those that a setting directly specifies.
Actual settings are what those dependencies get resolved to. This
@ -161,7 +161,7 @@ distinction is explained in more detail in the following sections.
Requested Dependencies
~~~~~~~~~~~~~~~~~~~~~~
As an example, we'll look at ``console``:
As an example, we'll look at `console`:
.. code-block:: console
@ -179,15 +179,15 @@ As an example, we'll look at ``console``:
...
This shows the inputs to the ``console`` task. We can see that it gets
its classpath and options from ``fullClasspath`` and
``scalacOptions(for console)``. The information provided by the
``inspect`` command can thus assist in finding the right setting to
change. The convention for keys, like ``console`` and
``fullClasspath``, is that the Scala identifier is camel case, while
This shows the inputs to the `console` task. We can see that it gets
its classpath and options from `fullClasspath` and
`scalacOptions(for console)`. The information provided by the
`inspect` command can thus assist in finding the right setting to
change. The convention for keys, like `console` and
`fullClasspath`, is that the Scala identifier is camel case, while
the String representation is lowercase and separated by dashes. The
Scala identifier for a configuration is uppercase to distinguish it from
tasks like ``compile`` and ``test``. For example, we can infer from the
tasks like `compile` and `test`. For example, we can infer from the
previous example how to add code to be run when the Scala interpreter
starts up:
@ -199,23 +199,23 @@ starts up:
import mypackage._
...
``inspect`` showed that ``console`` used the setting
``compile:console::initialCommands``. Translating the
``initialCommands`` string to the Scala identifier gives us
``initialCommands``. ``compile`` indicates that this is for the main
sources. ``console::`` indicates that the setting is specific to
``console``. Because of this, we can set the initial commands on the
``console`` task without affecting the ``consoleQuick`` task, for
`inspect` showed that `console` used the setting
`compile:console::initialCommands`. Translating the
`initialCommands` string to the Scala identifier gives us
`initialCommands`. `compile` indicates that this is for the main
sources. `console::` indicates that the setting is specific to
`console`. Because of this, we can set the initial commands on the
`console` task without affecting the `consoleQuick` task, for
example.
Actual Dependencies
~~~~~~~~~~~~~~~~~~~
``inspect actual <scoped-key>`` shows the actual dependency used. This
`inspect actual <scoped-key>` shows the actual dependency used. This
is useful because delegation means that the dependency can come from a
scope other than the requested one. Using ``inspect actual``, we see
scope other than the requested one. Using `inspect actual`, we see
exactly which scope is providing a value for a setting. Combining
``inspect actual`` with plain ``inspect``, we can see the range of
`inspect actual` with plain `inspect`, we can see the range of
scopes that will affect a setting. Returning to the example in Requested
Dependencies,
@ -234,18 +234,18 @@ Dependencies,
[info] compile:console::streams
...
For ``initialCommands``, we see that it comes from the global scope
(``*/*:``). Combining this with the relevant output from
``inspect console``:
For `initialCommands`, we see that it comes from the global scope
(`*/*:`). Combining this with the relevant output from
`inspect console`:
.. code-block:: console
compile:console::initialCommands
we know that we can set ``initialCommands`` as generally as the global
scope, as specific as the current project's ``console`` task scope, or
we know that we can set `initialCommands` as generally as the global
scope, as specific as the current project's `console` task scope, or
anything in between. This means that we can, for example, set
``initialCommands`` for the whole project and will affect ``console``:
`initialCommands` for the whole project and it will affect `console`:
.. code-block:: console
@ -254,7 +254,7 @@ anything in between. This means that we can, for example, set
The reason we might want to set it here is that other console tasks
will use this value now. We can see which ones use our new setting by
looking at the reverse dependencies output of ``inspect actual``:
looking at the reverse dependencies output of `inspect actual`:
.. code-block:: console
@ -268,13 +268,17 @@ looking at the reverse dependencies output of ``inspect actual``:
[info] *:consoleProject
...
We now know that by setting ``initialCommands`` on the whole project,
We now know that by setting `initialCommands` on the whole project,
we affect all console tasks in all configurations in that project. If we
didn't want the initial commands to apply for ``consoleProject``, which
didn't want the initial commands to apply for `consoleProject`, which
doesn't have our project's classpath available, we could use the more
specific task axis:
``console > set initialCommands in console := "import mypackage._" > set initialCommands in consoleQuick := "import mypackage._"``
.. code-block:: console
> set initialCommands in console := "import mypackage._"
> set initialCommands in consoleQuick := "import mypackage._"
or configuration axis:
.. code-block:: console
@ -291,11 +295,11 @@ Delegates
A setting has a key and a scope. A request for a key in a scope A may be
delegated to another scope if A doesn't define a value for the key. The
delegation chain is well-defined and is displayed in the Delegates
section of the ``inspect`` command. The Delegates section shows the
section of the `inspect` command. The Delegates section shows the
order in which scopes are searched when a value is not defined for the
requested key.
As an example, consider the initial commands for ``console`` again:
As an example, consider the initial commands for `console` again:
.. code-block:: console
@ -311,5 +315,5 @@ As an example, consider the initial commands for ``console`` again:
...
This means that if there is no value specifically for
``*:console::initialCommands``, the scopes listed under Delegates will
`*:console::initialCommands`, the scopes listed under Delegates will
be searched in order until a defined value is found.


@ -9,33 +9,33 @@ class files.
Usage
=====
- ``compile`` will compile the sources under ``src/main/java`` by
- `compile` will compile the sources under `src/main/java` by
default.
- ``testCompile`` will compile the sources under ``src/test/java`` by
- `testCompile` will compile the sources under `src/test/java` by
default.
Pass options to the Java compiler by setting ``javacOptions``:
Pass options to the Java compiler by setting `javacOptions`:
::
javacOptions += "-g:none"
As with options for the Scala compiler, the arguments are not parsed by
sbt. Multi-element options, such as ``-source 1.5``, are specified like:
sbt. Multi-element options, such as `-source 1.5`, are specified like:
::
javacOptions ++= Seq("-source", "1.5")
You can specify the order in which Scala and Java sources are built with
the ``compileOrder`` setting. Possible values are from the
``CompileOrder`` enumeration: ``Mixed``, ``JavaThenScala``, and
``ScalaThenJava``. If you have circular dependencies between Scala and
Java sources, you need the default, ``Mixed``, which passes both Java
and Scala sources to ``scalac`` and then compiles the Java sources with
``javac``. If you do not have circular dependencies, you can use one of
the `compileOrder` setting. Possible values are from the
`CompileOrder` enumeration: `Mixed`, `JavaThenScala`, and
`ScalaThenJava`. If you have circular dependencies between Scala and
Java sources, you need the default, `Mixed`, which passes both Java
and Scala sources to `scalac` and then compiles the Java sources with
`javac`. If you do not have circular dependencies, you can use one of
the other two options to speed up your build by not passing the Java
sources to ``scalac``. For example, if your Scala sources depend on your
sources to `scalac`. For example, if your Scala sources depend on your
Java sources, but your Java sources do not depend on your Scala sources,
you can do:
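::

    // sketch: Java sources compiled before Scala sources
    compileOrder := CompileOrder.JavaThenScala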
@ -60,10 +60,10 @@ they share the same output directory. So, previously compiled classes
not involved in the current recompilation may be picked up. A clean
compile will always provide full checking, however.
By default, sbt includes ``src/main/scala`` and ``src/main/java`` in its
By default, sbt includes `src/main/scala` and `src/main/java` in its
list of unmanaged source directories. For Java-only projects, the
unnecessary Scala directories can be ignored by modifying
``unmanagedSourceDirectories``:
`unmanagedSourceDirectories`:
::
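
    // sketch (0.13 syntax): keep only the Java source directory
    unmanagedSourceDirectories in Compile := Seq((javaSource in Compile).value)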

View File

@ -11,17 +11,17 @@ Overview
========
A user downloads the launcher jar and creates a script to run it. In
this documentation, the script will be assumed to be called ``launch``.
this documentation, the script will be assumed to be called `launch`.
For unix, the script would look like:
``java -jar sbt-launcher.jar "$@"``
`java -jar sbt-launcher.jar "$@"`
The user then downloads the configuration file for the application (call
it ``my.app.configuration``) and creates a script to launch it (call it
``myapp``): ``launch @my.app.configuration "$@"``
it `my.app.configuration`) and creates a script to launch it (call it
`myapp`): `launch @my.app.configuration "$@"`
The user can then launch the application using ``myapp arg1 arg2 ...``
The user can then launch the application using `myapp arg1 arg2 ...`
Like the launcher used to distribute ``sbt``, the downloaded launcher
Like the launcher used to distribute `sbt`, the downloaded launcher
jar will retrieve Scala and the application according to the provided
configuration file. The versions may be fixed or read from a different
configuration file (the location of which is also configurable). The
@ -35,7 +35,7 @@ information about how it was called: command line arguments, current
working directory, Scala version, and application ID (organization,
name, version). In addition, the application can ask the launcher to
perform operations such as obtaining the Scala jars and a
``ClassLoader`` for any version of Scala retrievable from the
`ClassLoader` for any version of Scala retrievable from the
repositories specified in the configuration file. It can request that
other applications be downloaded and run. When the application
completes, it can tell the launcher to exit with a specific exit code or
@ -53,13 +53,13 @@ Configuration
The launcher may be configured in one of the following ways in
increasing order of precedence:
- Replace the ``/sbt/sbt.boot.properties`` file in the jar
- Put a configuration file named ``sbt.boot.properties`` on the
classpath. Put it in the classpath root without the ``/sbt`` prefix.
- Replace the `/sbt/sbt.boot.properties` file in the jar
- Put a configuration file named `sbt.boot.properties` on the
classpath. Put it in the classpath root without the `/sbt` prefix.
- Specify the location of an alternate configuration on the command
line. This can be done by either specifying the location as the
system property ``sbt.boot.properties`` or as the first argument to
the launcher prefixed by ``'@'``. The system property has lower
system property `sbt.boot.properties` or as the first argument to
the launcher prefixed by `'@'`. The system property has lower
precedence. Resolution of a relative path is first attempted against
the current working directory, then against the user's home
directory, and then against the directory containing the launcher
@ -69,8 +69,8 @@ Syntax
~~~~~~
The configuration file is line-based, read as UTF-8 encoded, and defined
by the following grammar. ``'nl'`` is a newline or end of file and
``'text'`` is plain text without newlines or the surrounding delimiters
by the following grammar. `'nl'` is a newline or end of file and
`'text'` is plain text without newlines or the surrounding delimiters
(such as parentheses or square brackets):
.. productionlist::
@ -127,10 +127,10 @@ by the following grammar. ``'nl'`` is a newline or end of file and
In addition to the grammar specified here, property values may include
variable substitutions. A variable substitution has one of these forms:
- ``${variable.name}``
- ``${variable.name-default}``
- `${variable.name}`
- `${variable.name-default}`
where ``variable.name`` is the name of a system property. If a system
where `variable.name` is the name of a system property. If a system
property by that name exists, the value is substituted. If it does not
exist and a default is specified, the default is substituted after
recursively substituting variables in it. If the system property does
@ -142,7 +142,7 @@ Example
The default configuration file for sbt looks like:
.. code-block:: ini
.. parsed-literal::
[scala]
version: ${sbt.scala.version-auto}
@ -150,7 +150,7 @@ The default configuration file for sbt looks like:
[app]
org: ${sbt.organization-org.scala-sbt}
name: sbt
version: ${sbt.version-read(sbt.version)[0.13.0]}
version: ${sbt.version-read(sbt.version)[\ |release|\ ]}
class: ${sbt.main.class-sbt.xMain}
components: xsbti,extra
cross-versioned: ${sbt.cross.versioned-false}
@ -173,28 +173,28 @@ The default configuration file for sbt looks like:
Semantics
~~~~~~~~~
The ``scala.version`` property specifies the version of Scala used to
The `scala.version` property specifies the version of Scala used to
run the application. If the application is not cross-built, this may be
set to ``auto`` and it will be auto-detected from the application's
dependencies. If specified, the ``scala.classifiers`` property defines
set to `auto` and it will be auto-detected from the application's
dependencies. If specified, the `scala.classifiers` property defines
classifiers, such as 'sources', of extra Scala artifacts to retrieve.
The ``app.org``, ``app.name``, and ``app.version`` properties specify
The `app.org`, `app.name`, and `app.version` properties specify
the organization, module ID, and version of the application,
respectively. These are used to resolve and retrieve the application
from the repositories listed in ``[repositories]``. If
``app.cross-versioned`` is ``binary``, the resolved module ID is
``{app.name+'_'+CrossVersion.binaryScalaVersion(scala.version)}``.
If ``app.cross-versioned`` is ``true`` or ``full``, the resolved module ID is
``{app.name+'_'+scala.version}``. The ``scala.version`` property must be
specified and cannot be ``auto`` when cross-versioned. The paths given
in ``app.resources`` are added to the application's classpath. If the
from the repositories listed in `[repositories]`. If
`app.cross-versioned` is `binary`, the resolved module ID is
`{app.name+'_'+CrossVersion.binaryScalaVersion(scala.version)}`.
If `app.cross-versioned` is `true` or `full`, the resolved module ID is
`{app.name+'_'+scala.version}`. The `scala.version` property must be
specified and cannot be `auto` when cross-versioned. The paths given
in `app.resources` are added to the application's classpath. If the
path is relative, it is resolved against the application's working
directory. If specified, the ``app.classifiers`` property defines
directory. If specified, the `app.classifiers` property defines
classifiers, like 'sources', of extra artifacts to retrieve for the
application.
Jars are retrieved to the directory given by ``boot.directory``. By
Jars are retrieved to the directory given by `boot.directory`. By
default, this is an absolute path that is shared by all launched
instances on the machine. If multiple versions access it simultaneously,
you might see messages like:
@ -207,37 +207,37 @@ This boot directory may be relative to the current directory instead. In
this case, the launched application will have a separate boot directory
for each directory it is launched in.
The ``boot.properties`` property specifies the location of the
properties file to use if ``app.version`` or ``scala.version`` is
specified as ``read``. The ``prompt-create``, ``prompt-fill``, and
``quick-option`` properties together with the property definitions in
``[app.properties]`` can be used to initialize the ``boot.properties``
The `boot.properties` property specifies the location of the
properties file to use if `app.version` or `scala.version` is
specified as `read`. The `prompt-create`, `prompt-fill`, and
`quick-option` properties together with the property definitions in
`[app.properties]` can be used to initialize the `boot.properties`
file.
The `app.class` property specifies the name of the entry point to the
application. An application entry point must be a public class with a
no-argument constructor that implements ``xsbti.AppMain``. The
``AppMain`` interface specifies the entry method signature 'run'. The
no-argument constructor that implements `xsbti.AppMain`. The
`AppMain` interface specifies the entry method signature 'run'. The
run method is passed an instance of `AppConfiguration`, which provides
access to the startup environment. ``AppConfiguration`` also provides an
access to the startup environment. `AppConfiguration` also provides an
interface to retrieve other versions of Scala or other applications.
Finally, the return type of the run method is ``xsbti.MainResult``,
which has two subtypes: ``xsbti.Reboot`` and ``xsbti.Exit``. To exit
with a specific code, return an instance of ``xsbti.Exit`` with the
Finally, the return type of the run method is `xsbti.MainResult`,
which has two subtypes: `xsbti.Reboot` and `xsbti.Exit`. To exit
with a specific code, return an instance of `xsbti.Exit` with the
requested code. To restart the application, return an instance of
Reboot. You can change some aspects of the configuration with a reboot,
such as the version of Scala, the application ID, and the arguments.
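A minimal entry point might look like the following sketch; the class name and fixed exit code are illustrative only:

::

    class HelloMain extends xsbti.AppMain {
      def run(configuration: xsbti.AppConfiguration): xsbti.MainResult = {
        // print the command line arguments passed by the launcher
        println("arguments: " + configuration.arguments.mkString(" "))
        new xsbti.Exit { def code = 0 } // normal termination
      }
    }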
The ``ivy.cache-directory`` property provides an alternative location
The `ivy.cache-directory` property provides an alternative location
for the Ivy cache used by the launcher. This does not automatically set
the Ivy cache for the application, but the application is provided this
location through the AppConfiguration instance. The `checksums`
property selects the checksum algorithms (sha1 or md5) that are used to
verify artifacts downloaded by the launcher. `override-build-repos` is
a flag that can inform the application that the repositories configured
for the launcher should be used in the application. If
`repository-config` is defined, the file it specifies should contain a
`[repositories]` section that is used in place of the section in the
original configuration file.
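For illustration, a hypothetical `repository-config` file might contain a `[repositories]` section like the following (the repository name and URL are invented for the example, not launcher defaults):

```ini
[repositories]
  local
  maven-central
  example-ivy: http://example.org/ivy-repo/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
```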
Execution
@ -248,16 +248,16 @@ described in the Configuration section and then parses it. If either the
Scala version or the application version are specified as 'read', the
launcher determines them in the following manner. The file given by the
'boot.properties' property is read as a Java properties file to obtain
the version. The expected property names are `${app.name}.version` for
the application version (where `${app.name}` is replaced with the
value of the `app.name` property from the boot configuration file) and
`scala.version` for the Scala version. If the properties file does not
exist, the default value provided is used. If no default was provided,
an error is generated.
Once the final configuration is resolved, the launcher proceeds to
obtain the necessary jars to launch the application. The
`boot.directory` property is used as a base directory to retrieve jars
to. Locking is done on the directory, so it can be shared system-wide.
The launcher retrieves the requested version of Scala to
@ -278,13 +278,13 @@ application itself. It and its dependencies are retrieved to
Once all required code is downloaded, the class loaders are set up. The
launcher creates a class loader for the requested version of Scala. It
then creates a child class loader containing the jars for the requested
'app.components' and with the paths specified in `app.resources`. An
application that does not use components will have all of its jars in
this class loader.
The main class for the application is then instantiated. It must be a
public class with a public no-argument constructor and must conform to
xsbti.AppMain. The `run` method is invoked and execution passes to the
application. The argument to the 'run' method provides configuration
information and a callback to obtain a class loader for any version of
Scala that can be obtained from a repository in [repositories]. The
@ -305,16 +305,16 @@ interface class will be provided by the launcher, so it is only a
compile-time dependency. If you are building with sbt, your dependency
definition would be:
.. parsed-literal::
libraryDependencies += "org.scala-sbt" % "launcher-interface" % "|release|" % "provided"
resolvers += sbtResolver.value
Make the entry point to your class implement 'xsbti.AppMain'. An example
that uses some of the information:
.. code-block:: scala
package xsbt.test
class Main extends xsbti.AppMain
@ -332,14 +332,14 @@ that uses some of the information:
// and how to return the code to exit with
scalaVersion match
{
case "2.9.3" =>
new xsbti.Reboot {
def arguments = configuration.arguments
def baseDirectory = configuration.baseDirectory
def scalaVersion = "2.10.2"
def app = configuration.provider.id
}
case "2.10.2" => new Exit(1)
case _ => new Exit(0)
}
}
@ -349,14 +349,14 @@ that uses some of the information:
Next, define a configuration file for the launcher. For the above class,
it might look like:
.. parsed-literal::
[scala]
version: |scalaRelease|
[app]
org: org.scala-sbt
name: xsbt-test
version: |release|
class: xsbt.test.Main
cross-versioned: binary
[repositories]
@ -365,7 +365,7 @@ it might look like:
[boot]
directory: ${user.home}/.myapp/boot
Then, `publishLocal` or `+publishLocal` the application to make it
available.
Running an Application
@ -377,12 +377,12 @@ The second two require providing a configuration file for download.
- Replace the /sbt/sbt.boot.properties file in the launcher jar and
distribute the modified jar. The user would need a script to run
`java -jar your-launcher.jar arg1 arg2 ...`.
- The user downloads the launcher jar and you provide the configuration
file.
- The user needs to run `java -Dsbt.boot.properties=your.boot.properties -jar launcher.jar`.
- The user already has a script to run the launcher (call it
'launch'). The user needs to run `launch @your.boot.properties your-arg-1 your-arg-2`


@ -16,13 +16,13 @@ There are two ways for you to manage libraries with sbt: manually or
automatically. These two ways can be mixed as well. This page discusses
the two approaches. All configurations shown here are settings that go
either directly in a :doc:`.sbt file </Getting-Started/Basic-Def>` or are
appended to the `settings` of a Project in a :doc:`.scala file </Getting-Started/Full-Def>`.
Manual Dependency Management
============================
Manually managing dependencies involves copying any jars that you want
to use to the `lib` directory. sbt will put these jars on the
classpath during compilation, testing, running, and when using the
interpreter. You are responsible for adding, removing, updating, and
otherwise managing the jars in this directory. No modifications to your
@ -30,15 +30,15 @@ project definition are required to use this method unless you would like
to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the
`unmanagedBase` setting in your project definition. For example, to
use `custom_lib/`:
::
unmanagedBase := baseDirectory.value / "custom_lib"
If you want more control and flexibility, override the
`unmanagedJars` task, which ultimately provides the manual
dependencies to sbt. The default implementation is roughly:
::
@ -110,7 +110,7 @@ several dependencies can be declared together:
)
If you are using a dependency that was built with sbt, double the first
`%` to be `%%`:
::
@ -122,8 +122,8 @@ this kind of dependency, that dependency probably wasn't published for
the version of Scala you are using. See :doc:`Cross-Build` for details.
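As a sketch (the ScalaCheck coordinates are used purely for illustration), `%%` asks sbt to append the Scala binary version to the artifact name:

```scala
// build.sbt sketch: %% appends the Scala binary version to the name,
// so with scalaVersion 2.10.x this resolves scalacheck_2.10.
libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.10.1"

// Roughly equivalent explicit form using a plain %:
libraryDependencies += "org.scalacheck" % "scalacheck_2.10" % "1.10.1"
```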
Ivy can select the latest revision of a module according to constraints
you specify. Instead of a fixed revision like `"1.6.1"`, you specify
`"latest.integration"`, `"2.9.+"`, or `"[1.0,)"`. See the `Ivy
revisions <http://ant.apache.org/ivy/history/2.3.0-rc1/ivyfile/dependency.html#revision>`_
documentation for details.
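For instance, a `build.sbt` might use revision constraints like these (the module coordinates are illustrative):

```scala
// Pick the newest revision Ivy can find:
libraryDependencies += "commons-io" % "commons-io" % "latest.integration"

// Any 1.2.x revision:
libraryDependencies += "log4j" % "log4j" % "1.2.+"

// Version range: 1.0 or later:
libraryDependencies += "com.example" % "util" % "[1.0,)"
```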
@ -161,12 +161,12 @@ See :doc:`Resolvers` for details on defining other types of repositories.
Override default resolvers
~~~~~~~~~~~~~~~~~~~~~~~~~~
`resolvers` configures additional, inline user resolvers. By default,
`sbt` combines these resolvers with default repositories (Maven
Central and the local Ivy repository) to form `externalResolvers`. To
have more control over repositories, set `externalResolvers`
directly. To only specify repositories in addition to the usual
defaults, configure `resolvers`.
For example, to use the Sonatype OSS Snapshots repository in addition to
the default repositories,
@ -195,7 +195,7 @@ parts:
definitions.
The repositories used by the launcher can be overridden by defining
`~/.sbt/repositories`, which must contain a `[repositories]` section
with the same format as the :doc:`Launcher` configuration file. For
example:
@ -207,8 +207,8 @@ example:
my-ivy-repo: http://example.org/ivy-repo/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]
A different location for the repositories file may be specified by the
`sbt.repository.config` system property in the sbt startup script. The
final step is to set `sbt.override.build.repos` to true to use these
repositories for dependency resolution and retrieval.
Explicit URL
@ -233,7 +233,7 @@ transitively. In some instances, you may find that the dependencies
listed for a project aren't necessary for it to build. Projects using
the Felix OSGI framework, for instance, only explicitly require its main
jar to compile and run. Avoid fetching artifact dependencies with either
`intransitive()` or `notTransitive()`, as in this example:
::
@ -242,14 +242,14 @@ jar to compile and run. Avoid fetching artifact dependencies with either
Classifiers
~~~~~~~~~~~
You can specify the classifier for a dependency using the `classifier`
method. For example, to get the jdk15 version of TestNG:
::
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
For multiple classifiers, use multiple `classifier` calls:
::
@ -257,9 +257,9 @@ For multiple classifiers, use multiple ``classifier`` calls:
"org.lwjgl.lwjgl" % "lwjgl-platform" % lwjglVersion classifier "natives-windows" classifier "natives-linux" classifier "natives-osx"
To obtain particular classifiers for all dependencies transitively, run
the `updateClassifiers` task. By default, this resolves all artifacts
with the `sources` or `javadoc` classifier. Select the classifiers
to obtain by configuring the `transitiveClassifiers` setting. For
example, to only retrieve sources:
::
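A minimal sketch of such a setting (assuming the standard `transitiveClassifiers` key):

```scala
// Restrict updateClassifiers to source jars only.
transitiveClassifiers := Seq("sources")
```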
@ -270,7 +270,7 @@ Exclude Transitive Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To exclude certain transitive dependencies of a dependency, use the
`excludeAll` or `exclude` methods. The `exclude` method should be
used when a pom will be published for the project. It requires the
organization and module name to exclude. For example,
@ -279,7 +279,7 @@ organization and module name to exclude. For example,
libraryDependencies +=
"log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")
The `excludeAll` method is more flexible, but because it cannot be
represented in a pom.xml, it should only be used when a pom doesn't need
to be generated. For example,
@ -300,20 +300,20 @@ Download Sources
~~~~~~~~~~~~~~~~
Downloading source and API documentation jars is usually handled by an
IDE plugin. These plugins use the `updateClassifiers` and
`updateSbtClassifiers` tasks, which produce an :doc:`Update-Report`
referencing these jars.
To have sbt download the dependency's sources without using an IDE
plugin, add `withSources()` to the dependency definition. For API
jars, add `withJavadoc()`. For example:
::
libraryDependencies +=
"org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()
Note that this is not transitive. Use the `update-*classifiers` tasks
for that.
Extra Attributes
@ -321,7 +321,7 @@ Extra Attributes
`Extra
attributes <http://ant.apache.org/ivy/history/2.3.0-rc1/concept.html#extra>`_
can be specified by passing key/value pairs to the `extra` method.
To select dependencies by extra attributes:
@ -359,9 +359,9 @@ Ivy Home Directory
~~~~~~~~~~~~~~~~~~
By default, sbt uses the standard Ivy home directory location
`${user.home}/.ivy2/`. This can be configured machine-wide, for use by
both the sbt launcher and by projects, by setting the system property
`sbt.ivy.home` in the sbt startup script (described in
:doc:`Setup </Getting-Started/Setup>`).
For example:
@ -406,7 +406,7 @@ Conflict Management
The conflict manager decides what to do when dependency resolution brings in different versions of the same library.
By default, the latest revision is selected.
This can be changed by setting `conflictManager`, which has type `ConflictManager <../../api/sbt/ConflictManager.html>`_.
See the `Ivy documentation <http://ant.apache.org/ivy/history/latest-milestone/settings/conflict-managers.html>`_ for details on the different conflict managers.
For example, to specify that no conflicts are allowed,
@ -446,7 +446,7 @@ This can be confirmed in the output of `show update`, which shows the newer vers
[info] (EVICTED) log4j:log4j:1.2.14
...
To say that we prefer the version we've specified over the version from indirect dependencies, use `force()`:
::
@ -455,7 +455,7 @@ To say that we prefer the version we've specified over the version from indirect
"log4j" % "log4j" % "1.2.14" force()
)
The output of `show update` is now reversed:
::
@ -472,9 +472,9 @@ The output of ``show update`` is now reversed:
Forcing a revision without introducing a dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Use of the `force()` method described in the previous section requires having a direct dependency.
However, it may be desirable to force a revision without introducing that direct dependency.
Ivy provides overrides for this; in sbt, they are configured with the `dependencyOverrides` setting, which is a set of `ModuleID`s.
For example, the following dependency definitions conflict because spark uses log4j 1.2.16 and scalaxb uses log4j 1.2.17:
::
@ -502,7 +502,7 @@ To change the version selected, add an override:
dependencyOverrides += "log4j" % "log4j" % "1.2.16"
This will not add a direct dependency on log4j, but will force the revision to be 1.2.16.
This is confirmed by the output of `show update`:
::
@ -537,25 +537,25 @@ for details.
You put a dependency in a configuration by selecting one or more of its
configurations to map to one or more of your project's configurations.
The most common case is to have one of your configurations `A` use a
dependency's configuration `B`. The mapping for this looks like
`"A->B"`. To apply this mapping to a dependency, add it to the end of
your dependency definition:
::
libraryDependencies += "org.scalatest" % "scalatest" % "1.2" % "test->compile"
This says that your project's `test` configuration uses
`ScalaTest`'s `compile` configuration. See the `Ivy
documentation <http://ant.apache.org/ivy/history/2.3.0-rc1/tutorial/conf.html>`_
for more advanced mappings. Most projects published to Maven
repositories will use the `compile` configuration.
A useful application of configurations is to group dependencies that are
not used on normal classpaths. For example, your project might use a
`"js"` configuration to automatically download jQuery and then include
it in your jar by modifying `resources`. For example:
::
@ -565,13 +565,13 @@ it in your jar by modifying ``resources``. For example:
resources ++= update.value.select( configurationFilter("js") )
The `config` method defines a new configuration with name `"js"` and
makes it private to the project so that it is not used for publishing.
See :doc:`/Detailed-Topics/Update-Report` for more information on selecting managed
artifacts.
A configuration without a mapping (no `"->"`) is mapped to `default`
or `compile`. The `->` is only needed when mapping to a different
configuration than those. The ScalaTest dependency above can then be
shortened to:
@ -586,7 +586,7 @@ Maven/Ivy
---------
For this method, create the configuration files as you would for Maven
(`pom.xml`) or Ivy (`ivy.xml` and optionally `ivysettings.xml`).
External configuration is selected by using one of the following
expressions.
@ -647,7 +647,7 @@ or
Full Ivy Example
~~~~~~~~~~~~~~~~
For example, a `build.sbt` using external Ivy files might look like:
::
@ -667,8 +667,8 @@ Known limitations
Maven support is dependent on Ivy's support for Maven POMs. Known issues
with this support:
- Specifying `relativePath` in the `parent` section of a POM will
produce an error.
- Ivy ignores repositories specified in the POM. A workaround is to
specify repositories inline or in an Ivy `ivysettings.xml` file.


@ -2,8 +2,8 @@
Local Scala
===========
To use a locally built Scala version, define the `scalaHome` setting,
which is of type `Option[File]`. This Scala version will only be used
for the build and not for sbt, which will still use the version it was
compiled against.
@ -13,7 +13,7 @@ Example:
scalaHome := Some(file("/path/to/scala"))
Using a local Scala version will override the `scalaVersion` setting
and will not work with :doc:`cross building <Cross-Build>`.
sbt reuses the class loader for the local Scala version. If you


@ -15,11 +15,11 @@ The rest of the page shows example solutions to these problems.
Defining the Project Relationships
==================================
The macro implementation will go in a subproject in the `macro/` directory.
The main project in the project's base directory will depend on this subproject and use the macro.
This configuration is shown in the following build definition:
`project/Build.scala`
::
@ -34,11 +34,11 @@ This configuration is shown in the following build definition:
}
This specifies that the macro implementation goes in `macro/src/main/scala/` and tests go in `macro/src/test/scala/`.
It also shows that we need a dependency on the compiler for the macro implementation.
As an example macro, we'll use `desugar` from `macrocosm <https://github.com/retronym/macrocosm>`_.
`macro/src/main/scala/demo/Demo.scala`
::
@ -84,12 +84,12 @@ This can be then be run at the console:
> macro/test:run
immutable.this.List.apply[Int](1, 2, 3).reverse
Actual tests can be defined and run as usual with `macro/test`.
The main project can use the macro in the same way that the tests do.
For example,
`src/main/scala/MainUsage.scala`
::
@ -123,7 +123,7 @@ For example, the project definitions from above would look like:
)
lazy val commonSub = Project("common", file("common"))
Code in `common/src/main/scala/` is available for both the `macro` and `main` projects to use.
Distribution
============
@ -142,7 +142,7 @@ For example, the `main` Project definition above would now look like:
You may wish to disable publishing the macro implementation.
This is done by overriding `publish` and `publishLocal` to do nothing:
::
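A minimal sketch (assuming the standard `publish` and `publishLocal` task keys):

```scala
// Turn publishing into a no-op for the macro project.
publish := {}

publishLocal := {}
```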


@ -2,30 +2,30 @@
Mapping Files
=============
Tasks like `package`, `packageSrc`, and `packageDoc` accept
mappings of type `Seq[(File, String)]` from an input file to the path
to use in the resulting artifact (jar). Similarly, tasks that copy files
accept mappings of type `Seq[(File, File)]` from an input file to the
destination file. There are some methods on
`PathFinder <../../api/sbt/PathFinder.html>`_
and `Path <../../api/sbt/Path$.html>`_
that can be useful for constructing the `Seq[(File, String)]` or
`Seq[(File, File)]` sequences.
A common way of making this sequence is to start with a `PathFinder`
or `Seq[File]` (which is implicitly convertible to `PathFinder`) and
then call the `x` method. See the
`PathFinder <../../api/sbt/PathFinder.html>`_
API for details, but essentially this method accepts a function
`File => Option[String]` or `File => Option[File]` that is used to
generate mappings.
Relative to a directory
-----------------------
The `Path.relativeTo` method is used to map a `File` to its path
`String` relative to a base directory or directories. The
`relativeTo` method accepts a base directory or sequence of base
directories to relativize an input file against. The first directory
that is an ancestor of the file is used in the case of a sequence of
base directories.
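The computation can be sketched in plain Scala, using only `java.io.File` rather than sbt's own Path API (so the names below are illustrative, not sbt's):

```scala
import java.io.File

// Self-contained sketch of what Path.relativeTo computes: map a file to its
// path String relative to the first base directory that is an ancestor of it.
object RelativeToSketch {
  def relativeTo(bases: Seq[File])(file: File): Option[String] = {
    val filePath = file.getAbsolutePath
    bases.collectFirst {
      // the first base that is an ancestor of the file wins
      case base if filePath.startsWith(base.getAbsolutePath + File.separator) =>
        filePath.stripPrefix(base.getAbsolutePath + File.separator)
    }
  }
}
```

For example, `RelativeToSketch.relativeTo(Seq(new File("/a/b")))(new File("/a/b/C.scala"))` yields `Some("C.scala")`, while a file outside every base maps to `None`.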
@ -45,14 +45,14 @@ For example:
Rebase
------
The `Path.rebase` method relativizes an input file against one or more
base directories (the first argument) and then prepends a base String or
File (the second argument) to the result. As with `relativeTo`, the
first base directory that is an ancestor of the input file is used in
the case of multiple base directories.
For example, the following demonstrates building a
`Seq[(File, String)]` using `rebase`:
::
@ -64,7 +64,7 @@ For example, the following demonstrates building a
val expected = (file("/a/b/C.scala") -> "pre/b/C.scala" ) :: Nil
assert( mappings == expected )
Or, to build a `Seq[(File, File)]`:
::
@ -80,7 +80,7 @@ Or, to build a ``Seq[(File, File)]``:
Flatten
-------
The `Path.flat` method provides a function that maps a file to the
last component of the path (its name). For a File to File mapping, the
input file is mapped to a file with the same name in a given target
directory. For example:
@ -94,7 +94,7 @@ directory. For example:
val expected = (file("/a/b/C.scala") -> "C.scala" ) :: Nil
assert( mappings == expected )
To build a `Seq[(File, File)]` using `flat`:
::
@ -109,8 +109,8 @@ To build a ``Seq[(File, File)]`` using ``flat``:
Alternatives
------------
To try to apply several alternative mappings for a file, use `|`,
which is implicitly added to a function of type `A => Option[B]`. For
example, to try to relativize a file against some base directories but
fall back to flattening:
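The idea behind the `|` combinator can be sketched in plain Scala (the names below are illustrative analogs, not sbt's API): combine two `File => Option[String]` mappings so the second is tried only when the first declines:

```scala
import java.io.File

// Sketch of "alternative mappings": try one A => Option[B] function and
// fall back to another, analogous to sbt's | combinator.
object AlternativeSketch {
  def orAlternative[A, B](first: A => Option[B], second: A => Option[B]): A => Option[B] =
    a => first(a).orElse(second(a))

  // Relativize against a base directory, if it is an ancestor of the file.
  def relativeTo(base: File): File => Option[String] = { f =>
    val prefix = base.getAbsolutePath + File.separator
    val p = f.getAbsolutePath
    if (p.startsWith(prefix)) Some(p.stripPrefix(prefix)) else None
  }

  // Fallback: keep only the file name, like flattening.
  val flat: File => Option[String] = f => Some(f.getName)

  // Relativize when possible, otherwise flatten.
  val mapper: File => Option[String] =
    orAlternative(relativeTo(new File("/a/b")), flat)
}
```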


@ -12,12 +12,12 @@ Why move to |version|?
1. Faster builds (because it is smarter at re-compiling only what it
must)
2. Easier configuration. For simple projects a single `build.sbt` file
in your root directory is easier to create than
`project/build/MyProject.scala` was.
3. No more `lib_managed` directory, reducing disk usage and avoiding
backup and version control hassles.
4. `update` is now much faster and it's invoked automatically by sbt.
5. Terser output. (Yet you can ask for more details if something goes
wrong.)
@ -52,28 +52,28 @@ those with subprojects, are not suited for this technique, but if you
learn how to transition a simple project it will help you do a more
complex one next.
Preserve `project/` for 0.7.x project
---------------------------------------
Rename your `project/` directory to something like `project-old`.
This will hide it from sbt |version| but keep it in case you want to switch
back to 0.7.x.
Create `build.sbt` for |version|
----------------------------------
Create a `build.sbt` file in the root directory of your project. See
:doc:`.sbt build definition </Getting-Started/Basic-Def>` in the Getting
Started Guide, and for simple examples :doc:`/Examples/Quick-Configuration-Examples`.
If you have a simple project then converting your existing project file
to this format is largely a matter of re-writing your dependencies and
maven archive declarations in a modified yet familiar syntax.
This `build.sbt` file combines aspects of the old
`project/build/ProjectName.scala` and `build.properties` files. It
looks like a property file, yet contains Scala code in a special format.
A `build.properties` file like:
.. code-block:: text
@ -87,7 +87,7 @@ A ``build.properties`` file like:
build.scala.versions=2.8.1
project.initialize=false
Now becomes part of your `build.sbt` file with lines like:
::
@ -99,7 +99,7 @@ Now becomes part of your ``build.sbt`` file with lines like:
scalaVersion := "2.9.2"
Currently, a `project/build.properties` is still needed to explicitly
select the sbt version. For example:
.. code-block:: text
@ -116,10 +116,10 @@ Switching back to sbt 0.7.x
---------------------------
If you get stuck and want to switch back, you can leave your
`build.sbt` file alone. sbt 0.7.x will not understand or notice it.
Just rename your |version| `project` directory to something like
`project10` and rename the backup of your old project from
`project-old` to `project` again.
FAQs
====


@ -15,8 +15,8 @@ following two tasks do not have an ordering specified:
read := IO.read(file("/tmp/sample.txt"))
sbt is free to execute `write` first and then `read`, `read` first
and then `write`, or `read` and `write` simultaneously. Execution
of these tasks is non-deterministic because they share a file. A correct
declaration of the tasks would be:
@ -30,9 +30,9 @@ declaration of the tasks would be:
read := IO.read(write.value)
This establishes an ordering: `read` must run after `write`. We've
also guaranteed that `read` will read from the same file that
`write` created.
Practical constraints
=====================
@ -54,9 +54,9 @@ class is mapped to its own task to enable executing tests in parallel.
Prior to sbt 0.12, user control over this process was restricted to:
1. Enabling or disabling all parallel execution
(`parallelExecution := false`, for example).
2. Enabling or disabling mapping tests to their own tasks
(`parallelExecution in Test := false`, for example).
(Although never exposed as a setting, the maximum number of tasks
running at a given time was internally configurable as well.)
@ -76,10 +76,10 @@ concurrency beyond the usual ordering declarations. There are two parts
to these restrictions.
1. A task is tagged in order to classify its purpose and resource
utilization. For example, the ``compile`` task may be tagged as
``Tags.Compile`` and ``Tags.CPU``.
utilization. For example, the `compile` task may be tagged as
`Tags.Compile` and `Tags.CPU`.
2. A list of rules restrict the tasks that may execute concurrently. For
example, ``Tags.limit(Tags.CPU, 4)`` would allow up to four
example, `Tags.limit(Tags.CPU, 4)` would allow up to four
computation-heavy tasks to run at a time.
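Putting the two parts together, a `build.sbt` sketch might look like the following (`myCompileTask` stands in for an existing task implementation; the limit of four is arbitrary):

```scala
// Sketch only: tag the compile task as CPU-bound, then add a global
// rule capping concurrently running CPU-tagged tasks at four.
compile := myCompileTask.tag(Tags.CPU, Tags.Compile).value

concurrentRestrictions in Global += Tags.limit(Tags.CPU, 4)
```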
The system is thus dependent on proper tagging of tasks and then on a
@ -91,11 +91,11 @@ Tagging Tasks
In general, a tag is associated with a weight that represents the task's
relative utilization of the resource represented by the tag. Currently,
this weight is an integer, but it may be a floating point in the future.
``Initialize[Task[T]]`` defines two methods for tagging the constructed
Task: ``tag`` and ``tagw``. The first method, ``tag``, fixes the weight
`Initialize[Task[T]]` defines two methods for tagging the constructed
Task: `tag` and `tagw`. The first method, `tag`, fixes the weight
to be 1 for the tags provided to it as arguments. The second method,
``tagw``, accepts pairs of tags and weights. For example, the following
associates the ``CPU`` and ``Compile`` tags with the ``compile`` task
`tagw`, accepts pairs of tags and weights. For example, the following
associates the `CPU` and `Compile` tags with the `compile` task
(with a weight of 1).
::
@ -105,7 +105,7 @@ associates the ``CPU`` and ``Compile`` tags with the ``compile`` task
compile := myCompileTask.value
Different weights may be specified by passing tag/weight pairs to
``tagw``:
`tagw`:
::
@ -116,7 +116,7 @@ Different weights may be specified by passing tag/weight pairs to
Defining Restrictions
~~~~~~~~~~~~~~~~~~~~~
Once tasks are tagged, the ``concurrentRestrictions`` setting sets
Once tasks are tagged, the `concurrentRestrictions` setting sets
restrictions on the tasks that may be concurrently executed based on the
weighted tags of those tasks. This is necessarily a global set of rules,
so it must be scoped `in Global`. For example,
@ -143,8 +143,8 @@ able to be executed. sbt will generate an error if this condition is not
met.
Most tasks won't be tagged because they are very short-lived. These
tasks are automatically assigned the label ``Untagged``. You may want to
include these tasks in the CPU rule by using the ``limitSum`` method.
tasks are automatically assigned the label `Untagged`. You may want to
include these tasks in the CPU rule by using the `limitSum` method.
For example:
::
@ -156,12 +156,12 @@ For example:
Note that the limit is the first argument so that tags can be provided
as varargs.
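A sketch of such a rule, counting untagged tasks against the CPU budget (the limit of four is arbitrary):

```scala
// A single rule bounding the combined count of CPU-tagged and
// untagged tasks running at once.
concurrentRestrictions in Global +=
  Tags.limitSum(4, Tags.CPU, Tags.Untagged)
```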
Another useful convenience function is ``Tags.exclusive``. This
Another useful convenience function is `Tags.exclusive`. This
specifies that a task with the given tag should execute in isolation. It
starts executing only when no other tasks are running (even if they have
the exclusive tag) and no other tasks may start execution until it
completes. For example, a task could be tagged with a custom tag
``Benchmark`` and a rule configured to ensure such a task is executed by
`Benchmark` and a rule configured to ensure such a task is executed by
itself:
::
@ -171,11 +171,11 @@ itself:
...
Finally, for the most flexibility, you can specify a custom function of
type ``Map[Tag,Int] => Boolean``. The ``Map[Tag,Int]`` represents the
weighted tags of a set of tasks. If the function returns ``true``, it
type `Map[Tag,Int] => Boolean`. The `Map[Tag,Int]` represents the
weighted tags of a set of tasks. If the function returns `true`, it
indicates that the set of tasks is allowed to execute concurrently. If
the return value is ``false``, the set of tasks will not be allowed to
execute concurrently. For example, ``Tags.exclusive(Benchmark)`` is
the return value is `false`, the set of tasks will not be allowed to
execute concurrently. For example, `Tags.exclusive(Benchmark)` is
equivalent to the following:
::
@ -201,34 +201,34 @@ then execute the task anyway.
Built-in Tags and Rules
~~~~~~~~~~~~~~~~~~~~~~~
Built-in tags are defined in the ``Tags`` object. All tags listed below
must be qualified by this object. For example, ``CPU`` refers to the
``Tags.CPU`` value.
Built-in tags are defined in the `Tags` object. All tags listed below
must be qualified by this object. For example, `CPU` refers to the
`Tags.CPU` value.
The built-in semantic tags are:
- ``Compile`` - describes a task that compiles sources.
- ``Test`` - describes a task that performs a test.
- ``Publish``
- ``Update``
- ``Untagged`` - automatically added when a task doesn't explicitly
- `Compile` - describes a task that compiles sources.
- `Test` - describes a task that performs a test.
- `Publish`
- `Update`
- `Untagged` - automatically added when a task doesn't explicitly
define any tags.
- ``All``- automatically added to every task.
- `All` - automatically added to every task.
The built-in resource tags are:
- ``Network`` - describes a task's network utilization.
- ``Disk`` - describes a task's filesystem utilization.
- ``CPU`` - describes a task's computational utilization.
- `Network` - describes a task's network utilization.
- `Disk` - describes a task's filesystem utilization.
- `CPU` - describes a task's computational utilization.
The tasks that are currently tagged by default are:
- ``compile``: ``Compile``, ``CPU``
- ``test``: ``Test``
- ``update``: ``Update``, ``Network``
- ``publish``, ``publishLocal``: ``Publish``, ``Network``
- `compile`: `Compile`, `CPU`
- `test`: `Test`
- `update`: `Update`, `Network`
- `publish`, `publishLocal`: `Publish`, `Network`
Of additional note is that the default ``test`` task will propagate its
Of additional note is that the default `test` task will propagate its
tags to each child task created for each test class.
The default rules provide the same behavior as previous versions of sbt:
@ -240,7 +240,7 @@ The default rules provide the same behavior as previous versions of sbt:
Tags.limitAll(if(parallelExecution.value) max else 1) :: Nil
}
As before, ``parallelExecution in Test`` controls whether tests are
As before, `parallelExecution in Test` controls whether tests are
mapped to separate tasks. To restrict the number of concurrently
executing tests in all projects, use:
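A sketch of that restriction (the limit of two is arbitrary):

```scala
// At most two test tasks may run at a time, across all projects.
concurrentRestrictions in Global += Tags.limit(Tags.Test, 2)
```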
@ -251,7 +251,7 @@ executing tests in all projects, use:
Custom Tags
-----------
To define a new tag, pass a String to the ``Tags.Tag`` method. For
To define a new tag, pass a String to the `Tags.Tag` method. For
example:
::
@ -296,8 +296,8 @@ behavior?
Fractional weighting
~~~~~~~~~~~~~~~~~~~~
Weights are currently ``int``\ s, but could be changed to be
``double``\ s if fractional weights would be useful. It is important to
Weights are currently `int`\ s, but could be changed to be
`double`\ s if fractional weights would be useful. It is important to
preserve a consistent notion of what a weight of 1 means so that
built-in and custom tasks share this definition and useful rules can be
written.
@ -314,10 +314,10 @@ Adjustments to Defaults
Rules should be easier to remove or redefine, perhaps by giving them
names. As it is, rules must be appended or all rules must be completely
redefined. Also, tags can only be defined for tasks at the original
definition site when using the ``:=`` syntax.
definition site when using the `:=` syntax.
For removing tags, an implementation of ``removeTag`` should follow from
the implementation of ``tag`` in a straightforward manner.
For removing tags, an implementation of `removeTag` should follow from
the implementation of `tag` in a straightforward manner.
Other characteristics
~~~~~~~~~~~~~~~~~~~~~
@ -326,12 +326,12 @@ The system of a tag with a weight was selected as being reasonably
powerful and flexible without being too complicated. This selection is
not fundamental and could be enhanced, simplified, or replaced if
necessary. The fundamental interface that describes the constraints the
system must work within is ``sbt.ConcurrentRestrictions``. This
system must work within is `sbt.ConcurrentRestrictions`. This
interface is used to provide an intermediate scheduling queue between
task execution (``sbt.Execute``) and the underlying thread-based
parallel execution service (``java.util.concurrent.CompletionService``).
task execution (`sbt.Execute`) and the underlying thread-based
parallel execution service (`java.util.concurrent.CompletionService`).
This intermediate queue restricts new tasks from being forwarded to the
``j.u.c.CompletionService`` according to the
``sbt.ConcurrentRestrictions`` implementation. See the
`j.u.c.CompletionService` according to the
`sbt.ConcurrentRestrictions` implementation. See the
`sbt.ConcurrentRestrictions <https://github.com/sbt/sbt/blob/v0.12.0/tasks/ConcurrentRestrictions.scala>`_
API documentation for details.

@ -11,9 +11,9 @@ methods for controlling tab completion that are discussed at the end of
the section.
Parser combinators build up a parser from smaller parsers. A
``Parser[T]`` in its most basic usage is a function
``String => Option[T]``. It accepts a ``String`` to parse and produces a
value wrapped in ``Some`` if parsing succeeds or ``None`` if it fails.
`Parser[T]` in its most basic usage is a function
`String => Option[T]`. It accepts a `String` to parse and produces a
value wrapped in `Some` if parsing succeeds or `None` if it fails.
Error handling and tab completion make this picture more complicated,
but we'll stick with Option for this discussion.
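That simplified model can be sketched in plain Scala. This is an illustration of the idea only, not sbt's actual `Parser` implementation:

```scala
// Simplified model: a parser is a function from input to an
// optional parse result.
type SimpleParser[T] = String => Option[T]

// A parser that accepts exactly the string "blue".
val blue: SimpleParser[String] =
  s => if (s == "blue") Some(s) else None

blue("blue")  // Some("blue")
blue("red")   // None
```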
@ -37,9 +37,9 @@ The simplest parser combinators match exact inputs:
// and failing otherwise
val litString: Parser[String] = "blue"
In these examples, implicit conversions produce a literal ``Parser``
from a ``Char`` or ``String``. Other basic parser constructors are the
``charClass``, ``success`` and ``failure`` methods:
In these examples, implicit conversions produce a literal `Parser`
from a `Char` or `String`. Other basic parser constructors are the
`charClass`, `success` and `failure` methods:
::
@ -94,8 +94,8 @@ Transforming results
A key aspect of parser combinators is transforming results along the way
into more useful data structures. The fundamental methods for this are
``map`` and ``flatMap``. Here are examples of ``map`` and some
convenience methods implemented on top of ``map``.
`map` and `flatMap`. Here are examples of `map` and some
convenience methods implemented on top of `map`.
::
@ -122,8 +122,8 @@ Controlling tab completion
Most parsers have reasonable default tab completion behavior. For
example, the string and character literal parsers will suggest the
underlying literal for an empty input string. However, it is impractical
to determine the valid completions for ``charClass``, since it accepts
an arbitrary predicate. The ``examples`` method defines explicit
to determine the valid completions for `charClass`, since it accepts
an arbitrary predicate. The `examples` method defines explicit
completions for such a parser:
::
@ -131,7 +131,7 @@ completions for such a parser:
val digit = charClass(_.isDigit, "digit").examples("0", "1", "2")
Tab completion will use the examples as suggestions. The other method
controlling tab completion is ``token``. The main purpose of ``token``
controlling tab completion is `token`. The main purpose of `token`
is to determine the boundaries for suggestions. For example, if your
parser is:
@ -140,7 +140,7 @@ parser is:
("fg" | "bg") ~ ' ' ~ ("green" | "blue")
then the potential completions on empty input are:
``console fg green fg blue bg green bg blue``
`console fg green fg blue bg green bg blue`
Typically, you want to suggest smaller segments or the number of
suggestions becomes unmanageable. A better parser is:
@ -150,9 +150,9 @@ suggestions becomes unmanageable. A better parser is:
token( ("fg" | "bg") ~ ' ') ~ token("green" | "blue")
Now, the initial suggestions would be (with \_ representing a space):
``console fg_ bg_``
`console fg_ bg_`
Be careful not to overlap or nest tokens, as in
``token("green" ~ token("blue"))``. The behavior is unspecified (and
`token("green" ~ token("blue"))`. The behavior is unspecified (and
should generate an error in the future), but typically the outermost
token definition will be used.
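A sketch combining `token` and `examples` for a small command, assuming the usual `import sbt.complete.DefaultParsers._` is in scope (the color names are arbitrary):

```scala
// Each token is a separate completion unit; examples() supplies
// suggestions where they cannot be derived automatically.
val target = token(("fg" | "bg") ~ ' ')
val color  = token(("green" | "blue").examples("green", "blue"))
val colorCommand = target ~ color
```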

@ -8,9 +8,9 @@ base type used is
but several methods are augmented through implicits:
- `RichFile <../../api/sbt/RichFile.html>`_
adds methods to ``File``
adds methods to `File`
- `PathFinder <../../api/sbt/PathFinder.html>`_
adds methods to ``File`` and ``Seq[File]``
adds methods to `File` and `Seq[File]`
- `Path <../../api/sbt/Path$.html>`_ and
`IO <../../api/sbt/IO$.html>`_ provide
general methods related to files and I/O.
@ -20,32 +20,32 @@ Constructing a File
sbt 0.10+ uses
`java.io.File <http://download.oracle.com/javase/6/docs/api/java/io/File.html>`_
to represent a file instead of the custom ``sbt.Path`` class that was in
sbt 0.7 and earlier. sbt defines the alias ``File`` for ``java.io.File``
so that an extra import is not necessary. The ``file`` method is an
alias for the single-argument ``File`` constructor to simplify
to represent a file instead of the custom `sbt.Path` class that was in
sbt 0.7 and earlier. sbt defines the alias `File` for `java.io.File`
so that an extra import is not necessary. The `file` method is an
alias for the single-argument `File` constructor to simplify
constructing a new file from a String:
::
val source: File = file("/home/user/code/A.scala")
Additionally, sbt augments File with a ``/`` method, which is an alias
for the two-argument ``File`` constructor for building up a path:
Additionally, sbt augments File with a `/` method, which is an alias
for the two-argument `File` constructor for building up a path:
::
def readme(base: File): File = base / "README"
Relative files should only be used when defining the base directory of a
``Project``, where they will be resolved properly.
`Project`, where they will be resolved properly.
::
val root = Project("root", file("."))
Elsewhere, files should be absolute or be built up from an absolute base
``File``. The ``baseDirectory`` setting defines the base directory of
`File`. The `baseDirectory` setting defines the base directory of
the build or project depending on the scope.
For example, the following setting sets the unmanaged library directory
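A plausible sketch of such a setting (the `custom_lib` name is illustrative):

```scala
// Resolve a custom unmanaged library directory against the
// project's base directory.
unmanagedBase := baseDirectory.value / "custom_lib"
```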
@ -72,16 +72,16 @@ defined in:
Path Finders
------------
A ``PathFinder`` computes a ``Seq[File]`` on demand. It is a way to
A `PathFinder` computes a `Seq[File]` on demand. It is a way to
build a sequence of files. There are several methods that augment
``File`` and ``Seq[File]`` to construct a ``PathFinder``. Ultimately,
call ``get`` on the resulting ``PathFinder`` to evaluate it and get back
a ``Seq[File]``.
`File` and `Seq[File]` to construct a `PathFinder`. Ultimately,
call `get` on the resulting `PathFinder` to evaluate it and get back
a `Seq[File]`.
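For instance, a sketch assuming `base` is an absolute directory:

```scala
// Describe the selection lazily, then evaluate it with get.
def jars(base: File): Seq[File] = ((base / "lib") ** "*.jar").get
```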
Selecting descendants
~~~~~~~~~~~~~~~~~~~~~
The ``**`` method accepts a ``java.io.FileFilter`` and selects all files
The `**` method accepts a `java.io.FileFilter` and selects all files
matching that filter.
::
@ -91,9 +91,9 @@ matching that filter.
get
~~~
This selects all files that end in ``.scala`` that are in ``src`` or a
This selects all files that end in `.scala` that are in `src` or a
descendent directory. The list of files is not actually evaluated until
``get`` is called:
`get` is called:
::
@ -102,28 +102,28 @@ descendent directory. The list of files is not actually evaluated until
finder.get
}
If the filesystem changes, a second call to ``get`` on the same
``PathFinder`` object will reflect the changes. That is, the ``get``
method reconstructs the list of files each time. Also, ``get`` only
returns ``File``\ s that existed at the time it was called.
If the filesystem changes, a second call to `get` on the same
`PathFinder` object will reflect the changes. That is, the `get`
method reconstructs the list of files each time. Also, `get` only
returns `File`\ s that existed at the time it was called.
Selecting children
~~~~~~~~~~~~~~~~~~
Selecting files that are immediate children of a subdirectory is done
with a single ``*``:
with a single `*`:
::
def scalaSources(base: File): PathFinder = (base / "src") * "*.scala"
This selects all files that end in ``.scala`` that are in the ``src``
This selects all files that end in `.scala` that are in the `src`
directory.
Existing files only
~~~~~~~~~~~~~~~~~~~
If a selector, such as ``/``, ``**``, or \`\*, is used on a path that
If a selector, such as `/`, `**`, or `*`, is used on a path that
does not represent a directory, the path list will be empty:
::
@ -133,16 +133,16 @@ does not represent a directory, the path list will be empty:
Name Filter
~~~~~~~~~~~
The argument to the child and descendent selectors ``*`` and ``**`` is
actually a ``NameFilter``. An implicit is used to convert a ``String``
to a ``NameFilter`` that interprets ``*`` to represent zero or more
The argument to the child and descendent selectors `*` and `**` is
actually a `NameFilter`. An implicit is used to convert a `String`
to a `NameFilter` that interprets `*` to represent zero or more
characters of any value. See the Name Filters section below for more
information.
Combining PathFinders
~~~~~~~~~~~~~~~~~~~~~
Another operation is concatenation of ``PathFinder``\ s:
Another operation is concatenation of `PathFinder`\ s:
::
@ -151,8 +151,8 @@ Another operation is concatenation of ``PathFinder``\ s:
(base / "lib") +++
(base / "target" / "classes")
When evaluated using ``get``, this will return ``src/main/``, ``lib/``,
and ``target/classes/``. The concatenated finder supports all standard
When evaluated using `get`, this will return `src/main/`, `lib/`,
and `target/classes/`. The concatenated finder supports all standard
methods. For example,
::
@ -171,15 +171,15 @@ accomplished as follows:
( (base / "src") ** "*.scala") --- ( (base / "src") ** ".svn" ** "*.scala")
The first selector selects all Scala sources and the second selects all
sources that are a descendent of a ``.svn`` directory. The ``---``
sources that are a descendent of a `.svn` directory. The `---`
method removes all files returned by the second selector from the
sequence of files returned by the first selector.
Filtering
~~~~~~~~~
There is a ``filter`` method that accepts a predicate of type
``File => Boolean`` and is non-strict:
There is a `filter` method that accepts a predicate of type
`File => Boolean` and is non-strict:
::
@ -192,8 +192,8 @@ There is a ``filter`` method that accepts a predicate of type
Empty PathFinder
~~~~~~~~~~~~~~~~
``PathFinder.empty`` is a ``PathFinder`` that returns the empty sequence
when ``get`` is called:
`PathFinder.empty` is a `PathFinder` that returns the empty sequence
when `get` is called:
::
@ -202,51 +202,51 @@ when ``get`` is called:
PathFinder to String conversions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Convert a ``PathFinder`` to a String using one of the following methods:
Convert a `PathFinder` to a String using one of the following methods:
- ``toString`` is for debugging. It puts the absolute path of each
- `toString` is for debugging. It puts the absolute path of each
component on its own line.
- ``absString`` gets the absolute paths of each component and separates
- `absString` gets the absolute paths of each component and separates
them by the platform's path separator.
- ``getPaths`` produces a ``Seq[String]`` containing the absolute paths
- `getPaths` produces a `Seq[String]` containing the absolute paths
of each component
Mappings
~~~~~~~~
The packaging and file copying methods in sbt expect values of type
``Seq[(File,String)]`` and ``Seq[(File,File)]``, respectively. These are
`Seq[(File,String)]` and `Seq[(File,File)]`, respectively. These are
mappings from the input file to its (String) path in the jar or its
(File) destination. This approach replaces the relative path approach
(using the ``##`` method) from earlier versions of sbt.
(using the `##` method) from earlier versions of sbt.
Mappings are discussed in detail on the :doc:`Mapping-Files` page.
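A sketch of constructing such mappings by pairing each file with a destination path (the names here are illustrative):

```scala
// Pair each HTML file with the path it should have inside the package.
def docMappings(base: File): Seq[(File, String)] = {
  val docs = (base / "docs") ** "*.html"
  docs.get map { f => f -> ("doc/" + f.getName) }
}
```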
File Filters
------------
The argument to ``*`` and ``**`` is of type
The argument to `*` and `**` is of type
`java.io.FileFilter <http://download.oracle.com/javase/6/docs/api/java/io/FileFilter.html>`_.
sbt provides combinators for constructing ``FileFilter``\ s.
sbt provides combinators for constructing `FileFilter`\ s.
First, a String may be implicitly converted to a ``FileFilter``. The
First, a String may be implicitly converted to a `FileFilter`. The
resulting filter selects files with a name matching the string, with a
``*`` in the string interpreted as a wildcard. For example, the
`*` in the string interpreted as a wildcard. For example, the
following selects all Scala sources with the word "Test" in them:
::
def testSrcs(base: File): PathFinder = (base / "src") * "*Test*.scala"
There are some useful combinators added to ``FileFilter``. The ``||``
method declares alternative ``FileFilter``\ s. The following example
There are some useful combinators added to `FileFilter`. The `||`
method declares alternative `FileFilter`\ s. The following example
selects all Java or Scala source files under "src":
::
def sources(base: File): PathFinder = (base / "src") ** ("*.scala" || "*.java")
The ``--``\ method excludes a files matching a second filter from the
The `--` method excludes files matching a second filter from the
files matched by the first:
::
@ -254,5 +254,5 @@ files matched by the first:
def imageResources(base: File): PathFinder =
(base/"src"/"main"/"resources") * ("*.png" -- "logo.png")
This will get ``right.png`` and ``left.png``, but not ``logo.png``, for
This will get `right.png` and `left.png`, but not `logo.png`, for
example.

@ -5,31 +5,31 @@ External Processes
Usage
=====
``sbt`` includes a process library to simplify working with external
`sbt` includes a process library to simplify working with external
processes. The library is available without import in build definitions
and at the interpreter started by the :doc:`consoleProject <Console-Project>` task.
To run an external command, follow it with an exclamation mark ``!``:
To run an external command, follow it with an exclamation mark `!`:
::
"find project -name *.jar" !
An implicit converts the ``String`` to ``sbt.ProcessBuilder``, which
defines the ``!`` method. This method runs the constructed command,
An implicit converts the `String` to `sbt.ProcessBuilder`, which
defines the `!` method. This method runs the constructed command,
waits until the command completes, and returns the exit code.
Alternatively, the ``run`` method defined on ``ProcessBuilder`` runs the
command and returns an instance of ``sbt.Process``, which can be used to
``destroy`` the process before it completes. With no arguments, the
``!`` method sends output to standard output and standard error. You can
pass a ``Logger`` to the ``!`` method to send output to the ``Logger``:
Alternatively, the `run` method defined on `ProcessBuilder` runs the
command and returns an instance of `sbt.Process`, which can be used to
`destroy` the process before it completes. With no arguments, the
`!` method sends output to standard output and standard error. You can
pass a `Logger` to the `!` method to send output to the `Logger`:
::
"find project -name *.jar" ! log
Two alternative implicit conversions are from ``scala.xml.Elem`` or
``List[String]`` to ``sbt.ProcessBuilder``. These are useful for
Two alternative implicit conversions are from `scala.xml.Elem` or
`List[String]` to `sbt.ProcessBuilder`. These are useful for
constructing commands. An example of the first variant from the android
plugin:
@ -38,7 +38,7 @@ plugin:
<x> {dxPath.absolutePath} --dex --output={classesDexPath.absolutePath} {classesMinJarPath.absolutePath}</x> !
If you need to set the working directory or modify the environment, call
``sbt.Process`` explicitly, passing the command sequence (command and
`sbt.Process` explicitly, passing the command sequence (command and
argument list) or command string first and the working directory second.
Any environment variables can be passed as a vararg list of key/value
String pairs.
@ -48,35 +48,35 @@ String pairs.
Process("ls" :: "-l" :: Nil, Path.userHome, "key1" -> value1, "key2" -> value2) ! log
Operators are defined to combine commands. These operators start with
``#`` in order to keep the precedence the same and to separate them from
the operators defined elsewhere in ``sbt`` for filters. In the following
operator definitions, ``a`` and ``b`` are subcommands.
`#` in order to keep the precedence the same and to separate them from
the operators defined elsewhere in `sbt` for filters. In the following
operator definitions, `a` and `b` are subcommands.
- ``a #&& b`` Execute ``a``. If the exit code is nonzero, return that
exit code and do not execute ``b``. If the exit code is zero, execute
``b`` and return its exit code.
- ``a #|| b`` Execute ``a``. If the exit code is zero, return zero for
the exit code and do not execute ``b``. If the exit code is nonzero,
execute ``b`` and return its exit code.
- ``a #| b`` Execute ``a`` and ``b``, piping the output of ``a`` to the
input of ``b``.
- `a #&& b` Execute `a`. If the exit code is nonzero, return that
exit code and do not execute `b`. If the exit code is zero, execute
`b` and return its exit code.
- `a #|| b` Execute `a`. If the exit code is zero, return zero for
the exit code and do not execute `b`. If the exit code is nonzero,
execute `b` and return its exit code.
- `a #| b` Execute `a` and `b`, piping the output of `a` to the
input of `b`.
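For example, a sketch chaining these operators (the file name is arbitrary):

```scala
// Report whether the file exists, using the combinators above.
"test -f build.sbt" #&& "echo found" #|| "echo missing" !
```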
There are also operators defined for redirecting output to ``File``\ s
and input from ``File``\ s and ``URL``\ s. In the following definitions,
``url`` is an instance of ``URL`` and ``file`` is an instance of
``File``.
There are also operators defined for redirecting output to `File`\ s
and input from `File`\ s and `URL`\ s. In the following definitions,
`url` is an instance of `URL` and `file` is an instance of
`File`.
- ``a #< url`` or ``url #> a`` Use ``url`` as the input to ``a``. ``a``
may be a ``File`` or a command.
- ``a #< file`` or ``file #> a`` Use ``file`` as the input to ``a``.
``a`` may be a ``File`` or a command.
- ``a #> file`` or ``file #< a`` Write the output of ``a`` to ``file``.
``a`` may be a ``File``, ``URL``, or a command.
- ``a #>> file`` or ``file #<< a`` Append the output of ``a`` to
``file``. ``a`` may be a ``File``, ``URL``, or a command.
- `a #< url` or `url #> a` Use `url` as the input to `a`. `a`
may be a `File` or a command.
- `a #< file` or `file #> a` Use `file` as the input to `a`.
`a` may be a `File` or a command.
- `a #> file` or `file #< a` Write the output of `a` to `file`.
`a` may be a `File`, `URL`, or a command.
- `a #>> file` or `file #<< a` Append the output of `a` to
`file`. `a` may be a `File`, `URL`, or a command.
There are some additional methods to get the output from a forked
process into a ``String`` or the output lines as a ``Stream[String]``.
process into a `String` or the output lines as a `Stream[String]`.
Here are some examples, but see the `ProcessBuilder
API <../../api/sbt/ProcessBuilder.html>`_
for details.
@ -86,13 +86,13 @@ for details.
val listed: String = "ls" !!
val lines2: Stream[String] = "ls" lines_!
Finally, there is a ``cat`` method to send the contents of ``File``\ s
and ``URL``\ s to standard output.
Finally, there is a `cat` method to send the contents of `File`\ s
and `URL`\ s to standard output.
Examples
--------
Download a ``URL`` to a ``File``:
Download a `URL` to a `File`:
::
@ -100,7 +100,7 @@ Download a ``URL`` to a ``File``:
or
file("About.html") #< url("http://databinder.net/dispatch/About") !
Copy a ``File``:
Copy a `File`:
::
@ -108,8 +108,8 @@ Copy a ``File``:
or
file("About_copy.html") #< file("About.html") !
Append the contents of a ``URL`` to a ``File`` after filtering through
``grep``:
Append the contents of a `URL` to a `File` after filtering through
`grep`:
::
@ -117,13 +117,13 @@ Append the contents of a ``URL`` to a ``File`` after filtering through
or
file("About_JSON") #<< ( "grep JSON" #< url("http://databinder.net/dispatch/About") ) !
Search for uses of ``null`` in the source directory:
Search for uses of `null` in the source directory:
::
"find src -name *.scala -exec grep null {} ;" #| "xargs test -z" #&& "echo null-free" #|| "echo null detected" !
Use ``cat``::
Use `cat`::
val spde = url("http://technically.us/spde/About")
val dispatch = url("http://databinder.net/dispatch/About")

@ -37,11 +37,11 @@ sbt Configuration
=================
sbt requires configuration in two places to make use of a
proxy repository. The first is the ``~/.sbt/repositories``
proxy repository. The first is the `~/.sbt/repositories`
file, and the second is the launcher script.
``~/.sbt/repositories``
`~/.sbt/repositories`
-----------------------
The repositories file is an external configuration for the Launcher.
The exact syntax for the configuration file is detailed in the
@ -58,16 +58,16 @@ Here's an example config:
This example configuration has three repositories configured for sbt.
The first resolver is ``local``, and is used so that artifacts pushed
using ``publish-local`` will be seen in other sbt projects.
The first resolver is `local`, and is used so that artifacts pushed
using `publish-local` will be seen in other sbt projects.
The second resolver is ``my-ivy-proxy-releases``. This repository
The second resolver is `my-ivy-proxy-releases`. This repository
is used to resolve sbt *itself* from the company proxy repository,
as well as any sbt plugins that may be required. Note that the
ivy resolver pattern is important; make sure that yours matches the
one shown or you may not be able to resolve sbt plugins.
The final resolver is ``my-maven-proxy-releases``. This repository
The final resolver is `my-maven-proxy-releases`. This repository
is a proxy for all standard maven repositories, including
maven central.
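A sketch of what such a `~/.sbt/repositories` file might contain. The host names are placeholders to adapt, and the ivy pattern shown is the commonly documented one; verify it against your proxy's layout:

```text
[repositories]
  local
  my-ivy-proxy-releases: http://repo.example.com/ivy-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
  my-maven-proxy-releases: http://repo.example.com/maven-releases/
```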
@ -76,15 +76,15 @@ Launcher Script
---------------------
The sbt launcher supports two configuration options that
allow the usage of proxy repositories. The first is the
``sbt.override.build.repos`` setting and the second is the
``sbt.repository.config`` setting.
`sbt.override.build.repos` setting and the second is the
`sbt.repository.config` setting.
``sbt.override.build.repos``
`sbt.override.build.repos`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This setting is used to specify that all sbt project added resolvers
should be ignored in favor of those configured in the ``repositories``
should be ignored in favor of those configured in the `repositories`
configuration. Using this with a properly configured
``~/.sbt/repositories`` file leads to only your proxy repository
`~/.sbt/repositories` file leads to only your proxy repository
being used for builds.
It is specified like so:
@ -95,9 +95,9 @@ It is specified like so:
``sbt.repository.config``
`sbt.repository.config`
~~~~~~~~~~~~~~~~~~~~~~~~~
If you are unable to create a ``~/.sbt/repositories`` file, due
If you are unable to create a `~/.sbt/repositories` file, due
to user permission errors or for convenience of developers, you
can modify the sbt start script directly with the following:

@ -7,19 +7,19 @@ uploading a descriptor, such as an Ivy file or Maven POM, and artifacts,
such as a jar or war, to a repository so that other projects can specify
your project as a dependency.
The ``publish`` action is used to publish your project to a remote
The `publish` action is used to publish your project to a remote
repository. To use publishing, you need to specify the repository to
publish to and the credentials to use. Once these are set up, you can
run ``publish``.
run `publish`.
The ``publishLocal`` action is used to publish your project to a local
The `publishLocal` action is used to publish your project to a local
Ivy repository. You can then use this project from other projects on the
same machine.
Define the repository
---------------------
To specify the repository, assign a repository to ``publishTo`` and
To specify the repository, assign a repository to `publishTo` and
optionally set the publishing style. For example, to upload to Nexus:
::
@ -44,7 +44,7 @@ If you're using Maven repositories you will also have to select the
right repository depending on your artifacts: SNAPSHOT versions go to
the /snapshot repository while other versions go to the /releases
repository. Doing this selection can be done by using the value of the
``version`` SettingKey:
`version` SettingKey:
::
@ -72,8 +72,8 @@ The second and better way is to load them from a file, for example:
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
The credentials file is a properties file with keys ``realm``, ``host``,
``user``, and ``password``. For example:
The credentials file is a properties file with keys `realm`, `host`,
`user`, and `password`. For example:
.. code-block:: text
@ -86,7 +86,7 @@ Cross-publishing
----------------
To support multiple incompatible Scala versions, enable cross building
and do ``+ publish`` (see :doc:`Cross-Build`). See :doc:`Resolvers` for other
and do `+ publish` (see :doc:`Cross-Build`). See :doc:`Resolvers` for other
supported repository types.
Published artifacts
@ -100,10 +100,10 @@ for details.
Modifying the generated POM
---------------------------
When ``publishMavenStyle`` is ``true``, a POM is generated by the
``makePom`` action and published to the repository instead of an Ivy
When `publishMavenStyle` is `true`, a POM is generated by the
`makePom` action and published to the repository instead of an Ivy
file. This POM file may be altered by changing a few settings. Set
``pomExtra`` to provide XML (``scala.xml.NodeSeq``) to insert directly
`pomExtra` to provide XML (`scala.xml.NodeSeq`) to insert directly
into the generated pom. For example:
::
@ -117,8 +117,8 @@ into the generated pom. For example:
</license>
</licenses>
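
A complete `pomExtra` definition along these lines might look like the following sketch (the license name and URL are illustrative):

::

    pomExtra :=
      <licenses>
        <license>
          <name>Apache 2</name>
          <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
        </license>
      </licenses>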
``makePom`` adds to the POM any Maven-style repositories you have
declared. You can filter these by modifying ``pomRepositoryFilter``,
`makePom` adds to the POM any Maven-style repositories you have
declared. You can filter these by modifying `pomRepositoryFilter`,
which by default excludes local repositories. To instead only include
local repositories:
@ -128,9 +128,9 @@ local repositories:
repo.root.startsWith("file:")
}
There is also a ``pomPostProcess`` setting that can be used to
There is also a `pomPostProcess` setting that can be used to
manipulate the final XML before it is written. Its type is
``Node => Node``.
`Node => Node`.
::
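
    // Hedged sketch: return the node unchanged; a real build would
    // transform the scala.xml.Node before returning it.
    pomPostProcess := { (node: scala.xml.Node) => node }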
@ -141,8 +141,8 @@ manipulate the final XML before it is written. It's type is
Publishing Locally
------------------
The ``publishLocal`` command will publish to the local Ivy repository.
By default, this is in ``${user.home}/.ivy2/local``. Other projects on
The `publishLocal` command will publish to the local Ivy repository.
By default, this is in `${user.home}/.ivy2/local`. Other projects on
the same machine can then list the project as a dependency. For example,
if the sbt project you are publishing has configuration parameters like:
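
::

    // Sketch: values mirror the dependency declaration shown below.
    organization := "org.me"

    name := "my-project"

    version := "0.1-SNAPSHOT"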
@ -158,10 +158,10 @@ Then another project can depend on it:
libraryDependencies += "org.me" %% "my-project" % "0.1-SNAPSHOT"
The version number you select must end with ``SNAPSHOT``, or you must
The version number you select must end with `SNAPSHOT`, or you must
change the version number each time you publish. Ivy maintains a cache,
and it stores even local projects in that cache. If Ivy already has a
version cached, it will not check the local repository for updates,
unless the version number matches a `changing
pattern <http://ant.apache.org/ivy/history/2.3.0-rc1/concept.html#change>`_,
and ``SNAPSHOT`` is one such pattern.
and `SNAPSHOT` is one such pattern.

View File

@ -7,7 +7,11 @@ Maven
Resolvers for Maven2 repositories are added as follows:
``scala resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"``
.. code-block:: scala
resolvers +=
"Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
This is the most common kind of user-defined resolvers. The rest of this
page describes how to define other types of repositories.
@ -16,12 +20,12 @@ Predefined
A few predefined repositories are available and are listed below
- ``DefaultMavenRepository`` This is the main Maven repository at
- `DefaultMavenRepository` This is the main Maven repository at
http://repo1.maven.org/maven2/ and is included by default
- ``JavaNet1Repository`` This is the Maven 1 repository at
- `JavaNet1Repository` This is the Maven 1 repository at
http://download.java.net/maven/1/
For example, to use the ``java.net`` repository, use the following
For example, to use the `java.net` repository, use the following
setting in your build definition:
::
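
    // JavaNet1Repository is the predefined resolver described above.
    resolvers += JavaNet1Repository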
@ -43,13 +47,13 @@ file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using
`patterns <http://ant.apache.org/ivy/history/latest-milestone/concept.html#patterns>`_
to configure repositories.
Construct a repository definition using the factory in ``sbt.Resolver``
for the desired type. This factory creates a ``Repository`` object that
Construct a repository definition using the factory in `sbt.Resolver`
for the desired type. This factory creates a `Repository` object that
can be further configured. The following table contains links to the Ivy
documentation for the repository type and the API documentation for the
factory and repository class. The SSH and SFTP repositories are
configured identically except for the name of the factory. Use
``Resolver.ssh`` for SSH and ``Resolver.sftp`` for SFTP.
`Resolver.ssh` for SSH and `Resolver.sftp` for SFTP.
.. _Ivy filesystem: http://ant.apache.org/ivy/history/latest-milestone/resolver/filesystem.html
.. _filesystem factory: ../../api/sbt/Resolver$$file$.html
@ -67,10 +71,10 @@ configured identically except for the name of the factory. Use
========== ================= ================= ===================== =====================
Type Factory Ivy Docs Factory API Repository Class API
========== ================= ================= ===================== =====================
Filesystem ``Resolver.file`` `Ivy filesystem`_ `filesystem factory`_ `FileRepository API`_
SFTP ``Resolver.sftp`` `Ivy sftp`_ `sftp factory`_ `SftpRepository API`_
SSH ``Resolver.ssh`` `Ivy ssh`_ `ssh factory`_ `SshRepository API`_
URL ``Resolver.url`` `Ivy url`_ `url factory`_ `URLRepository API`_
Filesystem `Resolver.file` `Ivy filesystem`_ `filesystem factory`_ `FileRepository API`_
SFTP `Resolver.sftp` `Ivy sftp`_ `sftp factory`_ `SftpRepository API`_
SSH `Resolver.ssh` `Ivy ssh`_ `ssh factory`_ `SshRepository API`_
URL `Resolver.url` `Ivy url`_ `url factory`_ `URLRepository API`_
========== ================= ================= ===================== =====================
Basic Examples
@ -82,7 +86,7 @@ layout.
Filesystem
^^^^^^^^^^
Define a filesystem repository in the ``test`` directory of the current
Define a filesystem repository in the `test` directory of the current
working directory and declare that publishing to this repository must be
atomic.
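
A sketch of such a definition (the repository name is illustrative; `transactional()` requests atomic publishing):

::

    resolvers += Resolver.file("my-test-repo", file("test")) transactional()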
@ -93,7 +97,7 @@ atomic.
URL
^^^
Define a URL repository at ``"http://example.org/repo-releases/"``.
Define a URL repository at `"http://example.org/repo-releases/"`.
::
@ -112,7 +116,7 @@ SFTP and SSH Repositories
^^^^^^^^^^^^^^^^^^^^^^^^^
The following defines a repository that is served by SFTP from host
``"example.org"``:
`"example.org"`:
::
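
    // Name and host match the base-path example shown below.
    resolvers += Resolver.sftp("my-sftp-repo", "example.org")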
@ -130,8 +134,8 @@ To specify a base path:
resolvers += Resolver.sftp("my-sftp-repo", "example.org", "maven2/repo-releases/")
Authentication for the repositories returned by ``sftp`` and ``ssh`` can
be configured by the ``as`` methods.
Authentication for the repositories returned by `sftp` and `ssh` can
be configured by the `as` methods.
To use password authentication:
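
::

    // Sketch: the user name and password are shown inline for illustration only.
    resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password")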
@ -173,7 +177,7 @@ Custom Layout
~~~~~~~~~~~~~
These examples specify custom repository layouts using patterns. The
factory methods accept a ``Patterns`` instance that defines the
factory methods accept a `Patterns` instance that defines the
patterns to use. The patterns are first resolved against the base file
or URL. The default patterns give the default Maven-style layout.
Provide a different Patterns object to use a different layout. For
@ -190,8 +194,8 @@ API <../../api/sbt/Patterns$.html>`_ for
the methods to use.
For filesystem and URL repositories, you can specify absolute patterns
by omitting the base URL, passing an empty ``Patterns`` instance, and
using ``ivys`` and ``artifacts``:
by omitting the base URL, passing an empty `Patterns` instance, and
using `ivys` and `artifacts`:
::

View File

@ -2,7 +2,7 @@
Running Project Code
====================
The ``run`` and ``console`` actions provide a means for running user
The `run` and `console` actions provide a means for running user
code in the same virtual machine as sbt. This page describes the
problems with doing so, how sbt handles these problems, what types of
code can use this feature, and what types of code must use a :doc:`forked jvm <Forking>`.
@ -14,8 +14,8 @@ Problems
System.exit
-----------
User code can call ``System.exit``, which normally shuts down the JVM.
Because the ``run`` and ``console`` actions run inside the same JVM as
User code can call `System.exit`, which normally shuts down the JVM.
Because the `run` and `console` actions run inside the same JVM as
sbt, this also ends the build and requires restarting sbt.
Threads
@ -24,7 +24,7 @@ Threads
User code can also start other threads. Threads can be left running
after the main method returns. In particular, creating a GUI creates
several threads, some of which may not terminate until the JVM
terminates. The program is not completed until either ``System.exit`` is
terminates. The program is not completed until either `System.exit` is
called or all non-daemon threads terminate.
Deserialization and class loading
@ -43,11 +43,11 @@ sbt's Solutions
System.exit
-----------
User code is run with a custom ``SecurityManager`` that throws a custom
``SecurityException`` when ``System.exit`` is called. This exception is
User code is run with a custom `SecurityManager` that throws a custom
`SecurityException` when `System.exit` is called. This exception is
caught by sbt. sbt then disposes of all top-level windows, interrupts
(not stops) all user-created threads, and handles the exit code. If the
exit code is nonzero, ``run`` and ``console`` complete unsuccessfully.
exit code is nonzero, `run` and `console` complete unsuccessfully.
If the exit code is zero, they complete normally.
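
The trap-exit idea can be sketched in plain Scala as follows (this is not sbt's actual implementation; the class name and message are invented for illustration):

::

    // Throw instead of exiting, so the caller can catch and recover.
    class TrapExit extends SecurityManager {
      override def checkExit(status: Int): Unit =
        throw new SecurityException("trapped exit(" + status + ")")
      // Permissive for everything else in this sketch.
      override def checkPermission(perm: java.security.Permission): Unit = ()
    }

An installer would call `System.setSecurityManager(new TrapExit)` before running user code and catch the `SecurityException` afterwards.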
Threads
@ -57,21 +57,21 @@ sbt makes a list of all threads running before executing user code.
After the user code returns, sbt can then determine the threads created
by the user code. For each user-created thread, sbt replaces the
uncaught exception handler with a custom one that handles the custom
``SecurityException`` thrown by calls to ``System.exit`` and delegates
`SecurityException` thrown by calls to `System.exit` and delegates
to the original handler for everything else. sbt then waits for each
created thread to exit or for ``System.exit`` to be called. sbt handles
a call to ``System.exit`` as described above.
created thread to exit or for `System.exit` to be called. sbt handles
a call to `System.exit` as described above.
A user-created thread is one that is not in the ``system`` thread group
and is not an ``AWT`` implementation thread (e.g. ``AWT-XAWT``,
``AWT-Windows``). User-created threads include the ``AWT-EventQueue-*``
A user-created thread is one that is not in the `system` thread group
and is not an `AWT` implementation thread (e.g. `AWT-XAWT`,
`AWT-Windows`). User-created threads include the `AWT-EventQueue-*`
thread(s).
User Code
=========
Given the above, when can user code be run with the ``run`` and
``console`` actions?
Given the above, when can user code be run with the `run` and
`console` actions?
The user code cannot rely on shutdown hooks and at least one of the
following situations must apply for user code to run in the same JVM:
@ -79,7 +79,7 @@ following situations must apply for user code to run in the same JVM:
1. User code creates no threads.
2. User code creates a GUI and no other threads.
3. The program ends when user-created threads terminate on their own.
4. ``System.exit`` is used to end the program and user-created threads
4. `System.exit` is used to end the program and user-created threads
terminate when interrupted.
5. No deserialization is done, or the deserialization code ensures
that the right class loader is used, as in
@ -92,7 +92,7 @@ the JVM does not actually shut down. So, shutdown hooks cannot be run
and threads are not terminated unless they stop when interrupted. If
these requirements are not met, code must run in a :doc:`forked jvm <Forking>`.
The feature of allowing ``System.exit`` and multiple threads to be used
The feature of allowing `System.exit` and multiple threads to be used
cannot completely emulate the situation of running in a separate JVM and
is intended for development. Program execution should be checked in a
:doc:`forked jvm <Forking>` when using multiple threads or ``System.exit``.
:doc:`forked jvm <Forking>` when using multiple threads or `System.exit`.

View File

@ -30,33 +30,33 @@ Install `conscript <https://github.com/n8han/conscript>`_.
cs sbt/sbt --branch 0.12.0
This will create two scripts: ``screpl`` and ``scalas``.
This will create two scripts: `screpl` and `scalas`.
Manual Setup
------------
Duplicate your standard ``sbt`` script, which was set up according to
:doc:`Setup </Getting-Started/Setup>`, as ``scalas`` and ``screpl`` (or
Duplicate your standard `sbt` script, which was set up according to
:doc:`Setup </Getting-Started/Setup>`, as `scalas` and `screpl` (or
whatever names you like).
``scalas`` is the script runner and should use ``sbt.ScriptMain`` as
the main class, by adding the ``-Dsbt.main.class=sbt.ScriptMain``
parameter to the ``java`` command. Its command line should look like:
`scalas` is the script runner and should use `sbt.ScriptMain` as
the main class, by adding the `-Dsbt.main.class=sbt.ScriptMain`
parameter to the `java` command. Its command line should look like:
.. code-block:: console
java -Dsbt.main.class=sbt.ScriptMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
For the REPL runner ``screpl``, use ``sbt.ConsoleMain`` as the main
For the REPL runner `screpl`, use `sbt.ConsoleMain` as the main
class:
.. code-block:: console
java -Dsbt.main.class=sbt.ConsoleMain -Dsbt.boot.directory=/home/user/.sbt/boot -jar sbt-launch.jar "$@"
In each case, ``/home/user/.sbt/boot`` should be replaced with wherever
In each case, `/home/user/.sbt/boot` should be replaced with wherever
you want sbt's boot directory to be; you might also need to give more
memory to the JVM via ``-Xms512M -Xmx1536M`` or similar options, just
memory to the JVM via `-Xms512M -Xmx1536M` or similar options, just
like shown in :doc:`Setup </Getting-Started/Setup>`.
Usage
@ -67,7 +67,7 @@ sbt Script runner
The script runner can run a standard Scala script, but with the
additional ability to configure sbt. sbt settings may be embedded in the
script in a comment block that opens with ``/***``.
script in a comment block that opens with `/***`.
Example
~~~~~~~
@ -76,7 +76,7 @@ Copy the following script and make it executable. You may need to adjust
the first line depending on your script name and operating system. When
run, the example should retrieve Scala, the required dependencies,
compile the script, and run it directly. For example, if you name it
``dispatch_example.scala``, you would do on Unix:
`dispatch_example.scala`, you would do on Unix:
.. code-block:: console

View File

@ -2,13 +2,13 @@
Setup Notes
===========
Some notes on how to set up your ``sbt`` script.
Some notes on how to set up your `sbt` script.
Do not put ``sbt-launch.jar`` on your classpath.
Do not put `sbt-launch.jar` on your classpath.
------------------------------------------------
Do *not* put ``sbt-launch.jar`` in your ``$SCALA_HOME/lib`` directory,
your project's ``lib`` directory, or anywhere it will be put on a
Do *not* put `sbt-launch.jar` in your `$SCALA_HOME/lib` directory,
your project's `lib` directory, or anywhere it will be put on a
classpath. It isn't a library.
Terminal encoding
@ -16,7 +16,7 @@ Terminal encoding
The character encoding used by your terminal may differ from Java's
default encoding for your platform. In this case, you will need to add
the option ``-Dfile.encoding=<encoding>`` in your ``sbt`` script to set
the option `-Dfile.encoding=<encoding>` in your `sbt` script to set
the encoding, which might look like:
.. code-block:: console
@ -32,21 +32,21 @@ application. For example a common set of memory-related options is:
.. code-block:: console
java -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m``
java -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m
Boot directory
--------------
``sbt-launch.jar`` is just a bootstrap; the actual meat of sbt, and the
`sbt-launch.jar` is just a bootstrap; the actual meat of sbt, and the
Scala compiler and standard library, are downloaded to the shared
directory ``$HOME/.sbt/boot/``.
directory `$HOME/.sbt/boot/`.
To change the location of this directory, set the ``sbt.boot.directory``
system property in your ``sbt`` script. A relative path will be resolved
To change the location of this directory, set the `sbt.boot.directory`
system property in your `sbt` script. A relative path will be resolved
against the current working directory, which can be useful if you want
to avoid sharing the boot directory between projects. For example, the
following uses the pre-0.11 style of putting the boot directory in
``project/boot/``:
`project/boot/`:
.. code-block:: console
@ -56,9 +56,9 @@ HTTP Proxy
----------
On Unix, sbt will pick up any HTTP proxy settings from the standard
``http_proxy`` environment variable. If you are behind a proxy requiring
authentication, your ``sbt`` script must also pass flags to set the
``http.proxyUser`` and ``http.proxyPassword`` properties:
`http_proxy` environment variable. If you are behind a proxy requiring
authentication, your `sbt` script must also pass flags to set the
`http.proxyUser` and `http.proxyPassword` properties:
.. code-block:: console

View File

@ -41,16 +41,16 @@ This example is rather exaggerated in its badness, but I claim it is
nearly the same situation as our two step task definitions. Particular
reasons this is bad include:
1. A client needs to know to call ``makeFoo()`` first.
2. ``foo`` could be changed by other code. There could be a
``def makeFoo2()``, for example.
1. A client needs to know to call `makeFoo()` first.
2. `foo` could be changed by other code. There could be a
`def makeFoo2()`, for example.
3. Access to foo is not thread safe.
The first point is like declaring a task dependency, the second is like
two tasks modifying the same state (either project variables or files),
and the third is a consequence of unsynchronized, shared state.
In Scala, we have the built-in functionality to easily fix this: ``lazy val``.
In Scala, we have the built-in functionality to easily fix this: `lazy val`.
::
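
    // The fix from the text: initialization is lazy, thread-safe, and immutable.
    lazy val foo: Foo = makeFoo()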
@ -62,10 +62,10 @@ with the example usage:
doSomething( foo )
Here, ``lazy val`` gives us thread safety, guaranteed initialization
Here, `lazy val` gives us thread safety, guaranteed initialization
before access, and immutability all in one, DRY construct. The task
system in sbt does the same thing for tasks (and more, but we won't go
into that here) that ``lazy val`` did for our bad example.
into that here) that `lazy val` did for our bad example.
A task definition must declare its inputs and the type of its output.
sbt will ensure that the input tasks have run and will then provide
@ -86,9 +86,9 @@ The general form of a task definition looks like:
(This is only intended to be a discussion of the ideas behind tasks, so
see the :doc:`sbt Tasks </Detailed-Topics/Tasks>` page
for details on usage.) Here, ``aTask`` is assumed to produce a
result of type ``A`` and ``bTask`` is assumed to produce a result of
type ``B``.
for details on usage.) Here, `aTask` is assumed to produce a
result of type `A` and `bTask` is assumed to produce a result of
type `B`.
Application
-----------
@ -96,8 +96,8 @@ Application
As an example, consider generating a zip file containing the binary jar,
source jar, and documentation jar for your project. First, determine
what tasks produce the jars. In this case, the input tasks are
``packageBin``, ``packageSrc``, and ``packageDoc`` in the main
``Compile`` scope. The result of each of these tasks is the File for the
`packageBin`, `packageSrc`, and `packageDoc` in the main
`Compile` scope. The result of each of these tasks is the File for the
jar that they generated. Our zip file task is defined by mapping these
package tasks and including their outputs in a zip file. As good
practice, we then return the File for this zip so that other tasks can
@ -115,11 +115,11 @@ map on the zip task.
out
}
The ``val inputs`` line defines how the input files are mapped to paths
The `val inputs` line defines how the input files are mapped to paths
in the zip. See :doc:`/Detailed-Topics/Mapping-Files` for details.
The explicit types are not required, but are included for clarity.
The ``zipPath`` input would be a custom task to define the location of
The `zipPath` input would be a custom task to define the location of
the zip file. For example:
::

View File

@ -26,10 +26,10 @@ There are several features of the task system:
and modified as easily and flexibly as settings.
2. :doc:`Input Tasks </Extending/Input-Tasks>` use :doc:`parser combinators <Parsing-Input>` to define the syntax for their arguments.
This allows flexible syntax and tab-completions in the same way as :doc:`/Extending/Commands`.
3. Tasks produce values. Other tasks can access a task's value by calling ``value`` on it within a task definition.
3. Tasks produce values. Other tasks can access a task's value by calling `value` on it within a task definition.
4. Dynamically changing the structure of the task graph is possible.
Tasks can be injected into the execution graph based on the result of another task.
5. There are ways to handle task failure, similar to ``try/catch/finally``.
5. There are ways to handle task failure, similar to `try/catch/finally`.
6. Each task has access to its own Logger that by default persists the
logging for that task at a more verbose level than is initially
printed to the screen.
@ -56,15 +56,15 @@ see this task listed.
Define the key
--------------
To declare a new task, define a lazy val of type ``TaskKey``:
To declare a new task, define a lazy val of type `TaskKey`:
::
lazy val sampleTask = taskKey[Int]("A sample task.")
The name of the ``val`` is used when referring to the task in Scala code and at the command line.
The string passed to the ``taskKey`` method is a description of the task.
The type parameter passed to ``taskKey`` (here, ``Int``) is the type of value produced by the task.
The name of the `val` is used when referring to the task in Scala code and at the command line.
The string passed to the `taskKey` method is a description of the task.
The type parameter passed to `taskKey` (here, `Int`) is the type of value produced by the task.
We'll define a couple of other keys for the examples:
@ -73,8 +73,8 @@ We'll define a couple of other keys for the examples:
lazy val intTask = taskKey[Int]("An int task")
lazy val stringTask = taskKey[String]("A string task")
The examples themselves are valid entries in a ``build.sbt`` or can be
provided as part of a sequence to ``Project.settings`` (see
The examples themselves are valid entries in a `build.sbt` or can be
provided as part of a sequence to `Project.settings` (see
:doc:`Full Configuration </Getting-Started/Full-Def>`).
Implement the task
@ -93,7 +93,7 @@ These parts are then combined just like the parts of a setting are combined.
Defining a basic task
~~~~~~~~~~~~~~~~~~~~~
A task is defined using ``:=``
A task is defined using `:=`
::
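
    // Sketch consistent with the description below: prints and returns the sum.
    sampleTask := {
      val sum = 1 + 2
      println("sum: " + sum)
      sum
    }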
@ -108,19 +108,19 @@ A task is defined using ``:=``
}
As mentioned in the introduction, a task is evaluated on demand.
Each time ``sampleTask`` is invoked, for example, it will print the sum.
If the username changes between runs, ``stringTask`` will take different values in those separate runs.
Each time `sampleTask` is invoked, for example, it will print the sum.
If the username changes between runs, `stringTask` will take different values in those separate runs.
(Within a run, each task is evaluated at most once.)
In contrast, settings are evaluated once on project load and are fixed until the next reload.
Tasks with inputs
~~~~~~~~~~~~~~~~~
Tasks with other tasks or settings as inputs are also defined using ``:=``.
The values of the inputs are referenced by the ``value`` method. This method
Tasks with other tasks or settings as inputs are also defined using `:=`.
The values of the inputs are referenced by the `value` method. This method
is special syntax and can only be called when defining a task, such as in the
argument to ``:=``. The following defines a task that adds one to the value
produced by ``intTask`` and returns the result.
argument to `:=`. The following defines a task that adds one to the value
produced by `intTask` and returns the result.
::
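
    // intTask.value is the input; the result is one more than it.
    sampleTask := intTask.value + 1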
@ -136,10 +136,10 @@ Task Scope
~~~~~~~~~~
As with settings, tasks can be defined in a specific scope. For example,
there are separate ``compile`` tasks for the ``compile`` and ``test``
there are separate `compile` tasks for the `compile` and `test`
scopes. The scope of a task is defined the same as for a setting. In the
following example, ``test:sampleTask`` uses the result of
``compile:intTask``.
following example, `test:sampleTask` uses the result of
`compile:intTask`.
::
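
    // Sketch using scoped keys for test:sampleTask and compile:intTask.
    sampleTask in Test := (intTask in Compile).value + 1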
@ -151,8 +151,8 @@ On precedence
As a reminder, infix method precedence is by the name of the method and postfix methods have lower precedence than infix methods.
1. Assignment methods have the lowest precedence. These are methods with
names ending in ``=``, except for ``!=``, ``<=``, ``>=``, and names
that start with ``=``.
names ending in `=`, except for `!=`, `<=`, `>=`, and names
that start with `=`.
2. Methods starting with a letter have the next highest precedence.
3. Methods with names that start with a symbol and aren't included in 1.
have the highest precedence. (This category is divided further
@ -171,8 +171,8 @@ Additionally, the braces in the following are necessary:
helloTask := { "echo Hello" ! }
Without them, Scala interprets the line as ``( helloTask.:=("echo Hello") ).!``
instead of the desired ``helloTask.:=( "echo Hello".! )``.
Without them, Scala interprets the line as `( helloTask.:=("echo Hello") ).!`
instead of the desired `helloTask.:=( "echo Hello".! )`.
Separating implementations
@ -190,8 +190,8 @@ For example, a basic separate definition looks like:
// Bind the implementation to a specific key
intTask := intTaskImpl.value
Note that whenever ``.value`` is used, it must be within a task definition, such as
within ``Def.task`` above or as an argument to ``:=``.
Note that whenever `.value` is used, it must be within a task definition, such as
within `Def.task` above or as an argument to `:=`.
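
A separated implementation of this kind might look like the following sketch (the subtraction is arbitrary):

::

    lazy val intTaskImpl: Def.Initialize[Task[Int]] = Def.task {
      sampleTask.value - 1
    }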
Modifying an Existing Task
@ -210,8 +210,8 @@ input.
Completely override a task by not declaring the previous task as an
input. Each of the definitions in the following example completely
overrides the previous one. That is, when ``intTask`` is run, it will
only print ``#3``.
overrides the previous one. That is, when `intTask` is run, it will
only print `#3`.
::
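
    // Each line fully replaces the previous definition, so only #3 prints.
    intTask := { println("#1"); 1 }

    intTask := { println("#2"); 2 }

    intTask := { println("#3"); 3 }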
@ -244,15 +244,15 @@ The general form of an expression that gets values from multiple scopes is:
<setting-or-task>.all(<scope-filter>).value
The ``all`` method is implicitly added to tasks and settings.
It accepts a ``ScopeFilter`` that will select the ``Scopes``.
The result has type ``Seq[T]``, where ``T`` is the key's underlying type.
The `all` method is implicitly added to tasks and settings.
It accepts a `ScopeFilter` that will select the `Scopes`.
The result has type `Seq[T]`, where `T` is the key's underlying type.
Example
-------
A common scenario is getting the sources for all subprojects for processing all at once, such as passing them to scaladoc.
The task that we want to obtain values for is ``sources`` and we want to get the values in all non-root projects and in the ``Compile`` configuration.
The task that we want to obtain values for is `sources` and we want to get the values in all non-root projects and in the `Compile` configuration.
This looks like:
::
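
    // Sketch: `core` and `util` stand in for this build's non-root projects.
    val filter = ScopeFilter(inProjects(core, util), inConfigurations(Compile))
    // Within a task definition, each selected scope contributes a Seq[File]:
    val allSources: Seq[Seq[File]] = sources.all(filter).value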
@ -276,8 +276,8 @@ The next section describes various ways to construct a ScopeFilter.
ScopeFilter
-----------
A basic ``ScopeFilter`` is constructed by the ``ScopeFilter.apply`` method.
This method makes a ``ScopeFilter`` from filters on the parts of a ``Scope``: a ``ProjectFilter``, ``ConfigurationFilter``, and ``TaskFilter``.
A basic `ScopeFilter` is constructed by the `ScopeFilter.apply` method.
This method makes a `ScopeFilter` from filters on the parts of a `Scope`: a `ProjectFilter`, `ConfigurationFilter`, and `TaskFilter`.
The simplest case is explicitly specifying the values for the parts:
::
@ -298,33 +298,33 @@ The project filter should usually be explicit, but if left unspecified, the curr
More on filter construction
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The example showed the basic methods ``inProjects`` and ``inConfigurations``.
This section describes all methods for constructing a ``ProjectFilter``, ``ConfigurationFilter``, or ``TaskFilter``.
The example showed the basic methods `inProjects` and `inConfigurations`.
This section describes all methods for constructing a `ProjectFilter`, `ConfigurationFilter`, or `TaskFilter`.
These methods can be organized into four groups:
* Explicit member list (``inProjects``, ``inConfigurations``, ``inTasks``)
* Global value (``inGlobalProject``, ``inGlobalConfiguration``, ``inGlobalTask``)
* Default filter (``inAnyProject``, ``inAnyConfiguration``, ``inAnyTask``)
* Project relationships (``inAggregates``, ``inDependencies``)
* Explicit member list (`inProjects`, `inConfigurations`, `inTasks`)
* Global value (`inGlobalProject`, `inGlobalConfiguration`, `inGlobalTask`)
* Default filter (`inAnyProject`, `inAnyConfiguration`, `inAnyTask`)
* Project relationships (`inAggregates`, `inDependencies`)
See the `API documentation <../../api/sbt/ScopeFilter$$Make.html>`_ for details.
Combining ScopeFilters
~~~~~~~~~~~~~~~~~~~~~~
``ScopeFilters`` may be combined with the ``&&``, ``||``, ``--``, and ``-`` methods:
`ScopeFilters` may be combined with the `&&`, `||`, `--`, and `-` methods:
a && b
Selects scopes that match both ``a`` and ``b``
Selects scopes that match both `a` and `b`
a || b
Selects scopes that match either ``a`` or ``b``
Selects scopes that match either `a` or `b`
a -- b
Selects scopes that match ``a`` but not ``b``
Selects scopes that match `a` but not `b`
\-b
Selects scopes that do not match ``b``
Selects scopes that do not match `b`
For example, the following selects the scope for the ``Compile`` and ``Test`` configurations of the ``core`` project
and the global configuration of the ``util`` project:
For example, the following selects the scope for the `Compile` and `Test` configurations of the `core` project
and the global configuration of the `util` project:
::
@ -336,9 +336,9 @@ and the global configuration of the ``util`` project:
More operations
---------------
The ``all`` method applies to both settings (values of type ``Initialize[T]``)
and tasks (values of type ``Initialize[Task[T]]``).
It returns a setting or task that provides a ``Seq[T]``, as shown in this table:
The `all` method applies to both settings (values of type `Initialize[T]`)
and tasks (values of type `Initialize[Task[T]]`).
It returns a setting or task that provides a `Seq[T]`, as shown in this table:
==================== =========================
Target Result
@ -347,20 +347,20 @@ Initialize[T] Initialize[Seq[T]]
Initialize[Task[T]] Initialize[Task[Seq[T]]]
==================== =========================
This means that the ``all`` method can be combined with methods that construct tasks and settings.
This means that the `all` method can be combined with methods that construct tasks and settings.
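As a hedged sketch (the value name here is illustrative, not from the original), applying `all` to the `sources` task might look like:

::

    // Gather the Compile-configuration `sources` of every project;
    // the result is a task producing Seq[Seq[File]].
    val allCompileSources =
      sources.all(ScopeFilter(inAnyProject, inConfigurations(Compile)))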
Missing values
~~~~~~~~~~~~~~
Some scopes might not define a setting or task.
The ``?`` and ``??`` methods can help in this case.
The `?` and `??` methods can help in this case.
They are both defined on settings and tasks and indicate what to do when a key is undefined.
``?``
On a setting or task with underlying type ``T``, this accepts no arguments and returns a setting or task (respectively) of type ``Option[T]``.
The result is ``None`` if the setting/task is undefined and ``Some[T]`` with the value if it is.
``??``
On a setting or task with underlying type ``T``, this accepts an argument of type ``T`` and uses this argument if the setting/task is undefined.
`?`
On a setting or task with underlying type `T`, this accepts no arguments and returns a setting or task (respectively) of type `Option[T]`.
The result is `None` if the setting/task is undefined and `Some[T]` with the value if it is.
`??`
On a setting or task with underlying type `T`, this accepts an argument of type `T` and uses this argument if the setting/task is undefined.
The following contrived example sets the maximum errors to be the maximum of all aggregates of the current project.
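One way this might be written (a sketch; the `aggregateFilter` name is assumed, not taken from the original listing):

::

    lazy val aggregateFilter = ScopeFilter(inAggregates(ThisProject), inConfigurations(Compile))

    maxErrors := (maxErrors ?? 0).all(aggregateFilter).value.max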
@ -380,18 +380,18 @@ The following contrived example sets the maximum errors to be the maximum of all
Multiple values from multiple scopes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The target of ``all`` is any task or setting, including anonymous ones.
The target of `all` is any task or setting, including anonymous ones.
This means it is possible to get multiple values at once without defining a new task or setting in each scope.
A common use case is to pair each value obtained with the project, configuration, or full scope it came from.
``resolvedScoped``
Provides the full enclosing ``ScopedKey`` (which is a ``Scope`` + ``AttributeKey[_]``)
``thisProject``
Provides the ``Project`` associated with this scope (undefined at the global and build levels)
``thisProjectRef``
Provides the ``ProjectRef`` for the context (undefined at the global and build levels)
``configuration``
Provides the ``Configuration`` for the context (undefined for the global configuration)
`resolvedScoped`
Provides the full enclosing `ScopedKey` (which is a `Scope` + `AttributeKey[_]`)
`thisProject`
Provides the `Project` associated with this scope (undefined at the global and build levels)
`thisProjectRef`
Provides the `ProjectRef` for the context (undefined at the global and build levels)
`configuration`
Provides the `Configuration` for the context (undefined for the global configuration)
For example, the following defines a task that prints non-Compile configurations that define
sbt plugins. This might be used to identify an incorrectly configured build (or not, since this is
@ -434,13 +434,13 @@ This allows controlling the verbosity of stack traces and logging individually f
as recalling the last logging for a task.
Tasks also have access to their own persisted binary or text data.
To use Streams, get the value of the ``streams`` task. This is a
To use Streams, get the value of the `streams` task. This is a
special task that provides an instance of
`TaskStreams <../../api/sbt/std/TaskStreams.html>`_
for the defining task. This type provides access to named binary and
text streams, named loggers, and a default logger. The default
`Logger <../../api/sbt/Logger.html>`_,
which is the most commonly used aspect, is obtained by the ``log``
which is the most commonly used aspect, is obtained by the `log`
method:
::
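    // Sketch (the task name `myTask` is assumed): obtain the default logger
    // for this task via the `streams` task and log through it.
    myTask := {
      val log = streams.value.log
      log.info("Hello!")
    }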
@ -459,7 +459,7 @@ You can scope logging settings by the specific task's scope:
traceLevel in myTask := 5
To obtain the last logging output from a task, use the ``last`` command:
To obtain the last logging output from a task, use the `last` command:
.. code-block:: console
@ -468,20 +468,20 @@ To obtain the last logging output from a task, use the ``last`` command:
[info] Hello!
The verbosity with which logging is persisted is controlled using the
``persistLogLevel`` and ``persistTraceLevel`` settings. The ``last``
`persistLogLevel` and `persistTraceLevel` settings. The `last`
command displays what was logged according to these levels. The levels
do not affect already logged information.
Handling Failure
----------------
This section discusses the ``failure``, ``result``, and ``andFinally``
This section discusses the `failure`, `result`, and `andFinally`
methods, which are used to handle failure of other tasks.
``failure``
`failure`
~~~~~~~~~~~
The ``failure`` method creates a new task that returns the ``Incomplete`` value
The `failure` method creates a new task that returns the `Incomplete` value
when the original task fails to complete normally. If the original task succeeds,
the new task fails.
`Incomplete <../../api/sbt/Incomplete.html>`_
@ -499,9 +499,9 @@ For example:
3
}
This overrides the ``intTask`` so that the original exception is printed and the constant ``3`` is returned.
This overrides `intTask` so that the original exception is printed and the constant `3` is returned.
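The elided listing might look roughly like this sketch, consistent with the surrounding fragment (`intTask` is assumed to be a `TaskKey[Int]`):

::

    intTask := {
      // `failure` yields the Incomplete value when the original task fails
      println("Ignoring failure: " + intTask.failure.value)
      3
    }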
``failure`` does not prevent other tasks that depend on the target
`failure` does not prevent other tasks that depend on the target
from failing. Consider the following example:
::
@ -532,21 +532,21 @@ cTask success failure success failure failu
============== =============== ============= ============== ============== ==============
The overall result is always the same as the root task (the directly
invoked task). A ``failure`` turns a success into a failure, and a failure into an ``Incomplete``.
invoked task). A `failure` turns a success into a failure, and a failure into an `Incomplete`.
A normal task definition fails when any of its inputs fail and computes its value otherwise.
``result``
`result`
~~~~~~~~~~
The ``result`` method creates a new task that returns the full ``Result[T]`` value for the original task.
The `result` method creates a new task that returns the full `Result[T]` value for the original task.
`Result <../../api/sbt/Result.html>`_
has the same structure as ``Either[Incomplete, T]`` for a task result of
type ``T``. That is, it has two subtypes:
has the same structure as `Either[Incomplete, T]` for a task result of
type `T`. That is, it has two subtypes:
- ``Inc``, which wraps ``Incomplete`` in case of failure
- ``Value``, which wraps a task's result in case of success.
- `Inc`, which wraps `Incomplete` in case of failure
- `Value`, which wraps a task's result in case of success.
Thus, the task created by ``result`` executes whether or not the original task succeeds or fails.
Thus, the task created by `result` executes whether or not the original task succeeds or fails.
For example:
@ -563,13 +563,13 @@ For example:
v
}
This overrides the original ``intTask`` definition so that if the original task fails, the exception is printed and the constant ``3`` is returned. If it succeeds, the value is printed and returned.
This overrides the original `intTask` definition so that if the original task fails, the exception is printed and the constant `3` is returned. If it succeeds, the value is printed and returned.
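A sketch of what that definition might look like (hedged; `intTask` is assumed to be a `TaskKey[Int]`):

::

    intTask := intTask.result.value match {
      case Inc(inc: Incomplete) =>
        println("Ignoring failure: " + inc)
        3
      case Value(v) =>
        println("Succeeded: " + v)
        v
    }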
andFinally
~~~~~~~~~~
The ``andFinally`` method defines a new task that runs the original task
The `andFinally` method defines a new task that runs the original task
and evaluates a side effect regardless of whether the original task
succeeded. The result of the task is the result of the original task.
For example:
@ -582,10 +582,10 @@ For example:
intTask := intTaskImpl.value
This modifies the original ``intTask`` to always print "andFinally" even
This modifies the original `intTask` to always print "andFinally" even
if the task fails.
Note that ``andFinally`` constructs a new task. This means that the new
Note that `andFinally` constructs a new task. This means that the new
task has to be invoked in order for the extra block to run. This is
important when calling andFinally on another task instead of overriding
a task like in the previous example. For example, consider this code:
@ -598,7 +598,7 @@ a task like in the previous example. For example, consider this code:
otherIntTask := intTaskImpl.value
If ``intTask`` is run directly, ``otherIntTask`` is never involved in
If `intTask` is run directly, `otherIntTask` is never involved in
execution. This case is similar to the following plain Scala code:
::
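    // Rough plain-Scala analogy (a sketch): wrapping a method in try/finally
    // produces a new method; calling the original never runs the wrapper.
    def intTask(): Int = sys.error("Failed.")

    def otherIntTask(): Int =
      try intTask() finally println("andFinally")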
@ -7,12 +7,12 @@ Basics
The standard source locations for testing are:
- Scala sources in ``src/test/scala/``
- Java sources in ``src/test/java/``
- Resources for the test classpath in ``src/test/resources/``
- Scala sources in `src/test/scala/`
- Java sources in `src/test/java/`
- Resources for the test classpath in `src/test/resources/`
The resources may be accessed from tests by using the ``getResource``
methods of ``java.lang.Class`` or ``java.lang.ClassLoader``.
The resources may be accessed from tests by using the `getResource`
methods of `java.lang.Class` or `java.lang.ClassLoader`.
The main Scala testing frameworks
(`specs2 <http://specs2.org/>`_,
@ -26,7 +26,7 @@ declaring it as a :doc:`managed dependency <Library-Management>`:
libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.10.1" % "test"
The fourth component ``"test"`` is the :ref:`configuration <gsg-ivy-configurations>`
The fourth component `"test"` is the :ref:`configuration <gsg-ivy-configurations>`
and means that ScalaCheck will only be on the test classpath and it
isn't needed by the main sources. This is generally good practice for
libraries because your users don't typically need your test dependencies
@ -34,7 +34,7 @@ to use your library.
With the library dependency defined, you can then add test sources in
the locations listed above and compile and run tests. The tasks for
running tests are ``test`` and ``testOnly``. The ``test`` task accepts
running tests are `test` and `testOnly`. The `test` task accepts
no command line arguments and runs all tests:
.. code-block:: console
@ -44,7 +44,7 @@ no command line arguments and runs all tests:
testOnly
---------
The ``testOnly`` task accepts a whitespace separated list of test names
The `testOnly` task accepts a whitespace separated list of test names
to run. For example:
.. code-block:: console
@ -60,7 +60,7 @@ It supports wildcards as well:
testQuick
----------
The ``testQuick`` task, like ``testOnly``, allows to filter the tests
The `testQuick` task, like `testOnly`, allows filtering the tests
to run down to specific tests or wildcards, using the same syntax to indicate
the filters. In addition to the explicit filter, only the tests that
satisfy one of the following conditions are run:
@ -74,23 +74,23 @@ Tab completion
~~~~~~~~~~~~~~
Tab completion is provided for test names based on the results of the
last ``test:compile``. This means that a new sources aren't available
last `test:compile`. This means that new sources aren't available
for tab completion until they are compiled, and deleted sources won't be
removed from tab completion until a recompile. A new test source can
still be manually written out and run using ``testOnly``.
still be manually written out and run using `testOnly`.
Other tasks
-----------
Tasks that are available for main sources are generally available for
test sources, but are prefixed with ``test:`` on the command line and
are referenced in Scala code with ``in Test``. These tasks include:
test sources, but are prefixed with `test:` on the command line and
are referenced in Scala code with `in Test`. These tasks include:
- ``test:compile``
- ``test:console``
- ``test:consoleQuick``
- ``test:run``
- ``test:runMain``
- `test:compile`
- `test:console`
- `test:consoleQuick`
- `test:run`
- `test:runMain`
See :doc:`Running </Getting-Started/Running>` for details on these tasks.
@ -111,14 +111,14 @@ Test Framework Arguments
------------------------
Arguments to the test framework may be provided on the command line to
the ``testOnly`` tasks following a ``--`` separator. For example:
the `testOnly` tasks following a `--` separator. For example:
.. code-block:: console
> testOnly org.example.MyTest -- -d -S
To specify test framework arguments as part of the build, add options
constructed by ``Tests.Argument``:
constructed by `Tests.Argument`:
::
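    // Sketch (the argument values are assumed): pass arguments to all frameworks.
    testOptions in Test += Tests.Argument("-verbosity", "1")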
@ -133,9 +133,9 @@ To specify them for a specific test framework only:
Setup and Cleanup
-----------------
Specify setup and cleanup actions using ``Tests.Setup`` and
``Tests.Cleanup``. These accept either a function of type ``() => Unit``
or a function of type ``ClassLoader => Unit``. The variant that accepts
Specify setup and cleanup actions using `Tests.Setup` and
`Tests.Cleanup`. These accept either a function of type `() => Unit`
or a function of type `ClassLoader => Unit`. The variant that accepts
a ClassLoader is passed the class loader that is (or was) used for
running the tests. It provides access to the test classes as well as the
test framework classes.
@ -143,7 +143,7 @@ test framework classes.
.. note::
When forking, the ClassLoader containing the test classes cannot be provided because it is in another JVM. Only use the ``() => Unit`` variants in this case.
When forking, the ClassLoader containing the test classes cannot be provided because it is in another JVM. Only use the `() => Unit` variants in this case.
Examples:
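A hedged sketch of such options (the printed messages are placeholders):

::

    testOptions in Test += Tests.Setup(() => println("Setup"))
    testOptions in Test += Tests.Cleanup(loader => println("Cleanup: " + loader))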
@ -164,15 +164,15 @@ By default, sbt runs all tasks in parallel. Because each test is mapped
to a task, tests are also run in parallel by default. To make tests
within a given project execute serially:
``scala parallelExecution in Test := false`` ``Test`` can be replaced
with ``IntegrationTest`` to only execute integration tests serially.
::

    parallelExecution in Test := false

`Test` can be replaced with `IntegrationTest` to only execute integration tests serially.
Note that tests from different projects may still execute concurrently.
Filter classes
--------------
If you want to only run test classes whose name ends with "Test", use
``Tests.Filter``:
`Tests.Filter`:
::
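    // Sketch: keep only test classes whose name ends with "Test".
    testOptions in Test := Seq(Tests.Filter(name => name.endsWith("Test")))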
@ -190,7 +190,7 @@ The setting:
specifies that all tests will be executed in a single external JVM. See
:doc:`Forking` for configuring standard options for forking. More control
over how tests are assigned to JVMs and what options to pass to those is
available with ``testGrouping`` key. For example:
available with `testGrouping` key. For example:
::
@ -206,8 +206,8 @@ available with ``testGrouping`` key. For example:
The tests in a single group are run sequentially. Control the number
of forked JVMs allowed to run at the same time by setting the
limit on ``Tags.ForkedTestGroup`` tag, which is 1 by default.
``Setup`` and ``Cleanup`` actions cannot be provided with the actual
limit on `Tags.ForkedTestGroup` tag, which is 1 by default.
`Setup` and `Cleanup` actions cannot be provided with the actual
test class loader when a group is forked.
Additional test configurations
@ -249,49 +249,49 @@ The following full build configuration demonstrates integration tests.
lazy val specs = "org.specs2" %% "specs2" % "2.0" % "it,test"
}
- ``configs(IntegrationTest)`` adds the predefined integration test
configuration. This configuration is referred to by the name ``it``.
- ``settings( Defaults.itSettings : _* )`` adds compilation, packaging,
and testing actions and settings in the ``IntegrationTest``
- `configs(IntegrationTest)` adds the predefined integration test
configuration. This configuration is referred to by the name `it`.
- `settings( Defaults.itSettings : _* )` adds compilation, packaging,
and testing actions and settings in the `IntegrationTest`
configuration.
- ``settings( libraryDependencies += specs )`` adds specs to both the
standard ``test`` configuration and the integration test
configuration ``it``. To define a dependency only for integration
tests, use ``"it"`` as the configuration instead of ``"it,test"``.
- `settings( libraryDependencies += specs )` adds specs to both the
standard `test` configuration and the integration test
configuration `it`. To define a dependency only for integration
tests, use `"it"` as the configuration instead of `"it,test"`.
The standard source hierarchy is used:
- ``src/it/scala`` for Scala sources
- ``src/it/java`` for Java sources
- ``src/it/resources`` for resources that should go on the integration
- `src/it/scala` for Scala sources
- `src/it/java` for Java sources
- `src/it/resources` for resources that should go on the integration
test classpath
The standard testing tasks are available, but must be prefixed with
``it:``. For example,
`it:`. For example,
.. code-block:: console
> it:testOnly org.example.AnIntegrationTest
Similarly the standard settings may be configured for the
``IntegrationTest`` configuration. If not specified directly, most
``IntegrationTest`` settings delegate to ``Test`` settings by default.
`IntegrationTest` configuration. If not specified directly, most
`IntegrationTest` settings delegate to `Test` settings by default.
For example, if test options are specified as:
::
testOptions in Test += ...
then these will be picked up by the ``Test`` configuration and in turn
by the ``IntegrationTest`` configuration. Options can be added
then these will be picked up by the `Test` configuration and in turn
by the `IntegrationTest` configuration. Options can be added
specifically for integration tests by putting them in the
``IntegrationTest`` configuration:
`IntegrationTest` configuration:
::
testOptions in IntegrationTest += ...
Or, use ``:=`` to overwrite any existing options, declaring these to be
Or, use `:=` to overwrite any existing options, declaring these to be
the definitive integration test options:
::
@ -326,29 +326,29 @@ Instead of using the built-in configuration, we defined a new one:
lazy val FunTest = config("fun") extend(Test)
The ``extend(Test)`` part means to delegate to ``Test`` for undefined
``CustomTest`` settings. The line that adds the tasks and settings for
The `extend(Test)` part means to delegate to `Test` for undefined
`FunTest` settings. The line that adds the tasks and settings for
the new test configuration is:
::
settings( inConfig(FunTest)(Defaults.testSettings) : _*)
This says to add test and settings tasks in the ``FunTest``
This says to add the testing tasks and settings in the `FunTest`
configuration. We could have done it this way for integration tests as
well. In fact, ``Defaults.itSettings`` is a convenience definition:
``val itSettings = inConfig(IntegrationTest)(Defaults.testSettings)``.
well. In fact, `Defaults.itSettings` is a convenience definition:
`val itSettings = inConfig(IntegrationTest)(Defaults.testSettings)`.
The comments in the integration test section hold, except with
``IntegrationTest`` replaced with ``FunTest`` and ``"it"`` replaced with
``"fun"``. For example, test options can be configured specifically for
``FunTest``:
`IntegrationTest` replaced with `FunTest` and `"it"` replaced with
`"fun"`. For example, test options can be configured specifically for
`FunTest`:
::
testOptions in FunTest += ...
Test tasks are run by prefixing them with ``fun:``
Test tasks are run by prefixing them with `fun:`
.. code-block:: console
@ -388,18 +388,18 @@ However, different tests are run depending on the configuration.
The key differences are:
- We are now only adding the test tasks
(``inConfig(FunTest)(Defaults.testTasks)``) and not compilation and
(`inConfig(FunTest)(Defaults.testTasks)`) and not compilation and
packaging tasks and settings.
- We filter the tests to be run for each configuration.
To run standard unit tests, run ``test`` (or equivalently,
``test:test``):
To run standard unit tests, run `test` (or equivalently,
`test:test`):
.. code-block:: console
> test
To run tests for the added configuration (here, ``"fun"``), prefix it
To run tests for the added configuration (here, `"fun"`), prefix it
with the configuration name as before:
.. code-block:: console
@ -413,7 +413,7 @@ Application to parallel execution
One use for this shared-source approach is to separate tests that can
run in parallel from those that must execute serially. Apply the
procedure described in this section for an additional configuration.
Let's call the configuration ``serial``:
Let's call the configuration `serial`:
::
@ -426,8 +426,8 @@ using:
parallelExecution in Serial := false
The tests to run in parallel would be run with ``test`` and the ones to
run in serial would be run with ``serial:test``.
The tests to run in parallel would be run with `test` and the ones to
run in serial would be run with `serial:test`.
JUnit
=====
@ -445,7 +445,7 @@ Extensions
==========
This page describes adding support for additional testing libraries and
defining additional test reporters. You do this by implementing ``sbt``
defining additional test reporters. You do this by implementing `sbt`
interfaces (described below). If you are the author of the testing
framework, you can depend on the test interface as a provided
dependency. Alternatively, anyone can provide support for a test
@ -473,17 +473,17 @@ Using Extensions
To use your extensions in a project definition:
Modify the ``testFrameworks``\ setting to reference your test framework:
Modify the `testFrameworks` setting to reference your test framework:
::
testFrameworks += new TestFramework("custom.framework.ClassName")
Specify the test reporters you want to use by overriding the
``testListeners`` method in your project definition.
`testListeners` setting in your project definition.
::
testListeners += customTestListener
where ``customTestListener`` is of type ``sbt.TestReportListener``.
where `customTestListener` is of type `sbt.TestReportListener`.
@ -3,19 +3,19 @@ Triggered Execution
===================
You can make a command run when certain files change by prefixing the
command with ``~``. Monitoring is terminated when ``enter`` is pressed.
This triggered execution is configured by the ``watch`` setting, but
typically the basic settings ``watchSources`` and ``pollInterval`` are
command with `~`. Monitoring is terminated when `enter` is pressed.
This triggered execution is configured by the `watch` setting, but
typically the basic settings `watchSources` and `pollInterval` are
modified.
- ``watchSources`` defines the files for a single project that are
- `watchSources` defines the files for a single project that are
monitored for changes. By default, a project watches resources and
Scala and Java sources.
- ``watchTransitiveSources`` then combines the ``watchSources`` for
- `watchTransitiveSources` then combines the `watchSources` for
the current project and all execution and classpath dependencies (see
:doc:`Full Configuration </Getting-Started/Full-Def>` for details on interProject dependencies).
- ``pollInterval`` selects the interval between polling for changes in
milliseconds. The default value is ``500 ms``.
- `pollInterval` selects the interval between polling for changes in
milliseconds. The default value is `500 ms`.
Some example usages are described below.
@ -38,7 +38,7 @@ One use is for test driven development, as suggested by Erick on the
mailing list.
The following will poll for changes to your source code (main or test)
and run ``testOnly`` for the specified test.
and run `testOnly` for the specified test.
.. code-block:: console
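
    # hypothetical test name
    > ~testOnly org.example.MyExampleSpec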
@ -51,8 +51,8 @@ Occasionally, you may need to trigger the execution of multiple
commands. You can use semicolons to separate the commands to be
triggered.
The following will poll for source changes and run ``clean`` and
``test``.
The following will poll for source changes and run `clean` and
`test`.
.. code-block:: console
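
    # the semicolon-separated commands run in order on each change
    > ~ ;clean ;test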
@ -2,7 +2,7 @@
Understanding Incremental Recompilation
=======================================
Compiling Scala code is slow, and SBT makes it often faster. By
Compiling Scala code is slow, and sbt often makes it faster. By
understanding how, you can even understand how to make compilation even
faster. Modifying source files with many dependencies might require
recompiling only those source files—which might take, say, 5
@ -10,51 +10,51 @@ seconds—instead of all the dependencies—which might take, say, 2
minutes. Often you can control which will be your case and make
development much faster by some simple coding practices.
In fact, improving Scala compilation times is one major goal of SBT, and
In fact, improving Scala compilation times is one major goal of sbt, and
conversely the speedups it gives are one of the major motivations to use
it. A significant portion of SBT sources and development efforts deals
it. A significant portion of sbt sources and development efforts deals
with strategies for speeding up compilation.
To reduce compile times, SBT uses two strategies:
To reduce compile times, sbt uses two strategies:
1. reduce the overhead for restarting Scalac;
2. implement smart and transparent strategies for incremental
recompilation, so that only modified files and the needed
dependencies are recompiled.
3. SBT runs Scalac always in the same virtual machine. If one compiles
source code using SBT, keeps SBT alive, modifies source code and
3. sbt always runs Scalac in the same virtual machine. If one compiles
   source code using sbt, keeps sbt alive, modifies source code, and
triggers a new compilation, this compilation will be faster because
(part of) Scalac will have already been JIT-compiled. In the future,
SBT will reintroduce support for reusing the same compiler instance,
similarly to FSC.
sbt will reintroduce support for reusing the same compiler instance,
similarly to fsc.
4. When a source file ``A.scala`` is modified, SBT goes to great effort
to recompile other source files depending on ``A.scala`` only if
required - that is, only if the interface of ``A.scala`` was
4. When a source file `A.scala` is modified, sbt goes to great effort
to recompile other source files depending on `A.scala` only if
required - that is, only if the interface of `A.scala` was
modified. With other build management tools (especially for Java,
like ant), when a developer changes a source file in a
non-binary-compatible way, he needs to manually ensure that
dependencies are also recompiled - often by manually running the
``clean`` command to remove existing compilation output; otherwise
`clean` command to remove existing compilation output; otherwise
compilation might succeed even when dependent class files might need
to be recompiled. What is worse, the change to one source might make
dependencies incorrect, but this is not discovered automatically: One
might get a compilation success with incorrect source code. Since
Scala compile times are so high, running ``clean`` is particularly
Scala compile times are so high, running `clean` is particularly
undesirable.
By organizing your source code appropriately, you can minimize the
amount of code affected by a change. SBT cannot determine precisely
amount of code affected by a change. sbt cannot determine precisely
which dependencies have to be recompiled; the goal is to compute a
conservative approximation, so that whenever a file must be recompiled,
it will, even though we might recompile extra files.
SBT heuristics
sbt heuristics
--------------
SBT tracks source dependencies at the granularity of source files. For
each source file, SBT tracks files which depend on it directly; if the
sbt tracks source dependencies at the granularity of source files. For
each source file, sbt tracks files which depend on it directly; if the
**interface** of classes, objects or traits in a file changes, all files
dependent on that source must be recompiled. At the moment sbt uses the
following algorithm to calculate source files dependent on a given source
@ -101,24 +101,24 @@ There are also the following member reference dependencies:
D.scala -> A.scala
E.scala -> D.scala
Now if the interface of ``A.scala`` is changed the following files
will get invalidated: ``B.scala``, ``C.scala``, ``D.scala``. Both
``B.scala`` and ``C.scala`` were included through transtive closure
of inheritance dependencies. The ``E.scala`` was not included because
``E.scala`` doesn't depend directly on ``A.scala``.
Now if the interface of `A.scala` is changed, the following files
will get invalidated: `B.scala`, `C.scala`, and `D.scala`. Both
`B.scala` and `C.scala` were included through the transitive closure
of inheritance dependencies. `E.scala` was not included because
it doesn't depend directly on `A.scala`.
The distinction between dependencies by inheritance and by member reference
is a new feature in sbt 0.13 and is responsible for improved recompilation
times in many cases where deep inheritance chains are not used extensively.
SBT does not instead track dependencies to source code at the
granularity of individual output ``.class`` files, as one might hope.
sbt does not, however, track dependencies on source code at the
granularity of individual output `.class` files, as one might hope.
Doing so would be incorrect, because of some problems with sealed
classes (see below for discussion).
Dependencies on binary files are different - they are tracked both on
the ``.class`` level and on the source file level. Adding a new
implementation of a sealed trait to source file ``A`` affects all
the `.class` level and on the source file level. Adding a new
implementation of a sealed trait to source file `A` affects all
clients of that sealed trait, and such dependencies are tracked at the
source file level.
@ -145,12 +145,12 @@ just to illustrate the ideas; this list is not intended to be complete.
2. Adding a method to a trait requires recompiling all implementing
classes. The same is true for most changes to a method signature in a
trait.
3. Calls to ``super.methodName`` in traits are resolved to calls to an
abstract method called ``fullyQualifiedTraitName$$super$methodName``;
3. Calls to `super.methodName` in traits are resolved to calls to an
abstract method called `fullyQualifiedTraitName$$super$methodName`;
such methods only exist if they are used. Hence, adding the first
call to ``super.methodName`` for a specific ``methodName`` changes
call to `super.methodName` for a specific `methodName` changes
the interface. At present, this is not yet handled—see gh-466.
4. ``sealed`` hierarchies of case classes allow to check exhaustiveness
4. `sealed` hierarchies of case classes allow checking exhaustiveness
of pattern matching. Hence pattern matches using case classes must
depend on the complete hierarchy - this is one reason why
dependencies cannot be easily tracked at the class level (see Scala
@ -167,12 +167,12 @@ then sbt 0.13 has the right tools for that.
In order to debug the interface representation and its changes as you
modify and recompile source code you need to do two things:
1. Enable incremental compiler's ``apiDebug`` option.
1. Enable incremental compiler's `apiDebug` option.
2. Add `diff-utils library <https://code.google.com/p/java-diff-utils/>`_
to sbt's classpath. Check documentation of `sbt.extraClasspath`
system property in the :doc:`Command-Line-Reference`.
.. warning:: Enabling the ``apiDebug`` option increases significantly
.. warning:: Enabling the `apiDebug` option significantly increases
memory consumption and degrades performance of the
incremental compiler. The underlying reason is that in
order to produce meaningful debugging information about
@ -184,7 +184,7 @@ modify and recompile source code you need to do two things:
compiler problem only.
Below is a complete transcript which shows how to enable interface debugging
in your project. First, we download the ``diffutils`` jar and pass it
in your project. First, we download the `diffutils` jar and pass it
to sbt:
.. code-block:: none
[info] Reapplying settings...
[info] Set current project to sbt-013 (in build file:/Users/grek/tmp/sbt-013/)
Let's suppose you have the following source code in `Test.scala`::
class A {
def b: Int = 123
}
compile it and then change the `Test.scala` file so it looks like::
class A {
def b: String = "abc"
You can see a unified diff of the two textual interface representations. As you can see,
the incremental compiler detected a change to the return type of the `b` method.
How to take advantage of sbt heuristics
---------------------------------------
The heuristics used by sbt imply the following user-visible
consequences, which determine whether a change to a class affects other
classes.
XXX Please note that this part of the documentation is a first draft;
part of the strategy might be unsound, and part of it might not yet be
implemented.
1. Adding, removing, modifying `private` methods does not require
recompilation of client classes. Therefore, suppose you add a method
to a class with a lot of dependencies, and that this method is only
used in the declaring class; marking it `private` will prevent
recompilation of clients. However, this only applies to methods which
are not accessible to other classes, hence methods marked with
`private` or `private[this]`; methods which are private to a
package, marked with `private[name]`, are part of the API.
2. Modifying the interface of a non-private method requires recompiling
all clients, even if the method is not used.
3. Modifying one class does require recompiling dependencies of other
often invasive, and reducing compilation times is not often a good
enough motivation. That is why we also discuss some of the implications
from the point of view of binary compatibility and software engineering.
Consider the following source file `A.scala`:
.. code-block:: scala
import java.io._
object A {
def openFiles(list: List[File]) =
list.map(name => new FileWriter(name))
}
Let us now consider the public interface of object `A`. Note that the
return type of method `openFiles` is not specified explicitly, but
computed by type inference to be `List[FileWriter]`. Suppose that
after writing this source code, we introduce client code and then modify
`A.scala` as follows:
.. code-block:: scala
import java.io._
object A {
def openFiles(list: List[File]) =
Vector(list.map(name => new BufferedWriter(new FileWriter(name))): _*)
}
Type inference will now compute `Vector[BufferedWriter]` as the result
type; in other words, changing the implementation led to a change in the
public interface, with two undesirable consequences:
val res: List[FileWriter] = A.openFiles(List(new File("foo.input")))
Also the following code will break:
.. code-block:: scala

   val a: Seq[Writer] =
     new BufferedWriter(new FileWriter("bar.input")) ::
     A.openFiles(List(new File("foo.input")))
How can we avoid these problems?
Of course, we cannot solve them in general: if we want to alter the
interface of a module, breakage might result. However, often we can
remove *implementation details* from the interface of a module. In the
example above, for instance, it might well be that the intended return
type is more general - namely `Seq[Writer]`. It might also not be the
case - this is a design choice to be decided on a case-by-case basis. In
this example I will assume however that the designer chooses
`Seq[Writer]`, since it is a reasonable choice both in the above
simplified example and in a real-world extension of the above code.
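With that choice made, the fix is a one-line return type annotation; a sketch of the revised `A.scala` (assuming the `Seq[Writer]` choice discussed above):

```scala
import java.io._

object A {
  // The explicit annotation pins the public interface to Seq[Writer]:
  // switching the implementation between List and Vector, or between
  // FileWriter and BufferedWriter, no longer changes the inferred API.
  def openFiles(list: List[File]): Seq[Writer] =
    list.map(name => new FileWriter(name))
}
```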
The client snippets above will now become
Why adding a member requires recompiling existing clients
---------------------------------------------------------
In Java adding a member does not require recompiling existing valid
source code. The same should seemingly hold also in Scala, but this is
not the case: implicit conversions might enrich class `Foo` with
method `bar` without modifying class `Foo` itself (see discussion in
issue gh-288 - XXX integrate more). However, if another method `bar`
is introduced in class `Foo`, this method should be used in preference
to the one added through implicit conversions. Therefore any class
depending on `Foo` should be recompiled. One can imagine more
fine-grained tracking of dependencies, but this is currently not
implemented.
Further references
------------------
The incremental compilation logic is implemented in
https://github.com/sbt/sbt/blob/0.13/compile/inc/src/main/scala/inc/Incremental.scala.
Some related documentation for sbt 0.7 is available at:
https://code.google.com/p/simple-build-tool/wiki/ChangeDetectionAndTesting.
Some discussion on the incremental recompilation policies is available
in issue gh-322 and gh-288.


Update Report
=============
`update` and related tasks produce a value of type
`sbt.UpdateReport <../../api/sbt/UpdateReport.html>`_
This data structure provides information about the resolved
configurations, modules, and artifacts. At the top level,
`UpdateReport` provides reports of type `ConfigurationReport` for
each resolved configuration. A `ConfigurationReport` supplies reports
(of type `ModuleReport`) for each module resolved for a given
configuration. Finally, a `ModuleReport` lists each successfully
retrieved `Artifact` and the `File` it was retrieved to as well as
the `Artifact`\ s that couldn't be downloaded. This missing
`Artifact` list is always empty for `update`, which will fail if it is
non-empty. However, it may be non-empty for `updateClassifiers` and
`updateSbtClassifiers`.
Filtering a Report and Getting Artifacts
========================================
A typical use of `UpdateReport` is to retrieve a list of files
matching a filter. A conversion of type
`UpdateReport => RichUpdateReport` implicitly provides these methods
for `UpdateReport`. The filters are defined by the
`DependencyFilter <../../api/sbt/DependencyFilter.html>`_,
`ConfigurationFilter <../../api/sbt/ConfigurationFilter.html>`_,
`ModuleFilter <../../api/sbt/ModuleFilter.html>`_,
types. Using these filter types, you can filter by the configuration
name, the module organization, name, or revision, and the artifact name,
type, extension, or classifier.
The relevant methods (implicitly on `UpdateReport`) are:
::
def select(configuration: ConfigurationFilter = ..., module: ModuleFilter = ..., artifact: ArtifactFilter = ...): Seq[File]
Any argument to `select` may be omitted, in which case all values are
allowed for the corresponding component. For example, if the
`ConfigurationFilter` is not specified, all configurations are
accepted. The individual filter types are discussed below.
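As a sketch (the task key and its name are illustrative, not part of sbt's API), a build definition could use `select` to pull out the jars resolved for the test configuration:

```scala
// Illustrative sketch for a build definition: select the jar artifacts
// that the update task resolved for the test configuration.
val testJars = taskKey[Seq[File]]("Jars resolved in the test configuration")

testJars := update.value.select(
  configuration = configurationFilter("test"),
  artifact = artifactFilter(`type` = "jar"))
```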
Filter Basics
-------------
Configuration, module, and artifact filters are typically built by
applying a `NameFilter` to each component of a `Configuration`,
`ModuleID`, or `Artifact`. A basic `NameFilter` is implicitly
constructed from a String, with `*` interpreted as a wildcard.
::
val cf: ConfigurationFilter = configurationFilter(name = "compile" | "test")
Alternatively, these filters, including a `NameFilter`, may be
directly defined by an appropriate predicate (a single-argument function
returning a Boolean).
ConfigurationFilter
-------------------
A configuration filter essentially wraps a `NameFilter` and is
explicitly constructed by the `configurationFilter` method:
::
def configurationFilter(name: NameFilter = ...): ConfigurationFilter
If the argument is omitted, the filter matches all configurations.
Functions of type `String => Boolean` are implicitly convertible to a
`ConfigurationFilter`. As with `ModuleFilter`, `ArtifactFilter`,
and `NameFilter`, the `&`, `|`, and `-` methods may be used to
combine `ConfigurationFilter`\ s.
::
ModuleFilter
------------
A module filter is defined by three `NameFilter`\ s: one for the
organization, one for the module name, and one for the revision. Each
component filter must match for the whole module filter to match. A
module filter is explicitly constructed by the `moduleFilter` method:
::
def moduleFilter(organization: NameFilter = ..., name: NameFilter = ..., revision: NameFilter = ...): ModuleFilter
An omitted argument does not contribute to the match. If all arguments
are omitted, the filter matches all `ModuleID`\ s. Functions of type
`ModuleID => Boolean` are implicitly convertible to a
`ModuleFilter`. As with `ConfigurationFilter`, `ArtifactFilter`,
and `NameFilter`, the `&`, `|`, and `-` methods may be used to
combine `ModuleFilter`\ s:
::
ArtifactFilter
--------------
An artifact filter is defined by four `NameFilter`\ s: one for the
name, one for the type, one for the extension, and one for the
classifier. Each component filter must match for the whole artifact
filter to match. An artifact filter is explicitly constructed by the
`artifactFilter` method:
::
def artifactFilter(name: NameFilter = ..., `type`: NameFilter = ..., extension: NameFilter = ..., classifier: NameFilter = ...): ArtifactFilter
Functions of type `Artifact => Boolean` are implicitly convertible to
an `ArtifactFilter`. As with `ConfigurationFilter`,
`ModuleFilter`, and `NameFilter`, the `&`, `|`, and `-`
methods may be used to combine `ArtifactFilter`\ s:
::
DependencyFilter
----------------
A `DependencyFilter` is typically constructed by combining other
`DependencyFilter`\ s together using `&&`, `||`, and `--`.
Configuration, module, and artifact filters are `DependencyFilter`\ s
themselves and can be used directly as a `DependencyFilter` or they
can build up a `DependencyFilter`. Note that the symbols for the
`DependencyFilter` combining methods are doubled up to distinguish
them from the combinators of the more specific filters for
configurations, modules, and artifacts. These double-character methods
will always return a `DependencyFilter`, whereas the single character
methods preserve the more specific filter type. For example:
::
val df: DependencyFilter =
configurationFilter(name = "compile" | "test") && artifactFilter(`type` = "jar") || moduleFilter(name = "dispatch-*")
Here, we used `&&` and `||` to combine individual component filters
into a dependency filter, which can then be provided to the
`UpdateReport.matches` method. Alternatively, the
`UpdateReport.select` method may be used, which is equivalent to
calling `matches` with its arguments combined with `&&`.
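To make the equivalence concrete, a hedged sketch (filter values are illustrative) showing both styles side by side:

```scala
val df: DependencyFilter =
  configurationFilter("compile") && artifactFilter(`type` = "jar")

// Applying one combined DependencyFilter...
val jars: Seq[File] = update.value.matches(df)

// ...should be equivalent to select, whose component filters are
// combined with && internally.
val same: Seq[File] = update.value.select(
  configuration = configurationFilter("compile"),
  artifact = artifactFilter(`type` = "jar"))
```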


Dormant Pages
===============
If you check out the documentation as a git repository, there's a `Dormant`
directory (this one) which contains:
- "redirect" pages (empty pages that point to some new page). If you
want to rename a page and think it has lots of incoming links from
outside the wiki, you could leave the old page name in here. The
directory name is not part of the link so it's safe to move the old
page into the `Dormant` directory.
- "clipboard" pages that contain some amount of useful text, that needs
to be extracted and organized, maybe moved to existing pages or the
:doc:`/faq` or maybe there's a new page that should exist. Basically content


By Example
----------
Create a file with extension `.scala` in your `project/` directory
(such as `<your-project>/project/Build.scala`).
A sample `project/Build.scala`:
::
Cycles
------
about project relationships. It is near the example for easier
reference.)
The configuration dependency `sub2 -> root` is specified as an
argument to the `delegates` parameter of `Project`, which is by-name
and of type `Seq[ProjectReference]` because by-name repeated
parameters are not allowed in Scala. There are also corresponding
by-name parameters `aggregate` and `dependencies` for execution and
classpath dependencies. By-name parameters, being non-strict, are useful
when there are cycles between the projects, as is the case for `root`
and `sub2`. In the example, there is a *configuration* dependency
`sub2 -> root`, a *classpath* dependency `sub1 -> sub2`, and an
*execution* dependency `root -> sub1`. This causes cycles at the
Scala level, but not within a particular dependency type; cycles within
a single dependency type are not allowed.
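The three dependency types from the example can be sketched as follows (a minimal, illustrative sketch assuming the by-name `aggregate`, `dependencies`, and `delegates` parameters of `Project` described in this section; all settings are omitted):

```scala
object CycleBuild extends Build {
  // execution dependency: root -> sub1
  lazy val root: Project = Project("root", file("."),
    aggregate = Seq(sub1))
  // classpath dependency: sub1 -> sub2
  lazy val sub1: Project = Project("sub1", file("sub1"),
    dependencies = Seq(sub2))
  // configuration dependency: sub2 -> root; the by-name parameter
  // tolerates the resulting Scala-level cycle between the lazy vals
  lazy val sub2: Project = Project("sub2", file("sub2"),
    delegates = Seq(root))
}
```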
Defining Projects
-----------------
An internal project is defined by constructing an instance of
`Project`. The minimum information for a new project is its ID string
and base directory. For example:
::
lazy val projectA = Project("a", file("subA"))
}
This constructs a project definition for a project with ID 'a' and located in the `subA/` directory. Here, `file(...)` is equivalent to `new File(...)` and is resolved relative to the build's base directory.
There are additional optional parameters to the Project constructor.
These parameters configure the project and declare project
relationships, as discussed in the next sections.
a light configuration. Unlike a light configuration, the default
settings can be replaced or manipulated and sequences of settings can be
manipulated. In addition, a light configuration has default imports
defined. A full definition needs to import these explicitly. In
particular, all keys (like `name` and `version`) need to be imported
from `sbt.Keys`.
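A minimal full definition with the explicit imports might look like this sketch (the object and setting values are illustrative):

```scala
import sbt._
import Keys._   // explicit import of `name`, `version`, and the other keys

object ExampleBuild extends Build {
  lazy val root = Project("root", file("."),
    settings = Defaults.defaultSettings ++ Seq(
      name := "example",
      version := "0.1-SNAPSHOT"))
}
```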
No defaults
~~~~~~~~~~~
its settings can be selected like:
lazy val projectA = Project("a", file("subA"), settings = Web.webSettings)
}
Settings defined in `.sbt` files are appended to the settings for each
`Project` definition.
Build-level Settings
~~~~~~~~~~~~~~~~~~~~
Lastly, settings can be defined for the entire build. In general, these
are used when a setting is not defined for a project. These settings are
declared either by augmenting `Build.settings` or defining settings in
the scope of the current build. For example, to set the shell prompt to
be the id for the current project, the following setting can be added to
a `.sbt` file:
::
shellPrompt in ThisBuild := { s => Project.extract(s).currentProject.id + "> " }
(The value is a function `State => String`. `State` contains
everything about the build and will be discussed elsewhere.)
Alternatively, the setting can be defined in `Build.settings`:
::
Project References
~~~~~~~~~~~~~~~~~~
When defining a dependency on another project, you provide a
`ProjectReference`. In the simplest case, this is a `Project`
object. (Technically, there is an implicit conversion
`Project => ProjectReference`) This indicates a dependency on a
project within the same build. It is possible to declare a dependency on
a project in a directory separate from the current build, in a git
repository, or in a project packaged into a jar and accessible via
http/https. These are referred to as external builds and projects. You
can reference the root project in an external build with
`RootProject`:
.. code-block:: text
RootProject( uri("git://github.com/dragos/dupcheck.git") )
or a specific project within the external build can be referenced using
a `ProjectRef`:
::
branch or tag. For example:
RootProject( uri("git://github.com/typesafehub/sbteclipse.git#v1.2") )
Ultimately, a `RootProject` is resolved to a `ProjectRef` once the
external project is loaded. Additionally, there are implicit conversions
`URI => RootProject` and `File => RootProject` so that URIs and
Files can be used directly. External, remote builds are retrieved or
checked out to a staging directory in the user's `.sbt` directory so
that they can be manipulated like local builds. Examples of using
project references follow in the next sections.
When using external projects, the `sbt.boot.directory` should be set
(see [[Setup\|Getting Started Setup]]) so that unnecessary
recompilations do not occur (see gh-35).
Execution Dependency
~~~~~~~~~~~~~~~~~~~~
If project A has an execution dependency on project B, then when you
execute a task on project A, it will also be run on project B. No
ordering of these tasks is implied. An execution dependency is declared
using the `aggregate` method on `Project`. For example:
::
lazy val sub2 = Project(...) aggregate(ext)
lazy val ext = uri("git://github.com/dragos/dupcheck.git")
If 'clean' is executed on `sub2`, it will also be executed on `ext`
(the locally checked out version). If 'clean' is executed on `root`,
it will also be executed on `sub1`, `sub2`, and `ext`.
Aggregation can be controlled more finely by configuring the
`aggregate` setting. This setting is of type `Aggregation`:
::
final class Explicit(val deps: Seq[ProjectReference], val transitive: Boolean) extends Aggregation
This key can be set in any scope, including per-task scopes. By default,
aggregation is disabled for `run`, `console-quick`, `console`, and
`console-project`. Re-enabling it from the command line for the
current project for `run` would look like:
.. code-block:: console
> set aggregate in run := true
(There is an implicit `Boolean => Implicit` where `true` translates
to `Implicit(true)` and `false` translates to `Implicit(false)`).
Similarly, aggregation can be disabled for the current project using:
.. code-block:: console
> set aggregate in clean := false
`Explicit` allows finer control over the execution dependencies and
transitivity. An instance is normally constructed using
`Aggregation.apply`. No new projects may be introduced here (that is,
internal references have to be defined already in the Build's
`projects` and externals must be a dependency in the Build
definition). For example, to declare that `root/clean` aggregates
`sub1/clean` and `sub2/clean` intransitively (that is, excluding
`ext` even though `sub2` aggregates it):
.. code-block:: scala
A classpath dependency declaration consists of a project reference and
an optional configuration mapping. For example, to use project b's
`compile` configuration from project a's `test` configuration:
::
lazy val a = Project(...) dependsOn(b % "test->compile")
lazy val b = Project(...)
`"test->compile"` may be shortened to `"test"` in this case. The
`%` call may be omitted, in which case the mapping is
`"compile->compile"` by default.
A useful configuration declaration is `test->test`. This means to use
a dependency's test classes on the dependent's test classpath.
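For instance, a sketch with made-up project names, sharing test utilities between modules:

```scala
// core's test sources can use util's test classes (shared fixtures,
// generators, and so on).
lazy val util = Project("util", file("util"))
lazy val core = Project("core", file("core")) dependsOn(util % "test->test")
```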
Multiple declarations may be separated by a semicolon. For example, the
following says to use the main classes of `b` for the compile
classpath of `a` as well as the test classes of `b` for the test
classpath of `a`:
::


true) that everything in here is covered elsewhere, this page can be
empty except for links to the new pages.
There are two types of file for configuring a build: a `build.sbt`
file in your project root directory, or a `Build.scala` file in your
`project/` directory. The former is often referred to as a "light",
"quick" or "basic" configuration and the latter is often referred to as
"full" configuration. This page is about "full" configuration.
Naming the Scala build file
===========================
`Build.scala` is the typical name for this build file but in reality
it can be called anything that ends with `.scala` as it is a standard
Scala source file and sbt will detect and use it regardless of its name.
Overview of what goes in the file
=================================
The most basic form of this file defines one object which extends
`sbt.Build` e.g.:
::
// Declarations go here
}
There needs to be at least one `sbt.Project` defined and in this case
we are giving it an arbitrary name and saying that it can be found in
the root of this project. In other words we are saying that this is a
build file to build the current project.
For example, the line:
val apachenet = "commons-net" % "commons-net" % "2.0"
defines a dependency and assigns it to the val `apachenet` but, unless
you refer to that val again in the build file, the name of it is of no
significance to sbt. sbt simply sees that the dependency object exists
and uses it when it needs it.
Combining "light" and "full" configuration files
================================================
It is worth noting at this stage that you can have both a `build.sbt`
file and a `Build.scala` file for the same project. If you do this,
sbt will append the configurations in `build.sbt` to those in the
`Build.scala` file. In fact you can also have multiple `.sbt` files in
your root directory and they are all appended together.
A simple example comparing a "light" and "full" configuration of the same project
=================================================================================
Here is a short "light" `build.sbt` file which defines a build project
with a single test dependency on "scalacheck":
::
libraryDependencies += "org.scalatest" % "scalatest_2.9.0" % "1.4.1" % "test"
Here is an equivalent "full" `Build.scala` file which defines exactly
the same thing:
::
have to explicitly append our settings to the default settings. All of
this work is done for us when we use a "light" build file.
To understand what is really going on you may find it helpful to see
this `Build.scala` without the imports and associated implicit
conversions:
::


Snippets of docs that need to move to another page
==================================================
Temporarily change the logging level and configure how stack traces are
displayed by modifying the `log-level` or `trace-level` settings:
.. code-block:: console
> set logLevel := Level.Warn
Valid `Level` values are `Debug, Info, Warn, Error`.
You can run an action for multiple versions of Scala by prefixing the
action with ``+``. See [[Cross Build]] for details. You can temporarily
switch to another version of Scala using ``++ <version>``. This version
action with `+`. See [[Cross Build]] for details. You can temporarily
switch to another version of Scala using `++ <version>`. This version
does not have to be listed in your build definition, but it does have to
be available in a repository. You can also include the initial command
to run after switching to that version. For example:
@ -39,7 +39,7 @@ Manual Dependency Management
============================
Manually managing dependencies involves copying any jars that you want
to use to the ``lib`` directory. sbt will put these jars on the
to use to the `lib` directory. sbt will put these jars on the
classpath during compilation, testing, running, and when using the
interpreter. You are responsible for adding, removing, updating, and
otherwise managing the jars in this directory. No modifications to your
@ -47,15 +47,15 @@ project definition are required to use this method unless you would like
to change the location of the directory you store the jars in.
To change the directory jars are stored in, change the
``unmanaged-base`` setting in your project definition. For example, to
use ``custom_lib/``:
`unmanaged-base` setting in your project definition. For example, to
use `custom_lib/`:
::
unmanagedBase := baseDirectory.value / "custom_lib"
If you want more control and flexibility, override the
``unmanaged-jars`` task, which ultimately provides the manual
`unmanaged-jars` task, which ultimately provides the manual
dependencies to sbt. The default implementation is roughly:
::
@ -91,7 +91,7 @@ Explicit URL
~~~~~~~~~~~~
If your project requires a dependency that is not present in a
repository, a direct URL to its jar can be specified with the ``from``
repository, a direct URL to its jar can be specified with the `from`
method as follows:
::
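A minimal sketch of this (the module coordinates and URL are invented for illustration)::

libraryDependencies += "org.example" % "mylib" % "1.0" from "https://example.org/jars/mylib-1.0.jar"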
@ -111,7 +111,7 @@ downloads the dependencies of the dependencies you list.)
In some instances, you may find that the dependencies listed for a
project aren't necessary for it to build. Avoid fetching artifact
dependencies with ``intransitive()``, as in this example:
dependencies with `intransitive()`, as in this example:
::
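For instance (the coordinates are illustrative only)::

libraryDependencies += "org.example" % "standalone-lib" % "1.0" intransitive()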
@ -120,7 +120,7 @@ dependencies with ``intransitive()``, as in this example:
Classifiers
~~~~~~~~~~~
You can specify the classifier for a dependency using the ``classifier``
You can specify the classifier for a dependency using the `classifier`
method. For example, to get the jdk15 version of TestNG:
::
@ -128,9 +128,9 @@ method. For example, to get the jdk15 version of TestNG:
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
To obtain particular classifiers for all dependencies transitively, run
the ``update-classifiers`` task. By default, this resolves all artifacts
with the ``sources`` or ``javadoc`` classifier. Select the classifiers to
obtain by configuring the ``transitive-classifiers`` setting. For
the `update-classifiers` task. By default, this resolves all artifacts
with the `sources` or `javadoc` classifier. Select the classifiers to
obtain by configuring the `transitive-classifiers` setting. For
example, to only retrieve sources:
::
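One sketch of this, using the `transitive-classifiers` setting named above::

transitiveClassifiers := Seq("sources")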
@ -141,7 +141,7 @@ Extra Attributes
~~~~~~~~~~~~~~~~
[Extra attributes] can be specified by passing key/value pairs to the
``extra`` method.
`extra` method.
To select dependencies by extra attributes:
@ -179,9 +179,9 @@ Ivy Home Directory
~~~~~~~~~~~~~~~~~~
By default, sbt uses the standard Ivy home directory location
``${user.home}/.ivy2/``. This can be configured machine-wide, for use by
`${user.home}/.ivy2/`. This can be configured machine-wide, for use by
both the sbt launcher and by projects, by setting the system property
``sbt.ivy.home`` in the sbt startup script (described in
`sbt.ivy.home` in the sbt startup script (described in
[[Setup\|Getting Started Setup]]).
For example:
@ -226,7 +226,7 @@ Maven/Ivy
---------
For this method, create the configuration files as you would for Maven
(``pom.xml``) or Ivy (``ivy.xml`` and optionally ``ivysettings.xml``).
(`pom.xml`) or Ivy (`ivy.xml` and optionally `ivysettings.xml`).
External configuration is selected by using one of the following
expressions.
@ -281,7 +281,7 @@ or
Full Ivy Example
~~~~~~~~~~~~~~~~
For example, a ``build.sbt`` using external Ivy files might look like:
For example, a `build.sbt` using external Ivy files might look like:
::
@ -301,10 +301,10 @@ Known limitations
Maven support is dependent on Ivy's support for Maven POMs. Known issues
with this support:
- Specifying ``relativePath`` in the ``parent`` section of a POM will
- Specifying `relativePath` in the `parent` section of a POM will
produce an error.
- Ivy ignores repositories specified in the POM. A workaround is to
specify repositories inline or in an Ivy ``ivysettings.xml`` file.
specify repositories inline or in an Ivy `ivysettings.xml` file.
Configuration dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~

@ -23,7 +23,7 @@ is a quick way of configuring a build, consisting of a list of Scala
expressions describing project settings. A :doc:`full definition <Full-Configuration>` is
made up of one or more Scala source files that describe relationships
between projects and introduce new configurations and settings. This
page introduces the ``Setting`` type, which is used by light and full
page introduces the `Setting` type, which is used by light and full
definitions for general configuration.
Introductory Examples
@ -34,7 +34,7 @@ purpose of getting an idea of what they look like, not for full
comprehension of details, which are described at :doc:`light definition <Basic-Configuration>`
and :doc:`full definition <Full-Configuration>`.
``<base>/build.sbt`` (light)
`<base>/build.sbt` (light)
::
@ -42,7 +42,7 @@ and :doc:`full definition <Full-Configuration>`.
libraryDependencies += "junit" % "junit" % "4.8" % "test"
``<base>/project/Build.scala`` (full)
`<base>/project/Build.scala` (full)
::
@ -61,12 +61,12 @@ and :doc:`full definition <Full-Configuration>`.
Important Settings Background
-----------------------------
The fundamental type of a configurable in sbt is a ``Setting[T]``. Each
line in the ``build.sbt`` example above is of this type. The arguments
to the ``settings`` method in the ``Build.scala`` example are of type
``Setting[T]``. Specifically, the ``name`` setting has type
``Setting[String]`` and the ``libraryDependencies`` setting has type
``Setting[Seq[ModuleID]]``, where ``ModuleID`` represents a dependency.
The fundamental type of a configurable in sbt is a `Setting[T]`. Each
line in the `build.sbt` example above is of this type. The arguments
to the `settings` method in the `Build.scala` example are of type
`Setting[T]`. Specifically, the `name` setting has type
`Setting[String]` and the `libraryDependencies` setting has type
`Setting[Seq[ModuleID]]`, where `ModuleID` represents a dependency.
Throughout the documentation, many examples show a setting, such as:
@ -75,22 +75,22 @@ Throughout the documentation, many examples show a setting, such as:
libraryDependencies += "junit" % "junit" % "4.8" % "test"
This setting expression either goes in a :doc:`light definition <Basic-Configuration>`
``(build.sbt)`` as is or in the ``settings`` of a ``Project`` instance
`(build.sbt)` as is or in the `settings` of a `Project` instance
in a :doc:`full definition <Full-Configuration>`
``(Build.scala)`` as shown in the example. This is an important point to
`(Build.scala)` as shown in the example. This is an important point to
understanding the context of examples in the documentation. (That is,
you now know where to copy and paste examples.)
A ``Setting[T]`` describes how to initialize a setting of type ``T``.
A `Setting[T]` describes how to initialize a setting of type `T`.
The settings shown in the examples are expressions, not statements. In
particular, there is no hidden mutable map that is being modified. Each
``Setting[T]`` is a value that describes an update to a map. The actual
`Setting[T]` is a value that describes an update to a map. The actual
map is rarely directly referenced by user code. It is not the final map
that is usually important, but the operations on the map.
To emphasize this, the setting in the following ``Build.scala`` fragment
To emphasize this, the setting in the following `Build.scala` fragment
*is ignored* because it is a value that needs to be included in the
``settings`` of a ``Project``. (Unfortunately, Scala will discard
`settings` of a `Project`. (Unfortunately, Scala will discard
non-Unit values to get Unit, which is why there is no compile error.)
::
@ -112,19 +112,19 @@ Declaring a Setting
-------------------
There is fundamentally one type of initialization, represented by the
``<<=`` method. The other initialization methods ``:=``, ``+=``,
``++=``, ``<+=``, ``<++=``, and ``~=`` are convenience methods that can
be defined in terms of ``<<=``.
`<<=` method. The other initialization methods `:=`, `+=`,
`++=`, `<+=`, `<++=`, and `~=` are convenience methods that can
be defined in terms of `<<=`.
The motivation behind the method names is:
- All methods end with ``=`` to obtain the lowest possible infix
- All methods end with `=` to obtain the lowest possible infix
precedence.
- A method starting with ``<`` indicates that the initialization uses
- A method starting with `<` indicates that the initialization uses
other settings.
- A single ``+`` means a single value is expected and will be appended
- A single `+` means a single value is expected and will be appended
to the current sequence.
- ``++`` means a ``Seq[T]`` is expected. The sequence will be appended
- `++` means a `Seq[T]` is expected. The sequence will be appended
to the current sequence.
The following sections include descriptions and examples of each
@ -140,7 +140,7 @@ section.
:=
~~
``:=`` is used to define a setting that overwrites any previous value
`:=` is used to define a setting that overwrites any previous value
without referring to other settings. For example, the following defines
a setting that will set *name* to "My Project" regardless of whether
*name* has already been initialized.
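A sketch of such a setting::

name := "My Project"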
@ -154,7 +154,7 @@ No other settings are used. The value assigned is just a constant.
+= and ++=
~~~~~~~~~~
``+=`` is used to define a setting that will append a single value to
`+=` is used to define a setting that will append a single value to
the current sequence without referring to other settings. For example,
the following defines a setting that will append a JUnit dependency to
*libraryDependencies*. No other settings are referenced.
@ -163,10 +163,10 @@ the following defines a setting that will append a JUnit dependency to
libraryDependencies += "junit" % "junit" % "4.8" % "test"
The related method ``++=`` appends a sequence to the current sequence,
The related method `++=` appends a sequence to the current sequence,
also without using other settings. For example, the following defines a
setting that will add dependencies on ScalaCheck and specs to the
current list of dependencies. Because it will append a ``Seq``, it uses
current list of dependencies. Because it will append a `Seq`, it uses
++= instead of +=.
::
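A hedged sketch of such a declaration (the version numbers are illustrative)::

libraryDependencies ++= Seq(
    "org.scalacheck" %% "scalacheck" % "1.10.1" % "test",
    "org.specs2" %% "specs2" % "1.14" % "test"
)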
@ -188,8 +188,8 @@ for the provided instances.
~=
~~
``~=`` is used to transform the current value of a setting. For example,
the following defines a setting that will remove ``-Y`` compiler options
`~=` is used to transform the current value of a setting. For example,
the following defines a setting that will remove `-Y` compiler options
from the current list of compiler options.
::
@ -198,7 +198,7 @@ from the current list of compiler options.
options filterNot ( _ startsWith "-Y" )
}
The earlier declaration of JUnit as a library dependency using ``+=``
The earlier declaration of JUnit as a library dependency using `+=`
could also be written as:
::
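A sketch of the equivalent `~=` form, which appends JUnit to whatever the current sequence is::

libraryDependencies ~= { deps =>
    deps :+ ("junit" % "junit" % "4.8" % "test")
}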
@ -223,7 +223,7 @@ declaring JUnit as a dependency using <<= would look like:
}
This defines a setting that will apply the provided function to the
previous value of *libraryDependencies*. ``apply`` and ``Seq[ModuleID]``
previous value of *libraryDependencies*. `apply` and `Seq[ModuleID]`
are explicit for demonstration only and may be omitted.
<+= and <++=
@ -278,12 +278,12 @@ This type has two parts: a key (of type
`SettingKey <../../api/sbt/SettingKey.html>`_)
and a scope (of type
`Scope <../../api/sbt/Scope$.html>`_). An
unspecified scope is like using ``this`` to refer to the current
unspecified scope is like using `this` to refer to the current
context. The previous examples on this page have not defined an explicit
scope. See [[Inspecting Settings]] for details on the axes that make up
scopes.
The target (the value on the left) of a method like ``:=`` identifies
The target (the value on the left) of a method like `:=` identifies
one of the main constructs in sbt: a setting, a task, or an input task.
It is not an actual setting or task, but a key representing a setting or
task. A setting is a value assigned when a project is loaded. A task is
@ -305,11 +305,11 @@ understanding of this page).
To construct a
`ScopedSetting <../../api/sbt/ScopedSetting.html>`_,
select the key and then scope it using the ``in`` method (see the
select the key and then scope it using the `in` method (see the
`ScopedSetting <../../api/sbt/ScopedSetting.html>`_
for API details). For example, the setting for compiler options for the
test sources is referenced using the *scalacOptions* key and the
``Test`` configuration in the current project.
`Test` configuration in the current project.
::
@ -337,14 +337,14 @@ The right hand side of a setting definition varies by the initialization
method used. In the case of :=, +=, ++=, and ~=, the type of the
argument is straightforward (see the
`ScopedSetting <../../api/sbt/ScopedSetting.html>`_
API). For <<=, <+=, and <++=, the type is ``Initialize[T]`` (for <<= and
<+=) or ``Initialize[Seq[T]]`` (for <++=). This section discusses the
API). For <<=, <+=, and <++=, the type is `Initialize[T]` (for <<= and
<+=) or `Initialize[Seq[T]]` (for <++=). This section discusses the
`Initialize <../../api/sbt/Init$Initialize.html>`_
type.
A value of type ``Initialize[T]`` represents a computation that takes
A value of type `Initialize[T]` represents a computation that takes
the values of other settings as inputs. For example, in the following
setting, the argument to <<= is of type ``Initialize[File]``:
setting, the argument to <<= is of type `Initialize[File]`:
::
@ -362,8 +362,8 @@ This example can be written more explicitly as:
key.<<=(init)
}
To construct a value of type ``Initialize``, construct a tuple of up to
nine input ``ScopedSetting``\ s. Then, define the function that will
To construct a value of type `Initialize`, construct a tuple of up to
nine input `ScopedSetting`\ s. Then, define the function that will
compute the value of the setting given the values for these input
settings.
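A sketch of what such a definition can look like (`jarPath` is a hypothetical `SettingKey[File]`)::

jarPath <<= (baseDirectory, name, version) { (base, n, v) =>
    base / "jars" / (n + "-" + v + ".jar")
}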
@ -376,7 +376,7 @@ settings.
This example takes the base directory, project name, and project version
as inputs. The keys for these settings are defined in [sbt.Keys], along
with all other built-in keys. The argument to the ``apply`` method is a
with all other built-in keys. The argument to the `apply` method is a
function that takes the values of those settings and computes a new
value. In this case, that value is the path of a jar.
@ -388,8 +388,8 @@ differences. First, the inputs are of type [ScopedTaskable]. This means
that either settings
(`ScopedSetting <../../api/sbt/ScopedSetting.html>`_)
or tasks ([ScopedTask]) may be used as the input to a task. Second, the
name of the method used is ``map`` instead of ``apply`` and the
resulting value is of type ``Initialize[Task[T]]``. In the following
name of the method used is `map` instead of `apply` and the
resulting value is of type `Initialize[Task[T]]`. In the following
example, the inputs are the [report\|Update-Report] produced by the
*update* task and the context *configuration*. The function computes the
locations of the dependencies for that configuration.
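A hedged sketch (`locations` is a hypothetical `TaskKey[Seq[File]]`, and `configurationFilter` is assumed to select report entries for the named configuration)::

locations <<= (update, configuration) map { (report, conf) =>
    report.select(configurationFilter(conf.name))
}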
@ -403,5 +403,5 @@ locations of the dependencies for that configuration.
As before, *update* and *configuration* are defined in
`Keys <../../sxr/Keys.scala.html>`_.
*update* is of type ``TaskKey[UpdateReport]`` and *configuration* is of
type ``SettingKey[Configuration]``.
*update* is of type `TaskKey[UpdateReport]` and *configuration* is of
type `SettingKey[Configuration]`.

@ -5,7 +5,7 @@ Advanced Command Example
This is an advanced example showing some of the power of the new
settings system. It shows how to temporarily modify all declared
dependencies in the build, regardless of where they are defined. It
directly operates on the final ``Seq[Setting[_]]`` produced from every
directly operates on the final `Seq[Setting[_]]` produced from every
setting involved in the build.
The modifications are applied by running *canonicalize*. A *reload* or

@ -4,19 +4,19 @@ Advanced Configurations Example
This is an example :doc:`full build definition </Getting-Started/Full-Def>` that
demonstrates using Ivy configurations to group dependencies.
The ``utils`` module provides utilities for other modules. It uses Ivy
The `utils` module provides utilities for other modules. It uses Ivy
configurations to group dependencies so that a dependent project doesn't
have to pull in all dependencies if it only uses a subset of
functionality. This can be an alternative to having multiple utilities
modules (and consequently, multiple utilities jars).
In this example, consider a ``utils`` project that provides utilities
In this example, consider a `utils` project that provides utilities
related to both Scalate and Saxon. It therefore needs both Scalate and
Saxon on the compilation classpath and a project that uses all of the
functionality of 'utils' will need these dependencies as well. However,
project ``a`` only needs the utilities related to Scalate, so it doesn't
need Saxon. By depending only on the ``scalate`` configuration of
``utils``, it only gets the Scalate-related dependencies.
project `a` only needs the utilities related to Scalate, so it doesn't
need Saxon. By depending only on the `scalate` configuration of
`utils`, it only gets the Scalate-related dependencies.
::
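The dependency described above might be declared roughly like this in a full build definition::

lazy val utils = Project("utils", file("utils"))

lazy val a = Project("a", file("a")).dependsOn(utils % "compile->scalate")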

@ -112,14 +112,14 @@ Custom Builder
--------------
Once a project is resolved, it needs to be built and then presented to
sbt as an instance of ``sbt.BuildUnit``. A custom builder has type:
sbt as an instance of `sbt.BuildUnit`. A custom builder has type:
::
BuildInfo => Option[() => BuildUnit]
A builder returns None if it does not want to handle the build
identified by the ``BuildInfo``. Otherwise, it provides a function that
identified by the `BuildInfo`. Otherwise, it provides a function that
will load the build when evaluated. Register a builder by passing it to
*BuildLoader.build* and overriding *Build.buildLoaders* with the result:
@ -192,7 +192,7 @@ project/ directory.
Custom Transformer
------------------
Once a project has been loaded into an ``sbt.BuildUnit``, it is
Once a project has been loaded into an `sbt.BuildUnit`, it is
transformed by all registered transformers. A custom transformer has
type:
@ -228,36 +228,36 @@ Relevant API documentation for custom transformers:
Manipulating Project Dependencies in Settings
=============================================
The ``buildDependencies`` setting, in the Global scope, defines the
The `buildDependencies` setting, in the Global scope, defines the
aggregation and classpath dependencies between projects. By default,
this information comes from the dependencies defined by ``Project``
instances by the ``aggregate`` and ``dependsOn`` methods. Because
``buildDependencies`` is a setting and is used everywhere dependencies
this information comes from the dependencies defined by `Project`
instances by the `aggregate` and `dependsOn` methods. Because
`buildDependencies` is a setting and is used everywhere dependencies
need to be known (once all projects are loaded), plugins and build
definitions can transform it to manipulate inter-project dependencies at
setting evaluation time. The only requirement is that no new projects
are introduced because all projects are loaded before settings get
evaluated. That is, all Projects must have been declared directly in a
Build or referenced as the argument to ``Project.aggregate`` or
``Project.dependsOn``.
Build or referenced as the argument to `Project.aggregate` or
`Project.dependsOn`.
The BuildDependencies type
--------------------------
The type of the ``buildDependencies`` setting is
The type of the `buildDependencies` setting is
`BuildDependencies </api/sbt/BuildDependencies.html>`_.
``BuildDependencies`` provides mappings from a project to its aggregate
`BuildDependencies` provides mappings from a project to its aggregate
or classpath dependencies. For classpath dependencies, a dependency has
type ``ClasspathDep[ProjectRef]``, which combines a ``ProjectRef`` with
type `ClasspathDep[ProjectRef]`, which combines a `ProjectRef` with
a configuration (see `ClasspathDep <../../api/sbt/ClasspathDep.html>`_
and `ProjectRef <../../api/sbt/ProjectRef.html>`_). For aggregate
dependencies, the type of a dependency is just ``ProjectRef``.
dependencies, the type of a dependency is just `ProjectRef`.
The API for ``BuildDependencies`` is not extensive, covering only a
The API for `BuildDependencies` is not extensive, covering only a
little more than the minimum required, and related APIs have more of an
internal, unpolished feel. Most manipulations consist of modifying the
relevant map (classpath or aggregate) manually and creating a new
``BuildDependencies`` instance.
`BuildDependencies` instance.
Example
~~~~~~~
@ -285,6 +285,6 @@ like a local directory.
It is not limited to such basic translations, however. The configuration
a dependency is defined in may be modified and dependencies may be added
or removed. Modifying ``buildDependencies`` can be combined with
modifying ``libraryDependencies`` to convert binary dependencies to and
or removed. Modifying `buildDependencies` can be combined with
modifying `libraryDependencies` to convert binary dependencies to and
from source dependencies, for example.

@ -5,16 +5,16 @@ State and actions
`State <../../api/sbt/State$.html>`_ is the entry point to all available
information in sbt. The key methods are:
- ``definedCommands: Seq[Command]`` returns all registered Command
- `definedCommands: Seq[Command]` returns all registered Command
definitions
- ``remainingCommands: Seq[String]`` returns the remaining commands to
- `remainingCommands: Seq[String]` returns the remaining commands to
be run
- ``attributes: AttributeMap`` contains generic data.
- `attributes: AttributeMap` contains generic data.
The action part of a command performs work and transforms ``State``. The
following sections discuss ``State => State`` transformations. As
The action part of a command performs work and transforms `State`. The
following sections discuss `State => State` transformations. As
mentioned previously, a command will typically handle a parsed value as
well: ``(State, T) => State``.
well: `(State, T) => State`.
Command-related data
--------------------
@ -60,7 +60,7 @@ commands run. The second inserts a command that will run next. The
remaining commands will run after the inserted command completes.
To indicate that a command has failed and execution should not continue,
return ``state.fail``.
return `state.fail`.
::
@ -72,7 +72,7 @@ return ``state.fail``.
Project-related data
--------------------
Project-related information is stored in ``attributes``. Typically,
Project-related information is stored in `attributes`. Typically,
commands won't access this directly but will instead use a convenience
method to extract the most useful information:
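In code, this typically looks something like::

val extracted: Extracted = Project.extract(state)
import extracted._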
@ -84,19 +84,19 @@ method to extract the most useful information:
`Extracted <../../api/sbt/Extracted.html>`_ provides:
- Access to the current build and project (``currentRef``)
- Access to initialized project setting data (``structure.data``)
- Access to session ``Setting``\ s and the original, permanent settings
from ``.sbt`` and ``.scala`` files (``session.append`` and
``session.original``, respectively)
- Access to the current build and project (`currentRef`)
- Access to initialized project setting data (`structure.data`)
- Access to session `Setting`\ s and the original, permanent settings
from `.sbt` and `.scala` files (`session.append` and
`session.original`, respectively)
- Access to the current `Eval <../../api/sbt/compiler/Eval.html>`_
instance for evaluating Scala expressions in the build context.
Project data
------------
All project data is stored in ``structure.data``, which is of type
``sbt.Settings[Scope]``. Typically, one gets information of type ``T``
All project data is stored in `structure.data`, which is of type
`sbt.Settings[Scope]`. Typically, one gets information of type `T`
in the following way:
::
@ -105,13 +105,13 @@ in the following way:
val scope: Scope
val value: Option[T] = key in scope get structure.data
Here, a ``SettingKey[T]`` is typically obtained from
Here, a `SettingKey[T]` is typically obtained from
`Keys <../../api/sbt/Keys$.html>`_ and is the same type that is used to
define settings in ``.sbt`` files, for example.
define settings in `.sbt` files, for example.
`Scope <../../api/sbt/Scope.html>`_ selects the scope the key is
obtained for. There are convenience overloads of ``in`` that can be used
obtained for. There are convenience overloads of `in` that can be used
to specify only the required scope axes. See
`Structure.scala <../../sxr/Structure.scala.html>`_ for where ``in`` and
`Structure.scala <../../sxr/Structure.scala.html>`_ for where `in` and
other parts of the settings interface are defined. Some examples:
::
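Following that pattern, a couple of illustrative lookups (`currentRef` as provided by `Extracted`)::

val projectName: Option[String] = name in currentRef get structure.data

val org: Option[String] = organization in (currentRef, Compile) get structure.data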
@ -134,10 +134,10 @@ information about build and project relationships. Key members are:
units: Map[URI, LoadedBuildUnit]
root: URI
A ``URI`` identifies a build and ``root`` identifies the initial build
A `URI` identifies a build and `root` identifies the initial build
loaded. `LoadedBuildUnit <../../api/sbt/Load$$LoadedBuildUnit.html>`_
provides information about a single build. The key members of
``LoadedBuildUnit`` are:
`LoadedBuildUnit` are:
::
@ -148,20 +148,20 @@ provides information about a single build. The key members of
defined: Map[String, ResolvedProject]
`ResolvedProject <../../api/sbt/ResolvedProject.html>`_ has the same
information as the ``Project`` used in a ``project/Build.scala`` except
information as the `Project` used in a `project/Build.scala` except
that `ProjectReferences <../../api/sbt/ProjectReference.html>`_ are
resolved to ``ProjectRef``\ s.
resolved to `ProjectRef`\ s.
Classpaths
----------
Classpaths in sbt 0.10+ are of type ``Seq[Attributed[File]]``. This
Classpaths in sbt 0.10+ are of type `Seq[Attributed[File]]`. This
allows tagging arbitrary information to classpath entries. sbt currently
uses this to associate an ``Analysis`` with an entry. This is how it
uses this to associate an `Analysis` with an entry. This is how it
manages the information needed for multi-project incremental
recompilation. It also associates the ModuleID and Artifact with managed
entries (those obtained by dependency management). When you only want
the underlying ``Seq[File]``, use ``files``:
the underlying `Seq[File]`, use `files`:
::
@ -175,7 +175,7 @@ It can be useful to run a specific project task from a
:doc:`command <Commands>` (*not from another task*) and get its
result. For example, an IDE-related command might want to get the
classpath from a project or a task might analyze the results of a
compilation. The relevant method is ``Project.evaluateTask``, which has
compilation. The relevant method is `Project.evaluateTask`, which has
the following signature:
::

@ -27,8 +27,8 @@ There are three files in this example:
To try out this example:
1. Put the first two files in a new directory
2. Run ``sbt publishLocal`` in that directory
3. Run ``sbt @path/to/hello.build.properties`` to run the application.
2. Run `sbt publishLocal` in that directory
3. Run `sbt @path/to/hello.build.properties` to run the application.
As with sbt itself, you can specify commands from the command line
(batch mode) or run them at a prompt (interactive mode).
@ -38,8 +38,8 @@ Build Definition: build.sbt
The build.sbt file should define the standard settings: name, version,
and organization. To use the sbt command system, a dependency on the
``command`` module is needed. To use the task system, add a dependency
on the ``task-system`` module as well.
`command` module is needed. To use the task system, add a dependency
on the `task-system` module as well.
::
@ -105,7 +105,7 @@ Launcher configuration file: hello.build.properties
The launcher needs a configuration file in order to retrieve and run an
application.
``hello.build.properties``
`hello.build.properties`
.. code-block:: ini

@ -27,26 +27,26 @@ There are three main aspects to commands:
In sbt, the syntax part, including tab completion, is specified with
parser combinators. If you are familiar with the parser combinators in
Scala's standard library, these are very similar. The action part is a
function ``(State, T) => State``, where ``T`` is the data structure
function `(State, T) => State`, where `T` is the data structure
produced by the parser. See the :doc:`/Detailed-Topics/Parsing-Input`
page for how to use the parser combinators.
`State <../../api/sbt/State.html>`_ provides access to the build state,
such as all registered ``Command``\ s, the remaining commands to
such as all registered `Command`\ s, the remaining commands to
execute, and all project-related information. See :doc:`Build-State`
for details on State.
Finally, basic help information may be provided that is used by the
``help`` command to display command help.
`help` command to display command help.
Defining a Command
==================
A command combines a function ``State => Parser[T]`` with an action
``(State, T) => State``. The reason for ``State => Parser[T]`` and not
simply ``Parser[T]`` is that often the current ``State`` is used to
A command combines a function `State => Parser[T]` with an action
`(State, T) => State`. The reason for `State => Parser[T]` and not
simply `Parser[T]` is that often the current `State` is used to
build the parser. For example, the currently loaded projects (provided
by ``State``) determine valid completions for the ``project`` command.
by `State`) determine valid completions for the `project` command.
Examples for the general and specific cases are shown in the following
sections.
@ -103,14 +103,14 @@ multiple arguments separated by spaces.
Full Example
============
The following example is a valid ``project/Build.scala`` that adds
The following example is a valid `project/Build.scala` that adds
commands to a project. To try it out:
1. Copy the following build definition into ``project/Build.scala`` for
1. Copy the following build definition into `project/Build.scala` for
a new project.
2. Run sbt on the project.
3. Try out the ``hello``, ``helloAll``, ``failIfTrue``, ``color``,
and ``printState`` commands.
3. Try out the `hello`, `helloAll`, `failIfTrue`, `color`,
and `printState` commands.
4. Use tab-completion and the code below as guidance.
::

@ -11,10 +11,10 @@ system.
Input Keys
==========
A key for an input task is of type ``InputKey`` and represents the input
task like a ``SettingKey`` represents a setting or a ``TaskKey``
A key for an input task is of type `InputKey` and represents the input
task like a `SettingKey` represents a setting or a `TaskKey`
represents a task. Define a new input task key using the
``inputKey.apply`` factory method:
`inputKey.apply` factory method:
::
@ -23,15 +23,15 @@ represents a task. Define a new input task key using the
The definition of an input task is similar to that of a normal task, but it can
also use the result of a `Parser </Detailed-Topics/Parsing-Input>`_ applied to
user input. Just as the special ``value`` method gets the value of a
setting or task, the special ``parsed`` method gets the result of a ``Parser``.
user input. Just as the special `value` method gets the value of a
setting or task, the special `parsed` method gets the result of a `Parser`.
Basic Input Task Definition
===========================
The simplest input task accepts a space-delimited sequence of arguments.
It does not provide useful tab completion and parsing is basic. The built-in
parser for space-delimited arguments is constructed via the ``spaceDelimited``
parser for space-delimited arguments is constructed via the `spaceDelimited`
method, which accepts as its only argument the label to present to the user
during tab completion.
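A minimal sketch of such a definition, with an illustrative key name:

```scala
import sbt._
import sbt.Keys._
import sbt.complete.DefaultParsers._

// Sketch: the simplest input task, parsing space-delimited arguments.
val demo = inputKey[Unit]("A demo input task.")

demo := {
  // "<arg>" is the label shown to the user during tab completion
  val args: Seq[String] = spaceDelimited("<arg>").parsed
  args.foreach(arg => println("arg: " + arg))
}
```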
@@ -52,26 +52,26 @@ the arguments passed to it on their own line.
Input Task using Parsers
========================
The Parser provided by the ``spaceDelimited`` method does not provide
The Parser provided by the `spaceDelimited` method does not provide
any flexibility in defining the input syntax. Using a custom parser
is just a matter of defining your own ``Parser`` as described on the
is just a matter of defining your own `Parser` as described on the
:doc:`/Detailed-Topics/Parsing-Input` page.
Constructing the Parser
-----------------------
The first step is to construct the actual ``Parser`` by defining a value
The first step is to construct the actual `Parser` by defining a value
of one of the following types:
* ``Parser[I]``: a basic parser that does not use any settings
* ``Initialize[Parser[I]]``: a parser whose definition depends on one or more settings
* ``Initialize[State => Parser[I]]``: a parser that is defined using both settings and the current :doc:`state <Build-State>`
* `Parser[I]`: a basic parser that does not use any settings
* `Initialize[Parser[I]]`: a parser whose definition depends on one or more settings
* `Initialize[State => Parser[I]]`: a parser that is defined using both settings and the current :doc:`state <Build-State>`
We already saw an example of the first case with ``spaceDelimited``, which doesn't use any settings in its definition.
As an example of the third case, the following defines a contrived ``Parser`` that uses the
We already saw an example of the first case with `spaceDelimited`, which doesn't use any settings in its definition.
As an example of the third case, the following defines a contrived `Parser` that uses the
project's Scala and sbt version settings as well as the state. To use these settings, we
need to wrap the Parser construction in ``Def.setting`` and get the setting values with the
special ``value`` method:
need to wrap the Parser construction in `Def.setting` and get the setting values with the
special `value` method:
::
@@ -86,16 +86,16 @@ special ``value`` method:
token(state.remainingCommands.size.toString) )
}
This Parser definition will produce a value of type ``(String,String)``.
This Parser definition will produce a value of type `(String,String)`.
The input syntax defined isn't very flexible; it is just a demonstration. It
will produce one of the following values for a successful parse
(assuming the current Scala version is 2.10.0, the current sbt version is
0.13.0, and there are 3 commands left to run):
(assuming the current Scala version is |scalaRelease|, the current sbt version is
|release|, and there are 3 commands left to run):
.. code-block:: text
.. parsed-literal::
("scala", "2.10.0")
("sbt", "0.13.0")
("scala", "|scalaRelease|")
("sbt", "|release|")
("commands", "3")
Again, we were able to access the current Scala and sbt version for the project because
@@ -105,11 +105,11 @@ Constructing the Task
---------------------
Next, we construct the actual task to execute from the result of the
``Parser``. For this, we define a task as usual, but we can access the
result of parsing via the special ``parsed`` method on ``Parser``.
`Parser`. For this, we define a task as usual, but we can access the
result of parsing via the special `parsed` method on `Parser`.
The following contrived example uses the previous example's output (of
type ``(String,String)``) and the result of the ``package`` task to
type `(String,String)`) and the result of the `package` task to
print some information to the screen.
::
@@ -124,25 +124,25 @@ print some information to the screen.
The InputTask type
==================
It helps to look at the ``InputTask`` type to understand more advanced usage of input tasks.
It helps to look at the `InputTask` type to understand more advanced usage of input tasks.
The core input task type is:
::
class InputTask[T](val parser: State => Parser[Task[T]])
Normally, an input task is assigned to a setting and you work with ``Initialize[InputTask[T]]``.
Normally, an input task is assigned to a setting and you work with `Initialize[InputTask[T]]`.
Breaking this down,
1. You can use other settings (via ``Initialize``) to construct an input task.
2. You can use the current ``State`` to construct the parser.
1. You can use other settings (via `Initialize`) to construct an input task.
2. You can use the current `State` to construct the parser.
3. The parser accepts user input and provides tab completion.
4. The parser produces the task to run.
So, you can use settings or ``State`` to construct the parser that defines an input task's command line syntax.
So, you can use settings or `State` to construct the parser that defines an input task's command line syntax.
This was described in the previous section.
You can then use settings, ``State``, or user input to construct the task to run.
You can then use settings, `State`, or user input to construct the task to run.
This is implicit in the input task syntax.
@@ -151,17 +151,17 @@ Using other input tasks
=======================
The types involved in an input task are composable, so it is possible to reuse input tasks.
The ``.parsed`` and ``.evaluated`` methods are defined on InputTasks to make this more convenient in common situations:
The `.parsed` and `.evaluated` methods are defined on InputTasks to make this more convenient in common situations:
* Call ``.parsed`` on an ``InputTask[T]`` or ``Initialize[InputTask[T]]`` to get the ``Task[T]`` created after parsing the command line
* Call ``.evaluated`` on an ``InputTask[T]`` or ``Initialize[InputTask[T]]`` to get the value of type ``T`` from evaluating that task
* Call `.parsed` on an `InputTask[T]` or `Initialize[InputTask[T]]` to get the `Task[T]` created after parsing the command line
* Call `.evaluated` on an `InputTask[T]` or `Initialize[InputTask[T]]` to get the value of type `T` from evaluating that task
In both situations, the underlying ``Parser`` is sequenced with other parsers in the input task definition.
In the case of ``.evaluated``, the generated task is evaluated.
In both situations, the underlying `Parser` is sequenced with other parsers in the input task definition.
In the case of `.evaluated`, the generated task is evaluated.
The following example applies the ``run`` input task, a literal separator parser ``--``, and ``run`` again.
The following example applies the `run` input task, a literal separator parser `--`, and `run` again.
The parsers are sequenced in order of syntactic appearance,
so that the arguments before ``--`` are passed to the first ``run`` and the ones after are passed to the second.
so that the arguments before `--` are passed to the first `run` and the ones after are passed to the second.
::
@@ -193,11 +193,11 @@ For a main class Demo that echoes its arguments, this looks like:
Preapplying input
=================
Because ``InputTasks`` are built from ``Parsers``, it is possible to generate a new ``InputTask`` by applying some input programmatically.
Two convenience methods are provided on ``InputTask[T]`` and ``Initialize[InputTask[T]]`` that accept the String to apply.
Because `InputTasks` are built from `Parsers`, it is possible to generate a new `InputTask` by applying some input programmatically.
Two convenience methods are provided on `InputTask[T]` and `Initialize[InputTask[T]]` that accept the String to apply.
* ``partialInput`` applies the input and allows further input, such as from the command line
* ``fullInput`` applies the input and terminates parsing, so that further input is not accepted
* `partialInput` applies the input and allows further input, such as from the command line
* `fullInput` applies the input and terminates parsing, so that further input is not accepted
In each case, the input is applied to the input task's parser.
Because input tasks handle all input after the task name, they usually require initial whitespace to be provided in the input.
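A sketch of both methods (the key names are illustrative; note the leading space in the applied input):

```scala
// Sketch: derive new input tasks by preapplying input to `run`.
// `partialInput` leaves the parser open for further command-line input;
// `fullInput` terminates parsing so no further input is accepted.
lazy val runVerbose = inputKey[Unit]("run, always passing --verbose first")
lazy val runFixed   = inputKey[Unit]("run with a fixed argument list")

runVerbose := (run in Compile).partialInput(" --verbose").evaluated
runFixed   := (run in Compile).fullInput(" --help").evaluated
```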
@@ -205,10 +205,10 @@ Because input tasks handle all input after the task name, they usually require i
Consider the example in the previous section.
We can modify it so that we:
* Explicitly specify all of the arguments to the first ``run``. We use ``name`` and ``version`` to show that settings can be used to define and modify parsers.
* Define the initial arguments passed to the second ``run``, but allow further input on the command line.
* Explicitly specify all of the arguments to the first `run`. We use `name` and `version` to show that settings can be used to define and modify parsers.
* Define the initial arguments passed to the second `run`, but allow further input on the command line.
NOTE: the current implementation of ``:=`` doesn't actually support applying input derived from settings yet.
NOTE: the current implementation of `:=` doesn't actually support applying input derived from settings yet.
::


@@ -2,25 +2,25 @@
Plugins Best Practices
======================
*This page is intended primarily for SBT plugin authors.*
*This page is intended primarily for sbt plugin authors.*
A plugin developer should strive for consistency and ease of use.
Specifically:
- Plugins should play well with other plugins. Avoiding namespace
clashes (in both SBT and Scala) is paramount.
clashes (in both sbt and Scala) is paramount.
- Plugins should follow consistent conventions. The experiences of an
SBT *user* should be consistent, no matter what plugins are pulled
sbt *user* should be consistent, no matter what plugins are pulled
in.
Here are some current plugin best practices. **NOTE:** Best practices
are evolving, so check back frequently.
Avoid overriding ``settings``
Avoid overriding `settings`
-----------------------------
SBT will automatically load your plugin's ``settings`` into the build.
Overriding ``val settings`` should only be done by plugins intending to
sbt will automatically load your plugin's `settings` into the build.
Overriding `val settings` should only be done by plugins intending to
provide commands. Regular plugins defining tasks and settings should
provide a sequence named after the plugin like so:
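For example (a sketch; the plugin and key names are illustrative):

```scala
import sbt._

object ObfuscatePlugin extends Plugin {
  // Expose settings under a plugin-specific name instead of
  // overriding `settings`; users add them to projects explicitly.
  lazy val obfuscateSettings: Seq[Def.Setting[_]] = Seq(
    // the plugin's tasks and settings go here
  )
}
```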
@@ -34,24 +34,24 @@ used. See later section for how the settings should be scoped.
Reuse existing keys
-------------------
SBT has a number of `predefined keys <../../api/sbt/Keys%24.html>`_.
sbt has a number of `predefined keys <../../api/sbt/Keys%24.html>`_.
Where possible, reuse them in your plugin. For instance, don't define:
::
val sourceFiles = settingKey[Seq[File]]("Some source files")
Instead, simply reuse SBT's existing ``sources`` key.
Instead, simply reuse sbt's existing `sources` key.
Avoid namespace clashes
-----------------------
Sometimes, you need a new key, because there is no existing SBT key. In
Sometimes, you need a new key, because there is no existing sbt key. In
this case, use a plugin-specific prefix, both in the (string) key name
used in the SBT namespace and in the Scala ``val``. There are two
used in the sbt namespace and in the Scala `val`. There are two
acceptable ways to accomplish this goal.
Just use a ``val`` prefix
Just use a `val` prefix
~~~~~~~~~~~~~~~~~~~~~~~~~
::
@@ -61,7 +61,7 @@ Just use a ``val`` prefix
val obfuscateStylesheet = settingKey[File]("Obfuscate stylesheet")
}
In this approach, every ``val`` starts with ``obfuscate``. A user of the
In this approach, every `val` starts with `obfuscate`. A user of the
plugin would refer to the settings like this:
::
@@ -114,14 +114,14 @@ and your plugin defines a target directory to receive the resulting
PDFs. That target directory is scoped in its own configuration, so it is
distinct from other target directories. Thus, these two definitions use
the same *key*, but they represent distinct *values*. So, in a user's
``build.sbt``, we might see:
`build.sbt`, we might see:
::
target in PDFPlugin := baseDirectory.value / "mytarget" / "pdf"
target in Compile := baseDirectory.value / "mytarget"
In the PDF plugin, this is achieved with an ``inConfig`` definition:
In the PDF plugin, this is achieved with an `inConfig` definition:
::
@@ -155,9 +155,9 @@ When defining a new type of configuration, e.g.
should be used to create a "cross-task" configuration. The task
definitions don't change in this case, but the default configuration
does. For example, the ``profile`` configuration can extend the test
does. For example, the `profile` configuration can extend the test
configuration with additional settings and changes to allow profiling in
SBT. Plugins should not create arbitrary Configurations, but utilize
sbt. Plugins should not create arbitrary Configurations, but utilize
them for specific purposes and builds.
Configurations actually tie into dependency resolution (with Ivy) and
@@ -195,10 +195,10 @@ Split your settings by the configuration axis like so:
sources in obfuscate := sources.value
)
The ``baseObfuscateSettings`` value provides base configuration for the
The `baseObfuscateSettings` value provides base configuration for the
plugin's tasks. This can be re-used in other configurations if projects
require it. The ``obfuscateSettings`` value provides the default
``Compile`` scoped settings for projects to use directly. This gives the
require it. The `obfuscateSettings` value provides the default
`Compile` scoped settings for projects to use directly. This gives the
greatest flexibility in using features provided by a plugin. Here's how
the raw settings may be reused:
@@ -236,8 +236,8 @@ task itself.
sources in obfuscate := sources.value
)
In the above example, ``sources in obfuscate`` is scoped under the main
task, ``obfuscate``.
In the above example, `sources in obfuscate` is scoped under the main
task, `obfuscate`.
Mucking with Global build state
-------------------------------
@@ -255,11 +255,11 @@ First, make sure your user does not include global build configuration in
val main = project(file("."), "root") settings(MyPlugin.globalSettings:_*) // BAD!
}
Global settings should *not* be placed into a ``build.sbt`` file.
Global settings should *not* be placed into a `build.sbt` file.
When overriding global settings, care should be taken to ensure previous
settings from other plugins are not ignored, e.g. when creating a new
``onLoad`` handler, ensure that the previous ``onLoad`` handler is not
`onLoad` handler, ensure that the previous `onLoad` handler is not
removed.
::
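A sketch of that pattern (the composed function body is illustrative):

```scala
import sbt._
import sbt.Keys._

// Compose with the previous onLoad handler rather than replacing it,
// so handlers installed by other plugins keep running.
onLoad in Global := {
  val previous = (onLoad in Global).value
  previous andThen { state =>
    // this plugin's initialization would go here
    state
  }
}
```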


@@ -13,7 +13,7 @@ markdown processing task. A plugin can define a sequence of sbt Settings
that are automatically added to all projects or that are explicitly
declared for selected projects. For example, a plugin might add a
'proguard' task and associated (overridable) settings. Because
:doc:`Commands` can be added with the ``commands`` setting, a plugin can
:doc:`Commands` can be added with the `commands` setting, a plugin can
also fulfill the role that processors did in 0.7.x.
The :doc:`Plugins-Best-Practices` page describes the
@@ -24,7 +24,7 @@ Using a binary sbt plugin
=========================
A common situation is using a binary plugin published to a repository.
Create ``project/plugins.sbt`` with the desired sbt plugins, any general
Create `project/plugins.sbt` with the desired sbt plugins, any general
dependencies, and any necessary repositories:
::
@@ -44,38 +44,38 @@ plugins.
By Description
==============
A plugin definition is a project in ``<main-project>/project/``. This
A plugin definition is a project in `<main-project>/project/`. This
project's classpath is the classpath used for build definitions in
``<main-project>/project/`` and any ``.sbt`` files in the project's base
directory. It is also used for the ``eval`` and ``set`` commands.
`<main-project>/project/` and any `.sbt` files in the project's base
directory. It is also used for the `eval` and `set` commands.
Specifically,
1. Managed dependencies declared by the ``project/`` project are
1. Managed dependencies declared by the `project/` project are
retrieved and are available on the build definition classpath, just
like for a normal project.
2. Unmanaged dependencies in ``project/lib/`` are available to the build
2. Unmanaged dependencies in `project/lib/` are available to the build
definition, just like for a normal project.
3. Sources in the ``project/`` project are the build definition files
3. Sources in the `project/` project are the build definition files
and are compiled using the classpath built from the managed and
unmanaged dependencies.
4. Project dependencies can be declared in
``project/project/Build.scala`` and will be available to the build
definition sources. Think of ``project/project/`` as the build
`project/project/Build.scala` and will be available to the build
definition sources. Think of `project/project/` as the build
definition for the build definition.
The build definition classpath is searched for ``sbt/sbt.plugins``
The build definition classpath is searched for `sbt/sbt.plugins`
descriptor files containing the names of Plugin implementations. A
Plugin is a module that defines settings to automatically inject to
projects. Additionally, all Plugin modules are wildcard imported for the
``eval`` and ``set`` commands and ``.sbt`` files. A Plugin
`eval` and `set` commands and `.sbt` files. A Plugin
implementation is not required to produce a plugin, however. It is a
convenience for plugin consumers and because of the automatic nature, it
is not always appropriate.
The ``reload plugins`` command changes the current build to
``<current-build>/project/``. This allows manipulating the build
definition project like a normal project. ``reload return`` changes back
The `reload plugins` command changes the current build to
`<current-build>/project/`. This allows manipulating the build
definition project like a normal project. `reload return` changes back
to the original build. Any session settings for the plugin definition
project that have not been saved are dropped.
@@ -92,7 +92,7 @@ that user. In sbt 0.10+, plugins and processors are unified.
Specifically, a plugin can add commands and plugins can be declared
globally for a user.
The ``~/.sbt/plugins/`` directory is treated as a global plugin
The `~/.sbt/plugins/` directory is treated as a global plugin
definition project. It is a normal sbt project whose classpath is
available to all sbt project definitions for that user as described
above for per-project plugins.
@@ -112,24 +112,24 @@ demonstrates how to declare plugins.
1. Download the jar manually from
https://oss.sonatype.org/content/repositories/releases/org/clapper/grizzled-scala\_2.8.1/1.0.4/grizzled-scala\_2.8.1-1.0.4.jar
2. Put it in ``project/lib/``
2. Put it in `project/lib/`
1b) Automatically managed: direct editing approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Edit ``project/plugins.sbt`` to contain:
Edit `project/plugins.sbt` to contain:
::
libraryDependencies += "org.clapper" %% "grizzled-scala" % "1.0.4"
If sbt is running, do ``reload``.
If sbt is running, do `reload`.
1c) Automatically managed: command line approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We can change to the plugins project in ``project/`` using
``reload plugins``.
We can change to the plugins project in `project/` using
`reload plugins`.
.. code-block:: console
@@ -139,8 +139,8 @@ We can change to the plugins project in ``project/`` using
>
Then, we can add dependencies like usual and save them to
``project/plugins.sbt``. It is useful, but not required, to run
``update`` to verify that the dependencies are correct.
`project/plugins.sbt`. It is useful, but not required, to run
`update` to verify that the dependencies are correct.
.. code-block:: console
@@ -165,7 +165,7 @@ This variant shows how to use the external project support in sbt 0.10
to declare a source dependency on a plugin. This means that the plugin
will be built from source and used on the classpath.
Edit ``project/project/Build.scala``
Edit `project/project/Build.scala`
::
@@ -176,11 +176,11 @@ Edit ``project/project/Build.scala``
lazy val webPlugin = uri("git://github.com/JamesEarlDouglas/xsbt-web-plugin")
}
If sbt is running, run ``reload``.
If sbt is running, run `reload`.
Note that this approach can be useful when developing a plugin. A
project that uses the plugin will rebuild the plugin on ``reload``. This
saves the intermediate steps of ``publishLocal`` and ``cleanPlugins``
project that uses the plugin will rebuild the plugin on `reload`. This
saves the intermediate steps of `publishLocal` and `cleanPlugins`
required in 0.7. It can also be used to work with the development
version of a plugin from its repository.
@@ -195,14 +195,14 @@ it to the repository as a fragment:
~~~~~~~~~~~~~~~~~~
Grizzled Scala is ready to be used in build definitions. This includes
the ``eval`` and ``set`` commands and ``.sbt`` and ``project/*.scala``
the `eval` and `set` commands and `.sbt` and `project/*.scala`
files.
.. code-block:: console
> eval grizzled.sys.os
In a ``build.sbt`` file:
In a `build.sbt` file:
::
@@ -232,16 +232,16 @@ integrate.
Description
-----------
To make a plugin, create a project and configure ``sbtPlugin`` to
``true``. Then, write the plugin code and publish your project to a
To make a plugin, create a project and configure `sbtPlugin` to
`true`. Then, write the plugin code and publish your project to a
repository. The plugin can be used as described in the previous section.
A plugin can implement ``sbt.Plugin``. The contents of a Plugin
singleton, declared like ``object MyPlugin extends Plugin``, are
wildcard imported in ``set``, ``eval``, and ``.sbt`` files. Typically,
A plugin can implement `sbt.Plugin`. The contents of a Plugin
singleton, declared like `object MyPlugin extends Plugin`, are
wildcard imported in `set`, `eval`, and `.sbt` files. Typically,
this is used to provide new keys (SettingKey, TaskKey, or InputKey) or
core methods without requiring an import or qualification. In addition,
the ``settings`` member of the ``Plugin`` is automatically appended to
the `settings` member of the `Plugin` is automatically appended to
each project's settings. This allows a plugin to automatically provide
new functionality or new defaults. One main use of this feature is to
globally add commands, like a processor in sbt 0.7.x. These features
@@ -253,7 +253,7 @@ Example Plugin
An example of a typical plugin:
``build.sbt``:
`build.sbt`:
::
@@ -263,7 +263,7 @@ An example of a typical plugin:
organization := "org.example"
``MyPlugin.scala``:
`MyPlugin.scala`:
::
@@ -309,8 +309,8 @@ A full build definition that uses this plugin might look like:
)
}
Individual settings could be defined in ``MyBuild.scala`` above or in a
``build.sbt`` file:
Individual settings could be defined in `MyBuild.scala` above or in a
`build.sbt` file:
::
@@ -321,7 +321,7 @@ Example command plugin
A basic plugin that adds commands looks like:
``build.sbt``
`build.sbt`
::
@@ -331,7 +331,7 @@ A basic plugin that adds commands looks like:
organization := "org.example"
``MyPlugin.scala``
`MyPlugin.scala`
::
@@ -348,9 +348,9 @@ A basic plugin that adds commands looks like:
}
}
This example demonstrates how to take a Command (here, ``myCommand``)
This example demonstrates how to take a Command (here, `myCommand`)
and distribute it in a plugin. Note that multiple commands can be
included in one plugin (for example, use ``commands ++= Seq(a,b)``). See
included in one plugin (for example, use `commands ++= Seq(a,b)`). See
:doc:`Commands` for defining more useful commands, including ones that
accept arguments and affect the execution state.
@@ -358,7 +358,7 @@ Global plugins example
----------------------
The simplest global plugin definition is declaring a library or plugin
in ``~/.sbt/plugins/build.sbt``:
in `~/.sbt/plugins/build.sbt`:
::
@@ -369,26 +369,26 @@ user.
In addition:
1. Jars may be placed directly in ``~/.sbt/plugins/lib/`` and will be
1. Jars may be placed directly in `~/.sbt/plugins/lib/` and will be
available to every build definition for the current user.
2. Dependencies on plugins built from source may be declared in
`~/.sbt/plugins/project/Build.scala` as described at
:doc:`/Getting-Started/Full-Def`.
3. A Plugin may be directly defined in Scala source files in
``~/.sbt/plugins/``, such as ``~/.sbt/plugins/MyPlugin.scala``.
``~/.sbt/plugins/build.sbt`` should contain ``sbtPlugin := true``.
`~/.sbt/plugins/`, such as `~/.sbt/plugins/MyPlugin.scala`.
`~/.sbt/plugins/build.sbt` should contain `sbtPlugin := true`.
This can be used for quicker turnaround when developing a plugin
initially:
1. Edit the global plugin code
2. ``reload`` the project you want to use the modified plugin in
2. `reload` the project you want to use the modified plugin in
3. sbt will rebuild the plugin and use it for the project.
Additionally, the plugin will be available in other projects on
the machine without recompiling again. This approach skips the
overhead of ``publishLocal`` and cleaning the plugins directory
overhead of `publishLocal` and cleaning the plugins directory
of the project using the plugin.
These are all consequences of ``~/.sbt/plugins/`` being a standard
These are all consequences of `~/.sbt/plugins/` being a standard
project whose classpath is added to every sbt project's build
definition.
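For instance, a one-line `~/.sbt/plugins/build.sbt` (the plugin coordinates are hypothetical):

```scala
// Hypothetical coordinates; this makes the plugin available to
// every build definition for the current user.
addSbtPlugin("org.example" % "example-plugin" % "0.1.0")
```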


@@ -29,9 +29,9 @@ build.sbt file:
resolvers += sbtResolver.value
Then, put the following examples in source files
``SettingsExample.scala`` and ``SettingsUsage.scala``. Finally, run sbt
and enter the REPL using ``console``. To see the output described below,
enter ``SettingsUsage``.
`SettingsExample.scala` and `SettingsUsage.scala`. Finally, run sbt
and enter the REPL using `console`. To see the output described below,
enter `SettingsUsage`.
Example Settings System
~~~~~~~~~~~~~~~~~~~~~~~
@@ -48,7 +48,7 @@ are three main parts:
There is also a fourth, but its usage is likely to be specific to sbt at
this time. The example uses a trivial implementation for this part.
``SettingsExample.scala``
`SettingsExample.scala`
::
@@ -86,12 +86,12 @@ Example Usage
~~~~~~~~~~~~~
This part shows how to use the system we just defined. The end result is
a ``Settings[Scope]`` value. This type is basically a mapping
``Scope -> AttributeKey[T] -> Option[T]``. See the `Settings API
a `Settings[Scope]` value. This type is basically a mapping
`Scope -> AttributeKey[T] -> Option[T]`. See the `Settings API
documentation <../../api/sbt/Settings.html>`_
for details.
``SettingsUsage.scala``
`SettingsUsage.scala`
::
@@ -132,8 +132,20 @@ for details.
println( k.label + i + " = " + applied.get( Scope(i), k) )
}
This produces the following output when run:
``a0 = None b0 = None a1 = None b1 = None a2 = None b2 = None a3 = Some(3) b3 = None a4 = Some(3) b4 = Some(9) a5 = Some(4) b5 = Some(9)``
This produces the following output when run: ::
a0 = None
b0 = None
a1 = None
b1 = None
a2 = None
b2 = None
a3 = Some(3)
b3 = None
a4 = Some(3)
b4 = Some(9)
a5 = Some(4)
b5 = Some(9)
- For the None results, we never defined the value and there was no
value to delegate to.
@@ -169,14 +181,14 @@ For example, in a project, a `This`_ project axis becomes a
`Select`_ referring to the defining project. All other axes that are
`This`_ are translated to `Global`_.
Functions like inConfig and inTask transform This into a
`Select`_ for a specific value. For example, ``inConfig(Compile)(someSettings)``
`Select`_ for a specific value. For example, `inConfig(Compile)(someSettings)`
translates the configuration axis for all settings in *someSettings* to
be ``Select(Compile)`` if the axis value is `This`_.
be `Select(Compile)` if the axis value is `This`_.
So, from the example and from sbt's scopes, you can see that the core
settings engine does not impose much on the structure of a scope. All it
requires is a delegates function ``Scope => Seq[Scope]`` and a
``display`` function. You can choose a scope type that makes sense for
requires is a delegates function `Scope => Seq[Scope]` and a
`display` function. You can choose a scope type that makes sense for
your situation.
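Conceptually, that contract could be sketched as follows (a simplified illustration, not sbt's actual trait):

```scala
// Minimal contract a custom scope type must provide to the settings
// engine (simplified sketch; sbt's real interfaces carry more structure).
trait ScopeSupport[S] {
  def delegates(scope: S): Seq[S] // lookup order for missing values
  def display(scope: S): String   // user-facing rendering of a scope
}
```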
Constructing settings
@@ -197,14 +209,14 @@ at the top-level, this requires only one level of duplication.
Additionally, sbt uniformly integrates its task engine into the settings
system. The underlying settings engine has no notion of tasks. This is
why sbt uses a ``SettingKey`` type and a ``TaskKey`` type. Methods on an
underlying ``TaskKey[T]`` are basically translated to operating on an
underlying ``SettingKey[Task[T]]`` (and they both wrap an underlying
``AttributeKey``).
why sbt uses a `SettingKey` type and a `TaskKey` type. Methods on an
underlying `TaskKey[T]` are basically translated to operating on an
underlying `SettingKey[Task[T]]` (and they both wrap an underlying
`AttributeKey`).
For example, ``a := 3`` for a SettingKey *a* will very roughly translate
to ``setting(a, value(3))``. For a TaskKey *a*, it will roughly
translate to ``setting(a, value( task { 3 } ) )``. See
For example, `a := 3` for a SettingKey *a* will very roughly translate
to `setting(a, value(3))`. For a TaskKey *a*, it will roughly
translate to `setting(a, value( task { 3 } ) )`. See
`main/Structure.scala <../../sxr/Structure.scala>`_
for details.
@@ -213,7 +225,7 @@ Settings definitions
sbt also provides a way to define these settings in a file (build.sbt
and Build.scala). This is done for build.sbt using basic parsing and
then passing the resulting chunks of code to ``compile/Eval.scala``. For
then passing the resulting chunks of code to `compile/Eval.scala`. For
all definitions, sbt manages the classpaths and recompilation process to
obtain the settings. It also provides a way for users to define project,
task, and configuration delegation, which ends up being used by the


@@ -1,24 +1,24 @@
=========================
``.sbt`` Build Definition
`.sbt` Build Definition
=========================
This page describes sbt build definitions, including some "theory" and
the syntax of ``build.sbt``. It assumes you know how to :doc:`use sbt <Running>` and have read the previous pages in the
the syntax of `build.sbt`. It assumes you know how to :doc:`use sbt <Running>` and have read the previous pages in the
Getting Started Guide.
``.sbt`` vs. ``.scala`` Definition
`.sbt` vs. `.scala` Definition
----------------------------------
An sbt build definition can contain files ending in ``.sbt``, located in
the base directory, and files ending in ``.scala``, located in the
``project`` subdirectory of the base directory.
An sbt build definition can contain files ending in `.sbt`, located in
the base directory, and files ending in `.scala`, located in the
`project` subdirectory of the base directory.
You can use either one exclusively, or use both. A good approach is to
use ``.sbt`` files for most purposes, and use ``.scala`` files only to
contain what can't be done in ``.sbt``.
use `.sbt` files for most purposes, and use `.scala` files only to
contain what can't be done in `.sbt`.
This page discusses ``.sbt`` files. See :doc:`.scala build definition <Full-Def>` (later in Getting Started) for
more on ``.scala`` files and how they relate to ``.sbt`` files.
This page discusses `.sbt` files. See :doc:`.scala build definition <Full-Def>` (later in Getting Started) for
more on `.scala` files and how they relate to `.sbt` files.
What is a build definition?
---------------------------
@ -29,47 +29,47 @@ After examining a project and processing any build definition files, sbt
will end up with an immutable map (set of key-value pairs) describing
the build.
For example, one key is ``name`` and it maps to a string value, the name
For example, one key is `name` and it maps to a string value, the name
of your project.
*Build definition files do not affect sbt's map directly.*
Instead, the build definition creates a huge list of objects with type
``Setting[T]`` where ``T`` is the type of the value in the map. A ``Setting`` describes
`Setting[T]` where `T` is the type of the value in the map. A `Setting` describes
a *transformation to the map*, such as adding a new key-value pair or
appending to an existing value. (In the spirit of functional
programming, a transformation returns a new map; it does not update the
old map in-place.)
In ``build.sbt``, you might create a ``Setting[String]`` for the name of
In `build.sbt`, you might create a `Setting[String]` for the name of
your project like this:
::
name := "hello"
This ``Setting[String]`` transforms the map by adding (or replacing) the
``name`` key, giving it the value ``"hello"``. The transformed map
This `Setting[String]` transforms the map by adding (or replacing) the
`name` key, giving it the value `"hello"`. The transformed map
becomes sbt's new map.
To create its map, sbt first sorts the list of settings so that all
changes to the same key are made together, and values that depend on
other keys are processed after the keys they depend on. Then sbt walks
over the sorted list of ``Setting`` and applies each one to the map in
over the sorted list of `Setting` and applies each one to the map in
turn.
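The sort-then-apply model can be sketched in plain Scala. This is a conceptual illustration only (the types and names here are invented for the sketch, not sbt's actual implementation):

```scala
// A Setting modeled as a transformation of an immutable key-value map.
type SettingsMap = Map[String, Any]
final case class Setting(transform: SettingsMap => SettingsMap)

val setName    = Setting(m => m + ("name" -> "hello"))
val setVersion = Setting(m => m + ("version" -> "1.0"))

// sbt walks the sorted list and applies each Setting to the map in turn;
// each application produces a new map rather than mutating the old one.
val finalMap = List(setName, setVersion)
  .foldLeft(Map.empty[String, Any])((m, s) => s.transform(m))
// finalMap holds "name" -> "hello" and "version" -> "1.0"
```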
Summary: A build definition defines a list of ``Setting[T]``, where a
``Setting[T]`` is a transformation affecting sbt's map of key-value
pairs and ``T`` is the type of each value.
Summary: A build definition defines a list of `Setting[T]`, where a
`Setting[T]` is a transformation affecting sbt's map of key-value
pairs and `T` is the type of each value.
How ``build.sbt`` defines settings
How `build.sbt` defines settings
----------------------------------
``build.sbt`` defines a ``Seq[Setting[_]]``; it's a list of Scala
`build.sbt` defines a `Seq[Setting[_]]`; it's a list of Scala
expressions, separated by blank lines, where each one becomes one
element in the sequence. If you put ``Seq(`` in front of the ``.sbt``
file and ``)`` at the end and replace the blank lines with commas, you'd
be looking at the equivalent ``.scala`` code.
element in the sequence. If you put `Seq(` in front of the `.sbt`
file and `)` at the end and replace the blank lines with commas, you'd
be looking at the equivalent `.scala` code.
Here's an example:
@ -81,34 +81,34 @@ Here's an example:
scalaVersion := "2.9.2"
A ``build.sbt`` file is a list of ``Setting``, separated by blank lines.
Each ``Setting`` is defined with a Scala expression.
The expressions in ``build.sbt`` are independent of one another, and
A `build.sbt` file is a list of `Setting`, separated by blank lines.
Each `Setting` is defined with a Scala expression.
The expressions in `build.sbt` are independent of one another, and
they are expressions, rather than complete Scala statements. These
expressions may be interspersed with ``val``s, ``lazy val``s, and ``def``s,
but top-level ``object``s and classes are not allowed in ``build.sbt``.
Those should go in the ``project/`` directory as full Scala source files.
expressions may be interspersed with `val`s, `lazy val`s, and `def`s,
but top-level `object`s and classes are not allowed in `build.sbt`.
Those should go in the `project/` directory as full Scala source files.
On the left, ``name``, ``version``, and ``scalaVersion`` are *keys*. A
key is an instance of ``SettingKey[T]``, ``TaskKey[T]``, or
``InputKey[T]`` where ``T`` is the expected value type. The kinds of key
On the left, `name`, `version`, and `scalaVersion` are *keys*. A
key is an instance of `SettingKey[T]`, `TaskKey[T]`, or
`InputKey[T]` where `T` is the expected value type. The kinds of key
are explained more below.
Keys have a method called ``:=``, which returns a ``Setting[T]``. You
Keys have a method called `:=`, which returns a `Setting[T]`. You
could use a Java-like syntax to call the method:
::
name.:=("hello")
But Scala allows ``name := "hello"`` instead (in Scala, any method can
But Scala allows `name := "hello"` instead (in Scala, any method can
use either syntax).
The ``:=`` method on key ``name`` returns a ``Setting``, specifically a
``Setting[String]``. ``String`` also appears in the type of ``name``
itself, which is ``SettingKey[String]``. In this case, the returned
``Setting[String]`` is a transformation to add or replace the ``name``
key in sbt's map, giving it the value ``"hello"``.
The `:=` method on key `name` returns a `Setting`, specifically a
`Setting[String]`. `String` also appears in the type of `name`
itself, which is `SettingKey[String]`. In this case, the returned
`Setting[String]` is a transformation to add or replace the `name`
key in sbt's map, giving it the value `"hello"`.
If you use the wrong value type, the build definition will not compile:
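The diff elides the example here; a minimal illustration of the kind of mismatch meant is:

```scala
// Does not compile: name is a SettingKey[String], so := expects a String
name := 42
```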
@ -119,7 +119,7 @@ If you use the wrong value type, the build definition will not compile:
Settings are separated by blank lines
-------------------------------------
You can't write a ``build.sbt`` like this:
You can't write a `build.sbt` like this:
::
@ -131,15 +131,15 @@ You can't write a ``build.sbt`` like this:
sbt needs some kind of delimiter to tell where one expression stops and
the next begins.
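For instance, a file with settings on adjacent lines (no blank line between them) cannot be split into separate expressions:

```scala
// Invalid in an .sbt file of this era: missing blank-line delimiter
name := "hello"
version := "1.0"
```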
``.sbt`` files contain a list of Scala expressions, not a single Scala
`.sbt` files contain a list of Scala expressions, not a single Scala
program. These expressions have to be split up and passed to the
compiler individually.
If you want a single Scala program, use :doc:`.scala files <Full-Def>`
rather than ``.sbt`` files; ``.sbt`` files are optional.
rather than `.sbt` files; `.sbt` files are optional.
:doc:`Later on <Full-Def>` this guide explains how to use
``.scala`` files. (Preview: the same settings expressions found in a
``.sbt`` file can always be listed in a ``Seq[Setting]`` in a ``.scala``
`.scala` files. (Preview: the same settings expressions found in a
`.sbt` file can always be listed in a `Seq[Setting]` in a `.scala`
file instead.)
Keys are defined in the Keys object
@ -147,16 +147,16 @@ Keys are defined in the Keys object
The built-in keys are just fields in an object called
`Keys <../../sxr/Keys.scala.html>`_. A
``build.sbt`` implicitly has an ``import sbt.Keys._``, so
``sbt.Keys.name`` can be referred to as ``name``.
`build.sbt` implicitly has an `import sbt.Keys._`, so
`sbt.Keys.name` can be referred to as `name`.
Custom keys may be defined in a :doc:`.scala file <Full-Def>` or a :doc:`plugin <Using-Plugins>`.
Other ways to transform settings
--------------------------------
Replacement with ``:=`` is the simplest transformation, but there are
several others. For example you can append to a list value with ``+=``.
Replacement with `:=` is the simplest transformation, but there are
several others. For example you can append to a list value with `+=`.
The other transformations require an understanding of :doc:`scopes <Scopes>`, so the :doc:`next page <Scopes>` is about
scopes and the :doc:`page after that <More-About-Settings>` goes into more detail about settings.
@ -166,27 +166,27 @@ Task Keys
There are three flavors of key:
- ``SettingKey[T]``: a key with a value computed once (the value is
- `SettingKey[T]`: a key with a value computed once (the value is
computed one time when loading the project, and kept around).
- ``TaskKey[T]``: a key with a value that has to be recomputed each
- `TaskKey[T]`: a key with a value that has to be recomputed each
time, potentially creating side effects.
- ``InputKey[T]``: a task key which has command line arguments as
input. The Getting Started Guide doesn't cover ``InputKey``, but when
- `InputKey[T]`: a task key which has command line arguments as
input. The Getting Started Guide doesn't cover `InputKey`, but when
you finish this guide, check out :doc:`/Extending/Input-Tasks` for more.
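As an illustration, declarations of the three flavors might look like this (the key names here are hypothetical, not built-in keys):

```scala
// Computed once when the project loads, then kept around:
val gitCommit = settingKey[String]("Current commit, fixed at project load.")

// Re-computed on every execution; may have side effects:
val makeSite = taskKey[File]("Generates the site and returns its directory.")

// A task that additionally parses command line arguments:
val runDemo = inputKey[Unit]("Runs a demo named on the command line.")
```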
A ``TaskKey[T]`` is said to define a *task*. Tasks are operations such
as ``compile`` or ``package``. They may return ``Unit`` (``Unit`` is
Scala for ``void``), or they may return a value related to the task, for
example ``package`` is a ``TaskKey[File]`` and its value is the jar file
A `TaskKey[T]` is said to define a *task*. Tasks are operations such
as `compile` or `package`. They may return `Unit` (`Unit` is
Scala for `void`), or they may return a value related to the task, for
example `package` is a `TaskKey[File]` and its value is the jar file
it creates.
Each time you start a task execution, for example by typing ``compile``
Each time you start a task execution, for example by typing `compile`
at the interactive sbt prompt, sbt will re-run any tasks involved
exactly once.
sbt's map describing the project can keep around a fixed string value
for a setting such as ``name``, but it has to keep around some
executable code for a task such as ``compile`` -- even if that
for a setting such as `name`, but it has to keep around some
executable code for a task such as `compile` -- even if that
executable code eventually returns a string, it has to be re-run every
time.
@ -194,21 +194,21 @@ time.
is, "taskiness" (whether to re-run each time) is a property of the key,
not the value.
Using ``:=``, you can assign a computation to a task, and that
Using `:=`, you can assign a computation to a task, and that
computation will be re-run each time:
::
hello := { println("Hello!") }
From a type-system perspective, the ``Setting`` created from a task key
From a type-system perspective, the `Setting` created from a task key
is slightly different from the one created from a setting key.
``taskKey := 42`` results in a ``Setting[Task[T]]`` while
``settingKey := 42`` results in a ``Setting[T]``. For most purposes this
makes no difference; the task key still creates a value of type ``T``
`taskKey := 42` results in a `Setting[Task[T]]` while
`settingKey := 42` results in a `Setting[T]`. For most purposes this
makes no difference; the task key still creates a value of type `T`
when the task executes.
The ``T`` vs. ``Task[T]`` type difference has this implication: a
The `T` vs. `Task[T]` type difference has this implication: a
setting key can't depend on a task key, because a setting key is
evaluated only once on project load, and not re-run. More on this in
:doc:`more about settings <More-About-Settings>`, coming up
@ -218,25 +218,25 @@ Keys in sbt interactive mode
----------------------------
In sbt's interactive mode, you can type the name of any task to execute
that task. This is why typing ``compile`` runs the compile task.
``compile`` is a task key.
that task. This is why typing `compile` runs the compile task.
`compile` is a task key.
If you type the name of a setting key rather than a task key, the value
of the setting key will be displayed. Typing a task key name executes
the task but doesn't display the resulting value; to see a task's
result, use ``show <task name>`` rather than plain ``<task name>``.
The convention for key names is to use ``camelCase`` so that the
result, use `show <task name>` rather than plain `<task name>`.
The convention for key names is to use `camelCase` so that the
command line name and the Scala identifiers are the same.
To learn more about any key, type ``inspect <keyname>`` at the sbt
interactive prompt. Some of the information ``inspect`` displays won't
To learn more about any key, type `inspect <keyname>` at the sbt
interactive prompt. Some of the information `inspect` displays won't
make sense yet, but at the top it shows you the setting's value type and
a brief description of the setting.
Imports in ``build.sbt``
Imports in `build.sbt`
------------------------
You can place import statements at the top of ``build.sbt``; they need
You can place import statements at the top of `build.sbt`; they need
not be separated by blank lines.
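For example, a `build.sbt` might begin like this (the imports here are arbitrary, chosen only to show the layout):

```scala
import java.io.File
import java.util.Properties

name := "hello"
```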
There are some implied default imports, as follows:
@ -248,15 +248,15 @@ There are some implied default imports, as follows:
import Keys._
(In addition, if you have :doc:`.scala files <Full-Def>`,
the contents of any ``Build`` or ``Plugin`` objects in those files will
the contents of any `Build` or `Plugin` objects in those files will
be imported. More on that when we get to :doc:`.scala build definitions <Full-Def>`.)
Adding library dependencies
---------------------------
To depend on third-party libraries, there are two options. The first is
to drop jars in ``lib/`` (unmanaged dependencies) and the other is to
add managed dependencies, which will look like this in ``build.sbt``:
to drop jars in `lib/` (unmanaged dependencies) and the other is to
add managed dependencies, which will look like this in `build.sbt`:
::
@ -265,11 +265,11 @@ add managed dependencies, which will look like this in ``build.sbt``:
This is how you add a managed dependency on the Apache Derby library,
version 10.4.1.3.
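The snippet elided by the diff takes the standard `groupID % artifactID % revision` shape, appended with `+=`:

```scala
// Managed dependency on Apache Derby 10.4.1.3
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3"
```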
The ``libraryDependencies`` key involves two complexities: ``+=`` rather
than ``:=``, and the ``%`` method. ``+=`` appends to the key's old value
The `libraryDependencies` key involves two complexities: `+=` rather
than `:=`, and the `%` method. `+=` appends to the key's old value
rather than replacing it; this is explained in
:doc:`more about settings </Getting-Started/More-About-Settings>`.
The ``%`` method is used to construct an Ivy module ID from strings,
The `%` method is used to construct an Ivy module ID from strings,
explained in :doc:`library dependencies </Getting-Started/Library-Dependencies>`.
We'll skip over the details of library dependencies until later in the


@ -16,9 +16,9 @@ packed with examples illustrating how to define keys. Most of the keys
are implemented in
`Defaults <../../sxr/Defaults.scala.html>`_.
Keys have one of three types. ``SettingKey`` and ``TaskKey`` are
Keys have one of three types. `SettingKey` and `TaskKey` are
described in :doc:`.sbt build definition <Basic-Def>`. Read
about ``InputKey`` on the :doc:`/Extending/Input-Tasks` page.
about `InputKey` on the :doc:`/Extending/Input-Tasks` page.
Some examples from `Keys <../../sxr/Keys.scala.html>`_:
@ -28,31 +28,31 @@ Some examples from `Keys <../../sxr/Keys.scala.html>`_:
val clean = taskKey[Unit]("Deletes files produced by the build, such as generated sources, compiled classes, and task caches.")
The key constructors have two string parameters: the name of the key
(``"scalaVersion"``) and a documentation string
(``"The version of scala used for building."``).
(`"scalaVersion"`) and a documentation string
(`"The version of scala used for building."`).
Remember from :doc:`.sbt build definition <Basic-Def>` that
the type parameter ``T`` in ``SettingKey[T]`` indicates the type of
value a setting has. ``T`` in ``TaskKey[T]`` indicates the type of the
the type parameter `T` in `SettingKey[T]` indicates the type of
value a setting has. `T` in `TaskKey[T]` indicates the type of the
task's result. Also remember from :doc:`.sbt build definition <Basic-Def>`
that a setting has a fixed value until project
reload, while a task is re-computed for every "task execution" (every
time someone types a command at the sbt interactive prompt or in batch
mode).
Keys may be defined in a ``.scala`` file (as described in :doc:`.scala build definition <Full-Def>`),
Keys may be defined in a `.scala` file (as described in :doc:`.scala build definition <Full-Def>`),
or in a plugin (as described in
:doc:`using plugins <Using-Plugins>`). Any ``val`` found in
a ``Build`` object in your ``.scala`` build definition files, or any
``val`` found in a ``Plugin`` object from a plugin, will be imported
automatically into your ``.sbt`` files.
:doc:`using plugins <Using-Plugins>`). Any `val` found in
a `Build` object in your `.scala` build definition files, or any
`val` found in a `Plugin` object from a plugin, will be imported
automatically into your `.sbt` files.
Implementing a task
-------------------
Once you've defined a key, you'll need to use it in some task. You could
be defining your own task, or you could be planning to redefine an
existing task. Either way looks the same; use ``:=`` to associate some
existing task. Either way looks the same; use `:=` to associate some
code with the task key:
::
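The implementation itself is elided by the diff; a minimal sketch with a hypothetical key looks like this:

```scala
// Define a key, then use := to attach the code that runs when the task executes
val makeReport = taskKey[Unit]("Prints a short report.")

makeReport := {
  println("report: build ok")
}
```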
@ -82,7 +82,7 @@ you can often use the convenient APIs in
Use plugins!
------------
If you find you have a lot of custom code in ``.scala`` files, consider
If you find you have a lot of custom code in `.scala` files, consider
moving it to a plugin for re-use across multiple projects.
It's very easy to create a plugin, as :doc:`teased earlier <Using-Plugins>` and :doc:`discussed at more length here </Extending/Plugins>`.


@ -9,15 +9,15 @@ Base directory
--------------
In sbt's terminology, the "base directory" is the directory containing
the project. So if you created a project ``hello`` containing
``hello/build.sbt`` and ``hello/hw.scala`` as in the :doc:`Hello, World <Hello>`
example, ``hello`` is your base directory.
the project. So if you created a project `hello` containing
`hello/build.sbt` and `hello/hw.scala` as in the :doc:`Hello, World <Hello>`
example, `hello` is your base directory.
Source code
-----------
Source code can be placed in the project's base directory as with
``hello/hw.scala``. However, most people don't do this for real
`hello/hw.scala`. However, most people don't do this for real
projects; too much clutter.
sbt uses the same directory structure as
@ -42,17 +42,17 @@ paths are relative to the base directory):
java/
<test Java sources>
Other directories in ``src/`` will be ignored. Additionally, all hidden
Other directories in `src/` will be ignored. Additionally, all hidden
directories will be ignored.
sbt build definition files
--------------------------
You've already seen ``build.sbt`` in the project's base directory. Other
sbt files appear in a ``project`` subdirectory.
You've already seen `build.sbt` in the project's base directory. Other
sbt files appear in a `project` subdirectory.
``project`` can contain ``.scala`` files, which are combined with
``.sbt`` files to form the complete build definition.
`project` can contain `.scala` files, which are combined with
`.sbt` files to form the complete build definition.
See :doc:`.scala build definitions <Full-Def>` for more.
.. code-block:: text
@ -61,8 +61,8 @@ See :doc:`.scala build definitions <Full-Def>` for more.
project/
Build.scala
You may see ``.sbt`` files inside ``project/`` but they are not
equivalent to ``.sbt`` files in the project's base directory. Explaining
You may see `.sbt` files inside `project/` but they are not
equivalent to `.sbt` files in the project's base directory. Explaining
this will :doc:`come later <Full-Def>`, since you'll need
some background information first.
@ -70,22 +70,22 @@ Build products
--------------
Generated files (compiled classes, packaged jars, managed files, caches,
and documentation) will be written to the ``target`` directory by
and documentation) will be written to the `target` directory by
default.
Configuring version control
---------------------------
Your ``.gitignore`` (or equivalent for other version control systems)
Your `.gitignore` (or equivalent for other version control systems)
should contain:
.. code-block:: text
target/
Note that this deliberately has a trailing ``/`` (to match only
directories) and it deliberately has no leading ``/`` (to match
``project/target/`` in addition to plain ``target/``).
Note that this deliberately has a trailing `/` (to match only
directories) and it deliberately has no leading `/` (to match
`project/target/` in addition to plain `target/`).
Next
====


@ -1,5 +1,5 @@
===========================
``.scala`` Build Definition
`.scala` Build Definition
===========================
This page assumes you've read previous pages in the Getting Started
@ -9,18 +9,18 @@ and :doc:`more about settings <More-About-Settings>`.
sbt is recursive
----------------
``build.sbt`` is so simple, it conceals how sbt really works. sbt builds
`build.sbt` is so simple, it conceals how sbt really works. sbt builds
are defined with Scala code. That code, itself, has to be built. What
better way than with sbt?
The ``project`` directory *is another project inside your project* which
knows how to build your project. The project inside ``project`` can (in
The `project` directory *is another project inside your project* which
knows how to build your project. The project inside `project` can (in
theory) do anything any other project can do. *Your build definition is
an sbt project.*
And the turtles go all the way down. If you like, you can tweak the
build definition of the build definition project, by creating a
``project/project/`` directory.
`project/project/` directory.
Here's an illustration.
@ -53,14 +53,14 @@ Here's an illustration.
*Don't worry!* Most of the time you are not going to need all that. But
understanding the principle can be helpful.
By the way: any time files ending in ``.scala`` or ``.sbt`` are used,
naming them ``build.sbt`` and ``Build.scala`` is a convention only. This
By the way: any time files ending in `.scala` or `.sbt` are used,
naming them `build.sbt` and `Build.scala` is a convention only. This
also means that multiple files are allowed.
``.scala`` source files in the build definition project
`.scala` source files in the build definition project
-------------------------------------------------------
``.sbt`` files are merged into their sibling ``project`` directory.
`.sbt` files are merged into their sibling `project` directory.
Looking back at the project layout:
.. code-block:: text
@ -76,25 +76,25 @@ Looking back at the project layout:
Build.scala # a source file in the project/ project,
# that is, a source file in the build definition
The Scala expressions in ``build.sbt`` are compiled alongside and merged
with ``Build.scala`` (or any other ``.scala`` files in the ``project/``
The Scala expressions in `build.sbt` are compiled alongside and merged
with `Build.scala` (or any other `.scala` files in the `project/`
directory).
*``.sbt`` files in the base directory for a project become part of the
``project`` build definition project also located in that base
*`.sbt` files in the base directory for a project become part of the
`project` build definition project also located in that base
directory.*
The ``.sbt`` file format is a convenient shorthand for adding settings
The `.sbt` file format is a convenient shorthand for adding settings
to the build definition project.
Relating ``build.sbt`` to ``Build.scala``
Relating `build.sbt` to `Build.scala`
-----------------------------------------
To mix ``.sbt`` and ``.scala`` files in your build definition, you need
To mix `.sbt` and `.scala` files in your build definition, you need
to understand how they relate.
The following two files illustrate. First, if your project is in
``hello``, create ``hello/project/Build.scala`` as follows:
`hello`, create `hello/project/Build.scala` as follows:
::
@ -116,7 +116,7 @@ The following two files illustrate. First, if your project is in
settings = Project.defaultSettings ++ Seq(sampleKeyB := "B: in the root project settings in Build.scala"))
}
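The diff shows only the tail of this file. Filling in the elided portion, a `Build.scala` of the shape the page describes might look like the following reconstruction (the key declarations and the `sampleKeyC` setting are assumptions inferred from the surrounding text):

```scala
import sbt._
import Keys._

object HelloBuild extends Build {

  // Demo keys; only sampleKeyB's assignment is visible in the diff above
  val sampleKeyA = settingKey[String]("demo key A")
  val sampleKeyB = settingKey[String]("demo key B")
  val sampleKeyC = settingKey[String]("demo key C")
  val sampleKeyD = settingKey[String]("demo key D")

  // Build-scoped settings (equivalent to `in ThisBuild` in a .sbt file)
  override lazy val settings = super.settings ++
    Seq(sampleKeyC := "C: in Build.settings in Build.scala")

  // Project-scoped settings
  lazy val root = Project(id = "hello", base = file("."),
    settings = Project.defaultSettings ++ Seq(sampleKeyB := "B: in the root project settings in Build.scala"))
}
```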
Now, create ``hello/build.sbt`` as follows:
Now, create `hello/build.sbt` as follows:
::
@ -124,7 +124,7 @@ Now, create ``hello/build.sbt`` as follows:
sampleKeyD := "D: in build.sbt"
Start up the sbt interactive prompt. Type ``inspect sampleKeyA`` and you
Start up the sbt interactive prompt. Type `inspect sampleKeyA` and you
should see (among other things):
.. code-block:: text
@ -133,7 +133,7 @@ should see (among other things):
[info] Provided by:
[info] {file:/home/hp/checkout/hello/}/*:sampleKeyA
and then ``inspect sampleKeyC`` and you should see:
and then `inspect sampleKeyC` and you should see:
.. code-block:: text
@ -142,12 +142,12 @@ and then ``inspect sampleKeyC`` and you should see:
[info] {file:/home/hp/checkout/hello/}/*:sampleKeyC
Note that the "Provided by" shows the same scope for the two values.
That is, ``sampleKeyC in ThisBuild`` in a ``.sbt`` file is equivalent to
placing a setting in the ``Build.settings`` list in a ``.scala`` file.
That is, `sampleKeyC in ThisBuild` in a `.sbt` file is equivalent to
placing a setting in the `Build.settings` list in a `.scala` file.
sbt takes build-scoped settings from both places to create the build
definition.
Now, ``inspect sampleKeyB``:
Now, `inspect sampleKeyB`:
.. code-block:: text
@ -155,11 +155,11 @@ Now, ``inspect sampleKeyB``:
[info] Provided by:
[info] {file:/home/hp/checkout/hello/}hello/*:sampleKeyB
Note that ``sampleKeyB`` is scoped to the project
(``{file:/home/hp/checkout/hello/}hello``) rather than the entire build
(``{file:/home/hp/checkout/hello/}``).
Note that `sampleKeyB` is scoped to the project
(`{file:/home/hp/checkout/hello/}hello`) rather than the entire build
(`{file:/home/hp/checkout/hello/}`).
As you've probably guessed, ``inspect sampleKeyD`` matches ``sampleKeyB``:
As you've probably guessed, `inspect sampleKeyD` matches `sampleKeyB`:
.. code-block:: text
@ -167,58 +167,58 @@ As you've probably guessed, ``inspect sampleKeyD`` matches ``sampleKeyB``:
[info] Provided by:
[info] {file:/home/hp/checkout/hello/}hello/*:sampleKeyD
sbt *appends* the settings from ``.sbt`` files to the settings from
``Build.settings`` and ``Project.settings`` which means ``.sbt``
settings take precedence. Try changing ``Build.scala`` so it sets key
``sampleC`` or ``sampleD``, which are also set in ``build.sbt``. The
setting in ``build.sbt`` should "win" over the one in ``Build.scala``.
sbt *appends* the settings from `.sbt` files to the settings from
`Build.settings` and `Project.settings` which means `.sbt`
settings take precedence. Try changing `Build.scala` so it sets key
`sampleC` or `sampleD`, which are also set in `build.sbt`. The
setting in `build.sbt` should "win" over the one in `Build.scala`.
One other thing you may have noticed: ``sampleKeyC`` and ``sampleKeyD``
were available inside ``build.sbt``. That's because sbt imports the
contents of your ``Build`` object into your ``.sbt`` files. In this case
``import HelloBuild._`` was implicitly done for the ``build.sbt`` file.
One other thing you may have noticed: `sampleKeyC` and `sampleKeyD`
were available inside `build.sbt`. That's because sbt imports the
contents of your `Build` object into your `.sbt` files. In this case
`import HelloBuild._` was implicitly done for the `build.sbt` file.
In summary:
- In ``.scala`` files, you can add settings to ``Build.settings`` for
- In `.scala` files, you can add settings to `Build.settings` for
sbt to find, and they are automatically build-scoped.
- In ``.scala`` files, you can add settings to ``Project.settings`` for
- In `.scala` files, you can add settings to `Project.settings` for
sbt to find, and they are automatically project-scoped.
- Any ``Build`` object you write in a ``.scala`` file will have its
contents imported and available to ``.sbt`` files.
- The settings in ``.sbt`` files are *appended* to the settings in
``.scala`` files.
- The settings in ``.sbt`` files are project-scoped unless you
- Any `Build` object you write in a `.scala` file will have its
contents imported and available to `.sbt` files.
- The settings in `.sbt` files are *appended* to the settings in
`.scala` files.
- The settings in `.sbt` files are project-scoped unless you
explicitly specify another scope.
When to use ``.scala`` files
When to use `.scala` files
----------------------------
In ``.scala`` files, you can write any Scala code including ``val``, ``object``,
In `.scala` files, you can write any Scala code including `val`, `object`,
and method definitions.
*One recommended approach is to define settings in ``.sbt`` files, using
``.scala`` files when you need to factor out a ``val`` or ``object`` or
*One recommended approach is to define settings in `.sbt` files, using
`.scala` files when you need to factor out a `val` or `object` or
method definition.*
There's one build definition, which is a nested project inside your main
project. ``.sbt`` and ``.scala`` files are compiled together to create
project. `.sbt` and `.scala` files are compiled together to create
that single definition.
``.scala`` files are also required to define multiple projects in a
`.scala` files are also required to define multiple projects in a
single build. More on that is coming up in :doc:`Multi-Project Builds <Multi-Project>`.
(A disadvantage of using ``.sbt`` files in a :doc:`multi-project build <Multi-Project>` is that they'll be spread around
(A disadvantage of using `.sbt` files in a :doc:`multi-project build <Multi-Project>` is that they'll be spread around
in different directories; for that reason, some people prefer to put
settings in their ``.scala`` files if they have sub-projects. This will
settings in their `.scala` files if they have sub-projects. This will
be clearer after you see how :doc:`multi-project builds <Multi-Project>` work.)
The build definition project in interactive mode
------------------------------------------------
You can switch the sbt interactive prompt to have the build definition
project in ``project/`` as the current project. To do so, type
``reload plugins``.
project in `project/` as the current project. To do so, type
`reload plugins`.
.. code-block:: text
@ -233,31 +233,31 @@ project in ``project/`` as the current project. To do so, type
[info] ArrayBuffer(/home/hp/checkout/hello/hw.scala)
>
As shown above, you use ``reload return`` to leave the build definition
As shown above, you use `reload return` to leave the build definition
project and return to your regular project.
Reminder: it's all immutable
----------------------------
It would be wrong to think that the settings in ``build.sbt`` are added
to the ``settings`` fields in ``Build`` and ``Project`` objects.
Instead, the settings list from ``Build`` and ``Project``, and the
settings from ``build.sbt``, are concatenated into another immutable
list which is then used by sbt. The ``Build`` and ``Project`` objects
It would be wrong to think that the settings in `build.sbt` are added
to the `settings` fields in `Build` and `Project` objects.
Instead, the settings list from `Build` and `Project`, and the
settings from `build.sbt`, are concatenated into another immutable
list which is then used by sbt. The `Build` and `Project` objects
are "immutable configuration" forming only part of the complete build
definition.
In fact, there are other sources of settings as well. They are appended
in this order:
- Settings from ``Build.settings`` and ``Project.settings`` in your
``.scala`` files.
- Your user-global settings; for example in ``~/.sbt/build.sbt`` you
- Settings from `Build.settings` and `Project.settings` in your
`.scala` files.
- Your user-global settings; for example in `~/.sbt/build.sbt` you
can define settings affecting *all* your projects.
- Settings injected by plugins, see :doc:`using plugins <Using-Plugins>` coming up next.
- Settings from ``.sbt`` files in the project.
- Build definition projects (i.e. projects inside ``project``) have
settings from global plugins (``~/.sbt/plugins``) added. :doc:`Using plugins <Using-Plugins>` explains this more.
- Settings from `.sbt` files in the project.
- Build definition projects (i.e. projects inside `project`) have
settings from global plugins (`~/.sbt/plugins`) added. :doc:`Using plugins <Using-Plugins>` explains this more.
Later settings override earlier ones. The entire list of settings forms
the build definition.


@ -8,7 +8,7 @@ Create a project directory with source code
-------------------------------------------
A valid sbt project can be a directory containing a single source file.
Try creating a directory `hello` with a file `hw.scala`, containing
the following:
::
@@ -17,7 +17,7 @@ the following:
def main(args: Array[String]) = println("Hi!")
}
Now from inside the `hello` directory, start sbt and type `run` at
the sbt interactive console. On Linux or OS X the commands might look
like this:
@@ -36,55 +36,55 @@ In this case, sbt works purely by convention. sbt will find the
following automatically:
- Sources in the base directory
- Sources in `src/main/scala` or `src/main/java`
- Tests in `src/test/scala` or `src/test/java`
- Data files in `src/main/resources` or `src/test/resources`
- jars in `lib`
By default, sbt will build projects with the same version of Scala used
to run sbt itself.
You can run the project with `sbt run` or enter the `Scala
REPL <http://www.scala-lang.org/node/2097>`_ with `sbt console`.
`sbt console` sets up your project's classpath so you can try out live
Scala examples based on your project's code.
Build definition
----------------
Most projects will need some manual setup. Basic build settings go in a
file called `build.sbt`, located in the project's base directory.
For example, if your project is in the directory `hello`, in
`hello/build.sbt` you might write:
.. parsed-literal::
name := "hello"
version := "1.0"
scalaVersion := "|scalaRelease|"
Notice the blank line between every item. This isn't just for show;
the blank lines are actually required to separate each item. In :doc:`.sbt build definition <Basic-Def>` you'll learn more about
how to write a `build.sbt` file.
If you plan to package your project in a jar, you will want to set at
least the name and version in a `build.sbt`.
Setting the sbt version
-----------------------
You can force a particular version of sbt by creating a file
`hello/project/build.properties`. In this file, write:
.. parsed-literal::
sbt.version=|release|
to force the use of sbt |release|. sbt is 99% source compatible from release to release.
Still, setting the sbt version in `project/build.properties` avoids
any potential confusion.
Next
@@ -8,7 +8,7 @@ particular :doc:`.sbt build definition <Basic-Def>`,
Library dependencies can be added in two ways:
- *unmanaged dependencies* are jars dropped into the `lib` directory
- *managed dependencies* are configured in the build definition and
downloaded automatically from repositories
@@ -18,40 +18,40 @@ Unmanaged dependencies
Most people use managed dependencies instead of unmanaged. But unmanaged
can be simpler when starting out.
Unmanaged dependencies work like this: add jars to `lib` and they will
be placed on the project classpath. Not much else to it!
You can place test jars such as
`ScalaCheck <https://github.com/rickynils/scalacheck>`_,
`specs <http://code.google.com/p/specs/>`_, and
`ScalaTest <http://www.scalatest.org/>`_ in `lib` as well.
Dependencies in `lib` go on all the classpaths (for `compile`,
`test`, `run`, and `console`). If you wanted to change the
classpath for just one of those, you would adjust
`dependencyClasspath in Compile` or `dependencyClasspath in Runtime`
for example. You could use `~=` to get the previous classpath value,
filter some entries out, and return a new classpath value. See :doc:`more about settings <More-About-Settings>`
for details of `~=`.
There's nothing to add to `build.sbt` to use unmanaged dependencies,
though you could change the `unmanagedBase` key if you'd like to use
a different directory rather than `lib`.
To use `custom_lib` instead of `lib`:
::
unmanagedBase := baseDirectory.value / "custom_lib"
`baseDirectory` is the project's root directory, so here you're
changing `unmanagedBase` depending on `baseDirectory` using the
special `value` method as explained in :doc:`more about settings <More-About-Settings>`.
There's also an `unmanagedJars` task which lists the jars from the
`unmanagedBase` directory. If you wanted to use multiple directories
or do something else complex, you might need to replace the whole
`unmanagedJars` task with one that does something else.
Managed Dependencies
--------------------
@@ -60,36 +60,36 @@ sbt uses `Apache Ivy <http://ant.apache.org/ivy/>`_ to implement managed
dependencies, so if you're familiar with Maven or Ivy, you won't have
much trouble.
The `libraryDependencies` key
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Most of the time, you can simply list your dependencies in the setting
`libraryDependencies`. It's also possible to write a Maven POM file or
Ivy configuration file to externally configure your dependencies, and
have sbt use those external configuration files. You can learn more
about that :ref:`here <external-maven-ivy>`.
Declaring a dependency looks like this, where `groupId`,
`artifactId`, and `revision` are strings:
::
libraryDependencies += groupID % artifactID % revision
or like this, where `configuration` is also a string:
::
libraryDependencies += groupID % artifactID % revision % configuration
`libraryDependencies` is declared in `Keys <../../sxr/Keys.scala.html>`_ like this:
::
val libraryDependencies = settingKey[Seq[ModuleID]]("Declares managed dependencies.")
The `%` methods create `ModuleID` objects from strings, then you add
those `ModuleID` to `libraryDependencies`.
Of course, sbt (via Ivy) has to know where to download the module. If
your module is in one of the default repositories sbt comes with, this
@@ -99,12 +99,12 @@ will just work. For example, Apache Derby is in a default repository:
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3"
If you type that in `build.sbt` and then `update`, sbt should
download Derby to `~/.ivy2/cache/org.apache.derby/`. (By the way,
`update` is a dependency of `compile` so there's no need to manually
type `update` most of the time.)
Of course, you can also use `++=` to add a list of dependencies all at
once:
::
@@ -114,22 +114,22 @@ once:
groupID % otherID % otherRevision
)
In rare cases you might find reasons to use `:=` with `libraryDependencies` as well.
Getting the right Scala version with `%%`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you use `groupID %% artifactID % revision` rather than
`groupID % artifactID % revision` (the difference is the double `%%`
after the groupID), sbt will add your project's Scala version to the
artifact name. This is just a shortcut. You could write this without the
`%%`:
::
libraryDependencies += "org.scala-tools" % "scala-stm_2.9.1" % "0.3"
Assuming the `scalaVersion` for your build is `2.9.1`, the following
is identical:
::
@@ -140,10 +140,10 @@ The idea is that many dependencies are compiled for multiple Scala
versions, and you'd like to get the one that matches your project.
The complexity in practice is that often a dependency will work with a
slightly different Scala version; but `%%` is not smart about that. So
if the dependency is available for `2.9.0` but you're using
`scalaVersion := "2.9.1"`, you won't be able to use `%%` even though
the `2.9.0` dependency likely works. If `%%` stops working just go
see which versions the dependency is really built for, and hardcode the
one you think will work (assuming there is one).
@@ -152,11 +152,11 @@ See :doc:`/Detailed-Topics/Cross-Build` for some more detail on this.
Ivy revisions
~~~~~~~~~~~~~
The `revision` in `groupID % artifactID % revision` does not have to
be a single fixed version. Ivy can select the latest revision of a
module according to constraints you specify. Instead of a fixed revision
like `"1.6.1"`, you specify `"latest.integration"`, `"2.9.+"`, or
`"[1.0,)"`. See the `Ivy
revisions <http://ant.apache.org/ivy/history/2.3.0-rc1/ivyfile/dependency.html#revision>`_
documentation for details.
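For instance, constraint revisions drop straight into `libraryDependencies`; the module coordinates below are hypothetical:

```scala
// any 0.3.x release -- Ivy picks the highest matching revision
libraryDependencies += "com.example" % "example-lib" % "0.3.+"

// at least version 1.0, with no upper bound
libraryDependencies += "com.example" % "example-lib" % "[1.0,)"
```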
@@ -179,7 +179,7 @@ For example:
resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"
The `resolvers` key is defined in
`Keys <../../sxr/Keys.scala.html>`_ like
this:
@@ -187,7 +187,7 @@ this:
val resolvers = settingKey[Seq[Resolver]]("The user-defined additional resolvers for automatically managed dependencies.")
The `at` method creates a `Resolver` object from two strings.
sbt can search your local Maven repository if you add it as a
repository:
@@ -201,41 +201,41 @@ See :doc:`/Detailed-Topics/Resolvers` for details on defining other types of rep
Overriding default resolvers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`resolvers` does not contain the default resolvers; only additional
ones added by your build definition.
`sbt` combines `resolvers` with some default repositories to form
`externalResolvers`.
Therefore, to change or remove the default resolvers, you would need to
override `externalResolvers` instead of `resolvers`.
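A sketch of such an override in `build.sbt`; the repository name and URL are placeholders, not a recommendation:

```scala
// replace the default resolver chain entirely: keep the local Ivy
// repository plus a single internal mirror
externalResolvers := Seq(
  Resolver.defaultLocal,
  "Internal Mirror" at "https://repo.example.com/maven2"
)
```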
.. _gsg-ivy-configurations:
Per-configuration dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Often a dependency is used by your test code (in `src/test/scala`,
which is compiled by the `Test` configuration) but not your main code.
If you want a dependency to show up in the classpath only for the
`Test` configuration and not the `Compile` configuration, add
`% "test"` like this:
::
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3" % "test"
Now, if you type `show compile:dependencyClasspath` at the sbt
interactive prompt, you should not see derby. But if you type
`show test:dependencyClasspath`, you should see the derby jar in the
list.
Typically, test-related dependencies such as
`ScalaCheck <https://github.com/rickynils/scalacheck>`_,
`specs <http://code.google.com/p/specs/>`_, and
`ScalaTest <http://www.scalatest.org/>`_ would be defined with
`% "test"`.
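For example, combining `%%` with the `"test"` configuration (the version shown is illustrative):

```scala
// on the Test classpath only; never part of the main compile classpath
libraryDependencies += "org.scalatest" %% "scalatest" % "1.9.1" % "test"
```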
Next
====
@@ -2,70 +2,70 @@
More Kinds of Setting
=====================
This page explains other ways to create a `Setting`, beyond the basic
`:=` method. It assumes you've read :doc:`.sbt build definition <Basic-Def>` and :doc:`scopes <Scopes>`.
Refresher: Settings
-------------------
:doc:`Remember <Basic-Def>`, a build definition creates a
list of `Setting`, which is then used to transform sbt's description
of the build (which is a map of key-value pairs). A `Setting` is a
transformation with sbt's earlier map as input and a new map as output.
The new map becomes sbt's new state.
Different settings transform the map in different ways.
:doc:`Earlier <Basic-Def>`, you read about the `:=` method.
The `Setting` which `:=` creates puts a fixed, constant value in the
new, transformed map. For example, if you transform a map with the
setting `name := "hello"` the new map has the string `"hello"`
stored under the key `name`.
Settings must end up in the master list of settings to do any good (all
lines in a `build.sbt` automatically end up in the list, but in a
:doc:`.scala file <Full-Def>` you can get it wrong by
creating a `Setting` without putting it where sbt will find it).
Appending to previous values: `+=` and `++=`
------------------------------------------------
Assignment with `:=` is the simplest transformation, but keys have
other methods as well. If the `T` in `SettingKey[T]` is a sequence,
i.e. the key's value type is a sequence, you can append to the sequence
rather than replacing it.
- `+=` will append a single element to the sequence.
- `++=` will concatenate another sequence.
For example, the key `sourceDirectories in Compile` has a
`Seq[File]` as its value. By default this key's value would include
`src/main/scala`. If you wanted to also compile source code in a
directory called `source` (since you just have to be nonstandard), you
could add that directory:
::
sourceDirectories in Compile += new File("source")
Or, using the `file()` function from the sbt package for convenience:
::
sourceDirectories in Compile += file("source")
(`file()` just creates a new `File`.)
You could use `++=` to add more than one directory at a time:
::
sourceDirectories in Compile ++= Seq(file("sources1"), file("sources2"))
Where `Seq(a, b, c, ...)` is standard Scala syntax to construct a
sequence.
To replace the default source directories entirely, you use `:=` of
course:
::
@@ -75,9 +75,9 @@ course:
Computing a value based on other keys' values
---------------------------------------------
Reference the value of another task or setting by calling `value`
on the key for the task or setting. The `value` method is special and may
only be called in the argument to `:=`, `+=`, or `++=`.
As a first example, consider defining the project organization to be the same as the project name.
@@ -94,7 +94,7 @@ Or, set the name to the name of the project's directory:
// name the project after the directory it's inside
name := baseDirectory.value.getName
This transforms the value of `baseDirectory` using the standard `getName` method of `java.io.File`.
Using multiple inputs is similar. For example,
@@ -107,10 +107,10 @@ This sets the name in terms of its previous value as well as the organization an
Settings with dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~
In the setting `name := baseDirectory.value.getName`, `name` will have
a *dependency* on `baseDirectory`. If you place the above in
`build.sbt` and run the sbt interactive console, then type
`inspect name`, you should see (in part):
.. code-block:: text
@@ -121,12 +121,12 @@ This is how sbt knows which settings depend on which other settings.
Remember that some settings describe tasks, so this approach also
creates dependencies between tasks.
For example, if you `inspect compile` you'll see it depends on another
key `compileInputs`, and if you inspect `compileInputs` it in turn
depends on other keys. Keep following the dependency chains and magic
happens. When you type `compile` sbt automatically performs an
`update`, for example. It Just Works because the values required as
inputs to the `compile` computation require sbt to do the `update`
computation first.
In this way, all build dependencies in sbt are *automatic* rather than
@@ -137,7 +137,7 @@ then the computation depends on that key. It just works!
When settings are undefined
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Whenever a setting uses `:=`, `+=`, or `++=` to create a dependency on
itself or another key's value, the value it depends on must exist. If it
does not, sbt will complain. It might say *"Reference to undefined
setting"*, for example. When this happens, be sure you're using the key
@@ -150,8 +150,8 @@ Tasks with dependencies
~~~~~~~~~~~~~~~~~~~~~~~
As noted in :doc:`.sbt build definition <Basic-Def>`, task
keys create a `Setting[Task[T]]` rather than a `Setting[T]` when you
build a setting with `:=`, etc. Tasks can use settings as inputs, but
settings cannot use tasks as inputs.
Take these two keys (from `Keys <../../sxr/Keys.scala.html>`_):
@@ -161,10 +161,10 @@ Take these two keys (from `Keys <../../sxr/Keys.scala.html>`_):
val scalacOptions = taskKey[Seq[String]]("Options for the Scala compiler.")
val checksums = settingKey[Seq[String]]("The list of checksums to generate and to verify for dependencies.")
(`scalacOptions` and `checksums` have nothing to do with each other,
they are just two keys with the same value type, where one is a task.)
It is possible to compile a `build.sbt` that aliases `scalacOptions` to `checksums`, but not the other way.
For example, this is allowed:
::
@@ -183,13 +183,13 @@ time, and tasks expect to re-run every time.
checksums := scalacOptions.value
Appending with dependencies: `+=` and `++=`
-------------------------------------------------
Other keys can be used when appending to an existing setting or task, just like they can for assigning with `:=`.
For example, say you have a coverage report named after the project, and
you want to add it to the files removed by `clean`:
::
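The concrete snippet falls outside this hunk; a hypothetical version, using the standard `cleanFiles` key, might look like:

```scala
// add a per-project coverage report to the files removed by `clean`
// (the report file name is illustrative)
cleanFiles += baseDirectory.value / (name.value + "-coverage.txt")
```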
@@ -199,5 +199,5 @@ Next
----
At this point you know how to get things done with settings, so we can
move on to a specific key that comes up often: `libraryDependencies`.
:doc:`Learn about library dependencies <Library-Dependencies>`.
@@ -15,19 +15,19 @@ It can be useful to keep multiple related projects in a single build,
especially if they depend on one another and you tend to modify them
together.
Each sub-project in a build has its own `src/main/scala`, generates
its own jar file when you run `package`, and in general works like any
other project.
Defining projects in a `.scala` file
--------------------------------------
To have multiple projects, you must declare each project and how they
relate in a `.scala` file; there's no way to do it in a `.sbt` file.
However, you can define settings for each project in `.sbt` files.
Here's an example of a `.scala` file which defines a root project
`hello`, where the root project aggregates two sub-projects,
`hello-foo` and `hello-bar`:
::
@@ -45,22 +45,22 @@ Here's an example of a ``.scala`` file which defines a root project
base = file("bar"))
}
sbt finds the list of `Project` objects using reflection, looking for
fields with type `Project` in the `Build` object.
Because project `hello-foo` is defined with `base = file("foo")`, it
will be contained in the subdirectory `foo`. Its sources could be
directly under `foo`, like `foo/Foo.scala`, or in
`foo/src/main/scala`. The usual sbt :doc:`directory structure <Directories>`
applies underneath `foo` with the exception of build definition files.
Any `.sbt` files in `foo`, say `foo/build.sbt`, will be merged
with the build definition for the entire build, but scoped to the
`hello-foo` project.
If your whole project is in `hello`, try defining a different version
(`version := "0.6"`) in `hello/build.sbt`, `hello/foo/build.sbt`,
and `hello/bar/build.sbt`. Now `show version` at the sbt interactive
prompt. You should get something like this (with whatever versions you
defined):
@@ -74,24 +74,24 @@ defined):
[info] hello/*:version
[info] 0.5
`hello-foo/*:version` was defined in `hello/foo/build.sbt`,
`hello-bar/*:version` was defined in `hello/bar/build.sbt`, and
`hello/*:version` was defined in `hello/build.sbt`. Remember the
:doc:`syntax for scoped keys <Scopes>`. Each `version` key
is scoped to a project, based on the location of the `build.sbt`. But
all three `build.sbt` are part of the same build definition.
*Each project's settings can go in `.sbt` files in the base directory
of that project*, while the `.scala` file can be as simple as the one
shown above, listing the projects and base directories. *There is no
need to put settings in the `.scala` file.*
You may find it cleaner to put everything including settings in
`.scala` files in order to keep the entire build definition under a single
`project` directory, however. It's up to you.
You cannot have a `project` subdirectory or `project/*.scala` files
in the sub-projects. `foo/project/Build.scala` would be ignored.
Aggregation
-----------
@@ -100,24 +100,24 @@ Projects in the build can be completely independent of one another, if
you want.
In the above example, however, you can see the method call
`aggregate(foo, bar)`. This aggregates `hello-foo` and `hello-bar`
underneath the root project.
Aggregation means that running a task on the aggregate project will also
run it on the aggregated projects. Start up sbt with two subprojects as
in the example, and try `compile`. You should see that all three
projects are compiled.
*In the project doing the aggregating*, the root `hello` project in
this case, you can control aggregation per-task. So for example in
`hello/build.sbt` you could avoid aggregating the `update` task:
::
aggregate in update := false
`aggregate in update` is the `aggregate` key scoped to the
`update` task, see :doc:`scopes <Scopes>`.
Note: aggregation will run the aggregated tasks in parallel and with no
defined ordering.
@@ -126,52 +126,52 @@ Classpath dependencies
----------------------
A project may depend on code in another project. This is done by adding
a `dependsOn` method call. For example, if `hello-foo` needed
`hello-bar` on its classpath, you would write in your `Build.scala`:
::
lazy val foo = Project(id = "hello-foo",
base = file("foo")) dependsOn(bar)
Now code in `hello-foo` can use classes from `hello-bar`. This also
creates an ordering between the projects when compiling them;
`hello-bar` must be updated and compiled before `hello-foo` can be
compiled.
To depend on multiple projects, use multiple arguments to `dependsOn`,
like `dependsOn(bar, baz)`.
Per-configuration classpath dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`foo dependsOn(bar)` means that the `Compile` configuration in
`foo` depends on the `Compile` configuration in `bar`. You could
write this explicitly as `dependsOn(bar % "compile->compile")`.
The `->` in `"compile->compile"` means "depends on" so
`"test->compile"` means the `Test` configuration in `foo` would
depend on the `Compile` configuration in `bar`.
Omitting the `->config` part implies `->compile`, so
`dependsOn(bar % "test")` means that the `Test` configuration in
`foo` depends on the `Compile` configuration in `bar`.
A useful declaration is `"test->test"` which means `Test` depends on
`Test`. This allows you to put utility code for testing in
`bar/src/test/scala` and then use that code in `foo/src/test/scala`,
for example.
You can have multiple configurations for a dependency, separated by
semicolons. For example,
`dependsOn(bar % "test->test;compile->compile")`.
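As a sketch, that combined declaration would appear in the project definition like this (project ids as in the earlier example):

::

    lazy val foo = Project(id = "hello-foo", base = file("foo")) dependsOn(
      bar % "test->test;compile->compile")

With this mapping, both the main and test code of `foo` can see the corresponding code in `bar`.
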
Navigating projects interactively
---------------------------------
At the sbt interactive prompt, type `projects` to list your projects
and `project <projectname>` to select a current project. When you run
a task like `compile`, it runs on the current project. So you don't
necessarily have to compile the root project, you could compile only a
subproject.
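A session using these commands might look like this sketch (project names follow the earlier example):

.. code-block:: console

    > projects
    > project hello-foo
    > compile
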


Running
=======
This page describes how to use `sbt` once you have set up your
project. It assumes you've :doc:`installed sbt <Setup>` and
created a :doc:`Hello, World <Hello>` or other project.
Running sbt with no command line arguments starts it in interactive
mode. Interactive mode has a command prompt (with tab completion and
history!).
For example, you could type `compile` at the sbt prompt:
.. code-block:: console
    > compile
To `compile` again, press up arrow and then enter.
To run your program, type `run`.
To leave interactive mode, type `exit` or use Ctrl+D (Unix) or Ctrl+Z
(Windows).
Batch mode
----------
You can also run sbt in batch mode, specifying a space-separated list of
sbt commands as arguments. For sbt commands that take arguments, pass
the command and arguments as one argument to `sbt` by enclosing them
in quotes. For example,
.. code-block:: console
    $ sbt clean compile "testOnly TestA TestB"
In this example, `testOnly` has arguments, `TestA` and `TestB`.
The commands will be run in sequence (`clean`, `compile`, then
`testOnly`).
Continuous build and test
-------------------------
To speed up your edit-compile-test cycle, you can ask sbt to
automatically recompile or run tests whenever you save a source file.
Make a command run when one or more source files change by prefixing the
command with `~`. For example, in interactive mode try:
.. code-block:: console
Press enter to stop watching for changes.
You can use the `~` prefix with either interactive mode or batch mode.
See :doc:`/Detailed-Topics/Triggered-Execution` for more details.
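For example, a sketch of triggered execution from batch mode:

.. code-block:: console

    $ sbt "~ test"

sbt will rerun `test` each time a source file changes, until you press enter.
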
Common commands
---------------
Here are some of the most common sbt commands. For a more complete list,
see :doc:`/Detailed-Topics/Command-Line-Reference`.
- `clean` Deletes all generated files (in the `target` directory).
- `compile` Compiles the main sources (in `src/main/scala` and
`src/main/java` directories).
- `test` Compiles and runs all tests.
- `console` Starts the Scala interpreter with a classpath including
the compiled sources and all dependencies. To return to sbt, type
`:quit`, Ctrl+D (Unix), or Ctrl+Z (Windows).
- `run <argument>*` Runs the main class for the project in the same
virtual machine as `sbt`.
- `package` Creates a jar file containing the files in
`src/main/resources` and the classes compiled from
`src/main/scala` and `src/main/java`.
- `help <command>` Displays detailed help for the specified command.
If no command is provided, displays brief descriptions of all
commands.
- `reload` Reloads the build definition (`build.sbt`,
`project/*.scala`, `project/*.sbt` files). Needed if you change
the build definition.
Tab completion
Interactive mode remembers history, even if you exit sbt and restart it.
The simplest way to access history is with the up arrow key. The
following commands are also supported:
- `!` Show history command help.
- `!!` Execute the previous command again.
- `!:` Show all previous commands.
- `!:n` Show the last n commands.
- `!n` Execute the command with index `n`, as shown by the `!:`
command.
- `!-n` Execute the nth command before this one.
- `!string` Execute the most recent command starting with 'string'.
- `!?string` Execute the most recent command containing 'string'.
Next
----


The whole story about keys
--------------------------
:doc:`Previously <Basic-Def>` we pretended that a key like
`name` corresponded to one entry in sbt's map of key-value pairs. This
was a simplification.
In truth, each key can have an associated value in more than one
Some concrete examples:
- if you have multiple projects in your build definition, a key can
have a different value in each project.
- the `compile` key may have a different value for your main sources
and your test sources, if you want to compile them differently.
- the `packageOptions` key (which contains options for creating jar
packages) may have different values when packaging class files
(`packageBin`) or packaging source code (`packageSrc`).
*There is no single value for a given key name*, because the value may
differ according to scope.
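As a sketch of that last point, `packageOptions` can be given a value for one packaging task only (the manifest attribute here is an arbitrary illustration):

::

    // applies when building the binary jar, but not for packageSrc
    packageOptions in (Compile, packageBin) +=
        Package.ManifestAttributes(java.util.jar.Attributes.Name.SEALED -> "true")
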
If you think about sbt processing a list of settings to generate a
key-value map describing the project, as :doc:`discussed earlier <Basic-Def>`,
the keys in that key-value map are *scoped* keys.
Each setting defined in the build definition (for example in
`build.sbt`) applies to a scoped key as well.
Often the scope is implied or has a default, but if the defaults are
wrong, you'll need to mention the desired scope in `build.sbt`.
Scope axes
----------
comes from Ivy, which sbt uses for :doc:`managed dependencies <Library-Dependencies>`.
Some configurations you'll see in sbt:
- `Compile` which defines the main build (`src/main/scala`).
- `Test` which defines how to build tests (`src/test/scala`).
- `Runtime` which defines the classpath for the `run` task.
By default, all the keys associated with compiling, packaging, and
running are scoped to a configuration and therefore may work differently
in each configuration. The most obvious examples are the task keys
`compile`, `package`, and `run`; but all the keys which *affect*
those keys (such as `sourceDirectories` or `scalacOptions` or
`fullClasspath`) are also scoped to the configuration.
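For instance, a configuration-scoped key can be given a different value in each configuration; a sketch using the `in` method (explained later on this page; the options themselves are arbitrary):

::

    scalacOptions in Compile += "-deprecation"

    scalacOptions in Test += "-Xcheckinit"
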
Scoping by task axis
~~~~~~~~~~~~~~~~~~~~
Settings can affect how a task works. For example, the `packageSrc`
task is affected by the `packageOptions` setting.
To support this, a task key (such as `packageSrc`) can be a scope for
another key (such as `packageOptions`).
The various tasks that build a package (`packageSrc`,
`packageBin`, `packageDoc`) can share keys related to packaging,
such as `artifactName` and `packageOptions`. Those keys can have
distinct values for each packaging task.
Global scope
~~~~~~~~~~~~
Each scope axis can be filled in with an instance of the axis type (for
example the task axis can be filled in with a task), or the axis can be
filled in with the special value `Global`.
`Global` means what you would expect: the setting's value applies to
all instances of that axis. For example if the task axis is `Global`,
then the setting would apply to all tasks.
Delegation
----------

A scoped key may be undefined, if it has no value associated with it in
its scope.
For each scope, sbt has a fallback search path made up of other scopes.
Typically, if a key has no associated value in a more-specific scope,
sbt will try to get a value from a more general scope, such as the
`Global` scope or the entire-build scope.
This feature allows you to set a value once in a more general scope,
allowing multiple more-specific scopes to inherit the value.
You can see the fallback search path or "delegates" for a key using the
`inspect` command, as described below. Read on.
Referring to scoped keys when running sbt
-----------------------------------------
On the command line and in interactive mode, sbt displays (and parses)
scoped keys like this:

.. code-block:: text

    {<build-uri>}<project-id>/config:intask::key
- `{<build-uri>}<project-id>` identifies the project axis. The
`<project-id>` part will be missing if the project axis has "entire
build" scope.
- `config` identifies the configuration axis.
- `intask` identifies the task axis.
- `key` identifies the key being scoped.
`*` can appear for each axis, referring to the `Global` scope.
If you omit part of the scoped key, it will be inferred as follows:
- the current project will be used if you leave off the project axis.
- a key-dependent configuration will be auto-detected if you leave off
  the configuration axis.
- the `Global` task scope will be used if you leave off the task axis.

For more details, see :doc:`/Detailed-Topics/Inspecting-Settings`.

Examples of scoped key notation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- `fullClasspath`: just a key, so the default scopes are used:
current project, a key-dependent configuration, and global task
scope.
- `test:fullClasspath`: specifies the configuration, so this is
`fullClasspath` in the `test` configuration, with defaults for
the other two scope axes.
- `*:fullClasspath`: specifies `Global` for the configuration,
rather than the default configuration.
- `doc::fullClasspath`: specifies the `fullClasspath` key scoped
to the `doc` task, with the defaults for the project and
configuration axes.
- `{file:/home/hp/checkout/hello/}default-aea33a/test:fullClasspath`
specifies a project,
`{file:/home/hp/checkout/hello/}default-aea33a`, where the project
is identified with the build `{file:/home/hp/checkout/hello/}` and
then a project id inside that build `default-aea33a`. Also
specifies configuration `test`, but leaves the default task axis.
- `{file:/home/hp/checkout/hello/}/test:fullClasspath` sets the
project axis to "entire build" where the build is
`{file:/home/hp/checkout/hello/}`
- `{.}/test:fullClasspath` sets the project axis to "entire build"
where the build is `{.}`. `{.}` can be written `ThisBuild` in
Scala code.
- `{file:/home/hp/checkout/hello/}/compile:doc::fullClasspath` sets
all three scope axes.
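Any of these scoped keys can also be evaluated directly at the sbt prompt with `show`, for example:

.. code-block:: console

    > show test:fullClasspath
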
Inspecting scopes
-----------------
In sbt's interactive mode, you can use the `inspect` command to
understand keys and their scopes. Try `inspect test:fullClasspath`:
.. code-block:: text
On the first line, you can see this is a task (as opposed to a setting,
as explained in :doc:`.sbt build definition <Basic-Def>`).
The value resulting from the task will have type
`scala.collection.Seq[sbt.Attributed[java.io.File]]`.
"Provided by" points you to the scoped key that defines the value, in
this case
`{file:/home/hp/checkout/hello/}default-aea33a/test:fullClasspath`
(which is the `fullClasspath` key scoped to the `test`
configuration and the `{file:/home/hp/checkout/hello/}default-aea33a`
project).
"Dependencies" may not make sense yet; stay tuned for the :doc:`next page <More-About-Settings>`.
You can also see the delegates; if the value were not defined, sbt would
search through:
- two other configurations (`runtime:fullClasspath`,
`compile:fullClasspath`). In these scoped keys, the project is
unspecified meaning "current project" and the task is unspecified
meaning `Global`
- configuration set to `Global` (`*:fullClasspath`), since project
is still unspecified it's "current project" and task is still
unspecified so `Global`
- project set to `{.}` or `ThisBuild` (meaning the entire build, no
specific project)
- project axis set to `Global` (`*/test:fullClasspath`) (remember,
an unspecified project means current, so searching `Global` here is
new; i.e. `*` and "no project shown" are different for the project
axis; i.e. `*/test:fullClasspath` is not the same as
`test:fullClasspath`)
- both project and configuration set to `Global`
(`*/*:fullClasspath`) (remember that unspecified task means
`Global` already, so `*/*:fullClasspath` uses `Global` for all
three axes)
Try `inspect fullClasspath` (as opposed to the above example,
`inspect test:fullClasspath`) to get a sense of the difference.
Because the configuration is omitted, it is autodetected as `compile`.
`inspect compile:fullClasspath` should therefore look the same as
`inspect fullClasspath`.
Try `inspect *:fullClasspath` for another contrast.
`fullClasspath` is not defined in the `Global` configuration by
default.
Again, for more details, see :doc:`/Detailed-Topics/Inspecting-Settings`.
Referring to scopes in a build definition
-----------------------------------------
If you create a setting in `build.sbt` with a bare key, it will be
scoped to the current project, configuration `Global` and task
`Global`:
::
    name := "hello"
Run sbt and `inspect name` to see that it's provided by
`{file:/home/hp/checkout/hello/}default-aea33a/*:name`, that is, the
project is `{file:/home/hp/checkout/hello/}default-aea33a`, the
configuration is `*` (meaning global), and the task is not shown
(which also means global).
`build.sbt` always defines settings for a single project, so the
"current project" is the project you're defining in that particular
`build.sbt`. (For :doc:`multi-project builds <Multi-Project>`, each project has its own `build.sbt`.)
Keys have an overloaded method called `in` used to set the scope. The
argument to `in` can be an instance of any of the scope axes. So for
example, though there's no real reason to do this, you could set the
name scoped to the `Compile` configuration:
::
    name in Compile := "hello"
or you could set the name scoped to the `packageBin` task (pointless!
just an example):
::
    name in packageBin := "hello"
or you could set the name with multiple scope axes, for example in the
`packageBin` task in the `Compile` configuration:
::
    name in (Compile, packageBin) := "hello"
or you could use `Global` for all axes:
::
    name in Global := "hello"
(`name in Global` implicitly converts the scope axis `Global` to a
scope with all axes set to `Global`; the task and configuration are
already `Global` by default, so here the effect is to make the project
`Global`, that is, define `*/*:name` rather than
`{file:/home/hp/checkout/hello/}default-aea33a/*:name`)
If you aren't used to Scala, a reminder: it's important to understand
that `in` and `:=` are just methods, not magic. Scala lets you write
them in a nicer way, but you could also use the Java style:
::
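    // a sketch; the elided original example may differ slightly
    name.in(Global).:=("hello")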
When to specify a scope
-----------------------
You need to specify the scope if the key in question is normally scoped.
For example, the `compile` task, by default, is scoped to `Compile`
and `Test` configurations, and does not exist outside of those scopes.
To change the value associated with the `compile` key, you need to
write `compile in Compile` or `compile in Test`. Using plain
`compile` would define a new compile task scoped to the current
project, rather than overriding the standard compile tasks which are
scoped to a configuration.
One way to think of it is that a name is only *part* of a key. In
reality, all keys consist of both a name, and a scope (where the scope
has three axes). The entire expression
`packageOptions in (Compile, packageBin)` is a key name, in other
words. Simply `packageOptions` is also a key name, but a different one
(for keys with no `in`, a scope is implicitly assumed: current
project, global config, global task).
Next


Manual installation requires downloading `sbt-launch.jar`_ and creating a script
Unix
~~~~
Put `sbt-launch.jar`_ in `~/bin`.
Create a script to run the jar, by creating `~/bin/sbt` with these contents:
.. code-block:: console
Windows
~~~~~~~
Manual installation for Windows varies by terminal type and whether Cygwin is used.
In all cases, put the batch file or script on the path so that you can launch `sbt`
in any directory by typing `sbt` at the command prompt. Also, adjust JVM settings
according to your machine if necessary.
For **non-Cygwin users using the standard Windows terminal**, create a batch file `sbt.bat`:
.. code-block:: console
and put the downloaded `sbt-launch.jar`_ in the same directory as the batch file.
If using **Cygwin with the standard Windows terminal**, create a bash script `~/bin/sbt`:
.. code-block:: console
    $ SBT_OPTS="-Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256M"
    $ java $SBT_OPTS -jar sbt-launch.jar "$@"
Replace `sbt-launch.jar` with the path to your downloaded `sbt-launch.jar`_ and remember to use `cygpath` if necessary.
Make the script executable:
.. code-block:: console
    $ chmod u+x ~/bin/sbt
If using **Cygwin with an Ansi terminal** (supports Ansi escape sequences and is configurable via `stty`), create a bash script `~/bin/sbt`:
.. code-block:: console
    $ java -Djline.terminal=jline.UnixTerminal -Dsbt.cygwin=true $SBT_OPTS -jar sbt-launch.jar "$@"
    $ stty icanon echo > /dev/null 2>&1
Replace `sbt-launch.jar` with the path to your downloaded `sbt-launch.jar`_ and remember to use `cygpath` if necessary.
Then, make the script executable:
.. code-block:: console


sbt: The Core Concepts
======================

- `Programming in Scala <http://www.artima.com/shop/programming_in_scala_2ed>`_ written
  by the creator of Scala is a great introduction.
- :doc:`.sbt build definition <Basic-Def>`
- your build definition is one big list of `Setting` objects, where a
`Setting` transforms the set of key-value pairs sbt uses to perform
tasks.
- to create a `Setting`, call one of a few methods on a key: `:=`, `+=`, `++=`, or `~=`.
- there is no mutable state, only transformation; for example, a
`Setting` transforms sbt's collection of key-value pairs into a new
collection. It doesn't change anything in-place.
- each setting has a value of a particular type, determined by the key.
- *tasks* are special settings where the computation to produce the
- scoping allows you to have different behaviors per-project, per-task,
or per-configuration.
- a configuration is a kind of build, such as the main one
(`Compile`) or the test one (`Test`).
- the per-project axis also supports "entire build" scope.
- scopes fall back to or *delegate* to more general scopes.
- :doc:`.sbt <Basic-Def>` vs. :doc:`.scala <Full-Def>` build definition
- put most of your settings in `build.sbt`, but use `.scala` build
definition files to :doc:`define multiple subprojects <Multi-Project>`
, and to factor out common values, objects, and methods.
- the build definition is an sbt project in its own right, rooted in
the `project` directory.
- :doc:`Plugins <Using-Plugins>` are extensions to the
build definition
- add plugins with the `addSbtPlugin` method in `project/build.sbt`
(NOT `build.sbt` in the project's base directory).
If any of this leaves you wondering rather than nodding, please ask for
help on the `mailing list`_, go


What is a plugin?
-----------------
A plugin extends the build definition, most commonly by adding new
settings. The new settings could be new tasks. For example, a plugin
could add a `code-coverage` task which would generate a test coverage
report.
Adding a plugin
---------------
The short answer
~~~~~~~~~~~~~~~~
If your project is in directory `hello`, edit
`hello/project/build.sbt` and add the plugin location as a resolver,
then call `addSbtPlugin` with the plugin's Ivy module ID:
::
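    // illustrative sketch only: the resolver and plugin coordinates
    // are assumptions, not taken from the surrounding text
    resolvers += Classpaths.typesafeResolver

    addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.2.0")
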
Global plugins
~~~~~~~~~~~~~~
Plugins can be installed for all your projects at once by dropping them
in `~/.sbt/plugins/`. `~/.sbt/plugins/` is an sbt project whose
classpath is exported to all sbt build definition projects. Roughly
speaking, any `.sbt` files in `~/.sbt/plugins/` behave as if they
were in the `project/` directory for all projects, and any `.scala`
files in `~/.sbt/plugins/project/` behave as if they were in the
`project/project/` directory for all projects.
You can create `~/.sbt/plugins/build.sbt` and put `addSbtPlugin()`
expressions in there to add plugins to all your projects at once.
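For example, a sketch of such a global `~/.sbt/plugins/build.sbt` (the plugin coordinates are an illustrative assumption):

::

    addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.4.0")
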
How it works
~~~~~~~~~~~~

Adding a plugin means *adding a library dependency to the build
definition*. To do that, you edit the build definition for the build
definition.
Recall that for a project `hello`, its build definition project lives
in `hello/*.sbt` and `hello/project/*.scala`:
.. code-block:: text
Build.scala # a source file in the project/ project,
# that is, a source file in the build definition
If you wanted to add a managed dependency to project `hello`, you
would add to the `libraryDependencies` setting either in
`hello/*.sbt` or `hello/project/*.scala`.
You could add this in `hello/build.sbt`:
::
    libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3" % "test"
If you add that and start up the sbt interactive mode and type
`show dependencyClasspath`, you should see the derby jar on your
classpath.
To add a plugin, do the same thing but recursed one level. We want the
*build definition project* to have a new dependency. That means changing
the `libraryDependencies` setting for the build definition of the
build definition.
The build definition of the build definition, if your project is
`hello`, would be in `hello/project/*.sbt` and
`hello/project/project/*.scala`.
The simplest "plugin" has no special sbt support; it's just a jar file.
For example, edit `hello/project/build.sbt` and add this line:
::
libraryDependencies += "net.liftweb" % "lift-json" % "2.0"
Now, at the sbt interactive prompt, `reload plugins` to enter the
build definition project, and try `show dependencyClasspath`. You
should see the lift-json jar on the classpath. This means: you could use
classes from lift-json in your `Build.scala` or `build.sbt` to
implement a task. You could parse a JSON file and generate other files
based on it, for example. Remember, use `reload return` to leave the
build definition project and go back to the parent project.
(Stupid sbt trick: type `reload plugins` over and over. You'll find
yourself in the project rooted in
`project/project/project/project/project/project/`. Don't worry, it
isn't useful. Also, it creates `target` directories all the way down,
which you'll have to clean up.)
`addSbtPlugin`
^^^^^^^^^^^^^^^^
`addSbtPlugin` is just a convenience method. Here's its definition:
::
@ -133,27 +133,27 @@ which you'll have to clean up.)
libraryDependencies +=
sbtPluginExtra(dependency, (sbtVersion in update).value, scalaVersion.value)
The appended dependency is based on `sbtVersion in update`
(sbt's version scoped to the `update` task) and `scalaVersion` (the
version of scala used to compile the project, in this case used to
compile the build definition). `sbtPluginExtra` adds the sbt and Scala
version information to the module ID.
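As a sketch, then, an `addSbtPlugin` call is roughly equivalent to writing out the expansion by hand (the module ID below is a placeholder):

::

    // These two forms are roughly equivalent; "com.example" % "example-plugin" % "1.0"
    // is a placeholder module ID, not a real plugin.
    addSbtPlugin("com.example" % "example-plugin" % "1.0")

    libraryDependencies +=
      sbtPluginExtra(
        "com.example" % "example-plugin" % "1.0",
        (sbtVersion in update).value,
        scalaVersion.value
      )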
`plugins.sbt`
^^^^^^^^^^^^^^^
Some people like to list plugin dependencies (for a project `hello`)
in `hello/project/plugins.sbt` to avoid confusion with
`hello/build.sbt`. sbt does not care what `.sbt` files are called,
so both `build.sbt` and `project/plugins.sbt` are conventions. sbt
*does* of course care where the sbt files are *located*. `hello/*.sbt`
would contain dependencies for `hello` and `hello/project/*.sbt`
would contain dependencies for `hello`'s build definition.
Plugins can add settings and imports automatically
--------------------------------------------------
In one sense a plugin is just a jar added to `libraryDependencies` for
the build definition; you can then use the jar from build definition
code as in the lift-json example above.
@ -161,16 +161,16 @@ However, jars intended for use as sbt plugins can do more.
If you download a plugin jar (`here's one for
sbteclipse <http://repo.typesafe.com/typesafe/ivy-releases/com.typesafe.sbteclipse/sbteclipse/scala_2.9.1/sbt_0.11.0/1.4.0/jars/sbteclipse.jar>`_)
and unpack it with `jar xf`, you'll see that it contains a text file
`sbt/sbt.plugins`. In `sbt/sbt.plugins` there's an object name on
each line like this:
.. code-block:: text
com.typesafe.sbteclipse.SbtEclipsePlugin
`com.typesafe.sbteclipse.SbtEclipsePlugin` is the name of an object
that extends `sbt.Plugin`. The `sbt.Plugin` trait is very simple:
::
@ -178,18 +178,18 @@ that extends ``sbt.Plugin``. The ``sbt.Plugin`` trait is very simple:
def settings: Seq[Setting[_]] = Nil
}
sbt looks for objects listed in `sbt/sbt.plugins`. When it finds
`com.typesafe.sbteclipse.SbtEclipsePlugin`, it adds
`com.typesafe.sbteclipse.SbtEclipsePlugin.settings` to the settings
for the project. It also does
`import com.typesafe.sbteclipse.SbtEclipsePlugin._` for any `.sbt`
files, allowing a plugin to provide values, objects, and methods to
`.sbt` files in the build definition.
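As a sketch, a hypothetical plugin object using this mechanism might look like the following (the key name and default value are invented for illustration; `import sbt._` is assumed):

::

    import sbt._

    // A hypothetical plugin object. Because it extends sbt.Plugin, its members
    // (such as `greeting`) are automatically imported into .sbt files.
    object MyPlugin extends Plugin {
      val greeting = SettingKey[String]("greeting", "A demonstration setting.")
      override def settings = Seq(greeting := "Hello from MyPlugin")
    }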
Adding settings manually from a plugin
--------------------------------------
If a plugin defines settings in the `settings` field of a `Plugin`
object, you don't have to do anything to add them.
However, plugins often avoid this because you could not control which
@ -203,7 +203,7 @@ A whole batch of settings can be added by directly referencing the sequence of s
val myPluginSettings = Seq(settings in here)
}
You could add all those settings in `build.sbt` with this syntax:
::
@ -213,10 +213,10 @@ Creating a plugin
-----------------
After reading this far, you pretty much know how to *create* an sbt
plugin as well. There's one trick to know: set `sbtPlugin := true` in
`build.sbt`. If `sbtPlugin` is true, the project will scan its
compiled classes for instances of `Plugin`, and list them in
`sbt/sbt.plugins` when it packages a jar. `sbtPlugin := true` also
adds sbt to the project's classpath, so you can use sbt APIs to
implement your plugin.
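Putting that together, a plugin project's `build.sbt` can be as small as this sketch (the name and organization are placeholders):

::

    // build.sbt for a hypothetical plugin project
    sbtPlugin := true

    name := "example-plugin"

    organization := "com.example"

    version := "1.0"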
@ -6,7 +6,7 @@ How to...
---------
This page presents an index of the how-to topics with short examples for many of them.
Click `(details)` to jump to the full explanation.
See also the :doc:`Basic Index <index>`, which omits the examples and just lists the topics.
.. howtoindex::
@ -11,13 +11,13 @@ sbt provides standard hooks for adding source or resource generation tasks.
sourceGenerators in Compile += <your Task[Seq[File]] here>
A source generation task should generate sources in a subdirectory of `sourceManaged` and return a sequence of files generated. The key to add the task to is called `sourceGenerators`. It should be scoped according to whether the generated files are main (`Compile`) or test (`Test`) sources. This basic structure looks like:
::
sourceGenerators in Compile += <your Task[Seq[File]] here>
For example, assuming a method `def makeSomeSources(base: File): Seq[File]`,
::
@ -35,7 +35,7 @@ As a specific example, the following generates a hello world source file:
Seq(file)
}
Executing 'run' will print "Hi". Change `Compile` to `Test` to make it a test source. For efficiency, you would only want to generate sources when necessary and not every run.
By default, generated sources are not included in the packaged source artifact. To do so, add them as you would other mappings. See :ref:`Adding files to a package <modify-package-contents>`. A source generator can return both Java and Scala sources mixed together in the same sequence. They will be distinguished by their extension later.
@ -46,13 +46,13 @@ By default, generated sources are not included in the packaged source artifact.
resourceGenerators in Compile += <your Task[Seq[File]] here>
A resource generation task should generate resources in a subdirectory of `resourceManaged` and return a sequence of files generated. The key to add the task to is called `resourceGenerators`. It should be scoped according to whether the generated files are main (`Compile`) or test (`Test`) resources. This basic structure looks like:
::
resourceGenerators in Compile += <your Task[Seq[File]] here>
For example, assuming a method `def makeSomeResources(base: File): Seq[File]`,
::
@ -71,6 +71,6 @@ As a specific example, the following generates a properties file containing the
Seq(file)
}
Change `Compile` to `Test` to make it a test resource. Normally, you would only want to generate resources when necessary and not every run.
By default, generated resources are not included in the packaged source artifact. To do so, add them as you would other mappings. See :ref:`Adding files to a package <modify-package-contents>`.
@ -9,8 +9,8 @@ Inspect the build
help compile
The `help` command is used to show available commands and search the help for commands, tasks, or settings.
If run without arguments, `help` lists the available commands.
::
@ -26,14 +26,14 @@ If run without arguments, ``help`` lists the available commands.
> help compile
If the argument passed to `help` is the name of an existing command, setting or task, the help
for that entity is displayed. Otherwise, the argument is interpreted as a regular expression that
is used to search the help of all commands, settings and tasks.
The `tasks` command is like `help`, but operates only on tasks.
Similarly, the `settings` command only operates on settings.
See also `help help`, `help tasks`, and `help settings`.
.. howto::
:id: listtasks
@ -42,10 +42,10 @@ See also ``help help``, ``help tasks``, and ``help settings``.
tasks
The `tasks` command, without arguments, lists the most commonly used tasks.
It can take a regular expression to search task names and descriptions.
The verbosity can be increased to show or search less commonly used tasks.
See `help tasks` for details.
.. howto::
@ -55,10 +55,10 @@ See ``help tasks`` for details.
settings
The `settings` command, without arguments, lists the most commonly used settings.
It can take a regular expression to search setting names and descriptions.
The verbosity can be increased to show or search less commonly used settings.
See `help settings` for details.
.. howto::
:id: dependencies
@ -67,7 +67,7 @@ See ``help settings`` for details.
inspect compile
The `inspect` command displays several pieces of information about a given setting or task, including
the dependencies of a task/setting as well as the tasks/settings that depend on it. For example,
.. code-block:: console
@ -97,7 +97,7 @@ See the :doc:`/Detailed-Topics/Inspecting-Settings` page for details.
inspect compile
In addition to displaying immediate forward and reverse dependencies as described in the previous section,
the `inspect` command can display the full dependency tree for a task or setting.
For example,
.. code-block:: console
@ -114,9 +114,9 @@ For example,
[info] +-*:history = Some(<project>/target/.history)
...
For each task, `inspect tree` shows the type of the value generated by the task.
For a setting, the `toString` of the setting is displayed.
See the :doc:`/Detailed-Topics/Inspecting-Settings` page for details on the `inspect` command.
.. howto::
:id: description
@ -125,8 +125,8 @@ See the :doc:`/Detailed-Topics/Inspecting-Settings` page for details on the ``in
help compile
While the `help`, `settings`, and `tasks` commands display a description of a task,
the `inspect` command also shows the type of a setting or task and the value of a setting.
For example:
.. code-block:: console
@ -164,7 +164,7 @@ See the :doc:`/Detailed-Topics/Inspecting-Settings` page for details.
inspect compile
The `inspect` command can help find scopes where a setting or task is defined.
The following example shows that different options may be specified to the Scala compiler
for testing and for API documentation generation.
@ -187,7 +187,7 @@ See the :doc:`/Detailed-Topics/Inspecting-Settings` page for details.
projects
The `projects` command displays the currently loaded projects.
The projects are grouped by their enclosing build and the current project is indicated by an asterisk.
For example,
@ -208,7 +208,7 @@ For example,
session list
`session list` displays the settings that have been added at the command line for the current project. For example,
.. code-block:: console
@ -216,8 +216,8 @@ For example,
1. maxErrors := 5
2. scalacOptions += "-explaintypes"
`session list-all` displays the settings added for all projects.
For details, see `help session`.
.. howto::
:id: about
@ -242,7 +242,7 @@ For details, see ``help session``.
show name
The `inspect` command shows the value of a setting as part of its output, but the `show` command is dedicated to this job.
It shows the output of the setting provided as an argument. For example,
.. code-block:: console
@ -250,7 +250,7 @@ It shows the output of the setting provided as an argument. For example,
> show organization
[info] com.github.sbt
The `show` command also works for tasks, described next.
.. howto::
:id: result
@ -268,8 +268,8 @@ The ``show`` command also works for tasks, described next.
[info] compile:
[info] org.scala-lang:scala-library:2.9.2: ...
The `show` command will execute the task provided as an argument and then print the result.
Note that this is different from the behavior of the `inspect` command (described in other sections),
which does not execute a task and thus can only display its type and not its generated value.
.. howto::
@ -300,8 +300,8 @@ For the test classpath,
show compile:discoveredMainClasses
sbt detects the classes with public, static main methods for use by the `run` method and to tab-complete the `runMain` method.
The `discoveredMainClasses` task does this discovery and provides as its result the list of class names.
For example, the following shows the main classes discovered in the main sources:
.. code-block:: console
@ -318,7 +318,7 @@ For example, the following shows the main classes discovered in the main sources
show definedTestNames
sbt detects tests according to fingerprints provided by test frameworks.
The `definedTestNames` task provides as its result the list of test names detected in this way.
For example,
.. code-block:: console
@ -2,7 +2,7 @@
Interactive mode
=================
By default, sbt's interactive mode is started when no commands are provided on the command line or when the `shell` command is invoked.
.. howto::
:id: basic_completion
@ -16,13 +16,13 @@ Suggestions are provided that can complete the text entered to the left of the c
Any part of the suggestion that is unambiguous is automatically appended to the current text.
Commands typically support tab completion for most of their syntax.
As an example, entering `tes` and hitting tab:
.. code-block:: console
> tes<TAB>
results in sbt appending a `t`:
.. code-block:: console
@ -36,7 +36,7 @@ To get further completions, hit tab again:
testFrameworks testListeners testLoader testOnly testOptions test:
Now, there is more than one possibility for the next character, so sbt prints the available options.
We will select `testOnly` and get more suggestions by entering the rest of the command and hitting tab twice:
.. code-block:: console
@ -57,7 +57,7 @@ If tests have been added, renamed, or removed since the last test compilation, t
Press tab multiple times.
Some commands have different levels of completion. Hitting tab multiple times increases the verbosity of completions. (Presently, this feature is only used by the `set` command.)
.. howto::
:id: show_keybindings
@ -67,7 +67,7 @@ Some commands have different levels of completion. Hitting tab multiple times i
> consoleQuick
scala> :keybindings
Both the Scala and sbt command prompts use JLine for interaction. The Scala REPL contains a `:keybindings` command to show many of the keybindings used for JLine. For sbt, this can be used by running one of the `console` commands (`console`, `consoleQuick`, or `consoleProject`) and then running `:keybindings`. For example:
.. code-block:: console
@ -88,7 +88,7 @@ Both the Scala and sbt command prompts use JLine for interaction. The Scala REP
:title: Modify the default JLine keybindings
JLine, used by both Scala and sbt, uses a configuration file for many of its keybindings.
The location of this file can be changed with the system property `jline.keybindings`.
The default keybindings file is included in the sbt launcher and may be used as a starting point for customization.
@ -100,7 +100,7 @@ The default keybindings file is included in the sbt launcher and may be used as
shellPrompt := { (s: State) => System.getProperty("user.name") + "> " }
By default, sbt only displays `> ` to prompt for a command.
This can be changed through the `shellPrompt` setting, which has type `State => String`.
:doc:`State </Extending/Build-State>` contains all state for sbt and thus provides access to all build information for use in the prompt string.
Examples:
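One minimal sketch shows the id of the current project in the prompt, using `Project.extract` to pull build information out of the `State`:

::

    // Display the current project's id instead of the plain "> " prompt.
    shellPrompt := { state =>
      Project.extract(state).currentRef.project + "> "
    }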
@ -123,17 +123,17 @@ Examples:
Interactive mode remembers history even if you exit sbt and restart it.
The simplest way to access history is to press the up arrow key to cycle
through previously entered commands. Use `Ctrl+r` to incrementally
search history backwards. The following commands are supported:
* `!` Show history command help.
* `!!` Execute the previous command again.
* `!:` Show all previous commands.
* `!:n` Show the last n commands.
* `!n` Execute the command with index `n`, as shown by the `!:` command.
* `!-n` Execute the nth command before this one.
* `!string` Execute the most recent command starting with 'string'
* `!?string` Execute the most recent command containing 'string'
.. howto::
:id: history_file
@ -142,16 +142,16 @@ search history backwards. The following commands are supported:
historyPath := Some( baseDirectory.value / ".history" )
By default, interactive history is stored in the `target/` directory for the current project (but is not removed by a `clean`).
History is thus separate for each subproject.
The location can be changed with the `historyPath` setting, which has type `Option[File]`.
For example, history can be stored in the root directory for the project instead of the output directory:
::
historyPath := Some(baseDirectory.value / ".history")
The history path needs to be set for each project, since sbt will use the value of `historyPath` for the current project (as selected by the `project` command).
.. howto::
@ -163,14 +163,14 @@ The history path needs to be set for each project, since sbt will use the value
The previous section describes how to configure the location of the history file.
This setting can be used to share the interactive history among all projects in a build instead of using a different history for each project.
The way this is done is to set `historyPath` to be the same file, such as a file in the root project's `target/` directory:
::
historyPath :=
Some( (target in LocalRootProject).value / ".history")
The `in LocalRootProject` part means to get the output directory for the root project for the build.
.. howto::
:id: disable_history
@ -179,7 +179,7 @@ The ``in LocalRootProject`` part means to get the output directory for the root
historyPath := None
If, for whatever reason, you want to disable history, set `historyPath` to `None` in each project it should be disabled in:
historyPath := None
@ -190,18 +190,18 @@ If, for whatever reason, you want to disable history, set ``historyPath`` to ``N
clean compile shell
Interactive mode is implemented by the `shell` command.
By default, the `shell` command is run if no commands are provided to sbt on the command line.
To run commands before entering interactive mode, specify them on the command line followed by `shell`.
For example,
.. code-block:: console
$ sbt clean compile shell
This runs `clean` and then `compile` before entering the interactive prompt.
If either `clean` or `compile` fails, sbt will exit without going to the prompt.
To enter the prompt whether or not these initial commands succeed, prepend `-shell`, which means to run `shell` if any command fails.
For example,
.. code-block:: console
@ -10,9 +10,9 @@ Configure and use logging
last
When a command is run, more detailed logging output is sent to a file than to the screen (by default).
This output can be recalled for the command just executed by running `last`.
For example, the output of `run` when the sources are up to date is:
.. code-block:: console
@ -22,7 +22,7 @@ For example, the output of ``run`` when the sources are uptodate is:
[success] Total time: 0 s, completed Feb 25, 2012 1:00:00 PM
The details of this execution can be recalled by running `last`:
.. code-block:: console
@ -68,8 +68,8 @@ Configuration of the logging level for the console and for the backing file are
last compile
When a task is run, more detailed logging output is sent to a file than to the screen (by default).
This output can be recalled for a specific task by running `last <task>`.
For example, the first time `compile` is run, output might look like:
.. code-block:: console
@ -116,7 +116,7 @@ and:
printWarnings
The Scala compiler does not print the full details of warnings by default.
Compiling code that uses the deprecated `error` method from Predef might generate the following output:
.. code-block:: console
@ -125,8 +125,8 @@ Compiling code that uses the deprecated ``error`` method from Predef might gener
[warn] there were 1 deprecation warnings; re-run with -deprecation for details
[warn] one warning found
The details aren't provided, so it is necessary to add `-deprecation` to the options passed to the compiler (`scalacOptions`) and recompile.
An alternative when using Scala 2.10 and later is to run `printWarnings`.
This task will display all warnings from the previous compilation.
For example,
@ -144,9 +144,9 @@ For example,
set every logLevel := Level.Debug
The amount of logging is controlled by the `logLevel` setting, which takes values from the `Level` enumeration.
Valid values are `Error`, `Warn`, `Info`, and `Debug` in order of increasing verbosity.
To change the global logging level, set `logLevel in Global`.
For example, to set it temporarily from the sbt prompt,
.. code-block:: console
@ -158,8 +158,8 @@ For example, to set it temporarily from the sbt prompt,
:title: Change the logging level for a specific task, configuration, or project
setting: logLevel in compile := Level.Debug
The amount of logging is controlled by the `logLevel` setting, which takes values from the `Level` enumeration.
Valid values are `Error`, `Warn`, `Info`, and `Debug` in order of increasing verbosity.
The logging level may be configured globally, as described in the previous section, or it may be applied to a specific project, configuration, or task.
For example, to change the logging level for compilation to only show warnings and errors:
@ -174,9 +174,9 @@ To enable debug logging for all tasks in the current project,
> set logLevel := Level.Warn
A common scenario is that after running a task, you notice that you need more information than was shown by default.
A `logLevel` based solution typically requires changing the logging level and running a task again.
However, there are two cases where this is unnecessary.
First, warnings from a previous compilation may be displayed using `printWarnings` for the main sources or `test:printWarnings` for test sources.
Second, output from the previous execution is available either for a single task or in its entirety.
See the sections on `printWarnings <#printwarnings>`_ and `previous output <#last>`_.
@ -192,8 +192,8 @@ By default, sbt hides the stack trace of most exceptions thrown during execution
It prints a message that indicates how to display the exception.
However, you may want more of the stack trace to be shown by default.
The setting to configure is ``traceLevel``, which is a setting with an Int value.
When ``traceLevel`` is set to a negative value, no stack traces are shown.
The setting to configure is `traceLevel`, which is a setting with an Int value.
When `traceLevel` is set to a negative value, no stack traces are shown.
When it is zero, the stack trace is displayed up to the first sbt stack frame.
When positive, the stack trace is shown up to that many stack frames.
@ -203,8 +203,8 @@ For example, the following configures sbt to show stack traces up to the first s
> set every traceLevel := 0
The ``every`` part means to override the setting in all scopes.
To change the trace printing behavior for a single project, configuration, or task, scope ``traceLevel`` appropriately:
The `every` part means to override the setting in all scopes.
To change the trace printing behavior for a single project, configuration, or task, scope `traceLevel` appropriately:
.. code-block:: console
@ -221,7 +221,7 @@ To change the trace printing behavior for a single project, configuration, or ta
By default, sbt buffers the logging output of a test until the whole class finishes.
This is so that output does not get mixed up when executing in parallel.
To disable buffering, set the ``logBuffered`` setting to false:
To disable buffering, set the `logBuffered` setting to false:
::
@ -231,9 +231,9 @@ To disable buffering, set the ``logBuffered`` setting to false:
:id: custom
:title: Add a custom logger
The setting ``extraLoggers`` can be used to add custom loggers.
The setting `extraLoggers` can be used to add custom loggers.
A custom logger should implement [AbstractLogger].
``extraLoggers`` is a function ``ScopedKey[_] => Seq[AbstractLogger]``.
`extraLoggers` is a function `ScopedKey[_] => Seq[AbstractLogger]`.
This means that it can provide different logging based on the task that requests the logger.
::
@ -244,15 +244,15 @@ This means that it can provide different logging based on the task that requests
}
}
Here, we take the current function for the setting ``currentFunction`` and provide a new function.
Here, we take the current function for the setting `currentFunction` and provide a new function.
The new function prepends our custom logger to the ones provided by the old function.
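Spelled out, that wiring can be sketched as follows (`myCustomLogger` is an illustrative name for a function producing an `AbstractLogger`; it is not part of sbt)::

    extraLoggers := {
      val currentFunction = extraLoggers.value
      (key: ScopedKey[_]) => {
        myCustomLogger(key) +: currentFunction(key)
      }
    }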
.. howto::
:id: log
:title: Log messages in a task
The special task ``streams`` provides per-task logging and I/O via a `Streams <../../api/#sbt.std.Streams>`_ instance.
To log, a task uses the ``log`` member from the ``streams`` task:
The special task `streams` provides per-task logging and I/O via a `Streams <../../api/#sbt.std.Streams>`_ instance.
To log, a task uses the `log` member from the `streams` task:
::
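
    // A minimal sketch (the task name `myTask` is illustrative,
    // assumed to be defined elsewhere in the build):
    myTask := {
      val log = streams.value.log
      log.info("an informational message")
      log.debug("a debug message")
    }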

View File

@ -2,7 +2,7 @@
Project metadata
================
A project should define ``name`` and ``version``. These will be used in various parts of the build, such as the names of generated artifacts. Projects that are published to a repository should also override ``organization``.
A project should define `name` and `version`. These will be used in various parts of the build, such as the names of generated artifacts. Projects that are published to a repository should also override `organization`.
.. howto::
:id: name
@ -15,7 +15,7 @@ A project should define ``name`` and ``version``. These will be used in various
name := "Your project name"
For published projects, this name is normalized to be suitable for use as an artifact name and dependency ID. This normalized name is stored in ``normalizedName``.
For published projects, this name is normalized to be suitable for use as an artifact name and dependency ID. This normalized name is stored in `normalizedName`.
.. howto::
:id: version
@ -37,7 +37,7 @@ For published projects, this name is normalized to be suitable for use as an art
By convention, this is a reverse domain name that you own, typically one specific to your project. It is used as a namespace for projects.
A full/formal name can be defined in the ``organizationName`` setting. This is used in the generated pom.xml. If the organization has a web site, it may be set in the ``organizationHomepage`` setting. For example:
A full/formal name can be defined in the `organizationName` setting. This is used in the generated pom.xml. If the organization has a web site, it may be set in the `organizationHomepage` setting. For example:
::

View File

@ -9,13 +9,13 @@
exportJars := true
By default, a project exports a directory containing its resources and compiled class files. Set ``exportJars`` to true to export the packaged jar instead. For example,
By default, a project exports a directory containing its resources and compiled class files. Set `exportJars` to true to export the packaged jar instead. For example,
::
exportJars := true
The jar will be used by ``run``, ``test``, ``console``, and other tasks that use the full classpath.
The jar will be used by `run`, `test`, `console`, and other tasks that use the full classpath.
.. howto::
@ -26,9 +26,9 @@ The jar will be used by ``run``, ``test``, ``console``, and other tasks that use
packageOptions in (Compile, packageBin) +=
Package.ManifestAttributes( Attributes.Name.SEALED -> "true" )
By default, sbt constructs a manifest for the binary package from settings such as ``organization`` and ``mainClass``. Additional attributes may be added to the ``packageOptions`` setting scoped by the configuration and package task.
By default, sbt constructs a manifest for the binary package from settings such as `organization` and `mainClass`. Additional attributes may be added to the `packageOptions` setting scoped by the configuration and package task.
Main attributes may be added with ``Package.ManifestAttributes``. There are two variants of this method, once that accepts repeated arguments that map an attribute of type ``java.util.jar.Attributes.Name`` to a String value and other that maps attribute names (type String) to the String value.
Main attributes may be added with `Package.ManifestAttributes`. There are two variants of this method: one accepts repeated arguments mapping an attribute of type `java.util.jar.Attributes.Name` to a String value, and the other maps attribute names (of type String) to String values.
For example,
@ -37,7 +37,7 @@ For example,
packageOptions in (Compile, packageBin) +=
Package.ManifestAttributes( java.util.jar.Attributes.Name.SEALED -> "true" )
Other attributes may be added with ``Package.JarManifest``.
Other attributes may be added with `Package.JarManifest`.
::
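
    // An illustrative sketch: construct a java.util.jar.Manifest
    // and add it whole to the package options.
    packageOptions in (Compile, packageBin) += {
      val manifest = new java.util.jar.Manifest
      manifest.getMainAttributes.putValue("Built-By", "sbt") // hypothetical attribute
      Package.JarManifest(manifest)
    }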
@ -61,7 +61,7 @@ Or, to read the manifest from a file:
:id: name
:title: Change the file name of a package
The ``artifactName`` setting controls the name of generated packages. See the :doc:`/Detailed-Topics/Artifacts` page for details.
The `artifactName` setting controls the name of generated packages. See the :doc:`/Detailed-Topics/Artifacts` page for details.
.. howto::
:id: contents
@ -73,7 +73,7 @@ The ``artifactName`` setting controls the name of generated packages. See the :
.. _modify-package-contents:
The contents of a package are defined by the ``mappings`` task, of type ``Seq[(File,String)]``. The ``mappings`` task is a sequence of mappings from a file to include in the package to the path in the package. See :doc:`/Detailed-Topics/Mapping-Files` for convenience functions for generating these mappings. For example, to add the file ``in/example.txt`` to the main binary jar with the path "out/example.txt",
The contents of a package are defined by the `mappings` task, of type `Seq[(File,String)]`. The `mappings` task is a sequence of mappings from a file to include in the package to the path in the package. See :doc:`/Detailed-Topics/Mapping-Files` for convenience functions for generating these mappings. For example, to add the file `in/example.txt` to the main binary jar with the path "out/example.txt",
::
@ -81,4 +81,4 @@ The contents of a package are defined by the ``mappings`` task, of type ``Seq[(F
(baseDirectory.value / "in" / "example.txt") -> "out/example.txt"
}
Note that ``mappings`` is scoped by the configuration and the specific package task. For example, the mappings for the test source package are defined by the ``mappings in (Test, packageSrc)`` task.
Note that `mappings` is scoped by the configuration and the specific package task. For example, the mappings for the test source package are defined by the `mappings in (Test, packageSrc)` task.
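As a sketch, adding an extra file to the test source package might look like this (the file name is illustrative)::

    mappings in (Test, packageSrc) +=
      (baseDirectory.value / "extra.txt") -> "extra.txt"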

View File

@ -26,9 +26,9 @@ For example,
Multiple commands can be scheduled at once by prefixing each command with a semicolon.
This is useful for specifying multiple commands where a single command string is accepted.
For example, the syntax for triggered execution is ``~ <command>``.
For example, the syntax for triggered execution is `~ <command>`.
To have more than one command run for each triggering, use semicolons.
For example, the following runs ``clean`` and then ``compile`` each time a source file changes:
For example, the following runs `clean` and then `compile` each time a source file changes:
.. code-block:: console
@ -41,7 +41,7 @@ For example, the following runs ``clean`` and then ``compile`` each time a sourc
< /path/to/file
The ``<`` command reads commands from the files provided to it as arguments. Run ``help <`` at the sbt prompt for details.
The `<` command reads commands from the files provided to it as arguments. Run `help <` at the sbt prompt for details.
.. howto::
:id: alias
@ -50,7 +50,7 @@ The ``<`` command reads commands from the files provided to it as arguments. Ru
alias h=help
The ``alias`` command defines, removes, and displays aliases for commands. Run ``help alias`` at the sbt prompt for details.
The `alias` command defines, removes, and displays aliases for commands. Run `help alias` at the sbt prompt for details.
Example usage:
@ -74,7 +74,7 @@ Example usage:
eval 2+2
The ``eval`` command compiles and runs the Scala expression passed to it as an argument.
The `eval` command compiles and runs the Scala expression passed to it as an argument.
The result is printed along with its type.
For example,
@ -84,5 +84,5 @@ For example,
> eval 2+2
4: Int
Variables defined by an ``eval`` are not visible to subsequent ``eval``s, although changes to system properties persist and affect the JVM that is running sbt.
Use the Scala REPL (``console`` and related commands) for full support for evaluating Scala code interactively.
Variables defined by an `eval` are not visible to subsequent `eval`s, although changes to system properties persist and affect the JVM that is running sbt.
Use the Scala REPL (`console` and related commands) for full support for evaluating Scala code interactively.

View File

@ -2,7 +2,7 @@
Configure and use Scala
=========================
By default, sbt's interactive mode is started when no commands are provided on the command line or when the ``shell`` command is invoked.
By default, sbt's interactive mode is started when no commands are provided on the command line or when the `shell` command is invoked.
.. howto::
:id: version
@ -11,7 +11,7 @@ By default, sbt's interactive mode is started when no commands are provided on t
version := "1.0"
The ``scalaVersion`` configures the version of Scala used for compilation. By default, sbt also adds a dependency on the Scala library with this version. See the next section for how to disable this automatic dependency. If the Scala version is not specified, the version sbt was built against is used. It is recommended to explicitly specify the version of Scala.
The `scalaVersion` configures the version of Scala used for compilation. By default, sbt also adds a dependency on the Scala library with this version. See the next section for how to disable this automatic dependency. If the Scala version is not specified, the version sbt was built against is used. It is recommended to explicitly specify the version of Scala.
For example, to set the Scala version to "2.9.2",
@ -26,7 +26,7 @@ For example, to set the Scala version to "2.9.2",
autoScalaLibrary := false
sbt adds a dependency on the Scala standard library by default. To disable this behavior, set the ``autoScalaLibrary`` setting to false.
sbt adds a dependency on the Scala standard library by default. To disable this behavior, set the `autoScalaLibrary` setting to false.
::
@ -39,7 +39,7 @@ sbt adds a dependency on the Scala standard library by default. To disable this
++ 2.8.2
To set the Scala version in all scopes to a specific value, use the ``++`` command. For example, to temporarily use Scala 2.8.2, run:
To set the Scala version in all scopes to a specific value, use the `++` command. For example, to temporarily use Scala 2.8.2, run:
.. code-block:: console
@ -52,7 +52,7 @@ To set the Scala version in all scopes to a specific value, use the ``++`` comma
scalaHome := Some(file("/path/to/scala/home/"))
Defining the ``scalaHome`` setting with the path to the Scala home directory will use that Scala installation. sbt still requires ``scalaVersion`` to be set when a local Scala version is used. For example,
Defining the `scalaHome` setting with the path to the Scala home directory will use that Scala installation. sbt still requires `scalaVersion` to be set when a local Scala version is used. For example,
::
@ -73,7 +73,7 @@ See :doc:`cross building </Detailed-Topics/Cross-Build>`.
consoleQuick
The ``consoleQuick`` action retrieves dependencies and puts them on the classpath of the Scala REPL. The project's sources are not compiled, but sources of any source dependencies are compiled. To enter the REPL with test dependencies on the classpath but without compiling test sources, run ``test:consoleQuick``. This will force compilation of main sources.
The `consoleQuick` action retrieves dependencies and puts them on the classpath of the Scala REPL. The project's sources are not compiled, but sources of any source dependencies are compiled. To enter the REPL with test dependencies on the classpath but without compiling test sources, run `test:consoleQuick`. This will force compilation of main sources.
.. howto::
:id: console
@ -82,7 +82,7 @@ The ``consoleQuick`` action retrieves dependencies and puts them on the classpat
console
The ``console`` action retrieves dependencies and compiles sources and puts them on the classpath of the Scala REPL. To enter the REPL with test dependencies and compiled test sources on the classpath, run ``test:console``.
The `console` action retrieves dependencies and compiles sources and puts them on the classpath of the Scala REPL. To enter the REPL with test dependencies and compiled test sources on the classpath, run `test:console`.
.. howto::
:id: consoleProject
@ -104,7 +104,7 @@ For details, see the :doc:`consoleProject </Detailed-Topics/Console-Project>` pa
initialCommands in console := """println("Hi!")"""
Set ``initialCommands in console`` to set the initial statements to evaluate when ``console`` and ``consoleQuick`` are run. To configure ``consoleQuick`` separately, use ``initialCommands in consoleQuick``.
Set `initialCommands in console` to set the initial statements to evaluate when `console` and `consoleQuick` are run. To configure `consoleQuick` separately, use `initialCommands in consoleQuick`.
For example,
::
@ -113,7 +113,7 @@ For example,
initialCommands in consoleQuick := """println("Hello from consoleQuick")"""
The ``consoleProject`` command is configured separately by ``initialCommands in consoleProject``. It does not use the value from ``initialCommands in console`` by default. For example,
The `consoleProject` command is configured separately by `initialCommands in consoleProject`. It does not use the value from `initialCommands in console` by default. For example,
::
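
    // Illustrative; any valid Scala statements may be used here.
    initialCommands in consoleProject := """println("Hello from consoleProject")"""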
@ -124,7 +124,7 @@ The ``consoleProject`` command is configured separately by ``initialCommands in
:id: embed
:title: Use the Scala REPL from project code
sbt runs tests in the same JVM as sbt itself and Scala classes are not in the same class loader as the application classes. This is also the case in ``console`` and when ``run`` is not forked. Therefore, when using the Scala interpreter, it is important to set it up properly to avoid an error message like:
sbt runs tests in the same JVM as sbt itself and Scala classes are not in the same class loader as the application classes. This is also the case in `console` and when `run` is not forked. Therefore, when using the Scala interpreter, it is important to set it up properly to avoid an error message like:
.. code-block:: text

View File

@ -9,7 +9,7 @@
~ test
You can make a command run when certain files change by prefixing the command with ``~``. Monitoring is terminated when ``enter`` is pressed. This triggered execution is configured by the ``watch`` setting, but typically the basic settings ``watchSources`` and ``pollInterval`` are modified as described in later sections.
You can make a command run when certain files change by prefixing the command with `~`. Monitoring is terminated when `enter` is pressed. This triggered execution is configured by the `watch` setting, but typically the basic settings `watchSources` and `pollInterval` are modified as described in later sections.
The original use-case for triggered execution was continuous compilation:
@ -19,7 +19,7 @@ The original use-case for triggered execution was continuous compilation:
> ~ compile
You can use the triggered execution feature to run any command or task, however. The following will poll for changes to your source code (main or test) and run ``testOnly`` for the specified test.
You can use the triggered execution feature to run any command or task, however. The following will poll for changes to your source code (main or test) and run `testOnly` for the specified test.
::
@ -32,13 +32,13 @@ You can use the triggered execution feature to run any command or task, however.
~ ;a ;b
The command passed to ``~`` may be any command string, so multiple commands may be run by separating them with a semicolon. For example,
The command passed to `~` may be any command string, so multiple commands may be run by separating them with a semicolon. For example,
::
> ~ ;a ;b
This runs ``a`` and then ``b`` when sources change.
This runs `a` and then `b` when sources change.
.. howto::
:id: sources
@ -47,10 +47,10 @@ This runs ``a`` and then ``b`` when sources change.
watchSources += baseDirectory.value / "examples.txt"
* ``watchSources`` defines the files for a single project that are monitored for changes. By default, a project watches resources and Scala and Java sources.
* ``watchTransitiveSources`` then combines the ``watchSources`` for the current project and all execution and classpath dependencies (see :doc:`/Getting-Started/Full-Def` for details on inter-project dependencies).
* `watchSources` defines the files for a single project that are monitored for changes. By default, a project watches resources and Scala and Java sources.
* `watchTransitiveSources` then combines the `watchSources` for the current project and all execution and classpath dependencies (see :doc:`/Getting-Started/Full-Def` for details on inter-project dependencies).
To add the file ``demo/example.txt`` to the files to watch,
To add the file `demo/example.txt` to the files to watch,
::
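
    watchSources += baseDirectory.value / "demo" / "example.txt"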
@ -63,7 +63,7 @@ To add the file ``demo/example.txt`` to the files to watch,
pollInterval := 1000 // in ms
``pollInterval`` selects the interval between polling for changes in milliseconds. The default value is ``500 ms``. To change it to ``1 s``,
`pollInterval` selects the interval between polling for changes in milliseconds. The default value is `500 ms`. To change it to `1 s`,
::
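
    pollInterval := 1000 // in ms; that is, 1 s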

View File

@ -29,7 +29,7 @@ Dependency Management
`Configuration <../api/sbt/Configuration.html>`_
is a useful Ivy construct for grouping dependencies. See
:ref:`ivy-configurations`. It is also used for :doc:`scoping settings </Getting-Started/Scopes>`.
- ``Compile``, ``Test``, ``Runtime``, ``Provided``, and ``Optional`` are predefined :ref:`configurations <ivy-configurations>`.
- `Compile`, `Test`, `Runtime`, `Provided`, and `Optional` are predefined :ref:`configurations <ivy-configurations>`.
Settings and Tasks
~~~~~~~~~~~~~~~~~~
@ -98,14 +98,14 @@ Settings and Tasks
See the :doc:`Getting Started Guide </Getting-Started/Basic-Def>` for
details.
- ``:=``, ``+=``, ``++=``, ``~=`` These
- `:=`, `+=`, `++=`, `~=` These
construct a `Setting <../api/sbt/Init$Setting.html>`_,
which is the fundamental type in the :doc:`settings </Getting-Started/Basic-Def>` system.
- ``value`` This uses the value of another setting or task in the definition of a new setting or task.
- `value` This uses the value of another setting or task in the definition of a new setting or task.
This method is special (it is a macro) and cannot be used except in the argument of one of the setting
definition methods above (``:=``, ...) or in the standalone construction methods ``Def.setting`` and ``Def.task``.
definition methods above (`:=`, ...) or in the standalone construction methods `Def.setting` and `Def.task`.
See :doc:`more about settings </Getting-Started/More-About-Settings>` for details.
- ``in`` specifies the `Scope <../api/sbt/Scope.html>`_ or part of the
- `in` specifies the `Scope <../api/sbt/Scope.html>`_ or part of the
`Scope <../api/sbt/Scope.html>`_ of a setting being referenced. See :doc:`scopes </Getting-Started/Scopes>`.
File and IO
@ -115,13 +115,13 @@ See `RichFile <../api/sbt/RichFile.html>`_,
`PathFinder <../api/sbt/PathFinder.html>`_,
and :doc:`/Detailed-Topics/Paths` for the full documentation.
- ``/`` When called on a single File, this is ``new File(x,y)``. For
``Seq[File]``, this is applied for each member of the sequence..
- ``*`` and ``**`` are methods for selecting children (``*``) or
descendants (``**``) of a ``File`` or ``Seq[File]`` that match a
- `/` When called on a single File, this is `new File(x,y)`. For
`Seq[File]`, this is applied for each member of the sequence.
- `*` and `**` are methods for selecting children (`*`) or
descendants (`**`) of a `File` or `Seq[File]` that match a
filter.
- ``|``, ``||``, ``&&``, ``&``, ``-``, and ``--`` are methods for
combining filters, which are often used for selecting ``File``\ s.
- `|`, `||`, `&&`, `&`, `-`, and `--` are methods for
combining filters, which are often used for selecting `File`\ s.
See
`NameFilter <../api/sbt/NameFilter.html>`_
and
@ -130,29 +130,29 @@ and :doc:`/Detailed-Topics/Paths` for the full documentation.
as collections (like `Seq`) and
`Parser <../api/sbt/complete/Parser.html>`_
(see :doc:`/Detailed-Topics/Parsing-Input`).
- ``x`` Used to construct mappings from a ``File`` to another ``File``
or to a ``String``. See :doc:`/Detailed-Topics/Mapping-Files`.
- ``get`` forces a `PathFinder <../api/sbt/PathFinder.html>`_
(a call-by-name data structure) to a strict ``Seq[File]``
- `x` Used to construct mappings from a `File` to another `File`
or to a `String`. See :doc:`/Detailed-Topics/Mapping-Files`.
- `get` forces a `PathFinder <../api/sbt/PathFinder.html>`_
(a call-by-name data structure) to a strict `Seq[File]`
representation. This is a common name in Scala, used by types like
``Option``.
`Option`.
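A brief sketch combining several of these methods (the directory and pattern choices are illustrative)::

    // select all Scala and Java sources under src
    val finder: PathFinder = file("src") ** ("*.scala" || "*.java")
    // force the lazy PathFinder into a strict Seq[File]
    val sources: Seq[File] = finder.get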
Dependency Management
~~~~~~~~~~~~~~~~~~~~~
See :doc:`/Detailed-Topics/Library-Management` for full documentation.
- ``%`` This is used to build up a
- `%` This is used to build up a
`ModuleID <../api/sbt/ModuleID.html>`_.
- ``%%`` This is similar to ``%`` except that it identifies a
- `%%` This is similar to `%` except that it identifies a
dependency that has been :doc:`cross built </Detailed-Topics/Cross-Build>`.
- ``from`` Used to specify the fallback URL for a dependency
- ``classifier`` Used to specify the classifier for a dependency.
- ``at`` Used to define a Maven-style resolver.
- ``intransitive`` Marks a `dependency <../api/sbt/ModuleID.html>`_
- `from` Used to specify the fallback URL for a dependency
- `classifier` Used to specify the classifier for a dependency.
- `at` Used to define a Maven-style resolver.
- `intransitive` Marks a `dependency <../api/sbt/ModuleID.html>`_
or `Configuration <../api/sbt/Configuration.html>`_
as being intransitive.
- ``hide`` Marks a
- `hide` Marks a
`Configuration <../api/sbt/Configuration.html>`_
as internal and not to be included in the published metadata.
@ -165,22 +165,22 @@ They closely follow the names of the standard library's parser
combinators. See :doc:`/Detailed-Topics/Parsing-Input` for the full documentation. These are
used for :doc:`/Extending/Input-Tasks` and :doc:`/Extending/Commands`.
- ``~``, ``~>``, ``<~`` Sequencing methods.
- ``??``, ``?`` Methods for making a Parser optional. ``?`` is postfix.
- ``id`` Used for turning a Char or String literal into a Parser. It is
- `~`, `~>`, `<~` Sequencing methods.
- `??`, `?` Methods for making a Parser optional. `?` is postfix.
- `id` Used for turning a Char or String literal into a Parser. It is
generally used to trigger an implicit conversion to a Parser.
- ``|``, ``||`` Choice methods. These are common method names in Scala.
- ``^^^`` Produces a constant value when a Parser matches.
- ``+``, ``*`` Postfix repetition methods. These are common method
- `|`, `||` Choice methods. These are common method names in Scala.
- `^^^` Produces a constant value when a Parser matches.
- `+`, `*` Postfix repetition methods. These are common method
names in Scala.
- ``map``, ``flatMap`` Transforms the result of a Parser. These are
- `map`, `flatMap` Transform the result of a Parser. These are
common method names in Scala.
- ``filter`` Restricts the inputs that a Parser matches on. This is a
- `filter` Restricts the inputs that a Parser matches on. This is a
common method name in Scala.
- ``-`` Prefix negation. Only matches the input when the original
- `-` Prefix negation. Only matches the input when the original
parser doesn't match the input.
- ``examples``, ``token`` Tab completion
- ``!!!`` Provides an error message to use when the original parser
- `examples`, `token` Tab completion
- `!!!` Provides an error message to use when the original parser
doesn't match the input.
Processes
@ -192,15 +192,15 @@ version 2.9.
`ProcessBuilder <../api/sbt/ProcessBuilder.html>`_
is the builder type and `Process <../api/sbt/Process.html>`_
is the type representing the actual forked process. The methods to
combine processes start with ``#`` so that they share the same
combine processes start with `#` so that they share the same
precedence.
- ``run``, ``!``, ``!!``, ``!<``, ``lines``, ``lines_!`` are different
ways to start a process once it has been defined. The ``lines``
variants produce a ``Stream[String]`` to obtain the output lines.
- ``#<``, ``#<<``, ``#>`` are used to get input for a process from a
- `run`, `!`, `!!`, `!<`, `lines`, `lines_!` are different
ways to start a process once it has been defined. The `lines`
variants produce a `Stream[String]` to obtain the output lines.
- `#<`, `#<<`, `#>` are used to get input for a process from a
source or send the output of a process to a sink.
- ``#|`` is used to pipe output from one process into the input of
- `#|` is used to pipe output from one process into the input of
another.
- ``#||``, ``#&&``, ``###`` sequence processes in different ways.
- `#||`, `#&&`, `###` sequence processes in different ways.
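As a sketch (assuming the process implicits are in scope, e.g. via `import sbt._`)::

    // pipe the output of one process into another and run it, returning the exit code
    val code: Int = ("ls" #| "grep .sbt").!
    // run a single process and lazily read its output lines
    val output: Stream[String] = Process("ls").lines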

View File

@ -0,0 +1,36 @@
from docutils import nodes
from sphinx.util.nodes import set_source_info
class Struct:
"""Stores data attributes for dotted-attribute access."""
def __init__(self, **keywordargs):
self.__dict__.update(keywordargs)
def process_node(node):
if isinstance(node, nodes.Text):
node = nodes.inline('', node.astext())
else:
node = nodes.inline('', '', node)
node['classes'].append('pre')
return node
# This role formats a string to be in a fixed-width font.
# Only substitutions in the string are processed.
def code_literal(name, rawtext, text, lineno, inliner, options={}, content=[]):
memo = Struct(document=inliner.document,
reporter=inliner.reporter,
language=inliner.language,
inliner=inliner)
nested_parse, problems = inliner.parse(text, lineno, memo, inliner.parent)
parsed = [process_node(node) for node in nested_parse]  # avoid shadowing the `nodes` module
return parsed, problems
# register the role
def setup(app):
app.add_role('codeliteral', code_literal)

View File

@ -170,7 +170,7 @@ strong {color: #1d3c52; }
box-shadow: inset 0 1px 0 rgba(255, 255, 255, 0.25);
}
.pre { padding: 1px 2px; background-color: #f3f7e9; font-family: Menlo, Monaco, "Courier New", monospace; font-size: 12px; }
.pre { background-color: #f3f7e9; font-family: Menlo, Monaco, "Courier New", monospace; font-size: 12px; white-space: pre; }
.footer h5 { text-transform: none; }

View File

@ -3,7 +3,7 @@
import sys, os
sys.path.append(os.path.abspath('_sphinx/exts'))
extensions = ['sphinxcontrib.issuetracker', 'sphinx.ext.extlinks', 'howto']
extensions = ['sphinxcontrib.issuetracker', 'sphinx.ext.extlinks', 'howto', 'codeliteral']
# Project variables
@ -18,7 +18,7 @@ scalaRelease = "2.10.2"
needs_sphinx = '1.1'
nitpicky = True
default_role = 'literal'
default_role = 'codeliteral'
master_doc = 'home'
highlight_language = 'scala'
add_function_parentheses = False

View File

@ -30,7 +30,8 @@ How can I help?
- Fix mistakes that you notice on the wiki.
- Make `bug reports <https://github.com/sbt/sbt/issues>`_ that are
clear and reproducible.
- Answer questions on the `mailing list`_.
- Answer questions on `Stack Overflow`_.
- Discuss development on the `mailing list`_.
- Fix issues that affect you. `Fork, fix, and submit a pull
request <http://help.github.com/fork-a-repo/>`_.
- Implement features that are important to you. There is an
@ -47,16 +48,16 @@ sbt |version| by default suppresses most stack traces and debugging
information. It has the nice side effect of giving you less noise on
screen, but as a newcomer it can leave you lost for explanation. To see
the previous output of a command at a higher verbosity, type
``last <task>`` where ``<task>`` is the task that failed or that you
`last <task>` where `<task>` is the task that failed or that you
want to view detailed output for. For example, if you find that your
``update`` fails to load all the dependencies as you expect you can
`update` fails to load all the dependencies as you expect you can
enter:
.. code-block:: console
> last update
and it will display the full output from the last run of the ``update``
and it will display the full output from the last run of the `update`
command.
How do I disable ansi codes in the output?
@ -70,8 +71,8 @@ get output that looks like:
[0m[ [0minfo [0m] [0mSet current project to root
or ansi codes are supported but you want to disable colored output. To
completely disable ansi codes, set the ``sbt.log.format`` system
property to ``false``. For example,
completely disable ansi codes, set the `sbt.log.format` system
property to `false`. For example,
.. code-block :: console
@ -80,26 +81,26 @@ property to ``false``. For example,
How can I start a Scala interpreter (REPL) with sbt project configuration (dependencies, etc.)?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You may run ``sbt console``.
You may run `sbt console`.
Build definitions
-----------------
What are the ``:=``, ``+=``, ``++=``, and ``~=`` methods?
What are the `:=`, `+=`, `++=`, and `~=` methods?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These are methods on keys used to construct a ``Setting`` or a ``Task``. The Getting
These are methods on keys used to construct a `Setting` or a `Task`. The Getting
Started Guide covers all these methods, see :doc:`.sbt build definition </Getting-Started/Basic-Def>`
and :doc:`more about settings </Getting-Started/More-About-Settings>` for example.
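As a quick hedged sketch, these methods might appear in a `build.sbt` like the following (the keys and values are illustrative, not from this document):

```scala
// := replaces any previous value of the setting
name := "my-project"

// += appends a single element to a sequence-valued setting
scalacOptions += "-deprecation"

// ++= appends several elements at once
scalacOptions ++= Seq("-unchecked", "-feature")

// ~= transforms the current value with a function
version ~= { v => v + "-SNAPSHOT" }
```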
What is the ``%`` method?
What is the `%` method?
~~~~~~~~~~~~~~~~~~~~~~~~~
It's used to create a ``ModuleID`` from strings, when specifying managed
It's used to create a `ModuleID` from strings, when specifying managed
dependencies. Read the Getting Started Guide about
:doc:`library dependencies </Getting-Started/Library-Dependencies>`.
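For instance (the group, artifact name, and revision below are illustrative):

```scala
// % joins the three strings into a ModuleID; a trailing % "test"
// scopes the dependency to the test configuration
libraryDependencies += "org.scalatest" % "scalatest_2.10" % "1.9.1" % "test"

// %% appends the project's Scala binary version to the artifact name for you
libraryDependencies += "org.scalatest" %% "scalatest" % "1.9.1" % "test"
```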
What is ``ModuleID``, ``Project``, ...?
What is `ModuleID`, `Project`, ...?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To figure out an unknown type or method, have a look at the
@ -112,8 +113,8 @@ How do I add files to a jar package?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The files included in an artifact are configured by default by a task
``mappings`` that is scoped by the relevant package task. The
``mappings`` task returns a sequence ``Seq[(File,String)]`` of mappings
`mappings` that is scoped by the relevant package task. The
`mappings` task returns a sequence `Seq[(File,String)]` of mappings
from the file to include to the path within the jar. See
:doc:`/Detailed-Topics/Mapping-Files` for details on creating these mappings.
@ -128,10 +129,10 @@ For example, to add generated sources to the packaged source artifact:
srcs x (relativeTo(base) | flat)
}
This takes sources from the ``managedSources`` task and relativizes them
against the ``managedSource`` base directory, falling back to a
This takes sources from the `managedSources` task and relativizes them
against the `managedSource` base directory, falling back to a
flattened mapping. If a source generation task doesn't write the sources
to the ``managedSource`` directory, the mapping function would have to
to the `managedSource` directory, the mapping function would have to
be adjusted to try relativizing against additional directories or
something more appropriate for the generator.
@ -169,19 +170,19 @@ is:
There are two additional arguments for the first parameter list that
allow the file tracking style to be explicitly specified. By default,
the input tracking style is ``FilesInfo.lastModified``, based on a
the input tracking style is `FilesInfo.lastModified`, based on a
file's last modified time, and the output tracking style is
``FilesInfo.exists``, based only on whether the file exists. The other
available style is ``FilesInfo.hash``, which tracks a file based on a
`FilesInfo.exists`, based only on whether the file exists. The other
available style is `FilesInfo.hash`, which tracks a file based on a
hash of its contents. See the `FilesInfo
API <../api/sbt/FilesInfo$.html>`_ for
details.
A more advanced version of ``FileFunction.cached`` passes a data
A more advanced version of `FileFunction.cached` passes a data
structure of type
`ChangeReport <../api/sbt/ChangeReport.html>`_
describing the changes to input and output files since the last
evaluation. This version of ``cached`` also expects the set of files
evaluation. This version of `cached` also expects the set of files
generated as output to be the result of the evaluated function.
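As a hedged sketch of the basic form (the `myGenerator` key, the `*.template` pattern, and the copy logic are illustrative assumptions, not part of sbt's API):

```scala
import sbt._
import Keys._

val myGenerator = TaskKey[Seq[File]]("my-generator")

// Copies each input template into the target directory. The cached wrapper
// reruns the body only when an input's last-modified time changes
// (FilesInfo.lastModified) or an expected output is missing (FilesInfo.exists).
myGenerator <<= (cacheDirectory, sourceDirectory, target) map { (cache, srcDir, out) =>
  val cached = FileFunction.cached(cache / "my-generator",
      inStyle = FilesInfo.lastModified, outStyle = FilesInfo.exists) { (in: Set[File]) =>
    in map { src =>
      val dest = out / src.getName
      IO.copyFile(src, dest)
      dest
    }
  }
  cached((srcDir ** "*.template").get.toSet).toSeq
}
```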
Extending sbt
@ -191,19 +192,19 @@ How can I add a new configuration?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following example demonstrates adding a new set of compilation
settings and tasks to a new configuration called ``samples``. The
sources for this configuration go in ``src/samples/scala/``. Unspecified
settings delegate to those defined for the ``compile`` configuration.
For example, if ``scalacOptions`` are not overridden for ``samples``,
settings and tasks to a new configuration called `samples`. The
sources for this configuration go in `src/samples/scala/`. Unspecified
settings delegate to those defined for the `compile` configuration.
For example, if `scalacOptions` are not overridden for `samples`,
the options for the main sources are used.
Options specific to ``samples`` may be declared like:
Options specific to `samples` may be declared like:
::
scalacOptions in Samples += "-deprecation"
This uses the main options as base options because of ``+=``. Use ``:=``
This uses the main options as base options because of `+=`. Use `:=`
to ignore the main options:
::
@ -211,7 +212,7 @@ to ignore the main options:
scalacOptions in Samples := "-deprecation" :: Nil
The example adds all of the usual compilation related settings and tasks
to ``samples``:
to `samples`:
::
@ -230,9 +231,9 @@ to ``samples``:
How do I add a test configuration?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
See the ``Additional test configurations`` section of :doc:`/Detailed-Topics/Testing`.
See the `Additional test configurations` section of :doc:`/Detailed-Topics/Testing`.
How can I create a custom run task, in addition to ``run``?
How can I create a custom run task, in addition to `run`?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This answer is extracted from a `mailing list
@ -251,9 +252,9 @@ A basic run task is created by:
fullRunTask(myRunTask, Test, "foo.Foo", "arg1", "arg2")
If you want to be able to supply arguments on the command line, replace
``TaskKey`` with ``InputKey`` and ``fullRunTask`` with
``fullRunInputTask``. The ``Test`` part can be replaced with another
configuration, such as ``Compile``, to use that configuration's
`TaskKey` with `InputKey` and `fullRunTask` with
`fullRunInputTask`. The `Test` part can be replaced with another
configuration, such as `Compile`, to use that configuration's
classpath.
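A sketch of the input-task variant (the key name and main class here are illustrative):

```scala
val myRunTask = InputKey[Unit]("my-run-task")

// Like fullRunTask, but extra arguments may be supplied at the shell,
// e.g.  > my-run-task extraArg
fullRunInputTask(myRunTask, Test, "foo.Foo", "arg1", "arg2")
```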
This run task can be configured individually by specifying the task key
@ -275,10 +276,10 @@ configuration and classpaths. These are the steps:
1. Define a new :ref:`configuration <ivy-configurations>`.
2. Declare the tool :doc:`dependencies </Detailed-Topics/Library-Management>` in that
configuration.
3. Define a classpath that pulls the dependencies from the :doc:`/Detailed-Topics/Update-Report` produced by ``update``.
3. Define a classpath that pulls the dependencies from the :doc:`/Detailed-Topics/Update-Report` produced by `update`.
4. Use the classpath to implement the task.
As an example, consider a ``proguard`` task. This task needs the
As an example, consider a `proguard` task. This task needs the
ProGuard jars in order to run the tool. First, define and add the new configuration:
::
@ -314,7 +315,7 @@ Then,
Defining the intermediate classpath is optional, but it can be useful for debugging or if it needs to
be used by multiple tasks.
It is also possible to specify artifact types inline.
This alternative ``proguard`` task would look like:
This alternative `proguard` task would look like:
::
@ -335,16 +336,16 @@ classpath (since version 0.10.1). Through
is possible to obtain a
`xsbti.ComponentProvider <../api/xsbti/ComponentProvider.html>`_,
which manages application components. Components are groups of files in
the ``~/.sbt/boot/`` directory and, in this case, the application is
the `~/.sbt/boot/` directory and, in this case, the application is
sbt. In addition to the base classpath, components in the "extra"
component are included on sbt's classpath.
(Note: the additional components on an application's classpath are
declared by the ``components`` property in the ``[main]`` section of the
launcher configuration file ``boot.properties``.)
declared by the `components` property in the `[main]` section of the
launcher configuration file `boot.properties`.)
Because these components are added to the ``~/.sbt/boot/`` directory and
``~/.sbt/boot/`` may be read-only, this can fail. In this case, the user
Because these components are added to the `~/.sbt/boot/` directory and
`~/.sbt/boot/` may be read-only, this can fail. In this case, the user
has generally intentionally set sbt up this way, so error recovery is
not typically necessary (just a short error message explaining the
situation).
@ -352,8 +353,8 @@ situation.)
Example of dynamic classpath augmentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following code can be used where a ``State => State`` is required,
such as in the ``onLoad`` setting (described below) or in a
The following code can be used where a `State => State` is required,
such as in the `onLoad` setting (described below) or in a
:doc:`command </Extending/Commands>`. It adds some files to the "extra" component and
reloads sbt if they were not already added. Note that reloading will
drop the user's session state.
@ -378,14 +379,14 @@ drop the user's session state.
How can I take action when the project is loaded or unloaded?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The single, global setting ``onLoad`` is of type ``State => State`` (see
The single, global setting `onLoad` is of type `State => State` (see
:doc:`/Extending/Build-State`) and is executed once, after all projects are built and
loaded. There is a similar hook ``onUnload`` for when a project is
unloaded. Project unloading typically occurs as a result of a ``reload``
command or a ``set`` command. Because the ``onLoad`` and ``onUnload``
loaded. There is a similar hook `onUnload` for when a project is
unloaded. Project unloading typically occurs as a result of a `reload`
command or a `set` command. Because the `onLoad` and `onUnload`
hooks are global, modifying this setting typically involves composing a
new function with the previous value. The following example shows the
basic structure of defining ``onLoad``:
basic structure of defining `onLoad`:
::
@ -425,7 +426,7 @@ Setting initializers are executed in order. If the initialization of a
setting depends on other settings that has not been initialized, sbt
will stop loading.
In this example, we try to append a library to ``libraryDependencies``
In this example, we try to append a library to `libraryDependencies`
before it is initialized with an empty sequence.
::
@ -439,7 +440,7 @@ before it is initialized with an empty sequence.
}
To correct this, include the default settings, which includes
``libraryDependencies := Seq()``.
`libraryDependencies := Seq()`.
::
@ -458,7 +459,7 @@ A more subtle variation of this error occurs when using :doc:`scoped settings </
)
Generally, all of the setting definition methods can be expressed in terms of
``:=``. To better understand the error, we can rewrite the setting as:
`:=`. To better understand the error, we can rewrite the setting as:
::
@ -514,19 +515,19 @@ version of the plugin.
**... unless you specify the plugin in the wrong place!**
A typical mistake is to put global plugin definitions in
``~/.sbt/plugins.sbt``. **THIS IS WRONG.** ``.sbt`` files in ``~/.sbt``
`~/.sbt/plugins.sbt`. **THIS IS WRONG.** `.sbt` files in `~/.sbt`
are loaded for *each* build--that is, for *each* cross-compilation. So,
if you build for Scala 2.9.0, sbt will try to find a version of the
plugin that's compiled for 2.9.0--and it usually won't. That's because
it doesn't *know* the dependency is a plugin.
To tell sbt that the dependency is an sbt plugin, make sure you define
your global plugins in a ``.sbt`` file in ``~/.sbt/plugins/``. sbt knows
that files in ``~/.sbt/plugins`` are only to be used by sbt itself, not
your global plugins in a `.sbt` file in `~/.sbt/plugins/`. sbt knows
that files in `~/.sbt/plugins` are only to be used by sbt itself, not
as part of the general build definition. If you define your plugins in a
file under *that* directory, they won't foul up your cross-compilations.
Any file name ending in ``.sbt`` will do, but most people use
``~/.sbt/plugins/build.sbt`` or ``~/.sbt/plugins/plugins.sbt``.
Any file name ending in `.sbt` will do, but most people use
`~/.sbt/plugins/build.sbt` or `~/.sbt/plugins/plugins.sbt`.
Miscellaneous
-------------
@ -582,16 +583,16 @@ How do I migrate from 0.7 to 0.10+?
See the :doc:`migration page </Detailed-Topics/Migrating-from-sbt-0.7.x-to-0.10.x>` first and
then the following questions.
Where has 0.7's ``lib_managed`` gone?
Where has 0.7's `lib_managed` gone?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, sbt |version| loads managed libraries from your ivy cache without
copying them to a ``lib_managed`` directory. This fixes some bugs with
copying them to a `lib_managed` directory. This fixes some bugs with
the previous solution and keeps your project directory small. If you
want to insulate your builds from the ivy cache being cleared, set
``retrieveManaged := true`` and the dependencies will be copied to
``lib_managed`` as a build-local cache (while avoiding the issues of
``lib_managed`` in 0.7.x).
`retrieveManaged := true` and the dependencies will be copied to
`lib_managed` as a build-local cache (while avoiding the issues of
`lib_managed` in 0.7.x).
This does mean that existing solutions for sharing libraries with your
favoured IDE may not work. There are |version| plugins for IDEs being
@ -604,9 +605,9 @@ developed:
What are the commands I can use in |version| vs. 0.7?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For a list of commands, run ``help``. For details on a specific command,
run ``help <command>``. To view a list of tasks defined on the current
project, run ``tasks``. Alternatively, see the :doc:`Running </Getting-Started/Running>`
For a list of commands, run `help`. For details on a specific command,
run `help <command>`. To view a list of tasks defined on the current
project, run `tasks`. Alternatively, see the :doc:`Running </Getting-Started/Running>`
page in the Getting Started Guide for descriptions of common commands and tasks.
If in doubt start by just trying the old command as it may just work.
@ -632,8 +633,8 @@ sbt 0.10 fixes a flaw in how dependencies get resolved in multi-module
projects. This change ensures that only one version of a library appears
on a classpath.
Use ``last update`` to view the debugging output for the last ``update``
run. Use ``show update`` to view a summary of files comprising managed
Use `last update` to view the debugging output for the last `update`
run. Use `show update` to view a summary of files comprising managed
classpaths.
My tests all run really fast but some are broken that weren't in 0.7!
@ -641,7 +642,7 @@ My tests all run really fast but some are broken that weren't in 0.7!
Be aware that compilation and tests run in parallel by default in sbt
|version|. If your test code isn't thread-safe then you may want to change
this behaviour by adding one of the following to your ``build.sbt``:
this behaviour by adding one of the following to your `build.sbt`:
::
@ -655,12 +656,13 @@ this behaviour by adding one of the following to your ``build.sbt``:
How do I set log levels in |version| vs. 0.7?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``warn``, ``info``, ``debug`` and ``error`` don't work any more.
`warn`, `info`, `debug` and `error` don't work any more.
The new syntax in the sbt |version| shell is:
``text > set logLevel := Level.Warn``
The new syntax in the sbt |version| shell is: ::
Or in your ``build.sbt`` file write:
> set logLevel := Level.Warn
Or in your `build.sbt` file write:
::
@ -687,15 +689,15 @@ were combined in a single dependency type in 0.7.x. A declaration like:
lazy val a = project("a", "A")
lazy val b = project("b", "B", a)
meant that the ``B`` project had a classpath and execution dependency on
``A`` and ``A`` had a configuration dependency on ``B``. Specifically,
meant that the `B` project had a classpath and execution dependency on
`A` and `A` had a configuration dependency on `B`. Specifically,
in 0.7.x:
1. Classpath: Classpaths for ``A`` were available on the appropriate
classpath for ``B``.
2. Execution: A task executed on ``B`` would be executed on ``A`` first.
1. Classpath: Classpaths for `A` were available on the appropriate
classpath for `B`.
2. Execution: A task executed on `B` would be executed on `A` first.
3. Configuration: For some settings, if they were not overridden in
``A``, they would default to the value provided in ``B``.
`A`, they would default to the value provided in `B`.
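In newer sbt these concerns are declared separately. A hedged sketch in a full build definition (project ids and paths are illustrative):

```scala
import sbt._
import Keys._

object MyBuild extends Build {
  lazy val a = Project("a", file("a"))

  // Classpath + execution dependency on a (0.7's items 1 and 2)
  lazy val b = Project("b", file("b")).dependsOn(a)

  // Execution-only dependency: running a task on root also runs it on a and b
  lazy val root = Project("root", file(".")).aggregate(a, b)
}
```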
In |version|, declare the specific type of dependency you want. Read about
:doc:`multi-project builds </Getting-Started/Multi-Project>` in the Getting
@ -708,8 +710,8 @@ Where did class/object X go since 0.7?
0.7 |version|
================================================================================================================================================================================================ =====================================================================================================================================================================================
| `FileUtilities <http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/FileUtilities$object.html>`_ `IO <../api/sbt/IO$.html>`_
`Path class <http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/Path.html>`_ and `object <http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/Path$.html>`_ `Path object <../api/sbt/Path$.html>`_, ``File``, `RichFile <../api/sbt/RichFile.html>`_
`PathFinder class <http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/PathFinder.html>`_ ``Seq[File]``, `PathFinder class <../api/sbt/PathFinder.html>`_, `PathFinder object <../api/sbt/PathFinder$.html>`_
`Path class <http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/Path.html>`_ and `object <http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/Path$.html>`_ `Path object <../api/sbt/Path$.html>`_, `File`, `RichFile <../api/sbt/RichFile.html>`_
`PathFinder class <http://simple-build-tool.googlecode.com/svn/artifacts/latest/api/sbt/PathFinder.html>`_ `Seq[File]`, `PathFinder class <../api/sbt/PathFinder.html>`_, `PathFinder object <../api/sbt/PathFinder$.html>`_
================================================================================================================================================================================================ =====================================================================================================================================================================================