Gradle User Manual
Version 7.4
Table of Contents
About Gradle
  What is Gradle?
Getting Started
  Getting Started
  Installing Gradle
  Troubleshooting builds
  Compatibility Matrix
Logging
Reference
Plugins
Gradle is an open-source build automation tool that is designed to be flexible enough to build
almost any type of software. The following is a high-level overview of some of its most important
features:
High performance
Gradle avoids unnecessary work by only running the tasks that need to run because their inputs
or outputs have changed. You can also use a build cache to enable the reuse of task outputs from
previous runs or even from a different machine (with a shared build cache).
There are many other optimizations that Gradle implements, and the development team
continually works to improve Gradle’s performance.
JVM foundation
Gradle runs on the JVM and you must have a Java Development Kit (JDK) installed to use it. This
is a bonus for users familiar with the Java platform, as you can use the standard Java APIs in
your build logic, such as in custom task types and plugins. It also makes it easy to run Gradle on
different platforms.
Note that Gradle isn’t limited to building just JVM projects, and it even comes packaged with
support for building native projects.
Conventions
Gradle takes a leaf out of Maven’s book and makes common types of projects — such as Java
projects — easy to build by implementing conventions. Apply the appropriate plugins and you
can easily end up with slim build scripts for many projects. But these conventions don’t limit
you: Gradle allows you to override them, add your own tasks, and make many other
customizations to your convention-based builds.
Extensibility
You can readily extend Gradle to provide your own task types or even a build model. See the
Android build support for an example of this: it adds many new build concepts such as flavors
and build types.
IDE support
Several major IDEs allow you to import Gradle builds and interact with them: Android Studio,
IntelliJ IDEA, Eclipse, and NetBeans. Gradle also has support for generating the solution files
required to load a project into Visual Studio.
Insight
Build scans provide extensive information about a build run that you can use to identify build
issues. They are particularly good at helping you to identify problems with a build’s
performance. You can also share build scans with others, which is especially useful if you need
to ask for advice in fixing an issue with the build.
Gradle is a flexible and powerful build tool that can easily feel intimidating when you first start.
However, understanding the following core principles will make Gradle much more approachable
and you will become adept with the tool before you know it.
1. Gradle is a general-purpose build tool
Gradle allows you to build any software, because it makes few assumptions about what you’re
trying to build or how it should be done. The most notable restriction is that dependency
management currently only supports Maven- and Ivy-compatible repositories and the filesystem.
This doesn’t mean you have to do a lot of work to create a build. Gradle makes it easy to build
common types of project — say Java libraries — by adding a layer of conventions and prebuilt
functionality through plugins. You can even create and publish custom plugins to encapsulate your
own conventions and build functionality.
2. The core model is based on tasks
Gradle models its builds as Directed Acyclic Graphs (DAGs) of tasks (units of work). What this
means is that a build essentially configures a set of tasks and wires them together — based on their
dependencies — to create that DAG. Once the task graph has been created, Gradle determines
which tasks need to be run in which order and then proceeds to execute them.
This diagram shows two example task graphs, one abstract and the other concrete, with the
dependencies between the tasks represented as arrows:
The tasks themselves consist of:
• Actions — pieces of work that do something, like copy files or compile source
• Inputs — values, files and directories that the actions use or operate on
• Outputs — files and directories that the actions modify or generate
In fact, all of the above are optional depending on what the task needs to do. Some tasks — such as
the standard lifecycle tasks — don’t even have any actions. They simply aggregate multiple tasks
together as a convenience.
NOTE: You choose which task to run. Save time by specifying the task that does what you
need, but no more than that. If you just want to run the unit tests, choose the task
that does that — typically test. If you want to package an application, most builds
have an assemble task for that.
One last thing: Gradle’s incremental build support is robust and reliable, so keep your builds
running fast by avoiding the clean task unless you actually do want to perform a clean.
3. Gradle has several fixed build phases
It’s important to understand that Gradle evaluates and executes build scripts in three phases:
1. Initialization
Sets up the environment for the build and determines which projects will take part in it.
2. Configuration
Constructs and configures the task graph for the build and then determines which tasks need to
run and in which order, based on the task the user wants to run.
3. Execution
Runs the tasks selected at the end of the configuration phase.
Well-designed build scripts consist mostly of declarative configuration rather than imperative logic.
That configuration is understandably evaluated during the configuration phase. Even so, many
such builds also have task actions — for example via doLast {} and doFirst {} blocks — which are
evaluated during the execution phase. This is important because code evaluated during the
configuration phase won’t see changes that happen during the execution phase.
Another important aspect of the configuration phase is that everything involved in it is evaluated
every time the build runs. That is why it’s best practice to avoid expensive work during the
configuration phase. Build scans can help you identify such hotspots, among other things.
4. Gradle is extensible in more ways than one
It would be great if you could build your project using only the build logic bundled with Gradle, but
that’s rarely possible. Most builds have some special requirements that mean you need to add
custom build logic.
Gradle provides several mechanisms that allow you to extend it, such as:
• Custom task types.
When you want the build to do some work that an existing task can’t do, you can simply write
your own task type. It’s typically best to put the source file for a custom task type in the buildSrc
directory or in a packaged plugin. Then you can use the custom task type just like any of the
Gradle-provided ones.
• Custom task actions.
You can attach custom build logic that executes before or after a task via the Task.doFirst() and
Task.doLast() methods, as shown in the sketch after this list.
• Extra properties on projects and tasks.
These allow you to add your own properties to a project or task that you can then use from
your own custom actions or any other build logic. Extra properties can even be applied to tasks
that aren’t explicitly created by you, such as those created by Gradle’s core plugins.
• Custom conventions.
Conventions are a powerful way to simplify builds so that users can understand and use them
more easily. This can be seen with builds that use standard project structures and naming
conventions, such as Java builds. You can write your own plugins that provide conventions —
they just need to configure default values for the relevant aspects of a build.
• A custom model.
Gradle allows you to introduce new concepts into a build beyond tasks, files and dependency
configurations. You can see this with most language plugins, which add the concept of source
sets to a build. Appropriate modeling of a build process can greatly improve a build’s ease of use
and its efficiency.
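As a rough illustration of the first three mechanisms above, here is a minimal Groovy DSL sketch;
the task, class, and property names are invented for the example:

// A custom task type (normally placed in buildSrc or a packaged plugin)
abstract class GreetingTask extends DefaultTask {
    @TaskAction
    void greet() {
        println 'Hello from a custom task type'
    }
}

// An extra property on the project
ext.buildLabel = 'nightly'

tasks.register('greet', GreetingTask) {
    // Custom task actions attached before and after the task’s own action
    doFirst { println "Starting ($buildLabel build)" }
    doLast { println 'Finished greeting' }
}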
5. Build scripts operate against an API
It’s easy to view Gradle’s build scripts as executable code, because that’s what they are. But that’s an
implementation detail: well-designed build scripts describe what steps are needed to build the
software, not how those steps should do the work. That’s a job for custom task types and plugins.
NOTE: There is a common misconception that Gradle’s power and flexibility come from the
fact that its build scripts are code. This couldn’t be further from the truth. It’s the
underlying model and API that provide the power. As we recommend in our best
practices, you should avoid putting much, if any, imperative logic in your build
scripts.
Yet there is one area in which it is useful to view a build script as executable code: in understanding
how the syntax of the build script maps to Gradle’s API. The API documentation — formed of the
Groovy DSL Reference and the Javadocs — lists methods and properties, and refers to closures and
actions. What do these mean within the context of a build script? Check out the Groovy Build Script
Primer to learn the answer to that question so that you can make effective use of the API
documentation.
NOTE: As Gradle runs on the JVM, build scripts can also use the standard Java API. Groovy
build scripts can additionally use the Groovy APIs, while Kotlin build scripts can use
the Kotlin ones.
Getting Started
Everyone has to start somewhere and if you’re new to Gradle, this is where to begin.
In order to use Gradle effectively, you need to know what it is and understand some of its
fundamental concepts. So before you start using Gradle in earnest, we highly recommend you read
What is Gradle?.
Even if you’re experienced with using Gradle, we suggest you read the section 5 things you need to
know about Gradle as it clears up some common misconceptions.
Installation
If all you want to do is run an existing Gradle build, then you don’t need to install Gradle if the
build has a Gradle Wrapper, identifiable via the gradlew and/or gradlew.bat files in the root of the
build. You just need to make sure your system satisfies Gradle’s prerequisites.
Android Studio comes with a working installation of Gradle, so you don’t need to install Gradle
separately in that case.
In order to create a new build or add a Wrapper to an existing build, you will need to install Gradle
according to these instructions. Note that there may be other ways to install Gradle in addition to
those described on that page, since it’s nearly impossible to keep track of all the package managers
out there.
Try Gradle
Actively using Gradle is a great way to learn about it, so once you’ve installed Gradle, try one of the
introductory hands-on tutorials.
Some folks are hard-core command-line users, while others prefer to never leave the comfort of
their IDE. Many people happily use both and Gradle endeavors not to discriminate. Gradle is
supported by several major IDEs and everything that can be done from the command line is
available to IDEs via the Tooling API.
Android Studio and IntelliJ IDEA users should consider using Kotlin DSL build scripts for the
superior IDE support when editing them.
If you follow any of the tutorials linked above, you will execute a Gradle build. But what do you do
if you’re given a Gradle build without any instructions?
1. Determine whether the project has a Gradle wrapper and use it if it’s there — the main IDEs
default to using the wrapper when it’s available.
2. Discover whether it is a single-project or multi-project build.
Either import the build with an IDE or run gradle projects from the command line. If only the
root project is listed, it’s a single-project build. Otherwise it’s a multi-project build.
3. Find out what tasks you can run.
If you have imported the build into an IDE, you should have access to a view that displays all the
available tasks. From the command line, run gradle tasks.
4. Learn more about the tasks via gradle help --task <taskname>.
The help task can display extra information about a task, including which projects contain that
task and what options the task supports.
5. Run the tasks you are interested in.
Many convention-based builds integrate with Gradle’s lifecycle tasks, so use those when you
don’t have something more specific you want to do with the build. For example, most builds
have clean, check, assemble and build tasks.
From the command line, just run gradle <taskname> to execute a particular task. You can learn
more about command-line execution in the corresponding user manual chapter. If you’re using
an IDE, check its documentation to find out how to run a task.
Gradle builds often follow standard conventions on project structure and tasks, so if you’re familiar
with other builds of the same type — such as Java, Android or native builds — then the file and
directory structure of the build should be familiar, as well as many of the tasks and project
properties.
For more specialized builds or those with significant customizations, you should ideally have access
to documentation on how to run the build and what build properties you can configure.
Learning to create and maintain Gradle builds is a process, and one that takes a little time. We
recommend that you start with the appropriate core plugins and their conventions for your project,
and then gradually incorporate customizations as you learn more about the tool.
Here are some useful first steps on your journey to mastering Gradle:
1. Try one or two basic tutorials to see what a Gradle build looks like, particularly the ones that
match the type of project you work with (Java, native, Android, etc.).
2. Make sure you’ve read 5 things you need to know about Gradle!
3. Learn about the fundamental elements of a Gradle build: projects, tasks, and the file API.
4. If you are building software for the JVM, be sure to read about the specifics of those types of
projects in Building Java & JVM projects and Testing in Java & JVM projects.
5. Familiarize yourself with the core plugins that come packaged with Gradle, as they provide a lot
of useful functionality out of the box.
6. Learn how to author maintainable build scripts and best organize your Gradle projects.
The user manual contains a lot of other useful information and you can find samples
demonstrating various Gradle features on the samples pages.
Gradle’s flexibility means that it readily works with other tools, such as those listed on our Gradle &
Third-party Tools page. Such integration typically takes one of two forms:
• A tool drives Gradle — uses it to extract information about a build and run it — via the Tooling
API
• Gradle invokes or generates information for a tool via the 3rd-party tool’s APIs — this is usually
done via plugins and custom task types
Tools that have existing Java-based APIs are generally straightforward to integrate. You can find
many such integrations on Gradle’s plugin portal.
Installing Gradle
You can install the Gradle build tool on Linux, macOS, or Windows. This document covers installing
using a package manager like SDKMAN! or Homebrew, as well as manual installation.
You can find all releases and their checksums on the releases page.
Prerequisites
Gradle runs on all major operating systems and requires only a Java Development Kit version 8 or
higher to run. To check, run java -version. You should see something like this:
❯ java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Gradle ships with its own Groovy library, therefore Groovy does not need to be installed. Any
existing Groovy installation is ignored by Gradle.
Gradle uses whatever JDK it finds in your path. Alternatively, you can set the JAVA_HOME
environment variable to point to the installation directory of the desired JDK.
See the full compatibility notes for Java, Groovy, Kotlin and Android.
SDKMAN! is a tool for managing parallel versions of multiple Software Development Kits on most
Unix-like systems (macOS, Linux, Cygwin, Solaris and FreeBSD). We deploy and maintain the
versions available from SDKMAN!.
Other package managers are available, but the version of Gradle distributed by them is not
controlled by Gradle, Inc. Linux package managers may distribute a modified version of Gradle that
is incompatible or incomplete when compared to the official version (available from SDKMAN! or
below).
The distribution ZIP file comes in two flavors:
• Binary-only (bin)
• Complete (all) with docs and sources
Linux & MacOS users
Unzip the distribution zip file in the directory of your choosing, e.g.:
❯ mkdir /opt/gradle
❯ unzip -d /opt/gradle gradle-7.4-bin.zip
❯ ls /opt/gradle/gradle-7.4
LICENSE NOTICE bin README init.d lib media
Microsoft Windows users
Create a new directory C:\Gradle with File Explorer.
Open a second File Explorer window and go to the directory where the Gradle distribution was
downloaded. Double-click the ZIP archive to expose the content. Drag the content folder gradle-7.4
to your newly created C:\Gradle folder.
Alternatively, you can unpack the Gradle distribution ZIP into C:\Gradle using an archiver tool of
your choice.
To run Gradle, the path to the unpacked files from the Gradle website needs to be on your terminal’s
path. The steps to do this are different for each operating system.
Configure your PATH environment variable to include the bin directory of the unzipped distribution,
e.g.:
❯ export PATH=$PATH:/opt/gradle/gradle-7.4/bin
Alternatively, you could also add the environment variable GRADLE_HOME and point this to the
unzipped distribution. Instead of adding a specific version of Gradle to your PATH, you can add
$GRADLE_HOME/bin to your PATH. When upgrading to a different version of Gradle, just change the
GRADLE_HOME environment variable.
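For example, assuming the distribution was unpacked to /opt/gradle/gradle-7.4 as above:

❯ export GRADLE_HOME=/opt/gradle/gradle-7.4
❯ export PATH=$PATH:$GRADLE_HOME/bin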
Microsoft Windows users
In File Explorer right-click on the This PC (or Computer) icon, then click Properties → Advanced
System Settings → Environmental Variables.
Under System Variables select Path, then click Edit. Add an entry for C:\Gradle\gradle-7.4\bin. Click
OK to save.
Alternatively, you could also add the environment variable GRADLE_HOME and point this to the
unzipped distribution. Instead of adding a specific version of Gradle to your Path, you can add
%GRADLE_HOME%/bin to your Path. When upgrading to a different version of Gradle, just change the
GRADLE_HOME environment variable.
Verifying installation
Open a console (or a Windows command prompt) and run gradle -v to run gradle and display the
version, e.g.:
❯ gradle -v
------------------------------------------------------------
Gradle 7.4
------------------------------------------------------------
If you run into any trouble, see the section on troubleshooting installation.
You can verify the integrity of the Gradle distribution by downloading the SHA-256 file (available
from the releases page) and following these verification instructions.
Next steps
Now that you have Gradle installed, use these resources for getting started:
• Create your first Gradle project by following one of our step-by-step samples.
• Configure Gradle execution, such as use of an HTTP proxy for downloading dependencies.
• Subscribe to the Gradle Newsletter for monthly release and community updates.
Troubleshooting builds
The following is a collection of common issues and suggestions for addressing them. You can get
other tips and search the Gradle forums and StackOverflow #gradle answers, as well as Gradle
documentation from help.gradle.org.
If you followed the installation instructions, and aren’t able to execute your Gradle build, here are
some tips that may help.
If you installed Gradle outside of just invoking the Gradle Wrapper, you can check your Gradle
installation by running gradle --version in a terminal.
❯ gradle --version
------------------------------------------------------------
Gradle 6.5
------------------------------------------------------------
Kotlin: 1.3.72
Groovy: 2.5.11
Ant: Apache Ant(TM) version 1.10.7 compiled on September 1 2019
JVM: 14 (AdoptOpenJDK 14+36)
OS: Mac OS X 10.15.2 x86_64
If you get "command not found: gradle", you need to ensure that Gradle is properly added to your
PATH.
If you get an error like the following, the JAVA_HOME environment variable is not set correctly:
Please set the JAVA_HOME variable in your environment to match the location of your
Java installation.
You’ll need to ensure that a Java Development Kit version 8 or higher is properly installed, the
JAVA_HOME environment variable is set, and Java is added to your PATH.
Permission denied
If you get "permission denied", that means that Gradle likely exists in the correct place, but it is not
executable. You can fix this using chmod +x path/to/executable on *nix-based systems.
If gradle --version works, but all of your builds fail with the same error, it is possible there is a
problem with one of your Gradle build configuration scripts.
You can verify the problem is with Gradle scripts by running gradle help, which executes
configuration scripts but no Gradle tasks. If the error persists, the build configuration is
problematic. If not, then the problem lies in the execution of one or more of the requested tasks
(Gradle executes configuration scripts first, and then executes build steps).
Common dependency resolution issues such as resolving version conflicts are covered in
Troubleshooting Dependency Resolution.
You can see a dependency tree and see which resolved dependency versions differed from what
was requested by clicking the Dependencies view and using the search functionality, specifying the
resolution reason.
The actual build scan with filtering criteria is available for exploration.
For build performance issues (including “slow sync time”), see improving the Performance of
Gradle Builds.
Android developers should watch a presentation by the Android SDK Tools team about Speeding Up
Your Android Gradle Builds. Many tips are also covered in the Android Studio user guide on
optimizing build speed.
You can set breakpoints and debug buildSrc and standalone plugins in your Gradle build itself by
setting the org.gradle.debug property to “true” and then attaching a remote debugger to port 5005.
You can change the port number by setting the org.gradle.debug.port property to the desired port
number.
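For example, the property can be passed as a system property on the command line (it can equally
be set in gradle.properties):

❯ gradle help -Dorg.gradle.debug=true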
In addition, if you’ve adopted the Kotlin DSL, you can also debug build scripts themselves.
The following video demonstrates how to debug an example build using IntelliJ IDEA.
In addition to controlling logging verbosity, you can also control display of task outcomes
(e.g. “UP-TO-DATE”) in lifecycle logging using the --console=verbose flag.
You can also replace much of Gradle’s logging with your own by registering various event listeners.
One example of a custom event logger is explained in the logging documentation. You can also
control logging from external tools, making them more verbose in order to debug their execution.
--info logs explain why a task was executed, though build scans do this in a searchable, visual way
by going to the Timeline view and clicking on the task you want to inspect.
Figure 4. Debugging incremental build with a build scan
You can learn what the task outcomes mean from this listing.
Many infrequent errors within IDEs can be solved by "refreshing" Gradle. See also more
documentation on working with Gradle in IntelliJ IDEA and in Eclipse.
From the main menu, go to View > Tool Windows > Gradle. Then click on the Refresh icon.
Figure 5. Refreshing a Gradle project in IntelliJ IDEA
If you’re using Buildship for the Eclipse IDE, you can re-synchronize your Gradle build by opening
the "Gradle Tasks" view and clicking the "Refresh" icon, or by executing the Gradle > Refresh Gradle
Project command from the context menu while editing a Gradle script.
If your Gradle build fails before running any tasks, you may be encountering problems with your
network configuration. When Gradle is unable to communicate with the Gradle daemon process,
the build will immediately fail with a message similar to this:
$ gradle help
Starting a Gradle Daemon, 1 stopped Daemon could not be reused, use --status for details
We have observed this can occur when network address translation (NAT) masquerade is used.
When NAT masquerade is enabled, connections that should be considered local to the machine are
masked to appear from external IP addresses. Gradle refuses to connect to any external IP address
as a security precaution.
The solution to this problem is to adjust your network configuration such that local connections are
not modified to appear as from external addresses.
You can monitor the detected network setup and the connection requests in the daemon log file
(<GRADLE_USER_HOME>/daemon/<Gradle version>/daemon-<PID>.out.log).
2021-08-12T12:01:50.755+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.InetAddresses] Adding IP addresses for
network interface enp0s3
2021-08-12T12:01:50.759+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.InetAddresses] Is this a loopback interface?
false
2021-08-12T12:01:50.769+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.InetAddresses] Adding remote address
/fe80:0:0:0:85ba:3f3e:1b88:c0e1%enp0s3
2021-08-12T12:01:50.770+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.InetAddresses] Adding remote address
/10.0.2.15
2021-08-12T12:01:50.770+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.InetAddresses] Adding IP addresses for
network interface lo
2021-08-12T12:01:50.771+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.InetAddresses] Is this a loopback interface?
true
2021-08-12T12:01:50.771+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.InetAddresses] Adding loopback address
/0:0:0:0:0:0:0:1%lo
2021-08-12T12:01:50.771+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.InetAddresses] Adding loopback address
/127.0.0.1
2021-08-12T12:01:50.775+0200 [DEBUG]
[org.gradle.internal.remote.internal.inet.TcpIncomingConnector] Listening on
[7fb34c82-1907-4c32-afda-888c9b6e2279 port:42751, addresses:[localhost/127.0.0.1]].
...
2021-08-12T12:01:50.797+0200 [INFO]
[org.gradle.launcher.daemon.server.DaemonRegistryUpdater] Advertising the daemon
address to the clients: [7fb34c82-1907-4c32-afda-888c9b6e2279 port:42751,
addresses:[localhost/127.0.0.1]]
...
2021-08-12T12:01:50.923+0200 [ERROR]
[org.gradle.internal.remote.internal.inet.TcpIncomingConnector] Cannot accept
connection from remote address /10.0.2.15.
If you didn’t find a fix for your issue here, please reach out to the Gradle community on the help
forum or search relevant developer resources using help.gradle.org.
If you believe you’ve found a bug in Gradle, please file an issue on GitHub.
Compatibility Matrix
The sections below describe Gradle’s compatibility with several integrations. Other versions not
listed here may or may not work.
Java
A Java version between 8 and 17 is required to execute Gradle. Java 18 and later versions are not
yet supported.
Java 6 and 7 can still be used for compilation and forked test execution.
For older Gradle versions, please see the table below, which shows which Java version is supported
by which Gradle release.
Java version  First Gradle version to support it
8             2.0
9             4.3
10            4.7
11            5.0
12            5.4
13            6.0
14            6.3
15            6.7
16            7.0
17            7.3
Kotlin
Gradle plugins written in Kotlin target Kotlin 1.4 for compatibility with Gradle and Kotlin DSL build
scripts, even though the embedded Kotlin runtime is Kotlin 1.5.
Groovy
Gradle plugins written in Groovy must use Groovy 3.x for compatibility with Gradle and Groovy
DSL build scripts.
Android
Gradle is tested with Android Gradle Plugin 4.1, 4.2, 7.0 and 7.1. Alpha and beta versions may or
may not work.
Upgrading and Migrating
Upgrading your build from Gradle 6.x to the latest
This chapter provides the information you need to migrate your Gradle 6.x builds to the latest
Gradle release. For migrating from Gradle 4.x or 5.x, see the older migration guide first.
1. Try running gradle help --scan and view the deprecations view of the generated build scan.
This is so that you can see any deprecation warnings that apply to your build.
Alternatively, you could run gradle help --warning-mode=all to see the deprecations in the
console, though it may not report as much detailed information.
2. Update your plugins.
Some plugins will break with this new version of Gradle, for example because they use internal
APIs that have been removed or changed. The previous step will help you identify potential
problems by issuing deprecation warnings when a plugin does try to use a deprecated part of
the API.
3. Run gradle wrapper --gradle-version 7.4 to update the project to 7.4.
4. Try to run the project and debug any errors using the Troubleshooting Guide.
The format of the dependency lockfile has been changed and as a consequence there is only one file
per project instead of one file per configuration per project. This change only affects writing lock
files. Gradle remains capable of loading lock state saved in the older format.
Head over to the documentation to learn how to migrate to the new format. The migration can be
performed per configuration and does not have to be done in a single step. Gradle will
automatically clean up previous lock files when migrating them over to the new file format.
The buildId field will not be populated by default to ensure that the produced metadata file
remains unchanged when no build inputs are changed. Users can still opt in to have this unique
identifier part of the produced metadata if they want to, see the documentation.
JFrog announced the sunset of the JCenter repository in February 2021. Many Gradle builds rely on
JCenter for project dependencies.
No new packages or versions are published to JCenter, but JFrog says they will keep JCenter
running in a read-only state indefinitely. We recommend that you consider using mavenCentral(),
google() or a private maven repository instead.
Gradle emits a deprecation warning when jcenter() is used as a repository and this method is
scheduled to be removed in Gradle 8.0.
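For example, a typical repository block migration looks like this:

repositories {
    // jcenter()  // deprecated; scheduled for removal in Gradle 8.0
    mavenCentral()
    google()
}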
Due to the update to the next major version of Groovy, you may experience minor issues when
upgrading to Gradle 7.0.
The new version of Groovy has a stricter parser that fails to compile code that may have been
accepted in previous Groovy versions. If you encounter syntax errors, check the Groovy issue
tracker and Groovy 3 release highlights.
Some very specific regressions have already been fixed in the next minor version of Groovy.
Groovy modularization
Gradle no longer embeds a copy of groovy-all that bundles all Groovy modules into a single jar —
only the most important modules are distributed in the Gradle distribution:
• groovy
• groovy-ant
• groovy-astbuilder
• groovy-console
• groovy-datetime
• groovy-dateutil
• groovy-groovydoc
• groovy-json
• groovy-nio
• groovy-sql
• groovy-templates
• groovy-test
• groovy-xml
The following Groovy modules are no longer included in the distribution:
• groovy-cli-picocli
• groovy-docgenerator
• groovy-groovysh
• groovy-jmx
• groovy-jsr223
• groovy-macro
• groovy-servlet
• groovy-swing
• groovy-test-junit5
• groovy-testng
You can pull these dependencies into your build like any other external dependency.
Plugins built with Gradle 7.0 will now have Groovy 3 on their classpath when using gradleApi() or
localGroovy().
NOTE: If you use Spock to test your plugins, you will need to use Spock 2.x. There are no
compatible versions of Spock 1.x and Groovy 3.
dependencies {
    // Ensure you use the Groovy 3.x variant
    testImplementation('org.spockframework:spock-core:2.0-groovy-3.0') {
        exclude group: 'org.codehaus.groovy'
    }
}
Performance
Depending on the number of subprojects and Groovy DSL build scripts, you may notice a
performance regression when compiling build scripts for the first time or when changes are made
to the build script’s classpath. This is due to the slower performance of the Groovy 3 parser, but the
Groovy team is aware of the issue and trying to mitigate the regression.
In general, we are also looking at how we can improve the performance of build script compilation
for both Groovy DSL and Kotlin DSL.
While the following error initially looks like a compile error, it is actually due to the fact that
specific configurations have been removed. Please refer to Removal of compile and runtime
configurations for more details.
Since its inception, Gradle provided the compile and runtime configurations to declare
dependencies. These, however, did not support fine-grained scoping of dependencies, so better
replacements were introduced in Gradle 3.4:
• The implementation configuration should be used to declare dependencies which are
implementation details of a library: they are not visible to consumers of the library during
compilation time.
• The api configuration, available only if you apply the java-library plugin, should be used to
declare dependencies which are part of the API of a library, that need to be exposed to
consumers at compilation time.
In Gradle 7, both the compile and runtime configurations are removed. Therefore, you have to
migrate to the implementation and api configurations above. If you are still using the java plugin for
a Java library, you will need to apply the java-library plugin instead.
You can find more details about the benefits of the new configurations and which one to use in
place of compile and runtime by reading the Java Library plugin documentation.
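As an illustration, a dependency block might be migrated like this; the coordinates are
hypothetical, and the api configuration requires the java-library plugin:

dependencies {
    // Before Gradle 7 (no longer works):
    //   compile 'com.example:core:1.0'
    //   runtime 'com.example:driver:1.0'
    api 'com.example:core:1.0'            // exposed to consumers at compile time
    implementation 'com.example:util:1.0' // internal implementation detail
    runtimeOnly 'com.example:driver:1.0'  // only needed at runtime
}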
WARNING: When using the Groovy DSL, you need to watch out for a particular upgrade
problem when dealing with the removed configurations.
If you were creating custom configurations that extend one of the removed
configurations, Gradle may silently create configurations that do not exist.

configurations {
    // This silently creates a configuration called "runtime"
    myConf extendsFrom runtime
}
The result of dependency resolution for your custom configuration may not be
the same as Gradle 6.x or before. You may notice missing dependencies or
artifacts.
The ProjectBuilder API is used for inspecting Gradle builds in unit tests. This API used to create
temporary project files under the system temporary directory as defined by java.io.tmpdir.
The API now creates temporary project files under the Test task’s temporary directory. This path is
usually under the project build directory. This may cause test failures when the test expects
particular file paths.
Tests that use the TestKit API used to create temporary files under the system temporary directory
as defined by java.io.tmpdir. These files were used to store copies of Gradle distributions or
another test-only Gradle User Home.
TestKit tests will now create temporary files under the Test task’s temporary directory. This path is
usually under the project build directory. This may cause test failures when the test expects
particular file paths.
The file system watching implementation on Windows adds a lock to the root project directory in
order to watch for changes. This may cause errors when you try to delete the root project directory
after running a build with TestKit. For example, tests that use TestKit together with JUnit’s @TempDir
extension, or the TemporaryFolder rule can run into this problem. To avoid problems with these file
locks, TestKit disables file system watching for builds executed on Windows via GradleRunner. If
you’d like to override the default behavior, you can enable file system watching by passing
--watch-fs to GradleRunner.withArguments().
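For example, a minimal sketch (projectDir stands in for your test project directory):

import org.gradle.testkit.runner.GradleRunner

def result = GradleRunner.create()
    .withProjectDir(projectDir)
    .withArguments('build', '--watch-fs') // re-enables file system watching
    .build()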
The maven plugin has been removed. You should use the maven-publish plugin instead.
Please refer to the documentation of the Maven Publish plugin for more details.
The uploadArchives task was used in combination with the legacy Ivy or Maven publishing
mechanisms. It has been removed in Gradle 7. You should migrate to the maven-publish or
ivy-publish plugin instead.
Please refer to the documentation of the Maven Publish plugin for publishing on Maven
repositories. Please refer to the documentation of the Ivy Publish plugin for publishing on Ivy
repositories.
In the context of dependency version sorting, a -SNAPSHOT version is now considered to be right
before a final release but after any -RC version (for example, 1.0-RC1 < 1.0-SNAPSHOT < 1.0). More
special version suffixes are also taken into account. This brings the Gradle algorithm closer to the
Maven one for well-known version suffixes. Have a look at the documentation for all the rules
Gradle applies.
Removal of Play Framework plugins
The deprecated Play plugins have been removed. An external replacement, the Play Framework
plugin, is available from the plugin portal.
These unmaintained alternative JVM plugins have been removed: java-lang, scala-lang, junit-
test-suite, jvm-component, jvm-resources.
Please use the stable Java Library and Scala plugins instead.
The following plugins for experimental JavaScript integration are now removed from the
distribution: coffeescript-base, envjs, javascript-base, jshint, rhino.
If you used these plugins despite their experimental nature, you may find suitable replacements in
the Plugin Portal.
The layout method taking a configuration block has been removed and is replaced by
patternLayout.
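For instance, an Ivy repository with a custom layout would now be declared like this; the URL and
pattern are hypothetical:

repositories {
    ivy {
        url 'https://repo.example.com/ivy'
        patternLayout {
            artifact '[organisation]/[module]/[revision]/[artifact]-[revision].[ext]'
        }
    }
}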
A Gradle build is defined by its settings.gradle(.kts) file found in the current or parent directory.
Without a settings file, a Gradle build is undefined and Gradle produces an error when attempting
to execute tasks.
Exceptions to this are invoking Gradle with the init task or using diagnostic command line flags,
such as --version.
Calling afterEvaluate after the project has been evaluated is now an error
Gradle 6.x warns users about the wrong behavior and ignores the target action in this scenario.
Starting from 7.0 the same case will produce an error. Plugins and build scripts should be adjusted
to call afterEvaluate only at configuration time. If you have such a build failure and the related
afterEvaluate statement is declared in your build sources then you can simply delete it. If
afterEvaluate is declared in a plugin then report the issue to the plugin maintainers.
Calling any mutator methods (i.e. clear(), add(), remove(), etc.) on ConfigurableFileCollection after
the stored value has been calculated throws an exception. Users and plugin authors should adjust
their code such that all configuration of a ConfigurableFileCollection happens at configuration
time, before the values are read.
Removal of ProjectLayout#configurableFiles
Removal of UnableToDeleteFileException
• The configDir getters and setters have been removed from the Checkstyle task and extension. Use
the configDirectory property instead.
• The rulePriority getter and setter have been removed from the Pmd task and extension. Use the
rulesMinimumPriority property instead.
The getBaseName() and setBaseName() methods were removed from the Distribution class. Clients
should replace the usages with the distributionBaseName property.
Using AbstractTask
Registering a task with the AbstractTask type or with a type extending AbstractTask was deprecated
in Gradle 6.5 and is now an error in Gradle 7.0. You can use DefaultTask instead.
Removal of BuildListener.buildStarted(Gradle)
The following APIs, which were not usable via command line options anymore since Gradle 5.0, are
now removed: StartParameter.useEmptySettings(), StartParameter.isUseEmptySettings(),
StartParameter.setSearchUpwards(boolean) and StartParameter.isSearchUpwards().
Gradle no longer supports discovering the settings file in a directory named master in a sibling
directory. If your build still uses this deprecated feature, consider refactoring the build to have the
root directory match the physical root of the project hierarchy. You can find more information
about how to structure a Gradle build or a composition of builds in the user manual. Alternatively,
you can still run tasks in builds like this by invoking the build from the master directory only using
a fully qualified path to the task.
Compiling, testing and executing now works automatically for any source set that defines a module
by containing a module-info.java file. Usually, this is the behavior you need. If this is causing issues
in cases you manually configure the module path, or use a 3rd party plugin for it, you can still opt
out of this by setting modularity.inferModulePath to false on the java extension or individual tasks.
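A sketch of the opt-out on the java extension:

java {
    modularity.inferModulePath = false
}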
Removal of ValidateTaskProperties
The ValidateTaskProperties task has been removed and replaced by the ValidatePlugins task.
Removal of ImmutableFileCollection
The ImmutableFileCollection type has been removed. Use the factory method instead. A handle to
the project layout can be obtained via Project.layout.
Removal of ComponentSelectionReason.getDescription
• DefaultNamedDomainObjectSet(Class, Instantiator)
• DefaultPolymorphicDomainObjectContainer(Class, Instantiator)
The local build cache configuration now needs to be done via BuildCacheConfiguration.local().
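For example, in the settings script (the cache directory is a hypothetical choice):

buildCache {
    local {
        directory = new File(rootDir, 'build-cache')
    }
}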
This internal API was used in plugins, among others the Nebula plugins, and was deprecated in the
Gradle 5.x timeline and is now removed. The latest plugin versions should no longer reference it.
Setting the config_loc config property on the checkstyle plugin is now an error
For example, the following now fails:

checkstyle {
    configProperties['config_loc'] = file("path/to/checkstyle-config-dir")
}
Builds should declare the checkstyle configuration with the checkstyle block:
checkstyle {
    configDirectory = file("path/to/checkstyle-config-dir")
}
Querying the mapped value of a provider before the producer has completed is now an error
Gradle 6.x warns users about the wrong behavior and then returns a possibly incorrect provider
value. Starting with 7.0 the same case will produce an error. Plugins and build scripts should be
adjusted to query the mapped value of a provider, for example a task output property, after the task
has completed.
Gradle 6.0 started warning about problems with task definitions (such as incorrectly defined inputs
or outputs). For Gradle 7.0, those warnings are now errors and will fail the build.
Change in behavior when there’s a strict version conflict with a local project
Previous Gradle releases would succeed, selecting the project dependency despite the strict
constraint. Starting from Gradle 7, this will trigger a dependency resolution failure.
Deprecations
Having a task which produces an output in a location and another task consuming that location by
referring to it as an input without the consumer task depending on the producer task has been
deprecated. A fix for this problem is to add a dependency from the consumer to the producer.
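A minimal sketch of the fix; the task names are invented:

def outDir = layout.buildDirectory.dir('generated')

def producer = tasks.register('producer') {
    outputs.dir(outDir)
    doLast { outDir.get().asFile.mkdirs() }
}

tasks.register('consumer') {
    inputs.dir(outDir)
    dependsOn(producer) // the fix: declare the dependency explicitly
    doLast { println outDir.get().asFile.list() }
}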
Duplicates strategy
Gradle 7 now fails when a copy operation (or any operation which uses a
org.gradle.api.file.CopySpec) encounters a duplicate entry and the duplicates strategy isn’t
set. Please look at the CopySpec docs for details.
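For example, an explicit strategy can be set on all copy tasks (a sketch; pick the strategy that fits
your build):

tasks.withType(Copy).configureEach {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
}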
The API supporting the Java Toolchain feature in org.gradle.jvm.toolchain is now marked as
@NonNull.
This may impact Kotlin consumers where the return types of APIs are no longer nullable.
• Kotlin has been updated to Kotlin 1.4.20. Note that Gradle scripts are still using the Kotlin 1.3
language.
Projects imported into Eclipse now include custom source set classpaths
Previously, projects imported by Eclipse only included dependencies for the main and test source
sets. The compile and runtime classpaths of custom source sets were ignored.
Since Gradle 6.8, projects imported into Eclipse include the compile and runtime classpath for
every source set defined by the build.
Previously, empty directories would be taken into account during up-to-date checks and build cache
key calculations for the sources declared in SourceTask. This meant that a source tree that contained
an empty directory and an otherwise identical source tree that did not contain the empty directory
would be considered different sources, even if the task would produce the same outputs. In Gradle
6.8, SourceTask now ignores empty directories during doing up-to-date checks and build cache key
calculations. In the vast majority of cases, this is the desired behavior, but it is possible that a task
may extend SourceTask but also produce different outputs when empty directories are present in
the sources. For tasks where this is a concern, you can expose a separate property without the
@IgnoreEmptyDirectories annotation in order to capture those changes:
@InputFiles
@SkipWhenEmpty
@PathSensitive(PathSensitivity.ABSOLUTE)
public FileTree getSourcesWithEmptyDirectories() {
    return super.getSource()
}
Changes to publications
If, for some reason, you still want to publish components with dependencies on enforced platforms,
you can disable the validation following the documentation.
Gradle’s file trees apply some default exclude patterns for convenience — the same defaults as Ant
in fact. See the user manual for more information. Sometimes, Ant’s default excludes prove
problematic, for example when you want to include the .gitignore in an archive file.
Changing Gradle’s default excludes during the execution phase can lead to correctness problems
with up-to-date checks. As a consequence, you are only allowed to change Gradle’s default excludes
in the settings script, see the user manual for an example.
Deprecations
Direct references to tasks from included builds in mustRunAfter, shouldRunAfter and finalizedBy task
methods have been deprecated. Task ordering using mustRunAfter and shouldRunAfter as well as
finalizers specified by finalizedBy should be used for task ordering within a build. If you happen to
have cross-build task ordering defined using above mentioned methods, consider restructuring
such builds and decoupling them from one another.
Gradle will emit a deprecation warning when your build relies on finding the settings file in a
directory named master in a sibling directory.
If your build uses this feature, consider refactoring the build to have the root directory match the
physical root of the project hierarchy.
Alternatively, you can still run tasks in builds like this by invoking the build from the master
directory only using a fully qualified path to the task.
Gradle Kotlin DSL extensions have been changed to favor Gradle’s Action<T> type over Kotlin
function types.
While the change should be transparent to Kotlin clients, Java clients calling Kotlin DSL extensions
need to be updated to use the Action<T> APIs.
Previously, buildSrc was built in such a way that included builds were ignored from the root build.
Since Gradle 6.7, buildSrc can see any included build from the root build. This may cause
dependencies to be substituted from an included build in buildSrc. This may also change the order
in which some builds are executed if an included build is needed by buildSrc.
Deprecations
Gradle’s file trees apply some default exclude patterns for convenience — the same defaults as Ant
in fact. See the user manual for more information. Sometimes, Ant’s default excludes prove
problematic, for example when you want to include the .gitignore in an archive file.
Changing Gradle’s default excludes during the execution phase can lead to correctness problems
with up-to-date checks, and is deprecated. You are only allowed to change Gradle’s default excludes
in the settings script, see the user manual for an example.
Previously, it was possible to declare a dependency on another configuration, like this:

dependencies {
    implementation(configurations.myConfiguration)
}
This behavior is now deprecated as it is confusing: one could expect the "dependent configuration"
to be resolved first and add the result of resolution as dependencies to the including configuration,
which is not the case. The deprecated version can be replaced with the actual behavior, which is
configuration inheritance:
configurations.implementation.extendsFrom(configurations.myConfiguration)
While adding support for expressing variant support in dependency substitutions, a bug fix
introduced a behaviour change that some builds may rely upon. Previously a substituted
dependency would still use the attributes of the original selector instead of the ones from the
replacement selector.
With that change, existing substitutions around dependencies with richer selectors, such as for
platform dependencies, will no longer work as they did. It becomes mandatory to define the variant
aware part in the target selector.
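For example, a substitution targeting a platform dependency now needs the variant information on
the target selector as well; the coordinates are hypothetical:

configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(platform(module('com.example:platform:1.0')))
            .using(platform(module('com.example:platform:2.0')))
    }
}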
Deprecations
AbstractTask is an internal class which is visible on the public API, as a superclass of public type
DefaultTask. AbstractTask will be removed in Gradle 7.0, and the following are deprecated in Gradle
6.5:
• Registering a task whose type is AbstractTask or TaskInternal. You can remove the task type
from the task registration and Gradle will use DefaultTask instead.
• Registering a task whose type is a subclass of AbstractTask but not a subclass of DefaultTask. You
can change the task type to extend DefaultTask instead.
• Using the class AbstractTask from plugin code or build scripts. You can change the code to use
DefaultTask instead.
Upgrading from 6.3
Gradle 6.4 enabled incremental analysis by default. Incremental analysis is only available in PMD
6.0.0 or higher. If you want to use an older PMD version, you need to disable incremental analysis:
pmd {
    incrementalAnalysis = false
}
With Gradle 6.4, the incubating API for dependency locking LockMode has changed. The value is now
set via a Property<LockMode> instead of a direct setter. This means that the notation to set the value
has to be updated for the Kotlin DSL:
dependencyLocking {
    lockMode.set(LockMode.STRICT)
}
Users of the Groovy DSL should not be impacted as the notation lockMode = LockMode.STRICT
remains valid.
If a Java library is published with Gradle Module Metadata, the information which Java version it
supports is encoded in the org.gradle.jvm.version attribute. By default, this attribute was set to
what you configured in java.targetCompatibility. If that was not configured, it was set to the
current Java version running Gradle. Changing the version of a particular compile task, e.g.
javaCompile.targetCompatibility had no effect on that attribute, leading to wrong information if the
attribute was not adjusted manually. This is now fixed and the attribute defaults to the setting of
the compile task that is associated with the sources from which the published jar is built.
Gradle versions from 6.0 to 6.3.x included could generate bad Gradle Module Metadata when
publishing on an Ivy repository which had a custom repository layout. Starting from 6.4, Gradle will
no longer publish Gradle Module Metadata if it detects that you are using a custom repository
layout.
This affects configuration code inside the application {} and java {} configuration blocks, inside a
java execution setup with project.javaexec {}, and inside various task configurations (JavaExec,
CreateStartScripts, JavaCompile, Test, Javadoc).
Deprecations
Gradle no longer includes the annotation processor classpath as provided dependencies in IDEA.
The dependencies IDEA sees at compile time are the same as what Gradle sees after resolving the
compile classpath (configuration named compileClasspath). This prevents the leakage of annotation
processor dependencies into the project’s code.
Before Gradle introduced incremental annotation processing support, IDEA required all annotation
processors to be on the compilation classpath to be able to run annotation processing when
compiling in IDEA. This is no longer necessary because Gradle has a separate annotation processor
classpath. The dependencies for annotation processors are not added to an IDEA module’s classpath
when a Gradle project with annotation processors is imported.
Gradle 6.3 does not support the rich console for 32-bit Unix systems and for old FreeBSD versions
(older than FreeBSD 10). Microsoft Windows 32-bit is unaffected.
Gradle will continue building projects on 32-bit systems but will no longer show the rich console.
Deprecations
Almost every Gradle project has the default and archives configurations which are added by the
base plugin. These configurations are no longer used in modern Gradle builds that use variant
aware dependency management and the new publishing plugins.
While the configurations will stay in Gradle for backwards compatibility for now, using them to
declare dependencies or to resolve dependencies is now deprecated.
Resolving these configurations was never an intended use case and only possible because in earlier
Gradle versions every configuration was resolvable. For declaring dependencies, please use the
configurations provided by the plugins you use, for example by the Java Library plugin.
A classpath in a JVM project now explicitly requests the org.gradle.category=library attribute. This
leads to clearer error messages if a certain library cannot be used. For example, when the library
does not support the required Java version. The practical effect is that now all platform
dependencies have to be declared as such. Before, platform dependencies also worked, accidentally,
when the platform() keyword was omitted for local platforms or platforms published with Gradle
Module Metadata.
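For example, a dependency on a local platform project must now be declared as such; the project
path is hypothetical:

dependencies {
    implementation platform(project(':my-platform'))
}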
Properties from project root gradle.properties leaking into buildSrc and included builds
There was a regression in Gradle 6.2 and Gradle 6.2.1 that caused Gradle properties set in the
project root gradle.properties file to leak into the buildSrc build and any builds included by the
root.
This could cause your build to start failing if the buildSrc build or an included build suddenly found
an unexpected or incompatible value for a property coming from the project root gradle.properties
file.
Deprecations
Querying a mapped output property of a task before the task has completed
Querying the value of a mapped output property before the task has completed can cause strange
build failures because it indicates stale or non-existent outputs may be used by mistake. This
behavior is deprecated and will emit a deprecation warning. This will become an error in Gradle
7.0.
This problem arises, for example, when a producer task’s output file is parsed before the producer
executes: querying the value of consumer.threadPoolSize prior to the producer completing will
produce a deprecation warning, as the output file has not yet been generated.
Discontinued methods
The following methods have been discontinued and should no longer be used. They will be
removed in Gradle 7.0.
• BasePluginConvention.setProject(ProjectInternal)
• BasePluginConvention.getProject()
• StartParameter.useEmptySettings()
• StartParameter.isUseEmptySettings()
A set of alternative plugins for Java and Scala development were introduced in Gradle 2.x as an
experiment based on the "software model". These plugins are now deprecated and will eventually
be removed. If you are still using one of these old plugins (java-lang, scala-lang, jvm-component, jvm-
resources, junit-test-suite) please consult the documentation on Building Java & JVM projects to
determine which of the stable JVM plugins are appropriate for your project.
In Gradle 6.0, the ProjectLayout service was made available to worker actions via service injection.
This service allowed for mutable state to leak into a worker action and introduced a way for
dependencies to go undeclared in the worker action.
ProjectLayout has been removed from the available services. Worker actions that were using
ProjectLayout should switch to injecting the projectDirectory or buildDirectory as a parameter
instead.
Starting from Gradle 6.2, Gradle performs a sanity check before uploading, to make sure you don’t
upload stale files (files produced by another build). This introduces a problem with Spring Boot
applications which are uploaded using the components.java component:
This is caused by the fact that the main jar task is disabled by the Spring Boot application, and the
component expects it to be present. Because the bootJar task uses the same file as the main jar task
by default, previous releases of Gradle would either publish a stale file or fail the build.
A workaround is to tell Gradle what to upload. If you want to upload the bootJar, then you need to
configure the outgoing configurations to do this:
configurations {
    [apiElements, runtimeElements].each {
        it.outgoing.artifacts.removeIf {
            it.buildDependencies.getDependencies(null).contains(jar)
        }
        it.outgoing.artifact(bootJar)
    }
}
Alternatively, you might want to re-enable the jar task, and add the bootJar with a different
classifier.
jar {
    enabled = true
}

bootJar {
    classifier = 'application'
}
Upgrading your build from Gradle 5.x to 6.0
1. Try running gradle help --scan and view the deprecations view of the generated build scan.
This is so that you can see any deprecation warnings that apply to your build.
Alternatively, you could run gradle help --warning-mode=all to see the deprecations in the
console, though it may not report as much detailed information.
2. Update your plugins.
Some plugins will break with this new version of Gradle, for example because they use internal
APIs that have been removed or changed. The previous step will help you identify potential
problems by issuing deprecation warnings when a plugin does try to use a deprecated part of
the API.
3. Run gradle wrapper --gradle-version 6.0 to update the project to 6.0.
4. Try to run the project and debug any errors using the Troubleshooting Guide.
Upgrading from 5.6 and earlier
Deprecations
Dependencies should no longer be declared using the compile and runtime configurations
The usage of the compile and runtime configurations in the Java ecosystem plugins has been
discouraged since Gradle 3.4.
These configurations are used for compiling and running code from the main source set. Other source sets create similar configurations (e.g. testCompile and testRuntime for the test source set), which should not be used either. The implementation, api, compileOnly and runtimeOnly configurations should be used to declare dependencies, and the compileClasspath and runtimeClasspath configurations to resolve dependencies. See the relationship of these configurations.
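For example, dependency declarations can usually be moved over directly (a minimal sketch; the coordinates are placeholders):
build.gradle
dependencies {
    // Before (deprecated):
    //   compile 'com.example:some-library:1.0'
    //   runtime 'com.example:some-runtime-helper:1.0'

    // After:
    implementation 'com.example:some-library:1.0'
    runtimeOnly 'com.example:some-runtime-helper:1.0'
}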
Legacy publication system is deprecated and replaced with the *-publish plugins
Users should migrate to the publishing system of Gradle by using either the maven-publish or ivy-
publish plugins. These plugins have been stable since Gradle 4.8.
The publishing system is also the only way to ensure the publication of Gradle Module Metadata.
When Gradle detects problems with task definitions (such as incorrectly defined inputs or outputs)
it will show the following message on the console:
Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.0/userguide/command_line_interface.html#sec:command_line_warnings
The deprecation warnings show up in build scans for every build, regardless of the command-line
switches used.
When the build is executed with --warning-mode all, the individual warnings are shown in the console output. If a deprecation warning originates in your own build logic, fix it there; otherwise, you'll need to report the problems to the maintainer of the relevant task or plugin.
In Gradle 5.4 we introduced a new API for implementing incremental tasks: InputChanges. The old
API based on IncrementalTaskInputs has been deprecated.
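As an illustration, a minimal incremental task written against the new API could look like the following sketch (the task and property names are invented):
build.gradle
abstract class ProcessFiles extends DefaultTask {
    @Incremental
    @InputDirectory
    abstract DirectoryProperty getInputDir()

    @OutputDirectory
    abstract DirectoryProperty getOutputDir()

    @TaskAction
    void process(InputChanges changes) {
        // Only the files that actually changed since the last run are visited.
        changes.getFileChanges(inputDir).each { change ->
            println "${change.changeType}: ${change.normalizedPath}"
        }
    }
}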
Forced dependencies
Forcing dependency versions using force = true on a first-level dependency has been deprecated.
Force has both a semantic and ordering issue which can be avoided by using a strict version
constraint.
The BuildListener.buildStarted and Gradle.buildStarted methods have been deprecated. These methods currently do not work as expected, since the callbacks will never be called after the build has started.
Implicit duplicate strategy for Copy or archive tasks has been deprecated
Archive tasks Tar and Zip by default allow multiple entries for the same path to exist in the created
archive. This can cause "grossly invalid zip files" that can trigger zip bomb detection.
To prevent this from happening accidentally, encountering duplicates while creating an archive
now produces a deprecation message and will fail the build starting with Gradle 7.0.
Copy tasks also happily copy multiple sources with the same relative path to the destination
directory. This behavior has also been deprecated.
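To resolve the deprecation, declare an explicit strategy on the affected tasks; EXCLUDE below is just one of the available DuplicatesStrategy values:
build.gradle
tasks.withType(Zip).configureEach {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
}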
A Gradle build is defined by a settings.gradle[.kts] file in the current or parent directory. Without
a settings file, a Gradle build is undefined and will emit a deprecation warning.
In Gradle 7.0, Gradle will only allow you to invoke the init task or diagnostic command line flags,
such as --version, with undefined builds.
Once a project is evaluated, Gradle ignores all configuration passed to Project#afterEvaluate and
emits a deprecation warning. This scenario will become an error in Gradle 7.0.
Deprecated plugins
The following bundled plugins were never announced and will be removed in the next major
release of Gradle:
• org.gradle.coffeescript-base
• org.gradle.envjs
• org.gradle.javascript-base
• org.gradle.jshint
• org.gradle.rhino
Gradle 6.0 supports Android Gradle Plugin versions 3.4 and later.
For Gradle 6, usage of the build scan plugin must be replaced with the Gradle Enterprise plugin. This also requires changing how the plugin is applied. Please see https://gradle.com/help/gradle-6-build-scan-plugin for more information.
Previously, Gradle used the name of the root project as the build name for an included build. Now,
the name of the build’s root directory is used and the root project name is not considered if
different. A different name for the build can be specified if the build is being included via a settings
file.
includeBuild("some-other-build") {
    name = "another-name"
}
The previous behavior was problematic as it caused different names to be used at different times
during the build.
Previously, Gradle did not prevent using the name “buildSrc” for a subproject of a multi-project
build or as the name of an included build. Now, this is not allowed. The name “buildSrc” is now
reserved for the conventional buildSrc project that builds extra build logic.
Typical use of buildSrc is unaffected by this change. You will only be affected if your settings file
specifies include("buildSrc") or includeBuild("buildSrc").
The Zinc compiler has been upgraded to version 1.3.0. Gradle no longer supports building for Scala
2.9.
The minimum Zinc compiler supported by Gradle is 1.2.0 and the maximum tested version is 1.3.0.
To make it easier to select the version of the Zinc compiler, you can now configure a zincVersion
property:
scala {
    zincVersion = "1.2.1"
}
Please remove any explicit dependencies you’ve added to the zinc configuration and use this
property instead. If you try to use the com.typesafe.zinc:zinc dependency, Gradle will switch to the
new Zinc implementation.
In the past, it was possible to use any build cache implementation as the local cache. This is no
longer allowed as the local cache must always be a DirectoryBuildCache.
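A conforming local cache configuration therefore looks along these lines (a sketch; the directory and retention period are illustrative):
settings.gradle
buildCache {
    local {
        directory = new File(rootDir, 'build-cache')
        removeUnusedEntriesAfterDays = 7
    }
}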
Failing to pack or unpack cached results will now fail the build
In the past, when Gradle encountered a problem while packing the results of a cached task, Gradle
would ignore the problem and continue running the build.
When encountering a corrupt cached artifact, Gradle would remove whatever was already
unpacked and re-execute the task to make sure the build had a chance to succeed.
While this behavior was intended to make a build successful, this had the adverse effect of hiding
problems and led to reduced cache performance.
In Gradle 6.0, both pack and unpack errors will cause the build to fail, so that these problems will
be surfaced more easily.
Previously, in order to use the build cache for the buildSrc build you needed to duplicate your build cache configuration in the buildSrc build. Now, it automatically uses the build cache configuration defined by the top level settings script.
Officially introduced in Gradle 5.3, Gradle Module Metadata was created to solve many of the
problems that have plagued dependency management for years, in particular, but not exclusively,
in the Java ecosystem.
This means, if you are publishing libraries with Gradle and using the maven-publish or ivy-publish
plugin, the Gradle Module Metadata file is always published in addition to traditional metadata.
The traditional metadata file will contain a marker so that Gradle knows that there is additional
metadata to consume.
The following rules are verified when publishing Gradle Module Metadata:
• Two variants cannot have the exact same attributes and capabilities,
• If there are dependencies, at least one, across all variants, must carry version information.
If Gradle fails to locate the metadata file (.pom or ivy.xml) of a module in a repository defined in the
repositories { } section, it now assumes that the module does not exist in that repository.
For dynamic versions, the maven-metadata.xml for the corresponding module needs to be present in
a Maven repository.
Previously, Gradle would also look for a default artifact (.jar). This behavior often caused a large
number of unnecessary requests when using multiple repositories that slowed builds down.
You can opt into the old behavior for selected repositories by adding the artifact() metadata
source.
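For example, on a Maven repository declaration (the URL is a placeholder):
build.gradle
repositories {
    maven {
        url 'https://repo.example.com/releases'
        metadataSources {
            mavenPom()
            artifact() // also accept a module that only publishes a jar
        }
    }
}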
Changing the pom packaging property no longer changes the artifact extension
Previously, if the pom packaging was not jar, ejb, bundle or maven-plugin, the extension of the main
artifact published to a Maven repository was changed during publishing to match the pom
packaging.
This behavior led to broken Gradle Module Metadata and was difficult to understand due to
handling of different packaging types.
Build authors can change the artifact name when the artifact is created to obtain the same result as
before — e.g. by setting jar.archiveExtension.set(pomPackaging) explicitly.
A number of fixes were made to produce more correct ivy.xml metadata in the ivy-publish plugin.
As a consequence, the internal structure of the ivy.xml file has changed. The runtime configuration
now contains more information, which corresponds to the runtimeElements variant of a Java
library. The default configuration should yield the same result as before.
In general, users are advised to migrate from ivy.xml to the new Gradle Module Metadata format.
Previously, the buildSrc project was built before applying the project’s settings script and its classes
were visible within the script. Now, buildSrc is built after the settings script and its classes are not
visible to it. The buildSrc classes remain visible to project build scripts and script plugins.
Custom logic can be used from a settings script by declaring external dependencies.
Previously, any pluginManagement {} blocks inside a settings script were executed during the normal
execution of the script.
Now, they are executed earlier in a similar manner to buildscript {} or plugins {}. This means that
code inside such a block cannot reference anything declared elsewhere in the script.
This change has been made so that pluginManagement configuration can also be applied when
resolving plugins for the settings script itself.
Plugins and classes loaded in settings scripts are visible to project scripts and buildSrc
Previously, any classes added to a settings script by using buildscript {} were not visible outside of the script. Now, they are visible to all of the project build scripts.
They are also visible to the buildSrc build script and its settings script.
This change has been made so that plugins applied to the settings script can contribute logic to the
entire build.
• The validateTaskProperties task is now deprecated, use validatePlugins instead. The new name
better reflects the fact that it also validates artifact transform parameters and other non-
property definitions.
• The following task validation errors now fail the build at runtime and are promoted to errors
for ValidatePlugins:
◦ A task property is annotated with a property annotation not allowed for tasks, like
@InputArtifact.
Just like when using the kotlin-dsl plugin, it is now required to declare a repository where Kotlin
dependencies can be found if you apply the embedded-kotlin plugin.
plugins {
    `embedded-kotlin`
}

repositories {
    mavenCentral()
}
Kotlin DSL IDE support now requires Kotlin IntelliJ Plugin >= 1.3.50
With Kotlin IntelliJ plugin versions prior to 1.3.50, Kotlin DSL scripts will be wrongly highlighted
when the Gradle JVM is set to a version different from the one in Project SDK. Simply upgrade your
IDE plugin to a version >= 1.3.50 to restore the correct Kotlin DSL script highlighting behavior.
Kotlin DSL script base types no longer extend Project, Settings or Gradle
In previous versions, Kotlin DSL scripts were compiled to classes that implemented one of the three core Gradle configuration interfaces in order to implicitly expose their APIs to scripts: org.gradle.api.Project for project scripts, org.gradle.api.initialization.Settings for settings scripts and org.gradle.api.invocation.Gradle for init scripts.
Having the script instance implement the core Gradle interface of the model object it was supposed to configure was convenient, because it made the model object API immediately available to the body of the script. But it was also a lie that could cause all sorts of trouble whenever the script itself was used in place of the model object: a project script was not a proper Project instance just because it implemented the core Project interface, and the same was true for settings and init scripts.
In 6.0 all Kotlin DSL scripts are compiled to classes that implement the newly introduced
org.gradle.kotlin.dsl.KotlinScript interface and the corresponding model objects are now
available as implicit receivers in the body of the scripts. In other words, a project script behaves as if the body of the script were enclosed within a with(project) { … } block, a settings script within a with(settings) { … } block, and an init script within a with(gradle) { … } block. This implies that the corresponding model object is also available as a property in the body of the script: the project property for project scripts, the settings property for settings scripts and the gradle property for init scripts.
As part of the change, the SettingsScriptApi interface is no longer implemented by settings scripts
and the InitScriptApi interface is no longer implemented by init scripts. They should be replaced
with the corresponding model object interfaces, Settings and Gradle.
Miscellaneous
Timestamps in the generated documentation have very limited practical use, but they make it impossible to have repeatable documentation builds. Therefore, the Javadoc and Groovydoc tasks are now configured to omit timestamps by default.
Gradle always uses configDirectory as the value for 'config_loc' when running Checkstyle.
The following deprecated methods on the task container now result in errors:
• TaskContainer.add()
• TaskContainer.addAll()
• TaskContainer.remove()
• TaskContainer.removeAll()
• TaskContainer.retainAll()
• TaskContainer.clear()
• TaskContainer.iterator().remove()
• Replacing a registered (unrealized) task with an incompatible type. A compatible type is the
same type or a sub-type of the registered type.
Use ObjectFactory.fileProperty() instead of the following methods that are now removed:
• DefaultTask.newInputFile()
• DefaultTask.newOutputFile()
• ProjectLayout.fileProperty()
Use ObjectFactory.directoryProperty() instead of the following methods that are now removed:
• DefaultTask.newInputDirectory()
• DefaultTask.newOutputDirectory()
• ProjectLayout.directoryProperty()
The deprecated FindBugs plugin has been removed. As an alternative, you can use the SpotBugs
plugin from the Gradle Plugin Portal.
The deprecated JDepend plugin has been removed. There are a number of community-provided
plugins for code and architecture analysis available on the Gradle Plugin Portal.
The OSGI plugin has been removed
The deprecated OSGI plugin has been removed. There are a number of community-provided OSGI
plugins available on the Gradle Plugin Portal.
The deprecated announce and build-announcements plugins have been removed. There are a
number of community-provided plugins for sending out notifications available on the Gradle
Plugin Portal.
The deprecated Compare Gradle Builds plugin has been removed. Please use build scans for build
analysis and comparison.
The deprecated Play plugin has been removed. An external replacement, the Play Framework
plugin, is available from the plugin portal.
Tasks extending AbstractCompile can implement their own @TaskAction method with the name of
their choosing.
They are also free to add a method annotated with @TaskAction using an InputChanges parameter
without having to implement a parameter-less one as well.
• The append property on JacocoTaskExtension has been removed. append is now always configured
to be true for the Jacoco agent.
• File paths in deployment descriptor file name for the ear plugin are not allowed any more. Use a
simple name, like application.xml, instead.
• When incremental Groovy compilation is enabled, a wrong configuration of the source roots or enabling Java annotation processing for Groovy now fails the build. Disable incremental Groovy compilation when you want to compile in those cases.
• ComponentSelectionRule no longer can inject the metadata or ivy descriptor. Use the methods on
the ComponentSelection parameter instead.
• Declaring an incremental task without declaring outputs is now an error. Declare file outputs or
use TaskOutputs.upToDateWhen() instead.
• Changing the value of a task property with type Property<T> after the task has started execution
now results in an error.
• There are slight changes in the incubating capabilities resolution API, which was introduced in 5.6, to also allow variant selection based on variant name.
Deprecations
Changing the contents of ConfigurableFileCollection task properties after the task starts execution
When a task property has type ConfigurableFileCollection, the file collection referenced by the property will ignore changes made to the contents of the collection once the task starts execution. This has two benefits. Firstly, it prevents accidental changes to the property value during task execution, which could cause Gradle's up-to-date checks and build cache lookups to use values different from those used by the task action. Secondly, it improves performance, as Gradle can calculate the value once and cache the result.
Declaring an incremental task without declaring outputs is now deprecated. Declare file outputs or
use TaskOutputs.upToDateWhen() instead.
Task dependencies are honored for task @Input properties whose value is a Property
Previously, task dependencies would be ignored for task @Input properties of type Property<T>.
These are now honored, so that it is possible to attach a task output property to a task @Input
property.
This may introduce unexpected cycles in the task dependency graph, where the value of an output
property is mapped to produce a value for an input property.
Declaring task dependencies using a file Provider that does not represent a task output
This is now an error because Gradle does not know how to build files that are not task outputs.
Note that it is still possible to pass Task.dependsOn() a Provider that returns a file and that represents a task output, for example myTask.dependsOn(jar.archiveFile) or myTask.dependsOn(taskProvider.flatMap { it.outputDirectory }), when the Provider is an annotated @OutputFile or @OutputDirectory property of a task.
Previously, calling Property.set(null) would always reset the value of the property to 'not defined'.
Now, the convention that is associated with the property using the convention() method will be
used to determine the value of the property.
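A minimal illustration of the new behavior, using the ObjectFactory available as objects in a build script:
build.gradle
def prop = objects.property(String)
prop.convention('default-value')

prop.set('explicit')
println prop.get()  // prints 'explicit'

prop.set(null)      // no longer means 'not defined'...
println prop.get()  // ...the property falls back to its convention: 'default-value'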
Enhanced validation of names for publishing.publications and publishing.repositories
The repository and publication names are used to construct task names for publishing. It was
possible to supply a name that would result in an invalid task name. Names for publications and
repositories are now restricted to [A-Za-z0-9_\-.]+.
Gradle now prevents internal dependencies (like Guava) from leaking into the classpath used by
Worker API actions. This fixes an issue where a worker needs to use a dependency that is also used
by Gradle internally.
In previous releases, it was possible to rely on these leaked classes. Plugins relying on this behavior
will now fail. To fix the plugin, the worker should explicitly include all required dependencies in its
classpath.
The PMD plugin has been upgraded to use PMD version 6.15.0 instead of 6.8.0 by default.
Contributed by wreulicke
Previously, all copies of a configuration always had the name <OriginConfigurationName>Copy. Now
when creating multiple copies, each will have a unique name by adding an index starting from the
second copy. (e.g. CompileOnlyCopy2)
Gradle 5.6 no longer supplies custom classpath attributes in the Eclipse model. Instead, it provides
the attributes for Eclipse test sources. This change requires Buildship version 3.1.1 or later.
Gradle Kotlin DSL scripts and Gradle Plugins authored using the kotlin-dsl plugin are now
compiled using Kotlin 1.3.41.
Please see the Kotlin blog post and changelog for more information about the included changes.
The minimum supported Kotlin Gradle Plugin version is now 1.2.31. Previously it was 1.2.21.
Previous versions of Gradle would automatically select, in case of capability conflicts, the module
which has the highest capability version. Starting from 5.6, this is an opt-in behavior that can be
activated using:
configurations.all {
    resolutionStrategy.capabilitiesResolution.all { selectHighestVersion() }
}
See the capabilities section of the documentation for more options.
When Gradle has to remove the output files of a task for various reasons, it will not follow
symlinked directories. The symlink itself will be deleted, but the contents of the linked directory
will stay intact.
Deprecations
Play
The built-in Play plugin has been deprecated and will be replaced by a new Play Framework plugin
available from the plugin portal.
Build Comparison
The build comparison plugin has been deprecated and will be removed in the next major version of
Gradle.
Build scans show much deeper insights into your build and you can use Gradle Enterprise to directly compare two builds' build scans.
Project names configured via EclipseProject.setName(…) were honored by Gradle and Buildship in
all cases, even when the names caused conflicts and import/synchronization errors.
Gradle can now deduplicate these names if they conflict with other project names in an Eclipse
workspace. This may lead to different Eclipse project names for projects with user-specified names.
The upcoming 3.1.1 version of Buildship is required to take advantage of this behavior.
Contributed by Christian Fränkel
The JaCoCo plugin has been upgraded to use JaCoCo version 0.8.4 instead of 0.8.3 by default.
The version of Ant distributed with Gradle has been upgraded to 1.9.14 from 1.9.13.
This affects Kotlin DSL build scripts that make use of ExtensionAware extension members such as the
extra properties accessor inside the dependencies {} block. The receiver for those members will no
longer be the enclosing Project instance but the dependencies object itself, the innermost
ExtensionAware conforming receiver. In order to address Project extra properties inside
dependencies {} the receiver must be explicitly qualified i.e. project.extra instead of just extra.
Affected extensions also include the<T>() and configure<T>(T.() -> Unit).
Previous versions of Gradle could, in some complex dependency graphs, have a wrong result or a
randomized dependency order when lots of excludes were present. To mitigate this, the algorithm
that computes exclusions has been rewritten. In some rare cases this may cause some differences in
resolution, due to the correctness changes.
The system classpath for worker daemons started by the Worker API when using PROCESS isolation
has been reduced to a minimum set of Gradle infrastructure. User code is still segregated into a
separate classloader to isolate it from the Gradle runtime. This should be a transparent change for
tasks using the worker API, but previous versions of Gradle mixed user code and Gradle internals
in the worker process. Worker actions that rely on things like the java.class.path system property
may be affected, since java.class.path now represents only the classpath of the Gradle internals.
Deprecations
Using a custom build cache implementation for the local build cache is now deprecated. The only
allowed type will be DirectoryBuildCache going forward. There is no change in the support for using
custom build cache implementations as the remote build cache.
There was a bug from Gradle 5.0 to 5.2.1 (included) where enforced platforms would potentially include dependencies instead of constraints. This would happen whenever a POM file defined both dependencies and "constraints" (via <dependencyManagement>) and you used enforcedPlatform. Gradle 5.3 fixes this bug, meaning that you might have differences in the resolution result if you relied on this broken behavior. Similarly, Gradle 5.3 will no longer try to download jars for platform and enforcedPlatform dependencies (as they should only bring in constraints).
If you apply any of the Java plugins, Gradle will now do its best to select dependencies which match the target compatibility of the module being compiled. What this means, in practice, is that if you have module A built for Java 8, and module B built for Java 8, then there's no change. However, if B is built for Java 9+, then it's no longer binary compatible, and Gradle will complain with an error message explaining that the dependency cannot be consumed at the Java 8 target compatibility.
In general, this is a sign that your project is misconfigured and that your dependencies are not
compatible. However, there are cases where you still may want to do this, for example when only a
subset of classes of your module actually need the Java 9 dependencies, and are not intended to be
used on earlier releases. Java in general doesn’t encourage you to do this (you should split your
module instead), but if you face this problem, you can workaround by disabling this new behavior
on the consumer side:
java {
    disableAutoTargetJvm()
}
Bug fix in Maven / Ivy interoperability with dependency substitution
If you have a Maven dependency pointing to an Ivy dependency where the default configuration
dependencies do not match the compile + runtime + master ones and that Ivy dependency was
substituted (using a resolutionStrategy.force, resolutionStrategy.eachDependency or
resolutionStrategy.dependencySubstitution) then this fix will impact you. The legacy behaviour of
Gradle, prior to 5.0, was still in place instead of being replaced by the changes introduced by
improved pom support.
Gradle no longer ignores the followSymlinks option on Windows for the clean task, all Delete tasks, and project.delete {} operations in the presence of junction points and symbolic links.
In previous Gradle versions, additional artifacts registered at the project level were not published
by maven-publish or ivy-publish unless they were also added as artifacts in the publication
configuration.
With Gradle 5.3, these artifacts are now properly accounted for and published.
This means that artifacts that are registered both on the project and the publication, Ivy or Maven,
will cause publication to fail since it will create duplicate entries. The fix is to remove these artifacts
from the publication configuration.
Potential breaking changes
none
Deprecations
Follow the API links to learn how to deal with these deprecations (if no extra information is
provided here):
• There should not be setters for lazy properties like ConfigurableFileCollection. Use setFrom
instead. For example,
validateTaskProperties.getClasses().setFrom(fileCollection)
validateTaskProperties.getClasspath().setFrom(fileCollection)
Input and output files of Sign tasks are now tracked via Signature.getToSign() and
Signature.getFile(), respectively.
In Gradle 5.0, the collection property instances created using ObjectFactory would have no value
defined, requiring plugin authors to explicitly set an initial value. This proved to be awkward and
error prone so ObjectFactory now returns instances with an empty collection as their initial value.
Since JDK 11 no longer supports changing the working directory of a running process, setting the
working directory of a worker via its fork options is now prohibited. All workers now use the same
working directory to enable reuse. Please pass files and directories as arguments instead. See
examples in the Worker API documentation.
To expand our idiomatic Provider API practices, the install name property from
org.gradle.nativeplatform.tasks.LinkSharedLibrary is affected by this change.
To expand our idiomatic Provider API practices, the WindowsResourceCompile task has been
converted to use the Provider API.
Passing additional compiler arguments now follows the same pattern as in the CppCompile and other tasks.
The list of beforeResolve actions is no longer shared between a copied configuration and the original. Instead, a copied configuration receives a copy of the beforeResolve actions at the time the copy is made. Any beforeResolve actions added after copying (to either configuration) will not be shared between the original and the copy. This may break plugins that relied on the previous behaviour.
The incubating operatingSystems property on native components has been replaced with the
targetMachines property.
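For example, with one of the C++ plugins applied, the replacement looks roughly like this (which component block you configure depends on the plugin in use):
build.gradle
application {
    targetMachines = [
        machines.linux.x86_64,
        machines.windows.x86_64
    ]
}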
The AbstractArchiveTask has several new properties using the Provider API. Plugins that extend
these types and override methods from the base class may no longer behave the same way.
Internally, AbstractArchiveTask prefers the new properties and methods like getArchiveName() are
façades over the new properties.
If your plugin/build only uses these types (and does not extend them), nothing has changed.
TIP: If you are using Gradle for Android, you need to move to version 3.3 or higher of both the Android Gradle Plugin and Android Studio.
1. If you are not already on the latest 4.10.x release, read the sections below for help upgrading
your project to the latest 4.10.x release. We recommend upgrading to the latest 4.10.x release to
get the most useful warnings and deprecations information before moving to 5.0. Avoid
upgrading Gradle and migrating to Kotlin DSL at the same time in order to ease troubleshooting
in case of potential issues.
2. Try running gradle help --scan and view the deprecations view of the generated build scan. If
there are no warnings, the Deprecations tab will not appear.
This is so that you can see any deprecation warnings that apply to your build. Gradle 5.x will
generate (potentially less obvious) errors if you try to upgrade directly to it.
Alternatively, you could run gradle help --warning-mode=all to see the deprecations in the
console, though it may not report as much detailed information.
3. Update your plugins.
Some plugins will break with this new version of Gradle, for example because they use internal
APIs that have been removed or changed. The previous step will help you identify potential
problems by issuing deprecation warnings when a plugin does try to use a deprecated part of
the API.
In particular, you will need to use at least a 2.x version of the Shadow Plugin.
4. Run gradle wrapper --gradle-version 5.0 to update the project to 5.0.
5. Move to Java 8 or higher if you haven’t already. Whereas Gradle 4.x requires Java 7, Gradle 5
requires Java 8 to run.
6. Read the Upgrading from 4.10 section and make any necessary changes.
7. Try to run the project and debug any errors using the Troubleshooting Guide.
In addition, Gradle has added several significant new and improved features that you should
consider using in your builds:
• Maven Publish and Ivy Publish Plugins that now support digital signatures with the Signing
Plugin.
• A new API for creating and configuring tasks lazily that can significantly improve your build’s
configuration time.
Other notable changes to be aware of that may break your build include:
• A change that means you should configure existing wrapper and init tasks rather than defining
your own.
• The honoring of implicit wildcards in Maven POM exclusions, which may result in
dependencies being excluded that weren’t before.
• The default memory settings for the command-line client, the Gradle daemon, and all workers
including compilers and test executors, have been greatly reduced.
• The default versions of several code quality plugins have been updated.
If you are not already on version 4.10, skip down to the section that applies to your current Gradle
version and work your way up until you reach here. Then, apply these changes when moving from
Gradle 4.10 to 5.0.
Other changes
• Gradle now bundles JAXB for Java 9 and above. You can remove the --add-modules
java.xml.bind option from org.gradle.jvmargs, if set.
The changes in this section have the potential to break your build, but the vast majority have been
deprecated for quite some time and few builds will be affected by a large number of them. We
strongly recommend upgrading to Gradle 4.10 first to get a report on what deprecations affect your
build.
The following breaking changes are not from deprecations, but the result of changes in behavior:
• The evaluation of the publishing {} block is no longer deferred until needed but behaves like
any other block. Please use afterEvaluate {} if you need to defer evaluation.
• The Javadoc and Groovydoc tasks now delete the destination dir for the documentation before
executing. This has been added to remove stale output files from the last task execution.
• The Java Library Distribution Plugin is now based on the Java Library Plugin instead of the Java
Plugin.
While it applies the Java Plugin, it behaves slightly differently (e.g. it adds the api configuration).
Thus, make sure to check whether your build behaves as expected after upgrading.
• The html property on CheckstyleReport and FindBugsReport now returns a
CustomizableHtmlReport instance that is easier to configure from statically typed languages like
Java and Kotlin.
• The Configuration Avoidance API has been updated to prevent the creation and configuration of
tasks that are never used.
• The default memory settings for the command-line client, the Gradle daemon, and all workers
including compilers and test executors, have been greatly reduced.
• The default versions of several code quality plugins have been updated.
The following breaking changes will appear as deprecation warnings with Gradle 4.10:
General
• << for task definitions no longer works. In other words, you can not use the syntax task myTask << { … }. Use the Task.doLast() method instead, like this:

task myTask {
    doLast {
        ...
    }
}
• You can no longer use any of the following characters in domain object names, such as
project and task names: <space> / \ : < > " ? * | . You should also not use . as a leading or
trailing character.
• The -Dtest.single command-line option has been removed — use test filtering instead.
• The -Dtest.debug command-line option has been removed — use the --debug-jvm option
instead.
• The -u/--no-search-upward command-line option has been removed — make sure all your
builds have a settings.gradle file.
• You can no longer have a Gradle build nested in a subdirectory of another Gradle build
unless the nested build has a settings.gradle file.
• You can no longer pass null as the configuration action of CopySpec.from(Object, Action).
• Don’t have your own classes extend AbstractFileCollection — use the Project.files() method
instead. This problem may exhibit as a missing getBuildDependencies() method.
Java builds
• The CompileOptions.bootClasspath property has been removed — use
CompileOptions.bootstrapClasspath instead.
• Gradle will no longer automatically apply annotation processors that are on the compile
classpath — use CompileOptions.annotationProcessorPath instead.
• The testClassesDir property has been removed from the Test task — use testClassesDirs
instead.
• The classesDir property has been removed from both the JDepend task and SourceSetOutput.
Use the JDepend.classesDirs and SourceSetOutput.classesDirs properties instead.
• The Maven Plugin used to publish the highly outdated Maven 2 metadata format. This has
been changed and it will now publish Maven 3 metadata, just like the Maven Publish Plugin.
With the removal of Maven 2 support, the methods that configure unique snapshot behavior
have also been removed. Maven 3 only supports unique snapshots, so we decided to remove
them.
Tasks & properties
• The following legacy classes and methods related to lazy properties have been removed
— use ObjectFactory.property() to create Property instances:
◦ PropertyState
◦ DirectoryVar
◦ RegularFileVar
◦ ProjectLayout.newDirectoryVar()
◦ ProjectLayout.newFileVar()
◦ Project.property(Class)
◦ Script.property(Class)
◦ ProviderFactory.property(Class)
• Tasks configured and registered with the task configuration avoidance APIs have more
restrictions on the other methods that can be called from a configuration action.
• The Task.dependsOnTaskDidWork() method has been removed — use declared inputs and
outputs instead.
• The following properties and methods of TaskInternal have been removed — use task
dependencies, task rules, reusable utility methods, or the Worker API in place of executing a
task directly.
◦ execute()
◦ executer
◦ getValidators()
◦ addValidator()
• The TaskInputs.file(Object) method can no longer be called with an argument that resolves to
anything other than a single regular file.
• The TaskInputs.dir(Object) method can no longer be called with an argument that resolves to
anything other than a single directory.
• You can no longer register invalid inputs and outputs via TaskInputs and TaskOutputs.
• Replacing built-in tasks no longer works. Attempting to replace a built-in task will produce an error similar to the following:

> Cannot add task 'wrapper' as a task with that name already exists.
• The ScalaDocOptions.styleSheet property has been removed — the Scaladoc Ant task in Scala
2.11.8 and later no longer supports this property.
Kotlin DSL
• Artifact configuration accessors now have the type
NamedDomainObjectProvider<Configuration> instead of Configuration
Both changes could cause script compilation errors. See the Gradle Kotlin DSL release notes for
more information and how to fix builds broken by the changes described above.
Miscellaneous
• The ConfigurableReport.setDestination(Object) method has been removed — use
ConfigurableReport.setDestination(File) instead.
• The Signature.setFile(File) method has been removed — Gradle does not support changing
the output file for the generated signature.
• The read-only Signature.toSignArtifact property has been removed — it should never have
been part of the public API.
• IdeaPlugin.performPostEvaluationActions() and
EclipsePlugin.performPostEvaluationActions() have been removed.
Ideally you shouldn’t use classes from this package, but, as a quick fix, you can add explicit
imports to your build scripts for those classes.
• The gradlePluginPortal() repository no longer looks for JARs without a POM by default.
• The Tooling API can no longer connect to builds using a Gradle version below Gradle 2.6. The
same applies to builds run through TestKit.
• Gradle 5.0 requires a minimum Tooling API client version of 3.0. Older client libraries can no
longer run builds with Gradle 5.0.
• The IdeaModule Tooling API model element contains methods to retrieve resources and test
resources so those elements were removed from the result of IdeaModule.getSourceDirs()
and IdeaModule.getTestSourceDirs().
• In previous Gradle versions, the source field in SourceTask was accessible from subclasses.
This is not the case anymore as the source field is now declared as private.
• In the Worker API, the working directory of a worker can no longer be set.
• A change in behavior related to dependency and version constraints may impact a small
number of users.
• There have been several changes to property factory methods on DefaultTask that may
impact the creation of custom tasks.
If you are not already on version 4.9, skip down to the section that applies to your current Gradle
version and work your way up until you reach here. Then, apply these changes when upgrading to
Gradle 4.10.
Follow the API links to learn how to deal with these deprecations (if no extra information is
provided here):
• There have been several potentially breaking changes in Kotlin DSL — see the Breaking changes
section of that project’s release notes.
Use the Property.set() method to modify their values rather than using standard property
assignment syntax, unless you are doing so in a Groovy build script. Standard property
assignment still works in that one case.
• Consider trying the lazy API for task creation and configuration
Use Groovy’s spread operator instead. For example, you would replace
tasks.withType(JavaCompile).name with tasks.withType(JavaCompile)*.name.
Upgrading from 4.7 and earlier
• Configure existing wrapper and init tasks rather than defining your own
• Consider migrating to the built-in dependency locking mechanism if you are currently using a
plugin or custom solution for this
• TaskContainer.remove() now actually removes the given task — some plugins may have
accidentally relied on the old behavior.
This will lead to some types annotated according to JSR-305 being treated as nullable where
they were treated as non-nullable before. This may lead to compilation errors in the build
script. See the relevant Kotlin DSL release notes for details.
• Error messages will be directed to standard error rather than standard output now, unless a
console is attached to both standard output and standard error. This may affect tools that scrape
a build’s plain console output. Ignore this change if you’re upgrading from an earlier version of
Gradle.
Deprecations
Prior to this release, builds were allowed to replace built-in tasks. This feature has been deprecated.
The full list of built-in tasks that should not be replaced is: wrapper, init, help, tasks, projects,
buildEnvironment, components, dependencies, dependencyInsight, dependentComponents, model,
properties.
• Gradle will now, by convention, look for Checkstyle configuration files in the root project’s
config/checkstyle directory.
Checkstyle configuration files in subprojects — the old by-convention location — will be ignored
unless you explicitly configure their path via checkstyle.configDir or checkstyle.config.
• The structure of Gradle’s plain console output has changed, which may break tools that scrape
that output.
• The APIs of many native tasks related to compilation, linking and installation have changed in
breaking ways.
• [Kotlin DSL] Delegated properties used to access Gradle’s build properties — defined in
gradle.properties for example — must now be explicitly typed.
• [Kotlin DSL] Declaring a plugins {} block inside a nested scope now throws an exception.
Deprecations
• You should not put annotation processors on the compile classpath or declare them with the
-processorpath compiler argument.
They should be added to the annotationProcessor configuration instead, as shown in the sketch after this list. If you don't want any processing, but your compile classpath contains a processor unintentionally (e.g. as part of a library you depend on), use the -proc:none compiler argument to ignore it.
• The Java plugins now add a sourceSetAnnotationProcessor configuration for each source set,
which might break if any of them match existing configurations you have. We recommend you
remove your conflicting configuration declarations.
• The Visual Studio integration now only configures a single solution for all components in a
build.
• Gradle now bundles the kotlin-stdlib-jdk8 artifact instead of kotlin-stdlib-jre8. This may
affect your build. Please see the Kotlin documentation for more details.
• Make sure you have a settings.gradle file: it avoids a performance penalty and allows you to set
the root project’s name.
• Gradle now ignores the build cache configuration of included builds (composite builds) and
instead uses the root build’s configuration for all the builds.
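As referenced in the first deprecation above, a minimal sketch of moving a processor to the annotationProcessor configuration (the coordinates are placeholders):
build.gradle
dependencies {
    // The processor runs at compile time but stays off the compile classpath.
    annotationProcessor 'com.example:my-processor:1.0'
    // The annotations it reads are still needed to compile the sources.
    compileOnly 'com.example:my-processor-annotations:1.0'
}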
Potential breaking changes
• The Maven Publish Plugin now produces more complete maven-metadata.xml files, including
maintaining a list of <snapshotVersion> elements. Some older versions of Maven may not be able
to consume this metadata.
• Project.file(Object) no longer normalizes case for file paths on case-insensitive file systems. It
now ignores case in such circumstances and does not touch the file system.
• AbstractTestTask is now extended by non-JVM test tasks as well as Test. Plugins should beware
configuring all tasks of type AbstractTestTask because of this.
• Gradle will no longer prefer a version of Visual Studio found on the path over other locations. It
is now a last resort.
You can bypass the toolchain discovery by specifying the installation directory of the version of
Visual Studio you want via VisualCpp.setInstallDir(Object).
• 5xx HTTP errors during dependency resolution will now trigger exceptions in the build.
• The embedded Apache Ant has been upgraded from 1.9.6 to 1.9.9.
• Several third-party libraries used by Gradle have been upgraded to fix security issues.
• The plugins {} block can now be used in subprojects and for plugins in the buildSrc directory.
Other deprecations
• You should no longer run Gradle versions older than 2.6 via the Tooling API.
• You should no longer run any version of Gradle via an older version of the Tooling API than 3.0.
• Overlapping version ranges for a dependency now result in Gradle picking a version that
satisfies all declared ranges.
For example, if a dependency on some-module is found with a version range of [3,6] and also
transitively with a range of [4,8], Gradle now selects version 6 instead of 8. The prior behavior
was to select 8.
• Gradle will no longer ignore dependency resolution errors from a repository when there is
another repository it can check. Dependency resolution will fail instead. This results in more
deterministic behavior with respect to resolution results.
• The FindBugs Plugin no longer renders progress information from its analysis. If you rely on
that output in any way, you can enable it with FindBugs.showProgress.
Upgrading from 4.0
• Consider using the new Worker API to enable units of work within your build to run in parallel.
Follow the API links to learn how to deal with these deprecations (if no extra information is
provided here):
• Nullable
• Non-Java projects that have a project dependency on a Java project now consume the
runtimeElements configuration by default instead of the default configuration.
To override this behavior, you can explicitly declare the configuration to use in the project
dependency. For example: project(path: ':myJavaProject', configuration: 'default').
Changes in detail
The command line client now starts with 64MB of heap instead of 1GB. This may affect builds
running directly inside the client VM using --no-daemon mode. We discourage the use of --no-daemon,
but if you must use it, you can increase the available memory using the GRADLE_OPTS environment
variable.
The Gradle daemon now starts with 512MB of heap instead of 1GB. Large projects may have to
increase this setting using the org.gradle.jvmargs property.
All workers, including compilers and test executors, now start with 512MB of heap. The previous
default was 1/4th of physical memory. Large projects may have to increase this setting on the
relevant tasks, e.g. JavaCompile or Test.
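If the reduced defaults are too small for your project, the previous sizes can be restored explicitly; the values below are illustrative:
gradle.properties
org.gradle.jvmargs=-Xmx1g
For --no-daemon runs, set the GRADLE_OPTS environment variable instead, and for workers configure the relevant task, e.g. Test.maxHeapSize.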
The default tool versions of several code quality plugins have been updated. In addition, the default PMD ruleset was changed from the now deprecated java-basic to category/java/errorprone.xml.
• The AWS SDK used to access S3-backed Maven/Ivy repositories has been upgraded from 1.11.267
to 1.11.407.
• The BND library used by the OSGi Plugin has been upgraded from 3.4.0 to 4.0.0.
• The Google Cloud Storage JSON API Client Library used to access Google Cloud Storage backed
Maven/Ivy repositories has been upgraded from v1-rev116-1.23.0 to v1-rev136-1.25.0.
• The JUnit Platform libraries used by the Test task have been upgraded from 1.0.3 to 1.3.1.
• The Maven Wagon libraries used to access Maven repositories have been upgraded from 2.4 to
3.0.0.
Through the Gradle 4.x release stream, new @Incubating features were added to the dependency
resolution engine. These include sophisticated version constraints (prefer, strictly, reject),
dependency constraints, and platform dependencies.
If you have been using the IMPROVED_POM_SUPPORT feature preview, playing with constraints or prefer,
reject and other specific version indications, then make sure to take a good look at your
dependency resolution results.
Gradle now provides support for importing bill of materials (BOM) files, which are effectively POM
files that use <dependencyManagement> sections to control the versions of direct and transitive
dependencies. All you need to do is declare the POM as a platform dependency.
The following example picks the versions of the gson and dom4j dependencies from the declared
Spring Boot BOM:
dependencies {
    // import a BOM
    implementation platform('org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE')

    // the versions of gson and dom4j come from the imported BOM
    implementation 'com.google.code.gson:gson'
    implementation 'org.dom4j:dom4j'
}
Since Gradle 1.0, runtime-scoped dependencies have been included in the Java compilation
classpath, which has some drawbacks:
• The compilation classpath is much larger than it needs to be, slowing down compilation.
• The compilation classpath includes runtime-scoped files that do not impact compilation,
resulting in unnecessary re-compilation when those files change.
With this new behavior, the Java and Java Library plugins both honor the separation of compile
and runtime scopes. This means that the compilation classpath only includes compile-scoped
dependencies, while the runtime classpath adds the runtime-scoped dependencies as well. This is
particularly useful if you develop and publish Java libraries with Gradle where the separation
between api and implementation dependencies is reflected in the published scopes.
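A sketch of what that separation looks like with the Java Library Plugin (the coordinates are placeholders):
build.gradle
plugins {
    id 'java-library'
}

dependencies {
    // Part of the public API: visible on consumers' compile classpath.
    api 'com.example:exposed-types:1.0'
    // Internal detail: consumers only see it on their runtime classpath.
    implementation 'com.example:internal-helper:1.0'
}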
The property factory methods such as newInputFile() are intended to be called from the constructor
of a type that extends DefaultTask. These methods are now final to avoid subclasses overriding
these methods and using state that is not initialized.
The Property instances that are returned by these methods are no longer automatically registered
as inputs or outputs of the task. The Property instances need to be declared as inputs or outputs in
the usual ways, such as attaching annotations such as @OutputFile or using the runtime API to
register the property.
For example, you could previously use the following syntax and have both outputFile instances registered as declared outputs:

build.gradle
task myOtherTask {
    def outputFile = newOutputFile()
    doLast { ... }
}

build.gradle.kts
task("myOtherTask") {
    val outputFile = newOutputFile()
    doLast { ... }
}

Now the outputs have to be registered explicitly, for example using the runtime API:

build.gradle
task myOtherTask {
    def outputFile = project.objects.fileProperty()
    outputs.file(outputFile) // or to be registered using the runtime API
    doLast { ... }
}

build.gradle.kts
task("myOtherTask") {
    val outputFile = project.objects.fileProperty()
    outputs.file(outputFile) // or to be registered using the runtime API
    doLast { ... }
}
In order to use S3 backed artifact repositories, you previously had to add --add-modules
java.xml.bind to org.gradle.jvmargs when running on Java 9 and above.
Since Java 11 no longer contains the java.xml.bind module, Gradle now bundles JAXB 2.3.1
(com.sun.xml.bind:jaxb-impl) and uses it on Java 9 and above.
[5.0] The gradlePluginPortal() repository no longer looks for JARs without a POM by default
With this new behavior, if a plugin or a transitive dependency of a plugin found in the
gradlePluginPortal() repository has no Maven POM it will fail to resolve.
Artifacts published to a Maven repository without a POM should be fixed. If you encounter such
artifacts, please ask the plugin or library author to publish a new version with proper metadata.
If you are stuck with a bad plugin, you can work around by re-enabling JARs as metadata source for
the gradlePluginPortal() repository:
settings.gradle
pluginManagement {
    repositories {
        gradlePluginPortal().tap {
            metadataSources {
                mavenPom()
                artifact()
            }
        }
    }
}

settings.gradle.kts
pluginManagement {
    repositories {
        gradlePluginPortal().apply {
            (this as MavenArtifactRepository).metadataSources {
                mavenPom()
                artifact()
            }
        }
    }
}
The Java Library Distribution Plugin is now based on the Java Library Plugin instead of the Java
Plugin.
Additionally, the default distribution created by the plugin will contain all artifacts of the
runtimeClasspath configuration instead of the deprecated runtime configuration.
The configuration avoidance API introduced in Gradle 4.9 allows you to avoid creating and
configuring tasks that are never used.
With the existing API, this example adds two tasks (foo and bar):
build.gradle
tasks.create("foo") {
    tasks.create("bar")
}

build.gradle.kts
tasks.create("foo") {
    tasks.create("bar")
}
When converting this to use the new API, something surprising happens: bar doesn’t exist. The new
API only executes configuration actions when necessary, so the register() for task bar only
executes when foo is configured.
build.gradle
tasks.register("foo") {
    tasks.register("bar") // WRONG
}

build.gradle.kts
tasks.register("foo") {
    tasks.register("bar") // WRONG
}
To avoid this, Gradle now detects this and prevents modification to the underlying container
(through create() or register()) when using the new API.
Since JDK 11 no longer supports changing the working directory of a running process, setting the
working directory of a worker via its fork options is now prohibited.
All workers now use the same working directory to enable reuse.
The S3 repository transport protocol allows Gradle to publish artifacts to AWS S3 buckets. Starting
with this release, every artifact uploaded to an S3 bucket will be equipped with the bucket-owner-
full-control canned ACL. Make sure that the AWS account used to publish artifacts has the
s3:PutObjectAcl and s3:PutObjectVersionAcl permissions, otherwise the upload will fail.
{
    "Version": "2012-10-17",
    "Statement": [
        // ...
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",           // necessary for uploading objects
                "s3:PutObjectAcl",        // required starting with this release
                "s3:PutObjectVersionAcl"  // if S3 bucket versioning is enabled
            ],
            "Resource": "arn:aws:s3:::myCompanyBucket/*"
        }
    ]
}
[4.9] Consider trying the lazy API for task creation and configuration
Gradle 4.9 introduced a new way to create and configure tasks that works lazily. When you use this
approach for tasks that are expensive to configure, or when you have many, many tasks, your build
configuration time can drop significantly when those tasks don’t run.
You can learn more about lazily creating tasks in the Task Configuration Avoidance chapter. You
can also read about the background to this new feature in this blog post.
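As a small illustration (the task name and paths are placeholders):
build.gradle
// register() defers the configuration action until the task is actually required.
def generateDocs = tasks.register('generateDocs', Copy) {
    from 'src/docs'
    into "$buildDir/docs"
}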
Now that the publishing plugins are stable, we recommend that you migrate from the legacy
publishing mechanism for standard Java projects, i.e. those based on the Java Plugin. That includes
projects that use any one of: Java Library Plugin, Application Plugin or War Plugin.
To use the new approach, simply replace any upload<Conf> configuration with a publishing {} block.
See the publishing overview chapter for more information.
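A minimal sketch of that replacement for a Java project (the publication name and repository URL are placeholders):
build.gradle
plugins {
    id 'java'
    id 'maven-publish'
}

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
    repositories {
        maven {
            url = "$buildDir/repo" // point this at your real repository
        }
    }
}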
Prior to Gradle 4.8, the publishing {} block was implicitly treated as if all the logic inside it was
executed after the project was evaluated. This was confusing, because it was the only block that
behaved that way. As part of the stabilization effort in Gradle 4.8, we are deprecating this behavior
and asking all users to migrate their build.
The new, stable behavior can be switched on by adding the following to your settings file:
settings.gradle
enableFeaturePreview('STABLE_PUBLISHING')
settings.gradle.kts
enableFeaturePreview("STABLE_PUBLISHING")
We recommend doing a test run with a local repository to see whether all artifacts still have the
expected coordinates. In most cases everything should work as before and you are done. However,
your publishing block may rely on the implicit deferred configuration, particularly if it relies on
values that may change during the configuration phase of the build.
For example, under the new behavior, the following logic assumes that jar.archiveBaseName doesn’t
change after artifactId is set:
build.gradle
subprojects {
    publishing {
        publications {
            mavenJava {
                from components.java
                artifactId = jar.archiveBaseName
            }
        }
    }
}

build.gradle.kts
subprojects {
    publishing {
        publications {
            named<MavenPublication>("mavenJava") {
                from(components["java"])
                artifactId = tasks.jar.get().archiveBaseName.get()
            }
        }
    }
}
If that assumption is incorrect or might possibly be incorrect in the future, the artifactId must be
set within an afterEvaluate {} block, like so:
build.gradle
subprojects {
    publishing {
        publications {
            mavenJava {
                from components.java
                afterEvaluate {
                    artifactId = jar.archiveBaseName
                }
            }
        }
    }
}

build.gradle.kts
subprojects {
    publishing {
        publications {
            named<MavenPublication>("mavenJava") {
                from(components["java"])
                afterEvaluate {
                    artifactId = tasks.jar.get().archiveBaseName.get()
                }
            }
        }
    }
}
You should no longer define your own wrapper and init tasks. Configure the existing tasks instead,
for example by converting this:
build.gradle
task wrapper(type: Wrapper) {
    ...
}
build.gradle.kts
task<Wrapper>("wrapper") {
    ...
}
to this:
build.gradle
wrapper {
    ...
}
build.gradle.kts
tasks.wrapper {
    ...
}
If an exclusion in a Maven POM was missing either a groupId or artifactId, Gradle used to ignore
the exclusion. Now the missing elements are treated as implicit wildcards — e.g.
<groupId>*</groupId> — which means that some of your dependencies may now be excluded where
they weren’t before.
You will need to explicitly declare any missing dependencies that you need.
The plain console mode now formats output consistently with the rich console, which means that
the output format has changed. For example:
• The output produced by a given task is now grouped together, even when other tasks execute in
parallel with it.
• All output produced during build execution is written to the standard output file handle. This
includes messages written to System.err unless you are redirecting standard error to a file or
any other non-console destination.
This may break tools that scrape details from the plain console output.
[4.6] Changes to the APIs of native tasks related to compilation, linking and installation
Many tasks related to compiling, linking and installing native libraries and applications have been
converted to the Provider API so that they support lazy configuration. This conversion has
introduced some breaking changes to the APIs of the tasks so that they match the conventions of
the Provider API.
CreateStaticLibrary
• getOutputFile() was changed to return a Property.
InstallExecutable
• getSourceFile() was replaced by getExecutableFile().
Several other task types were also affected by the conversion to the Provider API:
• Assemble
• WindowsResourceCompile
• StripSymbols
• ExtractSymbols
• SwiftCompile
• LinkMachOBundle
[4.6] Visual Studio integration only supports a single solution file for all components of a
build
VisualStudioExtension no longer has a solutions property. Instead, you configure a single solution
via VisualStudioRootExtension in the root project, like so:
build.gradle
model {
    visualStudio {
        solution {
            solutionFile.location = "vs/${name}.sln"
        }
    }
}
In addition, there are no longer individual tasks to generate the solution files for each component,
but rather a single visualStudio task that generates a solution file that encompasses all components
in the build.
When connecting to an HTTP build cache backend via HttpBuildCache, Gradle does not follow
redirects any more, treating them as errors instead. Getting a redirect from the build cache
backend is mostly a configuration error — using an "http" URL instead of "https" for example — and
has negative effects on performance.
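In practice this means the configured URL should point directly at the final destination. A
minimal sketch, with a placeholder host:
settings.gradle
buildCache {
    remote(HttpBuildCache) {
        url = 'https://cache.example.com/cache/' // use the final https URL; redirects now fail
    }
}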
Several third-party libraries bundled with Gradle were upgraded to versions that address the
following security vulnerabilities:
• CVE-2017-7525 (critical)
• SONATYPE-2017-0359 (critical)
• SONATYPE-2017-0355 (critical)
• SONATYPE-2017-0398 (critical)
• CVE-2013-4002 (critical)
• CVE-2016-2510 (severe)
• SONATYPE-2016-0397 (severe)
• CVE-2009-2625 (severe)
• SONATYPE-2017-0348 (severe)
Gradle does not expose public APIs for these 3rd-party dependencies, but those who customize
Gradle will want to be aware.
Apache Maven is a build tool for Java and other JVM-based projects that’s in widespread use, and so
people who want to use Gradle often have to migrate an existing Maven build. This guide will help
with such a migration by explaining the differences and similarities between the two tools' models
and providing steps that you can follow to ease the process.
Converting a build can be scary, but you don’t have to do it alone. You can search docs, forums, and
StackOverflow from help.gradle.org or reach out to the Gradle community on the forums if you get
stuck.
The primary differences between Gradle and Maven are flexibility, performance, user experience,
and dependency management. A visual overview of these aspects is available in the Maven vs
Gradle feature comparison.
Since Gradle 3.0, Gradle has invested heavily in making Gradle builds much faster, with features
such as build caching, compile avoidance, and an improved incremental Java compiler. Gradle is
now 2-10x faster than Maven for the vast majority of projects, even without using a build cache. In-
depth performance comparison and business cases for switching from Maven to Gradle can be
found here.
General guidelines
Gradle and Maven have fundamentally different views on how to build a project. Gradle provides a
flexible and extensible build model that delegates the actual work to a graph of task dependencies.
Maven uses a model of fixed, linear phases to which you can attach goals (the things that do the
work). This may make migrating between the two seem intimidating, but migrations can be
surprisingly easy because Gradle follows many of the same conventions as Maven — such as the
standard project structure — and its dependency management works in a similar way.
Here we lay out a series of steps for you to follow that will help facilitate the migration of any
Maven build to Gradle:
TIP
Keep the old Maven build and new Gradle build side by side. You know the Maven
build works, so you should keep it until you are confident that the Gradle build
produces all the same artifacts and otherwise does what you need. This also means
that users can try the Gradle build without getting a new copy of the source tree.
1. Create a build scan for the Maven build
A build scan will make it easier to visualize what’s happening in your existing Maven build. For
Maven builds, you’ll be able to see the project structure, what plugins are being used, a timeline
of the build steps, and more. Keep this handy so you can compare it to the Gradle build scans
you get while converting the project.
2. Develop a mechanism to verify that the two builds produce the same artifacts
This is a vitally important step to ensure that your deployments and tests don’t break. Even
small changes, such as the contents of a manifest file in a JAR, can cause problems. If your
Gradle build produces the same output as the Maven build, this will give you and others
confidence in switching over and make it easier to implement the big changes that will provide
the greatest benefits.
This doesn’t mean that you need to verify every artifact at every stage, although doing so can
help you quickly identify the source of a problem. You can just focus on the critical output such
as final reports and the artifacts that are published or deployed.
You will need to factor in some inherent differences in the build output that Gradle produces
compared to Maven. Generated POMs will contain only the information needed for
consumption and they will use <compile> and <runtime> scopes correctly for that scenario. You
might also see differences in the order of files in archives and of files on classpaths. Most
differences will be benign, but it’s worth identifying them and verifying that they are OK.
3. Run an automatic conversion
This will create all the Gradle build files you need, even for multi-module builds. For simpler
Maven projects, the Gradle build will be ready to run!
4. Create a build scan for the Gradle build.
A build scan will make it easier to visualize what’s happening in the build. For Gradle builds,
you’ll be able to see the project structure, the dependencies (regular and inter-project ones),
what plugins are being used and the console output of the build.
Your build may fail at this point, but that’s ok; the scan will still run. Compare the build scan for
the Gradle build to the one for the Maven build and continue down this list to troubleshoot the
failures.
We recommend that you regularly generate build scans during the migration to help you
identify and troubleshoot problems. If you want, you can also use a Gradle build scan to identify
opportunities to improve the performance of the build; after all, performance is a big reason for
switching to Gradle in the first place.
Many tests can simply be migrated by configuring an extra source set. If you are using a third-
party library, such as FitNesse, look to see whether there is a suitable community plugin
available on the Gradle Plugin Portal.
In the case of popular plugins, Gradle often has an equivalent plugin that you can use. You
might also find that you can replace a plugin with built-in Gradle functionality. As a last resort,
you may need to reimplement a Maven plugin via your own custom plugins and task types.
The rest of this chapter looks in more detail at specific aspects of migrating a build from Maven to
Gradle.
Maven builds are based around the concept of build lifecycles that consist of a set of fixed phases.
This can prove an impediment for users migrating to Gradle because its build lifecycle is quite
different, so it’s important to understand how Gradle builds fit into the structure of
initialization, configuration, and execution phases. Fortunately, Gradle has a feature that can mimic
Maven’s phases: lifecycle tasks.
These allow you to define your own "lifecycles" by creating no-action tasks that simply depend on
the tasks you’re interested in. And to make the transition to Gradle easier for Maven users, the Base
Plugin — applied by all the JVM language plugins like the Java Library Plugin — provides a set of
lifecycle tasks that correspond to the main Maven phases.
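As a sketch of the idea — the task and dependency names here are invented — a lifecycle task is
just a registered task with no actions of its own:
build.gradle
tasks.register('qualityGate') {
    // No actions: this task exists purely to aggregate the tasks it depends on
    dependsOn 'test', 'checkstyleMain' // illustrative dependencies
}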
Here is a list of some of the main Maven phases and the Gradle tasks that they map to:
clean
Use the clean task provided by the Base Plugin.
compile
Use the classes task provided by the Java Plugin and other JVM language plugins. This compiles
all classes for all source files of all languages and also performs resource filtering via the
processResources task.
test
Use the test task provided by the Java Plugin. It runs just the unit tests, or more specifically, the
tests that make up the test source set.
package
Use the assemble task provided by the Base Plugin. This builds whatever is the appropriate
package for the project, for example a JAR for Java libraries or a WAR for traditional Java
webapps.
verify
Use the check task provided by the Base Plugin. This runs all verification tasks that are attached
to it, which typically includes the unit tests, any static analysis tasks — such as Checkstyle — and
others. If you want to include integration tests, you will have to configure these manually, which
is a simple process.
install
Use the publishToMavenLocal task provided by the Maven Publish Plugin.
Note that Gradle builds don’t require you to "install" artifacts as you have access to more
appropriate features like inter-project dependencies and composite builds. You should only use
publishToMavenLocal for interoperating with Maven builds.
Gradle also allows you to resolve dependencies against the local Maven cache, as described in
the Declaring repositories section.
deploy
Use the publish task provided by the Maven Publish Plugin — making sure you switch from the
older Maven Plugin (ID: maven) if your build is using that one. This will publish your package to
all configured publication repositories. There are also other tasks that allow you to publish to a
single repository even when multiple ones are defined.
Note that the Maven Publish Plugin does not publish source and Javadoc JARs by default, but
this can easily be activated as explained in the guide for building Java projects.
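For instance, attaching a hand-rolled integration test task to check — as mentioned under verify
above — might look like the following sketch. The integrationTest task is an assumption, not
something the Java Plugin provides:
build.gradle
tasks.register('integrationTest', Test) {
    testClassesDirs = sourceSets.test.output.classesDirs // reuses the test source set for brevity
    classpath = sourceSets.test.runtimeClasspath
    mustRunAfter test
}
check.dependsOn 'integrationTest'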
Gradle’s init task is typically used to create a new skeleton project, but you can also use it to
convert an existing Maven build to Gradle automatically. Once Gradle is installed on your system,
all you have to do is run the command
> gradle init
from the root project directory and let Gradle do its thing. That basically consists of parsing the
existing POMs and generating the corresponding Gradle build scripts. Gradle will also create a
settings script if you’re migrating a multi-project build.
You’ll find that the new Gradle build includes the following:
• The appropriate plugins to build the project (limited to one or more of the Maven Publish, Java
and War Plugins)
See the Build Init Plugin chapter for a complete list of the automatic conversion features.
One thing to bear in mind is that assemblies are not automatically converted. They aren’t
necessarily problematic to convert, but you will need to do some manual work, for example by
creating custom archive tasks or using a suitable community plugin.
If your Maven build does not have many plugins or much in the way of customisation, you can
simply run
> gradle build
once the migration has completed. This will run the tests and produce the required artifacts
without any extra intervention on your part.
Migrating dependencies
Gradle’s dependency management system is more flexible than Maven’s, but it still supports the
same concepts of repositories, declared dependencies, scopes (dependency configurations in
Gradle), and transitive dependencies. In fact, Gradle works perfectly with Maven-compatible
repositories, which makes it easy to migrate your dependencies.
NOTE
One notable difference between the two tools is in how they manage version
conflicts. Maven uses a "closest" match algorithm, whereas Gradle picks the newest.
Don’t worry though, you have a lot of control over which versions are selected, as
documented in Managing Transitive Dependencies.
Over the following sections, we will show you how to migrate the most common elements of a
Maven build’s dependency management information.
Declaring dependencies
Gradle uses the same dependency identifier components as Maven: group ID, artifact ID and
version. It also supports classifiers. So all you need to do is substitute the identifier information for
a dependency into Gradle’s syntax, which is described in the Declaring Dependencies chapter.
Consider the following Maven dependency declaration:
<dependencies>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.12</version>
    </dependency>
</dependencies>
This dependency would look like the following in a Gradle build script:
build.gradle
dependencies {
    implementation 'log4j:log4j:1.2.12' ①
}
build.gradle.kts
dependencies {
    implementation("log4j:log4j:1.2.12") ①
}
① The string identifier takes the Maven values of groupId, artifactId and version, although Gradle
refers to them as group, module and version.
The above example raises an obvious question: what is that implementation configuration? It’s one
of the standard dependency configurations provided by the Java Plugin and is often used as a
substitute for Maven’s default compile scope.
Several of the differences between Maven’s scopes and Gradle’s standard configurations come
down to Gradle distinguishing between the dependencies required to build a module and the
dependencies required to build a module that depends on it. Maven makes no such distinction, so
published POMs typically include dependencies that consumers of a library don’t actually need.
Here are the main Maven dependency scopes and how you should deal with their migration:
compile
Gradle has two configurations that can be used in place of the compile scope: implementation and
api. The former is available to any project that applies the Java Plugin, while api is only available
to projects that specifically apply the Java Library Plugin.
In most cases you should simply use the implementation configuration, particularly if you’re
building an application or webapp. But if you’re building a library, you can learn about which
dependencies should be declared using api in the section on Building Java libraries. Even more
information on the differences between api and implementation is provided in the Java Library
Plugin chapter linked above.
runtime
Use the runtimeOnly configuration.
test
Gradle distinguishes between those dependencies that are required to compile a project’s tests
and those that are only needed to run them.
Dependencies required for test compilation should be declared against the testImplementation
configuration. Those that are only required for running the tests should use testRuntimeOnly.
provided
Use the compileOnly configuration.
Note that the War Plugin adds providedCompile and providedRuntime dependency configurations.
These behave slightly differently from compileOnly and simply ensure that those dependencies
aren’t packaged in the WAR file. However, the dependencies are included on the runtime and test
runtime classpaths, so use these configurations if that’s the behavior you need.
import
The import scope is mostly used within <dependencyManagement> blocks and applies solely to POM-
only publications. Read the section on Using bills of materials to learn more about how to
replicate this behavior.
You can also specify a regular dependency on a POM-only publication. In this case, the
dependencies declared in that POM are treated as normal transitive dependencies of the build.
For example, imagine you want to use the groovy-all POM for your tests. It’s a POM-only
publication that has its own dependencies listed inside a <dependencies> block. The appropriate
configuration in the Gradle build looks like this:
Example 2. Consuming a POM-only dependency
build.gradle
dependencies {
    testImplementation 'org.codehaus.groovy:groovy-all:2.5.4'
}
build.gradle.kts
dependencies {
    testImplementation("org.codehaus.groovy:groovy-all:2.5.4")
}
The result of this will be that all compile and runtime scope dependencies in the groovy-all POM
get added to the test runtime classpath, while only the compile scope dependencies get added to
the test compilation classpath. Dependencies with other scopes will be ignored.
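Pulling these mappings together, here is a sketch with coordinates and versions chosen purely for
illustration:
build.gradle
dependencies {
    implementation 'com.google.guava:guava:31.1-jre'      // Maven compile scope
    runtimeOnly 'org.postgresql:postgresql:42.3.3'        // Maven runtime scope
    compileOnly 'javax.servlet:javax.servlet-api:4.0.1'   // Maven provided scope
    testImplementation 'junit:junit:4.13.2'               // Maven test scope (compiling tests)
    testRuntimeOnly 'org.slf4j:slf4j-simple:1.7.36'       // Maven test scope (running tests only)
}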
Declaring repositories
Gradle allows you to retrieve declared dependencies from any Maven-compatible or Ivy-compatible
repository. Unlike Maven, it has no default repository and so you have to declare at least one. In
order to have the same behavior as your Maven build, just configure Maven Central in your Gradle
build, like this:
build.gradle
repositories {
    mavenCentral()
}
build.gradle.kts
repositories {
    mavenCentral()
}
You can also use the repositories {} block to configure custom repositories, as described in the
Repository Types chapter.
Lastly, Gradle allows you to resolve dependencies against the local Maven cache/repository. This
helps Gradle builds interoperate with Maven builds, but it shouldn’t be a technique that you use if
you don’t need that interoperability. If you want to share published artifacts via the filesystem,
consider configuring a custom Maven repository with a file:// URL.
You might also be interested in learning about Gradle’s own dependency cache, which behaves
more reliably than Maven’s and can be used safely by multiple concurrent Gradle processes.
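A sketch of both options side by side — the file:// URL is hypothetical:
build.gradle
repositories {
    mavenLocal() // only for interoperating with Maven builds
    maven {
        url = 'file:///shared/team-repo' // hypothetical shared filesystem repository
    }
}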
The existence of transitive dependencies means that you can very easily end up with multiple
versions of the same dependency in your dependency graph. By default, Gradle will pick the newest
version of a dependency in the graph, but that’s not always the right solution. That’s why it
provides several mechanisms for controlling which version of a given dependency is resolved.
• Dependency constraints
There are even more specialized options listed in the controlling transitive dependencies chapter.
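For example, a dependency constraint influences the selected version without adding a direct
dependency. The coordinates and rationale below are invented for this sketch:
build.gradle
dependencies {
    constraints {
        implementation('org.apache.httpcomponents:httpclient:4.5.13') {
            because 'versions below 4.5.13 have known issues' // illustrative rationale
        }
    }
}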
If you want to ensure consistency of versions across all projects in a multi-project build, similar to
how the <dependencyManagement> block in Maven works, you can use the Java Platform Plugin. This
allows you to declare a set of dependency constraints that can be applied to multiple projects. You can
even publish the platform as a Maven BOM or using Gradle’s metadata format. See the plugin page
for more information on how to do that, and in particular the section on Consuming platforms to
see how you can apply a platform to other projects in the same build.
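A minimal sketch of the pattern, with invented project and coordinate names — one project defines
the platform, the others consume it:
platform/build.gradle
plugins {
    id 'java-platform'
}

dependencies {
    constraints {
        api 'org.apache.commons:commons-lang3:3.12.0'
        api 'com.google.guava:guava:31.1-jre'
    }
}
build.gradle
dependencies {
    implementation platform(project(':platform')) // applies the constraints defined above
    implementation 'com.google.guava:guava'       // version supplied by the platform
}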
If you want to exclude a dependency for reasons unrelated to versions, then check out the section
on Excluding transitive dependencies. It shows you how to attach an exclusion either to an entire
configuration (often the most appropriate solution) or to a dependency. You can even easily apply
an exclusion to all configurations.
If you’re more interested in controlling which version of a dependency is actually resolved, see the
previous section.
You may need to deal with optional dependencies in two scenarios:
• Some of your transitive dependencies are declared as optional
• You want to declare some of your direct dependencies as optional in your project’s published
POM
For the first scenario, Gradle behaves the same way as Maven and simply ignores any transitive
dependencies that are declared as optional. They are not resolved and have no impact on the
versions selected if the same dependencies appear elsewhere in the dependency graph as non-
optional.
As for publishing dependencies as optional, Gradle provides a richer model called feature variants,
which will let you declare the "optional features" your library provides.
Maven builds often share dependency versions through a bill of materials (BOM) — a POM with
<packaging>pom</packaging> that lists dependency versions in a <dependencyManagement> block.
Gradle can use such BOMs for the same purpose, using a special dependency syntax based on
platform() and enforcedPlatform() methods. You simply declare the dependency in the normal way,
but wrap the dependency identifier in the appropriate method, as shown in this example that
"imports" the Spring Boot Dependencies BOM:
Example 4. Importing a BOM in a Gradle build
build.gradle
dependencies {
    implementation platform('org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE') ①
    implementation 'com.google.code.gson:gson' ②
    implementation 'dom4j:dom4j'
}
build.gradle.kts
dependencies {
    implementation(platform("org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE")) ①
    implementation("com.google.code.gson:gson") ②
    implementation("dom4j:dom4j")
}
You can learn more about this feature and the difference between platform() and
enforcedPlatform() in the section on importing version recommendations from a Maven BOM.
NOTE
You can use this feature to apply the <dependencyManagement> information from any
dependency’s POM to the Gradle build, even those that don’t have a packaging type
of pom. Both platform() and enforcedPlatform() will ignore any dependencies
declared in the <dependencies> block.
Maven’s multi-module builds map nicely to Gradle’s multi-project builds. Try the corresponding
sample to see how a basic multi-project Gradle build is set up.
1. Create a settings script that matches the <modules> block of the root POM.
settings.gradle
rootProject.name = 'simple-multi-module' ①
include 'simple-weather', 'simple-webapp' ②
settings.gradle.kts
rootProject.name = "simple-multi-module" ①
include("simple-weather", "simple-webapp") ②
You can verify the setup by running gradle projects, whose output begins with the root project:
> gradle projects
------------------------------------------------------------
Root project 'simple-multi-module'
------------------------------------------------------------
If your root POM defines shared configuration for its modules, you will want to replicate that next.
This basically involves creating a root project build script that injects shared configuration into
the appropriate subprojects.
If you want to replicate the Maven pattern of having dependency versions declared in the
dependencyManagement section of the root POM file, the best approach is to leverage the java-platform
plugin. You will need to add a dedicated project for this and consume it in the regular projects of
your build. See the documentation for more details on this pattern.
Maven allows you to parameterize builds using properties of various sorts. Some are read-only
properties of the project model, others are user-defined in the POM. It even allows you to treat
system properties as project properties.
Gradle has a similar system of project properties, although it differentiates between those and
system properties. You can, for example, define properties in:
• the build script itself
• a gradle.properties file in the root project directory
• a gradle.properties file in the Gradle user home directory
Those aren’t the only options, so if you are interested in finding out more about how and where you
can define properties, check out the Build Environment chapter.
One important piece of behavior you need to be aware of is what happens when the same property
is defined in both the build script and one of the external properties files: the build script value
takes precedence. Always. Fortunately, you can mimic the concept of profiles to provide overridable
default values.
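One common pattern — shown here with an invented property name — is to read a project property
with a fallback default in the build script:
build.gradle
def deployEnv = findProperty('deployEnv') ?: 'local' // gradle -PdeployEnv=prod overrides the default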
Which brings us on to Maven profiles. These are a way to enable and disable different
configurations based on environment, target platform, or any other similar factor. Logically, they
are nothing more than limited 'if' statements. And since Gradle has much more powerful ways to
declare conditions, it does not need to have formal support for profiles (except in the POMs of
dependencies). You can easily get the same behavior by combining conditions with secondary build
scripts, as you’ll see.
Let’s say you have different deployment settings depending on the environment: local development
(the default), a test environment, and production. To add profile-like behavior, you first create build
scripts for each environment in the project root: profile-default.gradle, profile-test.gradle, and
profile-prod.gradle. You can then conditionally apply one of those profile scripts based on a project
property of your own choice.
The following example demonstrates the basic technique using a project property called
buildProfile and profile scripts that simply initialize an extra project property called message:
Example 6. Mimicking the behavior of Maven profiles in Gradle
build.gradle
if (!hasProperty('buildProfile')) ext.buildProfile = 'default' ①
apply from: "profile-${buildProfile}.gradle" ②
tasks.register('greeting') {
    doLast {
        println message ③
    }
}
profile-default.gradle
ext.message = 'foobar' ④
profile-test.gradle and profile-prod.gradle define message in the same way, with values
appropriate to those environments.
build.gradle.kts
val buildProfile: String? by project ①
apply(from = "profile-${buildProfile ?: "default"}.gradle.kts") ②
tasks.register("greeting") {
    val message: String by project.extra
    doLast {
        println(message) ③
    }
}
profile-default.gradle.kts
val message by extra("foobar") ④
profile-test.gradle.kts and profile-prod.gradle.kts define message in the same way, with values
appropriate to those environments.
① Checks for the existence of (Groovy) or binds (Kotlin) the buildProfile project property
② Applies the appropriate profile script, using the value of buildProfile in the script filename
③ Prints the value of the message property
④ Initializes the message extra project property, whose value can then be used in the main build
script
With this setup in place, you can activate one of the profiles by passing a value for the project
property you’re using — buildProfile in this case:
> gradle greeting -PbuildProfile=test
One thing to bear in mind is that high-level conditional statements make builds harder to understand
and maintain, similar to the way they complicate object-oriented code. The same applies to profiles.
Gradle offers you many better ways to avoid the extensive use of profiles that Maven often
requires, for example by configuring multiple tasks that are variants of one another. See the
publishPubNamePublicationToRepoNameRepository tasks created by the Maven Publish Plugin.
For a lengthier discussion on working with Maven profiles in Gradle, look no further than this blog
post.
Filtering resources
Maven has a phase called process-resources that has the goal resources:resources bound to it by
default. This gives the build author an opportunity to perform variable substitution on various files,
such as web resources, packaged properties files, etc.
The Java plugin for Gradle provides a processResources task to do the same thing. This is a Copy task
that copies files from the configured resources directory — src/main/resources by default — to an
output directory. And as with any Copy task, you can configure it to perform file filtering, renaming,
and content filtering.
As an example, here’s a configuration that treats the source files as Groovy SimpleTemplateEngine
templates, providing version and buildNumber properties to those templates:
build.gradle
processResources {
    expand(version: version, buildNumber: currentBuildNumber)
}
build.gradle.kts
tasks {
    processResources {
        expand("version" to version, "buildNumber" to currentBuildNumber)
    }
}
See the API docs for CopySpec to see all the options available to you.
Configuring integration tests
Many Maven builds incorporate integration tests of some sort, which Maven supports through an
extra set of phases: pre-integration-test, integration-test, post-integration-test, and verify. It
also uses the Failsafe plugin in place of Surefire so that failed integration tests don’t automatically
fail the build (because you may need to clean up resources, such as a running application server).
This behavior is easy to replicate in Gradle with source sets, as explained in our chapter on Testing
in Java & JVM projects. You can then configure a clean-up task, such as one that shuts down a test
server for example, to always run after the integration tests regardless of whether they succeed or
fail using Task.finalizedBy().
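A sketch of that wiring, assuming a hypothetical integrationTest task exists:
build.gradle
def stopTestServer = tasks.register('stopTestServer') {
    doLast {
        println 'shutting down test server' // placeholder for real shutdown logic
    }
}

tasks.named('integrationTest') { // assumes such a task has been defined
    finalizedBy stopTestServer
}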
If you really don’t want your integration tests to fail the build, then you can use the
Test.ignoreFailures setting described in the Test execution section of the Java testing chapter.
Source sets also give you a lot of flexibility on where you place the source files for your integration
tests. You can easily keep them in the same directory as the unit tests or, preferably, in a
separate source directory like src/integTest/java. To support other types of tests, you just add more
source sets and Test tasks!
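A minimal sketch of such a setup — the integTest names are conventional choices, not requirements:
build.gradle
sourceSets {
    integTest {
        java.srcDir 'src/integTest/java'
        compileClasspath += sourceSets.main.output
        runtimeClasspath += sourceSets.main.output
    }
}

configurations {
    integTestImplementation.extendsFrom implementation
    integTestRuntimeOnly.extendsFrom runtimeOnly
}

tasks.register('integTest', Test) {
    testClassesDirs = sourceSets.integTest.output.classesDirs
    classpath = sourceSets.integTest.runtimeClasspath
    shouldRunAfter test
}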
Maven and Gradle share a common approach of extending the build through plugins. Although the
plugin systems are very different beneath the surface, they share many feature-based plugins, such
as:
• Shade/Shadow
• Jetty
• Checkstyle
• JaCoCo
Why does this matter? Because many plugins rely on standard Java conventions, so migration is
just a matter of replicating the configuration of the Maven plugin in Gradle. As an example, here’s a
simple Maven Checkstyle plugin configuration:
...
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <version>2.17</version>
    <executions>
        <execution>
            <id>validate</id>
            <phase>validate</phase>
            <configuration>
                <configLocation>checkstyle.xml</configLocation>
                <encoding>UTF-8</encoding>
                <consoleOutput>true</consoleOutput>
                <failsOnError>true</failsOnError>
                <linkXRef>false</linkXRef>
            </configuration>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
...
Everything outside of the configuration block can safely be ignored when migrating to Gradle. In
this case, the corresponding Gradle configuration looks like the following:
build.gradle
checkstyle {
    config = resources.text.fromFile('checkstyle.xml', 'UTF-8')
    showViolations = true
    ignoreFailures = false
}
build.gradle.kts
checkstyle {
    config = resources.text.fromFile("checkstyle.xml", "UTF-8")
    isShowViolations = true
    isIgnoreFailures = false
}
The Checkstyle tasks are automatically added as dependencies of the check task, which also includes
test. If you want to ensure that Checkstyle runs before the tests, then just specify an ordering with
the mustRunAfter() method:
build.gradle
test.mustRunAfter checkstyleMain, checkstyleTest
build.gradle.kts
tasks {
    test {
        mustRunAfter(checkstyleMain, checkstyleTest)
    }
}
As you can see, the Gradle configuration is often much shorter than the Maven equivalent. You also
have a much more flexible execution model since you are no longer constrained by Maven’s fixed
phases.
While migrating a project from Maven, don’t forget about source sets. These often provide a more
elegant solution for handling integration tests or generated sources than Maven can provide, so you
should factor them into your migration plans.
Ant goals
Many Maven builds rely on the AntRun plugin to customize the build without the overhead of
implementing a custom Maven plugin. Gradle has no equivalent plugin because Ant is a first-class
citizen in Gradle builds, via the ant object. For example, you can use Ant’s Echo task like this:
Example 10. Invoking Ant tasks
build.gradle
tasks.register('sayHello') {
    doLast {
        ant.echo message: 'Hello!'
    }
}
build.gradle.kts
tasks.register("sayHello") {
    doLast {
        ant.withGroovyBuilder {
            "echo"("message" to "Hello!")
        }
    }
}
Even Ant properties and filesets are supported natively. To learn more, see Using Ant from Gradle.
TIP
It may be simpler and cleaner to just create custom task types to replace the work that
Ant is doing for you. You can then more readily benefit from incremental build and
other useful Gradle features.
It’s worth remembering that Gradle builds are typically easier to extend and customize than Maven
ones. In this context, that means you may not need a Gradle plugin to replace a Maven one. For
example, the Maven Enforcer plugin allows you to control dependency versions and environmental
factors, but these things can easily be configured in a normal Gradle build script.
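For instance, an enforcer-style strictness rule can be approximated in plain build logic. This is a
sketch of one such rule, not a drop-in replacement for the plugin:
build.gradle
configurations.all {
    resolutionStrategy {
        failOnVersionConflict() // fail the build rather than silently picking a version
    }
}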
You may come across Maven plugins that have no counterpart in Gradle, particularly if you or
someone in your organisation has written a custom plugin. Such cases rely on you understanding
how Gradle (and potentially Maven) works, because you will usually have to write your own
plugin.
For the purposes of migration, there are two key types of Maven plugins:
• Those that use the Maven project object
• Those that don’t
If a plugin depends on the Maven project, then you will have to rewrite it. Don’t start by
considering how the Maven plugin works, but look at what problem it is trying to solve. Then try to
work out how to solve that problem in Gradle. You’ll probably find that the two build models are
different enough that "transcribing" Maven plugin code into a Gradle plugin just won’t be effective.
On the plus side, the plugin is likely to be much easier to write than the original Maven one because
Gradle has a much richer build model and API.
If you do need to implement custom logic, either via build scripts or plugins, check out the Guides
related to plugin development. Also be sure to familiarize yourself with Gradle’s Groovy DSL
Reference, which provides comprehensive documentation on the API that you’ll be working with. It
details the standard configuration blocks (and the objects that back them), the core types in the
system (Project, Task, etc.), and the standard set of task types. The main entry point is the Project
interface as that’s the top-level object that backs the build scripts.
Further reading
This chapter has covered the major topics that are specific to migrating Maven builds to Gradle. All
that remain are a few other areas that may be useful during or after a migration:
• Learn how to configure Gradle’s build environment, including the JVM settings used to run it
As a final note, this guide has only touched on a few of Gradle’s features and we encourage you to
learn about the rest from the other chapters of the user manual and from our step-by-step samples.
The biggest challenge in migrating from Ant to Gradle is that there is no such thing as a standard
Ant build. That makes it difficult to provide specific instructions. Fortunately, Gradle has some great
integration features with Ant that can make the process relatively smooth. And even migrating
from Ivy-based dependency management isn’t particularly hard because Gradle has a similar
model based on dependency configurations that works with Ivy-compatible repositories.
We will start by outlining the things you should consider at the outset of migrating a build from Ant
to Gradle and offer some general guidelines on how to proceed.
General guidelines
When you undertake to migrate a build from Ant to Gradle, you should keep in mind the nature of
both what you already have and where you would like to end up. Do you want a Gradle build that
mirrors the structure of the existing Ant build? Or do you want to move to something that is more
idiomatic to Gradle? What are the main benefits you are looking for?
To understand the implications, consider the two extreme endpoints that you could aim for:
• An imported Ant build with minimal changes
This approach is quick, simple and works for many Ant-based builds. You end up with a build
that’s effectively identical to the original Ant build, except your Ant targets become Gradle
tasks. Even the dependencies between targets are retained.
The downside is that you’re still using the Ant build, which you must continue to maintain. You
also lose the advantages of Gradle’s conventions, many of its plugins, its dependency
management, and so on. You can still enhance the build with incremental build information,
but it’s more effort than would be the case for a normal Gradle build.
• An idiomatic Gradle build
If you want to future-proof your build, this is where you want to end up. Making use of Gradle’s
conventions and plugins will result in a smaller, easier-to-maintain build, with a structure that
is familiar to many Java developers. You will also find it easier to take advantage of Gradle’s
power features to improve build performance.
The main downside is the extra work required to perform the migration, particularly if the
existing build is complex and has many inter-project dependencies. But such builds often
benefit the most from a switch to idiomatic Gradle. In addition, Gradle provides many features
that can ease the migration, such as the ability to use core and custom Ant tasks directly from a
Gradle build.
You ideally want to end up somewhere close to the second option in the long term, but you don’t
have to get there in one fell swoop.
What follows is a series of steps to help you decide the approach you want to take and how to go
about it:
1. Keep the old Ant build and new Gradle build side by side
You know the Ant build works, so you should keep it until you are confident that the Gradle
build produces all the same artifacts and otherwise does what you need. This also means that
users can try the Gradle build without getting a new copy of the source tree.
Don’t try to change the directory and file structure of the build until after you’re ready to make
the switch.
2. Develop a mechanism to verify that the two builds produce the same artifacts
This is a vitally important step to ensure that your deployments and tests don’t break. Even
small changes, such as the contents of a manifest file in a JAR, can cause problems. If your
Gradle build produces the same output as the Ant build, this will give you and others confidence
in switching over and make it easier to implement the big changes that will provide the greatest
benefits.
Multi-project builds are generally harder to migrate and require more work than single-project
ones. We have provided some dedicated advice to help with the process in the Migrating multi-
project builds section.
We expect that the vast majority of Ant builds are for JVM-based projects, for which there are a
wealth of plugins that provide a lot of the functionality you need. Not only are there the core
plugins that come packaged with Gradle, but you can also find many useful plugins on the
Plugin Portal.
Even if the Java Plugin or one of its derivatives (such as the Java Library Plugin) aren’t a good
match for your build, you should at least consider the Base Plugin for its lifecycle tasks.
This step very much depends on the requirements of your build. If a selection of Gradle plugins
can do the vast majority of the work your Ant build does, then it probably makes sense to create
a fresh Gradle build script that doesn’t depend on the Ant build and either implements the
missing pieces itself or utilizes existing Ant tasks.
The alternative approach is to import the Ant build into the Gradle build script and gradually
replace the Ant build functionality. This allows you to have a working Gradle build at each
stage, but it requires a bit of work to get the Gradle tasks working properly with the Ant ones.
You can learn more about this approach in Working with an imported build.
6. Configure your build for the existing directory and file structure
Gradle makes use of conventions to eliminate much of the boilerplate associated with older
builds and to make it easier for users to work with new builds once they are familiar with those
conventions. But that doesn’t mean you have to follow them.
Gradle provides many configuration options that allow for a good degree of customization.
Those options are typically made available through the plugins that provide the conventions.
For example, the standard source directory structure for production Java code — src/main/java
— is provided by the Java Plugin, which allows you to configure a different source path. Many
paths can be modified via properties on the Project object.
7. Migrate to the standard Gradle conventions if you wish
Once you’re confident that the Gradle build is producing the same artifacts and other resources
as the Ant build, you can consider migrating to the standard conventions, such as for source
directory paths. Doing so will allow you to remove the extra configuration that was required to
override those conventions. New team members will also find it easier to work with the build
after the change.
It’s up to you to decide whether this step is worth the time, energy and potential disruption that
it might incur, which in turn depends on your specific build and team.
The rest of the chapter covers some common scenarios you will likely deal with during the
migration, such as dependency management and working with Ant tasks.
The first step of many migrations will involve importing an Ant build using ant.importBuild(). If
you do that, how do you then move towards a standard Gradle build without replacing everything
at once?
The important thing to remember is that the Ant targets become real Gradle tasks, meaning you can
do things like modify their task dependencies, attach extra task actions, and so on. This allows you
to substitute native Gradle tasks for the equivalent Ant ones, maintaining any links to other existing
tasks.
As an example, imagine that you have a Java library project that you want to migrate from Ant to
Gradle. The Gradle build script already has the line that imports the Ant build, and you now want
to use the standard Gradle mechanism for compiling the Java source files. However, you want to keep using
the existing package task that creates the library’s JAR file.
In diagrammatic form, the scenario looks like the following, where each box represents a
target/task:
The idea is to substitute the standard Gradle compileJava task for the Ant build task. There are
several steps involved in this substitution:
1. Applying the Java Library Plugin, which provides the standard compileJava task.
2. Renaming the imported build target, because the name build conflicts with the standard build
task provided by the Base Plugin (via the Java Library Plugin).
3. Configuring the compilation, since there’s a good chance the Ant build does not conform to the
standard Gradle directory structure; you need to tell Gradle where to find the source files and
where to place the compiled classes so package can find them.
4. Updating the task dependencies: compileJava must depend on prepare, package must depend on
compileJava rather than ant_build, and assemble must depend on package rather than the
standard Gradle jar task.
Applying the plugin is as simple as inserting a plugins {} block at the beginning of the Gradle build
script, i.e. before ant.importBuild(). Here’s how to apply the Java Library Plugin:
Example 11. Applying the Java Library Plugin
build.gradle
plugins {
    id 'java-library'
}
build.gradle.kts
plugins {
    `java-library`
}
To rename the build task, use the variant of AntBuilder.importBuild() that accepts a transformer,
like this:
build.gradle
ant.importBuild('build.xml') { String oldTargetName ->
    return oldTargetName == 'build' ? 'ant_build' : oldTargetName ①
}
build.gradle.kts
ant.importBuild("build.xml") { oldTargetName ->
    if (oldTargetName == "build") "ant_build" else oldTargetName ①
}
① Renames the build target to ant_build and leaves all other targets unchanged
Configuring a different path for the sources is described in the Building Java & JVM projects
chapter, while you can change the output directory for the compiled classes in a similar way.
Let’s say the original Ant build stores these paths in Ant properties, src.dir for the Java source files
and classes.dir for the output. Here’s how you would configure Gradle to use those paths:
Example 13. Configuring the source sets
build.gradle
sourceSets {
    main {
        java {
            srcDirs = [ ant.properties['src.dir'] ]
            destinationDirectory.set(file(ant.properties['classes.dir']))
        }
    }
}
build.gradle.kts
sourceSets {
    main {
        java.setSrcDirs(listOf(ant.properties["src.dir"]))
        java.destinationDirectory.set(file(ant.properties["classes.dir"] ?: "$buildDir/classes"))
    }
}
You should eventually aim to switch to the standard directory structure for your type of project if
possible; then you’ll be able to remove this customization.
The last step is also straightforward and involves using the Task.dependsOn property and
Task.dependsOn() method to detach and link tasks. The property is appropriate for replacing
dependencies, while the method is the preferred way to add to the existing dependencies.
Here is the task dependency configuration required by the example scenario, which
should come after the Ant build import:
Example 14. Configuring the task dependencies
build.gradle
compileJava.dependsOn 'prepare' ①
tasks.named('package') { dependsOn = [ 'compileJava' ] } ②
assemble.dependsOn = [ 'package' ] ③
build.gradle.kts
tasks {
    compileJava {
        dependsOn("prepare") ①
    }
    named("package") {
        setDependsOn(listOf(compileJava)) ②
    }
    assemble {
        setDependsOn(listOf("package")) ③
    }
}
① Adds prepare to the existing dependencies of compileJava
② Detaches package from the ant_build task and makes it depend on compileJava
③ Detaches assemble from the standard Gradle jar task and makes it depend on package instead
That’s it! These four steps will successfully replace the old Ant compilation with the Gradle
implementation. Even this small migration will be a big help because you’ll be able to take
advantage of Gradle’s incremental Java compilation for faster builds.
One important question you will have to ask yourself is how many tasks to migrate in each stage.
The larger the chunks you can migrate in one go the better, but this must be offset against how
many custom steps within the Ant build will be affected by the changes.
For example, if the Ant build follows a fairly standard approach for compilation, static resources,
packaging and unit tests, then it is probably worth migrating all those together. But if the build
performs some extra processing on the compiled classes, or does something unique when
processing the static resources, it is probably worth splitting those tasks into separate stages.
Managing dependencies
Ant builds typically take one of two approaches to dealing with binary dependencies (such as
libraries):
• Storing them with the project, typically in a local directory such as libs
• Using Apache Ivy to manage them
They each require a different technique for the migration to Gradle, but you will find the process
straightforward in either case. We look at the details of each scenario in the following sections.
When you are attempting to migrate a build that stores its dependencies on the filesystem, either
locally or on the network, you should consider whether you want to eventually move to managed
dependencies using remote repositories. That’s because you can incorporate filesystem
dependencies into a Gradle build in one of two ways:
• Declare them as module dependencies resolved against a flat-directory repository
• Attach the files directly to the appropriate dependency configurations (file dependencies)
It’s easier to migrate to managed dependencies served from Maven- or Ivy-compatible repositories
if you take the first approach, but doing so requires all your files to conform to the naming
convention "<moduleName>-<version>.<extension>".
To demonstrate the two techniques, consider a project that has the following library JARs in its libs
directory:
libs
├── our-custom.jar
├── log4j-1.2.8.jar
└── commons-io-2.1.jar
The file our-custom.jar lacks a version number, so it has to be added as a file dependency. But the
other two JARs match the required naming convention and so can be declared as normal module
dependencies that are retrieved from a flat-directory repository.
The following sample build script demonstrates how you can incorporate all of these libraries into a
build:
Example 15. Declaring dependencies served from the filesystem
build.gradle
repositories {
    flatDir {
        name = 'libs dir'
        dir file('libs') ①
    }
}
dependencies {
    implementation files('libs/our-custom.jar') ②
    implementation ':log4j:1.2.8', ':commons-io:2.1' ③
}
build.gradle.kts
repositories {
    flatDir {
        name = "libs dir"
        dir(file("libs")) ①
    }
}
dependencies {
    implementation(files("libs/our-custom.jar")) ②
    implementation(":log4j:1.2.8") ③
    implementation(":commons-io:2.1") ③
}
The above sample will add our-custom.jar, log4j-1.2.8.jar and commons-io-2.1.jar to the
implementation configuration, which is used to compile the project’s code.
NOTE
You can also specify a group in these module dependencies, even though they don’t
actually have a group. That’s because the flat-directory repository simply ignores
the information.
If you then add a normal Maven- or Ivy-compatible repository at a later date, Gradle
will preferentially download the module dependencies that are declared with a
group from that repository rather than the flat-directory one.
Apache Ivy is a standalone dependency management tool that is widely used with Ant. It works in a
similar fashion to Gradle. In fact, they both allow you to
• Define your own configurations
• Extend configurations from one another
• Assign dependencies to configurations
• Resolve dependencies from Ivy-compatible repositories
• Publish artifacts to Ivy-compatible repositories
The most notable difference is that Gradle has standard configurations for specific types of projects.
For example, the Java Plugin defines configurations like implementation, testImplementation and
runtimeOnly. You can still define your own dependency configurations, though.
This similarity means that it’s usually quite straightforward to migrate from Ivy to Gradle:
• Transcribe the dependency declarations from your module descriptors into the dependencies {}
block of your Gradle build script, ideally using the standard configurations provided by any
plugins you apply.
• Transcribe any configuration declarations from your module descriptors into the configurations
{} block of the build script for any custom configurations that can’t be replaced by Gradle’s
standard ones.
• Transcribe the resolvers from your Ivy settings file into the repositories {} block of the build
script.
See the chapters on Managing Dependency Configurations, Declaring Dependencies and Declaring
Repositories for more information.
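As a sketch, a custom Ivy configuration might be transcribed like this — the configuration name
and dependency are invented:
build.gradle
configurations {
    smokeTest {
        extendsFrom testImplementation // mirrors a custom configuration from the Ivy module descriptor
    }
}

dependencies {
    smokeTest 'org.seleniumhq.selenium:selenium-java:3.141.59' // illustrative dependency
}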
Ivy provides several Ant tasks that handle Ivy’s process for fetching dependencies. The basic steps
of that process consist of:
1. Configure — applies the configuration defined in the Ivy files
2. Resolve — locates the declared dependencies and downloads them to the cache if necessary
3. Retrieve — copies the cached dependencies to another directory
Gradle’s process is similar, but you don’t have to explicitly invoke the first two steps as it performs
them automatically. The third step doesn’t happen at all — unless you create a task to do it —
because Gradle typically uses the files in the dependency cache directly in classpaths and as the
source for assembling application packages.
Configuration
Most of Gradle’s dependency-related configuration is baked into the build script, as you’ve seen
with elements like the dependencies {} block. Another particularly important configuration
element is resolutionStrategy, which can be accessed from dependency configurations. This
provides many of the features you might get from Ivy’s conflict managers and is a powerful way
to control transitive dependencies and caching.
Some Ivy configuration options have no equivalent in Gradle. For example, there are no lock
strategies because Gradle ensures that its dependency cache is concurrency safe, period. Nor are
there "latest strategies" because it’s simpler to have a reliable, single strategy for conflict
resolution. If the "wrong" version is picked, you can easily override it using forced versions or
other resolution strategy options.
See the chapter on controlling transitive dependencies for more information on this aspect of
Gradle.
Resolution
At the beginning of the build, Gradle will automatically resolve any dependencies that you have
declared and download them to its cache. It searches the repositories for those dependencies,
with the search order defined by the order in which the repositories are declared.
It’s worth noting that Gradle supports the same dynamic version syntax as Ivy, so you can still
use versions like 1.0.+. You can also use the special latest.integration and latest.release labels
if you wish. If you decide to use such dynamic and changing dependencies, you can configure
the caching behavior for them via resolutionStrategy.
You might also want to consider dependency locking if you’re using dynamic and/or changing
dependencies. It’s a way to make the build more reliable and allows for reproducible builds.
Retrieval
As mentioned, Gradle does not automatically copy files from the dependency cache. Its standard
tasks typically use the files directly. If you want to copy the dependencies to a local directory, you
can use a Copy task like this in your build script:
Example 16. Copying dependencies to a local directory
build.gradle
tasks.register('retrieveRuntimeDependencies', Copy) {
    into layout.buildDirectory.dir('libs')
    from configurations.runtimeClasspath
}
build.gradle.kts
tasks.register<Copy>("retrieveRuntimeDependencies") {
    into(layout.buildDirectory.dir("libs"))
    from(configurations.runtimeClasspath)
}
A configuration is also a file collection, which is why it can be used in the from() configuration. You
can use a similar technique to attach a configuration to a compilation task or one that produces
documentation. See the chapter on Working with Files for more examples and information on
Gradle’s file API.
Publishing artifacts
Projects that use Ivy to manage dependencies often also use it for publishing JARs and other
artifacts to repositories. If you’re migrating such a build, then you’ll be glad to know that Gradle has
built-in support for publishing artifacts to Ivy-compatible repositories.
Before you attempt to migrate this particular aspect of your build, read the Publishing chapter to
learn about Gradle’s publishing model. That chapter’s examples are based on Maven repositories,
but the same model is used for Ivy repositories as well.
The basic steps for publishing to an Ivy-compatible repository are:
• Apply the Ivy Publish Plugin
• Configure at least one publication, representing what will be published (including additional
artifacts if desired)
• Configure one or more repositories to publish to
Once that’s all done, you’ll be able to generate an Ivy module descriptor for each publication and
publish them to one or more repositories.
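A minimal sketch of such a setup — the plugin IDs are real, the repository URL is a placeholder:
build.gradle
plugins {
    id 'java-library'
    id 'ivy-publish'
}

publishing {
    publications {
        myLibrary(IvyPublication) {
            from components.java
        }
    }
    repositories {
        ivy {
            name = 'myRepo'
            url = 'https://repo.example.com/ivy' // placeholder URL
        }
    }
}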
Let’s say you have defined a publication named "myLibrary" and a repository named "myRepo".
Ivy’s Ant tasks would then map to the Gradle tasks like this:
• <deliver> → generateDescriptorFileForMyLibraryPublication
• <publish> → publishMyLibraryPublicationToMyRepoRepository
There is also a convenient publish task that publishes all publications to all repositories. If you’d
prefer to limit which publications go to which repositories, check out the relevant section of the
Publishing chapter.
NOTE
On dependency versions
Ivy will, by default, automatically replace dynamic versions of dependencies with
the resolved "static" versions when it generates the module descriptor. Gradle does
not mimic this behavior: declared dependency versions are left unchanged.
You can replicate the default Ivy behavior by using the Nebula Ivy Resolved Plugin.
Alternatively, you can customize the descriptor file so that it contains the versions
you want.
One of the advantages of Ant is that it’s fairly easy to create a custom task and incorporate it into a
build. If you have such tasks, then there are two main options for migrating them to a Gradle build:
• Using the tasks directly from the Gradle build via its Ant integration
• Rewriting the tasks as custom Gradle task types
The first option is usually quick and easy, but not always. And if you want to integrate the task into
incremental build, you must use the incremental build runtime API. You also often have to work
with Ant paths and filesets, which are clunky.
The second option is preferable in the long term, if you have the time. Gradle task types tend to be
simpler than Ant tasks because they don’t have to work with an XML-based interface. You also gain
access to Gradle’s rich APIs. Lastly, this approach can make use of the type-safe incremental build
API based on typed properties.
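For instance, a hand-rolled replacement for Ant’s <checksum> task might look like the following
sketch. It uses typed input and output properties so the task participates in up-to-date checking;
the class and task names are invented:
build.gradle
abstract class ChecksumTask extends DefaultTask {
    @InputFile
    abstract RegularFileProperty getSource()

    @OutputFile
    abstract RegularFileProperty getDigest()

    @TaskAction
    void generate() {
        def bytes = source.get().asFile.bytes
        def md5 = java.security.MessageDigest.getInstance('MD5').digest(bytes)
        digest.get().asFile.text = md5.encodeHex().toString()
    }
}

tasks.register('checksum', ChecksumTask) {
    source = file('build.gradle')
    digest = layout.buildDirectory.file('checksums/build.gradle.md5')
}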
Ant has many tasks for working with files, most of which have Gradle equivalents. As with other
areas of Ant to Gradle migration, you can use those Ant tasks from within your Gradle build.
However, we strongly recommend migrating to native Gradle constructs where possible so that the
build benefits from:
• Incremental build
• Easier integration with other parts of the build, such as dependency configurations
That said, it can be convenient to use those Ant tasks that have no direct equivalents, such as
<checksum> and <chown>. Even then, in the long run it may be better to convert these to native Gradle
task types that make use of standard Java APIs or third-party libraries to achieve the same thing.
Here are the most common file-related elements used by Ant builds, along with the Gradle
equivalents:
• <zip> (plus Java variants) — prefer the Zip task type (plus Jar, War, and Ear)
You can see several examples of Gradle’s file API and learn more about it in the Working with Files
chapter.
You can still construct Ant paths and filesets from within your build via the ant
object if you need to interact with an Ant task that requires them. The chapter on
Ant integration has examples that use both <path> and <fileset>. There is even a
method on FileCollection that will convert a file collection to a fileset or similar Ant
type.
Ant makes use of a properties map to store values that can be reused throughout the build. The big
downsides to this approach are that property values are all strings and the properties themselves
behave like global variables.
Gradle does use something similar in the form of project properties, which are a reasonable way to
parameterize a build. These can be set from the command line, in a gradle.properties file, or even
via specially named system properties and environment variables.
If you have existing Ant properties files, you can copy their contents into the project’s
gradle.properties file. Just be aware of two important points:
• Properties set in gradle.properties do not override extra project properties defined in the build
script with the same name
• Imported Ant tasks will not automatically "see" the Gradle project properties — you must copy
them into the Ant properties map for that to happen
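On that second point, a one-line sketch that pushes a Gradle value into the Ant property map (the
property name is invented):
build.gradle
ant.properties['app.version'] = version // makes the Gradle version visible to imported Ant targets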
Another important factor to understand is that a Gradle build script works with an object-oriented
API and it’s often best to use the properties of tasks, source sets and other objects where possible.
For example, this build script fragment creates tasks for packaging Javadoc documentation as a JAR
and unpacking it, linking tasks via their properties:
build.gradle
tasks.register('javadocJarArchive', Jar) {
    from javadoc ①
    archiveClassifier = 'javadoc'
}

tasks.register('unpackJavadocs', Copy) {
    from zipTree(javadocJarArchive.archiveFile) ②
    into tmpDistDir ③
}
build.gradle.kts
tasks.register<Jar>("javadocJarArchive") {
    from(tasks.javadoc) ①
    archiveClassifier.set("javadoc")
}

tasks.register<Copy>("unpackJavadocs") {
    from(zipTree(tasks.named<Jar>("javadocJarArchive").get().archiveFile)) ②
    into(tmpDistDir) ③
}
① Packages the output of the javadoc task into the JAR
② Uses the location of the Javadoc JAR held by the javadocJarArchive task
③ Uses a project property called tmpDistDir to define the location of the 'dist' directory
As you can see from the example with tmpDistDir, there is often still a need to define paths and the
like through properties, which is why Gradle also provides extra properties that can be attached to
the project, tasks and some other types of objects.
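For example, the tmpDistDir property used above could be defined as an extra property on the project; a minimal sketch with an illustrative directory name:
build.gradle.kts
// Define the 'tmpDistDir' extra property referenced by the unpackJavadocs task
val tmpDistDir by extra(layout.buildDirectory.dir("tmp-dist"))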
Migrating multi-project builds
Multi-project builds are a particular challenge to migrate because there is no standard approach in
Ant for either structuring them or handling inter-project dependencies. Most of them likely use the
<ant> task in some way, but that’s about all that one can say.
Fortunately, Gradle’s multi-project support can handle fairly diverse project structures and it
provides much more robust and helpful support than Ant for constructing and maintaining multi-
project builds. The ant.importBuild() method also handles <ant> and <antcall> tasks transparently,
which allows for a phased migration.
We will suggest one process for migration here and hope that it either works for your case or at
least gives you some ideas. It breaks down like this:
2. Create a Gradle build script in each project of the build, setting their contents to this line:
ant.importBuild 'build.xml'
ant.importBuild("build.xml")
Replace build.xml with the path to the actual Ant build file that corresponds to the project. If
there is no corresponding Ant build file, leave the Gradle build script empty. Your build may not
be suitable in that case for this migration approach, but continue with these steps to see
whether there is still a way to do a phased migration.
3. Create a settings file that includes all the projects that now have a Gradle build script.
4. Replace inter-project <ant> calls with Gradle task dependencies.
Some projects in your multi-project build will depend on artifacts produced by one or more
other projects in that build. Such projects need to ensure that those projects they depend on
have produced their artifacts and that they know the paths to those artifacts.
Ensuring the production of the required artifacts typically means calling into other projects'
builds via the <ant> task. This unfortunately bypasses the Gradle build, negating any changes
you make to the Gradle build scripts. You will need to replace targets that use <ant> tasks with
Gradle task dependencies.
For example, imagine you have a web project that depends on a "util" library that’s part of the
same build. The Ant build file for "web" might have a target like this:
web/build.xml
<target name="buildRequiredProjects">
    <ant dir="${root.dir}/util" target="build"/> ①
</target>
① Invokes the build target of the util project so that its artifacts are available
This can be replaced by an inter-project task dependency in the corresponding Gradle build
script, as demonstrated in the following example that assumes the "web" project’s "compile"
task is the thing that requires "util" to be built beforehand:
web/build.gradle
ant.importBuild 'build.xml'

compile.dependsOn = [ ':util:build' ]
web/build.gradle.kts
ant.importBuild("build.xml")

tasks {
    named<Task>("compile") {
        setDependsOn(listOf(":util:build"))
    }
}
This is not as robust or powerful as Gradle’s project dependencies, but it solves the immediate
problem without big changes to the build. Just be careful to remove or override any
dependencies on tasks that delegate to other subprojects, like the buildRequiredProjects task.
5. Identify the projects that have no dependencies on other projects and migrate them to idiomatic
Gradle build scripts.
Just follow the advice in the rest of this guide to migrate individual project builds. As mentioned
elsewhere, you should ideally use Gradle standard plugins where possible. This may mean that you need to add an extra copy task to each build that copies the generated artifacts to the location expected by the rest of the Ant builds (see the sketch after this list).
6. Migrate projects as and when they depend solely on projects with fully migrated Gradle builds.
At this point, you should be able to switch to using proper project dependencies attached to the
appropriate dependency configurations.
We mentioned in step 5 that you might need to add copy tasks to satisfy the requirements of
dependent Ant builds. Once those builds have been migrated, such build logic will no longer be
needed and should be removed.
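Returning to the bridging copy task mentioned in step 5, a minimal sketch might look like this, assuming the java plugin’s jar task and using an illustrative dist directory:
build.gradle.kts
// Copy the produced JAR to the location the unmigrated Ant builds expect
tasks.register<Copy>("copyForAnt") {
    from(tasks.named("jar"))
    into(rootProject.layout.projectDirectory.dir("dist"))
}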
At the end of the process you should have a Gradle build that you are confident works as it should,
with much less build logic than before.
Further reading
This chapter has covered the major topics that are specific to migrating Ant builds to Gradle. All
that remain are a few other areas that may be useful during or after a migration:
• Learn how to configure Gradle’s build environment, including the JVM settings used to run it
As a final note, this guide has only touched on a few of Gradle’s features and we encourage you to
learn about the rest from the other chapters of the user manual and from our step-by-step samples.
Running Gradle Builds
Build Environment
TIP: Interested in configuring your Build Cache to speed up builds? Register here for our Build Cache training session to learn some of the tips and tricks top engineering teams are using to increase build speed.
When configuring Gradle behavior you can use these methods, listed in order of highest to lowest
precedence (first one wins):
• Command-line flags such as --build-cache. These have precedence over properties and
environment variables.
• System properties such as systemProp.http.proxyHost=somehost.org stored in a gradle.properties file in a project root directory.
• Gradle properties such as org.gradle.caching=true that are typically stored in a gradle.properties file in a project directory or in the GRADLE_USER_HOME.
• Environment variables such as GRADLE_OPTS sourced by the environment that executes Gradle.
Aside from configuring the build environment, you can configure a given project build using
Project properties such as -PreleaseType=final.
Gradle properties
Gradle provides several options that make it easy to configure the Java process that will be used to
execute your build. While it’s possible to configure these in your local environment via GRADLE_OPTS
or JAVA_OPTS, it is useful to be able to store certain settings like JVM memory configuration and Java
home location in version control so that an entire team can work with a consistent environment. To
do so, place these settings into a gradle.properties file committed to your version control system.
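For example, a team might commit a gradle.properties file like the following; the values are illustrative, not recommendations:
gradle.properties
org.gradle.jvmargs=-Xmx2g -XX:MaxMetaspaceSize=512m
org.gradle.caching=true
org.gradle.parallel=true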
The final configuration taken into account by Gradle is a combination of all Gradle properties set on the command line and your gradle.properties files. If an option is configured in multiple locations, the first one found in any of these locations wins:
• system properties, e.g. when -Dgradle.user.home is set on the command line
• gradle.properties in the GRADLE_USER_HOME directory
• gradle.properties in the project root directory
• gradle.properties in the Gradle installation directory
Note that the location of the Gradle user home may have been changed beforehand via the
-Dgradle.user.home system property passed on the command line.
The following properties can be used to configure the Gradle build environment:
org.gradle.caching=(true,false)
When set to true, Gradle will reuse task outputs from any previous build, when possible,
resulting in much faster builds. Learn more about using the build cache. By default, the build
cache is not enabled.
org.gradle.caching.debug=(true,false)
When set to true, individual input property hashes and the build cache key for each task are
logged on the console. Learn more about task output caching. Default is false.
org.gradle.configureondemand=(true,false)
Enables incubating configuration on demand, where Gradle will attempt to configure only
necessary projects. Default is false.
org.gradle.console=(auto,plain,rich,verbose)
Customize console output coloring or verbosity. Default depends on how Gradle is invoked. See
command-line logging for additional details.
org.gradle.daemon=(true,false)
When set to true the Gradle Daemon is used to run the build. Default is true: builds are run
using the daemon.
org.gradle.debug=(true,false)
When set to true, Gradle will run the build with remote debugging enabled, listening on port
5005. Note that this is the equivalent of adding
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 to the JVM command line
and will suspend the virtual machine until a debugger is attached. Default is false.
org.gradle.debug.port=(port number)
Specifies the port number to listen on when debug is enabled. Default is 5005.
org.gradle.debug.server=(true,false)
If set to true and debugging is enabled, Gradle will run the build with the socket-attach mode of
the debugger. Otherwise, the socket-listen mode is used. Default is true.
org.gradle.debug.suspend=(true,false)
When set to true and debugging is enabled, the JVM running Gradle will suspend until a
debugger is attached. Default is true.
org.gradle.jvmargs=(JVM arguments)
Specifies the JVM arguments used for the Gradle Daemon. The setting is particularly useful for
configuring JVM memory settings for build performance. This does not affect the JVM settings
for the Gradle client VM. The default is -Xmx512m "-XX:MaxMetaspaceSize=256m".
org.gradle.logging.level=(quiet,warn,lifecycle,info,debug)
When set to quiet, warn, lifecycle, info, or debug, Gradle will use this log level. The values are
not case sensitive. See Choosing a log level. The lifecycle level is the default.
org.gradle.parallel=(true,false)
When configured, Gradle will fork up to org.gradle.workers.max JVMs to execute projects in
parallel. To learn more about parallel task execution, see the section on Gradle build
performance. Default is false.
org.gradle.priority=(low,normal)
Specifies the scheduling priority for the Gradle daemon and all processes launched by it. See also
performance command-line options. Default is normal.
org.gradle.vfs.verbose=(true,false)
Configures verbose logging when watching the file system. Default is false.
org.gradle.vfs.watch=(true,false)
Toggles watching the file system. When enabled Gradle re-uses information it collects about the
file system between builds. Enabled by default on operating systems where Gradle supports this
feature.
org.gradle.warning.mode=(all,fail,summary,none)
Controls how Gradle displays warnings: all, fail, summary, or none. See Command-line logging
options for details. Default is summary.
org.gradle.logging.stacktrace=(internal,all,full)
Specifies whether stacktraces should be displayed as part of the build result upon an exception.
See also the --stacktrace command-line option. When set to internal, a stacktrace is present in
the output only in case of internal exceptions. When set to all or full, a stacktrace is present in
the output for all exceptions and build failures. Using full doesn’t truncate the stacktrace, which
leads to a much more verbose output. Default is internal.
gradle.properties
gradlePropertiesProp=gradlePropertiesValue
sysProp=shouldBeOverWrittenBySysProp
systemProp.system=systemValue
build.gradle
tasks.register('printProps') {
    doLast {
        println commandLineProjectProp
        println gradlePropertiesProp
        println systemProjectProp
        println System.properties['system']
    }
}
build.gradle.kts
tasks.register("printProps") {
doLast {
println(commandLineProjectProp)
println(gradlePropertiesProp)
println(systemProjectProp)
println(System.getProperty("system"))
}
}
$ gradle -q -PcommandLineProjectProp=commandLineProjectPropValue -Dorg.gradle.project.systemProjectProp=systemPropertyValue printProps
commandLineProjectPropValue
gradlePropertiesValue
systemPropertyValue
systemValue
System properties
Using the -D command-line option, you can pass a system property to the JVM which runs Gradle.
The -D option of the gradle command has the same effect as the -D option of the java command.
You can also set system properties in gradle.properties files with the prefix systemProp.
systemProp.gradle.wrapperUser=myuser
systemProp.gradle.wrapperPassword=mypassword
The following system properties are available. Note that command-line options take precedence
over system properties.
gradle.wrapperUser=(myuser)
Specifies the user name to download Gradle distributions from servers using HTTP Basic
Authentication. Learn more in Authenticated wrapper downloads.
gradle.wrapperPassword=(mypassword)
Specifies the password for downloading a Gradle distribution using the Gradle wrapper.
gradle.user.home=(path to directory)
Specifies the Gradle user home directory.
https.protocols
Specifies the supported TLS versions in a comma-separated format. For example: TLSv1.2,TLSv1.3.
In a multi project build, “systemProp.” properties set in any project except the root will be ignored.
That is, only the root project’s gradle.properties file will be checked for properties that begin with
the “systemProp.” prefix.
Environment variables
The following environment variables are available for the gradle command. Note that command-
line options and system properties take precedence over environment variables.
GRADLE_OPTS
Specifies JVM arguments to use when starting the Gradle client VM. The client VM only handles
command line input/output, so it is rare that one would need to change its VM options. The
actual build is run by the Gradle daemon, which is not affected by this environment variable.
GRADLE_USER_HOME
Specifies the Gradle user home directory (which defaults to $USER_HOME/.gradle if not set).
JAVA_HOME
Specifies the JDK installation directory to use for the client VM. This VM is also used for the
daemon, unless a different one is specified in a Gradle properties file with org.gradle.java.home.
Project properties
You can add properties directly to your Project object via the -P command line option.
Gradle can also set project properties when it sees specially-named system properties or
environment variables. If the environment variable name looks like ORG_GRADLE_PROJECT_prop=somevalue, then Gradle will set a prop property on your project object, with the value of
somevalue. Gradle also supports this for system properties, but with a different naming pattern,
which looks like org.gradle.project.prop. Both of the following will set the foo property on your
Project object to "bar".
org.gradle.project.foo=bar
ORG_GRADLE_PROJECT_foo=bar
NOTE: The properties file in the user’s home directory has precedence over property files in the project directories.
This feature is very useful when you don’t have admin rights to a continuous integration server and
you need to set property values that should not be easily visible. Since you cannot use the -P option
in that scenario, nor change the system-level configuration files, the correct strategy is to change
the configuration of your continuous integration build job, adding an environment variable setting
that matches an expected pattern. This won’t be visible to normal users on the system.
You can access a project property in your build script simply by using its name as you would use a
variable.
NOTE: If a project property is referenced but does not exist, an exception will be thrown and the build will fail. You should check for the existence of optional project properties before you access them using the Project.hasProperty(java.lang.String) method.
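For example, a build script can read an optional project property defensively; in this sketch, buildLabel is a hypothetical property name:
build.gradle.kts
// Fall back to a default when the optional 'buildLabel' property is absent
val buildLabel = if (project.hasProperty("buildLabel")) project.property("buildLabel") else "local"
println("Build label: $buildLabel")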
You can adjust JVM options for Gradle in the following ways:
The org.gradle.jvmargs Gradle property controls the VM running the build. It defaults to -Xmx512m
"-XX:MaxMetaspaceSize=256m".
The JAVA_OPTS environment variable controls the command line client, which is only used to display
console output. It defaults to -Xmx64m.
NOTE: There is one case where the client VM can also serve as the build VM: If you deactivate the Gradle Daemon and the client VM has the same settings as required for the build VM, the client VM will run the build directly. Otherwise the client VM will fork a new VM to run the actual build in order to honor the different settings.
Certain tasks, like the test task, also fork additional JVM processes. You can configure these through
the tasks themselves. They all use -Xmx512m by default.
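For example, a build can raise the heap available to forked test JVMs; this is a minimal sketch, and the 1g value is illustrative:
build.gradle.kts
// Raise the maximum heap for every forked test JVM from the 512m default
tasks.withType<Test>().configureEach {
    maxHeapSize = "1g"
}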
Example 19. Set Java compile options for JavaCompile tasks
build.gradle
plugins {
    id 'java'
}

tasks.withType(JavaCompile) {
    options.compilerArgs += ['-Xdoclint:none', '-Xlint:none', '-nowarn']
}
build.gradle.kts
plugins {
    java
}

tasks.withType<JavaCompile>().configureEach {
    options.compilerArgs = listOf("-Xdoclint:none", "-Xlint:none", "-nowarn")
}
See other examples in the Test API documentation and test execution in the Java plugin reference.
Build scans will tell you information about the JVM that executed the build when you use the --scan
option.
Configuring a task using project properties
It’s possible to change the behavior of a task based on project properties specified at invocation
time.
Suppose you’d like to ensure release builds are only triggered by CI. A simple way to handle this is
through an isCI project property.
Example 20. Prevent releasing outside of CI
build.gradle
tasks.register('performRelease') {
    doLast {
        if (project.hasProperty("isCI")) {
            println("Performing release actions")
        } else {
            throw new InvalidUserDataException("Cannot perform release outside of CI")
        }
    }
}
build.gradle.kts
tasks.register("performRelease") {
doLast {
if (project.hasProperty("isCI")) {
println("Performing release actions")
} else {
throw InvalidUserDataException("Cannot perform release outside of
CI")
}
}
}
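Supplying the property on the command line then allows the release to run:
$ gradle performRelease -PisCI=true --quiet
Performing release actions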
Configuring an HTTP or HTTPS proxy (for downloading dependencies, for example) is done via
standard JVM system properties. These properties can be set directly in the build script; for
example, setting the HTTP proxy host would be done with System.setProperty('http.proxyHost',
'www.somehost.org'). Alternatively, the properties can be specified in gradle.properties.
Configuring an HTTP proxy using gradle.properties
systemProp.http.proxyHost=www.somehost.org
systemProp.http.proxyPort=8080
systemProp.http.proxyUser=userid
systemProp.http.proxyPassword=password
systemProp.http.nonProxyHosts=*.nonproxyrepos.com|localhost
systemProp.https.proxyHost=www.somehost.org
systemProp.https.proxyPort=8080
systemProp.https.proxyUser=userid
systemProp.https.proxyPassword=password
# NOTE: this is not a typo; the JVM also reads http.nonProxyHosts for HTTPS connections
systemProp.http.nonProxyHosts=*.nonproxyrepos.com|localhost
You may need to set other properties to access other networks. Here are 2 references that may be
helpful:
NTLM Authentication
If your proxy requires NTLM authentication, you may need to provide the authentication domain
as well as the username and password. There are 2 ways that you can provide the domain for
authenticating to a NTLM proxy:
• Set the http.proxyUser system property to a value like domain/username
• Provide the authentication domain via the http.auth.ntlm.domain system property
The Gradle Daemon
A daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user.
— Wikipedia
Gradle runs on the Java Virtual Machine (JVM) and uses several supporting libraries that require a
non-trivial initialization time. As a result, it can sometimes seem a little slow to start. The solution
to this problem is the Gradle Daemon: a long-lived background process that executes your builds
much more quickly than would otherwise be the case. We accomplish this by avoiding the
expensive bootstrapping process as well as leveraging caching, by keeping data about your project
in memory. Running Gradle builds with the Daemon is no different than without. Simply configure
whether you want to use it or not — everything else is handled transparently by Gradle.
Why the Gradle Daemon is important for performance
The Daemon is a long-lived process, so not only are we able to avoid the cost of JVM startup for
every build, but we are able to cache information about project structure, files, tasks, and more in
memory.
The reasoning is simple: improve build speed by reusing computations from previous builds.
However, the benefits are dramatic: we typically measure build times reduced by 15-75% on
subsequent builds. We recommend profiling your build by using --profile to get a sense of how
much impact the Gradle Daemon can have for you.
The Gradle Daemon is enabled by default starting with Gradle 3.0, so you don’t have to do anything
to benefit from it.
To get a list of running Gradle Daemons and their statuses use the --status command.
Sample output:
Currently, a given Gradle version can only connect to daemons of the same version. This means the
status output will only show Daemons for the version of Gradle being invoked and not for any other
versions. Future versions of Gradle will lift this constraint and will show the running Daemons for
all versions of Gradle.
The Gradle Daemon is enabled by default, and we recommend always enabling it. You can disable
the long-lived Gradle daemon via the --no-daemon command-line option, or by adding
org.gradle.daemon=false to your gradle.properties file. You can find details of other ways to disable
(and enable) the Daemon in Daemon FAQ further down.
NOTE: In order to honour the required JVM options for your build, Gradle will normally spawn a separate process for build invocation, even when the Daemon is disabled. You can prevent this "single-use Daemon" by ensuring that the JVM settings for the client VM match those required for the build VM. See Configuring JVM Memory for more details.
Note that once the Daemon is enabled, all your builds will take advantage of the speed boost,
regardless of the version of Gradle a particular build uses.
Continuous integration
TIP: Since Gradle 3.0, we enable the Daemon by default and recommend using it for both developers' machines and Continuous Integration servers. However, if you suspect that the Daemon makes your CI builds unstable, you can disable it to use a fresh runtime for each build, since the runtime is completely isolated from any previous builds.
As mentioned, the Daemon is a background process. You needn’t worry about a build up of Gradle
processes on your machine, though. Every Daemon monitors its memory usage compared to total
system memory and will stop itself if idle when available system memory is low. If you want to
explicitly stop running Daemon processes for any reason, just use the command gradle --stop.
This will terminate all Daemon processes that were started with the same version of Gradle used to
execute the command. If you have the Java Development Kit (JDK) installed, you can easily verify
that a Daemon has stopped by running the jps command. You’ll see any running Daemons listed
with the name GradleDaemon.
FAQ
There are two recommended ways to disable the Daemon persistently for an environment:
• Via environment variables: add the flag -Dorg.gradle.daemon=false to the GRADLE_OPTS environment variable
• Via properties file: add org.gradle.daemon=false to the «GRADLE_USER_HOME»/gradle.properties file
Both approaches have the same effect. Which one to use is up to personal preference. Most Gradle
users choose the second option and add the entry to the user gradle.properties file.
On Windows, this command will disable the Daemon for the current user:
(if not exist "%USERPROFILE%/.gradle" mkdir "%USERPROFILE%/.gradle") && (echo. >> "%USERPROFILE%/.gradle/gradle.properties" && echo org.gradle.daemon=false >> "%USERPROFILE%/.gradle/gradle.properties")
On UNIX-like operating systems, the following Bash shell command will disable the Daemon for the current user:
mkdir -p ~/.gradle && echo "org.gradle.daemon=false" >> ~/.gradle/gradle.properties
The --daemon and --no-daemon command line options enable and disable usage of the Daemon for
individual build invocations when using the Gradle command line interface. These command line
options have the highest precedence when considering the build environment. Typically, it is more
convenient to enable the Daemon for an environment (e.g. a user account) so that all builds use the
Daemon without having to remember to supply the --daemon option.
There are several reasons why Gradle will create a new Daemon, instead of using one that is
already running. The basic rule is that Gradle will start a new Daemon if there are no existing idle
or compatible Daemons available. Gradle will kill any Daemon that has been idle for 3 hours or
more, so you don’t have to worry about cleaning them up manually.
idle
An idle Daemon is one that is not currently executing a build or doing other useful work.
compatible
A compatible Daemon is one that can (or can be made to) meet the requirements of the
requested build environment. The Java runtime used to execute the build is an example aspect
of the build environment. Another example is the set of JVM system properties required by the
build runtime.
Some aspects of the requested build environment may not be met by a Daemon. If the Daemon is
running with a Java 8 runtime, but the requested environment calls for Java 10, then the Daemon is
not compatible and another must be started. Moreover, certain properties of a Java runtime cannot
be changed once the JVM has started. For example, it is not possible to change the memory
allocation (e.g. -Xmx1024m), default text encoding, default locale, etc of a running JVM.
The “requested build environment” is typically constructed implicitly from aspects of the build
client’s (e.g. Gradle command line client, IDE etc.) environment and explicitly via command line
switches and settings. See Build Environment for details on how to specify and control the build
environment.
The following JVM system properties are effectively immutable. If the requested build environment
requires any of these properties, with a different value than a Daemon’s JVM has for this property,
the Daemon is not compatible.
• file.encoding
• user.language
• user.country
• user.variant
• java.io.tmpdir
• javax.net.ssl.keyStore
• javax.net.ssl.keyStorePassword
• javax.net.ssl.keyStoreType
• javax.net.ssl.trustStore
• javax.net.ssl.trustStorePassword
• javax.net.ssl.trustStoreType
• com.sun.management.jmxremote
The following JVM attributes, controlled by startup arguments, are also effectively immutable. The
corresponding attributes of the requested build environment and the Daemon’s environment must
match exactly in order for a Daemon to be compatible.
• Maximum heap size (e.g. -Xmx)
• Minimum heap size (e.g. -Xms)
• Boot classpath (e.g. -Xbootclasspath)
• "assertion" status (e.g. -ea)
The required Gradle version is another aspect of the requested build environment. Daemon
processes are coupled to a specific Gradle runtime. Working on multiple Gradle projects during a
session that use different Gradle versions is a common reason for having more than one running
Daemon process.
How much memory does the Daemon use and can I give it more?
If the requested build environment does not specify a maximum heap size, the Daemon will use up
to 512MB of heap. It will use the JVM’s default minimum heap size. 512MB is more than enough for
most builds. Larger builds with hundreds of subprojects, lots of configuration, and source code may
require, or perform better, with more memory.
To increase the amount of memory the Daemon can use, specify the appropriate flags as part of the
requested build environment. Please see Build Environment for details.
Daemon processes will automatically terminate themselves after 3 hours of inactivity or less. If you
wish to stop a Daemon process before this, you can either kill the process via your operating system
or run the gradle --stop command. The --stop switch causes Gradle to request that all running
Daemon processes, of the same Gradle version used to run the command, terminate themselves.
Considerable engineering effort has gone into making the Daemon robust, transparent and
unobtrusive during day to day development. However, Daemon processes can occasionally be
corrupted or exhausted. A Gradle build executes arbitrary code from multiple sources. While
Gradle itself is designed for and heavily tested with the Daemon, user build scripts and third party
plugins can destabilize the Daemon process through defects such as memory leaks or global state
corruption.
It is also possible to destabilize the Daemon (and build environment in general) by running builds
that do not release resources correctly. This is a particularly poignant problem when using
Microsoft Windows as it is less forgiving of programs that fail to close files after reading or writing.
Gradle actively monitors heap usage and attempts to detect when a leak is starting to exhaust the
available heap space in the daemon. When it detects a problem, the Gradle daemon will finish the
currently running build and proactively restart the daemon on the next build. This monitoring is
enabled by default, but can be disabled by setting the org.gradle.daemon.performance.enable-
monitoring system property to false.
If it is suspected that the Daemon process has become unstable, it can simply be killed. Recall that
the --no-daemon switch can be specified for a build to prevent use of the Daemon. This can be useful
to diagnose whether or not the Daemon is actually the culprit of a problem.
The Gradle Tooling API that is used by IDEs and other tools to integrate with Gradle always uses the
Gradle Daemon to execute builds. If you are executing Gradle builds from within your IDE you are
using the Gradle Daemon and do not need to enable it for your environment.
The Gradle Daemon is a long lived build process. In between builds it waits idly for the next build.
This has the obvious benefit of only requiring Gradle to be loaded into memory once for multiple
builds, as opposed to once for each build. This in itself is a significant performance optimization,
but that’s not where it stops.
A significant part of the story for modern JVM performance is runtime code optimization. For
example, HotSpot (the JVM implementation provided by Oracle and used as the basis of OpenJDK)
applies optimization to code while it is running. The optimization is progressive and not
instantaneous. That is, the code is progressively optimized during execution which means that
subsequent builds can be faster purely due to this optimization process. Experiments with HotSpot
have shown that it takes somewhere between 5 and 10 builds for optimization to stabilize. The
difference in perceived build time between the first build and the 10th for a Daemon can be quite
dramatic.
The Daemon also allows more effective in-memory caching across builds. For example, the classes
needed by the build (e.g. plugins, build scripts) can be held in memory between builds. Similarly,
Gradle can maintain in-memory caches of build data such as the hashes of task inputs and outputs,
used for incremental building.
To detect changes on the file system, and to calculate what needs to be rebuilt, Gradle collects a lot
of information about the state of the file system during every build. On supported operating
systems the Daemon re-uses the already collected information from the last build (see watching the
file system). This can save a significant amount of time for incremental builds, where the number
of changes to the file system between two builds is typically low.
To detect changes on the file system, and to calculate what needs to be rebuilt, Gradle collects
information about the file system in-memory during every build (aka Virtual File System). By
watching the file system, Gradle can keep the Virtual File System in sync with the file system even
between builds. Doing so allows the Daemon to save the time to rebuild the Virtual File System
from disk for the next build. For incremental builds, there are typically only a few changes between
builds. Therefore, incremental builds can re-use most of the Virtual File System from the last build
and benefit the most from watching the file system.
Gradle uses native operating system features for watching the file system. It supports the feature on
these operating systems:
• Linux (Ubuntu 16.04 or later, CentOS 8 or later, Red Hat Enterprise Linux 8 or later, Amazon
Linux 2 are tested),
• Windows 10, version 1709 and later (Windows Server 2019 is tested),
• macOS 10.14 (Mojave) or later.
It supports the following file system types:
• APFS
• btrfs
• ext3
• ext4
• HFS+
• NTFS
File system watching supports working through VirtualBox’s shared folders, too.
Network file systems like Samba and NFS are not supported.
NOTE: If you have symlinks in your build, you won’t get the performance benefits for those locations.
File system watching is enabled by default for operating systems supported by Gradle.
When the feature is enabled by default, Gradle acts conservatively when it encounters content on
unsupported file systems. This can happen for example if a project directory, or one of its
subdirectories is mounted from a network drive. In default mode information about unsupported
file systems will not be retained in the Virtual File System between builds.
To force Gradle to keep information about unsupported file systems between builds, the feature
must be enabled explicitly by one of these methods:
• Run with --watch-fs on the command line
• Put org.gradle.vfs.watch=true in your gradle.properties
File system watching can also be disabled completely regardless of file systems by supplying
--no-watch-fs on the command-line, or by specifying org.gradle.vfs.watch=false in gradle.properties.
You can instruct Gradle to log some more information about the state of the virtual file system, and
the events received from the file system, using the org.gradle.vfs.verbose flag. This produces
additional messages at the start and end of the build.
Note that on Windows and macOS Gradle might report changes received since the last build even
if you haven’t changed anything. These are harmless notifications about changes to Gradle’s own
caches and can be ignored safely.
Common problems
• too many changes happened, and the watching API couldn’t handle it.
In such cases the build cannot benefit from file system watching.
Linux-specific notes
File system watching uses inotify on Linux. Depending on the size of your build, it may be
necessary to increase inotify limits. If you are using an IDE, then you probably already had to
increase the limits in the past.
File system watching uses one inotify watch per watched directory. You can see the current limit of
inotify watches per user by running:
cat /proc/sys/fs/inotify/max_user_watches
Each used inotify watch takes up to 1KB of memory. Assuming inotify uses all the 512K watches
then around 500MB will be used for watching the file system. If your environment is memory
constrained, you may want to disable file system watching.
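On most Linux distributions you can raise the limit until the next reboot with sysctl; the value shown is illustrative:
sudo sysctl fs.inotify.max_user_watches=524288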
Initialization Scripts
Gradle provides a powerful mechanism to allow customizing the build based on the current
environment. This mechanism also supports tools that wish to integrate with Gradle.
Note that this is completely different from the “init” task provided by the “build-init” plugin (see
Build Init Plugin).
Basic usage
Initialization scripts (a.k.a. init scripts) are similar to other scripts in Gradle. These scripts, however,
are run before the build starts. Here are several possible uses:
• Set up properties based on the current environment, such as a developer’s machine vs. a
continuous integration server.
• Supply personal information about the user that is required by the build, such as repository or
database authentication credentials.
• Register build loggers. You might wish to customize how Gradle logs the events that it generates.
One main limitation of init scripts is that they cannot access classes in the buildSrc project (see
Using buildSrc to extract imperative logic for details of this feature).
There are several ways to use an init script:
• Specify a file on the command line. The command line option is -I or --init-script followed by
the path to the script. The command line option can appear more than once, each time adding
another init script. The build will fail if any of the files specified on the command line does not
exist.
• Put a file called init.gradle (or init.gradle.kts for Kotlin) in the USER_HOME/.gradle/ directory.
• Put a file that ends with .gradle (or .init.gradle.kts for Kotlin) in the
USER_HOME/.gradle/init.d/ directory.
• Put a file that ends with .gradle (or .init.gradle.kts for Kotlin) in the GRADLE_HOME/init.d/
directory, in the Gradle distribution. This allows you to package up a custom Gradle distribution
containing some custom build logic and plugins. You can combine this with the Gradle wrapper
as a way to make custom logic available to all builds in your enterprise.
If more than one init script is found they will all be executed, in the order specified above. Scripts
in a given directory are executed in alphabetical order. This allows, for example, a tool to specify an
init script on the command line and the user to put one in their home directory for defining the
environment and both scripts will run when Gradle is executed.
Similar to a Gradle build script, an init script is a Groovy or Kotlin script. Each init script has a
Gradle instance associated with it. Any property reference and method call in the init script will
delegate to this Gradle instance.
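For example, a call like settingsEvaluated below is delegated to the Gradle instance; this is a minimal sketch:
init.gradle.kts
// Runs once the settings of each build have been evaluated
settingsEvaluated {
    println("Settings evaluated for build '${rootProject.name}'")
}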
You can use an init script to configure the projects in the build. This works in a similar way to
configuring projects in a multi-project build. The following sample shows how to perform extra
configuration from an init script before the projects are evaluated. This sample uses this feature to
configure an extra repository to be used only for certain environments.
Example 21. Using init script to perform extra configuration before projects are evaluated
build.gradle
repositories {
    mavenCentral()
}

tasks.register('showRepos') {
    doLast {
        println "All repos:"
        println repositories.collect { it.name }
    }
}
init.gradle
allprojects {
    repositories {
        mavenLocal()
    }
}
build.gradle.kts
repositories {
    mavenCentral()
}

tasks.register("showRepos") {
    doLast {
        println("All repos:")
        println(repositories.map { it.name })
    }
}
init.gradle.kts
allprojects {
    repositories {
        mavenLocal()
    }
}
Output when applying the init script
> gradle --init-script init.gradle -q showRepos
All repos:
[MavenLocal, MavenRepo]
In External dependencies for the build script it was explained how to add external dependencies to
a build script. Init scripts can also declare dependencies. You do this with the initscript() method,
passing in a closure which declares the init script classpath.
Example 22. Declaring external dependencies for an init script
init.gradle
initscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'org.apache.commons:commons-math:2.0'
    }
}
init.gradle.kts
initscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.apache.commons:commons-math:2.0")
    }
}
The closure passed to the initscript() method configures a ScriptHandler instance. You declare the
init script classpath by adding dependencies to the classpath configuration. This is the same way
you declare, for example, the Java compilation classpath. You can use any of the dependency types
described in Declaring Dependencies, except project dependencies.
Having declared the init script classpath, you can use the classes in your init script as you would
any other classes on the classpath. The following example adds to the previous example, and uses
classes from the init script classpath.
Example 23. An init script with external dependencies
init.gradle
import org.apache.commons.math.fraction.Fraction

initscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'org.apache.commons:commons-math:2.0'
    }
}

println Fraction.ONE_FIFTH.multiply(2)
build.gradle
tasks.register('doNothing')
init.gradle.kts
import org.apache.commons.math.fraction.Fraction

initscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.apache.commons:commons-math:2.0")
    }
}

println(Fraction.ONE_FIFTH.multiply(2))
build.gradle.kts
tasks.register("doNothing")
Similar to a Gradle build script or a Gradle settings file, plugins can be applied on init scripts.
Example 24. Using plugins in init scripts
init.gradle
build.gradle
repositories {
    mavenCentral()
}

tasks.register('showRepositories') {
    doLast {
        repositories.each {
            println "repository: ${it.name} ('${it.url}')"
        }
    }
}
init.gradle.kts
apply<EnterpriseRepositoryPlugin>()

repositories {
    mavenCentral()
}

tasks.register("showRepositories") {
    doLast {
        repositories.map { it as MavenArtifactRepository }.forEach {
            println("repository: ${it.name} ('${it.url}')")
        }
    }
}
The plugin in the init script ensures that only a specified repository is used when running the build.
When applying plugins within the init script, Gradle instantiates the plugin and calls the plugin
instance’s Plugin.apply(T) method. The gradle object is passed as a parameter, which can be used to
configure all aspects of a build. Of course, the applied plugin can be resolved as an external
dependency as described in External dependencies for the init script.
For details about authoring multi-project builds, consult the Authoring Multi-Project Builds section
of the user manual.
To identify the project structure, you can use the gradle projects command. As an example, let’s use a
multi-project build with the following structure:
> gradle -q projects
------------------------------------------------------------
Root project 'multiproject'
------------------------------------------------------------
From a user’s perspective, multi-project builds are still collections of tasks you can run. The
difference is that you may want to control which project’s tasks get executed. The following sections
will cover the two options you have for executing tasks in a multi-project build.
The command gradle test will execute the test task in any subprojects, relative to the current
working directory, that have that task. If you run the command from the root project directory,
you’ll run test in api, shared, services:shared and services:webservice. If you run the command from
the services project directory, you’ll only execute the task in services:shared and services:webservice.
The basic rule behind Gradle’s behavior is: execute all tasks down the hierarchy which have this
name. Only complain if there is no such task found in any of the subprojects traversed.
NOTE: Some task selectors, like help or dependencies, will only run the task on the project they are invoked on and not on all the subprojects. The main motivation for this is that these tasks print out information that would be hard to process if it combined the information from all projects.
Gradle looks down the hierarchy, starting with the current dir, for tasks with the given name and
executes them. One thing is very important to note. Gradle always evaluates every project of the
multi-project build and creates all existing task objects. Then, according to the task name
arguments and the current directory, Gradle filters the tasks which should be executed. Because of
Gradle’s cross project configuration, every project has to be evaluated before any task gets executed.
When you’re using the Gradle wrapper, executing a task for a specific subproject by running Gradle
from the subproject’s directory doesn’t work well because you have to specify the path to the
wrapper script if you’re not in the project root. For example, if you want to run the build task for the
webservice subproject and you’re in the webservice subproject directory, you would have to run
../../gradlew build. The next section shows how this can be achieved directly from the project’s
root directory.
Executing tasks by fully qualified name
You can use a task’s fully qualified name to execute a specific task in a specific subproject. For
example: gradle :services:webservice:build will run the build task of the webservice subproject.
The fully qualified name of a task is simply its project path plus the task name.
A project path has the following pattern: It starts with an optional colon, which denotes the root
project. The root project is the only project in a path that is not specified by its name. The rest of a
project path is a colon-separated sequence of project names, where the next project is a subproject
of the previous project. You can see the project paths when running gradle projects as shown in
identifying project structure section.
This approach works for any task, so if you want to know what tasks are in a particular subproject,
just use the tasks task, e.g. gradle :services:webservice:tasks.
Regardless of which technique you use to execute tasks, Gradle will take care of building any
subprojects that the target depends on. You don’t have to worry about the inter-project
dependencies yourself. If you’re interested in how this is configured, you can read about writing
multi-project builds later in the user manual.
That’s all you really need to know about multi-project builds as a build user. You can now identify
whether a build is a multi-project one and you can discover its structure. And finally, you can
execute tasks within specific subprojects.
The build task of the Java plugin is typically used to compile, test, and perform code style checks (if
the CodeQuality plugin is used) of a single project. In multi-project builds you may often want to do
all of these tasks across a range of projects. The buildNeeded and buildDependents tasks can help with
this.
In this example, the :services:person-service project depends on both the :api and :shared
projects. The :api project also depends on the :shared project.
Assume you are working on a single project, the :api project. You have been making changes, but
have not built the entire project since performing a clean. You want to build any necessary
supporting jars, but only perform code quality and unit tests on the project you have changed. The
build task does this.
Example 25. Build and Test Single Project
> gradle :api:build

BUILD SUCCESSFUL in 0s
If you have just gotten the latest version of source from your version control system which included
changes in other projects that :api depends on, you might want to not only build all the projects
you depend on, but test them as well. The buildNeeded task also tests all the projects from the project
dependencies of the testRuntime configuration.
Example 26. Build and Test Depended On Projects
> gradle :api:buildNeeded

BUILD SUCCESSFUL in 0s
You also might want to refactor some part of the :api project that is used in other projects. If you
make these types of changes, it is not sufficient to test just the :api project, you also need to test all
projects that depend on the :api project. The buildDependents task also tests all the projects that
have a project dependency (in the testRuntime configuration) on the specified project.
Example 27. Build and Test Dependent Projects
> gradle :api:buildDependents

BUILD SUCCESSFUL in 0s
Finally, you may want to build and test everything in all projects. Any task you run in the root
project folder will cause that same named task to be run on all the children. So you can just run
gradle build to build and test all projects.
Build Cache
TIP: Want to learn the tips and tricks top engineering teams use to keep builds fast and performant? Register here for our Build Cache Training.
NOTE: The build cache feature described here is different from the Android plugin build cache.
Overview
The Gradle build cache is a cache mechanism that aims to save time by reusing outputs produced by
other builds. The build cache works by storing (locally or remotely) build outputs and allowing
builds to fetch these outputs from the cache when it is determined that inputs have not changed,
avoiding the expensive work of regenerating them.
A first feature using the build cache is task output caching. Essentially, task output caching
leverages the same intelligence as up-to-date checks that Gradle uses to avoid work when a
previous local build has already produced a set of task outputs. But instead of being limited to the
previous build in the same workspace, task output caching allows Gradle to reuse task outputs from
any earlier build in any location on the local machine. When using a shared build cache for task
output caching this even works across developer machines and build agents.
Apart from tasks, artifact transforms can also leverage the build cache and re-use their outputs
similarly to task output caching.
TIP: For a hands-on approach to learning how to use the build cache, start with reading through the use cases for the build cache and the follow up sections. It covers the different scenarios that caching can improve and has detailed discussions of the different caveats you need to be aware of when enabling caching for a build.
By default, the build cache is not enabled. You can enable the build cache in a couple of ways:
• Run with --build-cache on the command-line. Gradle will use the build cache for this build only.
• Put org.gradle.caching=true in your gradle.properties. Gradle will try to reuse outputs from previous builds for all builds, unless explicitly disabled with --no-build-cache.
When the build cache is enabled, it will store build outputs in the Gradle user home. For
configuring this directory or different kinds of build caches see Configure the Build Cache.
Beyond incremental builds described in up-to-date checks, Gradle can save time by reusing outputs
from previous executions of a task by matching inputs to the task. Task outputs can be reused
between builds on one computer or even between builds running on different computers via a
build cache.
We have focused on the use case where users have an organization-wide remote build cache that is
populated regularly by continuous integration builds. Developers and other continuous integration
agents should load cache entries from the remote build cache. We expect that developers will not
be allowed to populate the remote build cache, and all continuous integration builds populate the
build cache after running the clean task.
For your build to play well with task output caching it must work well with the incremental build
feature. For example, when running your build twice in a row all tasks with outputs should be UP-
TO-DATE. You cannot expect faster builds or correct builds when enabling task output caching when
this prerequisite is not met.
Task output caching is automatically enabled when you enable the build cache, see Enable the
Build Cache.
Let us start with a project using the Java plugin which has a few Java source files. We run the build
the first time.
BUILD SUCCESSFUL
We see the directory used by the local build cache in the output. Apart from that the build was the
same as without the build cache. Let’s clean and run the build again.
BUILD SUCCESSFUL
BUILD SUCCESSFUL
Now we see that, instead of executing the :compileJava task, the outputs of the task have been
loaded from the build cache. The other tasks have not been loaded from the build cache since they
are not cacheable. This is due to :classes and :assemble being lifecycle tasks and :processResources
and :jar being Copy-like tasks which are not cacheable since it is generally faster to execute them.
Cacheable tasks
Since a task describes all of its inputs and outputs, Gradle can compute a build cache key that
uniquely defines the task’s outputs based on its inputs. That build cache key is used to request
previous outputs from a build cache or store new outputs in the build cache. If the previous build
outputs have been already stored in the cache by someone else, e.g. your continuous integration
server or other developers, you can avoid executing most tasks locally.
The following inputs contribute to the build cache key for a task in the same way that they do for
up-to-date checks:
• The task type and its classpath
• The names of the output properties
• The names and values of properties annotated as described in the section called "Custom task
types"
• The names and values of properties added by the DSL via TaskInputs
• The classpath of the Gradle distribution, buildSrc and plugins
• The content of the build script when it affects execution of the task
Task types need to opt-in to task output caching using the @CacheableTask annotation. Note that
@CacheableTask is not inherited by subclasses. Custom task types are not cacheable by default.
Several built-in task types are already cacheable, for example:
• Testing: Test
Some tasks, like Copy or Jar, usually do not make sense to make cacheable because Gradle is only
copying files from one location to another. It also doesn’t make sense to make tasks cacheable that
do not produce outputs or have no task actions.
There are third party plugins that work well with the build cache. The most prominent examples
are the Android plugin 3.1+ and the Kotlin plugin 1.2.21+. For other third party plugins, check their
documentation to find out whether they support the build cache.
It is very important that a cacheable task has a complete picture of its inputs and outputs, so that
the results from one build can be safely re-used somewhere else.
Missing task inputs can cause incorrect cache hits, where different results are treated as identical
because the same cache key is used by both executions. Missing task outputs can cause build
failures if Gradle does not completely capture all outputs for a given task. Wrongly declared task
inputs can lead to cache misses, especially when they contain volatile data or absolute paths. (See the
section called "Task inputs and outputs" on what should be declared as inputs and outputs.)
NOTE: The task path is not an input to the build cache key. This means that tasks with different task paths can re-use each other’s outputs as long as Gradle determines that executing them yields the same result.
In order to ensure that the inputs and outputs are properly declared use integration tests (for
example using TestKit) to check that a task produces the same outputs for identical inputs and
captures all output files for the task. We suggest adding tests to ensure that the task inputs are
relocatable, i.e. that the task can be loaded from the cache into a different build directory (see
@PathSensitive).
In order to handle volatile inputs for your tasks consider configuring input normalization.
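For instance, input normalization can tell Gradle to ignore a volatile file when computing up-to-date checks and cache keys. A minimal sketch, where build-info.properties stands in for whichever file carries the volatile data:
build.gradle.kts
// Ignore a volatile file on the runtime classpath during cache key calculation
normalization {
    runtimeClasspath {
        ignore("build-info.properties")
    }
}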
There are certain tasks that don’t benefit from using the build cache. One example is a task that
only moves data around the file system, like a Copy task. You can signify that a task is not to be
cached by adding the @DisableCachingByDefault annotation to it. You can also give a human-
readable reason for not caching the task by default. The annotation can be used on its own, or
together with @CacheableTask.
NOTE: This annotation is only for documenting the reason behind not caching the task by default. Build logic can override this decision via the runtime API (see below).
As we have seen, built-in tasks, or tasks provided by plugins, are cacheable if their class is
annotated with the @CacheableTask annotation. But what if you want to make a task cacheable when its
class is not? Let’s take a concrete example: your build script uses a generic NpmTask task to
create a JavaScript bundle by delegating to NPM (and running npm run bundle). This process is
similar to a complex compilation task, but NpmTask is too generic to be cacheable by default: it just
takes arguments and runs npm with those arguments.
The inputs and outputs of this task are simple to figure out. The inputs are the directory containing
the JavaScript files, and the NPM configuration files. The output is the bundle file generated by this
task.
Using annotations
We create a subclass of the NpmTask and use annotations to declare the inputs and outputs.
When possible, it is better to use delegation instead of creating a subclass. That is the case for the
built-in JavaExec, Exec, Copy and Sync tasks, which have a method on Project to do the actual work.
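For instance, instead of subclassing Exec, an ad-hoc task can delegate to Project.exec() in its action; a minimal sketch with an illustrative command:
build.gradle.kts
// Delegate to project.exec() rather than subclassing the Exec task type
tasks.register("printToolVersion") {
    doLast {
        project.exec {
            commandLine("git", "--version")
        }
    }
}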
If you’re a modern JavaScript developer, you know that bundling can be quite long, and is worth
caching. To achieve that, we need to tell Gradle that it’s allowed to cache the output of that task,
using the @CacheableTask annotation.
This is sufficient to make the task cacheable on your own machine. However, input files are
identified by default by their absolute path. So if the cache needs to be shared between several
developers or machines using different paths, that won’t work as expected. So we also need to set
the path sensitivity. In this case, the relative path of the input files can be used to identify them.
Note that it is possible to override property annotations from the base class by overriding the getter
of the base class and annotating that method.
Example 28. Custom cacheable BundleTask
build.gradle
@CacheableTask ①
abstract class BundleTask extends NpmTask {

    @Override @Internal ②
    ListProperty<String> getArgs() {
        super.getArgs()
    }

    @InputDirectory
    @SkipWhenEmpty
    @PathSensitive(PathSensitivity.RELATIVE) ③
    abstract DirectoryProperty getScripts()

    @InputFiles
    @PathSensitive(PathSensitivity.RELATIVE) ④
    abstract ConfigurableFileCollection getConfigFiles()

    @OutputFile
    abstract RegularFileProperty getBundle()

    BundleTask() {
        args.addAll("run", "bundle")
        bundle.set(project.layout.buildDirectory.file("bundle.js"))
        scripts.set(project.layout.projectDirectory.dir("scripts"))
        configFiles.from(project.layout.projectDirectory.file("package.json"))
        configFiles.from(project.layout.projectDirectory.file("package-lock.json"))
    }
}

tasks.register('bundle', BundleTask)
build.gradle.kts
@CacheableTask ①
abstract class BundleTask : NpmTask() {

    @get:Internal ②
    override val args
        get() = super.args

    @get:InputDirectory
    @get:SkipWhenEmpty
    @get:PathSensitive(PathSensitivity.RELATIVE) ③
    abstract val scripts: DirectoryProperty

    @get:InputFiles
    @get:PathSensitive(PathSensitivity.RELATIVE) ④
    abstract val configFiles: ConfigurableFileCollection

    @get:OutputFile
    abstract val bundle: RegularFileProperty

    init {
        args.addAll("run", "bundle")
        bundle.set(project.layout.buildDirectory.file("bundle.js"))
        scripts.set(project.layout.projectDirectory.dir("scripts"))
        configFiles.from(project.layout.projectDirectory.file("package.json"))
        configFiles.from(project.layout.projectDirectory.file("package-lock.json"))
    }
}

tasks.register<BundleTask>("bundle")
① Add @CacheableTask to enable caching for the task
② Override the getter of a property of the base class to change the input annotation to @Internal
③ ④ Declare the path sensitivity of the input files
If for some reason you cannot create a new custom task class, it is also possible to make a task
cacheable using the runtime API to declare the inputs and outputs.
To enable caching for the task you need to use the TaskOutputs.cacheIf() method.
The declarations via the runtime API have the same effect as the annotations described above. Note
that you cannot override file inputs and outputs via the runtime API. Input properties can be
overridden by specifying the same property name.
Example 29. Make a task cacheable using the runtime API
Example 29. Make the bundle task cacheable
build.gradle
tasks.register('bundle', NpmTask) {
args = ['run', 'bundle']
outputs.cacheIf { true }
inputs.dir(file("scripts"))
.withPropertyName("scripts")
.withPathSensitivity(PathSensitivity.RELATIVE)
inputs.files("package.json", "package-lock.json")
.withPropertyName("configFiles")
.withPathSensitivity(PathSensitivity.RELATIVE)
outputs.file("$buildDir/bundle.js")
.withPropertyName("bundle")
}
build.gradle.kts
tasks.register<NpmTask>("bundle") {
args.set(listOf("run", "bundle"))
outputs.cacheIf { true }
inputs.dir(file("scripts"))
.withPropertyName("scripts")
.withPathSensitivity(PathSensitivity.RELATIVE)
inputs.files("package.json", "package-lock.json")
.withPropertyName("configFiles")
.withPathSensitivity(PathSensitivity.RELATIVE)
outputs.file("$buildDir/bundle.js")
.withPropertyName("bundle")
}
Configure the Build Cache
You can configure the build cache by using the Settings.buildCache(org.gradle.api.Action) block in
settings.gradle.
Gradle supports a local and a remote build cache that can be configured separately. When both
build caches are enabled, Gradle tries to load build outputs from the local build cache first, and
then tries the remote build cache if no build outputs are found. If outputs are found in the remote
cache, they are also stored in the local cache, so next time they will be found locally. Gradle stores
("pushes") build outputs in any build cache that is enabled and has BuildCache.isPush() set to true.
By default, the local build cache has push enabled, and the remote build cache has push disabled.
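For example, here is a minimal sketch that flips the default for the local cache; everything shown is standard API, while the particular setting is purely illustrative:
settings.gradle.kts
buildCache {
    local {
        // Loading from the local cache stays enabled;
        // this only stops storing new entries in it.
        isPush = false
    }
}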
The local build cache is pre-configured to be a DirectoryBuildCache and enabled by default. The
remote build cache can be configured by specifying the type of build cache to connect to
(BuildCacheConfiguration.remote(java.lang.Class)).
The built-in local build cache, DirectoryBuildCache, uses a directory to store build cache artifacts.
By default, this directory resides in the Gradle user home directory, but its location is configurable.
Gradle will periodically clean-up the local cache directory by removing entries that have not been
used recently to conserve disk space. Note that cache entries are cleaned-up regardless of the
project they were produced by.
For more details on the configuration options refer to the DSL documentation of
DirectoryBuildCache. Here is an example of the configuration.
Example 30. Configure the local cache
settings.gradle
buildCache {
local {
directory = new File(rootDir, 'build-cache')
removeUnusedEntriesAfterDays = 30
}
}
settings.gradle.kts
buildCache {
local {
directory = File(rootDir, "build-cache")
removeUnusedEntriesAfterDays = 30
}
}
HttpBuildCache provides the ability to read from and write to a remote cache via HTTP.
With the following configuration, the local build cache will be used for storing build outputs while
the local and the remote build cache will be used for retrieving build outputs.
Example 31. Load from HttpBuildCache
settings.gradle
buildCache {
remote(HttpBuildCache) {
url = 'https://example.com:8123/cache/'
}
}
settings.gradle.kts
buildCache {
remote<HttpBuildCache> {
url = uri("https://example.com:8123/cache/")
}
}
You can configure the credentials the HttpBuildCache uses to access the build cache server as shown in the following example.
Example 32. Configure remote HTTP cache
settings.gradle
buildCache {
remote(HttpBuildCache) {
url = 'https://example.com:8123/cache/'
credentials {
username = 'build-cache-user'
password = 'some-complicated-password'
}
}
}
settings.gradle.kts
buildCache {
remote<HttpBuildCache> {
url = uri("https://example.com:8123/cache/")
credentials {
username = "build-cache-user"
password = "some-complicated-password"
}
}
}
Redirects
Servers must take care when redirecting PUT requests as only 307 and 308 redirect responses will be
followed with a PUT request. All other redirect responses will be followed with a GET request, as per
RFC 7231, without the entry payload as the body.
Requests that fail during request transmission, after having established a TCP connection, will be
retried automatically.
This prevents temporary problems, such as connection drops, read or write timeouts, and low-level network failures such as connection resets, from causing cache operations to fail and disabling the remote cache for the remainder of the build.
Requests will be retried up to 3 times. If the problem persists, the cache operation will fail and the
remote cache will be disabled for the remainder of the build.
Using SSL
By default, use of HTTPS requires the server to present a certificate that is trusted by the build’s
Java runtime. If your server’s certificate is not trusted, you can:
1. Update the trust store of the build’s Java runtime so that it trusts the certificate
2. Change the build environment to use an alternative trust store for the build runtime
3. Disable the requirement for a trusted certificate
The trusted certificate check can be disabled by setting HttpBuildCache.isAllowUntrustedServer() to true, as in the following example.
Example 33. Allow untrusted cache server
settings.gradle
buildCache {
remote(HttpBuildCache) {
url = 'https://example.com:8123/cache/'
allowUntrustedServer = true
}
}
settings.gradle.kts
buildCache {
remote<HttpBuildCache> {
url = uri("https://example.com:8123/cache/")
isAllowUntrustedServer = true
}
}
HTTP expect-continue
Use of HTTP Expect-Continue can be enabled. This causes upload requests to happen in two parts:
first a check whether a body would be accepted, then transmission of the body if the server
indicates it will accept it.
This is useful when uploading to cache servers that routinely redirect or reject upload requests, as
it avoids uploading the cache entry just to have it rejected (e.g. the cache entry is larger than the
cache will allow) or redirected. This additional check incurs extra latency when the server accepts
the request, but reduces latency when the request is rejected or redirected.
Not all HTTP servers and proxies reliably implement Expect-Continue. Be sure to check that your cache server supports it before enabling this option.
Example 34. Use Expect-Continue
settings.gradle
buildCache {
remote(HttpBuildCache) {
url = 'https://example.com:8123/cache/'
useExpectContinue = true
}
}
settings.gradle.kts
buildCache {
remote<HttpBuildCache> {
url = uri("https://example.com:8123/cache/")
isUseExpectContinue = true
}
}
The recommended use case for the remote build cache is that your continuous integration server
populates it from clean builds while developers only load from it. The configuration would then
look as follows.
Example 35. Recommended setup for CI push use case
settings.gradle
boolean isCiServer = System.getenv().containsKey("CI")
buildCache {
    remote(HttpBuildCache) {
        url = 'https://example.com:8123/cache/'
        push = isCiServer
    }
}
settings.gradle.kts
val isCiServer = System.getenv().containsKey("CI")
buildCache {
    remote<HttpBuildCache> {
        url = uri("https://example.com:8123/cache/")
        isPush = isCiServer
    }
}
It is also possible to configure the build cache from an init script, which can be used from the
command line, added to your Gradle user home or be a part of your custom Gradle distribution.
Example 36. Init script to configure the build cache
init.gradle
gradle.settingsEvaluated { settings ->
    settings.buildCache {
        // vvv Your custom configuration goes here
        remote(HttpBuildCache) {
            url = 'https://example.com:8123/cache/'
        }
        // ^^^ Your custom configuration goes here
    }
}
init.gradle.kts
gradle.settingsEvaluated {
buildCache {
// vvv Your custom configuration goes here
remote<HttpBuildCache> {
url = uri("https://example.com:8123/cache/")
}
// ^^^ Your custom configuration goes here
}
}
Gradle’s composite build feature allows including other complete Gradle builds into another. Such
included builds will inherit the build cache configuration from the top level build, regardless of
whether the included builds define build cache configuration themselves or not.
The build cache configuration present for any included build is effectively ignored, in favour of the
top level build’s configuration. This also applies to any buildSrc projects of any included builds.
The buildSrc directory is treated as an included build, and as such it inherits the build cache
configuration from the top-level build.
NOTE: This configuration precedence does not apply to plugin builds included through pluginManagement as these are loaded before the cache configuration itself.
Gradle provides a Docker image for a build cache node, which can connect with Gradle Enterprise
for centralized management. The cache node can also be used without a Gradle Enterprise
installation with restricted functionality.
Using a different build cache backend to store build outputs (which is not covered by the built-in
support for connecting to an HTTP backend) requires implementing your own logic for connecting
to your custom build cache backend. To this end, custom build cache types can be registered via
BuildCacheConfiguration.registerBuildCacheService(java.lang.Class, java.lang.Class).
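The following is a minimal sketch of such a registration in settings.gradle.kts. The InMemoryBuildCache, InMemoryBuildCacheService and InMemoryBuildCacheServiceFactory types are hypothetical names invented for this illustration; only registerBuildCacheService(), remote() and the org.gradle.caching interfaces come from the Gradle API, and a real backend would talk to external storage rather than an in-memory map.
settings.gradle.kts
import org.gradle.caching.BuildCacheEntryReader
import org.gradle.caching.BuildCacheEntryWriter
import org.gradle.caching.BuildCacheKey
import org.gradle.caching.BuildCacheService
import org.gradle.caching.BuildCacheServiceFactory
import org.gradle.caching.configuration.AbstractBuildCache
import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream

// Hypothetical cache type; it could expose its own configuration properties.
abstract class InMemoryBuildCache : AbstractBuildCache()

// Hypothetical service keeping entries in memory.
class InMemoryBuildCacheService : BuildCacheService {
    private val entries = mutableMapOf<String, ByteArray>()

    override fun load(key: BuildCacheKey, reader: BuildCacheEntryReader): Boolean {
        val bytes = entries[key.getHashCode()] ?: return false
        reader.readFrom(ByteArrayInputStream(bytes))
        return true
    }

    override fun store(key: BuildCacheKey, writer: BuildCacheEntryWriter) {
        val output = ByteArrayOutputStream()
        writer.writeTo(output)
        entries[key.getHashCode()] = output.toByteArray()
    }

    override fun close() {}
}

class InMemoryBuildCacheServiceFactory : BuildCacheServiceFactory<InMemoryBuildCache> {
    override fun createBuildCacheService(
        configuration: InMemoryBuildCache,
        describer: BuildCacheServiceFactory.Describer
    ): BuildCacheService {
        describer.type("in-memory")
        return InMemoryBuildCacheService()
    }
}

buildCache {
    registerBuildCacheService(InMemoryBuildCache::class.java, InMemoryBuildCacheServiceFactory::class.java)
    remote(InMemoryBuildCache::class.java)
}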
Gradle Enterprise includes a high-performance, easy to install and operate, shared build cache
backend.
Authoring Gradle Builds
Build Script Basics
This chapter introduces you to the basics of writing Gradle build scripts. It uses toy examples to
explain basic functionality of Gradle. This is helpful for understanding the basic concepts, especially if you move to Gradle from other build tools like Ant and want to understand the differences and advantages.
However, to get started with a standard project setup, you don’t even need to go into these concepts
in detail. Instead, you can have a quick hands-on introduction, through our step-by-step samples.
Every Gradle build is made up of one or more projects. What a project represents depends on what
it is that you are doing with Gradle. For example, a project might represent a library JAR or a web
application. It might represent a distribution ZIP assembled from the JARs produced by other
projects. A project does not necessarily represent a thing to be built. It might represent a thing to be
done, such as deploying your application to staging or production environments. Don’t worry if this
seems a little vague for now. Gradle’s build-by-convention support adds a more concrete definition
for what a project is.
The work that Gradle can do on a project is defined by one or more tasks. A task represents some
atomic piece of work which a build performs. This might be compiling some classes, creating a JAR,
generating Javadoc, or publishing some archives to a repository.
Typically, tasks are provided by applying a plugin so that you do not have to define them yourself.
Still, to give you an idea of what a task is, we will look at defining some simple tasks in a build with
one project in this chapter.
Hello world
You run a Gradle build using the gradle command. The gradle command looks for a file called
build.gradle in the current directory. We call this build.gradle file a build script, although strictly
speaking it is a build configuration script, as we will see later. The build script defines a project and
its tasks.
To try this out, create the following build script named build.gradle.
You run a Gradle build using the gradle command. The gradle command looks for a file called
build.gradle.kts in the current directory. We call this build.gradle.kts file a build script, although
strictly speaking it is a build configuration script, as we will see later. The build script defines a
project and its tasks.
To try this out, create the following build script named build.gradle.kts.
Example 37. Your first build script
build.gradle
tasks.register('hello') {
doLast {
println 'Hello world!'
}
}
build.gradle.kts
tasks.register("hello") {
doLast {
println("Hello world!")
}
}
In a command-line shell, move to the containing directory and execute the build script with gradle -q hello:
Output of gradle -q hello
> gradle -q hello
Hello world!
What’s going on here? This build script defines a single task, called hello, and adds an action to it.
When you run gradle hello, Gradle executes the hello task, which in turn executes the action
you’ve provided. The action is simply a block containing some code to execute.
If you think this looks similar to Ant’s targets, you would be right. Gradle tasks are the equivalent to
Ant targets, but as you will see, they are much more powerful. We have used a different
terminology than Ant as we think the word task is more expressive than the word target.
Unfortunately this introduces a terminology clash with Ant, as Ant calls its commands, such as
javac or copy, tasks. So when we talk about tasks, we always mean Gradle tasks, which are the
equivalent to Ant’s targets. If we talk about Ant tasks (Ant commands), we explicitly say Ant task.
Gradle’s build scripts give you the full power of Groovy and Kotlin. As an appetizer, have a look at
this:
Example 39. Using Groovy or Kotlin in Gradle’s tasks
build.gradle
tasks.register('upper') {
doLast {
String someString = 'mY_nAmE'
println "Original: $someString"
println "Upper case: ${someString.toUpperCase()}"
}
}
build.gradle.kts
tasks.register("upper") {
doLast {
val someString = "mY_nAmE"
println("Original: $someString")
println("Upper case: ${someString.toUpperCase()}")
}
}
or
Example 40. Using Groovy or Kotlin in Gradle’s tasks
build.gradle
tasks.register('count') {
doLast {
4.times { print "$it " }
}
}
build.gradle.kts
tasks.register("count") {
doLast {
repeat(4) { print("$it ") }
}
}
Task dependencies
As you probably have guessed, you can declare tasks that depend on other tasks.
Example 41. Declaration of task that depends on other task
build.gradle
tasks.register('hello') {
doLast {
println 'Hello world!'
}
}
tasks.register('intro') {
dependsOn tasks.hello
doLast {
println "I'm Gradle"
}
}
build.gradle.kts
tasks.register("hello") {
doLast {
println("Hello world!")
}
}
tasks.register("intro") {
dependsOn("hello")
doLast {
println("I'm Gradle")
}
}
A dependency can also be declared before the depended-on task is defined:
Example 42. Lazy dependsOn - the other task does not exist (yet)
build.gradle
tasks.register('taskX') {
dependsOn 'taskY'
doLast {
println 'taskX'
}
}
tasks.register('taskY') {
doLast {
println 'taskY'
}
}
build.gradle.kts
tasks.register("taskX") {
dependsOn("taskY")
doLast {
println("taskX")
}
}
tasks.register("taskY") {
doLast {
println("taskY")
}
}
The dependency of taskX to taskY may be declared before taskY is defined. Task dependencies are
discussed in more detail in Adding dependencies to a task.
The power of Groovy or Kotlin can be used for more than defining what a task does. For example,
you can use it to register multiple tasks of the same type in a loop.
Example 43. Flexible registration of a task
build.gradle
4.times { counter ->
    tasks.register("task$counter") {
        doLast {
            println "I'm task number $counter"
        }
    }
}
build.gradle.kts
repeat(4) { counter ->
    tasks.register("task$counter") {
        doLast {
            println("I'm task number $counter")
        }
    }
}
Once tasks are registered, they can be accessed via an API. For instance, you could use this to
dynamically add dependencies to a task, at runtime. Ant doesn’t allow anything like this.
Example 44. Accessing a task via API - adding a dependency
build.gradle
tasks.named('task0') { dependsOn('task2', 'task3') }
build.gradle.kts
tasks.named("task0") { dependsOn("task2", "task3") }
You can also use the API to add behavior to an existing task:
Example 45. Accessing a task via API - adding behaviour
build.gradle
tasks.register('hello') {
doLast {
println 'Hello Earth'
}
}
tasks.named('hello') {
doFirst {
println 'Hello Venus'
}
}
tasks.named('hello') {
doLast {
println 'Hello Mars'
}
}
tasks.named('hello') {
doLast {
println 'Hello Jupiter'
}
}
build.gradle.kts
tasks.register("hello") {
doLast {
println("Hello Earth")
}
}
tasks.named("hello") {
doFirst {
println("Hello Venus")
}
}
tasks.named("hello") {
doLast {
println("Hello Mars")
}
}
tasks.named("hello") {
doLast {
println("Hello Jupiter")
}
}
The calls doFirst and doLast can be executed multiple times. They add an action to the beginning or
the end of the task’s actions list. When the task executes, the actions in the action list are executed
in order.
Ant tasks are first-class citizens in Gradle. Gradle provides excellent integration for Ant tasks by simply relying on Groovy, which is shipped with the fantastic AntBuilder. Using Ant tasks from Gradle is as convenient as, and more powerful than, using Ant tasks from a build.xml file. It is usable from Kotlin too. From the example below, you can learn how to execute Ant tasks and how to access Ant properties:
Example 46. Using AntBuilder to execute ant.loadfile target
build.gradle
tasks.register('loadfile') {
doLast {
def files = file('./antLoadfileResources').listFiles().sort()
files.each { File file ->
if (file.isFile()) {
ant.loadfile(srcFile: file, property: file.name)
println " *** $file.name ***"
println "${ant.properties[file.name]}"
}
}
}
}
build.gradle.kts
tasks.register("loadfile") {
doLast {
val files = file("./antLoadfileResources").listFiles().sorted()
files.forEach { file ->
if (file.isFile) {
ant.withGroovyBuilder {
"loadfile"("srcFile" to file, "property" to file.name)
}
println(" *** ${file.name} ***")
println("${ant.properties[file.name]}")
}
}
}
}
Using methods
Gradle scales in how you can organize your build logic. The first level of organizing your build logic for the example above is extracting a method.
Example 47. Using methods to organize your build logic
build.gradle
tasks.register('checksum') {
doLast {
fileList('./antLoadfileResources').each { File file ->
ant.checksum(file: file, property: "cs_$file.name")
println "$file.name Checksum: ${ant.properties["cs_$file.name"]}"
}
}
}
tasks.register('loadfile') {
doLast {
fileList('./antLoadfileResources').each { File file ->
ant.loadfile(srcFile: file, property: file.name)
println "I'm fond of $file.name"
}
}
}
tasks.register("checksum") {
doLast {
fileList("./antLoadfileResources").forEach { file ->
ant.withGroovyBuilder {
"checksum"("file" to file, "property" to "cs_${file.name}")
}
println("$file.name Checksum:
${ant.properties["cs_${file.name}"]}")
}
}
}
tasks.register("loadfile") {
doLast {
fileList("./antLoadfileResources").forEach { file ->
ant.withGroovyBuilder {
"loadfile"("srcFile" to file, "property" to file.name)
}
println("I'm fond of ${file.name}")
}
}
}
Later you will see that such methods can be shared among subprojects in multi-project builds. If
your build logic becomes more complex, Gradle offers you other very convenient ways to organize
it. We have devoted a whole chapter to this. See Organizing Gradle Projects.
Default tasks
Gradle allows you to define one or more default tasks that are executed if no other tasks are
specified.
Example 48. Defining a default task
build.gradle
defaultTasks 'clean', 'run'
tasks.register('clean') {
doLast {
println 'Default Cleaning!'
}
}
tasks.register('run') {
doLast {
println 'Default Running!'
}
}
tasks.register('other') {
doLast {
println "I'm not a default task!"
}
}
build.gradle.kts
defaultTasks("clean", "run")
tasks.register("clean") {
doLast {
println("Default Cleaning!")
}
}
tasks.register("run") {
doLast {
println("Default Running!")
}
}
tasks.register("other") {
doLast {
println("I'm not a default task!")
}
}
Output of gradle -q
> gradle -q
Default Cleaning!
Default Running!
This is equivalent to running gradle clean run. In a multi-project build every subproject can have
its own specific default tasks. If a subproject does not specify default tasks, the default tasks of the
parent project are used (if defined).
If your build script needs to use external libraries, you can add them to the script’s classpath in the
build script itself. You do this using the buildscript() method, passing in a block which declares the
build script classpath.
Example 49. Declaring external dependencies for the build script
build.gradle
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath group: 'commons-codec', name: 'commons-codec', version: '1.2'
}
}
build.gradle.kts
buildscript {
repositories {
mavenCentral()
}
dependencies {
"classpath"(group = "commons-codec", name = "commons-codec", version
= "1.2")
}
}
The block passed to the buildscript() method configures a ScriptHandler instance. You declare the
build script classpath by adding dependencies to the classpath configuration. This is the same way
you declare, for example, the Java compilation classpath. You can use any of the dependency types
except project dependencies.
Having declared the build script classpath, you can use the classes in your build script as you would
any other classes on the classpath. The following example adds to the previous example, and uses
classes from the build script classpath.
Example 50. A build script with external dependencies
build.gradle
import org.apache.commons.codec.binary.Base64
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath group: 'commons-codec', name: 'commons-codec', version: '1.2'
}
}
tasks.register('encode') {
doLast {
def byte[] encodedString = new Base64().encode('hello world\n'.getBytes())
println new String(encodedString)
}
}
build.gradle.kts
import org.apache.commons.codec.binary.Base64
buildscript {
repositories {
mavenCentral()
}
dependencies {
"classpath"(group = "commons-codec", name = "commons-codec", version
= "1.2")
}
}
tasks.register("encode") {
doLast {
val encodedString = Base64().encode("hello world\n".toByteArray())
println(String(encodedString))
}
}
Output of gradle -q encode
> gradle -q encode
aGVsbG8gd29ybGQK
For multi-project builds, the dependencies declared with a project’s buildscript() method are
available to the build scripts of all its sub-projects.
Build script dependencies may be Gradle plugins. Please consult Using Gradle Plugins for more
information on Gradle plugins.
Further Reading
This chapter only scratched the surface with what’s possible. Here are some other topics that may
be interesting:
Authoring Tasks
In the introductory tutorial you learned how to create simple tasks. You also learned how to add
additional behavior to these tasks later on, and you learned how to create dependencies between
tasks. This was all about simple tasks, but Gradle takes the concept of tasks further. Gradle supports
tasks that have their own properties and methods. Such tasks are either provided by you or built
into Gradle.
Task outcomes
When Gradle executes a task, it can label the task with different outcomes in the console UI and via
the Tooling API. These labels are based on whether a task has actions to execute, whether it should execute those actions, whether it did execute those actions, and whether those actions made any changes.
(no label) or EXECUTED
Task executed its actions.
• Task has actions and Gradle has determined they should be executed as part of a build.
• Task has no actions and some dependencies, and any of the dependencies are executed. See
also Lifecycle Tasks.
UP-TO-DATE
Task’s outputs did not change.
• Task has outputs and inputs and they have not changed. See Incremental Builds.
• Task has actions, but the task tells Gradle it did not change its outputs.
• Task has no actions and some dependencies, but all of the dependencies are up-to-date,
skipped or from cache. See also Lifecycle Tasks.
FROM-CACHE
Task’s outputs could be found from a previous execution.
• Task has outputs restored from the build cache. See Build Cache.
SKIPPED
Task did not execute its actions.
• Task has been explicitly excluded from the command-line. See Excluding tasks from
execution.
NO-SOURCE
Task did not need to execute its actions.
• Task has inputs and outputs, but no sources. For example, source files are .java files for
JavaCompile.
Defining tasks
We have already seen how to define tasks using strings for task names in this chapter. There are a
few variations on this style, which you may need to use in certain situations.
NOTE: The task configuration APIs are described in more detail in the task configuration avoidance chapter.
Example 51. Defining tasks using strings for task names
build.gradle
tasks.register('hello') {
doLast {
println 'hello'
}
}
tasks.register('copy', Copy) {
from(file('srcDir'))
into(buildDir)
}
build.gradle.kts
tasks.register("hello") {
doLast {
println("hello")
}
}
tasks.register<Copy>("copy") {
from(file("srcDir"))
into(buildDir)
}
We add the tasks to the tasks collection. Have a look at TaskContainer for more variations of the
register() method.
In the Kotlin DSL there is also a specific delegated properties syntax that is useful if you need the
registered task for further reference.
Example 52. Assigning tasks to variables with DSL specific syntax
build.gradle
def hello = tasks.register('hello') {
    doLast {
        println 'hello'
    }
}
def copy = tasks.register('copy', Copy) {
    from(file('srcDir'))
    into(buildDir)
}
build.gradle.kts
val hello by tasks.registering {
    doLast {
        println("hello")
    }
}
val copy by tasks.registering(Copy::class) {
    from(file("srcDir"))
    into(buildDir)
}
WARNING: If you look at the API of the tasks container you may notice that there are additional methods to create tasks. The use of these methods is discouraged and will be deprecated in future versions. These methods only exist for backward compatibility as they were introduced before task configuration avoidance was added to Gradle.
Locating tasks
You often need to locate the tasks that you have defined in the build file, for example, to configure
them or use them for dependencies. There are a number of ways of doing this. Firstly, just like with defining tasks, there are language-specific syntaxes for the Groovy and Kotlin DSL. In general, tasks are available through the tasks collection. You should use the methods that return a task provider – register() or named() – to make sure you do not break task configuration avoidance.
Example 53. Accessing tasks via the tasks collection
build.gradle
tasks.register('hello')
tasks.register('copy', Copy)
println tasks.named('hello').get().name
println tasks.named('copy').get().destinationDir
build.gradle.kts
tasks.register("hello")
tasks.register<Copy>("copy")
println(tasks.named("hello").get().name)
println(tasks.named<Copy>("copy").get().destinationDir)
Tasks of a specific type can also be accessed by using the tasks.withType() method. This makes it easy to avoid duplication of code and reduce redundancy.
Example 54. Accessing tasks by their type
build.gradle
tasks.withType(Tar).configureEach {
enabled = false
}
tasks.register('test') {
dependsOn tasks.withType(Copy)
}
build.gradle.kts
tasks.withType<Tar>().configureEach {
enabled = false
}
tasks.register("test") {
dependsOn(tasks.withType<Copy>())
}
WARNING: The following shows how to access a task by path. This is not a recommended practice anymore as it breaks task configuration avoidance and project isolation. Dependencies between projects should be declared as dependencies.
You can access tasks from any project by the task’s path using the tasks.getByPath() method. You can call the getByPath() method with a task name, a relative path, or an absolute path.
Example 55. Accessing tasks by path
project-a/build.gradle
tasks.register('hello')
build.gradle
tasks.register('hello')
println tasks.getByPath('hello').path
println tasks.getByPath(':hello').path
println tasks.getByPath('project-a:hello').path
println tasks.getByPath(':project-a:hello').path
project-a/build.gradle.kts
tasks.register("hello")
build.gradle.kts
tasks.register("hello")
println(tasks.getByPath("hello").path)
println(tasks.getByPath(":hello").path)
println(tasks.getByPath("project-a:hello").path)
println(tasks.getByPath(":project-a:hello").path)
Configuring tasks
As an example, let’s look at the Copy task provided by Gradle. To register a Copy task for your build,
you can declare in your build script:
Example 56. Registering a copy task
build.gradle
tasks.register('myCopy', Copy)
build.gradle.kts
tasks.register<Copy>("myCopy")
This registers a copy task with no default behavior. The task can be configured using its API (see
Copy). The following examples show several different ways to achieve the same configuration.
Just to be clear, realize that the name of this task is myCopy, but it is of type Copy. You can have
multiple tasks of the same type, but with different names. You’ll find this gives you a lot of power to
implement cross-cutting concerns across all tasks of a particular type.
Example 57. Configuring a task using the API
build.gradle
tasks.named('myCopy') {
from 'resources'
into 'target'
include('**/*.txt', '**/*.xml', '**/*.properties')
}
build.gradle.kts
tasks.named<Copy>("myCopy") {
from("resources")
into("target")
include("**/*.txt", "**/*.xml", "**/*.properties")
}
You can also store the task reference in a variable and use it to configure the task further at a later point in the script.
Example 58. Retrieve a task reference and use it to configure the task
build.gradle
def myCopy = tasks.named('myCopy') {
    from 'resources'
    into 'target'
}
myCopy.configure {
    include('**/*.txt', '**/*.xml', '**/*.properties')
}
build.gradle.kts
val myCopy = tasks.named<Copy>("myCopy") {
    from("resources")
    into("target")
}
myCopy {
    include("**/*.txt", "**/*.xml", "**/*.properties")
}
TIP: If you use the Kotlin DSL and the task you want to configure was added by a plugin, you can use a convenient accessor for the task. That is, instead of tasks.named("test") you can just write tasks.test.
You can also use a configuration block when you define a task.
Example 59. Defining a task with a configuration block
build.gradle
tasks.register('copy', Copy) {
from 'resources'
into 'target'
include('**/*.txt', '**/*.xml', '**/*.properties')
}
build.gradle.kts
tasks.register<Copy>("copy") {
from("resources")
into("target")
include("**/*.txt", "**/*.xml", "**/*.properties")
}
As opposed to configuring the mutable properties of a Task after creation, you can pass argument
values to the Task class’s constructor. In order to pass values to the Task constructor, you must
annotate the relevant constructor with @javax.inject.Inject.
Example 60. Task class with @Inject constructor
build.gradle
abstract class CustomTask extends DefaultTask {
    final String message
    final int number

    @Inject
    CustomTask(String message, int number) {
        this.message = message
        this.number = number
    }
}
build.gradle.kts
abstract class CustomTask @Inject constructor(
    private val message: String,
    private val number: Int
) : DefaultTask()
You can then create a task, passing the constructor arguments at the end of the parameter list.
Example 61. Registering a task with constructor arguments
build.gradle
tasks.register('myTask', CustomTask, 'hello', 42)
build.gradle.kts
tasks.register<CustomTask>("myTask", "hello", 42)
In all circumstances, the values passed as constructor arguments must be non-null. If you attempt
to pass a null value, Gradle will throw a NullPointerException indicating which runtime value is
null.
Adding dependencies to a task
There are several ways you can define the dependencies of a task. In Task dependencies you were
introduced to defining dependencies using task names. Task names can refer to tasks in the same
project as the task, or to tasks in other projects. To refer to a task in another project, you prefix the
name of the task with the path of the project it belongs to. The following is an example which adds
a dependency from project-a:taskX to project-b:taskY:
Example 62. Adding dependency on task from another project
build.gradle
project('project-a') {
tasks.register('taskX') {
dependsOn ':project-b:taskY'
doLast {
println 'taskX'
}
}
}
project('project-b') {
tasks.register('taskY') {
doLast {
println 'taskY'
}
}
}
build.gradle.kts
project("project-a") {
tasks.register("taskX") {
dependsOn(":project-b:taskY")
doLast {
println("taskX")
}
}
}
project("project-b") {
tasks.register("taskY") {
doLast {
println("taskY")
}
}
}
A dependency can also be declared using a TaskProvider object instead of a task name:
Example 63. Adding dependency using a task provider object
build.gradle
def taskX = tasks.register('taskX')
def taskY = tasks.register('taskY')
taskX.configure {
    dependsOn taskY
}
build.gradle.kts
val taskX by tasks.registering
val taskY by tasks.registering
taskX {
    dependsOn(taskY)
}
For more advanced uses, you can define a task dependency using a lazy block. When evaluated, the
block is passed the task whose dependencies are being calculated. The lazy block should return a
single Task or collection of Task objects, which are then treated as dependencies of the task. The
following example adds a dependency from taskX to all the tasks in the project whose name starts
with lib:
Example 64. Adding dependency using a lazy block
build.gradle
def taskX = tasks.register('taskX') {
    doLast {
        println 'taskX'
    }
}
// Using a Gradle Provider
taskX.configure {
    dependsOn(provider {
        tasks.findAll { task -> task.name.startsWith('lib') }
    })
}
tasks.register('lib1') {
    doLast {
        println('lib1')
    }
}
tasks.register('lib2') {
    doLast {
        println('lib2')
    }
}
tasks.register('notALib') {
    doLast {
        println('notALib')
    }
}
build.gradle.kts
val taskX by tasks.registering {
    doLast {
        println("taskX")
    }
}
// Using a Gradle Provider
taskX {
    dependsOn(provider {
        tasks.filter { task -> task.name.startsWith("lib") }
    })
}
tasks.register("lib1") {
    doLast {
        println("lib1")
    }
}
tasks.register("lib2") {
    doLast {
        println("lib2")
    }
}
tasks.register("notALib") {
    doLast {
        println("notALib")
    }
}
For more information about task dependencies, see the Task API.
Ordering tasks
In some cases it is useful to control the order in which two tasks will execute, without introducing an explicit dependency between those tasks. The primary difference between a task ordering and a task dependency is that an ordering rule does not influence which tasks will be executed, only the order in which they will be executed. Ordering rules are useful in a number of scenarios, for example:
• Enforce sequential ordering of tasks: e.g. 'build' never runs before 'clean'.
• Run build validations early in the build: e.g. validate I have the correct credentials before
starting the work for a release build.
• Get feedback faster by running quick verification tasks before long verification tasks: e.g. unit
tests should run before integration tests.
• A task that aggregates the results of all tasks of a particular type: e.g. test report task combines
the outputs of all executed test tasks.
There are two ordering rules available: “must run after” and “should run after”.
When you use the “must run after” ordering rule you specify that taskB must always run after
taskA, whenever both taskA and taskB will be run. This is expressed as taskB.mustRunAfter(taskA).
The “should run after” ordering rule is similar but less strict as it will be ignored in two situations.
Firstly if using that rule introduces an ordering cycle. Secondly when using parallel execution and
all dependencies of a task have been satisfied apart from the “should run after” task, then this task
will be run regardless of whether its “should run after” dependencies have been run or not. You
should use “should run after” where the ordering is helpful but not strictly required.
With these rules present it is still possible to execute taskA without taskB and vice-versa.
Example 65. Adding a 'must run after' task ordering
build.gradle
def taskX = tasks.register('taskX') {
    doLast {
        println 'taskX'
    }
}
def taskY = tasks.register('taskY') {
    doLast {
        println 'taskY'
    }
}
taskY.configure {
    mustRunAfter taskX
}
build.gradle.kts
val taskX by tasks.registering {
    doLast {
        println("taskX")
    }
}
val taskY by tasks.registering {
    doLast {
        println("taskY")
    }
}
taskY {
    mustRunAfter(taskX)
}
Example 66. Adding a 'should run after' task ordering
build.gradle
def taskX = tasks.register('taskX') {
    doLast {
        println 'taskX'
    }
}
def taskY = tasks.register('taskY') {
    doLast {
        println 'taskY'
    }
}
taskY.configure {
    shouldRunAfter taskX
}
build.gradle.kts
val taskX by tasks.registering {
    doLast {
        println("taskX")
    }
}
val taskY by tasks.registering {
    doLast {
        println("taskY")
    }
}
taskY {
    shouldRunAfter(taskX)
}
In the examples above, it is still possible to execute taskY without causing taskX to run:
Example 67. Task ordering does not imply task execution
Output of gradle -q taskY
> gradle -q taskY
taskY
To specify a “must run after” or “should run after” ordering between two tasks, you use the
Task.mustRunAfter(java.lang.Object...) and Task.shouldRunAfter(java.lang.Object...) methods. These
methods accept a task instance, a task name or any other input accepted by
Task.dependsOn(java.lang.Object...).
Note that “B.mustRunAfter(A)” or “B.shouldRunAfter(A)” does not imply any execution dependency
between the tasks:
• It is possible to execute tasks A and B independently. The ordering rule only has an effect when
both tasks are scheduled for execution.
• When run with --continue, it is possible for B to execute in the event that A fails.
As mentioned before, the “should run after” ordering rule will be ignored if it introduces an
ordering cycle:
Example 68. A 'should run after' task ordering is ignored if it introduces an ordering cycle
build.gradle
def taskX = tasks.register('taskX') {
    doLast {
        println 'taskX'
    }
}
def taskY = tasks.register('taskY') {
    doLast {
        println 'taskY'
    }
}
def taskZ = tasks.register('taskZ') {
    doLast {
        println 'taskZ'
    }
}
// dependencies taskX -> taskY -> taskZ plus taskZ.shouldRunAfter(taskX)
// introduce a cycle, so the "should run after" rule is ignored
taskX.configure { dependsOn(taskY) }
taskY.configure { dependsOn(taskZ) }
taskZ.configure { shouldRunAfter(taskX) }
build.gradle.kts
val taskX by tasks.registering {
    doLast {
        println("taskX")
    }
}
val taskY by tasks.registering {
    doLast {
        println("taskY")
    }
}
val taskZ by tasks.registering {
    doLast {
        println("taskZ")
    }
}
// dependencies taskX -> taskY -> taskZ plus taskZ.shouldRunAfter(taskX)
// introduce a cycle, so the "should run after" rule is ignored
taskX { dependsOn(taskY) }
taskY { dependsOn(taskZ) }
taskZ { shouldRunAfter(taskX) }
You can add a description to your task. This description is displayed when executing gradle tasks.
Example 69. Adding a description to a task
build.gradle
tasks.register('copy', Copy) {
description 'Copies the resource directory to the target directory.'
from 'resources'
into 'target'
include('**/*.txt', '**/*.xml', '**/*.properties')
}
build.gradle.kts
tasks.register<Copy>("copy") {
description = "Copies the resource directory to the target directory."
from("resources")
into("target")
include("**/*.txt", "**/*.xml", "**/*.properties")
}
Skipping tasks
Using a predicate
You can use the onlyIf() method to attach a predicate to a task. The task’s actions are only executed
if the predicate evaluates to true. You implement the predicate as a closure. The closure is passed
the task as a parameter, and should return true if the task should execute and false if the task
should be skipped. The predicate is evaluated just before the task is due to be executed.
Example 70. Skipping a task using a predicate
build.gradle
def hello = tasks.register('hello') // the task whose execution is guarded
hello.configure {
    onlyIf { !project.hasProperty('skipHello') }
}
build.gradle.kts
val hello by tasks.registering // the task whose execution is guarded
hello {
    onlyIf { !project.hasProperty("skipHello") }
}
Output of gradle hello -PskipHello
> gradle hello -PskipHello
> Task :hello SKIPPED

BUILD SUCCESSFUL in 0s
Using StopExecutionException
If the logic for skipping a task can’t be expressed with a predicate, you can use the
StopExecutionException. If this exception is thrown by an action, the further execution of this
action as well as the execution of any following action of this task is skipped. The build continues
with executing the next task.
Example 71. Skipping tasks with StopExecutionException
build.gradle
def compile = tasks.register('compile') // stands in for a real compilation task
compile.configure {
doFirst {
// Here you would put arbitrary conditions in real life.
if (true) {
throw new StopExecutionException()
}
}
}
tasks.register('myTask') {
dependsOn('compile')
doLast {
println 'I am not affected'
}
}
build.gradle.kts
val compile by tasks.registering // stands in for a real compilation task
compile {
doFirst {
// Here you would put arbitrary conditions in real life.
if (true) {
throw StopExecutionException()
}
}
}
tasks.register("myTask") {
dependsOn(compile)
doLast {
println("I am not affected")
}
}
This feature is helpful if you work with tasks provided by Gradle. It allows you to add conditional
[3]
execution of the built-in actions of such a task.
Every task has an enabled flag which defaults to true. Setting it to false prevents the execution of
any of the task’s actions. A disabled task will be labelled SKIPPED.
Example 72. Enabling and disabling tasks
build.gradle
def disableMe = tasks.register('disableMe')
disableMe.configure {
enabled = false
}
build.gradle.kts
val disableMe by tasks.registering
disableMe {
enabled = false
}
Output of gradle disableMe
> gradle disableMe
> Task :disableMe SKIPPED

BUILD SUCCESSFUL in 0s
Task timeouts
Every task has a timeout property which can be used to limit its execution time. When a task
reaches its timeout, its task execution thread is interrupted. The task will be marked as failed.
Finalizer tasks will still be run. If --continue is used, other tasks can continue running after it. Tasks
that don’t respond to interrupts can’t be timed out. All of Gradle’s built-in tasks respond to timeouts
in a timely manner.
Example 73. Specifying task timeouts
build.gradle
tasks.register("hangingTask") {
doLast {
Thread.sleep(100000)
}
timeout = Duration.ofMillis(500)
}
build.gradle.kts
tasks.register("hangingTask") {
doLast {
Thread.sleep(100000)
}
timeout.set(Duration.ofMillis(500))
}
An important part of any build tool is the ability to avoid doing work that has already been done.
Consider the process of compilation. Once your source files have been compiled, there should be no
need to recompile them unless something has changed that affects the output, such as the
modification of a source file or the removal of an output file. And compilation can take a significant
amount of time, so skipping the step when it’s not needed saves a lot of time.
Gradle supports this behavior out of the box through a feature it calls incremental build. You have
almost certainly already seen it in action: it’s active nearly every time the UP-TO-DATE text appears
next to the name of a task when you run a build. Task outcomes are described in Task outcomes.
How does incremental build work? And what does it take to make use of it in your own tasks? Let’s
take a look.
In the most common case, a task takes some inputs and generates some outputs. If we use the
compilation example from earlier, we can see that the source files are the inputs and, in the case of
Java, the generated class files are the outputs. Other inputs might include things like whether debug
information should be included.
Figure 7. Example task inputs and outputs
An important characteristic of an input is that it affects one or more outputs, as you can see from
the previous figure. Different bytecode is generated depending on the content of the source files
and the minimum version of the Java runtime you want to run the code on. That makes them task
inputs. But whether compilation has 500MB or 600MB of maximum memory available, determined
by the memoryMaximumSize property, has no impact on what bytecode gets generated. In Gradle
terminology, memoryMaximumSize is just an internal task property.
As part of incremental build, Gradle tests whether any of the task inputs or outputs has changed
since the last build. If they haven’t, Gradle can consider the task up to date and therefore skip
executing its actions. Also note that incremental build won’t work unless a task has at least one task
output, although tasks usually have at least one input as well.
What this means for build authors is simple: you need to tell Gradle which task properties are
inputs and which are outputs. If a task property affects the output, be sure to register it as an input,
otherwise the task will be considered up to date when it’s not. Conversely, don’t register properties
as inputs if they don’t affect the output, otherwise the task will potentially execute when it doesn’t
need to. Also be careful of non-deterministic tasks that may generate different output for exactly
the same inputs: these should not be configured for incremental build as the up-to-date checks
won’t work.
Let’s now look at how you can register task properties as inputs and outputs.
If you’re implementing a custom task as a class, then it takes just two steps to make it work with
incremental build:
1. Create typed properties (via getter methods) for each of your task inputs and outputs
2. Annotate each of those properties with the appropriate annotation for an input or output
Gradle supports three main categories of inputs and outputs:
• Simple values
Things like strings and numbers. More generally, a simple value can have any type that
implements Serializable.
• Filesystem types
These consist of the standard File class but also derivatives of Gradle’s FileCollection type and
anything else that can be passed to either the Project.file(java.lang.Object) method — for single
file/directory properties — or the Project.files(java.lang.Object...) method.
• Nested values
Custom types that don’t conform to the other two categories but have their own properties that
are inputs or outputs. In effect, the task inputs or outputs are nested inside these custom types.
As an example, imagine you have a task that processes templates of varying types, such as
FreeMarker, Velocity, Moustache, etc. It takes template source files and combines them with some
model data to generate populated versions of the template files.
Its inputs include:
• Template source files
• Model data
• Template engine
and its output is the set of generated files.
When you’re writing a custom task class, it’s easy to register properties as inputs or outputs via
annotations. To demonstrate, here is a skeleton task implementation with some suitable inputs and
outputs, along with their annotations:
Example 74. Custom task class
buildSrc/src/main/java/org/example/ProcessTemplates.java
package org.example;
import java.util.HashMap;
import org.gradle.api.DefaultTask;
import org.gradle.api.file.ConfigurableFileCollection;
import org.gradle.api.file.DirectoryProperty;
import org.gradle.api.provider.Property;
import org.gradle.api.tasks.*;
public abstract class ProcessTemplates extends DefaultTask {
@Input
public abstract Property<TemplateEngineType> getTemplateEngine();
@InputFiles
public abstract ConfigurableFileCollection getSourceFiles();
@Nested
public abstract TemplateData getTemplateData();
@OutputDirectory
public abstract DirectoryProperty getOutputDir();
@TaskAction
public void processTemplates() {
// ...
}
}
buildSrc/src/main/java/org/example/TemplateData.java
package org.example;
import org.gradle.api.provider.MapProperty;
import org.gradle.api.provider.Property;
import org.gradle.api.tasks.Input;
public abstract class TemplateData {
@Input
public abstract Property<String> getName();
@Input
public abstract MapProperty<String, String> getVariables();
}
Output of gradle processTemplates
BUILD SUCCESSFUL in 0s
3 actionable tasks: 3 up-to-date
There’s plenty to talk about in this example, so let’s work through each of the input and output
properties in turn:
• templateEngine
Represents which engine to use when processing the source templates, e.g. FreeMarker,
Velocity, etc. You could implement this as a string, but in this case we have gone for a custom
enum as it provides greater type information and safety. Since enums implement Serializable
automatically, we can treat this as a simple value and use the @Input annotation, just as we
would with a String property.
• sourceFiles
The source templates that the task will be processing. Single files and collections of files need
their own special annotations. In this case, we’re dealing with a collection of input files and so
we use the @InputFiles annotation. You’ll see more file-oriented annotations in a table later.
• templateData
For this example, we’re using a custom class to represent the model data. However, it does not
implement Serializable, so we can’t use the @Input annotation. That’s not a problem as the
properties within TemplateData — a string and a hash map with serializable type parameters —
are serializable and can be annotated with @Input. We use @Nested on templateData to let Gradle
know that this is a value with nested input properties.
• outputDir
The directory where the generated files go. As with input files, there are several annotations for
output files and directories. A property representing a single directory requires
@OutputDirectory. You’ll learn about the others soon.
These annotated properties mean that Gradle will skip the task if none of the source files, template
engine, model data or generated files has changed since the previous time Gradle executed the task.
This will often save a significant amount of time. You can learn how Gradle detects changes later.
This example is particularly interesting because it works with collections of source files. What
happens if only one source file changes? Does the task process all the source files again or just the
modified one? That depends on the task implementation. If the latter, then the task itself is
incremental, but that’s a different feature to the one we’re discussing here. Gradle does help task
implementers with this via its incremental task inputs feature.
Now that you have seen some of the input and output annotations in practice, let’s take a look at all
the annotations available to you and when you should use them. The table below lists the available
annotations and the corresponding property type you can use with each one.
[Fragment from the description of compile classpath normalization in the annotations table — kinds of classpath changes that are ignored:]
• Changes to debug information, for example when a change to a comment affects the line numbers in class debug information.
• Changes to directories, including directory entries in Jars.

Annotation          Expected property type                   Description
@OutputFile         File*                                    A single output file (not directory)
@OutputDirectory    File*                                    A single output directory (not file)
@OutputFiles        Map<String, File>** or Iterable<File>*   An iterable or map of output files. Using a file tree turns caching off for the task.
@OutputDirectories  Map<String, File>** or Iterable<File>*   An iterable of output directories. Using a file tree turns caching off for the task.
@Destroys           File or Iterable<File>*                  Specifies one or more files that are removed by this task. Note that a task can define either inputs/outputs or destroyables, but not both.

NOTE: The @CompileClasspath annotation was introduced in Gradle 3.4. To stay compatible with Gradle 3.3 and 3.2, compile classpath properties should also be annotated with @Classpath. For compatibility with Gradle versions before 3.2 the property should also be annotated with @InputFiles.
The Console and Internal annotations in the table are special cases as they don’t declare either task
inputs or task outputs. So why use them? It’s so that you can take advantage of the Java Gradle
Plugin Development plugin to help you develop and publish your own plugins. This plugin checks
whether any properties of your custom task classes lack an incremental build annotation. This
protects you from forgetting to add an appropriate annotation during development.
Besides @InputFiles, for JVM-related tasks Gradle understands the concept of classpath inputs. Both
runtime and compile classpaths are treated differently when Gradle is looking for changes.
As opposed to input properties annotated with @InputFiles, for classpath properties the order of the entries in the file collection matters. On the other hand, the names and paths of the directories and jar files on the classpath itself are ignored. Timestamps and the order of class files and resources inside jar files on a classpath are ignored, too; thus recreating a jar file with different file dates will not make the task out of date.
Runtime classpaths are marked with @Classpath, and they offer further customization via classpath
normalization.
Input properties annotated with @CompileClasspath are considered Java compile classpaths.
Additionally to the aforementioned general classpath rules, compile classpaths ignore changes to
everything but class files. Gradle uses the same class analysis described in Java compile avoidance
to further filter changes that don’t affect the classes’ ABIs. This means that changes which only touch
the implementation of classes do not make the task out of date.
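To make this concrete, here is a minimal sketch of a task class declaring one property of each classpath kind. The task and property names are hypothetical; only the @Classpath and @CompileClasspath annotations and the property types come from the Gradle API:
import org.gradle.api.DefaultTask
import org.gradle.api.file.ConfigurableFileCollection
import org.gradle.api.tasks.Classpath
import org.gradle.api.tasks.CompileClasspath
import org.gradle.api.tasks.TaskAction

abstract class InstrumentClasses : DefaultTask() {

    // Runtime classpath: the order of entries matters, while timestamps and
    // the way jar files were repackaged are ignored.
    @get:Classpath
    abstract val runtimeClasspath: ConfigurableFileCollection

    // Compile classpath: only ABI-relevant changes to class files make the
    // task out of date.
    @get:CompileClasspath
    abstract val compileClasspath: ConfigurableFileCollection

    @TaskAction
    fun instrument() {
        // A real task would do its work here and also declare outputs,
        // e.g. via an @OutputDirectory property.
    }
}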
Nested inputs
When analyzing @Nested task properties for declared input and output sub-properties Gradle uses
the type of the actual value. Hence it can discover all sub-properties declared by a runtime sub-
type.
When adding @Nested to a Provider, the value of the Provider is treated as a nested input.
When adding @Nested to an iterable, each element is treated as a separate nested input. Each nested
input in the iterable is assigned a name, which by default is the dollar sign followed by the index in
the iterable, e.g. $2. If an element of the iterable implements Named, then the name is used as
property name. The ordering of the elements in the iterable is crucial for reliable up-to-date
checks and caching if not all of the elements implement Named. Multiple elements which have the
same name are not allowed.
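As a sketch of these rules (all type names are hypothetical; Named, @Nested, @Input and @Internal are the real Gradle types), elements implementing Named are identified by their name instead of their index:
import org.gradle.api.DefaultTask
import org.gradle.api.Named
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.Internal
import org.gradle.api.tasks.Nested
import org.gradle.api.tasks.TaskAction

// Hypothetical nested element; because it implements Named, its name rather
// than its position identifies it in up-to-date checks.
class EngineSpec(private val engineName: String, @get:Input val version: String) : Named {
    @Internal
    override fun getName(): String = engineName
}

abstract class RenderTask : DefaultTask() {
    // Each EngineSpec in the list becomes a nested input named after getName().
    @get:Nested
    val engines = mutableListOf<EngineSpec>()

    @TaskAction
    fun render() { /* ... */ }
}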
When adding @Nested to a map, then for each value a nested input is added, using the key as name.
The type and classpath of nested inputs are tracked, too. This ensures that changes to the implementation of a nested input cause the build to be out of date. It also makes it possible to add user-provided code as an input, e.g. by annotating an @Action property with @Nested. Note that any inputs to such actions should be tracked, either by annotated properties on the action or by manually registering them with the task.
Using nested inputs allows richer modeling and extensibility for tasks, as e.g. shown by
Test.getJvmArgumentProviders().
This allows us to model the JaCoCo Java agent, thus declaring the necessary JVM arguments and
providing the inputs and outputs to Gradle:
JacocoAgent.java
class JacocoAgent implements CommandLineArgumentProvider {
    private final JacocoTaskExtension jacoco;

    public JacocoAgent(JacocoTaskExtension jacoco) {
        this.jacoco = jacoco;
    }

    @Nested
    @Optional
    public JacocoTaskExtension getJacoco() {
        return jacoco.isEnabled() ? jacoco : null;
    }

    @Override
    public Iterable<String> asArguments() {
        return jacoco.isEnabled() ? ImmutableList.of(jacoco.getAsJvmArg()) : Collections.<String>emptyList();
    }
}
test.getJvmArgumentProviders().add(new JacocoAgent(extension));
For this to work, JacocoTaskExtension needs to have the correct input and output annotations.
The approach works for Test JVM arguments, since Test.getJvmArgumentProviders() is an Iterable
annotated with @Nested.
There are other task types where this kind of nested input is available:
• GroovyCompile.getGroovyOptions().getForkOptions().getJvmArgumentProviders() - model
Groovy compiler daemon command line arguments
Runtime validation
When executing the build, Gradle checks whether task types are declared with the proper annotations. It tries to identify problems where, for example, annotations are used on incompatible types or on setters. Any getter not annotated with an input/output annotation is also flagged. These problems then fail the build or are turned into deprecation warnings when the task is executed.
Tasks that have a validation warning are executed without any optimizations.
Specifically, they can never be:
• up-to-date,
• loaded from or stored in the build cache,
• executed incrementally.
The in-memory representation of the file system state (aka Virtual File System) is
also invalidated before an invalid task is executed.
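For instance, a task like the following hypothetical one is flagged by validation, because its property carries no input/output annotation:
import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

abstract class MisannotatedTask : DefaultTask() {
    // Flagged by validation: a property without an input/output annotation.
    // The fix is to add @get:Input if it affects the output, or @get:Internal
    // if it does not.
    var threshold: Int = 0

    @TaskAction
    fun run() { /* ... */ }
}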
Runtime API
Custom task classes are an easy way to bring your own build logic into the arena of incremental
build, but you don’t always have that option. That’s why Gradle also provides an alternative API
that can be used with any tasks, which we look at next.
When you don’t have access to the source for a custom task class, there is no way to add any of the
annotations we covered in the previous section. Fortunately, Gradle provides a runtime API for
scenarios just like that. It can also be used for ad-hoc tasks, as you’ll see next.
This runtime API is provided through a couple of aptly named properties that are available on every Gradle task:
• Task.getInputs() of type TaskInputs
• Task.getOutputs() of type TaskOutputs
• Task.getDestroyables() of type TaskDestroyables
These objects have methods that allow you to specify files, directories and values which constitute
the task’s inputs and outputs. In fact, the runtime API has almost feature parity with the
annotations. All it lacks is an equivalent for @Nested.
Let’s take the template processing example from before and see how it would look as an ad-hoc task
that uses the runtime API:
Example 75. Ad-hoc task
build.gradle
tasks.register('processTemplatesAdHoc') {
inputs.property('engine', TemplateEngineType.FREEMARKER)
inputs.files(fileTree('src/templates'))
.withPropertyName('sourceFiles')
.withPathSensitivity(PathSensitivity.RELATIVE)
inputs.property('templateData.name', 'docs')
inputs.property('templateData.variables', [year: '2013'])
outputs.dir(layout.buildDirectory.dir('genOutput2'))
.withPropertyName('outputDir')
doLast {
// Process the templates here
}
}
build.gradle.kts
tasks.register("processTemplatesAdHoc") {
inputs.property("engine", TemplateEngineType.FREEMARKER)
inputs.files(fileTree("src/templates"))
.withPropertyName("sourceFiles")
.withPathSensitivity(PathSensitivity.RELATIVE)
inputs.property("templateData.name", "docs")
inputs.property("templateData.variables", mapOf("year" to "2013"))
outputs.dir(layout.buildDirectory.dir("genOutput2"))
.withPropertyName("outputDir")
doLast {
// Process the templates here
}
}
BUILD SUCCESSFUL in 0s
3 actionable tasks: 3 executed
As before, there’s much to talk about. To begin with, you should really write a custom task class for
this as it’s a non-trivial implementation that has several configuration options. In this case, there
are no task properties to store the root source folder, the location of the output directory or any of
the other settings. That’s deliberate to highlight the fact that the runtime API doesn’t require the
task to have any state. In terms of incremental build, the above ad-hoc task will behave the same as
the custom task class.
All the input and output definitions are done through the methods on inputs and outputs, such as
property(), files(), and dir(). Gradle performs up-to-date checks on the argument values to
determine whether the task needs to run again or not. Each method corresponds to one of the
incremental build annotations, for example inputs.property() maps to @Input and outputs.dir()
maps to @OutputDirectory.
The runtime API can also be used to declare destroyables:
Example 76. Ad-hoc task declaring a destroyable
build.gradle
tasks.register('removeTempDir') {
destroyables.register(layout.projectDirectory.dir('tmpDir'))
doLast {
delete(layout.projectDirectory.dir('tmpDir'))
}
}
build.gradle.kts
tasks.register("removeTempDir") {
destroyables.register(layout.projectDirectory.dir("tmpDir"))
doLast {
delete(layout.projectDirectory.dir("tmpDir"))
}
}
One notable difference between the runtime API and the annotations is the lack of a method that
corresponds directly to @Nested. That’s why the example uses two property() declarations for the
template data, one for each TemplateData property. You should utilize the same technique when
using the runtime API with nested values. Any given task can either declare destroyables or
inputs/outputs, but cannot declare both.
Fine-grained configuration
The runtime API methods only allow you to declare your inputs and outputs in themselves.
However, the file-oriented ones return a builder — of type TaskInputFilePropertyBuilder — that
lets you provide additional information about those inputs and outputs.
You can learn about all the options provided by the builder in its API documentation, but we’ll
show you a simple example here to give you an idea of what you can do.
Let’s say we don’t want to run the processTemplates task if there are no source files, regardless of
whether it’s a clean build or not. After all, if there are no source files, there’s nothing for the task to
do. The builder allows us to configure this like so:
Example 77. Using skipWhenEmpty() via the runtime API
build.gradle
tasks.register('processTemplatesAdHocSkipWhenEmpty') {
// ...
inputs.files(fileTree('src/templates') {
include '**/*.fm'
})
.skipWhenEmpty()
.withPropertyName('sourceFiles')
.withPathSensitivity(PathSensitivity.RELATIVE)
.ignoreEmptyDirectories()
// ...
}
build.gradle.kts
tasks.register("processTemplatesAdHocSkipWhenEmpty") {
// ...
inputs.files(fileTree("src/templates") {
include("**/*.fm")
})
.skipWhenEmpty()
.withPropertyName("sourceFiles")
.withPathSensitivity(PathSensitivity.RELATIVE)
.ignoreEmptyDirectories()
// ...
}
BUILD SUCCESSFUL in 0s
3 actionable tasks: 2 executed, 1 up-to-date
The TaskInputs.files() method returns a builder that has a skipWhenEmpty() method. Invoking this method is equivalent to annotating the property with @SkipWhenEmpty.
Now that you have seen both the annotations and the runtime API, you may be wondering which
API you should be using. Our recommendation is to use the annotations wherever possible, and it’s
sometimes worth creating a custom task class just so that you can make use of them. The runtime
API is more for situations in which you can’t use the annotations.
Another type of example involves registering additional inputs and outputs for instances of a
custom task class. For example, imagine that the ProcessTemplates task also needs to read
src/headers/headers.txt (e.g. because it is included from one of the sources). You’d want Gradle to
know about this input file, so that it can re-execute the task whenever the contents of this file
change. With the runtime API you can do just that:
Example 78. Using the runtime API with a custom task type
build.gradle
tasks.register('processTemplatesWithExtraInputs', ProcessTemplates) {
// ...
inputs.file('src/headers/headers.txt')
.withPropertyName('headers')
.withPathSensitivity(PathSensitivity.NONE)
}
build.gradle.kts
tasks.register<ProcessTemplates>("processTemplatesWithExtraInputs") {
// ...
inputs.file("src/headers/headers.txt")
.withPropertyName("headers")
.withPathSensitivity(PathSensitivity.NONE)
}
Using the runtime API like this is a little like using doLast() and doFirst() to attach extra actions to
a task, except in this case we’re attaching information about inputs and outputs.
WARNING: If the task type is already using the incremental build annotations, registering inputs or
outputs with the same property names will result in an error.
Once you declare a task’s formal inputs and outputs, Gradle can then infer things about those
properties. For example, if an input of one task is set to the output of another, that means the first
task depends on the second, right? Gradle knows this and can act upon it.
We’ll look at this feature next and also some other features that come from Gradle knowing things
about inputs and outputs.
Inferred task dependencies
Consider an archive task that packages the output of the processTemplates task. A build author will
see that the archive task obviously requires processTemplates to run first and so may add an explicit
dependsOn. However, if you define the archive task like so:
Example 79. Inferred task dependency via task outputs
build.gradle
tasks.register('packageFiles', Zip) {
from processTemplates.map { it.outputs }
}
build.gradle.kts
tasks.register<Zip>("packageFiles") {
from(processTemplates.map { it.outputs })
}
BUILD SUCCESSFUL in 0s
5 actionable tasks: 4 executed, 1 up-to-date
Gradle will automatically make packageFiles depend on processTemplates. It can do this because it’s
aware that one of the inputs of packageFiles requires the output of the processTemplates task. We
call this an inferred task dependency. The same dependency can be expressed even more concisely:
Example 80. Inferred task dependency via a task argument
build.gradle
tasks.register('packageFiles2', Zip) {
from processTemplates
}
build.gradle.kts
tasks.register<Zip>("packageFiles2") {
from(processTemplates)
}
BUILD SUCCESSFUL in 0s
5 actionable tasks: 4 executed, 1 up-to-date
This is because the from() method can accept a task object as an argument. Behind the scenes,
from() uses the project.files() method to wrap the argument, which in turn exposes the task’s
formal outputs as a file collection. In other words, it’s a special case!
Input and output validation
The incremental build annotations provide enough information for Gradle to perform some basic
validation on the annotated properties. In particular, it does the following for each property before
the task executes:
• @InputFile - verifies that the property has a value and that the path corresponds to a file (not a
directory) that exists.
• @InputDirectory - same as for @InputFile, except the path must correspond to a directory.
• @OutputDirectory - verifies that the path doesn’t match a file and also creates the directory if it
doesn’t already exist.
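To illustrate these checks, here is a minimal sketch of a task class, with hypothetical names, whose properties Gradle validates in exactly this way:
build.gradle.kts
abstract class Summarize : DefaultTask() {
    @get:InputFile
    abstract val source: RegularFileProperty      // must be set and point to an existing file

    @get:OutputDirectory
    abstract val reportDir: DirectoryProperty     // Gradle creates this directory if missing

    @TaskAction
    fun summarize() {
        // read 'source' and write a summary into 'reportDir'
    }
}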
If one task produces an output in a location and another task consumes that location by referring to
it as an input, then Gradle checks that the consumer task depends on the producer task. When the
producer and the consumer tasks are executing at the same time, the build fails to avoid capturing
an incorrect state.
Such validation improves the robustness of the build, allowing you to identify issues related to
inputs and outputs quickly.
You will occasionally want to disable some of this validation, specifically when an input file may
validly not exist. That’s why Gradle provides the @Optional annotation: you use it to tell Gradle that
a particular input is optional and therefore the build should not fail if the corresponding file or
directory doesn’t exist.
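A minimal sketch of an optional input, again with hypothetical names:
build.gradle.kts
abstract class RenderPages : DefaultTask() {
    @get:Optional
    @get:InputFile
    abstract val overrides: RegularFileProperty   // validation passes even when no value is set

    @TaskAction
    fun render() {
        if (overrides.isPresent) {
            // apply the overrides file
        }
    }
}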
Continuous build
Another benefit of defining task inputs and outputs is continuous build. Since Gradle knows what
files a task depends on, it can automatically run a task again if any of its inputs change. By
activating continuous build when you run Gradle — through the --continuous or -t options — you
will put Gradle into a state in which it continually checks for changes and executes the requested
tasks when it encounters such changes.
You can find out more about this feature in Continuous build.
Task parallelism
One last benefit of defining task inputs and outputs is that Gradle can use this information to make
decisions about how to run tasks when the "--parallel" option is used. For instance, Gradle will
inspect the outputs of tasks when selecting the next task to run and will avoid concurrent execution
of tasks that write to the same output directory. Similarly, Gradle will use the information about
what files a task destroys (e.g. specified by the Destroys annotation) and avoid running a task that
removes a set of files while another task is running that consumes or creates those same files (and
vice versa). It can also determine that a task that creates a set of files has already run and that a
task that consumes those files has yet to run and will avoid running a task that removes those files
in between. By providing task input and output information in this way, Gradle can infer
creation/consumption/destruction relationships between tasks and can ensure that task execution
does not violate those relationships.
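As an illustrative sketch, a task can declare the files it removes with the @Destroys annotation; the class and property names here are made up:
build.gradle.kts
abstract class CleanGeneratedSources : DefaultTask() {
    @get:Destroys
    abstract val generatedDir: ConfigurableFileCollection   // the files this task removes

    @TaskAction
    fun clean() {
        generatedDir.forEach { it.deleteRecursively() }
    }
}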
How does it work?
Before a task is executed for the first time, Gradle takes a fingerprint of the inputs. This fingerprint
contains the paths of input files and a hash of the contents of each file. Gradle then executes the
task. If the task completes successfully, Gradle takes a fingerprint of the outputs. This fingerprint
contains the set of output files and a hash of the contents of each file. Gradle persists both
fingerprints for the next time the task is executed.
Each time after that, before the task is executed, Gradle takes a new fingerprint of the inputs and
outputs. If the new fingerprints are the same as the previous fingerprints, Gradle assumes that the
outputs are up to date and skips the task. If they are not the same, Gradle executes the task. Gradle
persists both fingerprints for the next time the task is executed.
If the stats of a file (i.e. lastModified and size) did not change, Gradle will reuse the file’s fingerprint
from the previous run. That means that Gradle does not detect changes when the stats of a file did
not change.
Gradle also considers the code of the task as part of the inputs to the task. When a task, its actions,
or its dependencies change between executions, Gradle considers the task as out-of-date.
Gradle understands if a file property (e.g. one holding a Java classpath) is order-sensitive. When
comparing the fingerprint of such a property, even a change in the order of the files will result in
the task becoming out-of-date.
Note that if a task has an output directory specified, any files added to that directory since the last
time it was executed are ignored and will NOT cause the task to be out of date. This is so unrelated
tasks may share an output directory without interfering with each other. If this is not the behaviour
you want for some reason, consider using TaskOutputs.upToDateWhen(groovy.lang.Closure).
Note also that changing the availability of an unavailable file (e.g. modifying the target of a broken
symlink to a valid file, or vice versa) will be detected and handled by the up-to-date check.
The inputs for the task are also used to calculate the build cache key, which is used to load task
outputs when the build cache is enabled. For more details see Task output caching.
NOTE: For tracking the implementation of tasks, task actions and nested inputs, Gradle uses the
class name and an identifier for the classpath which contains the implementation. There are some
situations when Gradle is not able to track the implementation precisely:

Unknown classloader
When the classloader which loaded the implementation has not been created by Gradle, the
classpath cannot be determined.

Java lambda
Java lambda classes are created at runtime with a non-deterministic classname. Therefore, the
class name does not identify the implementation of the lambda and changes between different
Gradle runs.

When the implementation of a task, task action or a nested input cannot be tracked precisely,
Gradle disables any caching for the task. That means that the task will never be up-to-date or
loaded from the build cache.
Advanced techniques
Everything you’ve seen so far in this section will cover most of the use cases you’ll encounter, but
there are some scenarios that need special treatment. We’ll present a few of those next with the
appropriate solutions.
Adding your own cached input/output methods
Have you ever wondered how the from() method of the Copy task works? It’s not annotated with
@InputFiles and yet any files passed to it are treated as formal inputs of the task. What’s
happening?
The implementation is quite simple and you can use the same technique for your own tasks to
improve their APIs. Write your methods so that they add files directly to the appropriate annotated
property. As an example, here’s how to add a sources() method to the custom ProcessTemplates class
we introduced earlier:
Example 81. Declaring a method to add task inputs
build.gradle
tasks.register('processTemplates', ProcessTemplates) {
templateEngine = TemplateEngineType.FREEMARKER
templateData.name = 'test'
templateData.variables = [year: '2012']
outputDir = file(layout.buildDirectory.dir('genOutput'))
sources fileTree('src/templates')
}
build.gradle.kts
tasks.register<ProcessTemplates>("processTemplates") {
templateEngine.set(TemplateEngineType.FREEMARKER)
templateData.name.set("test")
templateData.variables.set(mapOf("year" to "2012"))
outputDir.set(file(layout.buildDirectory.dir("genOutput")))
sources(fileTree("src/templates"))
}
ProcessTemplates.java
// ...
public void sources(FileCollection sourceFiles) {
getSourceFiles().from(sourceFiles);
}
// ...
}
Output of gradle processTemplates
BUILD SUCCESSFUL in 0s
3 actionable tasks: 3 executed
In other words, as long as you add values and files to formal task inputs and outputs during the
configuration phase, they will be treated as such regardless of where in the build you add them.
If we want to support tasks as arguments as well and treat their outputs as the inputs, we can use
the project.layout.files() method like so:
Example 82. Declaring a method to add a task as an input
build.gradle
tasks.register('processTemplates2', ProcessTemplates) {
// ...
sources copyTemplates
}
build.gradle.kts
tasks.register<ProcessTemplates>("processTemplates2") {
// ...
sources(copyTemplates)
}
ProcessTemplates.java
// ...
public void sources(TaskProvider<?> inputTask) {
getSourceFiles().from(getProject().getLayout().files(inputTask));
}
// ...
BUILD SUCCESSFUL in 0s
4 actionable tasks: 4 executed
This technique can make your custom task easier to use and result in cleaner build files. As an
added benefit, our use of getProject().getLayout().files() means that our custom method can set
up an inferred task dependency.
One last thing to note: if you are developing a task that takes collections of source files as inputs,
like this example, consider using the built-in SourceTask. It will save you having to implement some
of the plumbing that we put into ProcessTemplates.
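For instance, a minimal sketch of a custom task built on SourceTask might look like this (names are hypothetical):
build.gradle.kts
abstract class MinifySources : SourceTask() {
    // 'source' is inherited from SourceTask and already carries the
    // appropriate input annotations, so only the output needs declaring.
    @get:OutputDirectory
    abstract val outputDir: DirectoryProperty

    @TaskAction
    fun minify() {
        source.forEach { file ->
            // write a minified copy of 'file' into outputDir
        }
    }
}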
Linking an @OutputDirectory to @InputFiles
When you want to link the output of one task to the input of another, the types often match and a
simple property assignment will provide that link. For example, a File output property can be
assigned to a File input.
Unfortunately, this approach breaks down when you want the files in a task’s @OutputDirectory (of
type File) to become the source for another task’s @InputFiles property (of type FileCollection).
Since the two have different types, property assignment won’t work.
As an example, imagine you want to use the output of a Java compilation task — via the
destinationDir property — as the input of a custom task that instruments a set of files containing
Java bytecode. This custom task, which we’ll call Instrument, has a classFiles property annotated
with @InputFiles. You might initially try to configure the task like so:
Example 83. Failed attempt at setting up an inferred task dependency
build.gradle
plugins {
id 'java-library'
}
tasks.register('badInstrumentClasses', Instrument) {
classFiles.from fileTree(tasks.named('compileJava').map { it.destinationDir })
destinationDir = file(layout.buildDirectory.dir('instrumented'))
}
build.gradle.kts
plugins {
id("java-library")
}
tasks.register<Instrument>("badInstrumentClasses") {
classFiles.from(fileTree(tasks.compileJava.map { it.destinationDir }))
destinationDir.set(file(layout.buildDirectory.dir("instrumented")))
}
BUILD SUCCESSFUL in 0s
3 actionable tasks: 2 executed, 1 up-to-date
There’s nothing obviously wrong with this code, but you can see from the console output that the
compilation task is missing. In this case you would need to add an explicit task dependency
between badInstrumentClasses and compileJava via dependsOn. The use of fileTree() means that Gradle
can’t infer the task dependency itself.
One solution is to use the TaskOutputs.files property, as demonstrated by the following example:
Example 84. Setting up an inferred task dependency between output dir and input files
build.gradle
tasks.register('instrumentClasses', Instrument) {
classFiles.from tasks.named('compileJava').map { it.outputs.files }
destinationDir = file(layout.buildDirectory.dir('instrumented'))
}
build.gradle.kts
tasks.register<Instrument>("instrumentClasses") {
classFiles.from(tasks.compileJava.map { it.outputs.files })
destinationDir.set(file(layout.buildDirectory.dir("instrumented")))
}
BUILD SUCCESSFUL in 0s
5 actionable tasks: 4 executed, 1 up-to-date
Alternatively, you can get Gradle to access the appropriate property itself by using one of
project.files(), project.layout.files() or project.objects.fileCollection() in place of
project.fileTree():
Example 85. Setting up an inferred task dependency with layout.files()
build.gradle
tasks.register('instrumentClasses2', Instrument) {
classFiles.from layout.files(tasks.named('compileJava'))
destinationDir = file(layout.buildDirectory.dir('instrumented'))
}
build.gradle.kts
tasks.register<Instrument>("instrumentClasses2") {
classFiles.from(layout.files(tasks.compileJava))
destinationDir.set(file(layout.buildDirectory.dir("instrumented")))
}
BUILD SUCCESSFUL in 0s
5 actionable tasks: 4 executed, 1 up-to-date
Remember that files(), layout.files() and objects.fileCollection() can take tasks as arguments,
whereas fileTree() cannot.
The downside of this approach is that all file outputs of the source task become the input files of the
target — instrumentClasses in this case. That’s fine as long as the source task only has a single file-
based output, like the JavaCompile task. But if you have to link just one output property among
several, then you need to explicitly tell Gradle which task generates the input files using the builtBy
method:
Example 86. Setting up an inferred task dependency with builtBy()
build.gradle
tasks.register('instrumentClassesBuiltBy', Instrument) {
classFiles.from fileTree(tasks.named('compileJava').map { it.destinationDir }) {
builtBy tasks.named('compileJava')
}
destinationDir = file(layout.buildDirectory.dir('instrumented'))
}
build.gradle.kts
tasks.register<Instrument>("instrumentClassesBuiltBy") {
classFiles.from(fileTree(tasks.compileJava.map { it.destinationDir }) {
builtBy(tasks.compileJava)
})
destinationDir.set(file(layout.buildDirectory.dir("instrumented")))
}
BUILD SUCCESSFUL in 0s
5 actionable tasks: 4 executed, 1 up-to-date
You can of course just add an explicit task dependency via dependsOn, but the above approach
provides more semantic meaning, explaining why compileJava has to run beforehand.
Disabling up-to-date checks
Gradle automatically handles up-to-date checks for output files and directories, but what if the task
output is something else entirely? Perhaps it’s an update to a web service or a database table. Or
sometimes you have a task which should always run.
That’s where the doNotTrackState() method on Task comes in. You can use it to disable up-to-date
checks completely for a task, like so:
Example 87. Ignoring up-to-date checks
build.gradle
tasks.register('alwaysInstrumentClasses', Instrument) {
classFiles.from layout.files(tasks.named('compileJava'))
destinationDir = file(layout.buildDirectory.dir('instrumented'))
doNotTrackState("Instrumentation needs to re-run every time")
}
build.gradle.kts
tasks.register<Instrument>("alwaysInstrumentClasses") {
classFiles.from(layout.files(tasks.compileJava))
destinationDir.set(file(layout.buildDirectory.dir("instrumented")))
doNotTrackState("Instrumentation needs to re-run every time")
}
BUILD SUCCESSFUL in 0s
4 actionable tasks: 1 executed, 3 up-to-date
BUILD SUCCESSFUL in 0s
4 actionable tasks: 1 executed, 3 up-to-date
If you are writing your own task that always should run, then you can also use the @UntrackedTask
annotation on the task class instead of calling Task.doNotTrackState().
Integrate an external tool which does its own up-to-date checking
Sometimes you want to integrate an external tool like Git or Npm, both of which do their own up-to-
date checking. In that case it doesn’t make much sense for Gradle to also do up-to-date checks. You
can disable Gradle’s up-to-date checks by using the @UntrackedTask annotation on the task wrapping
the tool. Alternatively, you can use the runtime API method Task.doNotTrackState().
For example, let’s say you want to implement a task which clones a Git repository.
Example 88. Task for Git clone
buildSrc/src/main/java/org/example/GitClone.java
@UntrackedTask(because = "Git tracks the state") ①
public abstract class GitClone extends DefaultTask {
@Input
public abstract Property<String> getRemoteUri();
@Input
public abstract Property<String> getCommitId();
@OutputDirectory
public abstract DirectoryProperty getDestinationDir();
@TaskAction
public void gitClone() throws IOException {
File destinationDir = getDestinationDir().get().getAsFile().getAbsoluteFile(); ②
String remoteUri = getRemoteUri().get();
// Fetch origin or clone and checkout
// ...
}
}
build.gradle
tasks.register("cloneGradleProfiler", GitClone) {
destinationDir = layout.buildDirectory.dir("gradle-profiler")
③
remoteUri = "https://github.com/gradle/gradle-profiler.git"
commitId = "d6c18a21ca6c45fd8a9db321de4478948bdf801b"
}
build.gradle.kts
tasks.register<GitClone>("cloneGradleProfiler") {
destinationDir.set(layout.buildDirectory.dir("gradle-profiler"))
③
remoteUri.set("https://github.com/gradle/gradle-profiler.git")
commitId.set("d6c18a21ca6c45fd8a9db321de4478948bdf801b")
}
① Declare the task as untracked.
② Use the output directory to run the external tool.
③ Add the task and configure the output directory in your build.
Input normalization
For up-to-date checks and the build cache, Gradle needs to determine whether two task input
properties have the same value. To do so, Gradle first normalizes both inputs and then compares the
result. For example, for a compile classpath, Gradle extracts the ABI signature from the classes on
the classpath and then compares signatures between the last Gradle run and the current Gradle run
as described in Java compile avoidance.
Normalization applies to all zip files on the classpath (e.g. jars, wars, aars, apks, etc). This allows
Gradle to recognize when two zip files are functionally the same, even though the zip files
themselves might be slightly different due to metadata (such as timestamps or file order).
Normalization applies not only to zip files directly on the classpath, but also to zip files nested
inside directories or inside other zip files on the classpath.
Runtime classpath normalization
It is possible to customize Gradle’s built-in strategy for runtime classpath normalization. All inputs
annotated with @Classpath are considered to be runtime classpaths.
Let’s say you want to add a file build-info.properties to all your produced jar files which contains
information about the build, e.g. the timestamp when the build started or some ID to identify the CI
job that published the artifact. This file is only for auditing purposes, and has no effect on the
outcome of running tests. Nonetheless, this file is part of the runtime classpath for the test task and
changes on every build invocation. Therefore, the test would never be up-to-date or pulled from
the build cache. In order to benefit from incremental builds again, you can tell Gradle to ignore
this file on the runtime classpath at the project level by using
Project.normalization(org.gradle.api.Action) (in the consuming project):
Example 89. Runtime classpath normalization
build.gradle
normalization {
runtimeClasspath {
ignore 'build-info.properties'
}
}
build.gradle.kts
normalization {
runtimeClasspath {
ignore("build-info.properties")
}
}
If adding such a file to your jar files is something you do for all of the projects in your build, and
you want to filter this file for all consumers, you should consider configuring such normalization in
a convention plugin to share it between subprojects.
The effect of this configuration would be that changes to build-info.properties would be ignored
for up-to-date checks and build cache key calculations. Note that this will not change the runtime
behavior of the test task — i.e. any test is still able to load build-info.properties and the runtime
classpath is still the same as before.
Properties file normalization
By default, properties files (i.e. files that end in a .properties extension) will be normalized to
ignore differences in comments, whitespace and the order of properties. Gradle does this by
loading the properties files and only considering the individual properties during up-to-date checks
or build cache key calculations.
It is sometimes the case, though, that certain properties have a runtime impact, while others do not.
If a property is changing that does not have an impact on the runtime classpath, it may be desirable
to exclude it from up-to-date checks and build cache key calculations. However, excluding the
entire file would also exclude the properties that do have a runtime impact. In this case, properties
can be excluded selectively from any or all properties files on the runtime classpath.
A rule for ignoring properties can be applied to a specific set of files using the patterns described in
RuntimeClasspathNormalization. In the event that a file matches a rule, but cannot be loaded as a
properties file (e.g. because it is not formatted properly or uses a non-standard encoding), it will be
incorporated into the up-to-date or build cache key calculation as a normal file. In other words, if
the file cannot be loaded as a properties file, any changes to whitespace, property order, or
comments may cause the task to become out-of-date or cause a cache miss.
Example 90. Ignore a property in selected properties files
build.gradle
normalization {
runtimeClasspath {
properties('**/build-info.properties') {
ignoreProperty 'timestamp'
}
}
}
build.gradle.kts
normalization {
runtimeClasspath {
properties("**/build-info.properties") {
ignoreProperty("timestamp")
}
}
}
Example 91. Ignore a property in all properties files
build.gradle
normalization {
runtimeClasspath {
properties {
ignoreProperty 'timestamp'
}
}
}
build.gradle.kts
normalization {
runtimeClasspath {
properties {
ignoreProperty("timestamp")
}
}
}
Java META-INF normalization
For files in the META-INF directory of jar archives it’s not always possible to ignore files completely
due to their runtime impact.
Manifest files within META-INF are normalized to ignore comments, whitespace and order
differences. Manifest attribute names are compared case-and-order insensitively. Manifest
properties files are normalized according to Properties File Normalization.
Example 92. Ignore META-INF manifest attributes
build.gradle
normalization {
runtimeClasspath {
metaInf {
ignoreAttribute("Implementation-Version")
}
}
}
build.gradle.kts
normalization {
runtimeClasspath {
metaInf {
ignoreAttribute("Implementation-Version")
}
}
}
Example 93. Ignore META-INF property keys
build.gradle
normalization {
runtimeClasspath {
metaInf {
ignoreProperty("app.version")
}
}
}
build.gradle.kts
normalization {
runtimeClasspath {
metaInf {
ignoreProperty("app.version")
}
}
}
Example 94. Ignore META-INF/MANIFEST.MF
build.gradle
normalization {
runtimeClasspath {
metaInf {
ignoreManifest()
}
}
}
build.gradle.kts
normalization {
runtimeClasspath {
metaInf {
ignoreManifest()
}
}
}
Example 95. Ignore all files and directories inside META-INF
build.gradle
normalization {
runtimeClasspath {
metaInf {
ignoreCompletely()
}
}
}
build.gradle.kts
normalization {
runtimeClasspath {
metaInf {
ignoreCompletely()
}
}
}
Custom up-to-date logic
Gradle automatically handles up-to-date checks for output files and directories, but what if the task
output is something else entirely? Perhaps it’s an update to a web service or a database table.
Gradle has no way of knowing how to check whether the task is up to date in such cases.
That’s where the upToDateWhen() method on TaskOutputs comes in. This takes a predicate function
that is used to determine whether a task is up to date or not. For example, you could read the
version number of your database schema from the database. Or you could check whether a
particular record in a database table exists or has changed.
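As a sketch of the database example, assuming a hypothetical currentSchemaVersion() helper:
build.gradle.kts
// Hypothetical helper; a real implementation would query the database.
fun currentSchemaVersion(): Int = 42

tasks.register("migrateSchema") {
    // Skip the migration when the schema is already at the target version
    outputs.upToDateWhen { currentSchemaVersion() >= 42 }
    doLast {
        // apply the schema migration here
    }
}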
Just be aware that up-to-date checks should save you time. Don’t add checks that cost as much or
more time than the standard execution of the task. In fact, if a task ends up running frequently
anyway, because it’s rarely up to date, then it may be better to have no up-to-date checks at all,
as described in Disabling up-to-date checks. Remember that your checks will always run if the task
is in the execution task graph.
One common mistake is to use upToDateWhen() instead of Task.onlyIf(). If you want to skip a task on
the basis of some condition unrelated to the task inputs and outputs, then you should use onlyIf().
For example, you might want to skip a task when a particular property is set or not set.
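A minimal sketch of the onlyIf() alternative; the task name and project property are made up:
build.gradle.kts
tasks.named("generateDocs") {
    // A condition unrelated to inputs and outputs: skip unless -PwithDocs is passed
    onlyIf { project.hasProperty("withDocs") }
}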
Stale task outputs
When the Gradle version changes, Gradle detects that outputs from tasks that ran with older
versions of Gradle need to be removed to ensure that the newest version of the tasks starts from a
known clean state.
NOTE: Automatic clean-up of stale output directories has only been implemented for the output of
source sets (Java/Groovy/Scala compilation).
Task rules
Sometimes you want to have a task whose behavior depends on a large or infinite range of
parameter values. A very nice and expressive way to provide such tasks is task rules:
Example 96. Task rule
build.gradle
tasks.addRule("Pattern: ping<ID>") { String taskName ->
if (taskName.startsWith("ping")) {
task(taskName) {
doLast {
println "Pinging: " + (taskName - 'ping')
}
}
}
}
build.gradle.kts
tasks.addRule("Pattern: ping<ID>") {
val taskName = this
if (startsWith("ping")) {
task(taskName) {
doLast {
println("Pinging: " + (taskName.replace("ping", "")))
}
}
}
}
The String parameter is used as a description for the rule, which is shown with gradle tasks.
Rules are not only used when calling tasks from the command line. You can also create dependsOn
relations on rule based tasks:
Example 97. Dependency on rule based tasks
build.gradle
tasks.addRule("Pattern: ping<ID>") { String taskName ->
if (taskName.startsWith("ping")) {
task(taskName) {
doLast {
println "Pinging: " + (taskName - 'ping')
}
}
}
}
tasks.register('groupPing') {
dependsOn 'pingServer1', 'pingServer2'
}
build.gradle.kts
tasks.addRule("Pattern: ping<ID>") {
val taskName = this
if (startsWith("ping")) {
task(taskName) {
doLast {
println("Pinging: " + (taskName.replace("ping", "")))
}
}
}
}
tasks.register("groupPing") {
dependsOn("pingServer1", "pingServer2")
}
If you run gradle -q tasks you won’t find a task named pingServer1 or pingServer2, but this script is
executing logic based on the request to run those tasks.
Finalizer tasks
Finalizer tasks are automatically added to the task graph when the finalized task is scheduled to
run.
Example 98. Adding a task finalizer
build.gradle
taskX { finalizedBy taskY }
build.gradle.kts
taskX { finalizedBy(taskY) }
Finalizer tasks will be executed even if the finalized task fails or if the finalized task is considered
up to date.
Example 99. Task finalizer for a failing task
build.gradle
taskX { finalizedBy taskY }
build.gradle.kts
taskX { finalizedBy(taskY) }
Output of gradle -q taskX
* Where:
Build file '/home/user/gradle/samples/build.gradle' line: 4
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
BUILD FAILED in 0s
Finalizer tasks are useful in situations where the build creates a resource that has to be cleaned up
regardless of the build failing or succeeding. An example of such a resource is a web container that
is started before an integration test task and which should be always shut down, even if some of the
tests fail.
To specify a finalizer task you use the Task.finalizedBy(java.lang.Object…) method. This method
accepts a task instance, a task name, or any other input accepted by
Task.dependsOn(java.lang.Object…).
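Here is a sketch of the web-container scenario described above, with hypothetical task names:
build.gradle.kts
val startContainer = tasks.register("startContainer") {
    doLast { /* start the web container */ }
}
val stopContainer = tasks.register("stopContainer") {
    doLast { /* shut the container down */ }
}
tasks.register("integrationTest") {
    dependsOn(startContainer)
    finalizedBy(stopContainer)   // stopContainer runs even if the tests fail
    doLast { /* run the tests against the container */ }
}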
Lifecycle tasks
Lifecycle tasks are tasks that do not do work themselves. They typically do not have any task
actions. Lifecycle tasks can represent several concepts:
• a work-flow step (e.g., run a build with build)
• a buildable thing (e.g., create a debug 32-bit executable for native components with
debug32MainExecutable)
• a convenience task to execute many of the same logical tasks (e.g., run all compilation tasks with
compileAll)
The Base Plugin defines several standard lifecycle tasks, such as build, assemble, and check. All the
core language plugins, like the Java Plugin, apply the Base Plugin and hence have the same base set
of lifecycle tasks.
Unless a lifecycle task has actions, its outcome is determined by its task dependencies. If any of
those dependencies are executed, the lifecycle task will be considered EXECUTED. If all of the task
dependencies are up to date, skipped or from cache, the lifecycle task will be considered UP-TO-DATE.
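For example, here is a sketch of the compileAll convenience task mentioned above; the wiring shown is an illustration, not a convention provided by Gradle:
build.gradle.kts
tasks.register("compileAll") {
    group = "build"
    description = "Runs every compilation task in this project."
    // No actions of its own: the outcome is derived from these dependencies
    dependsOn(tasks.withType<JavaCompile>())
}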
Summary
If you are coming from Ant, an enhanced Gradle task like Copy seems like a cross between an Ant
target and an Ant task. Although Ant’s tasks and targets are really different entities, Gradle
combines these notions into a single entity. Simple Gradle tasks are like Ant’s targets, but enhanced
Gradle tasks also include aspects of Ant tasks. All of Gradle’s tasks share a common API and you can
create dependencies between them. These tasks are much easier to configure than an Ant task.
They make full use of the type system, and are more expressive and easier to maintain.
Writing Build Scripts
Gradle provides a domain specific language, or DSL, for describing builds. This build language is
available in Groovy and Kotlin.[4]
A Groovy build script can contain any Groovy language element. A Kotlin build script can contain
any Kotlin language element. Gradle assumes that each build script is encoded using UTF-8.
Build scripts describe your build by configuring projects. A project is an abstract concept, but you
typically map a Gradle project to a software component that needs to be built, like a library or an
application. Each build script you have is associated with an object of type Project and as the build
script executes, it configures this Project.
In fact, almost all top-level properties and blocks in a build script are part of the Project API. To
demonstrate, take a look at this example build script that prints the name of its project, which is
accessed via the Project.name property:
Example 100. Accessing property of the Project object
build.gradle
println name
println project.name
build.gradle.kts
println(name)
println(project.name)
Both println statements print out the same property. The first uses the top-level reference to the
name property of the Project object. The other statement uses the project property available to any
build script, which returns the associated Project object. Only if you define a property or a method
which has the same name as a member of the Project object, would you need to use the project
property.
The Project object provides some standard properties, which are available in your build script.
Commonly used ones include project, name, path, description, projectDir, buildDir, group and version.
When Gradle executes a Groovy build script (.gradle), it compiles the script into a class which
implements Script. This means that all of the properties and methods declared by the Script
interface are available in your script.
When Gradle executes a Kotlin build script (.gradle.kts), it compiles the script into a subclass of
KotlinBuildScript. This means that all of the visible properties and functions declared by the
KotlinBuildScript type are available in your script. Also see the KotlinSettingsScript and
KotlinInitScript types respectively for settings scripts and init scripts.
Declaring variables
There are two kinds of variables that can be declared in a build script: local variables and extra
properties.
Local variables
Local variables are declared with the def keyword. They are only visible in the scope where they
have been declared. Local variables are a feature of the underlying Groovy language.
Local variables are declared with the val keyword. They are only visible in the scope where they
have been declared. Local variables are a feature of the underlying Kotlin language.
Example 101. Using local variables
build.gradle
def dest = 'dest'
tasks.register('copy', Copy) {
from 'source'
into dest
}
build.gradle.kts
val dest = "dest"
tasks.register<Copy>("copy") {
from("source")
into(dest)
}
Extra properties
All enhanced objects in Gradle’s domain model can hold extra user-defined properties. This
includes, but is not limited to, projects, tasks, and source sets.
Extra properties can be added, read and set via the owning object’s ext property. Alternatively, an
ext block can be used to add multiple properties at once.
Extra properties can be added, read and set via the owning object’s extra property. Alternatively,
they can be addressed via Kotlin delegated properties using by extra.
Example 102. Using extra properties
build.gradle
plugins {
id 'java-library'
}
ext {
springVersion = "3.1.0.RELEASE"
emailNotification = "[email protected]"
}
sourceSets {
main {
ext.purpose = "production"
}
test {
ext.purpose = "test"
}
plugin {
ext.purpose = "production"
}
}
tasks.register('printProperties') {
doLast {
println springVersion
println emailNotification
sourceSets.matching { it.purpose == "production" }.each { println it.name }
}
}
build.gradle.kts
plugins {
id("java-library")
}
val springVersion by extra("3.1.0.RELEASE")
val emailNotification by extra("[email protected]")
sourceSets {
main {
extra["purpose"] = "production"
}
test {
extra["purpose"] = "test"
}
create("plugin") {
extra["purpose"] = "production"
}
}
tasks.register("printProperties") {
doLast {
println(springVersion)
println(emailNotification)
sourceSets.matching { it.extra["purpose"] == "production" }.forEach { println(it.name) }
}
}
In this example, an ext block adds two extra properties to the project object. Additionally, a
property named purpose is added to each source set by setting ext.purpose (null would also be a
permissible value). Once the properties have been added, they can be read and set like predefined
properties.
In this example, two extra properties are added to the project object using by extra. Additionally, a
property named purpose is added to each source set by setting extra["purpose"] (null would also be
a permissible value). Once the properties have been added, they can be read and set on extra.
By requiring special syntax for adding a property, Gradle can fail fast when an attempt is made to
set a (predefined or extra) property but the property is misspelled or does not exist. Extra
properties can be accessed from anywhere their owning object can be accessed, giving them a
wider scope than local variables. Extra properties on a project are visible from its subprojects.
For further details on extra properties and their API, see the ExtraPropertiesExtension class in the
API documentation.
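As a sketch of that wider scope, with a hypothetical property name:
build.gradle.kts
// In the root project's build script: declare the property once
extra["toolVersion"] = "1.2.3"
subproject/build.gradle.kts
// In a subproject's build script: read it back via a delegated property
val toolVersion: String by rootProject.extra
println(toolVersion)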
Configuring arbitrary objects
You can configure arbitrary objects in the following very readable way.
Example 103. Configuring arbitrary objects
build.gradle
import java.text.FieldPosition
tasks.register('configure') {
doLast {
def pos = configure(new FieldPosition(10)) {
beginIndex = 1
endIndex = 5
}
println pos.beginIndex
println pos.endIndex
}
}
build.gradle.kts
import java.text.FieldPosition
tasks.register("configure") {
doLast {
val pos = FieldPosition(10).apply {
beginIndex = 1
endIndex = 5
}
println(pos.beginIndex)
println(pos.endIndex)
}
}
Configuring arbitrary objects using an external script
You can also configure arbitrary objects using an external script.
Example 104. Configuring arbitrary objects using a script
build.gradle
tasks.register('configure') {
doLast {
def pos = new java.text.FieldPosition(10)
// Apply the script
apply from: 'other.gradle', to: pos
println pos.beginIndex
println pos.endIndex
}
}
other.gradle
// Set properties.
beginIndex = 1
endIndex = 5
TIP: Looking for some Kotlin basics? The Kotlin reference documentation and Kotlin Koans should
be useful to you.
Some Groovy basics
The Groovy language provides plenty of features for creating DSLs, and the Gradle build language
takes advantage of these. Understanding how the build language works will help you when you
write your build script, and in particular, when you start to write custom plugins and tasks.
Groovy JDK
Groovy adds lots of useful methods to the standard Java classes. For example, Iterable gets an each
method, which iterates over the elements of the Iterable:
Example 105. Groovy JDK methods
build.gradle
// Iterable gets an each() method
configurations.runtimeClasspath.each { File f -> println f }
Property accessors
Groovy automatically converts a property reference into a call to the appropriate getter or setter
method.
Example 106. Property accessors
build.gradle
// Using a getter method
println project.buildDir
println getProject().getBuildDir()
// Using a setter method
project.buildDir = 'target'
getProject().setBuildDir('target')
Optional parentheses on method calls
Parentheses are optional for method calls.
Example 107. Method call without parentheses
build.gradle
test.systemProperty 'some.prop', 'value'
test.systemProperty('some.prop', 'value')
List and map literals
Groovy provides some shortcuts for defining List and Map instances. Both kinds of literals are
straightforward, but map literals have some interesting twists.
For instance, the “apply” method (where you typically apply plugins) actually takes a map
parameter. However, when you have a line like “apply plugin:'java'”, you aren’t actually using a
map literal, you’re actually using “named parameters”, which have almost exactly the same syntax
as a map literal (without the wrapping brackets). That named parameter list gets converted to a
map when the method is called, but it doesn’t start out as a map.
Example 108. List and map literals
build.gradle
// List literal
test.includes = ['org/gradle/api/**', 'org/gradle/internal/**']
// Map literal.
Map<String, String> map = [key1:'value1', key2: 'value2']
Closures as the last parameter in a method
The Gradle DSL uses closures in many places. You can find out more about closures here. When the
last parameter of a method is a closure, you can place the closure after the method call:
Example 109. Closure as method parameter
build.gradle
repositories {
println "in a closure"
}
repositories() { println "in a closure" }
repositories({ println "in a closure" })
Closure delegate
Each closure has a delegate object, which Groovy uses to look up variable and method references
which are not local variables or parameters of the closure. Gradle uses this for configuration
closures, where the delegate object is set to the object to be configured.
Example 110. Closure delegates
build.gradle
dependencies {
assert delegate == project.dependencies
testImplementation('junit:junit:4.13')
delegate.testImplementation('junit:junit:4.13')
}
Default imports
To make build scripts more concise, Gradle automatically adds a set of import statements to the
Gradle scripts. This means that instead of using throw new
org.gradle.api.tasks.StopExecutionException() you can just type throw new
StopExecutionException().
import org.gradle.*
import org.gradle.api.*
import org.gradle.api.artifacts.*
import org.gradle.api.artifacts.component.*
import org.gradle.api.artifacts.dsl.*
import org.gradle.api.artifacts.ivy.*
import org.gradle.api.artifacts.maven.*
import org.gradle.api.artifacts.query.*
import org.gradle.api.artifacts.repositories.*
import org.gradle.api.artifacts.result.*
import org.gradle.api.artifacts.transform.*
import org.gradle.api.artifacts.type.*
import org.gradle.api.artifacts.verification.*
import org.gradle.api.attributes.*
import org.gradle.api.attributes.java.*
import org.gradle.api.attributes.plugin.*
import org.gradle.api.capabilities.*
import org.gradle.api.component.*
import org.gradle.api.credentials.*
import org.gradle.api.distribution.*
import org.gradle.api.distribution.plugins.*
import org.gradle.api.execution.*
import org.gradle.api.file.*
import org.gradle.api.initialization.*
import org.gradle.api.initialization.definition.*
import org.gradle.api.initialization.dsl.*
import org.gradle.api.initialization.resolve.*
import org.gradle.api.invocation.*
import org.gradle.api.java.archives.*
import org.gradle.api.jvm.*
import org.gradle.api.logging.*
import org.gradle.api.logging.configuration.*
import org.gradle.api.model.*
import org.gradle.api.plugins.*
import org.gradle.api.plugins.antlr.*
import org.gradle.api.plugins.catalog.*
import org.gradle.api.plugins.jvm.*
import org.gradle.api.plugins.quality.*
import org.gradle.api.plugins.scala.*
import org.gradle.api.provider.*
import org.gradle.api.publish.*
import org.gradle.api.publish.ivy.*
import org.gradle.api.publish.ivy.plugins.*
import org.gradle.api.publish.ivy.tasks.*
import org.gradle.api.publish.maven.*
import org.gradle.api.publish.maven.plugins.*
import org.gradle.api.publish.maven.tasks.*
import org.gradle.api.publish.plugins.*
import org.gradle.api.publish.tasks.*
import org.gradle.api.reflect.*
import org.gradle.api.reporting.*
import org.gradle.api.reporting.components.*
import org.gradle.api.reporting.dependencies.*
import org.gradle.api.reporting.dependents.*
import org.gradle.api.reporting.model.*
import org.gradle.api.reporting.plugins.*
import org.gradle.api.resources.*
import org.gradle.api.services.*
import org.gradle.api.specs.*
import org.gradle.api.tasks.*
import org.gradle.api.tasks.ant.*
import org.gradle.api.tasks.application.*
import org.gradle.api.tasks.bundling.*
import org.gradle.api.tasks.compile.*
import org.gradle.api.tasks.diagnostics.*
import org.gradle.api.tasks.incremental.*
import org.gradle.api.tasks.javadoc.*
import org.gradle.api.tasks.options.*
import org.gradle.api.tasks.scala.*
import org.gradle.api.tasks.testing.*
import org.gradle.api.tasks.testing.junit.*
import org.gradle.api.tasks.testing.junitplatform.*
import org.gradle.api.tasks.testing.testng.*
import org.gradle.api.tasks.util.*
import org.gradle.api.tasks.wrapper.*
import org.gradle.authentication.*
import org.gradle.authentication.aws.*
import org.gradle.authentication.http.*
import org.gradle.build.event.*
import org.gradle.buildinit.*
import org.gradle.buildinit.plugins.*
import org.gradle.buildinit.tasks.*
import org.gradle.caching.*
import org.gradle.caching.configuration.*
import org.gradle.caching.http.*
import org.gradle.caching.local.*
import org.gradle.concurrent.*
import org.gradle.external.javadoc.*
import org.gradle.ide.visualstudio.*
import org.gradle.ide.visualstudio.plugins.*
import org.gradle.ide.visualstudio.tasks.*
import org.gradle.ide.xcode.*
import org.gradle.ide.xcode.plugins.*
import org.gradle.ide.xcode.tasks.*
import org.gradle.ivy.*
import org.gradle.jvm.*
import org.gradle.jvm.application.scripts.*
import org.gradle.jvm.application.tasks.*
import org.gradle.jvm.tasks.*
import org.gradle.jvm.toolchain.*
import org.gradle.language.*
import org.gradle.language.assembler.*
import org.gradle.language.assembler.plugins.*
import org.gradle.language.assembler.tasks.*
import org.gradle.language.base.*
import org.gradle.language.base.artifact.*
import org.gradle.language.base.compile.*
import org.gradle.language.base.plugins.*
import org.gradle.language.base.sources.*
import org.gradle.language.c.*
import org.gradle.language.c.plugins.*
import org.gradle.language.c.tasks.*
import org.gradle.language.cpp.*
import org.gradle.language.cpp.plugins.*
import org.gradle.language.cpp.tasks.*
import org.gradle.language.java.artifact.*
import org.gradle.language.jvm.tasks.*
import org.gradle.language.nativeplatform.*
import org.gradle.language.nativeplatform.tasks.*
import org.gradle.language.objectivec.*
import org.gradle.language.objectivec.plugins.*
import org.gradle.language.objectivec.tasks.*
import org.gradle.language.objectivecpp.*
import org.gradle.language.objectivecpp.plugins.*
import org.gradle.language.objectivecpp.tasks.*
import org.gradle.language.plugins.*
import org.gradle.language.rc.*
import org.gradle.language.rc.plugins.*
import org.gradle.language.rc.tasks.*
import org.gradle.language.scala.tasks.*
import org.gradle.language.swift.*
import org.gradle.language.swift.plugins.*
import org.gradle.language.swift.tasks.*
import org.gradle.maven.*
import org.gradle.model.*
import org.gradle.nativeplatform.*
import org.gradle.nativeplatform.platform.*
import org.gradle.nativeplatform.plugins.*
import org.gradle.nativeplatform.tasks.*
import org.gradle.nativeplatform.test.*
import org.gradle.nativeplatform.test.cpp.*
import org.gradle.nativeplatform.test.cpp.plugins.*
import org.gradle.nativeplatform.test.cunit.*
import org.gradle.nativeplatform.test.cunit.plugins.*
import org.gradle.nativeplatform.test.cunit.tasks.*
import org.gradle.nativeplatform.test.googletest.*
import org.gradle.nativeplatform.test.googletest.plugins.*
import org.gradle.nativeplatform.test.plugins.*
import org.gradle.nativeplatform.test.tasks.*
import org.gradle.nativeplatform.test.xctest.*
import org.gradle.nativeplatform.test.xctest.plugins.*
import org.gradle.nativeplatform.test.xctest.tasks.*
import org.gradle.nativeplatform.toolchain.*
import org.gradle.nativeplatform.toolchain.plugins.*
import org.gradle.normalization.*
import org.gradle.platform.base.*
import org.gradle.platform.base.binary.*
import org.gradle.platform.base.component.*
import org.gradle.platform.base.plugins.*
import org.gradle.plugin.devel.*
import org.gradle.plugin.devel.plugins.*
import org.gradle.plugin.devel.tasks.*
import org.gradle.plugin.management.*
import org.gradle.plugin.use.*
import org.gradle.plugins.ear.*
import org.gradle.plugins.ear.descriptor.*
import org.gradle.plugins.ide.*
import org.gradle.plugins.ide.api.*
import org.gradle.plugins.ide.eclipse.*
import org.gradle.plugins.ide.idea.*
import org.gradle.plugins.signing.*
import org.gradle.plugins.signing.signatory.*
import org.gradle.plugins.signing.signatory.pgp.*
import org.gradle.plugins.signing.type.*
import org.gradle.plugins.signing.type.pgp.*
import org.gradle.process.*
import org.gradle.swiftpm.*
import org.gradle.swiftpm.plugins.*
import org.gradle.swiftpm.tasks.*
import org.gradle.testing.base.*
import org.gradle.testing.base.plugins.*
import org.gradle.testing.jacoco.plugins.*
import org.gradle.testing.jacoco.tasks.*
import org.gradle.testing.jacoco.tasks.rules.*
import org.gradle.testkit.runner.*
import org.gradle.util.*
import org.gradle.vcs.*
import org.gradle.vcs.git.*
import org.gradle.work.*
import org.gradle.workers.*
Working with Files
The File paths in depth section covers how to locate files and directories in detail, while subsequent
sections, like File copying in depth, cover the operations you can perform on them. To begin with,
we’ll show you examples of the most common scenarios that users encounter.
Copying a single file
You copy a file by creating an instance of Gradle’s builtin Copy task and configuring it with the
location of the file and where you want to put it. This example mimics copying a generated report
into a directory that will be packed into an archive, such as a ZIP or TAR:
Example 111. How to copy a single file
build.gradle
tasks.register('copyReport', Copy) {
from layout.buildDirectory.dir("reports/my-report.pdf")
into layout.buildDirectory.dir("toArchive")
}
build.gradle.kts
tasks.register<Copy>("copyReport") {
from(layout.buildDirectory.dir("reports/my-report.pdf"))
into(layout.buildDirectory.dir("toArchive"))
}
The Project.file(java.lang.Object) method is used to create a file or directory path relative to the
current project and is a common way to make build scripts work regardless of the project path. The
file and directory paths are then used to specify what file to copy using
Copy.from(java.lang.Object…) and which directory to copy it to using Copy.into(java.lang.Object).
You can even use the path directly without the file() method, as explained early in the section File
copying in depth:
Example 112. Using hard-coded paths
build.gradle
tasks.register('copyReport2', Copy) {
from "$buildDir/reports/my-report.pdf"
into "$buildDir/toArchive"
}
build.gradle.kts
tasks.register<Copy>("copyReport2") {
from("$buildDir/reports/my-report.pdf")
into("$buildDir/toArchive")
}
Although hard-coded paths make for simple examples, they also make the build brittle. It’s better to
use a reliable, single source of truth, such as a task or shared project property. In the following
modified example, we use a report task defined elsewhere that has the report’s location stored in
its outputFile property:
Example 113. Prefer task/project properties over hard-coded paths
build.gradle
tasks.register('copyReport3', Copy) {
from myReportTask.outputFile
into archiveReportsTask.dirToArchive
}
build.gradle.kts
tasks.register<Copy>("copyReport3") {
val outputFile: File by myReportTask.get().extra
val dirToArchive: File by archiveReportsTask.get().extra
from(outputFile)
into(dirToArchive)
}
We have also assumed that the reports will be archived by archiveReportsTask, which provides us
with the directory that will be archived and hence where we want to put the copies of the reports.
Copying multiple files
You can extend the previous examples to multiple files very easily by providing multiple arguments
to from():
Example 114. Using multiple arguments with from()
build.gradle
tasks.register('copyReportsForArchiving', Copy) {
from layout.buildDirectory.file("reports/my-report.pdf"), layout.projectDirectory.file("src/docs/manual.pdf")
into layout.buildDirectory.dir("toArchive")
}
build.gradle.kts
tasks.register<Copy>("copyReportsForArchiving") {
from(layout.buildDirectory.file("reports/my-report.pdf"),
layout.projectDirectory.file("src/docs/manual.pdf"))
into(layout.buildDirectory.dir("toArchive"))
}
Two files are now copied into the archive directory. You can also use multiple from() statements to
do the same thing, as shown in the first example of the section File copying in depth.
Now consider another example: what if you want to copy all the PDFs in a directory without having
to specify each one? To do this, attach inclusion and/or exclusion patterns to the copy specification.
Here we use a string pattern to include PDFs only:
Example 115. Using a flat filter
build.gradle
tasks.register('copyPdfReportsForArchiving', Copy) {
from layout.buildDirectory.dir("reports")
include "*.pdf"
into layout.buildDirectory.dir("toArchive")
}
build.gradle.kts
tasks.register<Copy>("copyPdfReportsForArchiving") {
from(layout.buildDirectory.dir("reports"))
include("*.pdf")
into(layout.buildDirectory.dir("toArchive"))
}
One thing to note, as demonstrated in the following diagram, is that only the PDFs that reside
directly in the reports directory are copied:
You can include files in subdirectories by using an Ant-style glob pattern (**/*), as done in this
updated example:
Example 116. Using a deep filter
build.gradle
tasks.register('copyAllPdfReportsForArchiving', Copy) {
from layout.buildDirectory.dir("reports")
include "**/*.pdf"
into layout.buildDirectory.dir("toArchive")
}
build.gradle.kts
tasks.register<Copy>("copyAllPdfReportsForArchiving") {
from(layout.buildDirectory.dir("reports"))
include("**/*.pdf")
into(layout.buildDirectory.dir("toArchive"))
}
One thing to bear in mind is that a deep filter like this has the side effect of copying the directory
structure below reports as well as the files. If you just want to copy the files without the directory
structure, you need to use an explicit fileTree(dir) { includes }.files expression. We talk more
about the difference between file trees and file collections in the File trees section.
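As a sketch, a flattened variant of the PDF copy above might look like this:
build.gradle.kts
tasks.register<Copy>("copyAllPdfsFlattened") {
    // '.files' turns the tree into a flat collection, discarding directory structure
    from(fileTree(layout.buildDirectory.dir("reports")) { include("**/*.pdf") }.files)
    into(layout.buildDirectory.dir("toArchive"))
}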
This is just one of the variations in behavior you’re likely to come across when dealing with file
operations in Gradle builds. Fortunately, Gradle provides elegant solutions to almost all those use
cases. Read the in-depth sections later in the chapter for more detail on how the file operations
work in Gradle and what options you have for configuring them.
Copying directory hierarchies
You may have a need to copy not just files, but the directory structure they reside in as well. This is
the default behavior when you specify a directory as the from() argument, as demonstrated by the
following example that copies everything in the reports directory, including all its subdirectories, to
the destination:
Example 117. Copying an entire directory
build.gradle
tasks.register('copyReportsDirForArchiving', Copy) {
from layout.buildDirectory.dir("reports")
into layout.buildDirectory.dir("toArchive")
}
build.gradle.kts
tasks.register<Copy>("copyReportsDirForArchiving") {
from(layout.buildDirectory.dir("reports"))
into(layout.buildDirectory.dir("toArchive"))
}
The key aspect that users struggle with is controlling how much of the directory structure goes to
the destination. In the above example, do you get a toArchive/reports directory or does everything
in reports go straight into toArchive? The answer is the latter. If a directory is part of the from()
path, then it won’t appear in the destination.
So how do you ensure that reports itself is copied across, but not any other directory in $buildDir?
The answer is to add it as an include pattern:
Example 118. Copying an entire directory, including itself
build.gradle
tasks.register('copyReportsDirForArchiving2', Copy) {
from(layout.buildDirectory) {
include "reports/**"
}
into layout.buildDirectory.dir("toArchive")
}
build.gradle.kts
tasks.register<Copy>("copyReportsDirForArchiving2") {
from(layout.buildDirectory) {
include("reports/**")
}
into(layout.buildDirectory.dir("toArchive"))
}
You’ll get the same behavior as before except with one extra level of directory in the destination, i.e.
toArchive/reports.
One thing to note is how the include() directive applies only to the from(), whereas the directive in
the previous section applied to the whole task. These different levels of granularity in the copy
specification allow you to easily handle most requirements that you will come across. You can learn
more about this in the section on child specifications.
Creating archives
From the perspective of Gradle, packing files into an archive is effectively a copy in which the
destination is the archive file rather than a directory on the file system. This means that creating
archives looks a lot like copying, with all of the same features!
The simplest case involves archiving the entire contents of a directory, which this example
demonstrates by creating a ZIP of the toArchive directory:
Example 119. Archiving a directory as a ZIP
build.gradle
tasks.register('packageDistribution', Zip) {
archiveFileName = "my-distribution.zip"
destinationDirectory = layout.buildDirectory.dir('dist')
from layout.buildDirectory.dir("toArchive")
}
build.gradle.kts
tasks.register<Zip>("packageDistribution") {
archiveFileName.set("my-distribution.zip")
destinationDirectory.set(layout.buildDirectory.dir("dist"))
from(layout.buildDirectory.dir("toArchive"))
}
Notice how we specify the destination and name of the archive instead of an into(): both are
required. You often won’t see them explicitly set, because most projects apply the Base Plugin. It
provides some conventional values for those properties. The next example demonstrates this and
you can learn more about the conventions in the archive naming section.
Each type of archive has its own task type, the most common ones being Zip, Tar and Jar. They all
share most of the configuration options of Copy, including filtering and renaming.
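For instance, here is a sketch of a compressed TAR counterpart to the earlier ZIP example:
build.gradle.kts
tasks.register<Tar>("packageDistributionTar") {
    archiveFileName.set("my-distribution.tar.gz")
    destinationDirectory.set(layout.buildDirectory.dir("dist"))
    compression = Compression.GZIP   // Tar-specific; the copy spec below works as in Zip
    from(layout.buildDirectory.dir("toArchive"))
}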
One of the most common scenarios involves copying files into specified subdirectories of the
archive. For example, let’s say you want to package all PDFs into a docs directory in the root of the
archive. This docs directory doesn’t exist in the source location, so you have to create it as part of
the archive. You do this by adding an into() declaration for just the PDFs:
Example 120. Using the Base Plugin for its archive name convention
build.gradle
plugins {
id 'base'
}
version = "1.0.0"
tasks.register('packageDistribution', Zip) {
from(layout.buildDirectory.dir("toArchive")) {
exclude "**/*.pdf"
}
from(layout.buildDirectory.dir("toArchive")) {
include "**/*.pdf"
into "docs"
}
}
build.gradle.kts
plugins {
base
}
version = "1.0.0"
tasks.register<Zip>("packageDistribution") {
from(layout.buildDirectory.dir("toArchive")) {
exclude("**/*.pdf")
}
from(layout.buildDirectory.dir("toArchive")) {
include("**/*.pdf")
into("docs")
}
}
As you can see, you can have multiple from() declarations in a copy specification, each with its own
configuration. See Using child copy specifications for more information on this feature.
Unpacking archives
Archives are effectively self-contained file systems, so unpacking them is a case of copying the files
from that file system onto the local file system — or even into another archive. Gradle enables this
by providing some wrapper functions that make archives available as hierarchical collections of
files (file trees).
Example 121. Unpacking a ZIP file
build.gradle
tasks.register('unpackFiles', Copy) {
from zipTree("src/resources/thirdPartyResources.zip")
into layout.buildDirectory.dir("resources")
}
build.gradle.kts
tasks.register<Copy>("unpackFiles") {
from(zipTree("src/resources/thirdPartyResources.zip"))
into(layout.buildDirectory.dir("resources"))
}
As with a normal copy, you can control which files are unpacked via filters and even rename files
as they are unpacked.
More advanced processing can be handled by the eachFile() method. For example, you might need
to extract different subtrees of the archive into different paths within the destination directory. The
following sample uses the method to extract the files within the archive’s libs directory into the
root destination directory, rather than into a libs subdirectory:
Example 122. Unpacking a subset of a ZIP file
build.gradle
tasks.register('unpackLibsDirectory', Copy) {
    from(zipTree("src/resources/thirdPartyResources.zip")) {
        include "libs/**" ①
        eachFile { fcd ->
            fcd.relativePath = new RelativePath(true, fcd.relativePath.segments.drop(1)) ②
        }
        includeEmptyDirs = false ③
    }
    into layout.buildDirectory.dir("resources")
}
build.gradle.kts
tasks.register<Copy>("unpackLibsDirectory") {
    from(zipTree("src/resources/thirdPartyResources.zip")) {
        include("libs/**") ①
        eachFile {
            relativePath = RelativePath(true, *relativePath.segments.drop(1).toTypedArray()) ②
        }
        includeEmptyDirs = false ③
    }
    into(layout.buildDirectory.dir("resources"))
}
① Extracts only the subset of files that reside in the libs directory
② Remaps the path of the extracting files into the destination directory by dropping the libs
segment from the file path
③ Ignores the empty directories resulting from the remapping, see Caution note below
CAUTION: You cannot change the destination path of empty directories with this technique. You can learn more in this issue.
If you’re a Java developer and are wondering why there is no jarTree() method, that’s because
zipTree() works perfectly well for JARs, WARs and EARs.
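As a quick illustration, here is a sketch that lists the entries of a hypothetical JAR file using zipTree(); the archive path is invented for this example:
build.gradle.kts
tasks.register("listJarEntries") {
    doLast {
        // zipTree() reads JAR files just as happily as ZIPs
        project.zipTree("libs/some-library.jar").forEach { println(it.name) }
    }
}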
Creating "uber" or "fat" JARs
In the Java space, applications and their dependencies were typically packaged as separate JARs within a single distribution archive. That still happens, but another approach is now common: placing the classes and resources of the dependencies directly into the application JAR, creating what is known as an uber or fat JAR.
Gradle makes this approach easy to accomplish. Consider the aim: to copy the contents of other JAR
files into the application JAR. All you need for this is the Project.zipTree(java.lang.Object) method
and the Jar task, as demonstrated by the uberJar task in the following example:
Example 123. Creating a Java uber or fat JAR
build.gradle
plugins {
    id 'java'
}
version = '1.0.0'
repositories {
    mavenCentral()
}
dependencies {
    implementation 'commons-io:commons-io:2.6'
}
tasks.register('uberJar', Jar) {
    archiveClassifier = 'uber'
    from sourceSets.main.output
    dependsOn configurations.runtimeClasspath
    from {
        configurations.runtimeClasspath.findAll { it.name.endsWith('jar') }.collect { zipTree(it) }
    }
}
build.gradle.kts
plugins {
    java
}
version = "1.0.0"
repositories {
    mavenCentral()
}
dependencies {
    implementation("commons-io:commons-io:2.6")
}
tasks.register<Jar>("uberJar") {
    archiveClassifier.set("uber")
    from(sourceSets.main.get().output)
    dependsOn(configurations.runtimeClasspath)
    from({
        configurations.runtimeClasspath.get().filter { it.name.endsWith("jar") }.map { zipTree(it) }
    })
}
Creating directories
Many tasks need to create directories to store the files they generate, which is why Gradle
automatically manages this aspect of tasks when they explicitly define file and directory outputs.
You can learn about this feature in the incremental build section of the user manual. All core
Gradle tasks ensure that any output directories they need are created if necessary using this
mechanism.
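To illustrate, here is a minimal sketch of a custom task type (the GenerateReport name and its property are invented for this example) whose output directory is created by Gradle before the task action runs, simply because it is declared with @OutputDirectory:
build.gradle.kts
// Gradle creates reportDir automatically because it is a declared output
abstract class GenerateReport : DefaultTask() {
    @get:OutputDirectory
    abstract val reportDir: DirectoryProperty

    @TaskAction
    fun generate() {
        // The directory already exists by the time this action executes
        reportDir.file("summary.txt").get().asFile.writeText("report generated")
    }
}

tasks.register<GenerateReport>("generateReport") {
    reportDir.set(layout.buildDirectory.dir("reports/generated"))
}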
In cases where you need to create a directory manually, you can use the
Project.mkdir(java.lang.Object) method from within your build scripts or custom task
implementations. Here’s a simple example that creates a single images directory in the project
folder:
Example 124. Manually creating a directory
build.gradle
tasks.register('ensureDirectory') {
    doLast {
        mkdir "images"
    }
}
build.gradle.kts
tasks.register("ensureDirectory") {
    doLast {
        mkdir("images")
    }
}
As described in the Apache Ant manual, the mkdir task will automatically create all necessary
directories in the given path and will do nothing if the directory already exists.
Moving files and directories
Gradle has no API for moving files and directories around, but you can use the Apache Ant integration to do that easily, as shown in this example:
Example 125. Moving a directory using the Ant task
build.gradle
tasks.register('moveReports') {
    doLast {
        ant.move file: "${buildDir}/reports",
                 todir: "${buildDir}/toArchive"
    }
}
build.gradle.kts
tasks.register("moveReports") {
    doLast {
        ant.withGroovyBuilder {
            "move"("file" to "${buildDir}/reports", "todir" to "${buildDir}/toArchive")
        }
    }
}
This is not a common requirement and should be used sparingly as you lose information and can
easily break a build. It’s generally preferable to copy directories and files instead.
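If you do need files to end up somewhere else, a sketch of the copy-then-delete alternative might look like the following; the task names here are illustrative:
build.gradle.kts
tasks.register<Copy>("copyReports") {
    from(layout.buildDirectory.dir("reports"))
    into(layout.buildDirectory.dir("toArchive"))
}

// Only removes the originals once the copy has completed
tasks.register<Delete>("relocateReports") {
    dependsOn("copyReports")
    delete(layout.buildDirectory.dir("reports"))
}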
Renaming files
The files used and generated by your builds sometimes don’t have names that suit you, in which case you will want to rename those files as you copy them. Gradle allows you to do this as part of a copy specification using the rename() configuration.
The following example removes the "-staging-" marker from the names of any files that have it:
Example 126. Renaming files as they are copied
build.gradle
tasks.register('copyFromStaging', Copy) {
    from "src/main/webapp"
    into layout.buildDirectory.dir('explodedWar')
    rename '(.+)-staging(.+)', '$1$2'
}
build.gradle.kts
tasks.register<Copy>("copyFromStaging") {
    from("src/main/webapp")
    into(layout.buildDirectory.dir("explodedWar"))
    rename("(.+)-staging(.+)", "$1$2")
}
You can use regular expressions for this, as in the above example, or closures that use more
complex logic to determine the target filename. For example, the following task truncates
filenames:
Example 127. Truncating filenames as they are copied
build.gradle
tasks.register('copyWithTruncate', Copy) {
    from layout.buildDirectory.dir("reports")
    rename { String filename ->
        if (filename.size() > 10) {
            return filename[0..7] + "~" + filename.size()
        }
        else return filename
    }
    into layout.buildDirectory.dir("toArchive")
}
build.gradle.kts
tasks.register<Copy>("copyWithTruncate") {
    from(layout.buildDirectory.dir("reports"))
    rename { filename: String ->
        if (filename.length > 10) {
            filename.slice(0..7) + "~" + filename.length
        }
        else filename
    }
    into(layout.buildDirectory.dir("toArchive"))
}
}
As with filtering, you can also apply renaming to a subset of files by configuring it as part of a child
specification on a from().
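A sketch of that idea, using hypothetical source directories, might look like this; the rename() applies only to the files coming from the second from():
build.gradle.kts
tasks.register<Copy>("assembleSite") {
    into(layout.buildDirectory.dir("site"))
    from("src/pages")  // copied with their original names
    from("src/drafts") {
        // Renaming configured on the child spec affects only these files
        rename("(.+)-draft(.+)", "$1$2")
    }
}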
Deleting files and directories
You can easily delete files and directories using either the Delete task or the Project.delete(org.gradle.api.Action) method. In both cases, you specify which files and directories to delete in a way supported by the Project.files(java.lang.Object…) method.
For example, the following task deletes the entire contents of a build’s output directory:
Example 128. Deleting a directory
build.gradle
tasks.register('myClean', Delete) {
    delete buildDir
}
build.gradle.kts
tasks.register<Delete>("myClean") {
    delete(buildDir)
}
If you want more control over which files are deleted, you can’t use inclusions and exclusions in the same way as for copying files. Instead, you have to use the built-in filtering mechanisms of FileCollection and FileTree. The following example does just that to clear out temporary files from a source directory:
Example 129. Deleting files matching a specific pattern
build.gradle
tasks.register('cleanTempFiles', Delete) {
    delete fileTree("src").matching {
        include "**/*.tmp"
    }
}
build.gradle.kts
tasks.register<Delete>("cleanTempFiles") {
    delete(fileTree("src").matching {
        include("**/*.tmp")
    })
}
You’ll learn more about file collections and file trees in the next section.
File paths in depth
In order to perform some action on a file, you need to know where it is, and that’s the information
provided by file paths. Gradle builds on the standard Java File class, which represents the location
of a single file, and provides new APIs for dealing with collections of paths. This section shows you
how to use the Gradle APIs to specify file paths for use in tasks and file operations.
But first, an important note on using hard-coded file paths in your builds.
Many examples in this chapter use hard-coded paths as string literals. This makes them easy to
understand, but it’s not good practice for real builds. The problem is that paths often change and
the more places you need to change them, the more likely you are to miss one and break the build.
Where possible, you should use tasks, task properties, and project properties — in that order of
preference — to configure file paths. For example, if you were to create a task that packages the
compiled classes of a Java application, you should aim for something like this:
Example 130. How to minimize the number of hard-coded paths in your build
build.gradle
tasks.register('packageClasses', Zip) {
    archiveAppendix = "classes"
    destinationDirectory = archivesDirPath
    from compileJava
}
build.gradle.kts
tasks.register<Zip>("packageClasses") {
    archiveAppendix.set("classes")
    destinationDirectory.set(archivesDirPath)
    from(tasks.compileJava)
}
See how we’re using the compileJava task as the source of the files to package, and how we’ve created a project property archivesDirPath to store the location where we put archives, on the basis that we’re likely to use it elsewhere in the build.
Using a task directly as an argument like this relies on it having defined outputs, so it won’t always
be possible. In addition, this example could be improved further by relying on the Java plugin’s
convention for destinationDirectory rather than overriding it, but it does demonstrate the use of
project properties.
Gradle provides the Project.file(java.lang.Object) method for specifying the location of a single file
or directory. Relative paths are resolved relative to the project directory, while absolute paths
remain unchanged.
Never use new File(relative path) because this creates a path relative to the current working directory (CWD). Gradle can make no guarantees about the location of the CWD, which means builds that rely on it may break at any time.