Akka Java Documentation

Release 2.4.20

Lightbend Inc

August 10, 2017


CONTENTS

1 Security Announcements 1
1.1 Receiving Security Advisories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Reporting Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Security Related Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.4 Fixed Security Vulnerabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2 Introduction 4
2.1 What is Akka? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Why Akka? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Getting Started . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 The Obligatory Hello World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.5 Use-case and Deployment Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Examples of use-cases for Akka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3 General 13
3.1 Terminology, Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Actor Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.3 What is an Actor? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.4 Supervision and Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.5 Actor References, Paths and Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.6 Location Transparency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.7 Akka and the Java Memory Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.8 Message Delivery Reliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.9 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

4 Actors 102
4.1 Actors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.2 Typed Actors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.3 Fault Tolerance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.4 Dispatchers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.5 Mailboxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.6 Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.7 Building Finite State Machine Actors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.8 Persistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4.9 Persistence - Schema Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
4.10 Persistence Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
4.11 Persistence Query for LevelDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
4.12 Testing Actor Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

5 Actors (Java with Lambda Support) 254


5.1 Actors (Java with Lambda Support) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
5.2 Fault Tolerance (Java with Lambda Support) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5.3 FSM (Java with Lambda Support) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
5.4 Persistence (Java with Lambda Support) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297

6 Futures and Agents 330
6.1 Futures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
6.2 Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337

7 Networking 340
7.1 Cluster Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
7.2 Cluster Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7.3 Cluster Singleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
7.4 Distributed Publish Subscribe in Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
7.5 Cluster Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
7.6 Cluster Sharding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
7.7 Cluster Metrics Extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
7.8 Distributed Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
7.9 Remoting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
7.10 Remoting (codename Artery) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
7.11 Serialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
7.12 I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
7.13 Using TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
7.14 Using UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
7.15 Camel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465

8 Utilities 478
8.1 Event Bus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
8.2 Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
8.3 Scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
8.4 Duration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
8.5 Circuit Breaker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
8.6 Akka Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
8.7 Use-case and Deployment Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503

9 Streams 505
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
9.2 Quick Start Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
9.3 Reactive Tweets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.4 Design Principles behind Akka Streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
9.5 Basics and working with Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
9.6 Working with Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
9.7 Modularity, Composition and Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
9.8 Buffers and working with rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
9.9 Dynamic stream handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
9.10 Custom stream processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
9.11 Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
9.12 Error Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
9.13 Working with streaming IO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
9.14 Pipelining and Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
9.15 Testing streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
9.16 Overview of built-in stages and their semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
9.17 Streams Cookbook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
9.18 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
9.19 Migration Guide 1.0 to 2.x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
9.20 Migration Guide 2.0.x to 2.4.x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640

10 Akka HTTP Documentation (Java) moved! 644

11 HowTo: Common Patterns 645


11.1 Scheduling Periodic Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
11.2 Single-Use Actor Trees with High-Level Error Reporting . . . . . . . . . . . . . . . . . . . . . . 646

12 Experimental Modules 650

12.1 Multi Node Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
12.2 Actors (Java with Lambda Support) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
12.3 FSM (Java with Lambda Support) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
12.4 Persistence Query . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
12.5 External Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694

13 Information for Akka Developers 716


13.1 Building Akka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
13.2 Multi JVM Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
13.3 I/O Layer Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
13.4 Developer Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
13.5 Documentation Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724

14 Project Information 727


14.1 Migration Guides . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 727
14.2 Issue Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
14.3 Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
14.4 Sponsors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
14.5 Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746

15 Additional Information 749


15.1 Binary Compatibility Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 749
15.2 Frequently Asked Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
15.3 Books . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
15.4 Videos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
15.5 Akka in OSGi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755

CHAPTER

ONE

SECURITY ANNOUNCEMENTS

1.1 Receiving Security Advisories

The best way to receive any and all security announcements is to subscribe to the Akka security list.
The mailing list is very low traffic, and receives notifications only after security reports have been managed by the
core team and fixes are publicly available.

1.2 Reporting Vulnerabilities

Following best practice, we strongly encourage anyone to report potential security vulnerabilities to our private security mailing list, security@[Link], before disclosing them in a public forum like the mailing list or as a Github issue.
Reports to this email address will be handled by our security team, who will work together with you to ensure that
a fix can be provided without delay.

1.3 Security Related Documentation

• disable-java-serializer-scala
• remote-deployment-whitelist-scala
• remote-security-scala

1.4 Fixed Security Vulnerabilities

1.4.1 Java Serialization, Fixed in Akka 2.4.17

Date

10 February 2017

Description of Vulnerability

An attacker that can connect to an ActorSystem exposed via Akka Remote over TCP can gain remote code
execution capabilities in the context of the JVM process that runs the ActorSystem if:
• JavaSerializer is enabled (default in Akka 2.4.x)


• and TLS is disabled, or TLS is enabled with [Link]-mutual-authentication = false (which is still the default in Akka 2.4.x),
• or TLS is enabled with mutual authentication but the authentication keys of a host that is allowed to
connect have been compromised, i.e. an attacker gained access to a valid certificate (e.g. by compromising a
node with certificates issued by the same internal PKI tree to get access to the certificate),
• regardless of whether untrusted mode is enabled or not.
Java deserialization is known to be vulnerable to attacks when an attacker can provide arbitrary types.
Akka Remoting uses the Java serializer as its default configuration, which makes it vulnerable in its default form. The
documentation of how to disable the Java serializer was not complete, and the documentation of how to enable mutual
authentication was missing (only described in [Link]).
To protect against such attacks the system should be updated to Akka 2.4.17 or later and be configured with
disabled Java serializer. Additional protection can be achieved when running in an untrusted network by enabling
TLS with mutual authentication.
Please subscribe to the akka-security mailing list to be notified promptly about future security issues.
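As a sketch, the hardened configuration can be expressed in application.conf along the following lines. The setting names are taken from the Akka 2.4 remoting documentation, but treat them as assumptions and verify them against the reference configuration of your exact version:

```hocon
akka {
  actor {
    # Refuse Java serialization entirely (setting documented for Akka 2.4.17+)
    allow-java-serialization = off
  }
  remote.netty.ssl.security {
    # Require both sides of a remoting connection to authenticate via TLS
    require-mutual-authentication = true
  }
}
```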

Severity

The CVSS score of this vulnerability is 6.8 (Medium), based on vector AV:A/AC:M/Au:N/C:C/I:C/A:C/E:F/RL:TF/RC:C.
Rationale for the score:
• AV:A - Best practice is that Akka remoting nodes should only be accessible from the adjacent network, so
in good setups, this will be adjacent.
• AC:M - Anyone in the adjacent network can launch the attack with non-special access privileges.
• C:C, I:C, A:C - Remote Code Execution vulnerabilities are by definition CIA:C.

Affected Versions

• Akka 2.4.16 and prior
• Akka 2.5-M1 (milestone not intended for production)

Fixed Versions

We have prepared patches for the affected versions, and have released the following versions which resolve the
issue:
• Akka 2.4.17 (Scala 2.11, 2.12)
Binary and source compatibility have been maintained for the patched releases so the upgrade procedure is as
simple as changing the library dependency.
It will also be fixed in 2.5-M2 or 2.5.0-RC1.

Acknowledgements

We would like to thank Alvaro Munoz at Hewlett Packard Enterprise Security & Adrian Bravo at Workday for
their thorough investigation and bringing this issue to our attention.


1.4.2 Camel Dependency, Fixed in Akka 2.4.20

Date

9 August 2017

Description of Vulnerability

Apache Camel’s Validation Component is vulnerable to SSRF via remote DTDs and XXE, as described in
CVE-2017-5643.
To protect against such attacks the system should be updated to Akka 2.4.20, 2.5.4 or later. Dependencies on
Camel libraries should be updated to version 2.17.7.

Severity

The CVSS score of this vulnerability is 7.4 (High), according to CVE-2017-5643.

Affected Versions

• Akka 2.4.19 and prior
• Akka 2.5.3 and prior

Fixed Versions

We have prepared patches for the affected versions, and have released the following versions which resolve the
issue:
• Akka 2.4.20 (Scala 2.11, 2.12)
• Akka 2.5.4 (Scala 2.11, 2.12)

Acknowledgements

We would like to thank Thomas Szymanski for bringing this issue to our attention.



CHAPTER

TWO

INTRODUCTION

2.1 What is Akka?

«resilient elastic distributed real-time transaction processing»


We believe that writing correct distributed, concurrent, fault-tolerant and scalable applications is too hard. Most
of the time it’s because we are using the wrong tools and the wrong level of abstraction. Akka is here to change
that. Using the Actor Model we raise the abstraction level and provide a better platform to build scalable, resilient
and responsive applications—see the Reactive Manifesto for more details. For fault-tolerance we adopt the “let
it crash” model which the telecom industry has used with great success to build applications that self-heal and
systems that never stop. Actors also provide the abstraction for transparent distribution and the basis for truly
scalable and fault-tolerant applications.
Akka is Open Source and available under the Apache 2 License.
Download from [Link]
Please note that all code samples compile, so if you want direct access to the sources, have a look at the Akka
Docs subproject on Github, for Java and Scala.

2.1.1 Akka implements a unique hybrid

Actors

Actors give you:


• Simple and high-level abstractions for distribution, concurrency and parallelism.
• Asynchronous, non-blocking and highly performant message-driven programming model.
• Very lightweight event-driven processes (several million actors per GB of heap memory).
See the chapter for Scala or Java.

Fault Tolerance

• Supervisor hierarchies with “let-it-crash” semantics.


• Actor systems can span over multiple JVMs to provide truly fault-tolerant systems.
• Excellent for writing highly fault-tolerant systems that self-heal and never stop.
See Fault Tolerance (Scala) and Fault Tolerance (Java).


Location Transparency

Everything in Akka is designed to work in a distributed environment: all interactions of actors use pure message
passing and everything is asynchronous.
For an overview of the cluster support see the Java and Scala documentation chapters.

Persistence

State changes experienced by an actor can optionally be persisted and replayed when the actor is started or
restarted. This allows actors to recover their state, even after JVM crashes or when being migrated to another
node.
You can find more details in the respective chapter for Java or Scala.

2.1.2 Scala and Java APIs

Akka has both a Scala API and a Java API.

2.1.3 Akka can be used in different ways

Akka is a toolkit, not a framework: you integrate it into your build like any other library without having to follow
a particular source code layout. When expressing your systems as collaborating Actors you may feel pushed more
towards proper encapsulation of internal state, and you may find that there is a natural separation between business
logic and inter-component communication.
Akka applications are typically deployed as follows:
• as a library: used as a regular JAR on the classpath or in a web app.
• packaged with sbt-native-packager.
• packaged and deployed using Lightbend ConductR.

2.1.4 Commercial Support

Akka is available from Lightbend Inc. under a commercial license which includes development or production
support, read more here.

2.2 Why Akka?

2.2.1 What features can the Akka platform offer, over the competition?

Akka provides scalable real-time transaction processing.


Akka is a unified runtime and programming model for:
• Scale up (Concurrency)
• Scale out (Remoting)
• Fault tolerance
One thing to learn and admin, with high cohesion and coherent semantics.
Akka is a very scalable piece of software, not only in the context of performance but also in the size of applications
it is useful for. The core of Akka, akka-actor, is very small and easily dropped into an existing project where you
need asynchronicity and lockless concurrency without hassle.


You can choose to include only the parts of Akka you need in your application. With CPUs gaining more and
more cores every generation, Akka is the alternative that provides outstanding performance even if you’re only running
it on one machine. Akka also supplies a wide array of concurrency paradigms, allowing users to choose the right
tool for the job.

2.2.2 What’s a good use-case for Akka?

We see Akka being adopted by many large organizations in a big range of industries:
• Investment and Merchant Banking
• Retail
• Social Media
• Simulation
• Gaming and Betting
• Automobile and Traffic Systems
• Health Care
• Data Analytics
and much more. Any system with the need for high throughput and low latency is a good candidate for using
Akka.
Actors let you manage service failures (Supervisors), load management (back-off strategies, timeouts and
processing-isolation), as well as both horizontal and vertical scalability (add more cores and/or add more machines).
Here’s what some of the Akka users have to say about how they are using Akka:
[Link]
All this in the ApacheV2-licensed open source project.

2.3 Getting Started

2.3.1 Prerequisites

Akka requires that you have Java 8 or later installed on your machine.
Lightbend Inc. provides a commercial build of Akka and related projects such as Scala or Play as part of the
Lightbend Reactive Platform, which is made available for Java 6 in case your project cannot upgrade to Java 8 just
yet. It also includes additional commercial features or libraries.

2.3.2 Getting Started Guides and Template Projects

The best way to start learning Akka is to download Lightbend Activator and try out one of Akka Template Projects.

2.3.3 Download

There are several ways to download Akka. You can download it as part of the Lightbend Platform (as described
above). You can download the full distribution, which includes all modules. Or you can use a build tool like
Maven or SBT to download dependencies from the Akka Maven repository.


2.3.4 Modules

Akka is very modular and consists of several JARs containing different features.
• akka-actor – Classic Actors, Typed Actors, IO Actor etc.
• akka-agent – Agents, integrated with Scala STM
• akka-camel – Apache Camel integration
• akka-cluster – Cluster membership management, elastic routers.
• akka-osgi – Utilities for using Akka in OSGi containers
• akka-osgi-aries – Aries blueprint for provisioning actor systems
• akka-remote – Remote Actors
• akka-slf4j – SLF4J Logger (event bus listener)
• akka-stream – Reactive stream processing
• akka-testkit – Toolkit for testing Actor systems
In addition to these stable modules there are several which are on their way into the stable core but are still marked
“experimental” at this point. This does not mean that they do not function as intended, it primarily means that
their API has not yet solidified enough to be considered frozen. You can help accelerate this process by
giving feedback on these modules on our mailing list.
• akka-contrib – an assortment of contributions which may or may not be moved into core modules, see
External Contributions for more details.
The filename of the actual JAR is for example akka-actor_2.[Link] (and analogous for the other
modules).
How to see the JAR dependencies of each Akka module is described in the Dependencies section.

2.3.5 Using a release distribution

Download the release you need from [Link] and unzip it.

2.3.6 Using a snapshot version

The Akka nightly snapshots are published to [Link] and are versioned with both
SNAPSHOT and timestamps. You can choose a timestamped version to work with and can decide when to update
to a newer version.

Warning: The use of Akka SNAPSHOTs, nightlies and milestone releases is discouraged unless you know
what you are doing.

2.3.7 Using a build tool

Akka can be used with build tools that support Maven repositories.

2.3.8 Maven repositories

For Akka version 2.1-M2 and onwards:


Maven Central
For previous Akka versions:
Akka Repo


2.3.9 Using Akka with Maven

The simplest way to get started with Akka and Maven is to check out the Lightbend Activator tutorial named Akka
Main in Java.
Since Akka is published to Maven Central (for versions since 2.1-M2), it is enough to add the Akka dependencies
to the POM. For example, here is the dependency for akka-actor:
<dependency>
  <groupId>[Link]</groupId>
  <artifactId>akka-actor_2.11</artifactId>
  <version>2.4.20</version>
</dependency>

For snapshot versions, the snapshot repository needs to be added as well:


<repositories>
  <repository>
    <id>akka-snapshots</id>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
    <url>[Link]</url>
  </repository>
</repositories>

Note: for snapshot versions both SNAPSHOT and timestamped versions are published.

2.3.10 Using Akka with SBT

The simplest way to get started with Akka and SBT is to use Lightbend Activator with one of the SBT templates.
Summary of the essential parts for using Akka with SBT:
SBT installation instructions on [Link]
[Link] file:
name := "My Project"

version := "1.0"

scalaVersion := "2.11.11"

libraryDependencies +=
"[Link]" %% "akka-actor" % "2.4.20"

Note: the libraryDependencies setting above is specific to SBT v0.12.x and higher. If you are using an older
version of SBT, the libraryDependencies should look like this:
libraryDependencies +=
"[Link]" % "akka-actor_2.11" % "2.4.20"

For snapshot versions, the snapshot repository needs to be added as well:


resolvers += "Akka Snapshot Repository" at "[Link]"

2.3.11 Using Akka with Gradle

Requires at least Gradle 1.4. Uses the Scala plugin.


apply plugin: 'scala'

repositories {
  mavenCentral()
}

dependencies {
  compile '[Link]-lang:scala-library:2.11.11'
}

[Link](ScalaCompile) {
  [Link] = false
}

dependencies {
  compile group: '[Link]', name: 'akka-actor_2.11', version: '2.4.20'
  compile group: '[Link]-lang', name: 'scala-library', version: '2.11.11'
}

For snapshot versions, the snapshot repository needs to be added as well:


repositories {
  mavenCentral()
  maven {
    url "[Link]"
  }
}

2.3.12 Using Akka with Eclipse

Setup SBT project and then use sbteclipse to generate an Eclipse project.

2.3.13 Using Akka with IntelliJ IDEA

Setup SBT project and then use sbt-idea to generate an IntelliJ IDEA project.

2.3.14 Using Akka with NetBeans

Setup SBT project and then use nbsbt to generate a NetBeans project.
You should also use nbscala for general scala support in the IDE.

2.3.15 Do not use -optimize Scala compiler flag

Warning: Akka has not been compiled or tested with -optimize Scala compiler flag. Strange behavior has
been reported by users that have tried it.

2.3.16 Build from sources

Akka uses Git and is hosted at Github.


• Akka: clone the Akka repository from [Link]
Continue reading the page on Building Akka


2.3.17 Need help?

If you have questions you can get help on the Akka Mailing List.
You can also ask for commercial support.
Thanks for being a part of the Akka community.

2.4 The Obligatory Hello World

The actor-based version of the tough problem of printing a well-known greeting to the console is introduced in a
Lightbend Activator tutorial named Akka Main in Java.
The tutorial illustrates the generic launcher class [Link], which expects only one command line argument:
the class name of the application’s main actor. This main method will then create the infrastructure needed for
running the actors, start the given main actor and arrange for the whole application to shut down once the main
actor terminates.
There is also another Lightbend Activator tutorial in the same problem domain that is named Hello Akka!. It
describes the basics of Akka in more depth.
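As a sketch of what such a main actor can look like with the Akka 2.4 Java API (the package and class names below are illustrative, not taken from the tutorial; akka-actor must be on the classpath), the actor prints the greeting in preStart and then stops itself, which causes the launcher to shut the application down:

```java
package sample;

import akka.actor.UntypedActor;

// Started via the generic launcher:
//   java -cp <classpath> akka.Main sample.HelloWorld
public class HelloWorld extends UntypedActor {

  @Override
  public void preStart() {
    System.out.println("Hello World");
    // stopping the main actor makes the launcher terminate the actor system
    getContext().stop(getSelf());
  }

  @Override
  public void onReceive(Object message) {
    unhandled(message);
  }
}
```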

2.5 Use-case and Deployment Scenarios

2.5.1 How can I use and deploy Akka?

Akka can be used in different ways:


• As a library: used as a regular JAR on the classpath and/or in a web app, to be put into WEB-INF/lib
• Package with sbt-native-packager
• Package and deploy using Lightbend ConductR.

2.5.2 Native Packager

sbt-native-packager is a tool for creating distributions of any type of application, including Akka applications.
Define sbt version in project/[Link] file:
[Link]=0.13.7

Add sbt-native-packager in project/[Link] file:


addSbtPlugin("[Link]" % "sbt-native-packager" % "1.0.0-RC1")

Use the package settings and optionally specify the mainClass in [Link] file:
import NativePackagerHelper._

name := "akka-sample-main-scala"

version := "2.4.20"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "[Link]" %% "akka-actor" % "2.4.20"
)

enablePlugins(JavaServerAppPackaging)


mainClass in Compile := Some("[Link]")

mappings in Universal ++= {
  // optional example illustrating how to copy additional directory
  directory("scripts") ++
  // copy configuration files to config directory
  contentOf("src/main/resources").[Link]("config/" + _)
}

// add 'config' directory first in the classpath of the start script,
// an alternative is to set the config file locations via CLI parameters
// when starting the application
scriptClasspath := Seq("../config/") ++ [Link]

licenses := Seq(("CC0", url("[Link]")))

Note: Use the JavaServerAppPackaging. Don’t use the deprecated AkkaAppPackaging (previously
named packageArchetype.akka_application), since it doesn’t have the same flexibility and quality as
the JavaServerAppPackaging.

Use the sbt task dist to package the application.


To start the application (on a unix-based system):
cd target/universal/
unzip [Link]
chmod u+x akka-sample-main-scala-2.4.20/bin/akka-sample-main-scala
akka-sample-main-scala-2.4.20/bin/akka-sample-main-scala [Link]

Use Ctrl-C to interrupt and exit the application.


On a Windows machine you can also use the bin\[Link] script.

2.5.3 In a Docker container

You can use both Akka remoting and Akka Cluster inside of Docker containers. But note that you will need to
take special care with the network configuration when using Docker, described here: remote-configuration-nat
For an example of how to set up a project using Akka Cluster and Docker take a look at the “akka-docker-cluster”
activator template.

2.6 Examples of use-cases for Akka

We see Akka being adopted by many large organizations in a big range of industries: investment and
merchant banking, retail and social media, simulation, gaming and betting, automobile and traffic systems, health
care, data analytics and much more. Any system that has the need for high throughput and low latency is a good
candidate for using Akka.
There is a great discussion on use-cases for Akka, with some good write-ups by production users, here.

2.6.1 Here are some of the areas where Akka is being deployed into production

Transaction processing (Online Gaming, Finance/Banking, Trading, Statistics, Betting, Social Media, Telecom)

Scale up, scale out, fault-tolerance / HA

2.6. Examples of use-cases for Akka 11


Akka Java Documentation, Release 2.4.20

Service backend (any industry, any app)

Service REST, SOAP, Cometd, WebSockets etc Act as message hub / integration layer Scale up, scale
out, fault-tolerance / HA

Concurrency/parallelism (any app)

Correct Simple to work with and understand Just add the jars to your existing JVM project (use Scala,
Java, Groovy or JRuby)

Simulation

Master/Worker, Compute Grid, MapReduce etc.

Batch processing (any industry)

Camel integration to hook up with batch data sources Actors divide and conquer the batch workloads

Communications Hub (Telecom, Web media, Mobile media)

Scale up, scale out, fault-tolerance / HA

Gaming and Betting (MOM, online gaming, betting)

Scale up, scale out, fault-tolerance / HA

Business Intelligence/Data Mining/general purpose crunching

Scale up, scale out, fault-tolerance / HA

Complex Event Stream Processing

Scale up, scale out, fault-tolerance / HA

2.6. Examples of use-cases for Akka 12


CHAPTER THREE

GENERAL

3.1 Terminology, Concepts

In this chapter we attempt to establish a common terminology to define a solid ground for communicating about
concurrent, distributed systems which Akka targets. Please note that, for many of these terms, there is no single
agreed definition. We simply seek to give working definitions that will be used in the scope of the Akka
documentation.

3.1.1 Concurrency vs. Parallelism

Concurrency and parallelism are related concepts, but there are important differences. Concurrency means that two or
more tasks are making progress even though they might not be executing simultaneously. This can for example
be realized with time slicing, where parts of tasks are executed sequentially and mixed with parts of other tasks.
Parallelism, on the other hand, arises when the execution can be truly simultaneous.

3.1.2 Asynchronous vs. Synchronous

A method call is considered synchronous if the caller cannot make progress until the method returns a value or
throws an exception. On the other hand, an asynchronous call allows the caller to progress after a finite number of
steps, and the completion of the method may be signalled via some additional mechanism (it might be a registered
callback, a Future, or a message).
A synchronous API may use blocking to implement synchrony, but this is not a necessity. A very CPU-intensive
task might give behavior similar to blocking. In general, it is preferred to use asynchronous APIs, as they
guarantee that the system is able to progress. Actors are asynchronous by nature: an actor can progress after a
message send without waiting for the actual delivery to happen.
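In plain Java terms the same computation can be exposed both ways; here CompletableFuture plays the role of the additional completion mechanism mentioned above. This is a generic JVM illustration, not Akka API:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncVsSync {
    // Synchronous: the caller cannot make progress until the value is returned.
    static int computeSync() {
        return 21 * 2;
    }

    // Asynchronous: the call itself returns after a finite number of steps;
    // completion is signalled later through the returned future.
    static CompletableFuture<Integer> computeAsync() {
        return CompletableFuture.supplyAsync(() -> 21 * 2);
    }

    public static void main(String[] args) {
        int a = computeSync();                         // caller was blocked until here
        CompletableFuture<Integer> f = computeAsync();
        // ... the caller is free to do other work here while the task runs ...
        int b = f.join();                              // rendezvous with the result
        System.out.println(a + " " + b);               // 42 42
    }
}
```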

3.1.3 Non-blocking vs. Blocking

We talk about blocking if the delay of one thread can indefinitely delay some of the other threads. A good example
is a resource which can be used exclusively by one thread using mutual exclusion. If a thread holds on to the
resource indefinitely (for example by accidentally running an infinite loop) other threads waiting on the resource
cannot progress. In contrast, non-blocking means that no thread is able to indefinitely delay others.
Non-blocking operations are preferred to blocking ones, as the overall progress of the system is not trivially
guaranteed when it contains blocking operations.
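The difference can be sketched with the JDK's own primitives (an illustration, not Akka-specific): a lock-based counter is blocking, because a thread stuck inside the critical section would delay every other caller, while an atomic counter is non-blocking:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

public class BlockingVsNonBlocking {
    static final ReentrantLock lock = new ReentrantLock();
    static long lockedCounter = 0;                  // guarded by `lock`
    static final AtomicLong casCounter = new AtomicLong();

    // Blocking: a thread holding the lock (e.g. stuck in an infinite loop in
    // the critical section) would delay all other callers indefinitely.
    static void incrementWithLock() {
        lock.lock();
        try { lockedCounter++; } finally { lock.unlock(); }
    }

    // Non-blocking: a suspended or stalled thread cannot prevent other
    // threads' atomic increments from completing.
    static void incrementWithCas() {
        casCounter.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) { incrementWithLock(); incrementWithCas(); }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(lockedCounter + " " + casCounter.get()); // 20000 20000
    }
}
```

Both versions produce correct counts; the distinction is in what happens if one thread stalls at an inopportune moment.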

3.1.4 Deadlock vs. Starvation vs. Live-lock

Deadlock arises when several participants are waiting on each other to reach a specific state to be able to progress.
As none of them can progress without some other participant reaching a certain state (a “Catch-22” problem) all
affected subsystems stall. Deadlock is closely related to blocking, as it is necessary that a participant thread be
able to delay the progression of other threads indefinitely.
In the case of deadlock, no participants can make progress, while in contrast starvation happens when there are
participants that can make progress, but there might be one or more that cannot. A typical scenario is the case
of a naive scheduling algorithm that always selects high-priority tasks over low-priority ones. If the number of
incoming high-priority tasks is constantly high enough, no low-priority ones will ever be finished.
Livelock is similar to deadlock in that none of the participants make progress. The difference though is that instead
of being frozen in a state of waiting for others to progress, the participants continuously change their state. An
example scenario is when two participants have two identical resources available. They each try to get the resource,
but they also check whether the other needs the resource, too. If the resource is requested by the other participant, they
try to get the other instance of the resource. In the unfortunate case it might happen that the two participants
“bounce” between the two resources, never acquiring either, but always yielding to the other.
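The classic recipe for deadlock is two threads acquiring two locks in opposite order; imposing a single global lock order removes the circular wait. A minimal plain-Java sketch (the names are illustrative):

```java
public class LockOrdering {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static int transfers = 0;                 // guarded by both locks

    // Every thread takes the locks in the same global order (A before B), so
    // the circular wait that deadlock requires can never form.
    static void transfer() {
        synchronized (lockA) {
            synchronized (lockB) {
                transfers++;                  // work involving both resources
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> { for (int i = 0; i < 1000; i++) transfer(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();                 // completes: no deadlock possible
        System.out.println(transfers);        // 2000
    }
}
```

If one thread took lockB before lockA instead, the two threads could each hold one lock while waiting forever for the other.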

3.1.5 Race Condition

We call it a race condition when an assumption about the ordering of a set of events might be violated by external
non-deterministic effects. Race conditions often arise when multiple threads have shared mutable state, and the
operations of the threads on that state might be interleaved, causing unexpected behavior. While this is a common case,
shared state is not necessary to have race conditions. One example could be a client sending unordered packets
(e.g. UDP datagrams) P1, P2 to a server. As the packets might potentially travel via different network routes, it
is possible that the server receives P2 first and P1 afterwards. If the messages contain no information about their
sending order, the server cannot determine that they were sent in a different order. Depending on the
meaning of the packets this can cause race conditions.
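The shared-mutable-state variant is easy to reproduce in plain Java: count++ is a read-modify-write sequence that two threads can interleave, whereas an AtomicInteger performs the update atomically (an illustrative sketch, not Akka code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RacyCounter {
    static int unsafeCount = 0;                        // racy: ++ is not atomic
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                         // lost updates possible
                safeCount.incrementAndGet();           // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start(); t1.join(); t2.join();
        // unsafeCount is often less than 200000; safeCount is exactly 200000
        System.out.println(unsafeCount + " vs " + safeCount.get());
    }
}
```

The racy result depends on non-deterministic thread interleaving, which is exactly what makes race conditions hard to reproduce and debug.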

Note: The only guarantee that Akka provides about messages sent between a given pair of actors is that their
order is always preserved. See Message Delivery Reliability.

3.1.6 Non-blocking Guarantees (Progress Conditions)

As discussed in the previous sections, blocking is undesirable for several reasons, including the dangers of
deadlocks and reduced throughput in the system. In the following sections we discuss various non-blocking properties
of different strengths.

Wait-freedom

A method is wait-free if every call is guaranteed to finish in a finite number of steps. If a method is bounded
wait-free then the number of steps has a finite upper bound.
From this definition it follows that wait-free methods are never blocking, therefore deadlock can not happen.
Additionally, as each participant can progress after a finite number of steps (when the call finishes), wait-free
methods are free of starvation.

Lock-freedom

Lock-freedom is a weaker property than wait-freedom. In the case of lock-free calls, infinitely often some method
finishes in a finite number of steps. This definition implies that no deadlock is possible for lock-free calls. On the
other hand, the guarantee that some call finishes in a finite number of steps is not enough to guarantee that all of
them eventually finish. In other words, lock-freedom is not enough to guarantee the lack of starvation.
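On the JVM the canonical lock-free construction is a compare-and-swap retry loop: a thread's CAS can only fail because another thread's CAS succeeded, so some call always completes, even though an individual caller may have to retry (a plain-Java sketch):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Lock-free increment: if compareAndSet fails, it is only because another
    // thread's CAS succeeded in the meantime, i.e. the system as a whole made
    // progress; this particular caller simply retries.
    int increment() {
        while (true) {
            int current = value.get();
            if (value.compareAndSet(current, current + 1)) {
                return current + 1;
            }
        }
    }

    int get() { return value.get(); }

    public static void main(String[] args) throws InterruptedException {
        LockFreeCounter c = new LockFreeCounter();
        Runnable work = () -> { for (int i = 0; i < 50_000; i++) c.increment(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(c.get()); // 100000
    }
}
```

Note that a single caller has no finite upper bound on its retries under contention, which is why this construction is lock-free but not wait-free.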

Obstruction-freedom

Obstruction-freedom is the weakest non-blocking guarantee discussed here. A method is called obstruction-free if
there is a point in time after which it executes in isolation (i.e. other threads make no steps, e.g. because they are
suspended), and from then on it finishes in a bounded number of steps. All lock-free objects are obstruction-free, but
the opposite is generally not true.
Optimistic concurrency control (OCC) methods are usually obstruction-free. The OCC approach is that every
participant tries to execute its operation on the shared object, but if a participant detects conflicts from others, it
rolls back the modifications and tries again according to some schedule. If there is a point in time where one of
the participants is the only one trying, the operation will succeed.
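The OCC idea can be sketched in plain Java with an AtomicReference holding immutable state: compute the modification on a private copy, and publish it only if nobody changed the state in the meantime; on a detected conflict the copy is discarded (the roll-back) and the operation retried. An illustrative sketch, not Akka code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticList {
    private final AtomicReference<List<Integer>> state =
        new AtomicReference<>(Collections.emptyList());

    // Optimistic update: read a snapshot, build the modified version on the
    // side, and publish it only if nobody changed the state meanwhile.
    void add(int element) {
        while (true) {
            List<Integer> snapshot = state.get();
            List<Integer> updated = new ArrayList<>(snapshot);
            updated.add(element);
            if (state.compareAndSet(snapshot, Collections.unmodifiableList(updated))) {
                return;
            }
            // conflict detected: another participant won; roll back and retry
        }
    }

    List<Integer> get() { return state.get(); }

    public static void main(String[] args) throws InterruptedException {
        OptimisticList list = new OptimisticList();
        Runnable work = () -> { for (int i = 0; i < 1000; i++) list.add(i); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(list.get().size()); // 2000
    }
}
```

If at some point only one participant keeps trying, its compareAndSet necessarily succeeds, matching the obstruction-freedom definition above.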

3.1.7 Recommended literature

• The Art of Multiprocessor Programming, M. Herlihy and N Shavit, 2008. ISBN 978-0123705914
• Java Concurrency in Practice, B. Goetz, T. Peierls, J. Bloch, J. Bowbeer, D. Holmes and D. Lea, 2006.
ISBN 978-0321349606

3.2 Actor Systems

Actors are objects which encapsulate state and behavior; they communicate exclusively by exchanging messages
which are placed into the recipient’s mailbox. In a sense, actors are the most stringent form of object-oriented
programming, but it serves better to view them as persons: while modeling a solution with actors, envision a group
of people and assign sub-tasks to them, arrange their functions into an organizational structure and think about
how to escalate failure (all with the benefit of not actually dealing with people, which means that we need not
concern ourselves with their emotional state or moral issues). The result can then serve as a mental scaffolding for
building the software implementation.

Note: An ActorSystem is a heavyweight structure that will allocate 1. . . N Threads, so create one per logical
application.

3.2.1 Hierarchical Structure

Like in an economic organization, actors naturally form hierarchies. One actor, which is to oversee a certain
function in the program, might want to split up its task into smaller, more manageable pieces. For this purpose it
starts child actors which it supervises. While the details of supervision are explained here, we shall concentrate on
the underlying concepts in this section. The only prerequisite is to know that each actor has exactly one supervisor,
which is the actor that created it.
The quintessential feature of actor systems is that tasks are split up and delegated until they become small enough
to be handled in one piece. In doing so, not only is the task itself clearly structured, but the resulting actors can
be reasoned about in terms of which messages they should process, how they should react normally and how
failure should be handled. If one actor does not have the means for dealing with a certain situation, it sends a
corresponding failure message to its supervisor, asking for help. The recursive structure then allows failure to be
handled at the right level.
Compare this to layered software design which easily devolves into defensive programming with the aim of not
leaking any failure out: if the problem is communicated to the right person, a better solution can be found than if
trying to keep everything “under the carpet”.
Now, the difficulty in designing such a system is how to decide who should supervise what. There is of course no
single best solution, but there are a few guidelines which might be helpful:
• If one actor manages the work another actor is doing, e.g. by passing on sub-tasks, then the manager should
supervise the child. The reason is that the manager knows which kind of failures are expected and how to
handle them.
• If one actor carries very important data (i.e. its state shall not be lost if avoidable), this actor should source
out any possibly dangerous sub-tasks to children it supervises and handle failures of these children as
appropriate. Depending on the nature of the requests, it may be best to create a new child for each request,
which simplifies state management for collecting the replies. This is known as the “Error Kernel Pattern”
from Erlang.
• If one actor depends on another actor for carrying out its duty, it should watch that other actor’s liveness
and act upon receiving a termination notice. This is different from supervision, as the watching party has
no influence on the supervisor strategy, and it should be noted that a functional dependency alone is not a
criterion for deciding where to place a certain child actor in the hierarchy.
There are of course always exceptions to these rules, but no matter whether you follow the rules or break them,
you should always have a reason.

3.2.2 Configuration Container

The actor system as a collaborating ensemble of actors is the natural unit for managing shared facilities like
scheduling services, configuration, logging, etc. Several actor systems with different configurations may co-exist
within the same JVM without problems; there is no global shared state within Akka itself. Couple this with the
transparent communication between actor systems—within one node or across a network connection—to see that
actor systems themselves can be used as building blocks in a functional hierarchy.

3.2.3 Actor Best Practices

1. Actors should be like nice co-workers: do their job efficiently without bothering everyone else needlessly
and avoid hogging resources. Translated to programming this means to process events and generate re-
sponses (or more requests) in an event-driven manner. Actors should not block (i.e. passively wait while
occupying a Thread) on some external entity—which might be a lock, a network socket, etc.—unless it is
unavoidable; in the latter case see below.
2. Do not pass mutable objects between actors. In order to ensure that, prefer immutable messages. If the
encapsulation of actors is broken by exposing their mutable state to the outside, you are back in normal Java
concurrency land with all the drawbacks.
3. Actors are made to be containers for behavior and state, embracing this means to not routinely send behavior
within messages (which may be tempting using Scala closures). One of the risks is to accidentally share
mutable state between actors, and this violation of the actor model unfortunately breaks all the properties
which make programming in actors such a nice experience.
4. Top-level actors are the innermost part of your Error Kernel, so create them sparingly and prefer truly
hierarchical systems. This has benefits with respect to fault-handling (both considering the granularity of
configuration and the performance) and it also reduces the strain on the guardian actor, which is a single
point of contention if over-used.

3.2.4 Blocking Needs Careful Management

In some cases it is unavoidable to do blocking operations, i.e. to put a thread to sleep for an indeterminate
time, waiting for an external event to occur. Examples are legacy RDBMS drivers or messaging APIs, and the
underlying reason is typically that (network) I/O occurs under the covers. When facing this, you may be tempted
to just wrap the blocking call inside a Future and work with that instead, but this strategy is too simple: you are
quite likely to find bottlenecks or run out of memory or threads when the application runs under increased load.
The non-exhaustive list of adequate solutions to the “blocking problem” includes the following suggestions:
• Do the blocking call within an actor (or a set of actors managed by a router [Java, Scala]), making sure to
configure a thread pool which is either dedicated for this purpose or sufficiently sized.
• Do the blocking call within a Future, ensuring an upper bound on the number of such calls at any point in
time (submitting an unbounded number of tasks of this nature will exhaust your memory or thread limits).
• Do the blocking call within a Future, providing a thread pool with an upper limit on the number of threads
which is appropriate for the hardware on which the application runs.

• Dedicate a single thread to manage a set of blocking resources (e.g. a NIO selector driving multiple channels)
and dispatch events as they occur as actor messages.
The first possibility is especially well-suited for resources which are single-threaded in nature, like database
handles which traditionally can only execute one outstanding query at a time and use internal synchronization to
ensure this. A common pattern is to create a router for N actors, each of which wraps a single DB connection and
handles queries as sent to the router. The number N must then be tuned for maximum throughput, which will vary
depending on which DBMS is deployed on what hardware.
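The second and third suggestions above can be sketched in plain Java: funnel all blocking calls through one dedicated, bounded ExecutorService so they can never exhaust the threads the rest of the application runs on. This is an illustration of the pattern, not Akka API (in Akka you would configure a dedicated dispatcher instead); blockingCall is a stand-in for e.g. a legacy JDBC query:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedBlocking {
    // Dedicated, fixed-size pool: at most 4 blocking calls in flight; the
    // rest of the application never competes with them for threads.
    static final ExecutorService blockingPool = Executors.newFixedThreadPool(4);

    // Stand-in for a legacy blocking call (e.g. a JDBC query).
    static int blockingCall(int input) {
        try { Thread.sleep(10); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return input * 2;
    }

    public static void main(String[] args) {
        List<CompletableFuture<Integer>> results = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            final int input = i;
            // supplyAsync with an explicit executor keeps the blocking work
            // off the common pool used by the rest of the application
            results.add(CompletableFuture.supplyAsync(() -> blockingCall(input), blockingPool));
        }
        int sum = results.stream().mapToInt(CompletableFuture::join).sum();
        System.out.println(sum); // 0 + 2 + ... + 14 = 56
        blockingPool.shutdown();
    }
}
```

The pool size bounds both the memory consumed by in-flight blocking work and the number of threads lost to it, which is exactly the failure mode the naive wrap-in-a-Future approach runs into.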

Note: Configuring thread pools is a task best delegated to Akka: simply configure them in the application.conf
and instantiate them through an ActorSystem [Java, Scala]

3.2.5 What you should not concern yourself with

An actor system manages the resources it is configured to use in order to run the actors which it contains. There
may be millions of actors within one such system; after all, the mantra is to view them as abundant, and they
weigh in at an overhead of only roughly 300 bytes per instance. Naturally, the exact order in which messages are
processed in large systems is not controllable by the application author, but this is also not intended. Take a step
back and relax while Akka does the heavy lifting under the hood.

3.3 What is an Actor?

The previous section about Actor Systems explained how actors form hierarchies and are the smallest unit when
building an application. This section looks at one such actor in isolation, explaining the concepts you encounter
while implementing it. For a more in depth reference with all the details please refer to Actors (Scala) and Untyped
Actors (Java).
An actor is a container for State, Behavior, a Mailbox, Child Actors and a Supervisor Strategy. All of this is
encapsulated behind an Actor Reference. One noteworthy aspect is that actors have an explicit lifecycle, they are
not automatically destroyed when no longer referenced; after having created one, it is your responsibility to make
sure that it will eventually be terminated as well—which also gives you control over how resources are released
When an Actor Terminates.

3.3.1 Actor Reference

As detailed below, an actor object needs to be shielded from the outside in order to benefit from the actor model.
Therefore, actors are represented to the outside using actor references, which are objects that can be passed around
freely and without restriction. This split into inner and outer object enables transparency for all the desired
operations: restarting an actor without needing to update references elsewhere, placing the actual actor object on
remote hosts, sending messages to actors in completely different applications. But the most important aspect is
that it is not possible to look inside an actor and get hold of its state from the outside, unless the actor unwisely
publishes this information itself.

3.3.2 State

Actor objects will typically contain some variables which reflect possible states the actor may be in. This can be an
explicit state machine (e.g. using the fsm-scala module), or it could be a counter, set of listeners, pending requests,
etc. These data are what make an actor valuable, and they must be protected from corruption by other actors. The
good news is that Akka actors conceptually each have their own light-weight thread, which is completely shielded
from the rest of the system. This means that instead of having to synchronize access using locks you can just write
your actor code without worrying about concurrency at all.

Behind the scenes Akka will run sets of actors on sets of real threads, where typically many actors share one
thread, and subsequent invocations of one actor may end up being processed on different threads. Akka ensures
that this implementation detail does not affect the single-threadedness of handling the actor’s state.
Because the internal state is vital to an actor’s operations, having inconsistent state is fatal. Thus, when the actor
fails and is restarted by its supervisor, the state will be created from scratch, like upon first creating the actor. This
is to enable the ability of self-healing of the system.
Optionally, an actor’s state can be automatically recovered to the state before a restart by persisting received
messages and replaying them after restart (see persistence-scala).

3.3.3 Behavior

Every time a message is processed, it is matched against the current behavior of the actor. Behavior means a
function which defines the actions to be taken in reaction to the message at that point in time, say forward a
request if the client is authorized, deny it otherwise. This behavior may change over time, e.g. because different
clients obtain authorization over time, or because the actor may go into an “out-of-service” mode and later come
back. These changes are achieved by either encoding them in state variables which are read from the behavior
logic, or the function itself may be swapped out at runtime, see the become and unbecome operations. However,
the initial behavior defined during construction of the actor object is special in the sense that a restart of the actor
will reset its behavior to this initial one.
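The behavior-swapping idea can be sketched without Akka as an object whose message handler is a replaceable function. The class and method names here are illustrative, not Akka's API; in Akka this corresponds to the become and unbecome operations:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BehaviorSwap {
    private Consumer<String> behavior;            // the current behavior
    final List<String> log = new ArrayList<>();

    BehaviorSwap() { behavior = this::outOfService; }   // initial behavior

    // "become": swap the function used for the next message
    void become(Consumer<String> next) { behavior = next; }

    // each message is matched against the behavior current at that point in time
    void receive(String msg) { behavior.accept(msg); }

    void outOfService(String msg) {
        if (msg.equals("start")) {
            become(this::inService);              // behavior change at runtime
            log.add("started");
        } else {
            log.add("rejected: " + msg);
        }
    }

    void inService(String msg) { log.add("handled: " + msg); }
}
```

Sending "ping" before "start" is rejected; after the swap the same message is handled, mirroring how an actor's reaction to a message depends on its behavior at that moment.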

3.3.4 Mailbox

An actor’s purpose is the processing of messages, and these messages were sent to the actor from other actors (or
from outside the actor system). The piece which connects sender and receiver is the actor’s mailbox: each actor
has exactly one mailbox to which all senders enqueue their messages. Enqueuing happens in the time-order of
send operations, which means that messages sent from different actors may not have a defined order at runtime
due to the apparent randomness of distributing actors across threads. Sending multiple messages to the same target
from the same actor, on the other hand, will enqueue them in the same order.
There are different mailbox implementations to choose from, the default being a FIFO: the order of the messages
processed by the actor matches the order in which they were enqueued. This is usually a good default, but
applications may need to prioritize some messages over others. In this case, a priority mailbox will enqueue not
always at the end but at a position as given by the message priority, which might even be at the front. While using
such a queue, the order of messages processed will naturally be defined by the queue’s algorithm and in general
not be FIFO.
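The FIFO versus priority distinction can be illustrated with the JDK queues that typically back such mailbox implementations (a generic sketch, not Akka's mailbox classes):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedList;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;

public class MailboxOrder {
    // Enqueue messages in send order, then process (dequeue) them all.
    static List<String> drain(Queue<String> mailbox) {
        mailbox.add("tick-1");
        mailbox.add("tick-2");
        mailbox.add("STOP");                       // an urgent control message
        List<String> processed = new ArrayList<>();
        for (String msg; (msg = mailbox.poll()) != null; ) processed.add(msg);
        return processed;
    }

    public static void main(String[] args) {
        // FIFO mailbox: processed exactly in enqueue order
        System.out.println(drain(new LinkedList<>()));   // [tick-1, tick-2, STOP]
        // Priority mailbox: the urgent message jumps the queue
        Comparator<String> urgentFirst =
            Comparator.comparing((String m) -> m.equals("STOP") ? 0 : 1);
        System.out.println(drain(new PriorityQueue<>(urgentFirst)));  // STOP first
    }
}
```

Note that in the priority case the relative order of equal-priority messages is determined by the queue's algorithm, illustrating why such mailboxes are in general not FIFO.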
An important feature in which Akka differs from some other actor model implementations is that the current
behavior must always handle the next dequeued message, there is no scanning the mailbox for the next matching
one. Failure to handle a message will typically be treated as a failure, unless this behavior is overridden.

3.3.5 Child Actors

Each actor is potentially a supervisor: if it creates children for delegating sub-tasks, it will automatically supervise
them. The list of children is maintained within the actor’s context and the actor has access to it. Modifications to
the list are done by creating (context.actorOf(...)) or stopping (context.stop(child)) children
and these actions are reflected immediately. The actual creation and termination actions happen behind the scenes
in an asynchronous way, so they do not “block” their supervisor.

3.3.6 Supervisor Strategy

The final piece of an actor is its strategy for handling faults of its children. Fault handling is then done transparently
by Akka, applying one of the strategies described in Supervision and Monitoring for each incoming failure. As
this strategy is fundamental to how an actor system is structured, it cannot be changed once an actor has been
created.

Considering that there is only one such strategy for each actor, this means that if different strategies apply to
the various children of an actor, the children should be grouped beneath intermediate supervisors with matching
strategies, preferring once more the structuring of actor systems according to the splitting of tasks into sub-tasks.

3.3.7 When an Actor Terminates

Once an actor terminates, i.e. fails in a way which is not handled by a restart, stops itself or is stopped by its
supervisor, it will free up its resources, draining all remaining messages from its mailbox into the system’s “dead
letter mailbox” which will forward them to the EventStream as DeadLetters. The mailbox is then replaced within
the actor reference with a system mailbox, redirecting all new messages to the EventStream as DeadLetters. This
is done on a best effort basis, though, so do not rely on it in order to construct “guaranteed delivery”.
The reason for not just silently dumping the messages was inspired by our tests: we register the TestEventListener
on the event bus to which the dead letters are forwarded, and that will log a warning for every dead letter
received—this has been very helpful for deciphering test failures more quickly. It is conceivable that this feature
may also be of use for other purposes.

3.4 Supervision and Monitoring

This chapter outlines the concept behind supervision, the primitives offered and their semantics. For details on
how that translates into real code, please refer to the corresponding chapters for Scala and Java APIs.

3.4.1 What Supervision Means

As described in Actor Systems, supervision describes a dependency relationship between actors: the supervisor
delegates tasks to subordinates and therefore must respond to their failures. When a subordinate detects a failure
(i.e. throws an exception), it suspends itself and all its subordinates and sends a message to its supervisor, signaling
failure. Depending on the nature of the work to be supervised and the nature of the failure, the supervisor has a
choice of the following four options:
1. Resume the subordinate, keeping its accumulated internal state
2. Restart the subordinate, clearing out its accumulated internal state
3. Stop the subordinate permanently
4. Escalate the failure, thereby failing itself
It is important to always view an actor as part of a supervision hierarchy, which explains the existence of the fourth
choice (as a supervisor also is subordinate to another supervisor higher up) and has implications on the first three:
resuming an actor resumes all its subordinates, restarting an actor entails restarting all its subordinates (but see
below for more details), similarly terminating an actor will also terminate all its subordinates. It should be noted
that the default behavior of the preRestart hook of the Actor class is to terminate all its children before
restarting, but this hook can be overridden; the recursive restart applies to all children left after this hook has been
executed.
Each supervisor is configured with a function translating all possible failure causes (i.e. exceptions) into one of
the four choices given above; notably, this function does not take the failed actor’s identity as an input. It is quite
easy to come up with examples of structures where this might not seem flexible enough, e.g. wishing for different
strategies to be applied to different subordinates. At this point it is vital to understand that supervision is about
forming a recursive fault handling structure. If you try to do too much at one level, it will become hard to reason
about, hence the recommended way in this case is to add a level of supervision.
Akka implements a specific form called “parental supervision”. Actors can only be created by other actors—where
the top-level actor is provided by the library—and each created actor is supervised by its parent. This restriction
makes the formation of actor supervision hierarchies implicit and encourages sound design decisions. It should
be noted that this also guarantees that actors cannot be orphaned or attached to supervisors from the outside,
which might otherwise catch them unawares. In addition, this yields a natural and clean shutdown procedure for
(sub-trees of) actor applications.

Warning: Supervision related parent-child communication happens by special system messages that have
their own mailboxes separate from user messages. This implies that supervision related events are not
deterministically ordered relative to ordinary messages. In general, the user cannot influence the order of normal
messages and failure notifications. For details and example see the Discussion: Message Ordering section.

3.4.2 The Top-Level Supervisors

An actor system will during its creation start at least three actors, shown in the image above. For more information
about the consequences for actor paths see Top-Level Scopes for Actor Paths.

/user: The Guardian Actor

The actor which is probably most interacted with is the parent of all user-created actors, the guardian named
"/user". Actors created using system.actorOf() are children of this actor. This means that when this
guardian terminates, all normal actors in the system will be shut down, too. It also means that this guardian’s
supervisor strategy determines how the top-level normal actors are supervised. Since Akka 2.1 it is possible to
configure this using the setting akka.actor.guardian-supervisor-strategy, which takes the fully-
qualified class-name of a SupervisorStrategyConfigurator. When the guardian escalates a failure, the
root guardian’s response will be to terminate the guardian, which in effect will shut down the whole actor system.

/system: The System Guardian

This special guardian has been introduced in order to achieve an orderly shut-down sequence where logging
remains active while all normal actors terminate, even though logging itself is implemented using actors. This
is realized by having the system guardian watch the user guardian and initiate its own shut-down upon
reception of the Terminated message. The top-level system actors are supervised using a strategy which
will restart indefinitely upon all types of Exception except for ActorInitializationException and
ActorKilledException, which will terminate the child in question. All other throwables are escalated,
which will shut down the whole actor system.

/: The Root Guardian

The root guardian is the grand-parent of all so-called “top-level” actors and supervises all the special actors
mentioned in Top-Level Scopes for Actor Paths using the SupervisorStrategy.stoppingStrategy,
whose purpose is to terminate the child upon any type of Exception. All other throwables will be escalated
... but to whom? Since every real actor has a supervisor, the supervisor of the root guardian cannot be a real
actor. And because this means that it is “outside of the bubble”, it is called the “bubble-walker”. This is a
synthetic ActorRef which in effect stops its child upon the first sign of trouble and sets the actor system’s
isTerminated status to true as soon as the root guardian is fully terminated (all children recursively stopped).

3.4.3 What Restarting Means

When presented with an actor which failed while processing a certain message, causes for the failure fall into three
categories:
• Systematic (i.e. programming) error for the specific message received
• (Transient) failure of some external resource used during processing the message
• Corrupt internal state of the actor
Unless the failure is specifically recognizable, the third cause cannot be ruled out, which leads to the conclusion
that the internal state needs to be cleared out. If the supervisor decides that its other children or itself is not
affected by the corruption—e.g. because of conscious application of the error kernel pattern—it is therefore best
to restart the child. This is carried out by creating a new instance of the underlying Actor class and replacing
the failed instance with the fresh one inside the child’s ActorRef; the ability to do this is one of the reasons for
encapsulating actors within special references. The new actor then resumes processing its mailbox, meaning that
the restart is not visible outside of the actor itself with the notable exception that the message during which the
failure occurred is not re-processed.
The precise sequence of events during a restart is the following:
1. suspend the actor (which means that it will not process normal messages until resumed), and recursively
suspend all children
2. call the old instance’s preRestart hook (defaults to sending termination requests to all children and
calling postStop)
3. wait for all children which were requested to terminate (using [Link]()) during preRestart
to actually terminate; this—like all actor operations—is non-blocking, the termination notice from the last
killed child will effect the progression to the next step
4. create new actor instance by invoking the originally provided factory again
5. invoke postRestart on the new instance (which by default also calls preStart)
6. send restart request to all children which were not killed in step 3; restarted children will follow the same
process recursively, from step 2
7. resume the actor

3.4.4 What Lifecycle Monitoring Means

Note: Lifecycle Monitoring in Akka is usually referred to as DeathWatch

In contrast to the special relationship between parent and child described above, each actor may monitor any other
actor. Since actors emerge from creation fully alive and restarts are not visible outside of the affected supervisors,
the only state change available for monitoring is the transition from alive to dead. Monitoring is thus used to tie
one actor to another so that it may react to the other actor’s termination, in contrast to supervision which reacts to
failure.
Lifecycle monitoring is implemented using a Terminated message to be received by the monitoring actor,
where the default behavior is to throw a special DeathPactException if not otherwise handled. In order to
start listening for Terminated messages, invoke ActorContext.watch(targetActorRef). To stop
listening, invoke ActorContext.unwatch(targetActorRef). One important property is that the
message will be delivered irrespective of the order in which the monitoring request and target's termination occur,
i.e. you still get the message even if at the time of registration the target is already dead.


Monitoring is particularly useful if a supervisor cannot simply restart its children and has to terminate them, e.g.
in case of errors during actor initialization. In that case it should monitor those children and re-create them or
schedule itself to retry this at a later time.
Another common use case is that an actor needs to fail in the absence of an external resource, which may also be
one of its own children. If a third party terminates a child by way of the system.stop(child) method or
sending a PoisonPill, the supervisor might well be affected.

Delayed restarts with the BackoffSupervisor pattern

Provided as a built-in pattern, the akka.pattern.BackoffSupervisor implements the so-called
exponential backoff supervision strategy, starting a child actor again when it fails, each time with a growing
time delay between restarts.
This pattern is useful when the started actor fails 1 because some external resource is not available, and we need to
give it some time to start up again. One of the prime examples when this is useful is when a PersistentActor fails
(by stopping) with a persistence failure, which indicates that the database may be down or overloaded; in such
situations it makes most sense to give it a little bit of time to recover before the persistent actor is started.
The following Scala snippet shows how to create a backoff supervisor which will start the given echo actor after
it has stopped because of a failure, in increasing intervals of 3, 6, 12, 24 and finally 30 seconds:
val childProps = Props(classOf[EchoActor])

val supervisor = BackoffSupervisor.props(
  Backoff.onStop(
    childProps,
    childName = "myEcho",
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2 // adds 20% "noise" to vary the intervals slightly
  ))

system.actorOf(supervisor, name = "echoSupervisor")

The above is equivalent to this Java code:


import scala.concurrent.duration.Duration;

final Props childProps = Props.create(EchoActor.class);

final Props supervisorProps = BackoffSupervisor.props(
  Backoff.onStop(
    childProps,
    "myEcho",
    Duration.create(3, TimeUnit.SECONDS),
    Duration.create(30, TimeUnit.SECONDS),
    0.2)); // adds 20% "noise" to vary the intervals slightly

system.actorOf(supervisorProps, "echoSupervisor");

Using a randomFactor to add a little bit of additional variance to the backoff intervals is highly recommended,
in order to avoid multiple actors restarting at the exact same point in time, for example because they were stopped
due to a shared resource such as a database going down and restarting after the same configured interval. By
adding additional randomness to the restart intervals the actors will start at slightly different points in time, thus
avoiding large spikes of traffic hitting the recovering shared database or other resource that they all need to contact.
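To make these intervals concrete, the following plain-Java sketch re-implements the delay calculation; the class and method names are invented for illustration, and the real logic (which applies the noise slightly differently) is internal to akka.pattern.BackoffSupervisor:

```java
import java.time.Duration;
import java.util.Random;

// Illustrative sketch only — NOT the Akka implementation. Shows how an exponential
// backoff schedule grows from minBackoff, doubles per restart, is jittered by
// randomFactor, and is capped at maxBackoff.
public class BackoffDelays {
    static Duration delay(int restartCount, Duration minBackoff, Duration maxBackoff,
                          double randomFactor, Random rnd) {
        double noise = 1.0 + rnd.nextDouble() * randomFactor; // up to +randomFactor of "noise"
        double calculated = minBackoff.toMillis() * Math.pow(2, restartCount) * noise;
        return Duration.ofMillis((long) Math.min(calculated, maxBackoff.toMillis()));
    }

    public static void main(String[] args) {
        // With randomFactor = 0 the schedule is exactly 3, 6, 12, 24, 30, 30, ... seconds
        for (int i = 0; i < 6; i++) {
            System.out.println(delay(i, Duration.ofSeconds(3), Duration.ofSeconds(30),
                                     0.0, new Random()).getSeconds());
        }
    }
}
```

With randomFactor = 0.2, each delay is stretched by up to 20%, which is what spreads the restarts of many actors apart in time.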
The akka.pattern.BackoffSupervisor actor can also be configured to restart the actor after a delay
when the actor crashes and the supervision strategy decides that it should restart.
The following Scala snippet shows how to create a backoff supervisor which will start the given echo actor after
it has crashed because of some exception, in increasing intervals of 3, 6, 12, 24 and finally 30 seconds:
1 A failure can be indicated in two different ways: by an actor stopping or crashing.


val childProps = Props(classOf[EchoActor])

val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    childProps,
    childName = "myEcho",
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2 // adds 20% "noise" to vary the intervals slightly
  ))

system.actorOf(supervisor, name = "echoSupervisor")

The above is equivalent to this Java code:


import scala.concurrent.duration.Duration;

final Props childProps = Props.create(EchoActor.class);

final Props supervisorProps = BackoffSupervisor.props(
  Backoff.onFailure(
    childProps,
    "myEcho",
    Duration.create(3, TimeUnit.SECONDS),
    Duration.create(30, TimeUnit.SECONDS),
    0.2)); // adds 20% "noise" to vary the intervals slightly

system.actorOf(supervisorProps, "echoSupervisor");

The akka.pattern.BackoffOptions can be used to customize the behavior of the back-off supervisor
actor; below are some examples:
val supervisor = BackoffSupervisor.props(
  Backoff.onStop(
    childProps,
    childName = "myEcho",
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2 // adds 20% "noise" to vary the intervals slightly
  ).withManualReset // the child must send BackoffSupervisor.Reset to its parent
  .withDefaultStoppingStrategy // Stop at any Exception thrown
)

The above code sets up a back-off supervisor that requires the child actor to send a
BackoffSupervisor.Reset message to its parent when a message is successfully
processed, resetting the back-off. It also uses a default stopping strategy; any exception will cause the child to
stop.
val supervisor = BackoffSupervisor.props(
  Backoff.onFailure(
    childProps,
    childName = "myEcho",
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2 // adds 20% "noise" to vary the intervals slightly
  ).withAutoReset(10.seconds) // back-off is reset if the child does not fail within 10 seconds
  .withSupervisorStrategy(
    OneForOneStrategy() {
      case _: MyException => SupervisorStrategy.Restart
      case _ => SupervisorStrategy.Escalate
    }))

The above code sets up a back-off supervisor that restarts the child after back-off if MyException is thrown; any
other exception will be escalated. The back-off is automatically reset if the child does not throw any errors within
10 seconds.

3.4.5 One-For-One Strategy vs. All-For-One Strategy

There are two classes of supervision strategies which come with Akka: OneForOneStrategy and
AllForOneStrategy. Both are configured with a mapping from exception type to supervision directive (see
above) and limits on how often a child is allowed to fail before terminating it. The difference between them is that
the former applies the obtained directive only to the failed child, whereas the latter applies it to all siblings as well.
Normally, you should use the OneForOneStrategy, which also is the default if none is specified explicitly.
The AllForOneStrategy is applicable in cases where the ensemble of children has such tight dependencies
among them that a failure of one child affects the function of the others, i.e. they are inextricably linked. Since
a restart does not clear out the mailbox, it often is best to terminate the children upon failure and re-create them
explicitly from the supervisor (by watching the children’s lifecycle); otherwise you have to make sure that it is no
problem for any of the actors to receive a message which was queued before the restart but processed afterwards.
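The mapping from exception type to supervision directive can be pictured as a plain decider function. The toy model below is not the Akka API — the enum and the chosen exception types are invented for the sketch — but it mirrors the shape of a typical OneForOneStrategy decider:

```java
// Toy model of a supervision decider — NOT Akka's OneForOneStrategy API, just the
// shape of the exception-to-directive mapping it is configured with.
public class Decider {
    enum Directive { RESUME, RESTART, STOP, ESCALATE }

    static Directive decide(Throwable t) {
        if (t instanceof ArithmeticException) return Directive.RESUME;    // keep state, keep going
        if (t instanceof NullPointerException) return Directive.RESTART;  // fresh actor instance
        if (t instanceof IllegalArgumentException) return Directive.STOP; // terminate the child
        return Directive.ESCALATE;                                        // defer to the parent's supervisor
    }

    public static void main(String[] args) {
        System.out.println(decide(new NullPointerException())); // RESTART
        System.out.println(decide(new Exception()));            // ESCALATE
    }
}
```

Under OneForOneStrategy the resulting directive is applied to the failed child only; under AllForOneStrategy the same directive would be applied to all of its siblings as well.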
Normally stopping a child (i.e. not in response to a failure) will not automatically terminate the other children
in an all-for-one strategy; this can easily be done by watching their lifecycle: if the Terminated message is
not handled by the supervisor, it will throw a DeathPactException which (depending on its supervisor) will
restart it, and the default preRestart action will terminate all children. Of course this can be handled explicitly
as well.
Please note that creating one-off actors from an all-for-one supervisor entails that failures escalated by the
temporary actor will affect all the permanent ones. If this is not desired, install an intermediate supervisor; this
can very easily be done by declaring a router of size 1 for the worker, see Routing (Scala) or Routing (Java).

3.5 Actor References, Paths and Addresses

This chapter describes how actors are identified and located within a possibly distributed actor system. It ties into
the central idea that Actor Systems form intrinsic supervision hierarchies as well as that communication between
actors is transparent with respect to their placement across multiple network nodes.

The above image displays the relationship between the most important entities within an actor system, please read
on for the details.


3.5.1 What is an Actor Reference?

An actor reference is a subtype of ActorRef, whose foremost purpose is to support sending messages to the
actor it represents. Each actor has access to its canonical (local) reference through the self field; this reference
is also included as sender reference by default for all messages sent to other actors. Conversely, during message
processing the actor has access to a reference representing the sender of the current message through the sender
method.
There are several different types of actor references that are supported depending on the configuration of the actor
system:
• Purely local actor references are used by actor systems which are not configured to support networking
functions. These actor references will not function if sent across a network connection to a remote JVM.
• Local actor references when remoting is enabled are used by actor systems which support networking func-
tions for those references which represent actors within the same JVM. In order to also be reachable when
sent to other network nodes, these references include protocol and remote addressing information.
• There is a subtype of local actor references which is used for routers (i.e. actors mixing in the Router
trait). Its logical structure is the same as for the aforementioned local references, but sending a message to
them dispatches to one of their children directly instead.
• Remote actor references represent actors which are reachable using remote communication, i.e. sending
messages to them will serialize the messages transparently and send them to the remote JVM.
• There are several special types of actor references which behave like local actor references for all practical
purposes:
– PromiseActorRef is the special representation of a Promise for the purpose of being completed
by the response from an actor. akka.pattern.ask creates this actor reference.
– DeadLetterActorRef is the default implementation of the dead letters service to which Akka
routes all messages whose destinations are shut down or non-existent.
– EmptyLocalActorRef is what Akka returns when looking up a non-existent local actor path: it
is equivalent to a DeadLetterActorRef, but it retains its path so that Akka can send it over the
network and compare it to other existing actor references for that path, some of which might have been
obtained before the actor died.
• And then there are some one-off internal implementations which you should never really see:
– There is an actor reference which does not represent an actor but acts only as a pseudo-supervisor for
the root guardian, we call it “the one who walks the bubbles of space-time”.
– The first logging service started before actually firing up actor creation facilities is a fake
actor reference which accepts log events and prints them directly to standard output; it is
Logging.StandardOutLogger.

3.5.2 What is an Actor Path?

Since actors are created in a strictly hierarchical fashion, there exists a unique sequence of actor names given by
recursively following the supervision links between child and parent down towards the root of the actor system.
This sequence can be seen as enclosing folders in a file system, hence we adopted the name "path" to refer to it,
although the actor hierarchy has some fundamental differences from a file system hierarchy.
An actor path consists of an anchor, which identifies the actor system, followed by the concatenation of the path
elements, from root guardian to the designated actor; the path elements are the names of the traversed actors and
are separated by slashes.

What is the Difference Between Actor Reference and Path?

An actor reference designates a single actor and the life-cycle of the reference matches that actor's life-cycle; an
actor path represents a name which may or may not be inhabited by an actor, and the path itself does not have a
life-cycle; it never becomes invalid. You can create an actor path without creating an actor, but you cannot create
an actor reference without creating the corresponding actor.
You can create an actor, terminate it, and then create a new actor with the same actor path. The newly created
actor is a new incarnation of the actor. It is not the same actor. An actor reference to the old incarnation is not
valid for the new incarnation. Messages sent to the old actor reference will not be delivered to the new incarnation
even though they have the same path.

Actor Path Anchors

Each actor path has an address component, describing the protocol and location by which the corresponding actor
is reachable, followed by the names of the actors in the hierarchy from the root up. Examples are:
"akka://my-sys/user/service-a/worker1" // purely local
"akka.tcp://my-sys@host.example.com:5678/user/service-b" // remote

Here, akka.tcp is the default remote transport for the 2.4 release; other transports are pluggable. The
interpretation of the host and port part (i.e. host.example.com:5678 in the example) depends on the transport
mechanism used, but it must abide by the URI structural rules.
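Since the address part follows URI structural rules, java.net.URI can take such a remote path apart — shown here purely as an illustration of which pieces form the address and which form the path elements:

```java
import java.net.URI;

// Dissecting a remote actor path with java.net.URI (illustration only; Akka has its
// own ActorPath/Address classes for this).
public class PathAnatomy {
    public static void main(String[] args) {
        URI u = URI.create("akka.tcp://my-sys@host.example.com:5678/user/service-b");
        System.out.println(u.getScheme());   // akka.tcp — the protocol
        System.out.println(u.getUserInfo()); // my-sys — the actor system name
        System.out.println(u.getHost());     // host.example.com
        System.out.println(u.getPort());     // 5678
        System.out.println(u.getPath());     // /user/service-b — guardian and actor names
    }
}
```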

Logical Actor Paths

The unique path obtained by following the parental supervision links towards the root guardian is called the logical
actor path. This path matches exactly the creation ancestry of an actor, so it is completely deterministic as soon as
the actor system’s remoting configuration (and with it the address component of the path) is set.

Physical Actor Paths

While the logical actor path describes the functional location within one actor system, configuration-based remote
deployment means that an actor may be created on a different network host than its parent, i.e. within a different
actor system. In this case, following the actor path from the root guardian up entails traversing the network, which
is a costly operation. Therefore, each actor also has a physical path, starting at the root guardian of the actor
system where the actual actor object resides. Using this path as sender reference when querying other actors will
let them reply directly to this actor, minimizing delays incurred by routing.
One important aspect is that a physical actor path never spans multiple actor systems or JVMs. This means that
the logical path (supervision hierarchy) and the physical path (actor deployment) of an actor may diverge if one
of its ancestors is remotely supervised.

Actor path alias or symbolic link?

As in some real file-systems you might think of a “path alias” or “symbolic link” for an actor, i.e. one actor
may be reachable using more than one path. However, you should note that actor hierarchy is different from file
system hierarchy. You cannot freely create actor paths like symbolic links to refer to arbitrary actors. As described
in the above logical and physical actor path sections, an actor path must be either logical path which represents
supervision hierarchy, or physical path which represents actor deployment.

3.5.3 How are Actor References obtained?

There are two general categories to how actor references may be obtained: by creating actors or by looking them
up, where the latter functionality comes in the two flavours of creating actor references from concrete actor paths
and querying the logical actor hierarchy.


Creating Actors

An actor system is typically started by creating actors beneath the guardian actor using the
ActorSystem.actorOf method and then using ActorContext.actorOf from within the created
actors to spawn the actor tree. These methods return a reference to the newly created actor. Each actor has direct
access (through its ActorContext) to references for its parent, itself and its children. These references may be
sent within messages to other actors, enabling those to reply directly.

Looking up Actors by Concrete Path

In addition, actor references may be looked up using the ActorSystem.actorSelection method. The
selection can be used for communicating with said actor and the actor corresponding to the selection is looked up
when delivering each message.
To acquire an ActorRef that is bound to the life-cycle of a specific actor you need to send a message, such as
the built-in Identify message, to the actor and use the sender() reference of a reply from the actor.

Absolute vs. Relative Paths

In addition to ActorSystem.actorSelection there is also ActorContext.actorSelection, which
is available inside any actor as context.actorSelection. This yields an actor selection much like its twin
on ActorSystem, but instead of looking up the path starting from the root of the actor tree it starts out on the
current actor. Path elements consisting of two dots ("..") may be used to access the parent actor. You can for
example send a message to a specific sibling:
context.actorSelection("../brother") ! msg

Absolute paths may of course also be looked up on context in the usual way, i.e.
context.actorSelection("/user/serviceA") ! msg

will work as expected.
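How the ".." elements walk the hierarchy can be sketched with a hypothetical string-based model (the real resolution is performed by Akka's ActorPath machinery, not by code like this):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Hypothetical sketch: resolving relative path elements against an actor path.
public class RelativePaths {
    static String resolve(String basePath, String relative) {
        Deque<String> elements = new ArrayDeque<>(Arrays.asList(basePath.substring(1).split("/")));
        for (String e : relative.split("/")) {
            if (e.equals("..")) elements.removeLast();  // step up to the parent
            else if (!e.isEmpty()) elements.addLast(e); // descend to a child
        }
        return "/" + String.join("/", elements);
    }

    public static void main(String[] args) {
        // context.actorSelection("../brother") from /user/parent/me targets:
        System.out.println(resolve("/user/parent/me", "../brother")); // /user/parent/brother
    }
}
```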

Querying the Logical Actor Hierarchy

Since the actor system forms a file-system like hierarchy, matching on paths is possible in the same way as sup-
ported by Unix shells: you may replace (parts of) path element names with wildcards ("*" and "?") to formulate
a selection which may match zero or more actual actors. Because the result is not a single actor reference, it has a
different type ActorSelection and does not support the full set of operations an ActorRef does. Selections
may be formulated using the ActorSystem.actorSelection and ActorContext.actorSelection
methods and do support sending messages:
context.actorSelection("../*") ! msg

will send msg to all siblings including the current actor. As for references obtained using actorSelection, a traversal
of the supervision hierarchy is done in order to perform the message send. As the exact set of actors which match
a selection may change even while a message is making its way to the recipients, it is not possible to watch a
selection for liveliness changes. In order to do that, resolve the uncertainty by sending a request and gathering all
answers, extracting the sender references, and then watch all discovered concrete actors. This scheme of resolving
a selection may be improved upon in a future release.

Summary: actorOf vs. actorSelection

Note: What the above sections described in some detail can be summarized and memorized easily as follows:
• actorOf only ever creates a new actor, and it creates it as a direct child of the context on which this method
is invoked (which may be any actor or actor system).


• actorSelection only ever looks up existing actors when messages are delivered, i.e. does not create
actors, or verify existence of actors when the selection is created.

3.5.4 Actor Reference and Path Equality

Equality of ActorRef matches the intention that an ActorRef corresponds to the target actor incarnation. Two
actor references are compared equal when they have the same path and point to the same actor incarnation. A
reference pointing to a terminated actor does not compare equal to a reference pointing to another (re-created)
actor with the same path. Note that a restart of an actor caused by a failure still means that it is the same actor
incarnation, i.e. a restart is not visible for the consumer of the ActorRef.
If you need to keep track of actor references in a collection and do not care about the exact actor incarnation you
can use the ActorPath as key, because the identifier of the target actor is not taken into account when comparing
actor paths.
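A toy model — using invented classes, not Akka's — illustrates why the ActorPath works as a collection key while ActorRef equality distinguishes incarnations:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: a reference identifies (path, uid); a re-created actor reuses the path
// but gets a fresh uid, so references to the old incarnation never compare equal.
public class Incarnations {
    record Ref(String path, long uid) {}

    public static void main(String[] args) {
        Ref old = new Ref("/user/worker", 1L);
        Ref fresh = new Ref("/user/worker", 2L);             // new incarnation, same path
        System.out.println(old.path().equals(fresh.path())); // true  — path ignores the incarnation
        System.out.println(old.equals(fresh));               // false — different incarnations

        Map<String, Ref> byPath = new HashMap<>();
        byPath.put(old.path(), old);
        byPath.put(fresh.path(), fresh); // replaces the old entry — the path is the key
        System.out.println(byPath.size()); // 1
    }
}
```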

3.5.5 Reusing Actor Paths

When an actor is terminated, its reference will point to the dead letter mailbox, DeathWatch will publish its
final transition and in general it is not expected to come back to life again (since the actor life cycle does not
allow this). While it is possible to create an actor at a later time with an identical path—simply due to it being
impossible to enforce the opposite without keeping the set of all actors ever created available—this is not good
practice: messages sent with actorSelection to an actor which “died” suddenly start to work again, but
without any guarantee of ordering between this transition and any other event, hence the new inhabitant of the
path may receive messages which were destined for the previous tenant.
It may be the right thing to do in very specific circumstances, but make sure to confine the handling of this precisely
to the actor’s supervisor, because that is the only actor which can reliably detect proper deregistration of the name,
before which creation of the new child will fail.
It may also be required during testing, when the test subject depends on being instantiated at a specific path. In
that case it is best to mock its supervisor so that it will forward the Terminated message to the appropriate point in
the test procedure, enabling the latter to await proper deregistration of the name.

3.5.6 The Interplay with Remote Deployment

When an actor creates a child, the actor system’s deployer will decide whether the new actor resides in the same
JVM or on another node. In the second case, creation of the actor will be triggered via a network connection to
happen in a different JVM and consequently within a different actor system. The remote system will place the
new actor below a special path reserved for this purpose and the supervisor of the new actor will be a remote actor
reference (representing that actor which triggered its creation). In this case, context.parent (the supervisor
reference) and context.path.parent (the parent node in the actor's path) do not represent the same actor.
However, looking up the child’s name within the supervisor will find it on the remote node, preserving logical
structure e.g. when sending to an unresolved actor reference.


3.5.7 What is the Address part used for?

When sending an actor reference across the network, it is represented by its path. Hence, the path must fully
encode all information necessary to send messages to the underlying actor. This is achieved by encoding protocol,
host and port in the address part of the path string. When an actor system receives an actor path from a remote
node, it checks whether that path’s address matches the address of this actor system, in which case it will be
resolved to the actor’s local reference. Otherwise, it will be represented by a remote actor reference.

3.5.8 Top-Level Scopes for Actor Paths

At the root of the path hierarchy resides the root guardian above which all other actors are found; its name is "/".
The next level consists of the following:
• "/user" is the guardian actor for all user-created top-level actors; actors created using
system.actorOf() are found below this one.
• "/system" is the guardian actor for all system-created top-level actors, e.g. logging listeners or actors
automatically deployed by configuration at the start of the actor system.
• "/deadLetters" is the dead letter actor, which is where all messages sent to stopped or non-existing
actors are re-routed (on a best-effort basis: messages may be lost even within the local JVM).
• "/temp" is the guardian for all short-lived system-created actors, e.g. those which are used in the
implementation of ActorRef.ask.
• "/remote" is an artificial path below which all actors reside whose supervisors are remote actor references
The need to structure the name space for actors like this arises from a central and very simple design goal:
everything in the hierarchy is an actor, and all actors function in the same way. Hence you can not only look
up the actors you created, you can also look up the system guardian and send it a message (which it will dutifully
discard in this case). This powerful principle means that there are no quirks to remember; it makes the whole
system more uniform and consistent.
If you want to read more about the top-level structure of an actor system, have a look at The Top-Level Supervisors.

3.6 Location Transparency

The previous section describes how actor paths are used to enable location transparency. This special feature
deserves some extra explanation, because the related term “transparent remoting” was used quite differently in the
context of programming languages, platforms and technologies.

3.6.1 Distributed by Default

Everything in Akka is designed to work in a distributed setting: all interactions of actors use purely message
passing and everything is asynchronous. This effort has been undertaken to ensure that all functions are available
equally when running within a single JVM or on a cluster of hundreds of machines. The key for enabling this
is to go from remote to local by way of optimization instead of trying to go from local to remote by way of
generalization. See this classic paper for a detailed discussion on why the second approach is bound to fail.

3.6.2 Ways in which Transparency is Broken

What is true of Akka need not be true of the application which uses it, since designing for distributed execution
poses some restrictions on what is possible. The most obvious one is that all messages sent over the wire must be
serializable. While being a little less obvious this includes closures which are used as actor factories (i.e. within
Props) if the actor is to be created on a remote node.
Another consequence is that everything needs to be aware of all interactions being fully asynchronous, which in
a computer network might mean that it may take several minutes for a message to reach its recipient (depending
on configuration). It also means that the probability for a message to be lost is much higher than within one JVM,
where it is close to zero (still: no hard guarantee!).

3.6.3 How is Remoting Used?

We took the idea of transparency to the limit in that there is nearly no API for the remoting layer of Akka: it is
purely driven by configuration. Just write your application according to the principles outlined in the previous
sections, then specify remote deployment of actor sub-trees in the configuration file. This way, your application
can be scaled out without having to touch the code. The only piece of the API which allows programmatic
influence on remote deployment is that Props contain a field which may be set to a specific Deploy instance; this
has the same effect as putting an equivalent deployment into the configuration file (if both are given, configuration
file wins).

3.6.4 Peer-to-Peer vs. Client-Server

Akka Remoting is a communication module for connecting actor systems in a peer-to-peer fashion, and it is the
foundation for Akka Clustering. The design of remoting is driven by two (related) design decisions:
1. Communication between involved systems is symmetric: if a system A can connect to a system B then
system B must also be able to connect to system A independently.
2. The roles of the communicating systems are symmetric with regard to connection patterns: there is no system
that only accepts connections, and there is no system that only initiates connections.
The consequence of these decisions is that it is not possible to safely create pure client-server setups with prede-
fined roles (violates assumption 2). For client-server setups it is better to use HTTP or Akka I/O.


Important: Using setups involving Network Address Translation, Load Balancers or Docker containers violates
assumption 1, unless additional steps are taken in the network configuration to allow symmetric communication
between involved systems. In such situations Akka can be configured to bind to a different network address than
the one used for establishing connections between Akka nodes. See remote-configuration-nat.

3.6.5 Marking Points for Scaling Up with Routers

In addition to being able to run different parts of an actor system on different nodes of a cluster, it is also possible
to scale up onto more cores by multiplying actor sub-trees which support parallelization (think for example a
search engine processing different queries in parallel). The clones can then be routed to in different fashions, e.g.
round-robin. The only thing necessary to achieve this is that the developer needs to declare a certain actor as
“withRouter”, then—in its stead—a router actor will be created which will spawn up a configurable number of
children of the desired type and route to them in the configured fashion. Once such a router has been declared, its
configuration can be freely overridden from the configuration file, including mixing it with the remote deployment
of (some of) the children. Read more about this in Routing (Scala) and Routing (Java).

3.7 Akka and the Java Memory Model

A major benefit of using the Lightbend Platform, including Scala and Akka, is that it simplifies the process of writ-
ing concurrent software. This article discusses how the Lightbend Platform, and Akka in particular, approaches
shared memory in concurrent applications.

3.7.1 The Java Memory Model

Prior to Java 5, the Java Memory Model (JMM) was ill defined. It was possible to get all kinds of strange results
when shared memory was accessed by multiple threads, such as:
• a thread not seeing values written by other threads: a visibility problem
• a thread observing ‘impossible’ behavior of other threads, caused by instructions not being executed in the
order expected: an instruction reordering problem.
With the implementation of JSR 133 in Java 5, a lot of these issues have been resolved. The JMM is a set of rules
based on the “happens-before” relation, which constrain when one memory access must happen before another,
and conversely, when they are allowed to happen out of order. Two examples of these rules are:
• The monitor lock rule: a release of a lock happens before every subsequent acquire of the same lock.
• The volatile variable rule: a write of a volatile variable happens before every subsequent read of the same
volatile variable.
Although the JMM can seem complicated, the specification tries to find a balance between ease of use and the
ability to write performant and scalable concurrent data structures.
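The volatile variable rule can be seen in a minimal, self-contained example (plain Java, unrelated to Akka's internals):

```java
// The volatile variable rule in miniature: the volatile write to `ready` publishes
// the preceding plain write to `value`, so a reader that observes ready == true is
// guaranteed to also see value == 42.
public class Handoff {
    int value;              // plain field, published via the volatile flag
    volatile boolean ready; // the "happens-before" edge

    void produce() {
        value = 42;   // 1: plain write
        ready = true; // 2: volatile write — publishes step 1
    }

    Integer consume() {
        return ready ? value : null; // volatile read; 42 is visible once ready is seen true
    }

    public static void main(String[] args) throws InterruptedException {
        Handoff h = new Handoff();
        Thread t = new Thread(h::produce);
        t.start();
        t.join(); // Thread.join also establishes happens-before, making this deterministic
        System.out.println(h.consume()); // 42
    }
}
```

Without the volatile modifier on ready, a consumer on another thread could observe ready == true yet still read a stale value — exactly the visibility problem described above.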

3.7.2 Actors and the Java Memory Model

With the Actors implementation in Akka, there are two ways multiple threads can execute actions on shared
memory:
• if a message is sent to an actor (e.g. by another actor). In most cases messages are immutable, but if
that message is not a properly constructed immutable object, without a “happens before” rule, it would be
possible for the receiver to see partially initialized data structures and possibly even values out of thin air
(longs/doubles).
• if an actor makes changes to its internal state while processing a message, and accesses that state while
processing another message moments later. It is important to realize that with the actor model you don’t get
any guarantee that the same thread will be executing the same actor for different messages.

3.7. Akka and the Java Memory Model 31


Akka Java Documentation, Release 2.4.20

To prevent visibility and reordering problems on actors, Akka guarantees the following two “happens before”
rules:
• The actor send rule: the send of the message to an actor happens before the receive of that message by the
same actor.
• The actor subsequent processing rule: processing of one message happens before processing of the next
message by the same actor.

Note: In layman’s terms this means that changes to internal fields of the actor are visible when the next message
is processed by that actor. So fields in your actor need not be volatile or equivalent.

Both rules only apply for the same actor instance and are not valid if different actors are used.

3.7.3 Futures and the Java Memory Model

The completion of a Future “happens before” the invocation of any callbacks registered to it.
We recommend not to close over non-final fields (final in Java and val in Scala); if you do choose to close
over non-final fields, they must be marked volatile in order for the current value of the field to be visible to the
callback.
If you close over a reference, you must also ensure that the instance that is referred to is thread safe. We highly
recommend staying away from objects that use locking, since it can introduce performance problems and in the
worst case, deadlocks. Such are the perils of synchronized.
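The same advice carries over to plain Java futures. A minimal sketch with CompletableFuture (all names are illustrative): capture the value you need in a final local before going asynchronous, instead of closing over a mutable field, and rely on the future's completion happening before its callbacks run.

```java
import java.util.concurrent.CompletableFuture;

public class ClosureCapture {
    static String compute(String input) { return "processed:" + input; }

    static String safeCallback() {
        // Capture the current value in a final local before going async;
        // the closure then reads an immutable snapshot, not shared mutable state.
        final String current = "sender-1";
        CompletableFuture<String> f =
            CompletableFuture.supplyAsync(() -> compute(current));
        // Completion of the future happens-before the thenApply callback,
        // so reading the result there is safe.
        return f.thenApply(r -> r + "|done").join();
    }

    public static void main(String[] args) {
        System.out.println(safeCallback()); // processed:sender-1|done
    }
}
```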

3.7.4 Actors and shared mutable state

Since Akka runs on the JVM there are still some rules to be followed.
• Closing over internal Actor state and exposing it to other threads
class MyActor extends Actor {
  var state = ...
  def receive = {
    case _ =>
      //Wrongs

      // Very bad, shared mutable state,
      // will break your application in weird ways
      Future { state = NewState }
      anotherActor ? message onSuccess { r => state = r }

      // Very bad, "sender" changes for every message,
      // shared mutable state bug
      Future { expensiveCalculation(sender()) }

      //Rights

      // Completely safe, "self" is OK to close over
      // and it’s an ActorRef, which is thread-safe
      Future { expensiveCalculation() } onComplete { f => self ! f.value.get }

      // Completely safe, we close over a fixed value
      // and it’s an ActorRef, which is thread-safe
      val currentSender = sender()
      Future { expensiveCalculation(currentSender) }
  }
}

• Messages should be immutable; this is to avoid the shared mutable state trap.
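What such an immutable message can look like in Java is sketched below (the class is a made-up example, not an Akka API): final fields set once in the constructor, and mutable collections defensively copied so the sender cannot change the message after handing it over.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ImmutableMessageDemo {
    // An immutable message: final fields, defensive copy of the collection.
    static final class OrderPlaced {
        final String orderId;
        final List<String> items;
        OrderPlaced(String orderId, List<String> items) {
            this.orderId = orderId;
            // copy + unmodifiable view: callers cannot mutate it afterwards
            this.items = Collections.unmodifiableList(new ArrayList<>(items));
        }
    }

    static int demo() {
        List<String> scratch = new ArrayList<>(Arrays.asList("book"));
        OrderPlaced msg = new OrderPlaced("o-1", scratch);
        scratch.add("pen");      // mutating the source list...
        return msg.items.size(); // ...does not affect the message
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1
    }
}
```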


3.8 Message Delivery Reliability

Akka helps you build reliable applications which make use of multiple processor cores in one machine (“scaling
up”) or distributed across a computer network (“scaling out”). The key abstraction to make this work is that all
interactions between your code units—actors—happen via message passing, which is why the precise semantics
of how messages are passed between actors deserve their own chapter.
In order to give some context to the discussion below, consider an application which spans multiple network hosts.
The basic mechanism for communication is the same whether sending to an actor on the local JVM or to a remote
actor, but of course there will be observable differences in the latency of delivery (possibly also depending on the
bandwidth of the network link and the message size) and the reliability. In case of a remote message send there
are obviously more steps involved which means that more can go wrong. Another aspect is that local sending will
just pass a reference to the message inside the same JVM, without any restrictions on the underlying object which
is sent, whereas a remote transport will place a limit on the message size.
Writing your actors such that every interaction could possibly be remote is the safe, pessimistic bet. It means to
only rely on those properties which are always guaranteed and which are discussed in detail below. This has of
course some overhead in the actor’s implementation. If you are willing to sacrifice full location transparency—for
example in case of a group of closely collaborating actors—you can place them always on the same JVM and
enjoy stricter guarantees on message delivery. The details of this trade-off are discussed further below.
As a supplementary part we give a few pointers at how to build stronger reliability on top of the built-in ones. The
chapter closes by discussing the role of the “Dead Letter Office”.

3.8.1 The General Rules

These are the rules for message sends (i.e. the tell or ! method, which also underlies the ask pattern):
• at-most-once delivery, i.e. no guaranteed delivery
• message ordering per sender–receiver pair
The first rule is typically found also in other actor implementations while the second is specific to Akka.

Discussion: What does “at-most-once” mean?

When it comes to describing the semantics of a delivery mechanism, there are three basic categories:
• at-most-once delivery means that for each message handed to the mechanism, that message is delivered
zero or one times; in more casual terms it means that messages may be lost.
• at-least-once delivery means that for each message handed to the mechanism potentially multiple attempts
are made at delivering it, such that at least one succeeds; again, in more casual terms this means that
messages may be duplicated but not lost.
• exactly-once delivery means that for each message handed to the mechanism exactly one delivery is made
to the recipient; the message can neither be lost nor duplicated.
The first one is the cheapest—highest performance, least implementation overhead—because it can be done in
a fire-and-forget fashion without keeping state at the sending end or in the transport mechanism. The second
one requires retries to counter transport losses, which means keeping state at the sending end and having an
acknowledgement mechanism at the receiving end. The third is most expensive—and has consequently worst
performance—because in addition to the second it requires state to be kept at the receiving end in order to filter
out duplicate deliveries.

Discussion: Why No Guaranteed Delivery?

At the core of the problem lies the question what exactly this guarantee shall mean:
1. The message is sent out on the network?


2. The message is received by the other host?
3. The message is put into the target actor’s mailbox?
4. The message is starting to be processed by the target actor?
5. The message is processed successfully by the target actor?
Each one of these has different challenges and costs, and it is obvious that there are conditions under which
any message passing library would be unable to comply; think for example about configurable mailbox types
and how a bounded mailbox would interact with the third point, or even what it would mean to decide upon the
“successfully” part of point five.
Along those same lines goes the reasoning in Nobody Needs Reliable Messaging. The only meaningful way for a
sender to know whether an interaction was successful is by receiving a business-level acknowledgement message,
which is not something Akka could make up on its own (neither are we writing a “do what I mean” framework
nor would you want us to).
Akka embraces distributed computing and makes the fallibility of communication explicit through message pass-
ing; therefore it does not try to lie and emulate a leaky abstraction. This is a model that has been used with great
success in Erlang and requires the users to design their applications around it. You can read more about this
approach in the Erlang documentation (sections 10.9 and 10.10); Akka follows it closely.
Another angle on this issue is that by providing only basic guarantees those use cases which do not need stronger
reliability do not pay the cost of their implementation; it is always possible to add stronger reliability on top of
basic ones, but it is not possible to retro-actively remove reliability in order to gain more performance.

Discussion: Message Ordering

The rule more specifically is that for a given pair of actors, messages sent directly from the first to the second will
not be received out-of-order. The word directly emphasizes that this guarantee only applies when sending with
the tell operator to the final destination, not when employing mediators or other message dissemination features
(unless stated otherwise).
The guarantee is illustrated in the following:
Actor A1 sends messages M1, M2, M3 to A2
Actor A3 sends messages M4, M5, M6 to A2
This means that:
1. If M1 is delivered it must be delivered before M2 and M3
2. If M2 is delivered it must be delivered before M3
3. If M4 is delivered it must be delivered before M5 and M6
4. If M5 is delivered it must be delivered before M6
5. A2 can see messages from A1 interleaved with messages from A3
6. Since there is no guaranteed delivery, any of the messages may be dropped, i.e. not arrive
at A2

Note: It is important to note that Akka’s guarantee applies to the order in which messages are enqueued into the
recipient’s mailbox. If the mailbox implementation does not respect FIFO order (e.g. a PriorityMailbox),
then the order of processing by the actor can deviate from the enqueueing order.
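The difference can be illustrated with plain Java queues standing in for mailboxes (a toy sketch, not Akka's mailbox implementation): a FIFO queue preserves the enqueue order, while a priority queue processes messages by priority regardless of when they were enqueued.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;

public class MailboxOrdering {
    // Enqueue 3, 1, 2, then drain the queue and record the processing order.
    static List<Integer> drain(Queue<Integer> mailbox) {
        for (int msg : new int[] {3, 1, 2}) mailbox.add(msg);
        List<Integer> processed = new ArrayList<>();
        while (!mailbox.isEmpty()) processed.add(mailbox.poll());
        return processed;
    }

    public static void main(String[] args) {
        // FIFO mailbox: processing order equals enqueue order
        System.out.println(drain(new ArrayDeque<>()));    // [3, 1, 2]
        // Priority mailbox: processing order follows priority instead
        System.out.println(drain(new PriorityQueue<>())); // [1, 2, 3]
    }
}
```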

Please note that this rule is not transitive:


Actor A sends message M1 to actor C
Actor A then sends message M2 to actor B
Actor B forwards message M2 to actor C


Actor C may receive M1 and M2 in any order


Causal transitive ordering would imply that M2 is never received before M1 at actor C (though any of them might
be lost). This ordering can be violated due to different message delivery latencies when A, B and C reside on
different network hosts, see more below.

Note: Actor creation is treated as a message sent from the parent to the child, with the same semantics as discussed
above. Sending a message to an actor in a way which could be reordered with this initial creation message means
that the message might not arrive because the actor does not exist yet. An example where the message might arrive
too early would be to create a remote-deployed actor R1, send its reference to another remote actor R2 and have
R2 send a message to R1. An example of well-defined ordering is a parent which creates an actor and immediately
sends a message to it.

Communication of failure

Please note, that the ordering guarantees discussed above only hold for user messages between actors. Failure
of a child of an actor is communicated by special system messages that are not ordered relative to ordinary user
messages. In particular:
Child actor C sends message M to its parent P
Child actor fails with failure F
Parent actor P might receive the two events either in order M, F or F, M
The reason for this is that internal system messages have their own mailboxes, and therefore the ordering of
enqueue calls of a user and a system message cannot guarantee the ordering of their dequeue times.

3.8.2 The Rules for In-JVM (Local) Message Sends

Be careful what you do with this section!

Relying on the stronger reliability in this section is not recommended since it will bind your application to local-
only deployment: an application may have to be designed differently (as opposed to just employing some message
exchange patterns local to some actors) in order to be fit for running on a cluster of machines. Our credo is “design
once, deploy any way you wish”, and to achieve this you should only rely on The General Rules.

Reliability of Local Message Sends

The Akka test suite relies on not losing messages in the local context (and for non-error condition tests also for
remote deployment), meaning that we actually do apply the best effort to keep our tests stable. A local tell
operation can however fail for the same reasons as a normal method call can on the JVM:
• StackOverflowError
• OutOfMemoryError
• other VirtualMachineError
In addition, local sends can fail in Akka-specific ways:
• if the mailbox does not accept the message (e.g. full BoundedMailbox)
• if the receiving actor fails while processing the message or is already terminated
While the first is clearly a matter of configuration the second deserves some thought: the sender of a message does
not get feedback if there was an exception while processing, that notification goes to the supervisor instead. This
is in general not distinguishable from a lost message for an outside observer.


Ordering of Local Message Sends

Assuming strict FIFO mailboxes the aforementioned caveat of non-transitivity of the message ordering guarantee
is eliminated under certain conditions. As you will note, these are quite subtle as it stands, and it is even possible
that future performance optimizations will invalidate this whole paragraph. The possibly non-exhaustive list of
counter-indications is:
• Before receiving the first reply from a top-level actor, there is a lock which protects an internal interim
queue, and this lock is not fair; the implication is that enqueue requests from different senders which arrive
during the actor’s construction (figuratively, the details are more involved) may be reordered depending on
low-level thread scheduling. Since completely fair locks do not exist on the JVM this is unfixable.
• The same mechanism is used during the construction of a Router, more precisely the routed ActorRef, hence
the same problem exists for actors deployed with Routers.
• As mentioned above, the problem occurs anywhere a lock is involved during enqueueing, which may also
apply to custom mailboxes.
This list has been compiled carefully, but other problematic scenarios may have escaped our analysis.

How does Local Ordering relate to Network Ordering

The rule that for a given pair of actors, messages sent directly from the first to the second will not be received
out-of-order holds for messages sent over the network with the TCP based Akka remote transport protocol.
As explained in the previous section local message sends obey transitive causal ordering under certain conditions.
This ordering can be violated due to different message delivery latencies. For example:
Actor A on node-1 sends message M1 to actor C on node-3
Actor A on node-1 then sends message M2 to actor B on node-2
Actor B on node-2 forwards message M2 to actor C on node-3
Actor C may receive M1 and M2 in any order
It might take longer for M1 to “travel” to node-3 than it takes for M2 to “travel” to node-3 via node-2.

3.8.3 Higher-level abstractions

Based on a small and consistent tool set in Akka’s core, Akka also provides powerful, higher-level abstractions on
top of it.

Messaging Patterns

As discussed above a straight-forward answer to the requirement of reliable delivery is an explicit ACK–RETRY
protocol. In its simplest form this requires
• a way to identify individual messages to correlate message with acknowledgement
• a retry mechanism which will resend messages if not acknowledged in time
• a way for the receiver to detect and discard duplicates
The third becomes necessary by virtue of the acknowledgements not being guaranteed to arrive either. An ACK-
RETRY protocol with business-level acknowledgements is supported by At-Least-Once Delivery of the Akka
Persistence module. Duplicates can be detected by tracking the identifiers of messages sent via At-Least-Once
Delivery. Another way of implementing the third part would be to make processing the messages idempotent
on the level of the business logic.
Another example of implementing all three requirements is shown at Reliable Proxy Pattern (which is now super-
seded by At-Least-Once Delivery).
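The three requirements above can be sketched in plain Java (a toy simulation, not the Akka Persistence API; all names are made up): the sender retries until an acknowledgement arrives, and the receiver detects and discards duplicates by tracking message identifiers.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AckRetrySketch {
    // Receiver side: duplicate detection by message id (requirement 3).
    static final Set<Long> seen = new HashSet<>();
    static final List<String> delivered = new ArrayList<>();

    // Returns true as the acknowledgement.
    static boolean receive(long id, String payload) {
        if (seen.add(id)) delivered.add(payload); // first delivery: process it
        return true;                              // ACK even for duplicates
    }

    // Sender side: identified messages plus retry until ACK (requirements 1 and 2).
    static void send(long id, String payload, int dropFirstN) {
        int attempt = 0;
        boolean acked = false;
        while (!acked) {
            attempt++;
            if (attempt <= dropFirstN) continue; // simulate lost message or lost ACK
            acked = receive(id, payload);
        }
    }

    public static void main(String[] args) {
        send(1, "M1", 2);  // first two attempts "lost", third one succeeds
        send(1, "M1", 0);  // a duplicate resend: discarded by the receiver
        send(2, "M2", 0);
        System.out.println(delivered); // [M1, M2]
    }
}
```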


Event Sourcing

Event sourcing (and sharding) is what makes large websites scale to billions of users, and the idea is quite simple:
when a component (think actor) processes a command it will generate a list of events representing the effect of
the command. These events are stored in addition to being applied to the component’s state. The nice thing about
this scheme is that events only ever are appended to the storage, nothing is ever mutated; this enables perfect
replication and scaling of consumers of this event stream (i.e. other components may consume the event stream as
a means to replicate the component’s state on a different continent or to react to changes). If the component’s state
is lost—due to a machine failure or by being pushed out of a cache—it can easily be reconstructed by replaying
the event stream (usually employing snapshots to speed up the process). Event sourcing is supported by
Akka Persistence.
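A toy event-sourced component in plain Java (not the Akka Persistence API; all names are illustrative) shows the core idea: commands append events to a journal, events are applied to in-memory state, and that state can always be rebuilt by replaying the journal.

```java
import java.util.ArrayList;
import java.util.List;

public class EventSourcedCounter {
    // The journal: events are only ever appended, never mutated.
    static final List<Integer> journal = new ArrayList<>();
    int state = 0;

    // A command is validated against current state, then persisted as an
    // event and applied; the event is the effect of the command.
    void handleAdd(int amount) {
        if (amount <= 0) return; // command validation
        journal.add(amount);     // 1. append the event to storage
        apply(amount);           // 2. apply the event to in-memory state
    }

    void apply(int event) { state += event; }

    // Recovery: rebuild state purely by replaying the event stream.
    static EventSourcedCounter recover() {
        EventSourcedCounter fresh = new EventSourcedCounter();
        for (int e : journal) fresh.apply(e);
        return fresh;
    }

    public static void main(String[] args) {
        EventSourcedCounter c = new EventSourcedCounter();
        c.handleAdd(3);
        c.handleAdd(4);
        // Simulate losing the in-memory state, then replay the journal:
        System.out.println(recover().state); // 7
    }
}
```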

Mailbox with Explicit Acknowledgement

By implementing a custom mailbox type it is possible to retry message processing at the receiving actor’s end
in order to handle temporary failures. This pattern is mostly useful in the local communication context where
delivery guarantees are otherwise sufficient to fulfill the application’s requirements.
Please note that the caveats for The Rules for In-JVM (Local) Message Sends do apply.
An example implementation of this pattern is shown at Mailbox with Explicit Acknowledgement.

3.8.4 Dead Letters

Messages which cannot be delivered (and for which this can be ascertained) will be delivered to a synthetic actor
called /deadLetters. This delivery happens on a best-effort basis; it may fail even within the local JVM (e.g.
during actor termination). Messages sent via unreliable network transports will be lost without turning up as dead
letters.

What Should I Use Dead Letters For?

The main use of this facility is for debugging, especially if an actor send does not arrive consistently (where
usually inspecting the dead letters will tell you that the sender or recipient was set wrong somewhere along the
way). In order to be useful for this purpose it is good practice to avoid sending to deadLetters where possible, i.e.
run your application with a suitable dead letter logger (see more below) from time to time and clean up the log
output. This exercise—like all else—requires judicious application of common sense: it may well be that avoiding
sending to a terminated actor complicates the sender’s code more than is gained in debug output clarity.
The dead letter service follows the same rules with respect to delivery guarantees as all other message sends, hence
it cannot be used to implement guaranteed delivery.

How do I Receive Dead Letters?

An actor can subscribe to class akka.actor.DeadLetter on the event stream, see Event Stream (Java) or
Event Stream (Scala) for how to do that. The subscribed actor will then receive all dead letters published
in the (local) system from that point onwards. Dead letters are not propagated over the network; if you want to
collect them in one place you will have to subscribe one actor per network node and forward them manually. Also
consider that dead letters are generated at the node which can determine that a send operation has failed, which for
a remote send can be the local system (if no network connection can be established) or the remote one (if the actor
you are sending to does not exist at that point in time).

Dead Letters Which are (Usually) not Worrisome

Every time an actor does not terminate by its own decision, there is a chance that some messages which it sends
to itself are lost. There is one which happens quite easily in complex shutdown scenarios that is usually benign:
seeing a akka.dispatch.Terminate message dropped means that two termination requests were given, but
of course only one can succeed. In the same vein, you might see akka.actor.Terminated messages from
children while stopping a hierarchy of actors turning up in dead letters if the parent is still watching the child when
the parent terminates.

3.9 Configuration

You can start using Akka without defining any configuration, since sensible default values are provided. Later on
you might need to amend the settings to change the default behavior or adapt for specific runtime environments.
Typical examples of settings that you might amend:
• log level and logger backend
• enable remoting
• message serializers
• definition of routers
• tuning of dispatchers
Akka uses the Typesafe Config Library, which might also be a good choice for the configuration of your own ap-
plication or library built with or without Akka. This library is implemented in Java with no external dependencies;
you should have a look at its documentation (in particular about ConfigFactory), which is only summarized in the
following.

Warning: If you use Akka from the Scala REPL from the 2.9.x series, and you do not provide your own
ClassLoader to the ActorSystem, start the REPL with “-Yrepl-sync” to work around a deficiency in the REPL’s
provided Context ClassLoader.

3.9.1 Where configuration is read from

All configuration for Akka is held within instances of ActorSystem, or put differently, as viewed from
the outside, ActorSystem is the only consumer of configuration information. While constructing an ac-
tor system, you can either pass in a Config object or not, where the second case is equivalent to passing
ConfigFactory.load() (with the right class loader). This means roughly that the default is to parse all
application.conf, application.json and application.properties found at the root of the
class path—please refer to the aforementioned documentation for details. The actor system then merges in all
reference.conf resources found at the root of the class path to form the fallback configuration, i.e. it inter-
nally uses
appConfig.withFallback(ConfigFactory.defaultReference(classLoader))

The philosophy is that code never contains default values, but instead relies upon their presence in the
reference.conf supplied with the library in question.
Highest precedence is given to overrides given as system properties, see the HOCON specification (near the
bottom). Also noteworthy is that the application configuration—which defaults to application—may be
overridden using the config.resource property (there are more, please refer to the Config docs).

Note: If you are writing an Akka application, keep your configuration in application.conf at the root of
the class path. If you are writing an Akka-based library, keep its configuration in reference.conf at the root
of the JAR file.


3.9.2 When using JarJar, OneJar, Assembly or any jar-bundler

Warning: Akka’s configuration approach relies heavily on the notion of every module/jar having its own
reference.conf file, all of these will be discovered by the configuration and loaded. Unfortunately this also
means that if you put/merge multiple jars into the same jar, you need to merge all the reference.confs as well.
Otherwise all defaults will be lost and Akka will not function.

If you are using Maven to package your application, you can also make use of the Apache Maven Shade Plugin
support for Resource Transformers to merge all the reference.confs on the build classpath into one.
The plugin configuration might look like this:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>1.5</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>allinone</shadedClassifierName>
        <artifactSet>
          <includes>
            <include>*:*</include>
          </includes>
        </artifactSet>
        <transformers>
          <transformer
            implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
            <resource>reference.conf</resource>
          </transformer>
          <transformer
            implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <manifestEntries>
              <Main-Class>akka.Main</Main-Class>
            </manifestEntries>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>

3.9.3 Custom application.conf

A custom application.conf might look like this:

# In this file you can override any option defined in the reference files.
# Copy in parts of the reference files and modify as you please.

akka {

  # Loggers to register at boot time (akka.event.Logging$DefaultLogger logs
  # to STDOUT)
  loggers = ["akka.event.slf4j.Slf4jLogger"]

  # Log level used by the configured loggers (see "loggers") as soon
  # as they have been started; before that, see "stdout-loglevel"
  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  loglevel = "DEBUG"

  # Log level for the very basic logger activated during ActorSystem startup.
  # This logger prints the log messages to stdout (System.out).
  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  stdout-loglevel = "DEBUG"

  # Filter of log events that is used by the LoggingAdapter before
  # publishing log events to the eventStream.
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  actor {
    provider = "cluster"

    default-dispatcher {
      # Throughput for default Dispatcher, set to 1 for as fair as possible
      throughput = 10
    }
  }

  remote {
    # The port clients should connect to. Default is 2552.
    netty.tcp.port = 4711
  }
}

3.9.4 Including files

Sometimes it can be useful to include another configuration file, for example if you have one
application.conf with all environment independent settings and then override some settings for specific
environments.
Specifying the system property -Dconfig.resource=/dev.conf will load the dev.conf file, which
includes the application.conf
dev.conf:
include "application"

akka {
  loglevel = "DEBUG"
}

More advanced include and substitution mechanisms are explained in the HOCON specification.

3.9.5 Logging of Configuration

If the system or config property akka.log-config-on-start is set to on, then the complete configuration is
logged at INFO level when the actor system is started. This is useful when you are uncertain of what configuration
is used.
If in doubt, you can also easily and nicely inspect configuration objects before or after using them to construct an
actor system:
Welcome to Scala version 2.11.11 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0).
Type in expressions to have them evaluated.
Type :help for more information.

scala> import com.typesafe.config._
import com.typesafe.config._

scala> ConfigFactory.parseString("a.b=12")
res0: com.typesafe.config.Config = Config(SimpleConfigObject({"a" : {"b" : 12}}))

scala> res0.root.render
res1: String =
{
    # String: 1
    "a" : {
        # String: 1
        "b" : 12
    }
}

The comments preceding every item give detailed information about the origin of the setting (file & line number)
plus possible comments which were present, e.g. in the reference configuration. The settings as merged with the
reference and parsed by the actor system can be displayed like this:
final ActorSystem system = ActorSystem.create();
System.out.println(system.settings());
// this is a shortcut for system.settings().config().root().render()

3.9.6 A Word About ClassLoaders

In several places of the configuration file it is possible to specify the fully-qualified class name of something to be
instantiated by Akka. This is done using Java reflection, which in turn uses a ClassLoader. Getting the right
one in challenging environments like application containers or OSGi bundles is not always trivial; the current
approach of Akka is that each ActorSystem implementation stores the current thread’s context class loader
(if available, otherwise just its own loader as in this.getClass.getClassLoader) and uses that for all
reflective accesses. This implies that putting Akka on the boot class path will yield NullPointerException
from strange places: this is simply not supported.

3.9.7 Application specific settings

The configuration can also be used for application specific settings. A good practice is to place those settings in
an Extension, as described in:
• Scala API: Application specific settings (Scala)
• Java API: Application specific settings (Java)

3.9.8 Configuring multiple ActorSystem

If you have more than one ActorSystem (or you’re writing a library and have an ActorSystem that may be
separate from the application’s) you may want to separate the configuration for each system.
Given that ConfigFactory.load() merges all resources with matching name from the whole class path, it
is easiest to utilize that functionality and differentiate actor systems within the hierarchy of the configuration:
myapp1 {
  akka.loglevel = "WARNING"
  my.own.setting = 43
}
myapp2 {
  akka.loglevel = "ERROR"
  app2.setting = "appname"
}
my.own.setting = 42
my.other.setting = "hello"

val config = ConfigFactory.load()
val app1 = ActorSystem("MyApp1", config.getConfig("myapp1").withFallback(config))
val app2 = ActorSystem("MyApp2",
  config.getConfig("myapp2").withOnlyPath("akka").withFallback(config))

These two samples demonstrate different variations of the “lift-a-subtree” trick: in the first case, the configuration
accessible from within the actor system is this

akka.loglevel = "WARNING"
my.own.setting = 43
my.other.setting = "hello"
// plus myapp1 and myapp2 subtrees

while in the second one, only the “akka” subtree is lifted, with the following result

akka.loglevel = "ERROR"
my.own.setting = 42
my.other.setting = "hello"
// plus myapp1 and myapp2 subtrees

Note: The configuration library is really powerful; explaining all its features exceeds the scope of this chapter.
In particular not covered are how to include other configuration files within other files (see a small example at
Including files) and copying parts of the configuration tree by way of path substitutions.

You may also specify and parse the configuration programmatically in other ways when instantiating the
ActorSystem.
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

val customConf = ConfigFactory.parseString("""
  akka.actor.deployment {
    /my-service {
      router = round-robin-pool
      nr-of-instances = 3
    }
  }
""")
// ConfigFactory.load sandwiches customConfig between default reference
// config and default overrides, and then resolves it.
val system = ActorSystem("MySystem", ConfigFactory.load(customConf))

3.9.9 Reading configuration from a custom location

You can replace or supplement application.conf either in code or using system properties.
If you’re using ConfigFactory.load() (which Akka does by default) you can replace
application.conf by defining -Dconfig.resource=whatever, -Dconfig.file=whatever, or
-Dconfig.url=whatever.
From inside your replacement file specified with -Dconfig.resource and friends, you can include
"application" if you still want to use application.{conf,json,properties} as well. Settings
specified before include "application" would be overridden by the included file, while those after would
override the included file.
In code, there are many customization options.
There are several overloads of ConfigFactory.load(); these allow you to specify something to be sand-
wiched between system properties (which override) and the defaults (from reference.conf), replacing the
usual application.{conf,json,properties} and replacing -Dconfig.file and friends.

The simplest variant of ConfigFactory.load() takes a resource basename (instead of application);
myname.conf, myname.json, and myname.properties would then be used instead of
application.{conf,json,properties}.
The most flexible variant takes a Config object, which you can load using any method in ConfigFactory.
For example you could put a config string in code using ConfigFactory.parseString() or you could
make a map and ConfigFactory.parseMap(), or you could load a file.
You can also combine your custom config with the usual config; that might look like:
// make a Config with just your special setting
Config myConfig =
  ConfigFactory.parseString("something=somethingElse");
// load the normal config stack (system props,
// then application.conf, then reference.conf)
Config regularConfig =
  ConfigFactory.load();
// override regular stack with myConfig
Config combined =
  myConfig.withFallback(regularConfig);
// put the result in between the overrides
// (system props) and defaults again
Config complete =
  ConfigFactory.load(combined);
// create ActorSystem
ActorSystem system =
  ActorSystem.create("myname", complete);

When working with Config objects, keep in mind that there are three “layers” in the cake:
• ConfigFactory.defaultOverrides() (system properties)
• the app’s settings
• ConfigFactory.defaultReference() (reference.conf)
The normal goal is to customize the middle layer while leaving the other two alone.
• ConfigFactory.load() loads the whole stack
• the overloads of ConfigFactory.load() let you specify a different middle layer
• the ConfigFactory.parse() variations load single files or resources
To stack two layers, use override.withFallback(fallback); try to keep system props
(defaultOverrides()) on top and reference.conf (defaultReference()) on the bottom.
Do keep in mind, you can often just add another include statement in application.conf rather than writ-
ing code. Includes at the top of application.conf will be overridden by the rest of application.conf,
while those at the bottom will override the earlier stuff.
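As a sketch of that include alternative (the included file names here are made up), an application.conf using includes at both ends would layer like this:

```
# application.conf -- hypothetical layout
include "my-defaults"    # overridden by everything below

akka.loglevel = "INFO"

include "my-overrides"   # overrides everything above
```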

3.9.10 Actor Deployment Configuration

Deployment settings for specific actors can be defined in the akka.actor.deployment section of the con-
figuration. In the deployment section it is possible to define things like dispatcher, mailbox, router settings, and
remote deployment. Configuration of these features is described in the chapters detailing the corresponding topics.
An example may look like this:
akka.actor.deployment {

# '/user/actorA/actorB' is a remote deployed actor
/actorA/actorB {
remote = "akka.tcp://sampleActorSystem@127.0.0.1:2553"
}

# all direct children of '/user/actorC' have a dedicated dispatcher
"/actorC/*" {
dispatcher = my-dispatcher
}

# all descendants of '/user/actorC' (direct children, and their children recursively)
# have a dedicated dispatcher
"/actorC/**" {
dispatcher = my-dispatcher
}

# '/user/actorD/actorE' has a special priority mailbox
/actorD/actorE {
mailbox = prio-mailbox
}

# '/user/actorF/actorG/actorH' is a random pool
/actorF/actorG/actorH {
router = random-pool
nr-of-instances = 5
}
}

my-dispatcher {
fork-join-executor.parallelism-min = 10
fork-join-executor.parallelism-max = 10
}
prio-mailbox {
mailbox-type = "a.b.MyPrioMailbox"
}

Note: The deployment section for a specific actor is identified by the path of the actor relative to /user.

You can use asterisks as wildcard matches for the actor path sections, so you could specify: /*/sampleActor
and that would match all sampleActor on that level in the hierarchy. In addition, please note:
• you can also use wildcards in the last position to match all actors at a certain level: /someParent/*
• you can use double-wildcards in the last position to match all child actors and their children recursively:
/someParent/**
• non-wildcard matches always have higher priority than wildcards, and single-wildcard matches
have higher priority than double-wildcards, so: /foo/bar is considered more specific than /foo/*,
which is considered more specific than /foo/**. Only the highest priority match is used.
• wildcards cannot be used to partially match sections, like this: /foo*/bar, /f*o/bar etc.

Note: Double-wildcards can only be placed in the last position.
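To illustrate the precedence rules (the actor names and dispatcher ids here are made up): with the following deployment section, /user/foo/bar uses dispatcher-a (exact match), its sibling /user/foo/baz uses dispatcher-b (single wildcard), and a deeper /user/foo/baz/quux uses dispatcher-c (double wildcard, lowest priority):

```
akka.actor.deployment {
  /foo/bar {
    dispatcher = dispatcher-a  # exact path, beats both wildcards
  }
  "/foo/*" {
    dispatcher = dispatcher-b  # direct children of /user/foo
  }
  "/foo/**" {
    dispatcher = dispatcher-c  # remaining descendants
  }
}
```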

3.9.11 Listing of the Reference Configuration

Each Akka module has a reference configuration file with the default values.

akka-actor

####################################
# Akka Actor Reference Config File #
####################################


# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.

# Akka version, checked against the runtime version of Akka. Loaded from generated conf file.
include "version"

akka {
# Home directory of Akka, modules in the deploy directory will be loaded
home = ""

# Loggers to register at boot time (akka.event.Logging$DefaultLogger logs
# to STDOUT)
loggers = ["akka.event.Logging$DefaultLogger"]

# Filter of log events that is used by the LoggingAdapter before
# publishing log events to the eventStream. It can perform
# fine grained filtering based on the log source. The default
# implementation filters on the `loglevel`.
# FQCN of the LoggingFilter. The Class of the FQCN must implement
# akka.event.LoggingFilter and have a public constructor with
# (akka.actor.ActorSystem.Settings, akka.event.EventStream) parameters.
logging-filter = "akka.event.DefaultLoggingFilter"

# Specifies the default loggers dispatcher
loggers-dispatcher = "akka.actor.default-dispatcher"

# Loggers are created and registered synchronously during ActorSystem
# start-up, and since they are actors, this timeout is used to bound the
# waiting time
logger-startup-timeout = 5s

# Log level used by the configured loggers (see "loggers") as soon
# as they have been started; before that, see "stdout-loglevel"
# Options: OFF, ERROR, WARNING, INFO, DEBUG
loglevel = "INFO"

# Log level for the very basic logger activated during ActorSystem startup.
# This logger prints the log messages to stdout (System.out).
# Options: OFF, ERROR, WARNING, INFO, DEBUG
stdout-loglevel = "WARNING"

# Log the complete configuration at INFO level when the actor system is started.
# This is useful when you are uncertain of what configuration is used.
log-config-on-start = off

# Log at info level when messages are sent to dead letters.
# Possible values:
# on: all dead letters are logged
# off: no logging of dead letters
# n: positive integer, number of dead letters that will be logged
log-dead-letters = 10

# Possibility to turn off logging of dead letters while the actor system
# is shutting down. Logging is only done when enabled by ’log-dead-letters’
# setting.
log-dead-letters-during-shutdown = on

# List FQCN of extensions which shall be loaded at actor system startup.
# Library extensions are regular extensions that are loaded at startup and are
# available for third party library authors to enable auto-loading of extensions when
# present on the classpath. This is done by appending entries:
# ’library-extensions += "Extension"’ in the library `reference.conf`.


#
# Should not be set by end user applications in ’application.conf’, use the extensions property
#
library-extensions = ${?akka.library-extensions} []

# List FQCN of extensions which shall be loaded at actor system startup.
# Should be on the format: ’extensions = ["foo", "bar"]’ etc.
# See the Akka Documentation for more info about Extensions
extensions = []

# Toggles whether threads created by this ActorSystem should be daemons or not
daemonic = off

# JVM shutdown, System.exit(-1), in case of a fatal error,
# such as OutOfMemoryError
jvm-exit-on-fatal-error = on

actor {

# Either one of "local", "remote" or "cluster" or the
# FQCN of the ActorRefProvider to be used; the below is the built-in default,
# note that "remote" and "cluster" requires the akka-remote and akka-cluster
# artifacts to be on the classpath.
provider = "local"

# The guardian "/user" will use this class to obtain its supervisorStrategy.
# It needs to be a subclass of akka.actor.SupervisorStrategyConfigurator.
# In addition to the default there is akka.actor.StoppingSupervisorStrategy.
guardian-supervisor-strategy = "akka.actor.DefaultSupervisorStrategy"

# Timeout for ActorSystem.actorOf
creation-timeout = 20s

# Serializes and deserializes (non-primitive) messages to ensure immutability,
# this is only intended for testing.
serialize-messages = off

# Serializes and deserializes creators (in Props) to ensure that they can be
# sent over the network, this is only intended for testing. Purely local deployments
# as marked with Props.deploy.scope == LocalScope are exempt from verification.
serialize-creators = off

# Timeout for send operations to top-level actors which are in the process
# of being started. This is only relevant if using a bounded mailbox or the
# CallingThreadDispatcher for a top-level actor.
unstarted-push-timeout = 10s

typed {
# Default timeout for typed actor methods with non-void return type
timeout = 5s
}

# Mapping between ’deployment.router’ short names to fully qualified class names
router.type-mapping {
from-code = "akka.routing.NoRouter"
round-robin-pool = "akka.routing.RoundRobinPool"
round-robin-group = "akka.routing.RoundRobinGroup"
random-pool = "akka.routing.RandomPool"
random-group = "akka.routing.RandomGroup"
balancing-pool = "akka.routing.BalancingPool"
smallest-mailbox-pool = "akka.routing.SmallestMailboxPool"
broadcast-pool = "akka.routing.BroadcastPool"
broadcast-group = "akka.routing.BroadcastGroup"
scatter-gather-pool = "akka.routing.ScatterGatherFirstCompletedPool"
scatter-gather-group = "akka.routing.ScatterGatherFirstCompletedGroup"
tail-chopping-pool = "akka.routing.TailChoppingPool"
tail-chopping-group = "akka.routing.TailChoppingGroup"
consistent-hashing-pool = "akka.routing.ConsistentHashingPool"
consistent-hashing-group = "akka.routing.ConsistentHashingGroup"
}

deployment {

# deployment id pattern - on the format: /parent/child etc.
default {

# The id of the dispatcher to use for this actor.
# If undefined or empty the dispatcher specified in code
# (Props.withDispatcher) is used, or default-dispatcher if not
# specified at all.
dispatcher = ""

# The id of the mailbox to use for this actor.
# If undefined or empty the default mailbox of the configured dispatcher
# is used or if there is no mailbox configuration the mailbox specified
# in code (Props.withMailbox) is used.
# If there is a mailbox defined in the configured dispatcher then that
# overrides this setting.
mailbox = ""

# routing (load-balance) scheme to use
# - available: "from-code", "round-robin", "random", "smallest-mailbox",
# "scatter-gather", "broadcast"
# - or: Fully qualified class name of the router class.
# The class must extend akka.routing.RouterConfig and
# have a public constructor with com.typesafe.config.Config
# and optional akka.actor.DynamicAccess parameter.
# - default is "from-code";
# Whether or not an actor is transformed to a Router is decided in code
# only (Props.withRouter). The type of router can be overridden in the
# configuration; specifying "from-code" means that the values specified
# in the code shall be used.
# In case of routing, the actors to be routed to can be specified
# in several ways:
# - nr-of-instances: will create that many children
# - routees.paths: will route messages to these paths using ActorSelection,
# i.e. will not create children
# - resizer: dynamically resizable number of routees as specified in
# resizer below
router = "from-code"

# number of children to create in case of a router;
# this setting is ignored if routees.paths is given
nr-of-instances = 1

# within is the timeout used for routers containing future calls
within = 5 seconds

# number of virtual nodes per node for consistent-hashing router
virtual-nodes-factor = 10

tail-chopping-router {
# interval is duration between sending message to next routee
interval = 10 milliseconds
}


routees {
# Alternatively to giving nr-of-instances you can specify the full
# paths of those actors which should be routed to. This setting takes
# precedence over nr-of-instances
paths = []
}

# To use a dedicated dispatcher for the routees of the pool you can
# define the dispatcher configuration inline with the property name
# ’pool-dispatcher’ in the deployment section of the router.
# For example:
# pool-dispatcher {
# fork-join-executor.parallelism-min = 5
# fork-join-executor.parallelism-max = 5
# }

# Routers with dynamically resizable number of routees; this feature is
# enabled by including (parts of) this section in the deployment
resizer {

enabled = off

# The fewest number of routees the router should ever have.
lower-bound = 1

# The most number of routees the router should ever have.
# Must be greater than or equal to lower-bound.
upper-bound = 10

# Threshold used to evaluate if a routee is considered to be busy
# (under pressure). Implementation depends on this value (default is 1).
# 0: number of routees currently processing a message.
# 1: number of routees currently processing a message has
# some messages in mailbox.
# > 1: number of routees with at least the configured pressure-threshold
# messages in their mailbox. Note that estimating mailbox size of
# default UnboundedMailbox is O(N) operation.
pressure-threshold = 1

# Percentage to increase capacity whenever all routees are busy.
# For example, 0.2 would increase 20% (rounded up), i.e. if current
# capacity is 6 it will request an increase of 2 more routees.
rampup-rate = 0.2

# Minimum fraction of busy routees before backing off.
# For example, if this is 0.3, then we’ll remove some routees only when
# less than 30% of routees are busy, i.e. if current capacity is 10 and
# 3 are busy then the capacity is unchanged, but if 2 or less are busy
# the capacity is decreased.
# Use 0.0 or negative to avoid removal of routees.
backoff-threshold = 0.3

# Fraction of routees to be removed when the resizer reaches the
# backoffThreshold.
# For example, 0.1 would decrease 10% (rounded up), i.e. if current
# capacity is 9 it will request a decrease of 1 routee.
backoff-rate = 0.1

# Number of messages between resize operation.
# Use 1 to resize before each message.
messages-per-resize = 10
}


# Routers with dynamically resizable number of routees based on
# performance metrics.
# This feature is enabled by including (parts of) this section in
# the deployment, cannot be enabled together with default resizer.
optimal-size-exploring-resizer {

enabled = off

# The fewest number of routees the router should ever have.
lower-bound = 1

# The most number of routees the router should ever have.
# Must be greater than or equal to lower-bound.
upper-bound = 10

# probability of doing a ramping down when all routees are busy
# during exploration.
chance-of-ramping-down-when-full = 0.2

# Interval between each resize attempt
action-interval = 5s

# If the routees have not been fully utilized (i.e. all routees busy)
# for such length, the resizer will downsize the pool.
downsize-after-underutilized-for = 72h

# During exploration, the ratio between the largest step size and
# current pool size. E.g. if the current pool size is 50, and the
# explore-step-size is 0.1, the maximum pool size change during
# exploration will be +- 5
explore-step-size = 0.1

# Probability of doing an exploration v.s. optimization.
chance-of-exploration = 0.4

# When downsizing after a long streak of underutilization, the resizer
# will downsize the pool to the highest utilization multiplied by
# a downsize ratio. This downsize ratio determines the new pool size
# in comparison to the highest utilization.
# E.g. if the highest utilization is 10, and the downsize ratio
# is 0.8, the pool will be downsized to 8
downsize-ratio = 0.8

# When optimizing, the resizer only considers the sizes adjacent to the
# current size. This number indicates how many adjacent sizes to consider.
optimization-range = 16

# The weight of the latest metric over old metrics when collecting
# performance metrics.
# E.g. if the last processing speed is 10 millis per message at pool
# size 5, and if the new processing speed collected is 6 millis per
# message at pool size 5. Given a weight of 0.3, the metrics
# representing pool size 5 will be 6 * 0.3 + 10 * 0.7, i.e. 8.8 millis
# Obviously, this number should be between 0 and 1.
weight-of-latest-metric = 0.5
}
}

/IO-DNS/inet-address {
mailbox = "unbounded"
router = "consistent-hashing-pool"
nr-of-instances = 4
}


default-dispatcher {
# Must be one of the following
# Dispatcher, PinnedDispatcher, or a FQCN to a class inheriting
# MessageDispatcherConfigurator with a public constructor with
# both com.typesafe.config.Config parameter and
# akka.dispatch.DispatcherPrerequisites parameters.
# PinnedDispatcher must be used together with executor=thread-pool-executor.
type = "Dispatcher"

# Which kind of ExecutorService to use for this dispatcher
# Valid options:
# - "default-executor" requires a "default-executor" section
# - "fork-join-executor" requires a "fork-join-executor" section
# - "thread-pool-executor" requires a "thread-pool-executor" section
# - A FQCN of a class extending ExecutorServiceConfigurator
executor = "default-executor"

# This will be used if you have set "executor = "default-executor"".
# If an ActorSystem is created with a given ExecutionContext, this
# ExecutionContext will be used as the default executor for all
# dispatchers in the ActorSystem configured with
# executor = "default-executor". Note that "default-executor"
# is the default value for executor, and therefore used if not
# specified otherwise. If no ExecutionContext is given,
# the executor configured in "fallback" will be used.
default-executor {
fallback = "fork-join-executor"
}

# This will be used if you have set "executor = "fork-join-executor""
# Underlying thread pool implementation is akka.dispatch.forkjoin.ForkJoinPool
fork-join-executor {
# Min number of threads to cap factor-based parallelism number to
parallelism-min = 8

# The parallelism factor is used to determine thread pool size using the
# following formula: ceil(available processors * factor). Resulting size
# is then bounded by the parallelism-min and parallelism-max values.
parallelism-factor = 3.0

# Max number of threads to cap factor-based parallelism number to
parallelism-max = 64

# Setting to "FIFO" to use queue like peeking mode which "poll" or "LIFO" to use stack
# like peeking mode which "pop".
task-peeking-mode = "FIFO"
}

# This will be used if you have set "executor = "thread-pool-executor""
# Underlying thread pool implementation is java.util.concurrent.ThreadPoolExecutor
thread-pool-executor {
# Keep alive time for threads
keep-alive-time = 60s

# Define a fixed thread pool size with this property. The corePoolSize
# and the maximumPoolSize of the ThreadPoolExecutor will be set to this
# value, if it is defined. Then the other pool-size properties will not
# be used.
#
# Valid values are: `off` or a positive integer.
fixed-pool-size = off


# Min number of threads to cap factor-based corePoolSize number to
core-pool-size-min = 8

# The core-pool-size-factor is used to determine corePoolSize of the
# ThreadPoolExecutor using the following formula:
# ceil(available processors * factor).
# Resulting size is then bounded by the core-pool-size-min and
# core-pool-size-max values.
core-pool-size-factor = 3.0

# Max number of threads to cap factor-based corePoolSize number to
core-pool-size-max = 64

# Minimum number of threads to cap factor-based maximumPoolSize number to
max-pool-size-min = 8

# The max-pool-size-factor is used to determine maximumPoolSize of the
# ThreadPoolExecutor using the following formula:
# ceil(available processors * factor)
# The maximumPoolSize will not be less than corePoolSize.
# It is only used if using a bounded task queue.
max-pool-size-factor = 3.0

# Max number of threads to cap factor-based maximumPoolSize number to
max-pool-size-max = 64

# Specifies the bounded capacity of the task queue (< 1 == unbounded)
task-queue-size = -1

# Specifies which type of task queue will be used, can be "array" or
# "linked" (default)
task-queue-type = "linked"

# Allow core threads to time out
allow-core-timeout = on
}

# How long time the dispatcher will wait for new actors until it shuts down
shutdown-timeout = 1s

# Throughput defines the number of messages that are processed in a batch
# before the thread is returned to the pool. Set to 1 for as fair as possible.
throughput = 5

# Throughput deadline for Dispatcher, set to 0 or negative for no deadline
throughput-deadline-time = 0ms

# For BalancingDispatcher: If the balancing dispatcher should attempt to
# schedule idle actors using the same dispatcher when a message comes in,
# and the dispatchers ExecutorService is not fully busy already.
attempt-teamwork = on

# If this dispatcher requires a specific type of mailbox, specify the
# fully-qualified class name here; the actually created mailbox will
# be a subtype of this type. The empty string signifies no requirement.
mailbox-requirement = ""
}

default-mailbox {
# FQCN of the MailboxType. The Class of the FQCN must have a public
# constructor with
# (akka.actor.ActorSystem.Settings, com.typesafe.config.Config) parameters.
mailbox-type = "akka.dispatch.UnboundedMailbox"

# If the mailbox is bounded then it uses this setting to determine its
# capacity. The provided value must be positive.
# NOTICE:
# Up to version 2.1 the mailbox type was determined based on this setting;
# this is no longer the case, the type must explicitly be a bounded mailbox.
mailbox-capacity = 1000

# If the mailbox is bounded then this is the timeout for enqueueing
# in case the mailbox is full. Negative values signify infinite
# timeout, which should be avoided as it bears the risk of dead-lock.
mailbox-push-timeout-time = 10s

# For Actor with Stash: The default capacity of the stash.
# If negative (or zero) then an unbounded stash is used (default)
# If positive then a bounded stash is used and the capacity is set using
# the property
stash-capacity = -1
}

mailbox {
# Mapping between message queue semantics and mailbox configurations.
# Used by akka.dispatch.RequiresMessageQueue[T] to enforce different
# mailbox types on actors.
# If your Actor implements RequiresMessageQueue[T], then when you create
# an instance of that actor its mailbox type will be decided by looking
# up a mailbox configuration via T in this mapping
requirements {
"akka.dispatch.UnboundedMessageQueueSemantics" =
akka.actor.mailbox.unbounded-queue-based
"akka.dispatch.BoundedMessageQueueSemantics" =
akka.actor.mailbox.bounded-queue-based
"akka.dispatch.DequeBasedMessageQueueSemantics" =
akka.actor.mailbox.unbounded-deque-based
"akka.dispatch.UnboundedDequeBasedMessageQueueSemantics" =
akka.actor.mailbox.unbounded-deque-based
"akka.dispatch.BoundedDequeBasedMessageQueueSemantics" =
akka.actor.mailbox.bounded-deque-based
"akka.dispatch.MultipleConsumerSemantics" =
akka.actor.mailbox.unbounded-queue-based
"akka.dispatch.ControlAwareMessageQueueSemantics" =
akka.actor.mailbox.unbounded-control-aware-queue-based
"akka.dispatch.UnboundedControlAwareMessageQueueSemantics" =
akka.actor.mailbox.unbounded-control-aware-queue-based
"akka.dispatch.BoundedControlAwareMessageQueueSemantics" =
akka.actor.mailbox.bounded-control-aware-queue-based
"akka.event.LoggerMessageQueueSemantics" =
akka.actor.mailbox.logger-queue
}

unbounded-queue-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
# constructor with (akka.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
mailbox-type = "akka.dispatch.UnboundedMailbox"
}

bounded-queue-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
# constructor with (akka.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
mailbox-type = "akka.dispatch.BoundedMailbox"
}

unbounded-deque-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
# constructor with (akka.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
mailbox-type = "akka.dispatch.UnboundedDequeBasedMailbox"
}

bounded-deque-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
# constructor with (akka.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
mailbox-type = "akka.dispatch.BoundedDequeBasedMailbox"
}

unbounded-control-aware-queue-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
# constructor with (akka.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
mailbox-type = "akka.dispatch.UnboundedControlAwareMailbox"
}

bounded-control-aware-queue-based {
# FQCN of the MailboxType, The Class of the FQCN must have a public
# constructor with (akka.actor.ActorSystem.Settings,
# com.typesafe.config.Config) parameters.
mailbox-type = "akka.dispatch.BoundedControlAwareMailbox"
}

# The LoggerMailbox will drain all messages in the mailbox
# when the system is shutdown and deliver them to the StandardOutLogger.
# Do not change this unless you know what you are doing.
logger-queue {
mailbox-type = "akka.event.LoggerMailboxType"
}
}

debug {
# enable function of LoggingReceive, which is to log any received message
# at DEBUG level, see the “Testing Actor Systems” section of the Akka
# Documentation at http://akka.io/docs
receive = off

# enable DEBUG logging of all AutoReceiveMessages (Kill, PoisonPill etc.)
autoreceive = off

# enable DEBUG logging of actor lifecycle changes
lifecycle = off

# enable DEBUG logging of all LoggingFSMs for events, transitions and timers
fsm = off

# enable DEBUG logging of subscription changes on the eventStream
event-stream = off

# enable DEBUG logging of unhandled messages
unhandled = off

# enable WARN logging of misconfigured routers
router-misconfiguration = off
}


# SECURITY BEST-PRACTICE is to disable java serialization for its multiple
# known attack surfaces.
#
# This setting is a short-cut to
# - using DisabledJavaSerializer instead of JavaSerializer
# - enable-additional-serialization-bindings = on
#
# Completely disable the use of `akka.serialization.JavaSerialization` by the
# Akka Serialization extension, instead DisabledJavaSerializer will
# be inserted which will fail explicitly if attempts to use java serialization are made.
#
# The log messages emitted by such serializer SHOULD be treated as potential
# attacks which the serializer prevented, as they MAY indicate an external operator
# attempting to send malicious messages intending to use java serialization as attack vector.
# The attempts are logged with the SECURITY marker.
#
# Please note that this option does not stop you from manually invoking java serialization
#
# The default value for this might be changed to off in future versions of Akka.
allow-java-serialization = on

# Entries for pluggable serializers and their bindings.
serializers {
java = "akka.serialization.JavaSerializer"
bytes = "akka.serialization.ByteArraySerializer"
}

# Class to Serializer binding. You only need to specify the name of an
# interface or abstract base class of the messages. In case of ambiguity it
# is using the most specific configured class, or giving a warning and
# choosing the “first” one.
#
# To disable one of the default serializers, assign its class to "none", like
# "java.io.Serializable" = none
serialization-bindings {
"[B" = bytes
"java.io.Serializable" = java
}

# Set this to on to enable serialization-bindings defined in
# additional-serialization-bindings. Those are by default not included
# for backwards compatibility reasons. They are enabled by default if
# akka.remote.artery.enabled=on or if akka.actor.allow-java-serialization=off.
enable-additional-serialization-bindings = off

# Additional serialization-bindings that are replacing Java serialization are
# defined in this section and not included by default for backwards compatibility
# reasons. They can be enabled with enable-additional-serialization-bindings=on.
# They are enabled by default if akka.remote.artery.enabled=on or if
# akka.actor.allow-java-serialization=off.
additional-serialization-bindings {
}

# Log warnings when the default Java serialization is used to serialize messages.
# The default serializer uses Java serialization which is not very performant and should not
# be used in production environments unless you don’t care about performance. In that case
# you can turn this off.
warn-about-java-serializer-usage = on

# To be used with the above warn-about-java-serializer-usage
# When warn-about-java-serializer-usage = on, and this warn-on-no-serialization-verification = off,
# warnings are suppressed for classes extending NoSerializationVerificationNeeded
# to reduce noise.
warn-on-no-serialization-verification = on

# Configuration namespace of serialization identifiers.
# Each serializer implementation must have an entry in the following format:
# `akka.actor.serialization-identifiers."FQCN" = ID`
# where `FQCN` is fully qualified class name of the serializer implementation
# and `ID` is globally unique serializer identifier number.
# Identifier values from 0 to 40 are reserved for Akka internal usage.
serialization-identifiers {
"akka.serialization.JavaSerializer" = 1
"akka.serialization.ByteArraySerializer" = 4
}

# Configuration items which are used by the akka.actor.ActorDSL._ methods
dsl {
# Maximum queue size of the actor created by newInbox(); this protects
# against faulty programs which use select() and consistently miss messages
inbox-size = 1000

# Default timeout to assume for operations like Inbox.receive et al
default-timeout = 5s
}
}

# Used to set the behavior of the scheduler.
# Changing the default values may change the system behavior drastically so make
# sure you know what you’re doing! See the Scheduler section of the Akka
# Documentation for more details.
scheduler {
# The LightArrayRevolverScheduler is used as the default scheduler in the
# system. It does not execute the scheduled tasks on exact time, but on every
# tick, it will run everything that is (over)due. You can increase or decrease
# the accuracy of the execution timing by specifying smaller or larger tick
# duration. If you are scheduling a lot of tasks you should consider increasing
# the ticks per wheel.
# Note that it might take up to 1 tick to stop the Timer, so setting the
# tick-duration to a high value will make shutting down the actor system
# take longer.
tick-duration = 10ms

# The timer uses a circular wheel of buckets to store the timer tasks.
# This should be set such that the majority of scheduled timeouts (for high
# scheduling frequency) will be shorter than one rotation of the wheel
# (ticks-per-wheel * ticks-duration)
# THIS MUST BE A POWER OF TWO!
ticks-per-wheel = 512

# This setting selects the timer implementation which shall be loaded at
# system start-up.
# The class given here must implement the akka.actor.Scheduler interface
# and offer a public constructor which takes three arguments:
# 1) com.typesafe.config.Config
# 2) akka.event.LoggingAdapter
# 3) java.util.concurrent.ThreadFactory
implementation = akka.actor.LightArrayRevolverScheduler

# When shutting down the scheduler, there will typically be a thread which
# needs to be stopped, and this timeout determines how long to wait for
# that to happen. In case of timeout the shutdown of the actor system will
# proceed without running possibly still enqueued tasks.
shutdown-timeout = 5s
}


io {

# By default the select loops run on dedicated threads, hence using a
# PinnedDispatcher
pinned-dispatcher {
type = "PinnedDispatcher"
executor = "thread-pool-executor"
thread-pool-executor.allow-core-timeout = off
}

tcp {

# The number of selectors to stripe the served channels over; each of
# these will use one select loop on the selector-dispatcher.
nr-of-selectors = 1

# Maximum number of open channels supported by this TCP module; there is
# no intrinsic general limit, this setting is meant to enable DoS
# protection by limiting the number of concurrently connected clients.
# Also note that this is a "soft" limit; in certain cases the implementation
# will accept a few connections more or a few less than the number configured
# here. Must be an integer > 0 or "unlimited".
max-channels = 256000

# When trying to assign a new connection to a selector and the chosen
# selector is at full capacity, retry selector choosing and assignment
# this many times before giving up
selector-association-retries = 10

# The maximum number of connections that are accepted in one go,
# higher numbers decrease latency, lower numbers increase fairness on
# the worker-dispatcher
batch-accept-limit = 10

# The number of bytes per direct buffer in the pool used to read or write
# network data from the kernel.
direct-buffer-size = 128 KiB

# The maximal number of direct buffers kept in the direct buffer pool for
# reuse.
direct-buffer-pool-limit = 1000

# The duration a connection actor waits for a `Register` message from
# its commander before aborting the connection.
register-timeout = 5s

# The maximum number of bytes delivered by a `Received` message. Before
# more data is read from the network the connection actor will try to
# do other work.
# The purpose of this setting is to impose a smaller limit than the
# configured receive buffer size. When using value ’unlimited’ it will
# try to read all from the receive buffer.
max-received-message-size = unlimited

# Enable fine grained logging of what goes on inside the implementation.
# Be aware that this may log more than once per message sent to the actors
# of the tcp implementation.
trace-logging = off

# Fully qualified config path which holds the dispatcher configuration
# to be used for running the select() calls in the selectors
selector-dispatcher = "akka.io.pinned-dispatcher"


# Fully qualified config path which holds the dispatcher configuration
# for the read/write worker actors
worker-dispatcher = "akka.actor.default-dispatcher"

# Fully qualified config path which holds the dispatcher configuration
# for the selector management actors
management-dispatcher = "akka.actor.default-dispatcher"

# Fully qualified config path which holds the dispatcher configuration
# on which file IO tasks are scheduled
file-io-dispatcher = "akka.actor.default-dispatcher"

# The maximum number of bytes (or "unlimited") to transfer in one batch
# when using `WriteFile` command which uses `FileChannel.transferTo` to
# pipe files to a TCP socket. On some OS like Linux `FileChannel.transferTo`
# may block for a long time when network IO is faster than file IO.
# Decreasing the value may improve fairness while increasing may improve
# throughput.
file-io-transferTo-limit = 512 KiB

# The number of times to retry the `finishConnect` call after being notified about
# OP_CONNECT. Retries are needed if the OP_CONNECT notification doesn't imply that
# `finishConnect` will succeed, which is the case on Android.
finish-connect-retries = 5

# On Windows connection aborts are not reliably detected unless an OP_READ is
# registered on the selector _after_ the connection has been reset. This
# workaround enables an OP_CONNECT which forces the abort to be visible on Windows.
# Enabling this setting on other platforms than Windows will cause various failures
# and undefined behavior.
# Possible values of this key are on, off and auto where auto will enable the
# workaround if Windows is detected automatically.
windows-connection-abort-workaround-enabled = off
}

udp {

# The number of selectors to stripe the served channels over; each of
# these will use one select loop on the selector-dispatcher.
nr-of-selectors = 1

# Maximum number of open channels supported by this UDP module. Generally
# UDP does not require a large number of channels, therefore it is
# recommended to keep this setting low.
max-channels = 4096

# The select loop can be used in two modes:
# - setting "infinite" will select without a timeout, hogging a thread
# - setting a positive timeout will do a bounded select call,
# enabling sharing of a single thread between multiple selectors
# (in this case you will have to use a different configuration for the
# selector-dispatcher, e.g. using "type=Dispatcher" with size 1)
# - setting it to zero means polling, i.e. calling selectNow()
select-timeout = infinite

# When trying to assign a new connection to a selector and the chosen
# selector is at full capacity, retry selector choosing and assignment
# this many times before giving up
selector-association-retries = 10

# The maximum number of datagrams that are read in one go,
# higher numbers decrease latency, lower numbers increase fairness on
# the worker-dispatcher
receive-throughput = 3

# The number of bytes per direct buffer in the pool used to read or write
# network data from the kernel.
direct-buffer-size = 128 KiB

# The maximal number of direct buffers kept in the direct buffer pool for
# reuse.
direct-buffer-pool-limit = 1000

# Enable fine grained logging of what goes on inside the implementation.
# Be aware that this may log more than once per message sent to the actors
# of the tcp implementation.
trace-logging = off

# Fully qualified config path which holds the dispatcher configuration
# to be used for running the select() calls in the selectors
selector-dispatcher = "[Link]-dispatcher"

# Fully qualified config path which holds the dispatcher configuration
# for the read/write worker actors
worker-dispatcher = "[Link]-dispatcher"

# Fully qualified config path which holds the dispatcher configuration
# for the selector management actors
management-dispatcher = "[Link]-dispatcher"
}

udp-connected {

# The number of selectors to stripe the served channels over; each of
# these will use one select loop on the selector-dispatcher.
nr-of-selectors = 1

# Maximum number of open channels supported by this UDP module. Generally
# UDP does not require a large number of channels, therefore it is
# recommended to keep this setting low.
max-channels = 4096

# The select loop can be used in two modes:
# - setting "infinite" will select without a timeout, hogging a thread
# - setting a positive timeout will do a bounded select call,
# enabling sharing of a single thread between multiple selectors
# (in this case you will have to use a different configuration for the
# selector-dispatcher, e.g. using "type=Dispatcher" with size 1)
# - setting it to zero means polling, i.e. calling selectNow()
select-timeout = infinite

# When trying to assign a new connection to a selector and the chosen
# selector is at full capacity, retry selector choosing and assignment
# this many times before giving up
selector-association-retries = 10

# The maximum number of datagrams that are read in one go,
# higher numbers decrease latency, lower numbers increase fairness on
# the worker-dispatcher
receive-throughput = 3

# The number of bytes per direct buffer in the pool used to read or write
# network data from the kernel.
direct-buffer-size = 128 KiB

# The maximal number of direct buffers kept in the direct buffer pool for
# reuse.
direct-buffer-pool-limit = 1000

# Enable fine grained logging of what goes on inside the implementation.
# Be aware that this may log more than once per message sent to the actors
# of the tcp implementation.
trace-logging = off

# Fully qualified config path which holds the dispatcher configuration
# to be used for running the select() calls in the selectors
selector-dispatcher = "[Link]-dispatcher"

# Fully qualified config path which holds the dispatcher configuration
# for the read/write worker actors
worker-dispatcher = "[Link]-dispatcher"

# Fully qualified config path which holds the dispatcher configuration
# for the selector management actors
management-dispatcher = "[Link]-dispatcher"
}

dns {
# Fully qualified config path which holds the dispatcher configuration
# for the manager and resolver router actors.
# For actual router configuration see [Link]./IO-DNS/*
dispatcher = "[Link]-dispatcher"

# Name of the subconfig at path [Link], see inet-address below
resolver = "inet-address"

inet-address {
# Must implement [Link]
provider-object = "[Link]"

# These TTLs are set to default java 6 values
positive-ttl = 30s
negative-ttl = 10s

# How often to sweep out expired cache entries.
# Note that this interval has nothing to do with TTLs
cache-cleanup-interval = 120s
}
}
}
}
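As a sketch of how the defaults above are typically overridden, an application.conf fragment might look like the following; the settings shown are taken from the reference listing above, but the concrete values are illustrative assumptions, not recommendations:

```
akka.io {
  tcp {
    # larger buffers for high-throughput connections
    direct-buffer-size = 256 KiB
    # cap `Received` messages below the configured receive buffer size
    max-received-message-size = 64 KiB
  }
  dns.inet-address {
    # cache successful lookups longer, retry failed ones sooner
    positive-ttl = 60s
    negative-ttl = 5s
  }
}
```

Settings not mentioned keep their defaults, since application.conf entries are merged on top of the reference configuration.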

akka-agent

####################################
# Akka Agent Reference Config File #
####################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your [Link].

akka {
agent {

# The dispatcher used for agent-send-off actor
send-off-dispatcher {
executor = thread-pool-executor
type = PinnedDispatcher
}

# The dispatcher used for agent-alter-off actor
alter-off-dispatcher {
executor = thread-pool-executor
type = PinnedDispatcher
}
}
}
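For example, to run agent send-offs on a shared dispatcher instead of one pinned thread per agent, an application.conf override could be sketched as follows (the executor choice is an assumption for illustration):

```
akka.agent {
  send-off-dispatcher {
    # share a fork-join pool instead of a pinned thread-pool-executor
    type = Dispatcher
    executor = "fork-join-executor"
  }
}
```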

akka-camel

####################################
# Akka Camel Reference Config File #
####################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your [Link].

akka {
camel {
# FQCN of the ContextProvider to be used to create or locate a CamelContext
# it must implement [Link] and have a no-arg constructor
# the built-in default create a fresh DefaultCamelContext
context-provider = [Link]

# Whether JMX should be enabled or disabled for the Camel Context
jmx = off
# enable/disable streaming cache on the Camel Context
streamingCache = on
consumer {
# Configured setting which determines whether one-way communications
# between an endpoint and this consumer actor
# should be auto-acknowledged or application-acknowledged.
# This flag has only effect when exchange is in-only.
auto-ack = on

# When endpoint is out-capable (can produce responses) reply-timeout is the
# maximum time the endpoint can take to send the response before the message
# exchange fails. This setting is used for out-capable, in-only,
# manually acknowledged communication.
reply-timeout = 1m

# The duration of time to await activation of an endpoint.
activation-timeout = 10s
}

producer {
# The id of the dispatcher to use for producer child actors, i.e. the actor that
# interacts with the Camel endpoint. Some endpoints may be blocking and then it
# can be good to define a dedicated dispatcher.
# If not defined the producer child actor is using the same dispatcher as the
# parent producer actor.
use-dispatcher = ""
}

# Scheme to FQCN mappings for CamelMessage body conversions
conversions {


"file" = "[Link]"
}
}
}
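A minimal application.conf sketch overriding some of these Camel defaults could read as follows (the values are illustrative assumptions):

```
akka.camel {
  # expose the CamelContext via JMX for monitoring
  jmx = on
  consumer {
    # require the application to acknowledge in-only exchanges
    auto-ack = off
    # allow slower endpoints more time to respond
    reply-timeout = 30s
  }
}
```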

akka-cluster

######################################
# Akka Cluster Reference Config File #
######################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your [Link].

akka {

cluster {
# Initial contact points of the cluster.
# The nodes to join automatically at startup.
# Comma separated full URIs defined by a string of the form
# "[Link]://system@hostname:port"
# Leave as empty if the node is supposed to be joined manually.
seed-nodes = []

# how long to wait for one of the seed nodes to reply to initial join request
seed-node-timeout = 5s

# If a join request fails it will be retried after this period.
# Disable join retry by specifying "off".
retry-unsuccessful-join-after = 10s

# Should the 'leader' in the cluster be allowed to automatically mark
# unreachable nodes as DOWN after a configured time of unreachability?
# Using auto-down implies that two separate clusters will automatically be
# formed in case of network partition.
#
# Don't enable this in production, see 'Auto-downing (DO NOT USE)' section
# of Akka Cluster documentation.
#
# Disable with "off" or specify a duration to enable auto-down.
# If a downing-provider-class is configured this setting is ignored.
auto-down-unreachable-after = off

# Time margin after which shards or singletons that belonged to a downed/removed
# partition are created in surviving partition. The purpose of this margin is that
# in case of a network partition the persistent actors in the non-surviving partitions
# must be stopped before corresponding persistent actors are started somewhere else.
# This is useful if you implement downing strategies that handle network partitions,
# e.g. by keeping the larger side of the partition and shutting down the smaller side.
# It will not add any extra safety for auto-down-unreachable-after, since that is not
# handling network partitions.
# Disable with "off" or specify a duration to enable.
down-removal-margin = off

# Pluggable support for downing of nodes in the cluster.
# If this setting is left empty behaviour will depend on 'auto-down-unreachable' in the following ways:
# * if it is 'off' the `NoDowning` provider is used and no automatic downing will be performed
# * if it is set to a duration the `AutoDowning` provider is used with the configured downing duration
#
# If specified the value must be the fully qualified class name of a subclass of
# `[Link]` having a public one argument constructor accepting an `ActorSystem`
downing-provider-class = ""

# Artery only setting
# When a node has been gracefully removed, let this time pass (to allow for example
# cluster singleton handover to complete) and then quarantine the removed node.
quarantine-removed-node-after=30s

# By default, the leader will not move 'Joining' members to 'Up' during a network
# split. This feature allows the leader to accept 'Joining' members to be 'WeaklyUp'
# so they become part of the cluster even during a network split. The leader will
# move 'WeaklyUp' members to 'Up' status once convergence has been reached. This
# feature must be off if some members are running Akka 2.3.X.
# WeaklyUp is an EXPERIMENTAL feature.
allow-weakly-up-members = off

# The roles of this member. List of strings, e.g. roles = ["A", "B"].
# The roles are part of the membership information and can be used by
# routers or other services to distribute work to certain member types,
# e.g. front-end and back-end nodes.
roles = []

role {
# Minimum required number of members of a certain role before the leader
# changes member status of 'Joining' members to 'Up'. Typically used together
# with '[Link]' to defer some action, such as starting
# actors, until the cluster has reached a certain size.
# E.g. to require 2 nodes with role 'frontend' and 3 nodes with role 'backend':
# [Link]-nr-of-members = 2
# [Link]-nr-of-members = 3
#<role-name>.min-nr-of-members = 1
}

# Minimum required number of members before the leader changes member status
# of 'Joining' members to 'Up'. Typically used together with
# '[Link]' to defer some action, such as starting actors,
# until the cluster has reached a certain size.
min-nr-of-members = 1

# Enable/disable info level logging of cluster events
log-info = on

# Enable or disable JMX MBeans for management of the cluster
[Link] = on

# how long should the node wait before starting the periodic
# maintenance tasks?
periodic-tasks-initial-delay = 1s

# how often should the node send out gossip information?
gossip-interval = 1s

# discard incoming gossip messages if not handled within this duration
gossip-time-to-live = 2s

# how often should the leader perform maintenance tasks?
leader-actions-interval = 1s

# how often should the node move nodes, marked as unreachable by the failure
# detector, out of the membership ring?
unreachable-nodes-reaper-interval = 1s

# How often the current internal stats should be published.
# A value of 0s can be used to always publish the stats, when it happens.
# Disable with "off".
publish-stats-interval = off

# The id of the dispatcher to use for cluster actors. If not specified
# default dispatcher is used.
# If specified you need to define the settings of the actual dispatcher.
use-dispatcher = ""

# Gossip to random node with newer or older state information, if any with
# this probability. Otherwise Gossip to any random live node.
# Probability value is between 0.0 and 1.0. 0.0 means never, 1.0 means always.
gossip-different-view-probability = 0.8

# Reduced the above probability when the number of nodes in the cluster
# greater than this value.
reduce-gossip-different-view-probability = 400

# Settings for the Phi accrual failure detector ([Link]
# [Hayashibara et al]) used by the cluster subsystem to detect unreachable
# members.
# The default PhiAccrualFailureDetector will trigger if there are no heartbeats within
# the duration heartbeat-interval + acceptable-heartbeat-pause + threshold_adjustment,
# i.e. around 5.5 seconds with default settings.
failure-detector {

# FQCN of the failure detector implementation.
# It must implement [Link] and have
# a public constructor with a [Link] and
# [Link] parameter.
implementation-class = "[Link]"

# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 1 s

# Defines the failure detector threshold.
# A low threshold is prone to generate many wrong suspicions but ensures
# a quick detection in the event of a real crash. Conversely, a high
# threshold generates fewer mistakes but needs more time to detect
# actual crashes.
threshold = 8.0

# Number of the samples of inter-heartbeat arrival times to adaptively
# calculate the failure timeout for connections.
max-sample-size = 1000

# Minimum standard deviation to use for the normal distribution in
# AccrualFailureDetector. Too low standard deviation might result in
# too much sensitivity for sudden, but normal, deviations in heartbeat
# inter arrival times.
min-std-deviation = 100 ms

# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
# This margin is important to be able to survive sudden, occasional,
# pauses in heartbeat arrivals, due to for example garbage collect or
# network drop.
acceptable-heartbeat-pause = 3 s

# Number of member nodes that each member will send heartbeat messages to,
# i.e. each node will be monitored by this number of other nodes.
monitored-by-nr-of-members = 5

# After the heartbeat request has been sent the first failure detection
# will start after this period, even though no heartbeat message has
# been received.
expected-response-after = 1 s
}

metrics {
# Enable or disable metrics collector for load-balancing nodes.
enabled = on

# FQCN of the metrics collector implementation.
# It must implement [Link] and
# have public constructor with [Link] parameter.
# The default SigarMetricsCollector uses JMX and Hyperic SIGAR, if SIGAR
# is on the classpath, otherwise only JMX.
collector-class = "[Link]"

# How often metrics are sampled on a node.
# Shorter interval will collect the metrics more often.
collect-interval = 3s

# How often a node publishes metrics information.
gossip-interval = 3s

# How quickly the exponential weighting of past data is decayed compared to
# new data. Set lower to increase the bias toward newer values.
# The relevance of each data sample is halved for every passing half-life
# duration, i.e. after 4 times the half-life, a data sample's relevance is
# reduced to 6% of its original relevance. The initial relevance of a data
# sample is given by 1 - 0.5 ^ (collect-interval / half-life).
# See [Link]
moving-average-half-life = 12s
}

# If the tick-duration of the default scheduler is longer than the
# tick-duration configured here a dedicated scheduler will be used for
# periodic tasks of the cluster, otherwise the default scheduler is used.
# See [Link] settings for more details.
scheduler {
tick-duration = 33ms
ticks-per-wheel = 512
}

debug {
# log heartbeat events (very verbose, useful mostly when debugging heartbeating issues)
verbose-heartbeat-logging = off
}
}

# Default configuration for routers
[Link] {
# MetricsSelector to use
# - available: "mix", "heap", "cpu", "load"
# - or: Fully qualified class name of the MetricsSelector class.
# The class must extend [Link]
# and have a public constructor with [Link]
# parameter.
# - default is "mix"
metrics-selector = mix
}
[Link] {
# enable cluster aware router that deploys to nodes in the cluster
enabled = off

# Maximum number of routees that will be deployed on each cluster
# member node.
# Note that max-total-nr-of-instances defines total number of routees, but
# number of routees per node will not be exceeded, i.e. if you
# define max-total-nr-of-instances = 50 and max-nr-of-instances-per-node = 2
# it will deploy 2 routees per new member in the cluster, up to
# 25 members.
max-nr-of-instances-per-node = 1

# Maximum number of routees that will be deployed, in total
# on all nodes. See also description of max-nr-of-instances-per-node.
# For backwards compatibility reasons, nr-of-instances
# has the same purpose as max-total-nr-of-instances for cluster
# aware routers and nr-of-instances (if defined by user) takes
# precedence over max-total-nr-of-instances.
max-total-nr-of-instances = 10000

# Defines if routees are allowed to be located on the same node as
# the head router actor, or only on remote nodes.
# Useful for master-worker scenario where all routees are remote.
allow-local-routees = on

# Use members with specified role, or all members if undefined or empty.
use-role = ""

}

# Protobuf serializer for cluster messages
actor {
serializers {
akka-cluster = "[Link]"
}

serialization-bindings {
"[Link]" = akka-cluster
}

serialization-identifiers {
"[Link]" = 5
}

[Link]-mapping {
adaptive-pool = "[Link]"
adaptive-group = "[Link]"
}
}
}
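Putting several of the cluster settings above together, a minimal node configuration might be sketched as follows; the system name ClusterSystem, the host addresses, and the akka.tcp URI scheme are assumptions for illustration (the scheme depends on the remoting transport in use):

```
akka.cluster {
  seed-nodes = [
    "akka.tcp://ClusterSystem@10.0.0.1:2551",
    "akka.tcp://ClusterSystem@10.0.0.2:2551"]
  roles = ["backend"]
  # wait for two backend nodes before moving Joining members to Up
  role {
    backend.min-nr-of-members = 2
  }
  failure-detector {
    # tolerate longer GC pauses before marking a node unreachable
    acceptable-heartbeat-pause = 5 s
  }
}
```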

akka-multi-node-testkit

#############################################
# Akka Remote Testing Reference Config File #
#############################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your [Link].

akka {


testconductor {

# Timeout for joining a barrier: this is the maximum time any participant
# waits for everybody else to join a named barrier.
barrier-timeout = 30s

# Timeout for interrogation of TestConductor's Controller actor
query-timeout = 10s

# Threshold for packet size in time unit above which the failure injector will
# split the packet and deliver in smaller portions; do not give value smaller
# than HashedWheelTimer resolution (would not make sense)
packet-split-threshold = 100ms

# amount of time for the ClientFSM to wait for the connection to the conductor
# to be successful
connect-timeout = 20s

# Number of connect attempts to be made to the conductor controller
client-reconnects = 30

# minimum time interval which is to be inserted between reconnect attempts
reconnect-backoff = 1s

netty {
# (I&O) Used to configure the number of I/O worker threads on server sockets
server-socket-worker-pool {
# Min number of threads to cap factor-based number to
pool-size-min = 1

# The pool size factor is used to determine thread pool size
# using the following formula: ceil(available processors * factor).
# Resulting size is then bounded by the pool-size-min and
# pool-size-max values.
pool-size-factor = 1.0

# Max number of threads to cap factor-based number to
pool-size-max = 2
}

# (I&O) Used to configure the number of I/O worker threads on client sockets
client-socket-worker-pool {
# Min number of threads to cap factor-based number to
pool-size-min = 1

# The pool size factor is used to determine thread pool size
# using the following formula: ceil(available processors * factor).
# Resulting size is then bounded by the pool-size-min and
# pool-size-max values.
pool-size-factor = 1.0

# Max number of threads to cap factor-based number to
pool-size-max = 2
}
}
}
}
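On slow or heavily loaded CI machines the test conductor timeouts above may need to be raised; such an override can be sketched as follows (the values are assumptions, not recommendations):

```
akka.testconductor {
  # allow more time for all participants to reach a barrier
  barrier-timeout = 60s
  # and for the ClientFSM to connect to the conductor
  connect-timeout = 30s
}
```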


akka-persistence

###########################################################
# Akka Persistence Extension Reference Configuration File #
###########################################################

# This is the reference config file that contains all the default settings.
# Make your edits in your [Link] in order to override these settings.

# Directory of persistence journal and snapshot store plugins is available at the
# Akka Community Projects page [Link]

# Default persistence extension settings.
[Link] {

# When starting many persistent actors at the same time the journal
# and its data store is protected from being overloaded by limiting number
# of recoveries that can be in progress at the same time. When
# exceeding the limit the actors will wait until other recoveries have
# been completed.
max-concurrent-recoveries = 50

# Fully qualified class name providing a default internal stash overflow strategy.
# It needs to be a subclass of [Link].
# The default strategy throws StashOverflowException.
internal-stash-overflow-strategy = "[Link]"
journal {
# Absolute path to the journal plugin configuration entry used by
# persistent actor or view by default.
# Persistent actor or view can override `journalPluginId` method
# in order to rely on a different journal plugin.
plugin = ""
# List of journal plugins to start automatically. Use "" for the default journal plugin.
auto-start-journals = []
}
snapshot-store {
# Absolute path to the snapshot plugin configuration entry used by
# persistent actor or view by default.
# Persistent actor or view can override `snapshotPluginId` method
# in order to rely on a different snapshot plugin.
# It is not mandatory to specify a snapshot store plugin.
# If you don’t use snapshots you don’t have to configure it.
# Note that Cluster Sharding is using snapshots, so if you
# use Cluster Sharding you need to define a snapshot store plugin.
plugin = ""
# List of snapshot stores to start automatically. Use "" for the default snapshot store.
auto-start-snapshot-stores = []
}
# used as default-snapshot store if no plugin configured
# (see `[Link]-store`)
no-snapshot-store {
class = "[Link]"
}
# Default persistent view settings.
view {
# Automated incremental view update.
auto-update = on
# Interval between incremental updates.
auto-update-interval = 5s
# Maximum number of messages to replay per incremental view update.
# Set to -1 for no upper limit.
auto-update-replay-max = -1
}


# Default reliable delivery settings.
at-least-once-delivery {
# Interval between re-delivery attempts.
redeliver-interval = 5s
# Maximum number of unconfirmed messages that will be sent in one
# re-delivery burst.
redelivery-burst-limit = 10000
# After this number of delivery attempts a
# `[Link]` message will be sent to the actor.
warn-after-number-of-unconfirmed-attempts = 5
# Maximum number of unconfirmed messages that an actor with
# AtLeastOnceDelivery is allowed to hold in memory.
max-unconfirmed-messages = 100000
}
# Default persistent extension thread pools.
dispatchers {
# Dispatcher used by every plugin which does not declare explicit
# `plugin-dispatcher` field.
default-plugin-dispatcher {
type = PinnedDispatcher
executor = "thread-pool-executor"
}
# Default dispatcher for message replay.
default-replay-dispatcher {
type = Dispatcher
executor = "fork-join-executor"
fork-join-executor {
parallelism-min = 2
parallelism-max = 8
}
}
# Default dispatcher for streaming snapshot IO
default-stream-dispatcher {
type = Dispatcher
executor = "fork-join-executor"
fork-join-executor {
parallelism-min = 2
parallelism-max = 8
}
}
}

# Fallback settings for journal plugin configurations.
# These settings are used if they are not defined in plugin config section.
journal-plugin-fallback {

# Fully qualified class name providing journal plugin api implementation.
# It is mandatory to specify this property.
# The class must have a constructor without parameters or constructor with
# one `[Link]` parameter.
class = ""

# Dispatcher for the plugin actor.
plugin-dispatcher = "[Link]-plugin-dispatcher"

# Dispatcher for message replay.
replay-dispatcher = "[Link]-replay-dispatcher"

# Removed: used to be the Maximum size of a persistent message batch written to the journal.
# Now this setting is without function, PersistentActor will write as many messages
# as it has accumulated since the last write.
max-message-batch-size = 200


# If there is more time in between individual events gotten from the journal
# recovery than this the recovery will fail.
# Note that it also affects reading the snapshot before replaying events on
# top of it, even though it is configured for the journal.
recovery-event-timeout = 30s

circuit-breaker {
max-failures = 10
call-timeout = 10s
reset-timeout = 30s
}

# The replay filter can detect a corrupt event stream by inspecting
# sequence numbers and writerUuid when replaying events.
replay-filter {
# What the filter should do when detecting invalid events.
# Supported values:
# `repair-by-discard-old` : discard events from old writers,
# warning is logged
# `fail` : fail the replay, error is logged
# `warn` : log warning but emit events untouched
# `off` : disable this feature completely
mode = repair-by-discard-old

# It uses a look ahead buffer for analyzing the events.
# This defines the size (in number of events) of the buffer.
window-size = 100

# How many old writerUuid to remember
max-old-writers = 10

# Set this to `on` to enable detailed debug logging of each
# replayed event.
debug = off
}
}

# Fallback settings for snapshot store plugin configurations
# These settings are used if they are not defined in plugin config section.
snapshot-store-plugin-fallback {

# Fully qualified class name providing snapshot store plugin api
# implementation. It is mandatory to specify this property if
# snapshot store is enabled.
# The class must have a constructor without parameters or constructor with
# one `[Link]` parameter.
class = ""

# Dispatcher for the plugin actor.
plugin-dispatcher = "[Link]-plugin-dispatcher"

circuit-breaker {
max-failures = 5
call-timeout = 20s
reset-timeout = 60s
}
}
}

# Protobuf serialization for the persistent extension messages.
[Link] {
serializers {
akka-persistence-message = "[Link]"


akka-persistence-snapshot = "[Link]"
}
serialization-bindings {
"[Link]" = akka-persistence-message
"[Link]" = akka-persistence-snapshot
}
serialization-identifiers {
"[Link]" = 7
"[Link]" = 8
}
}

###################################################
# Persistence plugins included with the extension #
###################################################

# In-memory journal plugin.
[Link] {
# Class name of the plugin.
class = "[Link]"
# Dispatcher for the plugin actor.
plugin-dispatcher = "[Link]-dispatcher"
}

# Local file system snapshot store plugin.
[Link] {
# Class name of the plugin.
class = "[Link]"
# Dispatcher for the plugin actor.
plugin-dispatcher = "[Link]-plugin-dispatcher"
# Dispatcher for streaming snapshot IO.
stream-dispatcher = "[Link]-stream-dispatcher"
# Storage location of snapshot files.
dir = "snapshots"
# Number load attempts when recovering from the latest snapshot fails
# yet older snapshot files are available. Each recovery attempt will try
# to recover using an older than previously failed-on snapshot file
# (if any are present). If all attempts fail the recovery will fail and
# the persistent actor will be stopped.
max-load-attempts = 3
}

# LevelDB journal plugin.
# Note: this plugin requires explicit LevelDB dependency, see below.
[Link] {
# Class name of the plugin.
class = "[Link]"
# Dispatcher for the plugin actor.
plugin-dispatcher = "[Link]-plugin-dispatcher"
# Dispatcher for message replay.
replay-dispatcher = "[Link]-replay-dispatcher"
# Storage location of LevelDB files.
dir = "journal"
# Use fsync on write.
fsync = on
# Verify checksum on read.
checksum = off
# Native LevelDB (via JNI) or LevelDB Java port.
native = on
}

# Shared LevelDB journal plugin (for testing only).
# Note: this plugin requires explicit LevelDB dependency, see below.
[Link]-shared {
# Class name of the plugin.
class = "[Link]"
# Dispatcher for the plugin actor.
plugin-dispatcher = "[Link]-dispatcher"
# Timeout for async journal operations.
timeout = 10s
store {
# Dispatcher for shared store actor.
store-dispatcher = "[Link]-plugin-dispatcher"
# Dispatcher for message replay.
replay-dispatcher = "[Link]-replay-dispatcher"
# Storage location of LevelDB files.
dir = "journal"
# Use fsync on write.
fsync = on
# Verify checksum on read.
checksum = off
# Native LevelDB (via JNI) or LevelDB Java port.
native = on
}
}

[Link] {
# Class name of the plugin.
class = "[Link]"
# Dispatcher for the plugin actor.
plugin-dispatcher = "[Link]-dispatcher"
# Set this to on in the configuration of the ActorSystem
# that will host the target journal
start-target-journal = off
# The journal plugin config path to use for the target journal
target-journal-plugin = ""
# The address of the proxy to connect to from other nodes. Optional setting.
target-journal-address = ""
# Initialization timeout of target lookup
init-timeout = 10s
}

[Link] {
# Class name of the plugin.
class = "[Link]"
# Dispatcher for the plugin actor.
plugin-dispatcher = "[Link]-dispatcher"
# Set this to on in the configuration of the ActorSystem
# that will host the target snapshot-store
start-target-snapshot-store = off
# The journal plugin config path to use for the target snapshot-store
target-snapshot-store-plugin = ""
# The address of the proxy to connect to from other nodes. Optional setting.
target-snapshot-store-address = ""
# Initialization timeout of target lookup
init-timeout = 10s
}

# LevelDB persistence requires the following dependency declarations:
#
# SBT:
# "[Link]" % "leveldb" % "0.7"
# "[Link]" % "leveldbjni-all" % "1.8"
#
# Maven:
# <dependency>
# <groupId>[Link]</groupId>
# <artifactId>leveldb</artifactId>
# <version>0.7</version>
# </dependency>
# <dependency>
# <groupId>[Link]</groupId>
# <artifactId>leveldbjni-all</artifactId>
# <version>1.8</version>
# </dependency>

akka-remote

#####################################
# Akka Remote Reference Config File #
#####################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your [Link].

# comments about [Link] settings left out where they are already in akka-
# [Link], because otherwise they would be repeated in config rendering.
#
# For the configuration of the new remoting implementation (Artery) please look
# at the bottom section of this file as it is listed separately.

akka {

actor {

serializers {
akka-containers = "[Link]"
akka-misc = "[Link]"
artery = "[Link]"
proto = "[Link]"
daemon-create = "[Link]"
primitive-long = "[Link]"
primitive-int = "[Link]"
primitive-string = "[Link]"
primitive-bytestring = "[Link]"
akka-system-msg = "[Link]"
}

serialization-bindings {
"[Link]" = akka-containers

"[Link]" = daemon-create

"[Link]" = artery

# Since [Link] does not extend Serializable but
# GeneratedMessage does, need to use the more specific one here in order
# to avoid ambiguity.
"[Link]" = proto

# Since [Link] does not extend Serializable but
# GeneratedMessage does, need to use the more specific one here in order
# to avoid ambiguity.
# This [Link] serialization binding is only used if the class can be loaded,
# i.e. [Link] dependency has been added in the application project.
"[Link]" = proto


"[Link]" = akka-misc
}

# For the purpose of preserving protocol backward compatibility these bindings are not
# included by default. They can be enabled with enable-additional-serialization-bindings=on.
# They are enabled by default if [Link]=on or if
# [Link]-java-serialization=off.
additional-serialization-bindings {
"[Link]" = akka-misc
"[Link]" = akka-misc
"[Link]" = akka-misc
"[Link]$" = akka-misc
"[Link]$Success" = akka-misc
"[Link]$Failure" = akka-misc
"[Link]" = akka-misc
"[Link]$" = akka-misc
"[Link]$" = akka-misc
"[Link]$Heartbeat$" = akka-misc
"[Link]$HeartbeatRsp" = akka-misc
"[Link]" = akka-misc

"[Link]" = akka-system-msg

"[Link]" = primitive-string
"[Link]$ByteString1C" = primitive-bytestring
"[Link]$ByteString1" = primitive-bytestring
"[Link]$ByteStrings" = primitive-bytestring
"[Link]" = primitive-long
"[Link]" = primitive-long
"[Link]" = primitive-int
"[Link]" = primitive-int

# Java Serializer is by default used for exceptions.
# It's recommended that you implement custom serializer for exceptions that are
# sent remotely, e.g. in [Link] for ask replies. You can add
# binding to akka-misc (MiscMessageSerializerSpec) for the exceptions that have
# a constructor with single message String or constructor with message String as
# first parameter and cause Throwable as second parameter. Note that it’s not
# safe to add this binding for general exceptions such as IllegalArgumentException
# because it may have a subclass without required constructor.
"[Link]" = java
"[Link]" = akka-misc
"[Link]" = akka-misc
"[Link]" = akka-misc
"[Link]" = akka-misc
}

serialization-identifiers {
"[Link]" = 2
"[Link]" = 3
"[Link]" = 6
"[Link]" = 16
"[Link]" = 17
"[Link]" = 18
"[Link]" = 19
"[Link]" = 20
"[Link]" = 21
"[Link]" = 22
}

deployment {


default {

# if this is set to a valid remote address, the named actor will be
# deployed at that node e.g. "[Link]://sys@host:port"
remote = ""

target {

# A list of hostnames and ports for instantiating the children of a
# router
# The format should be on "[Link]://sys@host:port", where:
# - sys is the remote actor system name
# - hostname can be either hostname or IP address the remote actor
# should connect to
# - port should be the port for the remote server on the other node
# The number of actor instances to be spawned is still taken from the
# nr-of-instances setting as for local routers; the instances will be
# distributed round-robin among the given nodes.
nodes = []

}
}
}
}

remote {
### Settings shared by classic remoting and Artery (the new implementation of remoting)

# If set to a nonempty string remoting will use the given dispatcher for
# its internal actors otherwise the default dispatcher is used. Please note
# that since remoting can load arbitrary 3rd party drivers (see
# "enabled-transport" and "adapters" entries) it is not guaranteed that
# every module will respect this setting.
use-dispatcher = "[Link]-remote-dispatcher"

# Settings for the failure detector to monitor connections.
# For TCP it is not important to have fast failure detection, since
# most connection failures are captured by TCP itself.
# The default DeadlineFailureDetector will trigger if there are no heartbeats within
# the duration heartbeat-interval + acceptable-heartbeat-pause, i.e. 124 seconds
# with the default settings.
transport-failure-detector {

# FQCN of the failure detector implementation.
# It must implement [Link] and have
# a public constructor with a [Link] and
# [Link] parameter.
implementation-class = "[Link]"

# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 4 s

# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
# A margin to the `heartbeat-interval` is important to be able to survive sudden,
# occasional, pauses in heartbeat arrivals, due to for example garbage collect or
# network drop.
acceptable-heartbeat-pause = 120 s
}

# Settings for the Phi accrual failure detector ([Link]
# [Hayashibara et al]) used for remote death watch.
# The default PhiAccrualFailureDetector will trigger if there are no heartbeats within
# the duration heartbeat-interval + acceptable-heartbeat-pause + threshold_adjustment,
# i.e. around 12.5 seconds with default settings.
watch-failure-detector {

# FQCN of the failure detector implementation.
# It must implement [Link] and have
# a public constructor with a [Link] and
# [Link] parameter.
implementation-class = "[Link]"

# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 1 s

# Defines the failure detector threshold.
# A low threshold is prone to generate many wrong suspicions but ensures
# a quick detection in the event of a real crash. Conversely, a high
# threshold generates fewer mistakes but needs more time to detect
# actual crashes.
threshold = 10.0

# Number of the samples of inter-heartbeat arrival times to adaptively
# calculate the failure timeout for connections.
max-sample-size = 200

# Minimum standard deviation to use for the normal distribution in
# AccrualFailureDetector. Too low standard deviation might result in
# too much sensitivity for sudden, but normal, deviations in heartbeat
# inter arrival times.
min-std-deviation = 100 ms

# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
# This margin is important to be able to survive sudden, occasional,
# pauses in heartbeat arrivals, due to for example garbage collect or
# network drop.
acceptable-heartbeat-pause = 10 s

# How often to check for nodes marked as unreachable by the failure
# detector
unreachable-nodes-reaper-interval = 1s

# After the heartbeat request has been sent the first failure detection
# will start after this period, even though no heartbeat message has
# been received.
expected-response-after = 1 s
}

# remote deployment configuration section
deployment {
# If true, will only allow specific classes to be instantiated on this system via remote deployment
enable-whitelist = off

whitelist = []
}

### Configuration for classic remoting

# Timeout after which the startup of the remoting subsystem is considered
# to be failed. Increase this value if your transport drivers (see the
# enabled-transports section) need longer time to be loaded.
startup-timeout = 10 s


# Timeout after which the graceful shutdown of the remoting subsystem is
# considered to be failed. After the timeout the remoting system is
# forcefully shut down. Increase this value if your transport drivers
# (see the enabled-transports section) need longer time to stop properly.
shutdown-timeout = 10 s

# Before shutting down the drivers, the remoting subsystem attempts to flush
# all pending writes. This setting controls the maximum time the remoting is
# willing to wait before moving on to shut down the drivers.
flush-wait-on-shutdown = 2 s

# Reuse inbound connections for outbound messages
use-passive-connections = on

# Controls the backoff interval after a refused write is reattempted.
# (Transports may refuse writes if their internal buffer is full)
backoff-interval = 5 ms

# Acknowledgment timeout of management commands sent to the transport stack.
command-ack-timeout = 30 s

# The timeout for outbound associations to perform the handshake.
# If the transport is [Link] or [Link]
# the configured connection-timeout for the transport will be used instead.
handshake-timeout = 15 s

### Security settings

# Enable untrusted mode for full security of server managed actors, prevents
# system messages from being sent by clients, e.g. messages like 'Create',
# 'Suspend', 'Resume', 'Terminate', 'Supervise', 'Link' etc.
untrusted-mode = off

# When 'untrusted-mode=on' inbound actor selections are by default discarded.
# Actors with paths defined in this white list are granted permission to receive actor
# selection messages.
# E.g. trusted-selection-paths = ["/user/receptionist", "/user/namingService"]
trusted-selection-paths = []

# Should the remote server require that its peers share the same
# secure-cookie (defined in the 'remote' section)? Secure cookies are passed
# between peers during the initial handshake. Connections are refused if the initial
# message contains a mismatching cookie or the cookie is missing.
require-cookie = off

# Deprecated since 2.4-M1
secure-cookie = ""

### Logging

# If this is "on", Akka will log all inbound messages at DEBUG level,
# if off then they are not logged
log-received-messages = off

# If this is "on", Akka will log all outbound messages at DEBUG level,
# if off then they are not logged
log-sent-messages = off

# Sets the log granularity level at which Akka logs remoting events. This setting
# can take the values OFF, ERROR, WARNING, INFO, DEBUG, or ON. For compatibility
# reasons the setting "on" will default to "debug" level. Please note that the effective
# logging level is still determined by the global logging level of the actor system:

# for example debug level remoting events will be only logged if the system
# is running with debug level logging.
# Failures to deserialize received messages also fall under this flag.
log-remote-lifecycle-events = on

# Logging of message types with payload size in bytes larger than
# this value. Maximum detected size per message type is logged once,
# with an increase threshold of 10%.
# By default this feature is turned off. Activate it by setting the property to
# a value in bytes, such as 1000b. Note that for all messages larger than this
# limit there will be extra performance and scalability cost.
log-frame-size-exceeding = off

# Log warning if the number of messages in the backoff buffer in the endpoint
# writer exceeds this limit. It can be disabled by setting the value to off.
log-buffer-size-exceeding = 50000

# After failing to establish an outbound connection, the remoting will mark the
# address as failed. This configuration option controls how much time should
# be elapsed before reattempting a new connection. While the address is
# gated, all messages sent to the address are delivered to dead-letters.
# Since this setting limits the rate of reconnects setting it to a
# very short interval (i.e. less than a second) may result in a storm of
# reconnect attempts.
retry-gate-closed-for = 5 s

# After catastrophic communication failures that result in the loss of system
# messages or after the remote DeathWatch triggers the remote system gets
# quarantined to prevent inconsistent behavior.
# This setting controls how long the Quarantine marker will be kept around
# before being removed to avoid long-term memory leaks.
# WARNING: DO NOT change this to a small value to re-enable communication with
# quarantined nodes. Such feature is not supported and any behavior between
# the affected systems after lifting the quarantine is undefined.
prune-quarantine-marker-after = 5 d

# If system messages have been exchanged between two systems (i.e. remote death
# watch or remote deployment has been used) a remote system will be marked as
# quarantined after the two systems have no active association, and no
# communication happens during the time configured here.
# The only purpose of this setting is to avoid storing system message redelivery
# data (sequence number state, etc.) for an undefined amount of time leading to long
# term memory leak. Instead, if a system has been gone for this period,
# or more exactly
# - there is no association between the two systems (TCP connection, if TCP transport is used)
# - neither side has been attempting to communicate with the other
# - there are no pending system messages to deliver
# for the amount of time configured here, the remote system will be quarantined and all state
# associated with it will be dropped.
quarantine-after-silence = 2 d

# This setting defines the maximum number of unacknowledged system messages
# allowed for a remote system. If this limit is reached the remote system is
# declared to be dead and its UID marked as tainted.
system-message-buffer-size = 20000

# This setting defines the maximum idle time after an individual
# acknowledgement for system messages is sent. System message delivery
# is guaranteed by explicit acknowledgement messages. These acks are
# piggybacked on ordinary traffic messages. If no traffic is detected
# during the time period configured here, the remoting will send out
# an individual ack.
system-message-ack-piggyback-timeout = 0.3 s

# This setting defines the time after internal management signals
# between actors (used for DeathWatch and supervision) that have not been
# explicitly acknowledged or negatively acknowledged are resent.
# Messages that were negatively acknowledged are always immediately
# resent.
resend-interval = 2 s

# Maximum number of unacknowledged system messages that will be resent
# each 'resend-interval'. If you watch many (> 1000) remote actors you can
# increase this value to for example 600, but a too large limit (e.g. 10000)
# may flood the connection and might cause false failure detection to trigger.
# Test such a configuration by watching all actors at the same time and stop
# all watched actors at the same time.
resend-limit = 200

# WARNING: this setting should not be changed unless all of its consequences
# are properly understood which assumes experience with remoting internals
# or expert advice.
# This setting defines the time after redelivery attempts of internal management
# signals are stopped to a remote system that has been not confirmed to be alive by
# this system before.
initial-system-message-delivery-timeout = 3 m

### Transports and adapters

# List of the transport drivers that will be loaded by the remoting.
# A list of fully qualified config paths must be provided where
# the given configuration path contains a transport-class key
# pointing to an implementation class of the Transport interface.
# If multiple transports are provided, the address of the first
# one will be used as a default address.
enabled-transports = ["[Link]"]

# Transport drivers can be augmented with adapters by adding their
# name to the applied-adapters setting in the configuration of a
# transport. The available adapters should be configured in this
# section by providing a name, and the fully qualified name of
# their corresponding implementation. The class given here
# must implement [Link]
# and have public constructor without parameters.
adapters {
gremlin = "[Link]"
trttl = "[Link]"
}

### Default configuration for the Netty based transport drivers

[Link] {
# The class given here must implement the [Link]
# interface and offer a public constructor which takes two arguments:
# 1) [Link]
# 2) [Link]
transport-class = "[Link]"

# Transport drivers can be augmented with adapters by adding their
# name to the applied-adapters list. The last adapter in the
# list is the adapter immediately above the driver, while
# the first one is the top of the stack below the standard
# Akka protocol
applied-adapters = []


transport-protocol = tcp

# The default remote server port clients should connect to.
# Default is 2552 (AKKA), use 0 if you want a random available port
# This port needs to be unique for each actor system on the same machine.
port = 2552

# The hostname or ip clients should connect to.
# [Link] is used if empty
hostname = ""

# Use this setting to bind a network interface to a different port
# than remoting protocol expects messages at. This may be used
# when running akka nodes in a separated networks (under NATs or docker containers).
# Use 0 if you want a random available port. Examples:
#
# [Link] = 2552
# [Link]-port = 2553
# Network interface will be bound to the 2553 port, but remoting protocol will
# expect messages sent to port 2552.
#
# [Link] = 0
# [Link]-port = 0
# Network interface will be bound to a random port, and remoting protocol will
# expect messages sent to the bound port.
#
# [Link] = 2552
# [Link]-port = 0
# Network interface will be bound to a random port, but remoting protocol will
# expect messages sent to port 2552.
#
# [Link] = 0
# [Link]-port = 2553
# Network interface will be bound to the 2553 port, and remoting protocol will
# expect messages sent to the bound port.
#
# [Link] = 2552
# [Link]-port = ""
# Network interface will be bound to the 2552 port, and remoting protocol will
# expect messages sent to the bound port.
#
# [Link] if empty
bind-port = ""

# Use this setting to bind a network interface to a different hostname or ip
# than remoting protocol expects messages at.
# Use "[Link]" to bind to all interfaces.
# [Link] if empty
bind-hostname = ""

# Enables SSL support on this transport
enable-ssl = false

# Sets the connectTimeoutMillis of all outbound connections,
# i.e. how long a connect may take until it is timed out
connection-timeout = 15 s

# If set to "<[Link]>" then the specified dispatcher
# will be used to accept inbound connections, and perform IO. If "" then
# dedicated threads will be used.
# Please note that the Netty driver only uses this configuration and does
# not read the "[Link]-dispatcher" entry. Instead it has to be
# configured manually to point to the same dispatcher if needed.
use-dispatcher-for-io = ""

# Sets the high water mark for the in and outbound sockets,
# set to 0b for platform default
write-buffer-high-water-mark = 0b

# Sets the low water mark for the in and outbound sockets,
# set to 0b for platform default
write-buffer-low-water-mark = 0b

# Sets the send buffer size of the Sockets,
# set to 0b for platform default
send-buffer-size = 256000b

# Sets the receive buffer size of the Sockets,
# set to 0b for platform default
receive-buffer-size = 256000b

# Maximum message size the transport will accept, but at least
# 32000 bytes.
# Please note that UDP does not support arbitrary large datagrams,
# so this setting has to be chosen carefully when using UDP.
# Both send-buffer-size and receive-buffer-size settings have to
# be adjusted to be able to buffer messages of maximum size.
maximum-frame-size = 128000b

# Sets the size of the connection backlog
backlog = 4096

# Enables the TCP_NODELAY flag, i.e. disables Nagle's algorithm
tcp-nodelay = on

# Enables TCP Keepalive, subject to the O/S kernel's configuration
tcp-keepalive = on

# Enables SO_REUSEADDR, which determines when an ActorSystem can open
# the specified listen port (the meaning differs between *nix and Windows)
# Valid values are "on", "off" and "off-for-windows"
# due to the following Windows bug: [Link]
# "off-for-windows" of course means that it’s "on" for all other platforms
tcp-reuse-addr = off-for-windows

# Used to configure the number of I/O worker threads on server sockets
server-socket-worker-pool {
# Min number of threads to cap factor-based number to
pool-size-min = 2

# The pool size factor is used to determine thread pool size
# using the following formula: ceil(available processors * factor).
# Resulting size is then bounded by the pool-size-min and
# pool-size-max values.
pool-size-factor = 1.0

# Max number of threads to cap factor-based number to
pool-size-max = 2
}

# Used to configure the number of I/O worker threads on client sockets
client-socket-worker-pool {
# Min number of threads to cap factor-based number to
pool-size-min = 2


# The pool size factor is used to determine thread pool size
# using the following formula: ceil(available processors * factor).
# Resulting size is then bounded by the pool-size-min and
# pool-size-max values.
pool-size-factor = 1.0

# Max number of threads to cap factor-based number to
pool-size-max = 2
}
}

[Link] = ${[Link]}
[Link] {
transport-protocol = udp
}

[Link] = ${[Link]}
[Link] = {
# Enable SSL/TLS encryption.
# This must be enabled on both the client and server to work.
enable-ssl = true

security {
# This is the Java Key Store used by the server connection
key-store = "keystore"

# This password is used for decrypting the key store
key-store-password = "changeme"

# This password is used for decrypting the key
key-password = "changeme"

# This is the Java Key Store used by the client connection
trust-store = "truststore"

# This password is used for decrypting the trust store
trust-store-password = "changeme"

# Protocol to use for SSL encryption, choose from:
# TLS 1.2 is available since JDK7, and default since JDK8:
# [Link]
protocol = "TLSv1.2"

# Example: ["TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA"]
# You need to install the JCE Unlimited Strength Jurisdiction Policy
# Files to use AES 256.
# More info here:
# [Link]
enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]

# There are three options, in increasing order of security:
# "" or SecureRandom => (default)
# "SHA1PRNG" => Can be slow because of blocking issues on Linux
# "AES128CounterSecureRNG" => fastest startup and based on AES encryption
# algorithm
# "AES256CounterSecureRNG"
#
# The following are deprecated in Akka 2.4. They use one of 3 possible
# seed sources, depending on availability: /dev/random, [Link] and
# SecureRandom (provided by Java)
# "AES128CounterInetRNG"


# "AES256CounterInetRNG" (Install JCE Unlimited Strength Jurisdiction


# Policy Files first)
# Setting a value here may require you to supply the appropriate cipher
# suite (see enabled-algorithms section above)
random-number-generator = ""

# Require mutual authentication between TLS peers
#
# Without mutual authentication only the peer that actively establishes a connection
# (TLS client side) checks if the passive side (TLS server side) sends over a trusted
# certificate. With the flag turned on, the passive side will also request and verify
# a certificate from the connecting peer.
#
# To prevent man-in-the-middle attacks you should enable this setting. For
# compatibility reasons it is still set to 'off' per default.
#
# Note: Nodes that are configured with this setting to 'on' might not be able to
# receive messages from nodes that run on older versions of akka-remote. This is
# because in older versions of Akka the active side of the remoting connection will
# not send over certificates.
#
# However, starting from the version this setting was added, even with this setting
# "off", the active side (TLS client side) will use the given key-store to send over
# a certificate if asked. A rolling upgrade from older versions of Akka can
# therefore work like this:
# - upgrade all nodes to an Akka version supporting this flag, keeping it off
# - then switch the flag on and do again a rolling upgrade of all nodes
# The first step ensures that all nodes will send over a certificate when asked to.
# The second step will ensure that all nodes finally enforce the secure checking of
# client certificates.
require-mutual-authentication = off
}
}

### Default configuration for the failure injector transport adapter

gremlin {
# Enable debug logging of the failure injector transport adapter
debug = off
}

### Default dispatcher for the remoting subsystem

default-remote-dispatcher {
type = Dispatcher
executor = "fork-join-executor"
fork-join-executor {
parallelism-min = 2
parallelism-factor = 0.5
parallelism-max = 16
}
throughput = 10
}

backoff-remote-dispatcher {
type = Dispatcher
executor = "fork-join-executor"
fork-join-executor {
# Min number of threads to cap factor-based parallelism number to
parallelism-min = 2
parallelism-max = 2
}
}
}
}
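The classic remoting defaults above are normally overridden from application.conf rather than edited in the reference file. A minimal sketch enabling remoting (hedged: the hostname and port values are placeholders; the provider class name is the one classic remoting uses in Akka 2.4):

```hocon
# application.conf -- illustrative override of the reference defaults
akka {
  actor.provider = "akka.remote.RemoteActorRefProvider"
  remote.netty.tcp {
    hostname = "127.0.0.1"  # the address other systems connect to
    port = 2552             # 0 selects a random free port
  }
}
```

When running several actor systems on one machine, set port = 0 and read the assigned address from the system's startup log.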


akka-remote (artery)

#####################################
# Akka Remote Reference Config File #
#####################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your [Link].

# comments about [Link] settings left out where they are already in akka-
# [Link], because otherwise they would be repeated in config rendering.
#
# For the configuration of the new remoting implementation (Artery) please look
# at the bottom section of this file as it is listed separately.

akka {

actor {

serializers {
akka-containers = "[Link]"
akka-misc = "[Link]"
artery = "[Link]"
proto = "[Link]"
daemon-create = "[Link]"
primitive-long = "[Link]"
primitive-int = "[Link]"
primitive-string = "[Link]"
primitive-bytestring = "[Link]"
akka-system-msg = "[Link]"
}

serialization-bindings {
"[Link]" = akka-containers

"[Link]" = daemon-create

"[Link]" = artery

# Since [Link] does not extend Serializable but
# GeneratedMessage does, need to use the more specific one here in order
# to avoid ambiguity.
"[Link]" = proto

# Since [Link] does not extend Serializable but
# GeneratedMessage does, need to use the more specific one here in order
# to avoid ambiguity.
# This [Link] serialization binding is only used if the class can be loaded,
# i.e. [Link] dependency has been added in the application project.
"[Link]" = proto

"[Link]" = akka-misc
}

# For the purpose of preserving protocol backward compatibility these bindings are not
# included by default. They can be enabled with enable-additional-serialization-bindings=on.
# They are enabled by default if [Link]=on or if
# [Link]-java-serialization=off.
additional-serialization-bindings {
"[Link]" = akka-misc
"[Link]" = akka-misc
"[Link]" = akka-misc
"[Link]$" = akka-misc


"[Link]$Success" = akka-misc
"[Link]$Failure" = akka-misc
"[Link]" = akka-misc
"[Link]$" = akka-misc
"[Link]$" = akka-misc
"[Link]$Heartbeat$" = akka-misc
"[Link]$HeartbeatRsp" = akka-misc
"[Link]" = akka-misc

"[Link]" = akka-system-msg

"[Link]" = primitive-string
"[Link]$ByteString1C" = primitive-bytestring
"[Link]$ByteString1" = primitive-bytestring
"[Link]$ByteStrings" = primitive-bytestring
"[Link]" = primitive-long
"[Link]" = primitive-long
"[Link]" = primitive-int
"[Link]" = primitive-int

# Java Serializer is by default used for exceptions.
# It's recommended that you implement custom serializer for exceptions that are
# sent remotely, e.g. in [Link] for ask replies. You can add
# binding to akka-misc (MiscMessageSerializerSpec) for the exceptions that have
# a constructor with single message String or constructor with message String as
# first parameter and cause Throwable as second parameter. Note that it’s not
# safe to add this binding for general exceptions such as IllegalArgumentException
# because it may have a subclass without required constructor.
"[Link]" = java
"[Link]" = akka-misc
"[Link]" = akka-misc
"[Link]" = akka-misc
"[Link]" = akka-misc
}

serialization-identifiers {
"[Link]" = 2
"[Link]" = 3
"[Link]" = 6
"[Link]" = 16
"[Link]" = 17
"[Link]" = 18
"[Link]" = 19
"[Link]" = 20
"[Link]" = 21
"[Link]" = 22
}

deployment {

default {

# if this is set to a valid remote address, the named actor will be
# deployed at that node e.g. "[Link]://sys@host:port"
remote = ""

target {

# A list of hostnames and ports for instantiating the children of a
# router
# The format should be on "[Link]://sys@host:port", where:
# - sys is the remote actor system name
# - hostname can be either hostname or IP address the remote actor
# should connect to
# - port should be the port for the remote server on the other node
# The number of actor instances to be spawned is still taken from the
# nr-of-instances setting as for local routers; the instances will be
# distributed round-robin among the given nodes.
nodes = []

}
}
}
}

remote {
### Settings shared by classic remoting and Artery (the new implementation of remoting)

# If set to a nonempty string remoting will use the given dispatcher for
# its internal actors otherwise the default dispatcher is used. Please note
# that since remoting can load arbitrary 3rd party drivers (see
# "enabled-transport" and "adapters" entries) it is not guaranteed that
# every module will respect this setting.
use-dispatcher = "[Link]-remote-dispatcher"

# Settings for the failure detector to monitor connections.
# For TCP it is not important to have fast failure detection, since
# most connection failures are captured by TCP itself.
# The default DeadlineFailureDetector will trigger if there are no heartbeats within
# the duration heartbeat-interval + acceptable-heartbeat-pause, i.e. 124 seconds
# with the default settings.
transport-failure-detector {

# FQCN of the failure detector implementation.
# It must implement [Link] and have
# a public constructor with a [Link] and
# [Link] parameter.
implementation-class = "[Link]"

# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 4 s

# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
# A margin to the `heartbeat-interval` is important to be able to survive sudden,
# occasional, pauses in heartbeat arrivals, due to for example garbage collect or
# network drop.
acceptable-heartbeat-pause = 120 s
}

# Settings for the Phi accrual failure detector ([Link]
# [Hayashibara et al]) used for remote death watch.
# The default PhiAccrualFailureDetector will trigger if there are no heartbeats within
# the duration heartbeat-interval + acceptable-heartbeat-pause + threshold_adjustment,
# i.e. around 12.5 seconds with default settings.
watch-failure-detector {

# FQCN of the failure detector implementation.
# It must implement [Link] and have
# a public constructor with a [Link] and
# [Link] parameter.
implementation-class = "[Link]"

# How often keep-alive heartbeat messages should be sent to each connection.
heartbeat-interval = 1 s


# Defines the failure detector threshold.
# A low threshold is prone to generate many wrong suspicions but ensures
# a quick detection in the event of a real crash. Conversely, a high
# threshold generates fewer mistakes but needs more time to detect
# actual crashes.
threshold = 10.0

# Number of the samples of inter-heartbeat arrival times to adaptively


# calculate the failure timeout for connections.
max-sample-size = 200

# Minimum standard deviation to use for the normal distribution in


# AccrualFailureDetector. Too low standard deviation might result in
# too much sensitivity for sudden, but normal, deviations in heartbeat
# inter arrival times.
min-std-deviation = 100 ms

# Number of potentially lost/delayed heartbeats that will be


# accepted before considering it to be an anomaly.
# This margin is important to be able to survive sudden, occasional,
# pauses in heartbeat arrivals, due to for example garbage collect or
# network drop.
acceptable-heartbeat-pause = 10 s

# How often to check for nodes marked as unreachable by the failure
# detector
unreachable-nodes-reaper-interval = 1s

# After the heartbeat request has been sent the first failure detection
# will start after this period, even though no heartbeat message has
# been received.
expected-response-after = 1 s
}

# remote deployment configuration section
deployment {
# If true, will only allow specific classes to be instantiated on this
# system via remote deployment
enable-whitelist = off

whitelist = []
}

### Configuration for Artery, the reimplementation of remoting


artery {

# Enable the new remoting with this flag
enabled = off

# Canonical address is the address other clients should connect to.
# Artery transport will expect messages to this address.
canonical {

# The default remote server port clients should connect to.
# Default is 25520, use 0 if you want a random available port
# This port needs to be unique for each actor system on the same machine.
port = 25520

# Hostname clients should connect to. Can be set to an ip, hostname
# or one of the following special values:
#   "<getHostAddress>"   InetAddress.getLocalHost.getHostAddress
#   "<getHostName>"      InetAddress.getLocalHost.getHostName

#
hostname = "<getHostAddress>"
}

# Use these settings to bind a network interface to a different address
# than artery expects messages at. This may be used when running Akka
# nodes in separated networks (under NATs or in containers). If canonical
# and bind addresses are different, then network configuration that relays
# communications from canonical to bind addresses is expected.
bind {

# Port to bind a network interface to. Can be set to a port number
# or one of the following special values:
#   0    random available port
#   ""   akka.remote.artery.canonical.port
#
port = ""

# Hostname to bind a network interface to. Can be set to an ip, hostname
# or one of the following special values:
#   "0.0.0.0"            all interfaces
#   ""                   akka.remote.artery.canonical.hostname
#   "<getHostAddress>"   InetAddress.getLocalHost.getHostAddress
#   "<getHostName>"      InetAddress.getLocalHost.getHostName
#
hostname = ""
}
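For example, a node running in a container behind a NAT could advertise a public canonical address while binding to all local interfaces. The host name below is purely illustrative:

```hocon
akka.remote.artery {
  canonical.hostname = "public.example.com"   # illustrative external address
  canonical.port = 25520
  bind.hostname = "0.0.0.0"                   # bind all interfaces inside the container
  bind.port = 25520
}
```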

# Actor paths to use the large message stream for when a message
# is sent to them over remoting. The dedicated large message stream
# is separate from "normal" and system messages so that sending a
# large message does not interfere with them.
# Entries should be the full path to the actor. Wildcards in the form of "*"
# can be supplied at any place and matches any name at that segment -
# "/user/supervisor/actor/*" will match any direct child to actor,
# while "/supervisor/*/child" will match any grandchild to "supervisor" that
# has the name "child".
# Messages sent to ActorSelections will not be passed through the large message
# stream; to pass such messages through the large message stream the selections
# must be resolved to ActorRefs first.
large-message-destinations = []
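As a sketch, the hypothetical paths below would route messages for those actors through the dedicated large message stream:

```hocon
akka.remote.artery {
  # Both paths are hypothetical examples, not defaults.
  large-message-destinations = [
    "/user/largeUploads",
    "/user/workers/*"
  ]
}
```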

# Enable untrusted mode, which discards inbound system messages, PossiblyHarmful and
# ActorSelection messages. E.g. remote watch and remote deployment will not work.
# ActorSelection messages can be enabled for specific paths with the trusted-selection-paths
untrusted-mode = off

# When 'untrusted-mode=on' inbound actor selections are by default discarded.
# Actors with paths defined in this white list are granted permission to receive
# actor selection messages.
# E.g. trusted-selection-paths = ["/user/receptionist", "/user/namingService"]
trusted-selection-paths = []

# If this is "on", all inbound remote messages will be logged at DEBUG level,
# if off then they are not logged
log-received-messages = off

# If this is "on", all outbound remote messages will be logged at DEBUG level,
# if off then they are not logged
log-sent-messages = off

advanced {

# Maximum serialized message size, including header data.
maximum-frame-size = 256 KiB

# Direct byte buffers are reused in a pool with this maximum size.
# Each buffer has the size of ’maximum-frame-size’.
# This is not a hard upper limit on number of created buffers. Additional
# buffers will be created if needed, e.g. when using many outbound
# associations at the same time. Such additional buffers will be garbage
# collected, which is not as efficient as reusing buffers in the pool.
buffer-pool-size = 128

# Maximum serialized message size for the large messages, including header data.
# See ’large-message-destinations’.
maximum-large-frame-size = 2 MiB

# Direct byte buffers for the large messages are reused in a pool with this maximum size.
# Each buffer has the size of ’maximum-large-frame-size’.
# See ’large-message-destinations’.
# This is not a hard upper limit on number of created buffers. Additional
# buffers will be created if needed, e.g. when using many outbound
# associations at the same time. Such additional buffers will be garbage
# collected, which is not as efficient as reusing buffers in the pool.
large-buffer-pool-size = 32

# For enabling testing features, such as blackhole in akka-remote-testkit.
test-mode = off

# Settings for the materializer that is used for the remote streams.
materializer = ${akka.stream.materializer}

# If set to a nonempty string artery will use the given dispatcher for
# the ordinary and large message streams, otherwise the default dispatcher is used.
use-dispatcher = "akka.remote.default-remote-dispatcher"

# If set to a nonempty string remoting will use the given dispatcher for
# the control stream, otherwise the default dispatcher is used.
# It can be good to not use the same dispatcher for the control stream as
# the dispatcher for the ordinary message stream so that heartbeat messages
# are not disturbed.
use-control-stream-dispatcher = ""

# Controls whether to start the Aeron media driver in the same JVM or use external
# process. Set to ’off’ when using external media driver, and then also set the
# ’aeron-dir’.
embedded-media-driver = on

# Directory used by the Aeron media driver. It's mandatory to define the 'aeron-dir'
# if using external media driver, i.e. when 'embedded-media-driver = off'.
# Embedded media driver will use this directory, or a temporary directory if this
# property is not defined (empty).
aeron-dir = ""

# Whether to delete aeron embedded driver directory upon driver stop.
delete-aeron-dir = yes

# Level of CPU time used, on a scale between 1 and 10, during backoff/idle.
# The tradeoff is that to have low latency more CPU time must be used to be
# able to react quickly on incoming messages or send as fast as possible after
# backoff backpressure.
# Level 1 strongly prefers low CPU consumption over low latency.
# Level 10 strongly prefers low latency over low CPU consumption.
idle-cpu-level = 5

# WARNING: This feature is not supported yet. Don’t use other value than 1.
# It requires more hardening and performance optimizations.
# Number of outbound lanes for each outbound association. A value greater than 1
# means that serialization can be performed in parallel for different destination
# actors. The selection of lane is based on consistent hashing of the recipient
# ActorRef to preserve message ordering per receiver.
outbound-lanes = 1

# WARNING: This feature is not supported yet. Don’t use other value than 1.
# It requires more hardening and performance optimizations.
# Total number of inbound lanes, shared among all inbound associations. A value
# greater than 1 means that deserialization can be performed in parallel for
# different destination actors. The selection of lane is based on consistent
# hashing of the recipient ActorRef to preserve message ordering per receiver.
inbound-lanes = 1

# Size of the send queue for outgoing messages. Messages will be dropped if
# the queue becomes full. This may happen if you send a burst of many messages
# without end-to-end flow control. Note that there is one such queue per
# outbound association. The trade-off of using a larger queue size is that
# it consumes more memory, since the queue is based on preallocated array with
# fixed size.
outbound-message-queue-size = 3072

# Size of the send queue for outgoing control messages, such as system messages.
# If this limit is reached the remote system is declared to be dead and its UID
# marked as quarantined.
# The trade-off of using a larger queue size is that it consumes more memory,
# since the queue is based on preallocated array with fixed size.
outbound-control-queue-size = 3072

# Size of the send queue for outgoing large messages. Messages will be dropped if
# the queue becomes full. This may happen if you send a burst of many messages
# without end-to-end flow control. Note that there is one such queue per
# outbound association. The trade-off of using a larger queue size is that
# it consumes more memory, since the queue is based on preallocated array with
# fixed size.
outbound-large-message-queue-size = 256

# This setting defines the maximum number of unacknowledged system messages
# allowed for a remote system. If this limit is reached the remote system is
# declared to be dead and its UID marked as quarantined.
system-message-buffer-size = 20000

# unacknowledged system messages are re-delivered with this interval
system-message-resend-interval = 1 second

# The timeout for outbound associations to perform the handshake.
# This timeout must be greater than the 'image-liveness-timeout'.
handshake-timeout = 20 s

# incomplete handshake attempt is retried with this interval
handshake-retry-interval = 1 second

# handshake requests are performed periodically with this interval,
# also after the handshake has been completed to be able to establish
# a new session with a restarted destination system
inject-handshake-interval = 1 second

# messages that are not accepted by Aeron are dropped after retrying for this period
give-up-message-after = 60 seconds

# System messages that are not acknowledged after re-sending for this period are
# dropped and will trigger quarantine. The value should be longer than the length
# of a network partition that you need to survive.
give-up-system-message-after = 6 hours

# during ActorSystem termination the remoting will wait this long for
# an acknowledgment by the destination system that flushing of outstanding
# remote messages has been completed
shutdown-flush-timeout = 1 second

# See 'inbound-max-restarts'
inbound-restart-timeout = 5 seconds

# Max number of restarts within 'inbound-restart-timeout' for the inbound streams.
# If more restarts occur the ActorSystem will be terminated.
inbound-max-restarts = 5

# See 'outbound-max-restarts'
outbound-restart-timeout = 5 seconds

# Max number of restarts within 'outbound-restart-timeout' for the outbound streams.
# If more restarts occur the ActorSystem will be terminated.
outbound-max-restarts = 5

# Stop outbound stream of a quarantined association after this idle timeout, i.e.
# when not used any more.
stop-quarantined-after-idle = 3 seconds

# Timeout after which aeron driver has not had keepalive messages
# from a client before it considers the client dead.
client-liveness-timeout = 20 seconds

# Timeout for each of the INACTIVE and LINGER stages an aeron image
# will be retained for when it is no longer referenced.
# This timeout must be less than the 'handshake-timeout'.
image-liveness-timeout = 10 seconds

# Timeout after which the aeron driver is considered dead
# if it does not update its C'n'C timestamp.
driver-timeout = 20 seconds

flight-recorder {
// FIXME it should be enabled by default when we have a good solution for naming the files
enabled = off
# Controls where the flight recorder file will be written. There are three options:
# 1. Empty: a file will be generated in the temporary directory of the OS
# 2. A relative or absolute path ending with ".afr": this file will be used
# 3. A relative or absolute path: this directory will be used, the file will get a random file name
destination = ""
}

# compression of common strings in remoting messages, like actor destinations, serializers
compression {

actor-refs {
# Max number of compressed actor-refs
# Note that compression tables are "rolling" (i.e. a new table replaces the old
# compression table once in a while), and this setting is only about the total number
# of compressions within a single such table.
# Must be a positive natural number.
max = 256

# interval between new table compression advertisements.
# this means the time during which we collect heavy-hitter data and then turn it
# into a compression table and advertise it.

advertisement-interval = 1 minute
}
manifests {
# Max number of compressed manifests
# Note that compression tables are "rolling" (i.e. a new table replaces the old
# compression table once in a while), and this setting is only about the total number
# of compressions within a single such table.
# Must be a positive natural number.
max = 256

# interval between new table compression advertisements.
# this means the time during which we collect heavy-hitter data and then turn it
# into a compression table and advertise it.
advertisement-interval = 1 minute
}
}

# List of fully qualified class names of remote instruments which should
# be initialized and used for monitoring of remote messages.
# The class must extend akka.remote.artery.RemoteInstrument and
# have a public constructor with empty parameters or one ExtendedActorSystem
# parameter.
# A new instance of RemoteInstrument will be created for each encoder and decoder.
# It's only called from the stage, so if it doesn't delegate to any shared instance
# it doesn't have to be thread-safe.
# Refer to `akka.remote.artery.RemoteInstrument` for more information.
instruments = ${?akka.remote.artery.advanced.instruments} []
}
}
}
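A minimal application.conf sketch that enables the Artery transport with the defaults above (hostname and port values are illustrative):

```hocon
akka {
  actor.provider = "akka.remote.RemoteActorRefProvider"
  remote.artery {
    enabled = on
    canonical.hostname = "127.0.0.1"   # illustrative local setup
    canonical.port = 25520
  }
}
```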

akka-testkit

######################################
# Akka Testkit Reference Config File #
######################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.

akka {
test {
# factor by which to scale timeouts during tests, e.g. to account for shared
# build system load
timefactor = 1.0

# duration of EventFilter.intercept waits after the block is finished until
# all required messages are received
filter-leeway = 3s

# duration to wait in expectMsg and friends outside of within() block
# by default
single-expect-default = 3s

# The timeout that is added as an implicit by DefaultTimeout trait
default-timeout = 5s

calling-thread-dispatcher {
type = akka.testkit.CallingThreadDispatcherConfigurator
}

}

actor.serialization-bindings {
"akka.testkit.JavaSerializable" = java
}
}
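For instance, on a loaded CI machine the testkit timeouts above can be scaled globally rather than edited one by one (the factor is illustrative):

```hocon
akka.test {
  timefactor = 2.0   # doubles timeouts scaled by the testkit, e.g. expectMsg defaults
}
```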

akka-cluster-metrics
##############################################
# Akka Cluster Metrics Reference Config File #
##############################################

# This is the reference config file that contains all the default settings.
# Make your edits in your application.conf in order to override these settings.

# Sigar provisioning:
#
# User can provision sigar classes and native library in one of the following ways:
#
# 1) Use Kamon sigar-loader (https://github.com/kamon-io/sigar-loader) as a project dependency for the user project.
# Metrics extension will extract and load sigar library on demand with help of Kamon sigar provisioner.
#
# 2) Use Kamon sigar-loader (https://github.com/kamon-io/sigar-loader) as java agent: `java -javaagent:/path/to/sigar-loader.jar`
# Kamon sigar loader agent will extract and load sigar library during JVM start.
#
# 3) Place `sigar.jar` on the `classpath` and sigar native library for the o/s on the `java.library.path`
# User is required to manage both project dependency and library deployment manually.

# Cluster metrics extension.
# Provides periodic statistics collection and publication throughout the cluster.
akka.cluster.metrics {
# Full path of dispatcher configuration key.
# Use "" for default key `akka.actor.default-dispatcher`.
dispatcher = ""
# How long should any actor wait before starting the periodic tasks.
periodic-tasks-initial-delay = 1s
# Sigar native library extract location.
# Use per-application-instance scoped location, such as program working directory.
native-library-extract-folder = ${user.dir}"/native"
# Metrics supervisor actor.
supervisor {
# Actor name. Example name space: /system/cluster-metrics
name = "cluster-metrics"
# Supervision strategy.
strategy {
#
# FQCN of class providing `akka.actor.SupervisorStrategy`.
# Must have a constructor with signature `<init>(com.typesafe.config.Config)`.
# Default metrics strategy provider is a configurable extension of `OneForOneStrategy`.
provider = "akka.cluster.metrics.ClusterMetricsStrategy"
#
# Configuration of the default strategy provider.
# Replace with custom settings when overriding the provider.
configuration = {
# Log restart attempts.
loggingEnabled = true
# Child actor restart-on-failure window.
withinTimeRange = 3s
# Maximum number of restart attempts before child actor is stopped.
maxNrOfRetries = 3
}
}

}
# Metrics collector actor.
collector {
# Enable or disable metrics collector for load-balancing nodes.
# Metrics collection can also be controlled at runtime by sending control messages
# to /system/cluster-metrics actor: `akka.cluster.metrics.{CollectionStartMessage,CollectionStopMessage}`
enabled = on
# FQCN of the metrics collector implementation.
# It must implement `akka.cluster.metrics.MetricsCollector` and
# have public constructor with akka.actor.ActorSystem parameter.
# Will try to load in the following order of priority:
# 1) configured custom collector 2) internal `SigarMetricsCollector` 3) internal `JmxMetricsCollector`
provider = ""
# Try all 3 available collector providers, or else fail on the configured custom collector
fallback = true
# How often metrics are sampled on a node.
# Shorter interval will collect the metrics more often.
# Also controls frequency of the metrics publication to the node system event bus.
sample-interval = 3s
# How often a node publishes metrics information to the other nodes in the cluster.
# Shorter interval will publish the metrics gossip more often.
gossip-interval = 3s
# How quickly the exponential weighting of past data is decayed compared to
# new data. Set lower to increase the bias toward newer values.
# The relevance of each data sample is halved for every passing half-life
# duration, i.e. after 4 times the half-life, a data sample’s relevance is
# reduced to 6% of its original relevance. The initial relevance of a data
# sample is given by 1 - 0.5 ^ (collect-interval / half-life).
# See http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average
moving-average-half-life = 12s
}
}

# Cluster metrics extension serializers and routers.
akka.actor {
# Protobuf serializer for remote cluster metrics messages.
serializers {
akka-cluster-metrics = "akka.cluster.metrics.protobuf.MessageSerializer"
}
# Interface binding for remote cluster metrics messages.
serialization-bindings {
"akka.cluster.metrics.ClusterMetricsMessage" = akka-cluster-metrics
}
# Globally unique metrics extension serializer identifier.
serialization-identifiers {
"akka.cluster.metrics.protobuf.MessageSerializer" = 10
}
# Provide routing of messages based on cluster metrics.
router.type-mapping {
cluster-metrics-adaptive-pool = "akka.cluster.metrics.AdaptiveLoadBalancingPool"
cluster-metrics-adaptive-group = "akka.cluster.metrics.AdaptiveLoadBalancingGroup"
}
}
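The adaptive routers mapped above can then be used in deployment configuration; this sketch follows the pattern of the cluster-metrics samples, with the actor paths and role name being assumptions:

```hocon
akka.actor.deployment {
  /frontend/backendRouter = {
    # Router type provided by the metrics extension.
    router = cluster-metrics-adaptive-group
    # Possible metrics-selector values: heap, load, cpu, mix
    metrics-selector = mix
    routees.paths = ["/user/backend"]
    cluster {
      enabled = on
      use-role = backend
      allow-local-routees = off
    }
  }
}
```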

akka-cluster-tools
############################################
# Akka Cluster Tools Reference Config File #
############################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.

# //#pub-sub-ext-config
# Settings for the DistributedPubSub extension
akka.cluster.pub-sub {
# Actor name of the mediator actor, /system/distributedPubSubMediator
name = distributedPubSubMediator

# Start the mediator on members tagged with this role.
# All members are used if undefined or empty.
role = ""

# The routing logic to use for 'Send'
# Possible values: random, round-robin, broadcast
routing-logic = random

# How often the DistributedPubSubMediator should send out gossip information
gossip-interval = 1s

# Removed entries are pruned after this duration
removed-time-to-live = 120s

# Maximum number of elements to transfer in one message when synchronizing the registries.
# Next chunk will be transferred in next round of gossip.
max-delta-elements = 3000

# The id of the dispatcher to use for DistributedPubSubMediator actors.
# If not specified default dispatcher is used.
# If specified you need to define the settings of the actual dispatcher.
use-dispatcher = ""
}
# //#pub-sub-ext-config
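A hedged example override, assuming a cluster where only nodes with a hypothetical "backend" role should host the mediator:

```hocon
akka.cluster.pub-sub {
  role = "backend"              # hypothetical role name
  routing-logic = round-robin   # deliver Send messages round-robin instead of random
}
```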

# Protobuf serializer for cluster DistributedPubSubMediator messages
akka.actor {
serializers {
akka-pubsub = "akka.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer"
}
serialization-bindings {
"akka.cluster.pubsub.DistributedPubSubMessage" = akka-pubsub
}
serialization-identifiers {
"akka.cluster.pubsub.protobuf.DistributedPubSubMessageSerializer" = 9
}
# adds the protobuf serialization of pub sub messages to groups
additional-serialization-bindings {
"akka.cluster.pubsub.DistributedPubSubMediator$Internal$SendToOneSubscriber" = akka-pubsub
}
}

# //#receptionist-ext-config
# Settings for the ClusterClientReceptionist extension
akka.cluster.client.receptionist {
# Actor name of the ClusterReceptionist actor, /system/receptionist
name = receptionist

# Start the receptionist on members tagged with this role.
# All members are used if undefined or empty.
role = ""

# The receptionist will send this number of contact points to the client
number-of-contacts = 3

# The actor that tunnels response messages to the client will be stopped
# after this time of inactivity.
response-tunnel-receive-timeout = 30s

# The id of the dispatcher to use for ClusterReceptionist actors.
# If not specified default dispatcher is used.
# If specified you need to define the settings of the actual dispatcher.
use-dispatcher = ""

# How often failure detection heartbeat messages should be received for
# each ClusterClient
heartbeat-interval = 2s

# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
# The ClusterReceptionist is using the akka.remote.DeadlineFailureDetector, which
# will trigger if there are no heartbeats within the duration
# heartbeat-interval + acceptable-heartbeat-pause, i.e. 15 seconds with
# the default settings.
acceptable-heartbeat-pause = 13s

# Failure detection checking interval for checking all ClusterClients
failure-detection-interval = 2s
}
# //#receptionist-ext-config

# //#cluster-client-config
# Settings for the ClusterClient
akka.cluster.client {
# Actor paths of the ClusterReceptionist actors on the servers (cluster nodes)
# that the client will try to contact initially. It is mandatory to specify
# at least one initial contact.
# Comma separated full actor paths defined by a string of the form
# "akka.tcp://system@hostname:port/system/receptionist"
initial-contacts = []

# Interval at which the client retries to establish contact with one of
# ClusterReceptionist on the servers (cluster nodes)
establishing-get-contacts-interval = 3s

# Interval at which the client will ask the ClusterReceptionist for
# new contact points to be used for next reconnect.
refresh-contacts-interval = 60s

# How often failure detection heartbeat messages should be sent
heartbeat-interval = 2s

# Number of potentially lost/delayed heartbeats that will be
# accepted before considering it to be an anomaly.
# The ClusterClient is using the akka.remote.DeadlineFailureDetector, which
# will trigger if there are no heartbeats within the duration
# heartbeat-interval + acceptable-heartbeat-pause, i.e. 15 seconds with
# the default settings.
acceptable-heartbeat-pause = 13s

# If connection to the receptionist is not established the client will buffer
# this number of messages and deliver them when the connection is established.
# When the buffer is full old messages will be dropped when new messages are sent
# via the client. Use 0 to disable buffering, i.e. messages will be dropped
# immediately if the location of the receptionist is unknown.
# Maximum allowed buffer size is 10000.
buffer-size = 1000

# If connection to the receptionist is lost and the client has not been
# able to acquire a new connection for this long the client will stop itself.
# This duration makes it possible to watch the cluster client and react on a more permanent
# loss of connection with the cluster, for example by accessing some kind of
# service registry for an updated set of initial contacts to start a new cluster client with.
# If this is not wanted it can be set to "off" to disable the timeout and retry
# forever.
reconnect-timeout = off
}
# //#cluster-client-config
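Following the path format described above, an application.conf for a client of a hypothetical ClusterSystem running on two nodes might contain:

```hocon
akka.cluster.client {
  # Host names and actor system name are illustrative.
  initial-contacts = [
    "akka.tcp://ClusterSystem@host1:2552/system/receptionist",
    "akka.tcp://ClusterSystem@host2:2552/system/receptionist"
  ]
}
```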

# Protobuf serializer for ClusterClient messages
akka.actor {
serializers {
akka-cluster-client = "akka.cluster.client.protobuf.ClusterClientMessageSerializer"
}
serialization-bindings {
"akka.cluster.client.ClusterClientMessage" = akka-cluster-client
}
serialization-identifiers {
"akka.cluster.client.protobuf.ClusterClientMessageSerializer" = 15
}
}

# //#singleton-config
akka.cluster.singleton {
# The actor name of the child singleton actor.
singleton-name = "singleton"

# Singleton among the nodes tagged with specified role.
# If the role is not specified it's a singleton among all nodes in the cluster.
role = ""

# When a node is becoming oldest it sends hand-over request to previous oldest,
# that might be leaving the cluster. This is retried with this interval until
# the previous oldest confirms that the hand over has started or the previous
# oldest member is removed from the cluster (+ akka.cluster.down-removal-margin).
hand-over-retry-interval = 1s

# The number of retries are derived from hand-over-retry-interval and
# akka.cluster.down-removal-margin (or ClusterSingletonManagerSettings.removalMargin),
# but it will never be less than this property.
min-number-of-hand-over-retries = 10
}
# //#singleton-config

# //#singleton-proxy-config
akka.cluster.singleton-proxy {
# The actor name of the singleton actor that is started by the ClusterSingletonManager
singleton-name = ${akka.cluster.singleton.singleton-name}

# The role of the cluster nodes where the singleton can be deployed.
# If the role is not specified then any node will do.
role = ""

# Interval at which the proxy will try to resolve the singleton instance.
singleton-identification-interval = 1s

# If the location of the singleton is unknown the proxy will buffer this
# number of messages and deliver them when the singleton is identified.
# When the buffer is full old messages will be dropped when new messages are
# sent via the proxy.
# Use 0 to disable buffering, i.e. messages will be dropped immediately if
# the location of the singleton is unknown.

# Maximum allowed buffer size is 10000.
buffer-size = 1000
}
# //#singleton-proxy-config

# Serializer for cluster ClusterSingleton messages
akka.actor {
serializers {
akka-singleton = "akka.cluster.singleton.protobuf.ClusterSingletonMessageSerializer"
}
serialization-bindings {
"akka.cluster.singleton.ClusterSingletonMessage" = akka-singleton
}
serialization-identifiers {
"akka.cluster.singleton.protobuf.ClusterSingletonMessageSerializer" = 14
}
}
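A sketch that restricts a singleton and its proxy to nodes with a hypothetical "worker" role:

```hocon
akka.cluster.singleton {
  role = "worker"   # hypothetical role name
}
akka.cluster.singleton-proxy {
  role = "worker"   # must match the nodes where the singleton runs
}
```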

akka-cluster-sharding
###############################################
# Akka Cluster Sharding Reference Config File #
###############################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your application.conf.

# //#sharding-ext-config
# Settings for the ClusterShardingExtension
akka.cluster.sharding {

# The extension creates a top level actor with this name in top level system scope,
# e.g. ’/system/sharding’
guardian-name = sharding

# Specifies that entities run on cluster nodes with a specific role.
# If the role is not specified (or empty) all nodes in the cluster are used.
role = ""

# When this is set to 'on' the active entity actors will automatically be restarted
# upon Shard restart, i.e. if the Shard is started on a different ShardRegion
# due to rebalance or crash.
remember-entities = off

# If the coordinator can't store state changes it will be stopped
# and started again after this duration, with an exponential back-off
# of up to 5 times this duration.
coordinator-failure-backoff = 5 s

# The ShardRegion retries registration and shard location requests to the
# ShardCoordinator with this interval if it does not reply.
retry-interval = 2 s

# Maximum number of messages that are buffered by a ShardRegion actor.
buffer-size = 100000

# Timeout of the shard rebalancing process.
handoff-timeout = 60 s

# Time given to a region to acknowledge it's hosting a shard.
shard-start-timeout = 10 s

# If the shard is remembering entities and can't store state changes
# it will be stopped and then started again after this duration. Any messages
# sent to an affected entity may be lost in this process.
shard-failure-backoff = 10 s

# If the shard is remembering entities and an entity stops itself without
# using passivate, the entity will be restarted after this duration or when
# the next message for it is received, whichever occurs first.
entity-restart-backoff = 10 s

# Rebalance check is performed periodically with this interval.
rebalance-interval = 10 s

# Absolute path to the journal plugin configuration entry that is to be
# used for the internal persistence of ClusterSharding. If not defined
# the default journal plugin is used. Note that this is not related to
# persistence used by the entity actors.
journal-plugin-id = ""

# Absolute path to the snapshot plugin configuration entry that is to be
# used for the internal persistence of ClusterSharding. If not defined
# the default snapshot plugin is used. Note that this is not related to
# persistence used by the entity actors.
snapshot-plugin-id = ""

# Parameter which determines how the coordinator will store its state;
# valid values are either "persistence" or "ddata".
# The "ddata" mode is experimental, since it depends on the experimental
# module akka-distributed-data-experimental.
state-store-mode = "persistence"

# The shard saves persistent snapshots after this number of persistent
# events. Snapshots are used to reduce recovery times.
snapshot-after = 1000

# Settings for the default shard allocation strategy
least-shard-allocation-strategy {
# Threshold of how large the difference between most and least number of
# allocated shards must be to begin the rebalancing.
rebalance-threshold = 10

# The number of ongoing rebalancing processes is limited to this number.
max-simultaneous-rebalance = 3
}

# Timeout of waiting for the initial distributed state (an initial state will be queried again if the timeout happened)
# works only for state-store-mode = "ddata"
waiting-for-state-timeout = 5 s

# Timeout of waiting for update of the distributed state (update will be retried if the timeout happened)
# works only for state-store-mode = "ddata"
updating-state-timeout = 5 s

# The shard uses this strategy to determine how to recover the underlying entity actors. The strategy is only used
# by the persistent shard when rebalancing or restarting. The value can either be "all" or "constant". The "all"
# strategy starts all the underlying entity actors at the same time. The constant strategy will start the underlying
# entity actors at a fixed rate. The default strategy is "all".
entity-recovery-strategy = "all"

# Default settings for the constant rate entity recovery strategy
entity-recovery-constant-rate-strategy {
# Sets the frequency at which a batch of entity actors is started.
frequency = 100 ms

# Sets the number of entity actors to be restarted at a particular interval
number-of-entities = 5
}

# Settings for the coordinator singleton. Same layout as akka.cluster.singleton.
# The "role" of the singleton configuration is not used. The singleton role will
# be the same as "akka.cluster.sharding.role".
coordinator-singleton = ${akka.cluster.singleton}

# The id of the dispatcher to use for ClusterSharding actors.
# If not specified default dispatcher is used.
# If specified you need to define the settings of the actual dispatcher.
# The dispatcher for the entity actors is defined by the user provided
# Props, i.e. this dispatcher is not used for the entity actors.
use-dispatcher = ""
}
# //#sharding-ext-config

# Protobuf serializer for Cluster Sharding messages
[Link] {
serializers {
akka-sharding = "[Link]"
}
serialization-bindings {
"[Link]" = akka-sharding
}
serialization-identifiers {
"[Link]" = 13
}
}

akka-distributed-data
##############################################
# Akka Distributed DataReference Config File #
##############################################

# This is the reference config file that contains all the default settings.
# Make your edits/overrides in your [Link].

#//#distributed-data
# Settings for the DistributedData extension
[Link]-data {
# Actor name of the Replicator actor, /system/ddataReplicator
name = ddataReplicator

# Replicas are running on members tagged with this role.
# All members are used if undefined or empty.
role = ""

# How often the Replicator should send out gossip information
gossip-interval = 2 s

# How often the subscribers will be notified of changes, if any
notify-subscribers-interval = 500 ms

# Maximum number of entries to transfer in one gossip message when synchronizing
# the replicas. Next chunk will be transferred in next round of gossip.
max-delta-elements = 1000

# The id of the dispatcher to use for Replicator actors. If not specified
# default dispatcher is used.
# If specified you need to define the settings of the actual dispatcher.
use-dispatcher = ""

# How often the Replicator checks for pruning of data associated with
# removed cluster nodes.
pruning-interval = 30 s

# How long time it takes (worst case) to spread the data to all other replica nodes.
# This is used when initiating and completing the pruning process of data associated
# with removed cluster nodes. The time measurement is stopped when any replica is
# unreachable, so it should be configured to worst case in a healthy cluster.
max-pruning-dissemination = 60 s

# Serialized Write and Read messages are cached when they are sent to
# several nodes. If no further activity they are removed from the cache
# after this duration.
serializer-cache-time-to-live = 10s

durable {
# List of keys that are durable. Prefix matching is supported by using * at the
# end of a key.
keys = []

# Fully qualified class name of the durable store actor. It must be a subclass
# of [Link] and handle the protocol defined in
# [Link]. The class must have a constructor with
# [Link] parameter.
store-actor-class = [Link]

use-dispatcher = [Link]-store

pinned-store {
executor = thread-pool-executor
type = PinnedDispatcher
}

# Config for the LmdbDurableStore
lmdb {
# Directory of LMDB file. There are two options:
# 1. A relative or absolute path to a directory that ends with ’ddata’
# the full name of the directory will contain name of the ActorSystem
# and its remote port.
# 2. Otherwise the path is used as is, as a relative or absolute path to
# a directory.
dir = "ddata"

# Size in bytes of the memory mapped file.
map-size = 100 MiB

# Accumulating changes before storing improves performance, with the
# risk of losing the last writes if the JVM crashes.
# The interval is by default set to ’off’ to write each update immediately.
# Enabling write behind by specifying a duration, e.g. 200ms, is especially
# efficient when performing many writes to the same key, because it is only
# the last value for each key that will be serialized and stored.
# write-behind-interval = 200 ms
write-behind-interval = off
}
}

}
#//#distributed-data


# Protobuf serializer for cluster DistributedData messages
[Link] {
serializers {
akka-data-replication = "[Link]"
akka-replicated-data = "[Link]"
}
serialization-bindings {
"[Link]$ReplicatorMessage" = akka-data-replication
"[Link]" = akka-replicated-data
}
serialization-identifiers {
"[Link]" = 11
"[Link]" = 12
}
}



CHAPTER

FOUR

ACTORS

4.1 Actors

The Actor Model provides a higher level of abstraction for writing concurrent and distributed systems. It alleviates
the developer from having to deal with explicit locking and thread management, making it easier to write correct
concurrent and parallel systems. Actors were defined in the 1973 paper by Carl Hewitt but have been popularized
by the Erlang language, and used for example at Ericsson with great success to build highly concurrent and reliable
telecom systems.
The API of Akka’s Actors is similar to Scala Actors which has borrowed some of its syntax from Erlang.

4.1.1 Creating Actors

Note: Since Akka enforces parental supervision, every actor is supervised and (potentially) the supervisor of its
children. It is therefore advisable that you familiarize yourself with Actor Systems and Supervision and Monitoring;
it may also help to read Actor References, Paths and Addresses.

Defining an Actor class

Actors in Java are implemented by extending the UntypedActor class and implementing the onReceive
method. This method takes the message as a parameter.
Here is an example:
import [Link];
import [Link];
import [Link];

public class MyUntypedActor extends UntypedActor {

  LoggingAdapter log = [Link](getContext().system(), this);

  public void onReceive(Object message) throws Exception {
    if (message instanceof String) {
      [Link]("Received String message: {}", message);
      getSender().tell(message, getSelf());
    } else
      unhandled(message);
  }
}


Props

Props is a configuration class to specify options for the creation of actors, think of it as an immutable and thus
freely shareable recipe for creating an actor including associated deployment information (e.g. which dispatcher
to use, see more below). Here are some examples of how to create a Props instance.
import [Link];
import [Link];

static class MyActorC implements Creator<MyActor> {
  @Override public MyActor create() {
    return new MyActor("...");
  }
}

Props props1 = [Link]([Link]);
Props props2 = [Link]([Link], "...");
Props props3 = [Link](new MyActorC());

The second line shows how to pass constructor arguments to the Actor being created. The pres-
ence of a matching constructor is verified during construction of the Props object, resulting in an
IllegalArgumentException if no or multiple matching constructors are found.
The third line demonstrates the use of a Creator. The creator class must be static, which is verified during
Props construction. The type parameter’s upper bound is used to determine the produced actor class, falling
back to Actor if fully erased. An example of a parametric factory could be:
static class ParametricCreator<T extends MyActor> implements Creator<T> {
@Override public T create() {
// ... fabricate actor here
}
}

Note: In order for mailbox requirements—like using a deque-based mailbox for actors using the stash—to be
picked up, the actor type needs to be known before creating it, which is what the Creator type argument allows.
Therefore make sure to use the specific type for your actors wherever possible.

Recommended Practices

It is a good idea to provide static factory methods on the UntypedActor which help keep the creation of
suitable Props as close to the actor definition as possible. This also allows usage of the Creator-based methods
which statically verify that the used constructor actually exists instead of relying only on a runtime check.
public class DemoActor extends UntypedActor {

  /**
   * Create Props for an actor of this type.
   * @param magicNumber The magic number to be passed to this actor's constructor.
   * @return a Props for creating this actor, which can then be further configured
   *         (e.g. calling `.withDispatcher()` on it)
   */
  public static Props props(final int magicNumber) {
    return [Link](new Creator<DemoActor>() {
      private static final long serialVersionUID = 1L;

      @Override
      public DemoActor create() throws Exception {
        return new DemoActor(magicNumber);
      }
    });
  }

  final int magicNumber;

  public DemoActor(int magicNumber) {
    [Link] = magicNumber;
  }

  @Override
  public void onReceive(Object msg) {
    // some behavior here
  }
}

[Link]([Link](42), "demo");

Another good practice is to declare the messages an Actor can receive as close to the actor definition as possible
(e.g. as static classes inside the Actor or in another suitable class), which makes it easier to know what it can
receive.
public class DemoMessagesActor extends UntypedActor {

  public static class Greeting {
    private final String from;

    public Greeting(String from) {
      [Link] = from;
    }

    public String getGreeter() {
      return from;
    }
  }

  public void onReceive(Object message) throws Exception {
    if (message instanceof Greeting) {
      getSender().tell("Hello " + ((Greeting) message).getGreeter(), getSelf());
    } else
      unhandled(message);
  }
}

Creating Actors with Props

Actors are created by passing a Props instance into the actorOf factory method which is available on
ActorSystem and ActorContext.
import [Link];
import [Link];

// ActorSystem is a heavy object: create only one per application
final ActorSystem system = [Link]("MySystem");
final ActorRef myActor = [Link]([Link]([Link]),
  "myactor");

Using the ActorSystem will create top-level actors, supervised by the actor system’s provided guardian actor,
while using an actor’s context will create a child actor.
class A extends UntypedActor {
  final ActorRef child =
    getContext().actorOf([Link]([Link]), "myChild");
  // plus some behavior ...
}

It is recommended to create a hierarchy of children, grand-children and so on such that it fits the logical failure-
handling structure of the application, see Actor Systems.
The call to actorOf returns an instance of ActorRef. This is a handle to the actor instance and the only way to
interact with it. The ActorRef is immutable and has a one to one relationship with the Actor it represents. The
ActorRef is also serializable and network-aware. This means that you can serialize it, send it over the wire and
use it on a remote host and it will still be representing the same Actor on the original node, across the network.
The name parameter is optional, but you should preferably name your actors, since that is used in log messages
and for identifying actors. The name must not be empty or start with $, but it may contain URL encoded char-
acters (eg. %20 for a blank space). If the given name is already in use by another child to the same parent an
InvalidActorNameException is thrown.
Actors are automatically started asynchronously when created.
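As a quick illustration of the naming rules above, here is a hypothetical helper (not part of the Akka API; the class and method names are made up for this sketch) that checks a proposed actor name the way the rules describe — non-empty and not starting with $:

```java
// Hypothetical helper, not part of Akka: mirrors the documented rules that
// an actor name must not be empty and must not start with '$'.
final class ActorNames {
  private ActorNames() {}

  static boolean isValidName(String name) {
    return name != null && !name.isEmpty() && name.charAt(0) != '$';
  }
}
```

Note that Akka itself signals a violation with an exception at actorOf time rather than a boolean check; this sketch only makes the rules explicit.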

Dependency Injection

If your UntypedActor has a constructor that takes parameters then those need to be part of the Props as well, as
described above. But there are cases when a factory method must be used, for example when the actual constructor
arguments are determined by a dependency injection framework.
import [Link];
import [Link];

class DependencyInjector implements IndirectActorProducer {
  final Object applicationContext;
  final String beanName;

  public DependencyInjector(Object applicationContext, String beanName) {
    [Link] = applicationContext;
    [Link] = beanName;
  }

  @Override
  public Class<? extends Actor> actorClass() {
    return [Link];
  }

  @Override
  public MyActor produce() {
    MyActor result;
    // obtain fresh Actor instance from DI framework ...
    return result;
  }
}

final ActorRef myActor = getContext().actorOf(
  [Link]([Link], applicationContext, "MyActor"),
  "myactor3");

Warning: You might be tempted at times to offer an IndirectActorProducer which always returns
the same instance, e.g. by using a static field. This is not supported, as it goes against the meaning of an actor
restart, which is described here: What Restarting Means.
When using a dependency injection framework, actor beans MUST NOT have singleton scope.

Techniques for dependency injection and integration with dependency injection frameworks are described in more
depth in the Using Akka with Dependency Injection guideline and the Akka Java Spring tutorial in Lightbend
Activator.


The Inbox

When writing code outside of actors which shall communicate with actors, the ask pattern can be a solution (see
below), but there are two things it cannot do: receiving multiple replies (e.g. by subscribing an ActorRef to a
notification service) and watching other actors’ lifecycle. For these purposes there is the Inbox class:
final Inbox inbox = [Link](system);
[Link](target, "hello");
try {
assert [Link]([Link](1, [Link])).equals("world");
} catch ([Link] e) {
// timeout
}

The send method wraps a normal tell and supplies the internal actor’s reference as the sender. This allows the
reply to be received on the last line. Watching an actor is quite simple as well:
final Inbox inbox = [Link](system);
[Link](target);
[Link]([Link](), [Link]());
try {
assert [Link]([Link](1, [Link])) instanceof Terminated;
} catch ([Link] e) {
// timeout
}

4.1.2 UntypedActor API

The UntypedActor class defines only one abstract method, the above mentioned onReceive(Object
message), which implements the behavior of the actor.
If the current actor behavior does not match a received message, it’s recommended that you call the unhandled
method, which by default publishes a new [Link](message, sender,
recipient) on the actor system’s event stream (set configuration item [Link]
to on to have them converted into actual Debug messages).
In addition, it offers:
• getSelf reference to the ActorRef of the actor
• getSender reference to the sender Actor of the last received message, typically used as described in Reply to
messages
• supervisorStrategy user-overridable definition of the strategy to use for supervising child actors
This strategy is typically declared inside the actor in order to have access to the actor’s internal state within
the decider function: since failure is communicated as a message sent to the supervisor and processed like
other messages (albeit outside of the normal behavior), all values and variables within the actor are available,
as is the getSender() reference (which will be the immediate child reporting the failure; if the original
failure occurred within a distant descendant it is still reported one level up at a time).
• getContext exposes contextual information for the actor and the current message, such as:
– factory methods to create child actors (actorOf)
– system that the actor belongs to
– parent supervisor
– supervised children
– lifecycle monitoring
– hotswap behavior stack as described in HotSwap
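The supervisorStrategy mentioned above is driven by a decider: conceptually just a function from the failure to a directive (resume, restart, stop, or escalate). The following plain-Java sketch models that idea; the Directive enum and the decide method here are illustrative stand-ins, not the real Akka types (Akka uses a function from Throwable to SupervisorStrategy.Directive, passed to a strategy such as OneForOneStrategy):

```java
// Plain-Java sketch of a supervision decider; Directive is a stand-in
// for Akka's SupervisorStrategy.Directive, used here for illustration only.
final class DeciderSketch {
  enum Directive { RESUME, RESTART, STOP, ESCALATE }

  static Directive decide(Throwable t) {
    if (t instanceof ArithmeticException) return Directive.RESUME;     // keep state, drop failure
    if (t instanceof IllegalArgumentException) return Directive.STOP;  // unrecoverable input
    if (t instanceof RuntimeException) return Directive.RESTART;       // fresh actor instance
    return Directive.ESCALATE;                                         // let the parent decide
  }
}
```

The ordering of the instanceof checks matters: IllegalArgumentException is itself a RuntimeException, so the more specific case must come first.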
The remaining visible methods are user-overridable life-cycle hooks which are described in the following:


public void preStart() {
}

public void preRestart(Throwable reason, [Link]<Object> message) {
  for (ActorRef each : getContext().getChildren()) {
    getContext().unwatch(each);
    getContext().stop(each);
  }
  postStop();
}

public void postRestart(Throwable reason) {
  preStart();
}

public void postStop() {
}

The implementations shown above are the defaults provided by the UntypedActor class.

Actor Lifecycle

A path in an actor system represents a “place” which might be occupied by a living actor. Initially (apart from
system initialized actors) a path is empty. When actorOf() is called it assigns an incarnation of the actor
described by the passed Props to the given path. An actor incarnation is identified by the path and a UID. A
restart only swaps the Actor instance defined by the Props but the incarnation and hence the UID remains the
same.
The lifecycle of an incarnation ends when the actor is stopped. At that point the appropriate lifecycle events are
called and watching actors are notified of the termination. After the incarnation is stopped, the path can be reused


again by creating an actor with actorOf(). In this case the name of the new incarnation will be the same
as the previous one but the UIDs will differ. An actor can be stopped by the actor itself, another actor or the
ActorSystem (see Stopping actors).

Note: It is important to note that Actors do not stop automatically when no longer referenced, every Actor that
is created must also explicitly be destroyed. The only simplification is that stopping a parent Actor will also
recursively stop all the child Actors that this parent has created.

An ActorRef always represents an incarnation (path and UID) not just a given path. Therefore if an actor is
stopped and a new one with the same name is created an ActorRef of the old incarnation will not point to the
new one.
ActorSelection on the other hand points to the path (or multiple paths if wildcards are used) and is completely
oblivious to which incarnation is currently occupying it. ActorSelection cannot be watched for this reason.
It is possible to resolve the current incarnation’s ActorRef living under the path by sending an Identify
message to the ActorSelection which will be replied to with an ActorIdentity containing the correct
reference (see Identifying Actors via Actor Selection). This can also be done with the resolveOne method of
the ActorSelection, which returns a Future of the matching ActorRef.

Lifecycle Monitoring aka DeathWatch

In order to be notified when another actor terminates (i.e. stops permanently, not temporary failure and restart), an
actor may register itself for reception of the Terminated message dispatched by the other actor upon termination
(see Stopping Actors). This service is provided by the DeathWatch component of the actor system.
Registering a monitor is easy (see fourth line, the rest is for demonstrating the whole functionality):
import [Link];

public class WatchActor extends UntypedActor {
  final ActorRef child = [Link]().actorOf([Link](), "child");
  {
    [Link]().watch(child); // <-- the only call needed for registration
  }
  ActorRef lastSender = getContext().system().deadLetters();

  @Override
  public void onReceive(Object message) {
    if ([Link]("kill")) {
      getContext().stop(child);
      lastSender = getSender();
    } else if (message instanceof Terminated) {
      final Terminated t = (Terminated) message;
      if ([Link]() == child) {
        [Link]("finished", getSelf());
      }
    } else {
      unhandled(message);
    }
  }
}

It should be noted that the Terminated message is generated independent of the order in which registration and
termination occur. In particular, the watching actor will receive a Terminated message even if the watched
actor has already been terminated at the time of registration.
Registering multiple times does not necessarily lead to multiple messages being generated, but there is no guaran-
tee that only exactly one such message is received: if termination of the watched actor has generated and queued
the message, and another registration is done before this message has been processed, then a second message will
be queued, because registering for monitoring of an already terminated actor leads to the immediate generation of
the Terminated message.


It is also possible to deregister from watching another actor's liveness using
getContext().unwatch(target). This works even if the Terminated message has already
been enqueued in the mailbox; after calling unwatch no Terminated message for that actor will be processed
anymore.

Start Hook

Right after starting the actor, its preStart method is invoked.


@Override
public void preStart() {
child = getContext().actorOf([Link]());
}

This method is called when the actor is first created. During restarts it is called by the default implementation of
postRestart, which means that by overriding that method you can choose whether the initialization code in
this method is called only exactly once for this actor or for every restart. Initialization code which is part of the
actor’s constructor will always be called when an instance of the actor class is created, which happens at every
restart.

Restart Hooks

All actors are supervised, i.e. linked to another actor with a fault handling strategy. Actors may be restarted in
case an exception is thrown while processing a message (see Supervision and Monitoring). This restart involves
the hooks mentioned above:
1. The old actor is informed by calling preRestart with the exception which caused the restart and the
message which triggered that exception; the latter may be None if the restart was not caused by processing
a message, e.g. when a supervisor does not trap the exception and is restarted in turn by its supervisor, or if
an actor is restarted due to a sibling’s failure. If the message is available, then that message’s sender is also
accessible in the usual way (i.e. by calling getSender()).
This method is the best place for cleaning up, preparing hand-over to the fresh actor instance, etc. By default
it stops all children and calls postStop.
2. The initial factory from the actorOf call is used to produce the fresh instance.
3. The new actor’s postRestart method is invoked with the exception which caused the restart. By default
the preStart is called, just as in the normal start-up case.
An actor restart replaces only the actual actor object; the contents of the mailbox are unaffected by the restart,
so processing of messages will resume after the postRestart hook returns. The message that triggered the
exception will not be received again. Any message sent to an actor while it is being restarted will be queued to its
mailbox as usual.

Warning: Be aware that the ordering of failure notifications relative to user messages is not deterministic. In
particular, a parent might restart its child before it has processed the last messages sent by the child before the
failure. See Discussion: Message Ordering for details.

Stop Hook

After stopping an actor, its postStop hook is called, which may be used e.g. for deregistering this actor from
other services. This hook is guaranteed to run after message queuing has been disabled for this actor, i.e. messages
sent to a stopped actor will be redirected to the deadLetters of the ActorSystem.


4.1.3 Identifying Actors via Actor Selection

As described in Actor References, Paths and Addresses, each actor has a unique logical path, which is obtained
by following the chain of actors from child to parent until reaching the root of the actor system, and it has a
physical path, which may differ if the supervision chain includes any remote supervisors. These paths are used
by the system to look up actors, e.g. when a remote message is received and the recipient is searched, but they
are also useful more directly: actors may look up other actors by specifying absolute or relative paths—logical or
physical—and receive back an ActorSelection with the result:
// will look up this absolute path
getContext().actorSelection("/user/serviceA/actor");
// will look up sibling beneath same supervisor
getContext().actorSelection("../joe");

Note: It is always preferable to communicate with other Actors using their ActorRef instead of relying upon
ActorSelection. Exceptions are
• sending messages using the At-Least-Once Delivery facility
• initiating first contact with a remote system
In all other cases ActorRefs can be provided during Actor creation or initialization, passing them from parent to
child or introducing Actors by sending their ActorRefs to other Actors within messages.

The supplied path is parsed as a [Link], which basically means that it is split on / into path elements.
If the path starts with /, it is absolute and the look-up starts at the root guardian (which is the parent of "/user");
otherwise it starts at the current actor. If a path element equals .., the look-up will take a step “up” towards the
supervisor of the currently traversed actor, otherwise it will step “down” to the named child. It should be noted
that the .. in actor paths here always means the logical structure, i.e. the supervisor.
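The element-by-element resolution just described can be pictured in plain Java. This is an illustrative model, not Akka's actual implementation — the class and method names are invented for the sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of relative-path resolution: ".." steps up to the
// supervisor, any other element steps down to the named child.
final class PathResolver {
  static String resolve(String base, String relative) {
    Deque<String> stack = new ArrayDeque<String>();
    for (String e : base.split("/"))
      if (!e.isEmpty()) stack.addLast(e);
    for (String e : relative.split("/")) {
      if (e.equals("..")) stack.pollLast();     // step up
      else if (!e.isEmpty()) stack.addLast(e);  // step down
    }
    StringBuilder sb = new StringBuilder();
    for (String e : stack) sb.append('/').append(e);
    return sb.toString();
  }
}
```

For example, resolving "../joe" against "/user/serviceA/actor" yields "/user/serviceA/joe", matching the sibling look-up shown above.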
The path elements of an actor selection may contain wildcard patterns allowing for broadcasting of messages to
that section:
// will look up all children of serviceB with names starting with worker
getContext().actorSelection("/user/serviceB/worker*");
// will look up all siblings beneath same supervisor
getContext().actorSelection("../*");

Messages can be sent via the ActorSelection and the path of the ActorSelection is looked up when
delivering each message. If the selection does not match any actors the message will be dropped.
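One way to picture the wildcard matching of a single path element in selections like "worker*" is the following sketch (again illustrative only, not Akka's implementation), where * matches any sequence of characters within one element:

```java
import java.util.regex.Pattern;

// Illustrative sketch: match one path element against a selection pattern
// in which '*' matches any (possibly empty) sequence of characters.
final class WildcardMatch {
  static boolean matches(String pattern, String element) {
    StringBuilder rx = new StringBuilder();
    for (char c : pattern.toCharArray()) {
      if (c == '*') rx.append(".*");                    // glob star -> regex
      else rx.append(Pattern.quote(String.valueOf(c))); // everything else literal
    }
    return element.matches(rx.toString());
  }
}
```

With this, "worker*" matches "worker17" but not "boss", which is the behavior the broadcast example above relies on.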
To acquire an ActorRef for an ActorSelection you need to send a message to the selection and use the
getSender reference of the reply from the actor. There is a built-in Identify message that all Actors will
understand and automatically reply to with a ActorIdentity message containing the ActorRef. This mes-
sage is handled specially by the actors which are traversed in the sense that if a concrete name lookup fails (i.e.
a non-wildcard path element does not correspond to a live actor) then a negative result is generated. Please note
that this does not mean that delivery of that reply is guaranteed, it still is a normal message.
import [Link];
import [Link];
import [Link];

public class Follower extends UntypedActor {
  final String identifyId = "1";
  {
    ActorSelection selection =
      getContext().actorSelection("/user/another");
    [Link](new Identify(identifyId), getSelf());
  }
  ActorRef another;

  final ActorRef probe;

  public Follower(ActorRef probe) {
    [Link] = probe;
  }

  @Override
  public void onReceive(Object message) {
    if (message instanceof ActorIdentity) {
      ActorIdentity identity = (ActorIdentity) message;
      if ([Link]().equals(identifyId)) {
        ActorRef ref = [Link]();
        if (ref == null)
          getContext().stop(getSelf());
        else {
          another = ref;
          getContext().watch(another);
          [Link](ref, getSelf());
        }
      }
    } else if (message instanceof Terminated) {
      final Terminated t = (Terminated) message;
      if ([Link]().equals(another)) {
        getContext().stop(getSelf());
      }
    } else {
      unhandled(message);
    }
  }
}

You can also acquire an ActorRef for an ActorSelection with the resolveOne method of the
ActorSelection. It returns a Future of the matching ActorRef if such an actor exists. It is completed
with failure [Link] if no such actor exists or the identification didn’t complete within the
supplied timeout.
Remote actor addresses may also be looked up, if remoting is enabled:
getContext().actorSelection("[Link]://app@otherhost:1234/user/serviceB");

An example demonstrating remote actor look-up is given in Remoting Sample.

4.1.4 Messages and immutability

IMPORTANT: Messages can be any kind of object but have to be immutable. Akka can’t enforce immutability
(yet) so this has to be by convention.
Here is an example of an immutable message:
public class ImmutableMessage {
  private final int sequenceNumber;
  private final List<String> values;

  public ImmutableMessage(int sequenceNumber, List<String> values) {
    [Link] = sequenceNumber;
    [Link] = [Link](new ArrayList<String>(values));
  }

  public int getSequenceNumber() {
    return sequenceNumber;
  }

  public List<String> getValues() {
    return values;
  }
}
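To see why the defensive copy plus unmodifiable wrapper in the constructor matters, consider this small standalone demo; the snapshot method (a name invented for this sketch) repeats the same idiom:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class ImmutableMessageDemo {
  // Same idiom as the ImmutableMessage constructor above
  static List<String> snapshot(List<String> input) {
    return Collections.unmodifiableList(new ArrayList<String>(input));
  }

  public static void main(String[] args) {
    List<String> original = new ArrayList<String>();
    original.add("a");
    List<String> values = snapshot(original);
    original.add("b");          // later mutation by the caller...
    System.out.println(values); // ...is not visible: prints [a]
    try {
      values.add("c");          // and the view itself rejects writes
    } catch (UnsupportedOperationException expected) {
      System.out.println("write rejected");
    }
  }
}
```

Without the inner new ArrayList<String>(values) copy, the sender could still mutate the list after the message was sent, breaking the immutability convention.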

4.1.5 Send messages

Messages are sent to an Actor through one of the following methods.


• tell means “fire-and-forget”, e.g. send a message asynchronously and return immediately.
• ask sends a message asynchronously and returns a Future representing a possible reply.
Message ordering is guaranteed on a per-sender basis.

Note: There are performance implications of using ask since something needs to keep track of when it times
out, there needs to be something that bridges a Promise into an ActorRef and it also needs to be reachable
through remoting. So always prefer tell for performance, and only ask if you must.

In all these methods you have the option of passing along your own ActorRef. Make it a practice of doing so
because it will allow the receiving actors to respond to your message, since the sender reference is sent along
with the message.

Tell: Fire-forget

This is the preferred way of sending messages. There is no blocking while waiting for a message, which gives the
best concurrency and scalability characteristics.
// don’t forget to think about who is the sender (2nd argument)
[Link](message, getSelf());

The sender reference is passed along with the message and available within the receiving actor via its getSender
method while processing this message. Inside of an actor it is usually getSelf who shall be the sender, but there
can be cases where replies shall be routed to some other actor—e.g. the parent—in which the second argument to
tell would be a different one. Outside of an actor and if no reply is needed the second argument can be null;
if a reply is needed outside of an actor you can use the ask-pattern described next.

Ask: Send-And-Receive-Future

The ask pattern involves actors as well as futures, hence it is offered as a use pattern rather than a method on
ActorRef:
import static [Link];
import static [Link];
import [Link];
import [Link];
import [Link];
import [Link];
import [Link];

final Timeout t = new Timeout([Link](5, [Link]));

final ArrayList<Future<Object>> futures = new ArrayList<Future<Object>>();
[Link](ask(actorA, "request", 1000)); // using 1000ms timeout
[Link](ask(actorB, "another request", t)); // using timeout from above

final Future<Iterable<Object>> aggregate = [Link](futures,
  [Link]());

final Future<Result> transformed = [Link](
  new Mapper<Iterable<Object>, Result>() {
    public Result apply(Iterable<Object> coll) {
      final Iterator<Object> it = [Link]();
      final String x = (String) [Link]();
      final String s = (String) [Link]();
      return new Result(x, s);
    }
  }, [Link]());

pipe(transformed, [Link]()).to(actorC);

This example demonstrates ask together with the pipe pattern on futures, because this is likely to be a common
combination. Please note that all of the above is completely non-blocking and asynchronous: ask produces a
Future, two of which are composed into a new future using the [Link] and map methods and
then pipe installs an onComplete-handler on the future to effect the submission of the aggregated Result to
another actor.
Using ask will send a message to the receiving Actor as with tell, and the receiving actor must reply with
getSender().tell(reply, getSelf()) in order to complete the returned Future with a value. The
ask operation involves creating an internal actor for handling this reply, which needs to have a timeout after
which it is destroyed in order not to leak resources; see more below.

Note: A Java 8 variant of the ask pattern that returns a CompletionStage instead of a Scala Future is
available in the [Link] object.

Warning: To complete the future with an exception you need to send a Failure message to the sender. This is
not done automatically when an actor throws an exception while processing a message.

try {
String result = operation();
getSender().tell(result, getSelf());
} catch (Exception e) {
getSender().tell(new [Link](e), getSelf());
throw e;
}

If the actor does not complete the future, it will expire after the timeout period, specified as parameter to the ask
method; this will complete the Future with an AskTimeoutException.
See Futures for more information on how to await or query a future.
The onComplete, onSuccess, or onFailure methods of the Future can be used to register a callback to
get a notification when the Future completes. This gives you a way to avoid blocking.

Warning: When using future callbacks, inside actors you need to carefully avoid closing over the containing
actor’s reference, i.e. do not call methods or access mutable state on the enclosing actor from within the call-
back. This would break the actor encapsulation and may introduce synchronization bugs and race conditions
because the callback will be scheduled concurrently to the enclosing actor. Unfortunately there is not yet a
way to detect these illegal accesses at compile time. See also: Actors and shared mutable state

Forward message

You can forward a message from one actor to another. This means that the original sender address/reference is
maintained even though the message is going through a ‘mediator’. This can be useful when writing actors that
work as routers, load-balancers, replicators etc. You need to pass along your context variable as well.

4.1. Actors 113


Akka Java Documentation, Release 2.4.20

target.forward(result, getContext());

4.1.6 Receive messages

When an actor receives a message it is passed into the onReceive method. This is an abstract method on the
UntypedActor base class that needs to be defined.
Here is an example:
import akka.actor.UntypedActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class MyUntypedActor extends UntypedActor {

  LoggingAdapter log = Logging.getLogger(getContext().system(), this);

  public void onReceive(Object message) throws Exception {
    if (message instanceof String) {
      log.info("Received String message: {}", message);
      getSender().tell(message, getSelf());
    } else
      unhandled(message);
  }
}

An alternative to using if-instanceof checks is to use Apache Commons MethodUtils to invoke a named method
whose parameter type matches the message type.
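As a rough illustration of that idea, here is a sketch in plain Java reflection (not the actual MethodUtils API; the class and method names are made up): a dispatcher picks the overload of a handler method whose parameter type matches the message's runtime type.

```java
import java.lang.reflect.Method;

// Hypothetical sketch of reflective dispatch: pick the overload of
// "handle" whose single parameter type matches the message's runtime
// type, similar in spirit to Apache Commons MethodUtils.
class ReflectiveDispatch {
  public String handle(String s)  { return "string:" + s; }
  public String handle(Integer i) { return "int:" + i; }

  static Object dispatch(Object target, Object message) {
    try {
      for (Method m : target.getClass().getMethods()) {
        if (m.getName().equals("handle")
            && m.getParameterTypes().length == 1
            && m.getParameterTypes()[0].isAssignableFrom(message.getClass())) {
          return m.invoke(target, message);
        }
      }
      return null; // unhandled message
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    ReflectiveDispatch actorLike = new ReflectiveDispatch();
    System.out.println(dispatch(actorLike, "hi")); // string:hi
    System.out.println(dispatch(actorLike, 42));   // int:42
  }
}
```

Such dispatch avoids long if-instanceof chains at the cost of reflective overhead on every message.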

4.1.7 Reply to messages

If you want to have a handle for replying to a message, you can use getSender(), which gives you an ActorRef.
You can reply by sending to that ActorRef with getSender().tell(replyMsg, getSelf()). You can
also store the ActorRef for replying later, or passing on to other actors. If there is no sender (a message was sent
without an actor or future context) then the sender defaults to a ‘dead-letter’ actor ref.
@Override
public void onReceive(Object msg) {
Object result =
// calculate result ...

// do not forget the second argument!


getSender().tell(result, getSelf());
}

4.1.8 Receive timeout

The UntypedActorContext setReceiveTimeout defines the inactivity timeout after which the sending of
a ReceiveTimeout message is triggered. When specified, the receive function should be able to handle an
akka.actor.ReceiveTimeout message. 1 millisecond is the minimum supported timeout.
Please note that the receive timeout might fire and enqueue the ReceiveTimeout message right after another mes-
sage was enqueued; hence it is not guaranteed that upon reception of the receive timeout there must have been
an idle period beforehand as configured via this method.
Once set, the receive timeout stays in effect (i.e. continues firing repeatedly after inactivity periods). Pass in
Duration.Undefined to switch off this feature.


import akka.actor.ActorRef;
import akka.actor.ReceiveTimeout;
import akka.actor.UntypedActor;
import scala.concurrent.duration.Duration;

public class MyReceiveTimeoutUntypedActor extends UntypedActor {

  public MyReceiveTimeoutUntypedActor() {
    // To set an initial delay
    getContext().setReceiveTimeout(Duration.create("30 seconds"));
  }

  public void onReceive(Object message) {
    if (message.equals("Hello")) {
      // To set in a response to a message
      getContext().setReceiveTimeout(Duration.create("1 second"));
    } else if (message instanceof ReceiveTimeout) {
      // To turn it off
      getContext().setReceiveTimeout(Duration.Undefined());
    } else {
      unhandled(message);
    }
  }
}

Messages marked with NotInfluenceReceiveTimeout will not reset the timer. This can be useful when
ReceiveTimeout should be fired by external inactivity but not influenced by internal activity, e.g. scheduled
tick messages.

4.1.9 Stopping actors

Actors are stopped by invoking the stop method of an ActorRefFactory, i.e. ActorContext or
ActorSystem. Typically the context is used for stopping the actor itself or child actors and the system for
stopping top level actors. The actual termination of the actor is performed asynchronously, i.e. stop may return
before the actor is stopped.
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class MyStoppingActor extends UntypedActor {

  ActorRef child = null;

  // ... creation of child ...

  public void onReceive(Object message) throws Exception {
    if (message.equals("interrupt-child")) {
      context().stop(child);
    } else if (message.equals("done")) {
      context().stop(getSelf());
    } else {
      unhandled(message);
    }
  }
}

Processing of the current message, if any, will continue before the actor is stopped, but additional messages in the
mailbox will not be processed. By default these messages are sent to the deadLetters of the ActorSystem,
but that depends on the mailbox implementation.


Termination of an actor proceeds in two steps: first the actor suspends its mailbox processing and sends a stop
command to all its children, then it keeps processing the internal termination notifications from its children until
the last one is gone, finally terminating itself (invoking postStop, dumping mailbox, publishing Terminated
on the DeathWatch, telling its supervisor). This procedure ensures that actor system sub-trees terminate in an
orderly fashion, propagating the stop command to the leaves and collecting their confirmation back to the stopped
supervisor. If one of the actors does not respond (i.e. processing a message for extended periods of time and
therefore not receiving the stop command), this whole process will be stuck.
Upon ActorSystem.terminate(), the system guardian actors will be stopped, and the aforementioned process
will ensure proper termination of the whole system.
The postStop hook is invoked after an actor is fully stopped. This enables cleaning up of resources:
@Override
public void postStop() {
// clean up resources here ...
}

Note: Since stopping an actor is asynchronous, you cannot immediately reuse the name of the child you just
stopped; this will result in an InvalidActorNameException. Instead, watch the terminating actor and
create its replacement in response to the Terminated message which will eventually arrive.

PoisonPill

You can also send an actor the akka.actor.PoisonPill message, which will stop the actor when the mes-
sage is processed. PoisonPill is enqueued as an ordinary message and will be handled after messages that were
already queued in the mailbox.
Use it like this:
victim.tell(PoisonPill.getInstance(), sender);

Graceful Stop

gracefulStop is useful if you need to wait for termination or compose ordered termination of several actors:
import static akka.pattern.Patterns.gracefulStop;
import akka.pattern.AskTimeoutException;
import scala.concurrent.Await;
import scala.concurrent.Future;
import scala.concurrent.duration.Duration;

try {
  Future<Boolean> stopped =
    gracefulStop(actorRef, Duration.create(5, TimeUnit.SECONDS), Manager.SHUTDOWN);
  Await.result(stopped, Duration.create(6, TimeUnit.SECONDS));
  // the actor has been stopped
} catch (AskTimeoutException e) {
  // the actor wasn’t stopped within 5 seconds
}

public class Manager extends UntypedActor {

  public static final String SHUTDOWN = "shutdown";

  ActorRef worker = getContext().watch(getContext().actorOf(
      Props.create(Cruncher.class), "worker"));

  public void onReceive(Object message) {
    if (message.equals("job")) {
      worker.tell("crunch", getSelf());
    } else if (message.equals(SHUTDOWN)) {
      worker.tell(PoisonPill.getInstance(), getSelf());
      getContext().become(shuttingDown);
    }
  }

  Procedure<Object> shuttingDown = new Procedure<Object>() {
    @Override
    public void apply(Object message) {
      if (message.equals("job")) {
        getSender().tell("service unavailable, shutting down", getSelf());
      } else if (message instanceof Terminated) {
        getContext().stop(getSelf());
      }
    }
  };
}

When gracefulStop() returns successfully, the actor’s postStop() hook will have been executed: there
exists a happens-before edge between the end of postStop() and the return of gracefulStop().
In the above example a custom Manager.SHUTDOWN message is sent to the target actor to initiate the process
of stopping the actor. You can use PoisonPill for this, but then you have limited possibilities to perform
interactions with other actors before stopping the target actor. Simple cleanup tasks can be handled in postStop.

Warning: Keep in mind that an actor stopping and its name being deregistered are separate events which
happen asynchronously from each other. Therefore it may be that you will find the name still in use after
gracefulStop() returned. In order to guarantee proper deregistration, only reuse names from within a
supervisor you control and only in response to a Terminated message, i.e. not for top-level actors.

4.1.10 HotSwap

Upgrade

Akka supports hotswapping the Actor’s message loop (e.g. its implementation) at runtime. Use the
getContext().become method from within the Actor. The hotswapped code is kept in a Stack which can be
pushed (replacing or adding at the top) and popped.

Warning: Please note that the actor will revert to its original behavior when restarted by its Supervisor.

To hotswap the Actor using getContext().become:


import akka.japi.Procedure;

public class HotSwapActor extends UntypedActor {

  Procedure<Object> angry = new Procedure<Object>() {
    @Override
    public void apply(Object message) {
      if (message.equals("bar")) {
        getSender().tell("I am already angry?", getSelf());
      } else if (message.equals("foo")) {
        getContext().become(happy);
      }
    }
  };

  Procedure<Object> happy = new Procedure<Object>() {
    @Override
    public void apply(Object message) {
      if (message.equals("bar")) {
        getSender().tell("I am already happy :-)", getSelf());
      } else if (message.equals("foo")) {
        getContext().become(angry);
      }
    }
  };

  public void onReceive(Object message) {
    if (message.equals("bar")) {
      getContext().become(angry);
    } else if (message.equals("foo")) {
      getContext().become(happy);
    } else {
      unhandled(message);
    }
  }
}

This variant of the become method is useful for many different things, such as to implement a Finite State
Machine (FSM). It will replace the current behavior (i.e. the top of the behavior stack), which means that you do
not use unbecome; instead, the next behavior is always explicitly installed.
The other way of using become does not replace but add to the top of the behavior stack. In this case care must
be taken to ensure that the number of “pop” operations (i.e. unbecome) matches the number of “push” ones in
the long run, otherwise this amounts to a memory leak (which is why this behavior is not the default).
public class UntypedActorSwapper {

  public static class Swap {
    public static Swap SWAP = new Swap();

    private Swap() {
    }
  }

  public static class Swapper extends UntypedActor {

    LoggingAdapter log = Logging.getLogger(getContext().system(), this);

    public void onReceive(Object message) {
      if (message == SWAP) {
        log.info("Hi");
        getContext().become(new Procedure<Object>() {
          @Override
          public void apply(Object message) {
            log.info("Ho");
            getContext().unbecome(); // resets the latest ’become’
          }
        }, false); // this signals stacking of the new behavior
      } else {
        unhandled(message);
      }
    }
  }

  public static void main(String... args) {
    ActorSystem system = ActorSystem.create("MySystem");
    ActorRef swap = system.actorOf(Props.create(Swapper.class));
    swap.tell(SWAP, ActorRef.noSender()); // logs Hi
    swap.tell(SWAP, ActorRef.noSender()); // logs Ho
    swap.tell(SWAP, ActorRef.noSender()); // logs Hi
    swap.tell(SWAP, ActorRef.noSender()); // logs Ho
    swap.tell(SWAP, ActorRef.noSender()); // logs Hi
    swap.tell(SWAP, ActorRef.noSender()); // logs Ho
  }
}

4.1.11 Stash

The UntypedActorWithStash class enables an actor to temporarily stash away messages that can not or
should not be handled using the actor’s current behavior. Upon changing the actor’s message handler, i.e., right
before invoking getContext().become() or getContext().unbecome(), all stashed messages can
be “unstashed”, thereby prepending them to the actor’s mailbox. This way, the stashed messages can be processed
in the same order as they have been received originally. An actor that extends UntypedActorWithStash will
automatically get a deque-based mailbox.

Note: The abstract class UntypedActorWithStash implements the marker interface
RequiresMessageQueue<DequeBasedMessageQueueSemantics> which requests the system
to automatically choose a deque based mailbox implementation for the actor. If you want more control over the
mailbox, see the documentation on mailboxes: Mailboxes.

Here is an example of the UntypedActorWithStash class in action:


import akka.actor.UntypedActorWithStash;

public class ActorWithProtocol extends UntypedActorWithStash {
  public void onReceive(Object msg) {
    if (msg.equals("open")) {
      unstashAll();
      getContext().become(new Procedure<Object>() {
        public void apply(Object msg) throws Exception {
          if (msg.equals("write")) {
            // do writing...
          } else if (msg.equals("close")) {
            unstashAll();
            getContext().unbecome();
          } else {
            stash();
          }
        }
      }, false); // add behavior on top instead of replacing
    } else {
      stash();
    }
  }
}

Invoking stash() adds the current message (the message that the actor received last) to the actor’s stash.
It is typically invoked when handling the default case in the actor’s message handler to stash messages that
aren’t handled by the other cases. It is illegal to stash the same message twice; to do so results in an
IllegalStateException being thrown. The stash may also be bounded in which case invoking stash()
may lead to a capacity violation, which results in a StashOverflowException. The capacity of the stash
can be configured using the stash-capacity setting (an Int) of the mailbox’s configuration.
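For instance, a bounded deque-based mailbox with a stash capacity might be configured like this (a sketch; the mailbox id my-bounded-stash-mailbox and the values are made up):

```hocon
# application.conf (sketch)
my-bounded-stash-mailbox {
  mailbox-type = "akka.dispatch.BoundedDequeBasedMailbox"
  mailbox-capacity = 1000
  mailbox-push-timeout-time = 10s
  stash-capacity = 500
}
```

The actor would then be associated with this mailbox as described in the Mailboxes documentation.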
Invoking unstashAll() enqueues messages from the stash to the actor’s mailbox until the capacity of the mail-
box (if any) has been reached (note that messages from the stash are prepended to the mailbox). In case a bounded
mailbox overflows, a MessageQueueAppendFailedException is thrown. The stash is guaranteed to be
empty after calling unstashAll().


The stash is backed by a scala.collection.immutable.Vector. As a result, even a very large number
of messages may be stashed without a major impact on performance.
Note that the stash is part of the ephemeral actor state, unlike the mailbox. Therefore, it should be managed like
other parts of the actor’s state which have the same property. The UntypedActorWithStash implementation
of preRestart will call unstashAll(), which is usually the desired behavior.

Note: If you want to enforce that your actor can only work with an unbounded stash, then you should use the
UntypedActorWithUnboundedStash class instead.

4.1.12 Killing an Actor

You can kill an actor by sending a Kill message. This will cause the actor to throw an
ActorKilledException, triggering a failure. The actor will suspend operation and its supervisor will be
asked how to handle the failure, which may mean resuming the actor, restarting it or terminating it completely.
See What Supervision Means for more information.
Use Kill like this:
victim.tell(Kill.getInstance(), ActorRef.noSender());

4.1.13 Actors and exceptions

It can happen that while a message is being processed by an actor, some kind of exception is thrown, e.g. a
database exception.

What happens to the Message

If an exception is thrown while a message is being processed (i.e. taken out of its mailbox and handed over to the
current behavior), then this message will be lost. It is important to understand that it is not put back on the mailbox.
So if you want to retry processing of a message, you need to deal with it yourself by catching the exception and
retrying your flow. Make sure that you put a bound on the number of retries since you don’t want a system to livelock
(consuming a lot of CPU cycles without making progress). Another possibility would be to have a look at the
PeekMailbox pattern.
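The bounded-retry idea can be sketched in plain Java, independently of Akka (Operation and the retry helper are made-up names for illustration):

```java
// Sketch of bounded retries: catch the failure yourself and retry the
// operation, with an upper bound on attempts so the system cannot livelock.
class BoundedRetry {
  interface Operation { String run() throws Exception; }

  static String retry(Operation op, int maxAttempts) throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return op.run();
      } catch (Exception e) {
        last = e; // optionally back off before the next attempt
      }
    }
    throw last; // give up after maxAttempts failures
  }

  public static void main(String[] args) throws Exception {
    final int[] calls = {0};
    String result = retry(new Operation() {
      public String run() throws Exception {
        if (++calls[0] < 3) throw new RuntimeException("transient failure");
        return "ok";
      }
    }, 5);
    System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
  }
}
```

Inside an actor, the body of run() would correspond to the failing part of the message processing.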

What happens to the mailbox

If an exception is thrown while a message is being processed, nothing happens to the mailbox. If the actor is
restarted, the same mailbox will be there. So all messages on that mailbox will be there as well.

What happens to the actor

If code within an actor throws an exception, that actor is suspended and the supervision process is started (see Su-
pervision and Monitoring). Depending on the supervisor’s decision the actor is resumed (as if nothing happened),
restarted (wiping out its internal state and starting from scratch) or terminated.

4.1.14 Initialization patterns

The rich lifecycle hooks of Actors provide a useful toolkit to implement various initialization patterns. During the
lifetime of an ActorRef, an actor can potentially go through several restarts, where the old instance is replaced
by a fresh one, invisibly to the outside observer who only sees the ActorRef.


One may think about the new instances as “incarnations”. Initialization might be necessary for every incarnation
of an actor, but sometimes one needs initialization to happen only at the birth of the first instance when the
ActorRef is created. The following sections provide patterns for different initialization needs.

Initialization via constructor

Using the constructor for initialization has various benefits. First of all, it makes it possible to use val fields to
store any state that does not change during the life of the actor instance, making the implementation of the actor
more robust. The constructor is invoked for every incarnation of the actor, therefore the internals of the actor can
always assume that proper initialization happened. This is also the drawback of this approach, as there are cases
when one would like to avoid reinitializing internals on restart. For example, it is often useful to preserve child
actors across restarts. The following section provides a pattern for this case.

Initialization via preStart

The method preStart() of an actor is only called once directly during the initialization of the first instance,
that is, at creation of its ActorRef. In the case of restarts, preStart() is called from postRestart(),
therefore if not overridden, preStart() is called on every incarnation. However, by overriding postRestart()
one can disable this behavior, and ensure that there is only one call to preStart().
One useful usage of this pattern is to disable creation of new ActorRefs for children during restarts. This can
be achieved by overriding preRestart():
@Override
public void preStart() {
  // Initialize children here
}

// Overriding postRestart to disable the call to preStart()
// after restarts
@Override
public void postRestart(Throwable reason) {
}

// The default implementation of preRestart() stops all the children
// of the actor. To opt-out from stopping the children, we
// have to override preRestart()
@Override
public void preRestart(Throwable reason, Option<Object> message)
    throws Exception {
  // Keep the call to postStop(), but no stopping of children
  postStop();
}

Please note that the child actors are still restarted, but no new ActorRef is created. One can recursively apply
the same principles for the children, ensuring that their preStart() method is called only at the creation of
their refs.
For more information see What Restarting Means.

Initialization via message passing

There are cases when it is impossible to pass all the information needed for actor initialization in the constructor,
for example in the presence of circular dependencies. In this case the actor should listen for an initialization
message, and use become() or a finite state-machine state transition to encode the initialized and uninitialized
states of the actor.
private String initializeMe = null;

@Override
public void onReceive(Object message) throws Exception {
  if (message.equals("init")) {
    initializeMe = "Up and running";
    getContext().become(new Procedure<Object>() {
      @Override
      public void apply(Object message) throws Exception {
        if (message.equals("U OK?"))
          getSender().tell(initializeMe, getSelf());
      }
    });
  }
}

If the actor may receive messages before it has been initialized, a useful tool can be the Stash to save messages
until the initialization finishes, and to replay them after the actor has been initialized.

Warning: This pattern should be used with care, and applied only when none of the patterns above are
applicable. One of the potential issues is that messages might be lost when sent to remote actors. Also,
publishing an ActorRef in an uninitialized state might lead to the condition that it receives a user message
before the initialization has been done.

4.2 Typed Actors

Akka Typed Actors is an implementation of the Active Objects pattern. In essence, it turns method invocations
into asynchronous dispatch instead of the synchronous dispatch that has been the default since Smalltalk came out.
Typed Actors consist of 2 “parts”, a public interface and an implementation, and if you’ve done any work in
“enterprise” Java, this will be very familiar to you. As with normal Actors you have an external API (the public
interface instance) that will delegate method calls asynchronously to a private instance of the implementation.
The advantage of Typed Actors vs. Actors is that with TypedActors you have a static contract and don’t need to
define your own messages; the downside is that it places some limitations on what you can and cannot do,
i.e. you can’t use become/unbecome.
Typed Actors are implemented using JDK Proxies, which provide a fairly easy-to-use API for intercepting method
calls.

Note: Just as with regular Akka Untyped Actors, Typed Actors process one call at a time.

4.2.1 When to use Typed Actors

Typed actors are nice for bridging between actor systems (the “inside”) and non-actor code (the “outside”), because
they allow you to write normal OO-looking code on the outside. Think of them like doors: their practicality lies
in interfacing between private sphere and the public, but you don’t want that many doors inside your house, do
you? For a longer discussion see this blog post.
A bit more background: TypedActors can easily be abused as RPC, and that is an abstraction which is well-
known to be leaky. Hence TypedActors are not what we think of first when we talk about making highly scalable
concurrent software easier to write correctly. They have their niche, use them sparingly.

4.2.2 The tools of the trade

Before we create our first Typed Actor we should go through the tools that we have at our disposal; they are located
in akka.actor.TypedActor.


//Returns the Typed Actor Extension
TypedActorExtension extension =
  TypedActor.get(system); //system is an instance of ActorSystem

//Returns whether the reference is a Typed Actor Proxy or not
TypedActor.get(system).isTypedActor(someReference);

//Returns the backing Akka Actor behind an external Typed Actor Proxy
TypedActor.get(system).getActorRefFor(someReference);

//Returns the current ActorContext,
// method only valid within methods of a TypedActor implementation
ActorContext context = TypedActor.context();

//Returns the external proxy of the current Typed Actor,
// method only valid within methods of a TypedActor implementation
Squarer sq = TypedActor.<Squarer>self();

//Returns a contextual instance of the Typed Actor Extension
//this means that if you create other Typed Actors with this,
//they will become children to the current Typed Actor.
TypedActor.get(TypedActor.context());

Warning: Same as not exposing this of an Akka Actor, it’s important not to expose this of a Typed
Actor. Instead you should pass the external proxy reference, which is obtained from within your Typed Actor
as TypedActor.self(); this is your external identity, just as the ActorRef is the external identity of an
Akka Actor.

4.2.3 Creating Typed Actors

To create a Typed Actor you need to have one or more interfaces, and one implementation.
The following imports are assumed:
import akka.actor.TypedActor;
import akka.actor.*;
import akka.japi.*;
import akka.dispatch.Futures;
import akka.routing.RoundRobinGroup;

import scala.concurrent.Await;
import scala.concurrent.Future;
import scala.concurrent.duration.Duration;
import java.util.concurrent.TimeUnit;

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
public class TypedActorDocTest extends AbstractJavaTest {
  Object someReference = null;
  ActorSystem system = null;

  static
  public interface Squarer {
    void squareDontCare(int i); //fire-forget

    Future<Integer> square(int i); //non-blocking send-request-reply

    Option<Integer> squareNowPlease(int i);//blocking send-request-reply

    int squareNow(int i); //blocking send-request-reply
  }

  static
  class SquarerImpl implements Squarer {
    private String name;

    public SquarerImpl() {
      this.name = "default";
    }

    public SquarerImpl(String name) {
      this.name = name;
    }

    public void squareDontCare(int i) {
      int sq = i * i; //Nobody cares :(
    }

    public Future<Integer> square(int i) {
      return Futures.successful(i * i);
    }

    public Option<Integer> squareNowPlease(int i) {
      return Option.some(i * i);
    }

    public int squareNow(int i) {
      return i * i;
    }
  }

  @Test public void mustGetTheTypedActorExtension() {
    try {
      //Returns the Typed Actor Extension
      TypedActorExtension extension =
        TypedActor.get(system); //system is an instance of ActorSystem

      //Returns whether the reference is a Typed Actor Proxy or not
      TypedActor.get(system).isTypedActor(someReference);

      //Returns the backing Akka Actor behind an external Typed Actor Proxy
      TypedActor.get(system).getActorRefFor(someReference);

      //Returns the current ActorContext,
      // method only valid within methods of a TypedActor implementation
      ActorContext context = TypedActor.context();

      //Returns the external proxy of the current Typed Actor,
      // method only valid within methods of a TypedActor implementation
      Squarer sq = TypedActor.<Squarer>self();

      //Returns a contextual instance of the Typed Actor Extension
      //this means that if you create other Typed Actors with this,
      //they will become children to the current Typed Actor.
      TypedActor.get(TypedActor.context());

    } catch (Exception e) {
      //dun care
    }
  }
  @Test public void createATypedActor() {
    try {
      Squarer mySquarer =
        TypedActor.get(system).typedActorOf(
          new TypedProps<SquarerImpl>(Squarer.class, SquarerImpl.class));
      Squarer otherSquarer =
        TypedActor.get(system).typedActorOf(
          new TypedProps<SquarerImpl>(Squarer.class,
            new Creator<SquarerImpl>() {
              public SquarerImpl create() { return new SquarerImpl("foo"); }
            }),
          "name");

      mySquarer.squareDontCare(10);

      Future<Integer> fSquare = mySquarer.square(10); //A Future[Int]

      Option<Integer> oSquare = mySquarer.squareNowPlease(10); //Option[Int]

      int iSquare = mySquarer.squareNow(10); //Int

      assertEquals(100, Await.result(fSquare,
        Duration.create(3, TimeUnit.SECONDS)).intValue());

      assertEquals(100, oSquare.get().intValue());

      assertEquals(100, iSquare);

      TypedActor.get(system).stop(mySquarer);

      TypedActor.get(system).poisonPill(otherSquarer);
    } catch(Exception e) {
      //Ignore
    }
  }

  @Test public void createHierarchies() {
    try {
      Squarer childSquarer =
        TypedActor.get(TypedActor.context()).
          typedActorOf(
            new TypedProps<SquarerImpl>(Squarer.class, SquarerImpl.class)
          );
      //Use "childSquarer" as a Squarer
    } catch (Exception e) {
      //dun care
    }
  }

  @Test public void proxyAnyActorRef() {
    try {
      final ActorRef actorRefToRemoteActor = system.deadLetters();
      Squarer typedActor =
        TypedActor.get(system).
          typedActorOf(
            new TypedProps<Squarer>(Squarer.class),
            actorRefToRemoteActor
          );
      //Use "typedActor" as a FooBar
    } catch (Exception e) {
      //dun care
    }
  }

interface HasName {
String name();
}

  class Named implements HasName {
    private int id = new Random().nextInt(1024);

    @Override public String name() { return "name-" + id; }
  }

  @Test public void typedRouterPattern() {
    try {
      // prepare routees
      TypedActorExtension typed = TypedActor.get(system);

      Named named1 =
        typed.typedActorOf(new TypedProps<Named>(Named.class));

      Named named2 =
        typed.typedActorOf(new TypedProps<Named>(Named.class));

      List<Named> routees = new ArrayList<Named>();
      routees.add(named1);
      routees.add(named2);

      List<String> routeePaths = new ArrayList<String>();
      routeePaths.add(typed.getActorRefFor(named1).path().toStringWithoutAddress());
      routeePaths.add(typed.getActorRefFor(named2).path().toStringWithoutAddress());

      // prepare untyped router
      ActorRef router = system.actorOf(new RoundRobinGroup(routeePaths).props(), "router");

      // prepare typed proxy, forwarding MethodCall messages to "router"
      Named typedRouter = typed.typedActorOf(new TypedProps<Named>(Named.class), router);

      System.out.println("actor was: " + typedRouter.name()); // name-243
      System.out.println("actor was: " + typedRouter.name()); // name-614
      System.out.println("actor was: " + typedRouter.name()); // name-243
      System.out.println("actor was: " + typedRouter.name()); // name-614

      typed.poisonPill(named1);
      typed.poisonPill(named2);
      typed.poisonPill(typedRouter);

    } catch (Exception e) {
      //dun care
    }
  }
}

Our example interface:


public interface Squarer {
// typed actor iface methods ...
}

Our example implementation of that interface:


class SquarerImpl implements Squarer {
  private String name;

  public SquarerImpl() {
    this.name = "default";
  }

  public SquarerImpl(String name) {
    this.name = name;
  }

  // typed actor impl methods ...
}

The most trivial way of creating a Typed Actor instance of our Squarer:
Squarer mySquarer =
  TypedActor.get(system).typedActorOf(
    new TypedProps<SquarerImpl>(Squarer.class, SquarerImpl.class));

First type is the type of the proxy, the second type is the type of the implementation. If you need to call a specific
constructor you do it like this:
Squarer otherSquarer =
  TypedActor.get(system).typedActorOf(
    new TypedProps<SquarerImpl>(Squarer.class,
      new Creator<SquarerImpl>() {
        public SquarerImpl create() { return new SquarerImpl("foo"); }
      }),
    "name");

Since you supply a TypedProps, you can specify which dispatcher to use, what the default timeout should be, and
more. Now, our Squarer doesn’t have any methods, so we’d better add those.
public interface Squarer {
  void squareDontCare(int i); //fire-forget

  Future<Integer> square(int i); //non-blocking send-request-reply

  Option<Integer> squareNowPlease(int i);//blocking send-request-reply

  int squareNow(int i); //blocking send-request-reply
}

Alright, now we’ve got some methods we can call, but we need to implement those in SquarerImpl.
class SquarerImpl implements Squarer {
  private String name;

  public SquarerImpl() {
    this.name = "default";
  }

  public SquarerImpl(String name) {
    this.name = name;
  }

  public void squareDontCare(int i) {
    int sq = i * i; //Nobody cares :(
  }

  public Future<Integer> square(int i) {
    return Futures.successful(i * i);
  }

  public Option<Integer> squareNowPlease(int i) {
    return Option.some(i * i);
  }

  public int squareNow(int i) {
    return i * i;
  }
}

Excellent, now we have an interface and an implementation of that interface, and we know how to create a Typed
Actor from that, so let’s look at calling these methods.

4.2.4 Method dispatch semantics

Methods returning:
• void will be dispatched with fire-and-forget semantics, exactly like ActorRef.tell
• scala.concurrent.Future<?> will use send-request-reply semantics, exactly like
ActorRef.ask
• akka.japi.Option<?> will use send-request-reply semantics, but will block to wait for an
answer, and return akka.japi.Option.None if no answer was produced within the timeout, or
akka.japi.Option.Some<?> containing the result otherwise. Any exception that was thrown dur-
ing this call will be rethrown.
• Any other type of value will use send-request-reply semantics, but will block to wait
for an answer, throwing java.util.concurrent.TimeoutException if there was a time-
out or rethrow any exception that was thrown during this call. Note that due to the
Java exception and reflection mechanisms, such a TimeoutException will be wrapped in a
java.lang.reflect.UndeclaredThrowableException unless the interface method explicitly
declares the TimeoutException as a thrown checked exception.

4.2.5 Messages and immutability

While Akka cannot enforce that the parameters to the methods of your Typed Actors are immutable, we strongly
recommend that parameters passed are immutable.
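For example, a parameter class can be made immutable with final fields and a defensive copy of any incoming collection (a generic sketch, not an Akka API; the class name is made up):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of an immutable message/parameter class: all fields final,
// no mutators, and a defensive copy of the incoming collection.
final class ImmutableMessage {
  private final String sender;
  private final List<String> values;

  public ImmutableMessage(String sender, List<String> values) {
    this.sender = sender;
    // copy + wrap so later changes by the caller are not visible
    this.values = Collections.unmodifiableList(new ArrayList<String>(values));
  }

  public String getSender() { return sender; }
  public List<String> getValues() { return values; }
}
```

Such a class can be shared safely between the proxy thread and the actor thread behind it.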

One-way message send

mySquarer.squareDontCare(10);

As simple as that! The method will be executed asynchronously on another thread.

Request-reply message send

Option<Integer> oSquare = mySquarer.squareNowPlease(10); //Option[Int]

This will block for as long as the timeout that was set in the Props of the Typed Actor, if needed. It will return
None if a timeout occurs.
int iSquare = mySquarer.squareNow(10); //Int

This will block for as long as the timeout that was set in the Props of the Typed Actor, if
needed. It will throw a java.util.concurrent.TimeoutException if a timeout occurs.
Note that here, such a TimeoutException will be wrapped in a
java.lang.reflect.UndeclaredThrowableException by the Java reflection mechanism, because
the interface method does not explicitly declare the TimeoutException as a thrown checked exception. To get
the TimeoutException directly, declare throws java.util.concurrent.TimeoutException at
the interface method.

Request-reply-with-future message send

Future<Integer> fSquare = mySquarer.square(10); //A Future[Int]

This call is asynchronous, and the Future returned can be used for asynchronous composition.

4.2.6 Stopping Typed Actors

Since Akka’s Typed Actors are backed by Akka Actors they must be stopped when they aren’t needed anymore.
TypedActor.get(system).stop(mySquarer);

This asynchronously stops the Typed Actor associated with the specified proxy ASAP.
TypedActor.get(system).poisonPill(otherSquarer);

This asynchronously stops the Typed Actor associated with the specified proxy after it’s done with all calls that
were made prior to this call.

4.2.7 Typed Actor Hierarchies

Since you can obtain a contextual Typed Actor Extension by passing in an ActorContext you can create child
Typed Actors by invoking typedActorOf(..) on that.
Squarer childSquarer =
  TypedActor.get(TypedActor.context()).
    typedActorOf(
      new TypedProps<SquarerImpl>(Squarer.class, SquarerImpl.class)
    );
//Use "childSquarer" as a Squarer

You can also create a child Typed Actor in regular Akka Actors by giving the UntypedActorContext as an
input parameter to TypedActor.get(...).

4.2.8 Supervisor Strategy

By having your Typed Actor implementation class implement TypedActor.Supervisor you can define the
strategy to use for supervising child actors, as described in Supervision and Monitoring and Fault Tolerance.

4.2.9 Receive arbitrary messages

If your implementation class of your TypedActor extends TypedActor.Receiver, all mes-
sages that are not MethodCall instances will be passed into the onReceive-method.
This allows you to react to DeathWatch Terminated-messages and other types of messages, e.g. when inter-
facing with untyped actors.

4.2.10 Lifecycle callbacks

By having your Typed Actor implementation class implement any and all of the following:
• TypedActor.PreStart
• TypedActor.PostStop
• TypedActor.PreRestart
• TypedActor.PostRestart
you can hook into the lifecycle of your Typed Actor.

4.2.11 Proxying

You can use the typedActorOf that takes a TypedProps and an ActorRef to proxy the given ActorRef as a
TypedActor. This is useful if you want to communicate remotely with TypedActors on other machines; just pass
the ActorRef to typedActorOf.

4.2.12 Lookup & Remoting

Since TypedActors are backed by Akka Actors, you can use typedActorOf to proxy ActorRefs
potentially residing on remote nodes.
Squarer typedActor =
  TypedActor.get(system).
    typedActorOf(
      new TypedProps<Squarer>(Squarer.class),
      actorRefToRemoteActor
    );
//Use "typedActor" as a FooBar

4.2.13 Typed Router pattern

Sometimes you want to spread messages between multiple actors. The easiest way to achieve this in
Akka is to use a Router, which can implement a specific routing logic, such as smallest-mailbox or
consistent-hashing etc.
Routers are not provided directly for typed actors, but it is really easy to leverage an untyped router and use a
typed proxy in front of it. To showcase this let’s create typed actors that assign themselves some random id, so
we know that in fact, the router has sent the message to different actors:
@Test public void typedRouterPattern() {
  try {
    // prepare routees
    TypedActorExtension typed = TypedActor.get(system);

    Named named1 =
      typed.typedActorOf(new TypedProps<Named>(Named.class, NamedImpl.class));

    Named named2 =
      typed.typedActorOf(new TypedProps<Named>(Named.class, NamedImpl.class));

    List<Named> routees = new ArrayList<Named>();
    routees.add(named1);
    routees.add(named2);

    List<String> routeePaths = new ArrayList<String>();
    routeePaths.add(typed.getActorRefFor(named1).path().toStringWithoutAddress());
    routeePaths.add(typed.getActorRefFor(named2).path().toStringWithoutAddress());

    // prepare untyped router
    ActorRef router = system.actorOf(new RoundRobinGroup(routeePaths).props(), "router");

    // prepare typed proxy, forwarding MethodCall messages to ‘router‘
    Named typedRouter = typed.typedActorOf(new TypedProps<Named>(Named.class), router);

    System.out.println("actor was: " + typedRouter.name()); // name-243
    System.out.println("actor was: " + typedRouter.name()); // name-614
    System.out.println("actor was: " + typedRouter.name()); // name-243
    System.out.println("actor was: " + typedRouter.name()); // name-614

    typed.poisonPill(named1);
    typed.poisonPill(named2);
    typed.poisonPill(typedRouter);
  } catch (Exception e) {
    //dun care
  }
}

In order to round robin among a few instances of such actors, you can simply create a plain untyped router,
and then facade it with a TypedActor as shown in the example above. This works because typed actors of
course communicate using the same mechanisms as normal actors, and method calls on them get transformed
into message sends of MethodCall messages.

4.3 Fault Tolerance

As explained in Actor Systems each actor is the supervisor of its children, and as such each actor defines a fault
handling supervisor strategy. This strategy cannot be changed afterwards as it is an integral part of the actor
system’s structure.

4.3.1 Fault Handling in Practice

First, let us look at a sample that illustrates one way to handle data store errors, which is a typical source of failure
in real world applications. Of course it depends on the actual application what is possible to do when the data
store is unavailable, but in this sample we use a best effort re-connect approach.


Read the following source code. The inlined comments explain the different pieces of the fault handling and why
they are added. It is also highly recommended to run this sample as it is easy to follow the log output to understand
what is happening at runtime.

Diagrams of the Fault Tolerance Sample

The above diagram illustrates the normal message flow.


Normal flow:

Step  Description
1     The progress Listener starts the work.
2     The Worker schedules work by sending Do messages periodically to itself.
3     When receiving Do the Worker tells the CounterService to increment the counter, three times.
4, 5  The Increment message is forwarded to the Counter, which updates its counter variable and sends
      the current value to the Storage.
6, 7  The Worker asks the CounterService for the current value of the counter and pipes the result back
      to the Listener.


The above diagram illustrates what happens in case of storage failure.


Failure flow:


Step        Description
1           The Storage throws StorageException.
2           The CounterService is supervisor of the Storage and restarts the Storage when
            StorageException is thrown.
3, 4, 5, 6  The Storage continues to fail and is restarted.
7           After 3 failures and restarts within 5 seconds the Storage is stopped by its supervisor,
            i.e. the CounterService.
8           The CounterService is also watching the Storage for termination and receives the
            Terminated message when the Storage has been stopped ...
9, 10, 11   ... and tells the Counter that there is no Storage.
12          The CounterService schedules a Reconnect message to itself.
13, 14      When it receives the Reconnect message it creates a new Storage ...
15, 16      ... and tells the Counter to use the new Storage.

Full Source Code of the Fault Tolerance Sample

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import scala.concurrent.duration.Duration;
import akka.util.Timeout;

import akka.actor.*;
import akka.dispatch.Mapper;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.japi.Function;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import static akka.japi.Util.classTag;
import static akka.actor.SupervisorStrategy.restart;
import static akka.actor.SupervisorStrategy.stop;
import static akka.actor.SupervisorStrategy.escalate;
import static akka.pattern.Patterns.ask;
import static akka.pattern.Patterns.pipe;

import static docs.actor.FaultHandlingDocSample.WorkerApi.*;
import static docs.actor.FaultHandlingDocSample.CounterServiceApi.*;
import static docs.actor.FaultHandlingDocSample.CounterApi.*;
import static docs.actor.FaultHandlingDocSample.StorageApi.*;

public class FaultHandlingDocSample {

/**
* Runs the sample
*/
public static void main(String[] args) {
Config config = ConfigFactory.parseString("akka.loglevel = DEBUG \n" +
  "akka.actor.debug.lifecycle = on");

ActorSystem system = ActorSystem.create("FaultToleranceSample", config);


ActorRef worker = system.actorOf(Props.create(Worker.class), "worker");
ActorRef listener = system.actorOf(Props.create(Listener.class), "listener");
// start the work and listen on progress
// note that the listener is used as sender of the tell,
// i.e. it will receive replies from the worker
worker.tell(Start, listener);
}

/**
* Listens on progress from the worker and shuts down the system when enough
* work has been done.
*/
public static class Listener extends UntypedActor {
final LoggingAdapter log = Logging.getLogger(getContext().system(), this);

@Override
public void preStart() {
// If we don’t get any progress within 15 seconds then the service
// is unavailable
getContext().setReceiveTimeout(Duration.create("15 seconds"));
}

public void onReceive(Object msg) {
  log.debug("received message {}", msg);
  if (msg instanceof Progress) {
    Progress progress = (Progress) msg;
    log.info("Current progress: {} %", progress.percent);
    if (progress.percent >= 100.0) {
      log.info("That’s all, shutting down");
      getContext().system().terminate();
    }
  } else if (msg == ReceiveTimeout.getInstance()) {
    // No progress within 15 seconds, ServiceUnavailable
    log.error("Shutting down due to unavailable service");
    getContext().system().terminate();
  } else {
    unhandled(msg);
  }
}
}

public interface WorkerApi {
  public static final Object Start = "Start";
  public static final Object Do = "Do";

  public static class Progress {
    public final double percent;

    public Progress(double percent) {
      this.percent = percent;
    }

    public String toString() {
      return String.format("%s(%s)", getClass().getSimpleName(), percent);
    }
  }
}

/**
* Worker performs some work when it receives the Start message. It will
* continuously notify the sender of the Start message of current Progress.
 * The Worker supervises the CounterService.
 */
public static class Worker extends UntypedActor {
  final LoggingAdapter log = Logging.getLogger(getContext().system(), this);
  final Timeout askTimeout = new Timeout(Duration.create(5, "seconds"));

  // The sender of the initial Start message will continuously be notified
  // about progress
  ActorRef progressListener;
  final ActorRef counterService = getContext().actorOf(
    Props.create(CounterService.class), "counter");
  final int totalCount = 51;

  // Stop the CounterService child if it throws ServiceUnavailable
  private static SupervisorStrategy strategy = new OneForOneStrategy(-1,
    Duration.Inf(), new Function<Throwable, Directive>() {
      @Override
      public Directive apply(Throwable t) {
        if (t instanceof ServiceUnavailable) {
          return stop();
        } else {
          return escalate();
        }
      }
    });

  @Override
  public SupervisorStrategy supervisorStrategy() {
    return strategy;
  }

  public void onReceive(Object msg) {
    log.debug("received message {}", msg);
    if (msg.equals(Start) && progressListener == null) {
      progressListener = getSender();
      getContext().system().scheduler().schedule(
        Duration.Zero(), Duration.create(1, "second"), getSelf(), Do,
        getContext().dispatcher(), null
      );
    } else if (msg.equals(Do)) {
      counterService.tell(new Increment(1), getSelf());
      counterService.tell(new Increment(1), getSelf());
      counterService.tell(new Increment(1), getSelf());

      // Send current progress to the initial sender
      pipe(ask(counterService, GetCurrentCount, askTimeout)
        .mapTo(classTag(CurrentCount.class))
        .map(new Mapper<CurrentCount, Progress>() {
          public Progress apply(CurrentCount c) {
            return new Progress(100.0 * c.count / totalCount);
          }
        }, getContext().dispatcher()), getContext().dispatcher())
        .to(progressListener);
    } else {
      unhandled(msg);
    }
  }
}

public interface CounterServiceApi {

  public static final Object GetCurrentCount = "GetCurrentCount";

  public static class CurrentCount {
    public final String key;
    public final long count;

    public CurrentCount(String key, long count) {
      this.key = key;
      this.count = count;
    }

    public String toString() {
      return String.format("%s(%s, %s)", getClass().getSimpleName(), key, count);
    }
  }

  public static class Increment {
    public final long n;

    public Increment(long n) {
      this.n = n;
    }

    public String toString() {
      return String.format("%s(%s)", getClass().getSimpleName(), n);
    }
  }

  public static class ServiceUnavailable extends RuntimeException {
    private static final long serialVersionUID = 1L;

    public ServiceUnavailable(String msg) {
      super(msg);
    }
  }
}

/**
 * Adds the value received in Increment message to a persistent counter.
 * Replies with CurrentCount when it is asked for CurrentCount. CounterService
 * supervises Storage and Counter.
 */
public static class CounterService extends UntypedActor {

  // Reconnect message
  static final Object Reconnect = "Reconnect";

  private static class SenderMsgPair {
    final ActorRef sender;
    final Object msg;

    SenderMsgPair(ActorRef sender, Object msg) {
      this.msg = msg;
      this.sender = sender;
    }
  }

  final LoggingAdapter log = Logging.getLogger(getContext().system(), this);
  final String key = getSelf().path().name();
  ActorRef storage;
  ActorRef counter;
  final List<SenderMsgPair> backlog = new ArrayList<SenderMsgPair>();
  final int MAX_BACKLOG = 10000;

  // Restart the storage child when StorageException is thrown.
  // After 3 restarts within 5 seconds it will be stopped.
  private static SupervisorStrategy strategy = new OneForOneStrategy(3,
    Duration.create("5 seconds"), new Function<Throwable, Directive>() {
      @Override
      public Directive apply(Throwable t) {
        if (t instanceof StorageException) {
          return restart();
        } else {
          return escalate();
        }
      }
    });

  @Override
  public SupervisorStrategy supervisorStrategy() {
    return strategy;
  }

  @Override
  public void preStart() {
    initStorage();
  }

  /**
   * The child storage is restarted in case of failure, but after 3 restarts,
   * and still failing it will be stopped. Better to back-off than
   * continuously failing. When it has been stopped we will schedule a
   * Reconnect after a delay. Watch the child so we receive Terminated message
   * when it has been terminated.
   */
  void initStorage() {
    storage = getContext().watch(getContext().actorOf(
      Props.create(Storage.class), "storage"));
    // Tell the counter, if any, to use the new storage
    if (counter != null)
      counter.tell(new UseStorage(storage), getSelf());
    // We need the initial value to be able to operate
    storage.tell(new Get(key), getSelf());
  }

  @Override
  public void onReceive(Object msg) {
    log.debug("received message {}", msg);
    if (msg instanceof Entry && ((Entry) msg).key.equals(key) &&
        counter == null) {
      // Reply from Storage of the initial value, now we can create the Counter
      final long value = ((Entry) msg).value;
      counter = getContext().actorOf(Props.create(Counter.class, key, value));
      // Tell the counter to use current storage
      counter.tell(new UseStorage(storage), getSelf());
      // and send the buffered backlog to the counter
      for (SenderMsgPair each : backlog) {
        counter.tell(each.msg, each.sender);
      }
      backlog.clear();
    } else if (msg instanceof Increment) {
      forwardOrPlaceInBacklog(msg);
    } else if (msg.equals(GetCurrentCount)) {
      forwardOrPlaceInBacklog(msg);
    } else if (msg instanceof Terminated) {
      // After 3 restarts the storage child is stopped.
      // We receive Terminated because we watch the child, see initStorage.
      storage = null;
      // Tell the counter that there is no storage for the moment
      counter.tell(new UseStorage(null), getSelf());
      // Try to re-establish storage after while
      getContext().system().scheduler().scheduleOnce(
        Duration.create(10, "seconds"), getSelf(), Reconnect,
        getContext().dispatcher(), null);
    } else if (msg.equals(Reconnect)) {
      // Re-establish storage after the scheduled delay
      initStorage();
    } else {
      unhandled(msg);
    }
  }

  void forwardOrPlaceInBacklog(Object msg) {
    // We need the initial value from storage before we can start delegate to
    // the counter. Before that we place the messages in a backlog, to be sent
    // to the counter when it is initialized.
    if (counter == null) {
      if (backlog.size() >= MAX_BACKLOG)
        throw new ServiceUnavailable("CounterService not available," +
          " lack of initial value");
      backlog.add(new SenderMsgPair(getSender(), msg));
    } else {
      counter.forward(msg, getContext());
    }
  }
}

public interface CounterApi {
  public static class UseStorage {
    public final ActorRef storage;

    public UseStorage(ActorRef storage) {
      this.storage = storage;
    }

    public String toString() {
      return String.format("%s(%s)", getClass().getSimpleName(), storage);
    }
  }
}

/**
* The in memory count variable that will send current value to the Storage,
* if there is any storage available at the moment.
*/
public static class Counter extends UntypedActor {
  final LoggingAdapter log = Logging.getLogger(getContext().system(), this);
  final String key;
  long count;
  ActorRef storage;

  public Counter(String key, long initialValue) {
    this.key = key;
    this.count = initialValue;
  }

  @Override
  public void onReceive(Object msg) {
    log.debug("received message {}", msg);
    if (msg instanceof UseStorage) {
      storage = ((UseStorage) msg).storage;
      storeCount();
    } else if (msg instanceof Increment) {
      count += ((Increment) msg).n;
      storeCount();
    } else if (msg.equals(GetCurrentCount)) {
      getSender().tell(new CurrentCount(key, count), getSelf());
    } else {
      unhandled(msg);
    }
  }

  void storeCount() {
    // Delegate dangerous work, to protect our valuable state.
    // We can continue without storage.
    if (storage != null) {
      storage.tell(new Store(new Entry(key, count)), getSelf());
    }
  }
}

public interface StorageApi {

  public static class Store {
    public final Entry entry;

    public Store(Entry entry) {
      this.entry = entry;
    }

    public String toString() {
      return String.format("%s(%s)", getClass().getSimpleName(), entry);
    }
  }

  public static class Entry {
    public final String key;
    public final long value;

    public Entry(String key, long value) {
      this.key = key;
      this.value = value;
    }

    public String toString() {
      return String.format("%s(%s, %s)", getClass().getSimpleName(), key, value);
    }
  }

  public static class Get {
    public final String key;

    public Get(String key) {
      this.key = key;
    }

    public String toString() {
      return String.format("%s(%s)", getClass().getSimpleName(), key);
    }
  }

  public static class StorageException extends RuntimeException {
    private static final long serialVersionUID = 1L;

    public StorageException(String msg) {
      super(msg);
    }
  }
}

/**
* Saves key/value pairs to persistent storage when receiving Store message.
* Replies with current value when receiving Get message. Will throw
* StorageException if the underlying data store is out of order.
*/
public static class Storage extends UntypedActor {

  final LoggingAdapter log = Logging.getLogger(getContext().system(), this);
  final DummyDB db = DummyDB.instance;

  @Override
  public void onReceive(Object msg) {
    log.debug("received message {}", msg);
    if (msg instanceof Store) {
      Store store = (Store) msg;
      db.save(store.entry.key, store.entry.value);
    } else if (msg instanceof Get) {
      Get get = (Get) msg;
      Long value = db.load(get.key);
      getSender().tell(new Entry(get.key, value == null ?
        Long.valueOf(0L) : value), getSelf());
    } else {
      unhandled(msg);
    }
  }
}

public static class DummyDB {
  public static final DummyDB instance = new DummyDB();
  private final Map<String, Long> db = new HashMap<String, Long>();

  private DummyDB() {
  }

  public synchronized void save(String key, Long value) throws StorageException {
    if (11 <= value && value <= 14)
      throw new StorageException("Simulated store failure " + value);
    db.put(key, value);
  }

  public synchronized Long load(String key) throws StorageException {
    return db.get(key);
  }
}
}

4.3.2 Creating a Supervisor Strategy

The following sections explain the fault handling mechanism and alternatives in more depth.
For the sake of demonstration let us consider the following strategy:
private static SupervisorStrategy strategy =
  new OneForOneStrategy(10, Duration.create("1 minute"),
    new Function<Throwable, Directive>() {


@Override
public Directive apply(Throwable t) {
if (t instanceof ArithmeticException) {
return resume();
} else if (t instanceof NullPointerException) {
return restart();
} else if (t instanceof IllegalArgumentException) {
return stop();
} else {
return escalate();
}
}
});

@Override
public SupervisorStrategy supervisorStrategy() {
return strategy;
}

I have chosen a few well-known exception types in order to demonstrate the application of the fault handling
directives described in Supervision and Monitoring. First off, it is a one-for-one strategy, meaning that each child
is treated separately (an all-for-one strategy works very similarly, the only difference is that any decision is applied
to all children of the supervisor, not only the failing one). There are limits set on the restart frequency, namely
maximum 10 restarts per minute. -1 and Duration.Inf() means that the respective limit does not apply,
leaving the possibility to specify an absolute upper limit on the restarts or to make the restarts work infinitely. The
child actor is stopped if the limit is exceeded.
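The restart-frequency bookkeeping can be sketched in plain Java (a conceptual illustration only, not Akka's actual implementation): record the time of each restart, drop restarts that fall outside the time window, and once the count exceeds the maximum, stop the child instead of restarting it.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch of a "max N restarts within a time window" rule.
class RestartWindowSketch {
    private final int maxNrOfRetries;
    private final long withinTimeMillis;
    private final Deque<Long> restartTimes = new ArrayDeque<Long>();

    RestartWindowSketch(int maxNrOfRetries, long withinTimeMillis) {
        this.maxNrOfRetries = maxNrOfRetries;
        this.withinTimeMillis = withinTimeMillis;
    }

    /** Returns true if a restart at `now` is still within the allowed budget. */
    boolean restartAllowed(long now) {
        // drop restarts that fell out of the time window
        while (!restartTimes.isEmpty() && now - restartTimes.peekFirst() > withinTimeMillis) {
            restartTimes.removeFirst();
        }
        restartTimes.addLast(now);
        return restartTimes.size() <= maxNrOfRetries;
    }
}
```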

Note: If the strategy is declared inside the supervising actor (as opposed to a separate class) its decider has access
to all internal state of the actor in a thread-safe fashion, including obtaining a reference to the currently failed child
(available as the getSender of the failure message).

Default Supervisor Strategy

Escalate is used if the defined strategy doesn’t cover the exception that was thrown.
When the supervisor strategy is not defined for an actor the following exceptions are handled by default:
• ActorInitializationException will stop the failing child actor
• ActorKilledException will stop the failing child actor
• Exception will restart the failing child actor
• Other types of Throwable will be escalated to parent actor
If the exception escalates all the way up to the root guardian it will handle it in the same way as the default strategy
defined above.

Stopping Supervisor Strategy

Closer to the Erlang way is the strategy to just stop children when they fail and then take cor-
rective action in the supervisor when DeathWatch signals the loss of the child. This strategy is
also provided pre-packaged as SupervisorStrategy.stoppingStrategy with an accompanying
StoppingSupervisorStrategy configurator to be used when you want the "/user" guardian to apply it.

Logging of Actor Failures

By default the SupervisorStrategy logs failures unless they are escalated. Escalated failures are supposed
to be handled, and potentially logged, at a level higher in the hierarchy.


You can mute the default logging of a SupervisorStrategy by setting loggingEnabled to false when
instantiating it. Customized logging can be done inside the Decider. Note that the reference to the currently
failed child is available as the getSender when the SupervisorStrategy is declared inside the supervising
actor.
You may also customize the logging in your own SupervisorStrategy implementation by overriding the
logFailure method.

4.3.3 Supervision of Top-Level Actors

Top-level actors are those which are created using system.actorOf(), and they are children of the User
Guardian. There are no special rules applied in this case; the guardian simply applies the configured strategy.

4.3.4 Test Application

The following section shows the effects of the different directives in practice, where a test setup is needed. First
off, we need a suitable supervisor:
public class Supervisor extends UntypedActor {

private static SupervisorStrategy strategy =
  new OneForOneStrategy(10, Duration.create("1 minute"),
new Function<Throwable, Directive>() {
@Override
public Directive apply(Throwable t) {
if (t instanceof ArithmeticException) {
return resume();
} else if (t instanceof NullPointerException) {
return restart();
} else if (t instanceof IllegalArgumentException) {
return stop();
} else {
return escalate();
}
}
});

@Override
public SupervisorStrategy supervisorStrategy() {
return strategy;
}

public void onReceive(Object o) {
if (o instanceof Props) {
getSender().tell(getContext().actorOf((Props) o), getSelf());
} else {
unhandled(o);
}
}
}

This supervisor will be used to create a child, with which we can experiment:
public class Child extends UntypedActor {
int state = 0;

public void onReceive(Object o) throws Exception {
if (o instanceof Exception) {
throw (Exception) o;
} else if (o instanceof Integer) {


state = (Integer) o;
} else if ([Link]("get")) {
getSender().tell(state, getSelf());
} else {
unhandled(o);
}
}
}

The test is easier by using the utilities described in akka-testkit, where TestProbe provides an actor ref useful
for receiving and inspecting replies.
import static java.util.concurrent.TimeUnit.SECONDS;
import static akka.pattern.Patterns.ask;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

import scala.concurrent.Await;
import scala.concurrent.duration.Duration;

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.Terminated;
import akka.testkit.JavaTestKit;
import akka.testkit.TestProbe;

public class FaultHandlingTest extends AbstractJavaTest {

  static ActorSystem system;
  Duration timeout = Duration.create(5, SECONDS);

  @BeforeClass
  public static void start() {
    system = ActorSystem.create("FaultHandlingTest");
  }

  @AfterClass
  public static void cleanup() {
    JavaTestKit.shutdownActorSystem(system);
    system = null;
  }

  @Test
  public void mustEmploySupervisorStrategy() throws Exception {
    // code here
  }

Let us create actors:

Props superprops = Props.create(Supervisor.class);
ActorRef supervisor = system.actorOf(superprops, "supervisor");
ActorRef child = (ActorRef) Await.result(ask(supervisor,
  Props.create(Child.class), 5000), timeout);

The first test shall demonstrate the Resume directive, so we try it out by setting some non-initial state in the actor
and have it fail:


child.tell(42, ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(42);
child.tell(new ArithmeticException(), ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(42);

As you can see the value 42 survives the fault handling directive. Now, if we change the failure to a more serious
NullPointerException, that will no longer be the case:
child.tell(new NullPointerException(), ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(0);

And finally in case of the fatal IllegalArgumentException the child will be terminated by the supervisor:
final TestProbe probe = new TestProbe(system);
probe.watch(child);
child.tell(new IllegalArgumentException(), ActorRef.noSender());
probe.expectMsgClass(Terminated.class);

Up to now the supervisor was completely unaffected by the child’s failure, because the directives set did handle it.
In case of an Exception, this is not true anymore and the supervisor escalates the failure.
child = (ActorRef) Await.result(ask(supervisor,
  Props.create(Child.class), 5000), timeout);
probe.watch(child);
assert Await.result(ask(child, "get", 5000), timeout).equals(0);
child.tell(new Exception(), ActorRef.noSender());
probe.expectMsgClass(Terminated.class);

The supervisor itself is supervised by the top-level actor provided by the ActorSystem, which
has the default policy to restart in case of all Exception cases (with the notable exceptions of
ActorInitializationException and ActorKilledException). Since the default directive in case
of a restart is to kill all children, we expected our poor child not to survive this failure.
In case this is not desired (which depends on the use case), we need to use a different supervisor which overrides
this behavior.
public class Supervisor2 extends UntypedActor {

private static SupervisorStrategy strategy = new OneForOneStrategy(10,
  Duration.create("1 minute"),
  new Function<Throwable, Directive>() {
@Override
public Directive apply(Throwable t) {
if (t instanceof ArithmeticException) {
return resume();
} else if (t instanceof NullPointerException) {
return restart();
} else if (t instanceof IllegalArgumentException) {
return stop();
} else {
return escalate();
}
}
});

@Override
public SupervisorStrategy supervisorStrategy() {
return strategy;
}

public void onReceive(Object o) {
  if (o instanceof Props) {
getSender().tell(getContext().actorOf((Props) o), getSelf());


} else {
unhandled(o);
}
}

@Override
public void preRestart(Throwable cause, Option<Object> msg) {
// do not kill all children, which is the default here
}
}

With this parent, the child survives the escalated restart, as demonstrated in the last test:
superprops = Props.create(Supervisor2.class);
supervisor = system.actorOf(superprops);
child = (ActorRef) Await.result(ask(supervisor,
  Props.create(Child.class), 5000), timeout);
child.tell(23, ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(23);
child.tell(new Exception(), ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(0);

4.4 Dispatchers

An Akka MessageDispatcher is what makes Akka Actors “tick”, it is the engine of the machine so to speak.
All MessageDispatcher implementations are also an ExecutionContext, which means that they can be
used to execute arbitrary code, for instance Futures.

4.4.1 Default dispatcher

Every ActorSystem will have a default dispatcher that will be used in case nothing else is config-
ured for an Actor. The default dispatcher can be configured, and is by default a Dispatcher
with the specified default-executor. If an ActorSystem is created with an ExecutionContext
passed in, this ExecutionContext will be used as the default executor for all dispatchers in
this ActorSystem. If no ExecutionContext is given, it will fallback to the executor specified in
akka.actor.default-dispatcher.default-executor.fallback. By default this is a “fork-
join-executor”, which gives excellent performance in most cases.

4.4.2 Looking up a Dispatcher

Dispatchers implement the ExecutionContext interface and can thus be used to run Future invocations etc.
// this is scala.concurrent.ExecutionContext
// for use with Futures, Scheduler, etc.
final ExecutionContext ex = system.dispatchers().lookup("my-dispatcher");

4.4.3 Setting the dispatcher for an Actor

So in case you want to give your Actor a different dispatcher than the default, you need to do two things, of
which the first is to configure the dispatcher:
my-dispatcher {
# Dispatcher is the name of the event-based dispatcher
type = Dispatcher
# What kind of ExecutionService to use
executor = "fork-join-executor"


# Configuration for the fork join pool
fork-join-executor {
# Min number of threads to cap factor-based parallelism number to
parallelism-min = 2
# Parallelism (threads) ... ceil(available processors * factor)
parallelism-factor = 2.0
# Max number of threads to cap factor-based parallelism number to
parallelism-max = 10
}
# Throughput defines the maximum number of messages to be
# processed per actor before the thread jumps to the next actor.
# Set to 1 for as fair as possible.
throughput = 100
}

Note: Note that the parallelism-max does not set the upper bound on the total number of threads allocated
by the ForkJoinPool. It is a setting specifically talking about the number of hot threads the pool keeps running in
order to reduce the latency of handling a new incoming task. You can read more about parallelism in the JDK’s
ForkJoinPool documentation.
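How the three fork-join settings combine can be sketched in plain Java (illustrative only, not Akka's actual pool-sizing code): the desired parallelism is ceil(available processors * factor), clamped between parallelism-min and parallelism-max.

```java
// Conceptual sketch of how parallelism-min/-factor/-max interact.
class ParallelismSketch {
    static int effectiveParallelism(int cores, double factor, int min, int max) {
        int desired = (int) Math.ceil(cores * factor); // factor-based parallelism
        return Math.min(max, Math.max(min, desired));  // clamp to [min, max]
    }
}
```

With the configuration above (factor 2.0, min 2, max 10), an 8-core machine would be capped at 10 hot threads, while a single-core machine would still get the minimum of 2.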

And here’s another example that uses the “thread-pool-executor”:
my-thread-pool-dispatcher {
# Dispatcher is the name of the event-based dispatcher
type = Dispatcher
# What kind of ExecutionService to use
executor = "thread-pool-executor"
# Configuration for the thread pool
thread-pool-executor {
# minimum number of threads to cap factor-based core number to
core-pool-size-min = 2
# No of core threads ... ceil(available processors * factor)
core-pool-size-factor = 2.0
# maximum number of threads to cap factor-based number to
core-pool-size-max = 10
}
# Throughput defines the maximum number of messages to be
# processed per actor before the thread jumps to the next actor.
# Set to 1 for as fair as possible.
throughput = 100
}

Note: The thread pool executor dispatcher is implemented by a
java.util.concurrent.ThreadPoolExecutor. You can read more about it in the JDK’s
ThreadPoolExecutor documentation.

For more options, see the default-dispatcher section of the Configuration.


Then you create the actor as usual and define the dispatcher in the deployment configuration.
ActorRef myActor =
  system.actorOf(Props.create(MyUntypedActor.class),
    "myactor");

akka.actor.deployment {
/myactor {
dispatcher = my-dispatcher
}
}


An alternative to the deployment configuration is to define the dispatcher in code. If you define the dispatcher
in the deployment configuration then this value will be used instead of the programmatically provided parameter.
ActorRef myActor =
  system.actorOf(Props.create(MyUntypedActor.class).withDispatcher("my-dispatcher"),
    "myactor3");

Note: The dispatcher you specify in withDispatcher and the dispatcher property in the deployment
configuration is in fact a path into your configuration. So in this example it’s a top-level section, but
you could for instance put it as a sub-section, where you’d use periods to denote sub-sections, like this:
"foo.bar.my-dispatcher"

4.4.4 Types of dispatchers

There are 3 different types of message dispatchers:


• Dispatcher
– This is an event-based dispatcher that binds a set of Actors to a thread pool. It is the default dispatcher
used if one is not specified.
– Sharability: Unlimited
– Mailboxes: Any, creates one per Actor
– Use cases: Default dispatcher, Bulkheading
– Driven by: java.util.concurrent.ExecutorService. Specify using “executor” with
“fork-join-executor”, “thread-pool-executor” or the FQCN of an
akka.dispatch.ExecutorServiceConfigurator
• PinnedDispatcher
– This dispatcher dedicates a unique thread for each actor using it; i.e. each actor will have its own
thread pool with only one thread in the pool.
– Sharability: None
– Mailboxes: Any, creates one per Actor
– Use cases: Bulkheading
– Driven by: Any akka.dispatch.ThreadPoolExecutorConfigurator, by default a
“thread-pool-executor”
• CallingThreadDispatcher
– This dispatcher runs invocations on the current thread only. This dispatcher does not create any new
threads, but it can be used from different threads concurrently for the same actor. See CallingThread-
Dispatcher for details and restrictions.
– Sharability: Unlimited
– Mailboxes: Any, creates one per Actor per Thread (on demand)
– Use cases: Testing
– Driven by: The calling thread (duh)

More dispatcher configuration examples

Configuring a dispatcher with fixed thread pool size, e.g. for actors that perform blocking IO:


blocking-io-dispatcher {
type = Dispatcher
executor = "thread-pool-executor"
thread-pool-executor {
fixed-pool-size = 32
}
throughput = 1
}

And then using it:


ActorRef myActor = system.actorOf(Props.create(MyActor.class)
  .withDispatcher("blocking-io-dispatcher"));

Configuring a PinnedDispatcher:
my-pinned-dispatcher {
executor = "thread-pool-executor"
type = PinnedDispatcher
}

And then using it:


ActorRef myActor = system.actorOf(Props.create(MyActor.class)
  .withDispatcher("my-pinned-dispatcher"));

Note that the thread-pool-executor configuration as per the above my-thread-pool-dispatcher
example is NOT applicable. This is because every actor will have its own thread pool when using
PinnedDispatcher, and that pool will have only one thread.
Note that it’s not guaranteed that the same thread is used over time, since the core pool timeout is used for
PinnedDispatcher to keep resource usage down in case of idle actors. To use the same thread all the
time you need to add thread-pool-executor.allow-core-timeout=off to the configuration of the
PinnedDispatcher.

4.5 Mailboxes

An Akka Mailbox holds the messages that are destined for an Actor. Normally each Actor has its own
mailbox, but with for example a BalancingPool all routees will share a single mailbox instance.

4.5.1 Mailbox Selection

Requiring a Message Queue Type for an Actor

It is possible to require a certain type of message queue for a certain type of actor by having that actor implement
the parameterized interface RequiresMessageQueue. Here is an example:
import akka.dispatch.BoundedMessageQueueSemantics;
import akka.dispatch.RequiresMessageQueue;

public class MyBoundedUntypedActor extends MyUntypedActor
  implements RequiresMessageQueue<BoundedMessageQueueSemantics> {
}

The type parameter to the RequiresMessageQueue interface needs to be mapped to a mailbox in configura-
tion like this:
bounded-mailbox {
  mailbox-type = "akka.dispatch.BoundedMailbox"
  mailbox-capacity = 1000
  mailbox-push-timeout-time = 10s
}

akka.actor.mailbox.requirements {
  "akka.dispatch.BoundedMessageQueueSemantics" = bounded-mailbox
}

Now every time you create an actor of type MyBoundedUntypedActor it will try to get a bounded mailbox.
If the actor has a different mailbox configured in deployment, either directly or via a dispatcher with a specified
mailbox type, then that will override this mapping.

Note: The type of the queue in the mailbox created for an actor will be checked against the required type in the
interface and if the queue doesn’t implement the required type then actor creation will fail.

Requiring a Message Queue Type for a Dispatcher

A dispatcher may also have a requirement for the mailbox type used by the actors running on it. An example is
the BalancingDispatcher which requires a message queue that is thread-safe for multiple concurrent consumers.
Such a requirement is formulated within the dispatcher configuration section like this:
my-dispatcher {
mailbox-requirement = org.example.MyInterface
}

The given requirement names a class or interface which will then be ensured to be a supertype of the message
queue’s implementation. In case of a conflict—e.g. if the actor requires a mailbox type which does not satisfy this
requirement—then actor creation will fail.

How the Mailbox Type is Selected

When an actor is created, the ActorRefProvider first determines the dispatcher which will execute it. Then
the mailbox is determined as follows:
1. If the actor’s deployment configuration section contains a mailbox key then that names a configuration
section describing the mailbox type to be used.
2. If the actor’s Props contains a mailbox selection—i.e. withMailbox was called on it—then that names
a configuration section describing the mailbox type to be used.
3. If the dispatcher’s configuration section contains a mailbox-type key the same section will be used to
configure the mailbox type.
4. If the actor requires a mailbox type as described above then the mapping for that requirement will be used
to determine the mailbox type to be used; if that fails then the dispatcher’s requirement—if any—will be
tried instead.
5. If the dispatcher requires a mailbox type as described above then the mapping for that requirement will be
used to determine the mailbox type to be used.
6. The default mailbox akka.actor.default-mailbox will be used.
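The six steps form a first-match fallback chain; a hypothetical sketch of that order (the parameter names are illustrative, not Akka's internal API):

```java
import java.util.Optional;

public class MailboxSelection {
    // Hypothetical inputs standing in for the five configurable sources above;
    // this is not Akka's internal API, just the first-match order it documents.
    static String select(Optional<String> deploymentMailbox,
                         Optional<String> propsMailbox,
                         Optional<String> dispatcherMailboxType,
                         Optional<String> actorRequirementMapping,
                         Optional<String> dispatcherRequirementMapping) {
        if (deploymentMailbox.isPresent()) return deploymentMailbox.get();            // 1. deployment "mailbox" key
        if (propsMailbox.isPresent()) return propsMailbox.get();                      // 2. Props.withMailbox
        if (dispatcherMailboxType.isPresent()) return dispatcherMailboxType.get();    // 3. dispatcher mailbox-type
        if (actorRequirementMapping.isPresent()) return actorRequirementMapping.get();// 4. actor's requirement
        if (dispatcherRequirementMapping.isPresent())
            return dispatcherRequirementMapping.get();                                // 5. dispatcher's requirement
        return "akka.actor.default-mailbox";                                          // 6. default
    }

    public static void main(String[] args) {
        System.out.println(select(Optional.empty(), Optional.of("prio-mailbox"),
            Optional.empty(), Optional.empty(), Optional.empty())); // prints prio-mailbox
    }
}
```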

Default Mailbox

When the mailbox is not specified as described above the default mailbox is used. By default it is an unbounded
mailbox, which is backed by a java.util.concurrent.ConcurrentLinkedQueue.
SingleConsumerOnlyUnboundedMailbox is an even more efficient mailbox, and it can be used as the
default mailbox, but it cannot be used with a BalancingDispatcher.
Configuration of SingleConsumerOnlyUnboundedMailbox as default mailbox:


akka.actor.default-mailbox {
  mailbox-type = "akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
}

Which Configuration is passed to the Mailbox Type

Each mailbox type is implemented by a class which extends MailboxType and takes two constructor arguments:
an akka.actor.ActorSystem.Settings object and a com.typesafe.config.Config section. The latter is computed by obtaining the named
configuration section from the actor system’s configuration, overriding its id key with the configuration path of
the mailbox type and adding a fall-back to the default mailbox configuration section.

4.5.2 Builtin Mailbox Implementations

Akka comes shipped with a number of mailbox implementations:


• UnboundedMailbox (default)
– The default mailbox
– Backed by a java.util.concurrent.ConcurrentLinkedQueue
– Blocking: No
– Bounded: No
– Configuration name: "unbounded" or "akka.dispatch.UnboundedMailbox"
• SingleConsumerOnlyUnboundedMailbox
This queue may or may not be faster than the default one depending on your use-case—be sure to benchmark
properly!
– Backed by a Multiple-Producer Single-Consumer queue, cannot be used with
BalancingDispatcher
– Blocking: No
– Bounded: No
– Configuration name: "akka.dispatch.SingleConsumerOnlyUnboundedMailbox"
• NonBlockingBoundedMailbox
– Backed by a very efficient Multiple-Producer Single-Consumer queue
– Blocking: No (discards overflowing messages into deadLetters)
– Bounded: Yes
– Configuration name: "akka.dispatch.NonBlockingBoundedMailbox"
• UnboundedControlAwareMailbox
– Delivers messages that extend akka.dispatch.ControlMessage with higher priority
– Backed by two java.util.concurrent.ConcurrentLinkedQueue
– Blocking: No
– Bounded: No
– Configuration name: "akka.dispatch.UnboundedControlAwareMailbox"
• UnboundedPriorityMailbox
– Backed by a java.util.concurrent.PriorityBlockingQueue
– Delivery order for messages of equal priority is undefined - contrast with the
UnboundedStablePriorityMailbox


– Blocking: No
– Bounded: No
– Configuration name: "akka.dispatch.UnboundedPriorityMailbox"
• UnboundedStablePriorityMailbox
– Backed by a java.util.concurrent.PriorityBlockingQueue wrapped in an
akka.util.PriorityQueueStabilizer
– FIFO order is preserved for messages of equal priority - contrast with the UnboundedPriorityMailbox
– Blocking: No
– Bounded: No
– Configuration name: "akka.dispatch.UnboundedStablePriorityMailbox"
The following are other bounded mailbox implementations, which will block the sender if the capacity is reached
and a non-zero mailbox-push-timeout-time is configured.

Note: The following mailboxes should only be used with zero mailbox-push-timeout-time.
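The blocking behaviour of a non-zero mailbox-push-timeout-time can be sketched with a plain LinkedBlockingQueue: a full queue makes the sender wait up to the timeout before the enqueue is reported as failed (in Akka, the message would then go to dead letters). This is an illustrative analogy, not Akka's implementation:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BoundedPushTimeout {
    // Sketch of bounded-mailbox enqueue: block up to the push timeout,
    // report failure if the queue is still full when it expires.
    static boolean enqueue(LinkedBlockingQueue<String> mailbox, String msg,
                           long timeoutMillis) throws InterruptedException {
        return mailbox.offer(msg, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> mailbox = new LinkedBlockingQueue<>(1);
        System.out.println(enqueue(mailbox, "first", 10));  // true: capacity available
        System.out.println(enqueue(mailbox, "second", 10)); // false after ~10 ms: queue full
    }
}
```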

• BoundedMailbox
– Backed by a java.util.concurrent.LinkedBlockingQueue
– Blocking: Yes if used with non-zero mailbox-push-timeout-time, otherwise No
– Bounded: Yes
– Configuration name: "bounded" or "akka.dispatch.BoundedMailbox"
• BoundedPriorityMailbox
– Backed by a java.util.PriorityQueue wrapped in an
akka.util.BoundedBlockingQueue
– Delivery order for messages of equal priority is undefined - contrast with the
BoundedStablePriorityMailbox
– Blocking: Yes if used with non-zero mailbox-push-timeout-time, otherwise No
– Bounded: Yes
– Configuration name: "akka.dispatch.BoundedPriorityMailbox"
• BoundedStablePriorityMailbox
– Backed by a java.util.PriorityQueue wrapped in an
akka.util.PriorityQueueStabilizer and an akka.util.BoundedBlockingQueue
– FIFO order is preserved for messages of equal priority - contrast with the BoundedPriorityMailbox
– Blocking: Yes if used with non-zero mailbox-push-timeout-time, otherwise No
– Bounded: Yes
– Configuration name: "akka.dispatch.BoundedStablePriorityMailbox"
• BoundedControlAwareMailbox
– Delivers messages that extend akka.dispatch.ControlMessage with higher priority
– Backed by two java.util.concurrent.ConcurrentLinkedQueue and blocking on
enqueue if capacity has been reached
– Blocking: Yes if used with non-zero mailbox-push-timeout-time, otherwise No
– Bounded: Yes
– Configuration name: "akka.dispatch.BoundedControlAwareMailbox"
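The difference between the stable and non-stable priority mailboxes above can be sketched with a plain PriorityQueue: adding a monotonically increasing sequence number as a tiebreaker (the idea behind the stabilizer wrapper) makes messages of equal priority come out in FIFO order. A minimal, Akka-free sketch:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class StablePriority {
    static final class Entry {
        final int prio; final long seq; final String msg;
        Entry(int prio, long seq, String msg) { this.prio = prio; this.seq = seq; this.msg = msg; }
    }

    // Ties on priority are broken by insertion order -> FIFO for equal priorities.
    static List<String> drainStable(List<Entry> in) {
        PriorityQueue<Entry> q = new PriorityQueue<>(
            Comparator.<Entry>comparingInt(e -> e.prio).thenComparingLong(e -> e.seq));
        q.addAll(in);
        List<String> out = new ArrayList<>();
        while (!q.isEmpty()) out.add(q.poll().msg);
        return out;
    }

    public static void main(String[] args) {
        List<Entry> msgs = new ArrayList<>();
        msgs.add(new Entry(1, 0, "a"));
        msgs.add(new Entry(1, 1, "b"));
        msgs.add(new Entry(0, 2, "urgent"));
        // urgent first; "a" before "b" because the sequence number preserves FIFO
        System.out.println(drainStable(msgs)); // prints [urgent, a, b]
    }
}
```

Without the sequence tiebreaker, a plain PriorityQueue makes no ordering promise between "a" and "b", which is exactly the undefined ordering of the non-stable variants.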


4.5.3 Mailbox configuration examples

PriorityMailbox

How to create a PriorityMailbox:


public class MyPrioMailbox extends UnboundedStablePriorityMailbox {
  // needed for reflective instantiation
  public MyPrioMailbox(ActorSystem.Settings settings, Config config) {
    // Create a new PriorityGenerator, lower prio means more important
    super(new PriorityGenerator() {
      @Override
      public int gen(Object message) {
        if (message.equals("highpriority"))
          return 0; // ’highpriority messages should be treated first if possible
        else if (message.equals("lowpriority"))
          return 2; // ’lowpriority messages should be treated last if possible
        else if (message.equals(PoisonPill.getInstance()))
          return 3; // PoisonPill when no other left
        else
          return 1; // By default they go between high and low prio
      }
    });
  }
}

And then add it to the configuration:


prio-dispatcher {
mailbox-type = "docs.dispatcher.DispatcherDocTest$MyPrioMailbox"
//Other dispatcher configuration goes here
}

And then an example on how you would use it:


class Demo extends UntypedActor {
  LoggingAdapter log = Logging.getLogger(getContext().system(), this);
  {
    for (Object msg : new Object[] { "lowpriority", "lowpriority",
        "highpriority", "pigdog", "pigdog2", "pigdog3", "highpriority",
        PoisonPill.getInstance() }) {
      getSelf().tell(msg, getSelf());
    }
  }

  public void onReceive(Object message) {
    log.info(message.toString());
  }
}

// We create a new Actor that just prints out what it processes


ActorRef myActor = system.actorOf(Props.create(Demo.class, this)
  .withDispatcher("prio-dispatcher"));

/*
Logs:
’highpriority
’highpriority
’pigdog
’pigdog2
’pigdog3
’lowpriority
’lowpriority
*/


It is also possible to configure a mailbox type directly like this:


prio-mailbox {
mailbox-type = "docs.dispatcher.DispatcherDocTest$MyPrioMailbox"
//Other mailbox configuration goes here
}

akka.actor.deployment {
/priomailboxactor {
mailbox = prio-mailbox
}
}

And then use it either from deployment like this:


ActorRef myActor =
  system.actorOf(Props.create(MyUntypedActor.class),
    "priomailboxactor");

Or code like this:


ActorRef myActor =
  system.actorOf(Props.create(MyUntypedActor.class)
    .withMailbox("prio-mailbox"));

ControlAwareMailbox

A ControlAwareMailbox can be very useful if an actor needs to be able to receive control messages imme-
diately no matter how many other messages are already in its mailbox.
It can be configured like this:
control-aware-dispatcher {
mailbox-type = "akka.dispatch.UnboundedControlAwareMailbox"
//Other dispatcher configuration goes here
}

Control messages need to implement the ControlMessage interface:


public class MyControlMessage implements ControlMessage {}

And then an example on how you would use it:


class Demo extends UntypedActor {
  LoggingAdapter log = Logging.getLogger(getContext().system(), this);
  {
    for (Object msg : new Object[] { "foo", "bar", new MyControlMessage(),
        PoisonPill.getInstance() }) {
      getSelf().tell(msg, getSelf());
    }
  }

  public void onReceive(Object message) {
    log.info(message.toString());
  }
}

// We create a new Actor that just prints out what it processes


ActorRef myActor = system.actorOf(Props.create(Demo.class, this)
  .withDispatcher("control-aware-dispatcher"));

/*
Logs:
’MyControlMessage

’foo
’bar
*/
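The two-queue structure described for the control-aware mailboxes can be sketched in plain Java: control messages go to their own queue, which is always drained first. This is an illustrative analogy, not Akka's implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ControlAwareQueue<T> {
    // One queue for control messages, one for everything else,
    // mirroring the "backed by two queues" mailbox description.
    private final Queue<T> control = new ArrayDeque<>();
    private final Queue<T> ordinary = new ArrayDeque<>();

    public void enqueue(T msg, boolean isControl) {
        (isControl ? control : ordinary).add(msg);
    }

    // Control messages always dequeue before ordinary ones.
    public T dequeue() {
        T c = control.poll();
        return c != null ? c : ordinary.poll();
    }

    public static void main(String[] args) {
        ControlAwareQueue<String> q = new ControlAwareQueue<>();
        q.enqueue("foo", false);
        q.enqueue("bar", false);
        q.enqueue("MyControlMessage", true);
        System.out.println(q.dequeue()); // prints MyControlMessage
        System.out.println(q.dequeue()); // prints foo
        System.out.println(q.dequeue()); // prints bar
    }
}
```

The output mirrors the log order in the example above: the control message overtakes "foo" and "bar" even though it was enqueued last.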

4.5.4 Creating your own Mailbox type

An example is worth a thousand quacks:


import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.dispatch.Envelope;
import akka.dispatch.MailboxType;
import akka.dispatch.MessageQueue;
import akka.dispatch.ProducesMessageQueue;
import com.typesafe.config.Config;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import scala.Option;

public class MyUnboundedJMailbox implements MailboxType,
    ProducesMessageQueue<MyUnboundedJMailbox.MyMessageQueue> {

// This is the MessageQueue implementation


public static class MyMessageQueue implements MessageQueue,
MyUnboundedJMessageQueueSemantics {
private final Queue<Envelope> queue =
new ConcurrentLinkedQueue<Envelope>();

// these must be implemented; queue used as example


public void enqueue(ActorRef receiver, Envelope handle) {
  queue.offer(handle);
}
public Envelope dequeue() { return queue.poll(); }
public int numberOfMessages() { return queue.size(); }
public boolean hasMessages() { return !queue.isEmpty(); }
public void cleanUp(ActorRef owner, MessageQueue deadLetters) {
  for (Envelope handle: queue) {
    deadLetters.enqueue(owner, handle);
  }
}
}
}

// This constructor signature must exist, it will be called by Akka


public MyUnboundedJMailbox(ActorSystem.Settings settings, Config config) {
// put your initialization code here
}

// The create method is called to create the MessageQueue


public MessageQueue create(Option<ActorRef> owner, Option<ActorSystem> system) {
return new MyMessageQueue();
}
}

// Marker interface used for mailbox requirements mapping


public interface MyUnboundedJMessageQueueSemantics {
}

And then you just specify the FQCN of your MailboxType as the value of the “mailbox-type” in the dispatcher
configuration, or the mailbox configuration.

Note: Make sure to include a constructor which takes akka.actor.ActorSystem.Settings and
com.typesafe.config.Config arguments, as this constructor is invoked reflectively to construct your
mailbox type. The config passed in as second argument is that section from the configuration which describes
the dispatcher or mailbox setting using this mailbox type; the mailbox type will be instantiated once for each
dispatcher or mailbox setting using it.

You can also use the mailbox as a requirement on the dispatcher like this:
custom-dispatcher {
mailbox-requirement =
  "docs.dispatcher.MyUnboundedJMessageQueueSemantics"
}

akka.actor.mailbox.requirements {
  "docs.dispatcher.MyUnboundedJMessageQueueSemantics" =
    custom-dispatcher-mailbox
}

custom-dispatcher-mailbox {
mailbox-type = "docs.dispatcher.MyUnboundedJMailbox"
}

Or by defining the requirement on your actor class like this:


public class MySpecialActor extends UntypedActor implements
RequiresMessageQueue<MyUnboundedJMessageQueueSemantics> {
// ...
}

4.5.5 Special Semantics of system.actorOf

In order to make system.actorOf both synchronous and non-blocking while keeping the return type
ActorRef (and the semantics that the returned ref is fully functional), special handling takes place for this
case. Behind the scenes, a hollow kind of actor reference is constructed, which is sent to the system’s guardian
actor who actually creates the actor and its context and puts those inside the reference. Until that has happened,
messages sent to the ActorRef will be queued locally, and only upon swapping in the real filling will they be
transferred into the real mailbox. Thus,
final Props props = ...
// this actor uses MyCustomMailbox, which is assumed to be a singleton
system.actorOf(props.withDispatcher("myCustomMailbox")).tell("bang", sender);
assert(MyCustomMailbox.getInstance().getLastEnqueued().equals("bang"));

will probably fail; you will have to allow for some time to pass and retry the check à la TestKit.awaitCond.
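The "queue locally until the real actor exists" behaviour can be sketched with a small stand-in class; the names and the Consumer-based target are illustrative only, not Akka's internal reference type:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

public class PendingRef {
    // Stand-in for the "hollow" reference: until the real actor exists
    // (modelled here as a Consumer), messages are buffered locally; on
    // attach they are flushed, in order, into the real target.
    private final Queue<String> pending = new ArrayDeque<>();
    private Consumer<String> target; // null until the actor has been created

    public synchronized void tell(String msg) {
        if (target != null) target.accept(msg);
        else pending.add(msg);
    }

    public synchronized void attach(Consumer<String> actor) {
        target = actor;
        while (!pending.isEmpty()) actor.accept(pending.poll()); // flush the local queue
    }

    public static void main(String[] args) {
        PendingRef ref = new PendingRef();
        ref.tell("bang"); // buffered: no actor yet
        ref.attach(msg -> System.out.println("got " + msg)); // prints got bang
    }
}
```

This is why the assertion in the snippet above is racy: "bang" may still sit in the local buffer when the check runs.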

4.6 Routing

Messages can be sent via a router to efficiently route them to destination actors, known as its routees. A Router
can be used inside or outside of an actor, and you can manage the routees yourself or use a self-contained router
actor with configuration capabilities.
Different routing strategies can be used, according to your application’s needs. Akka comes with several useful
routing strategies right out of the box. But, as you will see in this chapter, it is also possible to create your own.

4.6.1 A Simple Router

The following example illustrates how to use a Router and manage the routees from within an actor.


public final class Work implements Serializable {
  private static final long serialVersionUID = 1L;
  public final String payload;
  public Work(String payload) {
    this.payload = payload;
  }
}

public class Master extends UntypedActor {
  Router router;
  {
    List<Routee> routees = new ArrayList<Routee>();
    for (int i = 0; i < 5; i++) {
      ActorRef r = getContext().actorOf(Props.create(Worker.class));
      getContext().watch(r);
      routees.add(new ActorRefRoutee(r));
    }
    router = new Router(new RoundRobinRoutingLogic(), routees);
  }

  public void onReceive(Object msg) {
    if (msg instanceof Work) {
      router.route(msg, getSender());
    } else if (msg instanceof Terminated) {
      router = router.removeRoutee(((Terminated) msg).actor());
      ActorRef r = getContext().actorOf(Props.create(Worker.class));
      getContext().watch(r);
      router = router.addRoutee(new ActorRefRoutee(r));
    }
  }
}

We create a Router and specify that it should use RoundRobinRoutingLogic when routing the messages
to the routees.
The routing logics shipped with Akka are:
• akka.routing.RoundRobinRoutingLogic
• akka.routing.RandomRoutingLogic
• akka.routing.SmallestMailboxRoutingLogic
• akka.routing.BroadcastRoutingLogic
• akka.routing.ScatterGatherFirstCompletedRoutingLogic
• akka.routing.TailChoppingRoutingLogic
• akka.routing.ConsistentHashingRoutingLogic
We create the routees as ordinary child actors wrapped in ActorRefRoutee. We watch the routees to be able
to replace them if they are terminated.
Sending messages via the router is done with the route method, as is done for the Work messages in the example
above.
The Router is immutable and the RoutingLogic is thread safe; meaning that they can also be used outside
of actors.
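The round-robin selection used above can be sketched, Akka-free, as an atomic counter taken modulo the number of routees, which is also why the logic is safe to share between threads:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class RoundRobinSelector<T> {
    private final AtomicLong next = new AtomicLong();

    // Thread-safe: each call picks the next routee in cyclic order.
    public T select(List<T> routees) {
        int i = (int) (next.getAndIncrement() % routees.size());
        return routees.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSelector<String> logic = new RoundRobinSelector<>();
        List<String> routees = List.of("w1", "w2", "w3");
        System.out.println(logic.select(routees)); // prints w1
        System.out.println(logic.select(routees)); // prints w2
        System.out.println(logic.select(routees)); // prints w3
        System.out.println(logic.select(routees)); // prints w1 (wraps around)
    }
}
```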

Note: In general, any message sent to a router will be sent onwards to its routees, but there is one exception. The
special Broadcast Messages will send to all of a router’s routees. However, do not use Broadcast Messages when
you use BalancingPool for routees as described in Specially Handled Messages.


4.6.2 A Router Actor

A router can also be created as a self contained actor that manages the routees itself and loads routing logic and
other settings from configuration.
This type of router actor comes in two distinct flavors:
• Pool - The router creates routees as child actors and removes them from the router if they terminate.
• Group - The routee actors are created externally to the router and the router sends messages to the specified
path using actor selection, without watching for termination.
The settings for a router actor can be defined in configuration or programmatically. In order to make an actor
make use of an externally configurable router the FromConfig props wrapper must be used to denote that the
actor accepts routing settings from configuration. This is in contrast with Remote Deployment where such a marker
props is not necessary. If the props of an actor is NOT wrapped in FromConfig it will ignore the router section
of the deployment configuration.
You send messages to the routees via the router actor in the same way as for ordinary actors, i.e. via its ActorRef.
The router actor forwards messages onto its routees without changing the original sender. When a routee replies
to a routed message, the reply will be sent to the original sender, not to the router actor.

Note: In general, any message sent to a router will be sent onwards to its routees, but there are a few exceptions.
These are documented in the Specially Handled Messages section below.

Pool

The following code and configuration snippets show how to create a round-robin router that forwards messages to
five Worker routees. The routees will be created as the router’s children.
akka.actor.deployment {
/parent/router1 {
router = round-robin-pool
nr-of-instances = 5
}
}

ActorRef router1 =
  getContext().actorOf(FromConfig.getInstance().props(Props.create(Worker.class)),
    "router1");

Here is the same example, but with the router configuration provided programmatically instead of from configu-
ration.
ActorRef router2 =
  getContext().actorOf(new RoundRobinPool(5).props(Props.create(Worker.class)),
    "router2");

Remote Deployed Routees

In addition to being able to create local actors as routees, you can instruct the router to deploy its created children
on a set of remote hosts. Routees will be deployed in round-robin fashion. In order to deploy routees remotely,
wrap the router configuration in a RemoteRouterConfig, attaching the remote addresses of the nodes to
deploy to. Remote deployment requires the akka-remote module to be included in the classpath.
Address[] addresses = {
  new Address("akka.tcp", "remotesys", "otherhost", 1234),
  AddressFromURIString.parse("akka.tcp://othersys@anotherhost:1234")};
ActorRef routerRemote = system.actorOf(
  new RemoteRouterConfig(new RoundRobinPool(5), addresses).props(
    Props.create(Echo.class)));

Senders

When a routee sends a message, it can set itself as the sender.


getSender().tell("reply", getSelf());

However, it is often useful for routees to set the router as a sender. For example, you might want to set the router
as the sender if you want to hide the details of the routees behind the router. The following code snippet shows
how to set the parent router as sender.
getSender().tell("reply", getContext().parent());

Supervision

Routees that are created by a pool router will be created as the router’s children. The router is therefore also the
children’s supervisor.
The supervision strategy of the router actor can be configured with the supervisorStrategy property of the
Pool. If no configuration is provided, routers default to a strategy of “always escalate”. This means that errors are
passed up to the router’s supervisor for handling. The router’s supervisor will decide what to do about any errors.
Note the router’s supervisor will treat the error as an error with the router itself. Therefore a directive to stop or
restart will cause the router itself to stop or restart. The router, in turn, will cause its children to stop and restart.
It should be mentioned that the router’s restart behavior has been overridden so that a restart, while still re-creating
the children, will preserve the same number of actors in the pool.
This means that if you have not specified supervisorStrategy of the router or its parent a failure in a routee
will escalate to the parent of the router, which will by default restart the router, which will restart all routees (it
uses Escalate and does not stop routees during restart). The reason is to make the default behave such that adding
withRouter to a child’s definition does not change the supervision strategy applied to the child. This might be
an inefficiency that you can avoid by specifying the strategy when defining the router.
Setting the strategy is easily done:
final SupervisorStrategy strategy =
  new OneForOneStrategy(5, Duration.create(1, TimeUnit.MINUTES),
    Collections.<Class<? extends Throwable>>singletonList(Exception.class));
final ActorRef router = system.actorOf(new RoundRobinPool(5).
  withSupervisorStrategy(strategy).props(Props.create(Echo.class)));

Note: If the child of a pool router terminates, the pool router will not automatically spawn a new child. In the
event that all children of a pool router have terminated the router will terminate itself unless it is a dynamic router,
e.g. using a resizer.

Group

Sometimes, rather than having the router actor create its routees, it is desirable to create routees separately and
provide them to the router for its use. You can do this by passing in paths of the routees to the router’s configuration.
Messages will be sent with ActorSelection to these paths.
The example below shows how to create a router by providing it with the path strings of three routee actors.


akka.actor.deployment {
  /parent/router3 {
    router = round-robin-group
    routees.paths = ["/user/workers/w1", "/user/workers/w2", "/user/workers/w3"]
  }
}

ActorRef router3 =
  getContext().actorOf(FromConfig.getInstance().props(), "router3");

Here is the same example, but with the router configuration provided programmatically instead of from configu-
ration.
ActorRef router4 =
getContext().actorOf(new RoundRobinGroup(paths).props(), "router4");

The routee actors are created externally from the router:


system.actorOf(Props.create(Workers.class), "workers");

public class Workers extends UntypedActor {
  @Override public void preStart() {
    getContext().actorOf(Props.create(Worker.class), "w1");
    getContext().actorOf(Props.create(Worker.class), "w2");
    getContext().actorOf(Props.create(Worker.class), "w3");
  }
// ...

The paths may contain protocol and address information for actors running on remote hosts. Remoting requires
the akka-remote module to be included in the classpath.
akka.actor.deployment {
  /parent/remoteGroup {
    router = round-robin-group
    routees.paths = [
      "akka.tcp://app@10.0.0.1:2552/user/workers/w1",
      "akka.tcp://app@10.0.0.2:2552/user/workers/w1",
      "akka.tcp://app@10.0.0.3:2552/user/workers/w1"]
  }
}

4.6.3 Router usage

In this section we will describe how to create the different types of router actors.
The router actors in this section are created from within a top level actor named parent. Note that deployment
paths in the configuration start with /parent/ followed by the name of the router actor.
system.actorOf(Props.create(Parent.class), "parent");

RoundRobinPool and RoundRobinGroup

Routes in a round-robin fashion to its routees.


RoundRobinPool defined in configuration:
akka.actor.deployment {
/parent/router1 {
router = round-robin-pool
nr-of-instances = 5
}
}


ActorRef router1 =
  getContext().actorOf(FromConfig.getInstance().props(Props.create(Worker.class)),
    "router1");

RoundRobinPool defined in code:


ActorRef router2 =
  getContext().actorOf(new RoundRobinPool(5).props(Props.create(Worker.class)),
    "router2");

RoundRobinGroup defined in configuration:


akka.actor.deployment {
  /parent/router3 {
    router = round-robin-group
    routees.paths = ["/user/workers/w1", "/user/workers/w2", "/user/workers/w3"]
  }
}

ActorRef router3 =
  getContext().actorOf(FromConfig.getInstance().props(), "router3");

RoundRobinGroup defined in code:


List<String> paths = Arrays.asList("/user/workers/w1", "/user/workers/w2",
  "/user/workers/w3");
ActorRef router4 =
  getContext().actorOf(new RoundRobinGroup(paths).props(), "router4");

RandomPool and RandomGroup

This router type selects one of its routees randomly for each message.
RandomPool defined in configuration:
akka.actor.deployment {
/parent/router5 {
router = random-pool
nr-of-instances = 5
}
}

ActorRef router5 =
  getContext().actorOf(FromConfig.getInstance().props(
    Props.create(Worker.class)), "router5");

RandomPool defined in code:


ActorRef router6 =
  getContext().actorOf(new RandomPool(5).props(Props.create(Worker.class)),
    "router6");

RandomGroup defined in configuration:


akka.actor.deployment {
  /parent/router7 {
    router = random-group
    routees.paths = ["/user/workers/w1", "/user/workers/w2", "/user/workers/w3"]
  }
}

ActorRef router7 =
  getContext().actorOf(FromConfig.getInstance().props(), "router7");


RandomGroup defined in code:


List<String> paths = Arrays.asList("/user/workers/w1", "/user/workers/w2",
  "/user/workers/w3");
ActorRef router8 =
  getContext().actorOf(new RandomGroup(paths).props(), "router8");

BalancingPool

A Router that will try to redistribute work from busy routees to idle routees. All routees share the same mailbox.

Note: The BalancingPool has the property that its routees do not have truly distinct identity: they have different
names, but talking to them will not end up at the right actor in most cases. Therefore you cannot use it for
workflows that require state to be kept within the routee; in that case you would have to include the whole state in
the messages.
With a SmallestMailboxPool you can have a vertically scaling service that can interact in a stateful fashion with
other services in the back-end before replying to the original client. The other advantage is that it does not place
a restriction on the message queue implementation as BalancingPool does.

Note: Do not use Broadcast Messages when you use BalancingPool for routers, as described in Specially Handled
Messages.

BalancingPool defined in configuration:


akka.actor.deployment {
/parent/router9 {
router = balancing-pool
nr-of-instances = 5
}
}

ActorRef router9 =
  getContext().actorOf(FromConfig.getInstance().props(
    Props.create(Worker.class)), "router9");

BalancingPool defined in code:


ActorRef router10 =
  getContext().actorOf(new BalancingPool(5).props(
    Props.create(Worker.class)), "router10");

Additional configuration for the balancing dispatcher, which is used by the pool, can be configured in the
pool-dispatcher section of the router deployment configuration.
akka.actor.deployment {
/parent/router9b {
router = balancing-pool
nr-of-instances = 5
pool-dispatcher {
attempt-teamwork = off
}
}
}

The BalancingPool automatically uses a special BalancingDispatcher for its routees - disregarding
any dispatcher that is set on the routee Props object. This is needed in order to implement the balancing semantics
via sharing the same mailbox by all the routees.
While it is not possible to change the dispatcher used by the routees, it is possible to fine tune the used
executor. By default the fork-join-executor is used and can be configured as explained in Dispatchers. In
situations where the routees are expected to perform blocking operations it may be useful to replace it with a
thread-pool-executor hinting the number of allocated threads explicitly:
akka.actor.deployment {
/parent/router10b {
router = balancing-pool
nr-of-instances = 5
pool-dispatcher {
executor = "thread-pool-executor"

# allocate exactly 5 threads for this pool


thread-pool-executor {
core-pool-size-min = 5
core-pool-size-max = 5
}
}
}
}

It is also possible to change the mailbox used by the balancing dispatcher for scenarios where the default
unbounded mailbox is not well suited. An example of such a scenario could arise where there is a need to
manage priority for each message. You can then implement a priority mailbox and configure your dispatcher:
akka.actor.deployment {
/parent/router10c {
router = balancing-pool
nr-of-instances = 5
pool-dispatcher {
mailbox = [Link]
}
}
}

Note: Bear in mind that BalancingDispatcher requires a message queue that must be thread-safe for
multiple concurrent consumers. So it is mandatory for the message queue backing a custom mailbox for this kind
of dispatcher to implement akka.dispatch.MultipleConsumerSemantics. See details on how to implement your
custom mailbox in Mailboxes.

There is no Group variant of the BalancingPool.

SmallestMailboxPool

A Router that tries to send to the non-suspended child routee with the fewest messages in its mailbox. The selection is
done in this order:
• pick any idle routee (not processing a message) with an empty mailbox
• pick any routee with an empty mailbox
• pick the routee with the fewest pending messages in its mailbox
• pick any remote routee; remote actors are considered the lowest priority, since their mailbox size is unknown
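The selection order above can be sketched in plain Java. All names below are hypothetical stand-ins for illustration; Akka's actual implementation lives in SmallestMailboxRoutingLogic and inspects the actor cells directly:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for the information the pool inspects per routee.
class RouteeStat {
    final String name;
    final boolean remote;      // mailbox size unknown for remote routees
    final boolean processing;  // currently handling a message
    final int mailboxSize;     // pending messages (meaningless when remote)

    RouteeStat(String name, boolean remote, boolean processing, int mailboxSize) {
        this.name = name;
        this.remote = remote;
        this.processing = processing;
        this.mailboxSize = mailboxSize;
    }

    // Lower score = preferred; mirrors the documented selection order.
    int score() {
        if (remote) return Integer.MAX_VALUE;           // lowest priority
        if (!processing && mailboxSize == 0) return 0;  // idle, empty mailbox
        if (mailboxSize == 0) return 1;                 // empty mailbox
        return 2 + mailboxSize;                         // fewest pending wins
    }
}

public class SmallestMailboxSketch {
    static RouteeStat select(List<RouteeStat> routees) {
        return routees.stream()
                .min(Comparator.comparingInt(RouteeStat::score))
                .orElseThrow(IllegalStateException::new);
    }
}
```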
SmallestMailboxPool defined in configuration:
akka.actor.deployment {
  /parent/router11 {
    router = smallest-mailbox-pool
    nr-of-instances = 5
  }
}


ActorRef router11 =
  getContext().actorOf(FromConfig.getInstance().props(
    Props.create(Worker.class)), "router11");

SmallestMailboxPool defined in code:


ActorRef router12 =
getContext().actorOf(new SmallestMailboxPool(5).props(
[Link]([Link])), "router12");

There is no Group variant of the SmallestMailboxPool because the size of the mailbox and the internal dispatching
state of the actor is not practically available from the paths of the routees.

BroadcastPool and BroadcastGroup

A broadcast router forwards the message it receives to all its routees.


BroadcastPool defined in configuration:
akka.actor.deployment {
  /parent/router13 {
    router = broadcast-pool
    nr-of-instances = 5
  }
}

ActorRef router13 =
  getContext().actorOf(FromConfig.getInstance().props(
    Props.create(Worker.class)), "router13");

BroadcastPool defined in code:


ActorRef router14 =
  getContext().actorOf(new BroadcastPool(5).props(Props.create(Worker.class)),
"router14");

BroadcastGroup defined in configuration:


akka.actor.deployment {
  /parent/router15 {
    router = broadcast-group
    routees.paths = ["/user/workers/w1", "/user/workers/w2", "/user/workers/w3"]
  }
}

ActorRef router15 =
getContext().actorOf([Link]().props(), "router15");

BroadcastGroup defined in code:


List<String> paths = Arrays.asList("/user/workers/w1", "/user/workers/w2",
"/user/workers/w3");
ActorRef router16 =
getContext().actorOf(new BroadcastGroup(paths).props(), "router16");

Note: Broadcast routers always broadcast every message to their routees. If you do not want to broadcast every
message, then you can use a non-broadcasting router and use Broadcast Messages as needed.


ScatterGatherFirstCompletedPool and ScatterGatherFirstCompletedGroup

The ScatterGatherFirstCompletedRouter will send the message on to all its routees. It then waits for the first reply it
gets back. This result will be sent back to the original sender. Other replies are discarded.
It is expecting at least one reply within a configured duration, otherwise it will reply with
akka.pattern.AskTimeoutException in an akka.actor.Status.Failure.
ScatterGatherFirstCompletedPool defined in configuration:
akka.actor.deployment {
  /parent/router17 {
    router = scatter-gather-pool
    nr-of-instances = 5
    within = 10 seconds
  }
}

ActorRef router17 =
  getContext().actorOf(FromConfig.getInstance().props(
    Props.create(Worker.class)), "router17");

ScatterGatherFirstCompletedPool defined in code:


FiniteDuration within = Duration.create(10, TimeUnit.SECONDS);
ActorRef router18 =
getContext().actorOf(new ScatterGatherFirstCompletedPool(5, within).props(
[Link]([Link])), "router18");

ScatterGatherFirstCompletedGroup defined in configuration:


akka.actor.deployment {
  /parent/router19 {
    router = scatter-gather-group
    routees.paths = ["/user/workers/w1", "/user/workers/w2", "/user/workers/w3"]
    within = 10 seconds
  }
}

ActorRef router19 =
getContext().actorOf([Link]().props(), "router19");

ScatterGatherFirstCompletedGroup defined in code:


List<String> paths = Arrays.asList("/user/workers/w1", "/user/workers/w2",
"/user/workers/w3");
FiniteDuration within2 = Duration.create(10, TimeUnit.SECONDS);
ActorRef router20 =
getContext().actorOf(new ScatterGatherFirstCompletedGroup(paths, within2).props(),
"router20");

TailChoppingPool and TailChoppingGroup

The TailChoppingRouter will first send the message to one, randomly picked, routee and then after a small delay
to a second routee (picked randomly from the remaining routees) and so on. It waits for the first reply it gets back and
forwards it back to the original sender. Other replies are discarded.
The goal of this router is to decrease latency by performing redundant queries to multiple routees, assuming that
one of the other actors may still be faster to respond than the initial one.
This optimisation was described nicely in a blog post by Peter Bailis: Doing redundant work to speed up dis-
tributed queries.
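The mechanism can be sketched outside of Akka with plain CompletableFutures. This is illustrative only, and all names are hypothetical; the real TailChoppingPool keeps chopping through further routees at each interval and also enforces the overall within deadline:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class TailChopSketch {
    // Daemon timer so the sketch does not keep the JVM alive.
    static final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

    // Ask 'primary' immediately; if it has not answered after
    // 'intervalMillis', also ask 'backup'. The first reply wins and
    // the other is discarded, which is the essence of tail-chopping.
    static <T> CompletableFuture<T> tailChop(
            Supplier<CompletableFuture<T>> primary,
            Supplier<CompletableFuture<T>> backup,
            long intervalMillis) {
        CompletableFuture<T> result = new CompletableFuture<>();
        primary.get().thenAccept(result::complete);
        timer.schedule(() -> {
            if (!result.isDone())
                backup.get().thenAccept(result::complete);
        }, intervalMillis, TimeUnit.MILLISECONDS);
        return result;
    }
}
```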
TailChoppingPool defined in configuration:


akka.actor.deployment {
  /parent/router21 {
    router = tail-chopping-pool
    nr-of-instances = 5
    within = 10 seconds
    tail-chopping-router.interval = 20 milliseconds
  }
}

ActorRef router21 =
  getContext().actorOf(FromConfig.getInstance().props(
    Props.create(Worker.class)), "router21");

TailChoppingPool defined in code:


FiniteDuration within3 = Duration.create(10, TimeUnit.SECONDS);
FiniteDuration interval = Duration.create(20, TimeUnit.MILLISECONDS);
ActorRef router22 =
getContext().actorOf(new TailChoppingPool(5, within3, interval).props(
[Link]([Link])), "router22");

TailChoppingGroup defined in configuration:


akka.actor.deployment {
  /parent/router23 {
    router = tail-chopping-group
    routees.paths = ["/user/workers/w1", "/user/workers/w2", "/user/workers/w3"]
    within = 10 seconds
    tail-chopping-router.interval = 20 milliseconds
  }
}

ActorRef router23 =
getContext().actorOf([Link]().props(), "router23");

TailChoppingGroup defined in code:


List<String> paths = Arrays.asList("/user/workers/w1", "/user/workers/w2",
"/user/workers/w3");
FiniteDuration within4 = Duration.create(10, TimeUnit.SECONDS);
FiniteDuration interval2 = Duration.create(20, TimeUnit.MILLISECONDS);
ActorRef router24 =
getContext().actorOf(new TailChoppingGroup(paths, within4, interval2).props(),
"router24");

ConsistentHashingPool and ConsistentHashingGroup

The ConsistentHashingPool uses consistent hashing to select a routee based on the sent message. This article
gives good insight into how consistent hashing is implemented.
There are three ways to define what data to use for the consistent hash key.
• You can define withHashMapper of the router to map incoming messages to their consistent hash key.
This makes the decision transparent for the sender.
• The messages may implement akka.routing.ConsistentHashingRouter.ConsistentHashable.
The key is part of the message and it’s convenient to define it together with the message definition.
• The messages can be wrapped in an akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope
to define what data to use for the consistent hash key. The sender knows the key to use.
These ways to define the consistent hash key can be used together and at the same time for one router. The
withHashMapper is tried first.


Code example:
public class Cache extends UntypedActor {
  Map<String, String> cache = new HashMap<String, String>();

  public void onReceive(Object msg) {
    if (msg instanceof Entry) {
      Entry entry = (Entry) msg;
      cache.put(entry.key, entry.value);
    } else if (msg instanceof Get) {
      Get get = (Get) msg;
      Object value = cache.get(get.key);
      getSender().tell(value == null ? NOT_FOUND : value,
        getContext().self());
    } else if (msg instanceof Evict) {
      Evict evict = (Evict) msg;
      cache.remove(evict.key);
    } else {
      unhandled(msg);
    }
  }
}

public final class Evict implements Serializable {
  private static final long serialVersionUID = 1L;
  public final String key;
  public Evict(String key) {
    this.key = key;
  }
}

public final class Get implements Serializable, ConsistentHashable {
  private static final long serialVersionUID = 1L;
  public final String key;
  public Get(String key) {
    this.key = key;
  }
  public Object consistentHashKey() {
    return key;
  }
}

public final class Entry implements Serializable {
  private static final long serialVersionUID = 1L;
  public final String key;
  public final String value;
  public Entry(String key, String value) {
    this.key = key;
    this.value = value;
  }
}

public final String NOT_FOUND = "NOT_FOUND";

final ConsistentHashMapper hashMapper = new ConsistentHashMapper() {
  @Override
  public Object hashKey(Object message) {
    if (message instanceof Evict) {
      return ((Evict) message).key;
    } else {
      return null;
    }
  }
};

ActorRef cache = system.actorOf(
  new ConsistentHashingPool(10).withHashMapper(hashMapper).props(
    Props.create(Cache.class)),
  "cache");

cache.tell(new ConsistentHashableEnvelope(
  new Entry("hello", "HELLO"), "hello"), getRef());
cache.tell(new ConsistentHashableEnvelope(
  new Entry("hi", "HI"), "hi"), getRef());

cache.tell(new Get("hello"), getRef());
expectMsgEquals("HELLO");

cache.tell(new Get("hi"), getRef());
expectMsgEquals("HI");

cache.tell(new Evict("hi"), getRef());

cache.tell(new Get("hi"), getRef());
expectMsgEquals(NOT_FOUND);

In the above example you see that the Get message implements ConsistentHashable itself, while the
Entry message is wrapped in a ConsistentHashableEnvelope. The Evict message is handled by
the hashMapper.
ConsistentHashingPool defined in configuration:
akka.actor.deployment {
  /parent/router25 {
    router = consistent-hashing-pool
    nr-of-instances = 5
    virtual-nodes-factor = 10
  }
}

ActorRef router25 =
  getContext().actorOf(FromConfig.getInstance().props(Props.create(Worker.class)),
"router25");

ConsistentHashingPool defined in code:


ActorRef router26 =
getContext().actorOf(new ConsistentHashingPool(5).props(
[Link]([Link])), "router26");

ConsistentHashingGroup defined in configuration:


akka.actor.deployment {
  /parent/router27 {
    router = consistent-hashing-group
    routees.paths = ["/user/workers/w1", "/user/workers/w2", "/user/workers/w3"]
    virtual-nodes-factor = 10
  }
}

ActorRef router27 =
getContext().actorOf([Link]().props(), "router27");

ConsistentHashingGroup defined in code:


List<String> paths = Arrays.asList("/user/workers/w1", "/user/workers/w2",
"/user/workers/w3");


ActorRef router28 =
getContext().actorOf(new ConsistentHashingGroup(paths).props(), "router28");

virtual-nodes-factor is the number of virtual nodes per routee that is used in the consistent hash node
ring to make the distribution more uniform.
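To illustrate the effect of virtual nodes, here is a minimal plain-Java sketch of a consistent hash ring. It is illustrative only and all names are hypothetical; Akka's actual implementation uses a better hash function than String.hashCode and its own ring structure:

```java
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring. Each routee is inserted 'virtualNodes'
// times under different hash points; more virtual nodes give a more
// uniform spread of keys over the routees.
public class HashRingSketch {
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    HashRingSketch(List<String> routees, int virtualNodes) {
        for (String routee : routees)
            for (int v = 0; v < virtualNodes; v++)
                ring.put((routee + "#" + v).hashCode(), routee);
    }

    // A key is routed to the first ring position at or after its hash,
    // wrapping around to the start of the ring if necessary.
    String routeeFor(Object key) {
        SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
        return tail.isEmpty() ? ring.get(ring.firstKey())
                              : tail.get(tail.firstKey());
    }
}
```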

4.6.4 Specially Handled Messages

Most messages sent to router actors will be forwarded according to the routers’ routing logic. However there are
a few types of messages that have special behavior.
Note that these special messages, except for the Broadcast message, are only handled by self-contained router
actors and not by the akka.routing.Router component described in A Simple Router.

Broadcast Messages

A Broadcast message can be used to send a message to all of a router’s routees. When a router receives a
Broadcast message, it will broadcast that message’s payload to all routees, no matter how that router would
normally route its messages.
The example below shows how you would use a Broadcast message to send a very important message to every
routee of a router.
[Link](new Broadcast("Watch out for Davy Jones’ locker"), getTestActor());

In this example the router receives the Broadcast message, extracts its payload
("Watch out for Davy Jones’ locker"), and then sends the payload on to all of the router’s
routees. It is up to each routee actor to handle the received payload message.

Note: Do not use Broadcast messages when you use BalancingPool for routers. Routees in a BalancingPool
share the same mailbox instance, thus some routees can possibly get the broadcast message multiple times, while
other routees get no broadcast message.

PoisonPill Messages

A PoisonPill message has special handling for all actors, including for routers. When any actor receives a
PoisonPill message, that actor will be stopped. See the PoisonPill documentation for details.
router.tell(PoisonPill.getInstance(), getTestActor());

For a router, which normally passes on messages to routees, it is important to realise that PoisonPill messages
are processed by the router only. PoisonPill messages sent to a router will not be sent on to routees.
However, a PoisonPill message sent to a router may still affect its routees, because it will stop the router and
when the router stops it also stops its children. Stopping children is normal actor behavior. The router will stop
routees that it has created as children. Each child will process its current message and then stop. This may lead to
some messages being unprocessed. See the documentation on Stopping actors for more information.
If you wish to stop a router and its routees, but you would like the routees to first process all the messages
currently in their mailboxes, then you should not send a PoisonPill message to the router. Instead you should
wrap a PoisonPill message inside a Broadcast message so that each routee will receive the PoisonPill
message. Note that this will stop all routees, even if the routees aren’t children of the router, i.e. even routees
programmatically provided to the router.
router.tell(new Broadcast(PoisonPill.getInstance()), getTestActor());

With the code shown above, each routee will receive a PoisonPill message. Each routee will continue to
process its messages as normal, eventually processing the PoisonPill. This will cause the routee to stop. After


all routees have stopped the router will itself be stopped automatically unless it is a dynamic router, e.g. using a
resizer.

Note: Brendan W McAdams’ excellent blog post Distributing Akka Workloads - And Shutting Down Afterwards
discusses in more detail how PoisonPill messages can be used to shut down routers and routees.

Kill Messages

Kill messages are another type of message that has special handling. See Killing an Actor for general informa-
tion about how actors handle Kill messages.
When a Kill message is sent to a router the router processes the message internally, and does not send it on to its
routees. The router will throw an ActorKilledException and fail. It will then be either resumed, restarted
or terminated, depending how it is supervised.
Routees that are children of the router will also be suspended, and will be affected by the supervision directive
that is applied to the router. Routees that are not the routers children, i.e. those that were created externally to the
router, will not be affected.
router.tell(Kill.getInstance(), getTestActor());

As with the PoisonPill message, there is a distinction between killing a router, which indirectly kills its
children (who happen to be routees), and killing routees directly (some of whom may not be children). To kill
routees directly the router should be sent a Kill message wrapped in a Broadcast message.
router.tell(new Broadcast(Kill.getInstance()), getTestActor());

Management Messages

• Sending akka.routing.GetRoutees to a router actor will make it send back its currently used routees
in an akka.routing.Routees message.
• Sending akka.routing.AddRoutee to a router actor will add that routee to its collection of routees.
• Sending akka.routing.RemoveRoutee to a router actor will remove that routee from its collection of
routees.
• Sending akka.routing.AdjustPoolSize to a pool router actor will add or remove that number of
routees from its collection of routees.
These management messages may be handled after other messages, so if you send AddRoutee immediately
followed by an ordinary message you are not guaranteed that the routees have been changed when the ordinary
message is routed. If you need to know when the change has been applied you can send AddRoutee followed by
GetRoutees and when you receive the Routees reply you know that the preceding change has been applied.

4.6.5 Dynamically Resizable Pool

All pools can be used with a fixed number of routees or with a resize strategy to adjust the number of routees
dynamically.
There are two types of resizers: the default Resizer and the OptimalSizeExploringResizer.

Default Resizer

The default resizer ramps up and down pool size based on pressure, measured by the percentage of busy routees
in the pool. It ramps up the pool size if the pressure is higher than a certain threshold and backs off if the pressure is
lower than a certain threshold. Both thresholds are configurable.
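A simplified plain-Java view of that decision (illustrative only; the real DefaultResizer exposes more knobs, such as pressure-threshold, rampup-rate, backoff-threshold, backoff-rate and messages-per-resize):

```java
// Pressure-based resizing sketch: grow when the fraction of busy routees
// exceeds an upper threshold, shrink when it falls below a lower one,
// always clamped to [lowerBound, upperBound].
public class ResizerSketch {
    static int resize(int poolSize, int busy,
                      double upperPressure, double lowerPressure,
                      int lowerBound, int upperBound) {
        double pressure = (double) busy / poolSize;
        int next = poolSize;
        if (pressure >= upperPressure) next = poolSize + 1;      // ramp up
        else if (pressure <= lowerPressure) next = poolSize - 1; // back off
        return Math.max(lowerBound, Math.min(upperBound, next));
    }
}
```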
Pool with default resizer defined in configuration:


akka.actor.deployment {
  /parent/router29 {
    router = round-robin-pool
    resizer {
      lower-bound = 2
      upper-bound = 15
      messages-per-resize = 100
    }
  }
}

ActorRef router29 =
  getContext().actorOf(FromConfig.getInstance().props(
    Props.create(Worker.class)), "router29");

Several more configuration options are available and described in the akka.actor.deployment.default.resizer
section of the reference Configuration.
Pool with resizer defined in code:
DefaultResizer resizer = new DefaultResizer(2, 15);
ActorRef router30 =
getContext().actorOf(new RoundRobinPool(5).withResizer(resizer).props(
[Link]([Link])), "router30");

It is also worth pointing out that if you define the router in the configuration file, then this value will be used
instead of any programmatically provided parameters.

Optimal Size Exploring Resizer

The OptimalSizeExploringResizer resizes the pool to an optimal size that provides the most message
throughput.
It achieves this by keeping track of message throughput at each pool size and performing one of the following
three resizing operations periodically:
• Downsize if it hasn’t seen all routees ever fully utilized for a period of time.
• Explore to a random nearby pool size to try and collect throughput metrics.
• Optimize to a nearby pool size with a better (than any other nearby sizes) throughput metrics.
When the pool is fully utilized (i.e. all routees are busy), it randomly chooses between exploring and optimizing.
When the pool has not been fully-utilized for a period of time, it will downsize the pool to the last seen max
utilization multiplied by a configurable ratio.
By constantly exploring and optimizing, the resizer will eventually walk to the optimal size and remain nearby.
When the optimal size changes it will start walking towards the new one. This resizer works best when you expect
the pool-size-to-performance function to be convex. For example, when you have CPU-bound tasks,
the optimal size is bound to the number of CPU cores. When your task is IO bound, the optimal size is bound to the
optimal number of concurrent connections to that IO service - e.g. a 4-node Elasticsearch cluster may handle 4-8
concurrent requests at optimal speed.
It keeps a performance log so it’s stateful as well as having a larger memory footprint than the default Resizer.
The memory usage is O(n) where n is the number of sizes you allow, i.e. upperBound - lowerBound.
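The explore/optimize cycle can be sketched as a simple hill climber. This is illustrative only and all names are hypothetical; the real resizer also handles utilization-based downsizing and configurable exploration probabilities:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Hill-climbing sketch: remember observed throughput per pool size, then
// either probe a random neighbouring size (explore) or move to the best
// size seen nearby (optimize).
public class ExploreOptimizeSketch {
    final Map<Integer, Double> throughputBySize = new HashMap<>();
    final Random rnd;

    ExploreOptimizeSketch(long seed) { rnd = new Random(seed); }

    void record(int size, double throughput) {
        throughputBySize.put(size, throughput);
    }

    int nextSize(int current, int lowerBound, int upperBound) {
        if (rnd.nextBoolean()) {
            // explore: try a random neighbouring size
            int step = rnd.nextBoolean() ? 1 : -1;
            return clamp(current + step, lowerBound, upperBound);
        }
        // optimize: move toward the best throughput observed nearby
        int best = current;
        for (int size = current - 1; size <= current + 1; size++)
            if (throughputBySize.getOrDefault(size, 0.0)
                    > throughputBySize.getOrDefault(best, 0.0))
                best = size;
        return clamp(best, lowerBound, upperBound);
    }

    static int clamp(int v, int lo, int hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}
```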
Pool with OptimalSizeExploringResizer defined in configuration:
akka.actor.deployment {
  /parent/router31 {
    router = round-robin-pool
    optimal-size-exploring-resizer {
      enabled = on
      action-interval = 5s
      downsize-after-underutilized-for = 72h
    }
  }
}

ActorRef router31 =
  getContext().actorOf(FromConfig.getInstance().props(
    Props.create(Worker.class)), "router31");

Several more configuration options are available and described in the
akka.actor.deployment.default.optimal-size-exploring-resizer section of the reference Configuration.

Note: Resizing is triggered by sending messages to the actor pool, but it is not completed synchronously; instead
a message is sent to the “head” RouterActor to perform the size change. Thus you cannot rely on resizing
to instantaneously create new workers when all others are busy, because the message just sent will be queued to
the mailbox of a busy actor. To remedy this, configure the pool to use a balancing dispatcher, see Configuring
Dispatchers for more information.

4.6.6 How Routing is Designed within Akka

On the surface routers look like normal actors, but they are actually implemented differently. Routers are designed
to be extremely efficient at receiving messages and passing them quickly on to routees.
A normal actor can be used for routing messages, but an actor’s single-threaded processing can become a bottle-
neck. Routers can achieve much higher throughput with an optimization to the usual message-processing pipeline
that allows concurrent routing. This is achieved by embedding routers’ routing logic directly in their ActorRef
rather than in the router actor. Messages sent to a router’s ActorRef can be immediately routed to the routee,
bypassing the single-threaded router actor entirely.
The cost to this is, of course, that the internals of routing code are more complicated than if routers were im-
plemented with normal actors. Fortunately all of this complexity is invisible to consumers of the routing API.
However, it is something to be aware of when implementing your own routers.

4.6.7 Custom Router

You can create your own router should you not find any of the ones provided by Akka sufficient for your needs.
In order to roll your own router you have to fulfill certain criteria which are explained in this section.
Before creating your own router you should consider whether a normal actor with router-like behavior might do
the job just as well as a full-blown router. As explained above, the primary benefit of routers over normal actors
is their higher performance. But they are somewhat more complicated to write than normal actors. Therefore if
lower maximum throughput is acceptable in your application you may wish to stick with traditional actors. This
section, however, assumes that you wish to get maximum performance and so demonstrates how you can create
your own router.
The router created in this example replicates each message to a few destinations.
Start with the routing logic:
public class RedundancyRoutingLogic implements RoutingLogic {
  private final int nbrCopies;

  public RedundancyRoutingLogic(int nbrCopies) {
    this.nbrCopies = nbrCopies;
  }

  RoundRobinRoutingLogic roundRobin = new RoundRobinRoutingLogic();

  @Override
  public Routee select(Object message, IndexedSeq<Routee> routees) {
    List<Routee> targets = new ArrayList<Routee>();
    for (int i = 0; i < nbrCopies; i++) {
      targets.add(roundRobin.select(message, routees));
    }
    return new SeveralRoutees(targets);
  }
}

select will be called for each message and in this example picks a few destinations by round-robin,
reusing the existing RoundRobinRoutingLogic and wrapping the result in a SeveralRoutees instance.
SeveralRoutees will send the message to all of the supplied routees.
The implementation of the routing logic must be thread safe, since it might be used outside of actors.
A unit test of the routing logic:
public final class TestRoutee implements Routee {
public final int n;

public TestRoutee(int n) {
this.n = n;
}

@Override
public void send(Object message, ActorRef sender) {
}

@Override
public int hashCode() {
return n;
}

@Override
public boolean equals(Object obj) {
return (obj instanceof TestRoutee) &&
n == ((TestRoutee) obj).n;
}
}

RedundancyRoutingLogic logic = new RedundancyRoutingLogic(3);

List<Routee> routeeList = new ArrayList<Routee>();


for (int n = 1; n <= 7; n++) {
  routeeList.add(new TestRoutee(n));
}
IndexedSeq<Routee> routees = immutableIndexedSeq(routeeList);

SeveralRoutees r1 = (SeveralRoutees) logic.select("msg", routees);
assertEquals(r1.getRoutees().get(0), routeeList.get(0));
assertEquals(r1.getRoutees().get(1), routeeList.get(1));
assertEquals(r1.getRoutees().get(2), routeeList.get(2));

SeveralRoutees r2 = (SeveralRoutees) logic.select("msg", routees);
assertEquals(r2.getRoutees().get(0), routeeList.get(3));
assertEquals(r2.getRoutees().get(1), routeeList.get(4));
assertEquals(r2.getRoutees().get(2), routeeList.get(5));

SeveralRoutees r3 = (SeveralRoutees) logic.select("msg", routees);
assertEquals(r3.getRoutees().get(0), routeeList.get(6));
assertEquals(r3.getRoutees().get(1), routeeList.get(0));
assertEquals(r3.getRoutees().get(2), routeeList.get(1));

You could stop here and use the RedundancyRoutingLogic with an akka.routing.Router as described
in A Simple Router.


Let us continue and make this into a self-contained, configurable, router actor.
Create a class that extends PoolBase, GroupBase or CustomRouterConfig. That class is a factory for
the routing logic and holds the configuration for the router. Here we make it a Group.
import java.util.List;

import com.typesafe.config.Config;

import akka.actor.ActorSystem;
import akka.dispatch.Dispatchers;
import akka.routing.GroupBase;
import akka.routing.Router;

public class RedundancyGroup extends GroupBase {

  private final List<String> paths;
  private final int nbrCopies;

  public RedundancyGroup(List<String> paths, int nbrCopies) {
    this.paths = paths;
    this.nbrCopies = nbrCopies;
  }

  public RedundancyGroup(Config config) {
    this(config.getStringList("routees.paths"),
      config.getInt("nbr-copies"));
  }

  @Override
  public java.lang.Iterable<String> getPaths(ActorSystem system) {
    return paths;
  }

  @Override
  public Router createRouter(ActorSystem system) {
    return new Router(new RedundancyRoutingLogic(nbrCopies));
  }

  @Override
  public String routerDispatcher() {
    return Dispatchers.DefaultDispatcherId();
  }
}

This can be used exactly as the router actors provided by Akka.


for (int n = 1; n <= 10; n++) {
  system.actorOf(Props.create(Storage.class), "s" + n);
}

List<String> paths = new ArrayList<String>();
for (int n = 1; n <= 10; n++) {
  paths.add("/user/s" + n);
}

ActorRef redundancy1 =
  system.actorOf(new RedundancyGroup(paths, 3).props(),
    "redundancy1");
redundancy1.tell("important", getTestActor());

Note that we added a constructor in RedundancyGroup that takes a Config parameter. That makes it possible
to define it in configuration.
akka.actor.deployment {
  /redundancy2 {
    router = "docs.jrouting.RedundancyGroup"
    routees.paths = ["/user/s1", "/user/s2", "/user/s3"]
    nbr-copies = 5
  }
}

Note the fully qualified class name in the router property. The router class must extend
akka.routing.RouterConfig (Pool, Group or CustomRouterConfig) and have a constructor with
one com.typesafe.config.Config parameter. The deployment section of the configuration is passed to
the constructor.
ActorRef redundancy2 = system.actorOf(FromConfig.getInstance().props(),
  "redundancy2");
redundancy2.tell("very important", getTestActor());

4.6.8 Configuring Dispatchers

The dispatcher for created children of the pool will be taken from Props as described in Dispatchers.
To make it easy to define the dispatcher of the routees of the pool you can define the dispatcher inline in the
deployment section of the config.
akka.actor.deployment {
  /poolWithDispatcher {
    router = random-pool
    nr-of-instances = 5
    pool-dispatcher {
      fork-join-executor.parallelism-min = 5
      fork-join-executor.parallelism-max = 5
    }
  }
}

That is the only thing you need to do to enable a dedicated dispatcher for a pool.

Note: If you use a group of actors and route to their paths, then they will still use the same dispatcher that was
configured for them in their Props; it is not possible to change an actor’s dispatcher after it has been created.

The “head” router cannot always run on the same dispatcher, because it does not process the same type
of messages, hence this special actor does not use the dispatcher configured in Props, but takes the
routerDispatcher from the RouterConfig instead, which defaults to the actor system’s default dis-
patcher. All standard routers allow setting this property in their constructor or factory method, custom routers
have to implement the method in a suitable way.
Props props =
  // “head” router actor will run on "router-dispatcher" dispatcher
  // Worker routees will run on "pool-dispatcher" dispatcher
  new RandomPool(5).withDispatcher("router-dispatcher").props(
    Props.create(Worker.class));
ActorRef router = system.actorOf(props, "poolWithDispatcher");

Note: It is not allowed to configure the routerDispatcher to be an
akka.dispatch.BalancingDispatcherConfigurator since the messages meant for the special
router actor cannot be processed by any other actor.

4.7 Building Finite State Machine Actors

4.7.1 Overview

The FSM (Finite State Machine) pattern is best described in the Erlang design principles. In short, it can be seen
as a set of relations of the form:
State(S) x Event(E) -> Actions (A), State(S’)
These relations are interpreted as meaning:
If we are in state S and the event E occurs, we should perform the actions A and make a transition to
the state S’.
While the Scala programming language enables the formulation of a nice internal DSL (domain specific lan-
guage) for formulating finite state machines (see fsm-scala), Java’s verbosity does not lend itself well to the
same approach. This chapter describes ways to effectively achieve the same separation of concerns through self-
discipline.

4.7.2 How State should be Handled

All mutable fields (or transitively mutable data structures) referenced by the FSM actor’s implementation should
be collected in one place and only mutated using a small well-defined set of methods. One way to achieve this is
to assemble all mutable state in a superclass which keeps it private and offers protected methods for mutating it.
import akka.actor.ActorRef;
import akka.actor.UntypedActor;
import java.util.ArrayList;
import java.util.List;

public abstract class MyFSMBase extends UntypedActor {

/*
* This is the mutable state of this state machine.
*/
protected enum State {
IDLE, ACTIVE;
}

  private State state = State.IDLE;


private ActorRef target;
private List<Object> queue;

/*
* Then come all the mutator methods:
*/
protected void init(ActorRef target) {
    this.target = target;
queue = new ArrayList<Object>();
}


protected void setState(State s) {


if (state != s) {
transition(state, s);
state = s;
}
}

protected void enqueue(Object o) {


if (queue != null)
      queue.add(o);
}

protected List<Object> drainQueue() {


final List<Object> q = queue;
if (q == null)
throw new IllegalStateException("drainQueue(): not yet initialized");
queue = new ArrayList<Object>();
return q;
}

/*
* Here are the interrogation methods:
*/
protected boolean isInitialized() {
return target != null;
}

protected State getState() {


return state;
}

protected ActorRef getTarget() {


if (target == null)
throw new IllegalStateException("getTarget(): not yet initialized");
return target;
}

/*
* And finally the callbacks (only one in this example: react to state change)
*/
abstract protected void transition(State old, State next);
}

The benefit of this approach is that state changes can be acted upon in one central place, which makes it impossible
to forget inserting code for reacting to state transitions when adding to the FSM’s machinery.

4.7.3 Message Buncher Example

The base class shown above is designed to support a similar example as for the Scala FSM documentation: an
actor which receives and queues messages, to be delivered in batches to a configurable target actor. The messages
involved are:
public final class SetTarget {
  final ActorRef ref;

  public SetTarget(ActorRef ref) {
    this.ref = ref;
  }
}

public final class Queue {
  final Object o;

  public Queue(Object o) {
    this.o = o;
  }
}

public final Object flush = new Object();

public final class Batch {
  final List<Object> objects;

  public Batch(List<Object> objects) {
    this.objects = objects;
  }
}

This actor has only the two states IDLE and ACTIVE, making their handling quite straightforward in the concrete
actor derived from the base class:
import akka.event.Logging;
import akka.event.LoggingAdapter;

public class MyFSM extends MyFSMBase {

  private final LoggingAdapter log =
    Logging.getLogger(getContext().system(), this);

  @Override
  public void onReceive(Object o) {

    if (getState() == State.IDLE) {

      if (o instanceof SetTarget)
        init(((SetTarget) o).ref);

      else
        whenUnhandled(o);

    } else if (getState() == State.ACTIVE) {

      if (o == flush)
        setState(State.IDLE);

      else
        whenUnhandled(o);
    }
  }

  @Override
  public void transition(State old, State next) {
    if (old == State.ACTIVE) {
      getTarget().tell(new Batch(drainQueue()), getSelf());
    }
  }

  private void whenUnhandled(Object o) {
    if (o instanceof Queue && isInitialized()) {
      enqueue(((Queue) o).o);
      setState(State.ACTIVE);

    } else {
      log.warning("received unknown message {} in state {}", o, getState());
    }
  }
}

The trick here is to factor out common functionality like whenUnhandled and transition in order to obtain
a few well-defined points for reacting to change or insert logging.

4.7.4 State-Centric vs. Event-Centric

In the example above, the subjective complexity of states and events was roughly equal, making it a matter of taste
whether to dispatch primarily on one or the other; here a state-based dispatch was chosen. Depending on
how evenly the matrix of possible states and events is populated, it may be more practical to handle the different
events first and distinguish between states in the second tier. An example would be a state machine which has a
multitude of internal states but handles only very few distinct events.

4.8 Persistence

Akka persistence enables stateful actors to persist their internal state so that it can be recovered when an actor is
started, restarted after a JVM crash or by a supervisor, or migrated in a cluster. The key concept behind Akka
persistence is that only changes to an actor’s internal state are persisted but never its current state directly (except
for optional snapshots). These changes are only ever appended to storage, nothing is ever mutated, which allows
for very high transaction rates and efficient replication. Stateful actors are recovered by replaying stored changes
to these actors from which they can rebuild internal state. This can be either the full history of changes or starting
from a snapshot which can dramatically reduce recovery times. Akka persistence also provides point-to-point
communication with at-least-once message delivery semantics.
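As a plain-Java illustration of the recovery idea described above (no Akka APIs are used here; all names are invented for the sketch), recovery is simply a fold over the appended events, optionally starting from a snapshot instead of the full history:

```java
import java.util.ArrayList;
import java.util.List;

public class EventSourcingSketch {
  // State is never stored directly; it is rebuilt by replaying appended changes.
  static int replay(int snapshotState, List<Integer> eventsAfterSnapshot) {
    int state = snapshotState;             // start from snapshot (or 0 for full history)
    for (int delta : eventsAfterSnapshot)  // replay appended changes in order
      state += delta;
    return state;
  }

  public static void main(String[] args) {
    List<Integer> journal = new ArrayList<>(List.of(1, 2, 3, 4));
    // Full replay and snapshot-based replay rebuild the same state,
    // but the snapshot-based replay touches fewer events:
    int full = replay(0, journal);
    int fromSnapshot = replay(1 + 2, journal.subList(2, journal.size()));
    System.out.println(full + " " + fromSnapshot); // prints "10 10"
  }
}
```

Note how the journal is only ever appended to; the snapshot merely shortens the prefix that has to be replayed.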

Note: Java 8 lambda expressions are also supported. (See section Persistence (Java with Lambda Support))

Akka persistence is inspired by, and the official replacement of, the eventsourced library. It follows the same
concepts and architecture as eventsourced but differs significantly at the API and implementation level. See also
Migration Guide Eventsourced to Akka Persistence 2.3.x

4.8.1 Dependencies

Akka persistence is a separate jar file. Make sure that you have the following dependency in your project:
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-persistence_2.11</artifactId>
  <version>2.4.20</version>
</dependency>

The Akka persistence extension comes with a few built-in persistence plugins, including an in-memory heap-based
journal, a local file-system based snapshot store and a LevelDB-based journal.
LevelDB-based plugins require the following additional dependency declarations:
<dependency>
  <groupId>org.iq80.leveldb</groupId>
  <artifactId>leveldb</artifactId>
  <version>0.7</version>
</dependency>
<dependency>
  <groupId>org.fusesource.leveldbjni</groupId>
  <artifactId>leveldbjni-all</artifactId>
  <version>1.8</version>
</dependency>


4.8.2 Architecture

• UntypedPersistentActor: Is a persistent, stateful actor. It is able to persist events to a journal and can react
to them in a thread-safe manner. It can be used to implement both command as well as event sourced actors.
When a persistent actor is started or restarted, journaled messages are replayed to that actor so that it can
recover internal state from these messages.
• UntypedPersistentView: A view is a persistent, stateful actor that receives journaled messages that have been
written by another persistent actor. A view itself does not journal new messages, instead, it updates internal
state only from a persistent actor’s replicated message stream.
• UntypedPersistentActorWithAtLeastOnceDelivery: Sends messages with at-least-once delivery semantics to
destinations, also in the case of sender and receiver JVM crashes.
• AsyncWriteJournal: A journal stores the sequence of messages sent to a persistent actor. An application
can control which messages are journaled and which are received by the persistent actor without being
journaled. The journal maintains a highestSequenceNr that is increased on each message. The storage backend
of a journal is pluggable. The persistence extension comes with a “leveldb” journal plugin, which writes to
the local filesystem. Replicated journals are available as Community plugins.
• Snapshot store: A snapshot store persists snapshots of a persistent actor’s or a view’s internal state. Snap-
shots are used for optimizing recovery times. The storage backend of a snapshot store is pluggable. The
persistence extension comes with a “local” snapshot storage plugin, which writes to the local filesystem.
Replicated snapshot stores are available as Community plugins.

4.8.3 Event sourcing

The basic idea behind Event Sourcing is quite simple. A persistent actor receives a (non-persistent) command,
which is first validated to determine whether it can be applied to the current state. Here validation can mean
anything from simple inspection of a command message’s fields up to a conversation with several external services,
for example. If validation succeeds, events are generated from the command, representing the effect of the
command. These events are then persisted and, after successful persistence, used to change the actor’s state. When
the persistent actor needs to be recovered, only the persisted events are replayed, of which we know that they can
be successfully applied. In other words, events cannot fail when being replayed to a persistent actor, in contrast to
commands. Event sourced actors may of course also process commands that do not change application state, such
as query commands, for example.
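The asymmetry between commands and events can be sketched with plain Java, independently of any Akka machinery (all names below are illustrative only): a command may be rejected during validation, but an event, once persisted, is applied unconditionally, which is exactly why replay cannot fail:

```java
import java.util.ArrayList;
import java.util.List;

public class CommandVsEvent {
  final List<String> journal = new ArrayList<>(); // append-only event log
  int balance = 0;                                // current state

  // A command is validated against the current state and may be rejected.
  boolean withdraw(int amount) {
    if (amount <= 0 || amount > balance) return false; // invalid: nothing persisted
    persistAndApply("withdrawn:" + amount);
    return true;
  }

  void deposit(int amount) {
    persistAndApply("deposited:" + amount);
  }

  private void persistAndApply(String evt) {
    journal.add(evt); // persist first, then update state
    apply(evt);
  }

  // Events represent effects that already happened: they are applied
  // unconditionally, so replaying them during recovery cannot fail.
  private void apply(String evt) {
    int n = Integer.parseInt(evt.split(":")[1]);
    balance += evt.startsWith("deposited") ? n : -n;
  }

  int recover(List<String> events) {
    balance = 0;
    for (String evt : events) apply(evt);
    return balance;
  }

  public static void main(String[] args) {
    CommandVsEvent account = new CommandVsEvent();
    account.deposit(10);
    account.withdraw(3);   // valid: event persisted and applied
    account.withdraw(100); // rejected: no event written
    System.out.println(account.recover(account.journal)); // prints 7
  }
}
```

Only validated effects ever reach the journal, so the replay path needs no error handling at all.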
Akka persistence supports event sourcing with the UntypedPersistentActor abstract class. An ac-
tor that extends this class uses the persist method to persist and handle events. The behavior of an
UntypedPersistentActor is defined by implementing receiveRecover and receiveCommand. This
is demonstrated in the following example.
import akka.japi.Procedure;
import akka.persistence.SnapshotOffer;
import akka.persistence.UntypedPersistentActor;

import java.io.Serializable;
import java.util.ArrayList;
import static java.util.Arrays.asList;

class Cmd implements Serializable {
  private static final long serialVersionUID = 1L;
  private final String data;

  public Cmd(String data) {
    this.data = data;
  }


  public String getData() {
    return data;
  }
}

class Evt implements Serializable {
  private static final long serialVersionUID = 1L;
  private final String data;

  public Evt(String data) {
    this.data = data;
  }

  public String getData() {
    return data;
  }
}

class ExampleState implements Serializable {
  private static final long serialVersionUID = 1L;
  private final ArrayList<String> events;

  public ExampleState() {
    this(new ArrayList<String>());
  }

  public ExampleState(ArrayList<String> events) {
    this.events = events;
  }

  public ExampleState copy() {
    return new ExampleState(new ArrayList<String>(events));
  }

  public void update(Evt evt) {
    events.add(evt.getData());
  }

  public int size() {
    return events.size();
  }

  @Override
  public String toString() {
    return events.toString();
  }
}

class ExamplePersistentActor extends UntypedPersistentActor {

  @Override
  public String persistenceId() { return "sample-id-1"; }

  private ExampleState state = new ExampleState();

  public int getNumEvents() {
    return state.size();
  }

  @Override
  public void onReceiveRecover(Object msg) {
    if (msg instanceof Evt) {
      state.update((Evt) msg);
    } else if (msg instanceof SnapshotOffer) {


      state = (ExampleState) ((SnapshotOffer) msg).snapshot();
    } else {
      unhandled(msg);
    }
  }

  @Override
  public void onReceiveCommand(Object msg) {
    if (msg instanceof Cmd) {
      final String data = ((Cmd) msg).getData();
      final Evt evt1 = new Evt(data + "-" + getNumEvents());
      final Evt evt2 = new Evt(data + "-" + (getNumEvents() + 1));
      persistAll(asList(evt1, evt2), new Procedure<Evt>() {
        public void apply(Evt evt) throws Exception {
          state.update(evt);
          if (evt.equals(evt2)) {
            getContext().system().eventStream().publish(evt);
          }
        }
      });
    } else if (msg.equals("snap")) {
      // IMPORTANT: create a copy of snapshot
      // because ExampleState is mutable !!!
      saveSnapshot(state.copy());
    } else if (msg.equals("print")) {
      System.out.println(state);
    } else {
      unhandled(msg);
    }
  }
}

The example defines two data types, Cmd and Evt to represent commands and events, respectively. The state
of the ExamplePersistentActor is a list of persisted event data contained in ExampleState.
The persistent actor’s onReceiveRecover method defines how state is updated during recovery by handling
Evt and SnapshotOffer messages. The persistent actor’s onReceiveCommand method is a command
handler. In this example, a command is handled by generating two events which are then persisted and handled.
Events are persisted by calling persist with an event (or a sequence of events) as first argument and an event
handler as second argument.
The persist method persists events asynchronously and the event handler is executed for successfully persisted
events. Successfully persisted events are internally sent back to the persistent actor as individual messages that
trigger event handler executions. An event handler may close over persistent actor state and mutate it. The sender
of a persisted event is the sender of the corresponding command. This allows event handlers to reply to the sender
of a command (not shown).
The main responsibility of an event handler is changing persistent actor state using event data and notifying others
about successful state changes by publishing events.
When persisting events with persist it is guaranteed that the persistent actor will not receive further commands
between the persist call and the execution(s) of the associated event handler. This also holds for multiple
persist calls in context of a single command. Incoming messages are stashed until the persist is completed.
If persistence of an event fails, onPersistFailure will be invoked (logging the error by default), and the actor
will unconditionally be stopped. If persistence of an event is rejected before it is stored, e.g. due to serialization
error, onPersistRejected will be invoked (logging a warning by default), and the actor continues with the
next message.
The easiest way to run this example yourself is to download Lightbend Activator and open the tutorial named
Akka Persistence Samples with Java. It contains instructions on how to run the PersistentActorExample.

Note: It’s also possible to switch between different command handlers during normal processing and recovery


with getContext().become() and getContext().unbecome(). To get the actor into the same state
after recovery you need to take special care to perform the same state transitions with become and unbecome
in the receiveRecover method as you would have done in the command handler. Note that when using
become from receiveRecover it will still only use the receiveRecover behavior when replaying the
events. When replay is completed it will use the new behavior.

Identifiers

A persistent actor must have an identifier that doesn’t change across different actor incarnations. The identifier
must be defined with the persistenceId method.
@Override
public String persistenceId() {
return "my-stable-persistence-id";
}

Note: persistenceId must be unique to a given entity in the journal (database table/keyspace). When
replaying messages persisted to the journal, you query messages with a persistenceId. So, if two different
entities share the same persistenceId, message-replaying behavior is corrupted.
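Why a shared persistenceId corrupts replay can be illustrated with a toy journal keyed by id (plain Java; names are invented for the sketch and are not Akka APIs):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SharedIdSketch {
  // Toy journal: events are stored and replayed per persistenceId.
  static final Map<String, List<String>> journal = new HashMap<>();

  static void persist(String persistenceId, String event) {
    journal.computeIfAbsent(persistenceId, id -> new ArrayList<>()).add(event);
  }

  static List<String> replay(String persistenceId) {
    return journal.getOrDefault(persistenceId, List.of());
  }

  public static void main(String[] args) {
    // Two distinct entities mistakenly share one id:
    persist("order-1", "orderCreated"); // written by entity A
    persist("order-1", "invoiceSent");  // written by entity B
    // Replaying "order-1" yields an interleaved stream containing events
    // from both entities - neither one can rebuild a consistent state.
    System.out.println(replay("order-1")); // prints [orderCreated, invoiceSent]
  }
}
```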

Recovery

By default, a persistent actor is automatically recovered on start and on restart by replaying journaled messages.
New messages sent to a persistent actor during recovery do not interfere with replayed messages. They are stashed
and received by the persistent actor after the recovery phase completes.
The number of concurrent recoveries that can be in progress at the same time is limited so as not to
overload the system and the backend data store. When exceeding the limit, actors will wait until other recoveries
have been completed. This is configured by:
akka.persistence.max-concurrent-recoveries = 50

Note: Accessing the sender() for replayed messages will always result in a deadLetters reference, as the
original sender is presumed to be long gone. If you indeed have to notify an actor during recovery in the future,
store its ActorPath explicitly in your persisted events.

Recovery customization

Applications may also customise how recovery is performed by returning a customised Recovery object in the
recovery method of an UntypedPersistentActor.
To skip loading snapshots and replay all events you can use SnapshotSelectionCriteria.none(). This
can be useful if the snapshot serialization format has changed in an incompatible way. It should typically not be
used when events have been deleted.
@Override
public Recovery recovery() {
  return Recovery.create(SnapshotSelectionCriteria.none());
}

Another example, which can be fun for experiments but probably not in a real application, is setting an upper
bound on the replay, which allows the actor to be replayed to a certain point “in the past” instead of to its most
up-to-date state. Note that after that it is a bad idea to persist new events, because a later recovery will probably
be confused by the new events that follow the events that were previously skipped.


@Override
public Recovery recovery() {
  return Recovery.create(457L);
}

Recovery can be disabled by returning Recovery.none() in the recovery method of a
PersistentActor:
@Override
public Recovery recovery() {
  return Recovery.none();
}

Recovery status

A persistent actor can query its own recovery status via the methods
public boolean recoveryRunning();
public boolean recoveryFinished();

Sometimes there is a need for performing additional initialization when recovery has completed, before
processing any other message sent to the persistent actor. The persistent actor will receive a special
RecoveryCompleted message right after recovery and before any other received messages.
@Override
public void onReceiveRecover(Object message) {
if (message instanceof RecoveryCompleted) {
// perform init after recovery, before any other messages
}
}

@Override
public void onReceiveCommand(Object message) throws Exception {
if (message instanceof String) {
// ...
} else {
unhandled(message);
}
}

The actor will always receive a RecoveryCompleted message, even if there are no events in the journal and
the snapshot store is empty, or if it’s a new persistent actor with a previously unused persistenceId.
If there is a problem with recovering the state of the actor from the journal, onRecoveryFailure is called
(logging the error by default) and the actor will be stopped.

Internal stash

The persistent actor has a private stash for internally caching incoming messages during recovery or while the
persist/persistAll methods are persisting events. You can still use/inherit from the Stash interface. The
internal stash cooperates with the normal stash by hooking into the unstashAll method and making sure messages
are unstashed properly to the internal stash to maintain ordering guarantees.
You should be careful to not send more messages to a persistent actor than it can keep up with, otherwise the
number of stashed messages will grow without bounds. It can be wise to protect against OutOfMemoryError
by defining a maximum stash capacity in the mailbox configuration:
akka.actor.default-mailbox.stash-capacity=10000


Note that the stash capacity is per actor. If you have many persistent actors, e.g. when using cluster
sharding, you may need to define a small stash capacity to ensure that the total number of stashed messages
in the system doesn’t consume too much memory. Additionally, the persistent actor defines three
strategies to handle failure when the internal stash capacity is exceeded. The default overflow strategy is
the ThrowOverflowExceptionStrategy, which discards the currently received message and throws a
StashOverflowException, causing an actor restart if the default supervision strategy is used. You can
override the internalStashOverflowStrategy method to return DiscardToDeadLetterStrategy or
ReplyToStrategy for any “individual” persistent actor, or define the “default” for all persistent actors by
providing an FQCN, which must be a subclass of StashOverflowStrategyConfigurator, in the persistence
configuration:
akka.persistence.internal-stash-overflow-strategy=
  "akka.persistence.ThrowExceptionConfigurator"

The DiscardToDeadLetterStrategy strategy also has a pre-packaged companion configurator,
akka.persistence.DiscardConfigurator.
You can also query the default strategy via the Akka persistence extension singleton:
Persistence.get(context().system()).defaultInternalStashOverflowStrategy();

Note: A bounded mailbox should be avoided for a persistent actor, since messages coming from storage
backends could be discarded. Use a bounded stash instead.

Relaxed local consistency requirements and high throughput use-cases

If faced with relaxed local consistency requirements and high throughput demands, PersistentActor and its
persist may sometimes not be enough in terms of consuming incoming Commands at a high rate, because it has
to wait until all Events related to a given Command are processed before it can start processing the next Command.
While this abstraction is very useful for most cases, sometimes you may be faced with relaxed requirements about
consistency – for example, you may want to process commands as fast as you can, assuming that the Event will
eventually be persisted and handled properly in the background, retroactively reacting to persistence failures if
needed.
The persistAsync method provides a tool for implementing high-throughput persistent actors. It will not stash
incoming Commands while the Journal is still working on persisting and/or user code is executing event callbacks.
In the below example, the event callbacks may be called “at any time”, even after the next Command has been
processed. The ordering between events is still guaranteed (“evt-b-1” will be sent after “evt-a-2”, which will be
sent after “evt-a-1” etc.).
class MyPersistentActor extends UntypedPersistentActor {
  @Override
  public String persistenceId() { return "some-persistence-id"; }

  @Override
  public void onReceiveRecover(Object msg) {
    // handle recovery here
  }

  @Override
  public void onReceiveCommand(Object msg) {
    sender().tell(msg, self());

    persistAsync(String.format("evt-%s-1", msg), new Procedure<String>() {
      @Override
      public void apply(String event) throws Exception {
        sender().tell(event, self());
      }
    });

    persistAsync(String.format("evt-%s-2", msg), new Procedure<String>() {
      @Override
      public void apply(String event) throws Exception {
        sender().tell(event, self());
      }
    });
  }
}

Note: In order to implement the pattern known as “command sourcing” simply persistAsync all incoming
messages right away and handle them in the callback.

Warning: The callback will not be invoked if the actor is restarted (or stopped) between the call to
persistAsync and the journal’s confirmation of the write.

Deferring actions until preceding persist handlers have executed

Sometimes when working with persistAsync you may find that it would be nice to define some actions in
terms of ‘’happens-after the previous persistAsync handlers have been invoked’‘. PersistentActor
provides a utility method called deferAsync, which works similarly to persistAsync yet does not persist
the passed-in event. It is recommended for read operations, and for actions which do not have corresponding
events in your domain model.
Using this method is very similar to using the persist family of methods, yet it does not persist the passed-in
event. The event will be kept in memory and used when invoking the handler.
class MyPersistentActor extends UntypedPersistentActor {
  @Override
  public String persistenceId() { return "some-persistence-id"; }

  @Override
  public void onReceiveRecover(Object msg) {
    // handle recovery here
  }

  @Override
  public void onReceiveCommand(Object msg) {
    final Procedure<String> replyToSender = new Procedure<String>() {
      @Override
      public void apply(String event) throws Exception {
        sender().tell(event, self());
      }
    };

    persistAsync(String.format("evt-%s-1", msg), replyToSender);
    persistAsync(String.format("evt-%s-2", msg), replyToSender);
    deferAsync(String.format("evt-%s-3", msg), replyToSender);
  }
}

Notice that the sender() is safe to access in the handler callback, and will be pointing to the original sender of
the command for which this deferAsync handler was called.
final ActorRef persistentActor = system.actorOf(Props.create(MyPersistentActor.class));
persistentActor.tell("a", null);
persistentActor.tell("b", null);

// order of received messages:
// a
// b
// evt-a-1
// evt-a-2
// evt-a-3
// evt-b-1
// evt-b-2
// evt-b-3

Warning: The callback will not be invoked if the actor is restarted (or stopped) between the call to
deferAsync and the point where the journal has processed and confirmed all preceding writes.

Nested persist calls

It is possible to call persist and persistAsync inside their respective callback blocks and they will properly
retain both the thread safety (including the right value of sender()) as well as stashing guarantees.
In general it is encouraged to create command handlers which do not need to resort to nested event persisting,
however there are situations where it may be useful. It is important to understand the ordering of callback execution
in those situations, as well as their implication on the stashing behaviour (that persist() enforces). In the
following example two persist calls are issued, and each of them issues another persist inside its callback:
@Override
public void onReceiveCommand(Object msg) {
  final Procedure<String> replyToSender = new Procedure<String>() {
    @Override
    public void apply(String event) throws Exception {
      sender().tell(event, self());
    }
  };

  final Procedure<String> outer1Callback = new Procedure<String>() {
    @Override
    public void apply(String event) throws Exception {
      sender().tell(event, self());
      persist(String.format("%s-inner-1", msg), replyToSender);
    }
  };
  final Procedure<String> outer2Callback = new Procedure<String>() {
    @Override
    public void apply(String event) throws Exception {
      sender().tell(event, self());
      persist(String.format("%s-inner-2", msg), replyToSender);
    }
  };

  persist(String.format("%s-outer-1", msg), outer1Callback);
  persist(String.format("%s-outer-2", msg), outer2Callback);
}

When sending two commands to this PersistentActor, the persist handlers will be executed in the following
order:
persistentActor.tell("a", self());
persistentActor.tell("b", self());

// order of received messages:
// a
// a-outer-1
// a-outer-2
// a-inner-1
// a-inner-2
// and only then process "b"
// b
// b-outer-1
// b-outer-2
// b-inner-1
// b-inner-2

First the “outer layer” of persist calls is issued and their callbacks are applied. After these have successfully
completed, the inner callbacks will be invoked (once the events they are persisting have been confirmed to be
persisted by the journal). Only after all these handlers have been successfully invoked will the next command be
delivered to the persistent Actor. In other words, the stashing of incoming commands that is guaranteed by initially
calling persist() on the outer layer is extended until all nested persist callbacks have been handled.
It is also possible to nest persistAsync calls, using the same pattern:
@Override
public void onReceiveCommand(Object msg) {
  final Procedure<String> replyToSender = new Procedure<String>() {
    @Override
    public void apply(String event) throws Exception {
      sender().tell(event, self());
    }
  };

  final Procedure<String> outer1Callback = new Procedure<String>() {
    @Override
    public void apply(String event) throws Exception {
      sender().tell(event, self());
      persistAsync(String.format("%s-inner-1", msg), replyToSender);
    }
  };
  final Procedure<String> outer2Callback = new Procedure<String>() {
    @Override
    public void apply(String event) throws Exception {
      sender().tell(event, self());
      persistAsync(String.format("%s-inner-2", msg), replyToSender);
    }
  };

  persistAsync(String.format("%s-outer-1", msg), outer1Callback);
  persistAsync(String.format("%s-outer-2", msg), outer2Callback);
}

In this case no stashing is happening, yet events are still persisted and callbacks are executed in the expected order:
persistentActor.tell("a", self());
persistentActor.tell("b", self());

// order of received messages:
// a
// b
// a-outer-1
// a-outer-2
// b-outer-1
// b-outer-2
// a-inner-1
// a-inner-2
// b-inner-1
// b-inner-2

// which can be seen as the following causal relationship:
// a -> a-outer-1 -> a-outer-2 -> a-inner-1 -> a-inner-2
// b -> b-outer-1 -> b-outer-2 -> b-inner-1 -> b-inner-2


While it is possible to nest mixed persist and persistAsync calls while keeping their respective semantics, it
is not a recommended practice, as it may lead to overly complex nesting.

Warning: While it is possible to nest persist calls within one another, it is not legal to call persist from
any other thread than the actor’s message-processing thread. For example, it is not legal to call persist
from Futures! Doing so will break the guarantees that the persist methods aim to provide. Always call
persist and persistAsync from within the actor’s receive block (or methods synchronously invoked
from there).

Failures

If persistence of an event fails, onPersistFailure will be invoked (logging the error by default), and the
actor will unconditionally be stopped.
The reason that it cannot resume when persist fails is that it is unknown if the event was actually persisted or
not, and therefore the actor is in an inconsistent state. Restarting on persistence failures will most likely fail anyway
since the journal is probably unavailable. It is better to stop the actor and start it again after a back-off timeout.
The akka.pattern.BackoffSupervisor actor is provided to support such restarts.
@Override
public void preStart() throws Exception {
  final Props childProps = Props.create(MyPersistentActor.class);
  final Props props = BackoffSupervisor.props(
    childProps,
    "myActor",
    Duration.create(3, TimeUnit.SECONDS),
    Duration.create(30, TimeUnit.SECONDS),
    0.2);
  getContext().actorOf(props, "mySupervisor");
  super.preStart();
}

If persistence of an event is rejected before it is stored, e.g. due to a serialization error, onPersistRejected
will be invoked (logging a warning by default), and the actor continues with the next message.
If there is a problem with recovering the state of the actor from the journal when the actor is started,
onRecoveryFailure is called (logging the error by default), and the actor will be stopped. Note that failure
to load a snapshot is also treated like this, but you can disable loading of snapshots if you, for example, know that
the serialization format has changed in an incompatible way; see Recovery customization.

Atomic writes

Each event is of course stored atomically, but it is also possible to store several events atomically by using the
persistAll or persistAllAsync method. That means that all events passed to that method are stored or
none of them are stored if there is an error.
The recovery of a persistent actor will therefore never be done partially with only a subset of events persisted by
persistAll.
Some journals may not support atomic writes of several events and they will then reject the
persistAll command, i.e. onPersistRejected is called with an exception (typically
UnsupportedOperationException).

Batch writes

In order to optimize throughput when using persistAsync, a persistent actor internally batches events to be
stored under high load before writing them to the journal (as a single batch). The batch size is dynamically
determined by how many events are emitted during the time of a journal round-trip: after sending a batch to the
journal, no further batch can be sent before confirmation has been received that the previous batch has been written.
Batch writes are never timer-based, which keeps latencies at a minimum.
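The dynamic batching described above — accumulate while a write is in flight, flush everything accumulated once the previous write is confirmed — can be sketched with plain Java (no Akka APIs; names are invented for the sketch):

```java
import java.util.ArrayList;
import java.util.List;

public class WriteBatcher {
  private final List<String> pending = new ArrayList<>(); // emitted, not yet written
  private boolean writeInFlight = false;
  final List<List<String>> writtenBatches = new ArrayList<>(); // what the "journal" saw

  // Called for each event handed to persistAsync.
  void persist(String event) {
    pending.add(event);
    maybeFlush();
  }

  // Called when the journal confirms that the previous batch was written.
  void writeConfirmed() {
    writeInFlight = false;
    maybeFlush();
  }

  // At most one batch is in flight; its size is simply whatever
  // accumulated during the previous journal round-trip.
  private void maybeFlush() {
    if (!writeInFlight && !pending.isEmpty()) {
      writtenBatches.add(new ArrayList<>(pending));
      pending.clear();
      writeInFlight = true;
    }
  }

  public static void main(String[] args) {
    WriteBatcher batcher = new WriteBatcher();
    batcher.persist("a");     // written immediately as a batch of one
    batcher.persist("b");     // accumulates while "a" is in flight
    batcher.persist("c");
    batcher.writeConfirmed(); // "b" and "c" now go out as one batch
    System.out.println(batcher.writtenBatches); // prints [[a], [b, c]]
  }
}
```

Note there is no timer anywhere: under low load every batch has size one (minimal latency), and batches only grow when events arrive faster than the journal can confirm them.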

Message deletion

It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number;
persistent actors may call the deleteMessages method to this end.
Deleting messages in event-sourcing based applications is typically either not used at all, or used in conjunction
with snapshotting: after a snapshot has been successfully stored, a deleteMessages(toSequenceNr) up to
the sequence number of the data held by that snapshot can be issued to safely delete the preceding events, while
still having access to the accumulated state during replays – by loading the snapshot.
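The snapshot-then-delete interplay, including the fact that deletion does not lower the highest sequence number, can be sketched with plain collections (illustrative only; in Akka you would typically issue deleteMessages from the handler of the snapshot-success confirmation):

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotAndDelete {
  static class Entry {
    final long seqNr; final String event;
    Entry(long seqNr, String event) { this.seqNr = seqNr; this.event = event; }
  }

  final List<Entry> journal = new ArrayList<>();
  long highestSequenceNr = 0; // never reduced, even by deletion
  String snapshotState = "";
  long snapshotSeqNr = 0;

  void persist(String event) {
    journal.add(new Entry(++highestSequenceNr, event));
  }

  // Once the snapshot is confirmed stored, the events it covers may be deleted.
  void saveSnapshotAndDelete(String state) {
    snapshotState = state;
    snapshotSeqNr = highestSequenceNr;
    journal.removeIf(e -> e.seqNr <= snapshotSeqNr); // deleteMessages(snapshotSeqNr)
  }

  // Recovery: load the snapshot, then replay only the remaining events.
  String recover() {
    StringBuilder state = new StringBuilder(snapshotState);
    for (Entry e : journal) state.append(e.event);
    return state.toString();
  }

  public static void main(String[] args) {
    SnapshotAndDelete j = new SnapshotAndDelete();
    j.persist("a");
    j.persist("b");
    j.saveSnapshotAndDelete("ab"); // snapshot covers seqNr 1..2; both deleted
    j.persist("c");                // gets seqNr 3: the counter kept increasing
    System.out.println(j.recover() + " " + j.highestSequenceNr); // prints "abc 3"
  }
}
```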

Warning: If you are using Persistence Query, query results may be missing deleted messages in a journal,
depending on how deletions are implemented in the journal plugin. Unless you use a plugin which still shows
deleted messages in persistence query results, you have to design your application so that it is not affected by
missing messages.

The result of the deleteMessages request is signaled to the persistent actor with a
DeleteMessagesSuccess message if the delete was successful or a DeleteMessagesFailure
message if it failed.
Message deletion doesn’t affect the highest sequence number of the journal, even if all messages were deleted
from it after deleteMessages invocation.

Persistence status handling

Persisting, deleting, and replaying messages can either succeed or fail.


Method                  Success                  Failure / Rejection     After failure handler invoked
persist / persistAsync  persist handler invoked  onPersistFailure        Actor is stopped.
                                                 onPersistRejected       No automatic actions.
recovery                RecoverySuccess          onRecoveryFailure       Actor is stopped.
deleteMessages          DeleteMessagesSuccess    DeleteMessagesFailure   No automatic actions.
The most important operations (persist and recovery) have failure handlers modelled as explicit callbacks
which the user can override in the PersistentActor. The default implementations of these handlers emit
a log message (error for persist/recovery failures, and warning for others), logging the failure cause and
information about which message caused the failure.
For critical failures, such as recovery or persisting events failing, the persistent actor will be stopped after the
failure handler is invoked. This is because if the underlying journal implementation is signalling persistence
failures, it is most likely either failing completely or overloaded; restarting right away and trying to persist the
event again will most likely not help the journal recover – it would likely cause a Thundering Herd problem, as
many persistent actors would restart and try to persist their events again. Instead, use a BackoffSupervisor
(as described in Failures), which implements an exponential back-off strategy that allows for more breathing room
for the journal to recover between restarts of the persistent actor.

Note: Journal implementations may choose to implement a retry mechanism, e.g. such that only after a write
fails N times is a persistence failure signalled back to the user. In other words, once a journal returns a
failure, it is considered fatal by Akka Persistence, and the persistent actor which caused the failure will be stopped.
Check the documentation of the journal implementation you are using for details on if/how it uses this technique.


Safely shutting down persistent actors

Special care should be given when shutting down persistent actors from the outside. With normal Actors it is often
acceptable to use the special PoisonPill message to signal to an Actor that it should stop itself once it receives
this message – in fact this message is handled automatically by Akka, leaving the target actor no way to refuse
stopping itself when given a poison pill.
This can be dangerous when used with PersistentActor, due to the fact that incoming commands are
stashed while the persistent actor is awaiting confirmation from the journal that events have been written when
persist() was used. Since the incoming commands will be drained from the actor’s mailbox and put into its
internal stash while awaiting the confirmation (thus, before calling the persist handlers), the actor may receive
and (auto-)handle the PoisonPill before it processes the other messages which have been put into its stash,
causing a premature shutdown of the actor.

Warning: Consider using explicit shut-down messages instead of PoisonPill when working with persistent
actors.

The example below highlights how messages arrive in the Actor’s mailbox and how they interact with its internal
stashing mechanism when persist() is used. Notice the early stop behaviour that occurs when PoisonPill
is used:
final class Shutdown {}

class MyPersistentActor extends UntypedPersistentActor {

  @Override
  public String persistenceId() {
    return "some-persistence-id";
  }

  @Override
  public void onReceiveCommand(Object msg) throws Exception {
    if (msg instanceof Shutdown) {
      context().stop(self());
    } else if (msg instanceof String) {
      System.out.println(msg);
      persist("handle-" + msg, new Procedure<String>() {
        @Override
        public void apply(String param) throws Exception {
          System.out.println(param);
        }
      });
    } else unhandled(msg);
  }

  @Override
  public void onReceiveRecover(Object msg) throws Exception {
    // handle recovery...
  }
}

// UN-SAFE, due to PersistentActor's command stashing:
persistentActor.tell("a", ActorRef.noSender());
persistentActor.tell("b", ActorRef.noSender());
persistentActor.tell(PoisonPill.getInstance(), ActorRef.noSender());
// order of received messages:
// a
// # b arrives at mailbox, stashing; internal-stash = [b]
// # PoisonPill arrives at mailbox
// PoisonPill is an AutoReceivedMessage, is handled automatically
// !! stop !!
// Actor is stopped without handling `b` nor the `a` handler!


// SAFE:
persistentActor.tell("a", ActorRef.noSender());
persistentActor.tell("b", ActorRef.noSender());
persistentActor.tell(new Shutdown(), ActorRef.noSender());
// order of received messages:
// a
// # b arrives at mailbox, stashing; internal-stash = [b]
// # Shutdown arrives at mailbox, stashing; internal-stash = [b, Shutdown]
// handle-a
// # unstashing; internal-stash = [Shutdown]
// b
// handle-b
// # unstashing; internal-stash = []
// Shutdown
// -- stop --

Replay Filter

There could be cases where event streams are corrupted and multiple writers (i.e. multiple persistent actor
instances) journaled different messages with the same sequence number. In such a case, you can configure how
replayed messages from multiple writers are filtered, upon recovery.
In your configuration, under the akka.persistence.journal.xxx.replay-filter section (where
xxx is your journal plugin id), you can select the replay filter mode from one of the following values:
• repair-by-discard-old
• fail
• warn
• off
For example, if you configure the replay filter for leveldb plugin, it looks like this:
# The replay filter can detect a corrupt event stream by inspecting
# sequence numbers and writerUuid when replaying events.
akka.persistence.journal.leveldb.replay-filter {
  # What the filter should do when detecting invalid events.
  # Supported values:
  # `repair-by-discard-old` : discard events from old writers,
  #                           warning is logged
  # `fail` : fail the replay, error is logged
  # `warn` : log warning but emit events untouched
  # `off`  : disable this feature completely
  mode = repair-by-discard-old
}


4.8.4 Persistent Views

Warning: UntypedPersistentView is deprecated. Use Persistence Query instead. The corresponding
query type is EventsByPersistenceId. There are several alternatives for connecting the Source to an
actor corresponding to a previous UntypedPersistentView actor:
• Sink.actorRef is simple, but has the disadvantage that there is no back-pressure signal from the desti-
nation actor, i.e. if the actor is not consuming the messages fast enough the mailbox of the actor will
grow
• mapAsync combined with Ask: Send-And-Receive-Future is almost as simple, with the advantage of
back-pressure being propagated all the way
• ActorSubscriber in case you need more fine-grained control
The consuming actor may be a plain UntypedActor or an UntypedPersistentActor if it needs to
store its own state (e.g. a fromSequenceNr offset).

Persistent views can be implemented by extending the UntypedPersistentView class and implementing the
onReceive and the persistenceId methods.
class MyView extends UntypedPersistentView {
@Override
public String persistenceId() { return "some-persistence-id"; }

@Override
public String viewId() { return "my-stable-persistence-view-id"; }

@Override
public void onReceive(Object message) throws Exception {
if (isPersistent()) {
// handle message from Journal...
} else if (message instanceof String) {
// handle message from user...
} else {
unhandled(message);
}
}
}

The persistenceId identifies the persistent actor from which the view receives journaled messages. It is
not necessary that the referenced persistent actor is actually running. Views read messages from a persistent
actor’s journal directly. When a persistent actor is started later and begins to write new messages, by default the
corresponding view is updated automatically.
It is possible to determine if a message was sent from the Journal or from another actor in user-land by calling
the isPersistent method. That said, very often you don't need this information at all and can simply
apply the same logic to both cases (skip the isPersistent check).

Updates

The default update interval of all persistent views of an actor system is configurable:
akka.persistence.view.auto-update-interval = 5s

UntypedPersistentView implementation classes may also override the autoUpdateInterval method
to return a custom update interval for a specific view class or view instance. Applications may also trigger
additional updates at any time by sending a view an Update message.
final ActorRef view = system.actorOf(Props.create(MyView.class));
view.tell(Update.create(true), null);

If the await parameter is set to true, messages that follow the Update request are processed when the
incremental message replay triggered by that update request has completed. If set to false (default), messages following


the update request may interleave with the replayed message stream. Automated updates always run with await
= false.
Automated updates of all persistent views of an actor system can be turned off by configuration:
akka.persistence.view.auto-update = off

Implementation classes may override the configured default value by overriding the autoUpdate
method. To limit the number of replayed messages per update request, applications can
configure a custom akka.persistence.view.auto-update-replay-max value or override the
autoUpdateReplayMax method. The number of replayed messages for manual updates can be limited with
the replayMax parameter of the Update message.
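Taken together, the update-related settings discussed above can be collected in one configuration block. The values shown here are illustrative examples, not necessarily the defaults for every key:

```
akka.persistence.view {
  # turn automated updates on or off
  auto-update = on
  # how often automated updates run
  auto-update-interval = 5s
  # cap on replayed messages per update request
  auto-update-replay-max = 500
}
```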

Recovery

Initial recovery of persistent views works the very same way as for persistent actors (i.e. by sending a
Recover message to self). The maximum number of replayed messages during initial recovery is determined by
autoUpdateReplayMax. Further possibilities to customize initial recovery are explained in section Recovery.

Identifiers

A persistent view must have an identifier that doesn’t change across different actor incarnations. The identifier
must be defined with the viewId method.
The viewId must differ from the referenced persistenceId, unless snapshots of a view and its persistent
actor should be shared (which applications usually do not want).

4.8.5 Snapshots

Snapshots can dramatically reduce recovery times of persistent actors and views. The following discusses
snapshots in the context of persistent actors, but this is also applicable to persistent views.
Persistent actors can save snapshots of internal state by calling the saveSnapshot method. If saving
of a snapshot succeeds, the persistent actor receives a SaveSnapshotSuccess message, otherwise a
SaveSnapshotFailure message.
private Object state;

@Override
public void onReceiveCommand(Object message) {
  if (message.equals("snap")) {
    saveSnapshot(state);
  } else if (message instanceof SaveSnapshotSuccess) {
    SnapshotMetadata metadata = ((SaveSnapshotSuccess)message).metadata();
    // ...
  } else if (message instanceof SaveSnapshotFailure) {
    SnapshotMetadata metadata = ((SaveSnapshotFailure)message).metadata();
    // ...
  }
}

During recovery, the persistent actor is offered a previously saved snapshot via a SnapshotOffer message
from which it can initialize internal state.
private Object state;

@Override
public void onReceiveRecover(Object message) {
  if (message instanceof SnapshotOffer) {
    state = ((SnapshotOffer)message).snapshot();
    // ...
  } else if (message instanceof RecoveryCompleted) {
    // ...
  } else {
    // ...
  }
}

The replayed messages that follow the SnapshotOffer message, if any, are younger than the offered snapshot.
They finally recover the persistent actor to its current (i.e. latest) state.
In general, a persistent actor is only offered a snapshot if that persistent actor has previously saved one or more
snapshots and at least one of these snapshots matches the SnapshotSelectionCriteria that can be speci-
fied for recovery.
@Override
public Recovery recovery() {
  return Recovery.create(
    SnapshotSelectionCriteria
      .create(457L, System.currentTimeMillis()));
}

If not specified, they default to SnapshotSelectionCriteria.latest() which selects
the latest (= youngest) snapshot. To disable snapshot-based recovery, applications should use
SnapshotSelectionCriteria.none(). A recovery where no saved snapshot matches the specified
SnapshotSelectionCriteria will replay all journaled messages.

Note: In order to use snapshots, a default snapshot-store (akka.persistence.snapshot-store.plugin)
must be configured, or the persistent actor can pick a snapshot store explicitly by overriding String
snapshotPluginId().
Since it is acceptable for some applications to not use any snapshotting, it is legal to not configure a snapshot store.
However, Akka will log a warning message when this situation is detected and then continue to operate until an
actor tries to store a snapshot, at which point the operation will fail (by replying with a SaveSnapshotFailure,
for example).
Note that Cluster Sharding uses snapshots, so if you use Cluster Sharding you need to define a snapshot store
plugin.
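For instance, the local filesystem snapshot store plugin that ships with Akka can be enabled and pointed at a directory like this (the directory path below is just an example):

```
# use the bundled local filesystem snapshot store
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"
# where snapshot files are written (example path)
akka.persistence.snapshot-store.local.dir = "target/snapshots"
```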

Snapshot deletion

A persistent actor can delete individual snapshots by calling the deleteSnapshot method with the sequence
number at which the snapshot was taken.
To bulk-delete a range of snapshots matching SnapshotSelectionCriteria, persistent actors should use
the deleteSnapshots method.

Snapshot status handling

Saving or deleting snapshots can either succeed or fail – this information is reported back to the persistent actor
via status messages as illustrated in the following table.
Method                                       Success message          Failure message
saveSnapshot(Any)                            SaveSnapshotSuccess      SaveSnapshotFailure
deleteSnapshot(Long)                         DeleteSnapshotSuccess    DeleteSnapshotFailure
deleteSnapshots(SnapshotSelectionCriteria)   DeleteSnapshotsSuccess   DeleteSnapshotsFailure


4.8.6 At-Least-Once Delivery

To send messages with at-least-once delivery semantics to destinations you can extend
the UntypedPersistentActorWithAtLeastOnceDelivery class instead of
UntypedPersistentActor on the sending side. It takes care of re-sending messages when they have
not been confirmed within a configurable timeout.
The state of the sending actor, including which messages have been sent that have not been confirmed
by the recipient, must be persistent so that it can survive a crash of the sending actor or JVM. The
UntypedPersistentActorWithAtLeastOnceDelivery class does not persist anything by itself. It
is your responsibility to persist the intent that a message is sent and that a confirmation has been received.

Note: At-least-once delivery implies that original message sending order is not always preserved, and the
destination may receive duplicate messages. Semantics do not match those of a normal ActorRef send operation:
• it is not at-most-once delivery
• message order for the same sender–receiver pair is not preserved due to possible resends
• after a crash and restart of the destination messages are still delivered to the new actor incarnation
These semantics are similar to what an ActorPath represents (see actor-lifecycle-scala), therefore you need
to supply a path and not a reference when delivering messages. The messages are sent to the path with an actor
selection.

Use the deliver method to send a message to a destination. Call the confirmDelivery method when the
destination has replied with a confirmation message.
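The re-sending behaviour is tuned through configuration. A sketch of the relevant settings follows; the values shown are illustrative examples, not necessarily the shipped defaults:

```
akka.persistence.at-least-once-delivery {
  # interval between redelivery attempts
  redeliver-interval = 5s
  # after this many unconfirmed attempts an
  # UnconfirmedWarning message is sent to the actor
  warn-after-number-of-unconfirmed-attempts = 5
  # upper bound on outstanding unconfirmed messages
  max-unconfirmed-messages = 100000
}
```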

Relationship between deliver and confirmDelivery

To send messages to the destination path, use the deliver method after you have persisted the intent to send the
message.
The destination actor must send back a confirmation message. When the sending actor receives this
confirmation message, you should persist the fact that the message was delivered successfully and then call the
confirmDelivery method.
If the persistent actor is not currently recovering, the deliver method will send the message to the destination
actor. When recovering, messages will be buffered until they have been confirmed using confirmDelivery.
Once recovery has completed, if there are outstanding messages that have not been confirmed (during the message
replay), the persistent actor will resend these before sending any other messages.
The deliver method requires a deliveryIdToMessage function to pass the provided deliveryId into the message so
that the correlation between deliver and confirmDelivery is possible. The deliveryId must do the
round trip. Upon receipt of the message, the destination actor will send the same deliveryId wrapped in a
confirmation message back to the sender. The sender will then use it to call the confirmDelivery method to
complete the delivery routine.
class Msg implements Serializable {
  public final long deliveryId;
  public final String s;

  public Msg(long deliveryId, String s) {
    this.deliveryId = deliveryId;
    this.s = s;
  }
}

class Confirm implements Serializable {
  public final long deliveryId;

  public Confirm(long deliveryId) {
    this.deliveryId = deliveryId;
  }
}

class MsgSent implements Serializable {
  public final String s;

  public MsgSent(String s) {
    this.s = s;
  }
}

class MsgConfirmed implements Serializable {
  public final long deliveryId;

  public MsgConfirmed(long deliveryId) {
    this.deliveryId = deliveryId;
  }
}

class MyPersistentActor extends UntypedPersistentActorWithAtLeastOnceDelivery {

  private final ActorSelection destination;

  @Override
  public String persistenceId() { return "persistence-id"; }

  public MyPersistentActor(ActorSelection destination) {
    this.destination = destination;
  }

  @Override
  public void onReceiveCommand(Object message) {
    if (message instanceof String) {
      String s = (String) message;
      persist(new MsgSent(s), new Procedure<MsgSent>() {
        public void apply(MsgSent evt) {
          updateState(evt);
        }
      });
    } else if (message instanceof Confirm) {
      Confirm confirm = (Confirm) message;
      persist(new MsgConfirmed(confirm.deliveryId), new Procedure<MsgConfirmed>() {
        public void apply(MsgConfirmed evt) {
          updateState(evt);
        }
      });
    } else {
      unhandled(message);
    }
  }

  @Override
  public void onReceiveRecover(Object event) {
    updateState(event);
  }

  void updateState(Object event) {
    if (event instanceof MsgSent) {
      final MsgSent evt = (MsgSent) event;
      deliver(destination, new Function<Long, Object>() {
        public Object apply(Long deliveryId) {
          return new Msg(deliveryId, evt.s);
        }