Spring Integration Reference
5.0.0.RELEASE
Mark Fisher , Marius Bogoevici , Iwein Fuld , Jonas Partner , Oleg Zhurakousky , Gary
Russell , Dave Syer , Josh Long , David Turanski , Gunnar Hillert , Artem Bilan , Amol Nayak
Copyright © 2009-2017 Pivotal Software, Inc. All Rights Reserved.
Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee
for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.
Table of Contents
I. Preface
    Requirements
        1. Compatible Java Versions
        2. Compatible Versions of the Spring Framework
        3. Code Conventions
    1. Conventions in this Book
II. What’s new?
    2. What’s new in Spring Integration 5.0?
        2.1. New Components
            Java DSL
            Testing Support
            MongoDB Outbound Gateway
            WebFlux Gateways and Channel Adapters
            Content Type Conversion
            ErrorMessagePublisher and ErrorMessageStrategy
            JDBC Metadata Store
        2.2. General Changes
            Core Changes
            Gateway Changes
            Aggregator Performance Changes
            Splitter Changes
            JMS Changes
            Mail Changes
            Feed Changes
            File Changes
            (S)FTP Changes
            Integration Properties
            Stream Changes
            Barrier Changes
            AMQP Changes
            HTTP Changes
            MQTT Changes
            STOMP Changes
            Web Services Changes
            Redis Changes
            TCP Changes
            Gemfire Changes
            Jdbc Changes
III. Overview of Spring Integration Framework
    3. Spring Integration Overview
        3.1. Background
        3.2. Goals and Principles
        3.3. Main Components
            Message
            Message Channel
            Message Endpoint
        3.4. Message Endpoints
            Transformer
            Filter
            Router
            Splitter
            Aggregator
            Service Activator
            Channel Adapter
        3.5. Configuration and @EnableIntegration
        3.6. Programming Considerations
        3.7. Programming Tips and Tricks
            XML Schemas
            Finding Class Names for Java and DSL Configuration
        3.8. POJO Method invocation
IV. Core Messaging
    4. Messaging Channels
        4.1. Message Channels
            The MessageChannel Interface
                PollableChannel
                SubscribableChannel
            Message Channel Implementations
                PublishSubscribeChannel
                QueueChannel
                PriorityChannel
                RendezvousChannel
                DirectChannel
                ExecutorChannel
                Scoped Channel
            Channel Interceptors
            MessagingTemplate
            Configuring Message Channels
                DirectChannel Configuration
                Datatype Channel Configuration
                QueueChannel Configuration
                PublishSubscribeChannel Configuration
                ExecutorChannel
                PriorityChannel Configuration
                RendezvousChannel Configuration
                Scoped Channel Configuration
                Channel Interceptor Configuration
                Global Channel Interceptor Configuration
                Wire Tap
                Conditional Wire Taps
                Global Wire Tap Configuration
            Special Channels
        4.2. Poller
            Polling Consumer
            Pollable Message Source
            Conditional Pollers for Message Sources
                Background
                "Smart" Polling
                SimpleActiveIdleMessageSourceAdvice
                CompoundTriggerAdvice
        4.3. Channel Adapter
            Configuring An Inbound Channel Adapter
            Configuring An Outbound Channel Adapter
            Channel Adapter Expressions and Scripts
        4.4. Messaging Bridge
            Introduction
            Configuring a Bridge with XML
            Configuring a Bridge with Java Configuration
            Configuring a Bridge with the Java DSL
    5. Message Construction
        5.1. Message
            The Message Interface
            Message Headers
                MessageHeaderAccessor API
                Message ID Generation
                Read-only Headers
                Header Propagation
            Message Implementations
            The MessageBuilder Helper Class
    6. Message Routing
        6.1. Routers
            Overview
            Common Router Parameters
                Inside and Outside of a Chain
                Top-Level (Outside of a Chain)
            Router Implementations
                PayloadTypeRouter
                HeaderValueRouter
                RecipientListRouter
                RecipientListRouterManagement
                XPath Router
                Routing and Error handling
            Configuring a Generic Router
                Configuring a Content Based Router with XML
                Configuring a Router with Annotations
            Dynamic Routers
                Manage Router Mappings using the Control Bus
                Manage Router Mappings using JMX
                Routing Slip
                Process Manager Enterprise Integration Pattern
        6.2. Filter
            Introduction
            Configuring Filter
                Configuring a Filter with XML
                Configuring a Filter with Annotations
        6.3. Splitter
            Introduction
            Programming model
Requirements
This section details the compatible Java and Spring Framework versions.
3. Code Conventions
The Spring Framework 2.0 introduced support for namespaces, which simplifies the XML configuration
of the application context, and consequently Spring Integration provides broad namespace support. This
reference guide applies the following conventions for all code examples that use namespace support:
The int namespace prefix will be used for Spring Integration’s core namespace support. Each Spring
Integration adapter type (module) will provide its own namespace, which is configured using the following
convention:
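The example that normally follows this sentence appears to have been lost in conversion. As an illustration (the schema URLs shown follow the standard convention, and the JMS module is just one example; any prefix may be chosen), a configuration using the core namespace and a module namespace might be declared as:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:int-jms="http://www.springframework.org/schema/integration/jms"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/integration
           http://www.springframework.org/schema/integration/spring-integration.xsd
           http://www.springframework.org/schema/integration/jms
           http://www.springframework.org/schema/integration/jms/spring-integration-jms.xsd">

    <!-- int-* prefixed elements come from the module namespaces -->

</beans>
```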
For a detailed explanation regarding Spring Integration’s namespace support see Section E.2,
“Namespace Support”.
Note
Please note that the namespace prefix can be freely chosen. You may even choose not to use any
namespace prefixes at all. Therefore, apply the convention that suits your application needs best.
Be aware, though, that SpringSource Tool Suite™ (STS) uses the same namespace conventions
for Spring Integration as used in this reference guide.
Testing Support
A new Spring Integration Test Framework has been created to assist with testing Spring
Integration applications. With the @SpringIntegrationTest annotation on a test class and the
MockIntegration factory, writing JUnit tests for integration flows becomes easier.
See the section called “Testing Support” for more information.
Core Changes
The @Poller annotation now has the errorChannel attribute for easier configuration of the underlying
MessagePublishingErrorHandler.
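As a sketch (the bean, channel, and payload here are illustrative assumptions, not from the original text), the new attribute can be used on a polled inbound adapter like this:

```java
@Configuration
@EnableIntegration
public class PollerConfig {

    // errorChannel on @Poller routes polling exceptions to "pollerErrors"
    // via the underlying MessagePublishingErrorHandler.
    @Bean
    @InboundChannelAdapter(value = "input",
            poller = @Poller(fixedDelay = "5000", errorChannel = "pollerErrors"))
    public MessageSource<String> source() {
        return () -> new GenericMessage<>("payload");
    }
}
```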
POJO methods are now invoked using an InvocableHandlerMethod by default, but can be
configured to use SpEL as before.
When targeting POJO methods as message handlers, one of the service methods can now be marked
with the @Default annotation to provide a fallback mechanism for non-matched conditions.
See the section called “Configuring Service Activator” for more information.
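A minimal sketch (the type and method names are illustrative assumptions):

```java
public class ThingService {

    // Chosen when the payload type matches
    public String handleString(String payload) {
        return payload.toUpperCase();
    }

    // Fallback invoked when no other method matches
    @Default
    public String handleOther(Object payload) {
        return "unsupported payload: " + payload;
    }
}
```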
The aggregator expression-based ReleaseStrategy now evaluates the expression against the
MessageGroup instead of just the collection of Message<?>.
See the section called “Aggregators and Spring Expression Language (SpEL)” for more information.
See the section called “Global Channel Interceptor Configuration” for more information.
Gateway Changes
The gateway now correctly sets the errorChannel header when the gateway method has a void
return type and an error channel is provided. Previously, the header was not populated. This had the
effect that synchronous downstream flows (running on the calling thread) would send the exception to
the configured channel but an exception on an async downstream flow would be sent to the default
errorChannel instead.
The request and reply timeouts can now be specified as SpEL expressions.
Splitter Changes
The Splitter component can now handle and split Java Stream and Reactive Streams
Publisher objects. If the output channel is a ReactiveStreamsSubscribableChannel, the
AbstractMessageSplitter builds a Flux for subsequent iteration instead of a regular Iterator,
independently of the type of object being split. In addition, AbstractMessageSplitter provides
protected obtainSizeIfPossible() methods to allow determining the size of Iterable and
Iterator objects where possible.
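For example, a POJO splitter method may now simply return a Stream (a sketch; the channel names are illustrative assumptions):

```java
@Splitter(inputChannel = "lines", outputChannel = "words")
public Stream<String> split(String payload) {
    // each element of the Stream becomes a separate Message
    return Arrays.stream(payload.split("\\s+"));
}
```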
JMS Changes
Previously, Spring Integration JMS XML configuration used a default bean name connectionFactory
for the JMS Connection Factory, allowing the property to be omitted from component definitions. It has
now been renamed to jmsConnectionFactory, which is the bean name used by Spring Boot to auto-
configure the JMS Connection Factory bean.
If your application is relying on the previous behavior, rename your connectionFactory bean to
jmsConnectionFactory, or specifically configure your components to use your bean using its current
name.
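That is, either adopt the new default bean name or point each component at your existing bean explicitly (a sketch; the ActiveMQ factory class and destination name are only examples):

```xml
<!-- Option 1: use the new default bean name -->
<bean id="jmsConnectionFactory"
      class="org.apache.activemq.ActiveMQConnectionFactory"/>

<!-- Option 2: keep the old bean name and reference it explicitly -->
<int-jms:inbound-channel-adapter channel="jmsIn"
        destination-name="someQueue"
        connection-factory="connectionFactory"/>
```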
Mail Changes
Some inconsistencies with rendering IMAP mail content have been resolved.
See the note in the Mail-Receiving Channel Adapter Section for more information.
Feed Changes
File Changes
The new FileHeaders.RELATIVE_PATH Message header has been introduced to represent the
relative path of a file detected by the FileReadingMessageSource.
The tail adapter now supports idleEventInterval to emit events when there is no data in the file
during that period.
The flush predicates for the FileWritingMessageHandler now have an additional parameter.
The file outbound channel adapter and gateway (FileWritingMessageHandler) now support the
REPLACE_IF_MODIFIED FileExistsMode.
They also now support setting file permissions on the newly written file.
The FileSplitter now provides a firstLineAsHeader option to carry the first line of content as
a header in the messages emitted for the remaining lines.
(S)FTP Changes
The Inbound Channel Adapters now have a property max-fetch-size which is used to limit the
number of files fetched during a poll when there are no files currently in the local directory. They also are
configured with a FileSystemPersistentAcceptOnceFileListFilter in the local-filter
by default.
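As a sketch (the directory names and session factory bean are assumptions), max-fetch-size is set on the adapter definition:

```xml
<int-sftp:inbound-channel-adapter id="sftpInbound"
        channel="sftpChannel"
        session-factory="sftpSessionFactory"
        remote-directory="/remote"
        local-directory="local-dir"
        max-fetch-size="10">
    <int:poller fixed-delay="5000"/>
</int-sftp:inbound-channel-adapter>
```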
You can also provide a custom DirectoryScanner implementation to Inbound Channel Adapters via
the newly introduced scanner attribute.
The regex and pattern filters can now be configured to always pass directories. This can be useful when
using recursion in the outbound gateways.
All the Inbound Channel Adapters (streaming and synchronization-based) now use an appropriate
AbstractPersistentAcceptOnceFileListFilter implementation by default to prevent
duplicate downloads of remote files.
The FTP and SFTP outbound gateways now support the REPLACE_IF_MODIFIED FileExistsMode
when fetching remote files.
The (S)FTP streaming inbound channel adapters now add remote file information in a message header.
The FTP and SFTP outbound channel adapters, as well as PUT command of the outbound gateways,
now support InputStream as payload, too.
The inbound channel adapters can now build a file tree locally using the newly introduced
RecursiveDirectoryScanner, injected via the scanner option. These adapters can also now be
switched to use the WatchService instead.
The FtpOutboundGateway can now be supplied with workingDirExpression to change the FTP
client working directory for the current request message.
New filters for detecting incomplete remote files are now provided.
See Chapter 16, FTP/FTPS Adapters and Chapter 28, SFTP Adapters for more information.
Integration Properties
Since version 4.3.2 a new spring.integration.readOnly.headers global property has been
added to customize the list of headers which should not be copied to a newly created Message by the
MessageBuilder.
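This is a global property, typically placed in a /META-INF/spring.integration.properties file on the classpath. For example, to keep contentType from being copied to new messages:

```properties
spring.integration.readOnly.headers=contentType
```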
Stream Changes
There is a new option on the CharacterStreamReadingMessageSource to allow it to be used to
"pipe" stdin and publish an application event when the pipe is closed.
Barrier Changes
The BarrierMessageHandler now supports a discard channel to which late-arriving trigger
messages are sent.
AMQP Changes
The AMQP outbound endpoints now support setting a delay expression for when using the RabbitMQ
Delayed Message Exchange plugin.
Pollable AMQP-backed channels now block the poller thread for the poller’s configured
receiveTimeout (default 1 second).
Headers (such as contentType) that are added to message properties by the message converter are
now used in the final message; previously, which headers and message properties appeared in the final
message depended on the converter type. To override headers set by the converter, set the
headersMappedLast property to true.
HTTP Changes
The DefaultHttpHeaderMapper.userDefinedHeaderPrefix property is now an empty string
by default instead of X-.
MQTT Changes
Inbound messages are now mapped with headers RECEIVED_TOPIC, RECEIVED_QOS and
RECEIVED_RETAINED to avoid inadvertent propagation to outbound messages when an application is
relaying messages.
The outbound channel adapter now supports expressions for the topic, qos and retained properties; the
defaults remain the same.
STOMP Changes
The STOMP module has been changed to use ReactorNettyTcpStompClient, based on the
Project Reactor 3.1 and reactor-netty extension. The Reactor2TcpStompSessionManager
has been renamed to the ReactorNettyTcpStompSessionManager according to the
ReactorNettyTcpStompClient foundation.
Web Services Changes
The simple WebService inbound and outbound gateways can now deal with the complete
WebServiceMessage as a payload, allowing the manipulation of MTOM attachments.
Redis Changes
The RedisStoreWritingMessageHandler is now supplied with additional String-based setters for
SpEL expressions, for convenience with Java configuration. The zsetIncrementExpression can
now be configured on the RedisStoreWritingMessageHandler as well. In addition, the default for
this property has been changed from true to false, since the INCR option on the Redis ZADD
command is optional.
TCP Changes
You can now configure the TCP connection factories to support PushbackInputStream instances,
allowing deserializers to "unread" (push back) bytes after "reading ahead".
See Chapter 32, TCP and UDP Support for more information.
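The read-ahead/unread behavior comes from the JDK's own PushbackInputStream; the following standalone sketch uses plain java.io (it is not Spring Integration API) to show the mechanism a deserializer relies on:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.PushbackInputStream;

public class PushbackDemo {

    // Peek at the first byte (e.g. to detect a frame type), then push it
    // back so a downstream deserializer still sees the complete stream.
    public static byte[] peekThenReadAll(byte[] data) throws IOException {
        PushbackInputStream in =
                new PushbackInputStream(new ByteArrayInputStream(data), 1);
        int first = in.read();  // read ahead
        if (first != -1) {
            in.unread(first);   // "unread" the peeked byte
        }
        return in.readAllBytes();
    }
}
```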
Gemfire Changes
Jdbc Changes
Furthermore, the Spring Framework and its portfolio provide a comprehensive programming model for
building enterprise applications. Developers benefit from the consistency of this model and especially
the fact that it is based upon well-established best practices such as programming to interfaces and
favoring composition over inheritance. Spring’s simplified abstractions and powerful support libraries
boost developer productivity while simultaneously increasing the level of testability and portability.
Spring Integration is motivated by these same goals and principles. It extends the Spring programming
model into the messaging domain and builds upon Spring’s existing enterprise integration support to
provide an even higher level of abstraction. It supports message-driven architectures where inversion of
control applies to runtime concerns, such as when certain business logic should execute and where the
response should be sent. It supports routing and transformation of messages so that different transports
and different data formats can be integrated without impacting testability. In other words, the messaging
and integration concerns are handled by the framework, so business components are further isolated
from the infrastructure and developers are relieved of complex integration responsibilities.
As an extension of the Spring programming model, Spring Integration provides a wide variety of
configuration options including annotations, XML with namespace support, XML with generic "bean"
elements, and of course direct usage of the underlying API. That API is based upon well-defined
strategy interfaces and non-invasive, delegating adapters. Spring Integration’s design is inspired by the
recognition of a strong affinity between common patterns within Spring and the well-known Enterprise
Integration Patterns as described in the book of the same name by Gregor Hohpe and Bobby Woolf
(Addison Wesley, 2004). Developers who have read that book should be immediately comfortable with
the Spring Integration concepts and terminology.
• The framework should enforce separation of concerns between business logic and integration logic.
• Extension points should be abstract in nature but within well-defined boundaries to promote reuse
and portability.
Message
In Spring Integration, a Message is a generic wrapper for any Java object combined with metadata used
by the framework while handling that object. It consists of a payload and headers. The payload can be
of any type and the headers hold commonly required information such as id, timestamp, correlation id,
and return address. Headers are also used for passing values to and from connected transports. For
example, when creating a Message from a received File, the file name may be stored in a header to
be accessed by downstream components. Likewise, if a Message’s content is ultimately going to be
sent by an outbound Mail adapter, the various properties (to, from, cc, subject, etc.) may be configured
as Message header values by an upstream component. Developers can also store any arbitrary key-
value pairs in the headers.
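The concept can be sketched in plain Java. This SimpleMessage class is purely illustrative and is not Spring Integration's API (the real type is org.springframework.messaging.Message<T>), but it shows the shape: an immutable payload plus read-only headers, with id and timestamp supplied by the framework:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch only: a generic payload wrapper with read-only headers.
public final class SimpleMessage<T> {

    private final T payload;
    private final Map<String, Object> headers;

    public SimpleMessage(T payload, Map<String, Object> userHeaders) {
        this.payload = payload;
        Map<String, Object> h = new HashMap<>(userHeaders);
        h.put("id", UUID.randomUUID());                 // framework-supplied
        h.put("timestamp", System.currentTimeMillis()); // framework-supplied
        this.headers = Collections.unmodifiableMap(h);
    }

    public T payload() {
        return payload;
    }

    public Map<String, Object> headers() {
        return headers;
    }
}
```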
Message Channel
A Message Channel represents the "pipe" of a pipes-and-filters architecture. Producers send Messages
to a channel, and consumers receive Messages from a channel. The Message Channel therefore
decouples the messaging components, and also provides a convenient point for interception and
monitoring of Messages.
A Message Channel may follow either Point-to-Point or Publish/Subscribe semantics. With a Point-to-
Point channel, at most one consumer can receive each Message sent to the channel. Publish/Subscribe
channels, on the other hand, will attempt to broadcast each Message to all of its subscribers. Spring
Integration supports both of these.
Whereas "Point-to-Point" and "Publish/Subscribe" define the two options for how many consumers will
ultimately receive each Message, there is another important consideration: should the channel buffer
messages? In Spring Integration, Pollable Channels are capable of buffering Messages within a queue.
The advantage of buffering is that it allows for throttling the inbound Messages and thereby prevents
overloading a consumer. However, as the name suggests, this also adds some complexity, since a
consumer can only receive the Messages from such a channel if a poller is configured. On the other
hand, a consumer connected to a Subscribable Channel is simply Message-driven. The variety of
channel implementations available in Spring Integration will be discussed in detail in the section called
“Message Channel Implementations”.
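A buffering, point-to-point pollable channel can be sketched in plain Java (illustrative only; Spring Integration's QueueChannel is the real implementation, and its API differs):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Conceptual sketch: messages are buffered in a queue, and each message
// is delivered to at most one consumer that polls with a timeout.
public class TinyQueueChannel<T> {

    private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();

    public boolean send(T message) {
        return queue.offer(message);
    }

    // Returns null if no message arrives within the timeout.
    public T receive(long timeoutMillis) throws InterruptedException {
        return queue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```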
Message Endpoint
One of the primary goals of Spring Integration is to simplify the development of enterprise integration
solutions through inversion of control. This means that you should not have to implement consumers
and producers directly, and you should not even have to build Messages and invoke send or receive
operations on a Message Channel. Instead, you should be able to focus on your specific domain model
with an implementation based on plain Objects. Then, by providing declarative configuration, you can
"connect" your domain-specific code to the messaging infrastructure provided by Spring Integration. The
components responsible for these connections are Message Endpoints. This does not mean that you will
necessarily connect your existing application code directly. Any real-world enterprise integration solution
will require some amount of code focused upon integration concerns such as routing and transformation.
The important thing is to achieve separation of concerns between such integration logic and business
logic. In other words, as with the Model-View-Controller paradigm for web applications, the goal should
be to provide a thin but dedicated layer that translates inbound requests into service layer invocations,
and then translates service layer return values into outbound replies. The next section will provide an
overview of the Message Endpoint types that handle these responsibilities, and in upcoming chapters,
you will see how Spring Integration’s declarative configuration options provide a non-invasive way to
use each of these.
This section provides only a high-level description of the main endpoint types supported by Spring
Integration and their roles. The chapters that follow will elaborate and provide sample code as well as
configuration examples.
Transformer
A Message Transformer is responsible for converting a Message’s content or structure and returning
the modified Message. Probably the most common type of transformer is one that converts the payload
of the Message from one format to another (e.g. from XML Document to java.lang.String). Similarly, a
transformer may be used to add, remove, or modify the Message’s header values.
Filter
A Message Filter determines whether a Message should be passed to an output channel at all. This
simply requires a boolean test method that may check for a particular payload content type, a property
value, the presence of a header, etc. If the Message is accepted, it is sent to the output channel; if
not, it will be dropped (or, for a more severe implementation, an Exception could be thrown). Message
Filters are often used in conjunction with a Publish Subscribe channel, where multiple consumers may
receive the same Message and use the filter to narrow down the set of Messages to be processed
based on some criteria.
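The filter's contract reduces to a boolean test applied to each message. The following plain-Java sketch (hypothetical class names, not the Spring Integration API) illustrates the idea: an accepted message continues to the output channel, a rejected one is dropped.

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.Predicate;

// A minimal stand-in for a Message: payload plus headers.
class SimpleMessage {
    final Object payload;
    final Map<String, ?> headers;
    SimpleMessage(Object payload, Map<String, ?> headers) {
        this.payload = payload;
        this.headers = headers;
    }
}

// The filter wraps a boolean selector; dropped messages yield an empty result.
class MessageFilterSketch {
    private final Predicate<SimpleMessage> selector;
    MessageFilterSketch(Predicate<SimpleMessage> selector) {
        this.selector = selector;
    }

    // Returns the message for the output channel, or empty if it is dropped.
    Optional<SimpleMessage> filter(SimpleMessage message) {
        return selector.test(message) ? Optional.of(message) : Optional.empty();
    }
}
```

A selector might test payload type, a header's presence, or a property value, exactly as described above.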
Note
Be careful not to confuse the generic use of "filter" within the Pipes-and-Filters architectural pattern
with this specific endpoint type that selectively narrows down the Messages flowing between two
channels. The Pipes-and-Filters concept of "filter" matches more closely with Spring Integration’s
Message Endpoint: any component that can be connected to Message Channel(s) in order to
send and/or receive Messages.
Router
A Message Router is responsible for deciding what channel or channels should receive the Message
next (if any). Typically the decision is based upon the Message’s content and/or metadata available in the
Message Headers. A Message Router is often used as a dynamic alternative to a statically configured
output channel on a Service Activator or other endpoint capable of sending reply Messages. Likewise,
a Message Router provides a proactive alternative to the reactive Message Filters used by multiple
subscribers as described above.
Splitter
A Splitter is another type of Message Endpoint whose responsibility is to accept a Message from its input
channel, split that Message into multiple Messages, and then send each of those to its output channel.
This is typically used for dividing a "composite" payload object into a group of Messages containing the
sub-divided payloads.
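The splitting of a composite payload can be sketched in plain Java as follows (hypothetical classes, not the framework's Splitter). Recording the group size and each part's position mirrors the kind of sequence metadata that lets a downstream aggregator reassemble the group later.

```java
import java.util.ArrayList;
import java.util.List;

// One part of a split group, carrying sequence metadata for later re-aggregation.
class SplitPart {
    final Object payload;
    final int sequenceNumber; // 1-based position within the group
    final int sequenceSize;   // total number of parts in the group
    SplitPart(Object payload, int sequenceNumber, int sequenceSize) {
        this.payload = payload;
        this.sequenceNumber = sequenceNumber;
        this.sequenceSize = sequenceSize;
    }
}

// Splits a composite payload (here, a List) into one part per element.
class CompositeSplitter {
    static List<SplitPart> split(List<?> compositePayload) {
        List<SplitPart> parts = new ArrayList<>();
        for (int i = 0; i < compositePayload.size(); i++) {
            parts.add(new SplitPart(compositePayload.get(i), i + 1, compositePayload.size()));
        }
        return parts;
    }
}
```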
Aggregator
Basically a mirror-image of the Splitter, the Aggregator is a type of Message Endpoint that receives
multiple Messages and combines them into a single Message. In fact, Aggregators are often
downstream consumers in a pipeline that includes a Splitter. Technically, the Aggregator is more
complex than a Splitter, because it is required to maintain state (the Messages to-be-aggregated), to
decide when the complete group of Messages is available, and to timeout if necessary. Furthermore, in
case of a timeout, the Aggregator needs to know whether to send the partial results or to discard them to
a separate channel. Spring Integration provides a CorrelationStrategy, a ReleaseStrategy and
configurable settings for: timeout, whether to send partial results upon timeout, and a discard channel.
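The aggregator's state-keeping described above can be sketched in plain Java (an illustrative simplification, not the framework's Aggregator): messages are grouped by a correlation key (the role of a CorrelationStrategy), and a group is released once a simple size-based condition is met (the role of a ReleaseStrategy). Timeouts and the discard channel are omitted here.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Groups payloads by correlation key and releases a group at the target size.
class SimpleAggregator {
    private final Map<String, List<Object>> groups = new HashMap<>();
    private final int releaseSize;

    SimpleAggregator(int releaseSize) {
        this.releaseSize = releaseSize;
    }

    // Returns the complete group when the release condition is met, empty otherwise.
    Optional<List<Object>> add(String correlationKey, Object payload) {
        List<Object> group = groups.computeIfAbsent(correlationKey, k -> new ArrayList<>());
        group.add(payload);
        if (group.size() >= releaseSize) {
            groups.remove(correlationKey);
            return Optional.of(group);
        }
        return Optional.empty();
    }
}
```

Even this toy version shows why the aggregator is inherently stateful: incomplete groups must be retained between messages, which is precisely what makes timeout and partial-result policies necessary in the real component.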
Service Activator
A Service Activator is a generic endpoint for connecting a service instance to the messaging system.
The input Message Channel must be configured, and if the service method to be invoked is capable of
returning a value, an output Message Channel may also be provided.
Note
The output channel is optional, since each Message may also provide its own Return Address
header. This same rule applies for all consumer endpoints.
The Service Activator invokes an operation on some service object to process the request Message,
extracting the request Message’s payload and converting if necessary (if the method does not expect
a Message-typed parameter). Whenever the service object’s method returns a value, that return value
will likewise be converted to a reply Message if necessary (if it’s not already a Message). That reply
Message is sent to the output channel. If no output channel has been configured, then the reply will be
sent to the channel specified in the Message’s "return address" if available.
A request-reply "Service Activator" endpoint connects a target object’s method to input and output
Message Channels.
Channel Adapter
A Channel Adapter is an endpoint that connects a Message Channel to some other system or transport.
Channel Adapters may be either inbound or outbound. Typically, the Channel Adapter will do some
mapping between the Message and whatever object or resource is received-from or sent-to the other
system (File, HTTP Request, JMS Message, etc). Depending on the transport, the Channel Adapter
may also populate or extract Message header values. Spring Integration provides a number of Channel
Adapters, and they will be described in upcoming chapters.
Figure 3.5. An inbound "Channel Adapter" endpoint connects a source system to a MessageChannel.
Note
Message sources can be Pollable (e.g. POP3) or Message-Driven (e.g. IMAP Idle); in this
diagram, this is depicted by the "clock" symbol and the solid arrow (poll) and the dotted arrow
(message-driven).
Figure 3.6. An outbound "Channel Adapter" endpoint connects a MessageChannel to a target system.
Note
The first time a Spring Integration namespace element is encountered, the framework automatically
declares a number of beans that are used to support the runtime environment (task scheduler, implicit
channel creator, etc).
Important
Starting with version 4.0, the @EnableIntegration annotation has been introduced to allow
the registration of Spring Integration infrastructure beans (see the JavaDocs). This annotation is
required when only Java and annotation configuration is used, e.g. with Spring Boot and/or
Spring Integration Messaging Annotation support and the Spring Integration Java DSL with no XML
integration configuration.
The @EnableIntegration annotation is also useful when you have a parent context with no Spring
Integration components and 2 or more child contexts that use Spring Integration. It enables these
common components to be declared once only, in the parent context.
The @EnableIntegration annotation registers many infrastructure components with the application
context:
• Registers some built-in beans, e.g. errorChannel and its LoggingHandler, a taskScheduler for
pollers, the jsonPath SpEL function, etc.;
• Adds several BeanFactoryPostProcessor s to enhance the BeanFactory for the global and default
integration environment;
• Adds several BeanPostProcessor s to enhance and/or convert and wrap particular beans for
integration purposes;
• Adds annotation processors to parse Messaging Annotations and register components for them
with the application context.
Also see Section E.6, “Annotation Support” for more information about Messaging Annotations.
If you do expose the framework to your classes, there are some considerations that need to be taken
into account, especially during application startup; some of these are listed here.
When using XML configuration, to avoid getting false schema validation errors, you should use a "Spring-
aware" IDE, such as the Spring Tool Suite (STS) (or eclipse with the Spring IDE plugins) or IntelliJ IDEA,
for example. These IDEs know how to resolve the correct XML schema from the classpath (using the
META-INF/spring.schemas file in the jar(s)). When using STS, or eclipse with the plugin, be sure
to enable Spring Project Nature on the project.
The schemas hosted on the internet for certain legacy modules (those that existed in version 1.0) are the
1.0 versions for compatibility reasons; if your IDE uses these schemas, you will likely see false errors.
Important
This schema is for the 1.0 version of Spring Integration Core. We cannot update it to the current
schema because that will break any applications using 1.0.3 or lower. For subsequent versions,
the unversioned schema is resolved from the classpath and obtained from the jar. Please refer
to github:
https://github.com/spring-projects/spring-integration/tree/master/spring-integration-core/src/
main/resources/org/springframework/integration/config
• core (spring-integration.xsd)
• file
• http
• jms
• rmi
• security
• stream
• ws
• xml
With XML configuration and Spring Integration Namespace support, the XML parsers hide how
target beans are declared and wired together. For Java and annotation configuration, it is important to
understand the Framework API for target end-user applications.
The first class citizens for EIP implementation are Message, Channel and Endpoint (see Section 3.3,
“Main Components” above). Their implementations (contracts) are:
• org.springframework.messaging.Message
• org.springframework.messaging.MessageChannel
• org.springframework.integration.endpoint.AbstractEndpoint
The first two are simple enough to understand how to implement, configure and use, respectively; the
last one deserves more review.
The AbstractEndpoint is widely used throughout the Framework for different component
implementations; its main implementations are:
• PollingConsumer, when the inputChannel is a PollableChannel;
• EventDrivenConsumer, when the inputChannel is a SubscribableChannel.
When using Messaging Annotations and/or the Java DSL, you shouldn’t worry about these components,
because the Framework produces them automatically via the appropriate annotations and
BeanPostProcessor s. When building components manually, the ConsumerEndpointFactoryBean
should be used to help determine the target AbstractEndpoint consumer implementation to create,
based on the provided inputChannel property.
@Bean
@ServiceActivator(inputChannel = "input")
public MessageHandler sendChatMessageHandler(XMPPConnection xmppConnection) {
    ChatMessageSendingMessageHandler handler = new ChatMessageSendingMessageHandler(xmppConnection);
    return handler;
}
The MessageHandler implementations represent the outbound and processing part of the message
flow.
The inbound message flow side has its own components, which are divided into
polling and listening behaviors. The listening (message-driven) components are simple
and typically require only one target class implementation to be ready to produce
messages. Listening components can be one-way MessageProducerSupport implementations,
e.g. AbstractMqttMessageDrivenChannelAdapter and ImapIdleChannelAdapter, or
request-reply MessagingGatewaySupport implementations, e.g. AmqpInboundGateway and
AbstractWebServiceInboundGateway.
Polling inbound endpoints are for those protocols which don’t provide a listener API or aren’t intended
for such behavior, for example any file-based protocol (such as FTP), or any database (RDBMS or
NoSQL), etc.
These inbound endpoints consist of two components: the poller configuration, to initiate the
polling task periodically, and a message source class to read data from the target protocol and
produce a message for the downstream integration flow. The first component, for the poller
configuration, is the SourcePollingChannelAdapter. It is one more AbstractEndpoint
implementation, but especially for polling to initiate an integration flow. Typically, with the Messaging
Annotations or the Java DSL, you shouldn’t worry about this class; the Framework produces a bean for
it, based on the @InboundChannelAdapter configuration or a Java DSL builder spec.
Message source components are more important for the target application development
and they all implement the MessageSource interface, e.g. MongoDbMessageSource and
AbstractTwitterMessageSource. With that in mind, our config for reading data from an RDBMS
table with JDBC may look like:
@Bean
@InboundChannelAdapter(value = "fooChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<?> storedProc(DataSource dataSource) {
    return new JdbcPollingChannelAdapter(dataSource, "SELECT * FROM foo where status = 0");
}
You can find all the required inbound and outbound classes for the target protocols in the
particular Spring Integration module, in most cases in the respective package. For example, the
spring-integration-websocket adapters are:
• o.s.i.websocket.inbound.WebSocketInboundChannelAdapter - a
MessageProducerSupport implementation that listens for frames on the socket and produces
messages to the channel;
If you are familiar with Spring Integration XML configuration, starting with version 4.3, we provide
information in the XSD element definitions about which target classes are used to declare beans for
the adapter or gateway, for example:
<xsd:element name="outbound-async-gateway">
    <xsd:annotation>
        <xsd:documentation>
            Configures a Consumer Endpoint for the 'o.s.i.amqp.outbound.AsyncAmqpOutboundGateway'
            that will publish an AMQP Message to the provided Exchange and expect a reply Message.
            The sending thread returns immediately; the reply is sent asynchronously; uses
            'AsyncRabbitTemplate.sendAndReceive()'.
        </xsd:documentation>
    </xsd:annotation>
</xsd:element>
@ServiceActivator
public String myService(String payload) { ... }
In this case, the framework will extract a String payload, invoke your method, and wrap the result in
a message to send to the next component in the flow (the original headers will be copied to the new
message). In fact, if you are using XML configuration, you don’t even need the @ServiceActivator
annotation; you can omit the method attribute as long as there is no ambiguity in the public methods
on the class.
@ServiceActivator
public String myService(@Payload String payload, @Header("foo") String fooHeader) { ... }
@ServiceActivator
public String myService(@Payload("payload.foo") String foo, @Header("bar.baz") String barbaz) { ... }
Because many and varied POJO method invocations are available, versions prior to 5.0 used SpEL
to invoke the POJO methods. SpEL (even interpreted) is usually "fast enough" for these operations,
when compared to the actual work usually done in the methods. However, starting with version 5.0,
the org.springframework.messaging.handler.invocation.InvocableHandlerMethod is
used by default, when possible. This technique is usually faster to execute than interpreted SpEL and
is consistent with other Spring messaging projects. The InvocableHandlerMethod is similar to the
technique used to invoke controller methods in Spring MVC. There are certain methods that are still
always invoked using SpEL; examples include annotated parameters with dereferenced properties as
discussed above. This is because SpEL has the capability to navigate a property path.
There may be some other corner cases that we haven’t considered that also won’t work with
InvocableHandlerMethod s. For this reason, we automatically fall back to using SpEL in those
cases.
If you wish, you can also set up your POJO method such that it always uses SpEL, with the
UseSpelInvoker annotation:
@UseSpelInvoker(compilerMode = "IMMEDIATE")
public void bar(String bar) { ... }
4. Messaging Channels
4.1 Message Channels
While the Message plays the crucial role of encapsulating data, it is the MessageChannel that
decouples message producers from message consumers.
When sending a message, the return value will be true if the message is sent successfully. If the send
call times out or is interrupted, then it will return false.
PollableChannel
Since Message Channels may or may not buffer Messages (as discussed in the overview), there are
two sub-interfaces defining the buffering (pollable) and non-buffering (subscribable) channel behavior.
Here is the definition of PollableChannel:

public interface PollableChannel extends MessageChannel {

    Message<?> receive();

    Message<?> receive(long timeout);

}
Similar to the send methods, when receiving a message, the return value will be null in the case of a
timeout or interrupt.
SubscribableChannel
The SubscribableChannel base interface is implemented by channels that send Messages directly
to their subscribed MessageHandler s. Therefore, they do not provide receive methods for polling, but
instead define methods for managing those subscribers:

public interface SubscribableChannel extends MessageChannel {

    boolean subscribe(MessageHandler handler);

    boolean unsubscribe(MessageHandler handler);

}
Spring Integration provides several different Message Channel implementations. Each is briefly
described in the sections below.
PublishSubscribeChannel
Prior to version 3.0, invoking the send method on a PublishSubscribeChannel that had
no subscribers returned false. When used in conjunction with a MessagingTemplate, a
MessageDeliveryException was thrown. Starting with version 3.0, the behavior has changed such
that a send is always considered successful if at least the minimum subscribers are present (and
successfully handle the message). This behavior can be modified by setting the minSubscribers
property, which defaults to 0.
Note
If a TaskExecutor is used, only the presence of the correct number of subscribers is used for
this determination, because the actual handling of the message is performed asynchronously.
QueueChannel
A channel that has not reached its capacity limit will store messages in its internal queue, and the
send() method will return immediately even if no receiver is ready to handle the message. If the queue
has reached capacity, then the sender will block until room is available. Or, if using the send call that
accepts a timeout, it will block until either room is available or the timeout period elapses, whichever
occurs first. Likewise, a receive call will return immediately if a message is available on the queue, but
if the queue is empty, then a receive call may block until either a message is available or the timeout
elapses. In either case, it is possible to force an immediate return regardless of the queue’s state by
passing a timeout value of 0. Note however, that calls to the no-arg versions of send() and receive()
will block indefinitely.
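The send and receive semantics just described can be sketched with a plain java.util.concurrent.LinkedBlockingQueue (illustrative only, not the framework's QueueChannel): a send succeeds immediately while there is room, a full queue makes a timed send return false once the timeout elapses, and a timeout of 0 forces an immediate return in either direction.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// A capacity-1 queue makes the "full" and "empty" cases easy to demonstrate.
class QueueChannelSketch {
    private final LinkedBlockingQueue<Object> queue = new LinkedBlockingQueue<>(1);

    // Returns true if the message was stored within the timeout; a timeout of 0
    // returns immediately regardless of the queue's state.
    boolean send(Object message, long timeoutMillis) {
        try {
            return queue.offer(message, timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // Returns the next message, or null on timeout/interrupt; a timeout of 0
    // likewise returns immediately.
    Object receive(long timeoutMillis) {
        try {
            return queue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```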
PriorityChannel
Whereas the QueueChannel enforces first-in/first-out (FIFO) ordering, the PriorityChannel is an
alternative implementation that allows for Messages to be ordered within the channel based upon a
priority.
RendezvousChannel
The RendezvousChannel enables a "direct-handoff" scenario where a sender will block until another
party invokes the channel’s receive() method or vice-versa. Internally, this implementation is
quite similar to the QueueChannel except that it uses a SynchronousQueue (a zero-capacity
implementation of BlockingQueue). This works well in situations where the sender and receiver
are operating in different threads but simply dropping the message in a queue asynchronously is not
appropriate. In other words, with a RendezvousChannel at least the sender knows that some receiver
has accepted the message, whereas with a QueueChannel, the message would have been stored to
the internal queue and potentially never received.
Tip
Keep in mind that all of these queue-based channels are storing messages in-memory only by
default. When persistence is required, you can either provide a message-store attribute within
the queue element to reference a persistent MessageStore implementation, or you can replace
the local channel with one that is backed by a persistent broker, such as a JMS-backed channel
or Channel Adapter. The latter option allows you to take advantage of any JMS provider’s
implementation for message persistence, and it will be discussed in Chapter 21, JMS Support.
However, when buffering in a queue is not necessary, the simplest approach is to rely upon the
DirectChannel discussed next.
The RendezvousChannel is also useful for implementing request-reply operations. The sender
can create a temporary, anonymous instance of RendezvousChannel which it then sets as
the replyChannel header when building a Message. After sending that Message, the sender can
immediately call receive (optionally providing a timeout value) in order to block while waiting for a reply
Message. This is very similar to the implementation used internally by many of Spring Integration’s
request-reply components.
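The request-reply idiom above can be sketched with a plain java.util.concurrent.SynchronousQueue standing in for the temporary, anonymous reply channel (names and structure are illustrative, not the framework's internals): the sender blocks in a timed receive until some receiver hands back a reply through the rendezvous.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

class RequestReplySketch {
    // Sends a request and blocks (up to the timeout) for the reply.
    static String requestReply(String request, long timeoutMillis) {
        // The temporary "replyChannel": a zero-capacity rendezvous.
        SynchronousQueue<String> replyChannel = new SynchronousQueue<>();
        // A service thread stands in for the downstream consumer that reads
        // the replyChannel header and sends its answer there.
        Thread service = new Thread(() -> {
            try {
                replyChannel.put("reply-to-" + request);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        service.start();
        try {
            // The sender immediately blocks waiting for the reply (direct handoff).
            String reply = replyChannel.poll(timeoutMillis, TimeUnit.MILLISECONDS);
            service.join();
            return reply;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```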
DirectChannel
The DirectChannel has point-to-point semantics but otherwise is more similar to the
PublishSubscribeChannel than any of the queue-based channel implementations described
above. It implements the SubscribableChannel interface instead of the PollableChannel
interface, so it dispatches Messages directly to a subscriber. As a point-to-point channel, however,
it differs from the PublishSubscribeChannel in that it will only send each Message to a single
subscribed MessageHandler.
In addition to being the simplest point-to-point channel option, one of its most important features is
that it enables a single thread to perform the operations on "both sides" of the channel. For example,
if a handler is subscribed to a DirectChannel, then sending a Message to that channel will trigger
invocation of that handler’s handleMessage(Message) method directly in the sender’s thread, before
the send() method invocation can return.
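That same-thread dispatch can be demonstrated with a minimal sketch (illustrative, not the framework's DirectChannel): the subscribed handler runs directly on the caller's thread, before send() returns.

```java
import java.util.function.Consumer;

// Point-to-point, subscribable, synchronous: send() invokes the single
// subscribed handler directly in the sender's thread.
class DirectChannelSketch {
    private Consumer<Object> handler;

    void subscribe(Consumer<Object> handler) {
        this.handler = handler;
    }

    void send(Object message) {
        // Dispatch happens synchronously, in the sender's thread.
        handler.accept(message);
    }
}
```

Because the handler runs on the sender's thread, any transaction active around the send() call naturally spans the handler's work too, which is the motivation discussed next.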
The key motivation for providing a channel implementation with this behavior is to support transactions
that must span across the channel while still benefiting from the abstraction and loose coupling that the
channel provides. If the send call is invoked within the scope of a transaction, then the outcome of the
handler’s invocation (e.g. updating a database record) will play a role in determining the ultimate result
of that transaction (commit or rollback).
Note
Since the DirectChannel is the simplest option and does not add any additional overhead that
would be required for scheduling and managing the threads of a poller, it is the default channel
type within Spring Integration. The general idea is to define the channels for an application and
then to consider which of those need to provide buffering or to throttle input, and then modify those
to be queue-based PollableChannels. Likewise, if a channel needs to broadcast messages,
it should not be a DirectChannel but rather a PublishSubscribeChannel. Below you will
see how each of these can be configured.
The DirectChannel internally delegates to a Message Dispatcher to invoke its subscribed Message
Handlers, and that dispatcher can have a load-balancing strategy exposed via the load-balancer or load-
balancer-ref attributes (mutually exclusive). The load-balancing strategy is used by the Message
Dispatcher to help determine how Messages are distributed amongst Message Handlers in the case
that there are multiple Message Handlers subscribed to the same channel. As a convenience, the
load-balancer attribute exposes an enumeration of values pointing to pre-existing implementations of
LoadBalancingStrategy. The "round-robin" (load-balances across the handlers in rotation) and
"none" (for cases where one wants to explicitly disable load balancing) are the only available values.
Other strategy implementations may be added in future versions. However, since version 3.0 you can
provide your own implementation of the LoadBalancingStrategy and inject it using the load-balancer-
ref attribute, which should point to a bean that implements LoadBalancingStrategy.
<int:channel id="lbRefChannel">
<int:dispatcher load-balancer-ref="lb"/>
</int:channel>
The load-balancing also works in combination with a boolean failover property. If the "failover" value
is true (the default), then the dispatcher will fall back to any subsequent handlers as necessary when
preceding handlers throw Exceptions. The order is determined by an optional order value defined on
the handlers themselves or, if no such value exists, the order in which the handlers are subscribed.
If a certain situation requires that the dispatcher always try to invoke the first handler, then fall back
in the same fixed order sequence every time an error occurs, no load-balancing strategy should be
provided. In other words, the dispatcher still supports the failover boolean property even when no load-
balancing is enabled. Without load-balancing, however, the invocation of handlers will always begin with
the first according to their order. For example, this approach works well when there is a clear definition
of primary, secondary, tertiary, and so on. When using the namespace support, the "order" attribute on
any endpoint will determine that order.
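The interplay of round-robin selection and failover can be sketched as follows (a simplified illustration, not the framework's LoadBalancingStrategy): a rotating index picks the starting handler, and with failover enabled the dispatcher tries the subsequent handlers in sequence when one throws an exception.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

class RoundRobinDispatcher {
    private final List<Consumer<Object>> handlers;
    private final boolean failover;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinDispatcher(List<Consumer<Object>> handlers, boolean failover) {
        this.handlers = handlers;
        this.failover = failover;
    }

    void dispatch(Object message) {
        // Round-robin: each dispatch starts one position further along.
        int start = Math.floorMod(next.getAndIncrement(), handlers.size());
        RuntimeException lastError = null;
        for (int i = 0; i < handlers.size(); i++) {
            try {
                handlers.get((start + i) % handlers.size()).accept(message);
                return; // first successful handler wins
            } catch (RuntimeException e) {
                lastError = e;
                if (!failover) {
                    break; // failover disabled: do not try the next handler
                }
            }
        }
        throw lastError;
    }
}
```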
Note
Keep in mind that load-balancing and failover only apply when a channel has more than one
subscribed Message Handler. When using the namespace support, this means that more than
one endpoint shares the same channel reference in the "input-channel" attribute.
ExecutorChannel
The ExecutorChannel is a point-to-point channel that supports the same dispatcher configuration
as DirectChannel (load-balancing strategy and the failover boolean property). The key difference
between these two dispatching channel types is that the ExecutorChannel delegates to an instance
of TaskExecutor to perform the dispatch. This means that the send method typically will not block,
but it also means that the handler invocation may not occur in the sender’s thread. It therefore does not
support transactions spanning the sender and receiving handler.
Tip
Note that there are occasions where the sender may block. For example, when using
a TaskExecutor with a rejection-policy that throttles back on the client (such as the
ThreadPoolExecutor.CallerRunsPolicy), the sender’s thread will execute the method
directly anytime the thread pool is at its maximum capacity and the executor’s work queue is full.
Since that situation would only occur in a non-predictable way, that obviously cannot be relied
upon for transactions.
Scoped Channel
Spring Integration 1.0 provided a ThreadLocalChannel implementation, but that has been removed
as of 2.0. Now, there is a more general way for handling the same requirement by simply adding a
"scope" attribute to a channel. The value of the attribute can be any name of a Scope that is available
within the context. For example, in a web environment, certain Scopes are available, and any custom
Scope implementations can be registered with the context. Here’s an example of a ThreadLocal-based
scope being applied to a channel, including the registration of the Scope itself.
<bean class="org.springframework.beans.factory.config.CustomScopeConfigurer">
    <property name="scopes">
        <map>
            <entry key="thread" value="org.springframework.context.support.SimpleThreadScope" />
        </map>
    </property>
</bean>
The channel above also delegates to a queue internally, but the channel is bound to the current thread,
so the contents of the queue are as well. That way the thread that sends to the channel will later be able to
receive those same Messages, but no other thread would be able to access them. While thread-scoped
channels are rarely needed, they can be useful in situations where DirectChannels are being used
to enforce a single thread of operation but any reply Messages should be sent to a "terminal" channel.
If that terminal channel is thread-scoped, the original sending thread can collect its replies from it.
Now, since any channel can be scoped, you can define your own scopes in addition to Thread Local.
Channel Interceptors
One of the advantages of a messaging architecture is the ability to provide common behavior and
capture meaningful information about the messages passing through the system in a non-invasive way.
Since the Message s are being sent to and received from MessageChannels, those channels provide
an opportunity for intercepting the send and receive operations. The ChannelInterceptor strategy
interface provides methods for each of those operations:
After implementing the interface, registering the interceptor with a channel is just a matter of calling:
channel.addInterceptor(someChannelInterceptor);
The methods that return a Message instance can be used for transforming the Message or can return
null to prevent further processing (of course, any of the methods can throw a RuntimeException). Also,
the preReceive method can return false to prevent the receive operation from proceeding.
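The preSend contract just described can be sketched with a minimal channel (illustrative names, not the framework classes): the interceptor may return the message (possibly transformed) to continue, or null to silently stop the send.

```java
import java.util.function.UnaryOperator;

// A channel with a single preSend interceptor: null from the interceptor
// vetoes the send; a non-null return (possibly transformed) is delivered.
class InterceptingChannelSketch {
    private final UnaryOperator<String> preSend;
    private String lastDelivered;

    InterceptingChannelSketch(UnaryOperator<String> preSend) {
        this.preSend = preSend;
    }

    boolean send(String message) {
        String intercepted = preSend.apply(message);
        if (intercepted == null) {
            return false; // interceptor prevented further processing
        }
        lastDelivered = intercepted;
        return true;
    }

    String lastDelivered() {
        return lastDelivered;
    }
}
```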
Note
Keep in mind that receive() calls are only relevant for PollableChannels. In fact the
SubscribableChannel interface does not even define a receive() method. The reason
for this is that when a Message is sent to a SubscribableChannel it will be sent directly to
one or more subscribers depending on the type of channel (e.g. a PublishSubscribeChannel
sends to all of its subscribers). Therefore, the preReceive(..), postReceive(..) and
afterReceiveCompletion(..) interceptor methods are only invoked when the interceptor is
applied to a PollableChannel.
Spring Integration also provides an implementation of the Wire Tap pattern. It is a simple interceptor
that sends the Message to another channel without otherwise altering the existing flow. It can be very
useful for debugging and monitoring. An example is shown in the section called “Wire Tap”.
@Override
public Message<?> preSend(Message<?> message, MessageChannel channel) {
    sendCount.incrementAndGet();
    return message;
}
Tip
The order of invocation for the interceptor methods depends on the type of channel. As described
above, the queue-based channels are the only ones where the receive method is intercepted in
the first place. Additionally, the relationship between send and receive interception depends on
the timing of separate sender and receiver threads. For example, if a receiver is already blocked
while waiting for a message the order could be: preSend, preReceive, postReceive, postSend.
However, if a receiver polls after the sender has placed a message on the channel and already
returned, the order would be: preSend, postSend, (some-time-elapses) preReceive, postReceive.
The time that elapses in such a case depends on a number of factors and is therefore generally
unpredictable (in fact, the receive may never happen!). Obviously, the type of queue also plays a
role (e.g. rendezvous vs. priority). The bottom line is that you cannot rely on the order beyond the
fact that preSend will precede postSend and preReceive will precede postReceive.
Starting with Spring Framework 4.1 and Spring Integration 4.1, the ChannelInterceptor provides
new methods - afterSendCompletion() and afterReceiveCompletion(). They are invoked
after send()/receive() calls, regardless of any exception that is raised, thus allowing for resource
cleanup. Note, the Channel invokes these methods on the ChannelInterceptor List in the reverse order
of the initial preSend()/preReceive() calls.
MessagingTemplate
As you will see when the endpoints and their various configuration options are introduced, Spring
Integration provides a foundation for messaging components that enables non-invasive invocation of
your application code from the messaging system. However, sometimes it is necessary to invoke the
messaging system from your application code. For convenience when implementing such use-cases,
Spring Integration provides a MessagingTemplate that supports a variety of operations across the
Message Channels, including request/reply scenarios. For example, it is possible to send a request and
wait for a reply.
In that example, a temporary anonymous channel would be created internally by the template. The
sendTimeout and receiveTimeout properties may also be set on the template, and other exchange types
are also supported.
Note
A less invasive approach that allows you to invoke simple interfaces with payload and/or
header values instead of Message instances is described in the section called “Enter the
GatewayProxyFactoryBean”.
To create a Message Channel instance, you can use the <channel/> element:
<int:channel id="exampleChannel"/>
The default channel type is Point to Point. To create a Publish Subscribe channel, use the <publish-
subscribe-channel/> element:
<int:publish-subscribe-channel id="exampleChannel"/>
When using the <channel/> element without any sub-elements, it will create a DirectChannel
instance (a SubscribableChannel).
However, you can alternatively provide a variety of <queue/> sub-elements to create any of the pollable
channel types (as described in the section called “Message Channel Implementations”). Examples of
each are shown below.
DirectChannel Configuration
<int:channel id="directChannel"/>
A default channel will have a round-robin load-balancer and will also have failover enabled (See the
discussion in the section called “DirectChannel” for more detail). To disable one or both of these, add
a <dispatcher/> sub-element and configure the attributes:
<int:channel id="failFastChannel">
<int:dispatcher failover="false"/>
</int:channel>
<int:channel id="channelWithFixedOrderSequenceFailover">
<int:dispatcher load-balancer="none"/>
</int:channel>
There are times when a consumer can only process a particular type of payload, forcing you to ensure the payload type of the input Messages. Of course, the first thing that comes to mind is the
Message Filter. However, all that a Message Filter will do is filter out Messages that are not compliant with
the requirements of the consumer. Another way would be to use a Content Based Router and route
Messages with non-compliant data-types to specific Transformers to enforce transformation/conversion
to the required data-type. This of course would work, but a simpler way of accomplishing the same thing
is to apply the Datatype Channel pattern. You can use separate Datatype Channels for each specific
payload data-type.
To create a Datatype Channel that only accepts messages containing a certain payload type, provide
the fully-qualified class name in the channel element’s datatype attribute:
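For example, a channel restricted to java.lang.Number payloads can be declared as:

```xml
<int:channel id="numberChannel" datatype="java.lang.Number"/>
```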
Note that the type check passes for any type that is assignable to the channel’s datatype. In other
words, the "numberChannel" above would accept messages whose payload is java.lang.Integer
or java.lang.Double. Multiple types can be provided as a comma-delimited list:
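For instance, a channel accepting either String or Number payloads (the id here is illustrative):

```xml
<int:channel id="stringOrNumberChannel" datatype="java.lang.String,java.lang.Number"/>
```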
So the numberChannel above will only accept Messages with a data-type of java.lang.Number.
But what happens if the payload of the Message is not of the required type? It depends on whether
you have defined a bean named integrationConversionService that is an instance of Spring’s
Conversion Service. If not, then an Exception would be thrown immediately; but if you do have an
integrationConversionService bean defined, it will be used in an attempt to convert the Message’s payload to the acceptable type.
You can even register custom converters. For example, let’s say you are sending a Message with a
String payload to the numberChannel we configured above.
Typically this would be a perfectly legal operation; however, since we are using a Datatype Channel, the
result of such an operation would be an exception. And rightfully so, since we require the payload type to be a Number while sending a String. So we
need something to convert a String to a Number. All we need to do is implement a Converter.
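A minimal converter sketch (registered below as the strToInt bean):

```java
public static class StringToIntegerConverter implements Converter<String, Integer> {
    public Integer convert(String source) {
        return Integer.parseInt(source);
    }
}
```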
<int:converter ref="strToInt"/>
When the converter element is parsed, it will create the "integrationConversionService" bean on-demand
if one is not already defined. With that Converter in place, the send operation would now be successful
since the Datatype Channel will use that Converter to convert the String payload to an Integer.
Note
For more information regarding Payload Type Conversion, please read the section called “Payload
Type Conversion”.
QueueChannel Configuration
To create a QueueChannel, use the <queue/> sub-element. You may specify the channel’s capacity:
<int:channel id="queueChannel">
<int:queue capacity="25"/>
</int:channel>
Note
If you do not provide a value for the capacity attribute on this <queue/> sub-element, the resulting
queue will be unbounded. To avoid issues such as OutOfMemoryErrors, it is highly recommended
to set an explicit value for a bounded queue.
Since a QueueChannel provides the capability to buffer Messages, but does so in-memory only
by default, it also introduces a possibility that Messages could be lost in the event of a system
failure. To mitigate this risk, a QueueChannel may be backed by a persistent implementation
of the MessageGroupStore strategy interface. For more details on MessageGroupStore and
MessageStore see Section 10.4, “Message Store”.
Important
The capacity attribute is not allowed when the message-store attribute is used.
When a QueueChannel receives a Message, it will add it to the Message Store, and when a Message
is polled from a QueueChannel, it is removed from the Message Store.
By default, a QueueChannel stores its Messages in an in-memory Queue and can therefore lead to the
lost message scenario mentioned above. However Spring Integration provides persistent stores, such
as the JdbcChannelMessageStore.
You can configure a Message Store for any QueueChannel by adding the message-store attribute
as shown in the next example.
<int:channel id="dbBackedChannel">
<int:queue message-store="channelStore"/>
</int:channel>
The Spring Integration JDBC module also provides schema DDL for a number of popular databases.
These schemas are located in the org.springframework.integration.jdbc.store.channel package of that
module (spring-integration-jdbc).
Important
One important feature is that with any transactional persistent store (e.g.,
JdbcChannelMessageStore), as long as the poller has a transaction configured, a Message
removed from the store will only be permanently removed if the transaction completes
successfully, otherwise the transaction will roll back and the Message will not be lost.
Many other implementations of the Message Store will be available as the growing number of Spring
projects related to "NoSQL" data stores provide the underlying support. Of course, you can always
provide your own implementation of the MessageGroupStore interface if you cannot find one that meets
your particular needs.
@Bean
public BasicMessageGroupStore mongoDbChannelMessageStore(MongoDbFactory mongoDbFactory) {
MongoDbChannelMessageStore store = new MongoDbChannelMessageStore(mongoDbFactory);
store.setPriorityEnabled(true);
return store;
}
@Bean
public PollableChannel priorityQueue(BasicMessageGroupStore mongoDbChannelMessageStore) {
return new PriorityChannel(new MessageGroupQueue(mongoDbChannelMessageStore, "priorityQueue"));
}
Note
@Bean
public IntegrationFlow priorityFlow(PriorityCapableChannelMessageStore mongoDbChannelMessageStore) {
return IntegrationFlows.from((Channels c) ->
c.priority("priorityChannel", mongoDbChannelMessageStore, "priorityGroup"))
....
.get();
}
Another option for customizing the QueueChannel environment is provided by the ref attribute of
the <int:queue> sub-element (or the corresponding constructor argument). This attribute references any
java.util.Queue implementation, for example a Hazelcast distributed IQueue:
@Bean
public HazelcastInstance hazelcastInstance() {
return Hazelcast.newHazelcastInstance(new Config()
.setProperty("hazelcast.logging.type", "log4j"));
}
@Bean
public PollableChannel distributedQueue() {
return new QueueChannel(hazelcastInstance()
.getQueue("springIntegrationQueue"));
}
PublishSubscribeChannel Configuration
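To create a PublishSubscribeChannel, use the <publish-subscribe-channel/> element; you can also provide a task-executor for publishing the Messages (the executor bean name below is a placeholder):

```xml
<int:publish-subscribe-channel id="pubsubChannel" task-executor="someExecutor"/>
```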
Note
The apply-sequence value is false by default so that a Publish Subscribe Channel can
send the exact same Message instances to multiple outbound channels. Since Spring Integration
enforces immutability of the payload and header references, the channel creates new Message
instances with the same payload reference but different header values when the flag is set to
true.
ExecutorChannel
<int:channel id="executorChannel">
<int:dispatcher task-executor="someExecutor"/>
</int:channel>
Note
The load-balancer and failover options are also both available on the <dispatcher/> sub-
element as described above in the section called “DirectChannel Configuration”. The same
defaults apply as well. So, the channel will have a round-robin load-balancing strategy with failover
enabled unless explicit configuration is provided for one or both of those attributes.
<int:channel id="executorChannelWithoutFailover">
<int:dispatcher task-executor="someExecutor" failover="false"/>
</int:channel>
PriorityChannel Configuration
<int:channel id="priorityChannel">
<int:priority-queue capacity="20"/>
</int:channel>
By default, the channel will consult the priority header of the message. However, a custom
Comparator reference may be provided instead. Also, note that the PriorityChannel (like the other
types) does support the datatype attribute. As with the QueueChannel, it also supports a capacity
attribute. The following example demonstrates all of these:
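A sketch combining these options (the comparator bean name and datatype value are placeholders):

```xml
<int:channel id="priorityChannel" datatype="example.Widget">
    <int:priority-queue comparator="widgetComparator" capacity="10"/>
</int:channel>
```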
Since version 4.0, the priority-channel child element supports the message-store option
(comparator and capacity are not allowed in that case); the message store must
be a PriorityCapableChannelMessageStore. Implementations of the
PriorityCapableChannelMessageStore are currently provided for Redis, JDBC and MongoDB.
See the section called “QueueChannel Configuration” and Section 10.4, “Message Store” for more
information. You can find sample configuration in the section called “Backing Message Channels”.
RendezvousChannel Configuration
<int:channel id="rendezvousChannel">
<int:rendezvous-queue/>
</int:channel>
Message channels may also have interceptors as described in the section called “Channel Interceptors”.
The <interceptors/> sub-element can be added within a <channel/> (or the more specific
element types). Provide the ref attribute to reference any Spring-managed object that implements the
ChannelInterceptor interface:
<int:channel id="exampleChannel">
<int:interceptors>
<ref bean="trafficMonitoringInterceptor"/>
</int:interceptors>
</int:channel>
In general, it is a good idea to define the interceptor implementations in a separate location since they
usually provide common behavior that can be reused across multiple channels.
Channel Interceptors provide a clean and concise way of applying cross-cutting behavior per individual
channel. If the same behavior should be applied on multiple channels, configuring the same set of
interceptors for each channel would not be the most efficient way. To avoid repeated configuration while
also enabling interceptors to apply to multiple channels, Spring Integration provides Global Interceptors.
A global interceptor is defined with a top-level <int:channel-interceptor> element, either referencing a Spring-managed interceptor via the ref attribute or containing an inner bean definition.
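For example, a definition consistent with the pattern description that follows might look like this (the interceptor class name is a placeholder):

```xml
<int:channel-interceptor pattern="input*, bar*, foo, !baz*" order="3">
    <bean class="example.SampleInterceptor"/>
</int:channel-interceptor>
```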
Each <channel-interceptor/> element allows you to define a global interceptor, which will be
applied on all channels that match any patterns defined via the pattern attribute. In the above case,
the global interceptor will be applied on the foo channel and all other channels that begin with bar or
input, but not to channels starting with baz (starting with version 5.0).
Warning
The addition of this syntax to the pattern causes one possible (although perhaps unlikely) problem.
If you have a bean named "!foo" and you include a pattern "!foo" in your channel interceptor's
patterns, it will no longer match; the pattern will now match all beans not named foo.
In this case, you can escape the ! in the pattern with \. The pattern "\!foo" means match a
bean named "!foo".
The order attribute allows you to manage where this interceptor will be injected if there are multiple
interceptors on a given channel. For example, channel inputChannel could have individual interceptors
configured locally (see below):
<int:channel id="inputChannel">
<int:interceptors>
<int:wire-tap channel="logger"/>
</int:interceptors>
</int:channel>
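A matching global interceptor with a positive order might be sketched as follows (the class name is a placeholder):

```xml
<int:channel-interceptor pattern="inputChannel" order="1">
    <bean class="example.SampleInterceptor"/>
</int:channel-interceptor>
```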
A reasonable question is how will a global interceptor be injected in relation to other interceptors
configured locally or through other global interceptor definitions? The current implementation provides a
very simple mechanism for defining the order of interceptor execution. A positive number in the order
attribute will ensure interceptor injection after any existing interceptors and a negative number will ensure
that the interceptor is injected before existing interceptors. This means that in the above example, the
global interceptor will be injected AFTER (since its order is greater than 0) the wire-tap interceptor
configured locally. If there were another global interceptor with a matching pattern, its order would be
determined by comparing the values of the order attribute. To inject a global interceptor BEFORE the
existing interceptors, use a negative value for the order attribute.
Note
Note that both the order and pattern attributes are optional. The default value for order will
be 0 and for pattern, the default is * (to match all channels).
Wire Tap
As mentioned above, Spring Integration provides a simple Wire Tap interceptor out of the box. You can
configure a Wire Tap on any channel within an <interceptors/> element. This is especially useful for
debugging, and can be used in conjunction with Spring Integration’s logging Channel Adapter as follows:
<int:channel id="in">
<int:interceptors>
<int:wire-tap channel="logger"/>
</int:interceptors>
</int:channel>
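The "logger" channel referenced above can be serviced by a logging Channel Adapter, for example:

```xml
<int:logging-channel-adapter id="logger" level="DEBUG"/>
```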
Tip
The logging-channel-adapter also accepts an expression attribute so that you can evaluate a
SpEL expression against payload and/or headers variables. Alternatively, to simply log the full
Message toString() result, provide a value of "true" for the log-full-message attribute. That is
false by default so that only the payload is logged. Setting that to true enables logging of
all headers in addition to the payload. The expression option does provide the most flexibility,
however (e.g. expression="payload.user.name").
One of the common misconceptions about the wire tap and other similar components (Section B.1,
“Message Publishing Configuration”) is that they are automatically asynchronous in nature. The Wire Tap as
a component is not invoked asynchronously by default. Instead, Spring Integration focuses on a single
unified approach to configuring asynchronous behavior: the Message Channel. What makes certain
parts of the message flow sync or async is the type of Message Channel that has been configured within
that flow. That is one of the primary benefits of the Message Channel abstraction. From the inception
of the framework, we have always emphasized the need and the value of the Message Channel as
a first-class citizen of the framework. It is not just an internal, implicit realization of the EIP pattern, it
is fully exposed as a configurable component to the end user. So, the Wire Tap component is ONLY
responsible for performing the following three tasks:
• intercept the message flow by being injected into another channel (e.g., channelA)
• grab each message
• send the message to another channel (e.g., channelB)
It is essentially a variation of the Bridge, but it is encapsulated within a channel definition (and hence
easier to enable and disable without disrupting a flow). Also, unlike the bridge, it basically forks another
message flow. Is that flow synchronous or asynchronous? The answer simply depends on the type of
Message Channel that channelB is. And, now you know that we have: Direct Channel, Pollable Channel,
and Executor Channel as options. The last two do break the thread boundary making communication
via such channels asynchronous simply because the dispatching of the message from that channel
to its subscribed handlers happens on a different thread than the one used to send the message to
that channel. That is what is going to make your wire-tap flow sync or async. It is consistent with other
components within the framework (e.g., Message Publisher) and actually brings a level of consistency
and simplicity by sparing you from worrying in advance (other than writing thread safe code) whether
a particular piece of code should be implemented as sync or async. The actual wiring of two pieces of
code (component A and component B) via Message Channel is what makes their collaboration sync or
async. You may even want to change from sync to async in the future and Message Channel is what’s
going to allow you to do it swiftly without ever touching the code.
One final point regarding the Wire Tap is that, despite the rationale provided above for not being async
by default, one should keep in mind it is usually desirable to hand off the Message as soon as possible.
Therefore, it would be quite common to use an asynchronous channel option as the wire-tap’s outbound
channel. Nonetheless, another reason that we do not enforce asynchronous behavior by default is that
you might not want to break a transactional boundary. Perhaps you are using the Wire Tap for auditing
purposes, and you DO want the audit Messages to be sent within the original transaction. As an example,
you might connect the wire-tap to a JMS outbound-channel-adapter. That way, you get the best of both
worlds: 1) the sending of a JMS Message can occur within the transaction while 2) it is still a "fire-and-
forget" action thereby preventing any noticeable delay in the main message flow.
Important
Starting with version 4.0, it is important to avoid circular references when an interceptor (such
as WireTap) references a channel itself. You need to exclude such channels from those
being intercepted by the current interceptor. This can be done with appropriate patterns or
programmatically. If you have a custom ChannelInterceptor that references a channel,
consider implementing VetoCapableInterceptor. That way, the framework will ask the
interceptor if it’s OK to intercept each channel that is a candidate based on the pattern. You can
also add runtime protection in the interceptor methods that ensures that the channel is not one
that is referenced by the interceptor. The WireTap uses both of these techniques.
Starting with version 4.3, the WireTap has additional constructors that take a channelName instead
of a MessageChannel instance. This can be convenient for Java Configuration and when channel
auto-creation logic is being used. The target MessageChannel bean is resolved from the provided
channelName later, on the first interaction with the interceptor.
Important
Channel resolution requires a BeanFactory so the wire tap instance must be a Spring-managed
bean.
This late-binding approach also allows simplification of typical wire-tapping patterns with Java DSL
configuration:
@Bean
public PollableChannel myChannel() {
return MessageChannels.queue()
.wireTap("loggingFlow.input")
.get();
}
@Bean
public IntegrationFlow loggingFlow() {
return f -> f.log();
}
Wire taps can be made conditional using the selector or selector-expression attributes.
The selector references a MessageSelector bean, which can determine at runtime whether the
message should go to the tap channel. Similarly, the selector-expression is a boolean SpEL expression
that serves the same purpose: if the expression evaluates to true, the message will be sent to the
tap channel.
It is possible to configure a global wire tap as a special case of the section called “Global Channel
Interceptor Configuration”. Simply configure a top-level wire-tap element. Now, in addition to the
normal wire-tap namespace support, the pattern and order attributes are supported and work in
exactly the same way as with the channel-interceptor.
Tip
A global wire tap provides a convenient way to configure a single-channel wire tap externally
without modifying the existing channel configuration. Simply set the pattern attribute to the
target channel name. For example, this technique can be used to configure a test case to verify
messages on a channel.
Special Channels
If namespace support is enabled, there are two special channels defined within the application context
by default: errorChannel and nullChannel. The nullChannel acts like /dev/null, simply logging
any Message sent to it at DEBUG level and returning immediately. Any time you face channel
resolution errors for a reply that you don’t care about, you can set the affected component’s output-
channel attribute to nullChannel (the name nullChannel is reserved within the application context).
The errorChannel is used internally for sending error messages and may be overridden with a custom
configuration. This is discussed in greater detail in Section E.4, “Error Handling”.
See also Section 9.4, “Message Channels” in the Java DSL chapter for more information about message
channels and interceptors.
4.2 Poller
Polling Consumer
When Message Endpoints (Channel Adapters) are connected to channels and instantiated, they
produce one of the following two instances:
• PollingConsumer
• EventDrivenConsumer
The actual implementation depends on which type of channel these Endpoints are
connected to. A channel adapter connected to a channel that implements the
org.springframework.messaging.SubscribableChannel interface will produce an instance of
EventDrivenConsumer. On the other hand, a channel adapter connected to a channel that
implements the org.springframework.messaging.PollableChannel interface (e.g. a QueueChannel) will
produce an instance of PollingConsumer.
Polling Consumers allow Spring Integration components to actively poll for Messages, rather than to
process Messages in an event-driven manner.
They represent a critical cross cutting concern in many messaging scenarios. In Spring Integration,
Polling Consumers are based on the pattern with the same name, which is described in the book
"Enterprise Integration Patterns" by Gregor Hohpe and Bobby Woolf. You can find a description of the
pattern on the book’s website at:
http://www.enterpriseintegrationpatterns.com/PollingConsumer.html
Furthermore, in Spring Integration a second variation of the Polling Consumer pattern exists.
When Inbound Channel Adapters are being used, these adapters are often wrapped by a
SourcePollingChannelAdapter. Therefore, two implementations of the pattern exist:
• PollingConsumer
• SourcePollingChannelAdapter
This means, Pollers are used in both inbound and outbound messaging scenarios. Here are some use-
cases that illustrate the scenarios in which Pollers are used:
• Polling certain external systems such as FTP Servers, Databases, Web Services
Note
This chapter is meant to only give a high-level overview regarding Polling Consumers and how they
fit into the concept of message channels - Section 4.1, “Message Channels” and channel adapters
- Section 4.3, “Channel Adapter”. For more in-depth information regarding Messaging Endpoints in
general and Polling Consumers in particular, please see Section 8.1, “Message Endpoints”.
Background
Advice objects, in an advice-chain on a poller, advise the whole polling task (message retrieval
and processing). These "around advice" methods do not have access to any context for the poll, just
the poll itself. This is fine for requirements such as making a task transactional, or skipping a poll due
to some external condition as discussed above. What if we wish to take some action depending on the
result of the receive part of the poll, or if we want to adjust the poller depending on conditions?
"Smart" Polling
Version 4.2 introduced the AbstractMessageSourceAdvice. Any Advice objects in the advice-
chain that subclass this class are applied only to the receive operation. Such classes implement the
following methods:
beforeReceive(MessageSource<?> source)
This method is called before the MessageSource.receive() method. It enables you to examine
and/or reconfigure the source at this time. Returning false cancels this poll (similar to the
PollSkipAdvice mentioned above).
afterReceive(Message<?> result, MessageSource<?> source)
This method is called after the receive() method; again, you can reconfigure the source, or take any
action, perhaps depending on the result (which can be null if there was no message created by the
source). You can even return a different message!
It is important to understand how the advice chain is processed during initialization. Advice
objects that do not extend AbstractMessageSourceAdvice are applied to the whole poll
process and are all invoked first, in order, before any AbstractMessageSourceAdvice; then
AbstractMessageSourceAdvice objects are invoked in order around the MessageSource
receive() method. If you have, say Advice objects a, b, c, d, where b and d are
AbstractMessageSourceAdvice, they will be applied in the order a, c, b, d. Also, if a
MessageSource is already a Proxy, the AbstractMessageSourceAdvice will be invoked
after any existing Advice objects. If you wish to change the order, you should wire up the proxy
yourself.
SimpleActiveIdleMessageSourceAdvice
This advice modifies the trigger based on the receive() result. This will only work if the advice
is called on the poller thread. It will not work if the poller has a task-executor. To use this
advice where you wish to use async operations after the result of a poll, do the async handoff
later, perhaps by using an ExecutorChannel.
CompoundTriggerAdvice
This advice allows the selection of one of two triggers based on whether a poll returns a message or
not. Consider a poller that uses a CronTrigger; CronTrigger instances are immutable, so they cannot be altered
once constructed. Consider a use case where we want to use a cron expression to trigger a poll once
each hour but, if no message is received, poll once per minute and, when a message is retrieved, revert
to using the cron expression.
The advice (and poller) use a CompoundTrigger for this purpose. The trigger’s primary trigger can be
a CronTrigger. When the advice detects that no message is received, it adds the secondary trigger to
the CompoundTrigger. When the CompoundTrigger's nextExecutionTime method is invoked,
it will delegate to the secondary trigger, if present; otherwise to the primary trigger.
The following shows the configuration for the hourly cron expression with fall-back to every minute…
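A sketch of such a configuration (the bean names and the adapter's source are placeholders):

```xml
<int:inbound-channel-adapter channel="someChannel" ref="someSource" method="get"
        auto-startup="false">
    <int:poller trigger="compoundTrigger">
        <int:advice-chain>
            <bean class="org.springframework.integration.aop.CompoundTriggerAdvice">
                <constructor-arg ref="compoundTrigger"/>
                <constructor-arg ref="secondary"/>
            </bean>
        </int:advice-chain>
    </int:poller>
</int:inbound-channel-adapter>

<bean id="compoundTrigger" class="org.springframework.integration.util.CompoundTrigger">
    <constructor-arg ref="primary"/>
</bean>

<!-- top of every hour -->
<bean id="primary" class="org.springframework.scheduling.support.CronTrigger">
    <constructor-arg value="0 0 * * * *"/>
</bean>

<!-- every minute -->
<bean id="secondary" class="org.springframework.scheduling.support.PeriodicTrigger">
    <constructor-arg value="60000"/>
</bean>
```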
This advice modifies the trigger based on the receive() result. This will only work if the advice
is called on the poller thread. It will not work if the poller has a task-executor. To use this
advice where you wish to use async operations after the result of a poll, do the async handoff
later, perhaps by using an ExecutorChannel.
An "inbound-channel-adapter" element can invoke any method on a Spring-managed Object and send
a non-null return value to a MessageChannel after converting it to a Message. When the adapter’s
subscription is activated, a poller will attempt to receive messages from the source. The poller will be
scheduled with the TaskScheduler according to the provided configuration. To configure the polling
interval or cron expression for an individual channel-adapter, provide a poller element with one of the
scheduling attributes, such as fixed-rate or cron.
Also see the section called “Channel Adapter Expressions and Scripts”.
Note
If no poller is provided, then a single default poller must be registered within the context. See the
section called “Endpoint Namespace Support” for more detail.
For example:
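Consider these two sketches (the ids, refs, methods and channel names are placeholders):

```xml
<int:inbound-channel-adapter id="source1" ref="pojo" method="method1" channel="channel1">
    <int:poller fixed-rate="1000"/>
</int:inbound-channel-adapter>

<int:inbound-channel-adapter id="source2" ref="pojo" method="method2" channel="channel2">
    <int:poller fixed-rate="1000" max-messages-per-poll="10"/>
</int:inbound-channel-adapter>
```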
In the first configuration, the polling task will be invoked once per poll and during such task
(poll) the method (which results in the production of the Message) will be invoked once, based on
the max-messages-per-poll attribute value. In the second configuration, the polling task will
be invoked 10 times per poll or until it returns null, thus possibly producing 10 Messages per poll
while each poll happens at 1-second intervals. However, what if the configuration looks like this:
<int:poller fixed-rate="1000"/>
Note there is no max-messages-per-poll specified. As you'll learn later, the identical poller
configuration in the PollingConsumer (e.g., service-activator, filter, router etc.) would have a
default value of -1 for max-messages-per-poll, which means "execute the polling task non-stop
unless the polling method returns null (e.g., no more Messages in the QueueChannel)" and then sleep
for 1 second.
However, in the SourcePollingChannelAdapter it is a bit different. The default value for max-
messages-per-poll is 1, unless you explicitly set it to a negative value
(e.g., -1). This makes sure that the poller can react to LifeCycle events (e.g., start/stop) and
prevents it from potentially spinning in an infinite loop if the implementation of the custom method
of the MessageSource has the potential to never return null and happens to be non-interruptible.
However, if you are sure that your method can return null and you need the behavior where you
want to poll for as many sources as available per each poll, then you should explicitly set max-
messages-per-poll to a negative value.
Using a "ref" attribute is generally recommended if the POJO consumer implementation can be reused
in other <outbound-channel-adapter> definitions. However, if the consumer implementation is only
referenced by a single definition of the <outbound-channel-adapter>, you can define it as an inner
bean:
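For example (the handler class name is a placeholder; this assumes the beans prefix is mapped to the Spring beans namespace):

```xml
<int:outbound-channel-adapter channel="exampleChannel" method="handle">
    <beans:bean class="example.Handler"/>
</int:outbound-channel-adapter>
```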
Note
Using both the "ref" attribute and an inner handler definition in the same <outbound-channel-
adapter> configuration is not allowed as it creates an ambiguous condition. Such a configuration
will result in an Exception being thrown.
Any Channel Adapter can be created without a "channel" reference in which case it will implicitly
create an instance of DirectChannel. The created channel’s name will match the "id" attribute of
the <inbound-channel-adapter> or <outbound-channel-adapter> element. Therefore, if the
"channel" is not provided, the "id" is required.
Introduction
A Messaging Bridge is a relatively trivial endpoint that simply connects two Message Channels
or Channel Adapters. For example, you may want to connect a PollableChannel to a
SubscribableChannel so that the subscribing endpoints do not have to worry about any polling
configuration. Instead, the Messaging Bridge provides the polling configuration.
By providing an intermediary poller between two channels, a Messaging Bridge can be used to throttle
inbound Messages. The poller’s trigger will determine the rate at which messages arrive on the second
channel, and the poller’s "maxMessagesPerPoll" property will enforce a limit on the throughput.
Another valid use for a Messaging Bridge is to connect two different systems. In such a scenario, Spring
Integration’s role would be limited to making the connection between these systems and managing a
poller if necessary. It is probably more common to have at least a Transformer between the two systems
to translate between their formats, and in that case, the channels would be provided as the input-channel
and output-channel of a Transformer endpoint. If data format translation is not required, the Messaging
Bridge may indeed be sufficient.
The <bridge> element is used to create a Messaging Bridge between two Message Channels or Channel
Adapters. Simply provide the "input-channel" and "output-channel" attributes:
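For example:

```xml
<int:bridge input-channel="input" output-channel="output"/>
```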
As mentioned above, a common use case for the Messaging Bridge is to connect a PollableChannel
to a SubscribableChannel, and when performing this role, the Messaging Bridge may also serve
as a throttler:
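A sketch of such a throttling bridge (the channel names are placeholders):

```xml
<int:bridge input-channel="pollable" output-channel="subscribable">
    <int:poller max-messages-per-poll="10" fixed-rate="5000"/>
</int:bridge>
```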
Connecting Channel Adapters is just as easy. Here is a simple echo example between the "stdin" and
"stdout" adapters from Spring Integration’s "stream" namespace.
<int-stream:stdin-channel-adapter id="stdin"/>
<int-stream:stdout-channel-adapter id="stdout"/>
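The bridge connecting the two adapters' channels can then be declared as:

```xml
<int:bridge input-channel="stdin" output-channel="stdout"/>
```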
Of course, the configuration would be similar for other (potentially more useful) Channel Adapter bridges,
such as File to JMS, or Mail to File. The various Channel Adapters will be discussed in upcoming
chapters.
Note
If no output-channel is defined on a bridge, the reply channel provided by the inbound Message
will be used, if available. If neither an output channel nor a reply channel is available, an Exception will be thrown.
@Bean
@BridgeFrom(value = "polled", poller = @Poller(fixedDelay = "5000", maxMessagesPerPoll = "10"))
public SubscribableChannel direct() {
return new DirectChannel();
}
or
@Bean
@BridgeTo(value = "direct", poller = @Poller(fixedDelay = "5000", maxMessagesPerPoll = "10"))
public PollableChannel polled() {
return new QueueChannel();
}
@Bean
public SubscribableChannel direct() {
return new DirectChannel();
}
@Bean
@ServiceActivator(inputChannel = "polled",
poller = @Poller(fixedRate = "5000", maxMessagesPerPoll = "10"))
public BridgeHandler bridge() {
BridgeHandler bridge = new BridgeHandler();
bridge.setOutputChannelName("direct");
return bridge;
}
5. Message Construction
5.1 Message
The Spring Integration Message is a generic container for data. Any object can be provided as the
payload, and each Message also includes headers containing user-extensible properties as key-value
pairs.
public interface Message<T> {

    T getPayload();

    MessageHeaders getHeaders();

}
The Message is obviously a very important part of the API. By encapsulating the data in a generic
wrapper, the messaging system can pass it around without any knowledge of the data’s type. As an
application evolves to support new types, or when the types themselves are modified and/or extended,
the messaging system will not be affected by such changes. On the other hand, when some component
in the messaging system does require access to information about the Message, such metadata can
typically be stored to and retrieved from the metadata in the Message Headers.
Message Headers
Just as Spring Integration allows any Object to be used as the payload of a Message, it also supports
any Object types as header values. In fact, the MessageHeaders class implements the java.util.Map
interface:
Note
As an implementation of Map, the headers can obviously be retrieved by calling get(..) with the name
of the header. Alternatively, you can provide the expected Class as an additional parameter. Even better,
when retrieving one of the pre-defined values, convenient getters are available. Here is an example of
each of these three options:
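For instance, assuming a message carrying a hypothetical user-defined customerId header, the three retrieval styles look like this:

```java
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageHeaders;

public class HeaderAccessExample {

    public static void main(String[] args) {
        Message<String> message = MessageBuilder.withPayload("test")
                .setHeader("customerId", 123L) // hypothetical user-defined header
                .build();
        MessageHeaders headers = message.getHeaders();

        Object raw = headers.get("customerId");                  // 1) plain Map-style access
        Long customerId = headers.get("customerId", Long.class); // 2) typed access
        Long timestamp = headers.getTimestamp();                 // 3) pre-defined getter

        System.out.println(raw + " / " + customerId + " / " + timestamp);
    }
}
```

The typed variant throws an IllegalArgumentException if the header value is not of the expected type.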
MessageHeaders.TIMESTAMP (java.lang.Long): The time the message was created. Changes each
time a message is mutated.
Many inbound and outbound adapter implementations will also provide and/or expect certain headers,
and additional user-defined headers can also be configured. Constants for these headers can be found
in those modules where such headers exist, for example AmqpHeaders, JmsHeaders etc.
MessageHeaderAccessor API
Starting with Spring Framework 4.0 and Spring Integration 4.0, the core Messaging abstraction
has been moved to the spring-messaging module and the new MessageHeaderAccessor
API has been introduced to provide additional abstraction over Messaging implementations.
All (core) Spring Integration specific Message Headers constants are now declared in the
IntegrationMessageHeaderAccessor class:
Convenient typed getters for some of these headers are provided on the
IntegrationMessageHeaderAccessor class:
The following headers also appear in the IntegrationMessageHeaderAccessor but are generally
not used by user code; their inclusion here is for completeness:
Message ID Generation
When a message transitions through an application, each time it is mutated (e.g. by a transformer) a new
message id is assigned. The message id is a UUID. Beginning with Spring Integration 3.0, the default
strategy used for id generation is more efficient than the previous java.util.UUID.randomUUID()
implementation. It uses simple random numbers based on a secure random seed, instead of creating
a secure random number each time.
A different UUID generation strategy can be selected by declaring a bean that implements
org.springframework.util.IdGenerator in the application context.
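For example, a sketch of such a bean (the configuration class name is arbitrary) that reverts to the JDK's secure random UUIDs:

```java
import java.util.UUID;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.IdGenerator;

@Configuration
public class IdGenerationConfig {

    @Bean
    public IdGenerator idGenerator() {
        // IdGenerator has a single generateId() method, so a method reference
        // works; this trades performance for a cryptographically strong id
        // on every message
        return UUID::randomUUID;
    }
}
```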
Important
Only one UUID generation strategy can be used in a classloader. This means that if two or more
application contexts are running in the same classloader, they will share the same strategy. If one
of the contexts changes the strategy, it will be used by all contexts. If two or more contexts in the
same classloader declare a bean of type org.springframework.util.IdGenerator, they
must all be an instance of the same class, otherwise the context attempting to replace a custom
strategy will fail to initialize. If the strategy is the same, but parameterized, the strategy in the first
context to initialize will be used.
Read-only Headers
The MessageHeaders.ID and MessageHeaders.TIMESTAMP headers are read-only. When you try
to set them while building a new message using MessageBuilder, they are ignored and an INFO
message is emitted to the logs.
Starting with version 5.0, the Messaging Gateway, Header Enricher, Content Enricher and
Header Filter do not allow the MessageHeaders.ID and MessageHeaders.TIMESTAMP header
names to be configured when the DefaultMessageBuilderFactory is used; they throw a
BeanInitializationException instead.
Header Propagation
When messages are processed (and modified) by message-producing endpoints (such as a service
activator), in general, inbound headers are propagated to the outbound message. One exception to this
is a transformer, when a complete message is returned to the framework; in that case, the user code is
responsible for the entire outbound message. When a transformer just returns the payload, the inbound
headers are propagated. Also, a header is only propagated if it does not already exist in the outbound
message, allowing user code to change header values as needed.
Starting with version 4.3.10, you can configure message handlers (that modify messages and produce
output) to suppress the propagation of specific headers. Call the setNotPropagatedHeaders()
or addNotPropagatedHeaders() methods on the AbstractMessageProducingHandler abstract
class, to configure the header(s) you don’t want to be copied.
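As a sketch (the class and the auditId header are hypothetical), a reply-producing handler that suppresses propagation of one header:

```java
import org.springframework.integration.handler.AbstractReplyProducingMessageHandler;
import org.springframework.messaging.Message;

public class UpperCaseHandler extends AbstractReplyProducingMessageHandler {

    public UpperCaseHandler() {
        // the inbound "auditId" header will not be copied to the output message
        setNotPropagatedHeaders("auditId");
    }

    @Override
    protected Object handleRequestMessage(Message<?> requestMessage) {
        return requestMessage.getPayload().toString().toUpperCase();
    }
}
```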
You can also globally suppress propagation of specific message headers by setting the
readOnlyHeaders property in META-INF/spring.integration.properties to a comma-
delimited list of headers.
Important
Header propagation suppression does not apply to those endpoints that don’t modify the message,
e.g. bridges and routers.
Message Implementations
The base implementation of the Message interface is GenericMessage<T>, and it provides two
constructors:
When a Message is created, a random unique id will be generated. The constructor that accepts a Map
of headers will copy the provided headers to the newly created Message.
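A minimal sketch of both constructors:

```java
import java.util.Collections;
import java.util.Map;

import org.springframework.messaging.Message;
import org.springframework.messaging.support.GenericMessage;

public class GenericMessageExample {

    public static void main(String[] args) {
        // payload-only constructor
        Message<String> simple = new GenericMessage<>("hello");

        // payload plus a Map of headers; the entries are copied into the new Message
        Map<String, Object> headers = Collections.singletonMap("someKey", (Object) "someValue");
        Message<String> withHeaders = new GenericMessage<>("hello", headers);

        System.out.println(simple.getHeaders().getId());
        System.out.println(withHeaders.getHeaders().get("someKey")); // someValue
    }
}
```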
There is also a convenient implementation of Message designed to communicate error conditions. This
implementation takes a Throwable object as its payload:
Throwable t = message.getPayload();
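For example, using the ErrorMessage implementation from spring-messaging:

```java
import org.springframework.messaging.support.ErrorMessage;

public class ErrorMessageExample {

    public static void main(String[] args) {
        ErrorMessage message = new ErrorMessage(new RuntimeException("something failed"));
        // getPayload() is typed to Throwable, so no cast is needed
        Throwable t = message.getPayload();
        System.out.println(t.getMessage()); // something failed
    }
}
```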
Notice that this implementation takes advantage of the fact that the GenericMessage base class is
parameterized. Therefore, as shown in both examples, no casting is necessary when retrieving the
Message payload Object.
You may notice that the Message interface defines retrieval methods for its payload and headers but
no setters. The reason for this is that a Message cannot be modified after its initial creation. Therefore,
when a Message instance is sent to multiple consumers (e.g. through a Publish Subscribe Channel), if
one of those consumers needs to send a reply with a different payload type, it will need to create a new
Message. As a result, the other consumers are not affected by those changes. Keep in mind, that multiple
consumers may access the same payload instance or header value, and whether such an instance is
itself immutable is a decision left to the developer. In other words, the contract for Messages is similar to
that of an unmodifiable Collection, and the MessageHeaders' map further exemplifies that; even though
the MessageHeaders class implements java.util.Map, any attempt to invoke a put operation (or
remove or clear) on the MessageHeaders will result in an UnsupportedOperationException.
Rather than requiring the creation and population of a Map to pass into the GenericMessage constructor,
Spring Integration does provide a far more convenient way to construct Messages: MessageBuilder.
The MessageBuilder provides two factory methods for creating Messages from either an existing
Message or with a payload Object. When building from an existing Message, the headers and payload
of that Message will be copied to the new Message:
assertEquals("test", message2.getPayload());
assertEquals("bar", message2.getHeaders().get("foo"));
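The assertions above hold for messages built as follows (a sketch consistent with those assertions):

```java
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class FromMessageExample {

    public static void main(String[] args) {
        Message<String> message1 = MessageBuilder.withPayload("test")
                .setHeader("foo", "bar")
                .build();
        // copies both the payload and the headers of message1
        Message<String> message2 = MessageBuilder.fromMessage(message1).build();

        System.out.println(message2.getPayload());            // test
        System.out.println(message2.getHeaders().get("foo")); // bar
    }
}
```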
If you need to create a Message with a new payload but still want to copy the headers from an existing
Message, you can use one of the copy methods.
assertEquals("bar", message3.getHeaders().get("foo"));
assertEquals(123, message4.getHeaders().get("foo"));
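A sketch consistent with the assertions above; message1 is assumed to be a message carrying a foo=bar header:

```java
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class CopyHeadersExample {

    public static void main(String[] args) {
        Message<String> message1 = MessageBuilder.withPayload("test")
                .setHeader("foo", "bar")
                .build();

        // copyHeaders copies all of message1's headers into the new message
        Message<String> message3 = MessageBuilder.withPayload("test3")
                .copyHeaders(message1.getHeaders())
                .build();

        // copyHeadersIfAbsent does not overwrite the explicitly set "foo" header
        Message<String> message4 = MessageBuilder.withPayload("test4")
                .setHeader("foo", 123)
                .copyHeadersIfAbsent(message1.getHeaders())
                .build();

        System.out.println(message3.getHeaders().get("foo")); // bar
        System.out.println(message4.getHeaders().get("foo")); // 123
    }
}
```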
Notice that the copyHeadersIfAbsent does not overwrite existing values. Also, in the second
example above, you can see how to set any user-defined header with setHeader. Finally, there are
set methods available for the predefined headers as well as a non-destructive method for setting any
header (MessageHeaders also defines constants for the pre-defined header names).
assertEquals(5, importantMessage.getHeaders().getPriority());
assertEquals(2, lessImportantMessage.getHeaders().getPriority());
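A sketch consistent with the assertions above; in 5.0 the priority can also be read back through the IntegrationMessageHeaderAccessor:

```java
import org.springframework.integration.IntegrationMessageHeaderAccessor;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class PriorityExample {

    public static void main(String[] args) {
        Message<Integer> importantMessage = MessageBuilder.withPayload(99)
                .setPriority(5)
                .build();
        Message<Integer> lessImportantMessage = MessageBuilder.fromMessage(importantMessage)
                .setPriority(2)
                .build();

        System.out.println(new IntegrationMessageHeaderAccessor(importantMessage).getPriority());     // 5
        System.out.println(new IntegrationMessageHeaderAccessor(lessImportantMessage).getPriority()); // 2
    }
}
```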
The priority header is only considered when using a PriorityChannel (as described in the next
chapter). It is defined as java.lang.Integer.
6. Message Routing
6.1 Routers
Overview
Routers are a crucial element in many messaging architectures. They consume Messages from a
Message Channel and forward each consumed message to one or more different Message Channels
depending on a set of conditions.
• (Generic) Router
Router implementations share many configuration parameters. Yet, certain differences exist between
routers. Furthermore, the availability of configuration parameters depends on whether Routers are used
inside or outside of a chain. In order to provide a quick overview, all available attributes are listed
below.
• apply-sequence
• default-output-channel
• resolution-required
• ignore-send-failures
• timeout
• id
• auto-startup
• input-channel
• order
• method
• ref
• expression
• header-name
• evaluate-as-string
• xpath-expression-ref
• converter
Important
Router parameters have been more standardized across all router implementations with Spring
Integration 2.1. Consequently, there are a few minor changes that leave the possibility of breaking
older Spring Integration based applications.
The following parameters are valid for all routers inside and outside of chains.
apply-sequence
This attribute specifies whether sequence number and size headers should be added to each
Message. This optional attribute defaults to false.
default-output-channel
If set, this attribute provides a reference to the channel where Messages will be sent if channel
resolution fails to return any channels. If no default output channel is provided, the router will throw
an Exception. If you would like to silently drop those messages instead, set the default output
channel attribute to nullChannel.
Note
resolution-required
If true, this attribute specifies that channel names must always be successfully resolved to channel
instances that exist. If set to true, a MessagingException will be raised when a channel
cannot be resolved. Setting this attribute to false causes any unresolvable channels to be ignored.
This optional attribute defaults to true.
Note
ignore-send-failures
If set to true, failures to send to a message channel will be ignored. If set to false, a
MessageDeliveryException will be thrown instead, and if the router resolves more than one
channel, any subsequent channels will not receive the message.
The exact behavior of this attribute depends on the type of the Channel messages are sent to. For
example, when using direct channels (single threaded), send-failures can be caused by exceptions
thrown by components much further down-stream. However, when sending messages to a simple queue
channel (asynchronous), the likelihood of an exception being thrown is rather remote.
Note
While most routers will route to a single channel, they are allowed to return more than one channel
name. The recipient-list-router, for instance, does exactly that. If you set this attribute to
true on a router that only routes to a single channel, any resulting exception is simply swallowed,
which usually makes little sense to do. In that case it would be better to catch the exception in
an error flow at the flow entry point. Therefore, setting the ignore-send-failures attribute to
true usually makes more sense when the router implementation returns more than one channel
name, because the other channel(s) following the one that fails would still receive the Message.
timeout
The timeout attribute specifies the maximum amount of time in milliseconds to wait, when sending
Messages to the target Message Channels. By default the send operation will block indefinitely.
The following parameters are valid only for top-level routers that are outside of chains.
id
Identifies the underlying Spring bean definition which in case of Routers is an instance of
EventDrivenConsumer or PollingConsumer depending on whether the Router’s input-channel is a
SubscribableChannel or PollableChannel, respectively. This is an optional attribute.
auto-startup
This Lifecycle attribute signals whether this component should be started during startup of the
Application Context. This optional attribute defaults to true.
input-channel
The receiving Message channel of this endpoint.
order
This attribute defines the order for invocation when this endpoint is connected as a subscriber to a
channel. This is particularly relevant when that channel is using a failover dispatching strategy. It
has no effect when this endpoint itself is a Polling Consumer for a channel with a queue.
Router Implementations
Since content-based routing often requires some domain-specific logic, most use-cases will require
Spring Integration’s options for delegating to POJOs using the XML namespace support and/or
Annotations. Both of these are discussed below, but first we present a couple of implementations that
are available out-of-the-box, since they fulfill common requirements.
PayloadTypeRouter
A PayloadTypeRouter will send Messages to the channel defined by payload-type mappings:
<bean id="payloadTypeRouter"
class="org.springframework.integration.router.PayloadTypeRouter">
<property name="channelMapping">
<map>
<entry key="java.lang.String" value-ref="stringChannel"/>
<entry key="java.lang.Integer" value-ref="integerChannel"/>
</map>
</property>
</bean>
Configuration of the PayloadTypeRouter is also supported via the namespace provided by Spring
Integration (see Section E.2, “Namespace Support”), which essentially simplifies configuration by
combining the <router/> configuration and its corresponding implementation defined using a <bean/
> element into a single and more concise configuration element. The example below demonstrates
a PayloadTypeRouter configuration which is equivalent to the one above using the namespace
support:
<int:payload-type-router input-channel="routingChannel">
<int:mapping type="java.lang.String" channel="stringChannel" />
<int:mapping type="java.lang.Integer" channel="integerChannel" />
</int:payload-type-router>
@ServiceActivator(inputChannel = "routingChannel")
@Bean
public PayloadTypeRouter router() {
PayloadTypeRouter router = new PayloadTypeRouter();
router.setChannelMapping(String.class.getName(), "stringChannel");
router.setChannelMapping(Integer.class.getName(), "integerChannel");
return router;
}
When using the Java DSL, there are two options; 1) define the router object as above…
@Bean
public IntegrationFlow routerFlow1() {
return IntegrationFlows.from("routingChannel")
.route(router())
.get();
}
Note that the router can be, but doesn’t have to be, a @Bean - the flow will register it if it is not.
@Bean
public IntegrationFlow routerFlow2() {
return IntegrationFlows.from("routingChannel")
.<Object, Class<?>>route(Object::getClass, m -> m
.channelMapping(String.class, "stringChannel")
.channelMapping(Integer.class, "integerChannel"))
.get();
}
HeaderValueRouter
A HeaderValueRouter will send Messages to the channel based on the individual header value
mappings. When a HeaderValueRouter is created it is initialized with the name of the header to be
evaluated. The value of the header could be one of two things:
1. Arbitrary value
2. Channel name
If arbitrary, then additional mappings of these header values to channel names are required; otherwise
no additional configuration is needed.
During the resolution process this router may encounter channel resolution failures, causing an
exception. If you want to suppress such exceptions and send unresolved messages to the default output
channel (identified with the default-output-channel attribute) set resolution-required to
false.
Normally, messages for which the header value is not explicitly mapped to a channel will be sent to
the default-output-channel. However, in cases where the header value is mapped to a channel
name but the channel cannot be resolved, setting the resolution-required attribute to false will
result in routing such messages to the default-output-channel.
Important
With Spring Integration 2.1 the attribute was changed from ignore-channel-name-
resolution-failures to resolution-required. The resolution-required attribute
defaults to true.
@ServiceActivator(inputChannel = "routingChannel")
@Bean
public HeaderValueRouter router() {
HeaderValueRouter router = new HeaderValueRouter("testHeader");
router.setChannelMapping("someHeaderValue", "channelA");
router.setChannelMapping("someOtherHeaderValue", "channelB");
return router;
}
When using the Java DSL, there are two options; 1) define the router object as above…
@Bean
public IntegrationFlow routerFlow1() {
return IntegrationFlows.from("routingChannel")
.route(router())
.get();
}
Note that the router can be, but doesn’t have to be, a @Bean - the flow will register it if it is not.
@Bean
public IntegrationFlow routerFlow2() {
return IntegrationFlows.from("routingChannel")
.<Message<?>, String>route(m -> m.getHeaders().get("testHeader", String.class), m -> m
.channelMapping("someHeaderValue", "channelA")
.channelMapping("someOtherHeaderValue", "channelB"),
e -> e.id("headerValueRouter"))
.get();
}
2. Configuration where mapping of header values to channel names is not required since header values
themselves represent channel names
Note
Since Spring Integration 2.1 the behavior of resolving channels is more explicit. For example,
if you omit the default-output-channel attribute and the Router was unable to resolve
at least one valid channel, and any channel name resolution failures were ignored by setting
resolution-required to false, then a MessageDeliveryException is thrown.
Basically, by default the Router must be able to route messages successfully to at least one
channel. If you really want to drop messages, you must also have default-output-channel
set to nullChannel.
RecipientListRouter
A RecipientListRouter will send each received Message to a statically defined list of Message
Channels:
<bean id="recipientListRouter"
class="org.springframework.integration.router.RecipientListRouter">
<property name="channels">
<list>
<ref bean="channel1"/>
<ref bean="channel2"/>
<ref bean="channel3"/>
</list>
</property>
</bean>
Spring Integration also provides namespace support for the RecipientListRouter configuration
(see Section E.2, “Namespace Support”) as the example below demonstrates.
@ServiceActivator(inputChannel = "routingChannel")
@Bean
public RecipientListRouter router() {
RecipientListRouter router = new RecipientListRouter();
router.setSendTimeout(1_234L);
router.setIgnoreSendFailures(true);
router.setApplySequence(true);
router.addRecipient("channel1");
router.addRecipient("channel2");
router.addRecipient("channel3");
return router;
}
@Bean
public IntegrationFlow routerFlow() {
return IntegrationFlows.from("routingChannel")
.routeToRecipients(r -> r
.applySequence(true)
.ignoreSendFailures(true)
.recipient("channel1")
.recipient("channel2")
.recipient("channel3")
.sendTimeout(1_234L))
.get();
}
Note
The apply-sequence flag here has the same effect as it does for a publish-subscribe-channel, and
like a publish-subscribe-channel, it is disabled by default on the recipient-list-router. Refer to the
section called “PublishSubscribeChannel Configuration” for more information.
In the above configuration a SpEL expression identified by the selector-expression attribute will be
evaluated to determine if this recipient should be included in the recipient list for a given input Message.
The evaluation result of the expression must be a boolean. If this attribute is not defined, the channel
will always be among the list of recipients.
RecipientListRouterManagement
Starting with version 4.1, the RecipientListRouter provides several operations to manipulate
recipients dynamically at runtime. These management operations are exposed via the
RecipientListRouterManagement @ManagedResource. They are available using Section 10.6,
“Control Bus” as well as via JMX:
<control-bus input-channel="controlBus"/>
<channel id="channel2"/>
messagingTemplate.convertAndSend(controlBus, "@'simpleRouter.handler'.addRecipient('channel2')");
On application start up, the simpleRouter has only one recipient: channel1. But after the
addRecipient command above, the new channel2 recipient is added. This is a "registering an
interest in something that is part of the Message" use case: we may be interested in messages
from the router during some time period, so we subscribe to the recipient-list-router and
at some point decide to unsubscribe.
Given these runtime management operations, the <recipient-list-router> can be configured
without any <recipient> from the start. In that case, when there is no matching recipient for a
message, the behaviour of the RecipientListRouter is the same: if a defaultOutputChannel is
configured, the message will be sent there; otherwise a MessageDeliveryException is thrown.
XPath Router
The XPath Router is part of the XML Module. See Section 37.6, “Routing XML Messages Using XPath”.
Note
Since version 4.3 the ErrorMessageExceptionTypeRouter loads all mapping classes during
the initialization phase to fail-fast for a ClassNotFoundException.
<int:exception-type-router input-channel="inputChannel"
default-output-channel="defaultChannel">
<int:mapping exception-type="java.lang.IllegalArgumentException"
channel="illegalChannel"/>
<int:mapping exception-type="java.lang.NullPointerException"
channel="npeChannel"/>
</int:exception-type-router>
The "router" element provides a simple way to connect a router to an input channel and also accepts
the optional default-output-channel attribute. The ref attribute references the bean name of a
custom Router implementation (extending AbstractMessageRouter):
Alternatively, ref may point to a simple POJO that contains the @Router annotation (see below), or the
ref may be combined with an explicit method name. Specifying a method applies the same behavior
described in the @Router annotation section below.
Using a ref attribute is generally recommended if the custom router implementation is referenced in
other <router> definitions. However if the custom router implementation should be scoped to a single
definition of the <router>, you may provide an inner bean definition:
Note
Using both the ref attribute and an inner handler definition in the same <router> configuration
is not allowed, as it creates an ambiguous condition, and an Exception will be thrown.
Important
@Bean
@Router(inputChannel = "routingChannel")
public AbstractMessageRouter myCustomRouter() {
return new AbstractMessageRouter() {
@Override
protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
return // determine channel(s) for message
}
};
}
@Bean
public IntegrationFlow routerFlow() {
return IntegrationFlows.from("routingChannel")
.route(myCustomRouter())
.get();
}
@Bean
public IntegrationFlow routerFlow() {
return IntegrationFlows.from("routingChannel")
.route(new AbstractMessageRouter() {
@Override
protected Collection<MessageChannel> determineTargetChannels(Message<?> message) {
return // determine channel(s) for message
}
})
.get();
}
@Bean
public IntegrationFlow routerFlow() {
return IntegrationFlows.from("routingChannel")
.route(String.class, p -> p.contains("foo") ? "fooChannel" : "barChannel")
.get();
}
Sometimes the routing logic may be simple and writing a separate class for it and configuring it as a
bean may seem like overkill. As of Spring Integration 2.0 we offer an alternative where you can now use
SpEL to implement simple computations that previously required a custom POJO router.
Note
For more information about the Spring Expression Language, please refer to the respective
chapter in the Spring Framework Reference Documentation at:
@Router(inputChannel = "routingChannel")
@Bean
public ExpressionEvaluatingRouter router() {
ExpressionEvaluatingRouter router = new ExpressionEvaluatingRouter("payload.paymentType");
router.setChannelMapping("CASH", "cashPaymentChannel");
router.setChannelMapping("CREDIT", "authorizePaymentChannel");
router.setChannelMapping("DEBIT", "authorizePaymentChannel");
return router;
}
@Bean
public IntegrationFlow routerFlow() {
return IntegrationFlows.from("routingChannel")
.route("payload.paymentType", r -> r
.channelMapping("CASH", "cashPaymentChannel")
.channelMapping("CREDIT", "authorizePaymentChannel")
.channelMapping("DEBIT", "authorizePaymentChannel"))
.get();
}
To simplify things even more, the SpEL expression may evaluate to a channel name:
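For instance, a Java configuration sketch of such a router (the XML variant would carry the same expression in the expression attribute of <int:router/>):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.router.ExpressionEvaluatingRouter;

public class ChannelNameRouterConfig {

    @Bean
    @ServiceActivator(inputChannel = "routingChannel")
    public ExpressionEvaluatingRouter router() {
        // a payload of "cash" is routed to the channel whose bean name is "cashChannel"
        return new ExpressionEvaluatingRouter("payload + 'Channel'");
    }
}
```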
In the above configuration the result channel will be computed by the SpEL expression which simply
concatenates the value of the payload with the literal String 'Channel'.
Another value of SpEL for configuring routers is that an expression can actually return a Collection,
effectively making every <router> a Recipient List Router. Whenever the expression returns multiple
channel values the Message will be forwarded to each channel.
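For instance, a sketch of a router whose expression returns the value of a hypothetical channels header:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.router.ExpressionEvaluatingRouter;

public class MultiChannelRouterConfig {

    @Bean
    @ServiceActivator(inputChannel = "routingChannel")
    public ExpressionEvaluatingRouter router() {
        // when the "channels" header holds a List of channel names,
        // the message is forwarded to every channel in that list
        return new ExpressionEvaluatingRouter("headers.channels");
    }
}
```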
In the above configuration, if the Message includes a header with the name channels the value of which
is a List of channel names then the Message will be sent to each channel in the list. You may also
find Collection Projection and Collection Selection expressions useful to select multiple channels. For
further information, please see:
• Collection Projection
• Collection Selection
When using @Router to annotate a method, the method may return either a MessageChannel or
String type. In the latter case, the endpoint will resolve the channel name as it does for the default
output channel. Additionally, the method may return either a single value or a collection. If a collection
is returned, the reply message will be sent to multiple channels. To summarize, the following method
signatures are all valid.
@Router
public MessageChannel route(Message message) {...}
@Router
public List<MessageChannel> route(Message message) {...}
@Router
public String route(Foo payload) {...}
@Router
public List<String> route(Foo payload) {...}
In addition to payload-based routing, a Message may be routed based on metadata available within the
message header as either a property or attribute. In this case, a method annotated with @Router may
include a parameter annotated with @Header which is mapped to a header value as illustrated below
and documented in Section E.6, “Annotation Support”.
@Router
public List<String> route(@Header("orderStatus") OrderStatus status)
Note
For routing of XML-based Messages, including XPath support, see Chapter 37, XML Support -
Dealing with XML Payloads.
Also see Section 9.9, “Message Routers” in Java DSL chapter for more information about routers
configuration.
Dynamic Routers
So as you can see, Spring Integration provides quite a few different router configurations for common
content-based routing use cases as well as the option of implementing custom routers as POJOs. For
example PayloadTypeRouter provides a simple way to configure a router which computes channels
based on the payload type of the incoming Message while HeaderValueRouter provides the same
convenience in configuring a router which computes channels by evaluating the value of a particular
Message Header. There are also expression-based (SpEL) routers where the channel is determined
based on evaluating an expression. Thus, these types of routers exhibit some dynamic characteristics.
However, these routers all require static configuration. Even in the case of expression-based routers, the
expression itself is defined as part of the router configuration, which means that the same expression
operating on the same value will always result in the computation of the same channel. This is
acceptable in most cases since such routes are well defined and therefore predictable. But there are
times when we need to change router configurations dynamically so message flows may be routed to
a different channel.
Example:
You might want to bring down some part of your system for maintenance and temporarily re-route
messages to a different message flow. Or you may want to introduce more granularity to your message
flow by adding another route to handle a more concrete type of java.lang.Number (in the case of
PayloadTypeRouter).
Unfortunately, with a static router configuration, accomplishing this would require bringing down your
entire application, changing the configuration of the router (changing routes) and bringing it back up.
This is obviously not a viable solution.
The Dynamic Router pattern describes the mechanisms by which one can change/configure routers
dynamically without bringing down the system or individual routers.
Before we get into the specifics of how this is accomplished in Spring Integration, let’s quickly summarize
the typical flow of the router, which consists of 3 simple steps:
• Step 1 - Compute the channel identifier, which is a value calculated by the router once it receives
the Message. Typically it is a String or an instance of the actual MessageChannel.
• Step 2 - Resolve the channel identifier to a channel name. We’ll describe the specifics of this
process in a moment.
• Step 3 - Resolve the channel name to the actual instance of the MessageChannel.
There is not much that can be done with regard to dynamic routing if Step 1 results in the actual instance
of the MessageChannel, simply because the MessageChannel is the final product of any router’s job.
However, if Step 1 results in a channel identifier that is not an instance of MessageChannel,
then there are quite a few possibilities to influence the process of deriving the Message Channel. Let’s
look at a couple of examples in the context of the 3 steps mentioned above:
<int:payload-type-router input-channel="routingChannel">
<int:mapping type="java.lang.String" channel="channel1" />
<int:mapping type="java.lang.Integer" channel="channel2" />
</int:payload-type-router>
Within the context of the Payload Type Router the 3 steps mentioned above would be realized as:
• Step 1 - Compute channel identifier which is the fully qualified name of the payload type (e.g.,
java.lang.String).
• Step 2 - Resolve channel identifier to channel name where the result of the previous step is
used to select the appropriate value from the payload type mapping defined via mapping element.
• Step 3 - Resolve channel name to the actual instance of the MessageChannel as a reference
to a bean within the Application Context (which is hopefully a MessageChannel) identified by the
result of the previous step.
In other words, each step feeds the next step until the process completes.
• Step 1 - Compute channel identifier which is the value of the header identified by the header-
name attribute.
• Step 2 - Resolve channel identifier to channel name where the result of the previous step is
used to select the appropriate value from the general mapping defined via mapping element.
• Step 3 - Resolve channel name to the actual instance of the MessageChannel as a reference
to a bean within the Application Context (which is hopefully a MessageChannel) identified by the
result of the previous step.
The above two configurations of two different router types look almost identical. However if we look at
the alternate configuration of the HeaderValueRouter we clearly see that there is no mapping sub
element:
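In Java configuration terms, such an alternate configuration is simply a HeaderValueRouter with no channel mappings at all (a sketch):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.router.HeaderValueRouter;

public class MappingLessRouterConfig {

    @Bean
    @ServiceActivator(inputChannel = "routingChannel")
    public HeaderValueRouter router() {
        // no channel mappings: the header value itself is treated as the channel name
        return new HeaderValueRouter("testHeader");
    }
}
```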
But the configuration is still perfectly valid. So the natural question is: what about the mapping in
Step 2?
What this means is that Step 2 is now an optional step. If mapping is not defined then the channel
identifier value computed in Step 1 will automatically be treated as the channel name, which will
now be resolved to the actual MessageChannel as in Step 3. What it also means is that Step 2 is one
of the key steps to provide dynamic characteristics to the routers, since it introduces a process which
allows you to change the way channel identifier resolves to 'channel name', thus influencing the process
of determining the final instance of the MessageChannel from the initial channel identifier.
For example, suppose a HeaderValueRouter is configured with header-name="testHeader" and the testHeader value is kermit, which is now a channel identifier (Step 1). Since there is no mapping in this router, resolving this channel identifier to a channel name (Step 2) is impossible, and the channel identifier is treated as the channel name. But what if there were a mapping, just for a different value? The end result would still be the same: if no new value can be determined by resolving the channel identifier to a channel name, the channel identifier becomes the channel name.
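The fallback behavior described above can be sketched in plain Java. This is an illustration of the resolution logic, not the framework's internal code; the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class ChannelResolutionSketch {

    // Step 2 of the resolution process: try to resolve the channel identifier
    // through the mapping; if no entry exists, the identifier itself
    // becomes the channel name (which Step 3 then looks up as a bean).
    static String resolveChannelName(String channelIdentifier, Map<String, String> mappings) {
        return mappings.getOrDefault(channelIdentifier, channelIdentifier);
    }

    public static void main(String[] args) {
        Map<String, String> mappings = new HashMap<>();
        System.out.println(resolveChannelName("kermit", mappings)); // kermit
        mappings.put("kermit", "simpson");
        System.out.println(resolveChannelName("kermit", mappings)); // simpson
    }
}
```

With an empty mapping, kermit resolves to itself; after adding the kermit=simpson entry, the same identifier resolves to simpson.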
So all that is left is for Step 3 to resolve the channel name (kermit) to an actual instance of the
MessageChannel identified by this name. That basically involves a bean lookup for the name provided.
So now all messages which contain the header/value pair as testHeader=kermit are going to be
routed to a MessageChannel whose bean name (id) is kermit.
But what if you want to route these messages to the simpson channel? Obviously changing a static
configuration will work, but will also require bringing your system down. However if you had access to
the channel identifier map, then you could just introduce a new mapping where the header/value
pair is now kermit=simpson, thus allowing Step 2 to treat kermit as a channel identifier while
resolving it to simpson as the channel name.
The same obviously applies for PayloadTypeRouter, where you can now remap or remove a
particular payload type mapping. In fact, it applies to every other router, including expression-based
routers, since their computed values will now have a chance to go through Step 2 to be additionally
resolved to the actual channel name.
One way to manage the router mappings is through the Control Bus pattern which exposes a Control
Channel where you can send control messages to manage and monitor Spring Integration components,
including routers.
Note
For more information about the Control Bus, please see chapter Section 10.6, “Control Bus”.
Typically you would send a control message asking to invoke a particular operation on a particular
managed component (e.g. a router). Two managed operations (methods) that are specific to changing
the router resolution process are:
• setChannelMapping(String key, String channelName)
• removeChannelMapping(String key)
Note that these methods can be used for simple changes (updating a single route or adding/removing
a route). However, if you want to remove one route and add another, the updates are not atomic. This
means the routing table may be in an indeterminate state between the updates. Starting with version
4.0, you can now use the control bus to update the entire routing table atomically.
"@'router.handler'.replaceChannelMappings('foo=qux \n baz=bar')"
• Note that each mapping is separated by a newline character (\n). For programmatic changes to the
map, it is recommended that the setChannelMappings method be used instead, for type-safety.
Any non-String keys or values passed into replaceChannelMappings are ignored.
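To illustrate the argument format accepted by replaceChannelMappings, here is a small standalone parser for such newline-separated key=value pairs. This is only a sketch of the format, not the framework's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MappingParseSketch {

    // Parse "key=value" pairs separated by newlines, mirroring the
    // replaceChannelMappings() argument shown above (e.g. "foo=qux \n baz=bar").
    static Map<String, String> parseMappings(String mappings) {
        Map<String, String> result = new LinkedHashMap<>();
        for (String line : mappings.split("\n")) {
            String[] pair = line.trim().split("=", 2);
            if (pair.length == 2) {
                result.put(pair[0].trim(), pair[1].trim());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parseMappings("foo=qux \n baz=bar")); // {foo=qux, baz=bar}
    }
}
```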
You can also expose a router instance with Spring’s JMX support, and then use your favorite JMX client
(e.g., JConsole) to manage those operations (methods) for changing the router’s configuration.
Note
For more information about Spring Integration’s JMX support, please see chapter Section 10.2,
“JMX Support”.
Routing Slip
Starting with version 4.1, Spring Integration provides an implementation of the Routing Slip Enterprise
Integration Pattern. It is implemented as a routingSlip message header which is used to determine
the next channel in AbstractMessageProducingHandler s, when an outputChannel isn’t
specified for the endpoint. This pattern is useful in complex, dynamic cases when it can become difficult
to configure multiple routers to determine message flow. When a message arrives at an endpoint that
has no output-channel, the routingSlip is consulted to determine the next channel to which the
message will be sent. When the routing slip is exhausted, normal replyChannel processing resumes.
<util:properties id="properties">
<beans:prop key="myRoutePath1">channel1</beans:prop>
<beans:prop key="myRoutePath2">request.headers[myRoutingSlipChannel]</beans:prop>
</util:properties>
<context:property-placeholder properties-ref="properties"/>
Since the Routing Slip is involved in the getOutputChannel process we have a request-
reply context. The RoutingSlipRouteStrategy has been introduced to determine the next
outputChannel using the requestMessage, as well as the reply object. An implementation
of this strategy should be registered as a bean in the application context and its bean name
is used in the Routing Slip path. The ExpressionEvaluatingRoutingSlipRouteStrategy
implementation is provided. It accepts a SpEL expression, and an internal
ExpressionEvaluatingRoutingSlipRouteStrategy.RequestAndReply object is used as the
root object of the evaluation context. This is to avoid the overhead of EvaluationContext
creation for each ExpressionEvaluatingRoutingSlipRouteStrategy.getNextPath()
invocation. It is a simple Java Bean with two properties - Message<?> request
and Object reply. With this expression implementation, we can specify
Routing Slip path entries using SpEL (@routingSlipRoutingPojo.get(request,
reply), request.headers[myRoutingSlipChannel]) avoiding a bean definition for the
RoutingSlipRouteStrategy.
@Bean
@Transformer(inputChannel = "routingSlipHeaderChannel")
public HeaderEnricher headerEnricher() {
return new HeaderEnricher(Collections.singletonMap(IntegrationMessageHeaderAccessor.ROUTING_SLIP,
new RoutingSlipHeaderValueMessageProcessor("myRoutePath1",
"@routingSlipRoutingPojo.get(request, reply)",
"routingSlipRoutingStrategy",
"request.headers[myRoutingSlipChannel]",
"finishChannel")));
}
The Routing Slip algorithm works as follows when an endpoint produces a reply and there is no
outputChannel defined:
• The routingSlipIndex is used to get a value from the Routing Slip path list.
• If the next Routing Slip path entry isn’t a String, it must be an instance of
RoutingSlipRouteStrategy; its getNextPath() return value is used to determine the next
outputChannel.
• When the routingSlipIndex exceeds the size of the Routing Slip path list, the algorithm moves
to the default behavior for the standard replyChannel header.
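The index-advance behavior above can be sketched in plain Java. This is an illustration of the algorithm, not the framework's code; the names are hypothetical:

```java
import java.util.List;

public class RoutingSlipSketch {

    // Sketch of the routing slip consultation: the routingSlipIndex selects
    // the next path entry; once the slip is exhausted, processing falls back
    // to the standard replyChannel header.
    static String nextChannel(List<String> path, int routingSlipIndex, String replyChannel) {
        if (routingSlipIndex >= path.size()) {
            return replyChannel;
        }
        return path.get(routingSlipIndex);
    }

    public static void main(String[] args) {
        List<String> path = List.of("channel1", "channel2");
        System.out.println(nextChannel(path, 0, "replyChannel")); // channel1
        System.out.println(nextChannel(path, 2, "replyChannel")); // replyChannel
    }
}
```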
The EIP also defines the Process Manager pattern. This pattern can now easily be implemented
using custom Process Manager logic encapsulated in a RoutingSlipRouteStrategy within
the routing slip. In addition to a bean name, the RoutingSlipRouteStrategy can return any
MessageChannel object; and there is no requirement that this MessageChannel instance is a
bean in the application context. This way, we can provide powerful dynamic routing logic when it
cannot be predicted in advance which channel should be used; a MessageChannel can be created within
the RoutingSlipRouteStrategy and returned. A FixedSubscriberChannel with an associated
MessageHandler implementation is a good combination for such cases. For example, we can route to
a Reactor Stream:
@Bean
public PollableChannel resultsChannel() {
return new QueueChannel();
}
@Bean
public RoutingSlipRouteStrategy routeStrategy() {
return (requestMessage, reply) -> requestMessage.getPayload() instanceof String
? new FixedSubscriberChannel(m ->
Mono.just((String) m.getPayload())
.map(String::toUpperCase)
.subscribe(v -> messagingTemplate().convertAndSend(resultsChannel(), v)))
: new FixedSubscriberChannel(m ->
Mono.just((Integer) m.getPayload())
.map(v -> v * 2)
.subscribe(v -> messagingTemplate().convertAndSend(resultsChannel(), v)));
}
6.2 Filter
Introduction
Message Filters are used to decide whether a Message should be passed along or dropped based on
some criteria such as a Message Header value or Message content itself. Therefore, a Message Filter
is similar to a router, except that for each Message received from the filter’s input channel, that same
Message may or may not be sent to the filter’s output channel. Unlike the router, it makes no decision
regarding which Message Channel to send the Message to but only decides whether to send.
Note
As you will see momentarily, the Filter also supports a discard channel, so in certain cases it can
play the role of a very simple router (or "switch") based on a boolean condition.
In Spring Integration, a Message Filter may be configured as a Message Endpoint that delegates to an
implementation of the MessageSelector interface. That interface is itself quite simple:

public interface MessageSelector {

    boolean accept(Message<?> message);

}

In combination with the namespace and SpEL, very powerful filters can be configured with very little
Java code.
Configuring Filter
Configuring a Filter with XML
Alternatively, the method attribute can be added at which point the ref may refer to any object. The
referenced method may expect either the Message type or the payload type of inbound Messages.
The method must return a boolean value. If the method returns true, the Message will be sent to the
output-channel.
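A minimal POJO of the kind described above might look like the following. The class name, method name, and payload property are hypothetical; any method that takes the Message or payload type and returns boolean will do:

```java
public class PetFilter {

    // POJO filter method: accepts the inbound payload type and returns a
    // boolean; true means the message is passed to the output-channel.
    public boolean accept(String petName) {
        return petName.startsWith("k");
    }
}
```

The bean would then be referenced via the ref and method attributes of the <filter> element.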
If the selector or adapted POJO method returns false, there are a few settings that control the handling
of the rejected Message. By default (if configured like the example above), rejected Messages will be
silently dropped. If rejection should instead result in an error condition, then set the throw-exception-
on-rejection attribute to true:
If you want rejected messages to be routed to a specific channel, provide that reference as the
discard-channel:
Note
Message Filters are commonly used in conjunction with a Publish Subscribe Channel. Many filter
endpoints may be subscribed to the same channel, and they decide whether or not to pass the
Message to the next endpoint which could be any of the supported types (e.g. Service Activator).
This provides a reactive alternative to the more proactive approach of using a Message Router
with a single Point-to-Point input channel and multiple output channels.
Using a ref attribute is generally recommended if the custom filter implementation is referenced in other
<filter> definitions. However if the custom filter implementation is scoped to a single <filter>
element, provide an inner bean definition:
Note
Using both the ref attribute and an inner handler definition in the same <filter> configuration
is not allowed, as it creates an ambiguous condition, and an Exception will be thrown.
Important
If the "ref" attribute references a bean that extends MessageFilter (such as filters provided
by the framework itself), the configuration is optimized by injecting the output channel into
the filter bean directly. In this case, each "ref" must be to a separate bean instance (or
a prototype-scoped bean), or use the inner <bean/> configuration type. However, this
optimization only applies if you don’t provide any filter-specific attributes in the filter XML definition.
If you inadvertently reference the same message handler from multiple beans, you will get a
configuration exception.
With the introduction of SpEL support, Spring Integration added the expression attribute to the filter
element. It can be used to avoid Java entirely for simple filters.
The string passed as the expression attribute will be evaluated as a SpEL expression with the Message
available in the evaluation context. If it is necessary to include the result of an expression in the scope
of the application context you can use the #{} notation as defined in the SpEL reference documentation.
<int:filter input-channel="input"
expression="payload.matches(#{filterPatterns.nonsensePattern})"/>
If the Expression itself needs to be dynamic, then an expression sub-element may be used. That
provides a level of indirection for resolving the Expression by its key from an ExpressionSource. That
is a strategy interface that you can implement directly, or you can rely upon a version available in
Spring Integration that loads Expressions from a "resource bundle" and can check for modifications
after a given number of seconds. All of this is demonstrated in the following configuration sample where
the Expression could be reloaded within one minute if the underlying file had been modified. If the
ExpressionSource bean is named "expressionSource", then it is not necessary to provide the source
attribute on the <expression> element, but in this case it’s shown for completeness.
Then, the config/integration/expressions.properties file (or any more specific version with a locale
extension to be resolved in the typical way that resource-bundles are loaded) would contain a key/value
pair:
Note
All of these examples that use expression as an attribute or sub-element can also be applied
within transformer, router, splitter, service-activator, and header-enricher elements. Of course,
the semantics/role of the given component type would affect the interpretation of the evaluation
result in the same way that the return value of a method-invocation would be interpreted. For
example, an expression can return Strings that are to be treated as Message Channel names by
a router component. However, the underlying functionality of evaluating the expression against
the Message as the root object, and resolving bean names if prefixed with @ is consistent across
all of the core EIP components within Spring Integration.
❶ An annotation indicating that this method shall be used as a filter. Must be specified if this class
will be used as a filter.
All of the configuration options provided by the xml element are also available for the @Filter
annotation.
The filter can be either referenced explicitly from XML or, if the @MessageEndpoint annotation is
defined on the class, detected automatically through classpath scanning.
6.3 Splitter
Introduction
The Splitter is a component whose role is to partition a message into several parts and send the
resulting messages to be processed independently. Very often, a splitter is an upstream producer in a
pipeline that includes an Aggregator.
Programming model
The API for performing splitting consists of one base class, AbstractMessageSplitter, which is a
MessageHandler implementation, encapsulating features which are common to splitters, such as filling
in the appropriate message headers CORRELATION_ID, SEQUENCE_SIZE, and SEQUENCE_NUMBER
on the messages that are produced. This enables tracking down the messages and the results of their
processing (in a typical scenario, these headers would be copied over to the messages that are produced
by the various transforming endpoints), and use them, for example, in a Composed Message Processor
scenario.
• A Collection or an array of Messages, or an Iterable (or Iterator) that iterates over Messages
- in this case the messages will be sent as such (after the CORRELATION_ID, SEQUENCE_SIZE and
SEQUENCE_NUMBER are populated). Using this approach gives more control to the developer, for
example for populating custom message headers as part of the splitting process.
• a Message or non-Message object (but not a Collection or an Array) - it works like the previous cases,
except a single message will be sent out.
In Spring Integration, any POJO can implement the splitting algorithm, provided that it defines a method
that accepts a single argument and has a return value. In this case, the return value of the method will
be interpreted as described above. The input argument might either be a Message or a simple POJO.
In the latter case, the splitter will receive the payload of the incoming message. Since this decouples the
code from the Spring Integration API and will typically be easier to test, it is the recommended approach.
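A POJO splitter in this recommended style could look like the following sketch. The class, method, and comma-separated payload format are hypothetical:

```java
import java.util.Arrays;
import java.util.List;

public class OrderLineSplitter {

    // POJO splitter: receives the payload of the incoming message and
    // returns a collection; each element becomes the payload of a new
    // message sent to the output channel.
    public List<String> split(String orderLines) {
        return Arrays.asList(orderLines.split(","));
    }
}
```

Because it has no dependency on the Spring Integration API, this class can be unit-tested directly.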
Iterators
Starting with version 4.1, the AbstractMessageSplitter supports the Iterator type for the
value to split. Note, in the case of an Iterator (or Iterable), we don’t have access to the
number of underlying items and the SEQUENCE_SIZE header is set to 0. This means that the
default SequenceSizeReleaseStrategy of an <aggregator> won’t work and the group for the
CORRELATION_ID from the splitter won’t be released; it will remain as incomplete. In this case
you should use an appropriate custom ReleaseStrategy or rely on send-partial-result-on-
expiry together with group-timeout or a MessageGroupStoreReaper.
An Iterator object is useful to avoid the need to build an entire collection in memory before
splitting. For example, this applies when underlying items are populated from some external system
(e.g. a database or FTP MGET) using iteration or streams.
Starting with version 5.0, the AbstractMessageSplitter supports the Java Stream and Reactive
Streams Publisher types for the value to split. In this case the target Iterator is built on their
iteration functionality.
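The iteration functionality mentioned above can be illustrated with plain Java: a Stream exposes an Iterator, so items can be consumed one at a time without materializing the whole collection first. This is only a sketch of the mechanism, not the splitter's internal code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.stream.Stream;

public class StreamSplitSketch {

    public static void main(String[] args) {
        // Obtain a lazy Iterator from a Java Stream and consume it item by item,
        // as a splitter would when sending one message per element.
        Iterator<String> iterator = Stream.of("a", "b", "c").iterator();
        List<String> split = new ArrayList<>();
        iterator.forEachRemaining(split::add);
        System.out.println(split); // [a, b, c]
    }
}
```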
Configuring Splitter
<int:channel id="inputChannel"/>
<int:splitter id="splitter" ❶
ref="splitterBean" ❷
method="split" ❸
input-channel="inputChannel" ❹
output-channel="outputChannel" /> ❺
<int:channel id="outputChannel"/>
Using a ref attribute is generally recommended if the custom splitter implementation may be referenced
in other <splitter> definitions. However if the custom splitter handler implementation should be
scoped to a single definition of the <splitter>, configure an inner bean definition:
Note
Using both a ref attribute and an inner handler definition in the same <int:splitter>
configuration is not allowed, as it creates an ambiguous condition and will result in an Exception
being thrown.
Important
If the "ref" attribute references a bean that extends AbstractMessageProducingHandler
(such as splitters provided by the framework itself), the configuration is optimized by injecting
the output channel into the handler directly. In this case, each "ref" must be to a separate bean
instance (or a prototype-scoped bean), or use the inner <bean/> configuration type. However,
this optimization only applies if you don’t provide any splitter-specific attributes in the splitter XML
definition. If you inadvertently reference the same message handler from multiple beans, you will
get a configuration exception.
The @Splitter annotation is applicable to methods that expect either the Message type or the
message payload type, and the return values of the method should be a Collection of any type. If
the returned values are not actual Message objects, then each item will be wrapped in a Message as
its payload. Each message will be sent to the designated output channel for the endpoint on which the
@Splitter is defined.
@Splitter
List<LineItem> extractItems(Order order) {
    return order.getItems();
}
6.4 Aggregator
Introduction
Basically a mirror-image of the Splitter, the Aggregator is a type of Message Handler that receives
multiple Messages and combines them into a single Message. In fact, an Aggregator is often a
downstream consumer in a pipeline that includes a Splitter.
Technically, the Aggregator is more complex than a Splitter, because it is stateful as it must hold the
Messages to be aggregated and determine when the complete group of Messages is ready to be
aggregated. In order to do this it requires a MessageStore.
Functionality
The Aggregator combines a group of related messages, by correlating and storing them, until the group
is deemed complete. At that point, the Aggregator will create a single message by processing the whole
group, and will send the aggregated message as output.
Implementing an Aggregator requires providing the logic to perform the aggregation (i.e., the creation
of a single message from many). Two related concepts are correlation and release.
Correlation determines how messages are grouped for aggregation. In Spring Integration correlation is
done by default based on the IntegrationMessageHeaderAccessor.CORRELATION_ID message
header. Messages with the same IntegrationMessageHeaderAccessor.CORRELATION_ID will
be grouped together. However, the correlation strategy may be customized to allow other ways of
specifying how the messages should be grouped together by implementing a CorrelationStrategy
(see below).
Programming model
AggregatingMessageHandler
The responsibility of deciding how the messages should be grouped together is delegated to a
CorrelationStrategy instance. The responsibility of deciding whether the message group can be
released is delegated to a ReleaseStrategy instance.
As for actual processing of the message group, the default implementation is the
DefaultAggregatingMessageGroupProcessor. It creates a single Message whose payload
is a List of the payloads received for a given group. This works well for simple Scatter Gather
implementations with either a Splitter, Publish Subscribe Channel, or Recipient List Router upstream.
Note
When using a Publish Subscribe Channel or Recipient List Router in this type of scenario, be sure
to enable the flag to apply-sequence. That will add the necessary headers (CORRELATION_ID,
SEQUENCE_NUMBER and SEQUENCE_SIZE). That behavior is enabled by default for Splitters in
Spring Integration, but it is not enabled for the Publish Subscribe Channel or Recipient List Router
because those components may be used in a variety of contexts in which these headers are not
necessary.
When implementing a specific aggregator strategy for an application, a developer can extend
AbstractAggregatingMessageGroupProcessor and implement the aggregatePayloads
method. However, there are better solutions, less coupled to the API, for implementing the aggregation
logic which can be configured easily either through XML or through annotations.
In general, any POJO can implement the aggregation algorithm if it provides a method that accepts a
single java.util.List as an argument (parameterized lists are supported as well). This method will
be invoked for aggregating messages as follows:
• if the return type is not assignable to Message, then it will be treated as the payload for a Message
that will be created automatically by the framework.
Note
In the interest of code simplicity, and promoting best practices such as low coupling, testability,
etc., the preferred way of implementing the aggregation logic is through a POJO, and using the
XML or annotation support for configuring it in the application.
Important
If you wish to release a collection of objects from a custom MessageGroupProcessor as the payload
of a message, your class should extend AbstractAggregatingMessageGroupProcessor and
implement aggregatePayloads().
Also, since version 4.2, a SimpleMessageGroupProcessor is provided; which simply returns the
collection of messages from the group, which, as indicated above, causes the released messages to
be sent individually.
This allows the aggregator to work as a message barrier where arriving messages are held until the
release strategy fires, and the group is released, as a sequence of individual messages.
ReleaseStrategy
In general, any POJO can implement the completion decision logic if it provides a method that accepts
a single java.util.List as an argument (parameterized lists are supported as well), and returns a
boolean value. This method will be invoked after the arrival of each new message, to decide whether
the group is complete or not, as follows:
• if the argument is a java.util.List<T>, and the parameter type T is assignable to Message, then
the whole list of messages accumulated in the group will be sent to the method
• otherwise, the method will receive the payloads of the accumulated messages
• the method must return true if the message group is ready for aggregation, and false otherwise.
For example:
public class MyReleaseStrategy {

    @ReleaseStrategy
    public boolean canMessagesBeReleased(List<Message<?>> messages) {...}
}

public class MyReleaseStrategy {

    @ReleaseStrategy
    public boolean canMessagesBeReleased(List<String> payloads) {...}
}
As you can see based on the above signatures, the POJO-based Release Strategy will be passed
a Collection of not-yet-released Messages (if you need access to the whole Message) or a
Collection of payload objects (if the type parameter is anything other than Message). Typically
this would satisfy the majority of use cases. However if, for some reason, you need to access the full
MessageGroup then you should simply provide an implementation of the ReleaseStrategy interface.
Warning
When handling potentially large groups, it is important to understand how these methods are
invoked because the release strategy may be invoked multiple times before the group is released.
The most efficient is an implementation of ReleaseStrategy because the aggregator can
invoke it directly. The second most efficient is a POJO method with a Collection<Message<?
>> parameter type. The least efficient is a POJO method with a Collection<Foo> type - the
framework has to copy the payloads from the messages in the group into a new collection (and
possibly attempt conversion on the payloads to Foo) every time the release strategy is called.
Collection<?> avoids the conversion but still requires creating the new Collection.
For these reasons, for large groups, it is recommended that you implement
ReleaseStrategy.
When the group is released for aggregation, all its not-yet-released messages are processed and
removed from the group. If the group is also complete (i.e. if all messages from a sequence have
arrived or if there is no sequence defined), then the group is marked as complete. Any new messages
for this group will be sent to the discard channel (if defined). Setting expire-groups-upon-
completion to true (default is false) removes the entire group and any new messages, with the
same correlation id as the removed group, will form a new group. Partial sequences can be released
by using a MessageGroupStoreReaper together with send-partial-result-on-expiry being
set to true.
Important
To facilitate discarding of late-arriving messages, the aggregator must maintain state about the
group after it has been released. This can eventually cause out of memory conditions. To avoid
such situations, you should consider configuring a MessageGroupStoreReaper to remove the
group metadata; the expiry parameters should be set to expire groups after it is not expected
that late messages will arrive. For information about configuring a reaper, see the section called
“Managing State in an Aggregator: MessageGroupStore”.
Note
Before version 5.0, the default release strategy was SequenceSizeReleaseStrategy which
does not perform well with large groups. With that strategy, duplicate sequence numbers are
detected and rejected; this operation can be expensive.
If you are aggregating large groups, don’t need to release partial groups, and don’t need to
detect/reject duplicate sequences, consider using the SimpleSequenceSizeReleaseStrategy
instead; it is much more efficient for these use cases, and is the default since version 5.0 when partial
group release is not specified.
The 4.3 release changed the default Collection for messages in a SimpleMessageGroup to
HashSet (it was previously a BlockingQueue). This was expensive when removing individual
messages from large groups (an O(n) linear scan was required). Although the hash set is generally
much faster for removing, it can be expensive for large messages because the hash has to be
calculated (on both inserts and removes). If you have messages that are expensive to hash,
consider using some other collection type. As discussed in the section called “MessageGroupFactory”,
a SimpleMessageGroupFactory is provided so you can select the Collection that best
suits your needs. You can also provide your own factory implementation to create some other
Collection<Message<?>>.
Here is an example of how to configure an aggregator with the previous implementation and a
SimpleSequenceSizeReleaseStrategy.
<int:aggregator input-channel="aggregate"
output-channel="out" message-store="store" release-strategy="releaser" />
CorrelationStrategy
The method returns an Object which represents the correlation key used for associating the message
with a message group. The key must satisfy the criteria used for a key in a Map with respect to the
implementation of equals() and hashCode().
In general, any POJO can implement the correlation logic, and the rules for mapping a message to a
method’s argument (or arguments) are the same as for a ServiceActivator (including support for
@Header annotations). The method must return a value, and the value must not be null.
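The requirement that the correlation key behave correctly as a Map key can be illustrated with a small grouping sketch. The correlation logic here (first character of the payload) is purely hypothetical, and this is not the framework's implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CorrelationSketch {

    // Group payloads by a correlation key; because the key is used as a
    // Map key, it must implement equals() and hashCode() consistently.
    static Map<Object, List<String>> groupByKey(List<String> payloads) {
        Map<Object, List<String>> groups = new HashMap<>();
        for (String payload : payloads) {
            // Hypothetical correlation strategy: use the first character as the key.
            Object key = payload.charAt(0);
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(payload);
        }
        return groups;
    }

    public static void main(String[] args) {
        System.out.println(groupByKey(List.of("apple", "avocado", "banana")));
    }
}
```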
LockRegistry
Changes to groups are thread safe; a LockRegistry is used to obtain a lock for the resolved
correlation id. A DefaultLockRegistry is used by default (in-memory). For synchronizing updates
across servers, where a shared MessageGroupStore is being used, a shared lock registry must be
configured. See the section called “Configuring an Aggregator” below for more information.
Configuring an Aggregator
See Section 9.11, “Aggregators and Resequencers” for configuring an Aggregator in Java DSL.
Spring Integration supports the configuration of an aggregator via XML through the <aggregator/>
element. Below you can see an example of an aggregator.
<int:channel id="inputChannel"/>
<int:aggregator id="myAggregator" ❶
auto-startup="true" ❷
input-channel="inputChannel" ❸
output-channel="outputChannel" ❹
discard-channel="throwAwayChannel" ❺
message-store="persistentMessageStore" ❻
order="1" ❼
send-partial-result-on-expiry="false" ❽
send-timeout="1000" ❾
correlation-strategy="correlationStrategyBean" ❿
correlation-strategy-method="correlate" 11
correlation-strategy-expression="headers['foo']" 12
ref="aggregatorBean" 13
method="aggregate" 14
release-strategy="releaseStrategyBean" 15
release-strategy-method="release" 16
release-strategy-expression="size() == 5" 17
expire-groups-upon-completion="false" 18
empty-group-min-timeout="60000" 19
lock-registry="lockRegistry" 20
group-timeout="60000" 21
group-timeout-expression="size() ge 2 ? 100 : -1" 22
expire-groups-upon-timeout="true" 23
scheduler="taskScheduler" > 24
<expire-transactional/> 25
<expire-advice-chain/> 26
</int:aggregator>
<int:channel id="outputChannel"/>
<int:channel id="throwAwayChannel"/>
❽ Indicates that expired messages should be aggregated and sent to the output-
channel or replyChannel once their containing MessageGroup is expired (see
MessageGroupStore.expireMessageGroups(long)). One way of expiring MessageGroup
s is by configuring a MessageGroupStoreReaper. However MessageGroup s can alternatively
be expired by simply calling MessageGroupStore.expireMessageGroups(timeout). That
could be accomplished via a Control Bus operation or by simply invoking that method if you have a
reference to the MessageGroupStore instance. Otherwise by itself this attribute has no behavior.
It only serves as an indicator of what to do (discard or send to the output/reply channel) with
Messages that are still in the MessageGroup that is about to be expired. Optional. Default - false.
NOTE: This attribute is more properly send-partial-result-on-timeout because the group
may not actually expire if expire-groups-upon-timeout is set to false.
❾ The timeout interval to wait when sending a reply Message to the output-channel
or discard-channel. Defaults to -1 - blocking indefinitely. It is applied only if the
output channel has some sending limitations, e.g. QueueChannel with a fixed capacity.
In this case a MessageDeliveryException is thrown. The send-timeout is ignored in
case of AbstractSubscribableChannel implementations. In case of group-timeout(-
expression) the MessageDeliveryException from the scheduled expire task leads this task
to be rescheduled. Optional.
❿ A reference to a bean that implements the message correlation (grouping) algorithm. The bean can
be an implementation of the CorrelationStrategy interface or a POJO. In the latter case the
correlation-strategy-method attribute must be defined as well. Optional (by default, the aggregator
will use the IntegrationMessageHeaderAccessor.CORRELATION_ID header).
11 A method defined on the bean referenced by correlation-strategy, that implements the
correlation decision algorithm. Optional, with restrictions (requires correlation-strategy to
be present).
12 A SpEL expression representing the correlation strategy. Example: "headers['foo']". Only
one of correlation-strategy or correlation-strategy-expression is allowed.
13 A reference to a bean defined in the application context. The bean must implement the aggregation
logic as described above. Optional (by default the list of aggregated Messages will become a
payload of the output message).
14 A method defined on the bean referenced by ref, that implements the message aggregation
algorithm. Optional, depends on ref attribute being defined.
15 A reference to a bean that implements the release strategy. The bean can be an implementation
of the ReleaseStrategy interface or a POJO. In the latter case the release-strategy-
method attribute must be defined as well. Optional (by default, the aggregator will use the
IntegrationMessageHeaderAccessor.SEQUENCE_SIZE header attribute).
16 A method defined on the bean referenced by release-strategy, that implements the
completion decision algorithm. Optional, with restrictions (requires release-strategy to be
present).
17 A SpEL expression representing the release strategy; the root object for the expression is a
MessageGroup. Example: "size() == 5". Only one of release-strategy or release-
strategy-expression is allowed.
18 When set to true (default false), completed groups are removed from the message store, allowing
subsequent messages with the same correlation to form a new group. The default behavior is to
send messages with the same correlation as a completed group to the discard-channel.
19 Only applies if a MessageGroupStoreReaper is configured for the <aggregator>'s
MessageStore. By default, when a MessageGroupStoreReaper is configured to expire partial
groups, empty groups are also removed. Empty groups exist after a group is released normally.
This is to enable the detection and discarding of late-arriving messages. If you wish to expire empty
groups on a longer schedule than expiring partial groups, set this property. Empty groups will then
not be removed from the MessageStore until they have not been modified for at least this number
of milliseconds. Note that the actual time to expire an empty group will also be affected by the
reaper’s timeout property and it could be as much as this value plus the timeout.
20 A reference to a org.springframework.integration.util.LockRegistry bean; used to
obtain a Lock based on the groupId for concurrent operations on the MessageGroup. By default,
an internal DefaultLockRegistry is used. Use of a distributed LockRegistry, such as the
ZookeeperLockRegistry, ensures only one instance of the aggregator will operate on a group
concurrently. See Section 25.11, “Redis Lock Registry”, Section 17.6, “Gemfire Lock Registry”,
Section 39.3, “Zookeeper Lock Registry” for more information.
21 A timeout in milliseconds to force the MessageGroup complete, when the ReleaseStrategy
doesn’t release the group when the current Message arrives. This attribute provides a built-in
Time-based Release Strategy for the aggregator, when there is a need to emit a partial result (or
discard the group), if a new Message does not arrive for the MessageGroup within the timeout.
When a new Message arrives at the aggregator, any existing ScheduledFuture<?> for its
MessageGroup is canceled. If the ReleaseStrategy returns false (don’t release) and the
groupTimeout > 0 a new task will be scheduled to expire the group. Setting this attribute
to zero is not advised because it will effectively disable the aggregator because every message
group will be immediately completed. It is possible, however, to conditionally set it to zero using
an expression; see group-timeout-expression for information. The action taken during the
completion depends on the ReleaseStrategy and the send-partial-group-on-expiry
attribute. See the section called “Aggregator and Group Timeout” for more information. Mutually
exclusive with group-timeout-expression attribute.
22 The SpEL expression that evaluates to a groupTimeout with the MessageGroup as the #root
evaluation context object. Used for scheduling the MessageGroup to be forced complete. If the
expression evaluates to null or < 0, the completion is not scheduled. If it evaluates to zero, the
group is completed immediately on the current thread. In effect, this provides a dynamic group-
timeout property. See group-timeout for more information. Mutually exclusive with group-
timeout attribute.
23 When a group is completed due to a timeout (or by a MessageGroupStoreReaper), the group
is expired (completely removed) by default. Late arriving messages will start a new group. Set this
to false to complete the group but have its metadata remain so that late arriving messages will
be discarded. Empty groups can be expired later using a MessageGroupStoreReaper together
with the empty-group-min-timeout attribute. Default: true.
24 A TaskScheduler bean reference to schedule the MessageGroup to be forced complete
if no new message arrives for the MessageGroup within the groupTimeout. If not
provided, the default scheduler taskScheduler, registered in the ApplicationContext
(ThreadPoolTaskScheduler) will be used. This attribute does not apply if group-timeout or
group-timeout-expression is not specified.
25 Since version 4.1. Allows a transaction to be started for the forceComplete operation. It is
initiated from a group-timeout(-expression) or by a MessageGroupStoreReaper and
is not applied to the normal add/release/discard operations. Only this sub-element or
<expire-advice-chain/> is allowed.
26 Since version 4.1. Allows the configuration of any Advice for the forceComplete operation.
It is initiated from a group-timeout(-expression) or by a MessageGroupStoreReaper
and is not applied to the normal add/release/discard operations. Only this sub-element or
<expire-transactional/> is allowed. A transaction Advice can also be configured here
using the Spring tx namespace.
Expiring Groups
There are two attributes related to expiring (completely removing) groups. When a group is
expired, there is no record of it and if a new message arrives with the same correlation, a
new group is started. When a group is completed (without expiry), the empty group remains
and late arriving messages are discarded. Empty groups can be removed later using a
MessageGroupStoreReaper in combination with the empty-group-min-timeout attribute.
If a group is not completed normally, but is released or discarded because of a timeout, the group
is normally expired. Since version 4.1, you can control this behavior using expire-groups-
upon-timeout; it defaults to true for backwards compatibility.
Note
When a group is timed out, the ReleaseStrategy is given one more opportunity to release
the group; if it does so, and expire-groups-upon-timeout is false, then expiration is
controlled by expire-groups-upon-completion. If the group is not released by the
release strategy during timeout, then the expiration is controlled by the expire-groups-
upon-timeout. Timed-out groups are either discarded, or a partial release occurs (based
on send-partial-result-on-expiry).
Starting with version 5.0, empty groups are also scheduled for removal after
empty-group-min-timeout. If expireGroupsUponCompletion == false and
minimumTimeoutForEmptyGroups > 0, the task to remove the group is scheduled when a
normal or partial sequence release occurs.
Using a ref attribute is generally recommended if a custom aggregator handler implementation may be
referenced in other <aggregator> definitions. However, if a custom aggregator implementation is only
being used by a single definition of the <aggregator>, you can use an inner bean definition (starting
with version 1.0.3) to configure the aggregation POJO within the <aggregator> element:
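For example (a sketch; the bean class name and channel names are placeholders, not taken from this manual):

```xml
<int:aggregator input-channel="input" output-channel="output" method="combine">
    <bean class="org.foo.PojoAggregator"/>
</int:aggregator>
```

Here the inner `<bean>` definition supplies the aggregation POJO, and the method attribute points to the POJO method implementing the aggregation logic.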
Note
Using both a ref attribute and an inner bean definition in the same <aggregator> configuration
is not allowed, as it creates an ambiguous condition. In such cases, an Exception will be thrown.
An implementation of the completion strategy bean for the example above may be as follows:
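A sketch of such a completion (release) strategy POJO, matching the numeric example described below (the group is held until the sum of the payloads exceeds a threshold); the class, method name and threshold are illustrative, not taken from this manual:

```java
import java.util.List;

// Hedged sketch of a release-strategy POJO: the aggregator invokes the method
// configured via release-strategy-method, and the group is released once the
// running sum of the number payloads exceeds a threshold.
class PojoReleaseStrategy {

    private final long threshold = 100;

    public boolean canRelease(List<Long> numbers) {
        long sum = 0;
        for (long number : numbers) {
            sum += number;
        }
        return sum > threshold;
    }
}
```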
Note
Wherever it makes sense, the release strategy method and the aggregator method can be
combined in a single bean.
An implementation of the correlation strategy bean for the example above may be as follows:
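A sketch of such a correlation strategy POJO, matching the example described below (numbers grouped by the remainder after dividing by 10); the names are illustrative, not taken from this manual:

```java
// Hedged sketch of a correlation-strategy POJO: the method configured via
// correlation-strategy-method returns the correlation key, so 12, 22 and 32
// all land in the same message group.
class PojoCorrelationStrategy {

    public Long groupNumbersByLastDigit(Long number) {
        return number % 10;
    }
}
```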
For example, this aggregator would group numbers by some criterion (in our case the remainder after
dividing by 10) and will hold the group until the sum of the numbers provided by the payloads exceeds
a certain value.
Note
Wherever it makes sense, the release strategy method, correlation strategy method and the
aggregator method can be combined in a single bean (all of them or any two).
Since Spring Integration 2.0, the various strategies (correlation, release, and aggregation) may be
handled with SpEL, which is recommended if the logic behind such strategies is relatively simple.
Let’s say you have a legacy component that was designed to receive an array of objects. We know that
by default the aggregator will assemble all aggregated messages in a List. So now we have two
problems. First we need to extract individual messages from the list, and then we need to extract the
payload of each message and assemble the array of objects (see code below).
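The custom class described above (elided in this copy of the manual) can be sketched as follows; Message here is a minimal stand-in interface for org.springframework.messaging.Message, declared inline so the sketch stays self-contained, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of a custom aggregation POJO that extracts each payload from
// the aggregated messages and assembles the Object[] expected by the legacy
// component. In a real application, Message would be Spring's own type.
class MessagesToArrayAggregator {

    interface Message<T> {
        T getPayload();
    }

    public Object[] releaseObjects(List<Message<Object>> messages) {
        List<Object> payloads = new ArrayList<>();
        for (Message<Object> message : messages) {
            payloads.add(message.getPayload());
        }
        return payloads.toArray();
    }
}
```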
However, with SpEL such a requirement could actually be handled relatively easily with a one-line
expression, thus sparing you from writing a custom class and configuring it as a bean.
<int:aggregator input-channel="aggChannel"
output-channel="replyChannel"
expression="#this.![payload].toArray()"/>
In the above configuration we are using a Collection Projection expression to assemble a new collection
from the payloads of all messages in the list and then transforming it to an Array, thus achieving the
same result as the Java code above.
The same expression-based approach can be applied when dealing with custom Release and
Correlation strategies.
For example:
correlation-strategy-expression="payload.person.id"
In the above example it is assumed that the payload has an attribute person with an id which is going
to be used to correlate messages.
Likewise, for the ReleaseStrategy you can implement your release logic as a SpEL expression and
configure it via the release-strategy-expression attribute. The root object for evaluation context
is the MessageGroup itself. The List of messages can be referenced using the message property of
the group within the expression.
Note
In releases prior to version 5.0, the root object was the collection of Message<?>.
For example:
release-strategy-expression="!messages.?[payload==5].empty"
In this example the root object of the SpEL Evaluation Context is the MessageGroup itself, and you are
simply stating that as soon as there is a message with a payload of 5 in this group, it should be released.
Starting with version 4.0, two new mutually exclusive attributes have been introduced: group-timeout
and group-timeout-expression (see the description above). There are some cases where it is
necessary to emit the aggregator result (or discard the group) after a timeout if the ReleaseStrategy
doesn’t release when the current Message arrives. For this purpose the groupTimeout option allows
scheduling the MessageGroup to be forced complete:
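The elided configuration can be sketched as follows; the channel names and expressions are illustrative, chosen to match the description that follows:

```xml
<int:aggregator input-channel="input" output-channel="output"
        send-partial-result-on-expiry="true"
        release-strategy-expression="messages[0].headers.sequenceSize == size()"
        group-timeout-expression="size() ge 2 ? 10000 : -1"/>
```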
With this example, the normal release will be possible if the aggregator receives the last message in
sequence as defined by the release-strategy-expression. If that specific message does not
arrive, the groupTimeout will force the group complete after 10 seconds as long as the group contains
at least 2 Messages.
The results of forcing the group complete depend on the ReleaseStrategy and the send-partial-
result-on-expiry attribute. First, the release strategy is again consulted to see if a normal release is to be
made - while the group won’t have changed, the ReleaseStrategy can decide to release the group
at this time. If the release strategy still does not release the group, it will be expired. If send-partial-
result-on-expiry is true, existing messages in the (partial) MessageGroup will be released as a
normal aggregator reply Message to the output-channel, otherwise it will be discarded.
@Aggregator ❶
public Delivery aggregatingMethod(List<OrderItem> items) {
...
}
@ReleaseStrategy ❷
public boolean releaseChecker(List<Message<?>> messages) {
...
}
@CorrelationStrategy ❸
public String correlateBy(OrderItem item) {
...
}
}
❶ An annotation indicating that this method shall be used as an aggregator. Must be specified if this
class will be used as an aggregator.
❷ An annotation indicating that this method shall be used as the release strategy
of an aggregator. If not present on any method, the aggregator will use the
SimpleSequenceSizeReleaseStrategy.
❸ An annotation indicating that this method shall be used as the correlation strategy
of an aggregator. If no correlation strategy is indicated, the aggregator will use the
HeaderAttributeCorrelationStrategy based on CORRELATION_ID.
All of the configuration options provided by the XML element are also available for the @Aggregator
annotation.
The aggregator can be either referenced explicitly from XML or, if the @MessageEndpoint is defined
on the class, detected automatically through classpath scanning.
Annotation configuration (@Aggregator and others) for the Aggregator component covers only simple
use cases, where most default options are sufficient. If you need more control over those options using
Annotation configuration, consider using a @Bean definition for the AggregatingMessageHandler
and mark its @Bean method with @ServiceActivator:
@ServiceActivator(inputChannel = "aggregatorChannel")
@Bean
public MessageHandler aggregator(MessageGroupStore jdbcMessageGroupStore) {
AggregatingMessageHandler aggregator =
new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(),
jdbcMessageGroupStore);
aggregator.setOutputChannel(resultsChannel());
aggregator.setGroupTimeoutExpression(new ValueExpression<>(500L));
aggregator.setTaskScheduler(this.taskScheduler);
return aggregator;
}
See the section called “Programming model” and the section called “Annotations on @Beans” for more
information.
Note
Starting with version 4.2, the AggregatorFactoryBean is available to simplify Java
configuration for the AggregatingMessageHandler.
Aggregator (and some other patterns in Spring Integration) is a stateful pattern that requires decisions
to be made based on a group of messages that have arrived over a period of time, all with the same
correlation key. The design of the interfaces in the stateful patterns (e.g. ReleaseStrategy) is driven
by the principle that the components (whether defined by the framework or a user) should be able to
remain stateless. All state is carried by the MessageGroup and its management is delegated to the
MessageGroupStore.
int getMessageCountForAllMessageGroups();
int getMarkedMessageCountForAllMessageGroups();
int getMessageGroupCount();
void registerMessageGroupExpiryCallback(MessageGroupCallback callback);
int expireMessageGroups(long timeout);
The callback has direct access to the store and the message group so it can manage the persistent
state (e.g. by removing the group from the store entirely).
The expireMessageGroups method can be called with a timeout value. Any message older than the
current time minus this value will be expired, and have the callbacks applied. Thus it is the user of the
store that defines what is meant by message group "expiry".
As a convenience for users, Spring Integration provides a wrapper for the message expiry in the form
of a MessageGroupStoreReaper:
<task:scheduled-tasks scheduler="scheduler">
<task:scheduled ref="reaper" method="run" fixed-rate="10000"/>
</task:scheduled-tasks>
The reaper is a Runnable, and all that is happening in the example above is that the message group
store’s expire method is being called once every 10 seconds. The timeout itself is 30 seconds.
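The reaper bean referenced above (its definition was elided in this copy) can be sketched as follows; the messageStore bean name is a placeholder:

```xml
<bean id="reaper" class="org.springframework.integration.store.MessageGroupStoreReaper">
    <property name="messageGroupStore" ref="messageStore"/>
    <property name="timeout" value="30000"/>
</bean>
```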
Note
In addition to the reaper, the expiry callbacks are invoked when the application shuts down via a lifecycle
callback in the AbstractCorrelatingMessageHandler.
The AbstractCorrelatingMessageHandler registers its own expiry callback, and this is the link
with the boolean flag send-partial-result-on-expiry in the XML configuration of the aggregator.
If the flag is set to true, then when the expiry callback is invoked, any unmarked messages in groups
that are not yet released can be sent on to the output channel.
Important
Unexpected behavior may occur when one correlation endpoint releases or expires messages
from another, since messages with the same correlation key are stored in the same message group.
For more information about MessageStore interface and its implementations, please read
Section 10.4, “Message Store”.
6.5 Resequencer
Introduction
Related to the Aggregator, albeit different from a functional standpoint, is the Resequencer.
Functionality
The Resequencer works in a similar way to the Aggregator, in the sense that it uses the
CORRELATION_ID to store messages in groups, the difference being that the Resequencer does not
process the messages in any way. It simply releases them in the order of their SEQUENCE_NUMBER
header values.
With respect to that, the user might opt to release all messages at once (after the whole sequence,
according to the SEQUENCE_SIZE, has arrived), or as soon as a valid sequence is available.
Important
The resequencer is intended to resequence relatively short sequences of messages with small
gaps. If you have a large number of disjoint sequences with many gaps, you may experience
performance issues.
Configuring a Resequencer
See Section 9.11, “Aggregators and Resequencers” for configuring a Resequencer in Java DSL.
<int:channel id="inputChannel"/>
<int:channel id="outputChannel"/>
<int:resequencer id="completelyDefinedResequencer" ❶
input-channel="inputChannel" ❷
output-channel="outputChannel" ❸
discard-channel="discardChannel" ❹
release-partial-sequences="true" ❺
message-store="messageStore" ❻
send-partial-result-on-expiry="true" ❼
send-timeout="86420000" ❽
correlation-strategy="correlationStrategyBean" ❾
correlation-strategy-method="correlate" ❿
correlation-strategy-expression="headers['foo']" 11
release-strategy="releaseStrategyBean" 12
release-strategy-method="release" 13
release-strategy-expression="size() == 10" 14
empty-group-min-timeout="60000" 15
lock-registry="lockRegistry" 16
group-timeout="60000" 17
group-timeout-expression="size() ge 2 ? 100 : -1" 18
scheduler="taskScheduler" 19
expire-group-upon-timeout="false" /> 20
12 A reference to a bean that implements the release strategy. The bean can be an implementation
of the ReleaseStrategy interface or a POJO. In the latter case the release-strategy-
method attribute must be defined as well. Optional (by default, the resequencer will use the
IntegrationMessageHeaderAccessor.SEQUENCE_SIZE header attribute).
13 A method defined on the bean referenced by release-strategy, that implements the
completion decision algorithm. Optional, with restrictions (requires release-strategy to be
present).
14 A SpEL expression representing the release strategy; the root object for the expression is a
MessageGroup. Example: "size() == 5". Only one of release-strategy or release-
strategy-expression is allowed.
15 Only applies if a MessageGroupStoreReaper is configured for the <resequencer>
MessageStore. By default, when a MessageGroupStoreReaper is configured to expire partial
groups, empty groups are also removed. Empty groups exist after a group is released normally.
This is to enable the detection and discarding of late-arriving messages. If you wish to expire empty
groups on a longer schedule than expiring partial groups, set this property. Empty groups will then
not be removed from the MessageStore until they have not been modified for at least this number
of milliseconds. Note that the actual time to expire an empty group will also be affected by the
reaper’s timeout property and it could be as much as this value plus the timeout.
16 See the section called “Configuring an Aggregator with XML”.
17 See the section called “Configuring an Aggregator with XML”.
18 See the section called “Configuring an Aggregator with XML”.
19 See the section called “Configuring an Aggregator with XML”.
20 When a group is completed due to a timeout (or by a MessageGroupStoreReaper), the empty
group’s metadata is retained by default. Late arriving messages will be immediately discarded.
Set this to true to remove the group completely; then, late arriving messages will start a new
group and won’t be discarded until the group again times out. The new group will never be
released normally because of the "hole" in the sequence range that caused the timeout. Empty
groups can be expired (completely removed) later using a MessageGroupStoreReaper together
with the empty-group-min-timeout attribute. Starting with version 5.0 empty groups are also
scheduled for removal after empty-group-min-timeout. Default: false.
Note
Since there is no custom behavior to be implemented in Java classes for resequencers, there is
no annotation support for it.
6.6 Message Handler Chain
Introduction
Tip
The handler chain simplifies configuration while internally maintaining the same degree of loose
coupling between components, and it is trivial to modify the configuration if at some point a non-linear
arrangement is required.
Internally, the chain will be expanded into a linear setup of the listed endpoints, separated by anonymous
channels. The reply channel header will not be taken into account within the chain: only after the last
handler is invoked will the resulting message be forwarded on to the reply channel or the chain’s output
channel. Because of this setup, all handlers except the last are required to implement the MessageProducer
interface (which provides a setOutputChannel() method). The last handler only needs an output channel
if the outputChannel on the MessageHandlerChain is set.
Note
As with other endpoints, the output-channel is optional. If there is a reply Message at the
end of the chain, the output-channel takes precedence, but if not available, the chain handler will
check for a reply channel header on the inbound Message as a fallback.
In most cases there is no need to implement MessageHandlers yourself. The next section will focus on
namespace support for the chain element. Most Spring Integration endpoints, like Service Activators
and Transformers, are suitable for use within a MessageHandlerChain.
Configuring a Chain
The <chain> element provides an input-channel attribute, and if the last element in the chain is
capable of producing reply messages (optional), it also supports an output-channel attribute. The
sub-elements are then filters, transformers, splitters, and service-activators. The last element may also
be a router or an outbound-channel-adapter.
The <header-enricher> element used in the above example will set a message header named "foo" with
a value of "bar" on the message. A header enricher is a specialization of Transformer that touches
only header values. You could obtain the same result by implementing a MessageHandler that did the
header modifications and wiring that as a bean, but the header-enricher is obviously a simpler option.
The <chain> can be configured as the last black-box consumer of the message flow. For this solution, it
is enough to put some <outbound-channel-adapter> at the end of the <chain>:
<int:chain input-channel="input">
<int-xml:marshalling-transformer marshaller="marshaller" result-type="StringResult" />
<int:service-activator ref="someService" method="someMethod"/>
<int:header-enricher>
<int:header name="foo" value="bar"/>
</int:header-enricher>
<int:logging-channel-adapter level="INFO" log-full-message="true"/>
</int:chain>
It is important to note that certain attributes, such as order and input-channel, are not allowed to be
specified on components used within a chain. The same is true for the poller sub-element.
Important
For the Spring Integration core components, the XML Schema itself will enforce some of
these constraints. However, for non-core components or your own custom components, these
constraints are enforced by the XML namespace parser, not by the XML Schema.
These XML namespace parser constraints were added with Spring Integration 2.2. The XML
namespace parser will throw a BeanDefinitionParsingException if you try to use
disallowed attributes and elements.
'id' Attribute
Beginning with Spring Integration 3.0, if a chain element is given an id, the bean name for the element is
a combination of the chain’s id and the id of the element itself. Elements without an id are not registered
as beans, but they are given componentNames that include the chain id. For example:
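A configuration such as the following sketch (reconstructed from the names used below; the input channel and service bean are placeholders) illustrates the naming rules:

```xml
<int:chain id="fooChain" input-channel="input">
    <int:service-activator id="fooService" ref="someService"/>
</int:chain>
```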
• The <chain> root element has an id fooChain. So, the AbstractEndpoint implementation
(PollingConsumer or EventDrivenConsumer, depending on the input-channel type) bean takes
this value as its bean name.
• The MessageHandlerChain bean acquires a bean alias fooChain.handler, which allows direct
access to this bean from the BeanFactory.
• The componentName of this ServiceActivatingHandler takes the same value, but without the
.handler suffix - fooChain$child.fooService.
The id attribute for <chain> elements allows them to be eligible for JMX export and they are trackable
via Message History. They can also be accessed from the BeanFactory using the appropriate bean
name as discussed above.
Tip
Sometimes you need to make a nested call to another chain from within a chain and then come back and
continue execution within the original chain. To accomplish this you can utilize a Messaging Gateway
by including a <gateway> element. For example:
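The following sketch is consistent with the flow described below; the service bean and method names are placeholders:

```xml
<int:chain id="main-chain" input-channel="in" output-channel="out">
    <int:service-activator ref="someService" method="mainWork"/>
    <int:gateway request-channel="inputA"/>
</int:chain>

<int:chain id="nested-chain-a" input-channel="inputA">
    <int:header-enricher>
        <int:header name="myHeader" value="myValue"/>
    </int:header-enricher>
    <int:gateway request-channel="inputB"/>
    <int:service-activator ref="someService" method="finishChainA"/>
</int:chain>

<int:chain id="nested-chain-b" input-channel="inputB">
    <int:service-activator ref="someService" method="nestedWork"/>
</int:chain>
```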
In the above example the nested-chain-a will be called at the end of main-chain processing by the
gateway element configured there. While in nested-chain-a, a call to nested-chain-b will be made after
header enrichment, and then the flow will come back to finish execution in nested-chain-a. Finally the flow
returns to the main-chain. When the nested version of a <gateway> element is defined in the chain, it
does not require the service-interface attribute. Instead, it simply takes the message in its current
state and places it on the channel defined via the request-channel attribute. When the downstream
flow initiated by that gateway completes, a Message will be returned to the gateway and continue its
journey within the current chain.
6.7 Scatter-Gather
Introduction
Starting with version 4.1, Spring Integration provides an implementation of the Scatter-Gather Enterprise
Integration Pattern. It is a compound endpoint, where the goal is to send a message to the recipients
and aggregate the results. Quoting the EIP Book, it is a component for scenarios like "best quote", where
we need to request information from several suppliers and decide which one provides us with the best
terms for the requested item.
Previously, the pattern could be configured using discrete components; this enhancement brings more
convenient configuration.
Functionality
The Scatter-Gather pattern suggests two scenarios - Auction and Distribution. In both
cases, the aggregation function is the same and provides all options available for
the AggregatingMessageHandler. Actually the ScatterGatherHandler just requires an
AggregatingMessageHandler as a constructor argument. See Section 6.4, “Aggregator” for more
information.
Auction
The Auction Scatter-Gather variant uses publish-subscribe logic for the request message,
where the scatter channel is a PublishSubscribeChannel with apply-sequence="true".
However, this channel can be any MessageChannel implementation as is the case with the request-
channel in the ContentEnricher (see Section 7.2, “Content Enricher”). In this case, however, the end-
user should provide their own custom correlationStrategy for the aggregation function.
Distribution
The Distribution variant is based on the RecipientListRouter, with all the options available for the
RecipientListRouter; it is configured with the <scatterer> sub-element described below.
In both cases, the request (scatter) message is enriched with the gatherResultChannel
QueueChannel header, to wait for a reply message from the aggregator.
By default, all suppliers should send their result to the replyChannel header (usually by omitting the
output-channel from the ultimate endpoint). However, the gatherChannel option is also provided,
allowing suppliers to send their reply to that channel for the aggregation.
For Java and Annotation configuration, the bean definition for the Scatter-Gather is:
@Bean
public MessageHandler distributor() {
RecipientListRouter router = new RecipientListRouter();
router.setApplySequence(true);
router.setChannels(Arrays.asList(distributionChannel1(), distributionChannel2(),
distributionChannel3()));
return router;
}
@Bean
public MessageHandler gatherer() {
return new AggregatingMessageHandler(
new ExpressionEvaluatingMessageGroupProcessor("^[payload gt 5] ?: -1D"),
new SimpleMessageStore(),
new HeaderAttributeCorrelationStrategy(
IntegrationMessageHeaderAccessor.CORRELATION_ID),
new ExpressionEvaluatingReleaseStrategy("size() == 2"));
}
@Bean
@ServiceActivator(inputChannel = "distributionChannel")
public MessageHandler scatterGatherDistribution() {
ScatterGatherHandler handler = new ScatterGatherHandler(distributor(), gatherer());
handler.setOutputChannel(output());
return handler;
}
<scatter-gather
id="" ❶
auto-startup="" ❷
input-channel="" ❸
output-channel="" ❹
scatter-channel="" ❺
gather-channel="" ❻
order="" ❼
phase="" ❽
send-timeout="" ❾
gather-timeout="" ❿
requires-reply="" > 11
<scatterer/> 12
<gatherer/> 13
</scatter-gather>
❹ The channel to which the Scatter-Gather will send the aggregation results. Optional (because
incoming messages can specify a reply channel themselves via replyChannel Message
Header).
❺ The channel to send the scatter message for the Auction scenario. Optional. Mutually exclusive
with <scatterer> sub-element.
❻ The channel to receive replies from each supplier for the aggregation; it is used
as the replyChannel header in the scatter message. Optional. By default, a
FixedSubscriberChannel is created.
❼ Order of this component when more than one handler is subscribed to the same DirectChannel
(use for load balancing purposes). Optional.
❽ Specify the phase in which the endpoint should be started and stopped. The startup order proceeds
from lowest to highest, and the shutdown order is the reverse of that. By default this value is
Integer.MAX_VALUE meaning that this container starts as late as possible and stops as soon as
possible. Optional.
❾ The timeout interval to wait when sending a reply Message to the output-channel.
By default the send will block for one second. It applies only if the output channel
has some sending limitations, e.g. a QueueChannel with a fixed capacity that is full. In
this case, a MessageDeliveryException is thrown. The send-timeout is ignored in
case of AbstractSubscribableChannel implementations. In case of group-timeout(-
expression) the MessageDeliveryException from the scheduled expire task leads this task
to be rescheduled. Optional.
❿ Allows you to specify how long the Scatter-Gather will wait for the reply message before returning;
null is returned if the reply times out. Optional. Defaults to -1, meaning wait
indefinitely.
11 Specify whether the Scatter-Gather must return a non-null value. This value is true by default,
hence a ReplyRequiredException will be thrown when the underlying aggregator returns a
null value after gather-timeout. Note, if null is a possibility, the gather-timeout should be
specified to avoid an indefinite wait.
12 The <recipient-list-router> options. Optional. Mutually exclusive with scatter-
channel attribute.
13 The <aggregator> options. Required.
6.8 Thread Barrier
Sometimes we need to suspend a message flow thread until some other asynchronous event occurs.
Spring Integration version 4.2 introduced the <barrier/> component for this purpose. The
underlying MessageHandler is the BarrierMessageHandler; this class also implements
MessageTriggerAction where a message passed to the trigger() method releases a
corresponding thread in the handleRequestMessage() method (if present).
The suspended thread and trigger thread are correlated by invoking a CorrelationStrategy
on the messages. When a message is sent to the input-channel, the thread is suspended for
up to timeout milliseconds, waiting for a corresponding trigger message. The default correlation
strategy uses the IntegrationMessageHeaderAccessor.CORRELATION_ID header. When a
trigger message arrives with the same correlation, the thread is released. The message sent to
the output-channel after release is constructed using a MessageGroupProcessor. By default,
the message is a Collection<?> of the two payloads and the headers are merged, using a
DefaultAggregatingMessageGroupProcessor.
Caution
If the trigger() method is invoked first (or after the main thread times out), it will be suspended
for up to timeout milliseconds, waiting for the suspending message to arrive. If you do not want to
suspend the trigger thread, consider handing off to a TaskExecutor so that its thread will be
suspended instead.
The requires-reply property determines the action if the suspended thread times out before the
trigger message arrives. By default, it is false which means the endpoint simply returns null, the flow
ends and the thread returns to the caller. When true, a ReplyRequiredException is thrown.
You can call the trigger() method programmatically (obtain the bean reference using the name
barrier.handler - where barrier is the bean name of the barrier endpoint) or you can configure an
<outbound-channel-adapter/> to trigger the release.
Important
Only one thread can be suspended with the same correlation; the same correlation can be used
multiple times but only once concurrently. An exception is thrown if a second thread arrives with
the same correlation.
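The configuration described in the following paragraph might look like this sketch (the channel names, the myHeader correlation header, and the 10 second timeout are illustrative assumptions):

```xml
<int:barrier id="barrier1" input-channel="in" output-channel="out"
        correlation-strategy-expression="headers['myHeader']"
        discard-channel="lateTriggerChannel"
        timeout="10000"/>

<int:outbound-channel-adapter channel="release" ref="barrier1.handler" method="trigger"/>
```

The outbound channel adapter references the underlying handler as barrier1.handler and invokes its trigger() method to release the corresponding suspended thread.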
In this example, a custom header is used for correlation. Either the thread sending a message to in
or the one sending a message to release will wait for up to 10 seconds until the other arrives. When
the message is released, the out channel will be sent a message combining the result of invoking the
custom MessageGroupProcessor bean myOutputProcessor. If the main thread times out and a
trigger arrives later, you can configure a discard channel to which the late trigger will be sent. Java
configuration is shown below.
@Configuration
@EnableIntegration
public class Config {

    @ServiceActivator(inputChannel="in")
    @Bean
    public BarrierMessageHandler barrier() {
        BarrierMessageHandler barrier = new BarrierMessageHandler(10000);
        barrier.setOutputChannel(out());
        barrier.setDiscardChannel(lateTriggers());
        return barrier;
    }

    @ServiceActivator(inputChannel="release")
    @Bean
    public MessageHandler releaser() {
        return new MessageHandler() {

            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                barrier().trigger(message);
            }

        };
    }

}
7. Message Transformation
7.1 Transformer
Introduction
Message Transformers play a very important role in enabling the loose-coupling of Message Producers
and Message Consumers. Rather than requiring every Message-producing component to know what
type is expected by the next consumer, Transformers can be added between those components.
Generic transformers, such as one that converts a String to an XML Document, are also highly reusable.
For some systems, it may be best to provide a Canonical Data Model, but Spring Integration’s general
philosophy is not to require any particular format. Rather, for maximum flexibility, Spring Integration
aims to provide the simplest possible model for extension. As with the other endpoint types, the use of
declarative configuration in XML and/or Annotations enables simple POJOs to be adapted for the role
of Message Transformers. These configuration options will be described below.
Note
For the same reason of maximizing flexibility, Spring does not require XML-based Message
payloads. Nevertheless, the framework does provide some convenient Transformers for dealing
with XML-based payloads if that is indeed the right choice for your application. For more
information on those transformers, see Chapter 37, XML Support - Dealing with XML Payloads.
Configuring Transformer
Using a ref attribute is generally recommended if the custom transformer handler implementation
can be reused in other <transformer> definitions. However, if the custom transformer handler
implementation should be scoped to a single definition of the <transformer>, you can define an inner
bean definition:
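A sketch of such an inner bean definition (the transformer class name is hypothetical):

```xml
<int:transformer id="testTransformer" input-channel="inChannel" output-channel="outChannel">
    <beans:bean class="org.foo.TestTransformer"/>
</int:transformer>
```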
Note
Using both the "ref" attribute and an inner handler definition in the same <transformer>
configuration is not allowed, as it creates an ambiguous condition and will result in an Exception
being thrown.
Important
When using a POJO, the method that is used for transformation may expect either the Message type
or the payload type of inbound Messages. It may also accept Message header values either individually
or as a full map by using the @Header and @Headers parameter annotations respectively. The return
value of the method can be any type. If the return value is itself a Message, that will be passed along
to the transformer’s output channel.
As of Spring Integration 2.0, a Message Transformer’s transformation method can no longer return
null. Returning null will result in an exception since a Message Transformer should always be
expected to transform each source Message into a valid target Message. In other words, a Message
Transformer should not be used as a Message Filter since there is a dedicated <filter> option for
that. However, if you do need this type of behavior (where a component might return NULL and that
should not be considered an error), a service-activator could be used. Its requires-reply value is
FALSE by default, but that can be set to TRUE in order to have Exceptions thrown for NULL return
values as with the transformer.
Just like Routers, Aggregators and other components, as of Spring Integration 2.0 Transformers can
also benefit from SpEL support (http://docs.spring.io/spring/docs/current/spring-framework-reference/
html/expressions.html) whenever transformation logic is relatively simple.
<int:transformer input-channel="inChannel"
output-channel="outChannel"
expression="payload.toUpperCase() + '- [' + T(java.lang.System).currentTimeMillis() + ']'"/>
In the above configuration we are achieving a simple transformation of the payload with a simple SpEL
expression and without writing a custom transformer. Our payload (assuming String) will be upper-cased
and concatenated with the current timestamp with some simple formatting.
Common Transformers
There are also a few Transformer implementations available out of the box.
Object-to-String Transformer
Because it is fairly common to use the toString() representation of an Object, Spring Integration
provides an ObjectToStringTransformer whose output is a Message with a String payload. That
String is the result of invoking the toString() operation on the inbound Message’s payload.
A potential example for this would be sending some arbitrary object to the outbound-channel-adapter in
the file namespace. Whereas that Channel Adapter only supports String, byte-array, or java.io.File
payloads by default, adding this transformer immediately before the adapter will handle the necessary
conversion. Of course, that works fine as long as the result of the toString() call is what you want
to be written to the File. Otherwise, you can just provide a custom POJO-based Transformer via the
generic transformer element shown previously.
Tip
When debugging, this transformer is not typically necessary since the logging-channel-adapter is
capable of logging the Message payload. Refer to the section called “Wire Tap” for more detail.
Note
For more sophistication (such as selection of the charset dynamically, at runtime), you can use a
SpEL expression-based transformer instead; for example:
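A sketch of such an expression-based transformer, assuming a byte[] payload and a header carrying the charset name (the header name is illustrative):

```xml
<int:transformer input-channel="in" output-channel="out"
        expression="new String(payload, headers['myCharset'])"/>
```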
If you need to serialize an Object to a byte array or deserialize a byte array back into an Object, Spring
Integration provides symmetrical serialization transformers. These will use standard Java serialization
by default, but you can provide an implementation of Spring 3.0’s Serializer or Deserializer strategies
via the serializer and deserializer attributes, respectively.
Important
When deserializing data from untrusted sources, you should consider adding a white-list of
package/class patterns. By default, all classes will be deserialized.
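A minimal sketch of the symmetrical pair (the channel names and white-list package patterns are illustrative assumptions):

```xml
<int:payload-serializing-transformer input-channel="objectsIn" output-channel="bytesOut"/>

<int:payload-deserializing-transformer input-channel="bytesIn" output-channel="objectsOut"
        white-list="com.mycom.*,com.yourcom.*"/>
```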
Spring Integration also provides Object-to-Map and Map-to-Object transformers, which use JSON to
serialize and de-serialize the object graphs. The object hierarchy is introspected down to the most
primitive types (String, int, etc.). The path to each such type is described via SpEL, which becomes
the key in the transformed Map. The primitive type becomes the value.
For example:
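A sketch of such a transformer (the channel names and the Person payload are hypothetical):

```xml
<int:object-to-map-transformer input-channel="directInput" output-channel="output"/>
```

Given a hypothetical Person payload whose name is "John" and whose nested Address has the city "Philadelphia", the transformed Map would contain flattened entries such as name=John and address.city=Philadelphia.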
The JSON-based Map allows you to describe the object structure without sharing the actual types
allowing you to restore/rebuild the object graph into a differently typed Object graph as long as you
maintain the structure.
The above structure can be restored to a differently typed Object graph via the Map-to-Object
transformer, described below.
If you need to create a "structured" map, you can provide the flatten attribute. The default value for this
attribute is true meaning the default behavior; if you provide a false value, then the structure will be a
map of maps.
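A sketch with hypothetical channel names, producing a map of maps instead of a flattened map:

```xml
<int:object-to-map-transformer input-channel="directInput" output-channel="output"
        flatten="false"/>
```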
Map-to-Object
<int:map-to-object-transformer input-channel="input"
output-channel="output"
type="org.foo.Person"/>
or
<int:map-to-object-transformer input-channel="inputA"
output-channel="outputA"
ref="person"/>
<bean id="person" class="org.foo.Person" scope="prototype"/>
Note
The ref and type attributes are mutually exclusive; you can use only one. Also, if using the ref
attribute, you must point to a prototype scoped bean, otherwise a BeanCreationException
will be thrown.
Starting with version 5.0, the ObjectToMapTransformer can be supplied with the customized
JsonObjectMapper, for example in use-cases when we need special formats for dates or nulls
for empty collections. See the section called “JSON Transformers” for more information about
JsonObjectMapper implementations.
Stream Transformer
@Bean
@Transformer(inputChannel = "stream", outputChannel = "data")
public StreamTransformer streamToBytes() {
    return new StreamTransformer(); // transforms to byte[]
}

@Bean
@Transformer(inputChannel = "stream", outputChannel = "data")
public StreamTransformer streamToString() {
    return new StreamTransformer("UTF-8"); // transforms to String
}
JSON Transformers
<int:object-to-json-transformer input-channel="objectMapperInput"/>
<int:json-to-object-transformer input-channel="objectMapperInput"
type="foo.MyDomainObject"/>
These transformers use a vanilla JsonObjectMapper by default, based on the implementation found
on the classpath. You can provide your own custom JsonObjectMapper implementation with
appropriate options, or one based on a different required library (e.g. GSON).
<int:json-to-object-transformer input-channel="objectMapperInput"
type="foo.MyDomainObject" object-mapper="customObjectMapper"/>
Note
Beginning with version 3.0, the object-mapper attribute references an instance of a new
strategy interface JsonObjectMapper. This abstraction allows multiple implementations of JSON
mappers to be used. Implementations that wrap Boon (https://github.com/RichardHightower/boon)
and Jackson 2 are provided, with the version being detected on the classpath. These classes are
BoonJsonObjectMapper and Jackson2JsonObjectMapper, respectively.
Important
If there are requirements to use both Jackson libraries and/or Boon in the same application, keep
in mind that before version 3.0, the JSON transformers used only Jackson 1.x. From 4.1 on, the
framework will select Jackson 2 by default ahead of the Boon implementation if both are on the
classpath. Jackson 1.x is no longer supported by the framework internally but, of course, you
can still use it within your code. To avoid unexpected issues with JSON mapping features, when
using annotations, there may be a need to apply annotations from both Jacksons and/or Boon
on domain classes:
@org.codehaus.jackson.annotate.JsonIgnoreProperties(ignoreUnknown=true)
@com.fasterxml.jackson.annotation.JsonIgnoreProperties(ignoreUnknown=true)
@org.boon.json.annotations.JsonIgnoreProperties("foo")
public class Foo {

    @org.codehaus.jackson.annotate.JsonProperty("fooBar")
    @com.fasterxml.jackson.annotation.JsonProperty("fooBar")
    @org.boon.json.annotations.JsonProperty("fooBar")
    public Object bar;

}
You may wish to consider using a FactoryBean or simple factory method to create the
JsonObjectMapper with the required characteristics.
Important
Beginning with version 2.2, the object-to-json-transformer sets the content-type header
to application/json, by default, if the input message does not already have that header
present.
If you wish to set the content type header to some other value, or explicitly overwrite any existing
header with some value (including application/json), use the content-type attribute. If
you wish to suppress the setting of the header, set the content-type attribute to an empty
string (""). This will result in a message with no content-type header, unless such a header
was present on the input message.
Beginning with version 3.0, the ObjectToJsonTransformer adds headers, reflecting the source
type, to the message. Similarly, the JsonToObjectTransformer can use those type headers when
converting the JSON to an object. These headers are mapped in the AMQP adapters so that they are
entirely compatible with the Spring-AMQP JsonMessageConverter.
This enables the following flows to work without any special configuration…
...->amqp-outbound-adapter---->
---->amqp-inbound-adapter->json-to-object-transformer->...
Where the outbound adapter is configured with a JsonMessageConverter and the inbound adapter
uses the default SimpleMessageConverter.
...->object-to-json-transformer->amqp-outbound-adapter---->
---->amqp-inbound-adapter->...
Where the outbound adapter is configured with a SimpleMessageConverter and the inbound adapter
uses the default JsonMessageConverter.
...->object-to-json-transformer->amqp-outbound-adapter---->
---->amqp-inbound-adapter->json-to-object-transformer->
Note
When using the headers to determine the type, you should not provide a class attribute, because
it takes precedence over the headers.
In addition to JSON Transformers, Spring Integration provides a built-in #jsonPath SpEL function for
use in expressions. For more information see Appendix A, Spring Expression Language (SpEL).
Since version 3.0, Spring Integration also provides a built-in #xpath SpEL function for use in expressions.
For more information see Section 37.9, “#xpath SpEL Function”.
Beginning with version 4.0, the ObjectToJsonTransformer supports the resultType property,
to specify the node JSON representation. The result node tree representation depends on the
implementation of the provided JsonObjectMapper. By default, the ObjectToJsonTransformer
uses a Jackson2JsonObjectMapper and delegates the conversion of the object to the node tree
to the ObjectMapper#valueToTree method. The node JSON representation provides efficiency for
using the JsonPropertyAccessor, when the downstream message flow uses SpEL expressions with
access to the properties of the JSON data. See Section A.4, “PropertyAccessors”. When using Boon,
the NODE representation is a Map<String, Object>.
The @Transformer annotation can also be added to methods that expect either the Message type or
the message payload type. The return value will be handled in the exact same way as described above
in the section describing the <transformer> element.
@Transformer
Order generateOrder(String productId) {
    return new Order(productId);
}
Transformer methods may also accept the @Header and @Headers annotations, which are
documented in Section E.6, “Annotation Support”:
@Transformer
Order generateOrder(String productId, @Header("customerName") String customer) {
    return new Order(productId, customer);
}
Header Filter
Sometimes your transformation use case might be as simple as removing a few headers. For such a
use case, Spring Integration provides a Header Filter which allows you to specify certain header names
that should be removed from the output Message (e.g. for security reasons or a value that was only
needed temporarily). Basically, the Header Filter is the opposite of the Header Enricher. The latter is
discussed in the section called “Header Enricher”.
<int:header-filter input-channel="inputChannel"
output-channel="outputChannel" header-names="lastName, state"/>
As you can see, configuration of a Header Filter is quite simple. It is a typical endpoint with input/output
channels and a header-names attribute. That attribute accepts the names of the header(s) (delimited
by commas if there are multiple) that need to be removed. So, in the above example the headers named
lastName and state will not be present on the outbound Message.
Codec-Based Transformers
See Section 7.4, “Codec”.
• Header Enricher
• Payload Enricher
Please go to the adapter specific sections of this reference manual to learn more about those adapters.
For more information regarding expressions support, please see Appendix A, Spring Expression
Language (SpEL).
Header Enricher
If you only need to add headers to a Message, and they are not dynamically determined by the Message
content, then referencing a custom implementation of a Transformer may be overkill. For that reason,
Spring Integration provides support for the Header Enricher pattern. It is exposed via the <header-
enricher> element.
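A basic sketch (the header names, values, and referenced bean are illustrative):

```xml
<int:header-enricher input-channel="in" output-channel="out">
    <int:header name="foo" value="123"/>
    <int:header name="bar" ref="someBean"/>
</int:header-enricher>
```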
The Header Enricher also provides helpful sub-elements to set well-known header names.
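For example (the channel names and values shown are illustrative):

```xml
<int:header-enricher input-channel="in" output-channel="out">
    <int:error-channel ref="applicationErrorChannel"/>
    <int:priority value="HIGHEST"/>
</int:header-enricher>
```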
In the above configuration you can clearly see that for well-known headers such as errorChannel,
correlationId, priority, replyChannel, routing-slip etc., instead of using generic
<header> sub-elements where you would have to provide both header name and value, you can use
convenient sub-elements to set those values directly.
Starting with version 4.1 the Header Enricher provides a routing-slip sub-element. See the section
called “Routing Slip” for more information.
POJO Support
Often a header value cannot be defined statically and has to be determined dynamically based on some
content in the Message. That is why the Header Enricher allows you to also specify a bean reference
using the ref and method attributes. The specified method will calculate the header value. Let’s look
at the following configuration:
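A sketch of such a configuration (the bean class, method, and header name are hypothetical):

```xml
<int:header-enricher input-channel="in" output-channel="out">
    <int:header name="lastName" ref="myHeaderGenerator" method="generateLastName"/>
</int:header-enricher>

<bean id="myHeaderGenerator" class="sample.MyHeaderGenerator"/>
```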
SpEL Support
In Spring Integration 2.0 we have introduced the convenience of the Spring Expression Language
(SpEL) to help configure many different components. The Header Enricher is one of them. Looking
again at the POJO example above, you can see that the computation logic to determine the header
value is actually pretty simple. A natural question would be: "is there a simpler way to accomplish this?".
That is where SpEL shows its true power.
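With SpEL, a header value can be computed with a one-line expression attribute instead of a dedicated bean (the header name and property path are hypothetical):

```xml
<int:header-enricher input-channel="in" output-channel="out">
    <int:header name="lastName" expression="payload.customer.lastName"/>
</int:header-enricher>
```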
As you can see, by using SpEL for such simple cases, we no longer have to provide a separate class
and configure it in the application context. All we need is the expression attribute configured with a valid
SpEL expression. The payload and headers variables are bound to the SpEL Evaluation Context, giving
you full access to the incoming Message.
The following are some examples of Java Configuration for header enrichers:
@Bean
@Transformer(inputChannel = "enrichHeadersChannel", outputChannel = "emailChannel")
public HeaderEnricher enrichHeaders() {
    Map<String, ? extends HeaderValueMessageProcessor<?>> headersToAdd =
            Collections.singletonMap("emailUrl",
                    new StaticHeaderValueMessageProcessor<>(this.imapUrl));
    HeaderEnricher enricher = new HeaderEnricher(headersToAdd);
    return enricher;
}
@Bean
@Transformer(inputChannel="enrichHeadersChannel", outputChannel="emailChannel")
public HeaderEnricher enrichHeaders() {
    Map<String, HeaderValueMessageProcessor<?>> headersToAdd = new HashMap<>();
    headersToAdd.put("emailUrl", new StaticHeaderValueMessageProcessor<String>(this.imapUrl));
    Expression expression = new SpelExpressionParser().parseExpression("payload.from[0].toString()");
    headersToAdd.put("from",
            new ExpressionEvaluatingHeaderValueMessageProcessor<>(expression, String.class));
    HeaderEnricher enricher = new HeaderEnricher(headersToAdd);
    return enricher;
}
The first adds a single literal header. The second adds two headers - a literal header and one based
on a SpEL expression.
@Bean
public IntegrationFlow enrichHeadersInFlow() {
    return f -> f
            ...
            .enrichHeaders(h -> h.header("emailUrl", this.emailUrl)
                    .headerExpression("from", "payload.from[0].toString()"))
            .handle(...);
}
when it is time to send a reply, or handle an error. This is useful for cases where the headers might be
lost; for example when serializing a message into a message store or when transporting the message
over JMS. If the header does not already exist, or it is not a MessageChannel, no changes are made.
Since version 4.1, you can set a property removeOnGet to true on the <bean/> definition, and
the mapping entry will be removed immediately on first use. This might be useful in a high-volume
environment and when the channel is only used once, rather than waiting for the reaper to remove it.
The HeaderChannelRegistry has a size() method to determine the current size of the registry.
The runReaper() method cancels the current scheduled task and runs the reaper immediately; the
task is then scheduled to run again based on the current delay. These methods can be invoked directly
by getting a reference to the registry, or you can send a message with, for example, the following content
to a control bus:
"@integrationHeaderChannelRegistry.runReaper()"
<int:reply-channel
expression="@integrationHeaderChannelRegistry.channelToChannelName(headers.replyChannel)"
overwrite="true" />
<int:error-channel
expression="@integrationHeaderChannelRegistry.channelToChannelName(headers.errorChannel)"
overwrite="true" />
Starting with version 4.1, you can override the registry’s configured reaper delay, so that the channel
mapping is retained for at least the specified time, regardless of the reaper delay:
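The two cases discussed in the next paragraph might be configured along these lines (the channel and header names are assumptions; time-to-live-expression is the relevant sub-element attribute):

```xml
<int:header-enricher input-channel="inputTtl" output-channel="next">
    <int:reply-channel overwrite="true" time-to-live-expression="120000"/>
</int:header-enricher>

<int:header-enricher input-channel="inputCustomTtl" output-channel="next">
    <int:reply-channel overwrite="true"
        time-to-live-expression="headers['channelTTL'] ?: 120000"/>
</int:header-enricher>
```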
In the first case, the time to live for every header channel mapping will be 2 minutes; in the second
case, the time to live is specified in the message header and uses an elvis operator to use 2 minutes
if there is no header.
Payload Enricher
In certain situations the Header Enricher, as discussed above, may not be sufficient and payloads
themselves may have to be enriched with additional information. For example, order messages that
enter the Spring Integration messaging system have to look up the order’s customer based on the
provided customer number and then enrich the original payload with that information.
Since Spring Integration 2.1, the Payload Enricher is provided. A Payload Enricher defines an endpoint
that passes a Message to the exposed request channel and then expects a reply message. The reply
message then becomes the root object for evaluation of expressions to enrich the target payload.
The Payload Enricher provides full XML namespace support via the enricher element. In order to
send request messages, the payload enricher has a request-channel attribute that allows you to
dispatch messages to a request channel.
Basically, by defining the request channel, the Payload Enricher acts as a Gateway, waiting for the
message that was sent to the request channel to return; the Enricher then augments the message’s
payload with the data provided by the reply message.
When sending messages to the request channel you also have the option to only send a subset of the
original payload using the request-payload-expression attribute.
The enriching of payloads is configured through SpEL expressions, providing users with a maximum
degree of flexibility. Therefore, users are not only able to enrich payloads with direct values from the
reply channel’s Message, but they can also use SpEL expressions to extract only a subset from that
Message, or to apply additional inline transformations, allowing them to further manipulate the data.
If you only need to enrich payloads with static values, you don’t have to provide the request-channel
attribute.
Note
Enrichers are a variant of Transformers and in many cases you could use a Payload Enricher
or a generic Transformer implementation to add additional data to your message payloads.
Thus, familiarize yourself with all transformation-capable components that are provided by Spring
Integration and carefully select the implementation that semantically fits your business case best.
Configuration
Below, please find an overview of all available configuration options for the payload
enricher:
<int:enricher request-channel="" ❶
auto-startup="true" ❷
id="" ❸
order="" ❹
output-channel="" ❺
request-payload-expression="" ❻
reply-channel="" ❼
error-channel="" ❽
send-timeout="" ❾
should-clone-payload="false"> ❿
<int:poller></int:poller> 11
<int:property name="" expression="" null-result-expression="'Could not determine the name'"/> 12
<int:property name="" value="23" type="java.lang.Integer" null-result-expression="'0'"/>
<int:header name="" expression="" null-result-expression=""/> 13
<int:header name="" value="" overwrite="" type="" null-result-expression=""/>
</int:enricher>
❶ Channel to which a Message will be sent to get the data to use for enrichment. Optional.
❷ Lifecycle attribute signaling if this component should be started during Application Context startup.
Defaults to true. Optional.
❸ Id of the underlying bean definition, which is either an EventDrivenConsumer or a
PollingConsumer. Optional.
❹ Specifies the order for invocation when this endpoint is connected as a subscriber to a channel.
This is particularly relevant when that channel is using a "failover" dispatching strategy. It has no
effect when this endpoint itself is a Polling Consumer for a channel with a queue. Optional.
❺ Identifies the Message channel where a Message will be sent after it is processed by this
endpoint. Optional.
❻ By default the original message’s payload will be used as the payload that is sent to the
request-channel. By specifying a SpEL expression as the value of the request-payload-expression
attribute, a subset of the original payload, a header value, or any other resolvable
SpEL expression can be used as the basis for the payload that will be sent to the request-channel.
For the Expression evaluation the full message is available as the root object. For instance the
following SpEL expressions (among others) are possible: payload.foo, headers.foobar, new
java.util.Date(), 'foo' + 'bar'.
❼ Channel where a reply Message is expected. This is optional; typically the auto-generated
temporary reply channel is sufficient. Optional.
❽ Channel to which an ErrorMessage will be sent if an Exception occurs downstream of the
request-channel. This enables you to return an alternative object to use for enrichment. This
is optional; if it is not set then Exception is thrown to the caller. Optional.
❾ Maximum amount of time in milliseconds to wait when sending a message to the channel,
if such channel may block. For example, a Queue Channel can block until space is
available, if its maximum capacity has been reached. Internally the send timeout is set on
the MessagingTemplate and ultimately applied when invoking the send operation on the
MessageChannel. By default the send timeout is set to -1, which may cause the send operation
on the MessageChannel, depending on the implementation, to block indefinitely. Optional.
❿ Boolean value indicating whether any payload that implements Cloneable should be cloned prior
to sending the Message to the request channel for acquiring the enriching data. The cloned version
would be used as the target payload for the ultimate reply. Default is false. Optional.
11 Allows you to configure a Message Poller if this endpoint is a Polling Consumer. Optional.
12 Each property sub-element provides the name of a property (via the mandatory name attribute).
That property should be settable on the target payload instance. Exactly one of the value or
expression attributes must be provided as well. The former for a literal value to set, and the
latter for a SpEL expression to be evaluated. The root object of the evaluation context is the
Message that was returned from the flow initiated by this enricher, the input Message if there is
no request channel, or the application context (using the @<beanName>.<beanProperty> SpEL
syntax). Starting with 4.0, when specifying a value attribute, you can also specify an optional
type attribute. When the destination is a typed setter method, the framework will coerce the value
appropriately (as long as a PropertyEditor exists to handle the conversion). If, however, the
target payload is a Map the entry will be populated with the value without conversion. The type
attribute allows you to, say, convert a String containing a number to an Integer value in the
target payload. Starting with 4.1, you can also specify an optional null-result-expression
attribute. When the enricher returns null, it will be evaluated and the output of the evaluation
will be returned instead.
13 Each header sub-element provides the name of a Message header (via the mandatory name
attribute). Exactly one of the value or expression attributes must be provided as well. The
former for a literal value to set, and the latter for a SpEL expression to be evaluated. The root
object of the evaluation context is the Message that was returned from the flow initiated by this
enricher, the input Message if there is no request channel, or the application context (using the
@<beanName>.<beanProperty> SpEL syntax). Note, similar to the <header-enricher>, the
<enricher>'s header element has type and overwrite attributes. However, a difference is
that, with the <enricher>, the overwrite attribute is true by default, to be consistent with
<enricher>'s <property> sub-element. Starting with 4.1, you can also specify an optional
null-result-expression attribute. When the enricher returns null, it will be evaluated and
the output of the evaluation will be returned instead.
Examples
Below, please find several examples of using a Payload Enricher in various situations.
In the following example, a User object is passed as the payload of the Message. The User has several
properties but only the username is set initially. The Enricher’s request-channel attribute below is
configured to pass the User on to the findUserServiceChannel.
Through the implicitly set reply-channel a User object is returned and using the property sub-
element, properties from the reply are extracted and used to enrich the original payload.
<int:enricher id="findUserEnricher"
input-channel="findUserEnricherChannel"
request-channel="findUserServiceChannel">
<int:property name="email" expression="payload.email"/>
<int:property name="password" expression="payload.password"/>
</int:enricher>
Note
The code samples shown here are part of the Spring Integration Samples project. Please feel
free to check them out; see Appendix G, Spring Integration Samples.
<int:enricher id="findUserByUsernameEnricher"
input-channel="findUserByUsernameEnricherChannel"
request-channel="findUserByUsernameServiceChannel"
request-payload-expression="payload.username">
<int:property name="email" expression="payload.email"/>
<int:property name="password" expression="payload.password"/>
</int:enricher>
In the following example, instead of a User object, a Map is passed in. The Map contains the username
under the map key username. Only the username is passed on to the request channel. The reply
contains a full User object, which is ultimately added to the Map under the user key.
<int:enricher id="findUserWithMapEnricher"
input-channel="findUserWithMapEnricherChannel"
request-channel="findUserByUsernameServiceChannel"
request-payload-expression="payload.username">
<int:property name="user" expression="payload"/>
</int:enricher>
How can I enrich payloads with static information without using a request channel?
Here is an example that does not use a request channel at all, but solely enriches the message’s payload
with static values. But please be aware that the word static is used loosely here. You can still use SpEL
expressions for setting those values.
<int:enricher id="userEnricher"
input-channel="input">
<int:property name="user.updateDate" expression="new java.util.Date()"/>
<int:property name="user.firstName" value="foo"/>
<int:property name="user.lastName" value="bar"/>
<int:property name="user.age" value="42"/>
</int:enricher>
The Claim Check pattern describes a mechanism that allows you to store data in a well-known place
while only maintaining a pointer (Claim Check) to where that data is located. You can pass that pointer
around as a payload of a new Message thereby allowing any component within the message flow to get
the actual data as soon as it needs it. This approach is very similar to the Certified Mail process where
you’ll get a Claim Check in your mailbox and would have to go to the Post Office to claim your actual
package. Of course it’s also the same idea as baggage-claim on a flight or in a hotel.
<int:claim-check-in id="checkin"
input-channel="checkinChannel"
message-store="testMessageStore"
output-channel="output"/>
In the above configuration, the Message that is received on the input-channel is persisted to
the Message Store identified by the message-store attribute and indexed with a generated ID. That
ID is the Claim Check for that Message. The Claim Check also becomes the payload of the new
(transformed) Message that is sent to the output-channel.
Now, let's assume that at some point you do need access to the actual Message. You can of course
access the Message Store manually and get the contents of the Message, or you can use the same
approach as before, except now you transform the Claim Check back into the actual Message by
using an Outgoing Claim Check Transformer.
<int:claim-check-in auto-startup="true" ❶
id="" ❷
input-channel="" ❸
message-store="messageStore" ❹
order="" ❺
output-channel="" ❻
send-timeout=""> ❼
<int:poller></int:poller> ❽
</int:claim-check-in>
❶ Lifecycle attribute signaling whether this component should be started during Application Context
startup. Defaults to true. Attribute is not available inside a Chain element. Optional.
❷ Id identifying the underlying bean definition (MessageTransformingHandler). Attribute is not
available inside a Chain element. Optional.
❸ The receiving Message channel of this endpoint. Attribute is not available inside a Chain element.
Optional.
❹ Reference to the MessageStore to be used by this Claim Check transformer. If not specified, the
default reference will be to a bean named messageStore. Optional.
❺ Specifies the order for invocation when this endpoint is connected as a subscriber to a channel.
This is particularly relevant when that channel is using a failover dispatching strategy. It has no
effect when this endpoint itself is a Polling Consumer for a channel with a queue. Attribute is not
available inside a Chain element. Optional.
❻ Identifies the Message channel where the Message will be sent after being processed by this
endpoint. Attribute is not available inside a Chain element. Optional.
❼ Specifies the maximum amount of time in milliseconds to wait when sending a reply Message to
the output channel. Defaults to -1 (block indefinitely). Attribute is not available inside a Chain
element. Optional.
❽ Defines a poller. Element is not available inside a Chain element. Optional.
<int:claim-check-out id="checkout"
input-channel="checkoutChannel"
message-store="testMessageStore"
output-channel="output"/>
In the above configuration, the Message that is received on the input-channel should have a Claim
Check as its payload and the Outgoing Claim Check Transformer will transform it into a Message with
the original payload by simply querying the Message store for a Message identified by the provided
Claim Check. It then sends the newly checked-out Message to the output-channel.
<int:claim-check-out auto-startup="true" ❶
id="" ❷
input-channel="" ❸
message-store="messageStore" ❹
order="" ❺
output-channel="" ❻
remove-message="false" ❼
send-timeout=""> ❽
<int:poller></int:poller> ❾
</int:claim-check-out>
❶ Lifecycle attribute signaling whether this component should be started during Application Context
startup. Defaults to true. Attribute is not available inside a Chain element. Optional.
❷ Id identifying the underlying bean definition (MessageTransformingHandler). Attribute is not
available inside a Chain element. Optional.
❸ The receiving Message channel of this endpoint. Attribute is not available inside a Chain element.
Optional.
❹ Reference to the MessageStore to be used by this Claim Check transformer. If not specified, the
default reference will be to a bean named messageStore. Optional.
❺ Specifies the order for invocation when this endpoint is connected as a subscriber to a channel.
This is particularly relevant when that channel is using a failover dispatching strategy. It has no
effect when this endpoint itself is a Polling Consumer for a channel with a queue. Attribute is not
available inside a Chain element. Optional.
❻ Identifies the Message channel where the Message will be sent after being processed by this
endpoint. Attribute is not available inside a Chain element. Optional.
❼ If set to true, the Message will be removed from the MessageStore by this transformer. This is
useful when a Message can be claimed only once. Defaults to false. Optional.
❽ Specifies the maximum amount of time in milliseconds to wait when sending a reply Message to
the output channel. Defaults to -1 (block indefinitely). Attribute is not available inside a Chain
element. Optional.
❾ Defines a poller. Element is not available inside a Chain element. Optional.
Claim Once
There are scenarios where a particular message must be claimed only once. As an analogy, consider the
airplane luggage check-in/out process. Checking in your luggage on departure and then claiming
it on arrival is a classic example of such a scenario. Once the luggage has been claimed, it cannot be
claimed again without first checking it back in. To accommodate such cases, we introduced a remove-message
boolean attribute on the claim-check-out transformer. This attribute is set to false by
default. However, if set to true, the claimed Message is removed from the MessageStore, so that
it can no longer be claimed.
This is also something to consider in terms of storage space, especially in the case of the in-memory
Map-based SimpleMessageStore, where failing to remove the Messages could ultimately lead to an
OutOfMemoryError. Therefore, if you don’t expect multiple claims to be made, it’s recommended
that you set the remove-message attribute’s value to true.
<int:claim-check-out id="checkout"
input-channel="checkoutChannel"
message-store="testMessageStore"
output-channel="output"
remove-message="true"/>
Although we rarely care about the details of the claim checks as long as they work, it is still worth
knowing that the current implementation of the actual Claim Check (the pointer) in Spring Integration
is a UUID to ensure uniqueness.
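The check-in/check-out mechanics described above can be sketched in plain Java. The following is a hypothetical, simplified stand-in for the MessageStore-backed transformers (the class and method names are illustrative, not Spring Integration API), showing the UUID pointer and the remove-message ("claim once") behavior:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch of the Claim Check pattern, not Spring Integration API.
public class ClaimCheckSketch {

    // the "message store": claim check (UUID) -> stored payload
    private final Map<UUID, Object> store = new HashMap<>();

    // claim-check-in: persist the payload and return the generated pointer
    public UUID checkIn(Object payload) {
        UUID claimCheck = UUID.randomUUID();
        store.put(claimCheck, payload);
        return claimCheck;
    }

    // claim-check-out: resolve the pointer; when removeMessage is true the
    // payload can be claimed only once (the "Claim Once" scenario)
    public Object checkOut(UUID claimCheck, boolean removeMessage) {
        return removeMessage ? store.remove(claimCheck) : store.get(claimCheck);
    }

    public int size() {
        return store.size();
    }
}
```

Only the UUID travels through the flow; any downstream component holding it can recover the full payload from the store.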
7.4 Codec
Introduction
Spring Integration version 4.2 introduces the Codec abstraction. Codecs are used to encode/decode
objects to/from byte[]. They are an alternative to Java Serialization. One advantage is that, typically,
objects do not have to implement Serializable. One implementation, using Kryo for serialization, is
provided, but you can provide your own implementation for use in any of these components:
• EncodingPayloadTransformer
• DecodingTransformer
• CodecMessageConverter
EncodingPayloadTransformer
This transformer encodes the payload to a byte[] using the codec. It does not affect message headers.
DecodingTransformer
This transformer decodes a byte[] using the codec; it needs to be configured with the Class to which
the object should be decoded (or an expression that resolves to a Class). If the resulting object is a
Message<?>, inbound headers will not be retained.
CodecMessageConverter
Certain endpoints (e.g. TCP, Redis) have no concept of message headers; they support the use of a
MessageConverter and the CodecMessageConverter can be used to convert a message to/from
a byte[] for transmission.
Kryo
Currently, this (Kryo) is the only implementation technology for Codec. Two Codec implementations are
provided: PojoCodec, which can be used in the transformers, and MessageCodec, which can be used in the
CodecMessageConverter. The framework also provides several custom Kryo serializers:
• FileSerializer
• MessageHeadersSerializer
• MutableMessageHeadersSerializer
The first can be used with the PojoCodec, by initializing it with the FileKryoRegistrar. The second
and third are used with the MessageCodec, which is initialized with the MessageKryoRegistrar.
Customizing Kryo
By default, Kryo delegates unknown Java types to its FieldSerializer. Kryo also registers
default serializers for each primitive type along with String, Collection and Map serializers.
FieldSerializer uses reflection to navigate the object graph. A more efficient approach is to
implement a custom serializer that is aware of the object’s structure and can directly serialize selected
primitive fields:
public class AddressSerializer extends Serializer<Address> {

@Override
public void write(Kryo kryo, Output output, Address address) {
output.writeString(address.getStreet());
output.writeString(address.getCity());
output.writeString(address.getCountry());
}

@Override
public Address read(Kryo kryo, Input input, Class<Address> type) {
return new Address(input.readString(), input.readString(), input.readString());
}
}
The Serializer interface exposes Kryo, Input, and Output which provide complete control over
which fields are included and other internal settings as described in the documentation.
Note
When registering your custom serializer, you need a registration ID. The registration IDs are
arbitrary, but in our case must be explicitly defined, because each Kryo instance across the
distributed application must use the same IDs. Kryo recommends small positive integers, and
reserves a few ids (value < 10). Spring Integration currently defaults to using 40, 41 and 42 (for
the file and message header serializers mentioned above); we recommend you start at, say, 60, to
allow for expansion in the framework. These framework defaults can be overridden by configuring
the registrars mentioned above.
If custom serialization is indicated, please consult the Kryo documentation since you will be using the
native API. For an example, see the MessageCodec.
Implementing KryoSerializable
If you have write access to the domain object source code it may implement KryoSerializable
as described here. In this case the class provides the serialization methods itself and no further
configuration is required. This has the advantage of being much simpler to use with XD, however
benchmarks have shown this is not quite as efficient as registering a custom serializer explicitly:
public class Address implements KryoSerializable {

private String street;
private String city;
private String country;

@Override
public void write(Kryo kryo, Output output) {
output.writeString(this.street);
output.writeString(this.city);
output.writeString(this.country);
}

@Override
public void read(Kryo kryo, Input input) {
this.street = input.readString();
this.city = input.readString();
this.country = input.readString();
}
}
Note that this technique can also be used to wrap a serialization library other than Kryo.
@DefaultSerializer(SomeClassSerializer.class)
public class SomeClass {
// ...
}
If you have write access to the domain object, this may be a simpler way to specify a custom
serializer. Note that this does not register the class with an ID, so your mileage may vary.
8. Messaging Endpoints
8.1 Message Endpoints
The first part of this chapter covers some background theory and reveals quite a bit about the underlying
API that drives Spring Integration’s various messaging components. This information can be helpful if
you want to really understand what’s going on behind the scenes. However, if you want to get up and
running with the simplified namespace-based configuration of the various elements, feel free to skip
ahead to the section called “Endpoint Namespace Support” for now.
As mentioned in the overview, Message Endpoints are responsible for connecting the various
messaging components to channels. Over the next several chapters, you will see a number of different
components that consume Messages. Some of these are also capable of sending reply Messages.
Sending Messages is quite straightforward. As shown above in Section 4.1, “Message Channels”, it’s
easy to send a Message to a Message Channel. However, receiving is a bit more complicated. The main
reason is that there are two types of consumers: Polling Consumers and Event Driven Consumers.
Of the two, Event Driven Consumers are much simpler. Without any need to manage and schedule a
separate poller thread, they are essentially just listeners with a callback method. When connecting to one
of Spring Integration’s subscribable Message Channels, this simple option works great. However, when
connecting to a buffering, pollable Message Channel, some component has to schedule and manage the
polling thread(s). Spring Integration provides two different endpoint implementations to accommodate
these two types of consumers. Therefore, the consumers themselves can simply implement the callback
interface. When polling is required, the endpoint acts as a container for the consumer instance. The
benefit is similar to that of using a container for hosting Message Driven Beans, but since these
consumers are simply Spring-managed Objects running within an ApplicationContext, it more closely
resembles Spring’s own MessageListener containers.
Message Handler
Spring Integration’s MessageHandler interface is implemented by many of the components within
the framework. In other words, this is not part of the public API, and a developer would not typically
implement MessageHandler directly. Nevertheless, it is used by a Message Consumer for actually
handling the consumed Messages, and so being aware of this strategy interface does help in terms of
understanding the overall role of a consumer. The interface is defined as follows:
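The definition that belongs here is the MessageHandler interface from org.springframework.messaging, a single-method contract:

```java
public interface MessageHandler {

    void handleMessage(Message<?> message) throws MessagingException;

}
```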
Despite its simplicity, this provides the foundation for most of the components that will be covered
in the following chapters (Routers, Transformers, Splitters, Aggregators, Service Activators, etc).
Those components each perform very different functionality with the Messages they handle, but the
requirements for actually receiving a Message are the same, and the choice between polling and event-
driven behavior is also the same. Spring Integration provides two endpoint implementations that host
these callback-based handlers and allow them to be connected to Message Channels.
Recall that the SubscribableChannel interface provides a subscribe() method and that the
method accepts a MessageHandler parameter (as shown in the section called “SubscribableChannel”):
subscribableChannel.subscribe(messageHandler);
Since a handler that is subscribed to a channel does not have to actively poll that channel, this
is an Event Driven Consumer, and the implementation provided by Spring Integration accepts a
SubscribableChannel and a MessageHandler:
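A sketch of that instantiation, assuming the subscribableChannel and messageHandler variables already exist (the variable names are illustrative; EventDrivenConsumer is the Spring Integration endpoint class):

```java
EventDrivenConsumer consumer = new EventDrivenConsumer(subscribableChannel, messageHandler);
```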
Polling Consumer
Spring Integration also provides a PollingConsumer, and it can be instantiated in the same way
except that the channel must implement PollableChannel:
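A corresponding sketch, again assuming the channel and handler variables already exist:

```java
PollingConsumer consumer = new PollingConsumer(pollableChannel, messageHandler);
```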
Note
For more information regarding Polling Consumers, please also read Section 4.2, “Poller” as well
as Section 4.3, “Channel Adapter”.
There are many other configuration options for the Polling Consumer. For example, the trigger is a
required property:
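For example, a periodic trigger can be set like this (a sketch; PeriodicTrigger interprets the value as milliseconds unless a TimeUnit is supplied):

```java
PeriodicTrigger trigger = new PeriodicTrigger(1000);
consumer.setTrigger(trigger);
```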
The CronTrigger simply requires a valid cron expression (see the Javadoc for details):
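A sketch of a cron-based trigger (the expression here is illustrative):

```java
CronTrigger trigger = new CronTrigger("*/10 * * * * MON-FRI");
consumer.setTrigger(trigger);
```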
In addition to the trigger, several other polling-related configuration properties may be specified:
consumer.setMaxMessagesPerPoll(10);
consumer.setReceiveTimeout(5000);
The maxMessagesPerPoll property specifies the maximum number of messages to receive within a
given poll operation. This means that the poller will continue calling receive(), without waiting, until either
null is returned or the max is reached. For example, if a poller has a 10 second interval trigger and
a maxMessagesPerPoll setting of 25, and it is polling a channel that has 100 messages in its queue,
all 100 messages can be retrieved within 40 seconds. It grabs 25, waits 10 seconds, grabs the next
25, and so on.
The receiveTimeout property specifies the amount of time the poller should wait if no messages are
available when it invokes the receive operation. For example, consider two options that seem similar on
the surface but are actually quite different: the first has an interval trigger of 5 seconds and a receive
timeout of 50 milliseconds while the second has an interval trigger of 50 milliseconds and a receive
timeout of 5 seconds. The first one may receive a message up to 4950 milliseconds later than it arrived
on the channel (if that message arrived immediately after one of its poll calls returned). On the other
hand, the second configuration will never miss a message by more than 50 milliseconds. The difference
is that the second option requires a thread to wait, but as a result it is able to respond much more
quickly to arriving messages. This technique, known as long polling, can be used to emulate event-
driven behavior on a polled source.
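The effect of the receive timeout can be illustrated with a plain JDK BlockingQueue standing in for a queue-backed channel (an illustrative sketch, not Spring Integration API): a long timeout blocks inside poll() and reacts to a new message almost immediately, while a short timeout returns null and leaves the reaction to the next scheduled poll.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative long-polling sketch against a queue-backed "channel".
public class LongPollingSketch {

    public static String pollOnce(BlockingQueue<String> channel, long timeoutMillis)
            throws InterruptedException {
        // blocks for up to timeoutMillis; returns null if nothing arrives
        return channel.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new LinkedBlockingQueue<>();
        channel.offer("hello");
        // message already present: returns immediately despite the long timeout
        System.out.println(pollOnce(channel, 30000));
        // empty channel with a short timeout: waits ~50 ms, then returns null
        System.out.println(pollOnce(channel, 50));
    }
}
```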
A Polling Consumer may also delegate to a Spring TaskExecutor, as illustrated in the following
example:
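A sketch of such delegation (the executor bean name and the context lookup are illustrative):

```java
TaskExecutor taskExecutor = context.getBean("exampleExecutor", TaskExecutor.class);
consumer.setTaskExecutor(taskExecutor);
```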
Furthermore, a PollingConsumer has a property called adviceChain. This property allows you to
specify a List of AOP Advices for handling additional cross cutting concerns including transactions.
These advices are applied around the doPoll() method. For more in-depth information, please see the
sections AOP Advice chains and Transaction Support under the section called “Endpoint Namespace
Support”.
The examples above show dependency lookups, but keep in mind that these consumers will most often
be configured as Spring bean definitions. In fact, Spring Integration also provides a FactoryBean called
ConsumerEndpointFactoryBean that creates the appropriate consumer type based on the type of
channel, and there is full XML namespace support to even further hide those details. The namespace-
based configuration will be featured as each component type is introduced.
Note
Many of the MessageHandler implementations are also capable of generating reply Messages.
As mentioned above, sending Messages is trivial when compared to the Message reception.
Nevertheless,when and how many reply Messages are sent depends on the handler type. For
example, an Aggregator waits for a number of Messages to arrive and is often configured as
a downstream consumer for a Splitter which may generate multiple replies for each Message
it handles. When using the namespace configuration, you do not strictly need to know all
of the details, but it still might be worth knowing that several of these components share a
common base class, the AbstractReplyProducingMessageHandler, and it provides a
setOutputChannel(..) method.
Throughout the reference manual, you will see specific configuration examples for endpoint elements,
such as router, transformer, service-activator, and so on. Most of these will support an input-channel
attribute and many will support an output-channel attribute. After being parsed, these endpoint elements
In the configuration below you find a poller with all available configuration options:
<int:poller cron="" ❶
default="false" ❷
error-channel="" ❸
fixed-delay="" ❹
fixed-rate="" ❺
id="" ❻
max-messages-per-poll="" ❼
receive-timeout="" ❽
ref="" ❾
task-executor="" ❿
time-unit="MILLISECONDS" 11
trigger=""> 12
<int:advice-chain /> 13
<int:transactional /> 14
</int:poller>
❶ Provides the ability to configure Pollers using Cron expressions. The underlying implementation
uses an org.springframework.scheduling.support.CronTrigger. If this attribute is set,
none of the following attributes may be specified: fixed-delay, trigger, fixed-rate, ref.
❷ By setting this attribute to true, it is possible to define exactly one (1) global default
poller. An exception is raised if more than one default poller is defined in the
application context. Any endpoints connected to a PollableChannel (PollingConsumer) or any
SourcePollingChannelAdapter that does not have any explicitly configured poller will then use the
global default Poller. Optional. Defaults to false.
❸ Identifies the channel which error messages will be sent to if a failure occurs in this poller’s
invocation. To completely suppress Exceptions, provide a reference to the nullChannel.
Optional.
❹ The fixed delay trigger uses a PeriodicTrigger under the covers. If the time-unit attribute
is not used, the specified value is represented in milliseconds. If this attribute is set, none of the
following attributes may be specified: fixed-rate, trigger, cron, ref.
❺ The fixed rate trigger uses a PeriodicTrigger under the covers. If the time-unit attribute
is not used, the specified value is represented in milliseconds. If this attribute is set, none of the
following attributes may be specified: fixed-delay, trigger, cron, ref.
❻ The Id referring to the Poller’s underlying bean-definition, which is of type
org.springframework.integration.scheduling.PollerMetadata. The id attribute is
required for a top-level poller element unless it is the default poller (default="true").
❼ Please see the section called “Configuring An Inbound Channel Adapter” for more information.
Optional. If not specified, the default value depends on the context. If a PollingConsumer
is used, this attribute defaults to -1. However, if a SourcePollingChannelAdapter is used,
then the max-messages-per-poll attribute defaults to 1.
❽ Value is set on the underlying class PollerMetadata. Optional. If not specified it defaults to 1000
(milliseconds).
❾ Bean reference to another top-level poller. The ref attribute must not be present on a top-level
poller element. However, if this attribute is set, none of the following attributes may be specified:
fixed-rate, trigger, cron, fixed-delay.
❿ Provides the ability to reference a custom task executor. Please see the section below titled
TaskExecutor Support for further information. Optional.
Examples
For example, a simple interval-based poller with a 1-second interval would be configured like this:
<int:transformer input-channel="pollable"
ref="transformer"
output-channel="output">
<int:poller fixed-rate="1000"/>
</int:transformer>
For a poller based on a Cron expression, use the cron attribute instead:
<int:transformer input-channel="pollable"
ref="transformer"
output-channel="output">
<int:poller cron="*/10 * * * * MON-FRI"/>
</int:transformer>
If the input channel is a PollableChannel, then the poller configuration is required. Specifically, as
mentioned above, the trigger is a required property of the PollingConsumer class. Therefore, if you omit
the poller sub-element for a Polling Consumer endpoint’s configuration, an Exception may be thrown.
The exception will also be thrown if you attempt to configure a poller on the element that is connected
to a non-pollable channel.
It is also possible to create top-level pollers in which case only a ref is required:
<int:transformer input-channel="pollable"
ref="transformer"
output-channel="output">
<int:poller ref="weekdayPoller"/>
</int:transformer>
Note
The ref attribute is only allowed on the inner-poller definitions. Defining this attribute on a top-level
poller will result in a configuration exception thrown during initialization of the Application Context.
In fact, to simplify the configuration even further, you can define a global default poller. A single top-level
poller within an ApplicationContext may have the default attribute with a value of true. In that case, any
endpoint with a PollableChannel for its input-channel that is defined within the same ApplicationContext
and has no explicitly configured poller sub-element will use that default.
Transaction Support
Spring Integration also provides transaction support for the pollers so that each receive-and-forward
operation can be performed as an atomic unit of work. To configure transactions for a poller, simply add
the <transactional/> sub-element. The attributes for this element should be familiar to anyone who
has experience with Spring’s Transaction management:
<int:poller fixed-delay="1000">
<int:transactional transaction-manager="txManager"
propagation="REQUIRED"
isolation="REPEATABLE_READ"
timeout="10000"
read-only="false"/>
</int:poller>
For more information please refer to the section called “Poller Transaction Support”.
Since Spring transaction support depends on the Proxy mechanism, with a TransactionInterceptor
(AOP Advice) handling the transactional behavior of the message flow initiated by the poller, sometimes
there is a need to provide extra Advice(s) to handle other cross-cutting behavior associated with the
poller. For that purpose, the poller defines an advice-chain element, allowing you to add more
advices (any class that implements the MethodInterceptor interface).
For more information on how to implement MethodInterceptor, please refer to the AOP sections of the Spring
reference manual (sections 8 and 9). An advice chain can also be applied to a poller that does not have
any transaction configuration, essentially allowing you to enhance the behavior of the message flow
initiated by the poller.
Important
When using an advice chain, the <transactional/> child element cannot be specified; instead,
declare a <tx:advice/> bean and add it to the <advice-chain/>. See the section called
“Poller Transaction Support” for complete configuration.
TaskExecutor Support
The polling threads may be executed by any instance of Spring’s TaskExecutor abstraction. This
enables concurrency for an endpoint or group of endpoints. As of Spring 3.0, there is a task namespace
in the core Spring Framework, and its <executor/> element supports the creation of a simple thread
pool executor. That element accepts attributes for common concurrency settings such as pool-size and
queue-capacity. Configuring a thread-pooling executor can make a substantial difference in how the
endpoint performs under load. These settings are available per-endpoint since the performance of an
endpoint is one of the major factors to consider (the other major factor being the expected volume
on the channel to which the endpoint subscribes). To enable concurrency for a polling endpoint that
is configured with the XML namespace support, provide the task-executor reference on its <poller/>
element and then provide one or more of the properties shown below:
<task:executor id="pool"
pool-size="5-25"
queue-capacity="20"
keep-alive="120"/>
If no task-executor is provided, the consumer’s handler will be invoked in the caller’s thread. Note that
the caller is usually the default TaskScheduler (see Section E.3, “Configuring the Task Scheduler”).
Also, keep in mind that the task-executor attribute can provide a reference to any implementation of
Spring’s TaskExecutor interface by specifying the bean name. The executor element above is simply
provided for convenience.
As mentioned in the background section for Polling Consumers above, you can also configure a Polling
Consumer in such a way as to emulate event-driven behavior. With a long receive-timeout and a short
interval-trigger, you can ensure a very timely reaction to arriving messages even on a polled message
source. Note that this only applies to sources that have a blocking wait call with a timeout. For example,
the File poller does not block; each receive() call returns immediately, whether or not new files are
available. Therefore, even if a poller specifies a long receive-timeout, that value would never be used in
such a scenario. On the other hand, when using Spring Integration’s own queue-based channels, the timeout
value does have a chance to participate. The following example demonstrates how a Polling Consumer
receives Messages nearly instantaneously.
<int:service-activator input-channel="someQueueChannel"
output-channel="output">
<int:poller receive-timeout="30000" fixed-rate="10"/>
</int:service-activator>
Using this approach does not carry much overhead, since internally it is nothing more than a timed-wait
thread, which does not require nearly as much CPU resource usage as, for example, a thrashing, infinite
while loop.
A custom Trigger implementation uses its nextExecutionTime() method to schedule the next poll. To
use such a custom trigger within pollers, declare the bean definition of the custom Trigger in your
application context and inject the dependency into your Poller configuration using the trigger attribute,
which references the custom Trigger bean instance. You can then obtain a reference to the Trigger bean,
and the polling interval can be changed between polls.
For an example, please see the Spring Integration Samples project. It contains a sample called dynamic-
poller, which uses a custom Trigger and demonstrates the ability to change the polling interval at runtime.
https://github.com/SpringSource/spring-integration-samples/tree/master/intermediate
Note
It is important to note, though, that because the Trigger method is nextExecutionTime(), any
changes to a dynamic trigger will not take effect until the next poll, based on the existing
configuration. It is not possible to force a trigger to fire before its currently configured next
execution time.
The Converter implementation is the simplest and converts from a single type to another. For more
sophistication, such as converting to a class hierarchy, you would implement a GenericConverter
and possibly a ConditionalConverter. These give you complete access to the from and to type
descriptors enabling complex conversions. For example, if you have an abstract class Foo that is
the target of your conversion (parameter type, channel data type etc) and you have two concrete
implementations Bar and Baz and you wish to convert to one or the other based on the input type,
the GenericConverter would be a good fit. Refer to the JavaDocs for these interfaces for more
information.
When you have implemented your converter, you can register it with convenient namespace support:
<int:converter ref="sampleConverter"/>
or as an inner bean:
<int:converter>
<bean class="o.s.i.config.xml.ConverterParserTests$TestConverter3"/>
</int:converter>
Starting with Spring Integration 4.0, the above configuration is available using annotations:
@Component
@IntegrationConverter
public class TestConverter implements Converter<Boolean, Number> {

public Integer convert(Boolean source) {
return source ? 1 : 0;
}

}
or as a @Configuration part:
@Configuration
@EnableIntegration
public class ContextConfiguration {
@Bean
@IntegrationConverter
public SerializingConverter serializingConverter() {
return new SerializingConverter();
}

}
Important
When configuring an Application Context, the Spring Framework allows you to add a
conversionService bean (see Configuring a ConversionService chapter). This service is used,
when needed, to perform appropriate conversions during bean creation and configuration.
In contrast, the integrationConversionService is used for runtime conversions. These uses are
quite different; converters that are intended for use when wiring bean constructor-args and
properties may produce unintended results if used at runtime for Spring Integration expression
evaluation against Messages within Datatype Channels, Payload Type transformers etc.
However, if you do want to use the Spring conversionService as the Spring Integration
integrationConversionService, you can configure an alias in the Application Context:
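A sketch of that alias, using the standard Spring bean alias element:

```xml
<alias name="conversionService" alias="integrationConversionService"/>
```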
In this case the conversionService's Converters will be available for Spring Integration runtime
conversion.
Starting with version 5.0, by default, the method invocation mechanism is based on
the org.springframework.messaging.handler.invocation.InvocableHandlerMethod
infrastructure. Its HandlerMethodArgumentResolver implementations (e.g.
PayloadArgumentResolver and MessageMethodArgumentResolver) can use the
MessageConverter abstraction to convert an incoming payload to the target method argument
type. The conversion can be based on the contentType message header. For this purpose Spring
Integration provides the ConfigurableCompositeMessageConverter that delegates to a list of
registered converters to be invoked until one of them returns a non-null result. By default this converter
provides (in strict order):
• ByteArrayMessageConverter
• ObjectStringMessageConverter
• GenericMessageConverter
Please consult their JavaDocs for more information about their purpose and the appropriate contentType
value for conversion. The ConfigurableCompositeMessageConverter is used because it can
be supplied with any other MessageConverter s, including or excluding the above-mentioned default
converters, and can be registered as an appropriate bean in the application context, overriding the default one:
@Bean(name = IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME)
public ConfigurableCompositeMessageConverter compositeMessageConverter() {
List<MessageConverter> converters =
Arrays.asList(new MarshallingMessageConverter(jaxb2Marshaller()),
new JavaSerializationMessageConverter());
return new ConfigurableCompositeMessageConverter(converters);
}
Those two new converters will be registered in the composite before the defaults. Alternatively,
instead of using a ConfigurableCompositeMessageConverter, you can provide your own MessageConverter
by registering a bean with the name integrationArgumentResolverMessageConverter
(the IntegrationContextUtils.ARGUMENT_RESOLVER_MESSAGE_CONVERTER_BEAN_NAME
constant).
Asynchronous polling
If you want the polling to be asynchronous, a Poller can optionally specify a task-executor attribute
pointing to an existing instance of any TaskExecutor bean (Spring 3.0 provides a convenient
namespace configuration via the task namespace). However, there are certain things you must
understand when configuring a Poller with a TaskExecutor.
The problem is that there are two configurations in place: the Poller and the TaskExecutor. They
have to be in tune with each other, otherwise you might end up creating an artificial memory leak.
Let’s look at the following configuration provided by one of the users on the Spring Integration Forum:
<int:channel id="publishChannel">
<int:queue />
</int:channel>
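The remainder of that configuration might look like the following sketch; the bean names and the service-activator are hypothetical, but the values (a pool of 20 threads, a 5 second receive timeout, and a task fired every 50ms, i.e. 20 per second) match the analysis below:

```xml
<int:service-activator input-channel="publishChannel" ref="exampleService">
    <int:poller receive-timeout="5000" task-executor="pollingExecutor" fixed-rate="50"/>
</int:service-activator>

<task:executor id="pollingExecutor" pool-size="20"/>
```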
By default, the task executor has an unbounded task queue. The poller keeps scheduling new tasks
even though all the threads are blocked waiting for either a new message to arrive, or the timeout to
expire. Given that there are 20 threads executing tasks with a 5 second timeout, they will be executed
at a rate of 4 per second (5000/20 = 250ms). But, new tasks are being scheduled at a rate of 20 per
second, so the internal queue in the task executor will grow at a rate of 16 per second (while the process
is idle), so we essentially have a memory leak.
One of the ways to handle this is to set the queue-capacity attribute of the Task Executor; and
even 0 is a reasonable value. You can also manage it by specifying what to do with messages that can
not be queued by setting the rejection-policy attribute of the Task Executor (e.g., DISCARD). In
other words, there are certain details you must understand with regard to configuring the TaskExecutor.
Please refer to Task Execution and Scheduling of the Spring reference manual for more detail on the
subject.
The above only applies to the framework component itself. If you use an inner bean definition such as
this:
the bean is treated like any inner bean declared that way and is not registered with the application
context. If you wish to access this bean in some other manner, declare it at the top level with an id and
use the ref attribute instead. See the Spring Documentation for more information.
You can assign endpoints to roles using XML, Java configuration, or programmatically:
@Bean
@ServiceActivator(inputChannel = "sendAsyncChannel", autoStartup="false")
@Role("cluster")
public MessageHandler sendAsyncHandler() {
return // some MessageHandler
}
@Payload("#args[0].toLowerCase()")
@Role("cluster")
public String handle(String payload) {
return payload.toUpperCase();
}
@Autowired
private SmartLifecycleRoleController roleController;
...
this.roleController.addLifecycleToRole("cluster", someEndpoint);
...
Note
Any object implementing SmartLifecycle can be programmatically added, not just endpoints.
Important
When using leadership election to start/stop components, it is important to set the auto-startup
XML attribute (autoStartup bean property) to false so the application context does not start
the components during context initialization.
To participate in a leader election and be notified when elected leader, when leadership is revoked,
or on failure to acquire the resources to become leader, an application creates a component in the
application context called a "leader initiator". Normally a leader initiator is a SmartLifecycle, so
it starts up (optionally) automatically when the context starts and then publishes notifications when
leadership changes. You can also receive failure notifications by setting publishFailedEvents
to true (starting with version 5.0), for cases when you want to take a specific action if a failure occurs.
By convention the user provides a Candidate that receives the callbacks and also can revoke
the leadership through a Context object provided by the framework. User code can also listen for
org.springframework.integration.leader.event.AbstractLeaderEvent s, and respond
accordingly, for instance using a SmartLifecycleRoleController.
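For example, a sketch of such a listener (the role name "cluster" and the bean wiring are assumptions for illustration):

```java
@Bean
public ApplicationListener<AbstractLeaderEvent> leaderEventListener(
        SmartLifecycleRoleController roleController) {
    return event -> {
        // start/stop all endpoints in the "cluster" role as leadership changes
        if (event instanceof OnGrantedEvent) {
            roleController.startLifecyclesInRole("cluster");
        }
        else if (event instanceof OnRevokedEvent) {
            roleController.stopLifecyclesInRole("cluster");
        }
    };
}
```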
There is a basic implementation of a leader initiator based on the LockRegistry abstraction. To use
it you just need to create an instance as a bean, for example:
@Bean
public LockRegistryLeaderInitiator leaderInitiator(LockRegistry locks) {
return new LockRegistryLeaderInitiator(locks);
}
If the lock registry is implemented correctly, there will only ever be at most one leader. If the lock registry
also provides locks which throw exceptions (ideally InterruptedException) when they expire or are
broken, then the duration of the leaderless periods can be as short as is allowed by the inherent latency
in the lock implementation. By default there is a busyWaitMillis property that adds some additional
latency to prevent CPU starvation in the (more usual) case that the locks are imperfect and you only
know they expired by trying to obtain one again.
See Section 39.4, “Zookeeper Leadership Event Handling” for more information about leadership
election and events using Zookeeper.
Here is an example of an interface that can be used to interact with Spring Integration:

package org.cafeteria;

public interface Cafe {

    void placeOrder(Order order);

}
Namespace support is also provided which allows you to configure such an interface as a service as
demonstrated by the following example.
<int:gateway id="cafeService"
service-interface="org.cafeteria.Cafe"
default-request-channel="requestChannel"
default-reply-timeout="10000"
default-reply-channel="replyChannel"/>
With this configuration defined, the "cafeService" can now be injected into other beans, and the code
that invokes the methods on that proxied instance of the Cafe interface has no awareness of the Spring
Integration API. The general approach is similar to that of Spring Remoting (RMI, HttpInvoker, etc.). See
the "Samples" Appendix for an example that uses this "gateway" element (in the Cafe demo).
The defaults in the configuration above are applied to all methods on the gateway interface; if a reply
timeout is not specified, the calling thread will wait indefinitely for a reply. See the section called “Gateway
behavior when no response arrives”.
The defaults can be overridden for individual methods; see the section called “Gateway Configuration
with Annotations and/or XML”.
Typically you don’t have to specify the default-reply-channel, since a Gateway will auto-create
a temporary, anonymous reply channel, where it will listen for the reply. However, there are some
cases which may prompt you to define a default-reply-channel (or reply-channel with adapter
gateways such as HTTP, JMS, etc.).
For some background, we’ll quickly discuss some of the inner-workings of the Gateway. A Gateway
will create a temporary point-to-point reply channel which is anonymous and is added to the Message
Headers with the name replyChannel. When providing an explicit default-reply-channel
(reply-channel with remote adapter gateways), you have the option to point to a publish-subscribe
channel, which is so named because you can add more than one subscriber to it. Internally Spring
Integration will create a Bridge between the temporary replyChannel and the explicitly defined
default-reply-channel.
So let’s say you want your reply to go not only to the gateway, but also to some other consumer. In
this case you would want two things: a) a named channel you can subscribe to and b) that channel
is a publish-subscribe-channel. The default strategy used by the gateway will not satisfy those needs,
because the reply channel added to the header is anonymous and point-to-point. This means that no
other subscriber can get a handle to it and even if it could, the channel has point-to-point behavior such
that only one subscriber would get the Message. So by defining a default-reply-channel you can
point to a channel of your choosing, which in this case would be a publish-subscribe-channel.
The Gateway would create a bridge from it to the temporary, anonymous reply channel that is stored
in the header.
Another case where you might want to provide a reply channel explicitly is for monitoring or auditing via
an interceptor (e.g., wiretap). You need a named channel in order to configure a Channel Interceptor.
@Gateway(requestChannel="orders")
void placeOrder(Order order);
You may alternatively provide such content in method sub-elements if you prefer XML configuration
(see the next paragraph).
It is also possible to pass values to be interpreted as Message headers on the Message that is created
and sent to the request channel by using the @Header annotation:
@Gateway(requestChannel="filesOut")
void write(byte[] content, @Header(FileHeaders.FILENAME) String filename);
If you prefer the XML approach of configuring Gateway methods, you can provide method sub-elements
to the gateway configuration.
You can also provide individual headers per method invocation via XML. This could be very useful if
the headers you want to set are static in nature and you don’t want to embed them in the gateway’s
method signature via @Header annotations. For example, in the Loan Broker example we want to
influence how aggregation of the Loan quotes will be done based on what type of request was initiated
(single quote or all quotes). Determining the type of the request by evaluating what gateway method
was invoked, although possible, would violate the separation of concerns paradigm (the method is a
java artifact), but expressing your intention (meta information) via Message headers is natural in a
Messaging architecture.
<int:gateway id="loanBrokerGateway"
service-interface="org.springframework.integration.loanbroker.LoanBrokerGateway">
<int:method name="getLoanQuote" request-channel="loanBrokerPreProcessingChannel">
<int:header name="RESPONSE_TYPE" value="BEST"/>
</int:method>
<int:method name="getAllLoanQuotes" request-channel="loanBrokerPreProcessingChannel">
<int:header name="RESPONSE_TYPE" value="ALL"/>
</int:method>
</int:gateway>
In the above case you can clearly see how a different value will be set for the RESPONSE_TYPE header
based on the gateway’s method.
The <header/> element supports expression as an alternative to value. The SpEL expression is
evaluated to determine the value of the header. There is no #root object but the following variables
are available:
Note
Since 3.0, <default-header/> s can be defined to add headers to all messages produced by the
gateway, regardless of the method invoked. Specific headers defined for a method take precedence
over default headers. Specific headers defined for a method here will override any @Header annotations
in the service interface. However, default headers will NOT override any @Header annotations in the
service interface.
The gateway now also supports a default-payload-expression which will be applied for all
methods (unless overridden).
Using the configuration techniques in the previous section allows control of how method arguments are
mapped to message elements (payload and header(s)). When no explicit configuration is used, certain
conventions are used to perform the mapping. In some cases, these conventions cannot determine
which argument is the payload and which should be mapped to headers.
In the first case, the convention will map the first argument to the payload (as long as it is not a Map)
and the contents of the second become headers.
In the second case (or the first when the argument for parameter foo is a Map), the framework cannot
determine which argument should be the payload; mapping will fail. This can generally be resolved
using a payload-expression, a @Payload annotation and/or a @Headers annotation.
Alternatively, and whenever the conventions break down, you can take the entire responsibility for
mapping the method calls to messages. To do this, implement a MethodArgsMessageMapper and
provide it to the <gateway/> using the mapper attribute. The mapper maps a MethodArgsHolder,
which is a simple class wrapping the java.lang.reflect.Method instance and an Object[] containing
the arguments. When providing a custom mapper, the default-payload-expression attribute and
<default-header/> elements are not allowed on the gateway; similarly, the payload-expression
attribute and <header/> elements are not allowed on any <method/> elements.
Here are examples showing how method arguments can be mapped to the message (and some
examples of invalid configuration):
void mapOnly(Map<String, Object> map); // the payload is the map and no custom headers are added
@Payload("@someBean.exclaim(#args[0])")
void payloadAnnotationAtMethodLevelUsingBeanResolver(String s);
// invalid
void twoMapsWithoutAnnotations(Map<String, Object> m1, Map<String, Object> m2);
// invalid
void twoPayloads(@Payload String s1, @Payload String s2);
// invalid
void payloadAndHeaderAnnotationsOnSameParameter(@Payload @Header("x") String s);
// invalid
void payloadAndHeadersAnnotationsOnSameParameter(@Payload @Headers Map<String, Object> map);
❶ Note that in this example, the SpEL variable #this refers to the argument - in this case, the value
of 's'.
The XML equivalent looks a little different, since there is no #this context for the method argument,
but expressions can refer to method arguments using the #args variable:
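For example, a sketch of such a definition (the gateway id, interface, and channel names are hypothetical):

```xml
<int:gateway id="myGateway" service-interface="org.example.MyGateway"
             default-request-channel="inputChannel">
    <int:method name="send" payload-expression="@someBean.exclaim(#args[0])"/>
</int:gateway>
```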
@MessagingGateway Annotation
Starting with version 4.0, gateway service interfaces can be marked with a @MessagingGateway
annotation instead of requiring the definition of a <gateway /> xml element for configuration. The
following compares the two approaches for configuring the same gateway:
Important
As with the XML version, Spring Integration creates the proxy implementation with its
messaging infrastructure, when discovering these annotations during a component scan.
To perform this scan and register the BeanDefinition in the application context,
add the @IntegrationComponentScan annotation to a @Configuration class. The
standard @ComponentScan infrastructure doesn’t deal with interfaces, therefore the custom
@IntegrationComponentScan logic has been introduced to determine @MessagingGateway
annotation on the interfaces and register GatewayProxyFactoryBean s for them. See also
Section E.6, “Annotation Support”
Note
When invoking methods on a Gateway interface that do not have any arguments, the default behavior
is to receive a Message from a PollableChannel.
At times however, you may want to trigger no-argument methods so that you can in fact interact
with other components downstream that do not require user-provided parameters, e.g. triggering no-
argument SQL calls or Stored Procedures.
In order to achieve send-and-receive semantics, you must provide a payload. To generate a
payload, method parameters on the interface are not necessary. You can use either the @Payload
annotation or the payload-expression attribute in XML on the method sub-element. Here are
a few examples of what the payloads could be:
• a literal string
• #gatewayMethod.name
• new java.util.Date()
@Payload("new java.util.Date()")
List<Order> retrieveOpenOrders();
If a method has no argument and no return value, but does contain a payload expression, it will be
treated as a send-only operation.
Error Handling
Of course, the Gateway invocation might result in errors. By default, any error that occurs downstream
will be re-thrown as is upon the Gateway’s method invocation. For example, consider the following
simple flow:
If the service invoked by the service activator throws a FooException, the framework wraps it in a
MessagingException, attaching the message passed to the service activator in the failedMessage
property. Any logging performed by the framework will therefore have full context of the failure. When
the exception is caught by the gateway, by default, the FooException will be unwrapped and thrown
to the caller. You can configure a throws clause on the gateway method declaration for matching
the particular exception type in the cause chain. For example if you would like to catch a whole
MessagingException with all the messaging information of the reason of downstream error, you
should have a gateway method like this:
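A sketch of such a method declaration (the interface and channel names are assumptions for illustration):

```java
public interface MyGateway {

    @Gateway(requestChannel = "requestChannel")
    Object exchange(Object payload) throws MessagingException;

}
```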
Since we encourage POJO programming, you may not want to expose the caller to messaging
infrastructure.
If your gateway method does not have a throws clause, the gateway will traverse the cause tree looking
for a RuntimeException (that is not a MessagingException). If none is found, the framework will
simply throw the MessagingException. If the FooException in the discussion above has a cause
BarException and your method throws BarException then the gateway will further unwrap that
and throw it to the caller.
Before version 5.0 this exchange method did not have a throws clause and therefore the exception
was unwrapped. If you are using this interface, and wish to restore the previous unwrap behavior, use
a custom service-interface instead, or simply access the cause of the MessagingException
yourself.
However there are times when you may want to simply log the error rather than propagating it, or you
may want to treat an Exception as a valid reply, by mapping it to a Message that will conform to some
"error message" contract that the caller understands. To accomplish this, the Gateway provides support
for a Message Channel dedicated to the errors via the error-channel attribute. In the example below,
you can see that a transformer is used to create a reply Message from the Exception.
<int:gateway id="sampleGateway"
default-request-channel="gatewayChannel"
service-interface="foo.bar.SimpleGateway"
error-channel="exceptionTransformationChannel"/>
<int:transformer input-channel="exceptionTransformationChannel"
ref="exceptionTransformer" method="createErrorResponse"/>
The exceptionTransformer could be a simple POJO that knows how to create the expected error
response objects. That would then be the payload that is sent back to the caller. Obviously, you could
do many more elaborate things in such an "error flow" if necessary. It might involve routers (including
Spring Integration’s ErrorMessageExceptionTypeRouter), filters, and so on. Most of the time, a
simple transformer should be sufficient, however.
Alternatively, you might want to only log the Exception (or send it somewhere asynchronously). If you
provide a one-way flow, then nothing would be sent back to the caller. In the case that you want to
completely suppress Exceptions, you can provide a reference to the global "nullChannel" (essentially
a /dev/null approach). Finally, as mentioned above, if no "error-channel" is defined at all, then the
Exceptions will propagate as usual.
When using the @MessagingGateway annotation (see the section called “@MessagingGateway
Annotation”), use the errorChannel attribute.
Starting with version 5.0, when using a gateway method with a void return type (one-way flow), the
error-channel reference (if provided) is populated in the standard errorChannel header of each
message sent. This allows a downstream async flow, based on the standard ExecutorChannel
configuration (or a QueueChannel), to override a default global errorChannel exceptions sending
behavior. Previously you had to specify an errorChannel header manually via @GatewayHeader
annotation or <header> sub-element. The error-channel property was ignored for void methods
with an asynchronous flow; error messages were sent to the default errorChannel instead.
Important
Exposing the messaging system via simple POJI Gateways obviously provides benefits, but
"hiding" the reality of the underlying messaging system does come at a price so there are certain
things you should consider. We want our Java method to return as quickly as possible and not
hang for an indefinite amount of time while the caller is waiting on it to return (void, return value, or a
thrown Exception). When regular methods are used as proxies in front of the Messaging system,
we have to take into account the potentially asynchronous nature of the underlying messaging.
This means that there might be a chance that a Message that was initiated by a Gateway could
be dropped by a Filter, thus never reaching a component that is responsible for producing a reply.
Some Service Activator method might result in an Exception, thus providing no reply (as we don’t
generate Null messages). So as you can see there are multiple scenarios where a reply message
might not be coming. That is perfectly natural in messaging systems. However think about the
implication on the gateway method. The Gateway’s method input arguments were incorporated
into a Message and sent downstream. The reply Message would be converted to a return value of
the Gateway’s method. So you might want to ensure that for each Gateway call there will always be
a reply Message. Otherwise, your Gateway method might never return and will hang indefinitely.
One of the ways of handling this situation is via an Asynchronous Gateway (explained later in this
section). Another way of handling it is to explicitly set the reply-timeout attribute. That way, the
gateway will not hang any longer than the time specified by the reply-timeout and will return null
if that timeout does elapse. Finally, you might want to consider setting downstream flags such as
requires-reply on a service-activator or throw-exceptions-on-rejection on a filter. These options
will be discussed in more detail in the final section of this chapter.
Gateway Timeouts
There are two properties requestTimeout and replyTimeout. The request timeout only applies if
the channel can block (e.g. a bounded QueueChannel that is full). The reply timeout is how long the
gateway will wait for a reply, or return null; it defaults to infinity.
The timeouts can be set as defaults for all methods on the gateway (defaultRequestTimeout,
defaultReplyTimeout) (or on the MessagingGateway interface annotation). Individual methods
can override these defaults (in <method/> child elements) or on the @Gateway annotation.
The evaluation context has a BeanResolver (use @someBean to reference other beans) and the
#args array variable is available.
When configuring with XML, the timeout attributes can be a simple long value or a SpEL expression.
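A sketch of per-method overrides (names are hypothetical; this reply-timeout uses a SpEL expression):

```xml
<int:gateway id="exampleGateway" service-interface="org.example.ExampleGateway"
             default-request-channel="inputChannel"
             default-reply-timeout="5000">
    <!-- SpEL expression evaluated to compute this method's reply timeout -->
    <int:method name="sendSlow" request-channel="slowChannel"
                reply-timeout="#{60 * 1000}"/>
</int:gateway>
```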
Asynchronous Gateway
Introduction
As a pattern, the Messaging Gateway is a very nice way to hide messaging-specific code
while still exposing the full capabilities of the messaging system. As you’ve seen, the
GatewayProxyFactoryBean provides a convenient way to expose a Proxy over a service-interface
thus giving you POJO-based access to a messaging system (based on objects in your own domain, or
primitives/Strings, etc). But when a gateway is exposed via simple POJO methods which return values
it does imply that for each Request message (generated when the method is invoked) there must be
a Reply message (generated when the method has returned). Since Messaging systems are naturally
asynchronous, you may not always be able to guarantee the contract of "for each request there will
always be a reply". With Spring Integration 2.0 we introduced support for an Asynchronous Gateway,
which is a convenient way to initiate flows where you may not know if a reply is expected or how long
it will take for replies to arrive.
A natural way to handle these types of scenarios in Java would be relying upon
java.util.concurrent.Future instances, and that is exactly what Spring Integration uses to support an
Asynchronous Gateway.
From the XML configuration, there is nothing different and you still define Asynchronous Gateway the
same way as a regular Gateway.
<int:gateway id="mathService"
service-interface="org.springframework.integration.sample.gateway.futures.MathServiceGateway"
default-request-channel="requestChannel"/>
As you can see from the example above, the return type for the gateway method is a Future.
When GatewayProxyFactoryBean sees that the return type of the gateway method is a Future,
it immediately switches to the async mode by utilizing an AsyncTaskExecutor. That is all. The call
to such a method always returns immediately with a Future instance. Then, you can interact with the
Future at your own pace to get the result, cancel, etc. And, as with any other use of Future instances,
calling get() may reveal a timeout, an execution exception, and so on.
For a more detailed example, please refer to the async-gateway sample distributed within the Spring
Integration samples.
ListenableFuture
Starting with version 4.1, async gateway methods can also return ListenableFuture (introduced
in Spring Framework 4.0). These return types allow you to provide a callback which is invoked
when the result is available (or an exception occurs). When the gateway detects this return
type, and the task executor (see below) is an AsyncListenableTaskExecutor, the executor’s
submitListenable() method is invoked.
ListenableFuture<String> result = this.asyncGateway.async("foo");
result.addCallback(new ListenableFutureCallback<String>() {

    @Override
    public void onSuccess(String result) {
        ...
    }

    @Override
    public void onFailure(Throwable t) {
        ...
    }

});
AsyncTaskExecutor
@Bean
public AsyncTaskExecutor exec() {
SimpleAsyncTaskExecutor simpleAsyncTaskExecutor = new SimpleAsyncTaskExecutor();
simpleAsyncTaskExecutor.setThreadNamePrefix("exec-");
return simpleAsyncTaskExecutor;
}
@MessagingGateway(asyncExecutor = "exec")
public interface ExecGateway {

    @Gateway(requestChannel = "gatewayChannel")
    Future<?> doAsync(String foo);

}
If you wish to return a different Future implementation, you can provide a custom executor, or
disable the executor altogether and return the Future in the reply message payload from the
downstream flow. To disable the executor, simply set it to null in the GatewayProxyFactoryBean
(setAsyncTaskExecutor(null)). When configuring the gateway with XML, use async-
executor=""; when configuring using the @MessagingGateway annotation, use:
@MessagingGateway(asyncExecutor = AnnotationConstants.NULL)
public interface NoExecGateway {

    @Gateway(requestChannel = "gatewayChannel")
    Future<?> doAsync(String foo);

}
Important
If the return type is a specific concrete Future implementation or some other sub-interface that
is not supported by the configured executor, the flow will run on the caller’s thread and the flow
must return the required type in the reply message payload.
CompletableFuture
Starting with version 4.2, gateway methods can now return CompletableFuture<?>. There are
several modes of operation when returning this type:
When an async executor is provided and the return type is exactly CompletableFuture
(not a subclass), the framework will run the task on the executor and immediately return
a CompletableFuture to the caller. CompletableFuture.supplyAsync(Supplier<U>
supplier, Executor executor) is used to create the future.
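The contract of that factory method can be illustrated with a plain-Java sketch (the class and the "invoice-42" value are hypothetical, not part of the framework):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SupplyAsyncDemo {

    // Mimics what the gateway does internally: hand the (potentially blocking)
    // messaging call to the executor and return a CompletableFuture immediately.
    static CompletableFuture<String> submit(ExecutorService executor) {
        return CompletableFuture.supplyAsync(() -> "invoice-42", executor);
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        CompletableFuture<String> future = submit(executor);
        System.out.println(future.join()); // blocks only when the caller asks for the result
        executor.shutdown();
    }
}
```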
When the async executor is explicitly set to null and the return type is CompletableFuture or the
return type is a subclass of CompletableFuture, the flow is invoked on the caller’s thread. In this
scenario, it is expected that the downstream flow will return a CompletableFuture of the appropriate
type.
Usage Scenarios
In this scenario, the caller thread returns immediately with a CompletableFuture<Invoice> which
will be completed when the downstream flow replies to the gateway (with an Invoice object).
In this scenario, the caller thread will return with a CompletableFuture<Invoice> when the downstream
flow provides it as the payload of the reply to the gateway. Some other process must complete the future
when the invoice is ready.
In this scenario, the caller thread will return with a CompletableFuture<Invoice> when the downstream
flow provides it as the payload of the reply to the gateway. Some other process must complete the
future when the invoice is ready. If DEBUG logging is enabled, a log is emitted indicating that the async
executor cannot be used for this scenario.
CompletableFuture s can be used to perform additional manipulation on the reply, such as:
...
...
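For illustration, here is a minimal plain-Java sketch of such manipulation (the process() stand-in simulates the reply future a gateway method would return):

```java
import java.util.concurrent.CompletableFuture;

public class ThenApplyDemo {

    // Stands in for the CompletableFuture reply a gateway method would return.
    static CompletableFuture<String> process(String data) {
        return CompletableFuture.completedFuture(data);
    }

    public static void main(String[] args) {
        // Attach additional processing to the reply before consuming it.
        String result = process("foo").thenApply(String::toUpperCase).join();
        System.out.println(result);
    }
}
```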
Reactor Mono
Starting with version 5.0, the GatewayProxyFactoryBean allows the use of the Project Reactor with
gateway interface methods, utilizing a Mono<T> return type. The internal AsyncInvocationTask is
wrapped in a Mono.fromCallable().
A Mono can be used to retrieve the result later (similar to a Future<?>) or you can consume from it
with the dispatcher invoking your Consumer when the result is returned to the gateway.
Important
The Mono isn’t flushed immediately by the framework. Hence the underlying message flow won’t
be started before the gateway method returns (as it is with Future<?> Executor task). The flow
will be started when the Mono is subscribed. Alternatively, the Mono (being a Composable) might
be a part of Reactor stream, when the subscribe() is related to the entire Flux. For example:
@MessagingGateway
public static interface TestGateway {

    @Gateway(requestChannel = "promiseChannel")
    Mono<Integer> multiply(Integer value);

}
...
@ServiceActivator(inputChannel = "promiseChannel")
public Integer multiply(Integer value) {
return value * 2;
}
...
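For instance, a sketch of subscribing as part of a larger stream (the logger and the exact terminal operation are assumptions for illustration):

```java
Flux.just("1", "2", "3", "4", "5")
        .map(Integer::parseInt)
        .flatMap(this.testGateway::multiply) // each element starts the flow on subscription
        .collectList()
        .subscribe(results -> log.info("Results: {}", results));
```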
The calling thread continues, with handleInvoice() being called when the flow completes.
Gateway behavior when no response arrives

As explained earlier, the Gateway provides a convenient way of interacting with a Messaging system
via POJO method invocations. However, a typical method invocation, which is generally
expected to always return (even with an Exception), might not always map one-to-one to message
exchanges (e.g., a reply message might not arrive - which is equivalent to a method not returning). It is
important to go over several scenarios, especially in the Sync Gateway case, and understand the default
behavior of the Gateway and how to deal with these scenarios to make the Sync Gateway behavior
more predictable, regardless of the outcome of the message flow that was initiated from such a Gateway.
There are certain attributes that can be configured to make Sync Gateway behavior more predictable,
but some of them might not always work as you might expect. One of them is reply-timeout (at
the method level, or default-reply-timeout at the gateway level). So, let's look at the reply-timeout attribute
and see how it can or cannot influence the behavior of the Sync Gateway in various scenarios. We will look
at a single-threaded scenario (all components downstream are connected via Direct Channels) and multi-
threaded scenarios (e.g., somewhere downstream you may have a Pollable or Executor Channel which
breaks the single-thread boundary).
Sync Gateway - single-threaded. If a component downstream is still running (e.g., infinite loop or a
very slow service), then setting a reply-timeout has no effect and the Gateway method call will not
return until such downstream service exits (via return or exception). Sync Gateway - multi-threaded. If
a component downstream is still running (e.g., infinite loop or a very slow service), in a multi-threaded
message flow setting the reply-timeout will have an effect by allowing gateway method invocation to
return once the timeout has been reached, since the GatewayProxyFactoryBean will simply poll on
the reply channel waiting for a message until the timeout expires. However it could result in a null return
from the Gateway method if the timeout has been reached before the actual reply was produced. It is
also important to understand that the reply message (if produced) will be sent to a reply channel after
the Gateway method invocation might have returned, so you must be aware of that and design your
flow with this in mind.
Downstream component returns null

Sync Gateway - single-threaded. If a component downstream returns null, the Gateway method call
will hang indefinitely unless: a) a reply-timeout has been configured,
or b) the requires-reply attribute has been set on the downstream component (e.g., service-activator)
that might return null. In this case, an Exception will be thrown and propagated to the Gateway.

Sync Gateway - multi-threaded. Behavior is the same as above.
Downstream component return signature is void while Gateway method signature is non-void
Sync Gateway - single-threaded. If a component downstream returns void and no reply-timeout has been
configured, the Gateway method call will hang indefinitely unless a reply-timeout has been configured.
Sync Gateway - multi-threaded. Behavior is the same as above.
Important
It is also important to understand that, by default, reply-timeout is unbounded, which means that
if it is not explicitly set there are several scenarios (described above) where your Gateway method
invocation might hang indefinitely. So, make sure you analyze your flow, and if there is even a
remote possibility of one of these scenarios occurring, set the reply-timeout attribute to a safe value.
Even better, set the requires-reply attribute of the downstream component to true to ensure
a timely response, produced by throwing an Exception as soon as that downstream
component returns null internally. But also realize that there are some scenarios (see the
very first one) where reply-timeout will not help. That means it is also important to analyze your
message flow and decide when to use a Sync Gateway vs. an Async Gateway. As you've seen,
the latter case is simply a matter of defining Gateway methods that return Future instances. Then
you are guaranteed to receive that return value, and you will have more granular control over the
results of the invocation. Also, when dealing with a Router, you should remember that setting the
resolution-required attribute to true will result in an Exception thrown by the router if it cannot
resolve a particular channel. Likewise, when dealing with a Filter, you can set the throw-exception-
on-rejection attribute. In both of these cases, the resulting flow will behave like one containing
a service-activator with the requires-reply attribute. In other words, it will help to ensure a timely
response from the Gateway method invocation.
Important
It is important to understand that the timer starts when the thread returns to the gateway, i.e. when
the flow completes or a message is handed off to another thread. At that time, the calling thread
starts waiting for the reply. If the flow was completely synchronous, the reply will be immediately
available; for asynchronous flows, the thread will wait for up to this time.
Also see Section 9.21, “IntegrationFlow as Gateway” in the Java DSL chapter for options to define
gateways via IntegrationFlows.
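The difference between the sync gateway's timed reply poll and an async gateway's Future return can be sketched in plain Java. This is an illustrative stand-in for the semantics only, not the actual GatewayProxyFactoryBean API; the method and class names are hypothetical:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class GatewayTimeoutDemo {

    // Simulates the gateway's reply poll: wait up to timeoutMs for a reply,
    // returning null when the timeout expires first (a stand-in for the
    // gateway's receive-with-timeout behavior in a multi-threaded flow).
    public static String pollForReply(CompletableFuture<String> reply, long timeoutMs) {
        try {
            return reply.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return null; // reply-timeout elapsed: the gateway method returns null
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        // A slow downstream component on another thread (500 ms "service").
        CompletableFuture<String> slowReply = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(500); } catch (InterruptedException e) { /* ignore */ }
            return "late reply";
        }, exec);

        // With a 50 ms timeout the caller gets null; the reply is produced later.
        System.out.println(pollForReply(slowReply, 50)); // prints: null
        exec.shutdown();
    }
}
```

With a Future-returning gateway method, by contrast, the caller holds the Future itself and decides how long to wait on each invocation.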
The Service Activator is the endpoint type for connecting any Spring-managed Object to an input channel
so that it may play the role of a service. If the service produces output, it may also be connected to an
output channel. Alternatively, an output producing service may be located at the end of a processing
pipeline or message flow in which case, the inbound Message’s "replyChannel" header can be used.
This is the default behavior if no output channel is defined and, as with most of the configuration options
you’ll see here, the same behavior actually applies for most of the other components we have seen.
To create a Service Activator, use the service-activator element with the input-channel and ref attributes:
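For example (the handler bean name and channel name here are illustrative):

```xml
<int:service-activator input-channel="exampleChannel" ref="exampleHandler"/>
```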
The configuration above selects all methods from the exampleHandler that meet one of the
messaging requirements:
• is public;
The target method for invocation at runtime is selected for each request message by its payload
type, falling back to a method that accepts Message<?> if such a method is present on the target class.
Starting with version 5.0, one service method can be marked with the
@org.springframework.integration.annotation.Default as a fallback for all non-matching
cases. This can be useful when using the section called “Content Type Conversion” with the target
method being invoked after conversion.
To delegate to an explicitly defined method of any object, simply add the "method" attribute.
In either case, when the service method returns a non-null value, the endpoint will attempt to send the
reply message to an appropriate reply channel. To determine the reply channel, it will first check if an
"output-channel" was provided in the endpoint configuration:
If the method returns a result and no "output-channel" is defined, the framework will then check the
request Message’s replyChannel header value. If that value is available, it will then check its type.
If it is a MessageChannel, the reply message will be sent to that channel. If it is a String, then
the endpoint will attempt to resolve the channel name to a channel instance. If the channel cannot
be resolved, then a DestinationResolutionException will be thrown. If it can be resolved, the
Message will be sent there. If the request Message doesn’t have a replyChannel header and the
reply object is a Message, its replyChannel header is consulted for a target destination. This is
the technique used for Request Reply messaging in Spring Integration, and it is also an example of the
Return Address pattern.
If your method returns a result, and you want to discard it and end the flow, you should configure the
output-channel to send to a NullChannel. For convenience, the framework registers one with the
name nullChannel. See the section called “Special Channels” for more information.
The Service Activator is one of those components that is not required to produce a reply
message. If your method returns null or has a void return type, the Service Activator
exits after the method invocation, without any signals. This behavior can be controlled by
the AbstractReplyProducingMessageHandler.requiresReply option, also exposed as
requires-reply when configuring with the XML namespace. If the flag is set to true and the method
returns null, a ReplyRequiredException is thrown.
The argument in the service method could be either a Message or an arbitrary type. If the latter, it is
assumed to be the Message payload, which is extracted from the message and injected into the
service method. This is generally the recommended approach, as it follows and promotes a
POJO model when working with Spring Integration. Arguments may also have @Header or @Headers
annotations, as described in Section E.6, “Annotation Support”.
Note
The service method is not required to have any arguments at all, which means you can implement
event-style Service Activators, where all you care about is an invocation of the service method,
not worrying about the contents of the message. Think of it as a NULL JMS message. An example
use-case for such an implementation could be a simple counter/monitor of messages deposited
on the input channel.
Starting with version 4.1, the framework correctly converts Message properties (payload and headers)
to Java 8 Optional POJO method parameters:
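As an illustrative plain-Java sketch of the effect of that conversion (the method and header names are hypothetical, not framework API), a missing header reaches the POJO method as Optional.empty() instead of null:

```java
import java.util.Map;
import java.util.Optional;

public class OptionalHeaderDemo {

    // Hypothetical POJO service method: a missing "delay" header arrives as
    // Optional.empty(), so no null check is needed in the service code.
    public static long delayOrDefault(Optional<Long> delayHeader) {
        return delayHeader.orElse(0L);
    }

    // Stand-in for the framework step that wraps a possibly-null header value.
    public static Optional<Long> headerAsOptional(Map<String, Object> headers, String name) {
        return Optional.ofNullable((Long) headers.get(name));
    }

    public static void main(String[] args) {
        System.out.println(delayOrDefault(headerAsOptional(Map.of("delay", 3000L), "delay"))); // prints: 3000
        System.out.println(delayOrDefault(headerAsOptional(Map.of(), "delay")));               // prints: 0
    }
}
```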
Using a ref attribute is generally recommended if the custom Service Activator handler implementation
can be reused in other <service-activator> definitions. However if the custom Service Activator
handler implementation is only used within a single definition of the <service-activator>, you can
provide an inner bean definition:
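For example (the class and channel names here are hypothetical):

```xml
<int:service-activator input-channel="exampleChannel" output-channel="replyChannel" method="handle">
    <beans:bean class="org.foo.ExampleServiceActivator"/>
</int:service-activator>
```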
Note
Using both the "ref" attribute and an inner handler definition in the same <service-activator>
configuration is not allowed, as it creates an ambiguous condition and will result in an Exception
being thrown.
Important
Since Spring Integration 2.0, Service Activators can also benefit from SpEL (http://
static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/expressions.html).
For example, you may now invoke any bean method without pointing to the bean via a ref attribute or
including it as an inner bean definition. For example:
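A sketch of such a configuration (the bean, channel, and header names are illustrative):

```xml
<int:service-activator input-channel="accountChannel"
    expression="@accountService.processAccount(payload, headers.accountId)"/>
```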
In the above configuration instead of injecting accountService using a ref or as an inner bean, we
are simply using SpEL’s @beanId notation and invoking a method which takes a type compatible with
Message payload. We are also passing a header value. As you can see, any valid SpEL expression
can be evaluated against any content in the Message. For simple scenarios your Service Activators do
not even have to reference a bean if all logic can be encapsulated by such an expression.
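For instance, a doubling service can be expressed entirely in SpEL (channel names are illustrative):

```xml
<int:service-activator input-channel="in" output-channel="out" expression="payload * 2"/>
```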
In the above configuration our service logic is simply to multiply the payload value by 2, and SpEL lets
us handle it relatively easily.
See Section 9.12, “ServiceActivators (.handle())” in Java DSL chapter for more information about
configuring Service Activator.
If the service completes the future with an Exception, normal error processing will occur: an
ErrorMessage is sent to the channel indicated by the errorChannel message header, if present, or
otherwise to the default errorChannel (if available).
8.6 Delayer
Introduction
A Delayer is a simple endpoint that allows a Message flow to be delayed by a certain interval. When a
Message is delayed, the original sender will not block. Instead, the delayed Messages will be scheduled
with an instance of org.springframework.scheduling.TaskScheduler to be sent to the output
channel after the delay has passed. This approach is scalable even for rather long delays, since it does
not result in a large number of blocked sender Threads. On the contrary, in the typical case a thread pool
will be used for the actual execution of releasing the Messages. Below you will find several examples
of configuring a Delayer.
Configuring a Delayer
The <delayer> element is used to delay the Message flow between two Message Channels. As
with the other endpoints, you can provide the input-channel and output-channel attributes, but the
delayer also has default-delay and expression attributes (and expression sub-element) that are used
to determine the number of milliseconds that each Message should be delayed. The following delays
all messages by 3 seconds:
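For example (channel names are illustrative):

```xml
<int:delayer id="delayer" input-channel="input" output-channel="output" default-delay="3000"/>
```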
If you need per-Message determination of the delay, then you can also provide the SpEL expression
using the expression attribute:
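For example, assuming a delay header carries the per-message delay (header and channel names are illustrative):

```xml
<int:delayer id="delayer" input-channel="input" output-channel="output"
             default-delay="3000" expression="headers['delay']"/>
```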
In the example above, the 3 second delay would only apply when the expression evaluates to null for
a given inbound Message. If you only want to apply a delay to Messages that have a valid result of the
expression evaluation, then you can use a default-delay of 0 (the default). For any Message that has a
delay of 0 (or less), the Message will be sent immediately, on the calling Thread.
@ServiceActivator(inputChannel = "input")
@Bean
public DelayHandler delayer() {
DelayHandler handler = new DelayHandler("delayer.messageGroupId");
handler.setDefaultDelay(3_000L);
handler.setDelayExpressionString("headers['delay']");
handler.setOutputChannelName("output");
return handler;
}
@Bean
public IntegrationFlow flow() {
return IntegrationFlows.from("input")
.delay("delayer.messageGroupId", d -> d
.defaultDelay(3_000L)
.delayExpression("headers['delay']"))
.channel("output")
.get();
}
Tip
The delay handler supports expression evaluation results that represent an interval in milliseconds
(any Object whose toString() method produces a value that can be parsed into a Long) as well
as java.util.Date instances representing an absolute time. In the first case, the milliseconds
will be counted from the current time (e.g. a value of 5000 would delay the Message for at least 5
seconds from the time it is received by the Delayer). With a Date instance, the Message will not
be released until the time represented by that Date object. In either case, a value that equates
to a non-positive delay, or a Date in the past, will not result in any delay. Instead, it will be sent
directly to the output channel on the original sender’s Thread. If the expression evaluation result
is not a Date, and can not be parsed as a Long, the default delay (if any) will be applied.
Important
The expression evaluation may throw an evaluation Exception for various reasons, including an
invalid expression, or other conditions. By default, such exceptions are ignored (logged at DEBUG
level) and the delayer falls back to the default delay (if any). You can modify this behavior by
setting the ignore-expression-failures attribute. By default this attribute is set to true
and the Delayer behavior is as described above. However, if you wish to not ignore expression
evaluation exceptions, and throw them to the delayer’s caller, set the ignore-expression-
failures attribute to false.
Tip
Notice in the example above that the delay expression is specified as headers['delay']. This
is the SpEL Indexer syntax to access a Map element (MessageHeaders implements Map), it
invokes: headers.get("delay"). For simple map element names (that do not contain .) you
can also use the SpEL dot accessor syntax, where the above header expression can be specified
as headers.delay. But, different results are achieved if the header is missing. In the first case,
the expression will evaluate to null; the second will result in something like:
So, if there is a possibility of the header being omitted, and you want to fall back to the default
delay, it is generally more efficient (and recommended) to use the Indexer syntax instead of dot
property accessor syntax, because detecting the null is faster than catching an exception.
The delayer delegates to an instance of Spring’s TaskScheduler abstraction. The default scheduler
used by the delayer is the ThreadPoolTaskScheduler instance provided by Spring Integration on
startup: Section E.3, “Configuring the Task Scheduler”. If you want to delegate to a different scheduler,
you can provide a reference through the delayer element’s scheduler attribute:
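For example (the scheduler bean name and pool size are illustrative):

```xml
<int:delayer id="delayer" input-channel="input" output-channel="output"
             default-delay="0" expression="headers['delay']"
             scheduler="exampleTaskScheduler"/>

<task:scheduler id="exampleTaskScheduler" pool-size="5"/>
```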
Tip
If you configure an external ThreadPoolTaskScheduler, you can set waitForTasksToCompleteOnShutdown
= true on that scheduler. This allows successful completion of delay tasks that are already in
execution (releasing the Message) when the application is shut down. Before Spring Integration 2.2,
this property was available on the <delayer> element, because the DelayHandler could create its
own scheduler behind the scenes. Since 2.2, the delayer requires an external scheduler instance and
waitForTasksToCompleteOnShutdown was removed; you should use the scheduler's own configuration.
Tip
The DelayHandler persists delayed Messages into the Message Group in the provided
MessageStore. (The groupId is based on the required id attribute of the <delayer> element.) A delayed
message is removed from the MessageStore by the scheduled task just before the DelayHandler
sends the Message to the output-channel. If the provided MessageStore is persistent (e.g.,
JdbcMessageStore), it provides the ability to not lose Messages on application shutdown.
After application startup, the DelayHandler reads Messages from its Message Group in the
MessageStore and reschedules them with a delay based on the original arrival time of the Message
(if the delay is numeric). For messages where the delay header was a Date, that Date is used when
rescheduling. If a delayed Message remained in the MessageStore for longer than its delay, it will be
sent immediately after startup.
Delayed messages can also be rescheduled on demand by invoking the DelayHandler's
reschedulePersistedMessages() method, for example via the Control Bus:

Message<String> delayerReschedulingMessage =
MessageBuilder.withPayload("@'delayer.handler'.reschedulePersistedMessages()").build();
controlBusChannel.send(delayerReschedulingMessage);
Note
For more information regarding the Message Store, JMX and the Control Bus, please read
Chapter 10, System Management.
Important
Note that this feature requires Java 6 or higher. Sun developed a JSR223 reference
implementation which works with Java 5 but it is not officially supported and we have not tested
it with Spring Integration.
In order to use a JVM scripting language, a JSR223 implementation for that language must be included
in your class path. Java 6 natively supports JavaScript. The Groovy and JRuby projects provide JSR223
support in their standard distributions. Other language implementations may be available or under
development. Please refer to the appropriate project website for more information.
Important
Various JSR223 language implementations have been developed by third parties. A particular
implementation’s compatibility with Spring Integration depends on how well it conforms to the
specification and/or the implementer’s interpretation of the specification.
Tip
If you plan to use Groovy as your scripting language, we recommend you use Spring
Integration's Groovy Support, as it offers additional features specific to Groovy. However, you will
find this section relevant as well.
Script configuration
Depending on the complexity of your integration requirements, scripts may be provided inline as CDATA
in XML configuration or as a reference to a Spring resource containing the script. To enable scripting
support, Spring Integration defines a ScriptExecutingMessageProcessor, which binds the
Message payload to a variable named payload and the Message headers to a headers variable,
both accessible within the script execution context. All that is left for you to do is write a script that uses
these variables. Below are a couple of sample configurations:
Filter
<int:filter input-channel="referencedScriptInput">
<int-script:script lang="ruby" location="some/path/to/ruby/script/RubyFilterTests.rb"/>
</int:filter>
<int:filter input-channel="inlineScriptInput">
<int-script:script lang="groovy">
<![CDATA[
return payload == 'good'
]]>
</int-script:script>
</int:filter>
Here, you see that the script can be included inline or can reference a resource location via the
location attribute. Additionally, the lang attribute corresponds to the language name (or its JSR223 alias).
Other Spring Integration endpoint elements which support scripting include router, service-activator,
transformer, and splitter. The scripting configuration in each case would be identical to the above
(besides the endpoint element).
Another useful feature of Scripting support is the ability to update (reload) scripts without having to
restart the Application Context. To accomplish this, specify the refresh-check-delay attribute on
the script element:
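For example (the script location is illustrative):

```xml
<int-script:script location="some/path/to/script.groovy" refresh-check-delay="5000"/>
```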
In the above example, the script location will be checked for updates every 5 seconds. If the script is
updated, any invocation that occurs later than 5 seconds since the update will result in execution of
the new script.
With a refresh-check-delay of 0, the context will be updated with any script modification as soon as such
a modification occurs, providing a simple mechanism for real-time configuration. Any negative
value means the script will not be reloaded after initialization of the application context; this is the default
behavior.
Variable bindings are required to enable the script to reference variables externally provided to the
script’s execution context. As we have seen, payload and headers are used as binding variables by
default. You can bind additional variables to a script via <variable> sub-elements:
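For example (the variable names and bean reference are illustrative):

```xml
<int-script:script location="foo/bar/MyScript.groovy">
    <int-script:variable name="foo" value="FOO"/>
    <int-script:variable name="date" ref="dateBean"/>
</int-script:script>
```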
As shown in the above example, you can bind a script variable either to a scalar value or a Spring bean
reference. Note that payload and headers will still be included as binding variables.
With Spring Integration 3.0, in addition to the variable sub-element, the variables attribute has
been introduced. This attribute and variable sub-elements aren’t mutually exclusive and you can
combine them within one script component. However variables must be unique, regardless of where
they are defined. Also, since Spring Integration 3.0, variable bindings are allowed for inline scripts too:
<service-activator input-channel="input">
<script:script lang="ruby" variables="foo=FOO, date-ref=dateBean">
<script:variable name="bar" ref="barBean"/>
<script:variable name="baz" value="bar"/>
<![CDATA[
payload.foo = foo
payload.date = date
payload.bar = bar
payload.baz = baz
payload
]]>
</script:script>
</service-activator>
The example above shows a combination of an inline script, a variable sub-element and a
variables attribute. The variables attribute is a comma-separated value, where each segment
contains an = separated pair of the variable and its value. The variable name can be suffixed with -ref,
as in the date-ref variable above. That means that the binding variable will have the name date, but
the value will be a reference to the dateBean bean from the application context. This may be useful
when using Property Placeholder Configuration or command line arguments.
If you need more control over how variables are generated, you can implement your own Java class
using the ScriptVariableGenerator strategy:
<int-script:script location="foo/bar/MyScript.groovy"
script-variable-generator="variableGenerator"/>
Important
You cannot provide both the script-variable-generator attribute and <variable> sub-
element(s) as they are mutually exclusive.
Groovy configuration
With Spring Integration 2.1, Groovy Support’s configuration namespace is an extension of Spring
Integration’s Scripting Support and shares the core configuration and behavior described in detail in
the Scripting Support section. Even though Groovy scripts are well supported by generic Scripting
Support, Groovy Support provides the Groovy configuration namespace which is backed by the
Spring Framework’s org.springframework.scripting.groovy.GroovyScriptFactory and
related components, offering extended capabilities for using Groovy. Below are a couple of sample
configurations:
Filter
<int:filter input-channel="referencedScriptInput">
<int-groovy:script location="some/path/to/groovy/file/GroovyFilterTests.groovy"/>
</int:filter>
<int:filter input-channel="inlineScriptInput">
<int-groovy:script><![CDATA[
return payload == 'good'
]]></int-groovy:script>
</int:filter>
As the above examples show, the configuration looks identical to the general Scripting Support
configuration. The only difference is the use of the Groovy namespace as indicated in the examples by
the int-groovy namespace prefix. Also note that the lang attribute on the <script> tag is not valid
in this namespace.
If you need to customize the Groovy object itself, beyond setting variables, you can reference a bean
that implements GroovyObjectCustomizer via the customizer attribute. For example, this might
be useful if you want to implement a domain-specific language (DSL) by modifying the MetaClass and
registering functions to be available within the script:
<int:service-activator input-channel="groovyChannel">
<int-groovy:script location="foo/SomeScript.groovy" customizer="groovyCustomizer"/>
</int:service-activator>
With Spring Integration 3.0, in addition to the variable sub-element, the variables attribute
has been introduced. Also, Groovy scripts can resolve a variable to a bean in the
BeanFactory if a binding variable was not provided with that name:
<int-groovy:script>
<![CDATA[
entityManager.persist(payload)
payload
]]>
</int-groovy:script>
The @CompileStatic hint is the most popular Groovy compiler customization option, which can
be used on the class or method level. See more information in the Groovy Reference Manual and,
specifically, @CompileStatic. To utilize this feature for short scripts (in integration scenarios), we are
forced to change a simple script like this (a <filter> script):
headers.type == 'good'
into the following:
@groovy.transform.CompileStatic
String filter(Map headers) {
headers.type == 'good'
}
filter(headers)
With that, the filter() method will be transformed and compiled to static Java code, bypassing the
Groovy dynamic phases of invocation, like getProperty() factories and CallSite proxies.
Starting with version 4.3, Spring Integration Groovy components can be configured with the compile-
static boolean option, specifying that ASTTransformationCustomizer for @CompileStatic
should be added to the internal CompilerConfiguration. With that in place, we can omit the method
declaration with @CompileStatic in our script code and still get compiled plain Java code. In this case
our script can remain short, but it still needs to be a little more verbose than the interpreted script:
binding.variables.headers.type == 'good'
Where we can access the headers and payload (or any other) variables only through the
groovy.lang.Script binding property since, with @CompileStatic, we don’t have the dynamic
GroovyObject.getProperty() capability.
In addition, the compiler-configuration bean reference has been introduced. With this attribute,
you can provide any other required Groovy compiler customizations, e.g. ImportCustomizer. For
more information about this feature, please, refer to the Groovy Documentation: Advanced compiler
configuration.
Note
The Groovy compiler customization does not have any effect on the refresh-check-delay
option; reloadable scripts can be statically compiled, too.
Control Bus
As described in (EIP), the idea behind the Control Bus is that the same messaging system can be used
for monitoring and managing the components within the framework as is used for "application-level"
messaging. In Spring Integration we build upon the adapters described above so that it’s possible to
send Messages as a means of invoking exposed operations. One option for those operations is Groovy
scripts.
<int-groovy:control-bus input-channel="operationChannel"/>
The Control Bus has an input channel that can be accessed for invoking operations on the beans in
the application context.
The Groovy Control Bus executes messages on the input channel as Groovy scripts. It takes
a message, compiles the body to a Script, customizes it with a GroovyObjectCustomizer,
and then executes it. The Control Bus' MessageProcessor exposes all beans in the application
context that are annotated with @ManagedResource, implement Spring’s Lifecycle interface or
extend Spring’s CustomizableThreadCreator base class (e.g. several of the TaskExecutor and
TaskScheduler implementations).
Important
Be careful about using managed beans with custom scopes (e.g., request) in the Control
Bus' command scripts, especially inside an asynchronous message flow. If the Control Bus'
MessageProcessor can't expose a bean from the application context, you may end up
with a BeansException during the command script's execution. For example, if a custom
scope's context is not established, the attempt to get a bean within that scope will trigger a
BeanCreationException.
If you need to further customize the Groovy objects, you can also provide a reference to a bean that
implements GroovyObjectCustomizer via the customizer attribute.
<int-groovy:control-bus input-channel="input"
output-channel="output"
customizer="groovyCustomizer"/>
Prior to Spring Integration 2.2, you could add behavior to an entire Integration flow by adding an AOP
Advice to a poller's <advice-chain/> element. However, let's say you want to retry, say, just a REST
Web Service call, and not any downstream endpoints. For example, consider the following flow:
inbound-adapter → poller → http-gateway1 → http-gateway2 → jdbc-outbound-adapter
If you configure some retry-logic into an advice chain on the poller, and, the call to http-gateway2 failed
because of a network glitch, the retry would cause both http-gateway1 and http-gateway2 to be called a
second time. Similarly, after a transient failure in the jdbc-outbound-adapter, both http-gateways would
be called a second time before again calling the jdbc-outbound-adapter.
Spring Integration 2.2 adds the ability to add behavior to individual endpoints. This is achieved by the
addition of the <request-handler-advice-chain/> element to many endpoints. For example:
<int-http:outbound-gateway id="withAdvice"
url-expression="'http://localhost/test1'"
request-channel="requests"
reply-channel="nextChannel">
<int:request-handler-advice-chain>
<ref bean="myRetryAdvice" />
</int:request-handler-advice-chain>
</int-http:outbound-gateway>
In this case, myRetryAdvice will only be applied locally to this gateway and will not apply to further
actions taken downstream after the reply is sent to the nextChannel. The scope of the advice is limited
to the endpoint itself.
Important
At this time, you cannot advise an entire <chain/> of endpoints. The schema does not allow a
<request-handler-advice-chain/> as a child element of the chain itself.
In addition to providing the general mechanism to apply AOP Advice classes in this way, three standard
Advices are provided:
• RequestHandlerRetryAdvice
• RequestHandlerCircuitBreakerAdvice
• ExpressionEvaluatingRequestHandlerAdvice
Retry Advice
Stateless Retry
Stateless retry is the case where the retry activity is handled entirely within the advice, where the thread
pauses (if so configured) and retries the action.
Stateful Retry
Stateful retry is the case where the retry state is managed within the advice, but where an exception is
thrown and the caller resubmits the request. An example for stateful retry is when we want the message
originator (e.g. JMS) to be responsible for resubmitting, rather than performing it on the current thread.
Stateful retry needs some mechanism to detect a retried submission.
Further Information
For more information on spring-retry, refer to the project’s javadocs, as well as the reference
documentation for Spring Batch, where spring-retry originated.
Warning
The default back off behavior is no back off - retries are attempted immediately. Using a back off
policy that causes threads to pause between attempts may cause performance issues, including
excessive memory use and thread starvation. In high volume environments, back off policies
should be used with caution.
The following examples use a simple <service-activator/> that always throws an exception:
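A sketch of such a configuration (the bean, method, and channel names are illustrative):

```xml
<int:service-activator input-channel="input" ref="failer" method="service">
    <int:request-handler-advice-chain>
        <bean class="org.springframework.integration.handler.advice.RequestHandlerRetryAdvice"/>
    </int:request-handler-advice-chain>
</int:service-activator>
```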
This example uses the default RetryTemplate, which has a SimpleRetryPolicy that tries 3 times.
There is no BackOffPolicy, so the 3 attempts are made back-to-back with no delay between
attempts. There is no RecoveryCallback, so the result is to throw the exception to the caller after
the final retry fails. In a Spring Integration environment, this final exception might be handled
using an error-channel on the inbound endpoint.
For more sophistication, we can provide the advice with a customized RetryTemplate. This example
continues to use the SimpleRetryPolicy, but increases the attempts to 4. It also adds an
ExponentialBackOffPolicy where the first retry waits 1 second, the second waits 5 seconds, and
the third waits 25 seconds (for 4 attempts in all).
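Assuming the usual exponential back-off formula, delay(n) = initial × multiplier^(n-1) capped at a maximum, those waits fall out directly. The following plain-Java sketch illustrates the arithmetic (it is not Spring Retry code):

```java
public class BackoffDemo {

    // Delay before retry n (1-based): initial * multiplier^(n-1), capped at maxMs.
    public static long backoffDelay(long initialMs, double multiplier, long maxMs, int attempt) {
        double delay = initialMs * Math.pow(multiplier, attempt - 1);
        return (long) Math.min(delay, maxMs);
    }

    public static void main(String[] args) {
        // initial = 1000 ms, multiplier = 5: waits of 1s, 5s and 25s between 4 attempts
        for (int n = 1; n <= 3; n++) {
            System.out.println(backoffDelay(1000, 5.0, 60000, n));
        }
    }
}
```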
Starting with version 4.0, the above configuration can be greatly simplified with the namespace support
for the retry advice:
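A sketch of the namespace form (the id, recovery channel, and back-off values are illustrative):

```xml
<int:handler-retry-advice id="retrier" max-attempts="4" recovery-channel="myRecoveryChannel">
    <int:exponential-back-off initial="1000" multiplier="5.0" maximum="60000"/>
</int:handler-retry-advice>
```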
In this example, the advice is defined as a top level bean so it can be used in multiple request-
handler-advice-chain s. You can also define the advice directly within the chain:
A <handler-retry-advice/> with no child element uses no back off; it can have a fixed-back-off
or exponential-back-off child element. If there is no recovery-channel, the exception is
thrown when retries are exhausted. The namespace can only be used with stateless retry.
For more complex environments (custom policies etc.), use normal <bean/> definitions.
To make retry stateful, we need to provide the Advice with a RetryStateGenerator implementation.
This class is used to identify a message as being a resubmission so that the RetryTemplate
can determine the current state of retry for this message. The framework provides a
SpelExpressionRetryStateGenerator which determines the message identifier using a SpEL
expression. This is shown below; this example again uses the default policies (3 attempts with no back
off); of course, as with stateless retry, these policies can be customized.
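A sketch of such a state generator bean (the header chosen as the message identifier is illustrative):

```xml
<bean id="retryStateGenerator"
      class="org.springframework.integration.handler.advice.SpelExpressionRetryStateGenerator">
    <constructor-arg value="headers['jms_messageId']"/>
</bean>
```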
Comparing with the stateless examples, you can see that with stateful retry, the exception is thrown to
the caller on each failure.
Spring Retry has a great deal of flexibility for determining which exceptions can invoke retry. The default
configuration will retry for all exceptions and the exception classifier just looks at the top level exception.
If you configure it to, say, only retry on BarException and your application throws a FooException
where the cause is a BarException, retry will not occur.
To use this classifier for retry, use a SimpleRetryPolicy created with the constructor that takes the
max attempts, the Map of Exceptions, and the boolean (traverseCauses), and inject this policy into
the RetryTemplate.
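The effect of traverseCauses can be illustrated with a plain-Java sketch (a hypothetical helper, not the Spring Retry API):

```java
class CauseClassifierSketch {

    // Decide whether to retry: classify the top-level exception and, when
    // traverseCauses is enabled, walk down the cause chain as well.
    static boolean shouldRetry(Throwable thrown,
            Class<? extends Throwable> retryable, boolean traverseCauses) {
        Throwable current = thrown;
        while (current != null) {
            if (retryable.isInstance(current)) {
                return true;
            }
            if (!traverseCauses) {
                return false; // only the top-level exception is classified
            }
            current = current.getCause();
        }
        return false;
    }
}
```

With traverseCauses set to false, an exception wrapping a retryable cause is not retried; with it set to true, the wrapped cause is found and retry occurs.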
The general idea of the Circuit Breaker Pattern is that, if a service is not
currently available, then don’t waste time (and resources) trying to use it. The
o.s.i.handler.advice.RequestHandlerCircuitBreakerAdvice implements this pattern.
When the circuit breaker is in the closed state, the endpoint will attempt to invoke the service. The circuit
breaker goes to the open state if a certain number of consecutive attempts fail; when it is in the open
state, new requests will "fail fast" and no attempt will be made to invoke the service until some time
has expired.
When that time has expired, the circuit breaker is set to the half-open state. When in this state, if even
a single attempt fails, the breaker will immediately go to the open state; if the attempt succeeds, the
breaker will go to the closed state, in which case, it won’t go to the open state again until the configured
number of consecutive failures again occur. Any successful attempt resets the state to zero failures for
the purpose of determining when the breaker might go to the open state again.
Typically, this Advice might be used for external services, where it might take some time to fail (such
as a timeout attempting to make a network connection).
Example:
In the above example, the threshold is set to 2 and halfOpenAfter is set to 12 seconds; a new request
arrives every 5 seconds. You can see that the first two attempts invoked the service; the third and fourth
failed with an exception indicating the circuit breaker is open. The fifth request was attempted because
it arrived 15 seconds after the last failure; the sixth attempt failed immediately because the breaker
went straight back to open.
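The state transitions described above can be sketched in plain Java (a simplified, hypothetical model; the real RequestHandlerCircuitBreakerAdvice keeps this state per handler):

```java
class CircuitBreakerSketch {

    private final int threshold;
    private final long halfOpenAfterMillis;
    private int failures;
    private long lastFailureTime;

    CircuitBreakerSketch(int threshold, long halfOpenAfterMillis) {
        this.threshold = threshold;
        this.halfOpenAfterMillis = halfOpenAfterMillis;
    }

    // 'now' is passed in so the sketch can be exercised without a real clock.
    boolean allowRequest(long now) {
        if (failures < threshold) {
            return true; // closed: attempt the service
        }
        // open: fail fast until halfOpenAfter has elapsed, then allow one trial (half-open)
        return now - lastFailureTime >= halfOpenAfterMillis;
    }

    void recordSuccess() {
        failures = 0; // any success resets the count; the breaker is closed again
    }

    void recordFailure(long now) {
        failures++; // a failure in the half-open state immediately re-opens the breaker
        lastFailureTime = now;
    }
}
```

Replaying the narrative above (threshold 2, halfOpenAfter 12 seconds, a request every 5 seconds) reproduces the same sequence: two attempts, two fail-fast rejections, one half-open trial, then fail-fast again.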
A typical use case for this advice might be with an <ftp:outbound-channel-adapter/>, perhaps
to move the file to one directory if the transfer was successful, or to another directory if it fails:
The Advice has properties to set an expression when successful, an expression for failures, and
corresponding channels for each. For the successful case, the message sent to the successChannel is
an AdviceMessage, with the payload being the result of the expression evaluation, and an additional
property inputMessage which contains the original message sent to the handler. A message sent
to the failureChannel (when the handler throws an exception) is an ErrorMessage with a payload of
MessageHandlingExpressionEvaluatingAdviceException. Like all MessagingExceptions,
this payload has failedMessage and cause properties, as well as an additional property,
evaluationResult, containing the result of the expression evaluation.
When an exception is thrown in the scope of the advice, by default that exception is thrown to the caller
after any failureExpression is evaluated. If you wish to suppress throwing the exception, set the
trapException property to true.
@SpringBootApplication
public class EerhaApplication {

    @Bean
    public IntegrationFlow advised() {
        return f -> f.handle((GenericHandler<String>) (payload, headers) -> {
            if (payload.equals("good")) {
                return null;
            }
            else {
                throw new RuntimeException("some failure");
            }
        }, c -> c.advice(expressionAdvice()));
    }

    @Bean
    public Advice expressionAdvice() {
        ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
        advice.setSuccessChannelName("success.input");
        advice.setOnSuccessExpressionString("payload + ' was successful'");
        advice.setFailureChannelName("failure.input");
        advice.setOnFailureExpressionString(
                "payload + ' was bad, with reason: ' + #exception.cause.message");
        advice.setTrapException(true);
        return advice;
    }

    @Bean
    public IntegrationFlow success() {
        return f -> f.handle(System.out::println);
    }

    @Bean
    public IntegrationFlow failure() {
        return f -> f.handle(System.out::println);
    }

}
In addition to the provided Advice classes above, you can implement your own Advice
classes. While you can provide any implementation of org.aopalliance.aop.Advice (usually
org.aopalliance.intercept.MethodInterceptor), it is generally recommended that you
subclass o.s.i.handler.advice.AbstractRequestHandlerAdvice. This has the benefit of
avoiding writing low-level Aspect Oriented Programming code as well as providing a starting point that
is specifically tailored for use in this environment.
/**
 * Subclasses implement this method to apply behavior to the {@link MessageHandler};
 * callback.execute() invokes the handler method and returns its result (or null).
 * @param callback Subclasses invoke the execute() method on this interface to
 * invoke the handler method.
 * @param target The target handler.
 * @param message The message that will be sent to the handler.
 * @return the result after invoking the {@link MessageHandler}.
 * @throws Exception
 */
protected abstract Object doInvoke(ExecutionCallback callback, Object target, Message<?> message)
        throws Exception;
The callback parameter is simply a convenience to avoid subclasses dealing with AOP directly; invoking
the callback.execute() method invokes the message handler.
The target parameter is provided for those subclasses that need to maintain state for a specific handler,
perhaps by maintaining that state in a Map, keyed by the target. This allows the same advice to be
applied to multiple handlers. The RequestHandlerCircuitBreakerAdvice uses this to keep circuit
breaker state for each handler.
The message parameter is the message that will be sent to the handler. While the advice cannot
modify the message before invoking the handler, it can modify the payload (if it has mutable properties).
Typically, an advice would use the message for logging and/or to send a copy of the message
somewhere before or after invoking the handler.
@Override
protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message)
        throws Exception {
    // add code before the invocation
    Object result = callback.execute();
    // add code after the invocation
    return result;
}
Note
While the abstract class mentioned above is provided as a convenience, you can add any Advice to
the chain, including a transaction advice.
As discussed in the introduction to this section, advice objects in a request handler advice chain are
applied to just the current endpoint, not the downstream flow (if any). For MessageHandlers that
produce a reply (AbstractReplyProducingMessageHandler), the advice is applied to an internal
method handleRequestMessage() (called from MessageHandler.handleMessage()). For other
message handlers, the advice is applied to MessageHandler.handleMessage().
In the case of a MessageHandler that does not return a response, the advice chain order is retained.
Transaction Support
Starting with version 5.0, a new TransactionHandleMessageAdvice has been introduced to make
the whole downstream flow transactional, thanks to the HandleMessageAdvice implementation.
When a regular TransactionInterceptor is used in the <request-handler-advice-chain>,
for example via <tx:advice> configuration, a started transaction is applied only to an internal
AbstractReplyProducingMessageHandler.handleRequestMessage() and isn't propagated
to the downstream flow.
For those familiar with JPA Integration components, such a configuration isn't new, but now we can
start a transaction from any point in the flow, not only from the <poller> or a Message Driven Channel
Adapter, as in JMS.
@Bean
public ConcurrentMetadataStore store() {
return new SimpleMetadataStore(hazelcastInstance()
.getMap("idempotentReceiverMetadataStore"));
}
@Bean
public IdempotentReceiverInterceptor idempotentReceiverInterceptor() {
return new IdempotentReceiverInterceptor(
new MetadataStoreSelector(
message -> message.getPayload().toString(),
message -> message.getPayload().toString().toUpperCase(), store()));
}
@Bean
public TransactionInterceptor transactionInterceptor() {
return new TransactionInterceptorBuilder(true)
.transactionManager(this.transactionManager)
.isolation(Isolation.READ_COMMITTED)
.propagation(Propagation.REQUIRES_NEW)
.build();
}
@Bean
@org.springframework.integration.annotation.Transformer(inputChannel = "input",
outputChannel = "output",
adviceChain = { "idempotentReceiverInterceptor",
"transactionInterceptor" })
public Transformer transformer() {
return message -> message;
}
Note the true argument to the TransactionInterceptorBuilder constructor: it causes the builder
to produce a TransactionHandleMessageAdvice, not a regular TransactionInterceptor.
The Java DSL supports such an Advice via the .transactional() options on the endpoint configuration:
@Bean
public IntegrationFlow updatingGatewayFlow() {
return f -> f
.handle(Jpa.updatingGateway(this.entityManagerFactory),
e -> e.transactional(true))
.channel(c -> c.queue("persistResults"));
}
Advising Filters
There is an additional consideration when advising Filters. By default, any discard actions (when
the filter returns false) are performed within the scope of the advice chain. This could include the entire
flow downstream of the discard channel. So, for example, if an element downstream of the discard channel
throws an exception and there is a retry advice, the process will be retried. This is also the case if
throwExceptionOnRejection is set to true (the exception is thrown within the scope of the advice).
Setting discard-within-advice to "false" modifies this behavior, and the discard (or exception) occurs after
the advice chain is called.
@MessageEndpoint
public class MyAdvisedFilter {
@Filter(inputChannel="input", outputChannel="output",
adviceChain="adviceChain", discardWithinAdvice="false")
public boolean filter(String s) {
return s.contains("good");
}
}
For example, let’s say you want to add a retry advice and a transaction advice. You may want to place
the retry advice first, followed by the transaction advice; then each retry will be performed in a
new transaction. On the other hand, if you want all the attempts, and any recovery operations (in the retry
RecoveryCallback), to be scoped within the transaction, you would put the transaction advice first.
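In the Java DSL, the order is simply the order in which the advices are supplied to the endpoint (a sketch; retryAdvice() and transactionInterceptor() are assumed to be beans like those shown elsewhere in this chapter):

```java
@Bean
public IntegrationFlow orderedAdviceFlow() {
    return f -> f.handle((GenericHandler<String>) (payload, headers) -> payload.toUpperCase(),
            // retry first, transaction second: each retry runs in a new transaction
            e -> e.advice(retryAdvice(), transactionInterceptor()));
}
```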
The target object can be accessed via the target argument when subclassing
AbstractRequestHandlerAdvice or invocation.getThis() when implementing
org.aopalliance.intercept.MethodInterceptor.
When the entire handler is advised (such as when the handler does not produce replies, or the advice
implements HandleMessageAdvice), you can simply cast the target object to the desired implemented
interface, such as NamedComponent.
When only the handleRequestMessage() method is advised (in a reply-producing handler), you
need to access the full handler, which is an AbstractReplyProducingMessageHandler:
AbstractReplyProducingMessageHandler handler =
((AbstractReplyProducingMessageHandler.RequestHandler) target).getAdvisedHandler();
Previously, users could have implemented this pattern by using a custom MessageSelector in a
<filter/> (Section 6.2, “Filter”), for example. However, since this pattern is really a behavior of an
endpoint rather than an endpoint itself, the Idempotent Receiver implementation doesn’t provide
an endpoint component; rather, it is applied to endpoints declared in the application.
To maintain state between messages and provide the ability to compare messages for idempotency,
the MetadataStoreSelector is provided. It accepts a MessageProcessor implementation (which
creates a lookup key based on the Message) and an optional ConcurrentMetadataStore
(Section 10.5, “Metadata Store”). See the MetadataStoreSelector JavaDocs for more
information. The value for the ConcurrentMetadataStore can also be customized using an additional
MessageProcessor. By default, the MetadataStoreSelector uses the timestamp message header.
<idempotent-receiver
id="" ❶
endpoint="" ❷
selector="" ❸
discard-channel="" ❹
metadata-store="" ❺
key-strategy="" ❻
key-expression="" ❼
value-strategy="" ❽
value-expression="" ❾
throw-exception-on-rejection="" /> ❿
For Java configuration, the method-level IdempotentReceiver annotation is provided. It is used to
mark a method that has a Messaging annotation (@ServiceActivator, @Router etc.) to specify
which IdempotentReceiverInterceptors will be applied to that endpoint:
@Bean
public IdempotentReceiverInterceptor idempotentReceiverInterceptor() {
return new IdempotentReceiverInterceptor(new MetadataStoreSelector(m ->
m.getHeaders().get(INVOICE_NBR_HEADER)));
}
@Bean
@ServiceActivator(inputChannel = "input", outputChannel = "output")
@IdempotentReceiver("idempotentReceiverInterceptor")
public MessageHandler myService() {
....
}
Note
<int:logging-channel-adapter
channel="" ❶
level="INFO" ❷
expression="" ❸
log-full-message="false" ❹
logger-name="" /> ❺
The following Spring Boot application provides an example of configuring the LoggingHandler using
Java configuration:
@SpringBootApplication
public class LoggingJavaApplication {
@Bean
@ServiceActivator(inputChannel = "logChannel")
public LoggingHandler logging() {
LoggingHandler adapter = new LoggingHandler(LoggingHandler.Level.DEBUG);
adapter.setLoggerName("TEST_LOGGER");
adapter.setLogExpressionString("headers.id + ': ' + payload");
return adapter;
}
@MessagingGateway(defaultRequestChannel = "logChannel")
public interface MyGateway {
The following Spring Boot application provides an example of configuring the logging channel adapter
using the Java DSL:
@SpringBootApplication
public class LoggingJavaApplication {
@Bean
public IntegrationFlow loggingFlow() {
return IntegrationFlows.from(MyGateway.class)
.log(LoggingHandler.Level.DEBUG, "TEST_LOGGER",
m -> m.getHeaders().getId() + ": " + m.getPayload());
}
@MessagingGateway
public interface MyGateway {
9. Java DSL
The Spring Integration JavaConfig and DSL extension provides a set of convenient Builders and a fluent
API to configure Spring Integration message flows from Spring @Configuration classes.
@Bean
public AtomicInteger integerSource() {
return new AtomicInteger();
}
@Bean
public IntegrationFlow myFlow() {
return IntegrationFlows.from(integerSource::getAndIncrement,
c -> c.poller(Pollers.fixedRate(100)))
.channel("inputChannel")
.filter((Integer p) -> p > 0)
.transform(Object::toString)
.channel(MessageChannels.queue())
.get();
}
}
As a result, after ApplicationContext startup, the Spring Integration endpoints and message channels
are created, as is the case after XML parsing. Such configuration can be used to replace XML
configuration or alongside it.
9.2 Introduction
The Java DSL for Spring Integration is essentially a facade for Spring Integration. The DSL provides a
simple way to embed Spring Integration Message Flows into your application using the fluent Builder
pattern together with existing Java and Annotation configurations from Spring Framework and Spring
Integration as well. Another useful tool to simplify configuration is Java 8 Lambdas.
The DSL is presented by the IntegrationFlows Factory for the IntegrationFlowBuilder. This
produces the IntegrationFlow component, which should be registered as a Spring bean (@Bean).
The builder pattern is used to express arbitrarily complex structures as a hierarchy of methods that may
accept Lambdas as arguments.
The Java DSL uses Spring Integration classes directly and bypasses any XML generation and parsing.
However, the DSL offers more than syntactic sugar on top of XML. One of its most compelling features is
the ability to define inline Lambdas to implement endpoint logic, eliminating the need for external classes
to implement custom logic. In some sense, Spring Integration’s support for the Spring Expression
Language (SpEL) and inline scripting address this, but Java Lambdas are easier and much more
powerful.
Endpoints are expressed as verbs in the DSL to improve readability. The following list includes the
common DSL method names and the associated EIP endpoint:
• transform → Transformer
• filter → Filter
• handle → ServiceActivator
• split → Splitter
• aggregate → Aggregator
• route → Router
• bridge → Bridge
Conceptually, integration processes are constructed by composing these endpoints into one or more
message flows. Note that EIP does not formally define the term message flow, but it is useful
to think of it as a unit of work that uses well-known messaging patterns. The DSL provides an
IntegrationFlow component to define a composition of channels and endpoints between them, but
the IntegrationFlow plays only a configuration role, to populate real beans in the application
context, and isn’t used at runtime:
@Bean
public IntegrationFlow integerFlow() {
return IntegrationFlows.from("input")
.<String, Integer>transform(Integer::parseInt)
.get();
}
Here we use the IntegrationFlows factory to define an IntegrationFlow bean using EIP-
methods from IntegrationFlowBuilder.
The transform method accepts a Lambda as an endpoint argument to operate on the message
payload. The real argument of this method is GenericTransformer<S, T>, hence any out-of-the-box
transformers (ObjectToJsonTransformer, FileToStringTransformer etc.) can be used here.
Under the covers, IntegrationFlowBuilder recognizes the MessageHandler and endpoint for
that: MessageTransformingHandler and ConsumerEndpointFactoryBean, respectively. Let’s
look at another example:
@Bean
public IntegrationFlow myFlow() {
return IntegrationFlows.from("input")
.filter("World"::equals)
.transform("Hello "::concat)
.handle(System.out::println)
.get();
}
The above example composes a sequence of Filter -> Transformer -> Service Activator.
The flow is one way, that is, it does not provide a reply message but simply prints the payload to
STDOUT. The endpoints are automatically wired together using direct channels.
@Bean
public MessageChannel priorityChannel() {
return MessageChannels.priority(this.mongoDbChannelMessageStore, "priorityGroup")
.interceptor(wireTap())
.get();
}
The same MessageChannels builder factory can be used in the channel() EIP-method from
IntegrationFlowBuilder to wire endpoints, similar to an input-channel/output-channel pair in
the XML configuration. By default, endpoints are wired via DirectChannels, where the bean name
is based on the pattern: [IntegrationFlow.beanName].channel#[channelNameIndex]. This
rule is also applied for unnamed channels produced by inline MessageChannels builder factory usage.
However, all MessageChannels methods have a channelId-aware variant to create the bean
names for MessageChannels. The MessageChannel references can be used, as well as beanNames,
as bean-method invocations. Here is a sample with possible variants of channel() EIP-method usage:
@Bean
public MessageChannel queueChannel() {
return MessageChannels.queue().get();
}
@Bean
public MessageChannel publishSubscribe() {
return MessageChannels.publishSubscribe().get();
}
@Bean
public IntegrationFlow channelFlow() {
return IntegrationFlows.from("input")
.fixedSubscriberChannel()
.channel("queueChannel")
.channel(publishSubscribe())
.channel(MessageChannels.executor("executorChannel", this.taskExecutor))
.channel("output")
.get();
}
• from("input") means: find and use the MessageChannel with the "input" id, or create one;
• channel("queueChannel") works the same way but, of course, uses an existing "queueChannel"
bean;
• channel("output") - registers a DirectChannel bean with the name "output", as long as no
beans with this name already exist.
Note: the IntegrationFlow definition shown above is valid and all of its channels are applied to
endpoints with BridgeHandlers.
Important
Avoid using the same inline channel definition via the MessageChannels factory from different
IntegrationFlows. Even though the DSL parser registers non-existent objects as beans, it cannot
determine that the same object (MessageChannel) is being used from different IntegrationFlow
containers.
This is wrong:
@Bean
public IntegrationFlow startFlow() {
return IntegrationFlows.from("input")
.transform(...)
.channel(MessageChannels.queue("queueChannel"))
.get();
}
@Bean
public IntegrationFlow endFlow() {
return IntegrationFlows.from(MessageChannels.queue("queueChannel"))
.handle(...)
.get();
}
To make this work, you just need to declare a @Bean for that channel and use its bean-method from
the different IntegrationFlows.
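A corrected version of the two flows above might look like this (a sketch; the transformer and handler shown are placeholders):

```java
@Bean
public MessageChannel queueChannel() {
    return MessageChannels.queue("queueChannel").get();
}

@Bean
public IntegrationFlow startFlow() {
    return IntegrationFlows.from("input")
            .<String, String>transform(String::toUpperCase) // placeholder transformer
            .channel(queueChannel())
            .get();
}

@Bean
public IntegrationFlow endFlow() {
    return IntegrationFlows.from(queueChannel())
            .handle(System.out::println) // placeholder handler
            .get();
}
```

Both flows now reference the single queueChannel bean, so they share the same MessageChannel instance.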
9.5 Pollers
A similar fluent API is provided to configure PollerMetadata for AbstractPollingEndpoint
implementations. The Pollers builder factory can be used to configure common bean definitions or
those created from IntegrationFlowBuilder EIP-methods:
@Bean(name = PollerMetadata.DEFAULT_POLLER)
public PollerMetadata poller() {
return Pollers.fixedRate(500).get();
}
@Bean
public IntegrationFlow flow2() {
return IntegrationFlows.from(this.inputChannel)
.transform(new PayloadSerializingTransformer(),
c -> c.autoStartup(false).id("payloadSerializingTransformer"))
.transform((Integer p) -> p * 2, c -> c.advice(this.expressionAdvice()))
.get();
}
In addition the EndpointSpec provides an id() method to allow you to register an endpoint bean with
a given bean name, rather than a generated one.
9.7 Transformers
The DSL API provides a convenient, fluent Transformers factory to be used as inline target object
definition within .transform() EIP-method:
@Bean
public IntegrationFlow transformFlow() {
return IntegrationFlows.from("input")
.transform(Transformers.fromJson(MyPojo.class))
.transform(Transformers.serializer())
.get();
}
It avoids inconvenient coding with setters and makes the flow definition more straightforward. Note
that Transformers can be used to declare target Transformers as @Beans and, again, use them
from an IntegrationFlow definition as bean-methods. Nevertheless, the DSL parser takes care of
bean declarations for inline objects, if they aren’t defined as beans yet.
See Transformers Java Docs for more information and supported factory methods.
@Bean
public MessageSource<Object> jdbcMessageSource() {
return new JdbcPollingChannelAdapter(this.dataSource, "SELECT * FROM foo");
}
@Bean
public IntegrationFlow pollingFlow() {
return IntegrationFlows.from(jdbcMessageSource(),
c -> c.poller(Pollers.fixedRate(100).maxMessagesPerPoll(1)))
.transform(Transformers.toJson())
.channel("furtherProcessChannel")
.get();
}
The result of the Supplier.get() call is automatically wrapped in a Message by the framework
(if it isn’t a message already).
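For example, a method reference on a bean can serve as the Supplier for a polled source (a sketch mirroring the AtomicInteger example earlier; the channel name is illustrative):

```java
@Bean
public AtomicInteger counter() {
    return new AtomicInteger();
}

@Bean
public IntegrationFlow supplierFlow() {
    // counter()::getAndIncrement is used as a Supplier<Integer>; each polled
    // result is wrapped in a Message by the framework automatically.
    return IntegrationFlows.from(counter()::getAndIncrement,
            c -> c.poller(Pollers.fixedRate(100)))
            .transform(Object::toString)
            .channel("supplierResults")
            .get();
}
```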
The next sections discuss selected endpoints which require further explanation.
• HeaderValueRouter
• PayloadTypeRouter
• ExceptionTypeRouter
• RecipientListRouter
• XPathRouter
@Bean
public IntegrationFlow routeFlow() {
return IntegrationFlows.from("routerInput")
.<Integer, Boolean>route(p -> p % 2 == 0,
m -> m.suffix("Channel")
.channelMapping("true", "even")
.channelMapping("false", "odd")
)
.get();
}
@Bean
public IntegrationFlow routeFlow() {
return IntegrationFlows.from("routerInput")
.route("headers['destChannel']")
.get();
}
@Bean
public IntegrationFlow recipientListFlow() {
return IntegrationFlows.from("recipientListInput")
.<String, String>transform(p -> p.replaceFirst("Payload", ""))
.routeToRecipients(r -> r
.recipient("foo-channel", "'foo' == payload")
.recipient("bar-channel", m ->
m.getHeaders().containsKey("recipient")
&& (boolean) m.getHeaders().get("recipient"))
.recipientFlow("'foo' == payload or 'bar' == payload or 'baz' == payload",
f -> f.<String, String>transform(String::toUpperCase)
.channel(c -> c.queue("recipientListSubFlow1Result")))
.recipientFlow((String p) -> p.startsWith("baz"),
f -> f.transform("Hello "::concat)
.channel(c -> c.queue("recipientListSubFlow2Result")))
.recipientFlow(new FunctionExpression<Message<?>>(m ->
"bax".equals(m.getPayload())),
f -> f.channel(c -> c.queue("recipientListSubFlow3Result")))
.defaultOutputToParentFlow())
.get();
}
9.10 Splitters
A splitter is created using the split() EIP-method. By default, if the payload is an Iterable,
an Iterator, an Array, a Stream or a reactive Publisher, this will output each item as an individual
message. It takes a Lambda, a SpEL expression, or any AbstractMessageSplitter implementation,
or can be used without parameters to provide the DefaultMessageSplitter. For example:
@Bean
public IntegrationFlow splitFlow() {
return IntegrationFlows.from("splitInput")
.split(s ->
s.applySequence(false).get().getT2().setDelimiters(","))
.channel(MessageChannels.executor(this.taskExecutor()))
.get();
}
This creates a splitter that splits a message containing a comma-delimited String. Note: the getT2()
method comes from the Tuple collection which is the result of EndpointSpec.get() and represents
a pair of ConsumerEndpointFactoryBean and DefaultMessageSplitter for the example
above.
@Bean
public IntegrationFlow splitAggregateFlow() {
return IntegrationFlows.from("splitAggregateInput")
.split()
.channel(MessageChannels.executor(this.taskExecutor()))
.resequence()
.aggregate()
.get();
}
The above is a canonical example of the splitter/aggregator pattern. The split() method splits the list into
individual messages and sends them to the ExecutorChannel. The resequence() method reorders
messages by the sequence details found in the message headers. The aggregate() method collects
those messages into the resulting list.
However, you may change the default behavior by specifying a release strategy and correlation strategy,
among other things. Consider the following:
.aggregate(a ->
a.correlationStrategy(m -> m.getHeaders().get("myCorrelationKey"))
.releaseStrategy(g -> g.size() > 10)
.messageStore(messageStore()))
Similar Lambda configurations are provided for the resequence() EIP-method.
@Bean
public IntegrationFlow myFlow() {
return IntegrationFlows.from("flow3Input")
.<Integer>handle((p, h) -> p * 2)
.get();
}
However, one main goal of Spring Integration is loose coupling, achieved via runtime type
conversion from the message payload to the target arguments of the message handler. Since Java doesn’t support
generic type resolution for Lambda classes, we introduced a workaround with an additional payloadType
argument for most EIP-methods and a LambdaMessageProcessor, which delegates the hard
conversion work to Spring’s ConversionService, using the provided type and the requested message,
to produce the target method arguments. The IntegrationFlow might look like this:
@Bean
public IntegrationFlow integerFlow() {
    return IntegrationFlows.from("input")
            .<byte[], String>transform(p -> new String(p, StandardCharsets.UTF_8))
            .handle(Integer.class, (p, h) -> p * 2)
            .get();
}
.filter(...)
.log(LoggingHandler.Level.ERROR, "test.category", m -> m.getHeaders().getId())
.route(...)
In this example, an id header will be logged at ERROR level onto "test.category", only for messages
that passed the filter and before routing.
9.14 MessageChannelSpec.wireTap()
A .wireTap() fluent API exists for MessageChannelSpec builders, making the target configuration
much more concise in the Java DSL:
@Bean
public QueueChannelSpec myChannel() {
return MessageChannels.queue()
.wireTap("loggingFlow.input");
}
@Bean
public IntegrationFlow loggingFlow() {
return f -> f.log();
}
Important
If log() or wireTap() is used at the end of a flow, it is treated as a one-way
MessageHandler. If the integration flow is expected to return a reply, a bridge() should be
added at the end, after the log() or wireTap():
@Bean
public IntegrationFlow sseFlow() {
return IntegrationFlows
.from(WebFlux.inboundGateway("/sse")
.requestMapping(m ->
m.produces(MediaType.TEXT_EVENT_STREAM_VALUE)))
.handle((p, h) -> Flux.just("foo", "bar", "baz"))
.log(LoggingHandler.Level.WARN)
.bridge()
.get();
}
By default, the message flow behaves as a Chain in Spring Integration parlance; that is, the endpoints
are automatically wired implicitly via DirectChannels. The message flow is not actually constructed
as a chain, however, which affords much more flexibility. For example, you may send a message to any
component within the flow, if you know its inputChannel name (i.e., if you explicitly define it). You may also
reference externally defined channels within a flow to allow the use of channel adapters to enable remote
transport protocols, file I/O, and the like, instead of direct channels. As such, the DSL does not support the
Spring Integration chain element, since it doesn’t add much value here.
Since the Spring Integration Java DSL produces the same bean definition model as any other
configuration options and is based on the existing Spring Framework @Configuration infrastructure,
it can be used together with Integration XML definitions and wired with Spring Integration Messaging
Annotations configuration.
@Bean
public IntegrationFlow lambdaFlow() {
return f -> f.filter("World"::equals)
.transform("Hello "::concat)
.handle(System.out::println);
}
The result of this definition is the same set of Integration components, wired with implicit
direct channels. The only limitation here is that this flow is started with a named direct channel -
lambdaFlow.input. Also, a Lambda flow can’t start from a MessageSource or MessageProducer.
9.16 FunctionExpression
The FunctionExpression (an implementation of SpEL Expression) has been introduced to get a
gain of Java and Lambda usage for the method and its generics context. The Function<T, R>
option is provided for the DSL components alongside with expression option, when there is the implicit
Strategy variant from Core Spring Integration. The usage may look like:
The FunctionExpression also supports runtime type conversion as it is done in the standard
SpelExpression.
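For example, in the routeToRecipients sample shown earlier, a recipient subflow condition can be expressed either as a SpEL string or as a FunctionExpression built from a Lambda (a sketch; the channel names are illustrative):

```java
@Bean
public IntegrationFlow functionExpressionFlow() {
    return IntegrationFlows.from("functionInput")
            .routeToRecipients(r -> r
                    // SpEL string variant
                    .recipientFlow("'foo' == payload",
                            f -> f.channel(c -> c.queue("fooResult")))
                    // equivalent FunctionExpression variant, checked at compile time
                    .recipientFlow(new FunctionExpression<Message<?>>(
                                    m -> "bar".equals(m.getPayload())),
                            f -> f.channel(c -> c.queue("barResult"))))
            .get();
}
```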
@Bean
public IntegrationFlow subscribersFlow() {
return flow -> flow
.publishSubscribeChannel(Executors.newCachedThreadPool(), s -> s
.subscribe(f -> f
.<Integer>handle((p, h) -> p / 2)
.channel(c -> c.queue("subscriber1Results")))
.subscribe(f -> f
.<Integer>handle((p, h) -> p * 2)
.channel(c -> c.queue("subscriber2Results"))))
.<Integer>handle((p, h) -> p * 3)
.channel(c -> c.queue("subscriber3Results"));
}
Of course, the same result can be achieved with separate IntegrationFlow @Bean definitions, but we
hope you’ll find the subflow style of logic composition useful.
@Bean
public IntegrationFlow routeFlow() {
return f -> f
.<Integer, Boolean>route(p -> p % 2 == 0,
m -> m.channelMapping("true", "evenChannel")
.subFlowMapping("false", sf ->
sf.<Integer>handle((p, h) -> p * 3)))
.transform(Object::toString)
.channel(c -> c.queue("oddChannel"));
}
Of course, subflows can be nested to any depth, but we don’t recommend doing so because, in fact,
even in the router case, adding complex subflows within a flow would quickly begin to look like a plate
of spaghetti and be difficult for a human to parse.
We also provide a high-level API to define protocol-specific endpoints seamlessly. This is achieved with
the Factory and Builder patterns and, of course, with Lambdas. The factory classes can be considered
"Namespace Factories", because they play the same role as the XML namespaces for components from the
concrete protocol-specific Spring Integration modules. Currently, the Spring Integration Java DSL supports
the Amqp, Feed, Jms, Files, (S)Ftp, Http, JPA, MongoDb, TCP/UDP, Mail, WebFlux and Scripts
namespace factories:
@Bean
public IntegrationFlow amqpFlow() {
return IntegrationFlows.from(Amqp.inboundGateway(this.rabbitConnectionFactory, queue()))
.transform("hello "::concat)
.transform(String.class, String::toUpperCase)
.get();
}
@Bean
public IntegrationFlow jmsOutboundGatewayFlow() {
return IntegrationFlows.from("jmsOutboundGatewayChannel")
.handle(Jms.outboundGateway(this.jmsConnectionFactory)
.replyContainer(c ->
c.concurrentConsumers(3)
.sessionTransacted(true))
.requestDestination("jmsPipelineTest"))
.get();
}
@Bean
public IntegrationFlow sendMailFlow() {
return IntegrationFlows.from("sendMailChannel")
.handle(Mail.outboundAdapter("localhost")
.port(smtpPort)
.credentials("user", "pw")
.protocol("smtp")
.javaMailProperties(p -> p.put("mail.debug", "true")),
e -> e.id("sendMailEndpoint"))
.get();
}
We show here the usage of namespace factories as inline adapter declarations; however, they can be
used from @Bean definitions to make the IntegrationFlow method-chain more readable.
We are soliciting community feedback on these namespace factories before we spend effort on others;
we’d also appreciate some prioritization for which adapters/gateways we should support next.
See more Java DSL samples in the protocol-specific chapter throughout this reference manual.
All other protocol channel adapters may be configured as generic beans and wired to the
IntegrationFlow:
@Bean
public QueueChannelSpec wrongMessagesChannel() {
return MessageChannels
.queue()
.wireTap("wrongMessagesWireTapChannel");
}
@Bean
public IntegrationFlow xpathFlow(MessageChannel wrongMessagesChannel) {
return IntegrationFlows.from("inputChannel")
.filter(new StringValueTestXPathMessageSelector("namespace-uri(/*)", "my:namespace"),
e -> e.discardChannel(wrongMessagesChannel))
.log(LoggingHandler.Level.ERROR, "test.category", m -> m.getHeaders().getId())
.route(xpathRouter(wrongMessagesChannel))
.get();
}
@Bean
public AbstractMappingMessageRouter xpathRouter(MessageChannel wrongMessagesChannel) {
XPathRouter router = new XPathRouter("local-name(/*)");
router.setEvaluateAsString(true);
router.setResolutionRequired(false);
router.setDefaultOutputChannel(wrongMessagesChannel);
router.setChannelMapping("Tags", "splittingChannel");
router.setChannelMapping("Tag", "receivedChannel");
return router;
}
9.19 IntegrationFlowAdapter
The IntegrationFlow interface can be implemented directly and specified as a component for scanning:
@Component
public class MyFlow implements IntegrationFlow {
@Override
public void configure(IntegrationFlowDefinition<?> f) {
f.<String, String>transform(String::toUpperCase);
}

}
For convenience and to achieve a loosely coupled architecture, the IntegrationFlowAdapter base class implementation is provided. It requires a buildFlow() method implementation to produce an IntegrationFlowDefinition using one of the from() support methods:
@Component
public class MyFlowAdapter extends IntegrationFlowAdapter {
@Override
protected IntegrationFlowDefinition<?> buildFlow() {
return from(this, "messageSource",
e -> e.poller(p -> p.trigger(this::nextExecutionTime)))
.split(this)
.transform(this)
.aggregate(a -> a.processor(this, null), null)
.enrichHeaders(Collections.singletonMap("foo", "FOO"))
.filter(this)
.handle(this)
.channel(c -> c.queue("myFlowAdapterOutput"));
}
@Splitter
public String[] split(String payload) {
return StringUtils.commaDelimitedListToStringArray(payload);
}
@Transformer
public String transform(String payload) {
return payload.toLowerCase();
}
@Aggregator
public String aggregate(List<String> payloads) {
return payloads.stream().collect(Collectors.joining());
}
@Filter
public boolean filter(@Header Optional<String> foo) {
return foo.isPresent();
}
@ServiceActivator
public String handle(String payload, @Header String foo) {
return payload + ":" + foo;
}

}
BeanDefinition beanDefinition =
BeanDefinitionBuilder.genericBeanDefinition((Class<Object>) bean.getClass(), () -> bean)
.getRawBeanDefinition();
and all the necessary bean initialization and lifecycle management is performed automatically, as it is with standard context configuration bean definitions.
@Autowired
private AbstractServerConnectionFactory server1;
@Autowired
private IntegrationFlowContext flowContext;
...
@Test
public void testTcpGateways() {
TestingUtilities.waitListening(this.server1, null);
This is useful when we have multiple configuration options and have to create several instances of similar flows. We can then iterate over our options and create and register IntegrationFlow s within a loop. Another variant is when our source of data isn’t Spring-based, so we must create it on the fly. Such a sample is a Reactive Streams event source:
Flux<Message<?>> messageFlux =
Flux.just("1,2,3,4")
.map(v -> v.split(","))
.flatMapIterable(Arrays::asList)
.map(Integer::parseInt)
.map(GenericMessage<Integer>::new);
IntegrationFlow integrationFlow =
IntegrationFlows.from(messageFlux)
.<Integer, Integer>transform(p -> p * 2)
.channel(resultChannel)
.get();
this.integrationFlowContext.registration(integrationFlow)
.register();
Such a dynamically registered IntegrationFlow and all its dependent beans can be removed afterwards using the IntegrationFlowRegistration.destroy() callback. See the IntegrationFlowContext JavaDocs for more information.
...
@Bean
public IntegrationFlow controlBusFlow() {
return IntegrationFlows.from(ControlBusGateway.class)
.controlBus()
.get();
}
All the proxy methods for the interface are supplied with the channel to send messages to the next integration component in the IntegrationFlow. The service interface can be marked with @MessagingGateway, and its methods with @Gateway annotations. Nevertheless, the requestChannel is ignored and overridden with that internal channel for the next component in the IntegrationFlow; otherwise, such a configuration via IntegrationFlow wouldn’t make sense.
With Java 8 on board, we can even create such an integration gateway with the java.util.function interfaces:
@Bean
public IntegrationFlow errorRecovererFlow() {
return IntegrationFlows.from(Function.class, "errorRecovererFunction")
.handle((GenericHandler<?>) (p, h) -> {
throw new RuntimeException("intentional");
}, e -> e.advice(retryAdvice()))
.get();
}
@Autowired
@Qualifier("errorRecovererFunction")
private Function<String, String> errorRecovererFlowGateway;
Note
Prior to version 4.2 metrics were only available when JMX was enabled. See Section 10.2, “JMX
Support”.
In addition to metrics, you can control debug logging in the main message flow. It has been found that, in very high volume applications, even calls to isDebugEnabled() can be quite expensive with some logging subsystems. You can disable all such logging to avoid this overhead; exception logging (debug or otherwise) is not affected by this setting.
<int:management
default-logging-enabled="true" ❶
default-counts-enabled="false" ❷
default-stats-enabled="false" ❸
counts-enabled-patterns="foo, !baz, ba*" ❹
stats-enabled-patterns="fiz, buz" ❺
metrics-factory="myMetricsFactory" /> ❻
@Configuration
@EnableIntegration
@EnableIntegrationManagement(
defaultLoggingEnabled = "true", ❶
defaultCountsEnabled = "false", ❷
defaultStatsEnabled = "false", ❸
countsEnabled = { "foo", "${count.patterns}" }, ❹
statsEnabled = { "qux", "!*" }, ❺
metricsFactory = "myMetricsFactory") ❻
public static class ContextConfiguration {
...
}
❶ Set to false to disable all logging in the main message flow, regardless of the log system category settings. Set to true to enable debug logging (if also enabled by the logging subsystem). Only applied if you have not explicitly configured the setting in a bean definition. Default true.
❷ Enable or disable count metrics for components not matching one of the patterns in ❹. Only applied if you have not explicitly configured the setting in a bean definition. Default false.
❸ Enable or disable statistical metrics for components not matching one of the patterns in ❺. Only applied if you have not explicitly configured the setting in a bean definition. Default false.
❹ A comma-delimited list of patterns for beans for which counts should be enabled; negate the pattern with !. First match wins (positive or negative). In the unlikely event that you have a bean name starting with !, escape the ! in the pattern: \!foo positively matches a bean named !foo.
❺ A comma-delimited list of patterns for beans for which statistical metrics should be enabled; negate the pattern with !. First match wins (positive or negative). In the unlikely event that you have a bean name starting with !, escape the ! in the pattern: \!foo positively matches a bean named !foo. Stats implies counts.
❻ A reference to a MetricsFactory. See the section called “Metrics Factory”.
When JMX is enabled (see Section 10.2, “JMX Support”), these metrics are also exposed by the
IntegrationMBeanExporter.
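The pattern semantics described above (first match wins, ! negation, \! escape for literal names) can be sketched in plain Java. This is a hypothetical illustration of the rules, not the framework's actual implementation:

```java
// Sketch of first-match-wins pattern matching with ! negation and \! escape.
final class PatternMatcher {

    // Returns TRUE/FALSE on the first matching pattern; null if nothing matched.
    static Boolean smartMatch(String name, String... patterns) {
        for (String pattern : patterns) {
            boolean negate = false;
            if (pattern.startsWith("\\!")) {
                pattern = pattern.substring(1);      // "\!foo" means a literal "!foo"
            }
            else if (pattern.startsWith("!")) {
                negate = true;                       // "!foo" negates the match
                pattern = pattern.substring(1);
            }
            if (simpleMatch(pattern, name)) {
                return !negate;                      // first match (positive or negative) wins
            }
        }
        return null;                                 // no pattern applied; caller uses its default
    }

    // Trailing-* prefix matching, otherwise exact equality.
    static boolean simpleMatch(String pattern, String name) {
        if (pattern.endsWith("*")) {
            return name.startsWith(pattern.substring(0, pattern.length() - 1));
        }
        return pattern.equals(name);
    }
}
```

With the patterns from the XML example above ("foo, !baz, ba*"), foo and bar are enabled while baz is explicitly disabled.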
Important
Message channels report metrics according to their concrete type. If you are looking at a
DirectChannel, you will see statistics for the send operation. If it is a QueueChannel, you will also
see statistics for the receive operation, as well as the count of messages that are currently buffered by
this QueueChannel. In both cases there are some metrics that are simple counters (message count
and error count), and some that are estimates of averages of interesting quantities. The algorithms used
to calculate these estimates are described briefly in the section below.
Error Count (Send Error Count): Simple incrementer. Increases by one when a send results in an error.
Error Rate (Send Error Rate, number of errors per second): Inverse of the Exponential Moving Average of the interval between error events, with decay in time (lapsing over 60 seconds by default) and per measurement (last 10 events by default).
Ratio (Send Success Ratio, ratio of successful to total sends): Estimate of the success ratio as the Exponential Moving Average of the series composed of values 1 for success and 0 for failure (decaying as per the rate measurement over time and events by default). The error ratio is 1 minus the success ratio.
The following table shows the statistics maintained for message handlers. Some metrics are simple counters (message count and error count), and one is an estimate of the average send duration. The algorithms used to calculate these estimates are described briefly below:
Error Count (Handler Error Count): Simple incrementer. Increases by one when an invocation results in an error.
Active Count (Handler Active Count): The number of threads currently invoking the handler (or any downstream synchronous flow).
A feature of the time-based average estimates is that they decay with time if no new measurements
arrive. To help interpret the behaviour over time, the time (in seconds) since the last measurement is
also exposed as a metric.
There are two basic exponential models: decay per measurement (appropriate for duration and anything
where the number of measurements is part of the metric), and decay per time unit (more suitable for rate
measurements where the time in between measurements is part of the metric). Both models depend
on the fact that
S(n) = sum(i=0..n) w(i) x(i) has a special form when w(i) = r^i, with r constant:
S(n) = x(n) + r S(n-1) (so you only have to store S(n-1), not the whole series x(i), to generate a new metric estimate from the last measurement). The algorithms used in the duration metrics use r=exp(-1/M) with M=10. The net effect is that the estimate S(n) is more heavily weighted to recent measurements and is composed roughly of the last M measurements, so M is the "window", or lapse rate, of the estimate. In the case of the vanilla moving average, i is a counter over the number of measurements. In the case of the rate, we interpret i as the elapsed time, or a combination of elapsed time and a counter (so the metric estimate contains contributions roughly from the last M measurements and the last T seconds).
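As an illustration of the decay-per-measurement model, the recurrence S(n) = x(n) + r S(n-1) can be maintained alongside the matching weight sum so that a mean can be read off at any time. This is a minimal sketch, not the framework's ExponentialMovingAverage class:

```java
// Decay-per-measurement moving average: S(n) = x(n) + r * S(n-1), r = exp(-1/M),
// so the estimate is dominated by roughly the last M measurements.
final class DecayingAverage {

    private final double r;      // constant decay factor per measurement
    private double weightedSum;  // S(n)
    private double totalWeight;  // W(n) = 1 + r + r^2 + ... used for normalization

    DecayingAverage(int window) {
        this.r = Math.exp(-1.0 / window);   // window M, e.g. 10 for duration metrics
    }

    void append(double x) {
        this.weightedSum = x + this.r * this.weightedSum;  // only S(n-1) is stored
        this.totalWeight = 1 + this.r * this.totalWeight;
    }

    double mean() {
        return this.totalWeight == 0 ? 0 : this.weightedSum / this.totalWeight;
    }
}
```

Feeding a constant series returns that constant, while a step change is tracked within roughly M measurements, which is the decaying behaviour described above.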
Metrics Factory
A new strategy interface, MetricsFactory, has been introduced, allowing you to provide custom channel metrics for your MessageChannel s and MessageHandler s. By default, a DefaultMetricsFactory provides default implementations of MessageChannelMetrics and MessageHandlerMetrics, which are described in the next bullet. To override the default MetricsFactory, configure it as described above, by providing a reference to your MetricsFactory
bean instance. You can either customize the default implementations as described in the next bullet,
or provide completely different implementations by extending AbstractMessageChannelMetrics
and/or AbstractMessageHandlerMetrics.
In addition to the default metrics factory described above, the framework provides the
AggregatingMetricsFactory. This factory creates AggregatingMessageChannelMetrics
and AggregatingMessageHandlerMetrics. In very high volume scenarios, the cost of capturing
statistics can be prohibitive (2 calls to the system time and storing the data for each message). The
aggregating metrics aggregate the response time over a sample of messages. This can save significant
CPU time.
Caution
The statistics will be skewed if messages arrive in bursts. These metrics are intended for use with
high, constant-volume, message rates.
<bean id="aggregatingMetricsFactory"
class="org.springframework.integration.support.management.AggregatingMetricsFactory">
<constructor-arg value="1000" /> <!-- sample size -->
</bean>
The above configuration aggregates the duration over 1000 messages. Counts (send, error) are
maintained per-message but the statistics are per 1000 messages.
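The idea behind the aggregation can be sketched as follows. This is a hypothetical illustration, not the framework's AggregatingMessageHandlerMetrics: the clock is read only once per sample, and the elapsed time is attributed to the whole sample:

```java
// Sketch: read System.nanoTime() once per `sampleSize` messages instead of
// twice per message; the elapsed time of the whole sample becomes one duration.
final class AggregatingTimer {

    private final int sampleSize;
    private long messageCount;                       // per-message count (cheap)
    private long sampleStart = System.nanoTime();
    private long samples;
    private double totalSampleNanos;

    AggregatingTimer(int sampleSize) {
        this.sampleSize = sampleSize;
    }

    void onMessage() {
        if (++this.messageCount % this.sampleSize == 0) {
            long now = System.nanoTime();            // the only clock access
            this.totalSampleNanos += now - this.sampleStart;
            this.sampleStart = now;
            this.samples++;
        }
    }

    // Average duration per message, smeared across each completed sample.
    double meanNanosPerMessage() {
        return this.samples == 0 ? 0 : this.totalSampleNanos / (this.samples * this.sampleSize);
    }

    long samples() {
        return this.samples;
    }
}
```

This also shows why bursty traffic skews the result: idle time between bursts is folded into the sample duration.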
See the section called “Time-Based Average Estimates” and the Javadocs for the
ExponentialMovingAverage* classes for more information about these values.
If you wish to override these defaults, you can provide a custom MetricsFactory that returns
appropriately configured metrics and provide a reference to it to the MBean exporter as described above.
Example:
@Override
public AbstractMessageChannelMetrics createChannelMetrics(String name) {
return new DefaultMessageChannelMetrics(name,
new ExponentialMovingAverage(20, 1000000.),
new ExponentialMovingAverageRate(2000, 120000, 30, true),
new ExponentialMovingAverageRatio(130000, 40, true),
new ExponentialMovingAverageRate(3000, 140000, 50, true));
}
@Override
public AbstractMessageHandlerMetrics createHandlerMetrics(String name) {
return new DefaultMessageHandlerMetrics(name, new ExponentialMovingAverage(20, 1000000.));
}
• Advanced Customization
The customizations described above are wholesale and will apply to all appropriate beans exported by
the MBean exporter. This is the extent of customization available using XML configuration.
Individual beans can be provided with different implementations using Java @Configuration or
programmatically at runtime, after the application context has been refreshed, by invoking the
configureMetrics methods on AbstractMessageChannel and AbstractMessageHandler.
• Performance Improvement
Previously, the time-based metrics (see the section called “Time-Based Average Estimates”) were
calculated in real time. The statistics are now calculated when retrieved instead. This resulted in a
significant performance improvement, at the expense of a small amount of additional memory for each
statistic. As discussed in the bullet above, the statistics can be disabled altogether, while retaining the
MBean allowing the invocation of Lifecycle methods.
The Notification-listening Channel Adapter requires a JMX ObjectName for the MBean that publishes
notifications to which this listener should be registered. A very simple configuration might look like this:
<int-jmx:notification-listening-channel-adapter id="adapter"
channel="channel"
object-name="example.domain:name=publisher"/>
Tip
The adapter can also accept a reference to a NotificationFilter and a handback Object to provide
some context that is passed back with each Notification. Both of those attributes are optional. Extending
the above example to include those attributes as well as an explicit MBeanServer bean name would
produce the following:
<int-jmx:notification-listening-channel-adapter id="adapter"
channel="channel"
mbean-server="someServer"
object-name="example.domain:name=somePublisher"
notification-filter="notificationFilter"
handback="myHandback"/>
The Notification-listening Channel Adapter is event-driven and registered with the MBeanServer
directly. It does not require any poller configuration.
Note
For this component only, the object-name attribute can contain an ObjectName pattern (e.g.
"org.foo:type=Bar,name=*") and the adapter will receive notifications from all MBeans with
ObjectNames that match the pattern. In addition, the object-name attribute can contain a SpEL
reference to a <util:list/> of ObjectName patterns:
<jmx:notification-listening-channel-adapter id="manyNotificationsAdapter"
channel="manyNotificationsChannel"
object-name="#{patterns}"/>
<util:list id="patterns">
<value>org.foo:type=Foo,name=*</value>
<value>org.foo:type=Bar,name=*</value>
</util:list>
The names of the located MBean(s) will be logged when DEBUG level logging is enabled.
The Notification-publishing Channel Adapter is relatively simple. It only requires a JMX ObjectName in
its configuration as shown below.
<context:mbean-export/>
<int-jmx:notification-publishing-channel-adapter id="adapter"
channel="channel"
object-name="example.domain:name=publisher"/>
It also requires that an MBeanExporter be present in the context. That is why the <context:mbean-export/> element is shown above as well.
When Messages are sent to the channel for this adapter, the Notification is created from the Message
content. If the payload is a String it will be passed as the message text for the Notification. Any other
payload type will be passed as the userData of the Notification.
JMX Notifications also have a type, and it should be a dot-delimited String. There are two ways to provide the type. Precedence will always be given to a Message header value associated with the JmxHeaders.NOTIFICATION_TYPE key. Otherwise, you can rely on the fallback default-notification-type attribute provided in the configuration.
<context:mbean-export/>
<int-jmx:notification-publishing-channel-adapter id="adapter"
channel="channel"
object-name="example.domain:name=publisher"
default-notification-type="some.default.type"/>
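Putting both rules together, the mapping might be sketched with the standard javax.management API as follows. This is a hypothetical helper, not the framework's implementation:

```java
import javax.management.Notification;

// Sketch of the mapping described above: a type supplied via header wins over
// the configured default; a String payload becomes the notification message
// text, while any other payload is carried as userData.
final class NotificationMapper {

    static Notification toNotification(Object payload, String headerType,
            String defaultType, Object source, long sequence) {
        String type = headerType != null ? headerType : defaultType;
        if (payload instanceof String) {
            return new Notification(type, source, sequence, (String) payload);
        }
        Notification notification = new Notification(type, source, sequence);
        notification.setUserData(payload);
        return notification;
    }
}
```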
The Attribute Polling Channel Adapter is useful when you have a requirement to periodically check on
some value that is available through an MBean as a managed attribute. The poller can be configured
in the same way as any other polling adapter in Spring Integration (or it’s possible to rely on the default
poller). The object-name and attribute-name are required. An MBeanServer reference is also required,
but it will automatically check for a bean named mbeanServer by default, just like the Notification-
listening Channel Adapter described above.
<int-jmx:attribute-polling-channel-adapter id="adapter"
channel="channel"
object-name="example.domain:name=someService"
attribute-name="InvocationCount">
<int:poller max-messages-per-poll="1" fixed-rate="5000"/>
</int-jmx:attribute-polling-channel-adapter>
The Tree Polling Channel Adapter queries the JMX MBean tree and sends a message with a payload
that is the graph of objects that matches the query. By default, the MBeans are mapped to primitives and simple objects such as Map, List and arrays, permitting simple transformation, for example, to JSON. An
MBeanServer reference is also required, but it will automatically check for a bean named mbeanServer
by default, just like the Notification-listening Channel Adapter described above. A basic configuration
would be:
<int-jmx:tree-polling-channel-adapter id="adapter"
channel="channel"
query-name="example.domain:type=*">
<int:poller max-messages-per-poll="1" fixed-rate="5000"/>
</int-jmx:tree-polling-channel-adapter>
This will include all attributes on the MBeans selected. You can filter the attributes by providing an
MBeanObjectConverter that has an appropriate filter configured. The converter can be provided
as a reference to a bean definition using the converter attribute, or as an inner <bean/> definition.
A DefaultMBeanObjectConverter is provided, which can take an MBeanAttributeFilter as its constructor argument.
Two standard filters are provided: the NamedFieldsMBeanAttributeFilter allows you to specify a list of attributes to include, and the NotNamedFieldsMBeanAttributeFilter allows you to specify a list of attributes to exclude. You can also implement your own filter.
<int-jmx:operation-invoking-channel-adapter id="adapter"
object-name="example.domain:name=TestBean"
operation-name="ping"/>
Then the adapter only needs to be able to discover the mbeanServer bean. If a different bean name is
required, then provide the mbean-server attribute with a reference.
The payload of the Message will be mapped to the parameters of the operation, if any. A Map-typed
payload with String keys is treated as name/value pairs, whereas a List or array would be passed as
a simple argument list (with no explicit parameter names). If the operation requires a single parameter
value, then the payload can represent that single value, and if the operation requires no parameters,
then the payload would be ignored.
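The dispatch described above might be sketched as follows. This is a hypothetical illustration; the real adapter also matches Map keys against the operation's parameter names, which this sketch glosses over:

```java
import java.util.List;
import java.util.Map;

// Sketch: turn a message payload into a JMX operation argument list.
final class OperationArgs {

    static Object[] toArgs(Object payload) {
        if (payload == null) {
            return new Object[0];                  // no-argument operation
        }
        if (payload instanceof Map) {
            // name/value pairs; the real adapter matches keys to parameter
            // names, here we simply take the values in iteration order
            return ((Map<?, ?>) payload).values().toArray();
        }
        if (payload instanceof List) {
            return ((List<?>) payload).toArray();  // positional argument list
        }
        if (payload instanceof Object[]) {
            return (Object[]) payload;             // object arrays pass through
        }
        return new Object[] { payload };           // single-parameter operation
    }
}
```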
If you want to expose a channel for a single common operation to be invoked by Messages that need
not contain headers, then that option works well.
<int-jmx:operation-invoking-outbound-gateway request-channel="requestChannel"
reply-channel="replyChannel"
object-name="o.s.i.jmx.config:type=TestBean,name=testBeanGateway"
operation-name="testWithReturn"/>
If the reply-channel attribute is not provided, the reply message will be sent to the channel that is
identified by the IntegrationMessageHeaderAccessor.REPLY_CHANNEL header. That header
is typically auto-created by the entry point into a message flow, such as any Gateway component.
However, if the message flow was started by manually creating a Spring Integration Message and
sending it directly to a Channel, then you must specify the message header explicitly or use the provided
reply-channel attribute.
MBean Exporter
Spring Integration components themselves may be exposed as MBeans when
the IntegrationMBeanExporter is configured. To create an instance of the
IntegrationMBeanExporter, define a bean and provide a reference to an MBeanServer and
a domain name (if desired). The domain can be left out, in which case the default domain is
org.springframework.integration.
<int-jmx:mbean-export id="integrationMBeanExporter"
default-domain="my.company.domain" server="mbeanServer"/>
Important
The MBean exporter is orthogonal to the one provided in Spring core: it registers message
channels and message handlers, but not itself. You can expose the exporter itself, and certain
other components in Spring Integration, using the standard <context:mbean-export/> tag.
The exporter has some metrics attached to it, for instance a count of the number of active
handlers and the number of queued messages.
It also has a useful operation, as discussed in the section called “Orderly Shutdown Managed
Operation”.
Starting with Spring Integration 4.0 the @EnableIntegrationMBeanExport annotation has been
introduced for convenient configuration of a default (integrationMbeanExporter) bean of type
IntegrationMBeanExporter with several useful options at the @Configuration class level. For
example:
@Configuration
@EnableIntegration
@EnableIntegrationMBeanExport(server = "mbeanServer", managedComponents = "input")
public class ContextConfiguration {
@Bean
public MBeanServerFactoryBean mbeanServer() {
return new MBeanServerFactoryBean();
}
}
If there is a need to provide more options, or have several IntegrationMBeanExporter beans e.g.
for different MBean Servers, or to avoid conflicts with the standard Spring MBeanExporter (e.g. via
@EnableMBeanExport), you can simply configure an IntegrationMBeanExporter as a generic
bean.
MBean ObjectNames
All the MessageChannel, MessageHandler and MessageSource instances in the application are
wrapped by the MBean exporter to provide management and monitoring features. The generated JMX
object names for each component type are listed in the table below:
MessageChannel o.s.i:type=MessageChannel,name=<channelName>
MessageSource o.s.i:type=MessageSource,name=<channelName>,bean=<source>
MessageHandler o.s.i:type=MessageHandler,name=<channelName>,bean=<source>
The bean attribute in the object names for sources and handlers takes one of the values in the table
below:
anonymous: An indication that the enclosing endpoint didn’t have a user-specified bean name, so the JMX name is the input channel name.
handler/source: None of the above; fallback to the toString() of the object being monitored (handler or source).
Custom elements can be appended to the object name by providing a reference to a Properties
object in the object-name-static-properties attribute.
Also, since Spring Integration 3.0, you can use a custom ObjectNamingStrategy using the object-
naming-strategy attribute. This permits greater control over the naming of the MBeans, for example, grouping all Integration MBeans under an Integration type. A simple custom naming strategy implementation might be:
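The following sketch reconstructs such a strategy from the description in the next paragraph. It assumes Spring's ObjectNamingStrategy and KeyNamingStrategy types; the class name is our own:

```java
public class Namer implements ObjectNamingStrategy {

    // delegate that turns the adjusted bean key into an ObjectName
    private final ObjectNamingStrategy realNamer = new KeyNamingStrategy();

    @Override
    public ObjectName getObjectName(Object managedBean, String beanKey)
            throws MalformedObjectNameException {
        // move the standard type part to componentType ...
        String actualBeanKey = beanKey.replace("type=", "componentType=");
        // ... and group everything under one Integration type
        actualBeanKey = actualBeanKey + ",type=Integration";
        return this.realNamer.getObjectName(managedBean, actualBeanKey);
    }

}
```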
The beanKey argument is a String containing the standard object name, beginning with the default-domain and including any additional static properties. This example simply moves the standard type part to componentType and sets the type to Integration, enabling selection of all Integration MBeans in one query: "my.domain:type=Integration,*". This also groups the beans under one tree entry under the domain in tools like VisualVM.
Note
JMX Improvements
Version 4.2 introduced some important improvements, representing a fairly major overhaul to the JMX support in the framework. These resulted in a significant performance improvement of the JMX statistics collection and much more control thereof, but have some implications for user code in a few specific (uncommon) situations. These changes are detailed below, with a caution where necessary.
• Metrics Capture
Now, the statistics are captured by the beans themselves; see Section 10.1, “Metrics and Management”
for more information.
Warning
This change means that you no longer automatically get an MBean or statistics
for custom MessageHandler implementations, unless those custom handlers extend
AbstractMessageHandler. The simplest way to resolve this is to extend
AbstractMessageHandler. If that’s not possible, or desired, another work-around
is to implement the MessageHandlerMetrics interface. For convenience, a
DefaultMessageHandlerMetrics is provided to capture and report statistics. Invoke the
The removal of the proxy has two additional benefits: 1) stack traces in exceptions are reduced (when
JMX is enabled) because the proxy is not on the stack; 2) cases where 2 MBeans were exported for
the same bean now only export a single MBean with consolidated attributes/operations (see the MBean
consolidation bullet below).
• Resolution
Previously, when JMX was enabled, all sources, channels, and handlers captured statistics. It is now possible to control whether the statistics are enabled on an individual component. Further, it is possible to capture simple counts on MessageChannel s and MessageHandler s instead of the complete time-based statistics. This can have significant performance implications, because you can selectively configure where you need detailed statistics, as well as enable/disable at runtime.
• @IntegrationManagedResource
• Consolidated MBeans
Certain classes within the framework (mapping routers for example) have additional attributes/
operations over and above those provided by metrics and Lifecycle. We will use a Router as an
example here.
Now, the attributes and operations are consolidated into a single MBean. The objectName
will depend on the exporter. If exported by the integration MBean exporter, the objectName
will be, for example: intDomain:type=MessageHandler,name=myRouter,bean=endpoint.
If exported by another exporter, the objectName will be, for example:
ctxDomain:name=org.springframework.integration.config.RouterFactoryBean#0
,type=MethodInvokingRouter. There is no difference between these MBeans (aside from the
objectName), except that the statistics will not be enabled (the attributes will be 0) by exporters other
than the integration exporter; statistics can be enabled at runtime using the JMX operations. When
exported by the integration MBean exporter, the initial state can be managed as described above.
Warning
If you are currently using the second MBean to change, for example, channel mappings, and you
are using the integration MBean exporter, note that the objectName has changed because of the
MBean consolidation. There is no change if you are not using the integration MBean exporter.
Previously, the managed-components patterns were inclusive only. If a bean name matched one of the patterns, it would be included. Now, a pattern can be negated by prefixing it with !. For example, "!foo*, foox" will match all beans that don’t start with foo, except foox. Patterns are evaluated left to right; the first match (positive or negative) wins, and no further patterns are applied.
Warning
The addition of this syntax to the pattern causes one possible (although perhaps unlikely) problem.
If you have a bean "!foo" and you included a pattern "!foo" in your MBean exporter’s managed-components patterns, it will no longer match; the pattern will now match all beans not named foo. In this case, you can escape the ! in the pattern with \. The pattern "\!foo" means match a bean named "!foo".
• IntegrationMBeanExporter changes
The MBean exporter provides a JMX operation to shut down the application in an orderly manner,
intended for use before terminating the JVM.
Its use and operation are described in Section 10.7, “Orderly Shutdown”.
architecture could prove to be difficult when things go wrong. When debugging, you would probably like
to get as much information about the message as you can (its origin, channels it has traversed, etc.).
Message History is one of those patterns that helps by giving you an option to maintain some level of awareness of a message path, either for debugging purposes or to maintain an audit trail. Spring Integration provides a simple way to configure your message flows to maintain the Message History
by adding a header to the Message and updating that header every time a message passes through
a tracked component.
To enable Message History, all you need to do is define the message-history element in your configuration.
<int:message-history/>
Now every named component (that is, a component that has an id defined) will be tracked. The framework will set the history header in your Message. Its value is very simple: List<Properties>.
<int:gateway id="sampleGateway"
service-interface="org.springframework.integration.history.sample.SampleGateway"
default-request-channel="bridgeInChannel"/>
The above configuration will produce a very simple Message History structure:
To get access to the Message History, all you need to do is access the MessageHistory header. For example:
Iterator<Properties> historyIterator =
message.getHeaders().get(MessageHistory.HEADER_NAME, MessageHistory.class).iterator();
assertTrue(historyIterator.hasNext());
Properties gatewayHistory = historyIterator.next();
assertEquals("sampleGateway", gatewayHistory.get("name"));
assertTrue(historyIterator.hasNext());
Properties chainHistory = historyIterator.next();
assertEquals("sampleChain", chainHistory.get("name"));
You might not want to track all of the components. To limit the history to certain components based on their names, all you need to do is provide the tracked-components attribute and specify a comma-delimited list of component names and/or patterns that match the components you want to track.
In the above example, Message History will only be maintained for all of the components that end with
Gateway, start with sample, or match the name foo exactly.
Starting with version 4.0, you can also use the @EnableMessageHistory annotation in a
@Configuration class. In addition, the MessageHistoryConfigurer bean is now exposed
as a JMX MBean by the IntegrationMBeanExporter (see the section called “MBean
Exporter”), allowing the patterns to be changed at runtime. Note, however, that the bean must
be stopped (turning off message history) in order to change the patterns. This feature might
be useful to temporarily turn on history to analyze a system. The MBean’s object name is
"<domain>:name=messageHistoryConfigurer,type=MessageHistoryConfigurer".
Note
Remember that by definition the Message History header is immutable (you can’t re-write history,
although some try). Therefore, when writing Message History values, the components are either
creating brand new Messages (when the component is an origin), or they are copying the history
from a request Message, modifying it and setting the new list on a reply Message. In either case,
the values can be appended even if the Message itself is crossing thread boundaries. That means
that the history values can greatly simplify debugging in an asynchronous message flow.
To mitigate the risk of losing Messages, EIP defines the Message Store pattern which allows EIP
components to store Messages typically in some type of persistent store (e.g. RDBMS).
Spring Integration provides support for the Message Store pattern by a) defining a
org.springframework.integration.store.MessageStore strategy interface, b) providing
several implementations of this interface, and c) exposing a message-store attribute on all
components that have the capability to buffer messages so that you can inject any instance that
implements the MessageStore interface.
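As a rough plain-Java sketch of this strategy arrangement (the interface and class names here are illustrative, not the actual org.springframework.integration.store API), a buffering component depends only on the store abstraction, so any implementation can be injected:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of the Message Store strategy; names are illustrative,
// not the real Spring Integration API.
public class MessageStoreSketch {

    interface SimpleStore {
        void add(UUID id, String payload);
        String remove(UUID id);
    }

    // One possible implementation: an in-memory map (analogous to SimpleMessageStore).
    static class InMemoryStore implements SimpleStore {
        private final Map<UUID, String> messages = new ConcurrentHashMap<>();
        public void add(UUID id, String payload) { messages.put(id, payload); }
        public String remove(UUID id) { return messages.remove(id); }
    }

    // A buffering component depends only on the strategy interface, so a
    // persistent implementation could be injected without code changes.
    static class BufferingComponent {
        private final SimpleStore store;
        BufferingComponent(SimpleStore store) { this.store = store; }
        UUID buffer(String payload) {
            UUID id = UUID.randomUUID();
            store.add(id, payload);
            return id;
        }
        String release(UUID id) { return store.remove(id); }
    }
}
```

Swapping InMemoryStore for a persistent implementation changes nothing in the buffering component, which is the point of exposing the message-store attribute.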
Details on how to configure a specific Message Store implementation and/or how to inject a
MessageStore implementation into a specific buffering component are described throughout the
manual (see the specific component, such as QueueChannel, Aggregator, Delayer etc.), but here are
a couple of samples to give you an idea:
QueueChannel
<int:channel id="myQueueChannel">
    <int:queue message-store="refToMessageStore"/>
</int:channel>
Aggregator
<int:aggregator … message-store="refToMessageStore"/>
By default, Messages are stored in-memory using a SimpleMessageStore implementation of
MessageStore. That might be fine for development or simple low-volume environments where the
potential loss of non-persistent messages is not a concern. However, the typical production application
will need a more robust option, not only to mitigate the risk of message loss but also to avoid potential
out-of-memory errors. Therefore, we also provide MessageStore implementations for a variety of data
stores. Below is a complete list of supported implementations:
• Section 25.4, “Redis Message Store” - uses Redis key/value datastore to store Messages
• Section 23.3, “MongoDB Message Store” - uses MongoDB document store to store Messages
• Section 17.5, “Gemfire Message Store” - uses Gemfire distributed cache to store Messages
Important
The Message data (payload and headers) is serialized and deserialized using different
serialization strategies depending on the implementation of the MessageStore. For example,
when using JdbcMessageStore, only Serializable data is persisted by default. In this case
non-Serializable headers are removed before serialization occurs. Also be aware of the protocol
specific headers that are injected by transport adapters (e.g., FTP, HTTP, JMS etc.). For example,
<http:inbound-channel-adapter/> maps HTTP-headers into Message Headers and one
of them is an ArrayList of non-Serializable org.springframework.http.MediaType
instances. However you are able to inject your own implementation of the Serializer and/
or Deserializer strategy interfaces into some MessageStore implementations (such as
JdbcMessageStore) to change the behaviour of serialization and deserialization.
Special attention must be paid to the headers that represent certain types of data. For example,
if one of the headers contains an instance of some Spring Bean, upon deserialization you may
end up with a different instance of that bean, which directly affects some of the implicit headers
created by the framework (e.g., REPLY_CHANNEL or ERROR_CHANNEL). Currently they are
not serializable, but even if they were, the deserialized channel would not represent the expected
instance.
Beginning with Spring Integration version 3.0, this issue can be resolved with a header
enricher, configured to replace these headers with a name after registering the channel with the
HeaderChannelRegistry.
Also, when configuring a message flow like this: gateway → queue-channel (backed by a persistent
Message Store) → service-activator, the gateway creates a Temporary Reply Channel, which will
be lost by the time the service-activator’s poller reads from the queue. Again, you can use the
header enricher to replace the headers with a String representation.
For more information, refer to the section called “Header Enricher”.
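The registry idea can be sketched in plain Java (all names here are hypothetical; the real framework class is the HeaderChannelRegistry): the non-serializable channel object is swapped for a generated String name before the message is persisted, and resolved back when the reply must be routed:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a channel registry: replace a non-serializable channel object
// with a String key before persisting, and resolve it back later.
// Names are illustrative, not the real HeaderChannelRegistry API.
public class ChannelRegistrySketch {

    private final Map<String, Object> channels = new ConcurrentHashMap<>();
    private final AtomicLong counter = new AtomicLong();

    // Called by the header enricher: store the channel, return its name.
    public String channelToName(Object channel) {
        String name = "channel:" + counter.incrementAndGet();
        channels.put(name, channel);
        return name;
    }

    // Called when the reply is routed: resolve the name back to the channel.
    public Object nameToChannel(String name) {
        return channels.get(name);
    }
}
```

Because only the String name crosses the persistent store, the reply can still find its way back to the live channel instance in the running context.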
These implementations can be used as a persistent MessageStore for QueueChannel and
PriorityChannel.
Starting with version 4.1, the SimpleMessageStore no longer copies the message group when
calling getMessageGroup(). For large message groups, this was a significant performance
problem. 4.0.1 introduced a boolean copyOnGet allowing this to be controlled. When used
internally by the aggregator, this was set to false to improve performance. It is now false by default.
Users accessing the group store outside of components such as aggregators will now get a direct
reference to the group being used by the aggregator, instead of a copy. Manipulation of the group
outside of the aggregator may cause unpredictable results.
For this reason, you should either avoid such manipulation or set the copyOnGet property to
true.
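The effect of the copyOnGet flag can be illustrated with a small plain-Java model (hypothetical; it mimics only the copy-on-get decision, not the real SimpleMessageStore):

```java
import java.util.ArrayList;
import java.util.List;

// Models the copyOnGet decision: when false (the 4.1 default), callers get
// a direct reference to the group's backing collection; when true, a copy.
public class CopyOnGetSketch {

    private final List<String> group = new ArrayList<>();
    private final boolean copyOnGet;

    public CopyOnGetSketch(boolean copyOnGet) {
        this.copyOnGet = copyOnGet;
    }

    public void add(String message) {
        group.add(message);
    }

    public List<String> getMessageGroup() {
        // copying is safe for external callers, but was a significant cost
        // for large groups on every correlation operation
        return copyOnGet ? new ArrayList<>(group) : group;
    }
}
```

With copyOnGet set to false, mutating the returned list mutates the real group, which is why external manipulation is discouraged.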
MessageGroupFactory
Starting with version 4.3, some MessageGroupStore implementations can be injected with a
custom MessageGroupFactory strategy to create or customize the MessageGroup instances used
by the MessageGroupStore. This defaults to a SimpleMessageGroupFactory, which produces
SimpleMessageGroup instances based on the GroupType.HASH_SET (LinkedHashSet) internal collection.
Other options are SYNCHRONISED_SET and BLOCKING_QUEUE, the latter of which can
be used to reinstate the previous SimpleMessageGroup behavior. A PERSISTENT option is
also available; see the next section for more information.
Starting with version 4.3, all persistent MessageGroupStore implementations retrieve MessageGroup
instances and their messages from the store in a lazy-load manner. In most cases this is useful for the
correlation MessageHandlers (Section 6.4, “Aggregator” and Section 6.5, “Resequencer”), where loading
the entire MessageGroup from the store on each correlation operation would be an overhead.
For example, our performance tests for lazy-load used the MongoDB MessageStore (Section 23.3, “MongoDB Message
Store”) and an <aggregator> (Section 6.4, “Aggregator”) with a custom release-strategy like:
<int:aggregator input-channel="inputChannel"
output-channel="outputChannel"
message-store="mongoStore"
release-strategy-expression="size() == 1000"/>
The Metadata Store is designed to store various types of generic meta-data (e.g., published date
of the last feed entry that has been processed) to help components such as the Feed adapter deal
with duplicates. If a component is not directly provided with a reference to a MetadataStore, the
algorithm for locating one is as follows: first, look for a bean with the id metadataStore in
the ApplicationContext. If one is found, it is used; otherwise, a new instance of
SimpleMetadataStore is created, an in-memory implementation that only persists metadata within
the lifecycle of the currently running Application Context. This means that upon restart you may end
up with duplicate entries.
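The lookup algorithm can be sketched as follows (plain Java; the bean registry is modeled as a simple map, and the in-memory fallback stands in for SimpleMetadataStore):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Models the fallback: use the bean named "metadataStore" if one exists,
// otherwise fall back to a fresh in-memory store (lost on restart).
// Names are illustrative, not the real framework API.
public class MetadataStoreLookupSketch {

    interface MetaStore {
        void put(String key, String value);
        String get(String key);
    }

    static class InMemoryMetaStore implements MetaStore {
        private final Map<String, String> data = new ConcurrentHashMap<>();
        public void put(String key, String value) { data.put(key, value); }
        public String get(String key) { return data.get(key); }
    }

    public static MetaStore locate(Map<String, Object> beans) {
        Object bean = beans.get("metadataStore");
        if (bean instanceof MetaStore) {
            return (MetaStore) bean;
        }
        return new InMemoryMetaStore(); // in-memory fallback, not persistent
    }
}
```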
If you need to persist metadata between Application Context restarts, these persistent
MetadataStores are provided by the framework:
• PropertiesPersistingMetadataStore
By default, it only persists the state when the application context is closed normally. It implements
Flushable, so you can persist the state at will by invoking flush().
<bean id="metadataStore"
class="org.springframework.integration.metadata.PropertiesPersistingMetadataStore"/>
Alternatively, you can provide your own implementation of the MetadataStore interface (e.g.
JdbcMetadataStore) and configure it as a bean in the Application Context.
The Metadata Store is useful for implementing the EIP Idempotent Receiver pattern, when there is a need
to filter out an incoming Message that has already been processed, either discarding it or performing
some other logic for duplicates. The following configuration shows an example of how to do this:
<int:filter input-channel="serviceChannel"
output-channel="idempotentServiceChannel"
discard-channel="discardChannel"
expression="@metadataStore.get(headers.businessKey) == null"/>
<int:publish-subscribe-channel id="idempotentServiceChannel"/>
<int:outbound-channel-adapter channel="idempotentServiceChannel"
expression="@metadataStore.put(headers.businessKey, '')"/>
The value of the idempotent entry may be some expiration date, after which that entry should be
removed from Metadata Store by some scheduled reaper.
Also see the section called “Idempotent Receiver Enterprise Integration Pattern”.
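The idempotent-entry-with-expiration idea can be sketched in plain Java (hypothetical names; a real flow would use a MetadataStore and a scheduled task for the reaper):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: accept a message only the first time its business key is seen,
// storing an expiration timestamp; a reaper later removes stale entries.
public class IdempotentReceiverSketch {

    private final Map<String, Long> seen = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public IdempotentReceiverSketch(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // true if the message should be processed, false if it is a duplicate
    public boolean accept(String businessKey, long nowMillis) {
        return seen.putIfAbsent(businessKey, nowMillis + ttlMillis) == null;
    }

    // the "scheduled reaper": drop entries whose expiration has passed
    public void reap(long nowMillis) {
        seen.entrySet().removeIf(entry -> entry.getValue() <= nowMillis);
    }
}
```

The putIfAbsent call plays the role of the metadataStore.get(...) == null check in the filter expression above, but atomically.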
MetadataStoreListener
Some metadata stores (currently only ZooKeeper) support registering a listener to receive events when
items change.
See the javadocs for more information. The MetadataStoreListenerAdapter can be subclassed
if you are only interested in a subset of events.
<int:control-bus input-channel="operationChannel"/>
The Control Bus has an input channel that can be accessed for invoking operations on the beans in
the application context. It also has all the common properties of a service activating endpoint, e.g. you
can specify an output channel if the result of the operation has a return value that you want to send
on to a downstream channel.
The Control Bus executes messages on the input channel as Spring Expression Language expressions.
It takes a message, compiles the body to an expression, adds some context, and then executes
it. The default context supports any method that has been annotated with @ManagedAttribute
or @ManagedOperation, which includes methods on Spring’s Lifecycle interface.
The root of the context for the expression is the Message itself, so you also have access to the payload
and headers as variables within your expression. This is consistent with all the other expression support
in Spring Integration endpoints.
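As a toy illustration of the command-message idea (this uses plain reflection, not SpEL, and all names are hypothetical), a bus can look up a bean named in the message body and invoke the requested method:

```java
import java.lang.reflect.Method;
import java.util.Map;

// Toy control bus: interprets "@beanName.method()" command messages via
// reflection. The real Control Bus compiles the body as a SpEL expression.
public class ControlBusSketch {

    private final Map<String, Object> beans;

    public ControlBusSketch(Map<String, Object> beans) {
        this.beans = beans;
    }

    public Object handle(String command) {
        try {
            // expected form: @beanName.methodName()
            int dot = command.indexOf('.');
            String beanName = command.substring(1, dot);
            String methodName = command.substring(dot + 1, command.indexOf('('));
            Object bean = beans.get(beanName);
            Method method = bean.getClass().getMethod(methodName);
            // the return value could be sent to an output channel
            return method.invoke(bean);
        }
        catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

A command such as "@myAdapter.start()" would, in this model, resolve the bean and invoke its no-arg method, which is essentially what the real Control Bus does through SpEL.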
With Java and Annotations the Control Bus can be configured as follows:
@Bean
@ServiceActivator(inputChannel = "operationChannel")
public ExpressionControlBusFactoryBean controlBus() {
return new ExpressionControlBusFactoryBean();
}
@Bean
public IntegrationFlow controlBusFlow() {
return IntegrationFlows.from("controlBus")
.controlBus()
.get();
}
@Bean
public IntegrationFlow controlBus() {
return IntegrationFlowDefinition::controlBus;
}
The first step calls beforeShutdown() on all beans that implement OrderlyShutdownCapable.
This allows such components to prepare for shutdown. Examples of components that implement this
interface, and what they do with this call include: JMS and AMQP message-driven adapters stop their
listener containers; TCP server connection factories stop accepting new connections (while keeping
existing connections open); TCP inbound endpoints drop (log) any new messages received; http
inbound endpoints return 503 - Service Unavailable for any new requests.
The second step stops any active channels, such as JMS- or AMQP-backed channels.
The third step stops all MessageSources.
The fourth step stops all inbound MessageProducers (that are not OrderlyShutdownCapable).
The fifth step waits for any remaining time left, as defined by the value of the long parameter passed in to
the operation. This is intended to allow any in-flight messages to complete their journeys. It is therefore
important to select an appropriate timeout when invoking this operation.
The sixth step calls afterShutdown() on all OrderlyShutdownCapable components. This allows such
components to perform final shutdown tasks (closing all open sockets, for example).
As discussed in the section called “Orderly Shutdown Managed Operation” this operation can be invoked
using JMX. If you wish to programmatically invoke the method, you will need to inject, or otherwise
get a reference to, the IntegrationMBeanExporter. If no id attribute is provided on the
<int-jmx:mbean-export/> definition, the bean will have a generated name. This name contains a random
component to avoid ObjectName collisions if multiple Spring Integration contexts exist in the same
JVM (MBeanServer).
For this reason, if you wish to invoke the method programmatically, it is recommended that you provide
the exporter with an id attribute so it can easily be accessed in the application context.
Finally, the operation can be invoked using the <control-bus>; see the monitoring Spring Integration
sample application for details.
Important
The above algorithm was improved in version 4.1. Previously, all task executors and schedulers
were stopped. This could cause mid-flow messages in QueueChannels to remain. Now, the
shutdown leaves pollers running in order to allow these messages to be drained and processed.
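The ordering of the shutdown phases can be sketched as a plain-Java skeleton (hypothetical; it models only the sequence of the beforeShutdown()/afterShutdown() calls):

```java
import java.util.List;

// Models the ordering of the orderly-shutdown phases: beforeShutdown() on
// all capable components first, intermediate steps and the wait elided,
// then afterShutdown() on the same components.
public class OrderlyShutdownSketch {

    interface OrderlyShutdownCapable {
        void beforeShutdown(); // prepare: stop accepting new work
        void afterShutdown();  // final cleanup, e.g. close open sockets
    }

    public static void stopMain(List<OrderlyShutdownCapable> components) {
        for (OrderlyShutdownCapable component : components) {
            component.beforeShutdown(); // step 1
        }
        // steps 2-4: stop active channels and inbound message producers (omitted)
        // step 5: wait up to the supplied timeout for in-flight messages (omitted)
        for (OrderlyShutdownCapable component : components) {
            component.afterShutdown(); // step 6
        }
    }
}
```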
{
"contentDescriptor": {
"providerVersion": "4.3.0.RELEASE",
"providerFormatVersion": 1.0,
"provider": "spring-integration",
"name": "myApplication"
},
"nodes": [
{
"nodeId": 1,
"name": "nullChannel",
"stats": null,
"componentType": "channel"
},
{
"nodeId": 2,
"name": "errorChannel",
"stats": null,
"componentType": "publish-subscribe-channel"
},
{
"nodeId": 3,
"name": "_org.springframework.integration.errorLogger",
"stats": {
"duration": {
"count": 0,
"min": 0.0,
"max": 0.0,
"mean": 0.0,
"standardDeviation": 0.0,
"countLong": 0
},
"errorCount": 0,
"standardDeviationDuration": 0.0,
"countsEnabled": true,
"statsEnabled": true,
"loggingEnabled": false,
"handleCount": 0,
"meanDuration": 0.0,
"maxDuration": 0.0,
"minDuration": 0.0,
"activeCount": 0
},
"componentType": "logging-channel-adapter",
"output": null,
"input": "errorChannel"
}
],
"links": [
{
"from": 2,
"to": 3,
"type": "input"
}
]
}
The links graph element represents connections between nodes from the nodes graph element and,
therefore, between integration components in the source Spring Integration application.
The information from this element can be used by a visualizing tool to render connections between
nodes from the nodes graph element, where the from and to numbers represent the value of the
nodeId property of the linked nodes. For example, the link type can be used to determine the proper
port on the target node:
+---(discard)
|
+----o----+
| |
| |
| |
(input)--o o---(output)
| |
| |
| |
+----o----+
|
+---(error)
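A visualizing client might resolve a link's endpoints as in the following plain-Java sketch (the Node and Link classes are hypothetical stand-ins for the parsed JSON):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: resolve a link's from/to nodeIds back to node names, as a
// visualization client would do before rendering an edge.
public class GraphLinkSketch {

    static class Node {
        final int nodeId;
        final String name;
        Node(int nodeId, String name) { this.nodeId = nodeId; this.name = name; }
    }

    static class Link {
        final int from;
        final int to;
        final String type;
        Link(int from, int to, String type) { this.from = from; this.to = to; this.type = type; }
    }

    public static String describe(Link link, List<Node> nodes) {
        Map<Integer, String> namesById = new HashMap<>();
        for (Node node : nodes) {
            namesById.put(node.nodeId, node.name);
        }
        // the link type (input, output, error, discard) selects the port on
        // the target node
        return namesById.get(link.from) + " --" + link.type + "--> " + namesById.get(link.to);
    }
}
```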
The nodes graph element is perhaps the most interesting because its elements contain
not only the runtime components with their componentType and name values, but can also
optionally contain metrics exposed by the component. Node elements contain various properties
which are generally self-explanatory. For example, expression-based components include the
expression property containing the primary expression string for the component. To enable the
metrics, add @EnableIntegrationManagement to a @Configuration class or add an
<int:management/> element to your XML configuration. You can control exactly which components in
the framework collect statistics. See Section 10.1, “Metrics and Management” for complete information.
See the stats attribute from the _org.springframework.integration.errorLogger
component in the JSON example above. The nullChannel and errorChannel don’t provide
statistics information in this case, because the configuration for this example was:
@Configuration
@EnableIntegration
@EnableIntegrationManagement(statsEnabled = "_org.springframework.integration.errorLogger.handler",
countsEnabled = "!*",
defaultLoggingEnabled = "false")
public class ManagementConfiguration {
@Bean
public IntegrationGraphServer integrationGraphServer() {
return new IntegrationGraphServer();
}

}
The nodeId represents a unique incremental identifier to distinguish one component from another.
It is also used in the links element to represent a relationship (connection) of this component to
others, if any. The input and output attributes are for the inputChannel and outputChannel
properties of the AbstractEndpoint, MessageHandler, SourcePollingChannelAdapter or
MessageProducerSupport. See the next paragraph for more information.
Spring Integration components have various levels of complexity. For example, any polled
MessageSource also has a SourcePollingChannelAdapter and a MessageChannel to which to
send messages from the source data periodically. Other components might be middleware request-reply
components, e.g. JmsOutboundGateway, with a consuming AbstractEndpoint to subscribe to (or
poll) the requestChannel (input) for messages, and a replyChannel (output) to produce a reply
message to send downstream. Meanwhile, any MessageProducerSupport implementation (e.g.
ApplicationEventListeningMessageProducer) simply wraps some source protocol listening
logic and sends messages to the outputChannel.
Within the graph, Spring Integration components are represented using the IntegrationNode
class hierarchy, which you can find in the o.s.i.support.management.graph package.
For example the ErrorCapableDiscardingMessageHandlerNode could be used for the
AggregatingMessageHandler (because it has a discardChannel option) and can produce
errors when consuming from a PollableChannel using a PollingConsumer. Another sample
is CompositeMessageHandlerNode - for a MessageHandlerChain when subscribed to a
SubscribableChannel, using an EventDrivenConsumer.
Note
The @MessagingGateway (see Section 8.4, “Messaging Gateways”) provides nodes for each of
its methods, where the name attribute is based on the gateway’s bean name and the short method
signature. For example the gateway:
@MessagingGateway(defaultRequestChannel = "four")
public interface Gate {

    void foo(String foo);

    void foo(Integer foo);

    void bar(String bar);

}
results in the following node elements:
{
"nodeId" : 10,
"name" : "gate.bar(class java.lang.String)",
"stats" : null,
"componentType" : "gateway",
"output" : "four",
"errors" : null
},
{
"nodeId" : 11,
"name" : "gate.foo(class java.lang.String)",
"stats" : null,
"componentType" : "gateway",
"output" : "four",
"errors" : null
},
{
"nodeId" : 12,
"name" : "gate.foo(class java.lang.Integer)",
"stats" : null,
"componentType" : "gateway",
"output" : "four",
"errors" : null
}
This IntegrationNode hierarchy can be used for parsing the graph model on the client side, as
well as for understanding the general Spring Integration runtime behavior. See also Section 3.7,
“Programming Tips and Tricks” for more information.
Any Security and Cross Origin restrictions for the IntegrationGraphController can be achieved
with the standard configuration options and components provided by Spring Security and Spring MVC
projects. A simple example of that follows:
<mvc:annotation-driven />
<mvc:cors>
<mvc:mapping path="/myIntegration/**"
allowed-origins="http://localhost:9090"
allowed-methods="GET" />
</mvc:cors>
<security:http>
<security:intercept-url pattern="/myIntegration/**" access="ROLE_ADMIN" />
</security:http>
The Java & Annotation Configuration variant follows; note that, for convenience, the annotation provides
an allowedOrigins attribute; this just provides GET access to the path. For more sophistication, you
can configure the CORS mappings using standard Spring MVC mechanisms.
@Configuration
@EnableWebMvc
@EnableWebSecurity
@EnableIntegration
@EnableIntegrationGraphController(path = "/testIntegration", allowedOrigins="http://localhost:9090")
public class IntegrationConfiguration extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests()
.antMatchers("/testIntegration/**").hasRole("ADMIN")
// ...
.formLogin();
}
// ...
}
To recap, Inbound Channel Adapters are used for one-way integration bringing data into the
messaging application. Outbound Channel Adapters are used for one-way integration to send data
out of the messaging application. Inbound Gateways are used for a bidirectional integration flow where
some other system invokes the messaging application and receives a reply.Outbound Gateways are
used for a bidirectional integration flow where the messaging application invokes some external service
or entity, expecting a result.
Redis:
• Inbound Channel Adapters: the section called “Redis Inbound Channel Adapter”, the section called
“Redis Queue Inbound Channel Adapter”, and Section 25.6, “RedisStore Inbound Channel Adapter”
• Outbound Channel Adapters: the section called “Redis Outbound Channel Adapter”, the section called
“Redis Queue Outbound Channel Adapter”, and Section 25.7, “RedisStore Outbound Channel Adapter”
• Inbound Gateway: Section 25.10, “Redis Queue Inbound Gateway”
• Outbound Gateways: Section 25.8, “Redis Outbound Command Gateway” and Section 25.9, “Redis
Queue Outbound Gateway”
In addition, as discussed in Part IV, “Core Messaging”, endpoints are provided for interfacing with Plain
Old Java Objects (POJOs). As discussed in Section 4.3, “Channel Adapter”, the <int:inbound-
channel-adapter> allows polling a java method for data; the <int:outbound-channel-
adapter> allows sending data to a void method, and as discussed in Section 8.4, “Messaging
Gateways”, the <int:gateway> allows any Java program to invoke a messaging flow. Each of these
works without requiring any source-level dependencies on Spring Integration. The equivalent of an outbound
gateway in this context would be to use a Section 8.5, “Service Activator” to invoke a method that returns
an Object of some kind.
• Inbound Channel Adapter
• Outbound Channel Adapter
• Inbound Gateway
• Outbound Gateway
In order to provide AMQP support, Spring Integration relies on Spring AMQP, which "applies core
Spring concepts to the development of AMQP-based messaging solutions". Spring AMQP provides
similar semantics to Spring JMS.
Whereas the provided AMQP Channel Adapters are intended for unidirectional Messaging (send or
receive) only, Spring Integration also provides inbound and outbound AMQP Gateways for request/
reply operations.
Tip
Please familiarize yourself with the reference documentation of the Spring AMQP project as well.
It provides much more in-depth information regarding Spring’s integration with AMQP in general
and RabbitMQ in particular.
<int-amqp:inbound-channel-adapter
id="inboundAmqp" ❶
channel="inboundChannel" ❷
queue-names="si.test.queue" ❸
acknowledge-mode="AUTO" ❹
advice-chain="" ❺
channel-transacted="" ❻
concurrent-consumers="" ❼
connection-factory="" ❽
error-channel="" ❾
expose-listener-channel="" ❿
header-mapper="" 11
mapped-request-headers="" 12
listener-container="" 13
message-converter="" 14
message-properties-converter="" 15
phase="" 16
prefetch-count="" 17
receive-timeout="" 18
recovery-interval="" 19
missing-queues-fatal="" 20
shutdown-timeout="" 21
task-executor="" 22
transaction-attribute="" 23
transaction-manager="" 24
tx-size="" 25
consumers-per-queue /> 26
listener-container
Note that when configuring an external container, you cannot use the Spring AMQP namespace
to define the container. This is because the namespace requires at least one <listener/>
element. In this environment, the listener is internal to the adapter. For this reason, you must
define the container using a normal Spring <bean/> definition, such as:
<bean id="container"
class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
<property name="connectionFactory" ref="connectionFactory" />
<property name="queueNames" value="foo.queue" />
<property name="defaultRequeueRejected" value="false"/>
</bean>
Important
Even though the Spring Integration JMS and AMQP support is very similar, important differences
exist. The JMS Inbound Channel Adapter uses a JmsDestinationPollingSource under
the covers and expects a configured Poller. The AMQP Inbound Channel Adapter uses an
AbstractMessageListenerContainer and is message driven. In that regard, it is more
similar to the JMS Message Driven Channel Adapter.
The following Spring Boot application provides an example of configuring the inbound adapter using
Java configuration:
@SpringBootApplication
public class AmqpJavaApplication {
@Bean
public MessageChannel amqpInputChannel() {
return new DirectChannel();
}
@Bean
public AmqpInboundChannelAdapter inbound(SimpleMessageListenerContainer listenerContainer,
@Qualifier("amqpInputChannel") MessageChannel channel) {
AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(listenerContainer);
adapter.setOutputChannel(channel);
return adapter;
}
@Bean
public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
SimpleMessageListenerContainer container =
new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("foo");
container.setConcurrentConsumers(2);
// ...
return container;
}
@Bean
@ServiceActivator(inputChannel = "amqpInputChannel")
public MessageHandler handler() {
return new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
System.out.println(message.getPayload());
}
};
}

}
The following Spring Boot application provides an example of configuring the inbound adapter using
the Java DSL:
@SpringBootApplication
public class AmqpJavaApplication {
@Bean
public IntegrationFlow amqpInbound(ConnectionFactory connectionFactory) {
return IntegrationFlows.from(Amqp.inboundAdapter(connectionFactory, "foo"))
.handle(m -> System.out.println(m.getPayload()))
.get();
}

}
<int-amqp:inbound-gateway
id="inboundGateway" ❶
request-channel="myRequestChannel" ❷
header-mapper="" ❸
mapped-request-headers="" ❹
mapped-reply-headers="" ❺
reply-channel="myReplyChannel" ❻
reply-timeout="1000" ❼
amqp-template="" ❽
default-reply-to="" /> ❾
❽ A reference to a customized AmqpTemplate bean, giving you more control over the reply
messages to send (or you can provide an alternative implementation to the RabbitTemplate).
❾ The replyTo org.springframework.amqp.core.Address to be used when the
requestMessage doesn’t have a replyTo property. If this option is not specified, no amqp-
template is provided, and no replyTo property exists in the request message, an
IllegalStateException is thrown because the reply can’t be routed. If this option is not
specified and an external amqp-template is provided, no exception will be thrown. You must
either specify this option, or configure a default exchange and routingKey on that template, if
you anticipate cases when no replyTo property exists in the request message.
See the note in Section 12.2, “Inbound Channel Adapter” about configuring the listener-container
attribute.
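The reply-address resolution described for callout ❾ above can be sketched in plain Java (hypothetical names; it models only the precedence, not the real gateway):

```java
// Sketch of reply-address resolution for the inbound gateway:
// 1) the request's replyTo property wins; 2) otherwise default-reply-to;
// 3) otherwise the external template's routing is used, or an error is raised.
public class ReplyToResolutionSketch {

    public static String resolveReplyTo(String requestReplyTo, String defaultReplyTo,
            boolean externalTemplateProvided) {
        if (requestReplyTo != null) {
            return requestReplyTo;
        }
        if (defaultReplyTo != null) {
            return defaultReplyTo;
        }
        if (externalTemplateProvided) {
            return null; // the template's configured exchange/routingKey is used
        }
        throw new IllegalStateException("Cannot determine a reply destination");
    }
}
```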
@SpringBootApplication
public class AmqpJavaApplication {
@Bean
public MessageChannel amqpInputChannel() {
return new DirectChannel();
}
@Bean
public AmqpInboundGateway inbound(SimpleMessageListenerContainer listenerContainer,
@Qualifier("amqpInputChannel") MessageChannel channel) {
AmqpInboundGateway gateway = new AmqpInboundGateway(listenerContainer);
gateway.setRequestChannel(channel);
gateway.setDefaultReplyTo("bar");
return gateway;
}
@Bean
public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
SimpleMessageListenerContainer container =
new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("foo");
container.setConcurrentConsumers(2);
// ...
return container;
}
@Bean
@ServiceActivator(inputChannel = "amqpInputChannel")
public MessageHandler handler() {
return new AbstractReplyProducingMessageHandler() {
@Override
protected Object handleRequestMessage(Message<?> requestMessage) {
return "reply to " + requestMessage.getPayload();
}
};
}

}
The following Spring Boot application provides an example of configuring the inbound gateway using
the Java DSL:
@SpringBootApplication
public class AmqpJavaApplication {
You can perform any valid Rabbit command on the Channel but, generally, only basicAck and
basicNack (or basicReject) would be used. In order not to interfere with the operation of the
container, you should not retain a reference to the channel; use it only in the context of the current
message.
Note
Since the Channel is a reference to a "live" object, it cannot be serialized and will be lost if a
message is persisted.
// Do some processing
if (allOK) {
channel.basicAck(deliveryTag, false);
}
else {
channel.basicNack(deliveryTag, false, true);
}
return someResultForDownStreamProcessing;
}
<int-amqp:outbound-channel-adapter id="outboundAmqp" ❶
channel="outboundChannel" ❷
amqp-template="myAmqpTemplate" ❸
exchange-name="" ❹
exchange-name-expression="" ❺
order="1" ❻
routing-key="" ❼
routing-key-expression="" ❽
default-delivery-mode="" ❾
confirm-correlation-expression="" ❿
confirm-ack-channel="" 11
confirm-nack-channel="" 12
return-channel="" 13
error-message-strategy="" 14
header-mapper="" 15
mapped-request-headers="" 16
lazy-connect="true" /> 17
The following Spring Boot application provides an example of configuring the outbound adapter using
Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {
@Bean
@ServiceActivator(inputChannel = "amqpOutboundChannel")
public AmqpOutboundEndpoint amqpOutbound(AmqpTemplate amqpTemplate) {
AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
return outbound;
}
@Bean
public MessageChannel amqpOutboundChannel() {
return new DirectChannel();
}
@MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
public interface MyGateway {

    void sendToRabbit(String data);

}

}
The following Spring Boot application provides an example of configuring the outbound adapter using
the Java DSL:
@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {
@Bean
public IntegrationFlow amqpOutbound(AmqpTemplate amqpTemplate) {
return IntegrationFlows.from(amqpOutboundChannel())
.handle(Amqp.outboundAdapter(amqpTemplate)
.routingKey("foo")) // default exchange - route to queue 'foo'
.get();
}
@Bean
public MessageChannel amqpOutboundChannel() {
return new DirectChannel();
}
@MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
public interface MyGateway {

    void sendToRabbit(String data);

}

}
<int-amqp:outbound-gateway id="outboundGateway" ❶
request-channel="myRequestChannel" ❷
amqp-template="" ❸
exchange-name="" ❹
exchange-name-expression="" ❺
order="1" ❻
reply-channel="" ❼
reply-timeout="" ❽
requires-reply="" ❾
routing-key="" ❿
routing-key-expression="" 11
default-delivery-mode="" 12
confirm-correlation-expression="" 13
confirm-ack-channel="" 14
confirm-nack-channel="" 15
return-channel="" 16
error-message-strategy="" 17
lazy-connect="true" /> 18
❺ A SpEL expression that is evaluated to determine the name of the AMQP Exchange to which
Messages should be sent, with the message as the root object. If not provided, Messages will be
sent to the default, no-name Exchange. Mutually exclusive with exchange-name. Optional.
❻ The order for this consumer when multiple consumers are registered, thereby enabling
load-balancing and/or failover. Optional (defaults to Ordered.LOWEST_PRECEDENCE
[=Integer.MAX_VALUE]).
❼ Message Channel to which replies should be sent after being received from an AMQP Queue and
converted. Optional.
❽ The time the gateway will wait when sending the reply message to the reply-channel. This only
applies if the reply-channel can block - such as a QueueChannel with a capacity limit that is
currently full. Default: infinity.
❾ When true, the gateway will throw an exception if no reply message is received within the
AmqpTemplate's replyTimeout property. Default: true.
❿ The routing-key to use when sending Messages. By default, this will be an empty String. Mutually
exclusive with routing-key-expression. Optional.
11 A SpEL expression that is evaluated to determine the routing-key to use when sending Messages,
with the message as the root object (e.g. payload.key). By default, this will be an empty String.
Mutually exclusive with routing-key. Optional.
12 The default delivery mode for messages; PERSISTENT or NON_PERSISTENT. Overridden if
the header-mapper sets the delivery mode. The DefaultHeaderMapper sets the value if
the Spring Integration message header amqp_deliveryMode is present. If this attribute is not
supplied and the header mapper doesn’t set it, the default depends on the underlying spring-amqp
MessagePropertiesConverter used by the RabbitTemplate. If that is not customized at all,
the default is PERSISTENT. Optional.
13 Since version 4.2. An expression defining correlation data. When provided, this configures the
underlying amqp template to receive publisher confirms. Requires a dedicated RabbitTemplate
and a CachingConnectionFactory with the publisherConfirms property set to true.
When a publisher confirm is received, and correlation data is supplied, it is written to
either the confirm-ack-channel, or the confirm-nack-channel, depending on the confirmation
type. The payload of the confirm is the correlation data as defined by this expression and
the message will have a header amqp_publishConfirm set to true (ack) or false (nack).
For nacks, an additional header amqp_publishConfirmNackCause is provided. Examples:
"headers[myCorrelationData]", "payload". If the expression resolves to a Message<?> instance
(such as "#this"), the message emitted on the ack/nack channel is based on that message, with
the additional header(s) added. Previously, a new message was created with the correlation data
as its payload, regardless of type. Optional.
14 The channel to which positive (ack) publisher confirms are sent; payload is the correlation data
defined by the confirm-correlation-expression. If the expression is #root or #this, the message
is built from the original message, with the amqp_publishConfirm header set to true. Optional,
default=nullChannel.
15 The channel to which negative (nack) publisher confirms are sent; payload is the correlation
data defined by the confirm-correlation-expression (if there is no ErrorMessageStrategy
configured). If the expression is #root or #this, the message is built from the
original message, with the amqp_publishConfirm header set to false. When there
is an ErrorMessageStrategy, the message will be an ErrorMessage with a
NackedAmqpMessageException payload. Optional, default=nullChannel.
16 The channel to which returned messages are sent. When provided, the underlying amqp
template is configured to return undeliverable messages to the adapter. When there is no
ErrorMessageStrategy configured, the message will be constructed from the data received
from amqp, with the following additional headers: amqp_returnReplyCode, amqp_returnReplyText,
amqp_returnExchange, amqp_returnRoutingKey. Optional.
The following Spring Boot application provides an example of configuring the outbound gateway using
Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {
@Bean
@ServiceActivator(inputChannel = "amqpOutboundChannel")
public AmqpOutboundEndpoint amqpOutbound(AmqpTemplate amqpTemplate) {
AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
outbound.setExpectReply(true);
outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
return outbound;
}
@Bean
public MessageChannel amqpOutboundChannel() {
return new DirectChannel();
}
@MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
public interface MyGateway {

    String sendToRabbit(String data);

}

}
Notice that the only difference between the outbound adapter and outbound gateway configuration is
the setting of the expectReply property.
The following Spring Boot application provides an example of configuring the outbound gateway using
the Java DSL:
@SpringBootApplication
@IntegrationComponentScan
public class AmqpJavaApplication {
@Bean
public IntegrationFlow amqpOutbound(AmqpTemplate amqpTemplate) {
return IntegrationFlows.from(amqpOutboundChannel())
.handle(Amqp.outboundGateway(amqpTemplate)
.routingKey("foo")) // default exchange - route to queue 'foo'
.get();
}
@Bean
public MessageChannel amqpOutboundChannel() {
return new DirectChannel();
}
@MessagingGateway(defaultRequestChannel = "amqpOutboundChannel")
public interface MyGateway {

    String sendToRabbit(String data);

}

}
<int-amqp:outbound-gateway id="outboundGateway" ❶
request-channel="myRequestChannel" ❷
async-template="" ❸
exchange-name="" ❹
exchange-name-expression="" ❺
order="1" ❻
reply-channel="" ❼
reply-timeout="" ❽
requires-reply="" ❾
routing-key="" ❿
routing-key-expression="" 11
default-delivery-mode="" 12
confirm-correlation-expression="" 13
confirm-ack-channel="" 14
confirm-nack-channel="" 15
return-channel="" 16
lazy-connect="true" /> 17
Also see the section called “Asynchronous Service Activator” for more information.
RabbitTemplate
When using confirms and returns, it is recommended that the RabbitTemplate wired
into the AsyncRabbitTemplate be dedicated. Otherwise, unexpected side-effects may be
encountered.
@Configuration
public class AmqpAsyncConfig {
@Bean
@ServiceActivator(inputChannel = "amqpOutboundChannel")
public AsyncAmqpOutboundGateway amqpOutbound(AmqpTemplate asyncTemplate) {
AsyncAmqpOutboundGateway outbound = new AsyncAmqpOutboundGateway(asyncTemplate);
outbound.setRoutingKey("foo"); // default exchange - route to queue 'foo'
return outbound;
}
@Bean
public AsyncRabbitTemplate asyncTemplate(RabbitTemplate rabbitTemplate,
SimpleMessageListenerContainer replyContainer) {
return new AsyncRabbitTemplate(rabbitTemplate, replyContainer);
}
@Bean
public SimpleMessageListenerContainer replyContainer(ConnectionFactory ccf) {
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(ccf);
container.setQueueNames("asyncRQ1");
return container;
}
@Bean
public MessageChannel amqpOutboundChannel() {
return new DirectChannel();
}

}
@SpringBootApplication
public class AmqpAsyncApplication {
@Bean
public IntegrationFlow asyncAmqpOutbound(AsyncRabbitTemplate asyncRabbitTemplate) {
return f -> f
.handle(Amqp.asyncOutboundGateway(asyncRabbitTemplate)
.routingKey("foo")); // default exchange - route to queue 'foo'
}
@MessagingGateway(defaultRequestChannel = "asyncAmqpOutbound.input")
public interface MyGateway {

    String sendToRabbit(String data);

}

}
<int:channel id="ctRequestChannel"/>
<rabbit:template id="amqpTemplateContentTypeConverter"
connection-factory="connectionFactory" message-converter="ctConverter" />
<bean id="ctConverter"
class="o.s.amqp.support.converter.ContentTypeDelegatingMessageConverter">
<property name="delegates">
<map>
<entry key="application/json">
<bean class="o.s.amqp.support.converter.Jackson2JsonMessageConverter" />
</entry>
</map>
</property>
</bean>
Note
Starting with version 5.0, headers that are added to the MessageProperties of the outbound
message are never overwritten by mapped headers (by default). Previously, this was only the case
if the message converter was a ContentTypeDelegatingMessageConverter (in that case,
the header was mapped first, so that the proper converter could be selected). For other converters,
such as the SimpleMessageConverter, mapped headers overwrote any headers added by the
converter. This caused problems when an outbound message had some left over contentType
header (perhaps from an inbound channel adapter) and the correct outbound contentType was
incorrectly overwritten. The work-around was to use a header filter to remove the header before
sending the message to the outbound endpoint.
There are, however, cases where the previous behavior is desired. For example, with a String
payload containing JSON, the SimpleMessageConverter is not aware of the content and sets
the contentType message property to text/plain, but your application would like to override
that to application/json by setting the contentType header of the message sent to the
outbound endpoint. The ObjectToJsonTransformer does exactly that (by default).
There is now a property on the outbound channel adapter and gateway (as well as AMQP-backed
channels) headersMappedLast. Setting this to true will restore the behavior of overwriting the
property added by the converter.
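The effect of headersMappedLast can be pictured as two map-merge steps whose order depends on the flag. The following plain-Java sketch is illustrative only (the class and method names are mine, not Spring AMQP API); it shows which contentType value survives in each mode:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderMergeSketch {

    // Merge converter-set properties and mapped message headers;
    // whichever map is applied last wins on key collisions.
    static Map<String, Object> merge(Map<String, Object> converterSet,
                                     Map<String, Object> mapped,
                                     boolean headersMappedLast) {
        Map<String, Object> result = new LinkedHashMap<>();
        if (headersMappedLast) {          // pre-5.0 behavior restored
            result.putAll(converterSet);
            result.putAll(mapped);        // mapped headers overwrite
        }
        else {                            // 5.0 default
            result.putAll(mapped);
            result.putAll(converterSet);  // converter-set value survives
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> converter = Map.of("contentType", "application/json");
        Map<String, Object> mapped = Map.of("contentType", "text/plain");
        System.out.println(merge(converter, mapped, false).get("contentType")); // application/json
        System.out.println(merge(converter, mapped, true).get("contentType"));  // text/plain
    }
}
```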
<int-amqp:channel id="p2pChannel"/>
Under the covers a Queue named "si.p2pChannel" would be declared, and this channel will send to
that Queue (technically by sending to the no-name Direct Exchange with a routing key that matches
this Queue’s name). This channel will also register a consumer on that Queue. If you want the channel
to be "pollable" instead of message-driven, then simply provide the "message-driven" flag with a value
of false:
<int-amqp:publish-subscribe-channel id="pubSubChannel"/>
Under the covers a Fanout Exchange named "si.fanout.pubSubChannel" would be declared, and this
channel will send to that Fanout Exchange. This channel will also declare a server-named exclusive,
auto-delete, non-durable Queue and bind that to the Fanout Exchange while registering a consumer
on that Queue to receive Messages. There is no "pollable" option for a publish-subscribe-channel; it
must be message-driven.
Starting with version 4.1, AMQP-backed message channels, alongside channel-transacted,
support a template-channel-transacted attribute to separate the transactional configuration for the
AbstractMessageListenerContainer from that of the RabbitTemplate. Note that, previously,
channel-transacted defaulted to true; it now defaults to false, matching the standard default of
the AbstractMessageListenerContainer.
Prior to version 4.3, AMQP-backed channels only supported messages with Serializable payloads
and headers. The entire message was converted (serialized) and sent to RabbitMQ. Now, you can set
the extract-payload attribute (or setExtractPayload() when using Java configuration) to true.
When this flag is true, the message payload is converted and the headers mapped, in a similar manner
to when using channel adapters. This allows AMQP-backed channels to be used with non-serializable
payloads (perhaps with another message converter such as the Jackson2JsonMessageConverter).
The default mapped headers are discussed in Section 12.12, “AMQP Message Headers”. You
can modify the mapping by providing custom mappers using the outbound-header-mapper and
inbound-header-mapper attributes. You can now also specify a default-delivery-mode, used
to set the delivery mode when there is no amqp_deliveryMode header. By default, Spring AMQP
MessageProperties uses PERSISTENT delivery mode.
Important
Just as with other persistence-backed channels, AMQP-backed channels are intended to provide
message persistence to avoid message loss. They are not intended to distribute work to other
peer applications; for that purpose, use channel adapters instead.
Important
Starting with version 5.0, the pollable channel now blocks the poller thread for the specified
receiveTimeout (default 1 second). Previously, unlike other PollableChannels, the thread
returned immediately to the scheduler if no message was available, regardless of the receive
timeout. Blocking is a little more expensive than just using a basicGet() to retrieve a message
(with no timeout) because a consumer has to be created to receive each message. To restore the
previous behavior, set the poller receiveTimeout to 0.
The following provides an example of configuring the channels using Java configuration:
@Bean
public AmqpChannelFactoryBean pollable(ConnectionFactory connectionFactory) {
AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean();
factoryBean.setConnectionFactory(connectionFactory);
factoryBean.setQueueName("foo");
factoryBean.setPubSub(false);
return factoryBean;
}
@Bean
public AmqpChannelFactoryBean messageDriven(ConnectionFactory connectionFactory) {
AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
factoryBean.setConnectionFactory(connectionFactory);
factoryBean.setQueueName("bar");
factoryBean.setPubSub(false);
return factoryBean;
}
@Bean
public AmqpChannelFactoryBean pubSub(ConnectionFactory connectionFactory) {
AmqpChannelFactoryBean factoryBean = new AmqpChannelFactoryBean(true);
factoryBean.setConnectionFactory(connectionFactory);
factoryBean.setQueueName("baz");
factoryBean.setPubSub(true);
return factoryBean;
}
The following provides an example of configuring the channels using the Java DSL:
@Bean
public IntegrationFlow pollableInFlow(ConnectionFactory connectionFactory) {
return IntegrationFlows.from(...)
...
.channel(Amqp.pollableChannel(connectionFactory)
.queueName("foo"))
...
.get();
}
@Bean
public IntegrationFlow messageDrivenInFlow(ConnectionFactory connectionFactory) {
return IntegrationFlows.from(...)
...
.channel(Amqp.channel(connectionFactory)
.queueName("bar"))
...
.get();
}
@Bean
public IntegrationFlow pubSubInFlow(ConnectionFactory connectionFactory) {
return IntegrationFlows.from(...)
...
.channel(Amqp.publishSubscribeChannel(connectionFactory)
.queueName("baz"))
...
.get();
}
Of course, you can pass in your own implementation of AMQP specific header mappers, as the adapters
have respective properties to support that.
Any user-defined headers within the AMQP MessageProperties WILL be copied to or from an AMQP
Message, unless explicitly negated by the requestHeaderNames and/or replyHeaderNames properties
of the DefaultAmqpHeaderMapper. For an outbound mapper, no x-* headers are mapped by default;
see the caution below for the reason why.
To override the default, and revert to the pre-4.3 behavior, use STANDARD_REQUEST_HEADERS and
STANDARD_REPLY_HEADERS in the properties.
Tip
When mapping user-defined headers, the values can also contain simple wildcard patterns (e.g.
"foo*" or "*foo") to be matched. * matches all headers.
• amqp_appId
• amqp_clusterId
• amqp_contentEncoding
• amqp_contentLength
• content-type
• amqp_correlationId
• amqp_delay
• amqp_deliveryMode
• amqp_deliveryTag
• amqp_expiration
• amqp_messageCount
• amqp_messageId
• amqp_receivedDelay
• amqp_receivedDeliveryMode
• amqp_receivedExchange
• amqp_receivedRoutingKey
• amqp_redelivered
• amqp_replyTo
• amqp_timestamp
• amqp_type
• amqp_userId
• amqp_publishConfirm
• amqp_publishConfirmNackCause
• amqp_returnReplyCode
• amqp_returnReplyText
• amqp_returnExchange
• amqp_returnRoutingKey
• amqp_channel
• amqp_consumerTag
• amqp_consumerQueue
Caution
As mentioned above, using a header mapping pattern * is a common way to copy all headers.
However, this can have some unexpected side-effects because certain RabbitMQ proprietary
properties/headers will be copied as well. For example, when you use Federation, the received
message may have a property named x-received-from which contains the node that sent
the message. If you use the wildcard character * for the request and reply header mapping on
the Inbound Gateway, this header will be copied as well, which may cause some issues with
federation; this reply message may be federated back to the sending broker, which will think
that a message is looping and is thus silently dropped. If you wish to use the convenience of
wildcard header mapping, you may need to filter out some headers in the downstream flow.
For example, to avoid copying the x-received-from header back to the reply you can use
<int:header-filter ... header-names="x-received-from"> before sending the
reply to the AMQP Inbound Gateway. Alternatively, you could explicitly list those properties that
you actually want mapped instead of using wildcards. For these reasons, for inbound messages,
the mapper by default does not map any x-* headers; it also does not map the deliveryMode
to amqp_deliveryMode header, to avoid propagation of that header from an inbound message
to an outbound message. Instead, this header is mapped to amqp_receivedDeliveryMode,
which is not mapped on output.
Starting with version 4.3, patterns in the header mappings can be negated by preceding the pattern
with !. Negated patterns get priority, so a list such as STANDARD_REQUEST_HEADERS,foo,ba*,!
bar,!baz,qux,!foo will NOT map foo (nor bar nor baz); the standard headers, headers matching ba*
other than bar and baz (such as bad), and qux will be mapped.
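The precedence rule can be sketched in plain Java. The class below is illustrative only (not the DefaultAmqpHeaderMapper implementation) and deliberately omits the \! escape handling described in the Important note:

```java
import java.util.List;

public class HeaderPatternSketch {

    // Simple wildcard match supporting "foo*", "*foo" and "*";
    // illustrative only, not the mapper's actual matching code.
    static boolean patternMatches(String pattern, String header) {
        if (pattern.equals("*")) {
            return true;
        }
        if (pattern.endsWith("*")) {
            return header.startsWith(pattern.substring(0, pattern.length() - 1));
        }
        if (pattern.startsWith("*")) {
            return header.endsWith(pattern.substring(1));
        }
        return pattern.equals(header);
    }

    // Negated patterns (prefixed with '!') take priority over positive ones.
    static boolean shouldMap(List<String> patterns, String header) {
        for (String p : patterns) {               // negations checked first
            if (p.startsWith("!") && patternMatches(p.substring(1), header)) {
                return false;
            }
        }
        for (String p : patterns) {
            if (!p.startsWith("!") && patternMatches(p, header)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> patterns = List.of("foo", "ba*", "!bar", "!baz", "qux", "!foo");
        System.out.println(shouldMap(patterns, "foo"));  // false: "!foo" wins over "foo"
        System.out.println(shouldMap(patterns, "bad"));  // true: matches "ba*", not negated
        System.out.println(shouldMap(patterns, "qux"));  // true
    }
}
```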
Important
If you have a user defined header that begins with ! that you do wish to map, you need to escape
it with \ thus: STANDARD_REQUEST_HEADERS,\!myBangHeader and it WILL be mapped.
• https://github.com/SpringSource/spring-integration-samples
Currently there is one sample available that demonstrates the basic functionality of the Spring Integration
AMQP Adapter using an Outbound Channel Adapter and an Inbound Channel Adapter. The sample uses
RabbitMQ (http://www.rabbitmq.com/) as the AMQP broker implementation.
Note
In order to run the example you will need a running instance of RabbitMQ. A local installation
with just the basic defaults will be sufficient. For detailed RabbitMQ installation procedures please
visit: http://www.rabbitmq.com/install.html
Once the sample application is started, you enter some text on the command prompt and a message
containing that entered text is dispatched to the AMQP queue. In return that message is retrieved via
Spring Integration and then printed to the console.
The image below illustrates the basic set of Spring Integration components used in this sample.
<int-event:inbound-channel-adapter channel="eventChannel"
error-channel="eventErrorChannel"
event-types="example.FooEvent, example.BarEvent, java.util.Date"/>
<int:publish-subscribe-channel id="eventChannel"/>
In the above example, all Application Context events that match one of the types specified by the event-
types (optional) attribute will be delivered as Spring Integration Messages to the Message Channel
named eventChannel. If a downstream component throws an exception, a MessagingException
containing the failed message and exception will be sent to the channel named eventErrorChannel. If
no "error-channel" is specified and the downstream channels are synchronous, the Exception will be
propagated to the caller.
<int:channel id="eventChannel"/>
<int-event:outbound-channel-adapter channel="eventChannel"/>
If you are using a PollableChannel (e.g., Queue), you can also provide a poller as a sub-element of the
outbound-channel-adapter element. You can also optionally provide a task-executor reference for that
poller. The following example demonstrates both.
<int:channel id="eventChannel">
<int:queue/>
</int:channel>
<int-event:outbound-channel-adapter channel="eventChannel">
<int:poller max-messages-per-poll="1" task-executor="executor" fixed-rate="100"/>
</int-event:outbound-channel-adapter>
In the above example, all messages sent to the eventChannel channel will be published as
ApplicationEvents to any relevant ApplicationListener instances that are registered within the same
Spring ApplicationContext. If the payload of the Message is an ApplicationEvent, it will be passed as-
is. Otherwise the Message itself will be wrapped in a MessagingEvent instance.
14.1 Introduction
Web syndication is a form of publishing material such as news stories, press releases, blog posts, and
other items typically available on a website but also made available in a feed format such as RSS or
Atom.
Spring Integration provides support for Web Syndication via its feed adapter and provides convenient
namespace-based configuration for it. To configure the feed namespace, include the following elements
within the headers of your XML configuration file:
xmlns:int-feed="http://www.springframework.org/schema/integration/feed"
xsi:schemaLocation="http://www.springframework.org/schema/integration/feed
http://www.springframework.org/schema/integration/feed/spring-integration-feed.xsd"
<int-feed:inbound-channel-adapter id="feedAdapter"
channel="feedChannel"
url="http://feeds.bbci.co.uk/news/rss.xml">
<int:poller fixed-rate="10000" max-messages-per-poll="100" />
</int-feed:inbound-channel-adapter>
In the above configuration, we are subscribing to a URL identified by the url attribute.
As news items are retrieved they will be converted to Messages and sent to a
channel identified by the channel attribute. The payload of each message will be a
com.sun.syndication.feed.synd.SyndEntry instance. That encapsulates various data about a
news item (content, dates, authors, etc.).
You can also see that the Inbound Feed Channel Adapter is a Polling Consumer. That means
you have to provide a poller configuration. However, one important thing you must understand
with regard to Feeds is that its inner workings are slightly different than most other polling
consumers. When an Inbound Feed adapter is started, it does the first poll and receives a
com.sun.syndication.feed.synd.SyndFeed instance. That is an object that contains
multiple SyndEntry objects. Each entry is stored in the local entry queue and is released based on the
value in the max-messages-per-poll attribute such that each Message will contain a single entry.
If, during retrieval of the entries from the entry queue, the queue becomes empty, the adapter will
attempt to update the Feed, thereby populating the queue with more entries (SyndEntry instances) if
available. Otherwise the next attempt to poll for a feed will be determined by the trigger of the poller
(e.g., every 10 seconds in the above configuration).
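This queue-then-refresh behavior can be sketched as follows. The class is hypothetical, written for illustration only; plain Strings stand in for SyndEntry objects and a Supplier stands in for the actual feed fetch:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
import java.util.function.Supplier;

public class FeedQueueSketch {

    private final Queue<String> entries = new ArrayDeque<>();
    private final Supplier<List<String>> feedFetcher;  // stands in for the real feed retrieval

    FeedQueueSketch(Supplier<List<String>> feedFetcher) {
        this.feedFetcher = feedFetcher;
    }

    // One message per entry; refresh the queue from the feed when it runs dry.
    String receive() {
        if (entries.isEmpty()) {
            entries.addAll(feedFetcher.get());
        }
        return entries.poll();  // null when the feed had nothing new
    }

    public static void main(String[] args) {
        Queue<List<String>> polls = new ArrayDeque<>(List.of(
                List.of("entry1", "entry2"), List.of("entry3")));
        FeedQueueSketch source = new FeedQueueSketch(
                () -> polls.isEmpty() ? List.of() : polls.poll());
        System.out.println(source.receive()); // entry1
        System.out.println(source.receive()); // entry2
        System.out.println(source.receive()); // entry3 (triggers a feed refresh)
    }
}
```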
Duplicate Entries
Polling for a Feed might result in entries that have already been processed ("I already read that news
item, why are you showing it to me again?"). Spring Integration provides a convenient mechanism to
eliminate the need to worry about duplicate entries. Each feed entry will have a published date field.
Every time a new Message is generated and sent, Spring Integration will store the value of the latest
published date in an instance of the MetadataStore strategy (Section 10.5, “Metadata Store”).
Note
The key used to persist the latest published date is the value of the (required) id attribute of
the Feed Inbound Channel Adapter component plus the feedUrl (if any) from the adapter’s
configuration.
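The deduplication idea can be sketched with a simple in-memory map standing in for the MetadataStore. The class name and the use of epoch-millis longs for published dates are illustrative assumptions, not the adapter's actual code:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class FeedDedupSketch {

    // A trivial in-memory stand-in for the MetadataStore strategy.
    private final Map<String, String> metadataStore = new ConcurrentHashMap<>();

    // Accept only entries newer than the latest published date stored under this key.
    List<Long> newEntries(String key, List<Long> publishedDates) {
        long lastSeen = Long.parseLong(metadataStore.getOrDefault(key, "0"));
        List<Long> fresh = publishedDates.stream()
                .filter(date -> date > lastSeen)
                .collect(Collectors.toList());
        // Persist the newest published date we have emitted so far.
        fresh.stream().max(Long::compare)
                .ifPresent(max -> metadataStore.put(key, String.valueOf(max)));
        return fresh;
    }

    public static void main(String[] args) {
        FeedDedupSketch dedup = new FeedDedupSketch();
        System.out.println(dedup.newEntries("feedAdapter", List.of(100L, 200L)));       // [100, 200]
        System.out.println(dedup.newEntries("feedAdapter", List.of(100L, 200L, 300L))); // [300]
    }
}
```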
Other Options
@SpringBootApplication
public class FeedJavaApplication {
@Value("org/springframework/integration/feed/sample.rss")
private Resource feedResource;
@Bean
public MetadataStore metadataStore() {
PropertiesPersistingMetadataStore metadataStore = new PropertiesPersistingMetadataStore();
metadataStore.setBaseDirectory(System.getProperty("java.io.tmpdir")); // any writable directory
return metadataStore;
}
@Bean
public IntegrationFlow feedFlow() {
return IntegrationFlows
.from(Feed.inboundAdapter(this.feedResource, "feedTest")
.metadataStore(metadataStore()),
e -> e.poller(p -> p.fixedDelay(100)))
.channel(c -> c.queue("entries"))
.get();
}

}
<bean id="pollableFileSource"
class="org.springframework.integration.file.FileReadingMessageSource"
p:directory="${input.directory}"/>
To prevent creating messages for certain files, you may supply a FileListFilter. By default, the
following two filters are used:
• IgnoreHiddenFileListFilter
• AcceptOnceFileListFilter
The IgnoreHiddenFileListFilter ensures that hidden files are not processed. Please keep
in mind that the exact definition of hidden is system-dependent. For example, on UNIX-based systems,
a file beginning with a period character is considered to be hidden. Microsoft Windows, on the other
hand, has a dedicated file attribute to indicate hidden files.
Important
The IgnoreHiddenFileListFilter was introduced with version 4.2. In prior versions hidden
files were included. With the default configuration, the IgnoreHiddenFileListFilter will be
triggered first, then the AcceptOnceFileListFilter.
The AcceptOnceFileListFilter ensures files are picked up only once from the directory.
Note
The AcceptOnceFileListFilter stores its state in memory. If you wish the state to survive
a system restart, consider using the FileSystemPersistentAcceptOnceFileListFilter
instead. This filter stores the accepted file names in a MetadataStore implementation
(Section 10.5, “Metadata Store”). This filter matches on the filename and modified time.
Since version 4.0, this filter requires a ConcurrentMetadataStore. When used with a shared
data store (such as Redis with the RedisMetadataStore) this allows filter keys to be shared
across multiple application instances, or when a network file share is being used by multiple
servers.
Since version 4.1.5, this filter has a new property flushOnUpdate which will cause it to flush the
metadata store on every update (if the store implements Flushable).
<bean id="pollableFileSource"
class="org.springframework.integration.file.FileReadingMessageSource"
p:directory="${input.directory}"
p:filter-ref="customFilterBean"/>
A common problem with reading files is that a file may be detected before it is ready. The default
AcceptOnceFileListFilter does not prevent this. In most cases, this can be prevented if the
file-writing process renames each file as soon as it is ready for reading. A filename-pattern or
filename-regex filter that accepts only files that are ready (e.g. based on a known suffix), composed
with the default AcceptOnceFileListFilter allows for this. The CompositeFileListFilter
enables the composition.
<bean id="pollableFileSource"
class="org.springframework.integration.file.FileReadingMessageSource"
p:directory="${input.directory}"
p:filter-ref="compositeFilter"/>
<bean id="compositeFilter"
class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<bean class="o.s.i.file.filters.AcceptOnceFileListFilter"/>
<bean class="o.s.i.file.filters.RegexPatternFileListFilter">
<constructor-arg value="^test.*$"/>
</bean>
</list>
</constructor-arg>
</bean>
If it is not possible to create the file with a temporary name and rename to the final name, another
alternative is provided. The LastModifiedFileListFilter was added in version 4.2. This filter can
be configured with an age property and only files older than this will be passed by the filter. The age
defaults to 60 seconds, but you should choose an age that is large enough to avoid picking up a file
early, due to, say, network glitches.
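The age check amounts to comparing each file's last-modified time against the current time. The sketch below is illustrative only (it works on raw timestamps rather than File objects and is not the LastModifiedFileListFilter source):

```java
import java.util.List;
import java.util.stream.Collectors;

public class LastModifiedFilterSketch {

    // Pass only files whose last-modified time is at least `ageSeconds` in the past.
    static List<Long> filterByAge(List<Long> lastModifiedMillis, long nowMillis, long ageSeconds) {
        return lastModifiedMillis.stream()
                .filter(lm -> nowMillis - lm >= ageSeconds * 1000)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        // One file modified 120s ago, one 10s ago; with the default 60s age,
        // only the older file passes the filter.
        System.out.println(filterByAge(List.of(now - 120_000, now - 10_000), now, 60));
    }
}
```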
The pattern filter only passes a.txt and b.txt, the "done" filter will see all three files and only pass
a.txt. The final result of the composite filter is only a.txt is released.
Note
With the ChainFileListFilter, if any filter in the chain returns an empty list, the remaining
filters are not invoked.
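That short-circuit can be sketched as follows; the class is illustrative only, with Function used as a stand-in for FileListFilter:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ChainFilterSketch {

    // Apply filters in order; if any filter returns an empty list,
    // the remaining filters are not invoked.
    @SafeVarargs
    static List<String> chain(List<String> files, Function<List<String>, List<String>>... filters) {
        List<String> current = files;
        for (Function<List<String>, List<String>> filter : filters) {
            current = filter.apply(current);
            if (current.isEmpty()) {
                return current;  // short-circuit: later filters never run
            }
        }
        return current;
    }

    public static void main(String[] args) {
        Function<List<String>, List<String>> txtOnly = list -> list.stream()
                .filter(f -> f.endsWith(".txt")).collect(Collectors.toList());
        Function<List<String>, List<String>> startsWithA = list -> list.stream()
                .filter(f -> f.startsWith("a")).collect(Collectors.toList());
        System.out.println(chain(List.of("a.txt", "b.txt", "c.dat"), txtOnly, startsWithA)); // [a.txt]
        System.out.println(chain(List.of("c.dat"), txtOnly, startsWithA)); // [] - second filter skipped
    }
}
```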
Starting with version 5.0, an ExpressionFileListFilter has been introduced, allowing you to
execute a SpEL expression against each file as the context evaluation root object. For this purpose, all
the XML components for file handling (local and remote) now provide a filter-expression option
alongside the existing filter attribute:
<int-file:inbound-channel-adapter
directory="${inputdir}"
filter-expression="name matches '.text'"
auto-startup="false"/>
Message Headers
Starting with version 5.0 the FileReadingMessageSource, in addition to the payload as a polled
File, populates these headers to the outbound Message:
• FileHeaders.FILENAME - the File.getName() of the file to send. Can be used for subsequent
rename or copy logic;
• FileHeaders.RELATIVE_PATH - a new header introduced to represent the part of the file path relative
to the root directory for the scan. This header can be useful when the requirement is to restore the
source directory hierarchy elsewhere. For this purpose the DefaultFileNameGenerator
(the section called “Generating File Names”) can be configured to use this header.
The FileReadingMessageSource doesn’t produce messages for files from the directory
immediately. It uses an internal queue for eligible files returned by the scanner. The scanEachPoll
option is used to ensure that the internal queue is refreshed with the latest input directory content
on each poll. By default (scanEachPoll = false), the FileReadingMessageSource empties
its queue before scanning the directory again. This default behavior is particularly useful to reduce
scans of large numbers of files in a directory. However, in cases where custom ordering is
required, it is important to consider the effects of setting this flag to true; the order in which
files are processed may not be as expected. By default, files in the queue are processed in
their natural (path) order. New files added by a scan, even when the queue already has files,
are inserted in the appropriate position to maintain that natural order. To customize the order,
the FileReadingMessageSource can accept a Comparator<File> as a constructor argument.
It is used by the internal PriorityBlockingQueue to reorder its content according to the
business requirements. Therefore, to process files in a specific order, you should provide a
comparator to the FileReadingMessageSource, rather than ordering the list produced by a custom
DirectoryScanner.
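The effect of supplying a Comparator<File> can be demonstrated directly with a PriorityBlockingQueue, the same queue type the message source uses internally. The helper class below is illustrative only, not Spring Integration code:

```java
import java.io.File;
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class FileOrderSketch {

    // Return the first file the queue would release; a null comparator
    // means natural (path) order, the FileReadingMessageSource default.
    static File head(Comparator<File> comparator, File... files) {
        PriorityBlockingQueue<File> queue;
        if (comparator == null) {
            queue = new PriorityBlockingQueue<>();
        }
        else {
            queue = new PriorityBlockingQueue<>(11, comparator);
        }
        for (File file : files) {
            queue.add(file);
        }
        return queue.poll();
    }

    public static void main(String[] args) {
        // Natural order releases a.txt first...
        System.out.println(head(null, new File("b.txt"), new File("a.txt")));
        // ...while a custom comparator (here, reverse name order) releases b.txt first.
        System.out.println(head(Comparator.comparing(File::getName).reversed(),
                new File("a.txt"), new File("b.txt")));
    }
}
```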
Namespace Support
The configuration for file reading can be simplified using the file specific namespace. To do this use
the following template.
Within this namespace you can reduce the FileReadingMessageSource configuration and wrap it in
an inbound Channel Adapter like this:
<int-file:inbound-channel-adapter id="filesIn1"
directory="file:${input.directory}" prevent-duplicates="true" ignore-hidden="true"/>
<int-file:inbound-channel-adapter id="filesIn2"
directory="file:${input.directory}"
filter="customFilterBean" />
<int-file:inbound-channel-adapter id="filesIn3"
directory="file:${input.directory}"
filename-pattern="test*" />
<int-file:inbound-channel-adapter id="filesIn4"
directory="file:${input.directory}"
filename-regex="test[0-9]+\.txt" />
Therefore, you can also leave off the 2 attributes prevent-duplicates and ignore-hidden as
they are true by default.
Important
The ignore-hidden attribute was introduced with Spring Integration 4.2. In prior versions hidden
files were included.
The second channel adapter example is using a custom filter, the third is using the filename-
pattern attribute to add an AntPathMatcher based filter, and the fourth is using the filename-
regex attribute to add a regular expression Pattern based filter to the FileReadingMessageSource.
The filename-pattern and filename-regex attributes are each mutually exclusive with the regular
filter reference attribute. However, you can use the filter attribute to reference an instance of
CompositeFileListFilter that combines any number of filters, including one or more pattern based
filters to fit your particular needs.
When multiple processes are reading from the same directory it can be desirable to lock files to prevent
them from being picked up concurrently. To do this you can use a FileLocker. There is a java.nio
based implementation available out of the box, but it is also possible to implement your own locking
scheme. The nio locker can be injected as follows
<int-file:inbound-channel-adapter id="filesIn"
directory="file:${input.directory}" prevent-duplicates="true">
<int-file:nio-locker/>
</int-file:inbound-channel-adapter>
<int-file:inbound-channel-adapter id="filesIn"
directory="file:${input.directory}" prevent-duplicates="true">
<int-file:locker ref="customLocker"/>
</int-file:inbound-channel-adapter>
Note
When a file inbound adapter is configured with a locker, it takes responsibility for acquiring a
lock before a file is allowed to be received. It does not assume responsibility for unlocking the
file. If you have processed the file but keep the lock hanging around, you have a memory leak.
If this is a problem in your case, you should call FileLocker.unlock(File file) yourself
at the appropriate time.
When filtering and locking files is not enough, you might need to control entirely how files are listed.
To implement this type of requirement you can use an implementation of DirectoryScanner.
This scanner lets you determine exactly which files are listed on each poll. This is also the
interface that Spring Integration uses internally to wire FileListFilters and the FileLocker to
the FileReadingMessageSource. A custom DirectoryScanner can be injected into the <int-
file:inbound-channel-adapter/> on the scanner attribute.
This gives you full freedom to choose the ordering, listing and locking strategies.
WatchServiceDirectoryScanner
Note
There is a case with WatchKey, when its internal events queue isn’t drained by the program
as quickly as the directory modification events occur. If the queue size is exceeded, a
StandardWatchEventKinds.OVERFLOW is emitted to indicate that some file system events
may be lost. In this case, the root directory is re-scanned completely. To avoid duplicates consider
using an appropriate FileListFilter such as the AcceptOnceFileListFilter and/or
remove files when processing is completed.
The ENTRY_MODIFY event logic should be implemented properly in the FileListFilter to track
not only new files but also modifications, if that is a requirement. Otherwise, the files from those events
are treated the same way.
The ENTRY_DELETE events have effect for the ResettableFileListFilter implementations and,
therefore, their files are provided for the remove() operation. This means that (when this event is
enabled), filters such as the AcceptOnceFileListFilter will have the file removed, meaning that,
if a file with the same name appears, it will pass the filter and be sent as a message.
<int-file:inbound-channel-adapter id="newFiles"
directory="${input.directory}"
use-watch-service="true"/>
<int-file:inbound-channel-adapter id="modifiedFiles"
directory="${input.directory}"
use-watch-service="true"
filter="acceptAllFilter"
watch-events="MODIFY"/> <!-- CREATE by default -->
A HeadDirectoryScanner can be used to limit the number of files retained in memory. This can be
useful when scanning large directories. With XML configuration, this is enabled using the queue-size
property on the inbound channel adapter.
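The effect of the queue-size limit can be sketched in plain Java (the class name is hypothetical; the real HeadDirectoryScanner applies this limit while scanning):

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

// Sketch of what a head limit (queue-size in XML) does:
// keep only the first maxItems entries of a directory listing,
// so the rest of the directory is never held in memory.
class HeadLimiter {
    static List<File> limit(File[] listing, int maxItems) {
        if (listing == null) {
            return List.of();
        }
        return Arrays.asList(listing).subList(0, Math.min(maxItems, listing.length));
    }
}
```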
Note
Prior to version 4.2, this setting was incompatible with the use of any other filters. Any other filters
(including prevent-duplicates="true") overwrote the filter used to limit the size.
The following Spring Boot application provides an example of configuring the inbound adapter using
Java configuration:
@SpringBootApplication
public class FileReadingJavaApplication {
@Bean
public MessageChannel fileInputChannel() {
return new DirectChannel();
}
@Bean
@InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "1000"))
public MessageSource<File> fileReadingMessageSource() {
FileReadingMessageSource source = new FileReadingMessageSource();
source.setDirectory(new File(INBOUND_PATH));
source.setFilter(new SimplePatternFileListFilter("*.txt"));
return source;
}
@Bean
@Transformer(inputChannel = "fileInputChannel", outputChannel = "processFileChannel")
public FileToStringTransformer fileToStringTransformer() {
return new FileToStringTransformer();
}
}
The following Spring Boot application provides an example of configuring the inbound adapter using
the Java DSL:
@SpringBootApplication
public class FileReadingJavaApplication {
@Bean
public IntegrationFlow fileReadingFlow() {
return IntegrationFlows
.from(s -> s.file(new File(INBOUND_PATH))
.patternFilter("*.txt"),
e -> e.poller(Pollers.fixedDelay(1000)))
.transform(Transformers.fileToString())
.channel("processFileChannel")
.get();
}
}
‘Tail’ing Files
Another popular use case is to get lines from the end (or tail) of a file,
capturing new lines when they are added. Two implementations are provided; the first,
OSDelegatingFileTailingMessageProducer, uses the native tail command (on operating
systems that have one). This is likely the most efficient implementation on those platforms.
For operating systems that do not have a tail command, the second implementation,
ApacheCommonsFileTailingMessageProducer, uses the Apache commons-io Tailer
class.
In both cases, file system events, such as a file becoming unavailable, are published as
ApplicationEvents using the normal Spring event publishing mechanism. Such a sequence of events
might occur, for example, when a file is rotated.
Starting with version 5.0, a FileTailingIdleEvent is emitted when there is no data in the file during
the idleEventInterval.
Note
Not all platforms supporting a tail command provide these status messages.
Example configurations:
<int-file:tail-inbound-channel-adapter id="native"
channel="input"
task-executor="exec"
file="/tmp/foo"/>
This creates a native adapter with default -F -n 0 options (follow the file name from the current end).
<int-file:tail-inbound-channel-adapter id="native"
channel="input"
native-options="-F -n +0"
task-executor="exec"
file-delay="10000"
file="/tmp/foo"/>
This creates a native adapter with -F -n +0 options (follow the file name, emitting all existing lines). If the
tail command fails (on some platforms, a missing file causes the tail to fail, even with -F specified),
the command will be retried every 10 seconds.
<int-file:tail-inbound-channel-adapter id="native"
channel="input"
enable-status-reader="false"
task-executor="exec"
file="/tmp/foo"/>
By default, the native adapter captures lines from standard output and sends them as messages; it
captures lines from standard error to raise events. Starting with version 4.3.6, you can discard the
standard error events by setting enable-status-reader to false.
<int-file:tail-inbound-channel-adapter id="native"
channel="input"
idle-event-interval="5000"
task-executor="exec"
file="/tmp/foo"/>
This creates a native adapter that emits a FileTailingIdleEvent when no data is written to the file for 5
seconds.
<int-file:tail-inbound-channel-adapter id="apache"
channel="input"
task-executor="exec"
file="/tmp/bar"
delay="2000"
end="false"
reopen="true"
file-delay="10000"/>
This creates an Apache commons-io Tailer adapter that examines the file for new lines every 2
seconds, and checks for existence of a missing file every 10 seconds. The file will be tailed from the
beginning (end="false") instead of the end (which is the default). The file will be reopened for each
chunk (the default is to keep the file open).
Important
Specifying the delay, end or reopen attributes forces the use of the Apache commons-io
adapter; the native-options attribute is not allowed.
Another common technique is to write a second "marker" file to indicate that the file transfer
is complete. In this scenario, for example, you should not consider foo.txt to be available for
use until foo.txt.complete is also present. Spring Integration version 5.0 introduces
new filters to support this mechanism. Implementations are provided for the file system
(FileSystemMarkerFilePresentFileListFilter), FTP and SFTP. They are configurable such
that the marker file can have any name, although it will usually be related to the file being transferred.
See the javadocs for more information.
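The idea behind such a filter can be sketched as follows. This is a simplified, hypothetical illustration of the marker-file technique, not the shipped FileSystemMarkerFilePresentFileListFilter (which, as noted above, lets you configure how the marker name is derived):

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Sketch of a marker-file filter: a data file passes only when a companion
// "<name><suffix>" marker exists alongside it. The suffix is an assumption
// for illustration; the real filter's naming strategy is configurable.
class MarkerFilePresentFilter {
    private final String markerSuffix;

    MarkerFilePresentFilter(String markerSuffix) {
        this.markerSuffix = markerSuffix;
    }

    List<File> filterFiles(File[] files) {
        List<File> accepted = new ArrayList<>();
        for (File f : files) {
            if (f.getName().endsWith(markerSuffix)) {
                continue; // never emit the marker file itself
            }
            File marker = new File(f.getParentFile(), f.getName() + markerSuffix);
            if (marker.exists()) {
                accepted.add(f); // transfer is complete: marker present
            }
        }
        return accepted;
    }
}
```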
To write messages to the file system, you can use a FileWritingMessageHandler. This class supports
the following payload types:
• File
• String
• byte array
You can configure the encoding and the charset that will be used in case of a String payload.
To make things easier, you can configure the FileWritingMessageHandler as part of an Outbound
Channel Adapter or Outbound Gateway using the provided XML namespace support.
Starting with version 4.3, you can specify the buffer size to use when writing files.
Alternatively, you can specify an expression to be evaluated against the Message in order to generate
a file name, e.g. headers[myCustomHeader] + '.foo'. The expression must evaluate to a String.
Once set up, the DefaultFileNameGenerator will employ the following resolution steps to determine
the filename for a given Message payload:
1. Evaluate the expression against the Message and, if the result is a non-empty String, use it as
the filename.
2. Otherwise, if the payload is a java.io.File, use the file's filename.
3. Otherwise, use the Message ID appended with .msg as the filename.
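The expression-plus-fallback resolution can be sketched in plain Java. Here a header lookup stands in for SpEL evaluation, and the ".msg" fallback on the message id mirrors the generator's default behavior; the class and method names are hypothetical:

```java
import java.util.Map;

// Sketch of filename resolution: if evaluating the configured expression
// (simulated as a header lookup) yields a non-empty String, use it;
// otherwise fall back to the message id plus ".msg".
class FileNameResolver {
    static String resolve(Map<String, Object> headers, String headerName, String messageId) {
        Object value = headers.get(headerName);
        if (value instanceof String && !((String) value).isEmpty()) {
            return (String) value; // expression produced a usable name
        }
        return messageId + ".msg"; // fallback
    }
}
```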
When using the XML namespace support, both the File Outbound Channel Adapter and the File
Outbound Gateway support the following two mutually exclusive configuration attributes:
While writing files, a temporary file suffix will be used (default: .writing). It is appended to the filename
while the file is being written. To customize the suffix, you can set the temporary-file-suffix attribute on
both the File Outbound Channel Adapter and the File Outbound Gateway.
Note
When using the APPEND file mode, the temporary-file-suffix attribute is ignored, since the data
is appended to the file directly.
Starting with version 4.2.5, the generated file name (the result of filename-generator/filename-
generator-expression evaluation) can represent a sub-path together with the target file name. It
is used as the second constructor argument for File(File parent, String child) as before, but
in the past we did not create (mkdirs()) the directories for the sub-path, assuming only a file name. This
approach is useful when you need to restore the file system tree to match the source directory,
for example when unzipping an archive and saving all the files in the target directory in the same structure.
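The sub-path behavior can be sketched as follows; the class is a hypothetical illustration of creating the intermediate directories before writing, not the adapter's actual implementation:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch of the 4.2.5 behavior: a generated file name may contain a sub-path,
// and the missing intermediate directories are created before writing.
class SubPathWriter {
    static File write(File targetDirectory, String generatedName, byte[] content) throws IOException {
        File destination = new File(targetDirectory, generatedName);
        destination.getParentFile().mkdirs(); // create the sub-path directories
        Files.write(destination.toPath(), content);
        return destination;
    }
}
```

With generatedName = "a/b/data.txt", the directories a/b are created under the target directory and the file is written inside them.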
• directory
• directory-expression
Note
When using the directory attribute, the output directory will be set to a fixed value, that is set at
initialization time of the FileWritingMessageHandler. If you don’t specify this attribute, then you
must use the directory-expression attribute.
If you want to have full SpEL support you would choose the directory-expression attribute. This attribute
accepts a SpEL expression that is evaluated for each message being processed. Thus, you have full
access to a Message’s payload and its headers to dynamically specify the output file directory.
The SpEL expression must resolve to either a String or to java.io.File. Furthermore the resulting
String or File must point to a directory. If you don’t specify the directory-expression attribute, then
you must set the directory attribute.
If the destination directory does not yet exist, by default the respective destination directory and any
non-existing parent directories are created automatically. You can set the auto-create-directory
attribute to false in order to prevent that. This attribute applies to both the directory and the directory-
expression attributes.
Note
When using the directory attribute and auto-create-directory is false, the following change was
made starting with Spring Integration 2.2:
Instead of checking for the existence of the destination directory at initialization time of the adapter,
this check is now performed for each message being processed.
Furthermore, if auto-create-directory is true and the directory was deleted between the
processing of messages, the directory will be re-created for each message being processed.
• REPLACE (Default)
• REPLACE_IF_MODIFIED
• APPEND
• APPEND_NO_FLUSH
• FAIL
• IGNORE
Note
The mode attribute and the options APPEND, FAIL and IGNORE, are available since Spring
Integration 2.2.
REPLACE
If the target file already exists, it will be overwritten. If the mode attribute is not specified, then this is
the default behavior when writing files.
REPLACE_IF_MODIFIED
If the target file already exists, it will be overwritten only if its last modified timestamp differs from that
of the source file. For File payloads, the payload's lastModified time is compared to the existing file. For
other payloads, the FileHeaders.SET_MODIFIED (file_setModified) header is compared to the
existing file. If the header is missing, or has a value that is not a Number, the file is always replaced.
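The decision for non-File payloads can be sketched as follows (a hypothetical illustration of the rule just described, not the handler's actual code):

```java
import java.io.File;

// Sketch of the REPLACE_IF_MODIFIED decision for non-File payloads:
// replace unless the file_setModified header is a Number equal to the
// destination file's lastModified timestamp.
class ReplaceIfModifiedDecision {
    static boolean shouldReplace(File existing, Object setModifiedHeader) {
        if (!(setModifiedHeader instanceof Number)) {
            return true; // header missing or not a Number: always replace
        }
        return ((Number) setModifiedHeader).longValue() != existing.lastModified();
    }
}
```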
APPEND
This mode allows you to append Message content to the existing file instead of creating a new file
each time. Note that this attribute is mutually exclusive with temporary-file-suffix attribute since when
appending content to the existing file, the adapter no longer uses a temporary file. The file is closed
after each message.
APPEND_NO_FLUSH
This has the same semantics as APPEND, but the data is not flushed and the file is not closed after each
message. This can provide a significant performance improvement, at the risk of data loss in the case of
a failure. See the section called “Flushing Files When using APPEND_NO_FLUSH” for more information.
FAIL
If the target file already exists, an exception is thrown.
IGNORE
If the target file already exists, the message payload is silently ignored and the existing file is left
unchanged.
Note
When using a temporary file suffix (default: .writing), the IGNORE mode will apply if the final
file name exists, or the temporary file name exists.
• flushInterval - if a file is not written to for this period of time, it is automatically flushed. This is
approximate and may be up to 1.33x this time (with an average of 1.167x).
• Send a message to the message handler’s trigger method containing a regular expression. Files
with absolute path names matching the pattern will be flushed.
• Provide the handler with a custom MessageFlushPredicate implementation to modify the action
taken when a message is sent to the trigger method.
The predicates are called for each open file. See the java docs for these interfaces for more information.
Note that, since version 5.0, the predicate methods provide another parameter - the time that the current
file was first written to if new or previously closed.
When using flushInterval, the interval starts at the last write; the file is flushed only if it is idle for
the interval. Starting with version 4.3.7, an additional property, flushWhenIdle, can be set to false,
meaning that the interval starts with the first write to a previously flushed (or new) file.
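The timing rule can be sketched as a pure function (a hypothetical illustration of the two modes, not the handler's implementation):

```java
// Sketch of the APPEND_NO_FLUSH timing rules. With flushWhenIdle=true (default),
// the clock restarts on every write, so a file is flushed only after being idle
// for flushInterval; with flushWhenIdle=false, the clock runs from the first
// write after the previous flush.
class FlushDecision {
    static boolean shouldFlush(long now, long lastWrite, long firstWrite,
            long flushInterval, boolean flushWhenIdle) {
        long reference = flushWhenIdle ? lastWrite : firstWrite;
        return now - reference >= flushInterval;
    }
}
```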
File Timestamps
By default, the destination file lastModified timestamp will be the time the file was created (except
a rename in-place will retain the current timestamp). Starting with version 4.3, you can now configure
preserve-timestamp (or setPreserveTimestamp(true) when using Java configuration). For
File payloads, this will transfer the timestamp from the inbound file to the outbound (regardless
of whether a copy was required). For other payloads, if the FileHeaders.SET_MODIFIED header
(file_setModified) is present, it will be used to set the destination file’s lastModified timestamp,
as long as the header is a Number.
File Permissions
Starting with version 5.0, when writing files to a file system that supports Posix permissions, you can
specify those permissions on the outbound channel adapter or gateway. The property is an integer and
is usually supplied in the familiar octal format; e.g. 0640 means the owner has read/write permissions,
the group has read-only permission, and others have no access.
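The mapping from an octal mode to Posix permissions can be sketched with the JDK's own types; this is a hypothetical helper, not the adapter's internal code:

```java
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Sketch of how an octal mode such as 0640 maps to Posix permissions:
// each octal digit contributes r/w/x for owner, group and others.
class OctalPermissions {
    static Set<PosixFilePermission> fromOctal(int mode) {
        StringBuilder rwx = new StringBuilder();
        for (int shift = 6; shift >= 0; shift -= 3) {
            int digit = (mode >> shift) & 7;
            rwx.append((digit & 4) != 0 ? 'r' : '-');
            rwx.append((digit & 2) != 0 ? 'w' : '-');
            rwx.append((digit & 1) != 0 ? 'x' : '-');
        }
        return PosixFilePermissions.fromString(rwx.toString());
    }
}
```

For example, fromOctal(0640) yields the permission set for "rw-r-----".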
The namespace based configuration also supports a delete-source-files attribute. If set to true,
it will trigger the deletion of the original source files after writing to a destination. The default value for
that flag is false.
<int-file:outbound-channel-adapter id="filesOut"
directory="${output.directory}"
delete-source-files="true"/>
Note
The delete-source-files attribute will only have an effect if the inbound Message has a File
payload or if the FileHeaders.ORIGINAL_FILE header value contains either the source File
instance or a String representing the original file path.
<int-file:outbound-channel-adapter id="newlineAdapter"
append-new-line="true"
directory="${output.directory}"/>
Outbound Gateway
In cases where you want to continue processing messages based on the written file, you can use
the outbound-gateway instead. It plays a very similar role as the outbound-channel-adapter.
However, after writing the file, it will also send it to the reply channel as the payload of a Message.
As mentioned earlier, you can also specify the mode attribute, which defines the behavior of how to
deal with situations where the destination file already exists. Please see the section called “Dealing with
Existing Destination Files” for further details. Generally, when using the File Outbound Gateway, the
result file is returned as the Message payload on the reply channel.
This also applies when specifying the IGNORE mode. In that case the pre-existing destination file is
returned. If the payload of the request message was a file, you still have access to that original file
through the Message Header FileHeaders.ORIGINAL_FILE.
Note
The outbound-gateway works well in cases where you want to first move a file and then send it
through a processing pipeline. In such cases, you may connect the file namespace’s inbound-
channel-adapter element to the outbound-gateway and then connect that gateway’s
reply-channel to the beginning of the pipeline.
If you have more elaborate requirements or need to support additional payload types as input to be
converted to file content you could extend the FileWritingMessageHandler, but a much better
option is to rely on a Transformer.
The following Spring Boot application provides an example of configuring the outbound adapter using
Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class FileWritingJavaApplication {
@Bean
@ServiceActivator(inputChannel = "writeToFileChannel")
public MessageHandler fileWritingMessageHandler() {
Expression directoryExpression = new
SpelExpressionParser().parseExpression("headers.directory");
FileWritingMessageHandler handler = new FileWritingMessageHandler(directoryExpression);
handler.setFileExistsMode(FileExistsMode.APPEND);
return handler;
}
@MessagingGateway(defaultRequestChannel = "writeToFileChannel")
public interface MyGateway {
}
}
The following Spring Boot application provides an example of configuring the outbound adapter using
the Java DSL:
@SpringBootApplication
public class FileWritingJavaApplication {
@Bean
public IntegrationFlow fileWritingFlow() {
return IntegrationFlows.from("fileWritingInput")
.enrichHeaders(h -> h.header(FileHeaders.FILENAME, "foo.txt")
.header("directory", new File(tmpDir.getRoot(), "fileWritingFlow")))
.handleWithAdapter(a -> a.fileGateway(m -> m.getHeaders().get("directory")))
.channel(MessageChannels.queue("fileWritingResultChannel"))
.get();
}
}
FileToStringTransformer will convert Files to Strings as the name suggests. If nothing else, this
can be useful for debugging (consider using with a Wire Tap).
To configure File specific transformers you can use the appropriate elements from the file namespace.
The delete-files option signals to the transformer that it should delete the inbound File
after the transformation is complete. This is in no way a replacement for using the
AcceptOnceFileListFilter when the FileReadingMessageSource is being used in a multi-
threaded environment (e.g. Spring Integration in general).
Inbound payloads can be File, String (a File path), InputStream, or Reader. Other payload
types will be emitted unchanged.
<int-file:splitter id="splitter" ❶
iterator="" ❷
markers="" ❸
markers-json="" ❹
apply-sequence="" ❺
requires-reply="" ❻
charset="" ❼
first-line-as-header="" ❽
input-channel="" ❾
output-channel="" ❿
send-timeout="" 11
auto-startup="" 12
order="" 13
phase="" /> 14
12 Set to false to disable automatically starting the splitter when the context is refreshed. Default:
true.
13 Set the order of this endpoint if the input-channel is a <publish-subscribe-channel/>.
14 Set the startup phase for the splitter (used when auto-startup is true).
The FileSplitter will also split any text-based InputStream into lines. When used in conjunction
with an FTP or SFTP streaming inbound channel adapter, or an FTP or SFTP outbound gateway using
the stream option to retrieve a file, starting with version 4.3, the splitter will automatically close the
session supporting the stream, when the file is completely consumed. See Section 16.5, “FTP Streaming
Inbound Channel Adapter” and Section 28.8, “SFTP Streaming Inbound Channel Adapter” as well
as Section 16.8, “FTP Outbound Gateway” and Section 28.11, “SFTP Outbound Gateway” for more
information about these facilities.
When markersJson is true, the markers will be represented as a JSON string, as long as a suitable
JSON processor library, such as Jackson or Boon, is on the classpath.
Starting with version 5.0, the firstLineAsHeader option is introduced to specify that the first line
of content is a header (such as column names in a CSV file). The argument passed to this property
is the header name under which the first line will be carried as a header in the messages emitted for
the remaining lines. This line is not included in the sequence header (if applySequence is true) nor
in the FileMarker.END lineCount. If the file contains only the header line, the file is treated as empty
and therefore only FileMarkers are emitted during splitting (if markers are enabled; otherwise, no
messages are emitted). By default (if no header name is set), the first line is considered to be data and
will be the payload of the first emitted message.
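The firstLineAsHeader semantics can be sketched without Spring; headers and payloads are represented here as plain maps, and the class name is hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of firstLineAsHeader: the first line becomes a header value carried
// with every emitted line; a header-only file yields no data messages.
class FirstLineAsHeaderSplitter {
    static List<Map<String, String>> split(List<String> lines, String headerName) {
        List<Map<String, String>> messages = new ArrayList<>();
        if (lines.isEmpty()) {
            return messages;
        }
        String headerLine = lines.get(0);
        for (String line : lines.subList(1, lines.size())) {
            Map<String, String> message = new LinkedHashMap<>();
            message.put(headerName, headerLine); // header taken from the first line
            message.put("payload", line);
            messages.add(message);
        }
        return messages;
    }
}
```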
If you need more complex logic for extracting headers from the file content (not the first line, not the
whole content of the line, more than one header, etc.), consider using a Header Enricher upstream of the
FileSplitter. Lines that have been moved to headers can then be filtered downstream from the normal
content processing.
The following Spring Boot application provides an example of configuring the inbound adapter using
the Java DSL:
@SpringBootApplication
public class FileSplitterApplication {
@Bean
public IntegrationFlow fileSplitterFlow() {
return IntegrationFlows
.from(Files.inboundAdapter(tmpDir.getRoot())
.filter(new ChainFileListFilter<File>()
.addFilter(new AcceptOnceFileListFilter<>())
.addFilter(new ExpressionFileListFilter<>(
new FunctionExpression<File>(f -> "foo.tmp".equals(f.getName()))))))
.split(Files.splitter()
.markers()
.charset(StandardCharsets.US_ASCII)
.firstLineAsHeader("fileHeader")
.applySequence(true))
.channel(c -> c.queue("fileSplittingResultChannel"))
.get();
}
}
16.1 Introduction
The File Transfer Protocol (FTP) is a simple network protocol which allows you to transfer files between
two computers on the Internet.
There are two actors when it comes to FTP communication: client and server. To transfer files with FTP/
FTPS, you use a client which initiates a connection to a remote computer that is running an FTP server.
After the connection is established, the client can choose to send and/or receive copies of files.
Spring Integration supports sending and receiving files over FTP/FTPS by providing three client side
endpoints: Inbound Channel Adapter, Outbound Channel Adapter, and Outbound Gateway. It also
provides convenient namespace-based configuration options for defining these client components.
To use the FTP namespace, add the following to the header of your XML file:
xmlns:int-ftp="http://www.springframework.org/schema/integration/ftp"
xsi:schemaLocation="http://www.springframework.org/schema/integration/ftp
http://www.springframework.org/schema/integration/ftp/spring-integration-ftp.xsd"
Default Factories
Important
Starting with version 3.0, sessions are no longer cached by default. See Section 16.9, “FTP
Session Caching”.
Before configuring FTP adapters, you must configure an FTP Session Factory. You can configure
the FTP Session Factory with a regular bean definition where the implementation class is
org.springframework.integration.ftp.session.DefaultFtpSessionFactory. Below is
a basic configuration:
<bean id="ftpClientFactory"
class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
<property name="host" value="localhost"/>
<property name="port" value="22"/>
<property name="username" value="kermit"/>
<property name="password" value="frog"/>
<property name="clientMode" value="0"/>
<property name="fileType" value="2"/>
<property name="bufferSize" value="100000"/>
</bean>
<bean id="ftpClientFactory"
class="org.springframework.integration.ftp.session.DefaultFtpsSessionFactory">
<property name="host" value="localhost"/>
<property name="port" value="22"/>
<property name="username" value="oleg"/>
<property name="password" value="password"/>
<property name="clientMode" value="1"/>
<property name="fileType" value="2"/>
<property name="useClientMode" value="true"/>
<property name="cipherSuites" value="a,b.c"/>
<property name="keyManager" ref="keyManager"/>
<property name="protocol" value="SSL"/>
<property name="trustManager" ref="trustManager"/>
<property name="prot" value="P"/>
<property name="needClientAuth" value="true"/>
<property name="authValue" value="oleg"/>
<property name="sessionCreation" value="true"/>
<property name="protocols" value="SSL, TLS"/>
<property name="implicit" value="true"/>
</bean>
Every time an adapter requests a session object from its SessionFactory the session is returned
from a session pool maintained by a caching wrapper around the factory. A Session in the session pool
might go stale (if it has been disconnected by the server due to inactivity) so the SessionFactory will
perform validation to make sure that it never returns a stale session to the adapter. If a stale session
was encountered, it will be removed from the pool, and a new one will be created.
Note
If you experience connectivity problems and would like to trace Session creation, as well as
see which Sessions are polled, you may enable tracing by setting the logger to TRACE level (e.g.,
log4j.category.org.springframework.integration.file=TRACE).
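The validate-on-checkout behavior of the caching wrapper can be sketched as follows. Session here is a minimal stand-in, not Spring Integration's Session interface, and the pool logic is a hypothetical illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the stale-session check a caching session factory performs:
// pooled sessions are validated on checkout and stale ones are discarded,
// so the adapter never receives a dead connection.
class SessionPoolSketch {
    interface Session { boolean isOpen(); }

    private final Deque<Session> pool = new ArrayDeque<>();

    void release(Session session) {
        pool.push(session);
    }

    Session borrow() {
        while (!pool.isEmpty()) {
            Session candidate = pool.pop();
            if (candidate.isOpen()) {
                return candidate; // healthy pooled session
            }
            // stale: drop it and keep looking
        }
        return () -> true; // stand-in for "create a new session"
    }
}
```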
Now all you need to do is inject these session factories into your adapters. Obviously the protocol (FTP
or FTPS) that an adapter will use depends on the type of session factory that has been injected into
the adapter.
Note
A more practical way to provide values for FTP/FTPS Session Factories is by using Spring’s
property placeholder support (See: http://docs.spring.io/spring/docs/current/spring-framework-reference/html/beans.html#beans-factory-placeholderconfigurer).
Advanced Configuration
DefaultFtpSessionFactory provides an abstraction over the underlying client API which, since
Spring Integration 2.0, is Apache Commons Net. This spares you from the low level configuration details
of the org.apache.commons.net.ftp.FTPClient. Several common properties are exposed on
the session factory (since version 4.0, this now includes connectTimeout, defaultTimeout and
dataTimeout). However, there are times when access to lower-level FTPClient configuration is
necessary to achieve more advanced configuration (e.g., setting the port range for active mode). For
that purpose, AbstractFtpSessionFactory (the base class for all FTP Session Factories) exposes
hooks, in the form of the two post-processing methods below.
/**
* Will handle additional initialization after client.connect() method was invoked,
* but before any action on the client has been taken
*/
protected void postProcessClientAfterConnect(T t) throws IOException {
// NOOP
}
/**
* Will handle additional initialization before client.connect() method was invoked.
*/
protected void postProcessClientBeforeConnect(T client) throws IOException {
// NOOP
}
As you can see, there is no default implementation for these two methods. However, by
extending DefaultFtpSessionFactory you can override these methods to provide more advanced
configuration of the FTPClient. For example:
When using FTP over SSL/TLS, some servers require the same SSLSession to be used on the control
and data connections; this is to prevent "stealing" data connections; see here for more information.
Currently, the Apache FTPSClient does not support this feature - see NET-408.
@Bean
public DefaultFtpsSessionFactory sf() {
DefaultFtpsSessionFactory sf = new DefaultFtpsSessionFactory() {
@Override
protected FTPSClient createClientInstance() {
return new SharedSSLFTPSClient();
}
};
sf.setHost("...");
sf.setPort(21);
sf.setUsername("...");
sf.setPassword("...");
sf.setNeedClientAuth(true);
return sf;
}
@Override
protected void _prepareDataSocket_(final Socket socket) throws IOException {
if (socket instanceof SSLSocket) {
// Control socket is SSL
final SSLSession session = ((SSLSocket) _socket_).getSession();
final SSLSessionContext context = session.getSessionContext();
context.setSessionCacheSize(0); // you might want to limit the cache
try {
final Field sessionHostPortCache = context.getClass()
.getDeclaredField("sessionHostPortCache");
sessionHostPortCache.setAccessible(true);
final Object cache = sessionHostPortCache.get(context);
final Method method = cache.getClass().getDeclaredMethod("put", Object.class,
Object.class);
method.setAccessible(true);
String key = String.format("%s:%s", socket.getInetAddress().getHostName(),
String.valueOf(socket.getPort())).toLowerCase(Locale.ROOT);
method.invoke(cache, key, session);
key = String.format("%s:%s", socket.getInetAddress().getHostAddress(),
String.valueOf(socket.getPort())).toLowerCase(Locale.ROOT);
method.invoke(cache, key, session);
}
catch (NoSuchFieldException e) {
// Not running in expected JRE
logger.warn("No field sessionHostPortCache in SSLSessionContext", e);
}
catch (Exception e) {
// Not running in expected JRE
logger.warn(e.getMessage());
}
}
Convenience methods have been added so this can easily be done from a message flow:
Important
When using session caching (see Section 16.9, “FTP Session Caching”), each of the delegates
should be cached; you cannot cache the DelegatingSessionFactory itself.
<int-ftp:inbound-channel-adapter id="ftpInbound"
channel="ftpChannel"
session-factory="ftpSessionFactory"
auto-create-local-directory="true"
delete-remote-files="true"
filename-pattern="*.txt"
remote-directory="some/remote/path"
remote-file-separator="/"
preserve-timestamp="true"
local-filename-generator-expression="#this.toUpperCase() + '.a'"
scanner="myDirScanner"
local-filter="myFilter"
temporary-file-suffix=".writing"
max-fetch-size="-1"
local-directory=".">
<int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>
As you can see from the configuration above you can configure an FTP Inbound Channel Adapter via
the inbound-channel-adapter element while also providing values for various attributes such as
local-directory, filename-pattern (which is based on simple pattern matching, not regular
expressions), and of course the reference to a session-factory.
By default the transferred file will carry the same name as the original file. If you want to override this
behavior you can set the local-filename-generator-expression attribute which allows you
to provide a SpEL Expression to generate the name of the local file. Unlike outbound gateways and
adapters where the root object of the SpEL Evaluation Context is a Message, this inbound adapter does
not yet have the Message at the time of evaluation since that’s what it ultimately generates with the
transferred file as its payload. So, the root object of the SpEL Evaluation Context is the original name
of the remote file (String).
The inbound channel adapter first retrieves the file to a local directory and then emits each file according
to the poller configuration. Starting with version 5.0, you can now limit the number of files fetched from
the FTP server when new file retrievals are needed. This can be beneficial when the target files are very
large and/or when running in a clustered system with a persistent file list filter discussed below. Use
max-fetch-size for this purpose; a negative value (default) means no limit and all matching files will
be retrieved; see Section 16.6, “Inbound Channel Adapters: Controlling Remote File Fetching” for more
information. Since version 5.0, you can also provide a custom DirectoryScanner implementation to
the inbound-channel-adapter via the scanner attribute.
Starting with Spring Integration 3.0, you can specify the preserve-timestamp attribute (default
false); when true, the local file’s modified timestamp will be set to the value retrieved from the server;
otherwise it will be set to the current time.
Starting with version 4.2, you can specify remote-directory-expression instead of remote-
directory, allowing you to dynamically determine the directory on each poll, e.g. remote-
directory-expression="@myBean.determineRemoteDir()".
Sometimes, file filtering based on the simple pattern specified via the filename-pattern attribute
might not be sufficient. If this is the case, you can use the filename-regex attribute to specify
a Regular Expression (e.g. filename-regex=".*\.test$"). And, of course, if you need complete
control, you can use the filter attribute and provide a reference to any custom implementation of the
org.springframework.integration.file.filters.FileListFilter, a strategy interface
for filtering a list of files. This filter determines which remote files are retrieved. You can also combine a
pattern-based filter with other filters, such as an AcceptOnceFileListFilter to avoid synchronizing
files that have previously been fetched, by using a CompositeFileListFilter.
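For example, a pattern filter can be combined with a persistent accept-once filter along these lines (the bean ids, pattern, key prefix and the metadata store reference are illustrative):

```xml
<bean id="compositeFilter"
      class="org.springframework.integration.file.filters.CompositeFileListFilter">
    <constructor-arg>
        <list>
            <!-- only consider *.txt files on the remote server -->
            <bean class="org.springframework.integration.ftp.filters.FtpSimplePatternFileListFilter">
                <constructor-arg value="*.txt"/>
            </bean>
            <!-- skip files already seen; state is kept in the metadata store -->
            <bean class="org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter">
                <constructor-arg ref="metadataStore"/>
                <constructor-arg value="ftp-"/>
            </bean>
        </list>
    </constructor-arg>
</bean>
```

This bean can then be referenced from the filter attribute of the inbound channel adapter.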
The AcceptOnceFileListFilter stores its state in memory. If you wish the state to survive a
system restart, consider using the FtpPersistentAcceptOnceFileListFilter instead. This filter
stores the accepted file names in an instance of the MetadataStore strategy (Section 10.5, “Metadata
Store”). This filter matches on the filename and the remote modified time.
Since version 4.0, this filter requires a ConcurrentMetadataStore. When used with a shared data
store (such as Redis with the RedisMetadataStore) this allows filter keys to be shared across
multiple application or server instances.
The above discussion refers to filtering the files before retrieving them. Once the files have been
retrieved, an additional filter is applied to the files on the file system. By default, this is an
AcceptOnceFileListFilter which, as discussed, retains state in memory and does not consider
the file’s modified time. Unless your application removes files after processing, the adapter will re-
process the files on disk by default after an application restart.
Use the local-filter attribute to configure the behavior of the local file system filter. Starting with
version 4.3.8, a FileSystemPersistentAcceptOnceFileListFilter is configured by default.
This filter stores the accepted file names and modified timestamp in an instance of the MetadataStore
strategy (Section 10.5, “Metadata Store”), and will detect changes to the local file modified time. The
default MetadataStore is a SimpleMetadataStore which stores state in memory.
Since version 4.1.5, these filters have a new property flushOnUpdate which will cause them to flush
the metadata store on every update (if the store implements Flushable).
Important
Further, if you use a distributed MetadataStore (such as Section 25.5, “Redis Metadata Store”
or Section 17.7, “Gemfire Metadata Store”), you can have multiple instances of the same adapter/
application and be sure that each file will be processed by one and only one instance.
The actual local filter is a CompositeFileListFilter containing the supplied filter and a pattern filter
that prevents processing files that are in the process of being downloaded (based on the temporary-
file-suffix); files are downloaded with this suffix (default: .writing) and the file is renamed to its
final name when the transfer is complete, making it visible to the filter.
The remote-file-separator attribute allows you to configure a file separator character to use if the
default / is not applicable for your particular environment.
It is also important to understand that the FTP Inbound Channel Adapter is a Polling Consumer and
therefore you must configure a poller (either via a global default or a local sub-element). Once a file has
been transferred, a Message with a java.io.File as its payload will be generated and sent to the
channel identified by the channel attribute.
Sometimes the file that just appeared in the monitored (remote) directory is not complete. Typically, such
a file is written with a temporary extension (e.g., foo.txt.writing) and then renamed once writing finishes.
In most cases, you are only interested in complete files and want to filter the others out. To handle
these scenarios, you can use the filtering support provided by the filename-pattern, filename-
regex and filter attributes. Here is an example that uses a custom filter implementation:
<int-ftp:inbound-channel-adapter
    channel="ftpChannel"
    session-factory="ftpSessionFactory"
    filter="customFilter"
    local-directory="file:/my_transfers"
    remote-directory="some/remote/path">
    <int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>
The job of the inbound FTP adapter consists of two tasks: 1) Communicate with a remote server in
order to transfer files from a remote directory to a local directory. 2) For each transferred file, generate
a Message with that file as a payload and send it to the channel identified by the channel attribute. That
is why they are called channel-adapters rather than just adapters. The main job of such an adapter is to
generate a Message to be sent to a Message Channel. Essentially, the second task mentioned above
takes precedence in such a way that IF your local directory already has one or more files it will first
generate Messages from those, and ONLY when all local files have been processed, will it initiate the
remote communication to retrieve more files.
Also, when configuring a trigger on the poller you should pay close attention to the max-messages-
per-poll attribute. Its default value is 1 for all SourcePollingChannelAdapter instances
(including FTP). This means that as soon as one file is processed, it will wait for the next execution
time as determined by your trigger configuration. If you happened to have one or more files sitting in
the local-directory, it would process those files before it would initiate communication with the
remote FTP server. And, if the max-messages-per-poll were set to 1 (default), then it would be
processing only one file at a time with intervals as defined by your trigger, essentially working as one-
poll === one-file.
For typical file-transfer use cases, you most likely want the opposite behavior: to process all the files
you can for each poll and only then wait for the next poll. If that is the case, set max-messages-per-
poll to -1. Then, on each poll, the adapter will attempt to generate as many Messages as it possibly
can. In other words, it will process everything in the local directory, and then it will connect to the remote
directory to transfer everything that is available there to be processed locally. Only then is the poll
operation considered complete, and the poller will wait for the next execution time.
You can alternatively set the max-messages-per-poll value to a positive value indicating the upward
limit of Messages to be created from files with each poll. For example, a value of 10 means that on each
poll it will attempt to process no more than 10 files.
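Putting these attributes together, a poller configured to drain both the local directory and the remote listing on each poll might look like this (the channel names and rate are illustrative):

```xml
<int-ftp:inbound-channel-adapter id="ftpInbound"
    channel="ftpChannel"
    session-factory="ftpSessionFactory"
    local-directory="file:local-dir"
    remote-directory="some/remote/path">
    <!-- -1 means: emit Messages for all available files on each poll -->
    <int:poller fixed-rate="5000" max-messages-per-poll="-1"/>
</int-ftp:inbound-channel-adapter>
```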
It is important to understand the architecture of the adapter. There is a file synchronizer which fetches the
files, and a FileReadingMessageSource to emit a message for each synchronized file. As discussed
above, there are two filters involved. The filter attribute (and patterns) refers to the remote (FTP)
file list - to avoid fetching files that have already been fetched. The local-filter is used by the
FileReadingMessageSource to determine which files are to be sent as messages.
The synchronizer lists the remote files and consults its filter; the files are then transferred. If an IO
error occurs during file transfer, any files that have already been added to the filter are removed
so they are eligible to be re-fetched on the next poll. This only applies if the filter implements
ReversibleFileListFilter (such as the AcceptOnceFileListFilter).
If, after synchronizing the files, an error occurs on the downstream flow processing a file, there is no
automatic rollback of the filter so the failed file will not be reprocessed by default.
If you wish to reprocess such files after a failure, you can use configuration similar to the
following to facilitate the removal of the failed file from the filter. This will work for any
ResettableFileListFilter.
<int-ftp:inbound-channel-adapter id="ftpAdapter"
session-factory="ftpSessionFactory"
channel="requestChannel"
remote-directory-expression="'/sftpSource'"
local-directory="file:myLocalDir"
auto-create-local-directory="true"
filename-pattern="*.txt">
<int:poller fixed-rate="1000">
<int:transactional synchronization-factory="syncFactory" />
</int:poller>
</int-ftp:inbound-channel-adapter>
<bean id="acceptOnceFilter"
class="org.springframework.integration.file.filters.AcceptOnceFileListFilter" />
<int:transaction-synchronization-factory id="syncFactory">
<int:after-rollback expression="payload.delete()" />
</int:transaction-synchronization-factory>
<bean id="transactionManager"
class="org.springframework.integration.transaction.PseudoTransactionManager" />
Starting with version 5.0, the Inbound Channel Adapter can build sub-directories locally, according
to the generated local file name; this can include a remote sub-path as well. To be able to read
the local directory recursively and detect modifications throughout the hierarchy, the internal
FileReadingMessageSource can now be supplied with a RecursiveDirectoryScanner based
on the Files.walk() algorithm. See AbstractInboundFileSynchronizingMessageSource.setScanner() for
more information. The AbstractInboundFileSynchronizingMessageSource can also be
switched to a WatchService-based DirectoryScanner via the setUseWatchService()
option; it is configured for all the WatchEventType s, so that it reacts to any
modifications in the local directory. The reprocessing sample above relies on the
built-in functionality of the FileReadingMessageSource.WatchServiceDirectoryScanner,
which performs ResettableFileListFilter.remove() when a file is deleted
(StandardWatchEventKinds.ENTRY_DELETE) from the local directory. See the section called
“WatchServiceDirectoryScanner” for more information.
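The Files.walk() traversal that the RecursiveDirectoryScanner relies on can be sketched with plain JDK code; this is not the scanner itself, just a minimal illustration of the underlying algorithm (class and directory names are made up for the example):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WalkSketch {

    // Depth-first traversal, as Files.walk() performs it; only regular
    // files are kept, mirroring what a directory scanner would emit.
    public static List<Path> scan(Path root) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths.filter(Files::isRegularFile)
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("walk-sketch");
        Files.createDirectories(root.resolve("sub"));
        Files.write(root.resolve("a.txt"), "a".getBytes());
        Files.write(root.resolve("sub").resolve("b.txt"), "b".getBytes());
        // finds both a.txt and sub/b.txt, i.e. the whole hierarchy
        System.out.println(scan(root).size());
    }
}
```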
The following Spring Boot application provides an example of configuring the inbound adapter using
Java configuration:
@SpringBootApplication
public class FtpJavaApplication {
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost("localhost");
sf.setPort(port);
sf.setUsername("foo");
sf.setPassword("foo");
return new CachingSessionFactory<FTPFile>(sf);
}
@Bean
public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
FtpInboundFileSynchronizer fileSynchronizer = new
FtpInboundFileSynchronizer(ftpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(false);
fileSynchronizer.setRemoteDirectory("foo");
fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.xml"));
return fileSynchronizer;
}
@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> ftpMessageSource() {
FtpInboundFileSynchronizingMessageSource source =
new FtpInboundFileSynchronizingMessageSource(ftpInboundFileSynchronizer());
source.setLocalDirectory(new File("ftp-inbound"));
source.setAutoCreateLocalDirectory(true);
source.setLocalFilter(new AcceptOnceFileListFilter<File>());
source.setMaxFetchSize(1);
return source;
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler handler() {
return new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
System.out.println(message.getPayload());
}
};
}
}
The following Spring Boot application provides an example of configuring the inbound adapter using
the Java DSL:
@SpringBootApplication
public class FtpJavaApplication {
@Bean
public IntegrationFlow ftpInboundFlow() {
return IntegrationFlows
.from(s -> s.ftp(this.ftpSessionFactory)
.preserveTimestamp(true)
.remoteDirectory("foo")
.regexFilter(".*\\.txt$")
.localFilename(f -> f.toUpperCase() + ".a")
.localDirectory(new File("d:\\ftp_files")),
e -> e.id("ftpInboundAdapter")
.autoStartup(true)
.poller(Pollers.fixedDelay(5000)))
.handle(m -> System.out.println(m.getPayload()))
.get();
}
}
<int-ftp:inbound-streaming-channel-adapter id="ftpInbound"
channel="ftpChannel"
session-factory="sessionFactory"
filename-pattern="*.txt"
filename-regex=".*\.txt"
filter="filter"
filter-expression="@myFilterBean.check(#root)"
remote-file-separator="/"
comparator="comparator"
max-fetch-size="1"
remote-directory-expression="'foo/bar'">
<int:poller fixed-rate="1000" />
</int-ftp:inbound-streaming-channel-adapter>
Important
Use the max-fetch-size attribute to limit the number of files fetched on each poll when a fetch is
necessary; set to 1 and use a persistent filter when running in a clustered environment; see Section 16.6,
“Inbound Channel Adapters: Controlling Remote File Fetching” for more information.
The adapter puts the remote directory and file name in headers FileHeaders.REMOTE_DIRECTORY
and FileHeaders.REMOTE_FILE respectively. Starting with version 5.0, additional remote file
information, represented in JSON by default, is provided in the FileHeaders.REMOTE_FILE_INFO
header. If you set the fileInfoJson property on the FtpStreamingMessageSource to false,
the header will contain an FtpFileInfo object. The FTPFile object provided by the underlying
Apache Net library can be accessed using the FtpFileInfo.getFileInfo() method. The
fileInfoJson property is not available when using XML configuration but you can set it by injecting
the FtpStreamingMessageSource into one of your configuration classes.
The following Spring Boot application provides an example of configuring the inbound adapter using
Java configuration:
@SpringBootApplication
public class FtpJavaApplication {
@Bean
@InboundChannelAdapter(channel = "stream")
public MessageSource<InputStream> ftpMessageSource() {
FtpStreamingMessageSource messageSource = new FtpStreamingMessageSource(template());
messageSource.setRemoteDirectory("ftpSource/");
messageSource.setFilter(new AcceptAllFileListFilter<>());
messageSource.setMaxFetchSize(1);
return messageSource;
}
@Bean
@Transformer(inputChannel = "stream", outputChannel = "data")
public org.springframework.integration.transformer.Transformer transformer() {
return new StreamTransformer("UTF-8");
}
@Bean
public FtpRemoteFileTemplate template() {
return new FtpRemoteFileTemplate(ftpSessionFactory());
}
@Bean
public ExpressionEvaluatingRequestHandlerAdvice after() {
ExpressionEvaluatingRequestHandlerAdvice advice = new
ExpressionEvaluatingRequestHandlerAdvice();
advice.setOnSuccessExpression(
"@template.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
advice.setPropagateEvaluationFailures(true);
return advice;
}
}
Notice that, in this example, the message handler downstream of the transformer has an advice that
removes the remote file after processing.
The following scenarios assume the starting state is an empty local directory.
• max-messages-per-poll=2 and max-fetch-size=1, the adapter will fetch one file, emit it, fetch
the next file, emit it; then sleep until the next poll.
• max-messages-per-poll=2 and max-fetch-size=2, the adapter will fetch both files, then emit
each one.
• max-messages-per-poll=2 and max-fetch-size not specified, the adapter will fetch all remote
files and emit the first two (if there are at least two); the subsequent files will be emitted on subsequent
polls (2-at-a-time); when all are consumed, the remote fetch will be attempted again, to pick up any
new files.
Important
Another use for max-fetch-size is if you want to stop fetching remote files, but continue to process
files that have already been fetched. Setting the maxFetchSize property on the MessageSource
(programmatically, via JMX, or via a control bus) effectively stops the adapter from fetching more files,
but allows the poller to continue to emit messages for files that have previously been fetched. If the
poller is active when the property is changed, the change will take effect on the next poll.
<int-ftp:outbound-channel-adapter id="ftpOutbound"
channel="ftpChannel"
session-factory="ftpSessionFactory"
charset="UTF-8"
remote-file-separator="/"
auto-create-directory="true"
remote-directory-expression="headers['remote_dir']"
temporary-remote-directory-expression="headers['temp_remote_dir']"
filename-generator="fileNameGenerator"
use-temporary-filename="true"
mode="REPLACE"/>
As you can see from the configuration above you can configure an FTP Outbound
Channel Adapter via the outbound-channel-adapter element while also providing
values for various attributes such as filename-generator (an implementation of
the org.springframework.integration.file.FileNameGenerator strategy interface), a
reference to a session-factory, as well as other attributes. You can also see some
examples of *expression attributes which allow you to use SpEL to configure things like
remote-directory-expression, temporary-remote-directory-expression and remote-
filename-generator-expression (a SpEL alternative to filename-generator shown above).
As with any component that allows the usage of SpEL, access to Payload and Message Headers is
available via payload and headers variables. Please refer to the schema for more details on the available
attributes.
Note
Defining certain values (e.g., remote-directory) might be platform- or FTP server-dependent.
For example, as reported on http://forum.springsource.org/showthread.php?
p=333478&posted=1#post333478, on some platforms you must add a slash to the end of the
directory definition (e.g., remote-directory="/foo/bar/" instead of remote-directory="/foo/bar").
Starting with version 4.1, you can specify the mode when transferring the file. By default, an existing
file will be overwritten; the modes are defined on enum FileExistsMode, having values REPLACE
(default), APPEND, IGNORE, and FAIL. With IGNORE and FAIL, the file is not transferred; FAIL causes
an exception to be thrown whereas IGNORE silently ignores the transfer (although a DEBUG log entry
is produced).
One of the common problems, when dealing with file transfers, is the possibility of processing a partial
file - a file might appear in the file system before its transfer is actually complete.
To deal with this issue, Spring Integration FTP adapters use a very common algorithm where files are
transferred under a temporary name and then renamed once they are fully transferred.
By default, every file that is in the process of being transferred will appear in the file system with an
additional suffix which, by default, is .writing; this can be changed using the temporary-file-
suffix attribute.
However, there may be situations where you don’t want to use this technique (for example, if the server
does not permit renaming files). For situations like this, you can disable this feature by setting use-
temporary-file-name to false (default is true). When this attribute is false, the file is written
with its final name and the consuming application will need some other mechanism to detect that the
file is completely uploaded before accessing it.
The following Spring Boot application provides an example of configuring the Outbound Adapter using
Java configuration:
@SpringBootApplication
@IntegrationComponentScan
public class FtpJavaApplication {
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost("localhost");
sf.setPort(port);
sf.setUsername("foo");
sf.setPassword("foo");
return new CachingSessionFactory<FTPFile>(sf);
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler handler() {
FtpMessageHandler handler = new FtpMessageHandler(ftpSessionFactory());
handler.setRemoteDirectoryExpressionString("headers['remote-target-dir']");
handler.setFileNameGenerator(new FileNameGenerator() {
@Override
public String generateFileName(Message<?> message) {
return "handlerContent.test";
}
});
return handler;
}
@MessagingGateway
public interface MyGateway {
@Gateway(requestChannel = "toFtpChannel")
void sendToFtp(File file);
}
}
The following Spring Boot application provides an example of configuring the Outbound Adapter using
the Java DSL:
@SpringBootApplication
@IntegrationComponentScan
public class FtpJavaApplication {
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost("localhost");
sf.setPort(port);
sf.setUsername("foo");
sf.setPassword("foo");
return new CachingSessionFactory<FTPFile>(sf);
}
@Bean
public IntegrationFlow ftpOutboundFlow() {
return IntegrationFlows.from("toFtpChannel")
.handle(Ftp.outboundAdapter(ftpSessionFactory(), FileExistsMode.FAIL)
.useTemporaryFileName(false)
.fileNameExpression("headers['" + FileHeaders.FILENAME + "']")
.remoteDirectory(this.ftpServer.getTargetFtpDirectory().getName())
).get();
}
@MessagingGateway
public interface MyGateway {
@Gateway(requestChannel = "toFtpChannel")
void sendToFtp(File file);
}
}
The FTP Outbound Gateway provides a limited set of commands to interact with a remote FTP/FTPS
server:
• ls (list files)
• nlst (list file names)
• get (retrieve file)
• mget (retrieve file(s))
• rm (remove file(s))
• mv (move/rename file)
• put (send file)
• mput (send multiple files)
ls
• -1 - just retrieve a list of file names, default is to retrieve a list of FileInfo objects.
The message payload resulting from an ls operation is a list of file names, or a list of FileInfo objects.
These objects provide information such as modified time, permissions etc.
The remote directory that the ls command acted on is provided in the file_remoteDirectory header.
When using the recursive option (-R), the fileName includes any subdirectory elements, representing
a relative path to the file (relative to the remote directory). If the -dirs option is included, each recursive
directory is also returned as an element in the list. In this case, it is recommended that -1 not be
used, because you would not be able to distinguish files from directories, which is achievable using
the FileInfo objects.
Starting with version 4.3, the FtpSession supports null for the list() and listNames() methods;
therefore, the expression attribute can be omitted. For Java configuration, two convenient constructors
without an expression argument are provided. null for the LS, NLST, PUT and MPUT commands
is treated as the client working directory, according to the FTP protocol. All other commands must
be supplied with an expression to evaluate the remote path against the request message. The working
directory can be set via the FTPClient.changeWorkingDirectory() function when you extend the
DefaultFtpSessionFactory and implement the postProcessClientAfterConnect() callback.
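Such a session factory might be sketched as follows; a minimal illustration, assuming a hypothetical /upload working directory:

```java
public class WorkingDirFtpSessionFactory extends DefaultFtpSessionFactory {

    @Override
    protected void postProcessClientAfterConnect(FTPClient ftpClient) throws IOException {
        // hypothetical directory; establishes the client working directory
        // used by commands (such as NLST) invoked with a null path
        ftpClient.changeWorkingDirectory("/upload");
    }

}
```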
nlst
The message payload resulting from an nlst operation is a list of file names.
The remote directory that the nlst command acted on is provided in the file_remoteDirectory
header.
Unlike the -1 option for the ls command (see above), which uses the LIST command, the nlst command
sends an NLST command to the target FTP server. This command is useful when the server doesn’t
support LIST, due to security restrictions, for example. The result of the nlst is just the names, therefore
the framework can’t determine if an entity is a directory, to perform filtering or recursive listing, for
example.
get
• -D - delete the remote file after successful transfer. The remote file is NOT deleted if the transfer is
ignored because the FileExistsMode is IGNORE and the local file already exists.
The remote directory is provided in the file_remoteDirectory header, and the filename is provided
in the file_remoteFile header.
The message payload resulting from a get operation is a File object representing the retrieved file,
or an InputStream when the -stream option is provided. This option allows retrieving the file as a
stream. For text files, a common use case is to combine this operation with a File Splitter or Stream
Transformer. When consuming remote files as streams, the user is responsible for closing the Session
after the stream is consumed. For convenience, the Session is provided in the closeableResource
header, and a convenience method is provided on the IntegrationMessageHeaderAccessor.
Framework components such as the File Splitter and Stream Transformer will automatically close the
session after the data is transferred.
<int-ftp:outbound-gateway session-factory="ftpSessionFactory"
request-channel="inboundGetStream"
command="get"
command-options="-stream"
expression="payload"
remote-directory="ftpTarget"
reply-channel="stream" />
Note: if you consume the input stream in a custom component, you must close the Session. You can
either do that in your custom code, or route a copy of the message to a service-activator and
use SpEL:
<int:service-activator input-channel="closeSession"
expression="headers['closeableResource'].close()" />
mget
mget retrieves multiple remote files based on a pattern and supports the following options:
• -x - Throw an exception if no files match the pattern (otherwise an empty list is returned).
• -D - delete each remote file after successful transfer. The remote file is NOT deleted if the transfer is
ignored because the FileExistsMode is IGNORE and the local file already exists.
The message payload resulting from an mget operation is a List<File> object - a List of File objects,
each representing a retrieved file.
Important
Starting with version 5.0, if the FileExistsMode is IGNORE, the payload of the output message
will no longer contain files that were not fetched due to the file already existing. Previously, the
array contained all files, including those that already existed.
The expression used to determine the remote path should produce a result that ends with * - e.g.
foo/* will fetch the complete tree under foo.
Starting with version 5.0, a recursive MGET, combined with the new
FileExistsMode.REPLACE_IF_MODIFIED mode, can be used to periodically synchronize an entire
remote directory tree locally. This mode will set the local file last modified timestamp with the remote
timestamp, regardless of the -P (preserve timestamp) option.
The pattern is ignored, and * is assumed. By default, the entire remote tree is retrieved. However,
files in the tree can be filtered, by providing a FileListFilter; directories in the tree can also be
filtered this way. A FileListFilter can be provided by reference or by filename-pattern
or filename-regex attributes. For example, filename-regex="(subDir|.*1.txt)" will
retrieve all files ending with 1.txt in the remote directory and the subdirectory subDir. However,
see below for an alternative available in version 5.0.
The -dirs option is not allowed (the recursive mget uses the recursive ls to obtain the directory
tree and the directories themselves cannot be included in the list).
<bean id="starDotTxtFilter"
class="org.springframework.integration.ftp.filters.FtpSimplePatternFileListFilter">
<constructor-arg value="*.txt" />
<property name="alwaysAcceptDirectories" value="true" />
</bean>
<bean id="dotStarDotTxtFilter"
class="org.springframework.integration.ftp.filters.FtpRegexPatternFileListFilter">
<constructor-arg value="^.*\.txt$" />
<property name="alwaysAcceptDirectories" value="true" />
</bean>
and provide one of these filters using the filter property on the gateway.
See also the section called “Outbound Gateway Partial Success (mget and mput)”.
put
put sends a file to the remote server; the payload of the message can be a java.io.File, a byte[]
or a String. A remote-filename-generator (or expression) is used to name the remote file.
Other available attributes include remote-directory and temporary-remote-directory (and
their *-expression equivalents).
The message payload resulting from a put operation is a String representing the full path of the file
on the server after transfer.
mput
mput sends multiple files to the server and supports the following option:
• -R - Recursive - send all files (possibly filtered) in the directory and subdirectories
The same attributes as the put command are supported. In addition, files in the local directory can
be filtered with one of mput-pattern, mput-regex, mput-filter or mput-filter-expression.
The filter works with recursion, as long as the subdirectories themselves pass the filter. Subdirectories
that do not pass the filter are not recursed.
The message payload resulting from an mput operation is a List<String> object - a List of remote
file paths resulting from the transfer.
See also the section called “Outbound Gateway Partial Success (mget and mput)”.
rm
The message payload resulting from an rm operation is Boolean.TRUE if the remove was successful,
Boolean.FALSE otherwise. The remote directory is provided in the file_remoteDirectory header,
and the filename is provided in the file_remoteFile header.
mv
The expression attribute defines the "from" path and the rename-expression attribute defines the "to"
path. By default, the rename-expression is headers['file_renameTo']. This expression must not
evaluate to null, or an empty String. If necessary, any remote directories needed will be created.
The payload of the result message is Boolean.TRUE. The original remote directory is provided in the
file_remoteDirectory header, and the filename is provided in the file_remoteFile header.
The new path is in the file_renameTo header.
Additional Information
The get and mget commands support the local-filename-generator-expression attribute. It defines
a SpEL expression to generate the name of local file(s) during the transfer. The root object of
the evaluation context is the request Message but, in addition, the remoteFileName variable is
also available, which is particularly useful for mget, for example: local-filename-generator-
expression="#remoteFileName.toUpperCase() + headers.foo".
The get and mget commands support the local-directory-expression attribute. It defines a SpEL
expression to generate the name of local directory(ies) during the transfer. The root object of the
evaluation context is the request Message but, in addition, the remoteDirectory variable is also
available, which is particularly useful for mget, for example: local-directory-expression="'/
tmp/local/' + #remoteDirectory.toUpperCase() + headers.foo". This attribute is
mutually exclusive with local-directory attribute.
For all commands, the PATH that the command acts on is provided by the expression property of
the gateway. For the mget command, the expression might evaluate to *, meaning retrieve all files,
or somedirectory/*, etc.
<int-ftp:outbound-gateway id="gateway1"
session-factory="ftpSessionFactory"
request-channel="inbound1"
command="ls"
command-options="-1"
expression="payload"
reply-channel="toSplitter"/>
The payload of the message sent to the toSplitter channel is a list of String objects containing the
filename of each file. If the command-options was omitted, it would be a list of FileInfo objects.
Options are provided space-delimited, e.g. command-options="-1 -dirs -links".
Starting with version 4.2, the GET, MGET, PUT and MPUT commands support a FileExistsMode
property (mode when using the namespace support). This affects the behavior when the local file exists
(GET and MGET) or the remote file exists (PUT and MPUT). Supported modes are REPLACE, APPEND,
FAIL and IGNORE. For backwards compatibility, the default mode for PUT and MPUT operations is
REPLACE and for GET and MGET operations, the default is FAIL.
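For example, a get gateway that silently skips files already present locally could be configured like this (the channel names and directories are illustrative):

```xml
<int-ftp:outbound-gateway session-factory="ftpSessionFactory"
    request-channel="getChannel"
    command="get"
    expression="payload"
    mode="IGNORE"
    local-directory="file:local-dir"
    reply-channel="replyChannel"/>
```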
@SpringBootApplication
public class FtpJavaApplication {
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost("localhost");
sf.setPort(port);
sf.setUsername("foo");
sf.setPassword("foo");
return new CachingSessionFactory<FTPFile>(sf);
}
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler handler() {
FtpOutboundGateway ftpOutboundGateway =
new FtpOutboundGateway(ftpSessionFactory(), "ls", "'my_remote_dir/'");
ftpOutboundGateway.setOutputChannelName("lsReplyChannel");
return ftpOutboundGateway;
}
}
The following Spring Boot application provides an example of configuring the Outbound Gateway using
the Java DSL:
@SpringBootApplication
public class FtpJavaApplication {
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
sf.setHost("localhost");
sf.setPort(port);
sf.setUsername("foo");
sf.setPassword("foo");
return new CachingSessionFactory<FTPFile>(sf);
}
@Bean
public FtpOutboundGatewaySpec ftpOutboundGateway() {
return Ftp.outboundGateway(ftpSessionFactory(),
AbstractRemoteFileOutboundGateway.Command.MGET, "payload")
.options(AbstractRemoteFileOutboundGateway.Option.RECURSIVE)
.regexFileNameFilter("(subFtpSource|.*1.txt)")
.localDirectoryExpression("'localDirectory/' + #remoteDirectory")
.localFilenameExpression("#remoteFileName.replaceFirst('ftpSource', 'localTarget')");
}
@Bean
public IntegrationFlow ftpMGetFlow(AbstractRemoteFileOutboundGateway<FTPFile> ftpOutboundGateway) {
return f -> f
.handle(ftpOutboundGateway)
.channel(c -> c.queue("remoteFileOutputChannel"));
    }

}
When performing operations on multiple files (mget and mput) it is possible that an exception occurs
some time after one or more files have been transferred. In this case (starting with version 4.2),
a PartialSuccessException is thrown. As well as the usual MessagingException properties
(failedMessage and cause), this exception has two additional properties:
• derivedInput - the list of files generated from the request message (e.g. local files to transfer for
an mput).
• partialResults - the list of transfer results for the files that were transferred successfully.
This will enable you to determine which files were successfully transferred, and which were not.
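As a sketch, a caller that triggers the mput through a messaging gateway (ftpGateway is a hypothetical gateway, not from the original text) can inspect the exception like this:

```java
try {
    ftpGateway.send(requestMessage); // triggers the mput of several local files
}
catch (PartialSuccessException e) {
    List<?> derivedInput = e.getDerivedInput();     // everything derived from the request
    List<?> partialResults = e.getPartialResults(); // results for the successful transfers
    // the difference between the two lists identifies the files that were not transferred
}
```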
Consider:
root/
|- file1.txt
|- subdir/
| - file2.txt
| - file3.txt
|- zoo.txt
Important
Starting with Spring Integration version 3.0, sessions are no longer cached by default;
the cache-sessions attribute is no longer supported on endpoints. You must use a
CachingSessionFactory (see below) if you wish to cache sessions.
In versions prior to 3.0, the sessions were cached automatically by default. A cache-sessions
attribute was available for disabling the auto caching, but that solution did not provide a way to
configure other session caching attributes. For example, you could not limit the number of sessions
created. To support that requirement and other configuration options, a CachingSessionFactory
was provided. It provides sessionCacheSize and sessionWaitTimeout properties. As its name
suggests, the sessionCacheSize property controls how many active sessions the factory will maintain
in its cache (the DEFAULT is unbounded). If the sessionCacheSize threshold has been reached,
any attempt to acquire another session will block until either one of the cached sessions becomes
available or until the wait time for a Session expires (the DEFAULT wait time is Integer.MAX_VALUE).
The sessionWaitTimeout property enables configuration of that value.
If you want your Sessions to be cached, simply configure your default Session Factory as described
above and then wrap it in an instance of CachingSessionFactory where you may provide those
additional properties.
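Such a wrapped configuration can be sketched as follows (assuming the same host and credentials as the earlier examples; the bean name is illustrative):

```java
@Bean
public SessionFactory<FTPFile> cachingSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost("localhost");
    sf.setUsername("foo");
    sf.setPassword("foo");
    // cache at most 10 sessions; wait up to 1 second (value in milliseconds) for a free one
    CachingSessionFactory<FTPFile> csf = new CachingSessionFactory<>(sf, 10);
    csf.setSessionWaitTimeout(1000);
    return csf;
}
```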
In the above example you see a CachingSessionFactory created with the sessionCacheSize
set to 10 and the sessionWaitTimeout set to 1 second (its value is in milliseconds).
16.10 RemoteFileTemplate
Starting with Spring Integration version 3.0 a new abstraction is provided over the FtpSession object.
The template provides methods to send, retrieve (as an InputStream), remove, and rename files.
In addition an execute method is provided allowing the caller to execute multiple operations on the
session. In all cases, the template takes care of reliably closing the session. For more information, refer
to the JavaDocs for RemoteFileTemplate. There is a subclass for FTP: FtpRemoteFileTemplate.
Additional methods were added in version 4.1 including getClientInstance() which provides
access to the underlying FTPClient enabling access to low-level APIs.
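For illustration only (not from the original text), the execute method allows several operations to share one session, which the template reliably closes; sessionFactory is assumed to be configured as shown earlier:

```java
FtpRemoteFileTemplate template = new FtpRemoteFileTemplate(sessionFactory);
template.execute(session -> {
    // both operations run against the same FTP session
    session.rename("my_remote_dir/report.tmp", "my_remote_dir/report.txt");
    return session.remove("my_remote_dir/stale.txt");
});
```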
Not all FTP servers properly implement the STAT <path> command; it can return a positive result
for a non-existent path. The NLST command reliably returns the name when the path is a file and it
exists. However, this does not support checking that an empty directory exists since NLST always returns
an empty list in this case, when the path is a directory. Since the template doesn’t know if the path
represents a directory or not, it has to perform additional checks when the path does not appear to exist,
when using NLST. This adds overhead, requiring several requests to the server. Starting with version
4.1.9, the FtpRemoteFileTemplate provides an FtpRemoteFileTemplate.ExistsMode property
with the following options:
• STAT - Perform the STAT FTP command (FTPClient.getStatus(path)) to check the path
existence; this is the default and requires that your FTP server properly supports the STAT command
(with a path).
• NLST - Perform the NLST FTP command - FTPClient.listName(path); use this if you are testing
for a path that is a full path to a file; it won’t work for empty directories.
Since we know that the FileExistsMode.FAIL case is always only looking for a file (and not
a directory), we safely use NLST mode for the FtpMessageHandler and FtpOutboundGateway
components.
For any other cases, the FtpRemoteFileTemplate can be extended to implement custom logic
in an overridden exist() method.
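For example (a sketch; the path is illustrative), when the checked path is always a full path to a file, NLST mode can be selected explicitly:

```java
FtpRemoteFileTemplate template = new FtpRemoteFileTemplate(sessionFactory);
template.setExistsMode(FtpRemoteFileTemplate.ExistsMode.NLST);
boolean present = template.exists("my_remote_dir/file1.txt");
```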
16.11 MessageSessionCallback
Starting with Spring Integration version 4.2, a MessageSessionCallback<F, T> implementation
can be used with the <int-ftp:outbound-gateway/> (FtpOutboundGateway) to perform any
operation(s) on the Session<FTPFile> with the requestMessage context. It can be used for any
non-standard or low-level FTP operation (or several), for example allowing a functional interface
(lambda) implementation to be injected from an integration flow definition:
@Bean
@ServiceActivator(inputChannel = "ftpChannel")
public MessageHandler ftpOutboundGateway(SessionFactory<FTPFile> sessionFactory) {
return new FtpOutboundGateway(sessionFactory,
(session, requestMessage) -> session.list(requestMessage.getPayload()));
}
Another example might be to pre- or post- process the file data being sent/retrieved.
Note
The session-callback is mutually exclusive with the command and expression attributes.
When configuring with Java, different constructors are available in the FtpOutboundGateway
class.
17.1 Introduction
VMWare vFabric GemFire (GemFire) is a distributed data management platform providing a key-
value data grid along with advanced distributed system features such as event processing, continuous
querying, and remote function execution. This guide assumes some familiarity with GemFire and its API.
Spring Integration provides support for GemFire by providing inbound adapters for entry and
continuous query events, an outbound adapter to write entries to the cache, and MessageStore and
MessageGroupStore implementations. Spring Integration leverages the Spring GemFire project
(http://www.springsource.org/spring-gemfire), providing a thin wrapper over its components.
To configure the int-gfe namespace, include the following declarations in the root element of your XML
configuration file:
xmlns:int-gfe="http://www.springframework.org/schema/integration/gemfire"
xsi:schemaLocation="http://www.springframework.org/schema/integration/gemfire
http://www.springframework.org/schema/integration/gemfire/spring-integration-gemfire.xsd"
<gfe:cache/>
<gfe:replicated-region id="region"/>
<int-gfe:inbound-channel-adapter id="inputChannel" region="region"
cache-events="CREATED" expression="newValue"/>
In the above configuration, we are creating a GemFire Cache and Region using Spring GemFire’s
gfe namespace. The inbound-channel-adapter requires a reference to the GemFire region for which
the adapter will be listening for events. Optional attributes include cache-events which can contain
a comma separated list of event types for which a message will be produced on the input channel. By
default, CREATED and UPDATED are enabled. Note that this adapter conforms to Spring Integration
conventions. If no channel attribute is provided, the channel will be created from the id attribute.
This adapter also supports an error-channel. The GemFire EntryEvent is the #root object of the
expression evaluation.
If the expression attribute is not provided, the message payload will be the GemFire EntryEvent
itself.
Note
GemFire queries are written in OQL and are scoped to the entire cache (not just one region).
Additionally, continuous queries require a remote (i.e., running in a separate process or
remote host) cache server. Please consult the GemFire documentation for more information on
implementing continuous queries.
<int-gfe:cq-inbound-channel-adapter id="inputChannel"
cq-listener-container="queryListenerContainer"
query="select * from /test"/>
In the above configuration, we are creating a GemFire client cache (recall a remote cache server is
required for this implementation and its address is configured as a sub-element of the pool), a client
region and a ContinuousQueryListenerContainer using Spring GemFire. The continuous query
inbound channel adapter requires a cq-listener-container attribute which contains a reference
to the ContinuousQueryListenerContainer. Optionally, it accepts an expression attribute
which uses SpEL to transform the CqEvent or extract an individual property as needed. The cq-
inbound-channel-adapter provides a query-events attribute, containing a comma separated list of
event types for which a message will be produced on the input channel. Available event types are
CREATED, UPDATED, DESTROYED, REGION_DESTROYED, REGION_INVALIDATED. CREATED
and UPDATED are enabled by default. Additional optional attributes include query-name, which
provides an optional query name; expression, which works as described in the above section; and
durable - a boolean value indicating whether the query is durable (false by default). Note that this
adapter conforms to Spring Integration conventions. If no channel attribute is provided, the channel will
be created from the id attribute. This adapter also supports an error-channel.
Given the above configuration, an exception will be thrown if the payload is not a Map. Additionally, the
outbound channel adapter can be configured to create a map of cache entries using SpEL.
In the above configuration, the inner element cache-entries is semantically equivalent to Spring's
map element. The adapter interprets the key and value attributes as SpEL expressions with the
message as the evaluation context. Note that this may contain arbitrary cache entries (not only those
derived from the message) and that literal values must be enclosed in single quotes. In the above
example, if the message sent to cacheChannel has a String payload with a value "Hello", two entries
[HELLO:hello, foo:bar] will be written (created or updated) in the cache region. This adapter also
supports the order attribute which may be useful if it is bound to a PublishSubscribeChannel.
<gfe:cache/>
<gfe:replicated-region id="myRegion"/>
<int:channel id="somePersistentQueueChannel">
<int:queue message-store="gemfireMessageStore"/>
</int:channel>
In the above example, the cache and region are configured using the spring-gemfire namespace (not to
be confused with the spring-integration-gemfire namespace). Often it is desirable for the message store
to be maintained in one or more remote cache servers in a client-server configuration (See the GemFire
product documentation for more details). In this case, you configure a client cache, client region, and
client pool and inject the region into the MessageStore. Here is an example:
<bean id="gemfireMessageStore"
class="org.springframework.integration.gemfire.store.GemfireMessageStore">
<constructor-arg ref="myRegion"/>
</bean>
<gfe:client-cache/>
<gfe:pool id="messageStorePool">
<gfe:server host="localhost" port="40404" />
</gfe:pool>
Note the pool element is configured with the address of a cache server (a locator may be substituted
here). The region is configured as a PROXY so that no data will be stored locally. The region’s id
corresponds to a region with the same name configured in the cache server.
Starting with version 4.3.12, the GemfireMessageStore supports the key prefix option to allow
distinguishing between instances of the store on the same Gemfire region.
Note
In order to instruct these adapters to use the new GemfireMetadataStore, simply declare a Spring
bean using the bean name metadataStore. The Twitter Inbound Channel Adapter and the Feed Inbound
Channel Adapter will both automatically pick up and use the declared GemfireMetadataStore.
GemfireMetadataStore metadataStore = new GemfireMetadataStore(cache);
metadataStore.addListener(new MetadataStoreListenerAdapter() {

    @Override
    public void onAdd(String key, String value) {
    ...
    }

});
<servlet>
<servlet-name>inboundGateway</servlet-name>
<servlet-class>o.s.web.context.support.HttpRequestHandlerServlet</servlet-class>
</servlet>
Notice that the servlet name matches the bean name. For more information on using the
HttpRequestHandlerServlet, see chapter Remoting and web services using Spring, which is part
of the Spring Framework Reference documentation.
If you are running within a Spring MVC application, then the aforementioned explicit servlet definition is
not necessary. In that case, the bean name for your gateway can be matched against the URL path just
like a Spring MVC Controller bean. For more information, please see the chapter Web MVC framework,
which is part of the Spring Framework Reference documentation.
Tip
For a sample application and the corresponding configuration, please see the Spring Integration
Samples repository. It contains the Http Sample application demonstrating Spring Integration’s
HTTP support.
<bean id="httpInbound"
class="org.springframework.integration.http.inbound.HttpRequestHandlingMessagingGateway">
<property name="requestChannel" ref="httpRequestChannel" />
<property name="replyChannel" ref="httpReplyChannel" />
</bean>
The message conversion process uses the (optional) requestPayloadType property and the
incoming Content-Type header. Starting with version 4.3, if a request has no content type header,
application/octet-stream is assumed, as recommended by RFC 2616.
Starting with Spring Integration 2.0, MultiPart File support is implemented. If the request has been
wrapped as a MultipartHttpServletRequest, when using the default converters, that request
will be converted to a Message payload that is a MultiValueMap containing values that may be
byte arrays, Strings, or instances of Spring’s MultipartFile depending on the content type of the
individual parts.
Note
The HTTP inbound Endpoint will locate a MultipartResolver in the context if one
exists with the bean name "multipartResolver" (the same name expected by Spring’s
DispatcherServlet). If it does in fact locate that bean, then the support for MultipartFiles will
be enabled on the inbound request mapper. Otherwise, it will fail when trying to map a multipart-file
request to a Spring Integration Message. For more on Spring’s support for MultipartResolver,
refer to the Spring Reference Manual.
Note
<int-http:inbound-gateway
channel="receiveChannel"
path="/inboundAdapter.htm"
request-payload-type="byte[]"
message-converters="converters"
merge-with-default-converters="false"
supported-methods="POST" />
<util:list id="converters">
<beans:bean class="org.springframework.http.converter.ByteArrayHttpMessageConverter" />
<beans:bean class="org.springframework.http.converter.StringHttpMessageConverter" />
<beans:bean class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter"
/>
</util:list>
When sending a response to the client, there are a number of ways to customize the behavior of the gateway.
By default the gateway will simply acknowledge that the request was received by sending a 200 status
code back. It is possible to customize this response by providing a viewName to be resolved by the
Spring MVC ViewResolver. In the case that the gateway should expect a reply to the Message then
setting the expectReply flag (constructor argument) will cause the gateway to wait for a reply Message
before creating an HTTP response. Below is an example of a gateway configured to serve as a Spring
MVC Controller with a view name. Because of the constructor arg value of TRUE, it will wait for a reply.
This also shows how to customize the HTTP methods accepted by the gateway, which are POST and
GET by default.
<bean id="httpInbound"
class="org.springframework.integration.http.inbound.HttpRequestHandlingController">
<constructor-arg value="true" /> <!-- indicates that a reply is expected -->
<property name="requestChannel" ref="httpRequestChannel" />
<property name="replyChannel" ref="httpReplyChannel" />
<property name="viewName" value="jsonView" />
<property name="supportedMethodNames" >
<list>
<value>GET</value>
<value>DELETE</value>
</list>
</property>
</bean>
The reply message will be available in the Model map. The key that is used for that map entry by default
is reply, but this can be overridden by setting the replyKey property on the endpoint’s configuration.
<bean id="httpOutbound"
class="org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler">
<constructor-arg value="http://localhost:8080/example" />
<property name="outputChannel" ref="responseChannel" />
</bean>
This bean definition will execute HTTP requests by delegating to a RestTemplate. That template in
turn delegates to a list of HttpMessageConverters to generate the HTTP request body from the Message
payload. You can configure those converters as well as the ClientHttpRequestFactory instance to use:
<bean id="httpOutbound"
class="org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler">
<constructor-arg value="http://localhost:8080/example" />
<property name="outputChannel" ref="responseChannel" />
<property name="messageConverters" ref="messageConverterList" />
<property name="requestFactory" ref="customRequestFactory" />
</bean>
Note
In the case of the Outbound Gateway, the reply message produced by the gateway will contain
all Message Headers present in the request message.
Cookies
Basic cookie support is provided by the transfer-cookies attribute on the outbound gateway. When set
to true (the default is false), a Set-Cookie header received from the server in a response will be converted
to a Cookie header in the reply message. This header will then be used on subsequent sends. This enables simple
stateful interactions, such as…
...->logonGateway->...->doWorkGateway->...->logoffGateway->...
If transfer-cookies is false, any Set-Cookie header received will remain as Set-Cookie in the reply
message, and will be dropped on subsequent sends.
HTTP is a request/response protocol. However, a response may not have a body, only headers.
In this case, the HttpRequestExecutingMessageHandler produces a reply Message with
the payload being an org.springframework.http.ResponseEntity, regardless of any
provided expected-response-type. According to the HTTP RFC Status Code Definitions,
there are many statuses which identify that a response MUST NOT contain a message-body (e.g.
204 No Content). There are also cases where calls to the same URL might, or might not, return a
response body; for example, the first request to an HTTP resource returns content, but the second
does not (e.g. 304 Not Modified). In all cases, however, the http_statusCode message header
is populated. This can be used in some routing logic after the Http Outbound Gateway. You could
also use a <payload-type-router/> to route messages with a ResponseEntity to a different
flow than that used for responses with a body.
Note: expected-response-type
Further to the note above regarding empty response bodies, if a response does contain a
body, you must provide an appropriate expected-response-type attribute or, again, you will
simply receive a ResponseEntity with no body. The expected-response-type must be
compatible with the (configured or default) HttpMessageConverters and the Content-Type
header in the response. Of course, this can be an abstract class, or even an interface (such as
java.io.Serializable when using Java serialization and Content-Type:
application/x-java-serialized-object).
Introduction
Spring Integration provides an http namespace and the corresponding schema definition. To include it
in your configuration, simply provide the following namespace declaration in your application context
configuration file:
Inbound
The XML Namespace provides two components for handling HTTP inbound requests: the
inbound-channel-adapter, for processing requests without returning a dedicated response, and
the inbound-gateway, for request/reply scenarios such as the following:
<int-http:inbound-gateway id="inboundGateway"
request-channel="requests"
reply-channel="responses"/>
Note
The parsing of the HTTP Inbound Gateway or the HTTP Inbound Channel
Adapter registers an integrationRequestMappingHandlerMapping bean of type
IntegrationRequestMappingHandlerMapping, if one is not already registered. This
particular implementation of the HandlerMapping delegates its logic to the
RequestMappingInfoHandlerMapping. The implementation provides similar functionality as
the one provided by the org.springframework.web.bind.annotation.RequestMapping
annotation in Spring MVC.
Note
For this purpose, Spring Integration 3.0 introduces the <request-mapping> sub-element.
This optional sub-element can be added to the <http:inbound-channel-adapter> and the
<http:inbound-gateway>. It works in conjunction with the path and supported-methods
attributes:
<inbound-gateway id="inboundController"
request-channel="requests"
reply-channel="responses"
path="/foo/{fooId}"
supported-methods="GET"
view-name="foo"
error-code="oops">
<request-mapping headers="User-Agent"
params="myParam=myValue"
consumes="application/json"
produces="!text/plain"/>
</inbound-gateway>
• headers
• params
• consumes
• produces
The <request-mapping> sub-element allows you to configure several Spring Integration HTTP
Inbound Endpoints to the same path (or even the same supported-methods) and to provide different
downstream message flows based on incoming HTTP requests.
Alternatively, you can also declare just one HTTP Inbound Endpoint and apply routing and filtering logic
within the Spring Integration flow to achieve the same result. This allows you to get the Message into
the flow as early as possible, e.g.:
<int-http:inbound-gateway request-channel="httpMethodRouter"
supported-methods="GET,DELETE"
path="/process/{entId}"
payload-expression="#pathVariables.entId"/>
For more information regarding Handler Mappings, please see: Handler Mappings.
• origin - List of allowed origins. * means that all origins are allowed. These values are placed in
the Access-Control-Allow-Origin header of both the pre-flight and actual responses. Default
value is *.
• allowed-headers - Indicates which request headers can be used during the actual request. *
means that all headers asked by the client are allowed. This property controls the value of the pre-
flight response’s Access-Control-Allow-Headers header. Default value is *.
• exposed-headers - List of response headers that the user-agent will allow the client to access. This
property controls the value of the actual response’s Access-Control-Expose-Headers header.
• method - The HTTP request methods to allow: GET, POST, HEAD, OPTIONS, PUT, PATCH,
DELETE, TRACE. Methods specified here override those in supported-methods.
• allow-credentials - Set to true if the browser should include any cookies associated with
the domain of the request, or false if it should not. An empty string "" means undefined. If true, the
pre-flight response will include the header Access-Control-Allow-Credentials=true. Default
value is true.
• max-age - Controls the cache duration for pre-flight responses. Setting this to a reasonable value can
reduce the number of pre-flight request/response interactions required by the browser. This property
controls the value of the Access-Control-Max-Age header in the pre-flight response. A value of
-1 means undefined. Default value is 1800 seconds, or 30 minutes.
Response StatusCode
Starting with version 4.1 the <http:inbound-channel-adapter> can be configured with a
status-code-expression to override the default 200 OK status. The expression must
return an object which can be converted to an org.springframework.http.HttpStatus
enum value. The evaluationContext has a BeanResolver but no variables, so the
usage of this attribute is somewhat limited. An example might be to resolve, at runtime,
some scoped Bean that returns a status code value but, most likely, it will be set to
a fixed value such as status-code-expression="204" (No Content), or status-code-
expression="T(org.springframework.http.HttpStatus).NO_CONTENT". By default,
status-code-expression is null meaning that the normal 200 OK response status will be returned.
<http:inbound-channel-adapter id="inboundController"
channel="requests" view-name="foo" error-code="oops"
status-code-expression="T(org.springframework.http.HttpStatus).ACCEPTED">
<request-mapping headers="BAR"/>
</http:inbound-channel-adapter>
The <http:inbound-gateway> resolves the status code from the http_statusCode header of the
reply Message. Starting with version 4.2, the default response status code when no reply is received
within the reply-timeout is 500 Internal Server Error. There are two ways to modify this
behavior:
• Add an error-channel and return an appropriate message with an http status code header, such
as…
<int:chain input-channel="errors">
<int:header-enricher>
<int:header name="http_statusCode" value="504" />
</int:header-enricher>
<int:transformer expression="payload.failedMessage" />
</int:chain>
If the error flow times out after a main flow timeout, 500 Internal Server Error is returned, or
the reply-timeout-status-code-expression is evaluated, if present.
Note
Previously, the default status code for a timeout was 200 OK; to restore that behavior, set reply-
timeout-status-code-expression="200".
In the following example configuration, an Inbound Channel Adapter is configured to accept requests
using the following URI: /first-name/{firstName}/last-name/{lastName}
Using the payload-expression attribute, the URI template variable {firstName} is mapped to be the
Message payload, while the {lastName} URI template variable will map to the lname Message header.
<int-http:inbound-channel-adapter id="inboundAdapterWithExpressions"
path="/first-name/{firstName}/last-name/{lastName}"
channel="requests"
payload-expression="#pathVariables.firstName">
<int-http:header name="lname" expression="#pathVariables.lastName"/>
</int-http:inbound-channel-adapter>
For more information about URI template variables, please see the Spring Reference Manual: uri
template patterns.
Since Spring Integration 3.0, in addition to the existing #pathVariables and #requestParams
variables being available in payload and header expressions, other useful variables have been added.
• #pathVariables - the Map from URI Template placeholders and their values;
• #requestAttributes - the
org.springframework.web.context.request.RequestAttributes associated with the
current Request;
Note, all these values (and others) can be accessed within expressions in the downstream message flow
via the ThreadLocal org.springframework.web.context.request.RequestAttributes
variable, if that message flow is single-threaded and lives within the request thread:
<int:transformer
expression="T(org.springframework.web.context.request.RequestContextHolder).
requestAttributes.request.queryString"/>
Outbound
To configure the outbound gateway you can use the namespace support as well. The following code
snippet shows the different configuration options for an outbound Http gateway. Most importantly, notice
that the http-method and expected-response-type are provided. Those are two of the most commonly
configured values. The default http-method is POST, and the default response type is null. With a null
response type, the payload of the reply Message will contain the ResponseEntity, as long as its HTTP
status is a success (non-successful status codes will throw exceptions). If you are expecting a different
type, such as a String, then provide that fully-qualified class name as shown below. See also the note
about empty response bodies in Section 18.3, “Http Outbound Components”.
Important
Beginning with Spring Integration 2.1 the request-timeout attribute of the HTTP Outbound
Gateway was renamed to reply-timeout to better reflect the intent.
<int-http:outbound-gateway id="example"
request-channel="requests"
url="http://localhost/test"
http-method="POST"
extract-request-payload="false"
expected-response-type="java.lang.String"
charset="UTF-8"
request-factory="requestFactory"
reply-timeout="1234"
reply-channel="replies"/>
Important
Since Spring Integration 2.2, Java serialization over HTTP is no longer enabled by default.
Previously, when setting the expected-response-type attribute to a Serializable
object, the Accept header was not properly set up. Since Spring Integration 2.2, the
SerializingHttpMessageConverter has now been updated to set the Accept header to
application/x-java-serialized-object.
However, because this could cause incompatibility with existing applications, it was decided
to no longer automatically add this converter to the HTTP endpoints. If you wish to use Java
serialization, you will need to add the SerializingHttpMessageConverter to the appropriate
endpoints, using the message-converters attribute, when using XML configuration, or using
the setMessageConverters() method. Alternatively, you may wish to consider using JSON
instead which is enabled by simply having Jackson on the classpath.
Beginning with Spring Integration 2.2 you can also determine the HTTP Method dynamically using
SpEL and the http-method-expression attribute. Note that this attribute is mutually exclusive
with http-method. You can also use the expected-response-type-expression attribute instead of
expected-response-type and provide any valid SpEL expression that determines the type of the
response.
<int-http:outbound-gateway id="example"
request-channel="requests"
url="http://localhost/test"
http-method-expression="headers.httpMethod"
extract-request-payload="false"
expected-response-type-expression="payload"
charset="UTF-8"
request-factory="requestFactory"
reply-timeout="1234"
reply-channel="replies"/>
If your outbound adapter is to be used in a unidirectional way, then you can use an
outbound-channel-adapter instead. This means that a successful response will simply execute without sending any
Messages to a reply channel. In the case of any non-successful response status code, it will throw an
exception. The configuration looks very similar to the gateway:
<int-http:outbound-channel-adapter id="example"
url="http://localhost/example"
http-method="GET"
channel="requests"
charset="UTF-8"
extract-payload="false"
expected-response-type="java.lang.String"
request-factory="someRequestFactory"
order="3"
auto-startup="false"/>
Note
To specify the URL; you can use either the url attribute or the url-expression attribute. The url is
a simple string (with placeholders for URI variables, as described below); the url-expression is a
SpEL expression, with the Message as the root object, enabling dynamic urls. The url resulting
from the expression evaluation can still have placeholders for URI variables.
In previous releases, some users used the placeholders to replace the entire URL with a URI
variable. Changes in Spring 3.1 can cause some issues with escaped characters, such as ?. For
this reason, it is recommended that if you wish to generate the URL entirely at runtime, you use
the url-expression attribute.
If your URL contains URI variables, you can map them using the uri-variable sub-element. This
sub-element is available for the Http Outbound Gateway and the Http Outbound Channel Adapter.
<int-http:outbound-gateway id="trafficGateway"
url="http://local.yahooapis.com/trafficData?appid=YdnDemo&zip={zipCode}"
request-channel="trafficChannel"
http-method="GET"
expected-response-type="java.lang.String">
<int-http:uri-variable name="zipCode" expression="payload.getZip()"/>
</int-http:outbound-gateway>
The uri-variable sub-element defines two attributes: name and expression. The name attribute
identifies the name of the URI variable, while the expression attribute is used to set the actual value.
Using the expression attribute, you can leverage the full power of the Spring Expression Language
(SpEL) which gives you full dynamic access to the message payload and the message headers. For
example, in the above configuration the getZip() method will be invoked on the payload object of the
Message and the result of that method will be used as the value for the URI variable named zipCode.
Since Spring Integration 3.0, HTTP Outbound Endpoints support the uri-variables-expression
attribute to specify an Expression which should be evaluated, resulting in a Map for all URI variable
placeholders within the URL template. It provides a mechanism whereby different variable expressions
can be used, based on the outbound message. This attribute is mutually exclusive with the
<uri-variable/> sub-element:
<int-http:outbound-gateway
url="http://foo.host/{foo}/bars/{bar}"
request-channel="trafficChannel"
http-method="GET"
uri-variables-expression="@uriVariablesBean.populate(payload)"
expected-response-type="java.lang.String"/>
Note
The uri-variables-expression must evaluate to a Map. The values of the Map must be
instances of String or Expression. This Map is provided to an ExpressionEvalMap for
further resolution of URI variable placeholders using those expressions in the context of the
outbound Message.
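As an illustration, a bean backing the @uriVariablesBean.populate(payload) expression in the example above simply returns the Map of variable values. The bean and method below are a hypothetical sketch (the payload is assumed to be a Map-based payload for demonstration purposes):

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical bean backing uri-variables-expression="@uriVariablesBean.populate(payload)".
// The returned Map keys must match the URI variable placeholders ({foo}, {bar}).
public class UriVariablesBean {

    public Map<String, Object> populate(Map<String, String> payload) {
        Map<String, Object> uriVariables = new HashMap<>();
        // Values may be plain Strings; Expression instances are also supported
        uriVariables.put("foo", payload.getOrDefault("fooId", "defaultFoo"));
        uriVariables.put("bar", payload.getOrDefault("barId", "defaultBar"));
        return uriVariables;
    }
}
```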
Scenarios where you need to supply a dynamic set of URI variables on a per-message basis can
be addressed with a custom url-expression and some utilities for building and encoding URL
parameters:
url-expression="T(org.springframework.web.util.UriComponentsBuilder)
.fromHttpUrl('http://HOST:PORT/PATH')
.queryParams(payload)
.build()
.toUri()"
In this case the URL encoding must be provided manually. For example,
org.apache.http.client.utils.URLEncodedUtils#format() can be used for this purpose.
As mentioned, a manually built MultiValueMap<String, String> can be converted to the
List<NameValuePair> format() method argument using this Java Streams snippet:
List<NameValuePair> nameValuePairs =
params.entrySet()
.stream()
.flatMap(e -> e
.getValue()
.stream()
.map(v -> new BasicNameValuePair(e.getKey(), v)))
.collect(Collectors.toList());
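The snippet above depends on Spring's MultiValueMap and Apache's BasicNameValuePair. A self-contained sketch of the same flatten-and-map transformation, with a plain Map and the JDK's SimpleEntry standing in for those types, looks like this:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryParamFlattener {

    // Flattens a multi-value parameter map into an ordered list of name/value
    // pairs, mirroring the MultiValueMap -> List<NameValuePair> conversion above.
    public static List<SimpleEntry<String, String>> flatten(Map<String, List<String>> params) {
        return params.entrySet()
                .stream()
                .flatMap(e -> e.getValue()
                        .stream()
                        .map(v -> new SimpleEntry<>(e.getKey(), v)))
                .collect(Collectors.toList());
    }
}
```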
@Bean
public HttpRequestHandlingMessagingGateway inbound() {
HttpRequestHandlingMessagingGateway gateway =
new HttpRequestHandlingMessagingGateway(true);
gateway.setRequestMapping(mapping());
gateway.setRequestPayloadType(String.class);
gateway.setRequestChannelName("httpRequest");
return gateway;
}
@Bean
public RequestMapping mapping() {
RequestMapping requestMapping = new RequestMapping();
requestMapping.setPathPatterns("/foo");
requestMapping.setMethods(HttpMethod.POST);
return requestMapping;
}
@Bean
public IntegrationFlow inbound() {
return IntegrationFlows.from(Http.inboundGateway("/foo")
.requestMapping(m -> m.methods(HttpMethod.POST))
.requestPayloadType(String.class))
.channel("httpRequest")
.get();
}
@ServiceActivator(inputChannel = "httpOutRequest")
@Bean
public HttpRequestExecutingMessageHandler outbound() {
HttpRequestExecutingMessageHandler handler =
new HttpRequestExecutingMessageHandler("http://localhost:8080/foo");
handler.setHttpMethod(HttpMethod.POST);
handler.setExpectedResponseType(String.class);
return handler;
}
@Bean
public IntegrationFlow outbound() {
return IntegrationFlows.from("httpOutRequest")
.handle(Http.outboundGateway("http://localhost:8080/foo")
.httpMethod(HttpMethod.POST)
.expectedResponseType(String.class))
.get();
}
First, the components interact with Message Channels, for which timeouts can be specified. For
example, an HTTP Inbound Gateway will forward messages received from connected HTTP Clients to a
Message Channel (Request Timeout) and consequently the HTTP Inbound Gateway will receive a reply
Message from the Reply Channel (Reply Timeout) that will be used to generate the HTTP Response.
Please see the figure below for an illustration.
For outbound endpoints, the second thing to consider is timing while interacting with the remote server.
You may want to configure the HTTP related timeout behavior, when making active HTTP requests
using the HTTP Outbound Gateway or the HTTP Outbound Channel Adapter. In those instances, these
two components use Spring’s RestTemplate support to execute HTTP requests.
In order to configure timeouts for the HTTP Outbound Gateway and the HTTP Outbound Channel
Adapter, you can either reference a RestTemplate bean directly, using the rest-template attribute, or
you can provide a reference to a ClientHttpRequestFactory bean using the request-factory attribute.
Spring provides the following implementations of the ClientHttpRequestFactory interface:
• SimpleClientHttpRequestFactory
• HttpComponentsClientHttpRequestFactory
If you don’t explicitly configure the request-factory or rest-template attribute respectively, then a default
RestTemplate which uses a SimpleClientHttpRequestFactory will be instantiated.
Note
With some JVM implementations, the handling of timeouts using the URLConnection class may
not be consistent.
E.g. from the Java™ Platform, Standard Edition 6 API Specification on setConnectTimeout:
"Some non-standard implementation of this method may ignore the specified timeout. To
see the connect timeout set, please call getConnectTimeout()."
Please test your timeouts if you have specific needs. Consider using
the HttpComponentsClientHttpRequestFactory which, in turn, uses Apache
HttpComponents HttpClient instead.
Important
When using the Apache HttpComponents HttpClient with a Pooling Connection Manager, be
aware that, by default, the connection manager will create no more than 2 concurrent connections
per given route and no more than 20 connections in total. For many real-world applications these
limits may prove too constraining. Refer to the Apache documentation (link above) for information
about configuring this important component.
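As a sketch (the bean id and limit values are illustrative), the pool limits can be raised with a PoolingHttpClientConnectionManager bean; the resulting manager must then be used to build the HttpClient (e.g. via HttpClientBuilder) that backs an HttpComponentsClientHttpRequestFactory:

```xml
<bean id="connectionManager"
      class="org.apache.http.impl.conn.PoolingHttpClientConnectionManager">
    <property name="maxTotal" value="100"/>
    <property name="defaultMaxPerRoute" value="20"/>
</bean>
```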
<int-http:outbound-gateway url="http://www.google.com/ig/api?weather={city}"
http-method="GET"
expected-response-type="java.lang.String"
request-factory="requestFactory"
request-channel="requestChannel"
reply-channel="replyChannel">
<int-http:uri-variable name="city" expression="payload"/>
</int-http:outbound-gateway>
<bean id="requestFactory"
class="org.springframework.http.client.SimpleClientHttpRequestFactory">
<property name="connectTimeout" value="5000"/>
<property name="readTimeout" value="5000"/>
</bean>
For the HTTP Outbound Gateway, the XML Schema defines only the
reply-timeout. The reply-timeout maps to the sendTimeout property of the
org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler class. More
precisely, the property is set on the extended AbstractReplyProducingMessageHandler class,
which ultimately sets the property on the MessagingTemplate.
The value of the sendTimeout property defaults to "-1" and will be applied to the connected
MessageChannel. This means, that depending on the implementation, the Message Channel’s send
method may block indefinitely. Furthermore, the sendTimeout property is only used, when the actual
MessageChannel implementation has a blocking send (such as full bounded QueueChannel).
For the HTTP Inbound Gateway, the XML Schema defines the request-timeout attribute, which will be
used to set the requestTimeout property on the HttpRequestHandlingMessagingGateway class
(on the extended MessagingGatewaySupport class). Secondly, the reply-timeout attribute exists and
it maps to the replyTimeout property on the same class.
The default for both timeout properties is "1000ms". Ultimately, the request-timeout property will be used
to set the sendTimeout on the used MessagingTemplate instance. The replyTimeout property on the
other hand, will be used to set the receiveTimeout property on the used MessagingTemplate instance.
Tip
There are 3 System Properties you can set to configure the proxy settings that will be used by the HTTP
protocol handler:
• http.proxyHost - the host name of the proxy server.
• http.proxyPort - the port number; the default value is 80.
• http.nonProxyHosts - a list of hosts that should be reached directly, bypassing the proxy. This is a list
of patterns separated by |. The patterns may start or end with a * for wildcards. Any host matching
one of these patterns will be reached through a direct connection instead of through a proxy.
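These settings can also be supplied programmatically before any connection is made; the host, port, and host patterns below are placeholders:

```java
public class ProxySettings {

    public static void configure() {
        // Route HTTP traffic through a proxy (placeholder host/port values)
        System.setProperty("http.proxyHost", "proxy.example.com");
        System.setProperty("http.proxyPort", "8080");
        // Reach these hosts directly, bypassing the proxy
        System.setProperty("http.nonProxyHosts", "localhost|127.*|*.internal.example.com");
    }
}
```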
Spring’s SimpleClientHttpRequestFactory
If for any reason, you need more explicit control over the proxy configuration, you can use Spring’s
SimpleClientHttpRequestFactory and configure its proxy property as such:
<bean id="requestFactory"
class="org.springframework.http.client.SimpleClientHttpRequestFactory">
<property name="proxy">
<bean id="proxy" class="java.net.Proxy">
<constructor-arg>
<util:constant static-field="java.net.Proxy.Type.HTTP"/>
</constructor-arg>
<constructor-arg>
<bean class="java.net.InetSocketAddress">
<constructor-arg value="123.0.0.1"/>
<constructor-arg value="8080"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<int-http:outbound-gateway id="httpGateway"
url="http://localhost/test2"
mapped-request-headers="foo, bar"
mapped-response-headers="X-*, HTTP_RESPONSE_HEADERS"
channel="someChannel"/>
<int-http:outbound-channel-adapter id="httpAdapter"
url="http://localhost/test2"
mapped-request-headers="foo, bar, HTTP_REQUEST_HEADERS"
channel="someChannel"/>
The adapters and gateways will use the DefaultHttpHeaderMapper which now provides two static
factory methods for "inbound" and "outbound" adapters so that the proper direction can be applied
(mapping HTTP requests/responses IN/OUT as appropriate).
Before version 5.0, in the DefaultHttpHeaderMapper the default prefix for user-defined, non-standard
HTTP headers was X-. In version 5.0 this was changed to an empty string. According to
RFC-6648, the use of such prefixes is now discouraged. This option can still be customized by calling
DefaultHttpHeaderMapper.setUserDefinedHeaderPrefix().
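For example, a headerMapper bean such as the one referenced in the following configuration can be created via the outboundMapper() factory method and, if desired, given back the old prefix (a sketch; the prefix value is illustrative):

```xml
<bean id="headerMapper"
      class="org.springframework.integration.http.support.DefaultHttpHeaderMapper"
      factory-method="outboundMapper">
    <property name="userDefinedHeaderPrefix" value="X-"/>
</bean>
```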
<int-http:outbound-gateway id="httpGateway"
url="http://localhost/test2"
header-mapper="headerMapper"
channel="someChannel"/>
Of course, you can even implement the HeaderMapper strategy interface directly and provide a
reference to that if you need to do something other than what the DefaultHttpHeaderMapper
supports.
This example demonstrates how simple it is to send a Multipart HTTP request via Spring’s RestTemplate
and receive it with a Spring Integration HTTP Inbound Adapter. All we are doing is creating a
MultiValueMap and populating it with multi-part data. The RestTemplate will take care of the rest
(no pun intended) by converting it to a MultipartHttpServletRequest. This particular client will
send a multipart HTTP Request which contains the name of the company as well as an image file with
the company logo.
<int-http:inbound-channel-adapter id="httpInboundAdapter"
channel="receiveChannel"
path="/inboundAdapter.htm"
supported-methods="GET, POST"/>
<int:channel id="receiveChannel"/>
<int:service-activator input-channel="receiveChannel">
<bean class="org.springframework.integration.samples.multipart.MultipartReceiver"/>
</int:service-activator>
<bean id="multipartResolver"
class="org.springframework.web.multipart.commons.CommonsMultipartResolver"/>
The httpInboundAdapter will receive the request and convert it to a Message with a payload that is
a LinkedMultiValueMap. We then parse that in the multipartReceiver service-activator.
Furthermore, the Spring Integration JDBC Module also provides a JDBC Message Store.
Note
If you want to convert rows in the SELECT query result to individual messages you can use a
downstream splitter.
The inbound adapter also requires a reference to either a JdbcTemplate instance or a DataSource.
As well as the SELECT statement to generate the messages, the adapter above also has an UPDATE
statement that is being used to mark the records as processed so that they don’t show up in the next
poll. The update can be parameterized by the list of ids from the original select. This is done through a
naming convention by default (a column in the input result set called "id" is translated into a list in the
parameter map for the update called "id"). The following example defines an inbound Channel Adapter
with an update query and a DataSource reference.
Note
The parameters in the update query are specified with a colon (:) prefix to the name of a parameter
(which in this case is an expression to be applied to each of the rows in the polled result set).
This is a standard feature of the named parameter JDBC support in Spring JDBC combined with
a convention (projection onto the polled result list) adopted in Spring Integration. The underlying
Spring JDBC features limit the available expressions (e.g. most special characters other than
period are disallowed), but since the target is usually a list of or an individual object addressable
by simple bean paths this isn’t unduly restrictive.
To change the parameter generation strategy you can inject a SqlParameterSourceFactory into
the adapter to override the default behavior (the adapter has a sql-parameter-source-factory
attribute). Spring Integration provides an ExpressionEvaluatingSqlParameterSourceFactory
which will create a SpEL-based parameter source, with the results of the query as the #root object.
(If update-per-row is true, the root object is the row). If the same parameter name appears multiple
times in the update query, it is evaluated only one time, and its result is cached.
You can also use a parameter source for the select query. In this case, since there is no "result" object
to evaluate against, a single parameter source is used each time (rather than using a parameter source
factory). Starting with version 4.0, you can use Spring to create a SpEL based parameter source as
follows:
<bean id="parameterSourceFactory"
class="o.s.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
<property name="parameterExpressions">
<map>
<entry key="status" value="@statusBean.which()" />
</map>
</property>
</bean>
The value in each parameter expression can be any valid SpEL expression. The #root object for the
expression evaluation is the constructor argument defined on the parameterSource bean. It is static
for all evaluations (in this case, an empty String).
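The parameterSource bean referenced above can be defined against the factory, passing the empty String as the static #root object (a sketch, assuming the createParameterSourceNoCache factory method):

```xml
<bean id="parameterSource" factory-bean="parameterSourceFactory"
      factory-method="createParameterSourceNoCache">
    <constructor-arg value=""/>
</bean>
```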
The example below provides the SQL type for the parameters being used in the query.
<bean id="parameterSourceFactory"
class="o.s.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
<property name="sqlParameterTypes">
<map>
<entry key="status" value="#{T(java.sql.Types).BINARY}" />
</map>
</property>
</bean>
The inbound adapter accepts a regular Spring Integration poller as a sub element, so for instance the
frequency of the polling can be controlled. A very important feature of the poller for JDBC usage is the
option to wrap the poll operation in a transaction, for example:
<int-jdbc:inbound-channel-adapter query="..."
channel="target" data-source="dataSource" update="...">
<int:poller fixed-rate="1000">
<int:transactional/>
</int:poller>
</int-jdbc:inbound-channel-adapter>
Note
If a poller is not explicitly specified, a default value will be used (and as per normal with Spring
Integration can be defined as a top level bean).
In this example the database is polled every 1000 milliseconds, and the update and select queries are
both executed in the same transaction. The transaction manager configuration is not shown, but as long
as it is aware of the data source then the poll is transactional. A common use case is for the downstream
channels to be direct channels (the default), so that the endpoints are invoked in the same thread, and
hence the same transaction. Then if any of them fail, the transaction rolls back and the input data is
reverted to its original state.
The JDBC Inbound Channel Adapter defines an attribute max-rows-per-poll. When you specify the
adapter’s Poller, you can also define a property called max-messages-per-poll. While these two
attributes look similar, their meaning is quite different.
max-messages-per-poll specifies the number of times the query is executed per polling interval,
whereas max-rows-per-poll specifies the number of rows returned for each execution.
Under normal circumstances, you would likely not want to set the Poller’s max-messages-per-poll
property when using the JDBC Inbound Channel Adapter. Its default value is 1, which means that the
JDBC Inbound Channel Adapter's receive() method is executed exactly once for each poll interval.
Setting the max-messages-per-poll attribute to a larger value means that the query is executed that
many times back to back. For more information regarding the max-messages-per-poll attribute,
please see the section called “Configuring An Inbound Channel Adapter”.
In contrast, the max-rows-per-poll attribute, if greater than 0, specifies the maximum number of
rows that will be used from the query result set, per execution of the receive() method. If the attribute
is set to 0, then all rows will be included in the resulting message. If not explicitly set, the attribute
defaults to 0.
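For example, the following (illustrative) configuration limits each execution of the receive() method to at most 100 rows:

```xml
<int-jdbc:inbound-channel-adapter query="select * from item where status=2"
    channel="target"
    data-source="dataSource"
    max-rows-per-poll="100">
    <int:poller fixed-rate="1000"/>
</int-jdbc:inbound-channel-adapter>
```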
<int-jdbc:outbound-channel-adapter
query="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
data-source="dataSource"
channel="input"/>
In the example above, messages arriving on the channel labelled input have a payload of a map with key
foo, so the [] operator dereferences that value from the map. The headers are also accessed as a map.
Note
The parameters in the query above are bean property expressions on the incoming message (not
Spring EL expressions). This behavior is part of the SqlParameterSource which is the default
source created by the outbound adapter. Other behavior is possible in the adapter, and requires
the user to inject a different SqlParameterSourceFactory.
The outbound adapter requires a reference to either a DataSource or a JdbcTemplate. It can also
have a SqlParameterSourceFactory injected to control the binding of each incoming message to
a query.
If the input channel is a direct channel, then the outbound adapter runs its query in the same thread,
and therefore the same transaction (if there is one) as the sender of the message.
A common requirement for most JDBC Channel Adapters is to pass parameters as part of Sql queries
or Stored Procedures/Functions. As mentioned above, these parameters are by default bean property
expressions, not SpEL expressions. However, if you need to pass SpEL expression as parameters, you
must inject a SqlParameterSourceFactory explicitly.
<bean id="spelSource"
class="o.s.integration.jdbc.ExpressionEvaluatingSqlParameterSourceFactory">
<property name="parameterExpressions">
<map>
<entry key="id" value="headers['id'].toString()"/>
<entry key="createdDate" value="new java.util.Date()"/>
<entry key="payload" value="payload"/>
</map>
</property>
</bean>
For further information, please also see the section called “Defining Parameter Sources”
PreparedStatement Callback
There are some cases when the flexibility and loose coupling of the SqlParameterSourceFactory isn't
enough for the target PreparedStatement, or we need to do some low-level JDBC work. The Spring
JDBC module provides APIs to configure the execution environment (e.g. ConnectionCallback or
PreparedStatementCreator) and to manipulate parameter values (e.g. SqlParameterSource),
as well as APIs for low-level operations, for example StatementCallback.
Starting with Spring Integration 4.2, the MessagePreparedStatementSetter is available to allow the
specification of parameters on the PreparedStatement manually, in the requestMessage context.
This class plays exactly the same role as PreparedStatementSetter in the standard Spring JDBC
API. Actually it is invoked directly from an inline PreparedStatementSetter implementation, when
the JdbcMessageHandler invokes execute on the JdbcTemplate.
This functional interface option is mutually exclusive with sqlParameterSourceFactory and can
be used as a more powerful alternative to populate parameters of the PreparedStatement from the
requestMessage. For example, it is useful when we need to store File data in a database BLOB
column in a streaming manner:
@Bean
@ServiceActivator(inputChannel = "storeFileChannel")
public MessageHandler jdbcMessageHandler(DataSource dataSource) {
JdbcMessageHandler jdbcMessageHandler = new JdbcMessageHandler(dataSource,
"INSERT INTO imagedb (image_name, content, description) VALUES (?, ?, ?)");
jdbcMessageHandler.setPreparedStatementSetter((ps, m) -> {
ps.setString(1, m.getHeaders().get(FileHeaders.FILENAME));
try (FileInputStream inputStream = new FileInputStream((File) m.getPayload())) {
ps.setBlob(2, inputStream);
}
catch (Exception e) {
throw new MessageHandlingException(m, e);
}
ps.setClob(3, new StringReader(m.getHeaders().get("description", String.class)));
});
return jdbcMessageHandler;
}
<int-jdbc:outbound-gateway
update="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
request-channel="input" reply-channel="output" data-source="dataSource" />
The result of the above would be to insert a record into the "foos" table and return a message to the
output channel indicating the number of rows affected (the payload is a map: {UPDATED=1}).
If the update query is an insert with auto-generated keys, the reply message can be populated with the
generated keys by adding keys-generated="true" to the above example (this is not the default
because it is not supported by some database platforms). For example:
<int-jdbc:outbound-gateway
update="insert into foos (status, name) values (0, :payload[foo])"
request-channel="input" reply-channel="output" data-source="dataSource"
keys-generated="true"/>
Instead of the update count or the generated keys, you can also provide a select query to execute and
generate a reply message from the result (like the inbound adapter), e.g:
<int-jdbc:outbound-gateway
update="insert into foos (id, status, name) values (:headers[id], 0, :payload[foo])"
query="select * from foos where id=:headers[id]"
request-channel="input" reply-channel="output" data-source="dataSource"/>
Since Spring Integration 2.2 the update SQL query is no longer mandatory. You can now solely provide
a select query, using either the query attribute or the query sub-element. This is extremely useful if you
need to actively retrieve data using e.g. a generic Gateway or a Payload Enricher. The reply message
is then generated from the result, like the inbound adapter, and passed to the reply channel.
<int-jdbc:outbound-gateway
query="select * from foos where id=:headers[id]"
request-channel="input"
reply-channel="output"
data-source="dataSource"/>
Important
By default the component for the SELECT query returns only the first row from the cursor. This
can be adjusted with the max-rows-per-poll option. Consider specifying max-rows-per-
poll="0" if you need to return all the rows from the SELECT.
As with the channel adapters, there is also the option to provide SqlParameterSourceFactory
instances for request and reply. The default is the same as for the outbound adapter, so the request
message is available as the root of an expression. If keys-generated="true" then the root of the
expression is the generated keys (a map if there is only one or a list of maps if multi-valued).
The outbound gateway requires a reference to either a DataSource or a JdbcTemplate. It can also have a
SqlParameterSourceFactory injected to control the binding of the incoming message to the query.
See Section 19.2, “Outbound Channel Adapter” for more information about
MessagePreparedStatementSetter.
Spring Integration ships with some sample scripts that can be used to initialize a
database. In the spring-integration-jdbc JAR file you can find scripts in the
org.springframework.integration.jdbc package: there is a create and a drop script example
for a range of common database platforms. A common way to use these scripts is to reference them in
a Spring JDBC data source initializer. Note that the scripts are provided as samples or specifications
of the required table and column names. You may find that you need to enhance them for production
use (e.g. with index declarations).
Here we have specified a LobHandler for dealing with messages as large objects (e.g. often necessary
if using Oracle) and a prefix for the table names in the queries generated by the store. The table name
prefix defaults to INT_.
Supported Databases
The JdbcChannelMessageStore uses database specific SQL queries to retrieve messages from
the database. Therefore, users must set the ChannelMessageStoreQueryProvider property on
the JdbcChannelMessageStore. This channelMessageStoreQueryProvider provides the SQL
queries and Spring Integration provides support for the following relational databases:
• PostgreSQL
• HSQLDB
• MySQL
• Oracle
• Derby
• H2
• SqlServer
• Sybase
• DB2
Since version 4.0, the MESSAGE_SEQUENCE column has been added to the table to ensure first-in-first-
out (FIFO) queueing even when messages are stored in the same millisecond.
The example below uses the default implementation of setValues to store common columns and
overrides the behavior just to store the message payload as varchar.
public class JsonPreparedStatementSetter extends ChannelMessageStorePreparedStatementSetter {
public JsonPreparedStatementSetter() {
super();
}
@Override
public void setValues(PreparedStatement preparedStatement, Message<?> requestMessage,
Object groupId, String region, boolean priorityEnabled) throws SQLException {
// Populate common columns
super.setValues(preparedStatement, requestMessage, groupId, region, priorityEnabled);
// Store message payload as varchar
preparedStatement.setString(6, requestMessage.getPayload().toString());
}
}
Important
Generally it is not recommended to use a relational database for the purpose of queuing. Instead,
if possible, consider using either JMS or AMQP backed channels. For further reference
please see the following resources:
• 5 subtle ways you’re using MySQL as a queue, and why it’ll bite you.
Concurrent Polling
When polling a Message Channel, you have the option to configure the associated Poller with a
TaskExecutor reference.
Important
Keep in mind, though, that if you use a JDBC backed Message Channel and you are planning on
polling the channel and consequently the message store transactionally with multiple threads, you
should ensure that you use a relational database that supports Multiversion Concurrency Control
(MVCC). Otherwise, locking may be an issue and the performance, when using multiple threads,
may not materialize as expected. For example Apache Derby is problematic in that regard.
To achieve better JDBC queue throughput, and avoid issues when different threads may poll
the same Message from the queue, it is important to set the usingIdCache property of
JdbcChannelMessageStore to true when using databases that do not support MVCC:
<bean id="queryProvider"
class="o.s.i.jdbc.store.channel.PostgresChannelMessageStoreQueryProvider"/>
<int:transaction-synchronization-factory id="syncFactory">
<int:after-commit expression="@store.removeFromIdCache(headers.id.toString())" />
<int:after-rollback expression="@store.removeFromIdCache(headers.id.toString())"/>
</int:transaction-synchronization-factory>
<int:channel id="inputChannel">
<int:queue message-store="store"/>
</int:channel>
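The configuration above references a store bean without showing it; a sketch of such a bean with the id cache enabled (the property names are real setters on JdbcChannelMessageStore, the wiring is illustrative) might look like:

```xml
<bean id="store" class="org.springframework.integration.jdbc.store.JdbcChannelMessageStore">
    <property name="dataSource" ref="dataSource"/>
    <property name="channelMessageStoreQueryProvider" ref="queryProvider"/>
    <property name="usingIdCache" value="true"/>
</bean>
```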
Priority Channel
Note
It’s not recommended to use the same JdbcChannelMessageStore bean for priority
and non-priority queue channels, because the priorityEnabled option applies to the entire
store and proper FIFO queue semantics will not be retained for the queue channel.
However, the same INT_CHANNEL_MESSAGE table, and even region, can be used for both
JdbcChannelMessageStore types. To configure that scenario, simply extend one message
store bean from the other:
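For example (a sketch; the channelStore and priorityStore names match the channel definitions that follow):

```xml
<bean id="channelStore"
      class="org.springframework.integration.jdbc.store.JdbcChannelMessageStore">
    <property name="dataSource" ref="dataSource"/>
    <property name="channelMessageStoreQueryProvider" ref="queryProvider"/>
</bean>

<bean id="priorityStore" parent="channelStore">
    <property name="priorityEnabled" value="true"/>
</bean>
```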
<int:channel id="queueChannel">
<int:queue message-store="channelStore"/>
</int:channel>
<int:channel id="priorityChannel">
<int:priority-queue message-store="priorityStore"/>
</int:channel>
Supported Databases
In order to enable calls to Stored Procedures and Stored Functions, the Stored Procedure components
use the org.springframework.jdbc.core.simple.SimpleJdbcCall class. Consequently, the
following databases are fully supported for executing Stored Procedures:
• Apache Derby
• DB2
• MySQL
• Oracle
• PostgreSQL
• Sybase
If you want to execute Stored Functions instead, the following databases are fully supported:
• MySQL
• Oracle
• PostgreSQL
Note
Even though your particular database may not be fully supported,