
Friday, December 19, 2025

DuckDB: very useful where least expected

If you haven't heard about DuckDB yet, you definitely have to check it out. It is a fascinating piece of technology for quick data exploration and analysis. But today we are going to talk about a somewhat surprising yet exceptionally useful area where DuckDB could be tremendously helpful: dealing with chatty HTTP/REST services.

One of the best things about DuckDB is that it is just a single binary (per OS/arch), with no additional dependencies, so the installation process is a breeze.
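
On Linux and macOS, at the time of writing, the official installer is a single shell one-liner (please double-check the DuckDB installation page for your platform, as this may change):

$ curl https://install.duckdb.org | sh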

Let me set the stage here. I have been working with OpenSearch (and Elasticsearch) for years; those are great search engines with very rich HTTP/REST APIs. Production-grade clusters may comprise hundreds of nodes, and at this scale, nearly every cluster-wide HTTP/REST endpoint invocation returns unmanageable JSON blobs. Wouldn't it be cool to transform such JSON blobs into a structured, queryable form? Like a relational table, for example, and run SQL queries over it, without writing a single line of code? It is absolutely doable with DuckDB and its JSON Processing Functions.

As an exercise, we are going to play with the Nodes API, which returns a detailed per-node response following a deeply nested JSON structure:

{
  "cluster_name" : "...",
  "_nodes" : {
     ...
  },
  "nodes" : {
    <node1> : {
        ...
    },
    <node2> : {
        ...
    },
    ...
    <nodeN> : {
        ...
    }
  }
}

Ideally, what we want is to flatten this structure into a table where each row represents an individual node and each JSON key becomes a column of its own. To put things in context, each node structure has nested arrays and objects; we will not recursively traverse them (it is possible, but requires more complex transformations). With that, let us start our exploration journey!

The first step is the simplest: extract the nodes collection of objects from the Nodes API response and feed it directly into DuckDB.

$ curl "https://localhost:9200/_nodes?pretty" -u admin:<password> -k --raw -s | duckdb -c "
  WITH nodes AS (
    SELECT key as id, value FROM  read_json_auto('/dev/stdin') AS r, json_each(r, '$.nodes')
  )
  SELECT * FROM nodes"

We would get back something like this (the OpenSearch cluster I use for tests has only two nodes, hence we see only two rows):

┌──────────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│          id          │                                                                 value                                                                  │
│       varchar        │                                                                  json                                                                  │
├──────────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ YQzVQ_kHT-WnYTReh0…  │ {"name":"opensearch-node2","transport_address":"10.89.0.3:9300","host":"10.89.0.3","ip":"10.89.0.3","version":"3.0.0","build_type":"…  │
│ J9a5OM8STainCdkaLm…  │ {"name":"opensearch-node1","transport_address":"10.89.0.2:9300","host":"10.89.0.2","ip":"10.89.0.2","version":"3.0.0","build_type":"…  │
└──────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
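
Since the value column holds plain JSON, we could already cherry-pick individual fields out of it with DuckDB's ->> extraction operator (a small sketch reusing the name and version keys visible in the output above):

$ curl "https://localhost:9200/_nodes?pretty" -u admin:<password> -k --raw -s | duckdb -c "
  WITH nodes AS (
    SELECT key as id, value FROM read_json_auto('/dev/stdin') AS r, json_each(r, '$.nodes')
  )
  SELECT id, value->>'$.name' AS name, value->>'$.version' AS version FROM nodes"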

Although we could stop just here (as the sketch above shows, DuckDB lets you query over JSON values easily), we would like to do something more useful with the value blob: we want it to become a table with real columns. There are many ways to accomplish that in DuckDB; some require a precise JSON structure (schema) to be provided, others a predefined list of keys upfront. We will stick to dynamic exploration instead and extract all keys from the actual JSON data.

$ curl "https://localhost:9200/_nodes?pretty" -u admin:<password> | duckdb -c "
  WITH nodes AS (
    SELECT key as id, value FROM  read_json_auto('/dev/stdin') AS r, json_each(r, '$.nodes')
  ),
  all_keys AS (
    SELECT distinct(unnest(json_keys(value))) AS key FROM nodes
  )
  SELECT * FROM all_keys"

In the version of OpenSearch I am running, there are 22 unique keys (JSON field names) returned; an example of the output is below:

┌────────────────────────────────┐
│              key               │
│            varchar             │
├────────────────────────────────┤
│ plugins                        │
│ jvm                            │
│ host                           │
│ version                        │
│ build_hash                     │
│ ...                            │
│ modules                        │
│ build_type                     │
│ os                             │
│ transport                      │
│ search_pipelines               │
│ attributes                     │
├────────────────────────────────┤
│            22 rows             │
└────────────────────────────────┘

Good progress so far, but we still need to go the last mile and build a relational table out of these pieces. This is where DuckDB's powerful PIVOT statement comes in very handy.

$ curl "https://localhost:9200/_nodes?pretty" -u admin:<password> | duckdb -c "
  WITH nodes AS (
    SELECT key as id, value FROM  read_json_auto('/dev/stdin') AS r, json_each(r, '$.nodes')
  ),
  all_keys AS (
    SELECT distinct(unnest(json_keys(value))) AS key FROM nodes
  ),
  keys AS (
    SELECT * FROM all_keys WHERE key not in ['plugins', 'modules']
  )
  SELECT id, node.* FROM nodes, (PIVOT keys ON(key) USING first(json_extract(value, '$.' || key))) as node"

And here we are:

┌──────────────────────┬──────────────────────┬──────────────────────┬──────────────────────┬───┬──────────────────────┬──────────────────────┬──────────────────────┬───────────────────┬─────────┐
│          id          │     aggregations     │      attributes      │      build_hash      │ … │     thread_pool      │ total_indexing_buf…  │      transport       │ transport_address │ version │
│       varchar        │         json         │         json         │         json         │   │         json         │         json         │         json         │       json        │  json   │
├──────────────────────┼──────────────────────┼──────────────────────┼──────────────────────┼───┼──────────────────────┼──────────────────────┼──────────────────────┼───────────────────┼─────────┤
│ YQzVQ_kHT-WnYTReh0…  │ {"adjacency_matrix…  │ {"shard_indexing_p…  │ "dc4efa821904cc2d7…  │ … │ {"remote_refresh_r…  │ 53687091             │ {"bound_address":[…  │ "10.89.0.3:9300"  │ "3.0.0" │
│ J9a5OM8STainCdkaLm…  │ {"adjacency_matrix…  │ {"shard_indexing_p…  │ "dc4efa821904cc2d7…  │ … │ {"remote_refresh_r…  │ 53687091             │ {"bound_address":[…  │ "10.89.0.2:9300"  │ "3.0.0" │
├──────────────────────┴──────────────────────┴──────────────────────┴──────────────────────┴───┴──────────────────────┴──────────────────────┴──────────────────────┴───────────────────┴─────────┤
│ 2 rows                                                                                                                                                                      21 columns (9 shown) │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

As we set out to do, we have transformed each node from JSON into a structured table. There is one subtle quirk to mention: the additional step that filters out the plugins and modules fields from the transformation. DuckDB seems to have difficulties pivoting those:

Binder Error:
PIVOT is not supported in correlated subqueries yet

I hope you find it useful. Typical enterprise-grade HTTP/REST services often throw a pile of JSON at you; try to make sense of it!

I πŸ‡ΊπŸ‡¦ stand πŸ‡ΊπŸ‡¦ with πŸ‡ΊπŸ‡¦ Ukraine.

Tuesday, February 18, 2020

In praise of the thoughtful design: how property-based testing helps me to be a better developer

The developer's testing toolbox is one of those things that rarely stays unchanged. For sure, some testing practices have proven to be more valuable than others, but still, we are constantly looking for better, faster and more expressive ways to test our code. Property-based testing, largely unknown to the Java community, is yet another gem crafted by Haskell folks and described in the QuickCheck paper.

The power of this testing technique was quickly realized by the Scala community (where the ScalaCheck library was born) and many others, but the Java ecosystem lacked interest in adopting property-based testing for quite some time. Luckily, since jqwik appeared, things are slowly changing for the better.

For many, it is quite difficult to grasp what property-based testing is and how it could be exploited. The presentation Property-based Testing for Better Code by Jessica Kerr and the comprehensive An introduction to property-based testing and Property-based Testing Patterns series of articles are excellent sources to get you hooked, but in today's post we are going to explore the practical side of property-based testing for a typical Java developer using jqwik.

To start with, what does the name property-based testing actually imply? The first thought of every Java developer might be that it aims to test all getters and setters (hello, 100% coverage)? Not really, although for some data structures that could be useful. Instead, we should identify the high-level characteristics, if you will, of the component, data structure, or even individual function, and efficiently test them by formulating a hypothesis.

Our first example falls into the category "There and back again": serialization and deserialization into a JSON representation. The class under test is the User POJO; although trivial, please notice that it has one temporal property of type OffsetDateTime.

public class User {
    private String username;
    @JsonFormat(pattern = "yyyy-MM-dd'T'HH:mm:ss[.SSS[SSS]]XXX", shape = Shape.STRING)
    private OffsetDateTime created;
    
    // ...
}

It is surprising to see how often manipulation of date/time properties causes issues these days, since everyone tries to use their own representation. As you can spot, our contract is using the ISO-8601 interchange format with an optional milliseconds part. What we would like to make sure of is that any valid instance of User can be serialized into JSON and deserialized back into a Java object without losing any date/time precision. As an exercise, let us try to express that in pseudo code first:

For any user
  Serialize user instance to JSON
  Deserialize user instance back from JSON
  Two user instances must be identical

Looks simple enough, but here comes the surprising part: let us take a look at how this pseudo code projects into a real test case using the jqwik library. It gets as close to our pseudo code as it possibly could.

@Property
void serdes(@ForAll("users") User user) throws JsonProcessingException {
    final String json = serdes.serialize(user);

    assertThat(serdes.deserialize(json))
        .satisfies(other -> {
            assertThat(user.getUsername()).isEqualTo(other.getUsername());
            assertThat(user.getCreated().isEqual(other.getCreated())).isTrue();
        });
        
    Statistics.collect(user.getCreated().getOffset());
}
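
The serdes collaborator is not shown in the snippet; a minimal sketch of it, assuming plain Jackson with the JavaTimeModule registered (the UserSerDes name here is made up for illustration), might look like this:

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

public class UserSerDes {
    // Jackson mapper with Java Time support; the @JsonFormat pattern on
    // User.created drives the ISO-8601 string representation
    private final ObjectMapper mapper = new ObjectMapper()
        .registerModule(new JavaTimeModule());

    public String serialize(final User user) throws JsonProcessingException {
        return mapper.writeValueAsString(user);
    }

    public User deserialize(final String json) throws JsonProcessingException {
        return mapper.readValue(json, User.class);
    }
}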

The test case reads very easily, almost naturally, but obviously there is some background hidden behind jqwik's @Property and @ForAll annotations. Let us start with @ForAll and clear up where all these User instances are coming from. As you may guess, these instances must be generated, preferably in a randomized fashion.

For most of the built-in data types jqwik has a rich set of data providers (Arbitraries), but since we are dealing with an application-specific class, we have to supply our own generation strategy. It should be able to emit User class instances with a wide range of usernames and date/time instants across different time zones and offsets. Let us take a sneak peek at the provider implementation first and discuss it in detail right after.

@Provide
Arbitrary<User> users() {
    final Arbitrary<String> usernames = Arbitraries.strings().alpha().ofMaxLength(64);
 
    final Arbitrary<OffsetDateTime> dates = Arbitraries
        .of(List.copyOf(ZoneId.getAvailableZoneIds()))
        .flatMap(zone -> Arbitraries
            .longs()
            .between(1266258398000L, 1897410427000L) // ~ +/- 10 years
            .unique()
            .map(epochMilli -> Instant.ofEpochMilli(epochMilli))
            .map(instant -> OffsetDateTime.from(instant.atZone(ZoneId.of(zone)))));

    return Combinators
        .combine(usernames, dates)
        .as((username, created) -> new User(username).created(created));

}

The source of usernames is easy: just random strings. The source of dates could basically be any date/time between 2010 and 2030, whereas the time zone part (and thus the offset) is randomly picked from all available region-based zone identifiers. For example, below are some samples jqwik came up with.

{"username":"zrAazzaDZ","created":"2020-05-06T01:36:07.496496+03:00"}
{"username":"AZztZaZZWAaNaqagPLzZiz","created":"2023-03-20T00:48:22.737737+08:00"}
{"username":"aazGZZzaoAAEAGZUIzaaDEm","created":"2019-03-12T08:22:12.658658+04:00"}
{"username":"Ezw","created":"2011-10-28T08:07:33.542542Z"}
{"username":"AFaAzaOLAZOjsZqlaZZixZaZzyZzxrda","created":"2022-07-09T14:04:20.849849+02:00"}
{"username":"aaYeZzkhAzAazJ","created":"2016-07-22T22:20:25.162162+06:00"}
{"username":"BzkoNGzBcaWcrDaaazzCZAaaPd","created":"2020-08-12T22:23:56.902902+08:45"}
{"username":"MazNzaTZZAEhXoz","created":"2027-09-26T17:12:34.872872+11:00"}
{"username":"zqZzZYamO","created":"2023-01-10T03:16:41.879879-03:00"}
{"username":"GaaUazzldqGJZsqksRZuaNAqzANLAAlj","created":"2015-03-19T04:16:24.098098Z"}
...

By default, jqwik will run the test against 1000 different sets of parameter values (randomized User instances). The quite helpful Statistics container allows collecting whatever distribution insights you are curious about. Just in case, why not collect the distribution by zone offsets?

    ...
    -04:00 (94) :  9.40 %
    -03:00 (76) :  7.60 %
    +02:00 (75) :  7.50 %
    -05:00 (74) :  7.40 %
    +01:00 (72) :  7.20 %
    +03:00 (69) :  6.90 %
    Z      (62) :  6.20 %
    -06:00 (54) :  5.40 %
    +11:00 (42) :  4.20 %
    -07:00 (39) :  3.90 %
    +08:00 (37) :  3.70 %
    +07:00 (34) :  3.40 %
    +10:00 (34) :  3.40 %
    +06:00 (26) :  2.60 %
    +12:00 (23) :  2.30 %
    +05:00 (23) :  2.30 %
    -08:00 (20) :  2.00 %
    ...    

Let us consider another example. Imagine at some point we decided to reimplement the equality for the User class (which in Java means overriding equals and hashCode) based on the username property. With that, for any pair of User class instances the following invariants must hold true:

  • if two User instances have the same username, they are equal and must have the same hash code
  • if two User instances have different usernames, they are not equal (but their hash codes may not necessarily be different)

It is a perfect fit for property-based testing, and jqwik in particular makes such tests trivial to write and maintain.

@Provide
Arbitrary<String> usernames() {
    return Arbitraries.strings().alpha().ofMaxLength(64);
}

@Property
void equals(@ForAll("usernames") String username, @ForAll("usernames") String other) {
    Assume.that(!username.equals(other));
        
    assertThat(new User(username))
        .isEqualTo(new User(username))
        .isNotEqualTo(new User(other))
        .extracting(User::hashCode)
        .isEqualTo(new User(username).hashCode());
}

The assumptions expressed through Assume allow us to put additional constraints on the generated parameters: since we introduce two sources of usernames, it could happen that both emit an identical username in the same run, and the test would fail.

The question you may have been holding back until now is: what is the point? It is surely possible to test serialization / deserialization or equals/hashCode without embarking on property-based testing and jqwik, so why even bother? Fair enough, but the answer to this question lies deep in how we approach the design of our software systems.

By and large, property-based testing is heavily influenced by functional programming, not the first thing that comes to mind with respect to Java (at least, not yet), to put it mildly. The randomized generation of test data is not a novel idea per se; however, what property-based testing encourages you to do, at least in my opinion, is to think in more abstract terms: to focus not on individual operations (equals, compare, add, sort, serialize, ...) but on what kind of properties, characteristics, laws and/or invariants they have to obey. It certainly feels like an alien technique, a paradigm shift if you will, that encourages spending more time on designing the right thing. It does not mean that from now on all your tests must be property-based, but I believe it certainly deserves a place in the front row of our testing toolboxes.

Please find the complete project sources available on GitHub.

Saturday, November 30, 2019

Spring has you covered, again: consumer-driven contract testing for messaging continued

In the previous post we started talking about consumer-driven contract testing in the context of message-based communications. In today's post, we are going to include yet another tool in our testing toolbox, but before that, let me do a quick refresher on the system under the microscope. It has two services, Order Service and Shipment Service. The Order Service publishes messages / events to the message queue and the Shipment Service consumes them from there.

The search for suitable test scaffolding led us to the discovery of the Pact framework (to be precise, Pact JVM). Pact offers simple and straightforward ways to write consumer and producer tests, leaving no excuses for not doing consumer-driven contract testing. But there is another player on the field, Spring Cloud Contract, and this is what we are going to discuss today.

To start with, Spring Cloud Contract fits best JVM-based projects built on top of the terrific Spring portfolio (although you could make it work in polyglot scenarios as well). In addition, the collaboration flow that Spring Cloud Contract adopts is slightly different from the one Pact taught us, which is not necessarily a bad thing. Let us get straight to the point.

Since we are scoping out to messaging only, the first thing Spring Cloud Contract asks us to do is to define a messaging contract specification, written using the convenient Groovy Contract DSL.

package contracts

org.springframework.cloud.contract.spec.Contract.make {
    name "OrderConfirmed Event"
    label 'order'
    
    input {
        triggeredBy('createOrder()')
    }
    
    outputMessage {
        sentTo 'orders'
        
        body([
            orderId: $(anyUuid()),
            paymentId: $(anyUuid()),
            amount: $(anyDouble()),
            street: $(anyNonBlankString()),
            city: $(anyNonBlankString()),
            state: $(regex('[A-Z]{2}')),
            zip: $(regex('[0-9]{5}')),
            country: $(anyOf('USA','Mexico'))
        ])
        
        headers {
            header('Content-Type', 'application/json')
        }
    }
}

It closely resembles the Pact specifications we are already familiar with (and if you are not a big fan of Groovy, there is no real need to learn it in order to use Spring Cloud Contract). The interesting parts here are the triggeredBy and sentTo blocks: basically, those outline how the message is being produced (or triggered) and where it should land (channel or queue name), respectively. In this case, createOrder() is just a method name for which we have to provide the implementation.

package com.example.order;

import java.math.BigDecimal;
import java.util.UUID;

import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.verifier.messaging.boot.AutoConfigureMessageVerifier;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;
import org.springframework.test.context.junit4.SpringRunner;

import com.example.order.event.OrderConfirmed;

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMessageVerifier
public class OrderBase {
    @Autowired private MessageChannel orders;
    
    public void createOrder() {
        final OrderConfirmed order = new OrderConfirmed();
        order.setOrderId(UUID.randomUUID());
        order.setPaymentId(UUID.randomUUID());
        order.setAmount(new BigDecimal("102.32"));
        order.setStreet("1203 Westminster Blvd");
        order.setCity("Westminster");
        order.setCountry("USA");
        order.setState("MI");
        order.setZip("92239");

        orders.send(
            MessageBuilder
                .withPayload(order)
                .setHeader("Content-Type", "application/json")
                .build());
    }
}

There is one small detail left out though: these contracts are managed by providers (or, better to say, producers), not consumers. Not only that, the producers are responsible for publishing all the stubs so that consumers would be able to write their tests against them. Certainly a different path than the one Pact takes, but on the bright side, the test suites for producers are 100% generated by the Apache Maven / Gradle plugins.

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <version>2.1.4.RELEASE</version>
    <extensions>true</extensions>
    <configuration>
        <packageWithBaseClasses>com.example.order</packageWithBaseClasses>
    </configuration>
</plugin>

As you may have noticed, the plugin assumes that the base test classes (the ones which have to provide the createOrder() method implementation) are located in the com.example.order package, exactly where we have placed the OrderBase class. To complete the setup, we need to add a few dependencies to our pom.xml file.


<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Greenwich.SR4</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>2.1.10.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-contract-verifier</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

And we are done with the producer side! If we run mvn clean install right now, two things are going to happen. First, you will notice that some tests were run and passed; although we wrote none, these were generated on our behalf.

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.example.order.OrderTest

....

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

And secondly, the stubs for consumers are going to be generated (and published) as well (in this case, bundled into order-service-messaging-contract-tests-0.0.1-SNAPSHOT-stubs.jar).

...
[INFO]
[INFO] --- spring-cloud-contract-maven-plugin:2.1.4.RELEASE:generateStubs (default-generateStubs) @ order-service-messaging-contract-tests ---
[INFO] Files matching this pattern will be excluded from stubs generation []
[INFO] Building jar: order-service-messaging-contract-tests-0.0.1-SNAPSHOT-stubs.jar
[INFO]
....

Awesome, so we have the messaging contract specification and stubs published; the ball is in the consumer's court now, the Shipment Service. Probably the trickiest part for the consumer is to configure the messaging integration library of choice. In our case, it is going to be Spring Cloud Stream; however, other integrations are also available.

The fastest way to understand how Spring Cloud Contract works on the consumer side is to start from the end and look at the complete sample test suite first.

@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMessageVerifier
@AutoConfigureStubRunner(
    ids = "com.example:order-service-messaging-contract-tests:+:stubs", 
    stubsMode = StubRunnerProperties.StubsMode.LOCAL
)
public class OrderMessagingContractTest {
    @Autowired private MessageVerifier<Message<?>> verifier;
    @Autowired private StubFinder stubFinder;

    @Test
    public void testOrderConfirmed() throws Exception {
        stubFinder.trigger("order");
        
        final Message<?> message = verifier.receive("orders");
        assertThat(message, notNullValue());
        assertThat(message.getPayload(), isJson(
            allOf(List.of(
                withJsonPath("$.orderId"),
                withJsonPath("$.paymentId"),
                withJsonPath("$.amount"),
                withJsonPath("$.street"),
                withJsonPath("$.city"),
                withJsonPath("$.state"),
                withJsonPath("$.zip"),
                withJsonPath("$.country")
            ))));
    }
}

At the top, the @AutoConfigureStubRunner references the stubs published by the producer, effectively the ones from the order-service-messaging-contract-tests-0.0.1-SNAPSHOT-stubs.jar archive. The StubFinder helps us pick the right stub for the test case and trigger a particular messaging contract verification flow by means of calling stubFinder.trigger("order"). The value "order" is not arbitrary: it should match the label assigned to the contract specification. In our case we have it defined as:

package contracts

org.springframework.cloud.contract.spec.Contract.make {
    ...
    label 'order'
    ...
}

With that, the test should look simple and straightforward: trigger the flow, verify that the message has been placed into the messaging channel and satisfies the consumer expectations. From the configuration standpoint, we only need to provide this messaging channel to run the tests against.

@SpringBootConfiguration
public class OrderMessagingConfiguration {
    @Bean
    PollableChannel orders() {
        return MessageChannels.queue().get();
    }
}

And again, the name of the bean, orders, is not a random pick: it has to match the destination from the contract specification:

package contracts

org.springframework.cloud.contract.spec.Contract.make {
    ...
    outputMessage {
        sentTo 'orders'
        ...
    }
    ...
}

Last but not least, let us enumerate the dependencies required on the consumer side (luckily, there is no need for any additional Apache Maven or Gradle plugins).

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>Greenwich.SR4</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-contract-verifier</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream</artifactId>
        <version>2.2.1.RELEASE</version>
        <type>test-jar</type>
        <scope>test</scope>
        <classifier>test-binder</classifier>
    </dependency>
</dependencies>

A quick note here. The last dependency is quite an important piece of the puzzle: it brings the integration of Spring Cloud Stream with Spring Cloud Contract. With that, the consumers are all set.

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.example.order.OrderMessagingContractTest

...

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

To close the loop, we should look back at one of the core promises of consumer-driven contract testing: allowing producers to evolve the contracts without breaking the consumers. What that means practically is that consumers may contribute their tests back to the producers, although the importance of doing so is less of a concern with Spring Cloud Contract. The reason is simple: the producers are the ones who write the message contract specifications first, and the tests generated out of these specifications are expected to fail on any breaking change. Nonetheless, there are a number of benefits for producers in knowing how the consumers use their messages, so please give it some thought.

Hopefully, it was an interesting subject to discuss. Spring Cloud Contract brings a somewhat different perspective on applying consumer-driven contract testing to messaging. It is an appealing alternative to Pact JVM, especially if your applications and services already rely on Spring projects.

As always, the complete project sources are available on GitHub.

Thursday, October 31, 2019

Tell us what you want and we will make it so: consumer-driven contract testing for messaging

Quite some time ago we talked about consumer-driven contract testing from the perspective of REST(ful) web APIs in general and their projection into Java (the JAX-RS 2.0 specification) in particular. It would be fair to say that REST still dominates the web API landscape, at least with respect to public APIs, but the shift towards microservices and/or service-based architecture is changing the alignment of forces very fast. One such disrupting trend is messaging.

Modern REST(ful) APIs are implemented mostly over the HTTP/1.1 protocol and are constrained by its request/response communication style. HTTP/2 is here to help out, but still, not every use case fits into this communication model. Often the job could be performed asynchronously and the fact of its completion broadcast to interested parties later on. This is how most things work in real life, and messaging is a perfect answer to that.

The messaging space is really crowded, with an astonishing number of message brokers and brokerless options available. We are not going to talk about that; instead, we will focus on another tricky subject: message contracts. Once the producer emits a message or event, it lands in the queue/topic/channel, ready to be consumed, and it is here to stay for some time. Obviously, the producer knows what it publishes, but what about consumers? How would they know what to expect?

At this moment, many of us would scream: use schema-based serialization! And indeed, Apache Avro, Apache Thrift, Protocol Buffers, MessagePack, ... are here to address that. At the end of the day, such messages and events become part of the provider contract, along with the REST(ful) web APIs if any, and have to be communicated and evolved over time without breaking the consumers. But ... you would be surprised to know how many organizations found their nirvana in JSON and use it to pass messages and events around, throwing such blobs at consumers with no schema whatsoever! In this post we are going to look at how the consumer-driven contract testing technique could help us in such a situation.

Let us consider a simple system with two services, Order Service and Shipment Service. The Order Service publishes the messages / events to the message queue and Shipment Service consumes them from there.

Since the Order Service is implemented in Java, the events are just POJO classes, serialized into JSON before arriving at the message broker using one of the numerous libraries out there. OrderConfirmed is one such event.

public class OrderConfirmed {
    private UUID orderId;
    private UUID paymentId;
    private BigDecimal amount;
    private String street;
    private String city;
    private String state;
    private String zip;
    private String country;
}

As it often happens, the Shipment Service team was handed a sample JSON snippet, or pointed at some piece of documentation, or a reference Java class, and that is basically it. How could the Shipment Service team kick off the integration while being sure their interpretation is correct and the message data they need will not suddenly disappear? Consumer-driven contract testing to the rescue!

The Shipment Service team could (and should) start off by writing test cases against the OrderConfirmed message, embedding the knowledge they have, and our old friend the Pact framework (to be precise, Pact JVM) is the right tool for that. So what might the test case look like?

public class OrderConfirmedConsumerTest {
    private static final String PROVIDER_ID = "Order Service";
    private static final String CONSUMER_ID = "Shipment Service";
    
    @Rule
    public MessagePactProviderRule provider = new MessagePactProviderRule(this);
    private byte[] message;

    @Pact(provider = PROVIDER_ID, consumer = CONSUMER_ID)
    public MessagePact pact(MessagePactBuilder builder) {
        return builder
            .given("default")
            .expectsToReceive("an Order confirmation message")
            .withMetadata(Map.of("Content-Type", "application/json"))
            .withContent(new PactDslJsonBody()
                .uuid("orderId")
                .uuid("paymentId")
                .decimalType("amount")
                .stringType("street")
                .stringType("city")
                .stringType("state")
                .stringType("zip")
                .stringType("country"))
            .toPact();
    }

    @Test
    @PactVerification(PROVIDER_ID)
    public void test() throws Exception {
        Assert.assertNotNull(message);
    }

    public void setMessage(byte[] messageContents) {
        message = messageContents;
    }
}

It is exceptionally simple and straightforward, no boilerplate added. The test case is designed right from the JSON representation of the OrderConfirmed message. But we are only half-way through: the Shipment Service team should somehow contribute their expectations back to the Order Service, so the producer would keep track of who consumes the OrderConfirmed message and how. The Pact test harness takes care of that by generating the pact files (sets of agreements, or pacts) out of each JUnit test case into the 'target/pacts' folder. Below is an example of the generated Shipment Service-Order Service.json pact file after running the OrderConfirmedConsumerTest test suite.

{
  "consumer": {
    "name": "Shipment Service"
  },
  "provider": {
    "name": "Order Service"
  },
  "messages": [
    {
      "description": "an Order confirmation message",
      "metaData": {
        "contentType": "application/json"
      },
      "contents": {
        "zip": "string",
        "country": "string",
        "amount": 100,
        "orderId": "e2490de5-5bd3-43d5-b7c4-526e33f71304",
        "city": "string",
        "paymentId": "e2490de5-5bd3-43d5-b7c4-526e33f71304",
        "street": "string",
        "state": "string"
      },
      "providerStates": [
        {
          "name": "default"
        }
      ],
      "matchingRules": {
        "body": {
          "$.orderId": {
            "matchers": [
              {
                "match": "regex",
                "regex": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
              }
            ],
            "combine": "AND"
          },
          "$.paymentId": {
            "matchers": [
              {
                "match": "regex",
                "regex": "[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}"
              }
            ],
            "combine": "AND"
          },
          "$.amount": {
            "matchers": [
              {
                "match": "decimal"
              }
            ],
            "combine": "AND"
          },
          "$.street": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          },
          "$.city": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          },
          "$.state": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          },
          "$.zip": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          },
          "$.country": {
            "matchers": [
              {
                "match": "type"
              }
            ],
            "combine": "AND"
          }
        }
      }
    }
  ],
  "metadata": {
    "pactSpecification": {
      "version": "3.0.0"
    },
    "pact-jvm": {
      "version": "4.0.2"
    }
  }
}

The next step for the Shipment Service team is to share this pact file with the Order Service team, so those folks could run the provider-side Pact verifications as part of their test suites.

@RunWith(PactRunner.class)
@Provider(OrderServicePactsTest.PROVIDER_ID)
@PactFolder("pacts") 
public class OrderServicePactsTest {
    public static final String PROVIDER_ID = "Order Service";

    @TestTarget
    public final Target target = new AmqpTarget();
    private ObjectMapper objectMapper;
    
    @Before
    public void setUp() {
        objectMapper = new ObjectMapper();
    }

    @State("default")
    public void toDefaultState() {
    }
    
    @PactVerifyProvider("an Order confirmation message")
    public String verifyOrderConfirmed() throws JsonProcessingException {
        final OrderConfirmed order = new OrderConfirmed();
        
        order.setOrderId(UUID.randomUUID());
        order.setPaymentId(UUID.randomUUID());
        order.setAmount(new BigDecimal("102.33"));
        order.setStreet("1203 Westminster Blvd");
        order.setCity("Westminster");
        order.setCountry("USA");
        order.setState("MI");
        order.setZip("92239");

        return objectMapper.writeValueAsString(order);
    }
}

The test harness picks up all the pact files from the @PactFolder and runs the tests against the @TestTarget; in this case we are wiring in AmqpTarget, provided out of the box, but you could easily plug in your own specific target.

And this is basically it! The consumer (Shipment Service) has its expectations expressed in test cases and shared with the producer (Order Service) in the shape of pact files. The producer has its own set of tests to make sure its model matches the consumers' view. Both sides could continue to evolve independently, and trust each other, as long as the pacts are not denounced (hopefully, never).

To be fair, Pact is not the only choice for doing consumer-driven contract testing; in an upcoming post (already in the works) we are going to talk about yet another excellent option, Spring Cloud Contract.

As for today, the complete project sources are available on GitHub.

Tuesday, February 20, 2018

Run away from 'null' checks feast: doing PATCH properly with JSON Patch (RFC-6902)

Today we are going to have a conversation about REST(ful) services and APIs, more precisely, around one peculiar subject many experienced developers struggle with. To put things into perspective, we are going to talk about web APIs, where the REST(ful) principles adhere to the HTTP protocol, heavily exploit the semantics of HTTP methods, and (usually, but not necessarily) use JSON to represent the state.

One particular HTTP method stands out, and although its meaning sounds pretty straightforward, the implementation is far from that. Yes, we are looking at you, PATCH. So what is the problem, really? It is just an update, right? Yes, in essence the semantics of the PATCH method in the context of HTTP-based REST(ful) web services is a partial update of the resource. Now, how would you do that, Java developer? Here is where the fun begins.

Let us go over a very simple example of a book management API, modeled using the latest JSR 370: Java API for RESTful Web Services (JAX-RS 2.1) specification (which finally includes the @PATCH annotation!) and the terrific Apache CXF framework. Our resource is just a very simplistic Book class.

public class Book {
    private String title;
    private Collection<String> authors;
    private String isbn;
}

How would you implement the partial update using the PATCH method? Sadly, the brute-force solution, the null feast, is the clear winner here.

@PATCH
@Path("/{isbn}")
@Consumes(MediaType.APPLICATION_JSON)
public void update(@PathParam("isbn") String isbn, Book book) {
    final Book existing = bookService.find(isbn).orElseThrow(NotFoundException::new);
        
    if (book.getTitle() != null) {
        existing.setTitle(book.getTitle());
    }

    if (book.getAuthors() != null) {
        existing.setAuthors(book.getAuthors());
    }
        
    // And here it goes on and on ...
    // ...
}

In a nutshell, this is a null-guarded PUT clone. Probably, someone could claim that it kind of works and declare victory here. But hopefully, for the majority of us, this approach clearly has a lot of flaws and should never be taken. Alternatives? Yes, absolutely: RFC-6902: JSON Patch, not an official standard just yet, but it is getting there.

RFC-6902: JSON Patch drastically changes the game by expressing a sequence of operations to apply to a JSON document. To illustrate the idea in action, let us start with a simple example of changing the book's title, described in terms of the desired outcome.

{ "op": "replace", "path": "/title", "value": "..." }

Looks clean, what about adding the authors? Easy ...

{ "op": "add", "path": "/authors", "value": ["...", "..."] }

Awesome, sold! But ... implementation-wise it seems to require quite a lot of work, doesn't it? Not really, if we rely on the latest and greatest JSR 374: Java API for JSON Processing 1.1, which fully supports RFC-6902: JSON Patch. Armed with the right tools, this time let us do it right.


<dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>javax.json</artifactId>
    <version>1.1.2</version>
</dependency>

Interestingly, not many are aware that Apache CXF, and in general any JAX-RS-compliant framework, closely integrates with JSON-P and supports its basic data types. In the case of Apache CXF, it is just a matter of adding the cxf-rt-rs-extension-providers module dependency:


<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-extension-providers</artifactId>
    <version>3.2.2</version>
</dependency>

And registering JsrJsonpProvider with your server factory bean, for example:

@Configuration
public class AppConfig {
    @Bean
    public Server rsServer(Bus bus, BookRestService service) {
        JAXRSServerFactoryBean endpoint = new JAXRSServerFactoryBean();
        endpoint.setBus(bus);
        endpoint.setAddress("/");
        endpoint.setServiceBean(service);
        endpoint.setProvider(new JsrJsonpProvider());
        return endpoint.create();
    }
}

With all the pieces wired together, our PATCH operation could be implemented using JSR 374: Java API for JSON Processing 1.1 alone, in just a few lines:

@Service
@Path("/catalog")
public class BookRestService {
    @Inject private BookService bookService;
    @Inject private BookConverter converter;

    @PATCH
    @Path("/{isbn}")
    @Consumes(MediaType.APPLICATION_JSON)
    public void apply(@PathParam("isbn") String isbn, JsonArray operations) {
        final Book book = bookService.find(isbn).orElseThrow(NotFoundException::new);
        final JsonPatch patch = Json.createPatch(operations);
        final JsonObject result = patch.apply(converter.toJson(book));
        bookService.update(isbn, converter.fromJson(result));
    }
}

The BookConverter performs the conversion between the Book class and its JSON representation (and vice versa), which we are doing by hand to illustrate other capabilities that JSR 374: Java API for JSON Processing 1.1 provides.

@Component
public class BookConverter {
    public Book fromJson(JsonObject json) {
        final Book book = new Book();
        book.setTitle(json.getString("title"));
        book.setIsbn(json.getString("isbn"));
        book.setAuthors(
            json
                .getJsonArray("authors")
                .stream()
                .map(value -> (JsonString)value)
                .map(JsonString::getString)
                .collect(Collectors.toList()));
        return book;
    }

    public JsonObject toJson(Book book) {
        return Json
            .createObjectBuilder()
            .add("title", book.getTitle())
            .add("isbn", book.getIsbn())
            .add("authors", Json.createArrayBuilder(book.getAuthors()))
            .build();
    }
}

To finish up, let us wrap this simple JAX-RS 2.1 web API into the beautiful Spring Boot envelope.

@SpringBootApplication
public class BookServerStarter {    
    public static void main(String[] args) {
        SpringApplication.run(BookServerStarter.class, args);
    }
}

And run it.

mvn spring-boot:run

To conclude the discussion, let us play a bit with a more realistic example by deliberately adding an incomplete book into our catalog.

$ curl -i -X POST http://localhost:19091/services/catalog -H "Content-Type: application/json" -d '{
       "title": "Microservice Architecture",
       "isbn": "978-1491956250",
       "authors": [
           "Ronnie Mitra",
           "Matt McLarty"
       ]
   }'

HTTP/1.1 201 Created
Date: Tue, 20 Feb 2018 02:30:18 GMT
Location: http://localhost:19091/services/catalog/978-1491956250
Content-Length: 0

There are a couple of inaccuracies we would like to fix in this book description, namely to set the complete title, "Microservice Architecture: Aligning Principles, Practices, and Culture", and to include the missing co-authors, Irakli Nadareishvili and Mike Amundsen. With the API we have developed a moment ago, it is a no-brainer.

$ curl -i -X PATCH http://localhost:19091/services/catalog/978-1491956250 -H "Content-Type: application/json" -d '[
       { "op": "add", "path": "/authors/0", "value": "Irakli Nadareishvili" },
       { "op": "add", "path": "/authors/-", "value": "Mike Amundsen" },
       { "op": "replace", "path": "/title", "value": "Microservice Architecture: Aligning Principles, Practices, and Culture" }
   ]'

HTTP/1.1 204 No Content
Date: Tue, 20 Feb 2018 02:38:48 GMT

The path reference of the first two operations may look a bit confusing, but fear not, let us clarify that. Because authors is a collection (or, in terms of JSON data types, an array), we could use the RFC-6902: JSON Patch array index notation to specify exactly where we would like the new element to be inserted. The first operation uses index '0' to denote the head position, while the second one uses the '-' placeholder to simply say "add to the end of the collection". If we retrieve the book right after the update, we should see our modifications applied exactly as we asked.

$ curl http://localhost:19091/services/catalog/978-1491956250

{
    "title": "Microservice Architecture: Aligning Principles, Practices, and Culture",
    "isbn": "978-1491956250",
    "authors": [
        "Irakli Nadareishvili",
        "Ronnie Mitra",
        "Matt McLarty",
        "Mike Amundsen"
    ]
}

Clean, simple and powerful. To be fair, there is a price to pay in the form of additional JSON manipulations (in order to apply the patch), but is it worth the effort? I believe it is ...

Next time you are going to design new shiny REST(ful) web APIs, please seriously consider RFC-6902: JSON Patch to back the PATCH implementation of your resources. I believe closer integration with JAX-RS is also coming (if not there yet) to directly support the JsonPatch class and its family.

And last, but not least, in this post we have touched on the server-side implementation only, but JSR 374: Java API for JSON Processing 1.1 includes convenient client-side scaffolding as well, giving full-fledged programmatic control over the patches.

final JsonPatch patch = Json.createPatchBuilder()
    .add("/authors/0", "Irakli Nadareishvili")
    .add("/authors/-", "Mike Amundsen")
    .replace("/title", "Microservice Architecture: Aligning Principles, Practices, and Culture")
    .build();
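
Applying the patch programmatically is then a one-liner, identical to what we did on the server side (book here standing for a hypothetical JsonObject representation of the resource):

final JsonObject patched = patch.apply(book);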

The complete project sources are available on GitHub.

Saturday, August 9, 2014

OSGi: the gateway into micro-services architecture

The terms "modularity" and "microservices architecture" pop up quite often these days in context of building scalable, reliable distributed systems. Java platform itself is known to be weak with regards to modularity (Java 9 is going to address that by delivering project Jigsaw), giving a chance to frameworks like OSGi and JBoss Modules to emerge.

When I first heard about OSGi back in 2007, I was truly excited about all the advantages Java applications might gain by being built on top of it. But very quickly frustration took the place of excitement: no tooling support, a very limited set of compatible libraries and frameworks, a quite unstable and hard-to-troubleshoot runtime. Clearly, it was not ready to be used by the average Java developer, and as such, I had to put it on the shelf. Over the years, OSGi has matured a lot and gained widespread community support.

The curious reader may ask: what are the benefits of using modules, and OSGi in particular? To name just a few problems it helps to solve:

  • explicit (and versioned) dependency management: modules declare what they need (and optionally the version ranges)
  • small footprint: modules are not packaged with all their dependencies
  • easy release: modules can be developed and released independently
  • hot redeploy: individual modules may be redeployed without affecting others

In today's post we are going to take a 10,000-foot view of the state of the art in building modular Java applications using OSGi. Leaving aside discussions of how good or bad OSGi is, we are going to build an example application consisting of the following modules:

  • data access module
  • business services module
  • REST services module

Apache OpenJPA 2.3.0 / JPA 2.0 for data access (unfortunately, JPA 2.1 is not yet supported by the OSGi implementation of our choice) and Apache CXF 3.0.1 / JAX-RS 2.0 for the REST layer are the two main building blocks of the application. I found Christian Schneider's blog, Liquid Reality, to be an invaluable source of information about OSGi (as well as many other topics).

In the OSGi world, modules are called bundles. Bundles manifest their dependencies (import packages) and the packages they expose (export packages) so other bundles are able to use them. Apache Maven supports this packaging model as well. The bundles are managed by the OSGi runtime, or container, which in our case is going to be Apache Karaf 3.0.1 (actually, it is the single thing we need to download and unpack).
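
To make the import/export idea concrete, these declarations end up as plain headers in the bundle's META-INF/MANIFEST.MF; a hypothetical sketch for our business services module could look like this:

Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.services
Bundle-Version: 1.0.0
Export-Package: com.example.services;version="1.0.0"
Import-Package: com.example.data,com.example.data.model,org.osgi.service.log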

Let me stop talking and show some code instead. We are going to start from the top (REST) and go all the way down to the bottom (data access), as it is easier to follow. Our PeopleRestService is a typical example of a JAX-RS 2.0 service implementation:

package com.example.jaxrs;

import java.util.Collection;

import javax.ws.rs.DELETE;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

import com.example.data.model.Person;
import com.example.services.PeopleService;

@Path( "/people" )
public class PeopleRestService {
    private PeopleService peopleService;
        
    @Produces( { MediaType.APPLICATION_JSON } )
    @GET
    public Collection< Person > getPeople( 
            @QueryParam( "page") @DefaultValue( "1" ) final int page ) {
        return peopleService.getPeople( page, 5 );
    }

    @Produces( { MediaType.APPLICATION_JSON } )
    @Path( "/{email}" )
    @GET
    public Person getPerson( @PathParam( "email" ) final String email ) {
        return peopleService.getByEmail( email );
    }

    @Produces( { MediaType.APPLICATION_JSON  } )
    @POST
    public Response addPerson( @Context final UriInfo uriInfo,
            @FormParam( "email" ) final String email, 
            @FormParam( "firstName" ) final String firstName, 
            @FormParam( "lastName" ) final String lastName ) {
        
        peopleService.addPerson( email, firstName, lastName );
        return Response.created( uriInfo
            .getRequestUriBuilder()
            .path( email )
            .build() ).build();
    }
    
    @Produces( { MediaType.APPLICATION_JSON  } )
    @Path( "/{email}" )
    @PUT
    public Person updatePerson( @PathParam( "email" ) final String email, 
            @FormParam( "firstName" ) final String firstName, 
            @FormParam( "lastName" )  final String lastName ) {
        
        final Person person = peopleService.getByEmail( email );
        
        if( firstName != null ) {
            person.setFirstName( firstName );
        }
        
        if( lastName != null ) {
            person.setLastName( lastName );
        }

        return person;              
    }
    
    @Path( "/{email}" )
    @DELETE
    public Response deletePerson( @PathParam( "email" ) final String email ) {
        peopleService.removePerson( email );
        return Response.ok().build();
    }
    
    public void setPeopleService( final PeopleService peopleService ) {
        this.peopleService = peopleService;
    }
}

As we can see, there is nothing here telling us about OSGi. The only dependency is the PeopleService, which somehow should be injected into the PeopleRestService. How? Typically, OSGi applications use Blueprint as the dependency injection framework, very similar to our old buddy, XML-based Spring configuration. It should be packaged along with the application inside the OSGI-INF/blueprint folder. Here is a blueprint example for our REST module, built on top of Apache CXF 3.0.1:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jaxrs="http://cxf.apache.org/blueprint/jaxrs"
    xmlns:cxf="http://cxf.apache.org/blueprint/core"
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0
        http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://cxf.apache.org/blueprint/jaxws
        http://cxf.apache.org/schemas/blueprint/jaxws.xsd
        http://cxf.apache.org/blueprint/jaxrs
        http://cxf.apache.org/schemas/blueprint/jaxrs.xsd
        http://cxf.apache.org/blueprint/core
        http://cxf.apache.org/schemas/blueprint/core.xsd">

    <cxf:bus id="bus">
        <cxf:features>
            <cxf:logging/>
        </cxf:features>       
    </cxf:bus>

    <jaxrs:server address="/api" id="api">
        <jaxrs:serviceBeans>
            <ref component-id="peopleRestService"/>
        </jaxrs:serviceBeans>
        <jaxrs:providers>
            <bean class="com.fasterxml.jackson.jaxrs.json.JacksonJsonProvider" />
        </jaxrs:providers>
    </jaxrs:server>
    
    <!-- Implementation of the rest service -->
    <bean id="peopleRestService" class="com.example.jaxrs.PeopleRestService">
        <property name="peopleService" ref="peopleService"/>
    </bean>         
    
    <reference id="peopleService" interface="com.example.services.PeopleService" />
</blueprint>

Very small and simple: basically, the configuration just states that in order for the module to work, a reference to com.example.services.PeopleService should be provided (effectively, by the OSGi container). To see how this is going to happen, let us take a look at another module which exposes services. It contains only one interface, PeopleService:

package com.example.services;

import java.util.Collection;

import com.example.data.model.Person;

public interface PeopleService {
    Collection< Person > getPeople( int page, int pageSize );
    Person getByEmail( final String email );
    Person addPerson( final String email, final String firstName, final String lastName );
    void removePerson( final String email );
}

And also provides its implementation, the PeopleServiceImpl class:

package com.example.services.impl;

import java.util.Collection;

import org.osgi.service.log.LogService;

import com.example.data.PeopleDao;
import com.example.data.model.Person;
import com.example.services.PeopleService;

public class PeopleServiceImpl implements PeopleService {
    private PeopleDao peopleDao;
    private LogService logService;
 
    @Override
    public Collection< Person > getPeople( final int page, final int pageSize ) {        
        logService.log( LogService.LOG_INFO, "Getting all people" );
        return peopleDao.findAll( page, pageSize );
    }

    @Override
    public Person getByEmail( final String email ) {
        logService.log( LogService.LOG_INFO, 
            "Looking for a person with e-mail: " + email );
        return peopleDao.find( email );        
    }

    @Override
    public Person addPerson( final String email, final String firstName, 
            final String lastName ) {
        logService.log( LogService.LOG_INFO, 
            "Adding new person with e-mail: " + email );
        return peopleDao.save( new Person( email, firstName, lastName ) );
    }

    @Override
    public void removePerson( final String email ) {
        logService.log( LogService.LOG_INFO, 
            "Removing a person with e-mail: " + email );
        peopleDao.delete( email );
    }
    
    public void setPeopleDao( final PeopleDao peopleDao ) {
        this.peopleDao = peopleDao;
    }
    
    public void setLogService( final LogService logService ) {
        this.logService = logService;
    }
}

And this time again, a very small and clean implementation with two injectable dependencies, org.osgi.service.log.LogService and com.example.data.PeopleDao. Its blueprint configuration, located inside the OSGI-INF/blueprint folder, looks quite compact as well:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"  
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"         
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0 
        http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">
        
    <service ref="peopleService" interface="com.example.services.PeopleService" />        
    <bean id="peopleService" class="com.example.services.impl.PeopleServiceImpl">
        <property name="peopleDao" ref="peopleDao" />    
        <property name="logService" ref="logService" />
    </bean>
    
    <reference id="peopleDao" interface="com.example.data.PeopleDao" />
    <reference id="logService" interface="org.osgi.service.log.LogService" />
</blueprint>

The references to PeopleDao and LogService are expected to be provided by the OSGi container at runtime. The PeopleService implementation, however, is exposed as a service itself, so the OSGi container will be able to inject it into PeopleRestService once its bundle is activated.

The last piece of the puzzle, the data access module, is a bit more complicated: it contains the persistence configuration (META-INF/persistence.xml) and basically depends on the JPA 2.0 capabilities of the OSGi container. The persistence.xml is quite basic:

<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    version="2.0">
 
    <persistence-unit name="peopleDb" transaction-type="JTA">
        <jta-data-source>
        osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=peopleDb)
        </jta-data-source>       
        <class>com.example.data.model.Person</class>
        
        <properties>
            <property name="openjpa.jdbc.SynchronizeMappings" 
                value="buildSchema"/>         
        </properties>        
    </persistence-unit>
</persistence>
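
The Person entity itself is not listed in this post; a minimal sketch, assuming standard JPA annotations and inferring the field and table names from the DAO code and the shell query shown later, could look like this:

package com.example.data.model;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// A minimal sketch of the Person entity (annotations and names are assumptions):
// the e-mail plays the role of the primary key, since PeopleDaoImpl calls
// entityManager.find( Person.class, email ), and the table name matches the
// "select * from people" query used later in the Karaf shell.
@Entity
@Table( name = "people" )
public class Person {
    @Id private String email;
    @Column private String firstName;
    @Column private String lastName;

    public Person() {
    }

    public Person( final String email, final String firstName, final String lastName ) {
        this.email = email;
        this.firstName = firstName;
        this.lastName = lastName;
    }

    // getters and setters for email, firstName and lastName omitted for brevity
}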

Similarly to the service module, there is an interface PeopleDao exposed:

package com.example.data;

import java.util.Collection;

import com.example.data.model.Person;

public interface PeopleDao {
    Person save( final Person person );
    Person find( final String email );
    Collection< Person > findAll( final int page, final int pageSize );
    void delete( final String email ); 
}

With its implementation PeopleDaoImpl:

package com.example.data.impl;

import java.util.Collection;

import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;

import com.example.data.PeopleDao;
import com.example.data.model.Person;

public class PeopleDaoImpl implements PeopleDao {
    private EntityManager entityManager;
 
    @Override
    public Person save( final Person person ) {
        entityManager.persist( person );
        return person;
    }
 
    @Override
    public Person find( final String email ) {
        return entityManager.find( Person.class, email );
    }
 
    public void setEntityManager( final EntityManager entityManager ) {
        this.entityManager = entityManager;
    }
 
    @Override
    public Collection< Person > findAll( final int page, final int pageSize ) {
        final CriteriaBuilder cb = entityManager.getCriteriaBuilder();
     
        final CriteriaQuery< Person > query = cb.createQuery( Person.class );
        query.from( Person.class );
     
        return entityManager
            .createQuery( query )
            .setFirstResult(( page - 1 ) * pageSize )
            .setMaxResults( pageSize ) 
            .getResultList();
    }
 
    @Override
    public void delete( final String email ) {
        entityManager.remove( find( email ) );
    }
}

Please notice that although we are performing data manipulations, there is no mention of transactions, nor are there any explicit calls to the entity manager's transaction API. We are going to use the declarative approach to transactions, as the blueprint configuration supports that (the location is unchanged, the OSGI-INF/blueprint folder):

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"  
    xmlns:jpa="http://aries.apache.org/xmlns/jpa/v1.1.0"
    xmlns:tx="http://aries.apache.org/xmlns/transactions/v1.0.0" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"         
    xsi:schemaLocation="
        http://www.osgi.org/xmlns/blueprint/v1.0.0 
        http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">
    
    <service ref="peopleDao" interface="com.example.data.PeopleDao" />
    <bean id="peopleDao" class="com.example.data.impl.PeopleDaoImpl">
     <jpa:context unitname="peopleDb" property="entityManager" />
     <tx:transaction method="*" value="Required"/>
    </bean>
    
    <bean id="dataSource" class="org.hsqldb.jdbc.JDBCDataSource">
       <property name="url" value="jdbc:hsqldb:mem:peopleDb"/>
    </bean>
    
    <service ref="dataSource" interface="javax.sql.DataSource"> 
        <service-properties> 
            <entry key="osgi.jndi.service.name" value="peopleDb" /> 
        </service-properties> 
    </service>     
</blueprint>

One thing to keep in mind: the application doesn't need to create the JPA entity manager itself: the OSGi runtime is able to do that and inject it everywhere it is required, driven by the jpa:context declarations. Similarly, tx:transaction instructs the runtime to wrap the selected service methods inside a transaction.
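
To see what the declarative approach saves us from, here is a hypothetical sketch of what save() would look like if we had to drive the JTA transaction by hand (the UserTransaction reference, the class name and the exception handling are assumptions, not part of the actual example):

package com.example.data.impl;

import javax.persistence.EntityManager;
import javax.transaction.UserTransaction;

import com.example.data.model.Person;

// Hypothetical: a variant of PeopleDaoImpl.save() with manual JTA transaction
// management, instead of relying on the declarative <tx:transaction> support.
public class ManualTxPeopleDao {
    private EntityManager entityManager;
    private UserTransaction userTransaction; // would need its own blueprint <reference>

    public Person save( final Person person ) {
        try {
            userTransaction.begin();
            entityManager.joinTransaction();
            entityManager.persist( person );
            userTransaction.commit();
            return person;
        } catch( final Exception ex ) {
            try {
                userTransaction.rollback();
            } catch( final Exception ignored ) {
                // nothing reasonable to do at this point
            }
            throw new RuntimeException( ex );
        }
    }
}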

Now that the last service, PeopleDao, is exposed, we are ready to deploy our modules with Apache Karaf 3.0.1. It is quite easy to do in three steps:

  • run the Apache Karaf 3.0.1 container
    bin/karaf (or bin\karaf.bat on Windows)
    
  • execute the following commands from the Apache Karaf 3.0.1 shell:
    feature:repo-add cxf 3.0.1
    feature:install http cxf jpa openjpa transaction jndi jdbc
    install -s mvn:org.hsqldb/hsqldb/2.3.2
    install -s mvn:com.fasterxml.jackson.core/jackson-core/2.4.0
    install -s mvn:com.fasterxml.jackson.core/jackson-annotations/2.4.0
    install -s mvn:com.fasterxml.jackson.core/jackson-databind/2.4.0
    install -s mvn:com.fasterxml.jackson.jaxrs/jackson-jaxrs-base/2.4.0
    install -s mvn:com.fasterxml.jackson.jaxrs/jackson-jaxrs-json-provider/2.4.0
    
  • build our modules and copy them into Apache Karaf 3.0.1's deploy folder (while the container is still running):
    mvn clean package
    cp module*/target/*jar apache-karaf-3.0.1/deploy/
    
When you run the list command in the Apache Karaf 3.0.1 shell, you should see the list of all activated bundles (modules), where module-service, module-jax-rs and module-data correspond to the ones we have been developing.

By default, all our Apache CXF 3.0.1 services will be available under the base URL http://localhost:8181/cxf/api/. It is easy to check by executing the cxf:list-endpoints -f command in the Apache Karaf 3.0.1 shell.

Let us make sure our REST layer works as expected by sending a couple of HTTP requests. First, let us create a new person:

curl http://localhost:8181/cxf/api/people -iX POST -d "[email protected]"

HTTP/1.1 201 Created
Content-Length: 0
Date: Sat, 09 Aug 2014 15:26:17 GMT
Location: http://localhost:8181/cxf/api/people/[email protected]
Server: Jetty(8.1.14.v20131031)

And verify that the person has been created successfully:

curl -i http://localhost:8181/cxf/api/people

HTTP/1.1 200 OK
Content-Type: application/json
Date: Sat, 09 Aug 2014 15:28:20 GMT
Transfer-Encoding: chunked
Server: Jetty(8.1.14.v20131031)

[{"email":"[email protected]","firstName":"Tom","lastName":"Knocker"}]

It would also be nice to check that the database has been populated with the person. With the Apache Karaf 3.0.1 shell it is very simple to do by executing just two commands: jdbc:datasources and jdbc:query peopleDb "select * from people".

Awesome! I hope this quite introductory blog post has opened up yet another piece of interesting technology you may use for developing robust, scalable, modular and manageable software. We have not touched on many, many things, but these are here for you to discover. The complete source code is available on GitHub.

Note to Hibernate 4.2.x / 4.3.x users: unfortunately, in the current release of Apache Karaf 3.0.1, Hibernate 4.3.x does not work properly at all (as JPA 2.1 is not yet supported) and, although I have managed to run with Hibernate 4.2.x, the container often refused to resolve the JPA-related dependencies.

Saturday, November 30, 2013

Java WebSockets (JSR-356) on Jetty 9.1

Jetty 9.1 is finally released, bringing Java WebSockets (JSR-356) to non-EE environments. It's awesome news, and today's post will be about using this great new API along with the Spring Framework.

JSR-356 defines a concise, annotation-based model that allows modern Java web applications to easily create bidirectional communication channels using the WebSockets API. It covers not only the server side, but the client side as well, making this API really simple to use everywhere.

Let's get started! Our goal is to build a WebSockets server which accepts messages from the clients and broadcasts them to all other clients currently connected. To begin with, let's define the message format which server and client will exchange, as this simple Message class. We could limit ourselves to something like a plain String, but I would like to introduce you to the power of another new API - the Java API for JSON Processing (JSR-353).

package com.example.services;

public class Message {
    private String username;
    private String message;
 
    public Message() {
    }
 
    public Message( final String username, final String message ) {
        this.username = username;
        this.message = message;
    }
 
    public String getMessage() {
        return message;
    }
 
    public String getUsername() {
        return username;
    }

    public void setMessage( final String message ) {
        this.message = message;
    }
 
    public void setUsername( final String username ) {
        this.username = username;
    }
}

To separate the declarations related to the server and the client, JSR-356 defines two basic annotations: @ServerEndpoint and @ClientEndpoint respectively. Our client endpoint, let's call it BroadcastClientEndpoint, will simply listen for messages the server sends:

package com.example.services;

import java.io.IOException;
import java.util.logging.Logger;

import javax.websocket.ClientEndpoint;
import javax.websocket.EncodeException;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;

@ClientEndpoint
public class BroadcastClientEndpoint {
    private static final Logger log = Logger.getLogger( 
        BroadcastClientEndpoint.class.getName() );
 
    @OnOpen
    public void onOpen( final Session session ) throws IOException, EncodeException  {
        session.getBasicRemote().sendObject( new Message( "Client", "Hello!" ) );
    }

    @OnMessage
    public void onMessage( final Message message ) {
        log.info( String.format( "Received message '%s' from '%s'",
            message.getMessage(), message.getUsername() ) );
    }
}

That's literally it! Very clean, self-explanatory piece of code: @OnOpen is called when the client has connected to the server, and @OnMessage is called every time the server sends a message to the client. Yes, it's very simple, but there is a caveat: a JSR-356 implementation can handle simple objects out of the box, but not complex ones like Message. To manage that, JSR-356 introduces the concept of encoders and decoders.

We all love JSON, so why don't we define our own JSON encoder and decoder? It's an easy task which the Java API for JSON Processing (JSR-353) can handle for us. To create an encoder, you only need to implement Encoder.Text< Message > and basically serialize your object to some string, in our case to a JSON string, using JsonObjectBuilder.

package com.example.services;

import javax.json.Json;
import javax.websocket.EncodeException;
import javax.websocket.Encoder;
import javax.websocket.EndpointConfig;

public class Message {
    public static class MessageEncoder implements Encoder.Text< Message > {
        @Override
        public void init( final EndpointConfig config ) {
        }
  
        @Override
        public String encode( final Message message ) throws EncodeException {
            return Json.createObjectBuilder()
                .add( "username", message.getUsername() )
                .add( "message", message.getMessage() )
                .build()
                .toString();
        }
  
        @Override
        public void destroy() {
        }
    }
}

For the decoder part, everything looks very similar: we have to implement Decoder.Text< Message > and deserialize our object from a string, this time using JsonReader.

package com.example.services;

import java.io.StringReader;
import java.util.Collections;

import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;
import javax.json.JsonReaderFactory;
import javax.websocket.DecodeException;
import javax.websocket.Decoder;
import javax.websocket.EndpointConfig;

public class Message {
    public static class MessageDecoder implements Decoder.Text< Message > {
        private JsonReaderFactory factory = Json.createReaderFactory( Collections.< String, Object >emptyMap() );
  
        @Override
        public void init( final EndpointConfig config ) {
        }
  
        @Override
        public Message decode( final String str ) throws DecodeException {
            final Message message = new Message();
   
            try( final JsonReader reader = factory.createReader( new StringReader( str ) ) ) {
                final JsonObject json = reader.readObject();
                message.setUsername( json.getString( "username" ) );
                message.setMessage( json.getString( "message" ) );
            }
   
            return message;
        }
  
        @Override
        public boolean willDecode( final String str ) {
            return true;
        }
  
        @Override
        public void destroy() {
        }
    }
}
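
Just to illustrate how the two fit together, here is a small, hypothetical round-trip check (the class name is made up; it assumes a JSR-353 implementation is available on the classpath):

package com.example.ws;

import com.example.services.Message;

// Hypothetical sanity check: encode a Message to JSON and decode it back.
public class MessageCodecCheck {
    public static void main( final String[] args ) throws Exception {
        final Message original = new Message( "Client", "Hello!" );

        // Encode to a JSON string: {"username":"Client","message":"Hello!"}
        final String json = new Message.MessageEncoder().encode( original );

        // ... and decode it back into a Message instance
        final Message decoded = new Message.MessageDecoder().decode( json );
        System.out.println( decoded.getUsername() + ": " + decoded.getMessage() );
    }
}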

And as a final step, we need to tell the client (and the server, as they share the same decoders and encoders) that we have an encoder and a decoder for our messages. The easiest way to do that is just by declaring them as part of the @ServerEndpoint and @ClientEndpoint annotations.


import com.example.services.Message.MessageDecoder;
import com.example.services.Message.MessageEncoder;

@ClientEndpoint( encoders = { MessageEncoder.class }, decoders = { MessageDecoder.class } )
public class BroadcastClientEndpoint {
}

To make the client's example complete, we need some way to connect to the server using BroadcastClientEndpoint and actually exchange messages. The ClientStarter class finalizes the picture:

package com.example.ws;

import java.net.URI;
import java.util.UUID;

import javax.websocket.ContainerProvider;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

import org.eclipse.jetty.websocket.jsr356.ClientContainer;

import com.example.services.BroadcastClientEndpoint;
import com.example.services.Message;

public class ClientStarter {
    public static void main( final String[] args ) throws Exception {
        final String client = UUID.randomUUID().toString().substring( 0, 8 );
  
        final WebSocketContainer container = ContainerProvider.getWebSocketContainer();    
        final String uri = "ws://localhost:8080/broadcast";  
  
        try( Session session = container.connectToServer( BroadcastClientEndpoint.class, URI.create( uri ) ) ) {
            for( int i = 1; i <= 10; ++i ) {
                session.getBasicRemote().sendObject( new Message( client, "Message #" + i ) );
                Thread.sleep( 1000 );
            }
        }
  
        // Application doesn't exit if container's threads are still running
        ( ( ClientContainer )container ).stop();
    }
}

Just a couple of comments on what this code does: we are connecting to the WebSockets endpoint at ws://localhost:8080/broadcast, randomly picking a client name (from a UUID) and generating 10 messages, each with a 1 second delay (just to be sure we have time to receive them all back).

The server part doesn't look very different and at this point could be understood without any additional comments (except maybe the fact that the server just broadcasts every message it receives to all connected clients). Important to mention here: a new instance of the server endpoint is created every time a new client connects to it (that's why the sessions collection is static); it's the default behavior and, as we will see below, could be easily changed.

package com.example.services;

import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import javax.websocket.EncodeException;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

import com.example.services.Message.MessageDecoder;
import com.example.services.Message.MessageEncoder;

@ServerEndpoint( 
    value = "/broadcast", 
    encoders = { MessageEncoder.class }, 
    decoders = { MessageDecoder.class } 
) 
public class BroadcastServerEndpoint {
    private static final Set< Session > sessions = 
        Collections.synchronizedSet( new HashSet< Session >() ); 
   
    @OnOpen
    public void onOpen( final Session session ) {
        sessions.add( session );
    }

    @OnClose
    public void onClose( final Session session ) {
        sessions.remove( session );
    }

    @OnMessage
    public void onMessage( final Message message, final Session client ) 
            throws IOException, EncodeException {
        // Iterating over a synchronized collection still requires explicit
        // synchronization to stay safe while clients connect and disconnect
        synchronized( sessions ) {
            for( final Session session: sessions ) {
                session.getBasicRemote().sendObject( message );
            }
        }
    }
}

In order for this endpoint to be available for connections, we should start the WebSockets container and register the endpoint inside it. As always, Jetty 9.1 is runnable in embedded mode effortlessly:

package com.example.ws;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.websocket.jsr356.server.deploy.WebSocketServerContainerInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import com.example.config.AppConfig;

public class ServerStarter  {
    public static void main( String[] args ) throws Exception {
        Server server = new Server( 8080 );
        
        // Create the 'root' Spring application context
        final ServletHolder servletHolder = new ServletHolder( new DefaultServlet() );
        final ServletContextHandler context = new ServletContextHandler();

        context.setContextPath( "/" );
        context.addServlet( servletHolder, "/*" );
        context.addEventListener( new ContextLoaderListener() );   
        context.setInitParameter( "contextClass", AnnotationConfigWebApplicationContext.class.getName() );
        context.setInitParameter( "contextConfigLocation", AppConfig.class.getName() );
   
        server.setHandler( context );
        WebSocketServerContainerInitializer.configureContext( context );        
        
        server.start();
        server.join(); 
    }
}

The most important part of the snippet above is WebSocketServerContainerInitializer.configureContext: it actually creates the instance of the WebSockets container. Because we haven't added any endpoints yet, the container basically sits there and does nothing. The Spring Framework and the AppConfig configuration class will do this last bit of wiring for us.

package com.example.config;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import javax.websocket.DeploymentException;
import javax.websocket.server.ServerContainer;
import javax.websocket.server.ServerEndpoint;
import javax.websocket.server.ServerEndpointConfig;

import org.eclipse.jetty.websocket.jsr356.server.AnnotatedServerEndpointConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.context.WebApplicationContext;

import com.example.services.BroadcastServerEndpoint;

@Configuration
public class AppConfig  {
    @Inject private WebApplicationContext context;
    private ServerContainer container;
 
    public class SpringServerEndpointConfigurator extends ServerEndpointConfig.Configurator {
        @Override
        public < T > T getEndpointInstance( Class< T > endpointClass ) 
                throws InstantiationException {
            return context.getAutowireCapableBeanFactory().createBean( endpointClass );   
        }
    }
 
    @Bean
    public ServerEndpointConfig.Configurator configurator() {
        return new SpringServerEndpointConfigurator();
    }
 
    @PostConstruct
    public void init() throws DeploymentException {
        container = ( ServerContainer )context.getServletContext().
            getAttribute( javax.websocket.server.ServerContainer.class.getName() );
  
        container.addEndpoint( 
            new AnnotatedServerEndpointConfig( 
                BroadcastServerEndpoint.class, 
                BroadcastServerEndpoint.class.getAnnotation( ServerEndpoint.class )  
            ) {
                @Override
                public Configurator getConfigurator() {
                    return configurator();
                }
            }
        );
    }  
}

As we mentioned earlier, by default the container will create a new instance of the server endpoint every time a new client connects, and it does so by calling the constructor, in our case BroadcastServerEndpoint.class.newInstance(). It might be a desired behavior, but because we are using the Spring Framework and dependency injection, such new objects are basically unmanaged beans. Thanks to the very well-thought-out (in my opinion) design of JSR-356, it's actually quite easy to provide your own way of creating endpoint instances by implementing ServerEndpointConfig.Configurator. The SpringServerEndpointConfigurator is an example of such an implementation: it creates a new managed bean every time a new endpoint instance is requested (if you want a single instance, you can create a singleton of the endpoint as a bean in AppConfig and return it all the time).
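
For example, a hypothetical singleton-flavored configurator (not part of the original example), placed next to SpringServerEndpointConfigurator inside AppConfig so it can reach the injected WebApplicationContext, could look like this:

    // Hypothetical: hands out the same Spring-managed bean for every client
    // connection (the endpoint would have to be registered as a
    // singleton-scoped bean in the application context).
    public class SingletonServerEndpointConfigurator extends ServerEndpointConfig.Configurator {
        @Override
        public < T > T getEndpointInstance( final Class< T > endpointClass )
                throws InstantiationException {
            // getBean() returns the same instance for singleton-scoped beans
            return context.getBean( endpointClass );
        }
    }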

The way we retrieve the WebSockets container is Jetty-specific: from the attribute of the context with the name "javax.websocket.server.ServerContainer" (this might change in the future). Once the container is there, we just add a new (managed!) endpoint by providing our own ServerEndpointConfig (based on the AnnotatedServerEndpointConfig which Jetty kindly provides already).

To build and run our server and clients, we just need to do the following:

mvn clean package
java -jar target/jetty-web-sockets-jsr356-0.0.1-SNAPSHOT-server.jar // run server
java -jar target/jetty-web-sockets-jsr356-0.0.1-SNAPSHOT-client.jar // run yet another client

As an example, by running the server and a couple of clients (I ran 4 of them: '392f68ef', '8e3a869d', 'ca3a06d0', '6cb82119'), you can see from the console output that each client receives all the messages from all other clients (including its own):

Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Hello!' from 'Client'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #1' from '392f68ef'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #2' from '8e3a869d'
Nov 29, 2013 9:21:29 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from 'ca3a06d0'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '6cb82119'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #2' from '392f68ef'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #3' from '8e3a869d'
Nov 29, 2013 9:21:30 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from 'ca3a06d0'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '6cb82119'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #3' from '392f68ef'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '8e3a869d'
Nov 29, 2013 9:21:31 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from 'ca3a06d0'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '6cb82119'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #4' from '392f68ef'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '8e3a869d'
Nov 29, 2013 9:21:32 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from 'ca3a06d0'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '6cb82119'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #5' from '392f68ef'
Nov 29, 2013 9:21:33 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '8e3a869d'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '6cb82119'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #6' from '392f68ef'
Nov 29, 2013 9:21:34 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '8e3a869d'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '6cb82119'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #7' from '392f68ef'
Nov 29, 2013 9:21:35 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '8e3a869d'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '6cb82119'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #8' from '392f68ef'
Nov 29, 2013 9:21:36 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '8e3a869d'
Nov 29, 2013 9:21:37 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #9' from '392f68ef'
Nov 29, 2013 9:21:37 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '8e3a869d'
Nov 29, 2013 9:21:38 PM com.example.services.BroadcastClientEndpoint onMessage
INFO: Received message 'Message #10' from '392f68ef'
2013-11-29 21:21:39.260:INFO:oejwc.WebSocketClient:main: Stopped org.eclipse.jetty.websocket.client.WebSocketClient@3af5f6dc

Awesome! I hope this introductory blog post shows how easy it has become to use modern web communication protocols in Java, thanks to Java WebSockets (JSR-356), the Java API for JSON Processing (JSR-353) and great projects such as Jetty 9.1!

As always, complete project is available on GitHub.