At work we had a Spring Boot RESTful microservice that lost some data in its responses, but only when deployed. The endpoint supposedly did what was expected in unit testing, yet when deployed some of the values in the JSON responses were empty. Immediately suspicious of the quality of the unit tests, I fired up the service using ./gradlew bootRun and hit the endpoint. The data I was looking for was there. Head scratching ensued. Being thorough and systematic, I next tried ./gradlew bootRepackage, ran the resultant jar file, and tried the endpoint again. Bingo, the data disappeared.
To be fair, I’ve left out a key fact. The data I was looking for came from property file resources included in another dependency. The RESTful endpoint was returning a catalog of information that pulled certain longer, internationalized text descriptions from a dependent jar. Those property file resources are ultimately loaded with ClassLoader.getResourceAsStream(). Now this worked in the dependency’s unit tests. It worked in the service’s unit tests. It worked with ./gradlew bootRun, but when you did ./gradlew bootRepackage and created a fat jar, it failed. Why?
The issue was the fat jar. Fat jars are a little bit of a hack. Java likes to think of jars as a bundle of classes and resources. Java wants you to use jars in an additive manner, meaning if you have classes in multiple jars, you use the classpath to include the multiple jars. It’s not a recursive model; Java doesn’t want a jar full of jars. But if you want to distribute an application, creating a single package is, for simplicity, where you want to end up. So there’s a mismatch between the goal of a single package and Java’s desire to treat a program as a collection of many jars. To address this, folks started tampering with jar files to allow programs to be distributed as a single jar that somehow includes all the code from a collection of jars.
Initial attempts at this were somewhat mechanical: they’d unpack all the stuff from all the jars and put it all together in one jar. That had issues because individual jars weren’t built with this in mind, so they didn’t worry about uniquely naming resources, and when you dumped the contents of all the jars into one jar, resources started to collide and get overwritten.
When the “pour it all out into one jar” approach proved awkward, the next approach was to mess with the ClassLoader to allow for recursion. Java supports custom class loaders, so this seemed the right way to go: make a class loader that allows a class to be found in a jar inside another jar. This meant you didn’t tamper with the jars you included; they were just stuffed unaltered into another jar.
And that is why bootRun isn’t the same as running the jar resulting from bootRepackage. bootRun really just uses the old-school classpath with all the dependencies included. bootRepackage creates a fat jar and provides a class loader that can handle the jars in jars.
… and that was our problem. Spring’s class loader behaved just a bit differently than the normal class loader when responding to getResourceAsStream to support the jars in jars.
Once we pinpointed the problem, we tweaked our code a bit and, without much trouble, found idiomatic Java that worked properly in either scenario. It was the getting there that was tricky.
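The post doesn’t show the final code, but a minimal sketch of the kind of resource loading that behaves the same on a flat classpath (bootRun) and inside Spring Boot’s nested-jar layout might look like this. The helper class and the error message are hypothetical; the key idea is to ask a class loader that actually knows about the nested jars (the context or owning class loader), rather than, say, the system class loader, which only sees the outer jar.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ResourceLoading {
    // Hypothetical helper: load a properties resource via the context
    // class loader (falling back to this class's loader). Both resolve
    // nested-jar resources under Spring Boot's launcher, unlike the
    // system class loader.
    public static Properties loadProperties(String path) throws IOException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl == null) {
            cl = ResourceLoading.class.getClassLoader();
        }
        try (InputStream in = cl.getResourceAsStream(path)) {
            if (in == null) {
                throw new IOException("Resource not found on classpath: " + path);
            }
            Properties props = new Properties();
            props.load(in);
            return props;
        }
    }
}
```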
I’ve been tinkering with Kotlin and JavaFX and the result was a Java jar file that spins up a user interface. OS X will open a jar on double-click, but I wanted a normal app. I looked around and found App Maker, which did a decent job, but processing the jar manually after each build got annoying fast. Looking at the info out there, and at what App Maker did, it seemed like I ought to be able to get Gradle to do the same without too much pain. I did.
The Simplest App Possible
So, from documentation, and investigating App Maker’s output, the simplest app is:
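Roughly this layout (the app and jar names here are placeholders, not from the original post):

```
MyApp.app/
└── Contents/
    ├── Info.plist
    ├── MacOS/
    │   └── launcher
    ├── Resources/
    │   └── application.icns
    └── Java/
        └── myapp.jar
```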
That’s the basics. So if you’ve a working jar, all you’re lacking is the directory structure, and:
Info.plist: An Apple plist file, you can use a simple one unaltered.
launcher: A shell script that launches the jar. This can be 98% templated, with only things like the app name, jar name, and java version needing to be set.
application.icns: An icon formatted to Apple’s approval.
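For reference, a launcher can be as small as this sketch. The @…@ tokens are the Ant-style variables the build fills in; the paths and the -Xdock flag usage are assumptions, not the post’s actual template.

```shell
#!/bin/sh
# Resolve the bundle's Contents directory relative to this script.
DIR=$(cd "$(dirname "$0")/.." && pwd)
# @APP_NAME@, @JAR_NAME@, and @JAVA_VERSION@ are substituted at build time.
JAVA_HOME=$(/usr/libexec/java_home -v @JAVA_VERSION@)
exec "$JAVA_HOME/bin/java" -Xdock:name="@APP_NAME@" -jar "$DIR/Java/@JAR_NAME@"
```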
The Gradle Solution
To generate the app bundle from gradle, I did two things:
Put the three needed files into src/main/app. The Info.plist and the application.icns are the actual final files. The launcher I modified into a template, using Ant’s templating notation “@VARIABLE_NAME@”.
Then I added one gradle task to build out the directory structure, and copy the files into place, applying the templating.
The Task
Here is the gradle task. I employed some of gradle’s ant support to perform copies and templating.
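Something along these lines – a sketch, where the task name, token values, and Java version are assumptions, not the original task:

```groovy
// Hypothetical sketch of the bundle-building task, using Gradle's ant
// support for copies and @TOKEN@ templating.
task osxApp(dependsOn: jar) {
    doLast {
        def appDir = "${buildDir}/app/${project.name}.app/Contents"
        ant.mkdir(dir: "${appDir}/MacOS")
        ant.mkdir(dir: "${appDir}/Resources")
        ant.mkdir(dir: "${appDir}/Java")
        ant.copy(file: 'src/main/app/Info.plist', todir: appDir)
        ant.copy(file: 'src/main/app/application.icns', todir: "${appDir}/Resources")
        ant.copy(file: jar.archivePath, todir: "${appDir}/Java")
        // Apply the Ant filterset to expand @APP_NAME@ etc. in the launcher.
        ant.copy(file: 'src/main/app/launcher',
                 tofile: "${appDir}/MacOS/launcher") {
            filterset {
                filter(token: 'APP_NAME', value: project.name)
                filter(token: 'JAR_NAME', value: jar.archiveName)
                filter(token: 'JAVA_VERSION', value: '1.8')
            }
        }
        ant.chmod(file: "${appDir}/MacOS/launcher", perm: '755')
    }
}
```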
With the files and task in place all you need to do is run gradle osxApp and you’ll hopefully find build/app/ProjectName.app – a working OS X app.
Conclusion
I could roll this into a Gradle plugin but that’s more work than I want to do for this. As it stands, if you want to use this approach, just grab my project that uses it, copy the files in src/main/app, and add the task to your build.gradle.
I’m filtering out a small number of objects from a collection in Java 8. Nothing special there: I streamed the collection and used a Predicate. Done. But then the collection moved out into a cloud database and I found myself pulling the entire collection over the wire to filter out a small set of objects. While the code still worked perfectly, performance really suffered. Additionally, the collections were going to continue to live in both places, so I needed to find a way to handle both well.
Taking Stock
The cloud database in question was Orchestrate.io, which offers server-side filtering based on Lucene queries. So obviously that’s what I wanted to use to reduce the traffic over the wire. I considered moving away from my Predicates and introducing a query abstraction that could be converted either to a Predicate or to a Lucene query based on the collection’s location. But that felt like overdesigning the solution: both Predicates and Lucene queries are basically built up of comparisons and boolean operators, so there ought to be a way to convert one model directly to the other, and I already had the Predicates in place.
The Goal
My Predicates tested field/value pairs in beans, and could be built up through the negate, and, and or methods. For example:
Predicate predicate = new BeanPredicate("lastname","Doe")
.and(new BeanPredicate("firstname","John")
.or(new BeanPredicate("firstname","Jane")));
From that I wanted to derive a Lucene query:
lastname:"Doe" AND ( firstname:"John" OR firstname:"Jane" )
I decided that I could override the toString method to make the Lucene representation available there.
Iteration 1: Subclassing
Java predicates use lambdas to implement the negate, and, and or operator tests, but I needed the toString method to change for all those operators as well. So I went with subclasses for the various operators. My code looked something like this example of the and operation:
class BeanPredicate implements Predicate<Bean> {
    private final String label;
    private final String value;
    ...
    public BeanPredicate and(BeanPredicate other) {
        return new BeanPredicate(label, value) {
            @Override
            public boolean test(Bean bean) {
                return super.test(bean) && other.test(bean);
            }
            @Override
            public String toString() {
                return super.toString() + " AND " + other.toString();
            }
        };
    }
}
This passed all tests but damn was it verbose and ugly.
Iteration 2: Composition With Functions
My next refactoring changed test and toString over to Functions, and then had the different operations compose the proper implementations:
class BeanPredicate implements Predicate<Bean> {
    private final String label;
    private final String value;
    private final Function<Bean, Boolean> test;
    private final Function<BeanPredicate, String> toString;
    ...
    public BeanPredicate and(BeanPredicate other) {
        return new BeanPredicate(label, value,
            bean -> test.apply(bean) && other.test(bean),
            bp -> toString.apply(this) + " AND " + other.toString());
    }
}
This may not appear glaringly different but it has a number of advantages. The operation implementations are cleaner, easier to read, and do not involve an anonymous class. When you consider that the bulk of the code is in the operations, being cleaner there pays off.
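To make the pattern concrete, here is a self-contained, simplified version of it, using a Map as a stand-in for the post’s bean class (so the names and details differ from the actual implementation):

```java
import java.util.Map;
import java.util.Objects;
import java.util.function.Function;
import java.util.function.Predicate;

// Simplified sketch: a predicate over Map-based "beans" that also
// composes its own Lucene-style string representation.
class BeanPredicate implements Predicate<Map<String, String>> {
    private final Function<Map<String, String>, Boolean> test;
    private final Function<BeanPredicate, String> toString;

    BeanPredicate(String label, String value) {
        this.test = bean -> Objects.equals(bean.get(label), value);
        this.toString = bp -> label + ":\"" + value + "\"";
    }

    private BeanPredicate(Function<Map<String, String>, Boolean> test,
                          Function<BeanPredicate, String> toString) {
        this.test = test;
        this.toString = toString;
    }

    @Override
    public boolean test(Map<String, String> bean) {
        return test.apply(bean);
    }

    public BeanPredicate and(BeanPredicate other) {
        return new BeanPredicate(
            bean -> test.apply(bean) && other.test(bean),
            bp -> this + " AND " + other);
    }

    public BeanPredicate or(BeanPredicate other) {
        return new BeanPredicate(
            bean -> test.apply(bean) || other.test(bean),
            bp -> "( " + this + " OR " + other + " )");
    }

    @Override
    public String toString() {
        return toString.apply(this);
    }
}
```

Composing the earlier example then yields both a working filter and the Lucene query text from the same object.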
Conclusion
With an object-functional language, the general wisdom of preferring composition over subclassing becomes even more true and can be applied in an even cleaner and more powerful way. In particular, one common drawback of composition – polluting the class interface with indirect references through the composite types – is completely avoided when you’re simply assigning functions. You can take a look at the final implementation here.
[This post is woefully out of date, read the Update]
In my prior post on GraphQL server in Java I noted the complexity of defining the schema and committed to looking at the tools offered to ease the process. Here we are. I’ve migrated the schema definition over to using an annotation package, and I’m generally pleased with the results, as it really did simplify the schema definition.
Class Definitions
Defining the classes was trivial. Simply adding a few annotations to the entity getter methods:
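For illustration, annotating an entity with graphql-java-annotations looks roughly like this (the Category class and its fields are stand-ins, not the post’s actual code):

```java
// Hypothetical entity: each annotated getter becomes a GraphQL field.
public class Category {
    private String key;
    private String name;

    @GraphQLField
    public String getKey() { return key; }

    @GraphQLField
    public String getName() { return name; }
}
```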
Once those are in place, anywhere you reference that type in your queries or mutations will “do the right thing.”
Queries/Mutations
This was a bit less polished, but worked well enough too. They offered a couple of approaches and in the end I implemented the following pattern:
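The pattern was roughly an annotated class of static query methods. In this sketch, SnippetDao, the method names, and the argument annotation are assumptions; the point is that the DAO arrives as the context object, retrieved via DataFetchingEnvironment.getSource():

```java
// Hypothetical annotated query class using static methods.
public class Query {
    @GraphQLField
    public static Category category(DataFetchingEnvironment env,
                                    @GraphQLName("key") String key) {
        // The dispatcher passes the DAO in as the context object.
        SnippetDao dao = env.getSource();
        return dao.findCategory(key);
    }
}
```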
There are a few things going on here to note. First, the annotation tool scans the .class definition, so either you use static methods, or, if you use instance methods, it’s going to call the no-argument constructor to create an instance. Since data fetchers need data, I was a little confused about what to do with static methods, or a no-argument instance – how do you get your data access objects in there? Obviously you could make your DAO objects into singletons and get at them that way, but that seemed ugly. What I found, by digging into the code, was that if your GraphQL dispatcher passed in a context object, it was accessible via the DataFetchingEnvironment.getSource() method. So I went with static methods and accessed the DAO via getSource().
Using the Annotated Classes
Once you’ve annotated your entities and created an annotated query and mutation class here is how to create your schema:
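Something like the following – a sketch, as the exact builder calls vary by library version:

```java
// Build the schema from the annotated classes, then a GraphQL instance
// to dispatch queries against it.
GraphQLSchema schema = GraphQLSchema.newSchema()
    .query(GraphQLAnnotations.object(Query.class))
    .mutation(GraphQLAnnotations.object(Mutation.class))
    .build();
GraphQL graphQL = GraphQL.newGraphQL(schema).build();
```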
I felt the annotations definitely made the code simpler and cleaner. About my only complaint was that the documentation and examples were so terse that I ended up tracing through the code to figure out how some of it came together.
[This post is woefully out of date, Please read Update here!]
Github is a service I respect and depend on so its adoption of GraphQL for an update to their API pushed me to look into GraphQL.
The Approach
People suggest a good place to start with GraphQL is to migrate an existing RESTful service, so I decided to try it out on one of my existing Java RESTful services. That service has a Java backend with a simple jQuery based JavaScript UI. So I started looking for a Java based GraphQL server and a simple JavaScript GraphQL client.
For the client side I reviewed several offerings but they were all rather complex for my needs, so with about 40 lines of JavaScript I cooked up my own client.
I also decided to follow the generally accepted wisdom that you only tackle the data management parts of your API with GraphQL, and leave things like authentication as they were.
The Server
The graphql-java package came with some good examples that let me start trying it out right away. But the service I was migrating was based on tools that didn’t exactly line up with their examples, so it wasn’t plug and play. My service was written with Spark: “A micro framework for creating web applications in Java 8 with minimal effort.” And as it turned out, connecting graphql-java to Spark was straightforward. Associate something like the following with the graphql HTTP POST path:
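A sketch of that dispatcher, assuming Jackson for JSON; the mapper and graphQL objects, and the error handling, are my assumptions rather than the original code:

```java
// Hypothetical Spark route dispatching GraphQL queries.
post("/graphql", (request, response) -> {
    Map<String, Object> payload =
        mapper.readValue(request.body(), Map.class);
    String query = (String) payload.get("query");

    ExecutionResult result = graphQL.execute(query);

    response.type("application/json");
    return mapper.writeValueAsString(
        result.getErrors().isEmpty() ? result.getData() : result.getErrors());
});
```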
And your service can now dispatch GraphQL requests in JSON of the form:
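For example (the field names here are illustrative):

```json
{ "query": "{ categories { key name } }" }
```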
With that dispatcher in place you can start the process of working on your GraphQL schema implementation, where the magic takes place.
The GraphQL Schema
The schema, at least in graphql-java, is where a lot of the work starts and ends. In a RESTful service, you tend to map a single HTTP request path and operation to a specific Java method. With graphql-java the schema is a holistic description of types, requests, and how to fulfill them. It almost has the feel of a Guice module, or a Spring XML configuration. There are tools designed to simplify schema creation, but since I had set out to learn GraphQL I decided to feel the pain and create the schema by hand. Every type exposed by the API had to be described. Every request had to be described. The associations between the requests and the code fulfilling them had to be described. Again the graphql-java examples and tests proved a good resource but nothing there was a drop in solution.
Describing an Entity
The service I ported was a snippets app. One of its domain classes is a way to categorize a snippet:
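Something along these lines (a stand-in, not the post’s actual class):

```java
// Hypothetical snippet-category domain class: a key and a display name.
public class Category {
    private final String key;
    private final String name;

    public Category(String key, String name) {
        this.key = key;
        this.name = name;
    }

    public String getKey() { return key; }

    public String getName() { return name; }
}
```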
To add this class to the GraphQL schema you have to fully describe it:
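A sketch of what that description looks like with graphql-java’s builders, assuming a two-field category; the exact builder calls vary by version:

```java
// Hand-built schema description of the Category class.
GraphQLObjectType categoryType = GraphQLObjectType.newObject()
    .name("Category")
    .description("A way to categorize a snippet")
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("key")
        .type(new GraphQLNonNull(Scalars.GraphQLString))
        .build())
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("name")
        .type(Scalars.GraphQLString)
        .build())
    .build();
```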
Once you’ve described the classes, you need to describe how you’ll retrieve instances. Here’s an example of retrieving by key, and retrieving them all:
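A sketch of the idea, where dao and its methods are hypothetical:

```java
// A query type exposing fetch-by-key and fetch-all, with data fetchers
// delegating to a hypothetical DAO.
GraphQLObjectType queryType = GraphQLObjectType.newObject()
    .name("Query")
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("category")
        .type(categoryType)
        .argument(GraphQLArgument.newArgument()
            .name("key")
            .type(new GraphQLNonNull(Scalars.GraphQLString))
            .build())
        .dataFetcher(env -> dao.findCategory(env.getArgument("key")))
        .build())
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("categories")
        .type(new GraphQLList(categoryType))
        .dataFetcher(env -> dao.allCategories())
        .build())
    .build();
```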
To create, update, or delete objects you’ll need to define the “mutations”. Here’s my category create:
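Again a sketch, with hypothetical argument and DAO names:

```java
// A mutation type with a single create operation for categories.
GraphQLObjectType mutationType = GraphQLObjectType.newObject()
    .name("Mutation")
    .field(GraphQLFieldDefinition.newFieldDefinition()
        .name("createCategory")
        .type(categoryType)
        .argument(GraphQLArgument.newArgument()
            .name("name")
            .type(new GraphQLNonNull(Scalars.GraphQLString))
            .build())
        .dataFetcher(env -> dao.createCategory(env.getArgument("name")))
        .build())
    .build();
```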
From these samples you can tell that the schema is, as the name implies, a detailed description of the classes and the operations on them. The format is verbose, but I found I pretty quickly picked up the syntax and semantics, and developing the complete schema wasn’t too bad a chore. Also, don’t forget there are tools claiming to ease the process that might be useful.
The Client
I developed the server side to completion, using various tests to drive it. Once I had a server that supported my old RESTful operations via GraphQL, I approached the client side. My client was nothing more than jQuery performing HTTP get/post/deletes, and it wasn’t particularly tidy. I looked into existing JavaScript clients, but they seemed to all be npm based or integrated into a framework. So I just wrote the following:
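In that spirit, here is a sketch of what such a tiny client can look like; the endpoint path, helper names, and jQuery usage are assumptions, not the original code:

```javascript
// Hypothetical minimal GraphQL client: build the JSON body, then POST
// it with jQuery's ajax.
function graphqlBody(query, variables) {
    return JSON.stringify({ query: query, variables: variables || {} });
}

function graphql(query, variables) {
    return $.ajax({
        url: '/graphql',
        method: 'POST',
        contentType: 'application/json',
        data: graphqlBody(query, variables)
    });
}
```

Usage then reduces to calls like `graphql('{ categories { key name } }').then(render)`.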
Both the server and the client benefitted from GraphQL’s more holistic approach. Where GraphQL had a single schema, dispatcher and client pattern, RESTful had used GET/POST/DELETE methods each somewhat distinct.
The performance suffered a bit. I’ve done a lot of RESTful services, and this was my first foray into GraphQL, so I’m not surprised that it wasn’t quite as quick end to end. I’m suspicious of some of the magic (likely reflection based) that graphql-java uses to execute the schema queries. That said, I’m betting I can improve the performance by doing some things better than my first attempt.
But overall I liked the GraphQL experience and will probably advocate it over RESTful going forward.
As always, the complete code for my work is on GitHub.
For work I’ve been helping some folks with a demo Android app. They needed a backend to test against, and so whipping up a servlet was suggested. A couple hours’ work… if you’ve done it a bunch. So I hacked up a skeleton and promised some references. I thought I’d do it here and share. These are some of my favorite ingredients for throwing together a servlet backend.