Thoughts on Quarkus
Monday, April 08, 2019

Quarkus, the new “supersonic, subatomic” Java framework is currently getting a lot of attention. The ideas behind this build and runtime tool are indeed more than interesting for the future of enterprise Java. What are the benefits and shortcomings of using Quarkus?
Getting rid of dynamics
Quarkus builds on the reasoning that most of the dynamics of an enterprise Java runtime aren’t really required in a containerized world. Once you build your application into a container image, the functionality is usually not supposed to change. All of the dynamics that an enterprise container brings allow for very powerful and flexible programming and deployment models, but once our applications have been started inside containers, they typically don’t change anymore.
The approach that Quarkus takes is to tailor a runtime that only contains what your application needs and to boil down most of the dynamics of an enterprise runtime.
Enterprise Java code heavily relies on Inversion of Control (IoC), aka “don’t call us, we call you”. Think of dependency injection à la @Inject, HTTP resources with @Path and @GET, or event observers with @Observes. We developers declaratively specify what should happen and the implementation makes sure it does.
This allows for an extremely productive programming model, but also comes with heavy lifting at runtime, since someone has to put together all of these loose ends.
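To make that concrete, here is a minimal sketch of such declarative code; the class and names are made up for illustration:

    import javax.enterprise.context.ApplicationScoped;
    import javax.enterprise.event.Observes;
    import javax.inject.Inject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    @Path("/greetings")
    @ApplicationScoped
    public class GreetingResource {

        @Inject
        GreetingService service;        // wired in by the container

        @GET                            // bound to HTTP GET by the runtime
        public String greeting() {
            return service.greeting();
        }

        void onGreeting(@Observes GreetingEvent event) {  // invoked by the container
            // react to fired events here
        }
    }

    @ApplicationScoped
    class GreetingService {
        String greeting() { return "hello"; }
    }

    class GreetingEvent { }

Nowhere do we instantiate or look up these collaborators ourselves; the runtime puts all the loose ends together.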
Now, the idea is that if our applications aren’t supposed to mutate at runtime, most of these dynamics can be resolved at build time.
The resulting code can then mainly consist of direct invocations; all of the magic is being boiled down.
Now, is this the same result that one achieved in the past with (from today’s view) cumbersome enterprise frameworks that didn’t support IoC and required us to directly invoke functionality in our code? From a developer’s perspective, not at all. In our code, we still use the same effective, declarative approaches, the same annotations; the build process takes care of bringing the dynamics back to earth.
Quarkus also supports building native executables with GraalVM. With that approach, we use ahead-of-time (AOT) compilation to pre-build and compile our applications to native executables that don’t need to dynamically scan and load all our classes into a JVM. The resulting executable starts up very quickly and comes with lower resource consumption compared to a regular JVM.
Power of standards
Looking at Quarkus, what I find most appealing is that it’s built on top of known Enterprise standards, such as CDI, JAX-RS, and many more. Instead of a fully-fledged application server, we run our applications in an optimized runtime, either via a native executable or using a Java runtime.
A lot of up-and-coming enterprise frameworks require developers to, once again, learn new APIs and are, sometimes more, sometimes less, reinventing the wheel, for example for how to implement REST endpoints. However, from a developer’s and project’s point of view, I don’t see the benefit of re-learning and re-writing applications when existing APIs and solutions would suffice. With the approach that Quarkus takes, developers can take an application that is based on CDI, JAX-RS, and JPA, for example, and optimize it by changing the runtime to Quarkus.
Extensions to Enterprise Java
Besides what is contained in Java Enterprise, Quarkus also extends the available functionality where projects might require it.
Aside from the supported Java EE and MicroProfile specifications there are, for example, Quarkus extensions for reactive messaging, Vert.x, or Camel. Vert.x’s EventBus type, for instance, is injectable via @Inject. This matches the developer experience that we’re used to in EE.
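As a small sketch, assuming the quarkus-vertx extension is on the classpath (the bean and address names are made up):

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;
    import io.vertx.core.eventbus.EventBus;

    @ApplicationScoped
    public class GreetingPublisher {

        @Inject
        EventBus eventBus;   // the Vert.x event bus, injected like any CDI bean

        public void publishGreeting(String name) {
            // fire-and-forget message to all consumers of the "greetings" address
            eventBus.publish("greetings", name);
        }
    }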
I like the approach of starting with known enterprise APIs and extending them with what applications additionally require, while keeping the same declarative approaches.
Serverless Enterprise Java
One of the unique selling points of Quarkus, and of running Java applications natively, is the extremely short startup time. Like, seriously, everything that starts in a few milliseconds is a game changer for requirements where we need to quickly start up and tear down our applications.
That is still one of the biggest limitations in an otherwise suitable-for-almost-everything Java world. Performance-wise, the JVM needs a huge amount of time to start up, let alone to warm up the HotSpot engine and to reach its full throughput. Fair enough, there’s a reason for that, since the runtime has mostly been optimized for throughput in long-running processes. With the demand that applications should start up quickly enough that users can wait for it, it’s simply not enough to start a JVM in the normal way.
The mentioned approach of AOT compilation enables us to keep writing our Java applications as before while executing them as native images. By doing so, we enable our Java workload to be executed in “serverless” environments where we can scale our workload to zero and still start up quickly, without punishing the user with an initial startup time.
However, as so often, in practice life’s not quite that easy. GraalVM doesn’t support the whole feature set of a regular JVM; for example, it doesn’t support reflection in the usual way, and a lot of enterprise runtimes wouldn’t run out-of-the-box as a native executable.
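For instance, if your code relies on reflection, say for JSON mapping, Quarkus lets you declare the affected classes explicitly so that the native image retains them; a minimal sketch with a made-up class:

    import io.quarkus.runtime.annotations.RegisterForReflection;

    // Instructs the native image build to keep reflective access
    // to this class, e.g. for serialization libraries that
    // instantiate and populate it reflectively.
    @RegisterForReflection
    public class Payment {
        public String id;
        public double amount;
    }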
That being said, it’s pretty impressive how much work the friends at Red Hat have put into the development of Quarkus, developing the implementations with the limitations of this runtime in mind. Only this enables us to combine these pieces and run our Java Enterprise app in a native way. A Quarkus application also runs well on a normal JVM, starting up “fast enough”, at least in my eyes, in way less than one second.
Despite all that great news for Enterprise Java, and the requirement of scaling to zero and thus starting up quickly, from my point of view, startup time is not everything. While this new movement is certainly interesting, we just shouldn’t forget that the vast majority of enterprises are running, and probably will continue running, their workload for a longer period of time. However, the approach of getting rid of most of the “dynamics” at runtime also has a positive impact on the overall resource consumption and is certainly promising.
But, in my opinion, the native startup time is not even the biggest benefit.
Development turnaround time: “Coding that sparks joy”
Quarkus allows us developers to modify and test our business code with extremely fast hot-reloads. The quarkus:dev goal of the Maven plugin enables us to change and save a file; the framework then reloads the classes and swaps the behavior inside the running application, in an automated approach. We can simply re-execute and test the changed functionality after a few milliseconds, which is, in human reaction time, instant. The turnaround time of the development cycle and the feedback loop thus becomes as short as it gets.
As my friend Edson Yanaga puts it: “This is coding that sparks joy”. I fully agree.
In general, I’m a huge fan of short latencies. The mantra of fighting latency is what I believe made a lot of the Google services a joy to use. In general, when coding, we want to get and stay in the flow. Developer thinking time is very precious, and we don’t want to be disrupted from that flow and wait for more than a very few seconds; otherwise we get distracted, fetch yet another coffee, or worse, look at social media, and there goes our attention.
In my eyes, this minimal turnaround time is the biggest advantage of the Quarkus framework. However, even without Quarkus, if you use a modern application container and some tooling, you can already achieve hot-redeployment times that enable a keep-in-the-flow development mode. For example, Open Liberty can deploy applications in less than a second, and when combined with tooling such as WAD, we can really improve our turnaround times, as described in this video.
Some notes on integration testing: the quick startup of the overall Quarkus application also makes tests actually much more suited for integration tests on a deployment level, rather than on a code level. That is, a single application is deployed and end-to-end-tested using the application’s communication interfaces. However, one of the major causes of slow build times is long-running test phases that start up the application, or parts of it, for every. single. test run. Even with the low startup times provided by Quarkus, this impact becomes huge once more and more test scenarios become part of the pipeline. What we should do, in general, is to define a single or at most a few deployments during our test suite execution, where we end-to-end-test our application without restarting the running application-under-test in between. This applies regardless of whether we use Quarkus’ capabilities for testing or a dedicated test project that hammers a fired-up application.
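To illustrate Quarkus’ testing capabilities, here’s a minimal sketch assuming the quarkus-junit5 and rest-assured test dependencies, with a hypothetical /greetings endpoint; the application is started once and reused across the test methods of the class:

    import io.quarkus.test.junit.QuarkusTest;
    import org.junit.jupiter.api.Test;

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.CoreMatchers.is;

    // Starts the Quarkus application once for this test class;
    // the test methods hit the running instance over HTTP.
    @QuarkusTest
    public class GreetingResourceTest {

        @Test
        public void testGreetingEndpoint() {
            given()
              .when().get("/greetings")
              .then()
                 .statusCode(200)
                 .body(is("hello"));
        }
    }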
Continuous Delivery turnaround time
One of the downsides of native builds à la GraalVM is that this build takes a looooong time. Depending on your machine, thirty seconds and more, which is much longer than what we’re used to in a Java world. In our development pipeline, this implies that we don’t want to execute the native build on each code change, only inside the Continuous Delivery pipeline. Even then, we need to take into account that this will slow down our overall pipeline execution time, which otherwise could run faster. Following the mantra of building our application only once and fully testing that very build before we ship to production, this implies that the end-to-end / system / acceptance test turnaround times increase as well.
Besides native executables, Quarkus also supports thin deployment artifacts, as thin JARs, which only contain the actual business logic classes developed by us. This approach is possible with Quarkus since it separates the concerns of libraries and our own code. Have a look at the size and contents of the built *-runner.jar. The implementation and required libraries are contained under the lib/ directory.
Just as with regular Java Enterprise applications, this allows us to leverage the benefits of Docker, by optimizing for the copy-on-write file system image layers. If you know a little about these image layers, you’ll notice that this certainly makes sense in a containerized world. The build and transmission times of the container image also affect the overall build execution time. In this case, thin deployment artifacts offer the best possible experience. From my experience, the overall image sizes seldom matter; what matters is how quickly we can re-build and re-transmit the layers that actually change. Even with tiny native images, these sizes and times are still orders of magnitude larger compared to a thin deployment artifact.
In projects, we need to make this trade-off between pipeline execution times and container startup time. Besides the approach of scaling to zero, deployment scenarios should make use of some form of blue-green deployment, in order to avoid downtime for users, anyway. With that in mind, production startup time becomes less of an issue, since the old version will always stay active, until the new one is ready to roll. If you’re involved in an enterprise project with enough users so that scaling to zero is not something to think about, but quickly shipping new versions to production is, the approach of thin deployment artifacts might be more suitable.
Current limitations
One of the current framework limitations is that Quarkus doesn’t support the full set of some of the EE standards yet. EJBs, for example, aren’t supported. However, transactions are supported, and some other functionality can be substituted with Quarkus’ own features. One example is scheduling, where Quarkus ships its own @Scheduled annotation.
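A minimal sketch of that annotation, following the Quarkus scheduler extension (the bean and interval are made up):

    import javax.enterprise.context.ApplicationScoped;
    import io.quarkus.scheduler.Scheduled;

    @ApplicationScoped
    public class CacheRefresher {

        // Quarkus invokes this method every 10 seconds.
        @Scheduled(every = "10s")
        void refreshCache() {
            // reload cached data here
        }
    }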
This seems like a reasonable approach: trying to realize the functionality that projects might need, and delivering a framework that already supports, from my point of view, the majority of required functionality.
However, Quarkus is moving very quickly, so let’s see how these gaps are closed. Again, I believe it’s very impressive how mature and exhaustive this framework already looks.
The Maven plugin declaration, and especially how it’s being advertised in the Quarkus documentation, is something else that could be improved. A lot of folks seem to be fans of putting quite an amount of XML into their pom.xml; however, I’m not so much. I prefer to maintain a clearer separation of concerns for our Java application and not let Maven “build everything”. If we allow the projects to use Maven’s defaults, we keep the required LoCs inside the pom.xml to a bare minimum, and let everything on top of that be handled by the CI infrastructure. With Quarkus, you can at least get rid of most of its pom.xml definition, and only define and build the native image in your CI pipeline, for example. Then it’s possible to boil down the pom.xml a bit.
However, the documentation says that there’s a native CLI “coming soon”, which sounds promising to me.
Conclusion
Quarkus takes cloud native Enterprise Java to the next level and enables scenarios that haven’t been possible before, especially in regard to application startup times. If you’re planning to deliver scale-to-zero approaches, this is certainly a technology you want to have a look at.
I very much like how Quarkus follows up on the approaches that a few technologies took before, takes them further, and delivers a single framework, everything under one umbrella. This makes it easy for developers to get started and to use enterprise standards that they might already be familiar with, such as CDI or JAX-RS. In my opinion, this is a big benefit: not trying to reinvent the enterprise world, and using familiar technology, but with a highly-optimized implementation.
As a developer, I find the AOT compilation and other JVM optimizations very interesting in general. You might also have a look at the OpenJ9 JVM and its optimizations; maybe combining that runtime with the JVM execution mode of a Quarkus application would be interesting.
For a quick turnaround developer experience with "plain" Java EE, you can have a look at WAD and how to integrate it in Docker.
Update 2019-04-08: Added more clarification to section Extensions to Enterprise Java