practicing techie

tech oriented notes to self and lessons learned

An open web application framework benchmark

Selecting a platform for your next application development project can be a complex and burdensome undertaking. It can also be very intriguing and a lot of fun. There’s a wide range of different approaches to take: at one end The Architect will attend conferences, purchase and study analyst reports from established technology research companies such as Gartner, and base his evaluation on analyst views. Another approach is to set up a cross-disciplinary evaluation committee that will collect a wishlist of platform requirements from around the organization and make its decision based on a consensus vote. The first approach is very autocratic, while the second can sometimes lead to a lack of focus. A clear, coherent vision of requirements and prioritization is essential for the success of the evaluation. Due to these problems, a middle road and more pragmatic approach is becoming increasingly popular: a tightly-knit group of senior propellerheads uses a more empirical method of analysing requirements, studies and experiments with potential solution stack elements, and brainstorms to produce a short list of candidates to be validated using hands-on architecture exercises and smell tests. Though hands-on experimentation can lead to better results, the cost of this method can be prohibitive, so often only a handful of solutions that pass the first-phase screening can be evaluated this way.

Platform evaluation criteria depend on the project requirements and may include:

  • developer productivity
  • platform stability
  • roadmap alignment with projected requirements
  • tools support
  • information security
  • strategic partnerships
  • developer ecosystem
  • existing software license and human capital investments
  • etc.

Performance and scalability are often high-priority concerns. They are also among those platform properties that can be formulated into quantifiable criteria, though the key challenge here is how to model user behaviour and implement performance tests that accurately reflect your expected workloads. Benchmarking several different platforms only adds to the cost.

A company called TechEmpower has started a project called TechEmpower Framework Benchmarks, or TFB for short, that aims to compare the performance of different web frameworks. The project publishes benchmark results that application developers can use to make more informed decisions when selecting frameworks. What’s particularly interesting about FrameworkBenchmarks is that it’s a collaborative effort conducted in an open manner. Development related discussions take place in an online forum and the source code repository is publicly available on GitHub. Doing test implementation development in the open is important for enabling peer review, and it allows implementations to evolve and improve over time. The project implements performance tests for a wide variety of frameworks, and chances are that the ones you’re planning to use are included. If not, you can create your own tests and submit them to be included in the project code base. You can also take the tests and run the benchmarks on your own hardware.

Openly published test implementations are not only useful for producing benchmark data, but can also be used by framework developers to communicate framework performance related best practices to application developers. They also allow framework developers to receive reproducible performance benchmarking feedback and data for optimization purposes.

It’s interesting to note that the test implementations have been designed and built by different groups and individuals, and some may have been more rigorously optimized than others. The benchmarks measure the performance of the framework as much as they measure the test implementation, and in some cases a suboptimal test implementation will result in poor overall performance. Framework torchbearers are expected to take their best shot at optimizing the test implementation, so the implementations should eventually converge towards optimal solutions, given enough active framework pundits.

Test types

In the project’s parlance, the combination of programming language, framework and database used is termed “framework permutation” or just permutation, and some test types have been implemented in 100+ different permutations. The different test types include:

  1. JSON serialization
    “test framework fundamentals including keep-alive support, request routing, request header parsing, object instantiation, JSON serialization, response header generation, and request count throughput.”
  2. Single database query
    “exercise the framework’s object-relational mapper (ORM), random number generator, database driver, and database connection pool.”
  3. Multiple database queries
    “This test is a variation of Test #2 and also uses the World table. Multiple rows are fetched to more dramatically punish the database driver and connection pool. At the highest queries-per-request tested (20), this test demonstrates all frameworks’ convergence toward zero requests-per-second as database activity increases.”
  4. Fortunes
    “This test exercises the ORM, database connectivity, dynamic-size collections, sorting, server-side templates, XSS countermeasures, and character encoding.”
  5. Database updates
    “This test is a variation of Test #3 that exercises the ORM’s persistence of objects and the database driver’s performance at running UPDATE statements or similar. The spirit of this test is to exercise a variable number of read-then-write style database operations.”
  6. Plaintext
    “This test is an exercise of the request-routing fundamentals only, designed to demonstrate the capacity of high-performance platforms in particular. The response payload is still small, meaning good performance is still necessary in order to saturate the gigabit Ethernet of the test environment.”

Notes on Round 9 results

Currently, the latest benchmark is Round 9 and the result data is published on the project web page. The data is not available in machine-readable form and it can’t be sorted by column for analysing patterns. It can, however, be imported into a spreadsheet program fairly easily, so I took the data and analyzed it a bit. Some interesting observations could be made just by looking at the raw data.

In addition to comparing throughput, it’s also interesting to compare how well frameworks scale. One way of quantifying scalability is to take test implementation throughput figures for the lowest and highest concurrency level (for test types 1, 2, 4 and 6), or for the lowest and highest queries-per-request level (for test types 3 and 5), per framework and plot them on a 2-D plane. A line can then be drawn between these two points, with the slope characterizing scalability. Well-scaling test implementations would be expected to have a positive, steep slope for test types 1, 2, 4 and 6, whereas for test types 3 and 5 the slope is expected to be negative.

This model is not entirely without problems since the scalability rating is not relative to the throughput, so e.g. a poorly performing framework can end up having a great scalability rating. As a result, you’d have to look at these figures together.
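
To make the rating concrete, here’s a toy Java sketch of the slope calculation described above. The throughput and concurrency figures below are made up for illustration only.

    // Toy sketch: characterize scalability as the slope of the line through
    // (lowest concurrency, throughput) and (highest concurrency, throughput).
    public class ScalabilitySlope {

        static double slope(double lowConcurrency, double lowRps,
                            double highConcurrency, double highRps) {
            return (highRps - lowRps) / (highConcurrency - lowConcurrency);
        }

        public static void main(String[] args) {
            // e.g. 60,000 req/s at concurrency 8 vs. 180,000 req/s at concurrency 256
            double s = slope(8, 60_000, 256, 180_000);
            System.out.printf("scalability slope: %.1f req/s per added client%n", s);
        }
    }

A throughput-normalized variant (e.g. dividing the slope by the lower throughput figure) would address the caveat above, at the cost of a less intuitive metric.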

To better visualize throughput against concurrency level (“Peak Hosting” environment data), I created a small web app that’s available at http://tfb-kippo.rhcloud.com/ (the app is subject to removal without notice).

JSON serialization

The JSON serialization test aims to measure framework overhead. One could argue that it’s a bit of a micro benchmark, but it should demonstrate how well the framework does with basic tasks like request routing, JSON serialization and response generation.
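
To make the scope of this test type concrete, here is a minimal sketch of what a servlet based JSON serialization test could look like in Java. This is illustrative only, not one of the actual TFB test implementations; Jackson is assumed for serialization.

    import java.io.IOException;
    import java.util.Collections;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // Minimal sketch of a "JSON serialization" style test: route the request,
    // serialize a small object and write the response.
    @WebServlet("/json")
    public class JsonServlet extends HttpServlet {

        private static final ObjectMapper MAPPER = new ObjectMapper();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("application/json");
            // The TFB JSON test serializes a small {"message": "Hello, World!"} style payload.
            MAPPER.writeValue(resp.getOutputStream(),
                    Collections.singletonMap("message", "Hello, World!"));
        }
    }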

The top 10 frameworks were based on the following programming languages: C++, Java, Lua, Ur and Go. The C++ based CPPSP was the clear winner, while the next 6 contestants were Java-based. No database is used in this test type.

The top 7 frameworks with the highest throughput also have the highest scalability ratings. After that, both figures start declining fairly rapidly. This is a very simple test, so it’s a bit of a surprise to see such large variation in results. In their commentary, TechEmpower attributes some of the differences to how well frameworks work on a NUMA-based system architecture.

Quite a few frameworks are Java or JVM based, and rather large variations exist even within this group, so clearly neither the language nor the JVM is the limiting factor in this group.

I was surprised by the Node.js and HHVM rankings. Unfortunately, the Scala-based Spray test implementation, as well as the JVM-based polyglot framework Vert.x implementation, were removed due to being outdated. I hope to see these included in a future benchmark round.

Single database query

This test type measures database access throughput and parallelizability. Again, a surprisingly large spread in performance can be observed for a fairly trivial test case. This would seem to suggest that framework or database access method overhead contributes significantly to the results. Is the database access technology (DB driver or ORM) the bottleneck? Or is it the backend system? It would be interesting to look at the system activity reports from test runs to analyze potential bottlenecks in more detail.

Before seeing the results, I would’ve expected the DB backend to be the bottleneck, but this doesn’t appear to be clear-cut, given that the top as well as many of the bottom-performing test implementations use the same DB. It was interesting to note that the top six test implementations use a relational database, with the first NoSQL based implementation taking 7th place. This test runs DB read statements by ID, which NoSQL databases should be very good at.

The top 10 performing frameworks were based on the Java, C++, Lua and PHP languages and used the MySQL, PostgreSQL and MongoDB databases. The Java based Gemini leads, with CPPSP coming second. Both use the MySQL DB. The Spring based test implementation’s performance was a bit of a disappointment.

Multiple database queries

Where the previous test exercised a single database query per request, this test does a variable number of database queries per request. Again, I would’ve assumed this test would measure the backend database performance more than the framework performance, but it seems that framework and database access method overhead can also contribute significantly.

The top two performers in this test are Dart based implementations that use MongoDB.

The top 10 frameworks in this test are based on the Dart, Java, Clojure, PHP and C# languages, and they use MongoDB and MySQL databases.

Fortunes

This is the most complex test that aims to exercise the full framework stack from request routing through business logic execution, database access, templating and response generation.

The top 10 frameworks are based on the C++, Java, Ur, Scala and PHP languages, with the full spectrum of databases being used (MySQL, PostgreSQL and MongoDB).

Database updates

In addition to reads, this test exercises database updates as well.

HHVM wins this test, with three Node.js based frameworks coming next. Similar to the single database query test, the top 13 implementations work with the relational MySQL DB before any NoSQL based implementations appear. This test exercises simple read and write data access by ID, which, again, should be one of the strong points of NoSQL databases.

The top 10 performing frameworks were based on the PHP, JavaScript, Scala, Java and Go languages, and all of them use the MySQL database.

Plaintext

The aim of this test is to measure how well the framework performs under extreme load conditions and massive client parallelism. Since there are no backend system dependencies involved, this test measures platform and framework concurrency limits. Throughput plateaus or starts degrading with the top-performing frameworks in this test before the client concurrency level reaches its maximum value, which seems to suggest that a bottleneck is being hit somewhere in the test setup, presumably in hardware, OS and/or framework concurrency.

Many frameworks are at their best at a concurrency level of 256, except CPPSP, which peaks at 1024. CPPSP is the only one of the top-performing implementations that is able to significantly improve its performance as the concurrency level increases from 256, but even with CPPSP, throughput actually starts dropping after the concurrency level hits the 4,096 mark. Only 12 test implementations are able to exceed 1 million requests per second. Some well-known platforms, e.g. Spring, did surprisingly poorly.

There seems to be something seriously wrong with the HHVM test run, as it generates only tens of responses per second at concurrency levels 256 and 1024.

The top 10 frameworks are based on the C++, Java, Scala and Lua languages. No database is used in this test.

Benchmark repeatability

In the scientific world, research must be repeatable in order to be credible. Similarly, the benchmark test methodology and relevant circumstances should be documented to make the results repeatable and credible. There are a few details that could be documented better to improve repeatability.

The benchmarking project source code doesn’t seem to be tagged. Tagging would be essential for making benchmarks repeatable.

A short description of the hardware and some other test environment parameters is available on the benchmark project web site. However, the environment setup (hardware + software) is expected to change over time, so this information should be documented per round. Also, the Linux distribution minor release and the exact Linux kernel version don’t appear to be identified.

Detailed data about what goes on inside the servers could be published, so that outsiders could analyze the benchmark results in a more meaningful way. System activity reports, e.g. system resource usage (CPU, memory, IO), can provide valuable clues about possible scalability issues. Also, application, framework, database and other logs can be useful to test implementers.

Resin was chosen as the Java application server over Apache Tomcat and other servlet containers for performance reasons. While I’m not contesting this statement, there wasn’t any mention of the software versions compared, and since performance attributes tend to change between releases, this premise is not repeatable.

Neither the exact JVM version nor the JVM arguments are documented for JVM based test implementation execution. Default JVM arguments are used if test implementations don’t override the settings. Since the test implementations have very similar execution profiles by definition, it could be beneficial to explicitly configure and share some JVM flags that are commonly used with server-side applications. Also, due to JVM ergonomics different GC parameters can be automatically selected based on underlying server capacity and JVM version. Documenting these parameters per benchmark round would help with repeatability. Perhaps all the middleware software versions could be logged during test execution and the full test run logs could be made available.

A custom test implementation: Asynchronous Java + NoSQL DB

Since I’ve worked recently on implementing RESTful services based on the JAX-RS 2 API with asynchronous processing (based on the Jersey 2 implementation) and the Apache Cassandra NoSQL database, I got curious about how this combination would perform against the competition, so I started coding my own test implementation. I decided to drop JAX-RS in this case, however, to eliminate any non-essential abstraction layers that might have a negative impact on performance.

One of the biggest hurdles in getting started with test development was that, at the time I started my project, there wasn’t a way to test run platform installation scripts in smaller pieces; you had to run the full installation, which took a very long time. Fortunately, since then the framework installation procedure has been compartmentalized, so it’s possible to install just the framework that you’re developing tests for. Also, the project recently added support for a fully automated development environment setup with Vagrant, which is a great help. Another excellent addition is the Travis CI integration that allows test implementation developers to gain additional assurance that their code works as expected outside their own sandbox as well. Unfortunately, Travis builds can take a very long time, so you might need to disable some of the tests that you’re not actively working on. The Travis CI environment is also a bit different from the developer and the actual benchmarking environments, so you could bump into issues with Travis builds that don’t occur in the development environment, and vice versa. Travis build failures can sometimes be very obscure and tricky to troubleshoot.

The actual test implementation code is easy enough to develop and test in isolation, outside of the real benchmark environment, but if you’re adding support for new platform components such as databases or testing platform installation scripts, it’s easiest if you have an environment that’s a close replica of the actual benchmarking environment. In this case, adding support for a new database involved creating a new DB schema, generating test data, and automating database installation and configuration.

Implementing the actual test permutation turned out to be interesting, but surprisingly laborious as well. I started seeing strange error responses occasionally when benchmarking my test implementation with ab and wrk, especially at higher loads. TFB executes Java based test implementations in the Resin web container, and after a while of puzzlement about the errors, I decided to test the code in other web containers, namely Tomcat and Jetty. It turned out that I had bumped into one Resin bug (5776) and two Tomcat bugs (56736, 56739) related to servlet asynchronous processing support.

Architecturally, test types 1 and 6 have been implemented using the traditional synchronous Servlet API, while the rest of the test implementations leverage non-blocking request handling through Servlet 3 asynchronous processing support. The test implementations store their data in the Apache Cassandra 2 NoSQL database, which is accessed using the DataStax Java Driver. Asynchronous processing is also used in the data access tier in order to minimize resource consumption. JSON data is processed with the Jackson JSON library. In Java versions predating version 8, asynchronous processing requires passing around callbacks in the form of anonymous classes, which can at times be a bit high-ceremony syntactically. Java 8 lambda expressions do away with some of the ceremonial overhead, but unfortunately TFB doesn’t yet fully support the latest Java version. I’ve previously used the JAX-RS 2 asynchronous processing API, but not the Servlet 3 async API. One thing I noticed during the test implementation was that the mechanism provided by the Servlet 3 async API for generating an error response to the client is much lower-level, less intuitive and more cumbersome than its JAX-RS async counterpart.
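
The actual implementation is linked below; to give an idea of the overall approach, here is a heavily simplified sketch of a Servlet 3 async endpoint performing a non-blocking Cassandra read with the DataStax Java Driver. The contact point, keyspace name and the World value class below are illustrative, not copied from the real code.

    import java.io.IOException;
    import java.util.concurrent.ThreadLocalRandom;
    import javax.servlet.AsyncContext;
    import javax.servlet.ServletResponse;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import com.google.common.util.concurrent.FutureCallback;
    import com.google.common.util.concurrent.Futures;

    // Simplified sketch: a Servlet 3 async endpoint that reads one row from a
    // Cassandra "world" table asynchronously and writes it out as JSON.
    @WebServlet(urlPatterns = "/db", asyncSupported = true)
    public class SingleQueryServlet extends HttpServlet {

        private static final ObjectMapper MAPPER = new ObjectMapper();
        private Session session;

        @Override
        public void init() {
            // Contact point and keyspace are illustrative; a real implementation
            // would read its connection settings from configuration.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            session = cluster.connect("tfb");
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            final AsyncContext asyncContext = req.startAsync();
            int id = ThreadLocalRandom.current().nextInt(1, 10001);
            // A real implementation would use a prepared statement.
            ResultSetFuture future =
                    session.executeAsync("SELECT id, randomNumber FROM world WHERE id = " + id);

            // The driver invokes the callback on one of its own threads, so no
            // request-handling thread is blocked while the query is in flight.
            Futures.addCallback(future, new FutureCallback<ResultSet>() {
                @Override
                public void onSuccess(ResultSet rs) {
                    try {
                        Row row = rs.one();
                        ServletResponse response = asyncContext.getResponse();
                        response.setContentType("application/json");
                        MAPPER.writeValue(response.getOutputStream(),
                                new World(row.getInt("id"), row.getInt("randomNumber")));
                    } catch (IOException e) {
                        // Error handling is simplified away here; see the note above
                        // on the Servlet 3 async API error handling mechanism.
                    } finally {
                        asyncContext.complete();
                    }
                }

                @Override
                public void onFailure(Throwable t) {
                    asyncContext.complete();
                }
            });
        }

        // Simple value object serialized by Jackson (public fields are serialized by default).
        public static class World {
            public final int id;
            public final int randomNumber;

            public World(int id, int randomNumber) {
                this.id = id;
                this.randomNumber = randomNumber;
            }
        }
    }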

The test implementation code was merged into the FrameworkBenchmarks code base, so it should be benchmarked in the next round. The code can be found here:
https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Java/servlet3-cass

Conclusions

TechEmpower’s Framework Benchmarks is a really valuable contribution to the web framework developer and user community. It holds great potential for enabling friendly competition between framework developers as well as framework users, and thus for driving up the performance of popular frameworks and the adoption of framework performance best practices. As always, there’s room for improvement. Some areas from a framework user and test implementer point of view include: making the benchmark tests and results more repeatable, publishing raw benchmark data for analysis purposes, and making test development and adding new framework components even easier.

Good job TFB team + contributors – can’t wait to see Round 10 benchmark data!

Implementing Jersey 2 Spring integration

Jersey is the excellent Java JAX-RS specification reference implementation from Oracle. Last year, when we were starting to build RESTful backend web services for a high-volume website, we chose to use the JAX-RS API as our REST framework and Spring framework for dependency injection. Jersey was our JAX-RS implementation of choice.

When the project was started, the JAX-RS 2.0 API specification was not yet released, and neither was Jersey 2.0. Since we didn’t see any fundamental deficiencies with JAX-RS 1.1, and because a stable Spring integration module existed for Jersey 1.1, we decided to go with the tried-and-true version instead of taking on the bleeding edge.

Still, I was curious to learn what could be gained by adopting the newer version, so I started looking at the JAX-RS 2 API in my free time and doing some prototyping with Jersey 2. I noticed that Jersey 2 lacked the Spring framework integration that was available for the previous version. Studying the issue further, I found that the old Spring integration module would not be directly portable to Jersey 2. The reason was that Jersey 1 builds on a custom internal dependency injection framework, while Jersey 2 has switched to HK2 for dependency injection. (HK2 is an interesting, light-weight dependency injection framework used in GlassFish.)

My original goals for Jersey-Spring integration were fairly simple:

inject Spring beans declared in application context XML into JAX-RS resource classes (using the @Autowired annotation or XML configuration)
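
In code, that goal boils down to something like the following sketch (GreetingService is a hypothetical Spring bean assumed to be declared in applicationContext.xml):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import org.springframework.beans.factory.annotation.Autowired;

    // A Jersey-managed JAX-RS resource with a Spring bean injected into it.
    // Assumed to exist elsewhere: public interface GreetingService { String greet(); }
    // with an implementation declared as a bean in applicationContext.xml.
    @Path("greeting")
    public class GreetingResource {

        @Autowired
        private GreetingService greetingService;

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String greet() {
            return greetingService.greet();
        }
    }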

So, I thought I’d dig a bit deeper and started looking into the Jersey source code. I was happy to notice that Jersey development was being done in an open and approachable manner. The source code was hosted on GitHub and updated frequently. After a while of digging, a high-level design for Jersey-Spring integration started to take shape. It took quite some experimenting and many iterations before the first working prototype. At that point, being an optimist, I hoped I was nearly done and contacted the jersey-users mailing list to get feedback on the design and implementation. The feedback: add more use cases, provide sample code, implement test automation, sign the Oracle Contributor Agreement 🙂 (The feedback, of course, was very reasonable from the Jersey software product point of view.) So, while it wasn’t quite back to the drawing board, at this point I realized the last mile was going to be considerably longer than I had hoped for.

Eventually, though, the Jersey-Spring integration got merged into the Jersey 2 code base in the Jersey v2.2 release. The integration API is based on annotations and supports the following features:

  • inject Spring beans into Jersey managed JAX-RS resource classes (using org.springframework.beans.factory.annotation.Autowired or javax.inject.Inject). @Qualifier and @Named annotations can be used to further qualify the injected instance.
  • allow JAX-RS resource class instance lifecycle to be managed by Spring instead of Jersey (org.springframework.stereotype.Component); see the sketch after this list
  • support different Spring bean injection scopes: singleton, request, prototype. Bean scope is declared in applicationContext.xml.
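
As a brief illustration of the last two points, handing a resource class over to Spring for lifecycle management is just a matter of annotating it as a Spring component (the bean scope itself is declared in applicationContext.xml, as noted above):

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import org.springframework.stereotype.Component;

    // @Component hands instantiation and lifecycle of this resource over to Spring;
    // Jersey then looks the instance up from the Spring application context instead
    // of creating it itself. The bean scope (singleton, request or prototype) is
    // declared for this bean in applicationContext.xml.
    @Component
    @Path("spring-managed")
    public class SpringManagedResource {

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String message() {
            return "hello from a Spring-managed resource";
        }
    }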

The implementation

Source code for the Jersey-Spring integration can be found in the main Jersey source repository:
https://github.com/jersey/jersey/tree/2.5.1/ext/spring3/src/main/java/org/glassfish/jersey/server/spring

Jersey-Spring integration consists of the following implementation classes:

org.glassfish.jersey.server.spring.SpringComponentProvider
This ComponentProvider implementation is registered with the Jersey SPI extension mechanism and is responsible for bootstrapping Jersey 2 Spring integration. It makes Jersey skip JAX-RS life-cycle management for Spring components. Otherwise, Jersey would bind these classes to the HK2 ServiceLocator with the Jersey default scope, without respecting the scope declared for the Spring component. This class also initializes the HK2 spring-bridge and registers the Spring @Autowired annotation handler with the HK2 ServiceLocator. When running outside of a servlet context, a custom org.springframework.web.context.request.RequestScope implementation is configured to implement request scope for beans.

org.glassfish.jersey.server.spring.AutowiredInjectResolver
An HK2 injection resolver that injects dependencies declared using the Spring framework @Autowired annotation. HK2 invokes this resolver to resolve dependencies annotated with @Autowired.

org.glassfish.jersey.server.spring.SpringLifecycleListener
Handles container lifecycle events: refreshes the Spring context on reload and closes it on shutdown.

org.glassfish.jersey.server.spring.SpringWebApplicationInitializer
A convenience class that helps the user avoid having to configure the Spring ContextLoaderListener and RequestContextListener in web.xml. Alternatively, the user can configure these in the web application’s web.xml.

In addition to the actual implementation code, the integration includes samples and tests, which can be very helpful in getting developers started.

The JAX-RS specification defines its own dependency injection API. Additionally, Jersey supports JSR 330 style injection, which is not mandated by the JAX-RS specification. Jersey-Spring integration adds support for Spring style injection. Both JAX-RS injection and the Spring integration provide a mechanism for binding objects into a registry, so that the objects can later be looked up and injected. If you’re using a full Java EE application server, such as GlassFish, you also have the option of binding objects via the CDI API. In non-Java EE environments it’s possible to use CDI by embedding a container implementation such as Weld. Yet another binding method is to use the Jersey-specific API. The test code includes a JAX-RS application class that demonstrates how this can be done.
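
For reference, registering your own binding through the Jersey-specific API looks roughly like the following sketch. This is not the application class from the actual test code; GreetingService and GreetingServiceImpl are hypothetical types used here for illustration.

    import javax.inject.Singleton;
    import org.glassfish.hk2.utilities.binding.AbstractBinder;
    import org.glassfish.jersey.server.ResourceConfig;

    // A JAX-RS application class that registers a custom binding using Jersey's
    // HK2-based binding API, so GreetingService can be injected into resources.
    // Assumed elsewhere: public interface GreetingService { ... } and
    // public class GreetingServiceImpl implements GreetingService { ... }
    public class MyApplication extends ResourceConfig {

        public MyApplication() {
            packages("com.example.resources"); // illustrative package to scan for resources
            register(new AbstractBinder() {
                @Override
                protected void configure() {
                    bind(GreetingServiceImpl.class).to(GreetingService.class).in(Singleton.class);
                }
            });
        }
    }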

Modifying Jersey-Spring

If you want to work on Jersey-Spring, you need to check out the Jersey 2 code base and build it. That process is rather easy and well documented:

https://jersey.java.net/documentation/2.5.1/how-to-build.html

You simply need to clone the repository and build the source. The build system is Maven based. You can also easily import the code base into your IDE of choice (I tried it with IDEA 12, Eclipse 4.3 and NetBeans 8.0 beta) using its Maven plugin. I noticed, however, that some integration tests failed with Maven 3.0 and I had to upgrade to 3.1, but apart from that there weren’t any issues.

After building Jersey 2, you can modify the Spring integration module and rebuild only the changed modules to save time.

Tests

Jersey-Spring integration tests have been built using the Jersey test framework and they’re run under the control of the maven-failsafe-plugin. The integration tests consist of the actual test code and a JAX-RS backend webapp that the tests exercise. The backend gets deployed into an external Jetty servlet container using the jetty-maven-plugin. The Jersey-Spring tests can be executed separately from the rest of the tests. The integration tests can be found in a separate Maven submodule here:

https://github.com/jersey/jersey/tree/2.5.1/tests/integration/spring3

In addition to demonstrating the basic features of Jersey-Spring, the tests show how to use the different Spring bean scopes: singleton, request and prototype. The tests also demonstrate using a JAX-RS application class to register your own dependencies in the container, in different scopes.
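
To give a flavor of Jersey test framework based tests, here is a minimal standalone sketch. The actual spring3 integration tests are more involved, since they exercise a Spring-wired webapp deployed in an external Jetty container; the sketch also assumes a Jersey test container provider is on the test classpath.

    import static org.junit.Assert.assertEquals;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Application;
    import org.glassfish.jersey.server.ResourceConfig;
    import org.glassfish.jersey.test.JerseyTest;
    import org.junit.Test;

    // Minimal Jersey test framework example: JerseyTest starts a test container,
    // deploys the configured application and exposes a preconfigured client via target().
    public class SimpleResourceTest extends JerseyTest {

        @Path("simple")
        public static class SimpleResource {
            @GET
            public String hello() {
                return "Hello!";
            }
        }

        @Override
        protected Application configure() {
            return new ResourceConfig(SimpleResource.class);
        }

        @Test
        public void returnsGreeting() {
            assertEquals("Hello!", target("simple").request().get(String.class));
        }
    }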

Conclusions

I think the JAX-RS 2.0 API provides a nice and clean way of implementing RESTful interfaces in Java. Development of the Jersey JAX-RS reference implementation is being conducted in an open and transparent manner. Jersey also has a large and active user community.

As noted by Frederick Brooks, Jr.: “All programmers are optimists”. It’s often easy to underestimate the amount of work required to integrate code with a relatively large and complex code base, particularly when you need to mediate between multiple different frameworks (in this case Jersey, HK2 and the Spring framework). Also, though Jersey has pretty good user documentation, I missed high-level architectural documentation on the design and implementation. A lot of poking around was needed to be able to identify the correct integration points. Fortunately, the Jersey build system is pretty easy to use and allows building only selected parts, which makes experimenting and the change-build-test cycle relatively fast.

Both Jersey and the Spring framework provide a rich set of features and you can use them together in a multitude of ways. Jersey-Spring integration in its current form covers a couple of basic integration scenarios between the two. If you find that your particular scenario isn’t supported, join the jersey-users mailing list to discuss it. You can also just check out the code, implement your changes and contribute them by submitting a pull request on GitHub.