practicing techie

tech oriented notes to self and lessons learned

Tag Archives: java

Scala def macros and Java interoperability

Most of the time Scala-Java interoperability works pretty well from a Scala application developer's perspective: you can use a wealth of Java class libraries in your Scala programs with fairly little effort. Simply using JavaConversions and maybe a few custom wrappers usually gets you pretty far. Sure, there can be some friction resulting from the use of different programming paradigms and mutable data structures, but if you're a pragmatist, re-using Java code in Scala is nevertheless quite feasible. Scala version 2.10 saw the introduction of an experimental language feature called macros. Specification work on Scala macros recognizes different flavors of macros, but versions 2.10 and 2.11, as well as the future version 2.12, support only def macros. This macro variety behaves similarly to methods, except that def macro invocations are expanded at compile time. Here's how EPFL Scala team member and "Scala macros guy" Eugene Burmako characterizes def macros:

If, during type-checking, the compiler encounters an application of the macro m(args), it will expand that application by invoking the corresponding macro implementation method, with the abstract-syntax trees of the argument expressions args as arguments. The result of the macro implementation is another abstract syntax tree, which will be inlined at the call site and will be type-checked in turn.

Def macros resemble C/C++ preprocessor macros, the m4 macro processor and similar tools in that the result of macro application is inlined at the call site. A notable difference is that Scala macros are well integrated into the language, meaning e.g. that the results of macro expansion are type-checked. But let's look at this from the code perspective. I've implemented a "hello, world" macro and an object called MyReusableService that defines two methods: a regular method and one implemented as a macro. Two objects, ScalaClient and JavaClient, invoke methods on MyReusableService. Here's what happens when compiling Java code that tries to invoke a macro method on a Scala object:

~ ᐅ sbt 'runMain com.practicingtechie.gist.macros.JavaClient'
...
[error] /Users/marko/blog-gists/macros-interop/src/main/java/com/practicingtechie/gist/macros/JavaClient.java:6: cannot find symbol
[error] symbol: method macroMethod()
[error] location: class com.practicingtechie.gist.macros.MyReusableService
[error] MyReusableService.macroMethod
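
For reference, the failing caller is essentially just the following (a minimal sketch; the actual gist source may differ in details):

package com.practicingtechie.gist.macros;

public class JavaClient {
  public static void main(String[] args) {
    // compiles fine: regularMethod exists as a real method in the emitted bytecode
    MyReusableService.regularMethod();
    // fails to compile: macroMethod is expanded by scalac at Scala call sites
    // and never materializes as a method in the class file
    MyReusableService.macroMethod();
  }
}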

Method macroMethod is defined by the MyReusableService Scala object, but the method is not visible when inspecting the disassembled class file:

~ ᐅ javap -cp target/scala-2.11/classes com.practicingtechie.gist.macros.MyReusableService$
Compiled from "MyReusableService.scala"
public final class com.practicingtechie.gist.macros.MyReusableService$ {
  public static final com.practicingtechie.gist.macros.MyReusableService$ MODULE$;
  public static {};
  public void regularMethod();
}

After removing the macroMethod invocation, JavaClient compiles. ScalaClient, however, is more interesting. Here's the macro debugging output from running the code (with the "-Ymacro-debug-lite" compiler argument):

~ ᐅ sbt 'runMain com.practicingtechie.gist.macros.ScalaClient'
...
[info] Compiling 2 Scala sources and 1 Java source to /Users/marko/blog-gists/macros-interop/target/scala-2.11/classes...
performing macro expansion MyReusableService.macroMethod at source-/Users/marko/blog-gists/macros-interop/src/main/scala/com/practicingtechie/gist/macros/ScalaClient.scala,line-7,offset=158
println("Hello macro world")
Apply(Ident(TermName("println")), List(Literal(Constant("Hello macro world"))))
[info] Running com.practicingtechie.gist.macros.ScalaClient
Hello, from regular method
Hello macro world

In the above extract we can see the location where the macro was applied, as well as the results of macro expansion, both as Scala code and as an abstract syntax tree (AST) representation.

Finally, we can see the macro expansion results compiled into bytecode at the ScalaClient call site:

~ ᐅ javap -c -cp target/scala-2.11/classes com.practicingtechie.gist.macros.ScalaClient$
...
  public void main(java.lang.String[]);
    Code:
       0: getstatic     #19                 // Field com/practicingtechie/gist/macros/MyReusableService$.MODULE$:Lcom/practicingtechie/gist/macros/MyReusableService$;
       3: invokevirtual #22                 // Method com/practicingtechie/gist/macros/MyReusableService$.regularMethod:()V
       6: getstatic     #27                 // Field scala/Predef$.MODULE$:Lscala/Predef$;
       9: ldc           #29                 // String Hello macro world
      11: invokevirtual #33                 // Method scala/Predef$.println:(Ljava/lang/Object;)V
...

Scala def macros are a very interesting language feature that's planned to be officially supported in a future Scala version. Def macros are implemented by the Scala compiler, so a function or method whose definition is macro based won't be accessible from Java code. This is because, unlike ordinary function or method invocations, the result of macro application gets expanded at the call site. Still, Scala functions or methods that simply invoke macros from within their body can nonetheless be called from Java code. Depending on how def macros are used, they can sometimes hinder reuse of Scala code from Java or other JVM-based languages.


What’s the strangest bug you’ve squashed?

As software engineers we're tasked with creating solutions to customers' business problems. Software systems are complex, and every once in a while flaws inevitably slip into the design or implementation. Sometimes flaws also creep in through the use of third-party software, which can make problems all the more difficult to track down. Each bug has a story to tell, and the stories about hunting the most puzzling, challenging, annoying and time-consuming bugs can sometimes live with you for a long time.

What’s the strangest bug you’ve managed to squash?

Mine was from quite a few years ago, when we were working on a greenfield Java EE software project for a client in the health care industry. With the alpha release nearing, I was deploying a new build in a newly created server environment, and during testing we found a bug in an isolated software feature. Initially, I thought there was something wrong with the new environment setup, but after a while I realized the bug seemed to be related to the way the new release was built. During the development phase we had been building the software with the Oracle JDeveloper IDE, but had since moved to Apache Ant with the Sun Java JDK. So, at that point I thought (this was Java EE, after all) that it was a packaging issue. I carefully compared the working and broken release packages, but couldn't find any significant differences. Though troublesome to reproduce, the problem fortunately was reproducible, so I started tracking it down with remote debugging. After a while I noticed that the software was executing a weird code path I couldn't quite explain.

Puzzled by the strange behaviour, I didn't really have a clear idea of how to continue troubleshooting, so I decided to take a long shot and compare the compiled bytecode from the working and broken releases. This was my first time looking at disassembled Java bytecode, which made the analysis a bit slow, and all the more interesting, but fortunately my remote debugging sessions had helped me identify some likely places for the bug. After staring at the disassembled bytecode for a while, an initially innocuous-looking bit of code started to look suspect: a mutator method was present in one class in the working build, but missing from the broken one. It turned out that when the code was built with Oracle JDeveloper, the IDE automatically generated a mutator method for a subclass, which just happened to override a buggy superclass mutator method. In the broken build this mutator method wasn't generated, causing the buggy superclass mutator method to execute.

This story happened years ago, and while there are lots of things I do differently nowadays, including the use of different technologies, design approaches, unit testing, test automation, and build methods and tooling, this remains one of my more memorable bug squashing sessions.

Do you have an intriguing bug hunting story to share?

An open web application framework benchmark

Selecting a platform for your next application development project can be a complex and burdensome undertaking. It can also be very intriguing and a lot of fun. There's a wide range of different approaches to take: at one end The Architect will attend conferences, purchase and study analyst reports from established technology research companies such as Gartner, and base the evaluation on analyst views. Another approach is to set up a cross-disciplinary evaluation committee that will collect a wishlist of platform requirements from around the organization and make its decision based on a consensus vote. The first approach is very autocratic, while the second can sometimes lead to lack of focus. A clear, coherent vision of requirements and prioritization is essential for the success of the evaluation. Due to these problems, a middle road and a more pragmatic approach is becoming increasingly popular: a tightly-knit group of senior propellerheads use a more empirical method of analysing requirements, study and experiment with potential solution stack elements, and brainstorm to produce a short list of candidates to be validated using hands-on architecture exercises and smell tests. Though hands-on experimentation can lead to better results, the cost of this method can be prohibitive, so often only a handful of solutions that pass the first phase screening can be evaluated this way.

Platform evaluation criteria depend on the project requirements and may include:

  • developer productivity
  • platform stability
  • roadmap alignment with projected requirements
  • tools support
  • information security
  • strategic partnerships
  • developer ecosystem
  • existing software license and human capital investments
  • etc.

Performance and scalability are often high priority concerns. They are also among those platform properties that can be formulated into quantifiable criteria, though the key challenge here is how to model user behavior and implement performance tests that accurately reflect your expected workloads. Benchmarking several different platforms only adds to the cost.

A company called TechEmpower has started a project called TechEmpower Framework Benchmarks, or TFB for short, that aims to compare the performance of different web frameworks. The project publishes benchmark results that application developers can use to make more informed decisions when selecting frameworks. What's particularly interesting about FrameworkBenchmarks is that it's a collaborative effort conducted in an open manner: development related discussions take place in an online forum and the source code repository is publicly available on GitHub. Doing test implementation development in the open is important for enabling peer review, and it allows implementations to evolve and improve over time. The project implements performance tests for a wide variety of frameworks, and chances are that the ones you're planning to use are included. If not, you can create your own tests and submit them to be included in the project code base. You can also take the tests and run the benchmarks on your own hardware.

Openly published test implementations are not only useful for producing benchmark data, but can also be used by framework developers to communicate framework performance related best practices to application developers. They also allow framework developers to receive reproducible performance benchmarking feedback and data for optimization purposes.

It's interesting to note that the test implementations have been designed and built by different groups and individuals, and some may have been more rigorously optimized than others. The benchmarks measure the test implementation as much as they measure the framework, and in some cases a suboptimal test implementation will result in poor overall performance. Framework torchbearers are expected to take their best shot at optimizing the test implementation, so the implementations should eventually converge toward optimal solutions, given enough active framework pundits.

Test types

In the project’s parlance, the combination of programming language, framework and database used is termed “framework permutation” or just permutation, and some test types have been implemented in 100+ different permutations. The different test types include:

  1. JSON serialization
    “test framework fundamentals including keep-alive support, request routing, request header parsing, object instantiation, JSON serialization, response header generation, and request count throughput.”
  2. Single database query
    “exercise the framework’s object-relational mapper (ORM), random number generator, database driver, and database connection pool.”
  3. Multiple database queries
    “This test is a variation of Test #2 and also uses the World table. Multiple rows are fetched to more dramatically punish the database driver and connection pool. At the highest queries-per-request tested (20), this test demonstrates all frameworks’ convergence toward zero requests-per-second as database activity increases.”
  4. Fortunes
    “This test exercises the ORM, database connectivity, dynamic-size collections, sorting, server-side templates, XSS countermeasures, and character encoding.”
  5. Database updates
    “This test is a variation of Test #3 that exercises the ORM’s persistence of objects and the database driver’s performance at running UPDATE statements or similar. The spirit of this test is to exercise a variable number of read-then-write style database operations.”
  6. Plaintext
    “This test is an exercise of the request-routing fundamentals only, designed to demonstrate the capacity of high-performance platforms in particular. The response payload is still small, meaning good performance is still necessary in order to saturate the gigabit Ethernet of the test environment.”

Notes on Round 9 results

Currently, the latest benchmark is Round 9 and the result data is published on the project web page. The data is not available in machine-readable form and it can't be sorted by column for analysing patterns. It can, however, be imported into a spreadsheet program fairly easily, so I took the data and analyzed it a bit. Some interesting observations can be made just by looking at the raw data. In addition to comparing throughput, it's also interesting to compare how well frameworks scale. One way of quantifying scalability is to take a test implementation's throughput figures at the lowest and highest concurrency levels (for test types 1, 2, 4 and 6) and plot them on a 2-D plane. A line can then be drawn between these two points, with the slope characterizing scalability. Well-scaling test implementations would be expected to have a steep positive slope for test types 1, 2, 4 and 6, whereas for test types 3 and 5 the slope is expected to be negative.
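
Expressed in code, the scalability rating boils down to a simple two-point slope (my own sketch of the calculation, not something defined by the TFB project):

public class ScalabilityRating {
  // Slope of the line drawn between the throughput measured at the
  // lowest and at the highest client concurrency level; for test types
  // 1, 2, 4 and 6 a steeper positive slope means better scaling.
  public static double slope(double lowConcurrency, double lowRps,
                             double highConcurrency, double highRps) {
    return (highRps - lowRps) / (highConcurrency - lowConcurrency);
  }
}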

This model is not entirely without problems since the scalability rating is not relative to the throughput, so e.g. a poorly performing framework can end up having a great scalability rating. As a result, you’d have to look at these figures together.

To better visualize throughput against concurrency level (“Peak Hosting” environment data), I created a small web app that’s available at http://tfb-kippo.rhcloud.com/ (the app is subject to removal without notice).

JSON serialization

The JSON serialization test aims to measure framework overhead. One could argue that it’s a bit of a micro benchmark, but it should demonstrate how well the framework does with basic tasks like request routing, JSON serialization and response generation.

The top 10 frameworks were based on the following programming languages: C++, Java, Lua, Ur and Go. The C++ based CPPSP was the clear winner, while the next 6 contestants were Java-based. No database is used in this test type.

The top 7 frameworks with the highest throughput also have the highest scalability rating. After that, both figures start declining fairly rapidly. This is a very simple test, and it's a bit of a surprise to see such large variation in the results. In their commentary, TechEmpower attributes some of the differences to how well frameworks work on a NUMA-based system architecture.

Quite a few frameworks are Java or JVM based, and rather large variations exist even within this group, so clearly neither the language nor the JVM is the impeding factor in this group.

I was surprised by the Node.js and HHVM rankings. Unfortunately, the Scala-based Spray test implementation, as well as the JVM-based polyglot framework Vert.x implementation, were removed due to being outdated. I hope to see these included in a future benchmark round.

Single database query

This test type measures database access throughput and parallelizability. Again, a surprisingly large spread in performance can be observed for a fairly trivial test case. This would seem to suggest that framework or database access method overhead contributes significantly to the results. Is the database access technology (DB driver or ORM) a bottleneck? Or is the backend system? It would be interesting to look at the system activity reports from the test runs to analyze potential bottlenecks in more detail.

Before seeing the results I would've expected the DB backend to be the bottleneck, but this doesn't appear to be clear-cut, given that the top performing test implementations, as well as many of the bottom performing ones, use the same DB. It was interesting to note that the top six test implementations use a relational database, with the first NoSQL based implementation taking 7th place. This test runs DB read statements by ID, which NoSQL databases should be very good at.

The top 10 performing frameworks were based on the Java, C++, Lua and PHP languages and use the MySQL, PostgreSQL and MongoDB databases. Java-based Gemini leads, with CPPSP second; both use the MySQL DB. The Spring based test implementation's performance was a bit of a disappointment.

Multiple database queries

Where the previous test exercised a single database query per request, this test does a variable number of database queries per request. Again, I would've assumed this test would measure the backend database performance more than the framework performance, but it seems that framework and database access method overhead can contribute significantly here as well.

The top two performers in this test are Dart based implementations that use MongoDB.

Top 10 frameworks in this test are based on Dart, Java, Clojure, PHP and C# languages and they use MongoDB and MySQL databases.

Fortunes

This is the most complex test that aims to exercise the full framework stack from request routing through business logic execution, database access, templating and response generation.

The top 10 frameworks are based on the C++, Java, Ur, Scala and PHP languages, with the full spectrum of databases being used (MySQL, PostgreSQL and MongoDB).

Database updates

In addition to reads this test exercises database updates as well.

HHVM wins this test, with 3 Node.js based frameworks coming next. Similar to the single database query test, the top 13 implementations work with the relational MySQL DB before the first NoSQL based implementation appears. This test exercises simple read and write data access by ID, which, again, should be one of the NoSQL databases' strong points.

The top 10 performing frameworks were based on the PHP, JavaScript, Scala, Java and Go languages, all of which use the MySQL database.

Plaintext

The aim of this test is to measure how well the framework performs under extreme load conditions and massive client parallelism. Since there are no backend system dependencies involved, this test measures platform and framework concurrency limits. Throughput plateaus or starts degrading for the top-performing frameworks before the client concurrency level reaches its maximum value, which suggests that a bottleneck is being hit somewhere in the test setup, presumably in hardware, OS and/or framework concurrency.

Many frameworks are at their best with a concurrency level of 256, except CPPSP, which peaks at 1,024. CPPSP is the only one of the top-performing implementations able to significantly improve its performance as the concurrency level increases beyond 256, but even CPPSP's throughput starts dropping after the concurrency level hits the 4,096 mark. Only 12 test implementations are able to exceed 1 M requests per second. Some well-known platforms, e.g. Spring, did surprisingly poorly.

There seems to be something seriously wrong with the HHVM test run, as it generates only tens of responses per second at concurrency levels 256 and 1,024.

The top 10 frameworks are based on the C++, Java, Scala and Lua languages. No database is used in this test.

Benchmark repeatability

In the scientific world, research must be repeatable in order to be credible. Similarly, the benchmark test methodology and relevant circumstances should be documented to make the results repeatable and credible. There are a few details that could be documented to improve repeatability.

The benchmarking project source code doesn’t seem to be tagged. Tagging would be essential for making benchmarks repeatable.

A short description of the hardware and some other test environment parameters is available on the benchmark project web site. However, the environment setup (hardware + software) is expected to change over time, so this information should be documented per round. Also, neither the Linux distribution minor release nor the exact Linux kernel version appears to be identified.

Detailed data about what goes on inside the servers could be published so that outside parties could analyze benchmark results in a more meaningful way. System activity reports, e.g. system resource usage (CPU, memory, IO), can provide valuable clues to possible scalability issues. Also, application, framework, database and other logs can be useful to test implementers.

Resin was chosen as the Java application server over Apache Tomcat and other servlet containers for performance reasons. While I'm not contesting this statement, there wasn't any mention of software versions, and since performance attributes tend to change over time between releases, this premise is not repeatable.

Neither the exact JVM version nor the JVM arguments are documented for JVM based test implementation execution. Default JVM arguments are used if test implementations don’t override the settings. Since the test implementations have very similar execution profiles by definition, it could be beneficial to explicitly configure and share some JVM flags that are commonly used with server-side applications. Also, due to JVM ergonomics different GC parameters can be automatically selected based on underlying server capacity and JVM version. Documenting these parameters per benchmark round would help with repeatability. Perhaps all the middleware software versions could be logged during test execution and the full test run logs could be made available.

A custom test implementation: Asynchronous Java + NoSQL DB

Since I've recently worked on implementing RESTful services based on the JAX-RS 2 API with asynchronous processing (based on the Jersey 2 implementation) and the Apache Cassandra NoSQL database, I got curious about how this combination would perform against the competition, so I started coding my own test implementation. I decided to drop JAX-RS in this case, however, to eliminate any non-essential abstraction layers that might have a negative impact on performance.

One of the biggest hurdles in getting started with test development was that, at the time I started my project, there wasn't a way to test run platform installation scripts in smaller pieces; you had to run the full installation, which took a very long time. Fortunately, the framework installation procedure has since been compartmentalized, so it's possible to install just the framework that you're developing tests for. Also, the project recently added support for fully automated development environment setup with Vagrant, which is a great help. Another excellent addition is Travis CI integration, which allows test implementation developers to gain additional assurance that their code also works as expected outside their sandbox. Unfortunately, Travis builds can take a very long time, so you might need to disable some of the tests that you're not actively working on. The Travis CI environment is also a bit different from the developer and actual benchmarking environments, so you can bump into issues with Travis builds that don't occur in the development environment, and vice versa. Travis build failures can sometimes be very obscure and tricky to troubleshoot.

The actual test implementation code is easy enough to develop and test in isolation, outside of the real benchmark environment, but if you’re adding support for new platform components such as databases or testing platform installation scripts, it’s easiest if you have an environment that’s a close replica of the actual benchmarking environment. In this case adding support for a new database involved creating a new DB schema, test data generation and automating database installation and configuration.

Implementing the actual test permutation turned out to be interesting, but surprisingly laborious as well. I started occasionally seeing strange error responses when benchmarking my test implementation with ab and wrk, especially at higher loads. TFB executes Java-based test implementations in the Resin web container, and after a while of puzzlement over the errors, I decided to test the code in other web containers, namely Tomcat and Jetty. It turned out that I had bumped into 1 Resin bug (5776) and 2 Tomcat bugs (56736, 56739) related to servlet asynchronous processing support.

Architecturally, test types 1 and 6 are implemented using the traditional synchronous Servlet API, while the rest of the test implementations leverage non-blocking request handling through Servlet 3 asynchronous processing support. The test implementations store their data in the Apache Cassandra 2 NoSQL database, which is accessed using the DataStax Java Driver. Asynchronous processing is also used in the data access tier in order to minimize resource consumption. JSON data is processed with the Jackson JSON library. In Java versions predating version 8, asynchronous processing requires passing around callbacks in the form of anonymous classes, which can at times be syntactically high-ceremony. Java 8 lambda expressions do away with some of the ceremonial overhead, but unfortunately TFB doesn't yet fully support the latest Java version. I had previously used the JAX-RS 2 asynchronous processing API, but not the Servlet 3 async API. One thing I noticed during test implementation was that the mechanism the Servlet 3 async API provides for generating an error response to the client is much lower level, less intuitive and more cumbersome than its JAX-RS async counterpart.
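
The database test types follow roughly the pattern in the condensed sketch below (illustrative only: the table and column names, and details such as error handling, differ in the actual implementation):

import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;

@WebServlet(urlPatterns = "/db", asyncSupported = true)
public class SingleQueryServlet extends HttpServlet {
  private final ObjectMapper mapper = new ObjectMapper();
  private Session session;

  @Override
  public void init() {
    // contact point and keyspace are placeholders
    session = Cluster.builder().addContactPoint("127.0.0.1").build().connect("tfb");
  }

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
    // release the request thread immediately; the response is completed
    // later from the database driver's callback thread
    final AsyncContext ctx = req.startAsync();
    ResultSetFuture future = session.executeAsync(
        "SELECT id, randomnumber FROM world WHERE id = ?",
        ThreadLocalRandom.current().nextInt(1, 10001));
    Futures.addCallback(future, new FutureCallback<ResultSet>() {
      @Override public void onSuccess(ResultSet rs) {
        try {
          Row row = rs.one();
          HttpServletResponse response = (HttpServletResponse) ctx.getResponse();
          response.setContentType("application/json");
          mapper.writeValue(response.getOutputStream(),
              new World(row.getInt("id"), row.getInt("randomnumber")));
        } catch (IOException e) {
          // error handling elided for brevity
        } finally {
          ctx.complete();
        }
      }
      @Override public void onFailure(Throwable t) {
        // generating a proper error response through the Servlet async API
        // is notably more cumbersome than in JAX-RS 2; elided here
        ctx.complete();
      }
    });
  }

  public static class World {
    public final int id;
    public final int randomNumber;
    World(int id, int randomNumber) { this.id = id; this.randomNumber = randomNumber; }
  }
}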

The test implementation code was merged in the FrameworkBenchmarks code base, so it should be benchmarked on the next round. The code can be found here:
https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Java/servlet3-cass

Conclusions

TechEmpower's Framework Benchmarks is a really valuable contribution to the web framework developer and user community. It holds great potential for enabling friendly competition between framework developers, as well as framework users, and thus driving up the performance of popular frameworks and the adoption of framework performance best practices. As always, there's room for improvement. Some areas, from a framework user and test implementer point of view, include: making the benchmark tests and results more repeatable, publishing raw benchmark data for analysis purposes, and making test development and adding new framework components even easier.

Good job TFB team + contributors – can’t wait to see Round 10 benchmark data!

Daemonizing JVM-based applications

Deployment architecture design is a vital part of any custom-built server-side application development project. Due to its significance, deployment architecture design should commence early and proceed in tandem with other development activities. The complexity of deployment architecture design depends on many aspects, including the scalability and availability targets of the provided service, rollout processes, as well as the technical properties of the system architecture.

Serviceability and operational concerns, such as deployment security, monitoring, backup/restore etc., relate to the broader topic of deployment architecture design. These concerns are cross-cutting in nature and may need to be addressed on different levels ranging from service rollout processes to the practical system management details.

On the system management detail level the following challenges often arise when using a pure JVM-based application deployment model (on Unix-like platforms):

  • how to securely shut down the app server or application? Often, a TCP listener thread listening for shutdown requests is used. If you have many instances of the same app server deployed on the same host, it's sometimes easy to confuse the instances and shut down the wrong one. Also, you'll have to prevent unauthorized access to the shutdown listener.
  • creating init scripts that integrate seamlessly with system startup and shutdown mechanisms (e.g. Sys-V init, systemd, Upstart etc.)
  • how to automatically restart the application if it dies?
  • log file management. Application logs can be managed (e.g. rotated, compressed, deleted) by a log library. App server or platform logs can sometimes also be managed using a log library, but occasionally integration with OS level tools (e.g. logrotate) may be necessary.

There are a couple of solutions to these problems that enable tighter integration between the operating system and the application or application server. One widely used and generic solution is the Java Service Wrapper. The Java Service Wrapper is good at addressing the above challenges; it is released under a proprietary license, and a GPL v2 based community licensing option is available as well.

Apache commons daemon is another option. It has its roots in Apache Tomcat and integrates well with that app server, but it's much more generic, and in addition to Java, commons daemon can also be used with other JVM-based languages such as Scala. As the name implies, commons daemon is Apache licensed.

Commons daemon includes the following features:

  • automatically restart JVM if it dies
  • enable secure shutdown of the JVM process using standard OS mechanisms (Tomcat's TCP based shutdown mechanism is error-prone and insecure)
  • redirect STDERR/STDOUT and set JVM process name
  • allow integration with OS init script mechanisms (record JVM process pid)
  • detach JVM process from parent process and console
  • run JVM and application with reduced OS privileges
  • allow coordinating log file management with OS tools such as logrotate (reopen log files with SIGUSR1 signal)


Deploying commons daemon

From an application developer point of view, commons daemon consists of two parts: the jsvc binary used for starting applications, and the commons daemon Java API. During startup, the jsvc binary bootstraps the application through lifecycle methods that are implemented by the application and defined by the commons daemon Java API. Jsvc creates a control process for monitoring and restarting the application upon abnormal termination. Here's an outline for deploying commons daemon with your application:

  1. implement the commons daemon API lifecycle methods in an application bootstrap class (see Using jsvc directly; a minimal sketch follows this list).
  2. compile and install jsvc (note that it's usually not good practice to install a compiler toolchain on production or QA servers).
  3. place the commons-daemon API jar in the application classpath
  4. figure out the command line arguments for running your app through jsvc. Check out bin/daemon.sh in the Tomcat distribution for reference.
  5. create a proper init script based on the previous step. Tomcat can be installed via the package manager on many Linux distributions, and the package typically comes with an init script that can be used as a reference.
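
To make step 1 concrete, here's a minimal sketch of what a bootstrap class might look like (MyServer is a hypothetical placeholder for your application's own startup and shutdown logic):

import org.apache.commons.daemon.Daemon;
import org.apache.commons.daemon.DaemonContext;

public class MyServiceDaemon implements Daemon {
  private MyServer server; // placeholder for your application

  @Override
  public void init(DaemonContext context) throws Exception {
    // invoked by jsvc's launcher, possibly before privileges are dropped;
    // parse arguments and allocate privileged resources here
    server = new MyServer(context.getArguments());
  }

  @Override
  public void start() throws Exception {
    server.start(); // must not block; spawn worker threads and return
  }

  @Override
  public void stop() throws Exception {
    server.stop(); // invoked when the daemon is being shut down
  }

  @Override
  public void destroy() {
    server = null; // release any resources allocated in init()
  }
}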


Practical experiences

The Tomcat distribution includes "daemon.sh", a generic wrapper shell script that can be used as a basis for creating a system specific init script variant. One of the issues I encountered was that the wait configuration parameter's default value couldn't be overridden by the invoker of the wrapper script. In some cases Tomcat's random number generator initialization could exceed the maximum wait time, resulting in the init script reporting a failure even though the app server would eventually start. This seems to be fixed now.

Another issue was that the wrapper script doesn't allow passing JVM parameters that contain spaces, which can be handy e.g. in conjunction with the JVM "-XX:OnOutOfMemoryError" & co. parameters. Using the wrapper script is optional, and it can also be changed easily, but since it includes some pieces of useful functionality, I'd rather reuse it than duplicate it, so I created a feature request and proposed a tiny patch for this (#55104).

While figuring out the correct command line arguments for getting jsvc to bootstrap your application, the "-debug" argument can be quite useful for troubleshooting. Also, by default jsvc changes the working directory to /, in which case absolute paths should typically be used with other options. The "-cwd" option can be used for overriding the default working directory.


Daemonizing Jetty

In addition to Tomcat, Jetty is another servlet container I often use. Using commons daemon with Tomcat poses no challenge since the integration already exists, so I decided to see how things would work with an app server that doesn’t support commons daemon out-of-the-box.

To implement the necessary changes in Jetty, I cloned the Jetty source code repository, added the jsvc lifecycle methods to the Jetty bootstrap class and built Jetty. After that, I started experimenting with jsvc command line arguments for bootstrapping Jetty. Jetty comes with a jetty.sh startup script that has an option called "check" for outputting various pieces of information related to the installation; among other things, it outputs the command line arguments that would be used with the JVM. This provided quite a good starting point for the jsvc command line.

These are the command lines I ended up with:

export JH=$HOME/jetty-9.2.2-SNAPSHOT
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
jsvc -debug -pidfile $JH/jetty.pid -outfile $JH/std.out -errfile $JH/std.err -Djetty.logs=$JH/logs -Djetty.home=$JH -Djetty.base=$JH -Djava.io.tmpdir=/var/folders/g6/zmr61rsj11q5zjmgf96rhvy0sm047k/T/ -classpath $JH/commons-daemon-1.0.15.jar:$JH/start.jar org.eclipse.jetty.start.Main jetty.state=$JH/jetty.state jetty-logging.xml jetty-started.xml

This could be used as a starting point for a proper production grade init script for starting and shutting down Jetty.

I submitted my code changes as issue #439672 in the Jetty project issue tracker and just received word that the change has been merged into the upstream code base, so in the future you should be able to daemonize Jetty with Apache commons daemon's jsvc out-of-the-box.

Implementing Jersey 2 Spring integration

Jersey is the excellent Java JAX-RS specification reference implementation from Oracle. Last year, when we were starting to build RESTful backend web services for a high-volume website, we chose to use the JAX-RS API as our REST framework and Spring framework for dependency injection. Jersey was our JAX-RS implementation of choice.

When the project was started, the JAX-RS API 2.0 specification was not yet released, and neither was Jersey 2.0. Since we didn't see any fundamental deficiencies in JAX-RS 1.1, and because a stable Spring integration module existed for Jersey 1.1, we decided to go with the tried-and-true version instead of taking on the bleeding edge.

Still, I was curious to learn what could be gained by adopting the newer version, so I started looking at the JAX-RS 2 API in my free time and doing some prototyping with Jersey 2. I noticed that Jersey 2 lacked the Spring framework integration that was available for the previous version. Studying the issue further, I found that the old Spring integration module would not be directly portable to Jersey 2. The reason was that Jersey 1 builds on a custom internal dependency injection framework, while Jersey 2 has switched to HK2 for dependency injection. (HK2 is an interesting, light-weight dependency injection framework used in GlassFish.)

My original goals for Jersey-Spring integration were fairly simple:

inject Spring beans declared in application context XML into JAX-RS resource classes (using @Autowired annotation or XML configuration)

So, I thought I'd dig a bit deeper and started looking into the Jersey source code. I was happy to notice that Jersey development was being done in an open and approachable manner. The source code was hosted on GitHub and updated frequently. After a while of digging, a high-level design for Jersey Spring integration started to take shape. It took quite some experimenting and many iterations before the first working prototype. At that point, being an optimist, I hoped I was nearly done and contacted the jersey-users mailing list to get feedback on the design and implementation. The feedback: add more use cases, provide sample code, implement test automation, sign the Oracle Contributor Agreement 🙂 (The feedback, of course, was very reasonable from the Jersey software product point of view.) So, while it wasn't quite back to the drawing board, at that point I realized the last mile would be considerably longer than I had hoped for.

Eventually, though, the Jersey-Spring integration was merged into the Jersey 2 code base in the Jersey v2.2 release. The integration API is based on annotations and supports the following features (a usage sketch follows the list):

  • inject Spring beans into Jersey managed JAX-RS resource classes (using org.springframework.beans.factory.annotation.Autowired or javax.inject.Inject). @Qualifier and @Named annotations can be used to further qualify the injected instance.
  • allow JAX-RS resource class instance lifecycle to be managed by Spring instead of Jersey (org.springframework.stereotype.Component)
  • support different Spring bean injection scopes: singleton, request, prototype. Bean scope is declared in applicationContext.xml.
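
In application code, using the integration looks roughly like the following sketch (GreetingService is an assumed application bean declared in applicationContext.xml):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

import org.springframework.beans.factory.annotation.Autowired;

@Path("greeting")
public class GreetingResource {
  // resolved from the Spring application context rather than HK2
  @Autowired
  private GreetingService greetingService;

  @GET
  @Produces("text/plain")
  public String greeting() {
    return greetingService.greet("world");
  }
}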

The implementation

Source code for the Jersey-Spring integration can be found in the main Jersey source repository:
https://github.com/jersey/jersey/tree/2.5.1/ext/spring3/src/main/java/org/glassfish/jersey/server/spring

Jersey-Spring integration consists of the following implementation classes:

org.glassfish.jersey.server.spring.SpringComponentProvider
This ComponentProvider implementation is registered with the Jersey SPI extension mechanism and is responsible for bootstrapping Jersey 2 Spring integration. It makes Jersey skip JAX-RS life-cycle management for Spring components; otherwise, Jersey would bind these classes to the HK2 ServiceLocator with the Jersey default scope, without respecting the scope declared for the Spring component. This class also initializes the HK2 spring-bridge and registers a Spring @Autowired annotation handler with the HK2 ServiceLocator. When run outside of a servlet context, a custom org.springframework.web.context.request.RequestScope implementation is configured to implement request scope for beans.

org.glassfish.jersey.server.spring.AutowiredInjectResolver
An HK2 injection resolver that injects dependencies declared using the Spring framework @Autowired annotation. HK2 invokes this resolver and asks it to resolve dependencies annotated with @Autowired.

org.glassfish.jersey.server.spring.SpringLifecycleListener
Handles container lifecycle events: refreshes the Spring context on reload and closes it on shutdown.

org.glassfish.jersey.server.spring.SpringWebApplicationInitializer
A convenience class that helps the user avoid having to configure Spring's ContextLoaderListener and RequestContextListener in web.xml; alternatively, the user can configure these in the web application's web.xml.

In addition to the actual implementation code, the integration includes samples and tests, which can be very helpful in getting developers started.

The JAX-RS specification defines its own dependency injection API. Additionally, Jersey supports JSR 330 style injection, which is not mandated by the JAX-RS specification. Jersey-Spring integration adds support for Spring style injection. Both JAX-RS injection and the Spring integration provide a mechanism for binding objects into a registry, so that objects can later be looked up and injected. If you're using a full Java EE application server, such as GlassFish, you also have the option of binding objects via the CDI API. In non-Java EE environments it's possible to use CDI by embedding a container implementation such as Weld. Yet another binding method is to use the Jersey specific API. The test code includes a JAX-RS application class that demonstrates how this can be done.

Modifying Jersey-Spring

If you want to work on Jersey-Spring, you need to check out the Jersey 2 code base and build it. That process is rather easy and well documented:

https://jersey.java.net/documentation/2.5.1/how-to-build.html

You simply need to clone the repository and build the source. The build system is Maven based. You can also easily import the code base into your IDE of choice (I tried IDEA 12, Eclipse 4.3 and NetBeans 8.0 beta) using its Maven plugin. I noticed, however, that some integration tests failed with Maven 3.0 and I had to upgrade to 3.1, but apart from that there weren't any issues.

After building Jersey 2 you can modify the Spring integration module, and build only the changed modules to save time.

Tests

Jersey-Spring integration tests have been built using the Jersey test framework and are run under the control of the maven-failsafe-plugin. The integration tests consist of the actual test code and a JAX-RS backend webapp that the tests exercise. The backend gets deployed into an external Jetty servlet container using the jetty-maven-plugin. The Jersey-Spring tests can be executed separately from the rest of the tests, and they can be found in a separate Maven submodule here:

https://github.com/jersey/jersey/tree/2.5.1/tests/integration/spring3

In addition to demonstrating the basic features of Jersey-Spring, the tests show how to use the different Spring bean scopes: singleton, request and prototype. The tests also demonstrate how to use a JAX-RS application class to register your own dependencies in the container, in different scopes.

Conclusions

I think the JAX-RS 2.0 API provides a nice and clean way of implementing RESTful interfaces in Java. Development of the Jersey JAX-RS reference implementation is being conducted in an open and transparent manner. Jersey also has a large and active user community.

As noted by Frederick Brooks, Jr.: "All programmers are optimists". It's often easy to underestimate the amount of work required to integrate code with a relatively large and complex code base, particularly when you need to mediate between multiple different frameworks (in this case Jersey, HK2 and the Spring framework). Also, though Jersey has pretty good user documentation, I missed high-level architectural documentation on the design and implementation. A lot of poking around was needed to identify the correct integration points. Fortunately, the Jersey build system is pretty easy to use and allows building only selected parts, which makes experimenting and the change-build-test cycle relatively fast.

Both Jersey and the Spring framework provide a rich set of features, and you can use them together in a multitude of ways. Jersey-Spring integration in its current form covers a couple of basic integration scenarios between the two. If you find that your particular scenario isn't supported, join the jersey-users mailing list to discuss it. You can also just check out the code, implement your changes and contribute them by submitting a pull request on GitHub.

Practical NoSQL experiences with Apache Cassandra

Most of the backend systems I’ve worked with over the years have employed relational database storage in some role. Despite many application developers complaining about RDBMS performance, I’ve found that with good design and implementation a relational database can actually scale a lot further than developers think. Often software developers who don’t really understand relational databases tend to blame the database for being a performance bottleneck, even if the root cause could actually be traced to bad design and implementation.

That said, there are limits to RDBMS scalability, and they can become a serious issue with massive transaction and data volumes. A common workaround is to partition application data based on selected criteria (functional area and/or a selected property of entities within a functional area) and then distribute the data across database server nodes. Such partitioning must usually be done at the expense of relaxing consistency. There are also plenty of other use cases for which relational databases in general, or the ones available to you, aren't without problems.

Load-balancing and failover are sometimes difficult to achieve even on a smaller scale with relational databases, especially if you don't have the option to license a commercial database clustering solution. And even if you can, there are limits to scalability. People tend to work around these problems with master-slave database configurations, but those can be difficult to set up and manage. This sort of configuration will also impact data consistency if master-slave replication is not synchronous, as is often the case.

When an application also requires a dynamic or open-ended data model, people usually start looking into NoSQL storage solutions.

This was the path of design reasoning for a project I’m currently working on. I’ve been using Apache Cassandra (v1.2) in a development project for a few months now. NoSQL databases come in very different forms and Cassandra is often characterized as a “column-oriented” or “wide-row” database. With the introduction of the Cassandra Query Language (CQL) Cassandra now supports declaring schema and typing for your data model. For the application developer, this feature brings the Cassandra data model somewhat closer to the relational (relations and tuples) model.

NoSQL design goals and the CAP theorem

NoSQL and relational databases have very different design goals. It’s important for application developers to understand these goals because in practice they guide and dictate the set of feasible product features.

ACID transaction guarantees provide a strong consistency model around which web applications have traditionally been designed. When building Internet-scale systems developers came to realize that strong consistency guarantees come at a cost. This was formulated in Brewer’s CAP theorem, which in its original form stated that a distributed system can only achieve two of the following properties:

  • consistency (C) equivalent to having a single up-to-date copy of the data;
  • high availability (A) of that data (for updates); and
  • tolerance to network partitions (P).

The "2 of 3" formulation was later revised somewhat by Brewer, but this realization led developers to consider alternative consistency models, such as "Basically Available, Soft state, Eventual consistency" or BASE, in order to trade strong consistency guarantees for availability and partition tolerance, but also for scalability. Promoting availability over consistency became a key design tenet for many NoSQL databases. Other common design goals for NoSQL databases include high performance, horizontal scalability, simplicity and schema flexibility. Cassandra's founders shared these design goals, but Cassandra was also designed to be CAP-aware, meaning the developer is allowed to tune the tradeoff between consistency and latency.

BASE is a consistency model for distributed systems that does not require a NoSQL database. NoSQL databases that promote the BASE model also encourage applications to be designed around it. Designing a system around the BASE consistency model can be challenging, not only from a technical perspective but also because relaxing consistency guarantees will be visible to users, and it requires a new way of thinking from product owners, who are traditionally accustomed to thinking in terms of a strong consistency model.

Data access APIs and client libraries

One of the first things needed when starting to develop a Java database application is a database client library. With most RDBMS products this is straightforward: JDBC is the de facto low-level database access API, so you just download a JDBC driver for that particular database and configure your higher-level data access API (e.g. JPA) to use it. You get to choose which higher-level API to use, but there's usually only a single JDBC driver vendor for a particular database product. Cassandra, on the other hand, currently has 9 different clients for Java developers. These clients provide different ways of managing data: some offer an object-relational mapping API, some support CQL and others provide lower-level (e.g. Thrift based) APIs.

Data in Cassandra can be accessed and managed using an RPC-style (Thrift based) API, but Cassandra also has a very basic query language called CQL that resembles SQL syntactically to some extent, though in many cases the developer is required to have a much deeper knowledge of how the storage engine works under the hood than with relational databases. The Cassandra community's recommended API for new projects using Cassandra 1.2 is CQL 3.

Since Cassandra is being actively developed, it's important to pick a client whose development pace matches that of the server; otherwise you won't be able to leverage all the new server features in your application. Because the Cassandra user community is still growing, it's also good to choose a client with an active user community and existing documentation. The Astyanax client, developed by Netflix, currently seems to be the most widely used, production-ready and feature-complete Java client for Cassandra. This client supports both Thrift and CQL 3 based APIs for accessing the data. DataStax, a company that provides a commercial Cassandra offering and support, is also developing its own CQL 3 based Java driver, which recently came out of the beta phase.

Differences from relational databases

Cassandra's storage engine design goals are radically different from those of relational databases. These goals are inevitably reflected in the product and its APIs, and IMO they neither can nor should be hidden from the application developer. The CQL query language sets expectations for many developers and may make them assume they're working with a relational database. Some important differences to take note of, which may feel surprising when coming from an RDBMS background, include:

  • joins and subqueries are not supported. The database is optimized for key-oriented access, and data access paths need to be designed in advance through denormalization. Apache Solr, Hive, Pig and similar solutions can be used to provide more advanced query and join functionality
  • no integrity constraints. Referential and other types of integrity constraint enforcement must be built into the application
  • no ACID transactions. Updates within a single row are atomic and isolated, but not across rows or across entities. Logical units of work may need to be split and ordered differently than when using an RDBMS. Applications should be designed for eventual consistency
  • only indexed predicates can be used in query statements. An index is automatically created for row keys; indexes on other column values can be created as needed (except, currently, for collection typed columns). Composite indexes are not supported. Solr, Hive etc. can be used to address these limitations.
  • sort criteria need to be designed ahead of time. Sort order selection is very limited: rows are sorted by row key and columns by column name, and these can't be changed later. Solr, Hive etc. can be used to address these limitations.

Data model design is a process where developers will encounter other dissimilarities to the RDBMS world. For Cassandra, the recommended data modeling approach is the opposite of the RDBMS one: first identify the data access patterns, then model the data to support those access patterns. Data independence is not a primary goal, and developers are expected to understand how the CQL data model maps to the storage engine's implementation data structures in order to make optimal use of Cassandra. (In practice, full data independence can be impossible to achieve with high data volume RDBMS applications as well.) The database is optimized for key-oriented data access, and the data model must be denormalized. Some aspects of an application that can easily be modified or configured at runtime with relational databases, e.g. sorting, are design time decisions with Cassandra.

A relational application data model typically stores entities of a single type per relation. The Cassandra storage engine does not require that rows in a column family contain the same set of columns. You can store data about entirely unrelated entities in a single column family (wide rows).

Row key, partition key and clustering column are data modeling concepts that are important to understand for the Cassandra application developer. The Cassandra storage engine uniquely identifies rows by row key and keys provide the primary row access path. A CQL single column primary key maps directly to a row key in the storage engine. In case of a composite primary key, the first part of the primary key is used as the row key and partition key. The remaining parts of a composite primary key are used as clustering columns.

Row key and column name, along with partitioner (and comparator) algorithm selection, have important consequences for data partitioning, clustering, performance and consistency. The row key and partitioner control how data is distributed among nodes in the database cluster and ordered within a node. These parameters also determine whether range scanning and sorting are possible in the storage engine. Logical rows with the same partition key get stored as a single, physical wide row on the same database node. Updates within a single storage engine row are atomic and isolated, but not across rows. This means that your data model design determines which updates can be performed atomically. Columns within a row are clustered and ordered by the clustering columns, which is particularly important when the data model includes wide rows (e.g. time-series data).
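
A small, hypothetical time-series schema illustrates this mapping (created here via the DataStax Java driver; the keyspace and table are example names only):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class SchemaExample {
  public static void main(String[] args) {
    Session session = Cluster.builder()
        .addContactPoint("127.0.0.1").build().connect("demo");
    // sensor_id is the partition key: it maps to the storage engine row key
    // and determines which node the data lives on. event_time is a clustering
    // column: all events of one sensor form a single wide physical row,
    // ordered by time, so updates within one sensor_id are atomic.
    session.execute(
        "CREATE TABLE events (" +
        "  sensor_id uuid," +
        "  event_time timestamp," +
        "  value double," +
        "  PRIMARY KEY (sensor_id, event_time))");
  }
}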

When troubleshooting issues with an application, it’s often very important to be able to study the data in the storage engine using ad-hoc queries. Though Cassandra does support ad-hoc CQL queries, the supported query types are more limited. Also, the database schema changes, data migration and data import typically require custom software development. On the other hand, schema evolution has traditionally been very challenging with RDBMS when large data volumes have been involved.

Cassandra supports secondary indexes, but applications are often designed to maintain separate column families that support looking up data based on a single or multiple secondary access criteria.

One of the interesting things I noticed about Cassandra was that it has really nice load-balancing and failover clustering support that's quite easy to set up. Failover works seamlessly and fast. Cassandra is also quite lightweight and effortless to set up. Data access and manipulation operations are extremely fast in Cassandra. The data model is schema-flexible and supports use cases for which RDBMS usually aren't up to the task, e.g. storing large amounts of time-series data with very high performance.

Conclusions

Cassandra is a highly available, Internet-scale NoSQL database with design goals that are very different from those of traditional relational databases. The differences between Cassandra and relational databases identified in this article should each be regarded as having pros and cons, and be evaluated in the context of your problem domain. Also, using NoSQL does not exclude the use of an RDBMS; it's quite common to have a hybrid architecture where each database type is used in different use cases according to their strengths.

When starting their first NoSQL project, developers are likely to enter new territory and have their first encounters with related concepts such as big data and eventual consistency. Relational databases are often associated with strong consistency, whereas NoSQL systems are associated with eventual consistency (even though the use of a certain type of database doesn’t imply a particular consistency model). When moving from the relational world and strong consistency to the NoSQL world, the biggest mind shift may be in understanding and architecting an application for eventual consistency. Data modeling is another area where a new way of design thinking needs to be adopted.

Cassandra is a very interesting product with a wide range of use cases. I think it’s a particularly well-suited database option for the following use cases:

  • very large data volumes
  • very large user transaction volumes
  • high reliability requirements for data storage
  • dynamic data model. The data model may be semi-structured and expected to see significant changes over time
  • cross datacenter distribution

It is, however, very different from relational databases. In order to make an informed design decision on whether or not to use Cassandra, a good way to learn more is to study the documentation carefully. Cassandra development is very fast paced, so many of the documents you find may be outdated. There’s no substitute for hands-on experience, though, so you should do some prototyping and benchmarking as well.

Java VM Options

Some of the more frequently used Java HotSpot VM Options have been publicly documented by Oracle.

Of those flags the following are the ones I tend to find most useful for server-side Java applications:

  • -XX:+UseConcMarkSweepGC – select the concurrent mark-sweep (CMS) garbage collector
  • -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$APP_HOME_DIR – dump the heap when an out-of-memory error occurs
  • -XX:OnOutOfMemoryError=<command_for_processing_out_of_memory_errors> – execute a user-configurable command when an out-of-memory error occurs
  • -XX:OnError=<command_for_processing_unexpected_fatal_errors> – execute a user-configurable command when an unexpected fatal error occurs
  • -XX:+PrintGCDetails -XX:+PrintGCTimeStamps – increase garbage collection logging
  • -Xloggc:$APP_HOME_DIR/gc.log – log garbage collection information to a separate log file
  • -XX:+UseGCLogFileRotation -XX:GCLogFileSize=<max_file_size>M -XX:NumberOfGCLogFiles=<nb_of_files> – set up log rotation for the garbage collection log
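
Put together, a server-side launch command might look roughly like this (the application jar, heap dump path and log rotation sizes are placeholders, not a recommendation):

java -server -XX:+UseConcMarkSweepGC \
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=$APP_HOME_DIR \
  -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$APP_HOME_DIR/gc.log \
  -XX:+UseGCLogFileRotation -XX:GCLogFileSize=10M -XX:NumberOfGCLogFiles=5 \
  -jar my-app.jar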

Occasionally, the following might also be helpful to gain more insight into what the GC is doing:

  • -XX:+PrintHeapAtGC
  • -XX:+PrintTenuringDistribution

Oracle documentation covers quite a few HotSpot VM flags, but every once in a while you bump into flags that aren’t on the list. So, you begin to wonder if there’re more JVM options out there.

And, in fact, there’s a lot more as documented by this blog entry: The most complete list of -XX options for Java JVM

That list is enough to make any JVM application developer stand in awe 🙂
There are enough options to shoot yourself in the foot, but you should avoid the temptation of employing each and every one of them.

Most of the time it’s best to use only the ones you know are needed. JVM options and their default values tend to change from release to release, so it’s easy to end up with a huge list of flags that suddenly may not be required at all with a new JVM release, or worse, may even result in conflicting or erroneous behavior. The implications of some of the options, the GC ones in particular, and the way that related options interact with each other can sometimes be difficult to understand. A minimal list of options is easier to understand and maintain. Don’t just copy-paste options from other applications without really understanding what they’re for.

So, what’re your favorite options?

Mine would probably be -XX:SelfDestructTimer 😉
(No, there really is such an option and it does work)

A JVM polyglot experiment with JRuby

The nice thing about hobby technology projects is that you get to freely explore and learn new things. Sometimes this freedom makes the project go off at a tangent, and it’s in those cases in particular when you get to explore.

Some time ago I was working on a multi-vendor software development project. We had trouble making developers follow Git commit message guidelines and asking multiple times didn’t help, so I thought I’d implement a technological solution for this. Our repositories were hosted at GitHub, so I studied the post-receive hooks mechanism, learned a bit of Ruby and implemented my own service hook that validates the message format against a configurable format, generates an email using a configurable template and delivers it to selected recipients. Post-receive hooks don’t prevent people from committing with invalid messages, but I chose to go with a centralized solution that would not require every developer to configure their repository. I submitted my module, test code and documentation to GitHub, but the service hook implementation was rejected.

After the dead-end, I decided to try enforcing a commit message policy using a server-side hook that could actually prevent invalid commits. That solution was technically viable, but as suspected, it turned out that not all developers were willing to configure the hook in their repository. Also, every once in a while when developers do a clean clone of the repository, the configuration needs to be redone.

So, I decided to study how service hooks could be run on an external system instead of being hosted on GitHub. The “WebHook” service hook allows you to deliver the post-receive event anywhere over HTTP. GitHub also makes service hook implementations available to be run on your own servers. The easy way would’ve been to simply take my custom service hook implementation and run it on our server. In addition to being too easy, there were some limitations with this approach as well:

  • a github-services server instance can only have a single configuration i.e. you can’t serve multiple repositories each with different configurations
  • the github-services server dispatches data it receives to a single service based on the request URL. It’s not possible to dispatch the data to a set of services.
  • you have to code service hooks in Ruby

I had heard of JRuby at that time, but didn’t have practical experience with it. After some experimenting I was able to validate my assumption that GitHub Service implementations could, in fact, be run with JRuby. At that point I started migrating the code base into a polyglot GitHub Services container that allows you to run the GitHub-provided github-services as well as your own custom service implementations. Services can be implemented in different languages and run simultaneously in the same container instance. The container can be configured with an ordered set of services (chain) to handle post-receive events from one or more GitHub repositories. It’s also possible to configure a single container with separate service chains, each bound to a different repository. The container is run in the Jetty servlet container and uses JRuby for executing Ruby code.

Below is an illustration of an example configuration scenario where two GitHub repositories are set up to deliver post-receive events to a single container. The container has been configured with a separate service chain for each repository.

[diagram: two GitHub repositories delivering post-receive events to a single container, each repository with its own service chain]

The current status of the project is that a few GitHub Services as well as my custom Ruby and Java based services have been tested and seem to be working.

Lessons learned

The JVM can run code written in a large number of different programming languages and it’s a great platform for both dynamic language implementers and polyglot application developers. To quote JRuby developer Charles Nutter:

The JVM is going to be the best VM for building dynamic languages, because it already is a dynamic language VM.

Java 7 delivered vastly improved support for dynamic language implementers with JSR 292, the invokedynamic bytecode instruction. Java 8 is expected to further improve language interoperability and performance.

JRuby is an interesting alternative Ruby implementation for the JVM. It’s mature and the performance benchmark numbers are impressive (Why JRuby) compared with Ruby MRI. Performance is expected to get even better with invokedynamic optimization work being done for Java 8.

While taking the existing Ruby based github-services and making them run on JRuby was successful and didn’t require any code changes, there were lots of small issues that took a surprising amount of time to resolve. Many of the issues were related to setting up the runtime environment in one way or another. High-level troubleshooting strategies are similar from platform to platform, but on a more detailed level the methods and tools are often quite different, and many of the problems I encountered might have been easier to crack with solid Ruby experience.

Here’re some lessons learned during the project:

  • Learning how to use the JRuby embedding API. There’re 3 different APIs to choose from: Red Bridge, JSR 223 and BSF. Performing tasks like instantiating objects and passing parameters in a Java call-out was not immediately obvious using Red Bridge (see the sketch after this list)
  • Figuring out concurrency / thread-safety properties of different areas of the JRuby embedding API. JRuby concurrency documentation was lacking at the time when the project was started.
  • JRuby tooling. Tooling works somewhat differently than with MRI Ruby.
  • bootstrapping GitHub services gem environment 
    • in order to keep the github-services installation self-contained, I wanted to install as many gems as possible in the github-services vendor directory instead of installing them in JRuby. Some gems had to be installed in JRuby while others could be installed in the vendor tree.
    • some gems need to be replaced by JRuby specific ones (e.g. jruby-openssl)
    • setting up Ruby requires and load paths
    • Gems implemented as native extensions require a compiler toolchain as well as gem module library dependencies to be installed.
  • bypassing the Sinatra web app framework that’s used by github-services
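
For reference, here’s a minimal sketch of the kind of Java call-out mentioned in the first bullet, using the Red Bridge (JRuby embed) API; the Ruby Greeter class is a made-up stand-in for a real service implementation:

import org.jruby.embed.LocalContextScope;
import org.jruby.embed.ScriptingContainer;

public class RedBridgeSketch {
  public static void main(String[] args) {
    // SINGLETHREAD gives the container its own isolated Ruby runtime;
    // the chosen LocalContextScope determines the thread-safety characteristics
    ScriptingContainer container =
        new ScriptingContainer(LocalContextScope.SINGLETHREAD);
    // evaluate Ruby code that defines a class and returns an instance of it
    Object service = container.runScriptlet(
        "class Greeter; def greet(name); \"hello, #{name}\"; end; end; Greeter.new");
    // invoke a method on the Ruby object from Java, passing an argument and
    // asking for the result to be converted to a Java String
    String result = container.callMethod(service, "greet", "github", String.class);
    System.out.println(result);
  }
}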

Most of the issues I encountered were probably related to bootstrapping the github-services gem environment in one way or another.

Code for the experiment can be found at https://github.com/marko-asplund/github-hook-jar

Java on Mac OS X

Mac OS X is a nice platform for Java development because it successfully combines a very good desktop user experience with the system’s unix heritage and tooling. There are some problems, however:

  • only a limited set of JDK versions are available for current OS X releases
  • the JVM implementations are not standalone and require Apple proprietary frameworks to be present

In this respect, Linux is probably the best Java development platform because JDKs are available from many different vendors and multiple versions of the most popular JDKs run on Linux. The JVM implementations usually don’t have esoteric dependencies and only require basic OS libraries in addition to the ones bundled with the JVM.

On the other hand, only Apple and Oracle provided JDKs run on Mac OS X and older Java versions aren’t available. Also, neither Apple’s Java 6 nor Oracle’s Java 7 JVM seems to run on Lion or Mountain Lion without the com.apple.pkg.JavaEssentials package, for example.

Recently, I managed to corrupt my Java installation beyond repair, and I wanted to avoid this in the future by isolating my Java 7 and 8 installations as much as possible. This turned out to be fairly simple if you resist the temptation to install by just clicking on the downloaded JDK package. The JDKs are distributed by Oracle as disk images that contain a Mac OS X installer package file. Instead of running the installer you can easily extract the contents of the package using command line tools. First mount the disk image by clicking on it, then run the following commands in a terminal session:

xar -xf '/Volumes/JDK 8/JDK 8.pkg'
cat jdk180.pkg/Payload | gunzip | cpio -i

After that, the Contents directory will include the entire JDK installation and you can move it to the location of your choosing. Then just set the JAVA_HOME environment variable and add the JVM and required tools to the shell search path.
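
Assuming the extracted JDK was moved under ~/jdk/jdk1.8.0 (a hypothetical path; use whatever location you chose), that could look like this:

export JAVA_HOME="$HOME/jdk/jdk1.8.0/Contents/Home"
export PATH="$JAVA_HOME/bin:$PATH"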

JavaOne 2012 – Keynotes

Java Strategy Keynote

Java Strategy and JavaOne technical keynotes were delivered at the end of the first conference day, on Sunday.

The Java Strategy keynote was kicked off with a “catchy” music video “coding in Java”. After the video Hasan Rizvi, EVP Middleware and Java Development, opened the more formal part of the keynote. Rizvi described how the conference theme “make the future Java” referred to two different aspects of building the future:

  • a) ensuring the platform stays competitive. Competitiveness involves platform completeness, modernization and innovation, developer productivity as well as quality and security
  • b) making sure that the collaborative process through which the platform is being developed works well. The process needs open and transparent evolution, and active community involvement

Rizvi noted that “we have bet our business on Java and a lot of you have bet your business and careers on Java”. Oracle’s Fusion Middleware platform as well as a lot of (if not all) Oracle applications have been built on Java, so Oracle has, in fact, made a huge bet on Java.

As for the Java roadmap, Oracle stated they’re committed to more regular platform major releases. During Sun’s stewardship there was a period of Java stagnation when 4.5 years elapsed between the Java 6 and 7 releases, and Java 7 was actually finally released by Oracle, not Sun. Even though evolving Java is a collaborative effort, a lot of responsibility lies with the steward. A key duty is to produce the reference implementation. The developers, partners, clients and all the stakeholders in the Java ecosystem need to be able to rely on the steward to move things forward in a consistent and predictable manner, and timeboxed releases are an important indication to everyone that the train is moving.

Rizvi gave some highlights of Java roadmap for SE, ME, EE, JavaFX, Java Card and NetBeans. These were later described in more detail by the product development leaders. He also presented results for Oracle’s Java 2012 scorecard. The scorecard is split into three different areas: technology, community and Oracle leadership.

Rizvi then handed over to Georges Saab, VP / Development, who described the current state of Java SE 7 adoption. According to Saab they’re seeing rapid uptake of the new release, and he mentioned that Oracle supports its entire Fusion Middleware stack on JDK 7. (With the end of public Java 6 updates scheduled for February 2013, it’s time to upgrade unless you have a Java support contract.) He also emphasized support for two new platforms added in the release. Support for Linux ARM seems very much related to Oracle’s aspirations for Java in the embedded space (Saab mentioned the emerging ARM microserver market).

Java 8 is scheduled for Q3 2013, with a developer preview slated for February 2013. OpenJDK 8 early builds are already available to test things like Lambda. Highlights of the planned release content include Lambda expressions (closures), parallel operations on the core collections API, eliminating PermGen, a new JVM-based JavaScript implementation called Nashorn, language interoperability, Java ME/SE convergence and new date & time APIs. Oracle is planning to contribute Nashorn to the OpenJDK project. Nashorn is said to be a high performance, modern JavaScript implementation on the JVM and will probably replace the experimental Rhino JavaScript engine shipped since JDK 6. NetBeans uses Nashorn internally for its JavaScript support.

Java 9 will likely include at least Jigsaw modularity, which was deferred from Java 8, and is scheduled for 2015. While some potential development areas were listed for this release, the details were pretty scarce, as can be expected at this stage.

Nandini Ramani, VP / Engineering, Java Client and Mobile Platforms, then took to the stage to describe plans for Java Client and Embedded. It’s interesting to note that JavaFX is not currently supported on all Oracle supported Java platforms, which would in theory seem to contradict the “write once, run anywhere” proposition. Ramani was briefly joined by people from Navis and Canoo to present a JavaFX in cargo management case study.

Then back to longer term plans for the JDK. Phil Rogers of AMD described Project Sumatra, which aims to bring heterogeneous computing platform to Java. Rogers described the hardware trends behind the project:

1) first, the move from single-core to multi-core CPUs, and now 2) to full SOCs (system on chip) and a heterogeneous computing platform, where a CPU and the parallel processor of the GPU are combined into a single piece of silicon with shared memory

High levels of parallelism are required from the platform by workloads such as media processing, AI and big data. With Sumatra, developers will be able to write code that takes advantage of the heterogeneous computing platform without explicitly coding for it: the JVM will decide at runtime whether to run the code on the CPU or the GPU.

Ramani then came back to talk about Java in the embedded space. I’ve written another blog entry about this, so I won’t go into detail here. It was interesting to note, however, that Oracle seems very determined to push Java in the embedded space, and they’re talking a lot about the “Internet of Things” and M2M communication. In Java Embedded their focus seems to be on small headless devices, which apparently doesn’t include smart phones. They also want to lower the barrier of entry for a Java SE developer to enter embedded development through the Java ME / SE convergence mentioned earlier. This could create interesting opportunities for developers by allowing them to move between these ecosystems. Java ME / SE convergence appears to be a key driver behind JDK 9 modularization (Jigsaw). Ramani concluded her part of the keynote by introducing two more case studies: a Java-enabled SOC by Cinterion (Java Embedded) and MintChip by the Royal Canadian Mint (Java Card based digital currency).

Cameron Purdy, VP Fusion Middleware Development and Java EE, took to stage after Ramani to discuss Java EE status and direction. He started off by briefing on Java EE 6 adoption among application developers and JEE server vendors. He then went on to describe Oracle’s Java EE focus areas that include standardization, productivity, portability, extensibility and modularity. Like other keynote speakers, Purdy also emphasized that developing the Java EE platform and specifications is a community effort. He presented some interesting details about Java EE release dates, themes and number of specifications included up to Java EE 7. Java EE 7 is currently scheduled for Q2 2013. The release themes include HTML 5 and continued developer productivity. Features such as WebSockets, Servlet 3.1 NIO, Server Sent Events, JSON, REST are considered to fall under the HTML 5 theme umbrella while API pruning, built on Java SE 7, JCache, JMS 2.0 and batch are driven by the productivity goal. Some features that Oracle would like to see in Java EE 8 were discussed briefly, but it will be the responsibility of the eventually formed expert group to decide what will go into the actual specification. Cloud programming (multitenancy for SaaS apps, PaaS enablement) model standardization was a feature deferred from Java EE 7 and will likely be included in JEE 8. Other things being considered include NoSQL, Project Avatar, state management, JSON-B and modularity based on Jigsaw. Purdy finally invited Nicole Otto from Nike to endorse Java EE as the platform for Nike’s online services.

In the final part of the keynote, Robert Ballard, oceanographer and discoverer of RMS Titanic, talked about innovation and science education. He described how modern oceanography makes pretty advanced use of information and communications technology. He said he’s often asked what he’d like to discover next. A spaceship, he said. Why? Because then he’d never have to talk about the Titanic again 🙂

IBM Keynote

IBM was a diamond sponsor for the conference and they presented their own keynote right after the strategy keynote. The IBM talk focused a lot on cloud enablement and optimization, multitenancy, tenant isolation and reducing footprint. Polyglot also appears to be on IBM’s Java platform agenda, as they discussed support for multiple JVM-based languages. A key part of IBM’s message was that hardware matters. Even if Java developers typically work at a level where the underlying hardware is abstracted away, system hardware architecture design is still crucial for mission critical applications. Somewhere deep below all the layers of indirection, hardware virtualization and the JVM’s simulated virtual machine, the code is still run by physical processors. And since IBM can deliver the whole stack from server hardware and storage to language runtime and middleware, all the pieces have been designed and optimized to work together. So, IBM was basically echoing Oracle’s “software and hardware, engineered to work together” value proposition. They also presented SPECjEnterprise and SPECpower_ssj2008 performance benchmark figures where the IBM J9 JVM came out as the winner.

Java Technical Keynote

The Technical keynote was primarily delivered by Oracle Java Platform Chief Architect Mark Reinhold. It focused on the Java SE (Java 8) and Java EE (JEE 7) platform releases. These releases were presented against the backdrop of sample applications (Schedule builder and Angry bids). Java Language Architect Brian Goetz dropped by on stage to show how Java 8 Lambda, together with changes in the collections API, can make the JavaFX Schedule builder application code more beautiful, and improve code and libraries in general. A large part of the presentation was dedicated to Jigsaw, which I think will play a really big role in the future of the platform. Jigsaw will not be included in Java 8, but Lambda, compact profiles, Nashorn, the date/time API and type annotations will. In addition, various smaller things like PermGen removal, bulk data operations, parallel array sorting etc. are also scheduled for Java 8.
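
As a trivial sketch of the kind of cleanup this enables (my own example, not code from the keynote), a lambda expression plus bulk data operations replace an anonymous inner class and an explicit iteration loop:

import java.util.Arrays;
import java.util.List;

public class LambdaSketch {
  public static void main(String[] args) {
    List<String> sessions = Arrays.asList("Lambda", "Jigsaw", "Nashorn");
    // filter, transform and print the collection declaratively
    sessions.stream()
        .filter(s -> s.startsWith("J"))
        .map(String::toUpperCase)
        .forEach(System.out::println);
  }
}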

Arun Gupta, Java EE Technology Evangelist, then talked about Java EE in more detail than in the strategy keynote. Gupta briefly talked about Java EE history and current status in terms of release dates and themes. He then dived deeper into the Java EE 7 specification content. Some of the more interesting current candidate specification requests for Java EE 7 include: JAX-RS 2.0, EL 3.0, JMS 2.0, Java Caching API, Java API for JSON and Java API for WebSocket. Many other EE specifications will also get smaller updates, such as JTA, EJB, CDI and JPA. After more than ten years in the making, who would’ve thought the Caching API specification would actually get finished some day 🙂 I was happy to see that EE 7 will not only bring additions to the specification, but will also remove things by making some APIs optional. The idea of pruning was introduced already in EE 6, so it’s not new, but it’s good to see the cleanup process continuing. Gupta then moved on to detail changes to selected EE sub-specifications and demonstrated how the updates would improve productivity and reduce boilerplate code.

Thin server architecture is an emerging architectural model for designing web applications that moves view generation from the server side to the client side. Thin server architecture is platform agnostic, and it effectively moves a lot of traditionally server-side code to the browser side, which has been the domain of front-end or web developers. As the name implies, the server side gets a lot simpler and thinner, and with this change, in my opinion, comes a really big productivity challenge for the Java backend with respect to dynamic languages. Project Avatar and Easel are projects that are tackling this problem and exploring what kind of infrastructure and tooling is required end-to-end to build TSA applications on the Java platform. Some of the tooling is already available in NetBeans 7.3 beta, so it’s something that can be tried out right now. A TSA sample application called Angry bids, as well as the tooling for developing the app, were demoed.

Java Community Keynote

The Java community keynote was scheduled for the last day of the conference and started off with lots of thank yous and some making waves. After that, Gary Frost of AMD was brought up by Donald Smith (Oracle) to discuss Project Sumatra, which was mentioned earlier in the Java Strategy keynote. AMD has been working to make it possible for Java developers to take advantage of the GPU for a few years now, and they’ve released an open source project called Aparapi for doing this. Aparapi requires that code be specifically written to get it executed on a GPU, but Sumatra aims to make all this unnecessary. Frost showed some interesting demos of rendering a Mandelbrot set, Game of Life and an N-body physics simulation using Aparapi. Frost said AMD is hoping to get Sumatra included in the JDK within the Java 9 timeframe.

Smith then reflected on the role of Java in innovation. His approach was to separately mindmap the strengths of Java and fostering innovation, and try to see how these two could be linked together. He invited people from Eucalyptus, Twitter, Cloudera, Eclipse and Perrone Robotics for a panel discussion on the role of Java in innovation.

After the innovation panel, Martijn Verburg of London JUG, introduced the Adopt a JSR -program they had started. The purpose was said to be to prevent bad specifications, such as EJB 2.0, from happening again by engaging ordinary developers in the specification process. Verburg hosted a short panel where he asked the panelists a range of questions related to their role in the Java community and Java specification process.

After the panel, Saab brought up Paul Perrone to discuss and demo a Java-based robotics platform his company develops. Continuing on the robotics theme, Java creator James Gosling came up on stage wearing his Sun Microsystems t-shirt to talk about his current work at Liquid Robotics, and how they’re using Java. Liquid Robotics is building robots that float in the ocean and gather telemetric data of different kinds for various purposes (e.g. marine mammal and pollution tracking, weather data, global warming studies etc.). Java is used for analysing the data delivered by the robots, and the newer robot generation also has an ARM processor and runs JDK 7 on Linux (ARM). They’ve built a Swing-based UI for studying and drilling down into the data, e.g. the routes each robot has travelled. Gosling had evaluated all of the NoSQL databases for his use cases but felt that no existing one worked well with the telemetry data they process, so he built his own NoSQLish database. The data they receive is really valuable, so reliability is crucial, which is why they’re using 3 different hosting providers. After evaluating hosting providers he confessed to being a real Jelastic fan. So, since Gosling in his role as chief software architect in his new company picked Java to build on and chose to present at JavaOne, I guess it means he still has a soft spot for the platform.

Conclusions

Oracle is a huge company, and many people in the developer and OSS communities have had reservations about what will happen to Java under Oracle leadership, and whether Java will be subordinated to its owner’s short-term commercial ambitions. But despite its huge size Oracle is not self-sufficient, and its long-term success is very much tied to the larger developer ecosystem. This means that Oracle needs to make sure Java remains a platform that developers want to invest their human capital in, now and in the future.

Active community participation is absolutely vital for Java’s long term viability, and it’s reassuring to see that Oracle seems to acknowledge and commit to this. Recent changes in the Java Community Process (JCP), which governs the rules for creating Java specifications, require a more open and transparent way of working from the expert groups. By making OpenJDK the Java SE reference implementation (RI), Oracle has leveled the playing field with regard to other Java SE vendors, as now Oracle’s Java implementation is just one Java SE implementation among others that has to conform to the specification and RI. Oracle has also been able to engage IBM in OpenJDK instead of Apache Harmony, which I think will reduce the risk of fragmentation and benefit the whole Java community.

According to the Java Community Process, the specification lead of a particular JSR is responsible for developing the specification, but also for producing a reference implementation as well as a Technology Compatibility Kit (TCK, or test suite). For large specifications, such as Java SE and EE, this is no small task. OpenJDK is the Java SE reference implementation while GlassFish is the Java EE RI. There’s been some speculation about whether the JDK will remain freely available, as well as about the future of some Sun Java products, such as GlassFish and NetBeans, under Oracle leadership. OpenJDK and GlassFish have a clear role to play in this picture as platform reference implementations. NetBeans, on the other hand, provides support for emerging technologies and day 0 support for new Java standards, which is important for allowing developers to actually get hands-on experience with new standards. So, currently none of these products would appear to be redundant.

Traditionally, ME and SE/EE development were regarded as very different and were typically performed by people with different skill sets. The plan for ME / SE convergence on the platform and API level could change that in the short term (Java SE 8 timeframe). Also, with the merge of the previously separate JCP executive committees for ME and SE taking place in November, work is being carried out on the process level to try and keep the platforms from diverging in the future.

Google used to be a visible and active member of the Java community before the legal dispute between Google and Oracle started over Android. Google has also released quite a few interesting Java-based components as open source, so it’s been a pity to see Google withdraw from JavaOne as well as many other Java communities. No Googlers appeared to be presenting at this year’s JavaOne either. I was surprised to find out after the conference (through some googling) that Google is actually still a member of the JCP Executive Committee, and they also joined the Java SE 8 expert group in August 2012! I hope they will be able to have a more active role in the Java ecosystem in the future.

There’re a lot of interesting technology changes planned for Java. Some of the changes I’m really looking forward to include

  • JDK modularization (via Project Jigsaw, JDK 9)
  • thin server architecture support (via Project Avatar and Easel, NetBeans v7.3, Java EE 7 / 8)
  • Java SE / ME convergence (JDK 8)
  • compact profiles (JDK 8)
  • heterogeneous computing platform support (via Project Sumatra, JDK 9?)

Many enhancements and changes that are clearly driven by polyglot requirements appeared on Oracle’s tentative roadmap plans, so they seem to be serious about improving polyglot support in the JVM.

Based on the conference and actual work being carried out by Oracle and the larger Java community, I think Java will remain viable as a community, technology platform and an ecosystem.