practicing techie

tech oriented notes to self and lessons learned

Microservices vs. SOA at QCon New York

Based on the steady flow of recently published technology articles and blog posts, as well as upcoming tech conference program session titles, it seems evident that the buzz around microservices shows no signs of tailing off in the near future.

As a concept, microservices-based architecture, or just microservices, is quite old by IT standards; it first appeared on ThoughtWorks’ Technology Radar back in 2012. Last year, when I visited QCon New York, microservices architecture was one of the major themes, with an entire track and lots of sessions dedicated to the topic. Microservices also played roles of varying importance in sessions of other tracks.

At QCon, microservices was often characterized as “SOA done right” or “SOA without the ESB” by speakers and attendees alike. Many of the microservices track sessions focused on various practical issues encountered when implementing microservices, including:

  • service isolation vs. sharing
  • managing dependencies
  • importance of durable logical data model and API design
  • managing and validating interface changes
  • polyglot service API clients
  • test automation
  • service API documentation

These topics were covered by plenty of speakers from different angles. Michael Bryzek of Gilt gave a good discussion of these issues in his talk on Microservices and the art of taming the Dependency Hell Monster.

In the microservices open space much of the discussion revolved around defining the boundaries of a microservice: to what extent should a microservice be an entity isolated from its environment? A microservice can form dependencies on its environment through a wide variety of aspects, including the logical data model, the data model implementation, logical and physical runtime platforms, data storage systems and other microservices. To what degree should a microservice be allowed to depend on its environment or on other microservices? How should aspects of reuse and isolation be balanced?

Microservices-based architecture is often contrasted with SOA, with SOA being scorned and ridiculed as the old way of doing things. Nevertheless, many of the problems and solutions related to microservices-based architectures remind me of the SOA literature of past years, so I decided to dust off some of my Thomas Erl SOA books to refresh my memory. According to Erl, service-orientation is a design paradigm that adheres to the following principles:

Standardized service contract – “Services within the same service inventory are in compliance with the same contract design standards.”
The same design standards are applied to related services. In particular, this means the data models are coherent across related services and some common type definitions can be shared. Standardization also applies to other aspects such as policy and SLA.

Service loose coupling – “Service contracts impose low consumer coupling requirements and are themselves decoupled from their surrounding environment.”
Service consumers are necessarily dependent on the contract that a service provides. This principle states that the API surface through which consumers interact with a service should be minimized and explicitly defined, so that consumers can better withstand the evolution of services. Services are also decoupled from their runtime environment.

Service abstraction – “Service contracts only contain essential information and information about services is limited to what is published in service contracts.”
A service should be considered a black box that performs the work specified by its service contract and hides other metadata and implementation details from consumers. A service contract should expose only the information essential for accessing the service.

Service reusability – “Services contain and express agnostic logic and can be positioned as reusable enterprise resources.”
Organizational project delivery processes should prefer implementing business logic as reusable services, and using existing services over re-implementing business logic.

Service autonomy – “Services exercise a high level of control over their underlying runtime execution environment.”
Services that depend on shared virtual or physical resources (e.g. a database, other information systems or services, physical hardware) necessarily lose some of their autonomy, which can lead to unpredictable runtime behaviour. A high level of autonomy can be costly, and tradeoffs often need to be made.

Service statelessness – “Services minimize resource consumption by deferring the management of state information when necessary.”
State data management consumes system resources and can result in a significant resource burden … Therefore, the temporary delegation and deferral of state management can increase service scalability and support a wider range of reuse and recomposition over time.

Service discoverability – “Services are supplemented with communicative meta data by which they can be effectively discovered and interpreted.”
Additional metadata is used to describe services, helping people find existing services to reuse in their projects.

Service composability – “Services are effective composition participants, regardless of the size and complexity of the composition.”
Services should be designed and implemented in a manner that does not prevent them from being composed into new services, at different layers of composition.
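
To make a couple of these principles more concrete, here is a minimal sketch of what a standardized, minimal service contract could look like, expressed with JAX-RS annotations. The resource and data model names (CustomerResource, Customer) are hypothetical, not from Erl's books:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Service abstraction: only the essential operation is exposed through the
// contract; implementation details stay hidden behind the interface.
@Path("/customers")
public interface CustomerResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    Customer findById(@PathParam("id") String id);
}

// Standardized service contract: a shared, coherent data model type that
// related services in the same service inventory can agree on.
class Customer {
    public String id;
    public String name;
}
```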

Most of these principles are quite general, technology agnostic, and very much relevant in a microservices-based architecture as well. One thing microservices and SOA share is that, at their core, both are architectural styles, not technologies. Still, the two are perceived in quite a different manner. SOA started gaining momentum in the era of the enterprise architect, and arguably the voice of the practitioners (i.e. developers) was lost as SOA got hijacked by software tool industry heavyweights. Nowadays SOA is often regarded as enterprisey and associated with heavy tooling and onerous processes.

Microservices architecture, on the other hand, is associated with promoting a much leaner approach overall. There are also some interesting relationships between microservices and the structure of the development organization. Many leading internet companies are organizing their development work around agile, small (“2 pizza” sized), autonomous teams with well-defined end-to-end responsibilities, in order to scale their organizations better. According to Conway’s law, organizations “are constrained to produce designs which are copies of the communication structures of these organizations“. In that sense, microservices architecture reflects the organizational structures of the development teams, as well as the current best-practice technological means of achieving team autonomy.

I like the pragmatism and the drive toward lean practices in contemporary software development in general, and in implementing microservices in particular. However, the baby often seems to get thrown out with the bathwater when our industry is busy reinventing things. A sign of this is that microservices developers are clearly rediscovering many of the insights that SOA architects and developers learned and documented long ago.

Thoughts on The Reactive Manifesto

Reactive programming is an emerging trend in software development that has gathered a lot of enthusiasm among technology connoisseurs during the last couple of years. After studying the subject last year, I got curious enough to attend the “Principles of Reactive Programming” course on Coursera (by Odersky, Meijer and Kuhn). Reactive advocates from Typesafe and others have created The Reactive Manifesto that tries to formulate the vocabulary for reactive programming and what it actually aims at. This post collects some reflections on the manifesto.

According to The Reactive Manifesto, systems that are reactive

  • react to events – event-driven nature enables the following qualities
  • react to load – focus on scalability by avoiding contention on shared resources
  • react to failure – resilient systems that are able to recover at all levels
  • react to users – honor response time guarantees regardless of load

Event-driven

Event-driven applications are composed of components that communicate through sending and receiving events. Events are passed asynchronously, often using a push based communication model, without the event originator blocking. A key goal is to be able to make efficient use of system resources, not tie up resources unnecessarily and maximize resource sharing.
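
As a minimal illustration of this style, here is a sketch using only java.util.concurrent from the standard library; the Event type and the producer/consumer roles are my own illustration, not anything prescribed by the manifesto:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal event-driven sketch: the producer hands events off to a queue
// and continues without blocking; a consumer thread reacts to each event.
public class EventDrivenSketch {

    static class Event {
        final String type;
        Event(String type) { this.type = type; }
    }

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<Event> mailbox = new LinkedBlockingQueue<>();
        ExecutorService consumer = Executors.newSingleThreadExecutor();

        // Consumer: reacts to events as they arrive on its own thread.
        consumer.execute(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Event e = mailbox.take();
                    System.out.println("handled event: " + e.type);
                }
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });

        // Producer: offer() enqueues the event and returns immediately,
        // so the event originator never blocks on the receiver.
        mailbox.offer(new Event("user-signed-in"));
        mailbox.offer(new Event("order-placed"));

        Thread.sleep(100); // let the consumer drain the queue (demo only)
        consumer.shutdownNow();
    }
}
```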

Reactive applications are built on a distributed architecture in which message-passing provides the inter-node communication layer and location transparency for components. It also enables interfaces between components and subsystems to be based on loosely coupled design, thus allowing easier system evolution over time.

Systems designed to rely on shared mutable state require data access and mutation operations to be coordinated using some concurrency control mechanism, in order to avoid data integrity issues. Concurrency control mechanisms limit the degree of parallelism in the system. Amdahl’s law formulates clearly how shrinking the parallelizable portion of the program code puts an upper limit on system scalability (see the formula below). Designs that avoid shared mutable state allow for higher degrees of parallelism, and thus can reach higher degrees of scalability and resource sharing.
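
For reference, Amdahl’s law: if a fraction p of a program’s work can be parallelized across N processors, the maximum achievable speedup is

```latex
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
```

As N grows, S(N) approaches 1 / (1 − p), so with p = 0.95, for example, no amount of parallel hardware yields more than a 20× speedup.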

Scalable

System architecture needs to be carefully designed to scale out, as well as up, in order to exploit the hardware trends of both increased node-level parallelism (growing numbers of CPUs, and of physical and logical cores per CPU) and system-level parallelism (number of nodes). Vertical and horizontal scaling should work both ways, so an elastic system will also be able to scale in and down, allowing operational cost structures to be optimized for lower-demand conditions.

A key building block for elasticity is a distributed architecture with message-passing as the node-to-node communication mechanism, which allows subsystems to be configured to run on the same node or on different nodes without code changes (location transparency).

Resilient

A resilient system will continue to function in the presence of failures in one or more parts of the system, and in unanticipated conditions (e.g. unexpected load). The system needs to be designed carefully to contain failures in well-defined and safe compartments, to prevent them from escalating and cascading unexpectedly and uncontrollably.

Responsive

The Reactive manifesto characterizes the responsive quality as follows:

Responsive is defined by Merriam-Webster as “quick to respond or react appropriately”.

Reactive applications use observable models, event streams and stateful clients.

Observable models enable other systems to receive events when state changes.

Event streams form the basic abstraction on which this connection is built.

Reactive applications embrace the order of algorithms by employing design patterns and tests to ensure a response event is returned in O(1) or at least O(log n) time regardless of load.

Commentary

If you’ve been actively following software development trends during the last couple of years, ideas stated in the reactive manifesto may seem quite familiar to you. This is because the manifesto captures insights learned by the software development community in building internet-scale systems.

One such set of lessons stems from problems related to having centrally stored state in distributed systems. The tradeoffs of having a strong consistency model in a distributed system have been formalized in the CAP theorem. CAP-induced insights led developers to consider alternative consistency models, such as BASE, trading off strong consistency guarantees for availability and partition tolerance, but also for scalability. Looser consistency models have been popularized in recent years, in particular by different breeds of NoSQL databases. An application’s consistency model has a major impact on its scalability and availability, so it would be good to address this concern more explicitly in the manifesto. The chosen consistency model is a cross-cutting trait on which all the application layers should uniformly agree. The manifesto does mention this concern, but since it’s such an important issue with subtle implications, it would be good to elaborate on it a bit more, or refer to a more thorough discussion of the topic.

Event-driven is a widely used term in programming that can take on many different meanings and has multiple variations. Since it’s such an overloaded term, it would be good to define it more clearly and to characterize what exactly does and does not constitute event-driven in this context. The authors clearly have event-driven architecture (EDA) in mind, but EDA is also something that can be achieved with different approaches. The same is true for “asynchronous communication”. In the reactive manifesto, “asynchronous communication” seems to imply message-passing, as in messaging systems or the Actor model, rather than asynchronous function or method invocation.

The reactive manifesto adopts and combines ideas from many movements: the CAP theorem, NoSQL, and event-driven architecture. It captures and amalgamates valuable lessons learned by the software development community in building internet-scale applications.

The manifesto makes a lot of sense, and I can subscribe to the ideas presented in it. However, in a few places the terminology could be elaborated a bit and made more approachable to developers who don’t have extensive experience with scalability issues. Sometimes the worst thing that can happen to great ideas is that they get diluted by unfortunate misunderstandings 🙂

Building a beacon out of Pi

iBeacon is, to quote Wikipedia, an indoor positioning system that Apple calls “a new class of low-powered, low-cost transmitters that can notify nearby iOS 7 devices of their presence”. The technology is not, however, restricted to iOS devices and can currently also be used with Android devices. Many people are predicting it will change retail shopping.

At its core, the technology enables proximity sensing, so that a device can alert the user when it moves into or out of close proximity of a peer device. In a typical use case a “geo fence” is established around a stationary device (i.e. the “beacon”), and mobile devices carried by users issue alerts when crossing the fence to either enter or exit the region.

Since Apple’s WWDC 2013 conference we’ve seen quite a bit of buzz about iBeacon. Interestingly, the underlying technology is based on the Bluetooth Proximity profile specification ratified in 2011. Though a lot of the buzz has been associated with Apple, the company is not attributed as a contributor to the original specification. Also, Apple has yet to reveal what exactly it plans to do with iBeacon.

A few other companies are planning to bring iBeacon-compatible beacons to the market, but the beacons aren’t shipping yet. So, if you want to start developing software right now, you have to resort to other solutions. One option is to use a Bluetooth LE (BLE) capable device, equipped with the right software, as a beacon. For example, BLE-capable iOS devices can act as beacons with the AirLocate application. So, if you have e.g. a new iPhone 5 to spare, you can turn it into a beacon very easily. Another option is to build a beacon yourself, since iBeacons are based on the standard Bluetooth LE proximity profile. A company called Radius Networks has published an article about building a beacon, so I decided to try this out.

Beacon BOM

The bill of materials for the beacon was pretty simple:

  • Raspberry Pi + memory card + power cord
  • a USB Bluetooth 4.0 LE dongle

Additionally, I bought the following items to make installation easier:

  • a USB SD card reader
  • HDMI display cable
  • USB hub

beacon bill-of-materials

Unfortunately, Radius was using the IOGEAR GBU521 Bluetooth dongle, which I couldn’t find at any of the local electronics shops. USB Bluetooth dongles aren’t very expensive, however, and since there aren’t many Bluetooth chipsets on the market, I decided to experiment a bit and buy two different dongles to try out. These were the Asus USB-BT400 and the TeleWell Bluetooth 4.0 LE + EDR.

Operating system deployment

Some vendors sell memory cards with a pre-installed OS for the Raspberry Pi, but I couldn’t find one of those at my local electronics shop either, so I bought a blank memory card and a USB-based SD card reader, just in case. The SD card reader proved handy, because it turned out the card didn’t work with the built-in reader on my MacBook Pro. Installing the Raspbian (2013-09-25-wheezy) Linux distribution was fairly straightforward using the instructions on the Embedded Linux Wiki (RPi Easy SD Card Setup). The only notable issue was that on Mac OS X, writing the OS image to the card was a lot faster (~4 min. vs. ~30 min.) using the raw disk device instead of the buffered one. Another issue was that the memory card had to be unmounted using diskutil, not ejected through the Mac OS X Finder.

After installing the OS image on the SD card it was time to see if the thing would boot. Unfortunately, I only had a VGA monitor and no suitable HDMI-VGA adapters, so I wasn’t able to make a console connection to the Pi. Being eager to see if everything worked so far, I decided to connect the Pi to a wireless access point and power it up. After a few moments, I noticed that the Pi had acquired an IP address from the AP’s DHCP server, and I was able to log in via SSH using the default credentials. So, no console whatsoever was required to set up the Pi!

Being a Java developer, I was happy to notice that Raspbian came with a fairly recent Java Standard Edition 7 installation by default. The latest Java 8 build is also available for Raspbian.

Building the Bluetooth stack

Once the basic OS setup was done, I had to compile the BlueZ Linux Bluetooth stack, which proved to be a rather simple matter of installing the compile-time prerequisites through RasPi package management (apt-get):

libusb-dev libdbus-1-dev libglib2.0-dev libudev-dev libical-dev libreadline-dev

and then configuring the source and building it. I used BlueZ version 5.10, which was the latest official version at the time.

After compiling the BlueZ Bluetooth protocol stack it was time to test whether the Bluetooth dongles I had bought were working. Running hciconfig, I noticed that the TeleWell dongle was detected while the Asus wasn’t. During further testing it became clear that the Pi’s signal wasn’t being picked up by a demo app on an iPad. More research showed that though the TeleWell dongle did support Bluetooth 4.0 LE, it didn’t officially support the required proximity profile. After yet more googling, I found that the Asus dongle seemed to include the same Broadcom BCM20702 (A0) chipset as the IOGEAR one used by Radius Networks. However, the dongle wasn’t being detected because Asus uses a vendor-specific USB device ID for it that wasn’t known to the Raspbian kernel. The solution was to add the device ID to the kernel source (credit to the linux-bluetooth mailing list), then rebuild and install the newly built kernel.

Compiling the Linux kernel

Compiling the Linux kernel is a very time-consuming task, particularly in a resource-constrained environment such as the Raspberry Pi. Fortunately, it’s possible to set up a cross-compiler environment on a more powerful system to speed things up. Again, the Embedded Linux Wiki proved to be a great resource for this task (RPi Kernel Compilation). Though you can install an x86 / Mac OS X ⇒ ARM / Linux cross-compilation environment, I went for an arguably more mainstream choice and set up my cross-compiler environment on a Linux Mint 15 Xfce guest virtual machine. The required packages were again available via apt-get:

gcc-arm-linux-gnueabi libncurses-dev

Additionally, Git had to be installed to be able to fetch the Raspberry Pi kernel source, build tools and firmware.

Kernel compilation for the Raspberry Pi is a bit different from compiling a standard kernel for a server-class machine, and is done using the following procedure:

  • fetch Raspberry Pi Linux kernel source, compiler tools and firmware
  • set up environment variables for the compilation
  • configure the kernel source (using a config file from the Pi)
  • build kernel and modules
  • package up the kernel, modules and firmware
  • deploy kernel, modules and firmware to Pi
  • reboot Pi (with fingers crossed)

Fortunately, I was able to produce and deploy a working kernel build and the Pi booted up with the fresh kernel. This time, hciconfig showed that the kernel and the Bluetooth stack were able to detect the Asus Bluetooth dongle.

Testing the beacon

A couple of mobile applications are available for verifying that a beacon is functioning properly: iBeacon Locate in the Google Play store for Android 4.3+ users, and Beacon Toolkit in the Apple App Store for iOS 7.0+ users. Both applications require that the device supports Bluetooth LE.

Sample code demonstrating how to read beacon signals is also available from multiple sources, including the AirLocate application, Apple WWDC 2013 and android-ibeacon-service. AirLocate, for example, is a complete sample app that you can build, modify and install on your iOS device, provided that you have an Apple iOS developer certificate.

I was able to pick up the Raspberry Pi’s beacon signal using AirLocate on iOS and iBeacon Locate on Android. There were some problems with the sample apps when I configured the beacon to use a custom device UUID instead of an Apple demo UUID: the demo apps failed to detect the beacon when a custom UUID was used. A custom application was, however, able to detect the beacon even with a custom UUID.

beacon ranging and proximity sensing alert

Next, I’ll have to experiment a bit more with proximity sensing accuracy, notification event delay and how different physical space topologies and interference affect proximity sensing. Also, a demo app should be developed to simulate proximity alerts in the context of a real use case.

Init scripts

Once the Pi beacon was running fine, the last thing was to make it turn on automatically at boot. For this I studied the existing init scripts to learn what kind of metadata is required to manage the “service” using the update-rc.d command. I also separated the configuration parameters from the init script into a separate file (/etc/default/ibeacon.conf).

Final thoughts

iBeacon is based on the standard Bluetooth 4.0 LE proximity profile and is not Apple proprietary technology. It currently works on newer iOS and Android devices, and these operating systems include the APIs required to detect beacon signals. iBeacon has many interesting use cases for indoor positioning, including but not limited to retail shopping and analytics. It’s still an emerging technology, and I’m sure we’ll see it applied in many unexpected contexts in the future.

The Raspberry Pi is a device platform with intriguing possibilities. Traditionally embedded software development and server-side software development have required very different skill sets. Raspberry Pi demonstrates how the embedded platforms have evolved significantly during the last few years in terms of hardware capacity as well as software platform maturity. Consider e.g. the following points:

  • the system has “standard” support for Java as well as many other popular language runtimes, making it possible to develop software using desktop or server-side development skills and tools
  • hardware capacity is gaining on low-end server-class machines in terms of memory, CPU and storage capacity
  • the OS and other software can be updated over the network using update manager tools instead of having to flash firmware
  • it’s based on a standard Linux environment
  • the system supports remote management over secure terminal connection

With high-end embedded platforms such as the Raspberry Pi, similar tools and techniques can be used for developing both embedded and server-side software, so from a development and maintenance point of view these platforms are converging. This creates fascinating opportunities for software developers who don’t have a traditional embedded development skill set.

JavaOne 2012 – Technical sessions

While trying to find information about the technical sessions presented at JavaOne 2011 and earlier, I noticed there was surprisingly little detail available. Many of the results I got from googling pointed to the JavaOne 2012 site, or the links were broken. So, even though it’s been a while since the conference, I thought I’d post some notes about the themes for historical reference.

The technical sessions (487 in total) were classified into the following tracks:

  • Core Java Platform (146 sessions)
  • Development Tools and Techniques (212 sessions)
  • Emerging Languages on the JVM (44 sessions)
  • Enterprise Service Architectures and the Cloud (112 sessions)
  • Java EE Web Profile and Platform Technologies (123 sessions)
  • Java ME, Java Card, Embedded, and Devices (62 sessions)
  • JavaFX and Rich User Experiences (70 sessions)

The numbers were from Oracle’s Schedule builder application.

Some of the hot topics at JavaOne 2012 were thin server architecture, REST, HTML 5, NoSQL, data grids, big data, polyglot programming, mobile web and cloud. There was also a lot of interest in the next big things on the Java platform roadmap, especially Java SE 8 (lambdas) and Java EE 7 related talks.

In addition to the new and exciting topics, evergreen subjects such as troubleshooting and optimization (esp. garbage collection), scalability, parallelism, design and design patterns seemed to draw a lot of attention.

JavaOne 2012 – Keynotes

Java Strategy Keynote

The Java Strategy and JavaOne Technical keynotes were delivered at the end of the first conference day, on Sunday.

The Java Strategy keynote was kicked off with a “catchy” music video, “Coding in Java”. After the video Hasan Rizvi, EVP Middleware and Java Development, opened the more formal part of the keynote. Rizvi described how the conference theme, “make the future Java”, referred to two different aspects of building the future:

  • a) ensuring the platform stays competitive. Competitiveness involves platform completeness, modernization and innovation, developer productivity, as well as quality and security
  • b) making sure that the collaborative process through which the platform is developed works well. The process needs open and transparent evolution, and active community involvement

Rizvi noted that “we have bet our business on Java, and a lot of you have bet your business and careers on Java”. Oracle’s Fusion Middleware platform, as well as a lot of (if not all) Oracle applications, have been built on Java, so Oracle has, in fact, made a huge bet on Java.

As for the Java roadmap, Oracle stated they’re committed to more regular major platform releases. During Sun’s stewardship there was a period of Java stagnation when 4.5 years elapsed between the Java 6 and 7 releases, and Java 7 was actually finally released by Oracle, not Sun. Even though evolving Java is a collaborative effort, a lot of responsibility lies with the steward. A key duty is to produce the reference implementation. The developers, partners, clients and all the stakeholders in the Java ecosystem need to be able to rely on the steward to move things forward in a consistent and predictable manner, and timeboxed releases are an important indication to everyone that the train is moving.

Rizvi gave some highlights of the Java roadmap for SE, ME, EE, JavaFX, Java Card and NetBeans. These were later described in more detail by the product development leaders. He also presented results from Oracle’s Java 2012 scorecard. The scorecard is split into three areas: technology, community and Oracle leadership.

Rizvi then handed over to Georges Saab, VP / Development, who described the current state of Java SE 7 adoption. According to Saab they’re seeing rapid uptake of the new release, and he mentioned that Oracle supports its entire Fusion Middleware stack on JDK 7. (With the end of public Java 6 updates scheduled for February 2013, it’s time to upgrade unless you have a Java support contract.) He also emphasized support for two new platforms added in the release. Support for Linux ARM seems very much related to Oracle’s aspirations for Java in the embedded space (Saab mentioned the emerging ARM microserver market).

Java 8 is scheduled for Q3 2013, with a developer preview slated for February 2013. OpenJDK 8 early builds are already available for testing things like Lambda. Some highlights of the planned release content include lambda expressions (closures), parallel operations on the core collections API, eliminating PermGen, a new JVM-based JavaScript implementation called Nashorn, language interoperability, Java ME/SE convergence and new date & time APIs. Oracle is planning to contribute Nashorn to the OpenJDK project. Nashorn is said to be a high-performance, modern JavaScript implementation on the JVM and will probably replace the experimental Rhino JavaScript engine shipped since JDK 6. NetBeans uses Nashorn internally for its JavaScript support.
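
Nashorn plugs into the standard javax.script API, so evaluating JavaScript from Java should look roughly like the sketch below. This assumes a JDK 8 build where the engine is registered under the name “nashorn”:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

// A small sketch of evaluating JavaScript on the JVM via javax.script.
// Assumes a JDK 8 runtime where Nashorn registers as "nashorn".
public class NashornSketch {
    public static void main(String[] args) throws ScriptException {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");
        Object result = js.eval("var x = 21; x * 2;");
        System.out.println(result); // expected to print 42
    }
}
```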

Java 9 will likely include at least Jigsaw modularity, which was deferred from Java 8, and is scheduled for 2015. While some potential development areas were listed for this release, the details were pretty scarce, as can be expected at this stage.

Nandini Ramani, VP / Engineering, Java Client and Mobile Platforms, then took to the stage to describe plans for Java Client and Embedded. It’s interesting to note that JavaFX is not currently supported on all Oracle-supported Java platforms, which would in theory seem to contradict the “write once, run anywhere” proposition. Ramani was briefly joined by people from Navis and Canoo to present a case study of JavaFX in cargo management.

Then back to longer-term plans for the JDK. Phil Rogers of AMD described Project Sumatra, which aims to bring a heterogeneous computing platform to Java. Rogers described the hardware trends behind the project:

1) first, the move from single-core to multi-core CPUs, and now 2) the move to full SoCs (system on chip) and a heterogeneous computing platform, where a CPU and the parallel processor of the GPU are combined into a single piece of silicon with shared memory

Workloads such as media processing, AI and big data require a high level of parallelism from the platform. With Sumatra, developers will be able to write code that takes advantage of the heterogeneous computing platform without explicitly coding for it: the JVM will decide at runtime whether to run the code on the CPU or the GPU.
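
The programming model Sumatra targets is, in essence, ordinary data-parallel Java. The sketch below shows the kind of bulk operation (a Java 8 parallel stream) that the JVM could, in principle, offload to a GPU without source changes; this is illustrative only, not an actual Sumatra API:

```java
import java.util.stream.IntStream;

// Data-parallel code of the kind Sumatra aims to offload transparently:
// the JVM, not the developer, would decide whether this runs on CPU or GPU.
public class ParallelSketch {
    public static void main(String[] args) {
        double sum = IntStream.range(0, 10_000_000)
                .parallel()
                .mapToDouble(i -> i * 0.5) // independent per-element work
                .sum();                    // reduction over all elements
        System.out.println(sum);
    }
}
```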

Ramani then came back to talk about Java in the embedded space. I’ve written another blog entry about this, so I won’t go into detail here. It was interesting to note, however, that Oracle seems very determined to push Java in the embedded space, and they’re talking a lot about the “Internet of Things” and M2M communication. In Java Embedded their focus seems to be on small headless devices, which apparently doesn’t include smartphones. They also want to lower the barrier of entry for Java SE developers to enter embedded development through the Java ME / SE convergence mentioned earlier. This could create interesting opportunities for developers by allowing them to move between these ecosystems. Java ME / SE convergence appears to be a key driver behind JDK 9 modularization (Jigsaw). Ramani concluded her part of the keynote by introducing two more case studies: a Java-enabled SOC by Cinterion (Java Embedded) and MintChip by The Royal Canadian Mint (Java Card based digital currency).

Cameron Purdy, VP Fusion Middleware Development and Java EE, took to the stage after Ramani to discuss Java EE status and direction. He started off by briefing on Java EE 6 adoption among application developers and Java EE server vendors. He then went on to describe Oracle’s Java EE focus areas, which include standardization, productivity, portability, extensibility and modularity. Like the other keynote speakers, Purdy emphasized that developing the Java EE platform and specifications is a community effort. He presented some interesting details about Java EE release dates, themes and the number of specifications included, up to Java EE 7. Java EE 7 is currently scheduled for Q2 2013. The release themes include HTML 5 and continued developer productivity. Features such as WebSockets, Servlet 3.1 NIO, Server-Sent Events, JSON and REST fall under the HTML 5 theme umbrella, while API pruning, building on Java SE 7, JCache, JMS 2.0 and batch are driven by the productivity goal. Some features that Oracle would like to see in Java EE 8 were discussed briefly, but it will be the responsibility of the eventually formed expert group to decide what goes into the actual specification. Standardizing the cloud programming model (multitenancy for SaaS apps, PaaS enablement) was deferred from Java EE 7 and will likely be included in Java EE 8. Other things being considered include NoSQL, Project Avatar, state management, JSON-B and modularity based on Jigsaw. Purdy finally invited Nicole Otto from Nike to endorse Java EE as the platform for Nike’s online services.
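
To give a flavour of the HTML 5 theme, the sketch below shows an annotated server endpoint in the style of the Java API for WebSocket (JSR 356) planned for Java EE 7; the endpoint path and echo behaviour are my own illustration:

```java
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// A sketch of an annotated WebSocket server endpoint in the style of the
// Java API for WebSocket (JSR 356) planned for Java EE 7.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    // Called by the container whenever a client sends a text message;
    // the return value is sent back to the same client.
    @OnMessage
    public String onMessage(String message) {
        return "echo: " + message;
    }
}
```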

In the final part of the keynote, Robert Ballard, oceanographer and discoverer of the RMS Titanic, talked about innovation and science education. He described how modern oceanography makes pretty advanced use of information and communications technology. He said he’s often asked what he’d like to discover next. A spaceship, he answered. Why? Because then he’d never have to talk about the Titanic again 🙂

IBM Keynote

IBM was a diamond sponsor of the conference and presented its own keynote right after the strategy keynote. The IBM talk focused a lot on cloud enablement and optimization, multitenancy, tenant isolation and reducing footprint. Polyglot also appears to be on IBM’s Java platform agenda, as they discussed support for multiple JVM-based languages. A key part of IBM’s message was that hardware matters. Even if Java developers typically work at a level where the underlying hardware is abstracted away, system hardware architecture design is still crucial for mission-critical applications. Somewhere deep below all the layers of indirection, hardware virtualization and the JVM’s simulated virtual machine, the code is still run by physical processors. And since IBM can deliver the whole stack, from server hardware and storage to language runtime and middleware, all the pieces have been designed and optimized to work together. So IBM was basically echoing Oracle’s “software and hardware, engineered to work together” value proposition. They also presented SPECjEnterprise and SPECpower_ssj2008 performance benchmark figures in which the IBM J9 JVM came out as the winner.

Java Technical Keynote

The Technical keynote was primarily delivered by Oracle Java Platform Chief Architect Mark Reinhold. It focused on the Java SE (Java 8) and Java EE (Java EE 7) platform releases, presented against the backdrop of sample applications (Schedule builder and Angry Bids). Java Language Architect Brian Goetz dropped by on stage to show how Java 8 lambdas, together with changes in the collections API, can make the JavaFX Schedule builder application code more beautiful, and improve code and libraries in general. A large part of the presentation was dedicated to Jigsaw, which I think will play a really big role in the future of the platform. Jigsaw will not be included in Java 8, but Lambda, compact profiles, Nashorn, the date/time API and type annotations will. In addition, various smaller things like PermGen removal, bulk data operations, parallel array sorting etc. are also scheduled for Java 8.
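
To give a flavour of the kind of cleanup Goetz demonstrated, here is a hedged before/after sketch of filtering a collection with and without Java 8 lambdas; the Session type is a stand-in, not the actual keynote code:

```java
import java.util.ArrayList;
import java.util.List;

public class LambdaSketch {

    static class Session {
        final String track;
        Session(String track) { this.track = track; }
    }

    public static void main(String[] args) {
        List<Session> sessions = new ArrayList<>();
        sessions.add(new Session("Core Java Platform"));
        sessions.add(new Session("JavaFX"));

        // Pre-Java 8 style: external iteration with an explicit loop.
        List<Session> javafx = new ArrayList<>();
        for (Session s : sessions) {
            if (s.track.equals("JavaFX")) {
                javafx.add(s);
            }
        }

        // Java 8 style: internal iteration with a lambda and a stream.
        sessions.stream()
                .filter(s -> s.track.equals("JavaFX"))
                .forEach(s -> System.out.println(s.track));
    }
}
```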

Arun Gupta, Java EE Technology Evangelist, then talked about Java EE in more detail than in the strategy keynote. Gupta briefly covered Java EE history and current status in terms of release dates and themes. He then dived deeper into the Java EE 7 specification content. Some of the more interesting current candidate specification requests for Java EE 7 include JAX-RS 2.0, EL 3.0, JMS 2.0, the Java Caching API, the Java API for JSON and the Java API for WebSocket. Many other EE specifications will also get smaller updates, such as JTA, EJB, CDI and JPA. After more than ten years in the making, who would’ve thought the Caching API specification would actually get finished some day 🙂 I was happy to see that EE 7 will not only bring additions to the specification but will also remove things by making some APIs optional. The idea of pruning was introduced already in EE 6, so it’s not new, but it’s good to see the cleanup process continuing. Gupta then moved on to detail changes to selected EE sub-specifications and demonstrated how the updates would improve productivity and reduce boilerplate code.
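
As an example of the kind of boilerplate reduction in the works, the sketch below uses the simplified API proposed for JMS 2.0; the connection factory and queue are assumed to come from injection or JNDI lookup:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

// A sketch of sending a message with the simplified API proposed in
// JMS 2.0. The connectionFactory and queue are assumed to be injected
// or looked up from JNDI.
public class JmsSketch {

    public void send(ConnectionFactory connectionFactory, Queue queue) {
        // try-with-resources closes the context automatically; no
        // Connection/Session/MessageProducer ceremony as in JMS 1.1.
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer().send(queue, "Hello, JavaOne!");
        }
    }
}
```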

Thin server architecture is still an emerging architectural model for designing web applications; it moves view generation from the server side to the client side. Thin server architecture is platform agnostic, and it effectively moves a lot of server-side code to the browser side, which has traditionally been the domain of front-end or web developers. As the name implies, the server side gets a lot simpler and thinner, and with this change, in my opinion, comes a really big productivity challenge for the Java backend with respect to dynamic languages. Project Avatar and Easel are projects that are tackling this problem and exploring what kind of infrastructure and tooling is required end-to-end to build TSA applications on the Java platform. Some of the tooling is already available in NetBeans 7.3 beta, so it’s something that can be tried out right now. A TSA sample application called Angry Bids, as well as the tooling for developing the app, were demoed.
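
To illustrate how thin the server side becomes, here is a minimal sketch of a TSA-style backend resource that only serves data, leaving all view generation to the browser; the resource path and payload are illustrative, not from the Angry Bids demo:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import java.util.Arrays;
import java.util.List;

// In a thin server architecture, the server side reduces to stateless
// resources like this one, returning raw data (JSON) while templating
// and rendering happen entirely in the browser.
@Path("/bids")
public class BidResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> latestBids() {
        return Arrays.asList("bid-101", "bid-102"); // stand-in for real data
    }
}
```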

Java Community Keynote

The Java Community keynote was scheduled for the last day of the conference and started off with lots of thank-yous and some wave-making. After that, Gary Frost of AMD was brought up by Donald Smith (Oracle) to discuss Project Sumatra, mentioned earlier in the Java Strategy keynote. AMD has been working for a few years to make it possible for Java developers to take advantage of the GPU, and they’ve released an open source project called Aparapi for doing this. Aparapi requires that code be specifically written to get it executed on a GPU, but Sumatra aims to make this unnecessary. Frost showed some interesting demos of rendering a Mandelbrot set, the Game of Life and an N-body physics simulation using Aparapi. Frost said AMD is hoping to get Sumatra included in the JDK within the Java 9 timeframe.

Smith then reflected on the role of Java in innovation. His approach was to separately mind-map the strengths of Java and the fostering of innovation, and to see how the two could be linked together. He invited people from Eucalyptus, Twitter, Cloudera, Eclipse and Perrone Robotics for a panel discussion on the role of Java in innovation.

After the innovation panel, Martijn Verburg of the London JUG introduced the Adopt-a-JSR program they had started. Its purpose was said to be to prevent bad specifications, such as EJB 2.0, from happening again by engaging ordinary developers in the specification process. Verburg hosted a short panel where he asked the panelists a range of questions about their roles in the Java community and the Java specification process.

After the panel, Saab brought up Paul Perrone to discuss and demo a Java-based robotics platform his company develops. Continuing on the robotics theme, Java creator James Gosling came up on stage, wearing his Sun Microsystems t-shirt, to talk about his current work at Liquid Robotics and how they’re using Java. Liquid Robotics builds robots that float in the ocean and gather telemetric data of different kinds for various purposes (e.g. marine mammal and pollution tracking, weather data, global warming studies etc.). Java is used for analyzing the data delivered by the robots, and the newer robot generation also has an ARM processor and runs JDK 7 on Linux (ARM). They’ve built a Swing-based UI for studying and drilling down into the data, e.g. the routes each robot has travelled. Gosling had evaluated all the NoSQL databases for his use cases, but felt that no existing one worked well with the telemetry data they process, so he built his own NoSQL-ish database. The data they receive is really valuable, so reliability is crucial, which is why they’re using three different hosting providers. After evaluating hosting providers, he confessed to being a real Jelastic fan. So, since Gosling, in his role as chief software architect of his new company, picked Java to build on and chose to present at JavaOne, I guess he still has a soft spot for the platform.

Conclusions

Oracle is a huge company, and many people in the developer and OSS communities have had reservations about what will happen to Java under Oracle leadership, and whether Java will be subordinated to its owner’s short-term commercial ambitions. But despite its huge size, Oracle is not self-sufficient, and its long-term success is very much tied to the larger developer ecosystem. This means that Oracle needs to make sure Java remains a platform that developers want to invest their human capital in, also in the future.

Active community participation is absolutely vital for Java’s long-term viability, and it’s reassuring to see that Oracle seems to acknowledge and commit to this. Recent changes in the Java Community Process (JCP), which governs the rules for creating Java specifications, require a more open and transparent way of working from the expert groups. By making OpenJDK the Java SE reference implementation (RI), Oracle has leveled the playing field with regard to other Java SE vendors, as now Oracle’s Java implementation is just one Java SE implementation among others that has to conform to the specification and RI. Oracle has also been able to engage IBM in OpenJDK instead of Apache Harmony, which I think will reduce the risk of fragmentation and benefit the whole Java community.

According to the Java Community Process, the specification lead of a particular JSR is responsible for developing the specification, but also for producing a reference implementation as well as a Technology Compatibility Kit (TCK, or test suite). For large specifications, such as Java SE and EE, this is no small task. OpenJDK is the Java SE reference implementation, while GlassFish is the Java EE RI. There’s been some speculation about whether the JDK will remain freely available, as well as about the future of some Sun Java products, such as GlassFish and NetBeans, under Oracle leadership. OpenJDK and GlassFish have a clear role to play in this picture as platform reference implementations. NetBeans, on the other hand, provides support for emerging technologies and day 0 support for new Java standards, which is important for allowing developers to actually get hands-on experience with new standards. So, currently none of these products would appear to be redundant.

Traditionally, ME and SE/EE development were regarded as very different and were typically performed by people with different skill sets. The plan for ME / SE convergence at the platform and API level could change that in the short term (Java SE 8 timeframe). Also, with the merge of the previously separate JCP executive committees for ME and SE taking place in November, work is being carried out at the process level to try to keep the platforms from diverging in the future.

Google used to be a visible and active member of the Java community before the legal dispute between Google and Oracle over Android started. Google has also released quite a few interesting Java-based components as open source, so it’s been a pity to see Google withdraw from JavaOne as well as from many other Java communities. No Googlers appeared to be presenting at this year’s JavaOne either. I was surprised to find out after the conference (through some googling) that Google is actually still a member of the JCP Executive Committee, and they also joined the Java SE 8 expert group in August 2012! I hope they will be able to play a more active role in the Java ecosystem in the future.

There are a lot of interesting technology changes planned for Java. Some of the changes I’m really looking forward to include:

  • JDK modularization (via Project Jigsaw, JDK 9)
  • thin server architecture support (via Project Avatar and Easel, NetBeans v7.3, Java EE 7 / 8)
  • Java SE / ME convergence (JDK 8)
  • compact profiles (JDK 8)
  • heterogeneous computing platform support (via Project Sumatra, JDK 9?)

Many enhancements and changes that are clearly driven by polyglot requirements appeared in Oracle’s tentative roadmap plans, so they seem to be serious about improving polyglot support in the JVM.

Based on the conference and actual work being carried out by Oracle and the larger Java community, I think Java will remain viable as a community, technology platform and an ecosystem.