Java turned 25 years old in May, marking a quarter of a century in which it has consistently been one of the most widely used programming languages. The IEEE Computer Society lists it as one of the top three programming languages to learn in 2020, while a recent survey from developer tool outfit JetBrains listed it as the most popular primary programming language.
The early attraction of Java was its promise of "write once, run anywhere," which contrasted with the standard practice of code being tightly coupled to specific hardware. In theory, this portability allows a developer to write code that runs unmodified on any platform, although in practice there are some caveats.
But the continued popularity of Java as an enterprise platform may have more to do with the stability and backwards compatibility of the language, which mean that application code written over a decade ago will still run, despite gradual enhancements to the Java standard over time. Then there are the frameworks and standard APIs that have grown up around Java and kept it a stalwart favorite for building enterprise applications and services. The Java Database Connectivity (JDBC) API, for example, provides a standard way for a Java application to query a database.
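To give a flavor of that standard API, here is a minimal, hedged sketch of a JDBC query. The connection URL, credentials, and `products` table are all hypothetical placeholders; with no driver or database on the classpath, `DriverManager` fails and the method reports the error instead of a result:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcSketch {
    // Returns the product name for the given id, or an error description.
    // The URL and credentials below are illustrative, not a real service.
    static String fetchName(String url, int id) {
        String sql = "SELECT name FROM products WHERE id = ?";
        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : "no such product";
            }
        } catch (SQLException e) {
            // With no JDBC driver or database available, we land here.
            return "query failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchName("jdbc:postgresql://localhost:5432/inventory", 42));
    }
}
```

The same code works unchanged against any database whose vendor ships a JDBC driver, which is the portability point the article is making.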
However, technology moves ever onwards, and the modern IT ecosystem is moving towards cloud-native technologies such as containers and building applications that are more modular and distributed to make them easier to scale. Java, of course, pre-dates the cloud-native era, and was designed for monolithic application stacks where a single application might be running on its own dedicated physical server.
This doesn’t mean that you cannot run Java applications in containers; it just means that some of the assumptions made in creating Java make it less than ideal for this style of deployment, and developers have had to be aware of them and sometimes work around them. For example, Java applications can be slow to start up and somewhat memory-heavy, both of which are disadvantages in a world of lightweight containerized applications with relatively short lifespans.
Java was designed to run in its own environment, with a Java Virtual Machine (JVM) to manage application resources during execution. The JVM is how Java achieves portability: Java applications are compiled to Java bytecode, which is executed by a JVM built for each platform. At runtime, the JVM also compiles frequently executed bytecode to native code just in time, which is part of where the start-up delay comes in.
One company with a foot firmly in both the Java and cloud-native camps is Red Hat, which has its JBoss Enterprise Application Platform and its own build of OpenJDK on one side, and the OpenShift Container Platform built around Kubernetes on the other. In order to improve the way that Java operates within a container-based environment, the firm started up a new open source project called Quarkus with the aim of creating a Kubernetes-native Java framework, according to Rich Sharples, senior director for product management at Red Hat.
“What we’ve found over the last several years is that Java is really not a good fit for high density, low resource environments,” Sharples tells The Next Platform. “The Java JDK runtime is a remarkable feat of engineering, it does some incredible things, but it was really designed for an era where you pretty much had a dedicated machine, a big multi-processor system with tons of memory, and Java’s job was to exploit to the maximum all of the resources the machine had to offer and give you the best throughput for your big banking application.”
One notable issue was that Java processes running inside containers did not behave as expected if the developer allowed the JVM to set default values for things such as the garbage collector and heap size. This is because Java pre-dates Linux kernel features, such as cgroups, that govern the resources available to processes inside containers, and so it simply looked at the resources of the entire host and sized its memory and CPU usage accordingly.
This particular issue was partly addressed with the release of Java 10, which made the JVM recognize constraints set by cgroups, allowing both memory and CPU limits to be applied directly to Java applications in containers.
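The effect is visible from plain Java. The sketch below reports what the JVM believes its resource budget is; on Java 10 and later (where container support is on by default), running it inside a container launched with, say, `docker run --memory=512m --cpus=2` makes these numbers reflect the cgroup limits rather than the whole host:

```java
public class ContainerResources {
    // Maximum heap the JVM will use, in MiB. Inside a container on
    // Java 10+, this is derived from the cgroup memory limit; on a
    // bare host it is derived from total system memory.
    static long maxHeapMiB() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    // CPU count as seen by the JVM; likewise cgroup-aware on Java 10+.
    static int availableCpus() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("max heap: " + maxHeapMiB() + " MiB");
        System.out.println("CPUs: " + availableCpus());
    }
}
```

Flags such as `-XX:MaxRAMPercentage` can then size the heap as a fraction of the container's memory limit instead of a fixed `-Xmx` value.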
But that isn’t the only issue. As already mentioned, the doctrine behind containers is that they are ephemeral; instances are spawned when needed, they run for a short time, then they get killed and new instances take their place if their function is needed again. This is almost the polar opposite of the way that Java was designed to operate.
“We now live in a world where you are lucky to get a slice of a slice of a physical machine, you are going to get a container in a virtual machine, you may just get enough CPU and memory to execute your function one time. Your application components, your microservices, or your functions are fleeting in nature – they come and go, they get redeployed maybe tens of times an hour. And a lot of the amazing stuff that Java does is the kind of profiling and tuning it can do over time to optimize code execution. And again, if your code is running for months or even years at a time, then that makes complete sense. If you get one shot to run as a function, you don’t have the ability to do any of that stuff,” Sharples explains.
In practice, this means Java code wastes precious time on work that is superfluous in a distributed container deployment model. A Java application or application server spends much of its startup on class loading and reading configuration data, and it repeats that work every time the application starts.
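The cost is easy to glimpse on a tiny scale. This sketch times the first load of a single class; a large application server does this for thousands of classes, plus configuration parsing, on every start, which is exactly the work Quarkus shifts to build time:

```java
public class StartupCost {
    // Time, in microseconds, to load a class by name. The first load of
    // a not-yet-seen class involves locating, parsing, and verifying the
    // class file; subsequent lookups hit the loaded-class cache.
    static long timeFirstLoad(String className) throws Exception {
        long t0 = System.nanoTime();
        Class<?> c = Class.forName(className);
        long micros = (System.nanoTime() - t0) / 1_000;
        System.out.println(c.getSimpleName() + " loaded in " + micros + " µs");
        return micros;
    }

    public static void main(String[] args) throws Exception {
        timeFirstLoad("java.util.concurrent.ConcurrentSkipListMap");
    }
}
```

Multiply that per-class cost across a framework-heavy classpath and the 30-second start-up times Sharples mentions become plausible.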
“All that starts to become a real overhead, all that initialization stuff you have to do. So we have basically taken all of that stuff and pushed that to compile time, so it’s done once only. You can get it completely out of the production environment. And that really significantly improves start up time, which is important for your microservices that could be deployed frequently. The time between you hitting the deploy button and being able to handle the first request is essential – if you have to wait 30 seconds or a minute before you can start handling requests, that’s just wasted.”
This is essentially what Quarkus is about: providing a Java execution framework that minimizes the memory and CPU footprint and cuts back on anything that delays the code getting up and running when it is deployed as a container instance.
The main way it achieves this is by incorporating yet another open source project, GraalVM. (Open source software projects are often like Russian dolls, containing layers of other open source projects.) GraalVM replaces the widely used HotSpot JVM: instead of the usual just-in-time compilation of Java bytecode, it compiles the application code ahead of time into a native executable. Functions such as memory management and thread scheduling come from an embedded runtime called Substrate VM and are compiled into the executable along with the application, producing a binary with a faster start-up time and less runtime memory overhead.
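As a rough sketch of what that workflow looks like, assuming a GraalVM distribution with the `native-image` tool installed (the class and file names here are illustrative):

```shell
# Compile a Java source file to bytecode as usual
javac HelloWorld.java

# Ahead-of-time compile the class and its reachable dependencies
# into a standalone native executable (no JVM needed to run it)
native-image HelloWorld

# The resulting binary starts in milliseconds, with no JIT warm-up
./helloworld
```

In a Quarkus project the same step is typically wrapped by the build tool, e.g. a Maven native profile, rather than invoked by hand.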
It should be noted, however, that while Quarkus enables Java to become a first-class citizen in the world of Kubernetes and containers, it isn’t envisioned that existing Java enterprise applications will be migrated to run with Quarkus.
“We aren’t expecting you to take your million lines of monolithic Java application and run that in Quarkus. That’s not going to work,” says Sharples.
Instead, Quarkus is intended for greenfield development of new applications, allowing developers to preserve their Java skills and programming experience when moving into the world of cloud-native applications. Doing so, however, requires those developers to have access to many of the code libraries and frameworks that have evolved around Java over the past 25 years to provide key business functions and services.
“One of the real benefits of Java is that it does have a huge ecosystem of libraries and utilities for doing practically everything. Some of the languages like Golang or Rust, while they’re well suited for running in a cloud native environment, they’re not well suited for building business applications, because they don’t have all of those kind of libraries and that big ecosystem,” says Sharples.
“So what we decided early on was to ensure this is still useful and so that we get the benefits of Java, we need to ensure a lot of those popular frameworks can also run in a Quarkus environment. What we have had to do is add some kind of metadata to those projects so that Quarkus can optimize them and ensure that they’re compiled in the right way.”
Sharples claims that about 200 Java frameworks now have the necessary support for Quarkus, including key tools like Hibernate for mapping an object-oriented domain model to a relational database, RESTEasy for RESTful Java applications, and the Spring Boot framework for creating microservices.
As an open source project, Quarkus is not “owned” by Red Hat, but as with other open source tools – such as OpenStack – the IBM software division offers its own build of the project which has been added to its family of Red Hat Runtimes designed to integrate with its OpenShift application platform. OpenShift itself went through a transformation, moving from a custom Linux container technology to Docker-style containers and Kubernetes back in 2015.
What this all means is that developers can take Quarkus and Kubernetes and combine the two along with an IDE like IntelliJ, Eclipse, or VSCode to operate Java applications in a containerized framework, while Red Hat offers it as part of its own development and deployment environment for its customers.
One final factor worth noting is that Red Hat is intending to “eat its own dogfood” as far as Quarkus is concerned, and aims to use it in the development of some of its own products going forward.
“Over time, it will be the foundation for many of Red Hat’s other products. Many of our products are written in Java, and just like any other customer, we will leverage Quarkus in the next generation of those products as well,” says Sharples.
“We will be building many of our own products with the option to compile down to native and run in a much smaller footprint with greater efficiency, and we expect partners to do that as well. So it’s both an end user technology for developing your own custom applications, and also an enabling technology for Red Hat’s products as well as our partners.”