Almost 28 years after its initial release, Java remains one of the most popular programming languages. Over 33% of developers worldwide use it when creating new services and apps for web, desktop, and mobile users. And as more businesses take the route of cloud migration, they’re choosing Java for developing cloud-native apps. The ecosystem offers great scalability and contains robust tools for data management, analytics, and building microservices.
One reason Java has stayed so relevant is the constant flow of enhancements and updates it receives. At the same time, there’s a huge amount of legacy software built with Java that requires optimization or new features. This is why tools like GraalVM, which can be used to optimize and speed up Java-based software, are in great demand.
We recently used Oracle's GraalVM to address performance issues when rebuilding a backend service for a client. Spoiler: aside from a few minor hiccups, it went really well. Just to give you an idea, startup time went from up to 100 seconds down to 3.5 seconds max!
In this article, we’ll tell you about our experience of using GraalVM and explain what it can and can’t do. We'll also walk you through the implementation process and offer some advice on when to use this tool for maximum effect.
What Is GraalVM?
GraalVM is a Java Virtual Machine (JVM) and Java Development Kit (JDK) designed for creating and executing Java code with a focus on performance and compatibility. Its advantage lies in the innovative GraalVM compiler that can be built on top of the standard Java HotSpot Virtual Machine (Oracle JDK) or OpenJDK. Developers turn to GraalVM to accelerate startup times and execution in existing apps, optimize resource use, and streamline the use of different programming languages in the same project.
Note: GraalVM comes in two versions: a free, open-source Community edition (based on OpenJDK) and a commercial Enterprise edition (built on Oracle JDK). We used the Community edition for our product, and it had all the tools we needed to achieve our goals.
GraalVM has three main components:
- The Java HotSpot Virtual Machine. This environment allows Java programs to run on any device and OS and converts the most-used (hot) portions of code into much faster machine code. The HotSpot VM works with Java and JVM languages.
- The GraalVM compiler. This can be used in two ways:
- Just-in-time (JIT) compilation translates platform-independent bytecode into machine code on the fly, improving performance at runtime.
- Ahead-of-time (AOT) compilation, GraalVM’s big advantage, works its magic by turning your app's code into a much smaller native binary for the OS of your choice at build time.
- The Truffle language implementation framework. This enables polyglot programming with non-JVM languages, including JavaScript (Node.js), Python, Ruby, and R. It also allows developers to create language-agnostic tools for code profiling, inspection, and analysis.
Other elements include GraalVM Updater — a utility for installing additional features — and advanced tools for debugging, profiling, optimizing resources, and monitoring performance.
GraalVM has a unique feature: it can work in three runtime modes.
JVM Runtime Mode
The HotSpot JVM runtime mode uses GraalVM's JIT compiler. It receives bytecode from the JVM, optimizes it, and sends it back to the JVM as low-level machine code.
You can use this mode to accelerate your Java and JVM-based apps.
Native Image Runtime Mode
AOT compilation takes place during your build, so you get an executable ready for immediate use. You don't even need to have a JVM installed—the native file includes all classes of the app, its dependencies, and the runtime library, as well as a statically linked JDK.
Use this option to build a compact native executable that starts up quickly.
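As a quick illustration of the workflow (a sketch that assumes a GraalVM installation with the Native Image component; `myapp.jar` is a placeholder for your application), building and running a native executable looks like this:

```shell
# Install the Native Image component via the GraalVM Updater
gu install native-image

# Compile the JAR into a native executable named "myapp"
native-image -jar myapp.jar myapp

# Run it directly - no JVM required
./myapp
```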
Java on Truffle
This version of the JVM is based on Oracle's own Truffle language implementation framework. It includes all the components of the Java Virtual Machine and allows developers to run Java and other languages through interpreters, enabling program interoperability. It's not part of the standard package, so you'll need to use the GraalVM Updater to install it.
This mode helps you leverage the strengths of different programming languages and libraries in one app.
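For example (a sketch assuming a GraalVM installation; the Java on Truffle component is distributed under the name `espresso`), installing and using this mode looks like:

```shell
# Install the Java on Truffle (Espresso) component
gu install espresso

# Run an application on the Truffle-based JVM via the -truffle flag
java -truffle -jar myapp.jar
```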
But enough technical details for now. Let's look at the practical benefits of using GraalVM and what they brought to our client project.
The Benefits of GraalVM
We were able to personally experience most of the benefits in this section, so we'll add our comments where relevant.
The default GraalVM JIT compiler produces improved code with a low appetite for memory and processing power. The compiler uses up to 62 algorithms (in the Enterprise version) to automatically analyze your code and select the best optimization methods on the fly.
As the graph below shows, the performance gain isn't always extraordinary, but it's present across the board. In practice, the results vary a lot depending on your workload and kind of code, and the external libraries you're using.
In our case, the Techstack team had already optimized the client’s code manually, so we didn't expect much from JIT compilation. What did solve our issue was GraalVM's Native Image technology.
AOT Native Image Compilation
In this mode, GraalVM employs ahead-of-time compilation to turn a Java app into a native binary. The AOT compiler does all the heavy lifting at build time: compiling the source code, loading and parsing configurations, analyzing dependencies, and building the class tree. The result is a lightweight executable ready for use.
This compilation option works efficiently and has several benefits:
- A smaller app footprint. Because the resulting native image binary uses fewer resources than the original JVM code, it's smaller and requires less memory to run.
- Faster startup time and improved performance. Once compiled, the executable will fire up instantly (with no warm-up) and deliver better throughput during peak loads.
- Framework compatibility. You can use microservice frameworks like Spring, Micronaut, Quarkus, and Helidon with GraalVM's native images.
- Better security. GraalVM's algorithms remove unused resources from your app and produce low-level machine code, reducing the attack surface available to cyberattacks.
- Perfect for use with containers. You can put the lightweight, fully functional native executables inside containers and benefit from improved startup times and security.
- Polyglot language support. You can use multiple languages with your Java-based apps, which gives you more flexibility.
Let's talk about that last item in more detail.
Unfortunately, not all popular programming languages have high-performance implementations. Virtual machines usually only support one or a few languages. As a result, you can't use different languages within one project without coding overhead and performance penalties.
GraalVM changes that. With its polyglot approach, you can
- Access the capabilities and libraries of other languages
- Reduce the overhead of working on redundant tasks such as garbage collection and compilation for each language
- Build tools that can be implemented across a variety of languages without having to write new code every time
These things are all possible thanks to GraalVM’s direct support for JVM languages and the Truffle language implementation framework for other languages.
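As a minimal sketch of what this looks like in code (it must run on GraalVM with the JavaScript component installed; the Polyglot API shown here is part of the GraalVM SDK), a Java program can evaluate JavaScript directly:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotDemo {
    public static void main(String[] args) {
        // Create a polyglot context and evaluate a JavaScript expression from Java
        try (Context context = Context.create()) {
            Value result = context.eval("js", "21 * 2");
            System.out.println(result.asInt());
        }
    }
}
```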
GraalVM gives you a complete toolkit for monitoring, debugging, profiling, and resource optimization. You can now use one set of tools for code written in every language supported by GraalVM:
- VS Code Extensions for an improved GraalVM polyglot experience with Visual Studio Code
- GraalVM Dashboard for visualizing certain aspects of Native Image compilations
- VisualVM for monitoring and troubleshooting apps written in the languages supported by GraalVM
- GraalVM Insight for tracing runtime behavior and getting insights
- Ideal Graph Visualizer for implementing languages on top of GraalVM
To sum up, we found that GraalVM offers a well-balanced set of features, and some of its functionality is truly unique.
So were there any downsides? Read the next section to find out.
GraalVM: Some Limitations Are Inevitable
To explain the limitations of GraalVM, we have to get fairly technical, and it takes a Java developer to appreciate the finer details. The key message is that you can always find workarounds, and the benefits GraalVM provides are worth any extra effort required.
Most problems arise from the fact that GraalVM uses a closed-world assumption model at build time. If something can't be reached at build time, the compiler assumes it doesn't exist and doesn’t include it in the image. More specifically,
- GraalVM performs static analysis from the main entry point at build time.
- If it can't reach some methods when creating a native image, it will disregard that code and won't add it to the final executable.
- You'll also have to tell GraalVM about things like reflection, serialization, resources, and dynamic proxies, as it’s not aware of the app's dynamic elements by default. This is done to improve performance.
- By default, GraalVM fixes your app's classpath at build time and initializes classes at runtime.
- Lazy class loading is not supported.
- GraalVM has limited support for Java-oriented debugging and monitoring tools with Native Image. You'll usually have to use tools built for those specific native languages.
Similar limitations affect Spring Boot. The beans defined in your application cannot change at runtime, which means that
- The Spring @Profile annotation and profile-specific configuration are not supported.
- Properties that change whether a bean is created are not supported (for example, @ConditionalOnProperty and .enable properties).
Overall, none of these limitations are critical, and we were able to work around them in our case. Speaking of which, let’s tell you about our specific experience with GraalVM.
From Theory to Practice: Techstack’s Experience with GraalVM
We've already mentioned some details of our journey with GraalVM, but here's the full account.
We needed to optimize a Spring Boot service for a cloud-based app. After replacing it with a lighter, more efficient version, we added K8s (Kubernetes) jobs. Most of the work was now assigned to these jobs, and that's when our problems began.
During peak loads, the jobs took a long time to start—60–80 seconds on average, and sometimes up to 100 seconds. This completely negated our efforts to optimize the service. We tried using a more powerful instance, but it didn’t solve the problem—the startup time was still unacceptable. What’s more, none of the tweaks we found in the Spring Boot guides could remedy the situation.
This is when we decided to try GraalVM. While the project is ongoing, we can report that GraalVM has helped us eliminate the slow startup issue. We're currently at 3.5 seconds max, no matter how much work we throw at the K8s jobs.
Here are the steps we took to implement GraalVM, what the challenges were, and how we overcame them.
Once we’d installed GraalVM on our test machine, we were ready to pre-process our Spring Boot application. This is what AOT processing of the Spring app generates:
- Java source code
- Bytecode (for dynamic proxies, etc.)
- Metadata (GraalVM JSON hint files):
  - Resource hints (resource-config.json)
  - Reflection hints (reflect-config.json)
  - Serialization hints (serialization-config.json)
  - Java proxy hints (proxy-config.json)
  - JNI hints (jni-config.json)
Here's how the process of creating a native image differs from a regular Java build:
Instead of packaging bytecode into a JAR file, we run it through GraalVM's AOT compiler, which removes unused code and adds the metadata, then builds a native executable.
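With the GraalVM Native Build Tools Gradle plugin applied (a sketch; the task name and output path come from the org.graalvm.buildtools.native plugin), kicking off a native build is a single command:

```shell
# Compile the Spring Boot app into a native executable
./gradlew nativeCompile

# The resulting binary lands under build/native/nativeCompile/
./build/native/nativeCompile/<app-name>
```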
To date, we’ve encountered a few issues with implementation, but solving them has been easier than we thought. Below are the steps we took to accomplish the task.
1. Tried to use the newest Spring Boot
We had Spring Boot version 2.6.2 and tried to update the Spring Boot (SB) version to 3.0.0, but there wasn't a stable release out yet. We also needed to add the corresponding native plugin to our build config:
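For reference, this is roughly what the plugins block looked like (the plugin version is illustrative; with Spring Boot 3, native support comes from the GraalVM Native Build Tools plugin):

```groovy
plugins {
    id 'org.springframework.boot' version '3.0.0'
    // GraalVM Native Build Tools plugin for native compilation
    id 'org.graalvm.buildtools.native' version '0.9.16'
}
```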
Since SB 3 had not been released yet, we got a strange exception in Spring initialization and couldn't find a way to configure the plugin.
2. Downgraded back to SB 2.x
We downgraded the Spring Boot version from 3.0.0 to 2.7.5. It was the latest SB 2.x version at the time. We also needed to add the corresponding native plugin.
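With Spring Boot 2.x, native support came from the experimental Spring AOT plugin used alongside the GraalVM Native Build Tools plugin. Our plugins block looked roughly like this (versions are illustrative):

```groovy
plugins {
    id 'org.springframework.boot' version '2.7.5'
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
    // Spring AOT plugin (Spring Native) for SB 2.x
    id 'org.springframework.experimental.aot' version '0.12.1'
    id 'org.graalvm.buildtools.native' version '0.9.16'
}
```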
We needed to add a plugin management configuration as well:
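The Spring AOT plugin was published to the Spring repository rather than the Gradle Plugin Portal, so settings.gradle needed a pluginManagement block along these lines (a sketch based on the Spring Native setup):

```groovy
pluginManagement {
    repositories {
        mavenCentral()
        // The experimental Spring AOT plugin lives in the Spring repository
        maven { url 'https://repo.spring.io/release' }
        gradlePluginPortal()
    }
}
```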
Since this combination of SB and plugin versions was proposed by Spring Initializr, everything worked fine.
3. Tried to launch native build of the project with all existing dependencies
Got a lot of exceptions during build time.
4. Removed all dependencies except Spring’s ones and started adding them one by one
Some dependencies didn’t break the build, but others caused problems that needed to be fixed.
5. Solved the problem with logging
- Java Util Logging is fully supported by GraalVM
- Logback has limited support
- Log4j 2 is not supported yet
There was no info about other providers. You need to try to build and fix any exceptions along the way.
We used our own SB starter with Logback plus the Logback Logstash encoder, and it didn't work with GraalVM: we got a lot of errors at build time, and even after fixing them, we got more errors at runtime.
So we removed our starter, rewrote the logging logic in logback.xml, and connected the encoder there.
We then enabled XML support, which is disabled by default:
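With the Spring AOT Gradle plugin, this is a one-line toggle in build.gradle (a sketch assuming the springAot DSL exposed by the Spring Native plugin):

```groovy
springAot {
    // XML support is stripped by default to keep the native image small
    removeXmlSupport = false
}
```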
We solved all the previous errors but got new ones related to reflection. The reason for this is the closed-world assumption. We added a reflection configuration for all the classes mentioned in the stack traces:
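A reflect-config.json entry looks like this (the class name below is a hypothetical example; in our case the entries came from the classes in our stack traces):

```json
[
  {
    "name": "net.logstash.logback.encoder.LogstashEncoder",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```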
And registered this config file in build config:
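With the Native Build Tools plugin, the file can be passed to the native-image compiler via buildArgs (a sketch; the file path is illustrative):

```groovy
graalvmNative {
    binaries {
        main {
            // Point native-image at our manual reflection configuration
            buildArgs.add('-H:ReflectionConfigurationFiles=' +
                    file('src/main/resources/reflect-config.json').absolutePath)
        }
    }
}
```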
Successful build with logging dependencies.
6. Added more dependencies and fixed an error related to incorrect build/run time
One of the dependencies used apache.commons.logging, so we got an error about incorrect class initialization and moved that package's initialization to build time:
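The fix is another native-image flag, shown here in the Native Build Tools DSL:

```groovy
graalvmNative {
    binaries {
        main {
            // Initialize the Commons Logging classes at build time instead of runtime
            buildArgs.add('--initialize-at-build-time=org.apache.commons.logging')
        }
    }
}
```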
Successful build with our third-party dependencies.
7. Added config for Spring YAML properties support
We enabled support for YAML files, which had been disabled by default:
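Like XML support, this is a toggle in the springAot block (again assuming the Spring AOT plugin's DSL):

```groovy
springAot {
    // Keep YAML property support in the native image
    removeYamlSupport = false
}
```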
The build was successful, and we were ready to check runtime behavior.
8. Fixed NoSuchBeanDefinitionException
This took us back to the closed-world assumption and its limitations: a profile must be specified at build time so the compiler knows which beans will be needed at runtime.
We set the dev profile and launched the app.
Then we got other exceptions at runtime.
9. Fixed problems with AWS SDK V1
We used several modules from AWS SDK V1 and got an exception at runtime. We found that AWS SDK V2 has built-in support for GraalVM, but we didn’t find any mentions of support for SDK V1. We decided to migrate to V2.
Successful runtime with AWS SDK dependencies.
10. Fixed exception about failed serialization
We added our own classes to the reflect-config.json file mentioned earlier.
Serialization was fixed. Got other runtime exceptions.
11. Fixed exception related to JNI call
We got an exception about the use of ImageIO, which relies on system graphics components through JNI calls.
We removed ImageIO and used FFmpeg for this operation instead, as we were already using it elsewhere.
Our native application was running without any exceptions.
12. Built in Docker
The native plugin provides Buildpacks for building an image, but we used our custom build process.
We got an exception: executable file not found. Our application is a sub-module, and we had applied the java-library plugin.
We added the gcompat and libstdc++ libraries so the native executable could launch, and also set the profile and application name in the Dockerfile.
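Putting the pieces together, our Dockerfile ended up along these lines (a sketch; the base image, paths, and profile name are illustrative):

```dockerfile
FROM alpine:3.16

# gcompat provides glibc compatibility on Alpine; libstdc++ is needed by the binary
RUN apk add --no-cache gcompat libstdc++

COPY build/native/nativeCompile/app /app

# Profile and application name baked into the container
ENV SPRING_PROFILES_ACTIVE=prod \
    SPRING_APPLICATION_NAME=my-service

ENTRYPOINT ["/app"]
```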
However, we needed a single container image for all non-dev profiles, so we added a spring.profiles.group entry with the needed profiles and built for the group instead of a specific profile.
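Profile groups are declared in the application properties (the group and profile names here are illustrative):

```yaml
spring:
  profiles:
    group:
      # Building for the "cloud" group covers every non-dev profile at once
      cloud: "stage,prod"
```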
13. Fixed HTTP request errors
In a non-local environment, we got an exception during AWS authorization. We used EC2 instance authorization, but the AWS default authorization chain failed.
The official documentation says HTTPS requests are disabled by default in native images, but it turned out HTTP requests were disabled as well.
We enabled both protocols in the config:
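The corresponding native-image flag, again passed through the Native Build Tools DSL:

```groovy
graalvmNative {
    binaries {
        main {
            // Explicitly enable both HTTP and HTTPS URL protocols in the native image
            buildArgs.add('--enable-url-protocols=http,https')
        }
    }
}
```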
The final configuration code in build.gradle:
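Pulling the configuration fragments from the steps above together, the GraalVM-related part of our build.gradle looked roughly like this (versions, paths, and DSL options are illustrative, based on the Spring AOT and Native Build Tools plugins):

```groovy
plugins {
    id 'org.springframework.boot' version '2.7.5'
    id 'io.spring.dependency-management' version '1.0.15.RELEASE'
    id 'org.springframework.experimental.aot' version '0.12.1'
    id 'org.graalvm.buildtools.native' version '0.9.16'
}

// Re-enable XML and YAML support stripped by Spring AOT by default
springAot {
    removeXmlSupport = false
    removeYamlSupport = false
}

graalvmNative {
    binaries {
        main {
            // Manual reflection configuration for classes found in stack traces
            buildArgs.add('-H:ReflectionConfigurationFiles=' +
                    file('src/main/resources/reflect-config.json').absolutePath)
            // Initialize Commons Logging at build time
            buildArgs.add('--initialize-at-build-time=org.apache.commons.logging')
            // Enable HTTP/HTTPS for AWS SDK calls
            buildArgs.add('--enable-url-protocols=http,https')
        }
    }
}
```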
As you can see, we mostly ran into trouble with external dependencies, logging, and reflection. However, we found workarounds, and the resulting native executable works as intended with a remarkable boost in startup time.
Comparing a Classic Build with the GraalVM Build
GraalVM’s AOT compilation solved many of our problems, but our build time and resources increased as a result.
As you can see, it takes much longer to build with GraalVM, and the process does hog four times more memory. This needs to be taken into account when allocating build resources.
Still, the payoff for us is worth it, as we’ve achieved a drastic reduction in startup time. What’s more, the native image is almost 2.5x smaller in size, and the binary requires four times less RAM to run. This is a major advantage when it comes to keeping cloud infrastructure costs low.
In a nutshell, GraalVM has completely fulfilled our expectations, and we'll definitely be adding its Native Image technology to our arsenal of tools at Techstack.
So will GraalVM work for your project, too?
When Using GraalVM Makes the Most Sense: Our Take
Based on our experience, we can think of several cases where using GraalVM can bring the most benefits.
Startup Time Is Critical
In our case, we were able to achieve a lightning-fast startup of just 3.5 seconds, down from the original monstrous 80–100 seconds. While your mileage may vary, GraalVM's Native Image technology does make a big difference.
You're Starting a Project on Spring Boot 3.0
Spring Boot is widely used for creating web apps and microservices, and it works with GraalVM with little or no overhead. Use version 3.0 if you want out-of-the-box support of GraalVM native images.
Here's our advice: you'll always be better off using the latest available version of Spring Boot. Don't try to run GraalVM with older versions, as you may run into compatibility issues that will negate your gains in performance.
You Have Few Third-party Libraries
Getting GraalVM to work correctly with a large number of external libraries was a challenge. We often had to manually set up dependencies when the tool reported missing libraries at build time or during runtime.
This situation may improve in future versions, but for now, we recommend limiting GraalVM implementation to projects with just a small number of external libraries.
You Want To Reduce the Cost of Cloud Infrastructure
Thanks to the low memory and CPU requirements of apps produced with GraalVM, you can save on resource usage in the cloud. Of course, your actual savings will depend on factors such as the type of workload and the number of external dependencies.
You Need a Lightweight Application (K8s Jobs, AWS Lambda)
GraalVM is the perfect solution to Java code cold-start problems for cloud apps. With its optimization algorithms, you can build efficient, short-running processes for both containerized and serverless applications.
When applied with these cases in mind, the technology behind GraalVM can tangibly boost app efficiency. It’s no wonder that major players in several industries are using this tool in their products.
GraalVM Use Cases: Industry Leaders Are In
Facebook is heavily reliant on Java for its mobile app, big data, and backend services. As soon as the team started running Spark workloads on GraalVM, CPU usage for Facebook's big data processing services dropped by 10%. Replacing the OpenJDK runtime with GraalVM also helped Facebook improve the performance of their Java applications.
You can find more info on how Facebook migrated their services to GraalVM in the full article.
Twitter was looking for ways to improve the platform's uptime and reliability while keeping costs sustainable. They wanted a future-proof solution that would be as language-agnostic as possible.
After extensive testing, Twitter updated many of their Scala-based services with GraalVM's compiler. Those services saw an immediate 11% improvement in CPU usage efficiency. To put this into perspective, switching compilers rarely improves CPU usage efficiency by more than 2%.
You'll find the full account of Twitter's move to GraalVM here.
The team at Goldman Sachs created their own programming language, Slang, back in 1992. It was state-of-the-art at the time, but it's now in dire need of an upgrade. Currently, more than 150 million lines of code are written in Slang, and Goldman Sachs uses a lot of this code in its critical systems. The main problems they're facing are poor performance and limited interoperability.
The company's engineers are now using GraalVM and its Truffle language implementation to overcome their challenges. Slang is based on C and C++, which requires using Sulong, GraalVM's LLVM bitcode interpreter. With these three tools, the team plans to improve performance, achieve better interoperability, and possibly simplify the transition to other languages.
You can watch Goldman Sachs’ presentation to learn more about their story and follow their decision-making process.
Overall, we’re thoroughly impressed with GraalVM. It's an extremely useful tool for Java developers, and it delivers on its promises. Here’s a summary of our thoughts:
- GraalVM is great for speeding up applications written in Java and JVM languages like Kotlin, Scala, Groovy, and others, without the need to make major changes to the code.
- GraalVM's Native Image is a state-of-the-art technology that produces lightweight native executables ready for use in the cloud.
- The VM reduces your app's CPU usage and memory footprint, saving the costs of cloud resources.
- GraalVM's Truffle framework takes polyglot interoperability to a whole new level, making it easy to cross borders between different programming languages within an app.
And we’re done! We hope this article has answered your questions and shed some light on the intricacies of using GraalVM.
At Techstack, we’re committed to sharing our expertise to help everyone build better-quality software. It’s the same approach that’s helped us build amazing products and forge over 20 long-term partnerships. If you’re looking to develop a mobile app or any other custom software, we’d love to be part of your journey.