In this post, I share my experience testing the performance of three popular JSON libraries in Kotlin: Jackson, Moshi, and kotlinx-serialization. The purpose was to measure how each library handles JSON parsing under different payload sizes and to ensure that the testing methodology minimizes external factors—like JVM startup time—so that the results are as accurate as possible.
## Why Performance Testing Matters
JSON libraries are a critical part of many applications, especially those dealing with extensive I/O or parsing large volumes of data. Evaluating their performance helps in making informed decisions about which library to use when efficiency is a priority. In this project, I tested three libraries under multiple conditions using a Dockerized setup, which guarantees a consistent environment across test runs.
## Testing Environment and Setup

To achieve reliable results, I set up a carefully controlled testing environment that incorporates several best practices:
- **Dockerized Testing:** Using Docker, I created a controlled, lightweight runtime environment. This eliminates inconsistencies that might arise from varying local settings or system configurations.
- **JVM Warm-Up:** A global dummy test runs before the actual performance tests, ensuring the JVM is fully warmed up and reducing the impact of first-use overhead on the results.
- **Multiple Test Modes:** The tests are run on three different JSON payload sizes:
  - Short mode: a small JSON file (100 lines).
  - Medium mode: a moderately sized JSON file (1,000 lines).
  - Long mode: a large JSON file (10,000 lines).

  By varying the payload sizes, I was able to observe how each library's performance scales with increasing data complexity.
- **Automated Test Script:** A Bash script automates the following:
  - Building the Docker image.
  - Running the tests in all specified modes.
  - Extracting performance metrics from the test output using a portable regex command (`grep -E`).
  - Compiling the results into a Markdown file for easy analysis.
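The warm-up-then-measure flow described above can be sketched in a few lines of Kotlin. This is a minimal illustration rather than the project's actual harness: `parsePayload` is a hypothetical stand-in for the real Jackson, Moshi, or kotlinx-serialization calls, and the warm-up count is arbitrary.

```kotlin
import kotlin.system.measureTimeMillis

// Hypothetical stand-in for a real library call (Jackson, Moshi, or
// kotlinx-serialization); here it just counts the fields in a flat JSON object.
fun parsePayload(json: String): Int =
    json.count { it == ':' }

// Runs a warm-up phase first, so JIT compilation and class loading do not
// distort the timed phase, then measures the total time for `iterations` runs.
fun benchmark(json: String, iterations: Int, warmUp: Int = 1_000): Long {
    repeat(warmUp) { parsePayload(json) }          // global dummy test / warm-up
    return measureTimeMillis {
        repeat(iterations) { parsePayload(json) }  // timed benchmark loop
    }
}

fun main() {
    val json = """{"id": 1, "name": "test", "active": true}"""
    val elapsed = benchmark(json, iterations = 10_000)
    println("Elapsed: $elapsed ms")
}
```

In a real run, the warm-up and the timed loop would call the same library entry point, so that the JIT optimizes exactly the code being measured.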
## Methodology

The approach to performance testing involved these key steps:

- **Building the Docker Image:** The Dockerfile uses a multi-stage build: the project is first compiled in a Gradle-based image, then packaged into a slim OpenJDK runtime image.
- **Executing the Tests:** For each test mode, the Docker container is run with a command-line parameter specifying the JSON payload to use. The application first loads the JSON data and performs a global dummy test to warm up the JVM, then runs the benchmark against each JSON library.
- **Extracting and Analyzing Results:** The script captures the test output and extracts the execution times (in milliseconds) for Jackson, Moshi, and kotlinx-serialization, using an extended regex (`grep -oE`) for compatibility across systems.
- **Generating a Results Table:** Finally, the script compiles the output into a neatly formatted Markdown table that summarizes the performance of each library across the different test modes.
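In the actual project the extraction and table-generation steps happen in Bash via `grep -oE`, but the same logic can be sketched in Kotlin. The output-line format shown here (`Jackson: 1344.11 ms` and so on) is an assumption made for illustration:

```kotlin
// Sketch of the metric-extraction step. The benchmark output format assumed
// here ("Jackson: 1344.11 ms", etc.) is illustrative; the real project does
// this extraction with `grep -oE` in the Bash script.
val timingPattern = Regex("""(Jackson|Moshi|Kotlinx): (\d+(?:\.\d+)?) ms""")

fun extractTimings(output: String): Map<String, Double> =
    timingPattern.findAll(output)
        .associate { it.groupValues[1] to it.groupValues[2].toDouble() }

fun toMarkdownRow(mode: String, iterations: Int, t: Map<String, Double>): String =
    "| $mode | $iterations | ${t["Jackson"]} | ${t["Moshi"]} | ${t["Kotlinx"]} |"

fun main() {
    val output = """
        Jackson: 1344.11 ms
        Moshi: 1693.91 ms
        Kotlinx: 1225.98 ms
    """.trimIndent()
    // prints: | Short | 10000 | 1344.11 | 1693.91 | 1225.98 |
    println(toMarkdownRow("Short", 10_000, extractTimings(output)))
}
```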
## Test Results
After running the tests, I compiled the performance results for each mode. For the short test mode, I executed two independent runs to ensure consistency. The figures below represent the average timings for that mode.
Below is the summary of the performance measurements:
| Test Mode | Iterations | Jackson (ms) | Moshi (ms) | Kotlinx (ms) |
|---|---|---|---|---|
| Short | 10,000 | 1344.11 | 1693.91 | 1225.98 |
| Medium | 1,000 | 1397.03 | 1764.63 | 1387.09 |
| Long | 100 | 1435.56 | 1888.05 | 1541.89 |
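Since each mode uses a different iteration count, dividing the total time by the iteration count makes the rows comparable. For Jackson, for example, short mode works out to 1344.11 ms / 10,000 ≈ 0.13 ms per parse, while long mode works out to 1435.56 ms / 100 ≈ 14.36 ms per parse. A small illustrative helper (not part of the project's script):

```kotlin
// Normalizes the table's total times into per-iteration averages so that
// modes with different iteration counts can be compared directly.
fun perIteration(totalMs: Double, iterations: Int): Double =
    totalMs / iterations

fun main() {
    // Values taken from the Short and Long rows of the table above.
    println(perIteration(1344.11, 10_000))  // ~0.1344 ms per parse (Short, Jackson)
    println(perIteration(1435.56, 100))     // ~14.36 ms per parse (Long, Jackson)
}
```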
## Observations and Conclusion

These results highlight several interesting points about JSON parsing performance in Kotlin:

- **Short Mode:** The averaged results for the short JSON payload reveal that kotlinx-serialization has a slight edge over Jackson and Moshi, possibly due to optimizations in handling small-to-moderate data sizes.
- **Medium and Long Modes:** As the payload size increases, the differences between the libraries narrow slightly. Moshi tends to take a bit longer, while Jackson and kotlinx-serialization remain more comparable in performance.
- **Testing Consistency:** The agreement between the two independent runs for the short mode underscores the reliability of the testing approach, especially the use of Docker and the JVM warm-up routine to minimize external noise.
In summary, the performance of JSON libraries can vary depending on the context—such as the size of the JSON payload and the number of iterations performed. This benchmark provides a solid baseline from which to assess these libraries. Ultimately, the best choice of library will depend on your project’s specific performance and usability requirements.
I hope these insights help you make an informed decision when selecting a JSON parsing library for your Kotlin projects.
If you’re interested in this performance testing approach and want to dive deeper, you can check out and try the project yourself on GitHub: kotlin-json-performance.