Day 2: Virtual Threads vs. Platform Threads

Lesson 2 · 15 min

Welcome back, fellow architects and engineers!

Yesterday, we laid the groundwork, bootstrapping our Spring Boot microservice and understanding its basic lifecycle. We talked about how a simple Spring application comes to life. Today, we’re peeling back another layer, diving into a feature that’s not just an optimization but a paradigm shift for high-concurrency Java applications: Virtual Threads.

For systems aiming for 100 million requests per second, the choice of concurrency model isn't academic; it's existential. The traditional Java threading model, tied directly to operating system (OS) threads, has served us well, but it hits inherent limits when pushed to extreme scales, especially in I/O-bound microservices. Let's understand why and how Virtual Threads shatter those limits.

Agenda for Day 2:

  1. Revisiting Platform Threads: Understanding their cost and limitations.

  2. Introducing Virtual Threads (Project Loom): The game-changer.

  3. System Design Implications: How virtual threads reshape scalability, resilience, and developer productivity for hyperscale.

  4. Architecture Fit: Where do virtual threads sit in our Spring Boot microservice?

  5. Control & Data Flow: The subtle shift in execution.

  6. Sizing for Production: Rethinking capacity planning.

  7. Hands-on Assignment: Building and observing the difference.

[Diagram: State Machine. A virtual thread moves from Runnable to Running when it mounts a carrier; a yield or preempt returns it to Runnable; an I/O wait unmounts it into Blocked; when data is ready it remounts and becomes Runnable again.]

[Diagram: Flowchart. HTTP request arrives → JVM spawns a virtual thread (VT) → VT makes a blocking I/O call (e.g., to the DB) → VT unmounts and its carrier thread is released → I/O done: VT remounts and returns the response.]

[Diagram: Component Architecture. Thousands to millions of virtual threads run atop a small pool of carrier threads (platform threads) inside the Java Virtual Machine (Project Loom), which in turn runs on OS threads managed by the kernel.]


Core Concepts: The Threading Evolution

1. The Bottleneck of Platform Threads

Imagine a bustling restaurant. Each customer (request) needs a dedicated waiter (platform thread) to take their order, deliver it to the kitchen (database call), wait for the food, and serve it.

  • Platform Threads are essentially OS threads. They are heavy. Each one consumes significant memory (megabytes of stack space) and requires the OS to schedule its execution.

  • When a waiter goes to the kitchen and waits for the food to be ready (a blocking I/O operation like a database query or an external API call), they are unavailable to serve other customers. The OS thread is blocked.

  • To handle more customers concurrently, you need more waiters. But creating too many waiters is expensive (memory), and managing them (context switching by the OS) becomes a performance overhead.

  • In a microservice, this means your Tomcat/Netty thread pool, typically configured for a few hundred threads, quickly becomes saturated when requests involve blocking I/O. Your service might have plenty of CPU available, but it grinds to a halt because all its platform threads are stuck waiting. This is a primary bottleneck for I/O-bound microservices trying to hit hyperscale.
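The saturation described above is easy to reproduce in plain Java. In this sketch (class name is mine, numbers are illustrative), a 4-thread fixed pool is handed 16 tasks that each block for 100 ms; because a blocked platform thread serves nobody else, the batch takes roughly four sequential 100 ms "waves" instead of one.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PlatformPoolBottleneck {
    public static void main(String[] args) throws Exception {
        int poolSize = 4;
        int tasks = 16;
        CountDownLatch done = new CountDownLatch(tasks);
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(100); // simulated blocking I/O pins this OS thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        long millis = (System.nanoTime() - start) / 1_000_000;
        // 16 tasks / 4 threads = 4 sequential "waves" of ~100 ms each
        System.out.println(tasks + " blocking tasks on " + poolSize
                + " platform threads took ~" + millis + " ms");
    }
}
```

Scale the same arithmetic up to a 200-thread Tomcat pool and multi-second database calls, and the ceiling becomes obvious.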

2. The Dawn of Virtual Threads

Now, imagine the same restaurant, but with a magical twist. When a waiter (now called a carrier thread) takes an order and goes to the kitchen, instead of waiting there, they instantly teleport back to the dining area, ready to serve another customer. The moment the food is ready, they instantly teleport back to the kitchen, pick up the food, and serve the original customer.

  • Virtual Threads are lightweight, user-mode threads managed by the JVM, not directly by the OS. They are multiplexed onto a smaller pool of platform threads (the "carrier threads").

  • When a virtual thread performs a blocking I/O operation (like our waiter waiting for food), it unmounts from its carrier thread. The carrier thread is then free to pick up another waiting virtual thread. When the I/O operation completes, the virtual thread remounts onto an available carrier thread to continue its execution.

  • This means a few hundred platform threads can efficiently handle millions of virtual threads, each representing a concurrent operation. Virtual threads have a tiny memory footprint (kilobytes), making them incredibly cheap to create.

Why this is a game-changer for 100M req/s:
For a hyperscale system, most microservices are I/O-bound, spending much of their time waiting for databases, caches, message queues, or other microservices. Virtual Threads make "blocking" cheap. You can write simple, synchronous-looking code that internally leverages asynchronous I/O without the complexity of reactive programming (Mono, Flux, callbacks). This dramatically increases the number of concurrent requests a single microservice instance can handle, boosting throughput and resource utilization.

System Design Implications

  • Scalability: Directly enables higher concurrency with fewer resources. You can process far more concurrent requests per JVM instance, leading to better horizontal scaling (fewer instances needed for the same load) and better vertical scaling (more throughput per instance).

  • Resilience: While not directly a fault-tolerance mechanism, by allowing more requests to be processed concurrently and efficiently, it reduces resource contention. This can make your service more responsive under load, potentially helping it weather transient spikes or slow dependencies better.

  • Developer Productivity: This is huge. Engineers can write straightforward, imperative, blocking-style code for I/O operations without incurring the performance penalties of traditional platform threads. No more complex reactive pipelines just to achieve high concurrency for I/O-bound tasks. This simplifies debugging, reasoning about code, and onboarding new team members.

Architecture Fit in Spring Boot Microservices

Virtual Threads are not a new architectural component; they are a runtime enhancement to Java's concurrency model. They seamlessly integrate into existing Spring Boot applications running on Java 21+.

  • Spring Web MVC (Servlet Stack): Spring Boot 3.2+ on Java 21+ allows you to configure your servlet container (e.g., Tomcat, Jetty) to use virtual threads for handling incoming requests. This means each incoming HTTP request, instead of consuming a traditional platform thread from the servlet container's pool, will be handled by a virtual thread.

  • Spring WebFlux (Reactive Stack): While WebFlux already handles concurrency asynchronously, virtual threads can still simplify certain internal blocking calls or integrate better with libraries that are not fully reactive. However, the primary benefit is for the traditional blocking model.

  • Internal Operations: Any java.util.concurrent.Executor you use for internal parallel processing can be replaced or wrapped with Executors.newVirtualThreadPerTaskExecutor(), allowing your internal tasks to also benefit from lightweight concurrency.
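For that last point, a configuration sketch might look like the following (assuming Spring Boot 3.2+ on Java 21+; the bean name is illustrative, not a Spring convention):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConcurrencyConfig {

    // Replaces a bounded platform-thread pool with one virtual thread per
    // submitted task; close() on shutdown waits for in-flight tasks.
    @Bean(destroyMethod = "close")
    public ExecutorService internalTaskExecutor() {
        return Executors.newVirtualThreadPerTaskExecutor();
    }
}
```

Any component that injects this ExecutorService submits work exactly as before; only the threading underneath changes.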

Control Flow & Data Flow

The high-level control flow (HTTP request -> Controller -> Service -> Repository) remains the same. The magic happens within the execution of these steps. When a Spring MVC controller method using a virtual thread makes a blocking call (e.g., repository.findById()), the virtual thread unmounts. The carrier thread is released to serve another virtual thread. When the database returns data (data flow resumes), the original virtual thread remounts and continues processing.
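You can watch this unmount/remount in plain Java (Java 21+; class name is mine): a virtual thread's toString() includes its current carrier, and after a blocking call the carrier may be a different worker. Whether it actually changes is up to the scheduler, so treat the output as illustrative.

```java
public class MountDemo {
    public static void main(String[] args) throws Exception {
        Thread vt = Thread.ofVirtual().name("demo-vt").start(() -> {
            // toString() looks roughly like VirtualThread[#21,demo-vt]/runnable@ForkJoinPool-1-worker-1
            System.out.println("before blocking: " + Thread.currentThread());
            try {
                Thread.sleep(100); // the VT unmounts here; its carrier is released
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // after remounting, the carrier suffix may name a different worker
            System.out.println("after blocking:  " + Thread.currentThread());
        });
        vt.join();
    }
}
```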

Sizing for Production

With virtual threads, you no longer need huge platform thread pools for I/O-bound services. Instead of sizing a pool for (max_concurrent_requests * average_io_wait_time_ratio), you size your carrier thread pool (by default a dedicated ForkJoinPool scheduler, distinct from ForkJoinPool.commonPool(), with roughly one worker per CPU core) based on the number of CPU cores. The number of virtual threads can be orders of magnitude higher. This shifts capacity planning focus from thread count to CPU and memory (heap for application data, plus the small heap-allocated stacks of the virtual threads themselves).
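A back-of-envelope version of that shift, with illustrative numbers (not taken from the lesson): by Little's law, concurrency = throughput × latency, so a fixed thread count caps throughput whenever requests spend their latency blocked.

```java
public class CapacitySketch {
    public static void main(String[] args) {
        int latencyMillis = 50;        // time each request spends mostly blocked on I/O
        int platformThreads = 200;     // a typical servlet thread pool size

        // Little's law rearranged: max throughput = threads / latency
        long maxRpsPlatform = platformThreads * 1000L / latencyMillis;
        System.out.println("Platform-thread cap: " + maxRpsPlatform + " req/s"); // 4000

        // With a million cheap virtual threads, thread count stops being the cap;
        // CPU, memory, and downstream capacity become the real limits.
        long virtualThreads = 1_000_000L;
        long maxRpsVirtual = virtualThreads * 1000L / latencyMillis;
        System.out.println("Thread-count cap with VTs: " + maxRpsVirtual + " req/s"); // 20000000
    }
}
```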


Assignment: Observe the Threads!

Your mission, should you choose to accept it, is to build a simple Spring Boot microservice that vividly demonstrates the difference between platform threads and virtual threads.

Goal: Create two HTTP endpoints:

  1. /platform-thread-task: Simulates a blocking I/O operation using a traditional Thread.sleep(). Observe the thread name in logs.

  2. /virtual-thread-task: Simulates the same blocking I/O operation, but executes it within a virtual thread. Observe its thread name and compare.

Steps:

  1. Project Setup: Create a new Spring Boot 3.2+ project with Java 21+. Add Spring Web and Lombok dependencies.

  2. Controller: Create a @RestController class.

  3. Platform Thread Endpoint:

  • Define a GET endpoint /platform-thread-task.

  • Inside, log the current Thread.currentThread().getName().

  • Add Thread.sleep(2000) to simulate a 2-second blocking I/O call.

  • Log the thread name again after sleep.

  • Return a simple message.

  4. Virtual Thread Endpoint:

  • Define a GET endpoint /virtual-thread-task.

  • Crucially, enable virtual threads for Spring Web. In application.properties, add spring.threads.virtual.enabled=true. Note that this property applies to the whole servlet container: once enabled, every incoming request (including /platform-thread-task) is handled on a virtual thread, so the cleanest comparison is to run the app once with the property off and once with it on.

  • Inside this endpoint, log the current Thread.currentThread().getName(). This will now be a virtual thread!

  • Add Thread.sleep(2000) to simulate a 2-second blocking I/O call.

  • Log the thread name again after sleep.

  • Return a simple message.

  5. Build & Run: Compile and run your Spring Boot application.

  6. Test & Observe:

  • Open two terminal windows. In one, run tail -f logs/application.log (or similar, depending on your logging setup).

  • First start the app with spring.threads.virtual.enabled left unset (or set to false) and use curl to hit both endpoints. Note the thread names in the logs (e.g., http-nio-8080-exec-X): both requests run on platform threads from the servlet container's pool.

  • Now set spring.threads.virtual.enabled=true, restart, and hit both endpoints again. The logged names now belong to virtual threads (the exact naming scheme varies by Spring Boot version).

  • Finally, hit an endpoint many times concurrently in each configuration (e.g., open many browser tabs or use ab or hey). Observe how the virtual-thread run keeps responding under load that saturates a fixed platform thread pool.

This hands-on exercise will solidify your understanding of how virtual threads manifest in a real Spring Boot application.


Solution Hints:

Your application.properties should contain:

```properties
server.port=8080
spring.threads.virtual.enabled=true
# Adjust the package name below to match your project
logging.level.com.example.demo=DEBUG
logging.file.name=logs/application.log
```

Your Controller might look something like this:

```java
package com.example.demo;

import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Slf4j
public class ThreadDemoController {

    @GetMapping("/platform-thread-task")
    public String platformThreadTask() throws InterruptedException {
        String threadName = Thread.currentThread().getName();
        log.info("Platform Thread Task: Started on thread {}", threadName);
        Thread.sleep(2000); // Simulate blocking I/O
        log.info("Platform Thread Task: Finished on thread {}", threadName);
        return "Platform Thread Task Completed on " + threadName;
    }

    @GetMapping("/virtual-thread-task")
    public String virtualThreadTask() throws InterruptedException {
        String threadName = Thread.currentThread().getName();
        log.info("Virtual Thread Task: Started on thread {}", threadName);
        Thread.sleep(2000); // Simulate blocking I/O
        log.info("Virtual Thread Task: Finished on thread {}", threadName);
        return "Virtual Thread Task Completed on " + threadName;
    }
}
```

Notice how the code for both endpoints is identical. The magic of virtual threads is that you don't change your code much; you change the runtime environment to leverage them. The spring.threads.virtual.enabled=true property is the key here for Spring Web: with it enabled, both endpoints run on virtual threads, and with it disabled, both run on platform threads, which is why toggling it between runs makes the contrast visible.
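If you want a sanity check that is independent of thread names (which vary by Spring Boot version), Thread.isVirtual() works anywhere on Java 21+. This standalone snippet (class name is mine) demonstrates it; in the controller you could log the same call alongside the thread name.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class IsVirtualCheck {
    public static void main(String[] args) throws Exception {
        AtomicBoolean sawVirtual = new AtomicBoolean(false);
        Thread vt = Thread.ofVirtual().name("check-vt").start(() ->
                sawVirtual.set(Thread.currentThread().isVirtual()));
        vt.join();
        System.out.println("spawned thread reported isVirtual=" + sawVirtual.get());
        System.out.println("main thread reported isVirtual=" + Thread.currentThread().isVirtual());
        if (!sawVirtual.get()) throw new AssertionError("expected a virtual thread");
    }
}
```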

This simple exercise will make the concept tangible. In hyperscale systems, this translates directly to handling millions of concurrent users without breaking a sweat, all while keeping your codebase readable and maintainable.

Happy coding, and see you in the next lesson where we'll dive deeper into building resilient microservices!
