6 min read | Saved February 14, 2026
Do you care about this?
This article examines how many HTTP requests per second a single machine can handle using a simple setup. It details the testing process, configurations, and results from various load levels on different machine specifications. The findings highlight performance limits and response times under sustained loads.
If you do, here's more
The article explores how many HTTP requests a single machine can handle, using a straightforward setup with a Java 21-based REST API built on Spring Boot 3 and PostgreSQL. The tests were conducted on various machine configurations, ranging from 1 CPU with 2 GB of memory to 4 CPUs with 8 GB. The infrastructure was hosted on DigitalOcean and automated using a Python script, making it easy to replicate the environment. The tests focused mainly on read requests, simulating real-world usage with a 20% write load.
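The load profiles described above amount to a constant request rate with an 80/20 read/write mix. A minimal Python sketch of how such a schedule could be generated (the function name, the even spacing, and the fixed seed are illustrative assumptions, not details from the article):

```python
import random

def build_schedule(rps, duration_s, write_fraction=0.2, seed=42):
    """Return a list of (send_time, method) tuples for a constant-rate load.

    Requests are spaced evenly at 1/rps intervals; roughly write_fraction
    of them are POSTs (writes), the rest GETs (reads).
    """
    rng = random.Random(seed)
    interval = 1.0 / rps
    schedule = []
    for i in range(int(rps * duration_s)):
        # Tag ~20% of requests as writes, matching the article's mix.
        method = "POST" if rng.random() < write_fraction else "GET"
        schedule.append((i * interval, method))
    return schedule

# The "average" profile from the article: 200 requests per second.
schedule = build_schedule(rps=200, duration_s=5)
writes = sum(1 for _, m in schedule if m == "POST")
print(len(schedule), writes / len(schedule))
```

A real driver would then fire each request at its scheduled time against the API; the schedule itself is the part that encodes the load profile.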
Four load profiles were tested: low (20 requests per second), average (200), high (1000), and very high (4000). Each profile ran across multiple machines in parallel, giving a broader view of performance. On the small machine, the average load performed well, handling 750 requests in about 15 seconds without any timeouts. At very high load, however, the same small machine struggled: 28% of requests hit connection timeouts and more than half hit request timeouts.
The detailed statistics showed how response times shifted with load: the mean response time under high load was 0.013 seconds, while under very high load it spiked to an average of 4 seconds. The article illustrates the limits of a single machine under heavy traffic and emphasizes why understanding these metrics matters for system design and architecture choices.
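Figures like the mean response times and the timeout percentages could be derived from raw per-request latencies along these lines (a hedged sketch; the 5-second cutoff and the field names are assumptions, not the article's actual tooling):

```python
import math

def summarize(latencies_s, timeout_s=5.0):
    """Compute the mean latency of successful requests and the share of
    requests that ran past the timeout (counted as timed out)."""
    timed_out = sum(1 for t in latencies_s if t >= timeout_s)
    ok = [t for t in latencies_s if t < timeout_s]
    mean = sum(ok) / len(ok) if ok else math.nan
    return {"mean_s": mean, "timeout_rate": timed_out / len(latencies_s)}

# Synthetic example: 7 fast responses and 3 that exceeded the timeout.
stats = summarize([0.01] * 7 + [6.0] * 3)
print(stats)
```

Separating connection timeouts from request timeouts, as the article's results do, would just mean tagging each sample with its failure mode before aggregating.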