
I recently worked through a stability issue on a VPS where memory usage kept spiking and maxing out, requiring a reboot to bring the server back up.
I use ServerPilot to provision and manage my servers; it allows a handful of sites to run on one VPS. Since each site keeps its own logs, tracking down the source of a spike can be tricky.
ServerPilot’s top-tier plan includes a metrics dashboard with per-app memory breakdowns, and the ability to switch plans on demand made it possible to quickly identify which site was responsible.
The problematic site’s access logs revealed hundreds of requests for WooCommerce product pages within 1-2 second bursts. These bursts were spaced a few hours apart, each time from a different IP.
I ended up handling this at the Nginx level with rate limiting, stopping the traffic before it even reached PHP.
Rate limiting on ServerPilot
Step 1: create the rate limit zone
Create /etc/nginx-sp/http.d/limit-req.conf:
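The original file isn’t reproduced here, so what follows is a sketch of one way to do it: a map keys the limit on the client IP only for WooCommerce URLs, so everything else passes through untouched. The zone name, size, rate, and the /product/ and /product-category/ paths are illustrative; adjust them to your own site and permalink structure.

    # Use the client IP as the limit key for WooCommerce URLs only;
    # an empty key means the request is not counted or limited.
    map $request_uri $woo_limit_key {
        default                          "";
        ~^/(product|product-category)/   $binary_remote_addr;
    }

    # Shared zone "woocommerce": 10 MB of state, 2 requests per second per IP.
    limit_req_zone $woo_limit_key zone=woocommerce:10m rate=2r/s;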
Step 2: apply to your app
Create /etc/nginx-sp/vhosts.d/yourapp.d/10-rate-limit.conf:
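Again as a sketch, assuming the zone defined above: this file is included in the app’s server block, so two directives are enough, and the limit only ever bites on the URLs captured by the map.

    # Enforce the zone, allowing a short burst before rejecting requests.
    # nodelay serves the burst immediately instead of queuing it.
    limit_req zone=woocommerce burst=10 nodelay;

    # Return 429 Too Many Requests rather than Nginx's default 503.
    limit_req_status 429;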
This example only rate limits the WooCommerce pages that were being targeted. The burst setting allows for short bursts of activity, such as quickly browsing through a category.
Step 3: test and deploy
Test the configuration:
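(Assuming ServerPilot’s Nginx binary and service are both named nginx-sp, consistent with the /etc/nginx-sp paths above.)

    # Check the syntax of the new config, then reload Nginx if it passes.
    sudo nginx-sp -t
    sudo service nginx-sp reload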
You can verify it’s working by hitting a product page repeatedly and checking your access logs for 429 status codes afterwards. A rough check (the URL and ServerPilot log path are placeholders for your own domain and app name):
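    # Fire a quick burst of requests at a product page and print the status codes...
    for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" https://example.com/product/sample-product/; done

    # ...then look for 429s in the app's Nginx access log.
    grep ' 429 ' /srv/users/serverpilot/log/yourapp/yourapp_nginx.access.log | tail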
The result
In practice, real users and courteous bots aren’t affected, while bad actors are throttled before they cause any harm. Memory usage has flattened out, and the server has stopped falling over.
Any public server will attract bots. Many are legitimate and harmless, but there’s always a steady stream probing for vulnerabilities or simply chewing through resources. Firewalls and WAFs help, but for me, rate limiting was the missing piece.