A response time spike is a check result where the response time was significantly higher than normal — then returned to normal on the next check. This article explains what causes spikes and how to interpret them.
On the Response Time Chart, a spike appears as a single tall point much higher than the surrounding line. In Check History, it's a single check with a high response time surrounded by normal checks.
One isolated spike is generally not cause for concern. A series of spikes or sustained high response times are worth investigating. The most common causes are described below.
Garbage collection pauses: Application runtimes (JVM, Node.js, Ruby, etc.) periodically pause to collect garbage, causing all requests in flight during that window to wait. GC pauses often last 50–500ms and produce brief, regular spikes.
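To see whether GC is behind your spikes, you can time collections directly. Here is a minimal Python sketch using CPython's `gc.callbacks` hook, which fires at the start and stop of every collection (the allocation loop below exists only to force a collection for the demo):

```python
import gc
import time

gc_pauses_ms = []
_start = [0.0]

def _record_gc(phase, info):
    # CPython invokes this with phase "start" before a collection
    # and "stop" after it; the difference is the pause duration.
    if phase == "start":
        _start[0] = time.perf_counter()
    elif phase == "stop":
        gc_pauses_ms.append((time.perf_counter() - _start[0]) * 1000)

gc.callbacks.append(_record_gc)

# Create reference cycles (the only garbage the cyclic collector
# has to work for) and force a collection to demonstrate.
cycles = [[] for _ in range(1000)]
for c in cycles:
    c.append(c)
del cycles
gc.collect()

print(f"observed {len(gc_pauses_ms)} GC pause(s), "
      f"longest {max(gc_pauses_ms):.2f} ms")
```

The same idea applies in other runtimes: the JVM can log pauses with `-Xlog:gc`, and Node.js exposes GC timings via `perf_hooks`. If your spikes line up with logged pauses, tuning the collector (or reducing allocation churn) is the fix, not the network.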
Slow database queries: A slow query (perhaps due to lock contention, a missing index, or a large result set) causes the request handler to wait, inflating the response time for that check.
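A lightweight slow-query log helps attribute a spike to the database. This is a sketch, not a real ORM feature: the `timed_query` helper and the 100ms threshold are illustrative, using SQLite only so the example is self-contained.

```python
import sqlite3
import time

SLOW_QUERY_MS = 100  # illustrative threshold; tune for your workload

slow_queries = []

def timed_query(conn, sql, params=()):
    """Run a query and record it if it exceeds the slow-query threshold."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > SLOW_QUERY_MS:
        slow_queries.append((sql, round(elapsed_ms, 1)))
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [(f"user{i}",) for i in range(1000)])

# This scan has no index on name; on a large table it could land
# in slow_queries, pointing you at the missing index.
rows = timed_query(conn, "SELECT * FROM users WHERE name = ?", ("user42",))
print(f"{len(rows)} row(s); {len(slow_queries)} slow queries logged")
```

Most databases also have a built-in equivalent (e.g. PostgreSQL's `log_min_duration_statement`), which is preferable in production since it captures queries from every code path.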
Slow upstream dependencies: If your endpoint calls an upstream API and that API is slow, your response time increases proportionally. The spike may not originate in your service at all.
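Timing the upstream call separately from your own work makes this case easy to diagnose. A hedged sketch, where `call_upstream` is a stand-in for your real HTTP client (which should also set an explicit timeout so one slow dependency can't stall requests indefinitely):

```python
import time

def call_upstream(simulated_latency_s=0.05):
    # Hypothetical upstream dependency; replace with your real
    # HTTP client call, ideally with an explicit timeout.
    time.sleep(simulated_latency_s)
    return {"status": "ok"}

def handle_request():
    t0 = time.perf_counter()
    upstream_start = time.perf_counter()
    payload = call_upstream()
    upstream_ms = (time.perf_counter() - upstream_start) * 1000
    # ... your own request processing would happen here ...
    total_ms = (time.perf_counter() - t0) * 1000
    return payload, total_ms, upstream_ms

payload, total_ms, upstream_ms = handle_request()
print(f"total {total_ms:.0f} ms, of which upstream {upstream_ms:.0f} ms")
```

If the upstream portion dominates during spikes, the problem is the dependency, and the remedies are timeouts, caching, or raising the issue with the upstream provider.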
Cold starts: If your application server scales down and needs to "warm up" again (common with serverless functions, containers, and auto-scaling groups), the first request after an idle period can be much slower than usual.
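The cold-start pattern is easy to reproduce. This toy `Service` class (an assumption for illustration, not part of any real framework) pays a one-time warm-up cost on its first request, mimicking a scaled-to-zero function loading code and opening connection pools:

```python
import time

class Service:
    """Toy service whose first request pays a one-time warm-up cost."""
    def __init__(self):
        self._warm = False

    def handle(self):
        start = time.perf_counter()
        if not self._warm:
            time.sleep(0.2)  # simulated init: load code, open pools, JIT
            self._warm = True
        return (time.perf_counter() - start) * 1000  # latency in ms

svc = Service()
cold_ms = svc.handle()  # first request after idle: slow
warm_ms = svc.handle()  # subsequent requests: fast
print(f"cold {cold_ms:.0f} ms vs warm {warm_ms:.1f} ms")
```

Note that monitoring checks at a regular interval can themselves act as keep-warm traffic; if spikes appear only after quiet periods, cold starts are the likely cause.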
Transient network issues: Packet loss, routing changes, or momentary congestion on the path between PulseAPI and your server can cause a single check to take longer. This is the least actionable cause, but also a very common one.
A single spike doesn't need an alert. But if your application regularly shows spikes of 5+ seconds that cause user-facing slowness, consider creating a response time alert rule.
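One way to alert on sustained slowness without paging on an isolated spike is to require several breaches within a recent window. A minimal sketch of that logic; the threshold, window size, and breach count here are illustrative values, not PulseAPI defaults:

```python
from collections import deque

def should_alert(history_ms, threshold_ms=5000, window=10, min_breaches=3):
    """Alert only when several of the last `window` checks breach the
    threshold, so a single isolated spike never triggers an alert."""
    recent = deque(history_ms, maxlen=window)  # keeps the newest checks
    breaches = sum(1 for ms in recent if ms > threshold_ms)
    return breaches >= min_breaches

normal = [120, 135, 110, 6200, 125, 118, 130, 122, 115, 128]   # one spike
degraded = [120, 5400, 6100, 130, 5900, 125, 6300, 140, 5800, 135]

print(should_alert(normal))    # isolated spike: no alert
print(should_alert(degraded))  # repeated spikes: alert
```

The same "N of the last M checks" idea underlies most response time alert conditions; tightening `min_breaches` trades alert noise against detection speed.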
See Alert Rule Conditions: Response Time for recommended thresholds.
Still have questions? Contact support.