Setting Rule Priority

The priority of an alert rule determines the severity of the incidents it creates. This article explains how priority maps to severity and how to choose the right priority for each rule.


How Priority Works

Every alert rule has a priority: Critical, High, Medium, or Low.

When the rule fires and creates an incident, the incident's severity matches the rule's priority. A Critical priority rule creates a Critical severity incident. A Low priority rule creates a Low severity incident.

This means priority is how you communicate urgency. It affects:

  • How the incident appears in your dashboard and incidents list (color-coded by severity)
  • How your team triages and responds to incidents
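The one-to-one mapping above can be sketched in a few lines of Python. This is illustrative only; PulseAPI doesn't expose this mapping as code:

```python
# Hypothetical sketch: an incident's severity mirrors its rule's priority 1:1.
PRIORITIES = ("Critical", "High", "Medium", "Low")

def incident_severity(rule_priority: str) -> str:
    """Return the severity of an incident created by a rule.

    The mapping is the identity: a Critical rule creates a Critical
    incident, a Low rule creates a Low incident.
    """
    if rule_priority not in PRIORITIES:
        raise ValueError(f"unknown priority: {rule_priority!r}")
    return rule_priority
```

Because the mapping is fixed, the only decision you make is the rule's priority; the incident's severity follows automatically.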

Choosing Priority

Critical
Use for failures that have immediate, significant user impact or that threaten core system availability.

  • "Production API returning 5xx"
  • "Checkout endpoint down"
  • "Primary database health check failing"
  • "SSL certificate expiring in 7 days or fewer"

High
Use for significant problems that aren't causing immediate, widespread user impact but require prompt attention.

  • "Response times degraded above 3 seconds"
  • "Staging environment down"
  • "SSL certificate expiring in 30 days"
  • "Non-primary-path endpoint down"

Medium
Use for degraded performance or early warning signs of potential problems.

  • "Response times elevated but still acceptable"
  • "Uptime below 99.9% over the last 7 days"
  • "SSL certificate expiring in 60 days"

Low
Use for informational alerts that are worth tracking but don't need urgent attention.

  • "Response times above baseline but not user-impacting"
  • "Non-critical internal endpoint down"
  • "Long-window uptime slightly below threshold"
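Notice that the same condition type can appear at different tiers depending on how close it is to causing user impact. The SSL-expiry thresholds used in the examples above could be encoded like this (a hypothetical helper, not a PulseAPI API):

```python
# Illustrative only: map days until SSL certificate expiry to a rule
# priority, using the thresholds suggested in the tiers above.
def ssl_expiry_priority(days_left: int) -> str:
    if days_left <= 7:
        return "Critical"  # cert about to expire
    if days_left <= 30:
        return "High"      # needs prompt attention
    if days_left <= 60:
        return "Medium"    # early warning
    return "Low"           # worth tracking, not urgent
```

The same pattern applies to response time or uptime: pick the thresholds first, then assign each one the priority that matches its real-world urgency.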

Practical Rule Sets

A well-configured team usually has multiple rules covering the same monitor at different priority levels:

Monitor: Production API — /checkout

1. Status code != 200      → Critical  (immediate alert: it's down)
2. Response time > 5000ms  → High      (very slow: likely impacting users)
3. Response time > 2000ms  → Medium    (slow: investigate)
4. Uptime < 99.9% (24h)    → Medium    (SLA tracking)
5. SSL expiry < 7 days     → Critical  (cert about to expire)
6. SSL expiry < 30 days    → Low       (advance warning)
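The rule set above can be sketched as data plus a small evaluator. The field names (`status_code`, `response_ms`, `uptime_24h`, `ssl_days_left`) are hypothetical, not PulseAPI's actual schema:

```python
# Illustrative sketch of the rule set above: each rule pairs a priority
# with a condition over a check result.
RULES = [
    ("Critical", lambda c: c["status_code"] != 200),
    ("High",     lambda c: c["response_ms"] > 5000),
    ("Medium",   lambda c: c["response_ms"] > 2000),
    ("Medium",   lambda c: c["uptime_24h"] < 99.9),
    ("Critical", lambda c: c["ssl_days_left"] < 7),
    ("Low",      lambda c: c["ssl_days_left"] < 30),
]

def fired_priorities(check: dict) -> list[str]:
    """Return the priorities of every rule this check result would fire."""
    return [prio for prio, cond in RULES if cond(check)]
```

A check that is slow but up would fire only the Medium response-time rule, while a hard outage fires the Critical status-code rule, so the layered thresholds give you graduated urgency instead of a single all-or-nothing alert.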

Related Articles

    • Creating an Alert Rule
    • Alert Rule Conditions: Response Time
    • Setting Up Slack Notifications
    • Setting Up Email Notifications
    • Setting Up Webhook Notifications