Rule Scope: Team-Wide vs. Endpoint-Specific

When you create an alert rule, you choose whether it applies to all monitors on your team or only to specific monitors. This article explains how scope works and when to use each option.


Team-Wide Scope

A team-wide rule is evaluated against every monitor on your team. When any monitor meets the rule's condition, the rule fires.

When to use:

  • Baseline rules that should apply everywhere (e.g., "alert if any monitor returns a non-200 status")
  • Rules for a uniform SLA across all your endpoints
  • Starting out — one team-wide rule covers everything; add specific rules later

Example: A team-wide rule for "status code != 200, Critical priority" means you get a Critical alert if any monitor on your team goes down.
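To make the scoping behavior concrete, here is a minimal sketch of how a team-wide rule could be represented and evaluated. The field names ("scope", "condition", "priority") and the rule_applies helper are illustrative assumptions, not PulseAPI's actual schema or API.

```python
# Hypothetical representation of a team-wide rule.
# Field names are illustrative, not PulseAPI's actual schema.
team_wide_rule = {
    "name": "Any monitor down",
    "scope": "team",  # applies to every monitor on the team
    "condition": {"status_code": {"not": 200}},
    "priority": "critical",
}

def rule_applies(rule, monitor_id):
    """A team-wide rule applies to every monitor on the team;
    an endpoint-specific rule applies only to its selected monitors."""
    if rule["scope"] == "team":
        return True
    return monitor_id in rule.get("monitor_ids", [])

# A team-wide rule matches any monitor, regardless of its ID.
print(rule_applies(team_wide_rule, "mon-checkout"))     # True
print(rule_applies(team_wide_rule, "mon-status-page"))  # True
```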


Endpoint-Specific Scope

An endpoint-specific rule only fires for the monitors you select. You can select one or many monitors.

When to use:

  • Applying different thresholds to different monitors (e.g., your payments API has a stricter response time threshold than your status page)
  • Monitoring specific high-value endpoints with higher priority than your general baseline
  • Rules that only make sense for certain endpoints (e.g., an SSL expiry rule for a specific monitor)
  • Reducing noise from low-value monitors that you don't want paging you at 3am

Example: An endpoint-specific Critical rule for "status code != 200" scoped to "Production API — Checkout" means only the checkout endpoint triggers Critical alerts; other monitors that go down are covered by whatever team-wide rules you have in place (for example, a Low-priority baseline rule).
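Continuing the sketch above, an endpoint-specific rule carries the list of monitors it is scoped to, and only fires for those. Again, the field names, monitor IDs, and helper function are hypothetical, not PulseAPI's documented schema.

```python
# Hypothetical representation of an endpoint-specific rule.
# Field names and monitor IDs are illustrative only.
checkout_rule = {
    "name": "Checkout down",
    "scope": "endpoints",
    "monitor_ids": ["mon-checkout"],  # only these monitors can trigger it
    "condition": {"status_code": {"not": 200}},
    "priority": "critical",
}

def rule_applies(rule, monitor_id):
    """Team-wide rules match everything; endpoint-specific rules
    match only the monitors they were scoped to."""
    if rule["scope"] == "team":
        return True
    return monitor_id in rule.get("monitor_ids", [])

print(rule_applies(checkout_rule, "mon-checkout"))     # True
print(rule_applies(checkout_rule, "mon-status-page"))  # False
```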


Using Both Together

The most effective setup uses both scope types:

  1. Team-wide Low rule — catch everything; no one is woken up, but incidents are recorded
  2. Team-wide Medium rule — for sustained degradation (uptime < 99% over 24h)
  3. Endpoint-specific Critical/High rules — for your most important endpoints with tight thresholds

This way, a minor endpoint going down creates a Low-priority incident (tracked but not alarming), while your critical production endpoints fire Critical alerts immediately.
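The layered setup above can be sketched in code. This example assumes that every matching rule fires and the resulting incident takes the highest priority among them; that resolution behavior, along with the rule schema and monitor IDs, is an assumption for illustration, not PulseAPI's documented logic.

```python
# Illustrative sketch of the layered setup: a Low team-wide baseline
# plus an endpoint-specific Critical rule for the checkout endpoint.
# The priority-resolution behavior shown here is an assumption.
PRIORITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

rules = [
    {"scope": "team", "priority": "low", "monitor_ids": []},
    {"scope": "endpoints", "priority": "critical",
     "monitor_ids": ["mon-checkout"]},
]

def incident_priority(monitor_id):
    """Return the highest priority among all rules that match
    the given monitor, or None if nothing matches."""
    matching = [
        r for r in rules
        if r["scope"] == "team" or monitor_id in r["monitor_ids"]
    ]
    return max((r["priority"] for r in matching),
               key=PRIORITY_ORDER.get, default=None)

print(incident_priority("mon-checkout"))     # critical
print(incident_priority("mon-status-page"))  # low
```

With this layering, a minor monitor failing resolves to a Low-priority incident, while the checkout endpoint failing resolves to Critical.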


Related Articles

  • Creating an Alert Rule
  • Key Concepts: Endpoints, Checks, Incidents, and Alerts
  • Quick Start: Monitor Your First Endpoint in 5 Minutes
  • Setting Rule Priority
  • Alert Rule Conditions: Status Code

Still have questions? Contact support.