What's Happening to OpsGenie
Atlassian has officially announced the end of life for OpsGenie. If you are one of the thousands of teams that depend on OpsGenie for on-call scheduling, alert routing, and incident notification, this affects you directly. The clock is ticking and there is no extension coming.
Here are the dates that matter:
- June 4, 2025 — New OpsGenie sales stopped. No new customers can purchase standalone OpsGenie subscriptions.
- April 5, 2027 — OpsGenie shuts down permanently. All instances go offline. The product ceases to exist.
- April 5, 2027 at 12:00 AM PST — All OpsGenie data is permanently deleted. Schedules, escalation policies, integration configurations, historical alert data, and audit logs are gone forever.
This is not a gradual deprecation where features slowly disappear. Atlassian is pulling the plug on a specific date. If you have not migrated by April 5, 2027, you lose everything: your on-call rotations, your carefully tuned alert rules, your integration configs, and every piece of historical data you have accumulated.
The announcement caught many teams off guard. OpsGenie was acquired by Atlassian in 2018 and had been a reliable, well-regarded alerting platform with a loyal user base. For many mid-market IT operations teams, it was the sweet spot between PagerDuty's enterprise pricing and basic monitoring tool notifications.
Now that sweet spot is gone, and the prescribed migration path is not what most teams expected. See our detailed comparison of citk vs OpsGenie.
What Atlassian Is Recommending (and Why Many Teams Are Unhappy)
Atlassian's official recommendation is to migrate from OpsGenie to Jira Service Management (JSM) combined with Compass, their developer portal. On paper, this sounds like a natural evolution. In practice, the feedback from operations teams has been overwhelmingly negative.
The JSM + Compass Migration Path
Atlassian positions JSM as the successor to OpsGenie's alerting and on-call management features. Compass, their newer developer experience product, is intended to cover service catalog and ownership mapping that OpsGenie partially handled. Together, Atlassian argues, they provide a stronger operations platform.
The reality is more complicated. Here is what teams are running into:
Pricing Shock
JSM locks change management capabilities behind the Premium tier, which starts at $48 or more per agent per month. For a mid-market team of 30 agents, that is $1,440 per month or over $17,000 per year, and that is before you add any other Atlassian products. Many OpsGenie customers were paying significantly less for alerting-only functionality and do not need a full ITSM platform.
Multi-Interface Complexity
The JSM + Compass combination means your team now operates across two separate interfaces. On-call engineers need to context-switch between JSM for incident management and Compass for service ownership. This is a step backward from OpsGenie's unified alerting experience where everything related to on-call lived in one place.
No Integrated Monitoring
OpsGenie was monitoring-agnostic. It ingested alerts from Datadog, Prometheus, CloudWatch, New Relic, and dozens of other sources. JSM has its own alerting system, but the integration experience is different. Teams report that recreating their multi-source alert routing in JSM requires significant rework and that some integrations are less mature.
Feature Gaps in Change Management
Perhaps most critically, JSM's change management is an ITSM-style workflow: create a change request, route it through approvals, close it. There is no native impact analysis, no risk scoring, and no intelligent notification routing based on what a change actually affects. You get process enforcement, not operational intelligence.
For teams that used OpsGenie as part of a modern DevOps workflow, the JSM path feels like being pushed toward a traditional ITIL model they deliberately avoided. If JSM’s ITSM orientation has you looking beyond Atlassian entirely, our ServiceNow alternatives guide covers the broader change management market.
Your Migration Options
The good news is that the OpsGenie sunset opens the door to evaluate what you actually need today, not just replicate what you had. Here is a look at six options, including the one Atlassian is pushing.
1. Jira Service Management (Atlassian's Path)
JSM is the default option if you want to stay within the Atlassian ecosystem. It offers incident management, on-call scheduling, and change management. If your organization is heavily invested in Jira and Confluence, there are integration benefits.
Pros: Deep Jira integration, familiar Atlassian UI, built-in ITSM workflows, Atlassian will help with migration tooling.
Cons: Expensive at $48+/agent/month for Premium, change management locked behind higher tiers, multi-interface experience with Compass, heavy ITSM orientation that does not suit DevOps-first teams.
Best for: Organizations already running Jira and Confluence enterprise-wide that want to consolidate on Atlassian.
2. PagerDuty
PagerDuty is the closest feature-for-feature replacement for OpsGenie. It covers alerting, on-call scheduling, escalation policies, and incident management. It is the most mature product in the alerting space and the most common migration target.
Pros: Best-in-class alerting and on-call management, widest integration ecosystem, proven at scale, strong mobile experience.
Cons: Pricing is significantly higher than OpsGenie was (especially for mid-market teams), enterprise features require top-tier plans, does not address change management or impact analysis natively.
Best for: Enterprise teams that need the most battle-tested alerting platform and for which budget is not the primary constraint.
3. incident.io
incident.io takes a Slack-native approach to incident management. It turns Slack channels into structured incident workflows with automated status pages, post-mortems, and role assignments. It is modern, fast, and beloved by engineering teams.
Pros: Beautiful Slack-native experience, fast adoption (teams are already in Slack), strong on post-incident learning, modern API-first design.
Cons: Heavily Slack-dependent (weaker if your org uses Teams), alerting and on-call features are newer and less mature than PagerDuty, limited change management capabilities.
Best for: Engineering-first organizations where Slack is the operational hub and incident response culture is strong.
4. Rootly
Rootly is an AI-native incident management platform that automates incident workflows, retrospectives, and status page updates. Like incident.io, it integrates deeply with Slack and focuses on reducing the toil of incident management.
Pros: Strong AI-driven automation for incident workflows, good Slack integration, automated retrospectives, growing quickly with modern engineering teams.
Cons: Primarily focused on incident response (not proactive change management), smaller integration ecosystem than PagerDuty, newer company with a shorter track record.
Best for: Teams that want AI-first incident management and are willing to invest in a newer platform.
5. Squadcast
Squadcast positions itself as a cost-effective alternative to PagerDuty with on-call scheduling, alert routing, incident management, and SRE workflows. It targets the same mid-market segment that OpsGenie served.
Pros: Most affordable direct OpsGenie replacement, good feature coverage for alerting and on-call, SRE-focused features like error budgets and runbooks.
Cons: Smaller ecosystem and community, fewer enterprise-grade features, integrations are not as deep or broad as PagerDuty, limited change management.
Best for: Budget-conscious teams that need a straightforward OpsGenie replacement without premium pricing.
6. citk (A Different Approach)
Full disclosure: this is our product. We are including it because the OpsGenie sunset is exactly the kind of moment where teams should question whether replacing alerting with alerting is the right move.
citk is not an OpsGenie replacement. It is a change intelligence platform that answers a different question: instead of “who do we page when something breaks,” it asks “who needs to know before this change breaks something.”
Pros: AI-powered impact analysis and risk scoring for changes, intelligent notification routing based on what a change actually affects, built for the gap between “change deployed” and “incident declared,” integrates with your existing alerting tool (including PagerDuty, incident.io, or whatever you migrate to).
Cons: Does not replace alerting or on-call scheduling (you still need a tool for that), newer product, best suited for teams that have a change management problem, not just an alerting problem.
Best for: Teams that want to use this migration as an opportunity to add proactive change intelligence on top of their alerting stack. See our pricing and feature overview.
Comparison Table
Here is a side-by-side comparison to help you evaluate these options across the dimensions that matter most for an OpsGenie migration:
| Capability | JSM | PagerDuty | incident.io | Rootly | Squadcast | citk |
|---|---|---|---|---|---|---|
| On-call scheduling | Yes | Best-in-class | Yes | Via integration | Yes | No (pair with alerting tool) |
| Alert routing | Yes | Best-in-class | Yes | Yes | Yes | No (pair with alerting tool) |
| Change management | Premium tier only | Limited | Limited | Limited | Limited | Core focus |
| AI impact analysis | No | No | No | Incident-focused | No | Yes |
| Risk scoring | No | No | No | No | No | Yes |
| Intelligent notifications | Basic escalation | Alert-based | Slack-native | Slack-native | Basic escalation | Change-aware routing |
| Incident response | Yes | Yes | Best-in-class | Best-in-class | Yes | Integrates with your IR tool |
| Pricing model | $48+/agent/mo (Premium) | $21-$41/user/mo | Custom pricing | Custom pricing | $9-$21/user/mo | See pricing |
| Best OpsGenie replacement? | If all-in on Atlassian | Closest 1:1 | If Slack-first | If AI-first IR | If budget-first | Complementary (not 1:1) |
Migration Timeline: What to Do When
You have roughly 12 months from now until the shutdown. That sounds like a lot, but migrations always take longer than expected, especially when on-call reliability is at stake. Here is a phased plan to ensure a smooth transition.
Months 1-2: Audit and Evaluate (Now through May 2026)
- Document everything in OpsGenie. Export your current configuration: teams, schedules, escalation policies, routing rules, integration settings, and notification templates. Do this now, not later.
- Inventory your integrations. List every tool that sends alerts to OpsGenie (monitoring, CI/CD, security, custom webhooks). Note which integrations are critical path versus nice to have.
- Assess your actual needs. This is the most important step. Do you need a 1:1 OpsGenie replacement, or has your operations maturity changed since you first adopted it? Many teams discover they need more than alerting now.
- Shortlist two to three alternatives. Run proof of concept trials. Do not commit to a vendor based on feature lists alone. Test with your actual alert volume and team structure.
Months 3-4: Select and Plan (June-July 2026)
- Choose your primary platform. Lock in your vendor decision. Negotiate pricing (mention the OpsGenie migration, as most vendors are offering migration incentives right now).
- Design your new routing architecture. Map your current OpsGenie routing rules to the new platform's model. This is where most complexity lives. Do not rush it.
- Plan the integration migration order. Start with your highest-volume, most critical alert sources. Save low-volume and custom integrations for later.
- Set up your new on-call schedules. Recreate your rotation patterns. Verify timezone handling (a common source of migration bugs).
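Timezone handling is worth sanity-checking programmatically rather than by eyeball. Here is a minimal Python sketch, using the standard library's zoneinfo, that renders a UTC rotation handoff in each on-call engineer's local zone; the handoff time and the zones are illustrative, not prescriptive:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def handoff_local_times(handoff_utc: datetime, zones: list[str]) -> dict[str, str]:
    """Render a UTC rotation handoff in each on-call engineer's local timezone."""
    return {
        zone: handoff_utc.astimezone(ZoneInfo(zone)).strftime("%Y-%m-%d %H:%M %Z")
        for zone in zones
    }

# A weekly handoff at 09:00 UTC, checked against three example team timezones.
handoff = datetime(2026, 8, 3, 9, 0, tzinfo=ZoneInfo("UTC"))
print(handoff_local_times(handoff, ["America/New_York", "Europe/Berlin", "Asia/Kolkata"]))
```

Running this for a few dates on either side of a DST transition is a cheap way to catch the "handoff silently moved an hour" class of migration bug before it pages the wrong person.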
Months 5-7: Migrate Core (August-October 2026)
- Run dual-routing. Point your critical alert sources to both OpsGenie and your new platform simultaneously. This is the safety net phase. You want to verify that every alert that hits OpsGenie also arrives correctly in the new system.
- Migrate integrations in waves. Move three to five integrations per week. After each wave, run a full on-call cycle before moving to the next wave.
- Test escalation paths. Trigger test alerts through every escalation policy. Verify that the right people get notified at the right times through the right channels.
- Train your on-call team. Schedule hands-on training sessions. Give every on-call engineer at least two weeks of dual-system experience before cutting over.
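If some of your alert sources fire generic webhooks, the dual-routing phase can be as simple as fanning each payload out to both platforms. The sketch below is illustrative only: both endpoint URLs are placeholders, and in most setups you would configure dual delivery in the monitoring tool itself rather than run a relay script:

```python
import json
import urllib.request

# Placeholder endpoints -- substitute your real OpsGenie integration URL
# and your new platform's ingest URL (both shown here are assumptions).
TARGETS = [
    "https://legacy-opsgenie.example.com/alerts",  # safety net during migration
    "https://new-platform.example.com/v1/alerts",  # the system you are moving to
]

def build_requests(payload: dict, targets: list[str]) -> list[urllib.request.Request]:
    """Prepare one identical POST per target so both systems see every alert."""
    body = json.dumps(payload).encode("utf-8")
    return [
        urllib.request.Request(
            url,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        for url in targets
    ]

def fan_out(payload: dict, targets: list[str] = TARGETS) -> None:
    """Deliver to every target; a failure on one must not block the others."""
    for req in build_requests(payload, targets):
        try:
            urllib.request.urlopen(req, timeout=5)
        except Exception as exc:  # log and continue -- dual-routing is best-effort
            print(f"delivery to {req.full_url} failed: {exc}")
```

The key design point is the independent try/except per target: during the safety-net phase, an outage in one platform should never suppress delivery to the other.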
Months 8-9: Cut Over (November-December 2026)
- Shift primary routing to the new platform. Keep OpsGenie as a read-only fallback but stop sending new alerts to it.
- Run a full incident simulation. Execute a tabletop exercise that covers your three most common incident types using only the new platform. Identify gaps.
- Migrate remaining custom integrations. Tackle the long-tail of low-volume and custom webhook integrations.
- Decommission OpsGenie alert sources. Remove OpsGenie endpoints from your monitoring tools one by one. Confirm each removal.
Months 10-12: Harden and Archive (January-March 2027)
- Final data export. Export everything one last time (see the next section for what to export). Store it somewhere durable.
- Performance-tune your new setup. After running in production for two to three months, review alert noise levels, escalation effectiveness, and team satisfaction.
- Consider complementary tools. Now that alerting is stable, evaluate whether you need to add change intelligence, better status pages, or automated runbooks on top of your new stack.
- Document your new architecture. Update your operational runbooks, onboarding docs, and vendor inventory. Your future self will thank you.
What to Export Before the Shutdown
Remember: all OpsGenie data is permanently deleted on April 5, 2027. Even if you have fully migrated to a new platform, there is data in OpsGenie that you may need for compliance, auditing, or historical reference. Export these items well before the shutdown date.
Critical Exports
- Alert history and audit logs. If you are subject to SOC 2, ISO 27001, or similar compliance frameworks, you likely need to retain incident response records for a defined period. Export your full alert history including timestamps, acknowledgments, escalations, and resolution notes.
- On-call schedule history. Who was on-call when is a common audit question. Export historical schedule data, not just current rotations.
- Integration configurations. Even after migrating, having a record of your OpsGenie integration configs is valuable for troubleshooting. Export the configuration for every connected tool.
- Routing rules and escalation policies. These represent institutional knowledge about how your team handles different types of alerts. Export them as documentation even if you have already recreated them in your new platform.
Recommended Exports
- Notification templates and custom actions. Any custom notification templates, auto-close rules, or custom actions you built should be documented.
- Team structures and contact methods. Export your team definitions, member lists, and preferred contact methods as a reference document.
- API keys and webhook URLs (for documentation only). Record which external services pointed to OpsGenie endpoints. You will need this list to verify that all sources have been migrated.
- Post-mortem and incident notes. If your team used OpsGenie's incident notes or attached post-mortem links, export those records.
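To verify that every external source has actually been repointed, a quick sweep of your monitoring and CI config files for lingering OpsGenie URLs goes a long way. A minimal sketch (the regex is deliberately broad and will need tuning for your configs):

```python
import re
from pathlib import Path

# Matches any URL on an opsgenie.com host; intentionally loose.
OPSGENIE_URL = re.compile(r"https?://[\w.-]*opsgenie\.com[^\s\"']*")

def find_opsgenie_endpoints(text: str) -> list[str]:
    """Return every OpsGenie URL still referenced in a config blob."""
    return OPSGENIE_URL.findall(text)

def scan_tree(root: str) -> dict[str, list[str]]:
    """Walk a config directory and report files that still point at OpsGenie."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            found = find_opsgenie_endpoints(path.read_text(errors="ignore"))
        except OSError:
            continue  # unreadable file; skip
        if found:
            hits[str(path)] = found
    return hits
```

An empty result from `scan_tree` on your infrastructure-as-code repos is a reasonable "all sources migrated" signal to pair with the exported endpoint list above.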
How to Export
OpsGenie offers API-based exports for most data types. Use the OpsGenie REST API to pull alerts, schedules, teams, and integrations programmatically. For a complete export, you will want to page through the alerts API (which returns 100 records at a time) and store the results in JSON format. Atlassian may also release migration tooling as the shutdown date approaches, but do not depend on it. Start your exports now and run them again closer to the date.
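As a starting point, here is a hedged sketch of that paged export using only the Python standard library. It assumes the `GET /v2/alerts` endpoint with `offset`/`limit` paging and `GenieKey` authentication; verify the current API docs for rate limits and any offset ceiling before running this against a large alert history:

```python
import json
import urllib.parse
import urllib.request

API_BASE = "https://api.opsgenie.com/v2/alerts"
PAGE_SIZE = 100  # the alerts list endpoint returns at most 100 records per call

def page_url(offset: int, limit: int = PAGE_SIZE) -> str:
    """Build the URL for one page of the alerts list."""
    query = urllib.parse.urlencode({"offset": offset, "limit": limit, "order": "asc"})
    return f"{API_BASE}?{query}"

def export_alerts(api_key: str, out_path: str = "opsgenie_alerts.json") -> None:
    """Page through the alerts API and dump every record to a JSON file."""
    alerts, offset = [], 0
    while True:
        req = urllib.request.Request(
            page_url(offset),
            headers={"Authorization": f"GenieKey {api_key}"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            batch = json.load(resp).get("data", [])
        alerts.extend(batch)
        if len(batch) < PAGE_SIZE:
            break  # short page means we have reached the end
        offset += PAGE_SIZE
    with open(out_path, "w") as fh:
        json.dump(alerts, fh, indent=2)
```

Run the same pattern against the schedules, teams, and integrations endpoints, and re-run the whole export close to the shutdown date so your archive captures the final state.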
Beyond Alerting: Why This Is a Chance to Upgrade
The natural reaction to an OpsGenie shutdown is to find the closest replacement and move as fast as possible. That is a reasonable instinct, but it misses an opportunity. We have seen too many teams treat forced migrations as a 1:1 swap and end up right back where they started six months later.
OpsGenie was built in a world where the operations workflow was linear: something breaks, an alert fires, someone gets paged, they fix it, everyone writes a post-mortem. That reactive model served teams well for years. But the operational reality has shifted.
The Gap Between Change and Incident
Across the industry, 60 to 70 percent of production incidents trace back to changes: deployments, configuration updates, infrastructure modifications, database migrations. Yet the standard operations stack has a massive blind spot between “a change was deployed” and “an incident was declared.” That gap is the single biggest missed opportunity in most operations stacks today.
Alerting tools tell you something is broken. Change management tools (like JSM) enforce a process for approving changes. But nothing in the traditional stack answers the critical question in between: “This change is about to go live. Who needs to know, what is the risk, and what might break?” That gap is exactly what change intelligence is designed to close, using notification routing best practices to get the right information to the right people before incidents occur.
The Case for Change Intelligence
Instead of waiting for alerts after something breaks, a change intelligence layer analyzes changes as they happen, scores their risk based on historical patterns and service dependencies, and routes awareness to the right stakeholders before problems escalate.
Think of it as the difference between a smoke detector and a fire prevention system. Alerting tools are excellent smoke detectors. Change intelligence is the system that reviews the wiring, identifies the risks, and makes sure the right people are watching when someone works near a gas line.
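To make the scoring idea concrete, here is a deliberately toy risk score, and emphatically not citk's actual algorithm: it combines change size, dependency fan-out, historical failure rate, and deploy timing with arbitrary weights:

```python
from dataclasses import dataclass

@dataclass
class Change:
    files_touched: int        # size of the change
    dependent_services: int   # fan-out in the service dependency graph
    failure_rate: float       # historical failure rate for this service, 0..1
    off_hours: bool           # deployed outside business hours?

def risk_score(change: Change) -> float:
    """Toy 0-100 score: bigger, more-connected, historically flaky,
    off-hours changes score higher. All weights here are arbitrary."""
    score = (
        min(change.files_touched, 50) * 0.6          # size, capped
        + min(change.dependent_services, 20) * 1.5   # blast radius
        + change.failure_rate * 40                   # history
        + (10 if change.off_hours else 0)            # timing
    )
    return round(min(score, 100.0), 1)

risky = Change(files_touched=40, dependent_services=12, failure_rate=0.3, off_hours=True)
print(risk_score(risky))  # prints 64.0 with these toy weights
```

A real platform would derive the weights and features from observed incident data rather than hand-tuning them, but even this toy version shows the shape of the output: a single number that notification routing can branch on before the change ships.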
A Practical Migration Strategy
You do not have to choose between alerting and change intelligence. The best operations teams run both. Here is a practical approach:
- Replace OpsGenie with the alerting tool that fits your team. PagerDuty if you want the most mature platform, incident.io if you are Slack-native, Squadcast if budget is the priority.
- Layer change intelligence on top. Add a platform like citk that integrates with your new alerting tool and your CI/CD pipeline to provide proactive change awareness.
- Connect the two. When citk identifies a high-risk change, it can route notifications through your alerting tool's channels. When your alerting tool fires an incident, citk provides context about what recently changed. The tools complement each other.
What This Looks Like in Practice
Imagine a database migration is scheduled for Tuesday at 2 PM. In the old OpsGenie world, the on-call engineer finds out about it when latency alerts fire 10 minutes after the migration starts. In the change intelligence world, affected service owners and on-call engineers are notified before the migration begins, with an AI-generated risk assessment and a list of services that depend on the affected database.
The alerting tool is still there as a safety net. But now you also have a proactive layer that prevents the alert from firing in the first place, or at least ensures the right people are already watching when it does.
Making the Decision
If your primary challenge is on-call management and alert routing, focus on a direct OpsGenie replacement first. Get that stable, then evaluate whether change intelligence addresses your remaining pain points.
If your primary challenge is that changes keep causing incidents and nobody finds out until something breaks, you may want to evaluate tooling that addresses the root cause, not just the symptoms.
Either way, the OpsGenie sunset is not just a migration. It is a forcing function that gives you permission to rethink your operations stack. Take advantage of it.
If you’re rethinking your operations stack as part of this migration, we’d like to hear from you. Our early adopter program gives operations teams hands-on access to change intelligence alongside their new alerting tool. No commitment, just a chance to see whether proactive beats reactive for your team.