RDS Performance Insights Is Going Away
AWS announced it's discontinuing RDS Performance Insights as a standalone product, folding it into CloudWatch Database Insights. The original deadline was November 30, 2025. They extended it to June 30, 2026. After that, if you've done nothing, your instances auto-default to CloudWatch Standard mode.
Let's talk about what that actually means for your PostgreSQL and RDS database monitoring.
Need the practical rollout version? Use the dedicated AWS PI migration guide.
What's Going Away with Performance Insights
AWS announced the deprecation of the standalone Performance Insights console as part of the consolidation into CloudWatch Database Insights. The API itself is not going away, which matters if you've built anything on top of it. But the console experience is moving, the retention model is changing, and the pricing is changing.
Affected databases: Aurora PostgreSQL, Aurora MySQL, RDS PostgreSQL, RDS MySQL, RDS MariaDB, RDS SQL Server, and RDS Oracle.
Your Two New Options
CloudWatch Standard Mode (free)
Seven days of retention. That's it. If you're doing incident postmortems, trying to understand a query regression that happened 10 days ago, or tracking down a slow degradation over time, seven days is not enough. For hobby projects or non-critical workloads, fine. For production databases that anyone depends on, this is a step backward.
CloudWatch Advanced Mode (paid)
Fifteen months of retention. The price: roughly $9 per vCPU per month for provisioned RDS instances, or $2.50 per ACU per month for serverless. A 4 vCPU RDS instance is $36/month just for this. A busy Aurora cluster with 32 vCPUs is $288/month.
Compare that to what Performance Insights cost before this migration: $0.02 per vCPU-hour for retention beyond 7 days, which works out to around $14.40 per vCPU-month. So Advanced Mode is actually cheaper than the old extended retention pricing, if you were paying for that. Good. But the framing here is still "pay AWS more money to not lose the visibility you already had."
Comparing Your Options at a Glance
| | CloudWatch Standard | CloudWatch Advanced | Third-Party (e.g. Basira) |
|---|---|---|---|
| Retention | 7 days | 15 months | 30+ days (varies by tool) |
| Cost (4 vCPU instance) | Free | ~$36/month | $29/month flat |
| Cost (32 vCPU cluster) | Free | ~$288/month | $29/month flat |
| Setup | Automatic after June 2026 | Opt-in per instance | Install agent + API key |
| AWS console integration | Yes | Yes | No (own dashboard) |
| Query-level analysis | Basic | Enhanced | Deep (pg_stat_statements) |
| Multi-cloud support | No | No | Yes |
| ClickHouse support | No | No | Yes (Basira) |
The cost difference becomes significant at scale. A team running 10 Aurora instances with 8 vCPUs each pays $720/month for Advanced Mode. The same fleet costs $290/month with Basira.
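If you want to sanity-check those numbers against your own fleet, the arithmetic is easy to script. A minimal sketch using the rates quoted above (the fleet shape is the hypothetical ten-instance example from the previous paragraph):

```python
# Rough monthly monitoring cost for a fleet, using the rates cited above:
# ~$9/vCPU/month for CloudWatch Advanced mode on provisioned instances,
# $29/database/month flat for the third-party option.
ADVANCED_PER_VCPU = 9.00
FLAT_PER_DATABASE = 29.00

fleet_vcpus = [8] * 10  # ten Aurora instances, 8 vCPUs each

advanced = sum(v * ADVANCED_PER_VCPU for v in fleet_vcpus)
flat = len(fleet_vcpus) * FLAT_PER_DATABASE

print(f"CloudWatch Advanced: ${advanced:,.0f}/month")  # $720/month
print(f"Flat per-database:   ${flat:,.0f}/month")      # $290/month
```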
The Real Risk Before June 2026
If you take no action before June 30, 2026, your instances silently drop to Standard mode. Seven-day retention. Any historical data you had beyond that window is gone.
This isn't a "your app breaks" situation. Your databases keep running. But your monitoring context disappears. The next time something goes wrong and you need to look back two weeks, you can't.
Here's a concrete scenario: a deployment goes out on a Monday. By Wednesday, users start reporting intermittent slowness. Your team investigates on Thursday and finds that the affected queries had been degrading gradually for weeks; the deployment just made it visible. You want to compare current query performance against the baseline from two weeks ago. With seven-day retention, that baseline is gone. You're debugging blind, comparing against nothing.
This happens more than people expect. Slow degradations are harder to catch than sudden failures, and they're exactly the type of problem that requires historical context to diagnose.
Set a reminder. Do something before the deadline.
Option 1: Migrate to CloudWatch Database Insights
The path of least resistance. Your infrastructure stays AWS-native, your IAM policies keep working, and your existing CloudWatch dashboards and alarms don't need to be rebuilt from scratch.
The tradeoff: you're now more locked into the CloudWatch ecosystem, and you're paying per-vCPU for visibility that used to be cheaper. If you have a lot of compute, Advanced Mode adds up fast. Teams running large Aurora clusters are already asking whether database monitoring costs can be contained elsewhere.
To opt in, go to the RDS console, select your instances, and enable CloudWatch Database Insights on each one. You can also do this via AWS CLI or Terraform. For Terraform, keep performance_insights_enabled set to true and set performance_insights_retention_period to your desired retention in days (up to 731 for the maximum 24 months). CloudWatch Advanced Mode becomes the underlying provider automatically after the migration deadline.
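If you'd rather script the opt-in than click through the console, the same settings are exposed through the RDS API. A minimal boto3 sketch, assuming an instance of your own (the identifier below is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Turn on Performance Insights with the maximum 731-day retention.
# Per AWS's migration plan, CloudWatch Advanced mode becomes the
# underlying provider for this setting after the deadline.
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",  # placeholder identifier
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=731,  # in days; 7 is the free tier
    ApplyImmediately=True,
)
```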
Option 2: Use the Performance Insights API Directly
The API isn't going away. If you've built custom tooling on top of GetResourceMetrics or DescribeDimensionKeys, that continues to work. The API just starts using CloudWatch Database Insights as the backend.
This is only relevant if you've invested in custom dashboards or tooling that calls the PI API directly. Most teams haven't.
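If you have, a direct call looks roughly like this. The sketch below uses boto3's `pi` client to rank SQL statements by database load over the last hour; the `Identifier` is the instance's DbiResourceId, shown here as a placeholder:

```python
from datetime import datetime, timedelta, timezone

import boto3

pi = boto3.client("pi")
now = datetime.now(timezone.utc)

# Top SQL statements by average active sessions (db.load.avg) over the
# last hour. Identifier is the DbiResourceId, not the instance name.
response = pi.describe_dimension_keys(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",  # placeholder DbiResourceId
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Metric="db.load.avg",
    GroupBy={"Group": "db.sql", "Limit": 10},
)

for key in response["Keys"]:
    print(round(key["Total"], 2), key["Dimensions"].get("db.sql.statement"))
```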
Option 3: Move Your PostgreSQL Monitoring Off AWS
This is the option worth considering if you're unhappy being pushed into a CloudWatch-shaped box.
Performance Insights was always a somewhat thin layer on top of what PostgreSQL (and MySQL) already expose natively. For Postgres specifically, pg_stat_statements gives you normalized query fingerprints, execution counts, total and mean execution time, block reads vs. hits, and WAL bytes. pg_stat_activity gives you active query visibility. That's the core of what PI showed you, collected at the database level rather than the AWS infrastructure level.
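To make that concrete, here's a minimal sketch of the kind of query a collector runs against pg_stat_statements, using psycopg. The connection string is a placeholder, and the column names assume pg_stat_statements 1.8 or later (PostgreSQL 13+):

```python
import psycopg  # psycopg 3

# Top queries by total execution time. Column names assume
# pg_stat_statements 1.8+ (PostgreSQL 13 and later).
TOP_QUERIES = """
    SELECT queryid,
           calls,
           total_exec_time,
           mean_exec_time,
           shared_blks_hit,
           shared_blks_read,
           wal_bytes,
           left(query, 80) AS query_sample
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT 10;
"""

with psycopg.connect("postgresql://monitor@db.example.com/appdb") as conn:
    for row in conn.execute(TOP_QUERIES):
        print(row)
```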
The benefit of the AWS approach is that you don't have to run anything. The benefit of going direct is that you're not beholden to AWS's product roadmap, and you get richer data: you can query these views at whatever interval you want, with whatever retention you want.
Where Basira Fits
We built Basira as a lightweight agent that connects to your database, collects from pg_stat_statements and related system views, and ships that to a hosted analytics backend. It runs near your database, not inside AWS infrastructure. You get query-level performance data, active query monitoring, table statistics, and AI-powered query analysis.
The pricing is $29 per database per month, flat. No per-vCPU charges. A 32-vCPU Aurora cluster costs the same to monitor as a single-vCPU RDS micro instance.
What you lose by going this route: the native AWS console integration, and any CloudWatch alarms you'd built on top of Performance Insights metrics. What you gain: visibility that's independent of your cloud provider, consistent pricing regardless of instance size, and query-level data that the PI console didn't always surface clearly.
It's not the right call for everyone. If you're deeply invested in CloudWatch and your team lives in the AWS console, staying there makes sense. But if this migration is prompting you to question whether AWS-native monitoring is actually meeting your needs, it's worth looking at alternatives. The agent setup is fully API-driven, so you can be collecting data in a few minutes without touching a UI.
If you want to try it: https://app.usebasira.com/signup. The agent install takes a few minutes, and you don't need to tear out anything you have now to evaluate it.
What to Do With Your Historical Data
Before the June 2026 cutover, consider exporting any historical Performance Insights data you want to keep. The PI API will continue to work after migration, but the data backend changes, and there's no guarantee your historical data will be fully preserved in the new system.
Export via the PI API: Use GetResourceMetrics to pull historical counter and wait-event data for your instances. You can export up to 24 months of data (if you were on the paid retention tier). Script this for each instance and store the results in S3 or your own data warehouse.
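A minimal export sketch with boto3, assuming a bucket you own. The resource identifier, bucket, and time range below are placeholders; widen the window to whatever retention you were actually paying for:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

pi = boto3.client("pi")
s3 = boto3.client("s3")

end = datetime.now(timezone.utc)
start = end - timedelta(days=90)  # widen to your actual retention window

# Hourly database-load history for one instance, archived as JSON.
response = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",  # placeholder DbiResourceId
    StartTime=start,
    EndTime=end,
    PeriodInSeconds=3600,
    MetricQueries=[{"Metric": "db.load.avg"}],
)

s3.put_object(
    Bucket="my-pi-archive",  # placeholder bucket
    Key="pi-export/db-ABCDEFGHIJKLMNOP/db.load.avg.json",
    Body=json.dumps(response["MetricList"], default=str),
)
```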
CloudWatch metrics: Performance Insights publishes some metrics to CloudWatch (like DBLoad). These follow CloudWatch's own retention rules (1-second resolution for 3 hours, 1-minute for 15 days, 5-minute for 63 days, 1-hour for 455 days). These survive the migration, but the granularity decreases over time.
pg_stat_statements snapshots: If you're moving to a third-party tool, start collecting pg_stat_statements data now so you have a baseline. The Basira agent begins computing deltas from the moment it connects, so you'll have comparison data immediately. There's no gap if you set it up before you turn off PI.
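For the do-it-yourself version: pg_stat_statements counters are cumulative since the last reset, so a baseline requires two snapshots and a subtraction. A rough sketch of the general delta technique (not Basira's implementation; the connection string is a placeholder):

```python
import time

import psycopg

SNAPSHOT = "SELECT queryid, calls, total_exec_time FROM pg_stat_statements;"


def snapshot(conn):
    # Map queryid -> (calls, total_exec_time); both counters are cumulative.
    return {qid: (calls, total) for qid, calls, total in conn.execute(SNAPSHOT)}


with psycopg.connect("postgresql://monitor@db.example.com/appdb") as conn:
    before = snapshot(conn)
    time.sleep(60)  # one collection interval
    after = snapshot(conn)

for qid, (calls, total) in after.items():
    prev_calls, prev_total = before.get(qid, (0, 0.0))
    if calls > prev_calls:
        # Mean execution time over this interval only, in milliseconds.
        interval_mean = (total - prev_total) / (calls - prev_calls)
        print(qid, calls - prev_calls, round(interval_mean, 2))
```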
Don't wait until June to figure this out. Run the new tool alongside PI for a month, verify the data matches your expectations, then let PI sunset without stress.
Summary
- June 30, 2026 deadline. Don't miss it or you lose historical data.
- Free tier gives you 7 days. That's not enough for production.
- Paid tier (Advanced Mode) is ~$9/vCPU/month for provisioned instances.
- The PI API continues to work. Console moves to CloudWatch.
- If you want to stop depending on AWS for this, pg_stat_statements-based monitoring is a viable alternative.
If you want a checklist you can hand to an owner today, start here: AWS PI migration guide.