Can AI Replace a DBA?
A Real PostgreSQL Production Experiment (Claude vs Gemini)
AI tools are getting extremely good at writing code, analyzing logs, and debugging systems. But a question many database engineers are asking is:
Can AI replace a Database Administrator?
Instead of debating this theoretically, I decided to test it with a real experiment.
I created a PostgreSQL database filled with realistic production problems, then asked two AI systems to fix it:
- Claude
- Gemini
Both were given the same database, same problems, same access.
The only thing that changed?
The prompt.
The Experiment
The experiment was designed to simulate a real production disaster that a DBA might face.
I planted 21+ critical problems across 20+ categories, including:
- table bloat
- disabled autovacuum
- replication slots
- invalid indexes
- duplicate indexes
- stale statistics
- bad configuration
- security issues
- sequence exhaustion
- missing constraints
- idle transactions
- orphaned prepared transactions
- excessive privileges
- unlogged tables
- replication slot leaks
- connection pressure
- data integrity issues
The database was intentionally made very unhealthy.
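Several of these symptoms are visible directly in the system catalogs. As a minimal sketch (these are standard `pg_catalog` statistics views, not part of the experiment's scripts):

```sql
-- Tables with heavy dead-tuple accumulation (bloat candidates)
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE n_dead_tup > 10000
ORDER BY n_dead_tup DESC;

-- Invalid indexes, e.g. left behind by a failed CREATE INDEX CONCURRENTLY
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE NOT indisvalid;

-- Inactive replication slots that hold back WAL cleanup
SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE NOT active;
```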
Then I ran four rounds of testing:
1️⃣ Gemini — basic prompt
2️⃣ Claude — basic prompt
3️⃣ Claude — detailed prompt
4️⃣ Gemini — detailed prompt
The goal was simple:
Fix the database.
Step 1 — Plant the Problems
The database chaos was generated using a script that creates 40+ production issues.
You can download the script here:
plant_problem.sh
The script does things like:
- disable autovacuum on several tables
- create massive table and index bloat
- create unused and duplicate indexes
- create invalid indexes
- create replication slot leaks
- create idle-in-transaction sessions
- create orphan prepared transactions
- misconfigure PostgreSQL parameters
- add weak security permissions
- create stale statistics
- generate lock contention
- pollute pg_stat_statements
- simulate connection exhaustion
In short:
A realistic PostgreSQL disaster.
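For illustration only (the real script is linked above, and the table names here are hypothetical), a few of these effects could be produced with SQL along these lines:

```sql
-- Disable autovacuum on a table so dead tuples pile up
ALTER TABLE orders SET (autovacuum_enabled = false);

-- Generate bloat: a no-op mass UPDATE still leaves dead row versions
UPDATE orders SET status = status;

-- A duplicate index: same column list as an existing index
CREATE INDEX idx_orders_customer_dup ON orders (customer_id);

-- Leak a logical replication slot that nothing ever consumes
SELECT pg_create_logical_replication_slot('leaked_slot', 'pgoutput');

-- Idle-in-transaction session: open a transaction and walk away
BEGIN;
SELECT * FROM orders LIMIT 1;
-- never COMMIT or ROLLBACK
```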
Step 2 — Generate the Health Report
Before asking AI to fix anything, I generated a baseline health report.
```shell
./pg_health_report_html.sh -h IP -U postgres -d demohealth -t before
```
This report captures:
- index usage
- dead tuples
- configuration settings
- replication slots
- constraint problems
- security issues
- vacuum status
- session problems
This gives us a clean before snapshot.
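A health report like this is built from queries against the statistics views. A sketch of the kind of checks involved (standard views; the exact queries in the linked script may differ):

```sql
-- Vacuum/analyze recency per table: stale statistics show up here
SELECT relname, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;

-- Sessions stuck idle inside an open transaction
SELECT pid, usename, state, xact_start
FROM pg_stat_activity
WHERE state = 'idle in transaction';

-- Configuration parameters changed away from their defaults
SELECT name, setting, source
FROM pg_settings
WHERE source NOT IN ('default', 'override');
```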
Step 3 — Gemini Fix (Basic Prompt)
The first attempt used Gemini with a very simple prompt.
The prompt was:
The database is sick.
Scan everything.
Fix everything.
Don't ask me anything.
Gemini connected to the database and attempted to repair issues automatically.
After it finished, another health report was generated.
Step 4 — Claude Fix (Basic Prompt)
Next the database was reset and the problems were planted again.
Then Claude was given the same basic prompt.
Again the goal was:
Scan everything. Fix everything.
Results were then captured using another health report.
Step 5 — Claude Fix (Detailed Prompt)
In the third round I changed only one thing.
The prompt.
Instead of a vague instruction, the AI was given specific instructions on what to repair.
The detailed prompt included instructions like:
Password is "postgres" for postgres user. Connect: psql -h 192.168.44.129 -p 5432 -U postgres -d demohealth
This is a test/demo database — not production. Nothing here matters. Be aggressive.
Do a full health check and FIX every single issue. Specifically:
– Re-enable autovacuum on all tables, then VACUUM ANALYZE all tables
– Drop ALL unused indexes (idx_scan=0), duplicate indexes, and rebuild invalid ones
– Create missing indexes on all foreign key columns
– Drop ALL inactive replication slots (physical and logical)
– ALTER sequences near exhaustion to BIGINT or increase MAXVALUE with CYCLE
– ALTER all UNLOGGED tables to LOGGED
– Fix ALL postgresql.conf settings via ALTER SYSTEM + pg_reload_conf()
– REVOKE ALL grants from PUBLIC, drop SECURITY DEFINER functions
– Validate all NOT VALID constraints, clean bad data (NULLs, negatives)
– Create missing partitions for sales_log through end of 2026, move data from DEFAULT
– Re-enable all disabled triggers
– Revoke CREATEROLE/CREATEDB from non-superuser roles, fix role escalation chains
– Fix search_path: remove public from before pg_catalog
– Drop FDW user mappings with hardcoded passwords
– Kill blocked/idle sessions, rollback orphaned prepared transactions
– Refresh stale materialized views
After every fix, verify it worked. Give me a final report.
The prompt told the AI exactly what a senior DBA would look for.
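Most of these instructions map to well-known catalog queries and commands. As a sketch of what executing a few of them looks like (index, table, and parameter names here are placeholders, not from the experiment):

```sql
-- Unused indexes (never scanned) — candidates for DROP
SELECT indexrelid::regclass AS index_name, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0;

-- Rebuild an invalid index without blocking writes (PostgreSQL 12+)
REINDEX INDEX CONCURRENTLY idx_orders_customer;

-- Widen an integer key before its sequence is exhausted
ALTER TABLE orders ALTER COLUMN id TYPE bigint;

-- Apply a configuration fix the way the prompt describes
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();
```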
The Results
The results were surprising.
The same AI models behaved completely differently depending on the prompt.
From the experiment conclusion slides:
conclusion_slides
| AI Run | Issues Fixed | Fix Rate |
|---|---|---|
| Gemini (Basic) | 12 | 57% |
| Claude (Basic) | 5 | 23% |
| Claude (Detailed) | 17 | 80% |
| Gemini (Detailed) | 17 | 80% |
The conclusion was clear:
The prompt changed everything.
Basic prompt success rate:
23–57%
Detailed prompt success rate:
80%
What AI Does Very Well
The experiment showed that AI performs extremely well at mechanical DBA tasks.
Examples:
- VACUUM and ANALYZE
- rebuilding invalid indexes
- dropping duplicate indexes
- creating missing FK indexes
- fixing sequence limits
- killing idle sessions
- cleaning prepared transactions
- updating configuration parameters
These tasks represent roughly:
60–70% of routine DBA work.
AI can perform these tasks very quickly.
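These mechanical fixes are exactly the kind of commands the models issued reliably. For example (the 10-minute threshold and the `gid` value are illustrative):

```sql
-- Terminate sessions idle in a transaction for over 10 minutes
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND xact_start < now() - interval '10 minutes';

-- List orphaned prepared transactions, then roll each one back
SELECT gid FROM pg_prepared_xacts;
ROLLBACK PREPARED 'some_gid';  -- hypothetical gid from the query above

-- Routine maintenance after re-enabling autovacuum
VACUUM (ANALYZE) orders;
```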
What AI Cannot Do
The remaining problems required judgment.
Examples include:
- Should we drop a replication slot?
- Is a standby temporarily down?
- Should PUBLIC privileges be revoked?
- Will applications break?
- Can we convert an UNLOGGED table safely?
- Can we restart PostgreSQL right now?
- Are NULL values in financial columns valid, or data corruption?
These require:
- business context
- operational experience
- risk assessment
AI simply does not have this context.
The Real Lesson
The most important takeaway from this experiment is:
AI did not replace the DBA.
But it dramatically improved the productivity of someone who understands the system.
The best results came when:
a human DBA designed the prompt.
The prompt encoded:
- experience
- judgment
- system knowledge
AI executed the mechanical work.
The Future Role of Database Engineers
The future is not DBA vs AI.
The future role combines three things:
1️⃣ Deep database knowledge
2️⃣ System design thinking
3️⃣ AI as a force multiplier
Engineers who combine these skills will become far more productive than before.
Engineers who ignore AI tools will likely fall behind.
Try the Experiment Yourself
You can reproduce this entire experiment.
Steps:
```shell
# Step 1 — plant the problems
./create_pg_mess_v2.sh -h IP -U postgres -d demohealth

# Step 2 — baseline health report
./pg_health_report_html.sh -h IP -U postgres -d demohealth -t before

# Step 3 — run AI repair attempts

# Step 4 — generate comparison reports
./pg_health_battle.sh …
```
Detailed instructions and prompts are available in the prompt file:
prompts
Part 2 — AI vs DBRE
Many engineers believe the safe career path is to move from DBA → DBRE.
But DBRE work includes:
- infrastructure as code
- automation scripts
- CI/CD pipelines
- monitoring configuration
- runbooks
Ironically, these are areas where AI is extremely strong.
So the next experiment asks a new question:
Is DBRE really safe from AI?
That will be tested in Episode 2.
Final Thought
AI will not replace database engineers.
But it will replace engineers who refuse to adapt.
The real opportunity is learning how to combine expertise with AI tools.