There’s a quiet war happening in QA right now.
On one side: Manual testing.
✅ Flexible
✅ Human judgment
🚫 Slow
🚫 Expensive
🚫 Doesn’t scale
On the other side: Scripted automation.
✅ Fast
✅ Deterministic
🚫 Brittle
🚫 Painful to author
🚫 Breaks every time you change the UI
If you’re leading a software team today, you’re probably stuck between the two.
And you’ve probably asked the same question I did:
“Isn’t AI supposed to solve this?”
Turns out—not yet. But maybe… almost.
Where AI Actually Helps (and Where It Doesn’t)
The dream is tempting:
“Just describe the flow in plain English. The AI will write and run your test.”
That’s the pitch. But here’s the reality:
AI at runtime isn’t trustworthy yet.
It hallucinates. It adapts around bugs instead of flagging them. It fails silently in the worst ways.
Worst of all? It doesn’t tell you when it just made stuff up.
You don’t want an agent that’s “mostly right.” You want one that’s reliably accountable.
Because in QA, unpredictability is failure.
But here’s where GenAI does shine today:
🧠 Test authoring: Generate boilerplate from natural language or human sessions
🔁 Script repair: Patch brittle logic automatically when locators change (see the sketch below)
🧪 Exploratory testing: Suggest workflows humans might miss
🔍 Test targeting: Recommend what should be tested based on changes
Think of it not as a magic replacement—but as a QA co-pilot.
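To make that concrete, here's a minimal sketch of what reviewable script repair could look like, written with Playwright in TypeScript. The `suggestLocator` helper is hypothetical (a stand-in for whatever model call you'd wire in); everything else is standard Playwright. The key design choice: the AI's suggestion becomes a review task for a human, and the test still fails rather than silently adapting.

```typescript
import { test, Page } from "@playwright/test";

// Hypothetical helper: ask a model to propose a replacement locator.
// A real version might prompt an LLM with `await page.content()` plus
// the failed selector. Its answer is advisory only.
async function suggestLocator(page: Page, failedSelector: string): Promise<string | null> {
  return null; // placeholder: no model wired in
}

// Try the known-good selector first. On failure, log the AI-proposed
// fix for human review instead of patching the test on the fly.
async function resilientClick(page: Page, selector: string): Promise<void> {
  try {
    await page.locator(selector).click({ timeout: 5_000 });
  } catch (err) {
    const proposed = await suggestLocator(page, selector);
    if (proposed) {
      console.warn(`Selector "${selector}" failed; AI proposes "${proposed}". Needs human sign-off.`);
    }
    throw err; // still fail the run: in QA, unpredictability is failure
  }
}

test("checkout button is reachable", async ({ page }) => {
  await page.goto("https://example.com/cart"); // illustrative URL
  await resilientClick(page, "button#checkout");
});
```

Notice what the code refuses to do: it never swaps in the new locator at runtime. The suggestion is captured, the failure is surfaced, and a person decides.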
That’s exactly what Nuwan Samarasekera and his team at TestChimp are building:
AI-assisted test creation that’s editable and reviewable
Self-healing tests that still ask permission before changing core logic
Human-in-the-loop systems that treat QA as a partnership, not an offload
The difference is trust.
You’re not surrendering the testing process to AI.
You’re collaborating with it.
The Real Future of QA Is a Hybrid Model
Here’s what I’ve come to believe:
You don’t need to choose between manual and automated.
You need a system that gives you both—and knows when to use each.
✅ Let AI generate initial test coverage from user flows
✅ Let humans validate and flag what matters
✅ Let the system recommend updates, but don't let it merge changes blindly (see the sketch after this list)
✅ Let testers focus on the weird edge cases AI still can’t see
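What does "recommend, but don't merge blindly" look like in practice? Here's a minimal sketch of an approval queue, in the same TypeScript. The `Proposal` shape and helper functions are illustrative assumptions, not any particular tool's API; the only rule that matters is that nothing reaches the suite without a named reviewer.

```typescript
// AI-proposed test updates land in a queue; only a human can promote one.
type Proposal = {
  id: string;
  testFile: string;
  diff: string;      // the suggested change, as a unified diff
  rationale: string; // why the model thinks the test broke
  status: "pending" | "approved" | "rejected";
  reviewedBy?: string;
};

const queue: Proposal[] = [];

// The AI files a proposal. Nothing merges automatically.
function propose(p: Omit<Proposal, "status">): void {
  queue.push({ ...p, status: "pending" });
}

// A human accepts or rejects. Only "approved" proposals ever
// reach whatever merge step sits downstream.
function review(id: string, reviewer: string, approve: boolean): Proposal | undefined {
  const p = queue.find((x) => x.id === id && x.status === "pending");
  if (!p) return undefined;
  p.status = approve ? "approved" : "rejected";
  p.reviewedBy = reviewer;
  return p;
}

// Usage: the model suggests a locator fix; a tester signs off.
propose({
  id: "fix-login-locator",
  testFile: "tests/login.spec.ts",
  diff: '- page.locator("#signin")\n+ page.locator("[data-test=signin]")',
  rationale: "The #signin id was removed in the latest UI refactor.",
});
review("fix-login-locator", "qa-lead@example.com", true);
```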
That’s the balance:
Let AI automate the boring parts.
Let humans investigate the hard ones.
When you do that, you stop wasting your best people on the lowest-leverage work. And you stop treating “fast” and “safe” like mutually exclusive goals.
We don’t need perfect AI.
We need smart, supervised AI that helps people do their jobs better.
In a world where AI is already accelerating code generation, QA is the new bottleneck.
We can’t afford to waste cycles choosing between hand-testing and brittle scripts. We need a third path.
What Happens If You Don’t Adapt?
Your test suite:
Grows slower than your codebase
Breaks every time your product team moves fast
Stops telling you anything useful
Meanwhile, your engineers:
Spend days debugging “flaky” tests
Avoid touching the test suite altogether
Learn to ship and hope nothing breaks
That’s not a culture of quality.
That’s burnout.
And as the product evolves, QA gets left behind.
The further that gap grows, the more bugs ship to production—until eventually, testing becomes ceremonial.
Teams run tests. They get green lights.
But no one trusts what they’re seeing.
So What’s Next?
The teams that win in the next five years will build hybrid QA systems that:
Use AI where it’s strongest (generation, pattern detection, repair)
Use humans where they’re irreplaceable (judgment, prioritization, weirdness)
Close the loop between code and coverage without burning people out
If you’re still treating QA as a checkbox—if you’re still arguing about manual vs. automation—you’re solving the wrong problem.
The question isn’t: Should we automate more?
The question is: How do we build a testing system that scales with the product and keeps humans in control?
🎧 That’s what we unpacked in the latest episode of Product Driven.
I sat down with Nuwan to talk about what’s actually working in AI-assisted QA right now—and what’s still pure hype.
👉 Listen to the full episode here
This is where testing is going. Let’s make sure we build it right.