AI’s proficiency in technical disciplines
Let’s be direct: AI is now capable of handling technical exams that most human students struggle with. A recent study out of Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) shows AI systems like ChatGPT can correctly answer about 66% of test questions across 50 technical and scientific subjects. That includes computer science and data science. This isn’t a guess. It’s measured performance, real data.
So what does this mean for your business? It means we’re entering an era where the diploma hanging on someone’s wall may no longer be a reliable indicator of deep knowledge or practical skills. With 70% of students already using tools like ChatGPT in their coursework, regardless of whether they’re allowed to, it becomes clear that we’re developing a generation trained alongside AI. Academic achievement is still significant, but it’s increasingly hard to separate human effort from algorithmic assistance.
For C-suite leaders, here’s the implication: if you’re still using traditional credentials as a filter in hiring, you’re behind. AI already solves entry-level technical problems quickly, and it doesn’t rest. What matters isn’t what people can recall from memory; it’s what they can build, analyze, and improve, with or without AI support.
Evolving hiring strategies in the age of AI
Credentials are falling out of step with capability. Let’s not sugarcoat it: AI is helping students produce polished work they often don’t truly understand. This creates a recruiting minefield. Candidates may show you a shiny résumé and a portfolio full of clean code, but when the AI crutch is gone, there’s no depth behind it.
Peter Wood, CTO at Spectrum Search, puts it well: businesses are likely to face an incoming wave of graduates whose skills look solid on paper but don’t translate reliably in real-world conditions. That’s a warning worth listening to. It doesn’t mean these candidates are useless. It means we need better filters, ones aligned to actual performance instead of assumed knowledge.
What does that look like in practice? You need more hands-on assessments. Give people real problems and observe how they solve them. See how they think: do they hit a wall when the AI autopilot fails, or do they adapt and troubleshoot? Programming tasks, collaborative challenges, and architecture reviews are your new candidate screens.
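As an illustration only, here is the kind of small exercise that surfaces the difference; the function and its flaw are hypothetical, a sketch rather than a prescribed test.

```python
# Hypothetical screening exercise: ask the candidate to explain what this does,
# find the flaw, and defend a fix, without leaning on an AI assistant.

def moving_average(values, window):
    """Return the moving average of `values` over a fixed window size."""
    averages = []
    for i in range(len(values)):
        chunk = values[i:i + window]
        averages.append(sum(chunk) / window)  # flaw: the trailing chunks are shorter
    return averages

# The last result divides a one-item chunk by the full window size.
print(moving_average([2, 4, 6, 8], window=2))  # [3.0, 5.0, 7.0, 4.0]
```

What you’re watching for isn’t the fix itself but the reasoning: does the candidate notice the shortened trailing windows, and can they argue for dropping them, dividing by the actual chunk length, or padding the input?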
The point here is evolution. The hiring process must adapt. It’s not enough to ask what someone can recite or what certificate they hold. Ask: can they think independently? Can they solve problems without scripting every move through ChatGPT? The companies that get this right will attract and retain real talent: people who can work with AI, not be replaced by it.
Bridging the skill gap between junior and senior developers
There’s a growing divide in development teams that’s going to be hard to ignore. Junior developers are coming in already adapted to AI-powered tools. They’ve learned to write code with copilots checking their work automatically. They move fast, and they’re efficient, but they often don’t understand the foundations. Senior engineers notice the difference. They know what’s underneath the abstractions. That knowledge doesn’t come from shortcuts; it comes from repetition and problem-solving without an AI partner doing the work behind the scenes.
Laurent Doguin, Director of Developer Relations and Strategy at Couchbase, flagged this shift early. He’s right to be concerned. When junior developers rely too heavily on assistance from tools like ChatGPT, they stop developing the deeper critical thinking needed to design resilient, scalable systems. There’s more to software than getting code that runs: architecture, trade-offs, debugging, edge cases. All of it demands the ability to think through problems unaided.
For executive leaders managing large-scale engineering teams, this becomes a structural issue. You don’t just hire bodies; you develop skills pipelines. You’ll need to invest in bridging programs that upskill newer developers while ensuring that senior staff aren’t left carrying the architectural complexity alone. A structured system of mentorship, code reviews that probe reasoning rather than just correctness, and clearly defined progression paths can close that knowledge gap before it creates misalignment across the team.
The imperative of continuous education
You can’t bet against AI. It’s moving quickly, and it’s not slowing down to wait for formal education systems to catch up. That puts companies in a position where internal training moves from a “nice-to-have” to a necessity. If you don’t continually train your teams, their skills flatten out while the technology, especially generative AI, keeps evolving.
Peter Wood from Spectrum Search said it best: continuous learning isn’t optional anymore. It’s survival. AI tools hallucinate. They give wrong answers with complete confidence. If your teams don’t have solid fundamentals, they won’t know when to trust AI and when to step in. You don’t want people blindly deploying code that works on the surface but creates fragile systems underneath.
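To make “works on the surface” concrete, here is a minimal, hypothetical sketch of the pattern to watch for; the function is illustrative, not taken from any particular codebase.

```python
# Looks fine in a quick demo, and an assistant will happily suggest it,
# but the mutable default argument is shared across every call.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

print(add_tag("urgent"))   # ['urgent'] -- the demo passes
print(add_tag("billing"))  # ['urgent', 'billing'] -- state leaks between calls

# The robust version makes the default explicit.
def add_tag_safe(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

An engineer with solid fundamentals catches this in review; one who has only ever accepted suggestions may not, and that is the gap continuous training has to close.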
So make the investment. Build onboarding bootcamps that are relevant to today’s tools. Integrate regular refreshers that aren’t just lectures but are based on real technical use cases and decision-making. Give your engineers, product managers, and analysts time each quarter to focus specifically on skill upgrades. Don’t wait until something breaks or performance drops before you take action.
The companies that prioritize learning build structured knowledge that AI can’t replace. And that’s what lasts. AI will keep evolving, but companies with well-trained teams that know how to use it without over-depending on it will adapt faster, execute better, and stay ahead.
Evaluating AI utilization in candidates
The question isn’t whether someone uses AI tools like ChatGPT, it’s how they use them. If you’re filtering out candidates because they’ve leaned on AI, you’re missing the point. What matters is whether they rely on it blindly or use it intelligently to extend their capabilities. There’s a clear difference between outsourcing thinking and using AI to move faster while still owning the work.
Peter Wood of Spectrum Search is direct about it: the best candidates will be the ones who understand AI’s limits and apply it as a tool, not as a substitute. They’ll know when to trust the outputs, when to double-check, and when to fall back on independent thought. That type of judgment is going to be one of the most valuable skills in the workforce.
For executive teams, hiring strategies need to shift accordingly. Don’t screen for degrees or scan résumés for spotless experience. Screen for mindset. Give candidates short tasks and observe how they think, how they approach unknowns, and whether they challenge AI outputs instead of following them blindly. Review how they learn, how they refine ideas, and whether they can deliver something original with support from AI, not something AI produced with minor edits.
The future favors those who can reason, adapt, and innovate beyond templates. If a tool can automate something, it’s no longer a differentiator. What you need are people who can build on top, move faster, make better decisions, and execute with a clear understanding of what the machine can and can’t do. That’s where the real value will come from.
Key executive takeaways
- AI is outperforming traditional benchmarks: Leaders should rethink how they evaluate technical talent, as AI like ChatGPT now performs on par with students across engineering and science disciplines, challenging the relevance of academic credentials alone.
- Degrees no longer guarantee capability: Decision-makers must shift hiring strategies toward practical assessments and real-world problem-solving to separate true expertise from AI-augmented résumés.
- The junior–senior skills gap is widening: Technical leaders should implement mentorship and capability-building programs to address the growing reliance of junior developers on AI tools and the resulting knowledge imbalance with senior staff.
- Continuous learning is now core strategy: Executives should prioritize structured training and skill refresh programs that strengthen foundational understanding and reduce overdependency on AI-generated outputs.
- AI use is a filter, not a flaw: Rather than screening out AI-assisted candidates, focus hiring on those who use AI appropriately, leveraging it for speed without losing critical thinking or technical depth.