Over-reliance on AI tools can impair cognitive abilities

AI doesn’t make everyone smarter. Used the wrong way, it can make people less capable. A rising group of users has become heavily dependent on generative AI, self-described “sloppers” who outsource almost all their decision-making to chatbots, from choosing a meal to writing business emails. That level of dependency has a cost. It erodes critical thinking. It diminishes memory. Cognitive discipline fades fast when you stop exercising it.

A study from MIT, published in June, found that people who used ChatGPT to write performed worse on tests of memory, focus, and analytical reasoning than those who wrote unaided or used search engines. The impact was strongest among users under 30. EEG scans showed significant drops in brain activity, consistent with weaker cognitive engagement.

A joint survey from Microsoft and Carnegie Mellon found something similar. Office workers who relied most on AI tools were more likely to skip fact-checking and expressed lower confidence in their own judgment. That’s a red flag for any organization. If your team doesn’t question the output, your product, messaging, or strategy could rest on shallow foundations.

AI is not a substitute for thinking. It’s a tool. Used poorly, especially early in the decision-making chain, it can weaken the mental rigor we depend on in leadership, innovation, and execution.

Using AI strategically, especially at the end of the creative process, can enhance learning and creativity

There’s a smarter way to use generative AI, and it starts with when, not what, you ask. You don’t start a task by leaning on AI. You start it with your own thinking. Ideas, drafts, strategies, they need to come from you first. Once that skeleton exists, then bring in the AI. Use it to catch blind spots, illuminate angles you missed, stress-test solutions, or improve phrasing. This approach keeps your mind sharp and ensures the output is yours, with AI adding depth, not replacing direction.

The benefit is true collaboration. Instead of surrendering judgment, you shape the core work and invite AI to refine it. It’s like having a second set of eyes, fast, tireless, thorough, but not in charge. That’s the difference between being an AI-assisted professional and being automated.

For leaders building teams and culture, this matters. A team trained to think first, then optimize with AI, gains both agility and long-term skill. They won’t lose the ability to reason, write, or strategize just because they have AI access. They’ll grow sharper because they challenge themselves before turning to the software. That creates competitive resilience in a world where too many lean too fast on automation.

Keep thinking. Use AI to make your good work better. Not to avoid the work itself.

Lex facilitates collaborative writing that strengthens user skills rather than replacing human effort

Most writing tools powered by generative AI are designed to do the work for you. They produce text, average at best, inaccurate at worst, and leave you with something synthetic. Lex takes a different approach. It doesn’t replace you. It pushes you to improve. The tool gives you real-time, document-specific feedback that helps you write better.

Lex engages you through a clean interface that invites iteration. You can ask questions like “What’s missing here?” or “Is this persuasive enough?” and get constructive responses. It doesn’t dump generic suggestions. It tailors insight to your specific text. Beyond that, its style-check tools are flexible. You select what you want to focus on, brevity, clarity, repetition, citation gaps, and the system highlights only those areas. There’s also a custom mode where you define your own rules via prompt. Every suggestion includes context. Every change is your decision. You stay in control of the output.
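
Lex’s internals aren’t public, but the rule-scoped feedback pattern described above, pick focus areas and get only those flagged, is easy to approximate with any general LLM API. Here is a hypothetical sketch, not Lex’s actual implementation; the prompt wording and model name are assumptions:

```python
# Hypothetical sketch of rule-scoped writing feedback, approximating
# the pattern described above. This is NOT Lex's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review(text: str, focus_areas: list[str]) -> str:
    """Ask the model to flag issues only in the selected focus areas."""
    instructions = (
        f"Review the draft below. Flag issues ONLY in these areas: "
        f"{', '.join(focus_areas)}. For each flag, quote the passage "
        f"and explain the problem. Do not rewrite the draft."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

draft = "Our solution leverages synergies to empower stakeholders..."
print(review(draft, ["brevity", "clarity", "repetition"]))
```

The key design choice is the same one the article credits to Lex: the model critiques against rules the author selected, and the author decides what to change.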

This process matters for anyone in a leadership or communications role. Your team doesn’t just produce documents; they communicate strategy, value, and vision. Tools like Lex reinforce the process of learning while delivering. They keep human authors in the driver’s seat, with the AI acting as a smart reviewer.

In enterprise environments, that distinction is critical. Outsourcing communication is a strategic risk. Enabling your people to develop clarity and precision with meaningful AI support, that’s how you scale competence over time.

Study mode for ChatGPT fosters interactive learning and critical thinking

OpenAI’s new Study Mode feature makes generative AI more useful for long-term learning. Most generative tools are answer-driven. Study Mode flips that behavior. It engages users with questions and feedback loops instead of static responses. The system begins by asking what you want to study, at what level, and then adapts its approach based on your input. It asks questions. It challenges assumptions. It doesn’t spoon-feed solutions.

That format forces engagement. It encourages you to break down problems, think through solutions, and explain your reasoning as you go. It’s active, not passive. This drives understanding beyond just recalling facts, it embeds reasoning patterns that scale across tasks.
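
Study Mode ships inside ChatGPT, so there is no public API for the feature itself, but the loop it uses is easy to prototype. Below is a minimal, hypothetical sketch of a Socratic tutoring loop built on the OpenAI Python SDK; the system prompt and model name are illustrative assumptions, not OpenAI’s implementation:

```python
# Hypothetical sketch of a Study-Mode-style Socratic loop.
# NOT OpenAI's Study Mode implementation; the system prompt and
# model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = (
    "You are a tutor. Never give the final answer outright. "
    "First ask what the learner wants to study and at what level. "
    "Then probe with one question at a time, challenge assumptions, "
    "and ask the learner to explain their reasoning before moving on."
)

history = [{"role": "system", "content": SOCRATIC_PROMPT}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(f"Tutor: {reply}")
```

The entire difference from a standard chatbot is the system prompt and the persistent history: the model is instructed to ask rather than answer, and the running transcript lets it adapt to how the learner responds.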

From a leadership standpoint, this changes how we think about continuous professional development. Teams need more than access to information, they need frameworks that push them to examine, revise, and think deeply. Study Mode can be embedded into training programs or used as an internal knowledge development tool. It works for technical fields, strategic planning, and high-level knowledge building.

The feature is already live on all ChatGPT plans, Free, Plus, Pro, and Team, and will soon be available to ChatGPT Edu subscribers. If your people have access to ChatGPT, they have access to Study Mode. What you do with that access, train them to use it effectively or not, determines the real ROI on that AI spend.

LearnLM and Gemini support teacher-guided and self-directed education

Google launched two distinct education-focused platforms: Gemini for Education and LearnLM. Both are designed to increase the effectiveness of learning, but they serve different audiences. Gemini for Education is limited to academic institutions. It lets teachers and students upload material such as notes, textbooks, and assignments, and produce breakdowns, quizzes, study guides, and even podcast-style explanations. Teachers can create their own custom AI assistants, called “Gems,” to support specific lessons with guided simulations and tailored content.

LearnLM broadens accessibility. It’s designed for general users and integrates directly into products like Google Search, YouTube, and the Gemini AI platform. You can interact with LearnLM through Google AI Studio or even from your phone. It acts as an on-demand coach, pushing users to think through tough subjects, break down ideas with instructor-like feedback, and use active recall methods like flashcard-style questioning. It adapts based on how you respond, and that flexibility transforms the learning session into progressive skill building.
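
For teams that want to experiment programmatically, LearnLM has also been exposed as a model in Google AI Studio and the Gemini API. A minimal sketch using the google-generativeai Python library follows; the model identifier and system instruction are assumptions based on the experimental release and may change:

```python
# Minimal sketch: querying a LearnLM model via the Gemini API.
# The model name below is an assumption from the experimental
# AI Studio release and may change or be renamed.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

model = genai.GenerativeModel(
    model_name="learnlm-1.5-pro-experimental",  # assumed identifier
    system_instruction=(
        "Act as a coach: use active recall. Quiz me with one "
        "flashcard-style question at a time, then give feedback "
        "on my answer before asking the next."
    ),
)

chat = model.start_chat()
reply = chat.send_message("Help me learn the basics of supply-chain risk.")
print(reply.text)
```

The system instruction does the pedagogical work here: the same model behaves as an answer engine or a coach depending on how that instruction frames the session.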

For executives, LearnLM creates real value in workforce development. Employees can use it to break down new topics quickly, technical or strategic. If you’re onboarding teams in new markets, adopting emerging tech, or navigating policy environments, tools like LearnLM let people get up to speed faster using self-directed AI dialogue driven by their own goals.

Google launched LearnLM in May and Gemini for Education in June. While Gemini for Education is currently locked to academic contexts, LearnLM is open to anyone with a Google account, globally. The scalability and distribution of these platforms position them as high-leverage tools for corporate learning ecosystems.

A broad ecosystem of AI tools is enabling accelerated learning for diverse audiences

We’re now in a phase where specialized AI learning tools are wide-ranging and increasingly effective. There’s StudyMonkey and AI Blaze for AI-generated quizzes, summaries, and flashcards. These function well across both academic and general knowledge domains. Mindgrasp adds live class recording and note summarization. Tools like Wisdolia convert documents and web pages into flashcards that promote active recall, useful for deeper subject memorization.

Google’s own mobile app Socratic explains answers visually and with step-by-step solutions, designed for high school and early college students but accessible to anyone. Revisely builds custom quizzes and content maps for easier review. Tutor AI and TutorBin offer instant explanations and even connect users to live human tutors. Long-established platforms like Quizlet, Anki, and StudySmarter have added generative AI to support smart flashcard creation, spaced repetition, and deeper engagement.
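
For a sense of what sits underneath these flashcard tools: Anki and its peers build on variants of the SM-2 spaced-repetition algorithm, which lengthens the gap between reviews each time you recall a card successfully and shortens it when you fail. A simplified Python sketch of the classic SM-2 update follows; real apps tune the constants and scheduling details:

```python
# Simplified SM-2 spaced-repetition scheduler (the classic algorithm
# that Anki-style tools build on; real apps tune these constants).

def sm2_review(quality: int, reps: int, interval: float, ease: float):
    """Update a card's schedule after one review.

    quality:  self-graded recall, 0 (blackout) to 5 (perfect)
    reps:     consecutive successful reviews so far
    interval: current gap between reviews, in days
    ease:     ease factor (how fast intervals grow), floored at 1.3
    """
    if quality < 3:
        # Failed recall: restart the card with a short interval.
        return 0, 1.0, ease
    if reps == 0:
        interval = 1.0
    elif reps == 1:
        interval = 6.0
    else:
        interval = interval * ease
    # Ease factor drifts with answer quality (standard SM-2 formula).
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps + 1, interval, ease

# Example: three perfect recalls push the next review out ~16 days.
reps, interval, ease = 0, 0.0, 2.5
for _ in range(3):
    reps, interval, ease = sm2_review(5, reps, interval, ease)
    print(f"next review in {interval:.0f} days (ease {ease:.2f})")
```

The generative AI layer these platforms added sits on top of this kind of scheduler: the model writes the cards and quiz questions, while the algorithm decides when you see them again.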

For C-suite leaders, the important factor isn’t just the availability of these tools, it’s how these capabilities align with broader goals on workforce agility and knowledge retention. If employees can use these AI apps to master new material more quickly and with greater retention, you’re reducing training times and improving performance without expanding L&D budgets excessively.

These tools aren’t just for students. They support anyone who values fast, durable learning, at any stage of a career. Ignoring this part of the AI ecosystem is a missed opportunity for building smarter, more capable teams in less time.

AI can enhance human intelligence, but only with mindful, selective usage

AI isn’t inherently good or bad. It depends entirely on how it’s used. This might sound obvious, but it’s being missed at scale. Too many users interact with AI tools passively, asking for answers, direction, and outputs without doing the initial thinking themselves. That behavior creates the illusion of productivity while decreasing original thought and long-term learning.

Used correctly, AI can amplify your existing skills. But it has to be applied at the right moment, in the right way. If you build the foundation yourself, develop the ideas, do the rough work, and then bring in AI to challenge, refine, or expand, the outcome improves. You’re not giving up responsibility. You’re using the system to catch gaps and sharpen execution. That’s where the real value is.

For C-suite decision-makers, this shouldn’t be framed as a productivity question, it’s strategic. Teams that rely too heavily on AI for ideation, writing, or problem-solving will deteriorate in capability. Competence declines quietly. By contrast, individuals and teams that treat AI as feedback, not leadership, maintain higher-level cognitive skills and deepen their expertise over time.

This mindset needs to be part of internal culture. Organizations must set clear norms around AI use: when to use it, how to use it, and how to measure its impact. That ensures long-term skill growth across all levels. Without those guardrails, AI becomes a creative shortcut that dulls future performance rather than enhancing it. The winners here won’t be the ones who use AI the most, but the ones who use it most intelligently.

Final thoughts

The opportunity with AI isn’t automation, it’s acceleration. But only if you’re disciplined about how you implement it. The biggest gains won’t come from using more AI. They’ll come from using it more intelligently. When your team applies AI after the thinking is done, not before, you get sharper judgment, stronger output, and long-term capability that compounds.

Leaders need to set the tone. If your culture starts to treat AI as a shortcut, expect to see the quality of analysis, strategy, and execution drop over time. But when AI is used as a second layer, review, critique, enhancement, it builds sharper minds and smarter organizations. Decision-making improves. Learning accelerates. Teams grow instead of leaning.

That’s the delta that matters. AI’s technical edge is real, but what separates winning companies will be the human layer that insists on leading with thought, not automation. Build that culture, and the tools will serve you well, without eroding what makes your people great.

Alexander Procter

September 17, 2025
