The AI Predictions You're Hearing May Say More About the Predictors Than the Future

You Have More Agency Than You Think

Palantir CEO Alex Karp made a striking claim in early 2026: in the age of AI, the two groups most likely to succeed are those with vocational and trade skills. And the neurodivergent. He's spoken openly about his own dyslexia, and his company has been intentionally built around unconventional, contrarian thinkers. Meanwhile, Anthropic president Daniela Amodei argued the opposite: that humanities majors and people with strong emotional intelligence and communication skills are precisely what the AI era demands. She majored in literature and has said Anthropic actively hires for EQ, curiosity, and interpersonal skills.

Two powerful leaders. Two confident predictions. Radically different answers.

Here's the observation: both of them are describing themselves.

This isn't a criticism of their intentions. It's a window into something far more useful. A cognitive pattern that, once named, may give you back considerable control over how you think about your own career in this moment. It's a pattern that runs not just through Silicon Valley, but through virtually every high-performing professional environment. Including, if we're being honest, the ones you and I operate in every day.

The Psychology Has a Name (Several, Actually)

What Karp and Amodei are exhibiting is a well-documented cluster of cognitive biases that affect even the most analytically sophisticated thinkers.

The first is the availability heuristic: the tendency to assess the likelihood or importance of something based on how easily examples come to mind. Originally identified by psychologists Tversky and Kahneman, the availability heuristic explains why vivid, personal experiences carry disproportionate weight in our predictions. If neurodivergent thinkers have populated your professional world—because you are one, and because you built a company that sought them out—then neurodivergent success is intensely available to your memory. It feels like the pattern. Because for you, it largely is.

The second is confirmation bias: the tendency to notice and remember information that supports what we already believe, while unconsciously filtering out what contradicts it. You don't see the neurodivergent people who didn't thrive in the AI era. You don't see the humanities graduates who are struggling. The hits are visible; the misses are invisible.

The third, and perhaps most consequential, is survivorship bias: drawing conclusions from the data that's visible (the people who succeeded) while ignoring the far larger, invisible population of those who didn't. Leadership advice is particularly vulnerable to this. We study the winners and replicate their attributes, rarely asking how many people with identical attributes never made it. As one analysis put it plainly: "Just because there are successful CEOs that get up at 5:00 a.m. doesn't mean that getting up at 5:00 a.m. will cause you to become a successful CEO." The same logic applies to being neurodivergent, having a literature degree, or dropping out of Stanford.
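For readers who like to see the statistics behind the idea, here is a minimal simulation (my own illustration, not from any of the studies cited here) of the classic coin-flipping version of survivorship bias: when outcomes are pure luck, a handful of "winners" still emerge, and studying only them tells you nothing about what caused their success.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# 10,000 hypothetical "founders" each make 10 fifty-fifty bets.
# Success here is pure luck -- no skill, no special trait.
founders = [[random.random() < 0.5 for _ in range(10)] for _ in range(10_000)]

# The "survivors" are the few who happened to win every bet --
# the only ones whose stories we ever read about.
survivors = [f for f in founders if all(f)]

print(f"{len(survivors)} of 10,000 founders won all 10 bets by luck alone")
# On average about 10,000 / 2**10, i.e. roughly 10, will survive.
# Interviewing those ~10 about their habits reveals nothing causal:
# the other ~9,990 made exactly the same bets and vanished from view.
```

The point of the sketch is that a visible cluster of winners is guaranteed to exist even under pure chance, which is why "what do the winners have in common?" is the wrong question without the denominator of everyone who tried.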

There's also the Baader-Meinhof phenomenon—also called frequency illusion—which describes how something you've recently noticed begins to seem ubiquitous. It doesn't suddenly appear more often. You've simply become sensitized to it. Your brain's reticular activating system flags every confirming instance, creating the impression of a dominant pattern where there may only be a personally relevant one.

None of these biases are signs of low intelligence. In fact, research suggests they operate most powerfully in high-confidence, high-expertise individuals. The people who have spent decades accumulating evidence that their way of thinking works. The more successful you are, the more invisible your lens becomes.

It's Not Just Karp and Amodei

The same pattern holds when you examine every major AI leader's prediction about who wins the future of work.

Jensen Huang (Nvidia) has urged blue-collar workers to embrace AI as a tool and frames AI literacy and adaptability as the defining success factors. His biography reads as a masterclass in forced adaptation: immigrating to the U.S. at age nine speaking no English, accidentally enrolled in a reform school for troubled youth in rural Kentucky, working graveyard shifts at Denny's at 15. Adaptation wasn't a philosophy for Huang; it was survival. Of course that's what he sees when he surveys the future.

Sam Altman (OpenAI) argues the meta-skill is learning how to learn, paired with creativity, resilience, and the ability to make sound decisions under radical uncertainty. Altman dropped out of Stanford after two years and has credited poker with teaching him "how to make decisions with very imperfect information." His entire career, from Loopt to Y Combinator to OpenAI, has been a series of high-stakes bets under exactly that kind of uncertainty. He's prescribing his own cognitive operating system as the universal formula.

Sundar Pichai (Google) offers perhaps the most universalist prediction: that anyone who learns to use AI tools will be positioned to succeed, regardless of background or field. Pichai's path is uncommonly multi-domain: a metallurgical engineering degree from IIT Kharagpur, a materials science master's from Stanford, an MBA from Wharton, followed by a methodical rise through Chrome, Android, and eventually the CEO office at Google. His prescription for the AI era is his biography: anyone can do this if they just keep adapting incrementally.

Laid out plainly, the mirror is hard to miss:

1. Alex Karp / Palantir
Background: Neurodivergent; holds a doctorate in philosophy from Goethe University Frankfurt
Prediction: Tradespeople and neurodivergent thinkers will win the AI era
The mirror: He's championing the exact traits that define him — and that he built his company around

2. Daniela Amodei / Anthropic
Background: Literature degree; built her leadership philosophy around EQ, communication, and human connection
Prediction: Humanities majors and emotionally intelligent communicators will thrive
The mirror: She's describing her own educational path and the hiring culture she created

3. Jensen Huang / Nvidia
Background: Taiwanese immigrant who arrived in the U.S. at age nine speaking no English; worked graveyard shifts at Denny's at 15
Prediction: Those who embrace AI tools and adapt relentlessly will succeed — background doesn't matter
The mirror: Adaptation has never been a strategy for Huang — it's been survival. He's prescribing his own biography.

4. Sam Altman / OpenAI
Background: Stanford dropout; credits poker — not school — with teaching him how to make decisions under radical uncertainty
Prediction: The meta-skill of learning how to learn — creativity, resilience, comfort with ambiguity — is what separates winners from everyone else
The mirror: He's describing his own cognitive operating system and calling it the universal formula

5. Sundar Pichai / Google
Background: Metallurgical engineer turned materials scientist turned MBA turned CEO — one of the most methodically multi-domain career paths in tech
Prediction: Anyone, in any field, can succeed in the AI era — as long as they learn to use the tools
The mirror: His entire career has been incremental adaptation across disciplines. Of course that's the path he sees.

None of them are wrong. That's the critical distinction. Each of these paths is genuinely viable. But each is also deeply, predictably colored by a single person's lived experience. And then it’s presented to millions of people as an objective forecast about the future.

I've Watched This Pattern Up Close for Nearly 30 Years

Here's where the research stops being academic for me.

I've worked with high performers full-time since 2008, and part-time since 1997. Over that time, I've worked closely with nearly 2,900 people across law, finance, technology, healthcare, and beyond—helping them make career decisions, develop their professional brand, and build strategies for what comes next. I added executive coaching in 2018, working with senior leaders on decision-making and the kind of difficult conversations that don't have clean answers.

What that means practically is this: I have had deeply honest, often vulnerable conversations with several thousand accomplished professionals over decades. And I've noticed something in their most real and unguarded moments that maps precisely onto what we're watching these tech CEOs do on a global stage.

They see their world through the lens they've spent a career building.

Lawyers see almost everything through a liability and risk framework because that's the lens that's kept them sharp, credible, and effective. Finance executives reduce complex decisions to P&L implications because that's how they've learned to evaluate what matters. Operators want a process for everything. Strategists see five moves ahead but sometimes miss what's right in front of them. Communicators can articulate almost anything, but often struggle when it comes to staying with ambiguity long enough to let the right answer surface for themselves.

Here's the thing: those lenses are powerful. They're not wrong. They represent decades of refined expertise. But a lens that makes you exceptional in one domain can make you a less reliable narrator about everything outside of it, including the future of your own industry, your own leadership, and your own career.

Much of the work I do with clients is designed specifically to help them see outside that lens. To borrow a framework from another field. To ask a question their training never would have prompted. To recognize that the most important answer they need right now might not live inside the domain where they've spent the last twenty years.

The tech CEOs making sweeping AI predictions aren't exempt from this. If anything, their scale of success and investor expectations make the pattern harder to see. Not to mention easier to present with extraordinary confidence.

What the Actual Research Says (Not Just the Predictions)

A note before we get to the data, because the data needs context. Some of my clients have been laid off because of AI. Others have been hired back into roles that didn't exist two years ago. Some are watching their departments shrink while a new function grows three floors up. This isn't theoretical for them, and it shouldn't be theoretical here. The disruption is real. What I'm pushing back on is not the existence of change. It's the false certainty about who it will affect, how fast, and in which direction. That uncertainty cuts both ways, and it belongs in this conversation.

Setting aside the individual forecasts, here's what workforce researchers and economists actually show. It’s a picture that’s both more nuanced and, in some ways, more encouraging than the headlines suggest.

Nearly 9 in 10 senior HR leaders expect AI to reshape jobs in 2026. But the prediction that mass job displacement will materialize quickly keeps not coming true. Hearing how my own clients are actually deploying AI inside their organizations gives me a strong sense that we're quite a ways off from mass displacement. (Although I'm writing this as Meta lays off another 700 people. A reminder that the disruption, however uneven, is real.) Historical patterns and demographic necessity (shrinking workforces in most developed nations) are working as powerful structural counterbalances. Anthropic's own research (somewhat ironically, given Daniela Amodei's optimism) has mapped out which roles AI could most plausibly displace, and the picture is more targeted than the blanket disruption narrative suggests. Worth noting: Anthropic's own advertising currently defies that narrative entirely, positioning Claude as a "thinking partner." A collaborator, not a replacement.

The picture on compensation is equally unsettled. While some early data suggested meaningful wage premiums for AI-fluent roles, a March 2026 Korn Ferry survey of more than 4,000 companies across 133 countries found only a modest 10% average premium for AI skills — and 64% of compensation professionals say they genuinely don't know what premium to offer. "Firms know they have to pay more, but they don't know how much," noted one Korn Ferry senior partner. The one exception: AI-ready leaders at the executive level, where nearly 40% of compensation professionals expect higher base salaries, signing bonuses, and equity incentives for those driving transformation. If you're reading this, that's likely you.

Perhaps most importantly: a peer-reviewed study published in 2025 found that workers who know more about AI are actually more optimistic about their own career outcomes. Fear, it turns out, is most acute in the absence of direct experience. Engagement with the technology tends to produce the opposite of what the fearful imagination projects.

That last finding matters enormously. Because the dominant emotional register of AI career coverage right now, including the high-profile predictions we've been analyzing, is anxiety. And anxiety, by its nature, amplifies the authority of anyone who speaks with confidence. When someone smart tells you they know who will win and who will lose, and you're already afraid, you listen. You don't examine the source of the prediction or ask whose reflection is staring back from the crystal ball.

And let’s be honest, even after all of this data: nobody knows. Not the researchers, not the compensation professionals, not the economists. And certainly not the CEOs making the loudest predictions. The studies contradict each other. The premiums are smaller than expected. The layoffs are real but uneven. The rehiring is happening but not where predicted. We are in genuinely uncharted territory, which is precisely why outsourcing your thinking to any single confident voice, however credentialed, is the wrong move right now.

What I Did Instead of Worrying

If you’re interested, here’s my own experience.

When AI started making genuine headlines—not as science fiction but as a real, present-tense disruption to professional work—I was afraid. Not in a vague, theoretical way. In a specific, very personal way.

I'm a writer. It's not incidental to what I do. It’s the very tool I use in 70% of my work with executives. The craft of finding the right word, constructing the right sentence, building a narrative that carries someone from confusion to clarity—that has been central to my practice for decades. And AI writes. Fluently, prolifically, around the clock, without billing by the hour.

How do you compete with that?

My first instinct—it’s worth calling out, because I've heard versions of it from hundreds of clients since—was a kind of defensive crouch. This threatens me. Therefore I should be afraid of it. That's the lens talking. That's the availability heuristic in action, conjuring the most vivid and alarming version of the future because it's the one most available to my imagination.

What I actually did after 12 months of fear was open Perplexity and start using it. Not strategically. Not with a plan. With something more like, “I’ve got to figure this out.” In fact I said to friends: I may go down, but I'll go down trying.

That was the turning point.

What I found, and what I now see reflected in clients who make the same shift from avoidance to engagement, is that the technology didn't replace my thinking. It accelerated it. Research that once took hours to complete is now done, or at least meaningfully started, in minutes. I can pressure-test three ideas simultaneously. I can sit with a client in real time as they navigate a complex, unexpected career pivot and have substantive, well-supported thinking ready in the same conversation rather than between sessions. The work is deeper, faster, and more meaningful. And my practice is as busy as ever.

The combination of their experience + my expertise + AI has become something that none of the three could achieve alone.

I tell this story not to suggest my path is the path. I am, after all, doing exactly what this article warns against. I'm describing my own experience and offering it as evidence. What I'm actually suggesting is something more modest and more universal: the fear is understandable, the lens is real, and direct engagement with the technology tends to dissolve both in ways that abstract prediction never can.

You probably have 80 ideas already. And if you don't, and you haven't used AI yet, buckle up. The creators, entrepreneurs, and leaders I know are having a hard time pulling away from it because it puts their creativity into overdrive. The barrier is almost never capability. It's almost always the anxious story we've been told, and told ourselves, about what's coming. The clients I work with who truly get into the tools find this almost universally: once they start experimenting, those stories begin to lose their grip.

The Real Meta-Skill Nobody's Naming

If there's a genuine throughline across all of this research, it isn't neurodivergence, humanities fluency, immigrant resilience, dropout grit, or incremental adaptation. Those are real. They matter. But the common thread underneath all of them is something more foundational: self-awareness about how you think.

Leaders who can identify their own cognitive tendencies — who can catch themselves over-indexing on familiar patterns and ask what they might be missing — make materially better decisions. Research published in the Journal of Behavioral Decision Making and in organizational psychology literature confirms that metacognitive awareness — the ability to observe your own thinking — is one of the strongest predictors of adaptive performance under uncertainty. A 2025 Harvard Business Review study found, with some irony, that executives who relied too heavily on AI forecasting tools actually made worse predictions — because the tools amplified their existing optimism bias rather than correcting for it.

Knowing your own lens is not just a useful soft skill. In the AI era, where the loudest voices are the most confident and the most confident voices are the most self-referential, it may be the most genuine competitive advantage available.

You are not a billionaire who built a defense tech company. You are also not a philosophy PhD, a literature major from an elite university, or a college dropout turned venture capitalist who learned decision-making at a poker table. You are a senior professional with a specific body of experience, expertise, and judgment that is genuinely — irreplaceably — yours. That is not a limitation. It is a starting point. And a far more reliable one than any prediction from someone describing their own reflection.

The question worth sitting with isn't: "Am I neurodivergent enough? Did I major in the right thing? Did I adapt the right way?" Those are their questions, mapped onto you.

The question is: "What do I know, how do I think, and what can I actually do with that — right now, in active collaboration with these tools?"

Sources & Further Reading

  1. Fortune — "Palantir's billionaire CEO says only two kinds of people will succeed in AI era" (March 2026) — fortune.com

  2. National Today / ABC News — "Anthropic Cofounder Says Humanities Majors Will Be 'More Important'" (February 2026)

  3. Fortune — "Anthropic cofounder says studying the humanities will be 'more important than ever'" (February 2026) — fortune.com

  4. Business Insider — "Anthropic President: AI Will Make Humanities Majors 'More Important'" (February 2026)

  5. MEXC News / Nvidia — "Nvidia CEO Says AI Skills Beat Degrees in Hiring" (February 2026)

  6. Business Insider — "Nvidia's Jensen Huang Urges Blue-Collar Workers to Embrace AI" (March 2026)

  7. Immigrant Learning Center — Jensen Huang biography and immigration history

  8. Britannica / Founderoo — Sam Altman biography

  9. NDTV / LinkedIn Pulse — Sundar Pichai education and career background

  10. Forbes — "Availability Heuristic: What It Is and How to Overcome It" (April 2024)

  11. The Leadership Sphere — "Availability Heuristic: The Cognitive Bias That Will Hold You Back" (2023)

  12. Verywell Mind / Scribbr — Baader-Meinhof Phenomenon explained

  13. Farnam Street — "Survivorship Bias: The Tale of Forgotten Failures"

  14. The Decision Lab — Survivorship Bias

  15. A Smart Bear — "Business Advice Plagued by Survivor Bias" (2025)

  16. PMC / National Institutes of Health — "The Impact of Cognitive Biases on Professionals' Decision-Making" (2022)

  17. PMC — "Invulnerability bias in perceptions of AI's future impact" (2025)

  18. CNBC — "AI will impact jobs in 2026, say 89% of HR leaders" (November 2025)

  19. Forbes — "2026 Workplace Trends: Human Skills, AI, and Talent Scarcity" (December 2025)

  20. Harvard Business Review — "Research: Executives Who Used Gen AI Made Worse Predictions" (July 2025)

  21. Fortune — "Anthropic just mapped out which jobs AI could potentially replace" (March 2026)

  22. The Hill — "7 skills you'll need in 2026 to win the human worker vs. AI battle" (October 2025)

  23. Korn Ferry — "Only Modest Pay Bumps for AI Skills" (March 2026) — kornferry.com

About Jared

Jared Redick is a San Francisco-based executive coach, communications strategist, and brand development consultant with more than 25 years of experience helping companies and high-level professionals position themselves for growth and change. Get career coaching here, or co-develop your professional identity here.

FAQ: AI Career Predictions, Cognitive Bias, and What It Means for Your Future

  • What did Alex Karp predict? Palantir CEO Alex Karp stated in early 2026 that the two groups most likely to succeed in the AI era are people with vocational or trade skills, and people who are neurodivergent. He has spoken openly about his own dyslexia and framed unconventional thinking as a core survival trait in the age of AI.

  • What did Daniela Amodei predict? Anthropic president Daniela Amodei argued in February 2026 that studying the humanities will be "more important than ever" in the age of AI — that qualities like communication, emotional intelligence, curiosity, and human connection will become more valuable as AI handles technical tasks, not less.

  • Do their predictions contradict each other? Yes, pointedly so. Karp has suggested AI will effectively "destroy humanities jobs," while Amodei argues humanities-trained professionals are precisely what the AI era demands. Both predictions are plausible on their face — and both align almost exactly with each leader's personal background, professional identity, and hiring philosophy.

  • What cognitive biases might be shaping these predictions? Several overlapping biases are at play. The availability heuristic leads people to over-weight outcomes that are vivid and personally memorable. Survivorship bias causes leaders to focus on patterns visible in their own success while ignoring the larger population of those who didn't succeed the same way. Confirmation bias reinforces existing beliefs by filtering out contradicting evidence. And the Baader-Meinhof phenomenon creates the illusion that a familiar pattern is universal when it may simply be personally salient.

  • How should you weigh career predictions from AI leaders? With calibrated skepticism. Their predictions are informed by genuine domain expertise — but they are also deeply shaped by personal experience and the cognitive biases all high-confidence experts are vulnerable to. When a leader says "people like X will win," it is worth asking whether that prediction happens to describe the leader themselves. Use their perspectives as data points, not directives.

  • Which skills does the research actually favor? Research consistently points to hybrid skill sets — combining domain expertise with AI fluency — as the strongest predictor of career resilience. Roles integrating AI into existing functions command meaningful wage premiums in multiple markets. Emotional intelligence, strategic thinking, complex judgment, and clear communication also remain high-value precisely because they are difficult for AI to replicate.

  • Will AI eliminate jobs outright? Workforce research suggests significant transformation rather than wholesale elimination. Historical patterns and demographic necessity — including shrinking workforces in developed nations — are working as structural counterbalances to rapid displacement. Companies using AI for human augmentation rather than replacement show substantially stronger business outcomes. The more accurate framing is job transformation, not job extinction — particularly for senior professionals.

  • What is survivorship bias in career advice? Survivorship bias is the cognitive error of drawing conclusions only from visible successes while ignoring the far larger population of failures. In career advice, it means treating the traits of highly successful people as causal factors — without accounting for the many others who shared those same traits and had very different outcomes.

  • What is The Redick Group? The Redick Group is an executive career coaching and positioning firm based in San Francisco. Founded by Jared Redick, the practice specializes in helping senior executives, C-suite leaders, and board members craft compelling career narratives, navigate transitions, and position themselves strategically in the hidden job market. Jared has worked directly with more than 2,800 professionals since 1997.