AI Just Changed Everything (Again)

The world experienced something big this month, a true inflection point. There’s a buzz of awe, excitement, anxiety, and debate over AI’s newest and largest leap forward and what it means for the future. February 2026 is a month I’ll always remember, one that changed what’s possible and gave clarity about the task ahead.

On February 5th, two of the leading AI platforms released their newest models: GPT-5.3 Codex from OpenAI (maker of ChatGPT) and Opus 4.6 from Anthropic (maker of Claude). I’m far from a tech or AI expert; I’m more like a hiker discovering that every peak in front of him isn’t the top, and that the mountain is far bigger than he thought. But the jump in AI’s capabilities is big enough for even a neophyte like me to notice and appreciate.

AI entered the zeitgeist in November 2022 with OpenAI’s ChatGPT. For the first time, you could get information by asking questions rather than following links from a search engine. Users got answers faster and from a deeper pool of resources.

AI made mistakes, lots of them. It often produced “hallucinations,” bizarre, fabricated output, and it didn’t know when it was wrong. Overall, though, it was still very useful.

In 2024, AI took a leap forward with new reasoning models and greater inference capabilities. AI could virtually think and incorporate context. It could gather, organize, and teach almost anything. Users could build tools and automate a variety of tasks that typically took time and energy. AI could even write computer code that a developer or savvy-enough user could copy and paste. By late 2025, professional coders leaned primarily on AI to write large blocks of code.

February 5, 2026 arguably marked the start of the Level 3, agentic phase of AI, in which AI can create genuinely autonomous agents. AI can not only write code; it can also test and refine that code entirely on its own and produce complete products (this is huge).

Through AI, someone with no coding skills but an ability to communicate a concept or vision can quickly create polished apps, software, robust websites, and working agents. The latest iterations of AI have also been produced by AI itself: the models are writing new and improved code, and they can act increasingly autonomously.

AI can’t do or make everything, of course, and it is not immune from imperfections and mistakes. It’s human that way, too. But AI doesn’t need to be perfect; it just has to be better than the alternatives, including the ways things are done now. On a growing number of fronts, AI surpasses what people can do. Progress is unfolding more rapidly than any technological breakthrough before it.

Maybe too rapidly for comfort.

A recent blog post by OthersideAI CEO Matt Shumer entitled “Something Big Is Happening” went viral. In well-written prose, Shumer, someone on the front lines of AI, articulates his shock and anxiety at what’s happening before us:

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.

I’m not exaggerating. That is what my Monday looked like this week.

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn’t just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

Shumer writes that tech workers are quickly shifting from seeing AI as a helpful tool to fearing it as an existential threat.

Markets seem to agree. Plummeting stock prices suggest once-impregnable software leaders are now vulnerable. Few companies have grown faster over the years, or enjoyed greater gains in stock value, than software companies.

Software stocks are now among the hardest hit. Adobe is down 63% from its all-time high. Oracle is down 48%. Salesforce is down 47%. The software index ETF (Ticker: IGV) is down over 30%.

This month, the Chief Technology Officer of Dropbox, the file hosting company, tweeted:

It’s a weird time. I am filled with wonder and also a profound sadness.
I spent a lot of time over the weekend writing code with Claude. And it was very clear that we will never ever write code by hand again. It doesn’t make any sense to do so.
Something I was very good at is now free and abundant. I am happy…but disoriented.

Their troubles might stem from more than software now being free to write and within anyone’s reach. Many types of white-collar work, said to account for half of employment in the U.S., are increasingly vulnerable. Fewer people are needed in front of screens reading, typing, clicking, and interpreting things. Fewer are needed to gather and disseminate documents, emails, and spreadsheets. The result could be falling software subscriptions.

One of the better bear cases on AI’s near-term impact on jobs and the economy is laid out in a recent piece from CitriniResearch. Written as a memo from 2028, it paints a bleak picture in which, “This is the first time in history the most productive asset in the economy has produced fewer, not more, jobs.”

Citrini’s take is rational, compelling, and rooted in widely accepted facts and assumptions. But so are almost all bearish outlooks. Perhaps this one hits harder because it’s rooted in something no one has ever contemplated: the shrinking value of human intelligence.

Maybe the bears are right. Maybe, as Shumer warns, if AI comes for tech jobs, it will come for everyone else’s.

My views are still forming, but I don’t buy into all the doom and gloom just yet. I think a lot more nuance is called for.

Perhaps the biggest negative is the speed of AI’s creative destruction. Other technological breakthroughs and secular economic trends took decades before they affected the nature of work. Think: industrialization, automobiles, microprocessors, trade, outsourcing. Even the impacts of the internet and telecommunications unfolded over years and decades.

In all those cases, jobs became more plentiful, and incomes rose, as did GDP and standards of living. This has always been the case because the disruptive forces themselves birthed a multitude of products, services, and, yes, jobs that had not existed before.

One irrevocably lost farming job came with many new jobs in other (mostly new) industries. I suspect AI will follow a similar path. The rapid pace of change, however, could be messy economically, socially, and/or politically.

Shumer, Citrini, and other bears make an equally important point that gets lost amid attention-grabbing headlines: AI is a significant opportunity for those who embrace change. Frictions have been reduced or eliminated. Never has it been easier, faster, or cheaper to build something of value and scale it, or to drastically improve something already in place.

The effects of AI will likely be far from egalitarian, but there’s also a certain kind of fairness to them: the world’s most powerful tools are obtainable by anyone.

Shumer ends his note with some tips:

Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It’s $20 a month. But two things matter right away. First: make sure you’re using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that’s GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months… 

Second, and more important: don’t just ask it quick questions. That’s the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. 

And don’t assume it can’t do something just because it seems too hard. Try it…The first attempt might not be perfect. That’s fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here’s the thing to remember: if it even kind of works today, you can be almost certain that in six months it’ll do it near perfectly. The trajectory only goes one direction.

For now, the task is clear: learn, grow, iterate, and build, whether you’re an entrepreneur or employee. Both are paid to create value. AI should be harnessed to multiply and compound our work and ourselves.

The only way out is through, and it’s going to be a stimulating, and I trust fruitful, journey.

About the Author

Neil Rose, CFA, is the founder and CEO of Regency Capital Management.
