We’re in a unique moment for AI companies building their own foundation models. First, there is a whole generation of industry veterans who made their names at major tech companies and are now going solo. You also have legendary researchers with immense experience but ambiguous commercial aspirations. There’s a clear chance that at least some of these new labs will become OpenAI-sized behemoths, but there’s also room for them to putter around doing interesting research without worrying too much about commercialization. The end result? It’s getting hard to tell who is actually trying to make money.

To make things simpler, I’m proposing a kind of sliding scale for any company making a foundation model. It’s a five-level scale where it doesn’t matter whether you’re actually making money, only whether you’re trying to. The idea is to measure ambition, not success. Think of it in these terms:

Level 5: We are already making millions of dollars every day, thank you very much.
Level 4: We have a detailed multi-stage plan to become the richest human beings on Earth.
Level 3: We have many promising product ideas, which will be revealed in the fullness of time.
Level 2: We have the outlines of a concept of a plan.
Level 1: True wealth is when you love yourself.

The big names are all at Level 5: OpenAI, Anthropic, Gemini, and so on. The scale gets more interesting with the new generation of labs launching now, with big dreams but ambitions that can be harder to read.

Crucially, the people involved in these labs can generally choose whatever level they want. There’s so much money in AI right now that no one is going to interrogate them for a business plan. Even if the lab is just a research project, investors will count themselves happy to be involved. And if you aren’t particularly motivated to become a billionaire, you might well live a happier life at Level 2 than at Level 5.
The problems arise because it isn’t always clear where an AI lab lands on the scale, and a lot of the AI industry’s current drama comes from that confusion. Much of the anxiety over OpenAI’s conversion from a non-profit came because the lab spent years at Level 1, then jumped to Level 5 almost overnight. On the other side, you might argue that Meta’s early AI research was firmly at Level 2, when what the company really wanted was Level 4.

With that in mind, here’s a quick rundown of four of the biggest contemporary AI labs, and how they measure up on the scale.

Humans&

Humans& was the big AI news this week, and part of the inspiration for this whole scale. The founders have a compelling pitch for the next generation of AI models, with scaling laws giving way to an emphasis on communication and coordination tools. But for all the glowing press, Humans& has been coy about how that would translate into actual monetizable products.

It seems the company does want to build products; the team just won’t commit to anything specific. The most they’ve said is that they will build some kind of AI workplace tool, one that replaces products like Slack, Jira, and Google Docs while also redefining how those tools work at a fundamental level. Workplace software for a post-software workplace! It’s my job to know what this stuff means, and I’m still pretty confused about that last part. But it is just specific enough that I think we can put them at Level 3.

Thinking Machines Lab

This is a very hard one to rate! Generally, if you have a former OpenAI CTO and ChatGPT project lead raising a $2 billion seed round, you have to assume there is a pretty specific roadmap. Mira Murati does not strike me as someone who jumps in without a plan, so coming into 2026, I would have felt good putting TML at Level 4. But then the last two weeks happened.
The departure of CTO and co-founder Barret Zoph has gotten most of the headlines, due in part to the special circumstances involved. But at least five other employees left with Zoph, many citing concerns about the direction of the company. Just one year in, nearly half the executives on TML’s founding team are no longer working there.

One way to read these events is that the team thought they had a solid plan to become a world-class AI lab, only to find the plan wasn’t as solid as they thought. Or, in terms of the scale, they wanted a Level 4