Key Takeaways
- 60 trillion tokens in 30 days. An internal Meta leaderboard called “Claudeonomics” tracked AI token consumption across 85,000+ employees — logging the equivalent of approximately $9 billion in compute costs at public pricing.
- The top user: 281 billion tokens. One Meta employee averaged 9.36 billion tokens per day for a month. For context, a typical Claude conversation uses 1,000–10,000 tokens.
- “Tokenmaxxing” is the new Silicon Valley productivity metric. Jensen Huang and Meta’s CTO have both publicly endorsed AI token spending as a proxy for engineer productivity — Huang said he would be “deeply concerned” if a $500k engineer wasn’t burning $250k in tokens.
- The obvious problem. Some employees are leaving AI agents running idle for hours to inflate their position. Input metrics are not output metrics. 60 trillion tokens produced is not 60 trillion tokens of value delivered.
The Leaderboard That Explains 2026’s AI Culture
In early April 2026, The Information reported on an internal Meta intranet tool called “Claudeonomics” — named, somewhat ironically, after Anthropic’s Claude models that Meta employees use extensively.
The leaderboard was not built by Meta’s leadership. An employee created it voluntarily on the company intranet. It tracks AI token consumption — the fundamental unit of large language model usage — across more than 85,000 Meta employees. The top 250 consumers are ranked and awarded titles:
- Token Legend — the highest achievers
- Session Immortal — extraordinary session duration
- Model Connoisseur — breadth of model usage
- Cache Wizard — efficiency in reuse
In the most recent 30-day period tracked by the leaderboard, Meta employees collectively consumed 60 trillion tokens. The single highest individual consumer logged 281 billion tokens over that period — an almost incomprehensible quantity that works out to approximately 9.36 billion tokens every day for a month.
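The scale of these figures is easier to grasp with a back-of-envelope check. This sketch uses only the numbers from the reporting above (281 billion tokens for the top user over 30 days, and the 1,000–10,000 token range for a typical conversation); it is illustrative arithmetic, not disclosed Meta data.

```python
# Back-of-envelope check of the leaderboard figures reported above.
TOP_USER_TOKENS = 281e9  # 281 billion tokens over the tracked period
DAYS = 30

top_user_daily = TOP_USER_TOKENS / DAYS
print(f"Top user per day: {top_user_daily / 1e9:.2f} billion tokens")

# A typical Claude conversation uses roughly 1,000-10,000 tokens, so the
# top user's daily volume is on the order of a million to ten million
# ordinary conversations' worth of tokens every day.
conversations_low = top_user_daily / 10_000
conversations_high = top_user_daily / 1_000
print(f"Equivalent conversations/day: {conversations_low:,.0f} to {conversations_high:,.0f}")
```

At roughly a million conversation-equivalents per day minimum, this volume can only plausibly come from automated agent pipelines, not interactive chat.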
Direct Answer: What is Meta’s Claudeonomics leaderboard? Claudeonomics is an internal Meta employee-built leaderboard on the company’s intranet that tracks AI token consumption across 85,000+ employees and ranks the top 250 “power users.” It was built by employees, not mandated by management, though Meta’s leadership has publicly endorsed high AI token usage as a productivity indicator. The leaderboard logged 60 trillion tokens in a recent 30-day period. Top performers receive gamified titles like “Token Legend” and “Session Immortal.” It is part of a broader Silicon Valley trend called “tokenmaxxing” — using AI token spending as a competitive proxy for productivity.
The Corporate Endorsement: Jensen Huang and Andrew Bosworth
The Claudeonomics leaderboard did not emerge in a vacuum. It reflects an explicit cultural position from some of Silicon Valley’s most prominent figures.
Jensen Huang, CEO of Nvidia: Last month, Huang stated that he would be “deeply concerned” if an engineer earning $500,000 annually was spending less than $250,000 per year on AI tokens. The implication: half of a senior engineer’s salary should be redirected into AI compute to justify the engineer’s existence.
Andrew Bosworth, CTO of Meta: Bosworth said publicly in February that one of Meta’s top engineers spends the equivalent of his entire salary on AI tokens annually and has achieved approximately 10× productivity as a result. His conclusion: “It’s a no-brainer deal; keep doing it, with no upper limit.”
Mark Zuckerberg’s underlying directive: The cultural context for Claudeonomics traces to Zuckerberg’s internal instruction to engineering teams to rewrite the existing codebase — cleaning up legacy code to make it parsable by AI — in preparation for AI systems to take over routine code modifications. Heavy AI token usage is the leading indicator that engineers are doing this work.
Meta has also formalised the connection between AI usage and career advancement: from 2026, employee performance reviews include assessment of “AI-driven impact” — how much employees use AI to deliver results is now a “core expectation.”
The Problem: Tokenmaxxing vs Actual Productivity
The Claudeonomics leaderboard has generated as much internal criticism as enthusiasm.
The fundamental flaw: Token consumption is an input metric, not an output metric. Measuring productivity by tokens consumed is analogous to measuring writing quality by words typed, or engineering quality by lines of code. More input does not automatically mean more output.
The gaming problem: Multiple sources confirm that some Meta employees leave AI agents running for hours executing research tasks specifically to inflate their leaderboard position — consuming tokens while producing nothing of value. The leaderboard incentivises the appearance of AI usage, not the reality of AI-driven results.
The cost question: At public pricing, 60 trillion tokens in 30 days would cost approximately $9 billion. Meta uses internal infrastructure rather than paying public API prices, so the actual cost is lower — but not zero. Enterprise GPU time is expensive. 60 trillion tokens of AI compute that produces idle agent loops rather than business value is a real cost.
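The “$9 billion at public pricing” estimate can be inverted to see what blended per-token rate it assumes. The rate derived below is an implication of the two reported figures, not a published price from any provider:

```python
# Implied blended rate behind the "$9 billion at public pricing" estimate.
# Both inputs come from the article's reported figures; the derived rate
# is an implication of those figures, not a provider's price list.
TOTAL_TOKENS = 60e12          # 60 trillion tokens in 30 days
PUBLIC_PRICE_ESTIMATE = 9e9   # USD, at public API pricing

implied_rate = PUBLIC_PRICE_ESTIMATE / (TOTAL_TOKENS / 1e6)
print(f"Implied blended rate: ${implied_rate:.0f} per million tokens")
```

The implied $150 per million tokens sits at the premium end of frontier-model output pricing, which suggests the $9 billion figure should be read as a rough upper-bound valuation rather than a realistic bill.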
Bloomberg’s Joe Weisenthal framed the criticism sharply: “What is the point of measuring productivity by total token consumption?” He described the practice as having a “backyard moonshine vibe” — pursuing a numerical metric with the fervour of optimisation while decoupling from the actual goal.
Tech analyst Noah Brier offered a more charitable read: “I don’t think it makes sense, but when you’re trying to turn a ship as big as Meta, sometimes you have to deliberately overcorrect.” The leaderboard may function as a cultural forcing function — getting 85,000 employees to change work habits — even if the specific metric is flawed.
The Models Being Used
The “Claudeonomics” name is revealing. Meta employees are not limited to Meta’s own models. They use:
- Anthropic’s Claude (giving the leaderboard its name)
- OpenAI’s GPT models
- Google’s Gemini
- Meta’s internal MyClaw (Meta’s version of OpenClaw)
- Manus (an agent framework Meta recently acquired)
- Meta’s Llama models (available internally)
The fact that the leaderboard is named after Anthropic’s Claude — a competitor’s product — at a company that has invested billions in its own AI development is itself a data point about which models Meta employees find most useful for their actual work.
The “Tokenmaxxing” Phenomenon in Context
Meta’s Claudeonomics is the most visible instance of a broader cultural shift in enterprise tech. The practice — spending heavily on AI compute as a deliberate productivity strategy — has acquired the name “tokenmaxxing” in developer communities.
The underlying argument has legitimate foundations. If a $500k engineer can be 10× more productive with $250k in AI compute, the company spends $750k to get output that would otherwise cost $5 million in salaries (10 engineers at $500k each). The maths genuinely work — if the productivity multiplier is real and sustained.
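The tokenmaxxing economics above can be made explicit. All inputs are the hypothetical figures from the text; the point of the sketch is that the effective leverage is the replaced salary cost divided by the total spend, which is lower than the raw productivity multiplier:

```python
# The tokenmaxxing economics from the text, as explicit arithmetic.
# All inputs are the hypothetical figures quoted in the article.
salary = 500_000       # senior engineer salary (USD/year)
ai_spend = 250_000     # AI compute budget (USD/year)
multiplier = 10        # claimed productivity multiple

total_cost = salary + ai_spend                    # 750,000
equivalent_headcount_cost = multiplier * salary   # 5,000,000 in salaries replaced

leverage = equivalent_headcount_cost / total_cost
print(f"Spend: ${total_cost:,} replaces ${equivalent_headcount_cost:,} of salaried output")
print(f"Effective leverage: {leverage:.1f}x")  # ~6.7x, and only if the multiplier holds
```

Note that even under the most generous assumptions, the effective return is about 6.7×, not 10× — and the entire case collapses if the multiplier is inflated by self-reporting.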
The scepticism also has legitimate foundations. Many productivity claims from AI-heavy workflows are based on self-reporting, short-term measurement, and tasks that are easy for AI to accelerate (writing, summarisation, code generation for well-understood problems) rather than the deep reasoning and novel problem-solving that justify senior engineer salaries.
The honest position: AI tools provide genuine productivity leverage on a significant subset of knowledge work tasks. The leverage is real but not universal. Measuring it by token consumption rather than by output quality and delivery speed produces incentives that can degrade the signal the metric was supposed to track.
What This Means for AI Sovereignty
For Vucense readers, Meta’s Claudeonomics surfaces a specific concern: AI tool dependency and data exposure at enterprise scale.
When 85,000 Meta employees use Claude, GPT, and Gemini for their daily work, they are feeding proprietary business information, internal project details, and strategic discussions to models operated by Anthropic, OpenAI, and Google. Each token processed by a cloud AI model is data that left the enterprise network.
The enterprise API agreements with these providers typically include privacy protections — data is not used for training, logs are deleted, etc. But the structural dependency remains: Meta’s competitive intelligence about its own business flows through competitors’ infrastructure when employees use Claude or GPT for internal work.
The sovereign alternative — models running entirely on internal infrastructure with no external data exposure — is what Meta’s Llama models and the internal MyClaw tooling are ostensibly designed to achieve. The existence of Claudeonomics, which prominently features external model usage, suggests the internal tooling has not yet fully replaced the external models in employee preference.
This is the enterprise AI sovereignty challenge in miniature: the best tools often belong to potential competitors, and the productivity cost of restricting employees to internal tools may outweigh the data exposure risk.
FAQ
What does “Claudeonomics” mean? It is a portmanteau of “Claude” (Anthropic’s AI model) and “economics,” reflecting that the leaderboard tracks the economic resource (AI tokens) spent on Claude and other models. The name was chosen by the employee who built the leaderboard, not by Meta leadership.
Did Meta authorise this leaderboard? The leaderboard was built voluntarily by an employee on the company intranet, not by management. However, Meta’s leadership has publicly endorsed the culture of high AI usage that the leaderboard reflects, and Meta has separately built official AI usage dashboards for engineers.
Is 60 trillion tokens an unusual amount? For a company of 85,000 employees over 30 days, it works out to approximately 706 million tokens per employee — roughly 23.5 million tokens per employee per day. A typical AI conversation uses 1,000–10,000 tokens. This suggests extremely heavy usage by some employees with much lower usage by others, consistent with a top-250 leaderboard structure where power users dominate consumption.
What is the actual cost to Meta? Meta uses its own AI infrastructure rather than paying public API prices. The $9 billion estimate based on public pricing would be significantly lower on internal infrastructure. The actual cost is not disclosed but represents a meaningful allocation of Meta’s GPU compute resources.
Related Articles
- Stanford Study: AI Sycophancy Is Measurably Harmful
- Anthropic Overtakes OpenAI in Revenue: $30B ARR
- Claude Code Source Leak: What 512,000 Lines Revealed
- Oracle Cuts 30,000 Jobs to Fund AI Data Centres