Written by: Golden Legend Big Smart
I attended an internal discussion at OpenAI on how AI is changing the labor market. There were four participants:
→Ronnie Chatterji, Chief Economist at OpenAI, Professor at Duke Business School, former Chief Economist at the Department of Commerce and White House CHIPS Coordinator during the Biden administration
→Alex Martin Richmond, Labor Economist on OpenAI's Economic Research Team, PhD from MIT, previously conducted labor market research at Burning Glass Institute
→Daniel Rock, Assistant Professor at Wharton, AI Economics researcher, Co-founder of Workhelix, PhD from MIT
→Gregor Schubert, Assistant Professor of Finance at UCLA Anderson, conducts empirical research on the impact of AI on labor and firms
All four are economists who have been grappling with the data firsthand.
Here are the important points I took away from listening.
The Solow Paradox Replays Itself
Daniel opened with Bob Solow's classic 1987 line: "You can see the computer age everywhere but in the productivity statistics."
He said AI is like that now.
He explained it with a framework called the "Productivity J-Curve," from a paper he co-authored with Erik Brynjolfsson and Chad Syverson a few years ago. The essence: a general-purpose technology traces a long J-shaped curve before it actually shows up in the economic data.
In the first half of the curve, companies have to pour money and people into intangibles: new processes, new culture, new organizational structures, new workflows. To outsiders these investments look like pure cost, but internally they are future output in the making.
Only after these foundations are laid can productivity truly rise.
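To make the mechanism concrete, here is a toy simulation (my numbers, not from the paper): a firm that diverts part of its measured output into unmeasured intangible investment looks less productive for a few years, then overshoots its old level once the intangible capital starts paying off.

```python
# Toy illustration of the Productivity J-Curve idea. All parameters
# are invented: for four years the firm diverts 15% of output into
# unmeasured intangibles (processes, training, reorganization), and
# the accumulated intangible capital then yields 20% per year.

def measured_productivity(years=10, base_output=100.0):
    intangible_capital = 0.0
    series = []
    for year in range(years):
        investing = year < 4  # intangible build-out phase
        investment = 0.15 * base_output if investing else 0.0
        intangible_capital += investment
        # Measured output nets out the investment (it looks like cost)
        # but includes the growing return on intangible capital.
        output = base_output - investment + 0.20 * intangible_capital
        series.append(round(output, 1))
    return series

print(measured_productivity())
# -> [88.0, 91.0, 94.0, 97.0, 112.0, 112.0, 112.0, 112.0, 112.0, 112.0]
```

Measured productivity dips from 100 to 88 during the investment phase and only later jumps above the old level: the J shape.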
What Solow saw back then was computers, and now it is AI; the pattern has not changed.
Gregor added a data perspective. Current research assesses corporate AI use along two dimensions: potential (whether a company or role has room to use AI) and implementation (whether AI has actually been put to use). The gap between the two is large. His research finds that companies with strong technical capabilities implement faster, while most others are still building internal scaffolding.
Alex connected this to her own experience. She joined OpenAI in December 2024, and in just over six months her methods for writing code, doing data analysis, and designing experiments have changed completely. Much of her analysis is now written by Codex, while she is responsible for defining the problems and verifying the results. But none of these changes are visible to outside statistical agencies.
AI Works from Middle to Middle
This line came from Gregor, and it was the most valuable of the entire discussion.
End-to-end means a task is handed entirely to AI with no human involvement. This is the replacement scenario most people imagine.
Middle-to-middle means AI takes over the middle segment of a task, with humans still needed on both ends. Someone has to scope the task clearly, organize the data, and write effective prompts; afterward, someone has to verify the output, run safety checks, and connect it to what comes downstream.
Alex illustrated with her own example. She had Codex run an analysis, and after Codex returned results she had to spend time verifying they were correct before handing them to Ronnie. Verification is a new task that didn't exist before.
This creates an awkward measurement problem. AI automates some tasks but creates new ones at the same time, so it is hard to say whether a position has been replaced or augmented.
Gregor expanded this observation to the organizational structure level. He mentioned that new roles are emerging within companies:
→ People who organize internal data into formats that AI can use
→ People who design task inputs and adjust prompts
→ People who verify and evaluate AI outputs
These roles did not exist before. Now, every company that wants to use AI seriously is scrambling to create these roles.
Daniel added a case. He recently spoke with the CEO of a Korean bank who delegated the design of AI workflows to business experts, letting the people who understand the business design the processes, while the IT department's role became providing a reliable, maintainable, and secure foundation.
In this context, Daniel brought up the history of power in factories. Early factories used one central steam engine to drive all of their equipment through belts and shafts. When electricity arrived, the first move was simply to swap the steam engine for one large motor and keep the belts. The real efficiency revolution came decades later, when factories gave each machine its own small motor and redesigned the entire floor.
AI is following the same path. Technical capability used to be concentrated in the IT department; in the future it needs to be distributed to every person doing the actual business work.
More AI at Home Than at Work
Gregor cited a study he conducted with Michael Blank and Ben Zhang with a striking statistic: more people use chatbots for household tasks than for work tasks.
Right now the largest use case for AI is not inside companies but in kitchens, on sofas, and on phone screens.
People use AI to plan trips, check medical questions, make shopping lists, write speeches for parent-teacher meetings, help children with homework, and choose restaurants. All of this creates real value, but none of it counts toward GDP, because none of it is a market transaction.
Alex offered internal OpenAI data: roughly 30% of consumer ChatGPT usage contains work-related signals. The boundary between work and life has never been clean; many people handle work tasks from their personal accounts.
She followed with a candid comment. The U.S. currently lacks good administrative data for tracking AI's impact at the occupation level. Unlike some European countries with fully linked tax and wage records, the U.S. can only infer from small-sample surveys. Researchers want to connect consumer AI use to macro productivity, but the data foundation isn't there.
The output is already happening at home; GDP just hasn't caught up.
A Small Irony in Consulting
Gregor offered his own experience. After his undergraduate degree, he worked at BCG. When consulting firms recruit, they always promise, "You will tackle major strategic issues." The reality is that new hires spend their first two years mostly making slide decks, taking meeting notes, and organizing spreadsheets.
What is AI best at? Making slide decks, taking meeting notes, and organizing spreadsheets.
His observation: if AI takes over those tasks, the original promise of "you will tackle major strategic issues" might finally come true.
The same holds in personal life. You don't want to spend three hours reading forums to check a drug reaction, or two afternoons filling out seven nearly identical application forms. When AI compresses those tasks, you get more time for what actually matters to you.
It Takes Five Weeks for an MBA, Longer for the Average Worker
Gregor teaches a five-week AI applications course for MBA students. Even for bright graduate students with resources to spare, five weeks of coursework is only enough to make them feel "capable of using AI."
Extrapolating from this, most workers don't have an MBA-level knowledge base or that much uninterrupted learning time; bringing an ordinary employee to the same level will likely take ten to twenty weeks.
Merely granting model access is not enough; the real barrier is training time. If society wants more people to share in AI's dividends, public investment in training is unavoidable. Otherwise a classic Matthew effect sets in: those with a foundation, time, and resources use the cutting-edge models, while everyone else never even learns what good looks like.
Daniel added another perspective, drawn from his earlier work trading over-the-counter options. In high-uncertainty environments, policy should follow "option" logic: run many small-scale experiments, evaluate which seem to work, and then scale those. Don't commit to one big comprehensive plan from the start.
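A toy illustration of that option logic (my numbers, not Daniel's): with the same budget and the same odds of any single program working, funding many cheap pilots and scaling only a proven winner beats committing everything to one program up front, because the pilots buy information before the big spend.

```python
import random

random.seed(0)

BUDGET = 100.0    # total funds (arbitrary units)
P_SUCCESS = 0.2   # each program independently works with 20% odds
PAYOFF = 5.0      # a working program returns 5x what is spent on it

def one_big_bet() -> float:
    # Commit the whole budget before knowing whether the program works.
    return BUDGET * PAYOFF if random.random() < P_SUCCESS else 0.0

def pilot_then_scale(n_pilots: int = 10, pilot_cost: float = 2.0) -> float:
    # Spend a little on many independent pilots first.
    remaining = BUDGET - n_pilots * pilot_cost
    any_winner = any(random.random() < P_SUCCESS for _ in range(n_pilots))
    # Scale a proven winner; if none worked, keep the unspent budget.
    return remaining * PAYOFF if any_winner else remaining

def average(strategy, trials: int = 100_000) -> float:
    return sum(strategy() for _ in range(trials)) / trials

print(f"one big bet:      {average(one_big_bet):6.1f}")      # ~100
print(f"pilot then scale: {average(pilot_then_scale):6.1f}") # ~366
```

The pilots' entire value is the information they generate before the large commitment; that is the sense in which each small experiment works like an option.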
Alex made an often-overlooked point. The U.S. unemployment insurance system was actually designed for exactly this kind of transition: when someone is laid off from Company A and moves to Company B, there is usually a gap, and UI exists to bridge it.
The transition AI brings may stretch that gap and lengthen the jump, but the underlying mechanism is sound. What's needed is parameter tuning: longer benefit periods, larger amounts, retraining support. There's no need to start from scratch.
Capability Overhang
Alex brought up a concept called "capability overhang," the third valuable phrase of the discussion.
It means the models' capabilities have reached a certain level, but most users' habits are still stuck where they were six months or a year ago.
Both she and Ronnie said they can clearly see this gap even inside OpenAI. The power users on the team use Codex in ways that are two steps ahead of ordinary users. The gap can close quickly through diffusion among colleagues, or it can persist for a long time for lack of the right demonstrations.
For a company, if a team has one or two power users, the most valuable move is to have them pull everyone else up. Simply celebrating one person's doubled output is not enough.
For an individual, if the way you use AI hasn't changed in two months, you are likely already on the wrong side of the capability overhang.
The greatest potential for productivity now lies in enabling those who are already using AI to do so more effectively.
What to Learn
The final question came from the audience: what advice would you give to young people entering university about what to study?
Daniel first gave a diplomatic answer: both AI and economics are good choices. Then he stressed that what really matters is problem-solving ability. Engineering and science will always be useful because they fundamentally deal with unsolved problems. The humanities will also always be useful, because explaining AI's impact on society requires humans.
Gregor gave a more practical answer. Disciplines themselves won't lose value; specific tasks within them will. In economics, mechanical problem-solving and derivations will lose value, while asking good questions and sourcing datasets will gain value. In history, manually translating archives will lose value, while designing automated archival analysis will gain value.
Whatever the discipline, what you learn should shift toward judgment, questioning, and agency.
One Observation
One characteristic of this generation of AI is that it became usable on the consumer and enterprise sides almost simultaneously. Electricity, railroads, and the internet typically spread on the production side first and only gradually entered households. AI, by contrast, has already saturated the consumer side while the enterprise side is still building its scaffolding.
One consequence is that ordinary people's intuition about AI's capabilities runs ahead of policy, statistics, and corporate structures. At home you may feel AI is already very powerful; at work it seems limited; and in the government statistics, nothing has changed at all.
That inconsistency is the modern Solow Paradox in a nutshell.
The productivity curve will catch up, but the path is longer than expected.