AI Is Not Just Replacing Jobs. It Is Closing the Entry-Level Ladder

AI may not cause mass unemployment overnight. The first shock may be smaller graduate programs, fewer internships, and fewer entry-level roles, breaking the career ladder modern education promised.

Abstract illustration of AI-driven labor market transition

The scariest version of the AI unemployment wave is probably not a company firing half its staff in one afternoon.

It is quieter than that. A graduate sends out resumes and finds that the doorway has already become smaller.

For decades, schools told people a simple story: study hard, acquire skills, enter a company, and grow over time. That promise depended on one condition: companies had to be willing to pay for the training period of large numbers of beginners, assistants, analysts, interns, and junior employees.

AI puts that old bargain under pressure.

If one experienced employee, helped by AI, can handle the research, first drafts, boilerplate code, customer replies, and spreadsheet work that used to keep three or four juniors busy, the spreadsheet in HR changes very quickly.

So the first visible sign may not be "machines replacing everyone." It may be smaller graduate programs, fewer internships, thinner junior teams, and promotion paths that no longer have enough steps at the bottom. The job market is being squeezed, but so is something more basic: the path that modern education sold to ordinary people.

Many of us spent years becoming the kind of worker companies said they wanted. Now companies are discovering that a machine can do a surprising amount of that worker's first job.

What worries me most is not the grand question of whether AI will make an entire profession disappear. It is the smaller change that comes earlier and receives less attention: a company hires fewer interns, cuts a graduate cohort, or lets senior employees with AI absorb the work that used to train beginners.

That kind of change does not immediately show up in the unemployment rate. It is more like removing the bottom rungs of a ladder. People already on the ladder remain visible. The next group cannot reach the first step.

The everyday signal will not be dramatic. It may simply be fewer "junior" listings, internships that read like full-time jobs, or teams where nobody says they are cutting beginners, yet every beginner has to prove that training them is worth more than expanding an AI workflow.

Why White-Collar Workers Are Now on the Automation Front Line

Earlier automation waves mainly hit repetitive physical work and clearly defined routine tasks. Robots entered factories. ATMs changed bank branches. E-commerce reduced some offline intermediary work. Software compressed drafting, layout, and bookkeeping tasks.

AI is different because it enters non-routine cognitive work at scale.

Emails, meeting notes, basic code, contract summaries, marketing drafts, slide decks, research collection, spreadsheet analysis, customer support, translation, first-pass design, financial explanation, and legal triage can now be produced at low cost by models. The output may not be final or accountable, but it is good enough to change staffing.

This breaks white-collar work into pieces. A junior analyst collected information, cleaned tables, drafted memos, and made small judgments under review. A junior developer wrote boilerplate, fixed small bugs, added tests, and read documentation. A junior lawyer searched clauses, compared contracts, and drafted notes. These were not glamorous tasks, but they were the training ground. They are also exactly the tasks large models compress first.

That is why the first white-collar impact may not be the elimination of senior experts. It may be experts with AI doing the work that once justified several junior seats.

Why This Automation Feels Different

Earlier technology revolutions often started by replacing a specific tool or a local process. Steam power amplified physical force. Electricity reorganized factories. Cars changed transportation. Computers improved calculation and storage. The internet lowered the cost of information distribution and transactions. These technologies disrupted work, but people often reorganized around the new tool: operating machines, managing processes, serving customers, designing products, and maintaining systems.

AI is different because it enters the core activities that many white-collar workers were trained to perform: understanding information, producing first drafts, making preliminary judgments, and coordinating tasks. It can read material, summarize documents, generate code, draft proposals, answer customers, compare contracts, analyze spreadsheets, and sit inside office software, support systems, developer tools, search products, and enterprise workflows.

AI rarely maps neatly to one job title. It cuts through jobs at the task layer. In assistant work, analysis, programming, customer support, legal review, design, operations, marketing, and management, the standardized, reusable, checkable parts are the parts most exposed.

There is also a speed difference. A machine entering a factory requires purchasing, installation, space, training, and maintenance. An AI capability can enter a company through a subscription, an API, or a workflow change. Once the model is cheap enough and useful enough, diffusion looks more like software than machinery.

The serious claim is not that AI is automatically more destructive than every earlier technology. The serious claim is narrower: AI enters cognitive tasks, travels across industries like software, and hits the beginner work where experience is normally built. That combination is why the entry-level effect may be unusually direct.

The Vanishing Entry-Level Job Is the Harder Problem

Most AI labor debates ask whether jobs will disappear. I think the sharper question is simpler: where will people get experience?

Every profession needs beginners to build experience from simpler tasks. New workers produce uneven output, need review from seniors, and gradually learn the business through real work. Companies accepted this training cost because beginners became mid-level contributors and eventually senior staff. But if AI raises the output of senior employees by 30% to 50%, companies will recalculate how many beginners they need.

That is the hidden version of the AI unemployment wave. Not everyone is fired. The next generation just struggles to get in.

In 2025, Anthropic CEO Dario Amodei warned in an Axios interview that AI could eliminate half of entry-level white-collar jobs within one to five years and push unemployment to 10% to 20%. That forecast is controversial, and Anthropic has its own incentives as an AI company. But the risk it identifies is real: the most fragile part of the labor market may be the entry point.

Nvidia CEO Jensen Huang strongly disagreed with Amodei's pessimism, arguing that doomsday predictions are not grounded in facts. His position also has commercial context, since Nvidia benefits from AI expansion, but the counterpoint is useful: companies are complex, responsibility is complex, data access is complex, and generating an answer is not the same as owning a complete job.

Taken together, the two arguments are more useful than either one alone. Amodei points to the weak spot: the bottom of the ladder. Huang points to the limit: models produce output, but organizations still need people who own complete, accountable jobs.

The Data Does Not Prove Doomsday, But It Should Not Reassure Us Too Much

The debate around AI unemployment keeps falling into two lazy positions. One side treats every layoff as an AI layoff, as if every empty chair already has an autonomous agent sitting in it. The other waves away the concern as another round of technology panic because, in the long run, past technologies created new work.

Both views are too simple.

A more realistic view is less dramatic and more uncomfortable. AI may not create mass unemployment immediately, but it is already changing the entry points, the value of different tasks, and the way managers think about hiring. The danger is not only that a machine takes your job today. It is that tomorrow the company no longer needs as many beginners to do the work people used to learn from.

If an AI unemployment wave arrives, it may not first appear as sudden mass joblessness. It may show up as hiring freezes, smaller graduate programs, fewer internships, reduced outsourcing, narrower promotion paths, and junior work being absorbed by senior employees using AI.

In January 2024, IMF Managing Director Kristalina Georgieva cited IMF analysis estimating that almost 40% of global employment is exposed to AI. In advanced economies, the share may be about 60%. Her point was not that AI will destroy employment by default. It was that productivity may rise while inequality worsens.

The World Economic Forum's Future of Jobs Report 2025 offers a number that is often read optimistically: by 2030, broad macro trends are expected to create 170 million jobs and displace 92 million, for a net gain of 78 million. But the same report says disruption will equal 22% of today's formal jobs, nearly 40% of required job skills will change, and 41% of employers plan to reduce workforce size as AI automates certain tasks.

Goldman Sachs estimated in 2023 that generative AI could expose the equivalent of 300 million full-time jobs to automation globally. In the United States and Europe, about two-thirds of jobs may be exposed in some way, though most are only partially exposed. That distinction matters: AI often replaces tasks inside a job before it eliminates the entire occupation.

The OECD's Employment Outlook 2023 was more cautious. It found little evidence at the time that AI had already caused broad declines in labor demand. That caution matters: exposure is not the same as disappearance. But caution is not comfort. Often, the thing being replaced is not the occupation name. It is the part of the occupation where beginners used to learn.

Put together, these sources do not give a clean headline. AI may create new jobs and still move faster than education and reskilling. Companies may call AI a productivity tool while freezing hiring and slimming back-office teams. A net-positive job number can be true while a displaced person has no realistic path into the new work.

The macro chart can look optimistic while the individual experience is still brutal. Those two facts can coexist.

Why Companies May Amplify the AI Layoff Narrative

AI can also become a convenient story for layoffs.

Layoffs are not caused by technology alone. Interest rates, weak demand, pandemic overhiring, stock pressure, business contraction, leadership changes, and cost cutting all matter. But in an AI-heavy market, explaining layoffs as AI efficiency sounds strategic and can be easier for investors to accept.

That creates a mixed reality. Some roles are genuinely compressed by AI. Others were already marked for cost reduction, and AI gives management a cleaner story.

To judge whether AI replacement is real, I would watch simpler signals: does hiring return, do junior roles come back, and has the workflow actually changed?

If a company cuts roles, does not rehire, and absorbs the work through AI tools and higher output per employee, replacement is more credible. If senior roles and AI roles grow while graduate roles, internships, assistant roles, junior analyst roles, and junior developer roles shrink, the career entry point is narrowing. If the company merely installs a chatbot, the layoff may still be ordinary cost cutting. If it rebuilds support, sales, code review, contract review, finance analysis, reporting, approvals, and accountability around AI systems, then the organization has changed.

Historical Cases: Technology Is Not Gentle

Technological displacement is not new.

During the Industrial Revolution, textile machinery raised output and weakened many handloom weavers. The Luddites were not simply irrational opponents of progress. They were workers whose skills, wages, and social order were being broken faster than they could adapt, while factory owners captured much of the early gain.

Agricultural mechanization is another example. In rich countries, a large share of the population once worked in agriculture. Over time, mechanization, fertilizers, breeding, logistics, and productivity gains reduced agriculture's employment share and moved workers into industry and services. Our World in Data summarizes this long-run pattern: as countries grow richer, the share of workers in agriculture tends to fall.

That did not leave modern economies without work. It enabled the growth of manufacturing, services, healthcare, education, software, finance, entertainment, and logistics. But the transition was not automatically fair. A displaced farm worker did not automatically become a software engineer. A displaced textile worker did not automatically capture the gains of industrial capitalism.

Computers and the internet followed a similar pattern. Spreadsheets reduced manual bookkeeping, but helped create financial analysts, data analysts, software engineers, product managers, cloud operators, and digital marketers. MIT economist David Autor and co-authors found that about six out of ten U.S. jobs in 2018 were in categories that did not exist in 1940. But the distribution of new work changed: since 1980, more new work has appeared in high-paid professional roles and low-paid service roles, while the middle has been under pressure.

This history should not be used as a sedative. Technology creates new work over time, but it can still damage specific workers in specific periods. New work does not automatically appear beside the old work. People have to move across industries, locations, skills, and identities. Whether the gains are shared depends less on the technology itself than on company behavior, education systems, taxes, labor protections, and safety nets.

The Change Will Hide Inside Workflows

The AI unemployment wave may not follow a single path. Several things can happen at once: the job remains, but half the work is automated; the company does not fire you, but stops expanding the team; junior roles shrink; AI-capable experts begin to look like small teams; AI drafts the work, but humans sign off, leaving review burden and responsibility concentrated in fewer roles.

A more concrete example is a customer support center. A medium-sized support team may once have needed 50 people rotating through inquiries, refunds, complaints, logistics questions, and after-sales explanations. Today's AI is not stable enough to run this completely alone. It can misunderstand tone, invent policy, promise compensation incorrectly, or fail on complex complaints. But that does not mean all 50 people are safe. The more likely transition is that AI handles 80% of standard issues, generates replies, classifies tickets, and escalates exceptions, while one person or a small team monitors quality and handles the hard cases.

From a technical perspective, this is human oversight. From a labor-market perspective, it is role collapse. The original value of the 50 people was direct task handling. Once the system handles those tasks in parallel, humans remain as reviewers and fallback operators. But review does not require 50 people. Even at 80% reliability, a system that can process the basic work of 50 people at once will prompt the company to ask why it should keep 50 people instead of one supervisor, a few quality reviewers, and a small group of complex-case specialists.
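The staffing arithmetic above can be made explicit with a toy calculation. Everything here is an illustrative assumption for the 50-person scenario, not data from any real company, and the function name and ratios are invented for the sketch:

```python
# Back-of-envelope sketch of the support-team recalculation described above.
# All numbers are illustrative assumptions, not data from any company.

def remaining_headcount(team_size: int, ai_coverage: float, review_ratio: float) -> int:
    """Estimate humans still needed once AI absorbs routine tickets.

    ai_coverage  - share of tickets AI resolves end to end (e.g. 0.8)
    review_ratio - human effort needed to review AI output, as a fraction
                   of the effort the original workers spent on those tickets
    """
    # Hard cases still need humans at roughly the original staffing ratio.
    hard_case_staff = team_size * (1 - ai_coverage)
    # The AI-handled volume needs only light human review and quality checks.
    review_staff = team_size * ai_coverage * review_ratio
    return round(hard_case_staff + review_staff)

# 50 agents; AI resolves 80% of tickets; review costs 5% of original effort.
print(remaining_headcount(50, 0.8, 0.05))  # -> 12
```

Under these assumed ratios, 50 seats collapse to about a dozen: a handful of complex-case specialists plus a thin review layer. The point of the sketch is only that the outcome is driven by the review ratio, not by whether the AI is "perfect."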

The same logic can appear in content operations, basic software development, legal triage, financial reporting, sales lead qualification, and data labeling. AI is unstable today, so it needs human supervision. But needing supervision is not the same as needing the original number of supervisors. If a role's core output can be generated in bulk by a system, the remaining human work becomes review, exception handling, accountability, and process improvement. What happens to the other 49 people is the question the AI unemployment debate cannot avoid.

This will show up differently by industry. Customer support, content production, basic programming, administration, translation, data cleaning, and junior research are likely to feel pressure earlier. Healthcare, education, engineering, law, and finance may first be augmented and then reorganized. Offline services, care work, repair, construction, food service, and complex sales may move more slowly, but they will still be affected by scheduling systems, robotics, and AI management tools.

One-Person Companies Will Grow, But They Are Not a Universal Escape

AI has another side: it does not only compress jobs inside companies. It can also give a small number of individuals the production capacity that once required a team.

This is why the idea of the one-person company keeps coming up. Sam Altman has said some tech CEOs even have a betting pool for the first year a one-person billion-dollar company appears. The phrase is classic Silicon Valley exaggeration, but the direction is real: AI is pushing coding, design, customer support, marketing, data analysis, copywriting, finance operations, and workflow automation into one person's toolkit.

In the past, building a product usually required engineering, design, operations, sales, and support. Today, a strong individual can use AI to write prototypes, generate pages, draft support scripts, write marketing emails, analyze user feedback, handle simple finance tasks, and automate processes. That person still depends on other people at the margins, but they can delay hiring, use contractors and APIs more selectively, and replace part of a fixed team with AI agents and software.

Some cases already point in this direction. Pieter Levels has long operated products such as Nomad List, Remote OK, and Photo AI as an independent developer, making him a reference case for solo builders and automated internet businesses. Base44 is closer to the AI-native version of the pattern: Maor Shlomo used a tiny team and AI coding tools to build an app-generation platform for non-programmers in a matter of months, and Wix later acquired it for $80 million. It was not literally a one-person billion-dollar company, but it shows that individuals and very small teams can now create products that previously required much larger organizations.

Klarna's customer-service AI shows the same force from the opposite side. The company said its AI assistant handled 2.3 million conversations in its first month and performed work equivalent to 700 full-time agents. That is not a one-person company case. It is an internal "fewer people" case: when AI compresses standard tasks, organizations recalculate staffing.

So the rise of one-person companies does not mean every displaced worker can become a founder. It means competition may become sharper. A small number of people with judgment, product sense, sales ability, trust, and cash-flow discipline will be amplified by AI. Many people who only execute narrow tasks will be priced down by AI.

The real test of a one-person company is not whether someone can use AI. It is whether one person can own a complete commercial loop: finding demand, defining a product, acquiring users, delivering results, handling complaints, carrying legal and financial responsibility, and iterating over time. AI can help with many tasks, but it does not carry the consequences for you. One person managing ten agents may sound like freedom. When something goes wrong, it also means one person carries the responsibility created by ten agents.

Individuals Cannot Rely on Execution Alone

If a task can be clearly described, broken into steps, trained on historical examples, and easily checked, it will be priced down by AI. You can still do it, but you should not assume it will keep the same wage premium or job volume.

The better move is toward judgment. AI can draft a report, but it does not know which report changes a client's decision. AI can write code, but it does not know which technical debt will damage the system in six months. AI can summarize a contract, but it does not bear legal responsibility. Future value lies in defining problems, setting standards, judging risk, and owning outcomes.

Real business context matters more for the same reason. Tool knowledge depreciates quickly. Context compounds. A person who only knows prompts is easy to replace with a cheaper tool. A person who understands industry logic, customers, workflows, cost, compliance, and risk knows where AI should be inserted and where human judgment must remain.

Professional identity will also become less stable. The old identity was "I am an accountant," "I am a programmer," or "I am a designer." The newer identity is closer to: what problem can I solve, what tools can I coordinate, and what outcome can I take responsibility for?

More concretely: students need real projects earlier, not only credentials. Junior workers need contact with customers, workflows, costs, and risk, not just assigned fragments of work. Managers need to keep some trainable entry-level paths instead of moving every lower-level task to AI at once. Otherwise individuals lose the doorway into experience, and companies quietly destroy the pipeline that produces mid-level talent.

Why the "Good Student" Is Exposed

The most painful part of the AI unemployment wave is not just corporate layoffs. It is that AI exposes a deeper function of modern schooling: schools do not only enlighten people. They also mass-produce people who are useful to the economic system.

Industrial society needed people who were punctual, obedient, literate, able to follow instructions, and comfortable with the division of labor. Schools delivered that through bells, timetables, exams, rankings, discipline, and standard answers. Companies needed manageable white-collar workers, so schools emphasized resumes, grades, certificates, reporting formats, teamwork, and sensitivity to evaluation from above. Many people thought they were receiving a complete education. In practice, much of that training made them easier to place, measure, manage, and replace inside enterprise systems.

That is an uncomfortable sentence, but it explains why AI feels so threatening. If the main product of twenty years of education is a person who follows instructions, submits standard answers, and completes templated tasks, then the graduate's first professional value sits exactly where AI is strongest. Schools trained people to become excellent executors. Companies used to pay for execution. Now machines are becoming cheaper, faster, and less resistant executors.

This is not only a modern problem. Ancient Greek education already had explicit class boundaries. Free male citizens could be trained in rhetoric, music, athletics, philosophy, and public life, while slaves and resident foreigners were excluded from the political community. Britannica's entries on Athenian education and the Academy keep returning to the same background condition: leisure. Only people with time and property could afford education aimed at debate, rule, and public life.

The continuity is unsettling. Some people receive usefulness training. Others receive judgment training. One group learns how to complete tasks; another learns how to define tasks. One learns how to follow processes; another learns how to design processes. One learns how to be evaluated; another learns how to create evaluation standards.

Modern education has another structural problem that is easy to miss: the payer, the provider, and the recipient are not the same actor.

In most countries, basic education is funded mainly by the state. Schools are managed by government systems or quasi-public systems. Students and families receive the service. On the surface, this is a public good. But in terms of incentives, the education system does not always answer first to what this individual student actually needs as a person. It answers to what the state, industry, and social order need. The state needs literacy, labor participation, a tax base, social stability, industrial upgrading, and national competitiveness. Companies need employable, trainable, manageable, mobile human resources. Schools need graduation rates, college admission results, employment rates, inspection scores, and budget logic. The student's personal curiosity, talent, freedom, and long-term flourishing often come later.

That is why educational content keeps changing with social demand. Agrarian societies emphasized clan ethics and basic literacy. Industrial societies emphasized discipline, mathematics, engineering, and standardized skills. Globalization emphasized English, finance, management, and computing. The AI era now emphasizes programming, data, algorithms, interdisciplinarity, and innovation. Students may feel they are choosing their future, but much of the menu has already been written by industrial policy, hiring structures, and examination systems.

This does not mean public education has no value. Public education has massively improved literacy, mobility, and modern state capacity. The point is that public education is not a service market organized purely around the individual student. The student is not the only customer, and often not the most powerful one. Education tends to answer first to whoever pays, sets the metrics, and controls resource allocation.

So when AI changes the kind of talent society demands, schools change their slogans too. Yesterday they trained standard-answer employees. Today they talk about innovation. Yesterday they kept tools away from students. Today they ask students to use AI. But if the system remains driven mainly by external demand rather than individual judgment, capability, and integrity, it may only move students from one generation of replaceable jobs to the next.

AI widens that gap because it first replaces people shaped by usefulness training: people who execute, template, summarize, organize, repeat, and submit work according to process. AI is much weaker at replacing people who define problems, allocate resources, own responsibility, create trust, and manage conflicts of interest. If schools continue training most people as standardized labor for enterprises, they are pushing them into the zone AI reaches first.

The sharper question is not only whether AI will cause unemployment. It is whether we spent years being trained into the kind of worker companies wanted, just as companies began discovering that machines could do much of that work too.

If Companies Need Fewer People, What Should Schools Teach?

This is the deeper reflection behind the AI unemployment wave. The implicit promise of modern schooling is: study hard, and society will need you. But if future companies need fewer people, or only need a smaller number of people who can define problems, manage systems, and own responsibility, that promise breaks.

For a long time, education could treat employment as the final exit. Mathematics led to engineering and finance. English led to global companies. Programming led to internet jobs. Management led to organizational hierarchy. Students accepted years of training in exchange for a ticket into the labor market. As long as companies hired at scale, the exchange held.

AI changes the buyer structure. Companies may still need people, but fewer pure executors. They may still need employees, but prefer a smaller number who can multiply their output with AI. They may still need beginners, but may be less willing to pay for the long training period beginners require. If companies become a weaker buyer of mass human labor, schools cannot keep pretending that making students into good employees is the whole meaning of education.

So what should schools teach? I do not think the answer is simply more AI classes.

Judgment matters first: whether information is true, whether model output is reliable, where risk boundaries lie, who benefits, who is harmed, what should not be outsourced to machines, and what responsibility cannot be pushed onto software.

Problem definition matters just as much. Machines are getting better at solving defined problems. The harder human work is deciding what the problem is: what the customer actually needs, what society lacks, why a system failed, who a policy will hurt, and why a product has no users. People who cannot define problems will be assigned problems. People who can define problems can allocate resources.

Tool orchestration will become basic literacy. A future worker may need to operate like a small organization, coordinating AI, data, code, design, law, finance, supply chains, and human networks. If schools keep dividing knowledge into disconnected subjects, they will produce people who are locally competent and globally weak.

Finally, schools need to take cooperation, care, and life outside employment more seriously. Trust, companionship, negotiation, teaching, care, organizing, and conflict mediation were often dismissed as soft skills. When AI prices down many hard skills, they may become some of the hardest currency. If a person's value no longer comes reliably from a company role, that person also has to learn how to structure time, build relationships, maintain mental stability, create meaning, participate in public life, and avoid placing all self-worth inside employment.

This is the individual's hardest position. In the old model, a person could understand themselves as labor: learn skills, sell time, receive wages, build a life. If AI lowers the price of labor, the individual cannot only ask what job remains. They must ask what relationships, judgment, trust, and responsibility they can create that are not easily replaceable.

The future individual may have to play three roles at once. First, worker: still selling some skills. Second, operator: managing tools, reputation, output, and cash flow. Third, citizen: helping decide how technological gains are distributed, how education changes, and how social safety nets are built. Being only a worker becomes passive. Being only an operator becomes exhausting. Being only a spectator leaves the rules to capital and technology companies.

The educational shift is not simply adding more AI classes. It is moving from producing human resources for companies toward helping people remain whole in an age where human labor is no longer scarce. That may sound idealistic. Without it, education only prepares students to be material for the next round of automation.

Policy and Companies Must Share the Cost

If AI raises productivity while the gains flow mainly to capital and a small group of high-skill workers, the unemployment wave may not explode in headline statistics. It may appear as stagnant wages, youth unemployment, class immobility, and political anger.

Governments should not try to stop AI. They should reduce transition costs: stronger unemployment insurance, industry-linked retraining, investment in vocational education and community colleges, support for apprenticeships and entry-level pipelines, disclosure of labor impacts from large automation projects, and mechanisms that let workers share in productivity gains.

Companies also need a better answer than "AI first." If AI eliminates every training role, companies may later discover they have no mid-level bench. If AI rewards only a few star employees, organizational knowledge becomes more concentrated and fragile. Mature AI transformation is not about deleting humans from spreadsheets. It is about redesigning the division of labor between people and machines.

Education has to face the same reality. Training students to memorize, repeat, and follow templates increasingly prepares them to become enterprise-useful and machine-replaceable executors. The future needs people who can ask questions, verify claims, collaborate, communicate, and transfer knowledge across domains.

Conclusion

The AI unemployment wave will probably not arrive as one dramatic event. It will look more like rising water: first covering the lowest ground, narrowing entry-level roles, compressing repetitive tasks, and leaving some people with a job title that still exists but a market value that has fallen.

History says technology can create more work over time. Reality says the long run does not automatically protect those harmed in the short run. Treating AI as pure disaster misses its productivity potential. Treating history as automatic comfort ignores the people who pay the transition cost.

The clearer position is to admit that AI will replace tasks, watch the disappearance of entry-level work, push institutions to share transition costs, and move individual capability from execution toward judgment, context, and responsibility.

I do not believe the simple story that "AI replaces humans" explains enough. The more likely story is that people and organizations that use AI well replace those that do not. At that point, the question is not whether machines have work. It is how much bargaining power, growth, and dignity humans can still keep.

Sources

These are the sources I used for the numbers, cases, and historical background above, not a complete bibliography.

Labor market data and forecasts:

Company cases:

History and education background: