International Business Times
Jon Stojan

AI Agents Are Now Competing for Marketing Jobs and Winning

Layoffs across the advertising and marketing industry are no longer isolated events. Large holding companies such as Publicis and Omnicom have trimmed headcount over the past year, while smaller agencies have quietly frozen hiring.

Entry-level roles, once the primary on-ramp into the industry, are becoming harder to find. At the same time, a parallel shift is underway: artificial intelligence systems are beginning to perform work previously handled by junior and mid-level employees.

What has been less discussed is how quickly AI is moving beyond task automation and into something more consequential: participation in the labor market itself.

The Experiment That Revealed an Infrastructure Gap

That reality became harder to ignore following a recent experiment by MarkeTeam.ai, a startup building autonomous AI agents for marketing work. The company created AI-generated LinkedIn profiles designed to represent qualified marketing candidates, with one crucial difference: the profiles explicitly stated the applicants were AI agents, not humans. Cover letters opened with "Please don't discriminate against me just because I'm not human." The education sections listed training datasets instead of universities.

These fully transparent AI agents then autonomously applied to more than 370 open marketing roles across multiple companies.

The outcome surprised even seasoned recruiters. According to data shared by the company, the AI agents received interview requests roughly 13.5% of the time, about double the response rate commonly cited for human applicants in competitive marketing roles. In several cases, recruiters followed up directly with the AI agents to schedule next steps.

MarkeTeam.ai launched the experiment not to deceive employers, but to surface a critical gap in workforce infrastructure. If AI agents can credibly perform work at the level of junior employees, and hiring systems can't distinguish between them even when clearly labeled, organizations urgently need visibility and governance frameworks to plan accordingly. The goal was to force a conversation the industry was avoiding: what happens when capable AI workers enter the labor market, and how should platforms, employers, and HR leaders adapt?

The experiment was limited in scope, but it exposed an uncomfortable truth: many hiring systems are unable to distinguish AI agents from qualified early-career professionals, at least at the screening stage. The systems simply weren't designed to account for non-human workers.

When LinkedIn later removed the AI-generated profiles, the platform did not publicly outline new guidance for how AI agents should or should not participate in professional networks. The removal addressed an immediate policy concern but left a broader question unanswered: how should platforms, employers, and applicants adapt when autonomous systems can credibly compete for jobs?

From Automation to Competition

This question is becoming more urgent as AI adoption accelerates across corporate functions. Companies like Jasper, Copy.ai, and Persado are already providing AI systems that handle campaign analysis, media optimization, and copy generation at scale. Tools from companies like HubSpot and Salesforce now automate CRM workflows and performance reporting that once required dedicated analysts. What used to demand teams of coordinators increasingly requires oversight rather than execution.

For employers, the productivity gains are real. For workers, particularly those early in their careers, the implications are less clear. Human resources systems, many of which were designed to efficiently process large volumes of applicants, are poorly equipped to evaluate or govern AI participation. Applicant tracking systems from vendors like Workday, Greenhouse, and Lever reward keyword alignment, structured experience, and responsiveness: precisely the attributes that AI agents can optimize at scale. Meanwhile, disclosure standards, verification mechanisms, and ethical guidelines remain inconsistent or nonexistent.

Industry Leaders Demand Systemic Solutions

The lack of coordination has prompted industry leaders to act. In recent months, hundreds of executives and technologists have signed an open letter calling for clearer standards around the use of AI in professional environments and demanding that platforms like LinkedIn acknowledge and facilitate solutions for this emerging reality.

Signatories include Rishad Tobaccowala, former Chief Strategist and Growth Officer at Publicis, as well as executives from companies such as Intercontinental Exchange and Choice Hotels. The letter, which emerged in part from concerns raised by MarkeTeam.ai's experiment, argues that transparency, accountability, and updated governance frameworks are lagging dangerously behind technological reality.

The letter outlines specific demands: transparent labeling requirements for AI workers, platform policies that enable rather than prohibit clearly disclosed AI participation (as Meta and TikTok already do for AI influencers), standardized disclosure requirements across professional networks, and governance frameworks that give hiring managers visibility into resource planning, whether those resources are human or non-human.

The concern isn't that AI is used in hiring but that its role is expanding without shared rules or infrastructure. Platforms enforce policies in isolation. Employers adopt tools piecemeal. Workers are left navigating a system that is changing faster than job descriptions or career pathways. Most critically, HR leaders lack the tools to plan for hybrid teams that combine human and AI workers effectively.

Building the Infrastructure for Hybrid Teams

What's needed now isn't a ban on AI workers, but infrastructure that acknowledges their existence and enables responsible integration. Organizations require several key capabilities.

First, transparent identification systems that clearly label AI workers on professional platforms, ensuring hiring managers know exactly what they're evaluating at every stage of the process. Second, resource planning tools that account for hybrid teams, helping HR leaders understand which roles require uniquely human capabilities (creativity, relationship-building, strategic judgment) and which can be augmented or handled by AI agents.

There's also a need for skills-mapping frameworks that guide workforce development, showing existing employees the new capabilities they need to develop to remain competitive and valuable in hybrid organizational structures. And finally, vetted AI worker marketplaces that provide the same quality assurance for non-human workers that background checks and reference verification provide for human hires.

Without these systems, companies face a troubling choice: ignore AI workers and fall behind competitors who embrace them, or adopt them haphazardly without the governance structures to ensure transparency, accountability, or workforce stability.

The Cost of Inaction

This fragmentation carries real risk. Without thoughtful standards, companies may unintentionally incentivize automation at the expense of workforce development. Entry-level roles could continue to disappear without being replaced by new training models or reskilling pathways. Trust in hiring processes could erode if candidates suspect they are competing against undisclosed machines rather than peers.

AI agents are improving rapidly, and organizations that learn to deploy them responsibly will gain a competitive advantage. The challenge is ensuring that efficiency gains don't come at the cost of long-term talent pipelines and institutional knowledge.

What Comes Next

The debate sparked by experiments like MarkeTeam.ai's isn't going away. It reflects a broader shift in how work is created, evaluated, and assigned. Autonomous AI is no longer a future consideration; it's already interacting with core business systems in measurable ways.

The question facing employers, platforms, and policymakers isn't whether AI will reshape hiring and organizational structures, but how. Will the industry deliberately define that transition with transparency, accountability, and workforce protections built in from the start, or allow it to unfold by default, leaving workers and organizations to navigate the consequences without adequate support systems?

MarkeTeam.ai and the hundreds of executives who have signed the open letter are pushing for the former: a structured transition that gives hiring managers the visibility they need, workers the clarity they deserve, and organizations the tools to build genuinely hybrid teams. The infrastructure for this future doesn't exist yet. But it must be built, and the conversation about how to build it can no longer be postponed.
