
The Rise of AI Agents in the Workplace Will Lead to a New Corporate Department
With over a decade of experience in technology, I’ve spent the last six years at Google, working on some of the world’s largest and most complex data environments. My focus has been spearheading the development of data and AI governance products and services, ensuring organizations can navigate the evolving landscape of data integrity, compliance, and ethical AI.
Back in 2019, I predicted the emergence of "Invisible UI": the idea that buttons, dropdown menus, and complex click-based navigation would become obsolete as technology shifts toward more intuitive, seamless interfaces. My vision assumed chat and voice automation would take center stage, powered by AI, and I said we would see this shift within five years.
Five years later, we have AI agents: intelligent chat and voice systems capable of performing complex tasks, and they are on the cusp of becoming ubiquitous in workplaces. These digital entities are poised to revolutionize workflows, augment human capabilities, and redefine traditional job roles. However, their integration introduces new corporate complexities.
I’m making another prediction.
The Prediction: A New Department to Manage and Oversee AI
Historically, human resources (HR) and people operations (People Ops) have been responsible and accountable for managing labor. These departments address everything from talent acquisition, retention, reorgs, downsizing, and engagement (culture, disagreements, optimization, and so on) to compliance with labor laws, ensuring a business operates ethically and efficiently.
Another department, "GRC" (governance, risk, and compliance), has traditionally aligned with the CISO (chief information security officer) or IT to ensure new technologies are implemented responsibly. However, AI agents are not simply business processes or software tools; they are technology-based labor, capable of learning, making decisions, and influencing outcomes at scale.
Over the next 24 months, the public will begin to see a new corporate department dedicated to the oversight and management of AI agents: the "AI Operations and Ethics Department," or possibly something that sounds much cooler, like the "AI Guardian Department," or, if you're a fan of the Fallout TV show, maybe the "AI Overseer Department" (AOD).
I'm already seeing some big brands develop and deploy specialized AI labor departments. The idea will soon be mainstream, and the new department will focus on several key responsibilities, each critical to the future of AI agents. These new functions will address the sociological, ethical, legal, and operational challenges of integrating AI labor into the workplace alongside human labor.
The Future of AI Departments
Like HR, this new AI department will be responsible and accountable for the "placement" of AI agents; responsibility will shift away from procurement and IT. Similar to GRC, the AI department will also manage policies for AI agents by collaborating across multiple teams. This means it will act as the guardian of and liaison for AI business needs, holding specialized subject expertise in AI. It will also coordinate with IT, engineering, and GRC to identify competitive innovation opportunities that push AI even further.
The AI department will also handle compliance, monitoring legal frameworks like the EU's AI Act and maintaining documentation to demonstrate accountability and transparency. Like GRC, it will assess the risks and benefits of deploying AI agents in various scenarios, addressing issues like bias, explainability, and unintended consequences; it might also take on future-proofing activities and set internal policies that go above and beyond the law. By bridging expertise, the department will ensure that AI agents are used responsibly.
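Documentation is abstract until you see what such a department might actually track. Below is a minimal, hypothetical sketch (in Python) of an internal agent registry an oversight team could maintain; the field names, risk tiers, and 90-day review window are illustrative assumptions on my part, not requirements drawn from the EU AI Act or any specific framework.

```python
# A hypothetical sketch of an internal AI-agent registry for accountability and
# transparency. Field names and risk tiers are illustrative, not a regulatory standard.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AgentRecord:
    """One entry in a hypothetical internal registry of deployed AI agents."""
    name: str                # e.g., "invoice-triage-agent"
    business_owner: str      # the accountable human owner
    purpose: str             # what the agent is permitted to do
    risk_tier: str           # e.g., "minimal", "limited", "high" (illustrative tiers)
    regulations: list[str] = field(default_factory=list)   # e.g., ["EU AI Act"]
    last_review: date = field(default_factory=date.today)  # date of last risk review


def needs_review(record: AgentRecord, max_age_days: int = 90) -> bool:
    """Flag agents whose documented risk review is older than the policy window."""
    return (date.today() - record.last_review).days > max_age_days


registry = [
    AgentRecord(
        name="invoice-triage-agent",
        business_owner="finance-ops@example.com",
        purpose="Route incoming invoices to the correct approver",
        risk_tier="limited",
        regulations=["EU AI Act"],
        last_review=date(2024, 9, 1),
    ),
]

for record in registry:
    if needs_review(record):
        print(f"Review overdue: {record.name} (owner: {record.business_owner})")
```

Even a lightweight registry like this gives the department one place to answer who owns an agent, what it is allowed to do, and when its risk posture was last reviewed.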
Launching an AI department of any type requires specialized training that integrates technical expertise, ethical considerations, and regulatory awareness. Future training will likely focus on understanding the lifecycle of AI agents, including their design, deployment, and performance monitoring. Professionals in this department will have a high-level understanding of concepts like machine learning, algorithmic transparency, and data quality management, and a detailed understanding of business ethics, human factors, and data/AI governance.
To build this function, organizations might collaborate with AI companies like OpenAI, Google DeepMind, Salesforce, Meta, or Anthropic to gain insight into the capabilities and risks of AI agents. Workshops on frameworks such as OpenLineage for data lineage and explainability, or courses on responsible AI principles, could ensure the team can assess the risks and benefits of AI deployment. Initial training will likely involve simulations of real-world scenarios, teaching teams to evaluate when AI or humans should take on specific tasks.
AI Agents on the Market Today
You might be wondering: What is the difference between an LLM and an AI agent? Allow me to clarify.
LLMs are “large language models” designed to excel at understanding and generating human language, while AI agents are task-oriented, using LLMs and other tools to reason, make decisions, and take actions to achieve goals.
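To make that distinction concrete, here is a minimal sketch (in Python) of an agent loop: an LLM on its own just produces text, while an agent wraps that LLM in a cycle of reasoning, tool calls, and observations until the goal is met. The call_llm stub, the fetch_revenue tool, and the ACTION/FINAL convention here are illustrative assumptions, not any vendor's actual API.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a model call; a real system would hit a chat-completion API here."""
    # Canned responses so the sketch runs end to end without a model.
    if "ACTION RESULT" in prompt:
        return "FINAL: Q3 revenue grew 12% over the prior quarter."
    return "ACTION: fetch_revenue Q3"


def fetch_revenue(quarter: str) -> str:
    """Example tool; in practice this might query a data warehouse or CRM."""
    return f"{quarter} revenue: $1.12M (prior quarter: $1.00M)"


TOOLS = {"fetch_revenue": fetch_revenue}


def run_agent(goal: str, max_steps: int = 5) -> str:
    """An agent wraps the LLM in a loop: reason, pick a tool, act, observe, repeat."""
    context = f"Goal: {goal}"
    for _ in range(max_steps):
        reply = call_llm(context)
        if reply.startswith("FINAL:"):   # the model decides the goal is met
            return reply[len("FINAL:"):].strip()
        if reply.startswith("ACTION:"):  # the model asks to use a tool
            _, tool_name, arg = reply.split(maxsplit=2)
            context += f"\nACTION RESULT: {TOOLS[tool_name](arg)}"
    return "Stopped: step limit reached."


print(run_agent("Report Q3 revenue growth"))
```

Real agent frameworks differ in the details, but the core pattern, a model deciding which tool to call next and acting on the result, is what separates an agent from a plain chat model.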
Leading tech companies are pioneering the development of AI agents with diverse capabilities. For instance, Salesforce Einstein integrates AI agents into CRM workflows, automating sales forecasting and customer insights. Google recently announced Gemini 2.0, which is focused on what's now being called "the agentic era." Anthropic offers Claude, focusing on safety and interpretability in AI interactions. OpenAI is experimenting with AI agents on its GPT platform. There are also hundreds of startups focused on AI agents.
The Evolution and Future of AI Agents in the Workplace
Technical Advancements
Recent insights from Gartner and Forrester highlight how AI agents are set to become a transformative force in the workplace. According to Gartner, by 2028, 33% of companies are expected to use AI agents for specialized tasks such as customer service, IT support, and operational decision-making. These agents will not only handle routine tasks but also augment strategic functions like data analytics, risk assessment, and select marketing tasks, making them indispensable collaborators.
Forrester’s research underscores the economic impact of AI agents, estimating they will contribute to a 10-15% increase in organizational efficiency by automating repetitive workflows and reducing decision-making time. Companies like OpenAI, Anthropic, and Google DeepMind are at the forefront of this movement, offering AI solutions that integrate seamlessly into enterprise ecosystems and offshore labor. For example, OpenAI's Codex powers code generation, enabling engineers to focus on higher-order problem-solving, while Anthropic's Claude prioritizes safe interactions for sensitive applications.
Sociological Implications
The rise of AI agents will reshape workplace culture and societal norms, broadly speaking, on a level not seen since the iPhone and its app ecosystem were created.
Traditional job descriptions will evolve to include managing and collaborating with AI. New roles will be created for people who make decisions about the decisions that AI generates. And of course, there will be new modalities and formats for AI-generated content.
Economically, AI agents will democratize access to expertise. However, society must grapple with ethical questions surrounding accountability, privacy, and the potential displacement of some human workers.
An Anthropological Perspective
AI agents are expected to evolve beyond task-specific functionalities into roles requiring complex judgment and collaboration, especially when we achieve artificial general intelligence (AGI). Before AGI is available, we will begin to see AI agents serve as project managers, synthesizing data, coordinating team efforts, and offering insights in real time.
Sociologically, this shift will redefine workplace dynamics, with AI working alongside humans as co-creators rather than tools. Just as the assembly line required new management practices, AI agents require new departments and structures to ensure they serve humanity’s best interests.
Throughout history, technological innovation has driven societal transformation: the light bulb, the automobile, the Space Shuttle, the iPhone. From the Industrial Revolution to the Materials Boom to the Information Age, humans have adapted to technologies augmenting their capabilities. AI agents represent the next phase in this continuum, blurring the lines between human intelligence and machine assistance.
Your Call to Action: Implementation of an AI Department
As the world continues to homogenize and optimize systems, processes, and people, and as companies look to cut costs and reduce duplicated effort across teams and functions, I encourage you to consider how a new "AI-Centric Department" can be the solution.
If you agree with this prediction or you’re already starting to explore the logical next step, then I have a short list of recommendations to follow:
- Start an internal committee of stakeholders to evaluate your current use of AI systems and your anticipated use of and investment in AI agents.
- Work with your CIO or CFO to outline the possible risks of AI agents as you focus on cost optimization and augmenting workflows.
- Identify employees in HR or People Ops functions who want to stretch their technical knowledge and begin working on AI-related projects.
- Identify employees in engineering or IT functions who want to stretch their soft skills in compliance, ethics, or resource decision-making.
- Ask your CISO or GRC department to incubate a new “tiger team” to be accountable for the ongoing evaluation and use of AI agents.
- Train these early-adopting employees in the basics of AI technology and facilitate opportunities for them to explore and own the development of new AI agent processes.
- Ensure adoption is a ground-up effort: gather tribal knowledge, processes, and experience under a new title; celebrate the team's successes; and dare to be innovative.
- Formalize the tiger team into a dedicated department. Then scale.
This is the approach I’m seeing in the marketplace.
About the Author
Post by: Tyler Fischella
With over a decade of experience in technology, Tyler Fischella has spent the last six years at Google, working on some of the world’s largest and most complex data environments. Tyler's focus has been spearheading the development of data and AI governance products and services, ensuring organizations can navigate the evolving landscape of data integrity, compliance, and ethical AI use.
Company: Google Cloud
Website: www.cloud.google.com
Connect with me on LinkedIn and Facebook.