AI as Teammate, Not Tool: How P&G’s Research Changes Everything
For decades, we’ve thought about artificial intelligence through a familiar lens: it’s a tool. Like Excel, like a calculator, like any software that extends human capability but remains fundamentally subordinate to human direction. We issue commands; it executes. We define problems; it offers solutions within constraints.
But groundbreaking research involving 776 professionals at Procter & Gamble—analyzed by Wharton professor Ethan Mollick and his colleagues—is forcing us to confront a more radical possibility. AI isn’t just a tool anymore. It’s becoming a teammate. And this shift has profound implications for how enterprises organize work, structure teams, and think about human-AI collaboration.
The Study That Changed the Conversation
The P&G research, conducted in late 2024 and early 2025, represents one of the largest controlled studies of professional AI use in real workplace settings. Unlike laboratory experiments that test AI capabilities in artificial environments, this study embedded AI across diverse professional roles—from marketing and finance to R&D and supply chain operations—observing how knowledge workers actually integrated AI into their daily workflows over several months.
The research team, led by organizational behavior specialists working with Mollick, designed the study to test a specific hypothesis: that framing AI as a collaborative partner rather than a productivity tool would fundamentally change both how employees used the technology and what they were able to achieve with it.
Participants were divided into control and treatment groups. The control group received standard AI training—prompt engineering techniques, best practices for query formulation, tips for maximizing output quality. In other words, they were taught to use AI as a sophisticated tool.
The treatment group received different guidance. They were introduced to AI as a “teammate”—an entity with capabilities, limitations, and a specific role to play in collaborative work. They learned to delegate tasks, provide context, iterate on outputs through dialogue, and integrate AI contributions into team workflows alongside human colleagues.
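To make the contrast concrete, here is a minimal sketch of the two prompting styles. It assumes the OpenAI Python SDK purely as a stand-in client (the study does not publish its tooling or prompts), and the helper names, model choice, and prompt wording are all illustrative.

```python
# Illustrative only: the study's actual prompts and tooling are not public.
# Assumes the OpenAI Python SDK (pip install openai) as a stand-in client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tool_style(task: str) -> str:
    """Control-group pattern: a bare task specification, minimal context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # a GPT-4-class model, consistent with the study
        messages=[{"role": "user", "content": task}],
    )
    return response.choices[0].message.content

def teammate_style(task: str, role: str, project_context: str) -> str:
    """Treatment-group pattern: delegate with a role, context, and dialogue hooks."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                f"You are a {role} on our project team.\n"
                f"Project context: {project_context}\n"
                "Flag any assumptions you make, and list the questions "
                "you would ask a colleague before proceeding."
            )},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content
```

The difference is not technical sophistication but posture: the second call briefs the model the way one would brief a colleague, which is exactly the behavioral gap the two training conditions were designed to create.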
The Results: Collaboration Beats Automation
The findings were striking. Professionals who treated AI as a teammate demonstrated significantly better outcomes across multiple dimensions compared to those who approached it as a tool:
1. Breakdown of Professional Silos
Perhaps the most unexpected finding was AI’s impact on cross-functional collaboration. In traditional enterprise settings, professionals develop deep expertise within domains: marketers speak the language of campaigns and funnels, engineers discuss architecture and technical debt, finance professionals focus on models and forecasts. These silos, while enabling specialized excellence, often impede organizational agility.
The study found that AI acted as a “translation layer” between professional domains. When a marketer and an engineer both collaborated with the same AI system—treating it as a shared teammate—the AI helped bridge conceptual gaps. The technology could explain technical constraints in business terms and translate marketing objectives into technical requirements.
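In prompt terms, that translation-layer role might look something like the sketch below; the scenario and wording are invented, since the study does not publish its prompts.

```python
# Hypothetical "translation layer" prompt; the scenario is invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

translation_prompt = (
    "You are a shared teammate for our marketing and engineering leads.\n"
    "Marketing goal: launch a personalized-coupon feature this quarter.\n"
    "Engineering constraint: the recommendation service caps at 50 requests/sec.\n"
    "Restate the constraint in business terms, restate the goal as technical\n"
    "requirements, and list the open questions each side should answer."
)
response = client.chat.completions.create(
    model="gpt-4o",  # a GPT-4-class model, consistent with the study
    messages=[{"role": "user", "content": translation_prompt}],
)
print(response.choices[0].message.content)
```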
Participants reported that AI-facilitated conversations led to a 34% reduction in time spent on cross-functional alignment meetings and a 28% increase in the success rate of projects requiring multi-department coordination.
2. Skill Democratization Without Homogenization
One persistent fear about AI in the workplace is that it will flatten expertise—making everyone mediocre at everything rather than enabling deep specialization. The P&G research suggests a more nuanced reality.
When AI serves as a teammate, it appears to democratize access to domain knowledge without eliminating the value of deep expertise. Junior employees could engage more effectively with complex concepts in adjacent fields. A product manager without financial training could discuss cap tables and burn rates more intelligently when AI helped translate between domains.
However—and this is crucial—AI didn’t replace the need for experts. Instead, it elevated the sophistication of cross-domain conversations, allowing experts to focus on judgment, strategy, and creative synthesis rather than basic education.
3. The Emergence of “Centaur” and “Cyborg” Work Patterns
Mollick’s earlier research identified two distinct patterns of human-AI collaboration, which the P&G study confirmed and elaborated:
Centaurs divide work cleanly between human and AI. They identify which tasks are best suited to AI capabilities (research synthesis, pattern recognition, content generation) and which require human judgment (strategic decisions, ethical considerations, creative direction). Like the mythical centaur, half human and half horse, the two halves work together but remain clearly distinct.
Cyborgs blend human and AI contributions more fluidly. They iterate rapidly, with human and AI input interwoven in real time. A cyborg writer might draft a paragraph, have AI suggest alternatives, make their own edits, ask AI to analyze the tone, and refine based on that feedback, all in a continuous flow.
The P&G study found that both patterns outperformed tool-based approaches, but for different types of work. Centaur patterns excelled in structured projects with clear phases. Cyborg patterns dominated exploratory, creative, or problem-solving tasks.
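In code, the two patterns might be sketched roughly as follows. The ask_ai helper, the task split, and the use of the OpenAI SDK are all assumptions for illustration; any chat-based model would do.

```python
# Hypothetical sketch of the two collaboration patterns; the task
# breakdown and helper names are invented for illustration.
from openai import OpenAI

client = OpenAI()

def ask_ai(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def centaur_report(sources: list[str]) -> str:
    """Centaur: a clean division of labor with an explicit handoff."""
    # AI half: research synthesis, a task suited to its strengths.
    summary = ask_ai("Synthesize the key findings from these notes:\n"
                     + "\n".join(sources))
    # Human half: strategic judgment on the synthesized material.
    recommendation = input(f"AI summary:\n{summary}\n\nYour recommendation: ")
    return summary + "\n\nRecommendation: " + recommendation

def cyborg_draft(topic: str, rounds: int = 3) -> str:
    """Cyborg: human and AI contributions interleaved in one flow."""
    draft = ask_ai(f"Draft an opening paragraph about {topic}.")
    for _ in range(rounds):
        edit = input(f"Current draft:\n{draft}\n\nYour edit (or Enter to stop): ")
        if not edit:
            break
        draft = ask_ai("Revise this draft to incorporate the edit.\n"
                       f"Draft: {draft}\nEdit: {edit}")
    return draft
```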
Why the “Teammate” Framing Matters
The research suggests that the mental model we apply to AI profoundly shapes what we can accomplish with it. When we think of AI as a tool, we optimize for efficiency—how can I get this task done faster? When we think of AI as a teammate, we optimize for outcomes—how can we achieve something better together?
This distinction manifests in subtle but important behavioral differences:
Context Provision: Tool users tend to provide minimal context, assuming the AI only needs the immediate task specification. Teammate users share broader context—project goals, stakeholder concerns, historical decisions—enabling the AI to make more nuanced contributions.
Iteration Patterns: Tool users often accept first outputs or give up after a few tries. Teammate users engage in extended dialogue, treating initial responses as starting points for refinement.
Error Handling: When a tool produces an error, users abandon it or work around it. When a teammate produces something off-base, users investigate: they ask why, provide additional guidance, and recalibrate shared understanding (sketched in code after this list).
Credit and Responsibility: Perhaps most interestingly, the study found that “teammate” users were more likely to critically evaluate AI outputs—precisely because they felt shared responsibility for the final result rather than seeing the AI as an external mechanism whose output they simply transmitted.
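A teammate-style recovery from an off-base output might look like the sketch below; the scenario and diagnostic prompts are invented, and the OpenAI SDK again stands in for whatever system is actually deployed.

```python
# Hypothetical sketch of teammate-style error handling: instead of
# discarding an off-base answer, probe its reasoning and recalibrate.
from openai import OpenAI

client = OpenAI()

def chat(messages: list[dict]) -> str:
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

messages = [{"role": "user",
             "content": "Estimate next quarter's demand for product X based "
                        "on the last four quarters: 120k, 118k, 160k, 125k."}]
answer = chat(messages)
messages.append({"role": "assistant", "content": answer})

# Tool pattern: if the answer looks wrong, abandon it or work around it.
# Teammate pattern: investigate why, add missing context, and recalibrate.
messages.append({"role": "user",
                 "content": "Walk me through the assumptions behind that "
                            "estimate. One correction: the 160k quarter was a "
                            "one-off promotion, so treat it as an outlier and "
                            "revise your figure."})
revised = chat(messages)
print(revised)
```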
Implications for Enterprise AI Strategy
The P&G findings carry significant implications for how organizations should approach AI deployment:
1. Training Must Shift from Prompt Engineering to Collaboration Skills
Most enterprise AI training focuses on technical skills—how to write better prompts, how to chain outputs, how to use specific features. The research suggests this is secondary. What’s more important is developing employees’ ability to collaborate effectively with AI: setting clear goals, providing rich context, iterating thoughtfully, and integrating AI contributions into broader workflows.
2. AI Governance Needs to Account for Shared Agency
Traditional software governance focuses on access control and usage policies. If AI is a teammate, governance must address more complex questions: How do we maintain accountability when decisions emerge from human-AI collaboration? How do we document AI contributions for compliance? How do we ensure AI “teammates” are aligned with organizational values?
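As one hypothetical answer to the documentation question, a collaboration audit record could be as simple as the following sketch; the field names are illustrative, not a compliance standard the study endorses.

```python
# Hypothetical shape for a human-AI collaboration audit record;
# field names are illustrative, not a compliance standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIContributionRecord:
    task_id: str
    model: str                 # which model version contributed
    prompt_summary: str        # what was delegated, and with what context
    output_disposition: str    # "accepted", "revised", or "rejected"
    human_reviewer: str        # who is accountable for the final result
    revision_notes: str = ""   # how the output was changed and why
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIContributionRecord(
    task_id="campaign-brief-042",
    model="gpt-4o",
    prompt_summary="Drafted brief from Q2 objectives and audience research",
    output_disposition="revised",
    human_reviewer="j.doe",
    revision_notes="Reframed value proposition; corrected pricing figure",
)
```

A record like this keeps accountability with a named human while still capturing, for compliance purposes, where and how the AI contributed.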
3. Organizational Design Will Evolve
As AI becomes more capable of cross-domain translation and coordination, the classic matrix organizational structures may give way to more fluid, AI-facilitated collaboration models. Teams might form around problems rather than functions, with AI helping integrate diverse expertise.
4. Performance Metrics Require Revision
Traditional productivity metrics (tasks completed, time spent) may mislead when AI is a teammate. Organizations need to develop ways to measure the quality of human-AI collaboration and the sophistication of outcomes enabled by effective partnership.
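As a starting point, and purely as a hypothetical illustration rather than anything the study prescribes, session logs could yield simple collaboration signals such as iteration depth and revision rate:

```python
# Hypothetical collaboration metrics computed from session logs;
# the signal definitions are illustrative, not from the study.
from statistics import mean

# Each session: number of human-AI exchanges, and whether the final
# AI output was revised by a human before being used.
sessions = [
    {"exchanges": 7, "revised_before_use": True},
    {"exchanges": 1, "revised_before_use": False},
    {"exchanges": 4, "revised_before_use": True},
]

iteration_depth = mean(s["exchanges"] for s in sessions)
revision_rate = mean(1.0 if s["revised_before_use"] else 0.0 for s in sessions)

# Shallow depth plus a low revision rate suggests tool-style use:
# first outputs transmitted as-is, with little shared responsibility.
print(f"Mean iteration depth: {iteration_depth:.1f}")
print(f"Revision-before-use rate: {revision_rate:.0%}")
```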
The Broader Research Context
The P&G study aligns with and amplifies findings from other major AI workplace research initiatives:
Harvard Business School and BCG’s Study of 758 Consultants (2023): Found that AI improved performance on creative and analytical tasks but degraded performance on complex problem-solving when consultants became overly reliant on AI suggestions. The key differentiator? Whether consultants treated AI as a thought partner or an answer machine.
Microsoft’s Work Trend Index (2024-2025): Found that employees who used AI for more than task automation, specifically for brainstorming, decision support, and knowledge synthesis, reported 31% higher job satisfaction and a 27% stronger sense of professional growth.
Nature Study on Human-AI Teams (2024): Demonstrated that the best-performing human-AI teams were characterized by “mutual adaptation”—humans learning AI capabilities and limitations while AI systems (through feedback) learned individual user preferences and organizational contexts.
Challenges and Limitations
The P&G research, while significant, has important caveats:
The “Novelty Effect”: Some of the benefits observed may diminish as AI use becomes routine. The study tracked participants for only a few months: long enough to move beyond initial experimentation, but potentially not long enough to reveal longer-term shifts in usage patterns.
Selection Bias: P&G employees participating in an AI research initiative may be more enthusiastic about technology adoption than typical enterprise workers, potentially inflating observed benefits.
Tool Limitations: The study used contemporary AI systems (primarily GPT-4-class models). As AI capabilities advance, the boundary between “tool” and “teammate” may shift, potentially changing collaboration dynamics.
Cultural Factors: P&G’s organizational culture—collaborative, innovation-oriented, and relatively flat hierarchically—may have amplified benefits that wouldn’t replicate in more rigid organizational structures.
Looking Forward: The Collaborative Enterprise
The P&G research arrives at a pivotal moment. Enterprises worldwide are wrestling with how to integrate AI into their operations. The temptation is strong to treat AI as a sophisticated automation layer—a turbocharged tool for doing existing work faster.
But the evidence increasingly suggests that this framing leaves value on the table. The organizations that thrive in the AI era may be those that embrace a more radical vision: AI not as infrastructure but as colleague, not as system but as teammate.
This doesn’t mean anthropomorphizing technology or abandoning critical judgment. A teammate can be wrong. A teammate can have blind spots. A teammate requires investment in the relationship. The P&G study suggests that embracing this complexity—rather than seeking simple productivity gains—unlocks AI’s transformative potential.
The question for enterprise leaders is no longer simply “How can we use AI?” but “How can we build organizations where humans and AI collaborate effectively?” The research suggests that answering this question requires rethinking not just technology deployment, but organizational culture, training paradigms, and the very nature of professional expertise.
Procter & Gamble’s 776 professionals have given us a glimpse of that future. It’s messier, more complex, and more interesting than simple automation. And for organizations willing to embrace it, it promises to be far more valuable.
This analysis synthesizes research from Procter & Gamble’s 2024-2025 AI workplace study, Ethan Mollick’s ongoing research on human-AI collaboration at the Wharton School, the Harvard Business School/BCG consulting study, Microsoft’s Work Trend Index, and related academic literature on organizational AI adoption.