An AGILEdge Foundational Whitepaper:  Effective Use of AI to Augment People

Generative AI and Behavioral Suitability

Improving Organizational and People Performance with AI

A White Paper by Alan Hoffmanner, Jerry Marino and Ed Rayo

For questions or to discuss contents, contact Alan Hoffmanner @ alan@agiledge.com or 858-705-1482

Generative AI has rapidly evolved from a niche innovation to a foundational layer in modern enterprise workflows. Its transformative potential is no longer theoretical. According to McKinsey & Company (2023), generative AI could contribute up to $4.4 trillion annually to the global economy, reshaping customer operations, software development, marketing, and knowledge work across industries. Yet, as organizations rush to embed AI into every facet of work, a fundamental dimension remains underexamined: not just what AI can do, but who it works best with.

Behavioral suitability—the degree to which individuals’ traits, cognitive styles, and interaction preferences align with AI-enabled environments—has emerged as a critical, yet underutilized, success factor in AI deployment. Early adopters across sectors are discovering that AI’s impact is not uniform. It varies significantly based on the user's skill level, adaptability, and behavioral alignment with task-AI fit (Somers, 2023; Wang et al., 2025).

A comprehensive synthesis of more than two dozen studies reveals a pattern: AI delivers the greatest productivity uplift—often exceeding 40%—when paired with workers whose behavioral profiles align with AI-supported workflows, such as structured thinking, receptiveness to feedback (a self-improvement attitude and initiative-taking), and openness to digital augmentation (Noy & Zhang, 2023; MIT Sloan, 2023; Gambacorta et al., 2024). Conversely, when behavioral suitability is low, AI can degrade performance by as much as 19%, primarily due to overreliance or cognitive offloading (Somers, 2023).

This insight represents a strategic inflection point. The real competitive advantage of AI in the enterprise will not be determined solely by adoption rates or infrastructure investments, but by behavioral alignment at scale—matching the right talent to the right tools in the right context.

 

The Behavioral Layer of AI Productivity

The latest field experiments from institutions such as the Federal Reserve Bank of St. Louis, MIT Sloan, and the Bank for International Settlements collectively converge on one theme: AI augments human productivity best when the behavioral and cognitive profile of the user is compatible with the AI tool’s strengths (Gambacorta et al., 2024). In practice, we have seen groups of surface-level thinkers, people with little tendency to analyze or run what-if scenarios, lead AI efforts into serious failures. Low persistence creates a related problem: the user dashes off a prompt and, if the output looks good enough, simply goes with it. Neither tendency is productive.

Less-experienced workers consistently experience higher productivity gains—from 27% to 66% across writing, coding, and customer service tasks—when supported by generative AI (Nielsen, 2023; Cui et al., 2025; Gambacorta et al., 2024).

Cognitively aligned users—those with higher tolerance for ambiguity, digital fluency, and feedback receptivity—are more likely to adopt and effectively use AI, accelerating task completion and reducing error rates (Wang et al., 2025; Al Naqbi et al., 2024).

Behaviorally mismatched users—those with low tech affinity or poor critical reasoning—exhibit technostress, overdependence, and deteriorating performance when AI exceeds their decision-monitoring capacity (Somers, 2023; Wang et al., 2025).

This signals a critical evolution in workforce design: AI performance is not just a function of algorithmic power or domain specificity—it is a behavioral design problem. Without accounting for the cognitive and emotional dimensions of users, organizations risk sub-optimizing their AI investments.

 

Evidence of Behavioral Suitability

The productivity promise of generative AI is undeniable. Yet, emerging evidence reveals that who uses AI and how they behave is as determinative of impact as the AI model itself. Across dozens of empirical studies, a clear pattern has emerged: AI’s value is not evenly distributed—it is mediated by attributes of behavioral suitability. Tendencies such as learning new things, persisting with difficult tasks, thinking problems through, experimenting, and taking calculated risks make a substantial difference, and all of these can be measured and taken into account.

 

1. Skill Level as a Behavioral Proxy

One of the most consistent findings across experimental and field studies is that less-experienced employees exhibit disproportionately greater productivity gains from AI augmentation. In controlled settings:

Workers with limited experience improved performance by 43% with AI, compared to a 17% gain for their more experienced peers (Somers, 2023).

GitHub Copilot trials showed junior developers completing 27–39% more tasks, while senior developers saw only 8–13% improvement (Walsh, 2024; Cui et al., 2025).

Among customer service agents, low-skilled workers improved task throughput by 35%, compared to negligible improvements among seasoned agents (Nielsen, 2023).

These differences are not simply about skills—they point to deeper behavioral readiness. Younger, less-experienced workers tend to exhibit higher adaptability, openness to feedback, and willingness to engage with digital augmentation, which enhances their alignment with AI-enhanced workflows.

 

2. Occupation and Task Type

Behavioral suitability also varies across occupations and task types:

The information services sector, which involves language-heavy, feedback-driven tasks, reported the highest generative AI utilization (14% of work hours) and the largest productivity uplift (2.6%) (Federal Reserve Bank of St. Louis, 2025).

In contrast, personal services and leisure industries, which involve nuanced human interactions and emotional labor, saw minimal AI adoption (1.3–2.3%) and limited time savings (0.4–0.6%).

This divergence highlights an essential insight: task structure, combined with behavioral compatibility, determines AI utility. Tasks that benefit from structured logic, rapid iteration, and low ambiguity (e.g., coding, writing, data summarization) align better with AI tools—and with individuals comfortable operating in such environments.

 

3. Interaction Design and Cognitive Offloading

Behavioral misalignment doesn’t just neutralize AI’s value—it can degrade performance. In a seminal study at MIT Sloan, workers relying on AI for tasks outside the tool’s proficiency saw a 19 percentage point decline in performance (Somers, 2023). The culprit: cognitive offloading—an over-reliance on AI recommendations and under-engagement with critical thinking.

Similarly, the BIS study on CodeFuse revealed that senior programmers underperformed not because the tool lacked capability, but because they engaged less with the AI, signaling resistance, skepticism, or task misfit (Gambacorta et al., 2024).

These findings suggest that behavioral suitability is not just about capability—it’s about interaction trust, decision ownership, and task alignment.

 

4. Behavioral Suitability as a Lever for Inclusion

One of the more promising dimensions of behavioral suitability is its role in reducing intra-team skill disparities. By lifting the floor of performance, AI tools can equalize contribution levels across teams. The randomized trial by Noy & Zhang (2023) found that ChatGPT use:

  • Reduced performance gaps between low- and high-skilled professionals,

  • Boosted task completion speed by 40%, and

  • Enhanced quality by 18% across the board.

This suggests that AI, when aligned with user behavior, can act as a democratizing force, accelerating the productivity and growth trajectory of those historically underleveraged due to lack of experience or exposure.
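To make the equalizing effect concrete, the sketch below applies differential uplifts to hypothetical baseline quality scores and shows the gap narrowing. The baseline ratings and the per-group uplift factors are illustrative assumptions chosen to mirror the pattern Noy & Zhang (2023) report, not data from the study itself.

```python
# Illustrative arithmetic only: the baseline 1-5 quality ratings and the
# uplift factors are hypothetical. Consistent with Noy & Zhang (2023),
# the lower-skilled group is assumed to gain more from AI assistance.

def quality_gap(low, high):
    """Absolute gap between high- and low-skilled quality scores."""
    return high - low

baseline_low, baseline_high = 3.0, 4.2   # hypothetical pre-AI ratings
uplift_low, uplift_high = 1.30, 1.08     # assumed differential gains

with_ai_low = baseline_low * uplift_low
with_ai_high = baseline_high * uplift_high

print(round(quality_gap(baseline_low, baseline_high), 2))  # 1.2
print(round(quality_gap(with_ai_low, with_ai_high), 2))    # 0.64
```

Both groups improve, but the between-group gap shrinks by roughly half, which is the "lifting the floor" effect described above.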

 

Strategic Implications

The emerging data makes one point unmistakably clear: AI productivity is not uniformly distributed—it is behaviorally mediated. For leaders, this demands a strategic pivot in how AI initiatives are conceived, deployed, and scaled. Organizations that ignore behavioral suitability risk underutilization, disengagement, and even performance erosion. Those that embrace it can unlock disproportionate gains in capability building, workforce agility, and long-term productivity.

This behavioral lens leads to four high-impact implications for organizational design and transformation:

1. Rethink Talent Models Around AI Behavioral Profiles

Traditional talent segmentation—by role, title, or tenure—is insufficient for navigating the AI era. Instead, companies must introduce AI behavioral archetypes that reflect users’ adaptability, digital fluency, and cognitive styles. Early evidence suggests that characteristics such as openness to experimentation, structured problem-solving, and receptiveness to feedback strongly correlate with AI effectiveness (Wang et al., 2025; MIT Sloan, 2023).

Leading firms should:

  • Embed behavioral diagnostics into talent assessment frameworks to identify high-AI-fit profiles.

  • Tailor AI deployment based on these profiles—not just by job function, but by cognitive alignment.

2. Integrate Behavioral Suitability into Job and Workflow Design

Generative AI tools change not just what work gets done—but how work is done and who is best suited to do it. This challenges legacy models of role design.

Strategic redesign should:

  • Deconstruct roles into AI-augmentable vs. human-critical tasks and assign accordingly.

  • Recombine tasks based on behavioral alignment—ensuring high-fit employees focus on AI-leveraged components while others specialize in relational, creative, or judgment-based work.

The implication: organizations must evolve from static job descriptions to dynamic capability maps that integrate human strengths and AI capabilities.

3. Build Learning Ecosystems That Reinforce Critical Engagement, Not Blind Trust

One of the greatest risks associated with poor behavioral fit is cognitive complacency—workers overly trusting AI outputs without sufficient judgment. As Somers (2023) and Walsh (2024) show, this erodes performance and undermines the intended benefits of augmentation.

To mitigate this:

  • Create learning environments that promote AI accountability, not dependency.

  • Encourage practices like red-teaming, peer-review of AI outputs, and scenario-based simulations where employees must critique AI-generated content.

  • Train teams to understand when to trust, when to verify, and when to override AI recommendations.

4. Treat Behavioral Fit as an Infrastructure for AI ROI

Much like cybersecurity or cloud readiness, behavioral suitability should be treated as a core infrastructure variable in enterprise AI readiness. As adoption scales, the marginal return on AI investment will increasingly depend on behavioral alignment—not model architecture.

Practical actions include:

  • Incorporating behavioral AI-fit metrics into performance dashboards and productivity analyses.

  • Including behavioral suitability in AI vendor selection criteria (e.g., UX design that supports user adaptability).

  • Incentivizing AI experimentation in employee performance objectives and recognition programs.

The McKinsey Global Institute (2023) estimates that labor productivity could increase by 0.6% annually through 2040 with effective AI integration. But such projections are contingent on the organization’s ability to align people, tasks, and tools—not just technically, but behaviorally.

 

Embedding Behavioral Suitability into AI Rollouts, Talent Models, and Training

To realize the full productivity potential of generative AI, organizations must operationalize this insight through deliberate and scalable design choices across workforce, technology, and culture. Below is a five-part implementation playbook that synthesizes current evidence into actionable steps for enterprise leaders.

 

1. Diagnose AI Behavioral Readiness with a Baseline Audit

Before deployment, organizations should assess both individual and organizational readiness—not only in terms of technical infrastructure, but behavioral alignment. This includes:

  • Behavioral assessments to profile AI-relevant traits (e.g., cognitive adaptability, digital trust, reflection bias).

  • AI interaction simulations to evaluate how employees respond to machine-generated outputs in real-world scenarios.

  • Readiness heat maps across departments and roles to identify where AI is likely to yield outsized gains—or resistance.

This diagnostic layer is foundational. Without it, organizations risk investing in tools that are misaligned with team dynamics or employee traits.
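As one way to operationalize such an audit, the sketch below aggregates per-employee trait scores into a department-level readiness band. The trait names, weights, and banding thresholds are illustrative assumptions, not a validated assessment instrument; a real audit would calibrate these against observed AI performance.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trait weights on a 0-100 scale; illustrative only.
WEIGHTS = {"adaptability": 0.4, "digital_trust": 0.3, "feedback_receptivity": 0.3}

def readiness_score(traits):
    """Weighted 0-100 readiness score from per-trait scores."""
    return sum(WEIGHTS[t] * traits[t] for t in WEIGHTS)

def department_heat_map(employees):
    """Average readiness per department, mapped to a coarse heat-map band."""
    by_dept = defaultdict(list)
    for emp in employees:
        by_dept[emp["dept"]].append(readiness_score(emp["traits"]))
    bands = {}
    for dept, scores in by_dept.items():
        avg = mean(scores)
        band = "high" if avg >= 70 else "medium" if avg >= 50 else "low"
        bands[dept] = (round(avg, 1), band)
    return bands

staff = [
    {"dept": "engineering", "traits": {"adaptability": 80, "digital_trust": 75, "feedback_receptivity": 70}},
    {"dept": "engineering", "traits": {"adaptability": 60, "digital_trust": 65, "feedback_receptivity": 55}},
    {"dept": "field_ops",   "traits": {"adaptability": 45, "digital_trust": 40, "feedback_receptivity": 50}},
]

print(department_heat_map(staff))
```

A "low" band signals where deployment is likely to meet resistance and where training investment should precede tool rollout, rather than a reason to exclude a group.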

2. Embed AI Suitability into Role Architecture and Workforce Planning

Redesigning roles for AI-augmented workflows requires moving from task aggregation to task segmentation based on augmentation potential and behavioral fit. Effective steps include:

  • Decomposing roles into core tasks, assessing each for AI compatibility and behavioral demands.

  • Developing new job archetypes that blend AI fluency with human-centric strengths (e.g., AI interaction designers, human-in-the-loop validators).

  • Aligning hiring, promotion, and succession planning with emerging AI-behavioral archetypes, not legacy functional titles.

This reframing ensures that AI complements—not collides with—how people create value in real-world environments.

3. Redesign Training for Behavioral Fluency, Not Just Technical Proficiency

Most AI training today focuses on prompt engineering, tool navigation, or data usage. But as MIT Sloan and BIS studies reveal, the performance gap often stems from behavior, not knowledge.

Forward-looking training models should:

  • Teach critical reflection skills (e.g., knowing when to override AI suggestions).

  • Incorporate cognitive load management and technostress mitigation (Wang et al., 2025).

  • Use adaptive learning platforms to tailor content based on user experience, learning speed, and behavioral response patterns.

Crucially, AI training must be positioned as performance augmentation, not just compliance or upskilling.

4. Build Feedback Loops to Monitor Behavioral-AI Interactions in Real Time

Organizations need continuous insight into how employees are engaging with AI—not just usage rates, but interaction quality and behavioral markers.

This includes:

  • Monitoring AI-generated content acceptance/rejection patterns,

  • Tracking indicators of overreliance or disengagement (e.g., uncritical acceptance rates),

  • Incorporating peer feedback and team-based assessments to spot behavioral trends early.

Such feedback loops enable agile course corrections and ongoing calibration between AI tools and human users.
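A minimal sketch of such a feedback loop is shown below, assuming interaction logs that record whether each AI suggestion was kept and how long the employee reviewed it before deciding. The record fields, the review-time proxy, and the threshold values are all illustrative assumptions, not established benchmarks.

```python
from dataclasses import dataclass

# One record per AI suggestion shown to an employee. Field names and
# thresholds are hypothetical; real systems would calibrate them.
@dataclass
class Interaction:
    accepted: bool        # did the employee keep the AI output?
    review_seconds: float # time spent reviewing before deciding

def engagement_flags(events, accept_hi=0.95, accept_lo=0.20, min_review=5.0):
    """Flag possible overreliance (near-universal, fast acceptance)
    or disengagement (near-universal rejection) from interaction logs."""
    if not events:
        return set()
    accept_rate = sum(e.accepted for e in events) / len(events)
    avg_review = sum(e.review_seconds for e in events) / len(events)
    flags = set()
    if accept_rate >= accept_hi and avg_review < min_review:
        flags.add("possible_overreliance")
    if accept_rate <= accept_lo:
        flags.add("possible_disengagement")
    return flags

# A user who accepts every suggestion after only a couple of seconds:
log = [Interaction(True, 2.0)] * 19 + [Interaction(True, 3.0)]
print(engagement_flags(log))  # {'possible_overreliance'}
```

Flags like these are conversation starters for coaching, not automated verdicts; the peer feedback and team assessments above provide the human context the logs cannot.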

5. Incentivize AI-Aligned Behaviors Through Culture and Performance Systems

The final layer is cultural: embedding behavioral alignment into what is valued, rewarded, and replicated across the enterprise. This requires:

  • Recognizing not just AI tool adoption, but smart AI usage—employees who question, improve, and contextualize AI output.

  • Highlighting “behavioral success stories” in internal communications to normalize adaptive experimentation.

  • Aligning performance reviews with AI engagement quality—evaluating how employees integrate AI into their workflows and decision-making.

Ultimately, the goal is not universal AI usage—it is behaviorally intelligent augmentation that improves outcomes for the individual, the team, and the organization.


APPENDIX

 

Al Naqbi, H., Bahroun, Z., & Ahmed, V. (2024). Enhancing work productivity through generative artificial intelligence: A comprehensive literature review. Sustainability, 16(3), 1166. https://doi.org/10.3390/su16031166

Brookings Institution. (n.d.). How will AI affect productivity? https://www.brookings.edu/articles/how-will-ai-affect-productivity/

Cui, Z., Demirer, M., Jaffe, S., Musolff, L., Peng, S., & Salz, T. (2025). The effects of generative AI on high-skilled work: Evidence from three field experiments with software developers. SSRN. https://doi.org/10.2139/ssrn.4945566

Filippucci, F., Gal, P., Jona-Lasinio, C., Leandro, A., & Nicoletti, G. (2024). The impact of artificial intelligence on productivity, distribution and growth: Key mechanisms, initial evidence and policy challenges (OECD Artificial Intelligence Papers, No. 15). OECD Publishing. https://doi.org/10.1787/8d900037-en

Gambacorta, L., Qiu, H., Shan, S., & Rees, D. (2024). Generative AI and labour productivity: A field experiment on coding (BIS Working Paper No. 1208). Bank for International Settlements. https://www.bis.org/publ/work1208.htm

Jacobs, J. (2024, June 11). Evidence shows productivity benefits of AI. Center for Data Innovation. https://datainnovation.org/2024/06/evidence-shows-productivity-benefits-of-ai/

KPMG LLP. (2024). Generative AI has an increasing effect on the workforce and productivity – KPMG survey. https://kpmg.com/us/en/media/news/kpmg-genai-workforce-survey.html

Lee, J., & Kim, H. (2025). The impact of artificial intelligence on organizational performance. Journal of Innovation & Knowledge, 10(1), 100234. https://doi.org/10.1016/j.jik.2025.100234

McKinsey & Company. (2023, June 14). The economic potential of generative AI: The next productivity frontier. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier

Mullins, J., & Penciakova, V. (2025, February 27). The impact of generative AI on work: Productivity gains and people interactions. Federal Reserve Bank of St. Louis. https://www.stlouisfed.org/on-the-economy/2025/feb/impact-generative-ai-work-productivity

Necula, S.-C., Fotache, D., & Rieder, E. (2024). Assessing the impact of artificial intelligence tools on employee productivity: Insights from a comprehensive survey analysis. Electronics, 13(18), 3758. https://doi.org/10.3390/electronics13183758

Nielsen, J. (2023, July 16). AI improves employee productivity by 66%. Nielsen Norman Group. https://www.nngroup.com/articles/ai-tools-productivity-gains/

Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586

Noy, S., & Zhang, W. (2023). The productivity effects of generative artificial intelligence. VoxEU. https://cepr.org/voxeu/columns/productivity-effects-generative-artificial-intelligence

Saam, M. (2024). The impact of artificial intelligence on productivity and employment – How can we assess it and what can we observe? Intereconomics, 59(1), 22–27. https://www.intereconomics.eu/contents/year/2024/number/1/article/the-impact-of-artificial-intelligence-on-productivity-and-employment-how-can-we-assess-it-and-what-can-we-observe.html

Smith, M. (2024, February 29). Insights on generative AI and the future of work. North Carolina Department of Commerce. https://www.commerce.nc.gov/news/the-lead-feed/generative-ai-and-future-work

Somers, M. (2023, October 19). How generative AI can boost highly skilled workers' productivity. MIT Sloan School of Management. https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-can-boost-highly-skilled-workers-productivity

Walsh, D. (2024, November 4). How generative AI affects highly skilled workers. MIT Sloan School of Management. https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-affects-highly-skilled-workers

Wang, B., Liu, Y., Qiao, K., Yu, Z., & Parker, S. K. (2025). AI’s dual impact on employees’ work and life well-being. Information & Management, 62(4), 103950.  

©AGILEdge 2025