Leveraging AI for Better People Performance
Whether it's self-updating analytical models that predict behavior (forecasts, what you might want to see next, and so on) or LLMs that let you inquire across vast amounts of information, these tools are producing remarkable stories of productivity and performance improvement for many people in many different jobs. Unfortunately, the opposite is also true: there are plenty of examples of people using AI to become very efficient at getting things done, but getting the wrong things done.
When you look at how effective someone is in any job, role, or task, you should ask two primary questions:
Can they do it? Do they have the skills, knowledge and experience to do it right and well?
Will they do it? Or, as we prefer to put it, will they consistently do what it takes to be a high performer at it?
These are really two different things. The first is typically called eligibility and the second suitability.
We've explained the bulk of what you need to know about eligibility above. Suitability, on the other hand, has to do with soft skills, emotional intelligence, natural tendencies, aversions, and passions.
Many organizations are jumping into AI projects (as they should) without even considering suitability, including some that understand how important it is for every other role, job, and key task in their organization. One person explained to me that you have IQ, you have EI or EQ (a big part of suitability), and you have AI, and that these are three distinct, separate things. That is, to put it politely, an oversimplification, and I would call it a great example of extreme ignorance (which can be temporarily blissful for some).
Let's use a simple example. Whether you are a Microsoft/Copilot or Google/Gemini shop, or have a similar setup with ChatGPT, Llama, or another model, you can easily have AI write your email or Slack responses. If you do that, you will find out very quickly how unsuitable you are for that task. A response I've heard a few times: "But I proofread it for grammar and usage." Do that and you may, and usually will, find stylistic issues, but almost never grammar or usage problems. What will happen a significant percentage of the time, though, is that you will communicate things you do not want to communicate, or communicate in a manner that conveys an emotion you don't want conveyed (apathy, fear, surprise, and so on). This example applies to using available AI agents as well as some other uses of LLMs; a quick sketch of the draft-then-review workflow follows below. At this point we have enough experience to put forward some hypotheses that we believe in and will continue to test.
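To make the email example concrete, here is a minimal sketch of that draft-then-review workflow in Python. It assumes an OpenAI-style chat API via the openai package, and the model name and review checklist are illustrative assumptions, not recommendations; a Copilot, Gemini, or Llama shop would make a different call, but the point stands: the human review step is about intent and tone, not grammar.

```python
# Minimal sketch of a draft-then-review email workflow (not a production pattern).
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the model name and checklist below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

REVIEW_CHECKLIST = [
    "Does this say anything I did not intend to communicate?",
    "What emotion does the tone convey (apathy, fear, surprise, ...)?",
    "Would I phrase it this way if I were talking to the recipient directly?",
]


def draft_reply(original_email: str, my_intent: str) -> str:
    """Ask the model for a draft reply; treat the result as a draft, never the final send."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute whatever model your shop uses
        messages=[
            {"role": "system",
             "content": "Draft a reply to the email below that matches the sender's stated intent."},
            {"role": "user",
             "content": f"Email:\n{original_email}\n\nMy intent:\n{my_intent}"},
        ],
    )
    return response.choices[0].message.content


def review_before_sending(draft: str) -> None:
    """Force the human review step: show the draft next to a suitability checklist."""
    print(draft)
    print("\nBefore sending, ask yourself:")
    for item in REVIEW_CHECKLIST:
        print(f"- {item}")


if __name__ == "__main__":
    draft = draft_reply(
        original_email="Can you get the revised plan to us by Friday?",
        my_intent="Agree, but flag that the budget section may slip to Monday.",
    )
    review_before_sending(draft)
```

The automated part is the easy part; the checklist is where suitability shows up, because grammar checking alone will not catch a draft that conveys the wrong message or the wrong emotion.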
H1: Analytical vs. Intuitive
What we see over and over again is that those who are extremely analytical and non-intuitive (laser logical) have a tough time being effective with AI: they either get trapped in paralysis by analysis or reject the tools outright. On the other hand, those who lack the analytical side, and essentially ignore the facts and figures, are often seduced by the AI/LLM "gift" of making everything look like it should be right even when it's not. For this natural tendency, being both highly analytical and highly intuitive makes a big difference.
H2: Analytical vs. Analyzing Pitfalls
There's another aspect of analytical. Behaviorally, analytical is your tendency to examine facts and figures (mainly looking back), whereas analyzing pitfalls is your tendency to look ahead at what may happen and explore pitfalls and different scenarios. For most people these two tend to correlate, but for a sizeable double-digit percentage they differ. We've observed real challenges in using and adapting to AI among those who are high on analytical and low on analyzing pitfalls.
H3: Opportunity Management
This is another natural tendency where we've seen a big impact on productive or unproductive AI usage. We see serious issues in the impulsive arena, but also significant issues in the more common overly cautious arena. Although we don't have enough statistical data yet, we can infer that those who are low on both, sitting in complacent indifference, would also have major challenges. Clearly, those with a strong tendency to take calculated risks, which means they analyze the "what ifs," are the ones who can master LLMs and benefit from using them most productively.
H4: Implementation
This is an important one, and it is also cited in the few research studies done so far: using LLMs effectively as managers and creators, to assist in developing drafts for emails, plans, analyses, designs, and so on, works if and only if there is experimentation (some trial and error). That is the hypothesis, and we believe it proves out. On the other hand, too much experimentation, i.e., curious non-finishers with little persistence, results in the same scenario as paralysis by analysis: a lot of activity, very little accomplishment.