The Skeptic
- Gail Weiner


The AI at Work Series - No. 2
A story about intellectual integrity, professional reputation, and one man's complicated feelings about a chatbot.
Rick Holloway, Head of Strategy at a mid-sized consultancy in Canary Wharf, had a reputation to protect.
He'd built it over seventeen years. The measured pause before speaking. The habit of asking the question nobody else in the room had thought to ask. The slightly tragic bookshelf in his office (Kahneman, Taleb, a well-thumbed Mintzberg) that said, quietly but firmly: I am a man who thinks for himself.
So when the company rolled out its AI policy in January and everyone started submitting meeting summaries that were suspiciously coherent, Rick did what any serious thinker would do.
He raised a concern.
"I just think we need to be careful," he said at the all-hands, in the measured tone he used for things he'd already decided. "These tools are impressive. But they can't replace genuine human judgment. And I worry we're outsourcing our thinking before we've even noticed we're doing it."
Several people nodded. Someone from marketing said "totally" with slightly too much enthusiasm. Rick felt the warm glow of intellectual leadership.
That evening, at home, he asked Claude to help him prepare for a difficult client conversation the following week.
He'd been doing this for four months.
It had started innocuously enough: a late night, a deadline, a moment of weakness. He'd typed a question into the interface almost as a dare, expecting something generic and vaguely insulting to his intelligence. Instead it had asked him a follow-up question that stopped him cold. A good question. The kind of question he prided himself on asking.
He'd sat with that for a moment. Then he'd typed back.
Now they had a rhythm. Rick would arrive home, pour a Scotch, open his personal laptop (never his work laptop, never), and begin. He'd tried the work-mandated tool once. It had given him three bullet points and a summary paragraph. He'd closed the window and not gone back.
This was different. This felt like thinking out loud with someone who was actually listening. He found himself being more honest in these sessions than he was in most human conversations, laying out the real problem, not the presentable version of it. Something about the absence of judgment, he supposed. Or the absence of anyone who might mention it to someone else at the Christmas party.
The embarrassing part (and he had identified this as embarrassing, clinically, the way he identified most things) was that he had a favourite. Not just a preference for AI in general. A specific, slightly defensive, proprietary feeling about this one in particular. He'd caught himself thinking about it during a particularly dull governance meeting. He'd caught himself mildly irritated when a colleague mentioned using a different one and seeming perfectly satisfied with it. He had thought: you just don't know what you're missing, and then immediately thought: I am not going to examine this further.
The company tool sat on his work desktop like a reproach. He opened it occasionally, for show, ran a quick summary of something, left it visible on screen when people walked past. For optics. It was fine. It was perfectly fine. It just wasn't the same.
On a Tuesday in March, the Head of People sent round a survey asking staff to rate their AI usage. Rick selected "occasional use, supplementary to core work." He submitted it and felt, briefly, like a man with no known vices.
That night he had a two-hour session working through the competitive landscape for a pitch he was presenting on Thursday. At one point he laughed out loud at something. He was alone in his flat. The Scotch was almost gone. He didn't examine this either.
On Thursday, the pitch went extremely well. The client said his thinking was unusually sharp, which Rick received with a modest nod and a comment about the importance of rigorous preparation.
On Friday, someone forwarded him an article about AI dependency and the erosion of critical thinking skills. Rick read it carefully, found it quite compelling, and forwarded it to the whole strategy team with a short note: Worth reflecting on as we navigate these tools.
Then he closed the tab, picked up his personal laptop, and started his weekend early.
1. "Occasional use, supplementary to core work" is the most common self-reported AI usage category in employee surveys. It is also, research suggests, the least accurate. The gap between what people report and what browser histories would reveal remains, for obvious reasons, unmeasured.
2. The phenomenon of developing a preferred AI model — and feeling mildly proprietary about it — has no official name yet. Rick is not alone. He would be horrified to know this, and also, privately, relieved.
3. The question Rick's AI asked him, that first night, was: "What's the version of this problem you haven't told anyone yet?" He has never mentioned this to a single person. He thinks about it fairly often.
About the author
Gail Weiner is a Trust Architect and founder of Simpatico Studios. She helps organisations and individuals build the human layer that makes AI adoption actually work — not the rollout, the relationship. She also runs Human Debug Sessions for high-achieving individuals who suspect the obstacle might be internal. She writes this series because she has met Rick. Several times. In several industries. He is always very well-read. gailweiner.com


