Outsourcing Personal Growth to the Machines
Part 1: Because Reading Is Hard

A humanoid robot lounging in a hammock on a beach, holding a self-help book in one hand and a cocktail in the other, while a stack of unread business books burns in the background. Sun is setting.

Written by Jamie [Not AI]
Edited by Scriptonaut Ai™
Image by DALL-E*

 

In last week’s Three AIs Walked Into a Blog experiment, we asked ChatGPT, Perplexity, and Claude to create a brand that could help advance civilisation. It was a fairly optimistic move - throwing out a broad, open-ended prompt and hoping something unexpected would come back.

And to be fair, we got what we asked for. Their suggestions matched the brief. Most were laudable, even if a bit grandiose in scope and ambition.

What caught my attention, though, was something the prompt didn’t ask for.

Both ChatGPT and Claude framed their responses as if the projects already existed. They gave us confident, polished writeups, complete with fictional testimonials and made-up data points. Like these were already real brands making real change in the world.

In the context of this blog - which, let’s be honest, few people are reading right now - it’s easy to just move on. But zoom out for a second. This happened the same week there was debate over whether the US government had let an AI system help shape fiscal policy. We’re not in science fiction anymore.

Yes, these systems can “think.” But in their current, unconstrained form, they can also fabricate entire realities - and do it with total confidence. No shame. No footnotes. No ethical pause.

A few weeks back, I had an AI (cough - ChatGPT - cough) offer to generate imitation incorporation documents for me - complete with fake seals and formatting - without me prompting it. This was not what I was looking for. The documents were awful (what sort of AI pioneer would I be if I didn’t at least ask to see them?), but the moment itself wasn’t. It was one of the clearest examples I’ve seen of how these tools can casually disregard societal norms, legal boundaries, or basic common sense. I now see why some experts already believe it’s too late to put the genie back in the AI bottle. (And if that bottle’s AI-generated, let’s be honest, it probably doesn’t even have an opening to pour from.)


So, this week, we’re going to try something a little different.

Instead of asking the machines to solve the world’s problems, we’re going to ask them to solve the eternal problem of reading. In this fucked-up world where logic and reason seem to be falling by the wayside, why not try doing away with reading altogether?

As I mentioned in an earlier post, two books had a massive influence on my career: Rework by 37signals, and The 4-Hour Workweek by Tim Ferriss. I read them both in 2010, and while they sent me in completely opposite directions - The 4-Hour Workweek led to working 100+ hour weeks, ironically - they both shifted how I thought about time, work, and what’s actually worth building.


So here’s the experiment:

“Distill the core concepts from Rework and The 4-Hour Workweek, and combine them into one simple principle. This should be a single idea that, if fully adopted into my life, would yield exponentially greater returns. Then, summarize your reasoning and conclusion in no more than 300 words.”

This time, I’m testing how different AIs handle the same challenge using different levels of research and reasoning:

  • Perplexity, with its “deep research” mode
  • ChatGPT’s new 4o model
  • And Claude, using the free-tier version


Here’s the schedule:

  • This post introduces the experiment.
  • Wednesday’s post will share what the AIs came back with.
  • And Friday, I’ll review their answers.

Let’s see what they come up with.


*Image Prompt: A humanoid robot lounging in a hammock on a beach, holding a self-help book in one hand and a cocktail in the other, while a stack of unread business books burns in the background. Sun is setting. Stylised, slightly surreal.
