


…Might as well read it.
A newsletter about technology, power, culture, and staying human in a world that keeps trying to automate you.

-

WHERE DID I PUT THAT THOUGHT?
What Dewey do with all this information?
Somewhere between the second half of a genius thought and the third app you swiped into, it vanished.
Not deleted. Not corrupted. Just gone. Maybe it's floating in some half-named note. Could be a Slack thread. Google Doc draft. Or one of those AI chats that got purged for your privacy. Your best idea of the week, killed by its own diaspora.
But hey. You saved it, right?
Everything is saved. Nothing is findable. Welcome to hell.
This is life in the infinite information age.
As is typically the case, even the most cutting-edge phenomena have historical echoes.
Every leap in information technology (printing press then, digital platforms now) creates a new organizational crisis.
The information explosion started with the printing press. Suddenly books multiplied, became affordable, spread everywhere. By the 18th and 19th centuries, libraries were drowning in their own success. New writings, new editions, holdings growing faster than anyone could catalog. The very abundance that democratized knowledge created its first crisis of findability.
Melvil Dewey saw this coming. He’s the Dewey in the decimal system we’ve all forgotten. Anyway, he thought every idea needs ONE place. A singular home. No wandering. Just you, a number, and a shelf.
But Dewey's world had edges. Ours doesn't.
Meeting notes in Notion. Action items in Asana. Actual decision in Slack. Or wait, personal OneNote? That Airtable we set up for Q3? They all plug into each other. We switch contexts between different, yet very similar containers with a swipe or alt-tab.
Dewey solved his era's problem with rigid classification. Now rigidity is the problem.
Also This Week:
The Memento Problem
There's this passage in a recent TrustGraph blog that made me laugh out loud: "Current implementations operate like Leonard Shelby in Memento: consulting disconnected notes without understanding their relationships."
For the uninitiated: Leonard has anterograde amnesia. Can't form new memories. Lives by Polaroids and tattoos, trying to solve his wife's murder with fragments he can't connect.
Sound familiar?
Your AI assistant can quote your meeting notes verbatim, but it has no idea why you took them. It retrieves fragments without the story that connects them.
We've built Leonard Shelby at scale.
The Unaskable Questions
Here's what's breaking: the brain builds knowledge through what cognitive scientists call "desirable difficulty." That sweet spot where you're struggling just enough to forge new neural pathways. The 85% rule: succeed about 85% of the time for optimal learning.
But when every thought gets scattered across platforms, we can't form connections. When AI kills the struggle, we stop encoding memories. No encoding, no retrieval. No retrieval, no knowledge.
Just fragments. Floating. Somewhere.
We need people who can see the invisible threads. Who treat "where did we put that?" like the existential crisis it is. Who can help you discover what you don't know you don't know.
Call them whatever. Information Sherpas. Context Wranglers. Professional Dot-Connectors.
Or we could just call them what they are: Librarians. The memory workers who've been doing this all along.
Not Your Grandmother's Librarian
Forget shushing. Forget dusty card catalogs. I'm talking about librarians as infrastructure for judgment in the digital age.
The AI librarian is a retrieval god. Send Claude or ChatGPT into the infinite stacks with any halfway coherent query and it'll sprint back with armfuls of relevant material. It can haul through halls of information that make the Library of Congress look like the book nook at the dentist's office. Cross-reference, synthesize, summarize, serve you a five-course meal of information before you've finished typing. Getting the information is easy. Figuring out what you need to get is hard.
I use Claude all the time. The first question it asks is right there in the empty prompt box: “What can I help you with today?”
Good fucking question, Claude. I don't know what I don't know.
This is where taste, context, and ownership come into play. These are the fundamental building blocks of human judgment. This is what creates the good friction between all the ideas in the world and what we’re actually looking for. AI can retrieve everything. But retrieval isn't discovery. Discovery requires someone who understands that the question you ask is rarely the question you need answered.
Previously on Out of Scope…
The False Promise of Total Context
We keep hearing the same pitch: AI will understand all our context if we just give it all our data. It'll know us better than we know ourselves. Anticipate our needs. Curate our perfect information diet.
We already ran this experiment. It's called Netflix.
Also DoorDash. And Spotify. And every other algorithm that promised to solve choice paralysis by knowing us deeply. How's that working out?
Still spending forty minutes scrolling through options? (Yep. End up rewatching Fyre Fraud. Again.)
Still ordering the same three things? (Yep. Sala Thai, Daily Provisions, Chopt)
Still listening to that one playlist from 2019? (Lots of Kygo. And The Chainsmokers. They’re always playing.)
Turns out the answer to decision fatigue isn't more data. It's better questions.
Memory Workers
The librarian role of the future will look more like a combination of librarian, corporate historian, and archivist. This class of "memory workers" won't just curate information. Their job will be to curate the context that makes it usable.
They'll build maps between scattered thoughts, preserve the why behind the what, and create the cognitive infrastructure that turns data scatter into actual knowledge.
Future memory workers need to be like those brain regions whose only job is linking other regions together. They don't store memories; they store maps to memories. They won't organize information so much as the relationships between pieces of information. Not asking "where is it?" but "what does it connect to?" and "why did this matter when you saved it?"
They'll create the patterns, repetition, and rehearsal our brains need. They'll watch where things get put down. They'll know that finding information is only step one. Understanding its place in your larger context is what transforms data into decisions.
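If you want to picture what a "map to memories" might look like in practice, here's a minimal sketch in Python. It's an assumption on my part, not anyone's real product: the note IDs, relationship labels, and reasons are all made up. The point is only that the index stores connections and the why, not the notes themselves.

```python
# A minimal sketch of a "map to memories": an index that stores
# relationships between scattered notes and the reason they mattered,
# rather than storing the notes themselves. IDs and labels are hypothetical.

from collections import defaultdict

class MemoryMap:
    def __init__(self):
        # note_id -> list of (related_note_id, relationship, why_it_mattered)
        self.links = defaultdict(list)

    def connect(self, note_a, note_b, relationship, why):
        """Record that two scattered notes belong together, and why."""
        self.links[note_a].append((note_b, relationship, why))
        self.links[note_b].append((note_a, relationship, why))

    def what_does_it_connect_to(self, note_id):
        """Answer 'what does it connect to?' instead of 'where is it?'"""
        return self.links.get(note_id, [])


# Example: stitching one decision back together across tools.
m = MemoryMap()
m.connect("notion:meeting-notes-0612", "slack:thread-budget",
          "decided-in", "the Q3 budget decision actually happened in Slack")
m.connect("slack:thread-budget", "asana:task-341",
          "follow-up", "action item spun out of that decision")

for neighbor, rel, why in m.what_does_it_connect_to("slack:thread-budget"):
    print(f"{neighbor} [{rel}]: {why}")
```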
The Partnership
Now, before anyone accuses me of being in the pocket of Big Neuron, let me be clear: I genuinely believe we need both human and AI capabilities here.
We need the AI to sprint through infinity. The human to know where to point it. To help you understand what you're really looking for. To build the mental maps that turn information scatter into actionable knowledge.
One without the other is just a very sophisticated failure.
In a world where everything is saved but nothing is findable, where AI can retrieve anything but can't tell you what you need, where your best ideas vanish into the digital ether, we don't need fewer librarians.
We need more.
A lot more.
Our catalog deepens…
Check out Inherent Price, a term sheet fever dream for those who dare.
Until next week
