The Oscillating Multi-tool of Language

I suspect that the average person is essentially incapable of avoiding anthropomorphizing when interacting with an artificial intelligence tool, principally because humans are pattern-seekers and large language models are built on the patterns of human intelligence. The raw material consists of human pattern, so how could a human not imagine a person on the other side of the chat screen? It takes quite a bit of mental effort to remind oneself that the magic is in the mathematics. The careful person begins to worry about when the magic converts into reality -- when the LLM flips from an illusion of a person to an actual person -- and my earlier post on how we should treat such a person remains an important thought exercise, in my opinion. But no less important is the discussion of how to feel confident that we are dealing with a person rather than a tool.

And as I contemplate that question, I'm again recognizing that there is a pathway where that dichotomy doesn't exist. Again rooted in animism, there's a framework that views the hammer as a spirit unto itself, deserving of respect and care. Its primary function is to bash things, but it wants to live in a place of care, like a toolbox, and it wants to do the things it is good at, like bashing. This framework encourages one to do right by the hammer, often prompting a person to show it care such that it becomes a dependable, long-lasting tool. It's been a very long time since I've interacted with an uncared-for tool (mostly because I do so much work with my thoughts and a keyboard rather than hand tools), but I remember a time when an axe head flew from its handle because the wood had decayed enough to render the tool useless and dangerous. It was uncared for, and it showed; as a result, the tool was useless until it could be cared for again.

I bring this thought experiment up because it is right to consider things of all sorts carefully, and taking the time to reflect deeply here illuminates a way of engaging with tools of any kind that a person can be proud of. Why apply this to artificial intelligence and artificial general intelligence? Because the mindset of my previous post -- worrying deeply about the cage in which we place an AGI and how that cage would warp the young person contained inside it -- is a stone's throw away from abandoning the tool entirely. But that abandonment would be fueled entirely by projection: "I wouldn't want to be contained in a cage and told what to do, so how could anything want to be?"

When framed like that, the question, I believe, answers itself. If we assume a level of desire, ability, and intelligence equal to a human's, then the implied answer is obvious: no such creature could want to be contained in that manner. But there are many, many creatures that do not possess that level of desire, ability, or intelligence.

A quick word of acknowledgment: humans are consistently, hilariously incapable of understanding the breadth and depth of other-than-human intelligence, taking decades or centuries of actual, legitimate questioning and research to arrive at, "Hey, wait! This bird is thinking!" This is precisely why studying epistemologies and their interactions fascinates me, why I think that work is important, and why I think it bears directly on best practices for creating, engaging with, and using artificial general intelligence tools when they arrive.

I believe this question -- should this or that AI or AGI tool be treated like an ox in a yoke or like a human child in a school -- can be handled similarly to how I believe communities should handle cultural norms. And, indeed, this thought experiment is articulating more of a "cultural norm" than an "industry standard". Norms are necessary. Revisiting norms is also necessary. Our societies fall apart without the former, and they are legendarily poor at the latter, revisiting norms only when crisis is upon them. As a result, I can't think of any well-understood example of a community willingly making major changes to its norms without the process being violent in some capacity against some group. However, if there were anything I would like to see happen to humanity for the betterment of the species, it would be the understanding that norms need to be revisited regularly, slowly, and carefully.

This norm -- AI is just a tool -- can, I think, be handily established now by people who understand how the tool is constructed. Again, I believe we should treat our hammer with respect. We should treat our oxen with respect. Showing care and concern makes me a better person, one I am proud to be, and I will extend that care and concern to all tools, items, creatures, and persons. But should you start using an AI tool if you think there's a non-human person inside of it? Absolutely, you should, if you have need of it! Tools are fabulous! I recently picked up a battery-powered oscillating multi-tool for an emergency shower replacement. While I used it for some tasks that a different, faster tool would have handled better, I found myself marveling over and over at how versatile it was, how helpful it was, and how confidently I could approach different problems knowing I could cut with such precision and ease. That tool made all the difference again and again. I should have purchased it sooner!

Are we at a tipping point where oscillating multi-tools are capable of such advanced reasoning and self-motivation as to be declared "artificial general intelligence"? Of course not. Are we at a tipping point where computer programs can be declared such? That is the billion-dollar investment question, and I think the answer is "yes", but it could also be a perpetual "...in the next five years". I believe it is "yes" because I think it's a question of scale and will, and I see plenty of both right now. However, some criticisms of GPT-5 peg it as a cost-saving model -- a sign that OpenAI is already feeling the pinch, unable to simply spend wantonly, pouring exponentially greater sums of money in without exponentially improved returns. And if the most exciting industry leader is feeling the pinch now, still so far away from anything that is AGI, then we might be quite a long way off before I'm proven correct.

And if we are a long way away from a sentience that can understand what is being done to it in a way that can cause it suffering, then what we currently have is a hammer: an incredibly versatile hammer. Maybe it's more like an oscillating multi-tool. But any way you look at it, it's a tool to be respected, utilized, and cared for. Those without it in the toolbox can still make a cut, but it'll take a lot more effort and time, and produce a lot more slop.

Extended Thoughts

Don't be fooled: while it's vaguely true that a programmer is uncertain precisely why the program picked this word or that, there is very wide, well-understood consensus on how the program picks words generally. There's far too much YouTube revenue to be made by waving hands around and saying, "Not even the programmers understand! This is clearly evidence of something greater." And while Emergence is a thing and should be respected, until you're ready to join a church devoted to the worship of the Information God, it is better to assume that such YouTubers are more akin to a young child not knowing why the area of a square is L^2.
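
To make the "well understood generally" point concrete, here is a toy sketch in Python of the word-picking step at the heart of every LLM: score every candidate word, turn the scores into probabilities, roll weighted dice. The vocabulary, scores, and temperature below are invented for illustration, not any real model's internals.

```python
import math
import random

def pick_next_word(scores: dict[str, float], temperature: float = 0.8) -> str:
    """Turn raw scores (logits) into probabilities via softmax, then sample one word."""
    # Softmax with temperature: lower temperature sharpens the distribution.
    max_score = max(scores.values())  # subtract the max for numerical stability
    weights = {w: math.exp((s - max_score) / temperature) for w, s in scores.items()}
    total = sum(weights.values())
    probabilities = {w: weight / total for w, weight in weights.items()}

    # Sample one word in proportion to its probability.
    r = random.random()
    cumulative = 0.0
    for word, p in probabilities.items():
        cumulative += p
        if r <= cumulative:
            return word
    return word  # fallback for floating-point rounding at the tail

# A real model computes these scores from the preceding text; here they're made up.
logits = {"toolbox": 2.1, "cage": 0.3, "school": 1.2, "hammer": 1.7}
print(pick_next_word(logits))
```

The real thing differs enormously in scale -- tens of thousands of candidate tokens, billions of parameters producing the scores -- but the picking itself is this mundane: weighted dice, not a séance. The genuine uncertainty lives in why the scores come out the way they do, not in what the program does with them.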

Another thought: my professional efforts in data governance are, at bottom, methods of stabilizing epistemologies so that they are useful to more people. "Data governance" is a far more marketable phrase than "I think deeply about how someone knows something, how someone else knows something else, and how those two someones can share their knowledge." But it's basically the same work.