5 Comments
Jun 13 · Liked by Justin Bonanno

Wow! To make a value judgement, this piece takes the cake 🎂! I feel like I'm reading McLuhan unleashed. From what? I'm not sure, but he's definitely off the chain, or off the hook if you prefer water. Much wisdom here. I'll be returning for another dip, no question!

author

Thanks, AB. You da man.

Let's catch up soon.

Jun 13 · Liked by Justin Bonanno

Thanks for this thoughtful piece. And thank you for pointing out the problem with the claim that technology is neutral and all that matters is its use. This is not to say there isn't a moral alignment. Regarding AI, I've been thinking a lot about Charlie Munger's quip, "Show me the incentive, and I'll show you the outcome," coupled with Merleau-Ponty's "Matter is pregnant with its form." A moral assignment to a technology can't be separated from its use and the intent motivating it. A knife used for chopping vegetables is a cooking knife; a knife used to stab someone is a murder knife. What's easy to miss is that this is not the same as the technology being neutral. It's perhaps a bit like a moral "superposition," like Schrödinger's cat. Technology embodies its myriad uses; it is never neutral. Thanks again for a thought-provoking piece!

author

Thanks for reading, Jim! Good to hear from you. Indeed, every technology has a certain set of potentialities wrapped up in it. I can make a podium out of a block of wood, but I can't use water to do the same thing.

Would you be willing to expand more on how Munger's quip relates to AI? I'm interested to hear more.

Hi Justin,

Charlie Munger thought that once you understood an actor's incentive, you could determine the outcome of their actions. Of course, his context was companies and businesses. In the case of AI, if one first looks at the incentives of the largest interests pursuing its development, one sees profit as the motive (or power, but those usually go hand in hand). Certainly, there is nothing wrong with profit, but in a business milieu, profit comes from only one place: the gap between price and cost. The best ways to impact cost, apart from materials, are to increase operational efficiency, lower labor costs, or both. So a first-order outcome (among others) is reducing the headcount, or the amount of time the headcount must work to achieve an outcome, thereby increasing capacity, reducing compensation, or both. A second-order outcome is that profit as a first-order outcome renders one blind to second-order outcomes! And what are second-order outcomes but a manifestation of matter pregnant with its form, the potentialities of AI as a technology beyond just the use it is put to? Heidegger might rephrase Merleau-Ponty as "matter is pregnant with phenomena."

Concerning AI itself, there is no incentive because AI, so far as we can tell, has no will! Otherwise, AI appears to fulfill the role of any technology: following a command for efficiency. The material from which a technology is constructed may make it more or less efficient (your podium made of water certainly won't help a speaker be heard "over the crowd," though as a block of ice it would serve better, and yet still not as well as a block of wood). Still, the more the technology achieves its intention as an efficiency-manifesting device, the more it becomes ready-to-hand.

There are two incentives: one belonging to the maker and the other to the user, and the two are interchangeable between agents.
