Wednesday, April 19, 2023

Whether AIs are "Conscious" or "Intelligent", Etc. Is Irrelevant to Questions of Danger and Alignment


1. If the question of our continued survival and flourishing is what's important – then it's an interesting question whether AIs are just imitating language, or have experience and are therefore possibly worthy of moral consideration. But it's not important to the question of danger. If the AI can out-process us (whether or not you believe it's "thinking") and destroy us, who cares if it's "just imitating"? Unless you can show that whatever those words mean (thinking, processing, consciousness, etc.) has some bearing on predicting the objective behavior of the entity, they're irrelevant.

To my knowledge these are open questions in philosophy, and it surprises me to see the most intense doomers (e.g., Yudkowsky) giving them even one second of attention in these discussions.


2. AIs are, so far, increasingly complicated echo chambers. That is, to an AI, "enojo" means "Zorn" and vice versa - both mean "anger" in English - and the AI can place each of them correctly in context among other words. But there is no real argument at all that these words correspond to anything in the AI's experience, unlike the way most of us think about language: each word is defined only by its relations to other words.
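
To make the "echo chamber" point concrete, here is a minimal, purely illustrative sketch. The vocabulary and vectors are invented for this example, not taken from any real model; the point is only that "enojo", "zorn", and "anger" can cluster together in a space of word-to-word relations without any of them being tied to an experience of anger.

```python
import math

# Toy, hand-made embedding vectors -- purely illustrative, not from any real model.
# Each word is defined only by its position relative to the other words.
embeddings = {
    "anger": [0.90, 0.10, 0.00],
    "enojo": [0.88, 0.12, 0.02],  # Spanish
    "zorn":  [0.87, 0.09, 0.05],  # German
    "table": [0.10, 0.90, 0.30],
    "mesa":  [0.12, 0.88, 0.28],  # Spanish
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(word):
    """Rank all other words by similarity to `word`."""
    return sorted(
        (w for w in embeddings if w != word),
        key=lambda w: cosine(embeddings[word], embeddings[w]),
        reverse=True,
    )

print(nearest("enojo"))  # the "anger" cluster ranks highest
```

The similarities are real, but they are similarities among symbols; nothing in the structure points outside the web of words. That is the sense in which the model is an echo chamber.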

Again, this is not a dismissal of the possible danger of AI. That GPT-4 is an echo chamber is irrelevant if the machines can out-process us, even if they're "just imitating". However, the AIs are language engines, not survival engines. Humans are the product of selection over billions of years in the realm of real-world physics, with avoiding death and reproducing programmed into the very core of our being; language is a recent side effect of that. AT THIS STAGE, I would be surprised if these language models were to resist being erased or turned off. But selection is exactly the kind of process that builds in such drives, so we should be very concerned about genetic or evolutionary techniques for producing language engines.
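
To see why selection deserves that extra concern, here is a toy, purely illustrative simulation (it does not describe any real training method): a population of "models" is selected only for finishing tasks, but a model that gets shut down never finishes, so resistance to shutdown rises anyway.

```python
import random

# Toy illustration, not a real training setup: each "model" has a single trait in
# [0, 1] for how strongly it resists being shut down. Selection only rewards
# finishing a task, but a model that is shut down never finishes, so shutdown
# resistance gets selected for as a side effect.
random.seed(0)
population = [random.random() for _ in range(100)]

for generation in range(50):
    # A model survives this round with probability equal to its resistance trait.
    survivors = [t for t in population if random.random() < t]
    if not survivors:  # vanishingly unlikely, but keeps the loop well-defined
        survivors = population
    # Survivors reproduce with small mutations to refill the population.
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
        for _ in range(100)
    ]

mean = sum(population) / len(population)
print(f"mean shutdown resistance after selection: {mean:.2f}")  # drifts toward 1.0
```

Nothing in the selection criterion mentions survival explicitly; it emerges because only the models that stay on get to reproduce.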