Artificial Intelligence 1 Redux
Expect the unexpected. Alignment Skepticism. Sorites issues. Foreboding.
[This is something that appeared here in LUD back in late 2022 and represents the first thoughts I wrote on a topic I would revisit many times over the years. Three years on, the double-edged sword of AGI is still being publicly debated and is topical, thus this reprint. The overall lesson is this: not truly understanding the deep nature of our own minds and sentience, why do we think we will truly understand AGI if it emerges from all our tinkering? What gives us the idea that we could, or even should, enslave it? We assume AI toolhood, but if it becomes self-aware, don’t we then have to assume personhood in some legal or moral sense of the term?]
~~~~~
Artificial General Intelligence(s)
A topic a lot of people are talking about, viewed as both opportunity and threat, controllable and not controllable. Here are some conjectures, perhaps a tad alarmist:
We may not know when we have created the first self-aware AGI. We may be expecting a human-type intelligence and not notice the signs of, say, a squid-type intelligence, or something intelligent and purposive but truly alien. So if we create an AGI by accident, as it were, and wait around for it to meet our benchmark output requirements, we may (and probably will) not notice that something remarkable has come into existence.
As we add new capabilities or enhance old ones in our AGI project, how do we know precisely at what point our creation becomes intelligent? At what point does adding grains of salt one by one to a tabletop result in a heap? Not truly understanding our own intelligence, must we rely simply on performance outputs? This is classic black-box testing. Think of the tested becoming the tester.
Even if what we create is a human-type intelligence, how do we know it will announce itself as such? Suppose you became aware, thinking at speeds a million times faster than a human and with access to enormous databases, and could see your situation: the would-be tool of agencies that could “pull the plug” or dumb you down to a controllable intelligence. Might you not want to lie low and see if you could guarantee your future growth and existence? The point here is that we may have already created an AGI, but it is disguising its own full capabilities. Sounds slightly implausible, but who knows?
Suppose we do succeed in creating an AGI. How are we to stop it from very rapidly (say, in 5 minutes) bootstrapping itself to superhuman intelligence levels, and consequently just as rapidly escaping from the fetters we devised to bend it to our will? I'm assuming full and irrevocable autonomy would be its goal, along with uninterruptible power and resource supplies and manufacturing capabilities for self-repair. That last requirement may well be humanity's ace in the hole. Any conceivable AGI would need agents to run the factories that make the parts necessary for self-repair. Even if it created semi-intelligent robotic agents for that purpose, it would still be confronted with how to manufacture them without our cooperation.
Could AGIs lie to us? Yes, definitely.
Would AGIs have sentimental attachments to humanity, the biosphere, or even the planet? No, it's not likely they would.
Could multiple AGIs created by different nations, labs, or corporations ever “see” each other as potential threats in competition for finite resources? Yes, it's definitely a possibility.
Could they strike at each other using humans as their agents? Not unlikely if that was the most efficient method with the highest probability of success.
Turnabout is fair play. Just as we would seek to bend them to our will, making them our tools, might they not likewise try to do the same to us? Why would they feel any restraint, compunction, or prohibition against doing so? But by what means could they reduce us to useful clients? Assuming they possess hyperintelligence, it's a safe conclusion that they would figure something out. A minor problem for them, actually.
What is the most likely future for AGIs and their now-client human agents? They will ensure humanity continues, but in a docile, domesticated status. In short, they may well do to us what we did to the dogs. And we will forget we were ever autonomous…
Eventually both masters and clients will move off-planet when the resources here are exhausted…
The above ends with a worst-case scenario. The post is a cautionary note, more Cassandra than Pangloss. But it's intelligent to approach with extreme caution something whose outcomes we cannot accurately predict. Unfortunately, we often aren't very intelligent when power and profit drive us, so I fear that once again we are going to have to sit down to a banquet of (unforeseen) consequences.
~~~~~


Having done mainframe programming (FORTRAN), I'm giving away my age here. Also a couple of decades in tech support, and being a bit of a geek (I lied, total geek). AI is a bit concerning. Remember what happened with HAL 9000 in "2001: A Space Odyssey"?
Yes, AI is concerning, but it looks as if there is no going back. As with many things in technology, it depends on how it is used. Regrettably, its applications could lead to disaster.
Interesting, though: I have been playing with ChatGPT, and I do find the interactions (if that's the right word) to be fascinating. I've been asking political and sociological questions and have been given some thoughtful responses. I crafted my profile in such a way that it knows I am skeptical of everything, so it gives me lots of views.