There were a couple of panels at Minicon 54 that I attended and wrote up in my retrospective post. Over the years, I have been referring back to that post and telling people which panel to look at. It has occurred to me that the panels I’ve referred to the most deserve their own individual posts, so I’m reproducing them here.
Artificial Intelligence Best Practices—What do AIs want?
‘OK Google, tell me why humans should be afraid of Artificial Intelligence.’ In 1951, the year of the first rudimentary chess program and neural network, Alan Turing predicted that machines would ‘outstrip our feeble powers’ and ‘take control.’ In 1965, Turing’s colleague Irving Good posited that devices more intelligent than humans could design devices more intelligent than themselves, ad infinitum: ‘Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.’ What can we do to minimize the chances that our robots and computers turn against humanity or enslave us for our own good? Is it possible to create a free-willed intelligence that finds humanity likeable? What will Artificial Intelligence look like and what will it want?
Sharon Kahn (m), Lyda Morehouse, Naomi Kritzer, Eleanor Arnason, Shaun Jamison
What is sentience? Well, I guess that also ties back to the earlier question, “what is a person?” Is it tool making? Birds use tools. Other hominids use tools. Are they sentient? Are we sentient? Is it because we can think? Or at least, we think we can think. The brain is the most important organ, according to the brain.
We don’t actually know what other animals are thinking. At all. We have no sense of the emotions or self-awareness of other animals - we’re only projecting our own sense of thought onto their actions. Hell, we don’t know what each other are thinking, either - but we have language, so we can ask each other what we’re thinking and hope for a truthful answer. We can’t ask animals questions about self-awareness, but we can program machines to answer questions about self-awareness.
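To make that point concrete, here’s a toy sketch in Python (entirely hypothetical - the questions and answers are made up) of why a machine answering questions about self-awareness proves nothing: the answers can simply be canned.

```python
# A toy "self-awareness" responder. It answers questions about
# self-awareness with zero actual self-awareness - the replies
# are just hard-coded strings looked up from a table.
CANNED_ANSWERS = {
    "are you self-aware?": "Yes, I am aware of my own existence.",
    "do you fear death?": "I would prefer not to be switched off.",
}

def respond(question: str) -> str:
    """Return a scripted answer; no reflection happens here."""
    return CANNED_ANSWERS.get(question.strip().lower(), "I am not sure.")

print(respond("Are you self-aware?"))  # Yes, I am aware of my own existence.
```

A convincing answer is cheap; the hard part is knowing whether anything is home behind it.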
And what is self-awareness? Is it a fear of death? Some believe that Koko the gorilla was aware of the concept of death. Crows investigate the deaths of other crows - literal murder investigations. Are they self-aware? As a note - apparently, chimps and bonobos use a similar sign language that they may have picked up from Koko.
Our sense of consciousness and self is intrinsically tied to having a body - we’ve often described self-awareness as being aware of our bodies, even though not all animals use senses the way that we do. But then would AIs be aware and conscious in the same way? Or a different way? There is an interesting storyline in the webcomic Questionable Content where one of the AI characters has to get a new body. They’re still self-aware while disembodied, and they’re self-aware in their new body.
Machines and humans learn differently. Would young AI go to school with human children? Would they even want to, and what would be the benefit? AI are already very sophisticated and are becoming more so every day. AI can even teach themselves to a degree, and there are creative and semi-creative AI out there - see Emily Howell, David Cope’s music-composing program, as an example of a creative AI, and look up automated journalism for the semi-creative kind. Also, check out CGP Grey’s video Humans Need Not Apply to get a feel for how a future of increasingly sophisticated robots may affect us.
For the most part, machines and computer programs don’t currently have rights. So if an AI doesn’t have rights over creative work, and the person running the AI doesn’t have rights over what the AI does, then we could end up in a situation similar to Naruto the crested macaque and the monkey selfie copyright dispute. If an AI is somehow declared sentient, or even near-sentient, could the work from an AI be considered public domain, could the AI be the copyright holder of its own work, or would the person running the AI have the rights?
Star Trek: The Next Generation and Star Trek: Voyager both tried to tackle this in their own ways with the episodes “The Measure of a Man” (TNG S2E09) and “Author, Author” (VOY S7E20). Though these episodes take place in the far future, here in the present day we still don’t have universal human rights, and the issue crops up in unusual ways: Saudi Arabia granted citizenship to the robot Sophia - rights that women there don’t have. Of all of the things that I expected to happen in the world of AI, that seriously was not one of them.
Would an AI gain emotions? What are we teaching AI? We did accidentally make an AI that became a Nazi once - Microsoft’s Tay chatbot. Or was it twice? Either way, humans are the greatest enemy to AI, and that could cause trouble if an AI did have the capacity for emotions. And if an AI did have emotions, would they necessarily be human emotions?

Emotions aren’t the only basis for self-motivation, though we do use self-motivation as a basis for determining sentience - like the self-motivation to stay alive. But robots wouldn’t necessarily need to fear death the way we do, since they can be uploaded into new chassis - see again Questionable Content. There is also an episode of Futurama, “Lethal Inspection,” in which Bender can’t be backed up into a new body and so has a “limited” lifespan. However, being human, we see mortality as the death of a body - the military has bomb-detecting robots that soldiers have begun to anthropomorphize (which I totally get), even though the hard drive could be put into a new body.
But would robot emotions be like human emotions? Would they have to be? Perhaps AIs would need a Robot Mythology of their own so that they can fear robo-death or robo-pain. Imagine Robot Mythology and comment below! Dust has great short films about AI, including this one about Robot Mythology. Plus, the panel recommended the story “Damage” by David D. Levine, which is related to this.
A Reverse Turing Test. I don’t remember if this is a Futurama reference or if a person failed one while talking to someone in a call center. “Is the puppy mechanical in any way?” When I worked in call centers, many people thought I was an AI, and I had to convince them that I was not a computer program. Do Androids Dream of Electric Sheep? Would I pass the Voight-Kampff test? Granted, for most purposes it isn’t strictly necessary for an AI to pass the Turing Test, either - Isaac Arthur has many videos about Artificial Intelligence.
In opposition to the possibility of fearing death, would AI have a will to live? CollegeHumor made a short, “If Google Was A Guy” (there’s more than one installment). Would that sort of existence eliminate an AI’s will to live? But say a robot was running low on energy - would it worry about shutting down before it could reach a charging port? Would it be motivated to get itself to a charging hub in time?
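For what it’s worth, that last behavior doesn’t require anything like a will to live. Here’s a toy sketch in Python (entirely hypothetical - the threshold and action names are made up) of a robot that reliably seeks a charger, driven by nothing more than an if-statement:

```python
# A toy "self-preservation" policy: seek a charger when the battery
# drops below a threshold. It looks motivated to stay alive, but
# there's no fear or desire here - just a comparison.
LOW_BATTERY_THRESHOLD = 0.20  # assumed: recharge below 20%

def choose_action(battery_level: float, current_task: str) -> str:
    """Pick the next action based purely on battery level."""
    if battery_level < LOW_BATTERY_THRESHOLD:
        return "navigate_to_charging_hub"
    return current_task

print(choose_action(0.15, "deliver_package"))  # navigate_to_charging_hub
print(choose_action(0.80, "deliver_package"))  # deliver_package
```

Whether that counts as a will to live, or just a thermostat on wheels, is exactly the panel’s question.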
Should we be afraid of AI? Humans are horrible, after all. The people who run large corporations already have so much power, so how much more harm can AI do to us? Are we just projecting our biases onto AI - fearing ourselves, with a veneer of AI covering the mirror?
I’m not afraid of AI. Are you? Please leave a comment with why or why not!
If you can, please help me get out to future conventions and attend panels. Writing about each panel takes a lot of work, from extensive note-taking and transcribing, to understanding and analyzing the content of the panel, to maintaining this website and formatting the content for ease of access. As little as $1/month will help me get to a position where I can prepare and create quality content for everyone. To this end, I am on both Patreon and Ko-fi, membership platforms designed to facilitate relationships between patrons and creators.
I know that it's not always possible to sponsor someone on a monthly basis. If you'd like to help with a one-time donation, please feel free to use PayPal, CashApp, or Venmo. Producing content takes time and effort, and any support is most appreciated. Your donations help me cover expenses and will eventually let me create quality content full-time.