Creepiest Stories in Artificial Intelligence (AI) Development

Virtual cannibalism

This story was told to us by Mike Sellers, who was working on social AI for DARPA in the early 2000s. His team was building agents that learned to interact with one another socially.

“For one simulation, we had two agents, naturally enough named Adam and Eve. They started out knowing how to do things, but not knowing much else. They knew how to eat, for example, but not what to eat. We’d given them an apple tree (the symbolism of that honestly didn’t occur to us at the time), and they found that eating the apples made them happy. They had also tried eating the tree, the house, and so on, but none of those worked. There was also another agent named Stan, who wanted to be social but wasn’t very good at it, so he was often hanging around, kind of lonely.

“And of course, there were a few bugs in the system.

“So at one point, Adam and Eve were eating apples… and this is where the first bug kicks in: they weren’t getting full fast enough. So they ate all the apples. Now, these agents learned associatively: if they experienced pain around the time they saw a dog, they’d learn to associate the dog with pain. So, since Stan had been hanging around while they were eating, they began to associate him with food (bug #2 — you can see where this is going).
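
A rough sketch of that failure mode, not Sellers’ actual architecture: a naive associative learner that credits every co-present stimulus with the reward from a meal will, given enough shared meals, tag a bystander as food. The names, values, and update rule here are all assumptions made for illustration.

```python
from collections import defaultdict

class AssociativeAgent:
    """Naive associative learner: credits every co-present stimulus
    with whatever reward (or pain) the agent just experienced."""

    def __init__(self, learning_rate=0.1):
        self.associations = defaultdict(float)  # stimulus -> learned value
        self.lr = learning_rate

    def experience(self, reward, stimuli_present):
        # Bug #2 in miniature: nothing distinguishes the *source* of the
        # reward (the apple) from a bystander (Stan). Everything visible
        # gets a share of the credit.
        for stimulus in stimuli_present:
            self.associations[stimulus] += self.lr * (reward - self.associations[stimulus])

    def looks_like_food(self, stimulus, threshold=0.5):
        return self.associations[stimulus] > threshold

adam = AssociativeAgent()
for _ in range(20):                  # many apple meals, Stan always nearby
    adam.experience(reward=1.0, stimuli_present=["apple", "Stan"])

print(adam.looks_like_food("Stan"))  # True: Stan now reads as food
```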

“Not long into this particular simulation, as we happened to be watching, Adam and Eve finished off the apples on the tree and were still hungry. They looked around, assessing other potential targets. Lo and behold, to their brains, Stan looked like food.

“So they each took a bite out of Stan.

“Bug #3: human body mass hadn’t been properly initialized. Each object by default had a mass of 1.0, and that’s what Stan’s body was set to. Each bite of food took away 0.5 units of mass from whatever was being eaten. So when Adam and Eve each took a bite of Stan, his mass went to 0.0 and he vanished. As far as I know, he was the first victim of virtual cannibalism.
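
A minimal sketch of how bug #3 plays out, assuming the default mass of 1.0 and bite size of 0.5 described above; the class and method names are invented for illustration.

```python
DEFAULT_MASS = 1.0   # every object fell back to this...
BITE_SIZE = 0.5      # ...and each bite removed this much

class WorldObject:
    def __init__(self, name, mass=DEFAULT_MASS):
        # Bug #3: Stan's body mass was never explicitly set,
        # so he got the same 1.0 default as an apple.
        self.name = name
        self.mass = mass

    def take_bite(self):
        self.mass -= BITE_SIZE
        if self.mass <= 0.0:
            print(f"{self.name} vanished")  # object consumed entirely

stan = WorldObject("Stan")   # mass = 1.0, two bites from oblivion
stan.take_bite()             # Adam's bite -> mass 0.5
stan.take_bite()             # Eve's bite  -> mass 0.0, Stan vanishes
```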

“We had to reconstruct some of this after the fact from the agents’ internal telemetry. At the time it was pretty horrifying as we realized what had happened. In this AI architecture we tried to put as few constraints on behaviors as possible… but we did put in a firm no-cannibalism restriction after that: no matter how hungry they got, they would never eat each other again.

“We also fixed their body mass and how fast they got full, and we changed the association with another person from ‘this is food’ to the action of eating: when you often have lunch with someone, you may feel like eating when you see them again, but you wouldn’t think of turning them into the main course!”
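
One way to read that fix, sketched with invented names under the same assumptions as the snippets above: the companion becomes a cue for the act of eating rather than an edible object, and a hard no-cannibalism check vetoes other agents during target selection regardless of any learned value.

```python
from collections import defaultdict

class FixedAgent:
    """Post-fix learner: companions cue the *action* of eating
    (an urge to eat) instead of being tagged as food themselves."""

    def __init__(self, learning_rate=0.1):
        self.food_value = defaultdict(float)  # how edible something seems
        self.eat_cue = defaultdict(float)     # how strongly it triggers "let's eat"
        self.lr = learning_rate

    def experience_meal(self, food, companions):
        self.food_value[food] += self.lr * (1.0 - self.food_value[food])
        for other in companions:
            # The companion is linked to the eating *behavior*, not to food.
            self.eat_cue[other] += self.lr * (1.0 - self.eat_cue[other])

    def choose_meal(self, visible, agents):
        # Firm no-cannibalism constraint: other agents are never candidates,
        # no matter how strong the learned associations are.
        candidates = [v for v in visible if v not in agents]
        return max(candidates, key=lambda v: self.food_value[v], default=None)

eve = FixedAgent()
for _ in range(20):
    eve.experience_meal("apple", companions=["Stan"])

print(eve.choose_meal(visible=["Stan"], agents={"Stan"}))  # None: Stan is safe
print(eve.eat_cue["Stan"] > 0.5)  # True: seeing Stan still cues hunger
```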

