
Monday 21 January 2008

Here's HAL

I read that robots that can lie may be closer than we thought.

"The team at the Laboratory of Intelligent Systems at the Federal Institute of Technology created the little experimental learning devices to work in groups and hunt for "food" targets nearby while avoiding "poison." Imagine their surprise when one generation of robots learned to signal lies about the poison, sending opponents to their doom.

The little wheeled robots had neural circuitry with about 30 "genes" that determine their behavior, and how much they react to light in the environment. The food sources charged up the robots' batteries while the poison drained them, and by using the genes of the most successful feeders in 50 successive generations, the team was hoping to select the fittest.

Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death. Eerily wicked, to say the least. Saving the robots' honor, luckily, there were also a few "hero robots" that signalled danger and then rolled to their death to save the others."
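The quoted passage is really describing a small evolutionary algorithm: each robot's behaviour is set by a handful of "genes", fitness is whatever keeps the battery charged, and the best performers are bred into the next generation. Just to give a rough feel for how that kind of selection works, here is a minimal, purely illustrative Python sketch. The genome size and generation count come from the article, but the controller, fitness scoring and mutation rate are my own guesses, not anything from the actual experiment.

```python
import random

POP_SIZE = 20
N_GENES = 30          # the article mentions roughly 30 "genes" per robot
GENERATIONS = 50      # and 50 generations of selection
MUTATION_RATE = 0.05

def random_genome():
    # each gene is just a weight in [-1, 1] feeding a toy controller
    return [random.uniform(-1, 1) for _ in range(N_GENES)]

def fitness(genome):
    # hypothetical stand-in for the real fitness test: score the robot on
    # whether it approaches food (charging the battery) and avoids poison
    # (which drains it); a real run would simulate the arena and the lights
    score = 0.0
    for _ in range(10):
        near_food = random.random() < 0.5
        approach = sum(genome[: N_GENES // 2]) > 0
        if near_food and approach:
            score += 1.0   # battery charged at the food source
        elif not near_food and approach:
            score -= 1.0   # battery drained at the poison source
    return score

def select_and_breed(population):
    # keep the fittest half and refill the colony with mutated copies
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: POP_SIZE // 2]
    children = []
    for parent in survivors:
        child = [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
                 for g in parent]
        children.append(child)
    return survivors + children

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = select_and_breed(population)

print("best fitness after", GENERATIONS, "generations:",
      max(fitness(g) for g in population))
```

Nothing in a setup like this asks the robots to signal honestly, which is presumably why deceptive signalling can emerge once it happens to pay off.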


Of course, we may be reading more into this than is actually there. It is possible that bot 1 wasn't "lying". Maybe bot 2 was spying on bot 1 and misread the lights, or maybe bot 1's programming or its detection of charge/poison was askew. In any case, is bot 1 even capable of knowing that it is being observed?



If the story does bear examination, then watch out for any computers with "a soft voice and a conversational manner".
