Self-Aware Computers Aren't That Smart

Groundbreaking research from Rensselaer Polytechnic Institute has demonstrated artificial consciousness in machines.

Three robots sit in a line. Their job is to determine whether or not a researcher has given them a “dumbing pill” that renders them speechless.

Robots can’t eat pills. Instead, the researcher turns two of the robots off. The third robot stands up and says, “I don’t know.”

Hearing its own voice, it then says “Sorry, I know now. I was able to prove that I was not given the dumbing pill.”

Next stop: Skynet.

A self-aware, self-driving car? Humanity is doomed.

I’m just kidding. This isn’t groundbreaking research. The robots are off-the-shelf toys running Python scripts that control their motor and speech functions. This particular implementation is a simplified version of the “three wise men” puzzle, a well-solved programming problem [1].
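To give a sense of how little machinery is involved, here is a minimal sketch of what those scripts boil down to. Every name in it (Robot, try_to_speak, and so on) is mine, invented for illustration, not taken from the actual implementation: a robot that hears its own voice concludes it was not silenced.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted  # True if this robot got the "dumbing pill"

    def try_to_speak(self, message):
        # A muted robot makes no sound; an unmuted one "hears" itself.
        if self.muted:
            return False
        print(f"{self.name}: {message}")
        return True

    def take_the_test(self):
        if self.try_to_speak("I don't know."):
            # Hearing its own voice is the entire "proof" of not being muted.
            self.try_to_speak("Sorry, I know now. I was not given the dumbing pill.")

robots = [Robot("A", muted=True), Robot("B", muted=True), Robot("C", muted=False)]
for robot in robots:
    robot.take_the_test()

The celebrated flash of self-awareness is an if-statement on whether the speech call returned anything.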

These recent “self-aware computers” only gained attention because researchers programmed the solution into cute little robots.

This isn’t self-aware computing so much as semantic distortion. In fact, I can write a “self-aware” computer program right here. I’ll make it open-source, in the interest of scientific progress.


while True:
    print("If I am printing this text, I am not dead.")

BAM. Self-aware computer. Turing Award, please.

Reference:
Robot homes in on consciousness by passing self-awareness test – New Scientist

1. In the puzzle, three wise men are each given a blue or white hat and must determine the color of their own hat. Each man sees the others’ hats but not his own, and all of them know that at least one hat is blue.
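For the curious, that puzzle yields to a few lines of brute force. The following is a rough sketch in my own naming, not anyone’s published solution: enumerate every hat assignment with at least one blue hat, and each time a round passes in silence, discard the assignments in which someone would already have known his hat color.

from itertools import product

HATS = ("blue", "white")

def deducible(worlds, actual, i):
    # Return the hat color man i can deduce in `actual`, given the worlds
    # still considered possible, or None if he cannot tell yet.
    consistent = {w for w in worlds
                  if all(w[j] == actual[j] for j in range(3) if j != i)}
    own = {w[i] for w in consistent}
    return own.pop() if len(own) == 1 else None

def solve(actual):
    # Common knowledge: at least one of the three hats is blue.
    worlds = {w for w in product(HATS, repeat=3) if "blue" in w}
    for round_number in range(1, 4):
        for i in range(3):
            color = deducible(worlds, actual, i)
            if color is not None:
                return i, color, round_number
        # Nobody spoke: drop worlds in which someone would already have known.
        worlds = {w for w in worlds
                  if all(deducible(worlds, w, i) is None for i in range(3))}
    return None

print(solve(("blue", "blue", "blue")))  # -> (0, 'blue', 3)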
