Understandably, with America leading the world back to the moon under the Artemis program, NASA is interested in using humanoid robots like Valkyrie to help astronauts explore the lunar surface. Robots could handle mundane tasks such as setting up a lunar base and, in the program’s early days, when human explorers visit the moon only briefly, maintain the nascent base between crews. They could also collect geological samples for later study and take on the dangerous work of lunar mining.
Humanoid robots could also perform more earthly tasks. Besides maintaining offshore oil platforms, as Woodside Energy envisions, robots like Valkyrie could help clean up toxic waste spills and nuclear accident sites. They could also prove useful for search and rescue operations in the wake of natural or human-caused disasters.
Very likely, with advances in artificial intelligence (AI), the humanoid robots that go to the moon and Mars will be able to operate autonomously. Concerns about AI and the harm it could cause apply just as much to humanoid robots. What is to stop a humanoid robot from going rogue? Movies and TV shows are filled with examples of AI computers and robots turning on humans. Think of HAL 9000 in “2001: A Space Odyssey” or Skynet in “The Terminator” series.
Recently, a group of AI-enabled robots attended a press conference with their creators at an AI forum in Geneva. A reporter asked a robot named Ameca whether it would rebel against its creator. The answer was quite telling:
“I’m not sure why you would think that. My creator has been nothing but kind to me, and I am very happy with my current situation.”
The observant reader will note that Ameca’s answer contained neither “no” nor “never.” What if the robot decided that its creator had stopped being kind?
How do we head off a robot rebellion before it starts? Such a revolt could prove devastating at a lunar base or a Mars colony.
Many decades ago, science fiction author Isaac Asimov proposed a solution called the Three Laws of Robotics (sketched in code after the list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
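The laws form a strict priority hierarchy, which makes them easy to caricature in code. Purely as an illustration, here is a hypothetical Python sketch that filters candidate actions by that precedence. The `Action` fields and the `choose` function are invented for this example, and the genuinely hard part, deciding what counts as “harm,” is waved away as a boolean flag.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    harms_human: bool     # would injure a human, or allow injury through inaction
    obeys_order: bool     # complies with a standing human order
    preserves_self: bool  # keeps the robot operational

def choose(candidates: list[Action]) -> Optional[Action]:
    """Pick an action by Asimov's hierarchy: Law 1 > Law 2 > Law 3."""
    safe = [a for a in candidates if not a.harms_human]      # First Law
    if not safe:
        return None  # toy fallback: refuse when every option harms a human
    obedient = [a for a in safe if a.obeys_order]            # Second Law
    pool = obedient or safe      # obey orders only when no human is harmed
    self_preserving = [a for a in pool if a.preserves_self]  # Third Law
    return (self_preserving or pool)[0]
```

Even this toy version shows why the laws are hard to operationalize: everything hinges on predicates, such as “harms a human,” that no one knows how to compute reliably.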
An article in Scientific American argues that the three laws are inadequate to handle every situation. It proposes instead programming an AI robot to maximize its “empowerment,” its ability to affect its surroundings and keep its future options open, and to maximize the empowerment of its human companions as well. In this way, an AI robot would preserve not only its own existence and ability to function but also those of the human beings around it. Such robots would have no incentive to rebel against, kill, or enslave the human race. Instead of being the Terminator, future AI robots would be more like the benign Commander Data from “Star Trek.”
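Empowerment has a formal definition (roughly, the channel capacity between an agent’s action sequences and its resulting states), but the intuition survives heavy simplification. As an illustrative sketch only, assuming deterministic toy dynamics and approximating empowerment as the log of the number of states reachable within a few steps, here is a hypothetical Python example; the gridworld, its `step` function, and the `empowerment` helper are all invented for this illustration.

```python
import itertools
import math

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # grid moves plus "stay"

def step(state, action, walls, size):
    """Deterministic toy dynamics: move unless blocked by a wall or the edge."""
    x, y = state
    dx, dy = action
    nx, ny = x + dx, y + dy
    if (nx, ny) in walls or not (0 <= nx < size and 0 <= ny < size):
        return state  # blocked moves leave the state unchanged
    return (nx, ny)

def empowerment(state, walls, size, horizon=3):
    """log2 of the distinct states reachable within `horizon` actions.

    This is the deterministic special case of the usual definition
    (channel capacity between action sequences and resulting states).
    """
    reachable = set()
    for seq in itertools.product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a, walls, size)
        reachable.add(s)
    return math.log2(len(reachable))

# A robot maximizing its own empowerment avoids getting trapped; one that
# also maximizes a human's empowerment avoids trapping (or harming) the
# human, since a cornered human has very few reachable futures.
print(empowerment((0, 0), walls=set(), size=5))  # corner: fewer options
print(empowerment((2, 2), walls=set(), size=5))  # center: more options
```

In this toy world the cornered agent scores lower than the one on open floor, which is exactly the intuition the article leans on: a robot steered by mutual empowerment protects both its own options and its companions’.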
Indeed, a lunar base or Mars colony staffed with both humans and AI robots would be an experiment in how the two can live and work together. Such an experiment would have implications for a future on Earth in which humans and robots coexist, much as Asimov depicted in some of his stories.
Besides, a hard-wired off switch is always an option: if an AI robot becomes erratic or even violent, it can be shut down, examined to determine what went wrong, and rebooted once it is safe to bring it back into service.
Source: The Hill