As robots and artificial intelligence grow more complex, so do the ethical questions surrounding their use. The problem was codified as far back as 1942, when Isaac Asimov laid out his famous Three Laws of Robotics. While the laws attempted to prevent robots from harming humans, they were far from perfect. Asimov himself explored their inherent contradictions in “I, Robot,” his collection of short stories.
Recent advances in AI have made the need for boundaries and parameters even more urgent. In the future, robots will have to justify their actions ethically and be prepared to make split-second moral decisions, whether that means running into traffic to save a child or stopping a robber.
Ultimately, machine ethics will likely rest on a hybrid of society’s accepted moral code, machine learning, and some variation of Asimov’s laws. Aspiring robotics researchers should consider attending a top engineering college to learn from experienced professors with backgrounds not just in the hard sciences but in ethics as well.