It looks cute, but this Russian-made robot was recently “arrested” after making the rounds at a political rally, recording voters’ opinions about a candidate’s team. That sounds fairly harmless, but Promobot apparently caused enough trouble that local authorities sent police to apprehend and detain the robot.
“Police asked to remove the robot away from the crowded area, and even tried to handcuff him,” the Promobot spokesperson said.
This isn’t the first time Promobot has gotten itself into mischief—it has run away from its home laboratory before, twice. One dash for freedom ended with the robot, its battery drained, blocking traffic in the street while its programmers were left scratching their heads. Skeptics, however, dismissed the curious incident as a hoax and a publicity stunt.
So, what happens when artificially intelligent beings break the law? The scenario might seem like something straight out of sci-fi, but it’s a concern that’s quickly becoming real as the technology develops at an accelerating pace.
Back in February, the US legally designated Google’s software as “the driver” of its autonomous cars. That raises the question: who’s accountable for collisions caused by the software? And when there is a passenger in the autonomous car and a third party is injured or killed, who gets sued?
As the Washington Post points out, these questions boil down to liability, not necessarily insurance. John Townsend, a spokesman for AAA Mid-Atlantic, told the paper, “If you have a catastrophic failure of a product, you can sue the bejeezus out of a company, if the product causes the crash.” While such cases may be relatively straightforward, the difficulty compounds in vehicles where humans do have the option of taking over manually.
Aside from driverless cars, other applications of AI raise different questions. Last year, for example, two artists created a bot and set it loose to surf the dark web and make illegal purchases, as part of an art exhibition exploring the legal and philosophical implications of AI breaking the law. Ultimately, the artists did not face charges, as “the couple acted in the name of performance and public education.” But would other countries come to the same conclusion? And even if so, what about those who employ AI to do the same thing outside the scope of art?
These are just two examples of what we will be dealing with as artificial intelligence grows more capable. There are no concrete answers yet, but it may be wise for developers to collaborate with government agencies to work through these tricky questions.