Siri, Accessory to Murder (via The Independent)
So Siri allegedly assisted a murderer:
US police say a Florida man accused of killing his roommate asked Apple’s digital assistant Siri for advice on hiding the body the day the man went missing. According to evidence reproduced from the trial by local news stations, Siri responded “What kind of place are you looking for?” before offering four options: “Swamps, reservoirs, metal foundries, dumps”.
But the iPhone data (including flashlight usage records!) helped get him prosecuted:
Police say that Bravo was using the phone’s flashlight function to hide the body in the woods, and say that location data gathered from the smartphone doesn’t fit with Bravo’s account of his movements that evening.
Doesn’t this violate Asimov’s first law of robotics?
"A robot may not injure a human being or, through inaction, allow a human being to come to harm."
While suggesting places to hide a body isn’t a physical assault, the family of the man buried in a shallow grave surely feels wronged by Siri. Should accomplice status or indirect harm be covered by the first law? Is it even possible for programmers to foresee these kinds of situations and code an appropriate response? Who is coding ethics for AI?