Robots and other machines equipped with artificial intelligence fire on military targets, dispense cash (think: ATMs), drive cars and deliver medication to patients, to name just a few of their roles. People performing these duties would be expected to behave in a certain way and follow moral and ethical guidelines. But what about robots? They cannot yet think and act of their own accord, so should we expect them to behave morally?
Researchers working in the field of machine ethics say yes and are investigating ways to program machines to behave morally.
Two researchers have programmed a robot to make morally sound decisions: given certain facts and possible outcomes, it must weigh its obligations and then choose how to act, Discovery News reported.
Philosopher Susan Anderson and her research partner and husband Michael Anderson, a computer scientist, built the robot on an approach to ethics developed in 1930 by the Scottish philosopher David Ross. The so-called prima facie duty approach takes into account the different obligations a person must weigh (being just, doing good, not causing harm, keeping a promise) when deciding how to act morally.
The robot's programme weighs the benefit the patient gains from taking her medication, the harm that may come if she does not, and her right to autonomy.
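To make the idea concrete, here is a minimal sketch of how such a prima facie duty calculation might look in Python. The duty names, weights and scoring scale are hypothetical illustrations of the general technique, not the Andersons' actual programme.

```python
# Hypothetical prima facie duty weighting: each candidate action is
# scored on how strongly it satisfies (+) or violates (-) each duty,
# and the action with the highest weighted total wins.

DUTY_WEIGHTS = {"benefit": 1.0, "non_harm": 2.0, "autonomy": 1.0}  # illustrative

def duty_score(profile):
    """Sum each duty's satisfaction level (say, -2..+2) times its weight."""
    return sum(DUTY_WEIGHTS[duty] * level for duty, level in profile.items())

def choose_action(candidates):
    """Pick the candidate action with the highest weighted duty score."""
    return max(candidates, key=lambda name: duty_score(candidates[name]))

candidates = {
    "remind_patient": {"benefit": 1, "non_harm": 1, "autonomy": -1},
    "do_nothing":     {"benefit": -1, "non_harm": -1, "autonomy": 2},
}
print(choose_action(candidates))  # "remind_patient" under these weights
```

With these particular weights, reminding the patient scores 2 and doing nothing scores -1, so the robot reminds her; different weights could tip the balance the other way, which is the point of the approach.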
The robot reminds the patient to take her medication, and after she says 'no' a few times, it decides to tell the doctor. One drawback is that there is no telling what the machine will do if the patient says she will take the medication but then doesn't; another is that the patient could simply switch the robot off.
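The escalation just described fits the same pattern: each refusal raises the projected harm of staying silent until it outweighs the patient's autonomy, at which point notifying the doctor wins out. Again, this is only a sketch; the threshold and scores below are hypothetical.

```python
def next_action(refusals):
    """Decide whether to remind the patient again or notify the doctor.

    Hypothetical model: the harm of not intervening grows with each
    refusal, while overriding the patient carries a fixed autonomy cost.
    """
    harm_if_silent = refusals  # grows with each refusal
    autonomy_cost = 2          # fixed cost of overriding the patient
    if harm_if_silent > autonomy_cost:
        return "notify_doctor"
    return "remind_patient"

for n in range(5):
    print(n, next_action(n))  # switches to "notify_doctor" after the third refusal
```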