Most of us don't like thinking about our own mortality or that of our loved ones. We hope to get through life without being confronted by these sorts of issues, and that if we are ever forced to make such decisions, we will get them right when the time comes. This, no doubt, is why AI making such decisions plays so well in the media. Because it is not really the AI making them, but the person (or people) who established the guidelines on which the AI bases its decisions. That person makes the decision in a calm, quiet, considered manner, without the pressure of imminent danger. I know that a decision made in that environment is likely to be much better than one made in a split second at the moment it is needed. But that doesn't change my fear that, were I ever in one of those situations, my fate might already have been sealed.
Most technologists and futurists believe that we are still decades away from fully autonomous, "intelligent" machines, but we already have commercially available machines making potential life-and-death decisions. The obvious risk is technical malfunction, but ethical concerns appear when a machine could face a choice between two damaging alternatives. If one of the primitive self-driving cars on the market detects an impending collision that will harm either the driver or another motorist, what logic should be coded to handle the situation? Who should know about this logic in advance? What value judgments should the machine make? Is the driver's well-being more "valuable" than another motorist's? Less valuable than a pedestrian's?
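The questions above can be made concrete with a deliberately crude sketch. Everything in it is invented for illustration (the maneuver names, the weight table, the harm model); no real vehicle works this way. The point is that the weights are a value judgment someone must fix in advance, in exactly the calm, considered setting described earlier.

```python
# Hypothetical sketch only: illustrates the kind of hard-coded value
# judgment the questions above describe. All names and numbers are
# invented for illustration.

def choose_maneuver(options):
    """Pick the maneuver whose expected harm score is lowest.

    `options` maps a maneuver name to a list of (party, harm_probability)
    pairs describing who is endangered and how likely harm is.
    """
    # The core ethical question lives here: these weights encode whose
    # safety counts for more. Any concrete numbers are a value judgment
    # made long before the emergency occurs.
    harm_weight = {
        "driver": 1.0,
        "other_motorist": 1.0,
        "pedestrian": 1.0,
    }

    def expected_harm(outcomes):
        return sum(harm_weight[party] * p for party, p in outcomes)

    return min(options, key=lambda m: expected_harm(options[m]))


options = {
    "brake_straight": [("driver", 0.7)],
    "swerve_left": [("other_motorist", 0.5)],
}
print(choose_maneuver(options))  # "swerve_left" under equal weights
```

With equal weights the car protects whoever faces the lower probability of harm; nudge `harm_weight["driver"]` down and it will sacrifice other motorists more readily. That single constant is the answer to "whose well-being is more valuable?", decided by a programmer, not by the machine.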