A.I.-Assisted, Remote-Control Killing Machine for Killing Humans and Others
Written by Ritholtz Wealth Management and Philip Elmer-DeWitt, CNN | The Scientist
The Scientist is CNN's weekend magazine show. Hosted by Clark Howard, Chief Investment Strategist for Financial Planning at the Stanley H. Spear Center for International Business, The Scientist debuts this weekend on CNN.
I am hooked. And I might be in desperate need of some help.
On Friday, Dec. 21, a horrific video became an internet sensation: a brain-dead Houston man, James Wharton, being moved by professional cardiologists out of the intensive care unit of the hospital where he had been on life support for nearly a year and into the operating room for a hysterectomy. The video of the pro-assisted-suicide move was accompanied by commentary from Dr. Thomas Gray. It went viral and allowed Gray to pull off an elaborate and, some might say, brash and unethical act. The video drew scores of comments and kudos on social media.
Wharton, needless to say, did not help his case and now faces family and other problems because of the hoax.
But now, The Scientist has stepped up to the plate.
The magazine has created a Frankenstein's monster it calls an A.I.-Assisted, Remote-Control Killing Machine. The idea is that by turning the act over to a machine, you take the burden off what seems to be an incredibly troubled man. By this I mean you remove him from a life-and-death situation and allow him to die slowly, quietly, and without objection from whomever you deem to be an inconvenient, delicate person.
The premise is not new: companies like the Cork Food Group are developing robots that can break down whole hams and chickens without causing any major harm to the animals. But I suspect that, in reality, these machines wouldn't be used to keep workers from being harmed in slaughterhouses, because they're designed as part of a different sort of operation. So there's some hope, at least for the rest of us.
The problem, of course, is that robots are always the ideal candidate for any crime. Whether it's speeding on the highway, killing a pedestrian struck by a car, or firing on a target in a war, their software is always programmed to let someone, somewhere, kill as many as possible. Given how few people had to be killed in the Wharton case, The Scientist weaponized a hypothetical machine (courtesy of the Oregon Health Authority, no less) to make a story happen.
If The Scientist somehow manages to pull off its project, it will have the potential to help some people. And if removing a person's life from his own hands turns out to be easier than you'd think, then you really haven't escaped the slippery slope that begins in Europe and ends in your own life. It's a slippery slope to trouble.
If you want to leave your own death in someone else's hands, I'd offer some pointers. First, seek help; this is not something to go through alone. Second, it's not a good idea to be a jerk about it. Third, think carefully about the impact your illness and your failure are going to have on those close to you. I don't think it's acceptable to take someone else's life because you don't like your own. And fourth, be very careful not to open the door to future situations in which there are no redeeming qualities. Today it's a food-eating A.I.; do you want the next experiment to be the World War III robot that has just learned how to take out a nation's entire military on its own?
The experiment does have one small benefit. When we write stories, do we ever plan to take readers’ identities out of them? Do we ever put forward our own titles and titles of others in order to make them more likable?
This project doesn't work quite like that. It won't play on someone's vanity to take their identity out of their story. It won't get someone fired, or expose a child to school violence, or make someone snap, run off a cliff, and die, as tragic as those hypotheticals are. But I think that's OK.