To me it makes sense for humans to have to make the decision when it comes to the killing and destruction of other humans, because of things like empathy, morality, and emotion, which an artificial intelligence does not have.
Granted, things like that could be PROGRAMMED into the A.I., but the A.I. will never be able to come up with them on its own, and eventually, due to a flaw in the programming or a loophole the designers did not see, a catastrophe could pretty easily happen.
Yeah. I think it crosses an ethical line.
An A.I. has no personal way to tell when to stop, only that it needs to get from point A to point B. And who knows what it might do to get there?