Guys, I am totally back in sci-fi land these days. Just about to finish the final chapter on artificial intelligence for my thesis. Today I am pondering the arguments the University of Hong Kong-based philosopher Andreas Matthias recently made. He works on responsibility and - surprise, surprise - calls for a way to judge a machine for its deeds.
Matthias wants robots to get paid like the human workforce in order to secure the repair of the damage they do. He wants to make them legal entities - arguing that international law already draws a distinction between a legal person and a human being anyway. Responsibility is no longer bound to one single person. If corporate bodies are being judged, if a legal person is no longer a human being, if individual responsibility is already lost anyway - why not make an AI responsible, too? If humans are only valued as function owners within their respective system, why not value machines the same way? Right now the person who sets the goals for a machine is responsible for it. There are, however, more and more cases where the initial goal and the unforeseen consequences of the machine's actions strongly diverge. Thus, Matthias argues, the machine itself needs to be judged.
The machine needs to be judged?!
I totally disagree with him. The machine does not need to be judged, BUT: the system needs an update. Individual responsibility needs to be possible again. Symbolic interaction needs to be restructured. People have to be answerable for their deeds again. The man-as-functionary principle needs to be overcome. I'll get back to you later on this subject, once I can make a smart suggestion on how that could be possible from within the complex structures that define our current form of society. Or maybe someone else makes a suggestion first and I just keep pondering over that. Anyway.
The man needs to be judged. Period.