Musing on AI and responsibility
A recent article on the Ethics of AI led me to think about responsibility for the consequences of decisions by an AI entity.
Arguably, however autonomous an AI entity, its manufacture, programming, and energising must derive, directly or indirectly, from a human originator or human enterprise – a creator – accessible in a way our Creator is not – at least in this world.
That creator may have delegated duties to an operator, such as the driver/pilot of an autonomous vehicle. My point is that, at some level, even the most autonomous device will always be directly or indirectly traceable to a legal person; a second- or higher-level device will ultimately be so traceable.
Even when the intellect of the artefact becomes superior to that of the human originator, as it may, the artefact will remain an artefact. Its decisions are the responsibility – be it never so indirect – of a human.
There will be a question as to the human succession when the first originator dies, but the law, though differing from jurisdiction to jurisdiction, can deal with such problems.
An autonomous vehicle at large is, I imagine, no different from a tiger. The owner is responsible. The English case of Rylands v Fletcher [1868] UKHL 1, (1868) LR 3 HL 330 established strict liability for the escape of dangerous things that cause damage. I would expect other jurisdictions to have such a law. I would suggest that it is a logical part of Natural Law.
I haven’t seen much discussion of ultimate liability in the generalisations about the ethics of quasi-autonomous entities, though I may not have been looking in the right places. I would be interested in any reaction to my proposition. There is something finite about an accessible, worldly creator.