Accountability of Artificial Intelligence: Should AI be held accountable if something goes wrong?
Haebom
Dr. Zena Assaad points out that with established technologies such as cars and airplanes, responsibility for failures lies with the manufacturer or the mechanic. Can the same principle be applied to AI? Assaad argues that it can: AI is, in the end, a technology designed and used by humans.
AI makes decisions based on data and objective functions. Since these objective functions and data are provided by humans, the ‘intelligence’ of AI is actually a result of human design.
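To make this concrete, here is a minimal sketch (my own illustration, not from Assaad's work) of the point: a model only optimizes the objective its human designers wrote down, over the data those humans supplied.

```python
# Minimal sketch: a model's "decision" is just the optimization of a
# human-specified objective over human-provided data.
# All names and numbers here are illustrative assumptions, not a real system.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs collected by humans

def objective(w: float) -> float:
    """Mean squared error of the linear model y = w * x -- a loss the designer chose."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Crude search over candidate weights: the "learning" follows the human-set objective.
best_w = min((w / 100 for w in range(-500, 501)), key=objective)
print(f"learned weight: {best_w:.2f}")  # roughly 2.0 for this data
```

Had the designers written a different objective, the "decision" would change accordingly; the behavior is downstream of human choices.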
Case 1. Automobiles and autonomous driving
When a car causes an accident, responsibility lies not with the car itself but with the manufacturer, the mechanic, or the driver. By the same logic, when an AI system makes a mistake, the humans behind it should be held responsible.
Case 2. AI in medical diagnosis
Suppose an AI system makes a wrong diagnosis in a medical setting. The responsibility would lie not with the AI itself, but with the medical staff and the researchers who designed and trained it.
Chain of Responsibility
Assaad and her colleague Dr. Brendan Walker-Munro propose a 'Chain of Responsibility' (COR) model, which aims to clarify who is responsible at each stage of an AI system's life cycle.
When determining liability for an accident involving an autonomous vehicle, for example, the COR model lets us examine every link in the chain: designers, manufacturers, testers, and users.
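As a purely illustrative sketch (the stage names and the lookup logic are my assumptions, not the authors' formal model), the chain can be represented as an ordered mapping from life-cycle stage to the party answerable at that stage:

```python
# Illustrative sketch of a Chain of Responsibility as a data structure.
# Stages and parties are assumptions for illustration, not the authors' taxonomy.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str            # life-cycle stage
    responsible: str     # party answerable at this stage

chain = [
    Stage("design", "designers"),
    Stage("manufacturing", "manufacturers"),
    Stage("testing", "testers"),
    Stage("operation", "users"),
]

def accountable_parties(failed_stage: str) -> list[str]:
    """Every party up to and including the stage where the failure occurred."""
    parties = []
    for stage in chain:
        parties.append(stage.responsible)
        if stage.name == failed_stage:
            break
    return parties

print(accountable_parties("testing"))  # ['designers', 'manufacturers', 'testers']
```

A failure late in the life cycle would then implicate the whole chain, which mirrors the model's aim of keeping every level in view rather than blaming a single actor.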

Responsibility for artificial intelligence ultimately falls on humans. No matter how advanced the technology becomes, there is always human design and intention behind it, and so responsibility remains with humans as well.