As AI systems become more complex and more deeply integrated into critical decision-making, their failures can have significant real-world consequences. But when things go wrong, who should be held accountable? The developers, the companies deploying the AI, the policymakers who regulate it, or the users themselves? This discussion explores the complexities of AI accountability, the risks involved, and the mechanisms needed to assign responsibility.