After Orthogonality: Virtue-Ethical Agency and AI Alignment
This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices[1]:...
