Desire Homuncularism

Steve Petersen

Abstract

Late in his philosophical career, Daniel Dennett changed his mind about minds in ways that have crucial implications for AI sentience and existential risk. This paper examines Dennett’s shift toward viewing genuine minds as requiring “real caring,” and explores the further claim that such caring requires being made of parts with simpler cares. I connect these ideas to questions of AI moral patiency and agency, suggesting that cognitively sophisticated AI systems might still lack conative capacities required for both moral standing and dangerous agency. These considerations support the possibility of creating helpful but largely “mindless” robots, addressing both ethical and safety concerns in AI development.
