Richard D. Bartlett, co-founder of Microsolidarity and a prominent voice in community building, recently shared a critical perspective on the discourse around artificial intelligence (AI) safety and existential risk. In a tweet, Bartlett cautioned that it is "psychically hazardous" to extend one's "sphere of concern very far beyond your sphere of influence." He added that a "felt sense of powerlessness spoils your epistemics like nothing else," arguing that an overwhelming focus on uncontrollable, large-scale threats undermines one's ability to think clearly and act effectively.
Bartlett's statement underscores a core principle of personal agency when facing complex global challenges. Aligning one's "sphere of concern" with one's "sphere of influence" means prioritizing engagement with issues where an individual can genuinely make a difference. This approach aims to counteract the psychological burden and cognitive distortion that arise from dwelling on problems perceived as insurmountable.
His philosophy carries into practical initiatives, such as the "Hermes' Loom" project, which he discussed in a recent interview. Co-founded with Julian Nalenz, Hermes' Loom seeks to connect philanthropic funding with community-driven projects, empowering individuals and groups to address societal needs from the ground up. Bartlett believes that fostering human agency and collaborative action at the local level can indirectly contribute to AI safety by mitigating the underlying drivers of conflict and powerlessness.
The tweet comes amid a growing global conversation about AI's potential existential risks, where debates often contrast the rapid pace of technological advancement with slower progress in AI alignment and governance. While some experts focus on regulatory frameworks and corporate responsibility, Bartlett emphasizes individual and community empowerment. He suggests that a pervasive sense of helplessness can hinder constructive engagement and critical thinking within the AI safety discourse itself, leading to unproductive or distorted approaches.
Ultimately, Bartlett advocates a recalibration of focus, urging individuals to channel their energy into areas where their efforts can yield tangible results. By cultivating a stronger sense of influence and responsibility within one's immediate environment, he argues, individuals can better contribute to a stable and collaborative future, which in turn supports responsible AI development. This perspective offers a human-centric counterpoint to the often abstract and daunting discussions of AI's long-term impact.