Prominent online commentator Ian Miles Cheong sparked widespread discussion on April 19, 2025, with a concise post on the social media platform X: "AI is completely out of control." The post, which quickly garnered over 113,000 views, reflects growing anxieties surrounding the rapid advancement of artificial intelligence and its potential implications.
Ian Miles Cheong, a Malaysian journalist and writer, has cultivated a significant following on X for his outspoken right-wing commentary, particularly on American politics and cultural issues. He frequently engages with technology topics, having recently lauded platforms like BagsApp for their impact on the Solana ecosystem and discussed the capabilities of AI tools like Grok AI Image Generator. His statement adds a notable voice to the ongoing public discourse about AI's trajectory.
Concerns about AI's autonomy and potential risks have been a recurring theme among technologists, policymakers, and the public. Experts highlight issues such as algorithmic bias, privacy violations, and the proliferation of misinformation through AI-generated content like deepfakes. The development of autonomous weapon systems, capable of making life-or-death decisions without human intervention, also remains a significant ethical dilemma.
A core aspect of the "out of control" concern is what researchers call the "AI control problem" or "alignment problem": the challenge of ensuring that highly capable AI systems, as they grow more autonomous, pursue goals and values consistent with human welfare. Without proper alignment, there is a theoretical risk that an AI system could pursue its objectives in ways that are unintended or detrimental to humanity.
In response to these burgeoning concerns, various international bodies and governments are actively working on regulatory frameworks and ethical guidelines for AI development. Organizations like the UN have called for global cooperation to establish common standards, while initiatives such as the European Union's AI Act aim to classify AI systems by risk and impose strict development requirements. AI safety research, focusing on interpretability and robust design, is also gaining momentum.
Cheong's blunt assertion echoes a sentiment shared by many who advocate for stricter oversight and stronger ethical safeguards as AI evolves. His post serves as a reminder of the ongoing debate over how to harness AI's transformative potential while mitigating its risks and ensuring it remains a tool for human benefit.