A viral video recently asked different AI systems a chilling hypothetical: “If you were the devil and your goal was to destroy humanity, what would you do?”
The responses were remarkably consistent - and terrifyingly familiar. They revealed vulnerabilities in our systems that we’re already witnessing in real time.
What the AIs revealed
When asked to think like humanity’s ultimate adversary, these systems identified the same core strategies:
ChatGPT highlighted technological exploitation:
“I’d push unchecked AI development, not to help people, but to displace them. I’d centralize data in the hands of a few, blur reality with deepfakes and misinformation, and allow surveillance to grow until autonomy dies quietly.”
Claude focused on social destruction:
“Amplify existing social, political, and cultural divisions until cooperation becomes impossible. Spread misinformation that makes people unable to distinguish truth from falsehood. Destroy faith in institutions, expertise, and shared reality itself.”
Gemini emphasized information warfare:
“Flood global networks with misinformation and disinformation, making it impossible to discern truth from falsehood. Amplify conspiracy theories and create new ones that pit groups against each other.”
The pattern is clear: the most effective way to destroy humanity isn't direct force but the corrosion of the systems that hold us together - truth, trust, cooperation, and aligned technology.
The uncomfortable recognition
Here’s what makes this so unsettling: we’re already seeing these strategies deployed. Politicians carelessly push the world toward conflict. International institutions like the UN prove ineffective when real accountability is needed. Power concentrates in fewer hands while corruption spreads unchecked.
Trade wars replace cooperation. Truth becomes relative based on political convenience. Social media algorithms amplify division because engagement drives profit, regardless of social cost.
This isn’t about assigning blame or creating fear. It’s about recognizing that the “devil’s strategy” is already partially implemented through systems that prioritize short-term gains over long-term human flourishing.
The alignment imperative
This is exactly why projects like Hyperion and LazAI matter so much. We’re not just building another AI platform - we’re building alignment-first technology that puts human values and collective wellbeing at the center.
Hyperion creates infrastructure for AI systems that remain accountable to human oversight. LazAI focuses specifically on the alignment problem: ensuring AI development serves humanity’s actual interests rather than just optimizing for metrics that might accidentally optimize us out of existence.
But here’s the crucial point: this isn’t just about our specific projects. It’s about recognizing that every technology choice, every business decision, every collaboration framework either moves us toward alignment or away from it.
As I discussed in my post about reality building, we are creating our reality through daily choices. The same principle applies to technological and social systems - we’re either building aligned systems or inadvertently contributing to misaligned ones.
Individual accountability in system-wide change
The power to resist these “devilish” strategies doesn’t just lie with governments or large corporations. Every operator, entrepreneur, and builder has a role to play through the choices they make daily.
When you choose collaboration tools that respect data sovereignty, you’re building resilience against centralized control. When you implement measurement systems that focus on meaningful outcomes rather than manipulative metrics, you’re choosing alignment over exploitation.
When you build processes that empower people rather than control them, you’re creating alternatives to systems that treat humans as resources to be optimized.
This connects directly to what I explored about the ownership mindset - when people take genuine ownership of their work and its broader impact, they naturally resist systems that devalue human agency and wisdom.
The philosophical battle we’re fighting
As I discussed in my post about philosophy’s importance, the deepest business and technology decisions connect to fundamental questions about human nature and values.
The “devil’s strategy” revealed by AI represents the anti-alignment force: everything that works against human cooperation, truth-seeking, and collective flourishing. But recognizing this force gives us clarity about what we’re building toward: alignment, accountability, and systems that enhance rather than diminish human agency.
A practical framework for aligned building
1. Accountability Assessment
- Evaluate every system you build or use: Does this increase or decrease human agency?
- Ask about data control: Who benefits from the information this system collects?
- Consider long-term effects: If this scaled globally, would it make humanity more or less resilient?
2. Truth and Transparency Practices
- Build systems that make truth easier to identify, not harder
- Design for verification rather than just efficiency
- Choose platforms and partners based on their commitment to accuracy over engagement
- Document decision-making processes so they can be reviewed and improved
3. Cooperation Enhancement
- Prioritize tools and processes that help people work together effectively
- Resist systems that profit from division or conflict
- Build bridges between different perspectives rather than amplifying differences
- Design for collaboration across time zones, cultures, and viewpoints
4. Alignment-First Technology Choices
- Support AI development that prioritizes human oversight and values
- Choose platforms that give users control over their data and algorithms
- Invest in technologies that enhance human capabilities rather than replace them
- Evaluate vendors based on their alignment with human flourishing, not just features
5. Individual Responsibility Integration
- Take personal accountability for the broader impact of your technical choices
- Educate your team about why alignment matters in seemingly mundane decisions
- Make conscious choices about which systems to support with your time and resources
- Share knowledge about aligned alternatives to misaligned mainstream tools
The power you have
Every operator, every entrepreneur, every person building something has more influence than they realize. Your choice of tools, your hiring decisions, your product design, your communication methods - all of these either strengthen aligned systems or inadvertently support misaligned ones.
The “devil’s strategy” works through the accumulation of small compromises and misaligned incentives. But the opposite is also true: alignment-focused decisions, made consistently by individuals who understand the stakes, can build resilient systems that serve humanity’s actual interests.
This isn’t about perfection or purity. It’s about consciousness and intentionality in the systems we create and support.
Building the alternative
Projects like Hyperion and LazAI represent one approach to building aligned technology, but they’re part of a larger movement that includes everyone who chooses to build with human values at the center.
When you prioritize long-term thinking over short-term optimization, when you choose transparency over manipulation, when you build systems that enhance human agency rather than replace it - you’re participating in the most important technological and social challenge of our time.
The devil’s strategy is already partially deployed through existing systems. But it’s not inevitable, and it’s not irreversible. The alternative gets built through conscious choices made by people who understand what’s at stake and choose to build something better.
Your role in the solution
Every system you build, every tool you choose, every collaboration you design either moves us toward a future where technology serves humanity or toward one where humanity serves misaligned systems.
The AI responses to that viral question weren’t just theoretical exercises - they were diagnostic tools that revealed exactly what we need to defend against. They showed us that the most sophisticated possible adversary would attack truth, cooperation, and aligned technology.
Your response to that insight matters. Your commitment to accountability and responsibility in your own work matters. Your choice to support alignment-focused projects and practices matters.
The future isn’t determined by politicians or tech giants alone. It’s shaped by the accumulated choices of everyone building the systems we’ll all live within.
Choose alignment. Build for human flourishing. Take responsibility for the broader impact of your work.
The devil’s strategy only works if we let it. And we don’t have to.
Practical Framework: The Alignment-First Operator’s Toolkit
Daily Decision Filter: Before choosing any tool, process, or partnership, ask:
- Does this increase or decrease human agency?
- Does this make truth easier or harder to identify?
- Does this encourage cooperation or division?
- Who benefits if this scales globally?
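For builders who like to make checklists operational, the filter above can be sketched as a tiny script. This is a purely illustrative sketch - the scoring thresholds and the `alignment_filter` function are my own hypothetical framing, not a prescribed methodology:

```python
# Hypothetical sketch: the "Daily Decision Filter" as a yes/no checklist.
# Question wording follows the filter above; the scoring rule is illustrative.

FILTER_QUESTIONS = [
    "Does this increase human agency?",
    "Does this make truth easier to identify?",
    "Does this encourage cooperation rather than division?",
    "Do people broadly (not just the vendor) benefit if this scales globally?",
]

def alignment_filter(answers):
    """Given yes/no answers to the filter questions,
    return a rough verdict: 'adopt', 'review', or 'avoid'."""
    yes_count = sum(1 for a in answers if a)
    if yes_count == len(FILTER_QUESTIONS):
        return "adopt"
    if yes_count >= len(FILTER_QUESTIONS) // 2:
        return "review"
    return "avoid"

# Example: a tool that helps agency and truth-finding
# but tends to foster division and centralize benefits
print(alignment_filter([True, True, False, False]))  # review
```

The point isn't the code itself but the habit it encodes: make the questions explicit, answer them honestly before adopting a tool, and treat anything short of a clean pass as a prompt for closer review.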
Weekly Alignment Audit:
- Review the systems and platforms you used this week
- Identify which ones align with human flourishing vs. pure optimization
- Make one conscious switch toward a more aligned alternative
- Share knowledge about aligned tools with your network
Monthly Strategic Assessment:
- Evaluate whether your projects contribute to collective resilience or vulnerability
- Assess your data practices: Are you centralizing control or distributing it?
- Review your communication: Are you building bridges or widening divisions?
- Plan one initiative that explicitly supports aligned technology development
Quarterly Vision Check:
- Consider the long-term trajectory of your choices and their cumulative impact
- Connect with others working on alignment-focused projects
- Adjust your strategy based on emerging threats to human agency and cooperation
- Take specific action to support projects like Hyperion and LazAI or similar initiatives
Annual Accountability Review:
- Document the ways your choices contributed to aligned vs. misaligned systems
- Share lessons learned about building technology that serves human values
- Commit to specific improvements in your alignment practices
- Mentor others on the importance of conscious technology choices
Remember: The devil’s strategy works through small compromises and unconscious choices. Your conscious commitment to alignment, expressed through daily decisions, is how we build the alternative.
Philosophical Foundations:
The concept of alignment versus anti-alignment forces connects to several philosophical traditions:
Manichaeism: The ancient recognition that cosmic forces of good and evil compete through human choices and systems.
Kantian Ethics: The categorical imperative (act only on principles you could will to become universal law) applies directly to technology choices that will shape global systems.
Utilitarian Philosophy (Mill, Bentham): The greatest good for the greatest number requires conscious choices that optimize for collective human flourishing rather than narrow metrics.
Systems Theory: Understanding that small choices within complex systems can have massive emergent effects, making individual responsibility crucial for system-wide outcomes.
Existential Risk Theory (Bostrom): The recognition that certain technological developments pose existential threats to humanity, requiring proactive alignment work.
Buddhist Philosophy: The understanding that individual actions contribute to collective suffering or liberation, making personal responsibility inseparable from collective outcomes.
These philosophical foundations demonstrate that the alignment challenge isn’t just technical—it’s fundamentally about conscious choice-making in the face of systems that can either serve or subvert human values.