Alith Starter Guide

HyperHack – Alith Integrations

Alith is Hyperion’s experimental AI subsystem that enables developers to build AI-native dApps and autonomous agents that are verifiable, privacy-preserving, and aligned with user intent. It provides APIs and tooling for integrating AI inference, data alignment, and agent logic into decentralized applications—allowing builders to create agents that act intelligently while staying accountable onchain.

What is Alith Inference?

Alith Inference is the backbone of Hyperion’s AI capabilities. It leverages MetisVM’s AI-optimized infrastructure to run intelligent logic on-chain—supporting use cases like decentralized decision-making, autonomous agents, and real-time AI.

Built on MetisVM, Alith provides:

  • Efficient On-Chain Inference: Run AI models directly in smart contracts using precompiled execution paths and quantized model support.

  • Verifiable Results: Use zkVM integrations to generate zero-knowledge proofs of model inference without revealing the data or model.

  • AI Acceleration: Tap into SIMD, GPU, or FPGA-backed acceleration for compute-heavy AI logic.

  • Security & Governance Controls: Model versioning, oracle sanity checks, and phased rollouts for secure on-chain AI behavior.

Think of Alith as your framework for building trust-minimized, performant, and composable AI-native dApps on Hyperion.
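
The "Verifiable Results" idea above can be sketched as a commit-and-check pattern: commit to the model and input up front, publish the result, and let anyone recheck the commitment. This is an illustrative Python mock, not the Alith API — `commitment`, `run_inference`, and the `credit-scorer-v1` model name are hypothetical stand-ins, and a real deployment would replace the hash check with a zkVM proof.

```python
import hashlib
import json

def commitment(model_id: str, input_data: dict) -> str:
    """Hash the model identifier and input so the inference request
    can be committed on-chain before the result is published."""
    payload = json.dumps({"model": model_id, "input": input_data}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_inference(input_data: dict) -> dict:
    """Stand-in for an Alith model call; here a trivial rule-based score."""
    score = min(100, input_data.get("tx_count", 0) * 2)
    return {"score": score}

# A requester commits to the model and input...
c = commitment("credit-scorer-v1", {"tx_count": 21})
# ...the result is produced off-chain and published alongside the commitment.
result = run_inference({"tx_count": 21})
# A verifier recomputes the commitment to confirm the result answers the
# committed request (a zk proof would additionally attest the computation).
assert c == commitment("credit-scorer-v1", {"tx_count": 21})
print(result["score"])
```

The key property is that the commitment is fixed before the result exists, so the prover cannot quietly swap the model or input after the fact.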

Why Alith Matters for HyperHack

Alith is one of HyperHack’s Tier 1 tracks, offering teams the chance to explore:

  • Onchain AI agents that take verifiable actions in DeFi, gaming, and governance

  • Decentralized inference or decision-making systems for real-time use cases

  • Privacy-aware or pseudonymous agents using ZK and credential-based access

  • Crowdsourced feedback loops for training, scoring, or aligning AI models

Whether you’re building autonomous NPCs, predictive modules, or governance tools, Alith is your playground for inventing trustworthy AI systems that operate at the speed and scale of Hyperion.

Alith Integrations Track

The Alith Integration Prize Track rewards teams building AI-native and data-aligned dApps that integrate with Alith, the experimental AI subsystem co-deployed with Hyperion. Projects should push the boundaries of decentralized intelligence, data coordination, and real-time inference.

Prize Pool: $30,000+ in special prizes, to be split among Alith integrations

Bonus: Potential feature spotlight in the LazAI testnet takeover event (July)


How to Qualify for the Alith Prize Track:

To be considered for this category, your project must:

  1. Integrate Alith AI APIs (inference, alignment tasks, AI agents, etc.)

  2. Focus on on-chain or hybrid use of Alith (e.g., decision-making, prediction markets, model calls)

  3. Provide documentation and testing tasks to help the community test and validate your dApp

  4. Submit during the main Hackathon phase with the Alith category selected in your application


Example Use Cases:

  • Decentralized AI Alignment Markets: Crowdsource model alignment data using token-incentivized prompts.

  • AI NPCs / Agents in Games: On-chain, evolving NPCs powered by Alith behavior models.

  • Predictive Governance Modules: Use Alith to generate proposals or simulate governance outcomes before votes.

  • Autonomous Credit Scoring: Run on-chain predictions based on user activity.

  • Model QA Feedback Loops: Allow users to “correct” AI outputs and reward aligned improvements.

  • AI-Powered Oracle Interfaces: Real-time info interpretation through LLMs integrated with Hyperion data.
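
As a rough sketch of the "Model QA Feedback Loops" idea above, the snippet below splits a reward pool among users whose correction matches the majority label. This is a naive, self-contained Python illustration; `settle_feedback` and the majority-vote alignment signal are assumptions for the sketch, not part of Alith.

```python
from collections import Counter

def settle_feedback(submissions: dict[str, str], reward_pool: int) -> dict[str, int]:
    """Reward users whose correction matches the majority label.

    submissions maps user -> proposed label; the pool is split evenly
    among majority voters (any integer remainder stays in the pool).
    """
    counts = Counter(submissions.values())
    majority_label, _ = counts.most_common(1)[0]
    winners = [user for user, label in submissions.items() if label == majority_label]
    share = reward_pool // len(winners)
    return {user: share for user in winners}

# Two of three submitters agree, so they split the pool.
payouts = settle_feedback(
    {"alice": "cat", "bob": "cat", "carol": "dog"}, reward_pool=90
)
print(payouts)
```

A production version would need sybil resistance and a better alignment signal than raw majority vote, but the settle-and-reward shape stays the same.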


Quickstart Checklist

Here’s how to get started with the Alith integration path:

Step 1: Read the Alith Integration Docs (link TBD)

Step 2: Clone a starter template from the GitHub repository 0xLazAI/alith (“Simple, Composable, High-Performance, Safe and Web3 Friendly AI Agents and LazAI Gateway for Everyone”)

Step 3: Join the Discourse and Telegram group for #Alith-Integrators

Step 4: Submit your idea via the Hackathon Application Form (tag Alith, metisdevs)

Step 5: Prepare tasks for community testers by July 10th (tutorials, bug reports, feedback forms)

Step 6: Participate in the Alith “Takeover” event starting July 1st!

Additional Resources

Is there a GitHub starter template for setting up and performing on-chain inference with MetisVM?

As outlined in this:

Thank you for the resources and information :victory_hand:

How would you design an architecture for an AI agent built with Alith to make an on-chain decision that is both privacy-preserving and verifiable using zero-knowledge proofs (zk-proofs)?

Thanks for sharing the resources, it’s really useful for new joiners. :slight_smile:

To ensure both privacy and verifiability, we can run Alith on a TEE (Trusted Execution Environment) node. This setup securely protects wallet keys, private data, models, and other sensitive components. From there, we can integrate zkML or similar technologies to generate zero-knowledge proofs of the AI agent’s decision-making process. These proofs can then be published on-chain, enabling verifiable, privacy-preserving on-chain decisions.
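
The TEE-plus-proof flow described above can be mocked in a few lines. In this Python sketch, an HMAC tag stands in for the zkML proof and a hard-coded key stands in for remote attestation; `tee_decide`, `verify_onchain`, and `TEE_KEY` are all hypothetical names for illustration, and nothing here is a real enclave or zk circuit.

```python
import hashlib
import hmac

# Stand-in for a key bound to the enclave via remote attestation.
TEE_KEY = b"attested-enclave-key"

def tee_decide(private_balance: int, threshold: int) -> tuple[bool, str]:
    """Inside the enclave: decide on private data, then emit only the
    decision plus a MAC over it (stand-in for a zkML proof).
    The private balance itself never leaves this function."""
    decision = private_balance >= threshold
    tag = hmac.new(TEE_KEY, f"{threshold}:{decision}".encode(), hashlib.sha256).hexdigest()
    return decision, tag

def verify_onchain(threshold: int, decision: bool, tag: str) -> bool:
    """A contract holding the attested key (or, realistically, a zk
    verifier) checks the decision without seeing the private input."""
    expected = hmac.new(TEE_KEY, f"{threshold}:{decision}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

decision, tag = tee_decide(private_balance=1_500, threshold=1_000)
print(verify_onchain(1_000, decision, tag))
```

The shape to notice: only `(decision, tag)` crosses the trust boundary, so the chain can verify the outcome while the balance stays private.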

That’s the kind of architecture that earns long-term trust. Combining a TEE with zkML isn’t just technically sound; it’s a clear signal you’re designing for real-world accountability and user control. Curious to see how this scales in production.
