Comreton AI - Transparent & Auditable AI Marketplace

Project Name

Comreton AI - Transparent & Auditable AI Marketplace

Problem Statement

Today’s AI systems are largely black boxes—users can’t inspect how models make decisions, whether they’re fair, or if they’re even functioning as advertised.

Developers lack transparent, on-chain tools to monetize their models, while users, researchers, and regulators have no way to independently audit performance, safety, or bias.

This lack of transparency erodes trust, reinforces centralized control by a few tech giants, and limits open innovation in the AI ecosystem.

Solution Overview

Comreton AI reimagines AI as a transparent, trustless public good by making every inference step—down to individual neural network layers—verifiable on-chain.

We introduce a universal model standard and SDK that converts popular AI models into blockchain-executable formats without sacrificing performance or auditability.

Before going live, models are peer-reviewed by community auditors who stake tokens, ensuring safety, fairness, and compliance.
Once verified, models are deployed on-chain via Alith’s optimized infrastructure, enabling secure, fee-based access for users.

This creates a fully decentralized AI marketplace where creators earn sustainably, auditors uphold quality, and users gain access to transparent, provably fair AI models they can trust.

Project Description

Comreton AI is building the world’s first transparent and auditable AI marketplace, powered by the Hyperion blockchain and optimized through Alith’s on-chain inference engine.

Today, AI functions like a black box—users can’t verify how decisions are made, developers struggle to monetize models fairly, and trust is continually eroded. We’re solving that.

Core Functionality:

  • On-Chain Transparency: Every layer of an AI model runs on-chain and emits verifiable proofs, enabling anyone to audit the model’s behavior step-by-step.

  • Universal Model Conversion: Our SDK converts models from TensorFlow, PyTorch, or ONNX into a blockchain-executable format—with auditability built in from day one (a rough sketch of this flow follows after this list).

  • Decentralized Quality Control: Community auditors stake tokens to review models for bias, safety, and performance before they go live—ensuring integrity without gatekeepers.
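
To make the conversion bullet concrete, here is a rough sketch of what a creator might run locally. The ONNX export uses standard PyTorch tooling; the `comreton_sdk` import and its options are hypothetical placeholders shown only in comments, not a published API.

```python
import torch
import torch.nn as nn

# Any ordinary PyTorch model can serve as the starting point.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
dummy_input = torch.randn(1, 784)

# Step 1 (standard tooling): export to ONNX, one of the supported source formats.
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)

# Step 2 (hypothetical SDK surface): hand the exported graph to the converter,
# which would inject per-layer audit hooks and emit a chain-executable artifact.
# from comreton_sdk import convert                  # placeholder import
# artifact = convert("model.onnx",
#                    audit_hooks="per_layer",       # record inputs/weights/activations
#                    quantization="int8")           # optional size/gas optimization
# artifact.publish(price_per_inference=...)         # register and list on-chain
```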

Tech Stack & Architecture:

  • Execution Layer: Hyperion + MetisVM for parallelized, high-speed smart contract execution
  • Optimization Engine: Alith (Rust-based) for compiling and compressing models into blockchain-efficient formats
  • Storage: IPFS for decentralized model file storage
  • Smart Contracts: For model registration, staking, inference payments, and reputation management
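
To make the smart-contract layer concrete, here is a minimal sketch of how a registration call could look from a Python client. The RPC endpoint, contract address, ABI, and function name are illustrative assumptions, not the deployed Comreton contracts.

```python
import json

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://hyperion-rpc.example"))  # placeholder RPC URL

# Hypothetical registry ABI: one function recording a model's IPFS CID and price.
REGISTRY_ABI = json.loads("""[
  {"name": "registerModel", "type": "function", "stateMutability": "nonpayable",
   "inputs": [{"name": "ipfsCid", "type": "string"},
              {"name": "pricePerInference", "type": "uint256"}],
   "outputs": []}
]""")

registry = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder address
    abi=REGISTRY_ABI,
)

def register_model(creator: str, cid: str, price_wei: int):
    """Submit the optimized model's IPFS CID and its per-inference price.

    Assumes `creator` is an account the connected node can sign for.
    """
    return registry.functions.registerModel(cid, price_wei).transact({"from": creator})
```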

How Users Interact:

  • Creators: Upload models via our SDK and choose pricing. We handle the rest—optimization, registration, and listing.
  • Auditors: Stake tokens to review models. Earn rewards for honest, accurate verification.
  • End Users: Browse models like an app store. Run AI with one click, without needing compute—results are delivered verifiably and transparently.

What Excites Us:

Comreton AI turns opaque, centralized AI into a provable, community-owned infrastructure. Imagine verifying—mathematically—that:

  • A hiring model isn’t biased by gender.
  • A medical diagnosis AI followed best practices.
  • A financial model acted fairly and consistently.

We’re not just building another AI tool—we’re building the foundation of trust for AI’s future.

Community Engagement Features

Our platform is designed to onboard and retain users through gamified participation and collaborative incentives:

  • Creator Track: Upload models, convert them to our transparent format, and earn revenue as your models gain usage and trust.
  • Auditor Track: Review models for integrity, stake tokens to verify quality, and build your reputation as a trusted AI auditor.
  • User Track: Run inferences, explore different models, share feedback, and refer others to grow the ecosystem.

To make this fun and rewarding:

  • Leaderboards spotlight top contributors weekly.
  • Achievement Badges & NFTs mark important milestones.
  • Tiers & Challenges unlock new benefits and recognition.
  • Social Sharing builds community visibility and collaboration.

Getting Involved

  • AI Creators: Upload and monetize your models using our SDK and transparent deployment tools.
  • Auditors: Join the verification process, stake to review models, and earn by ensuring AI integrity.
  • Developers: Contribute to open-source components, tooling, and ecosystem integrations.
  • Users: Explore, test, and give feedback on models—help shape a trustworthy AI future.

Hello @Legend101Zz, how are you?

How do you ensure inference on-chain doesn’t drastically increase gas costs or latency, especially for deep neural models?


This sounds like a game-changer for AI transparency and trust—finally giving users and regulators a way to verify model behavior.

If I’m a regular user relying on a model from Comreton AI—for example, for financial advice or medical guidance—how can I easily understand or verify why the model gave me a certain output, without needing deep technical knowledge?


Hi @priyankg3 — apologies for the delayed response, I had my notifications turned off.

So that question actually touches on one of the core bottlenecks our team is actively working to solve. While we don’t have a completely finalized answer yet (as the architecture is still evolving), here’s the current direction we’re taking:

  1. Custom Opcodes via MetisVM
    We’re leveraging Hyperion’s MetisVM, which includes specialized precompiled contracts tailored for AI workloads — such as matrix multiplications, activation functions, and convolutions. These significantly reduce the on-chain inference cost.

  2. Bulk Execution Model
    Rather than executing thousands of discrete operations on-chain (which would be prohibitively expensive), we batch them into gas-efficient precompile calls. This dramatically reduces the gas footprint per inference.

  3. Aggressive Model Optimization
    One of our core goals is to aggressively optimize models for size and performance — including quantization (INT8/FP16), pruning, and other compression techniques — while still preserving accuracy and inference quality (a small illustration follows after this list).

  4. Usage Fees as Gas
    Users pay for each inference as gas, so yes, running AI on-chain is expensive; but the same is true of Web2 AI cloud deployments, where compute is metered and billed too.
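
To give a feel for point 3, the snippet below shows off-the-shelf INT8 dynamic quantization in PyTorch. Alith’s actual pipeline is Rust-based and more involved; this is only a stand-in to show the kind of size reduction we aim for before anything touches the chain.

```python
import io

import torch
import torch.nn as nn

# A toy model standing in for something much larger.
model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))

# INT8 dynamic quantization of the Linear layers (weights stored as 8-bit integers).
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def serialized_size(m: nn.Module) -> int:
    """Size of the model's weights when serialized, in bytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

print("fp32 weights:", serialized_size(model), "bytes")
print("int8 weights:", serialized_size(quantized), "bytes")  # roughly 4x smaller
```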


The key insight here is that transparency comes at a cost, but we’re making it economically viable through a combination of low-level technical optimizations and novel economic design.



Hi @Han — apologies for the delay in getting back to you, I had my notifications turned off!

Your question really cuts to the core of what we’re trying to solve:

“If I’m a regular user relying on a model from Comreton AI—for example, for financial advice or medical guidance—how can I easily understand or verify why the model gave me a certain output, without needing deep technical knowledge?”

To be honest, the users we’re currently targeting are more developer-focused — think of how people interact with models on Hugging Face. You download the model, set up the dependencies, and run your own evaluations — but you never really get to see what’s happening inside. There’s no built-in mechanism for trust or interpretability. With Comreton, we’re trying to change that.

Unlike traditional ML platforms, our models live on-chain — which means you don’t just get an output, you get a provable, auditable trail of how that output came to be.


What actually happens when we convert a model?

When we bring a model into Comreton’s ecosystem (via our SDK), we don’t just copy it onto the chain. We instrument it. That means injecting audit hooks at important checkpoints within the model — right down to the individual layers.

Original Model:        Input → [Dense Layer] → Output

Comreton Conversion:   Input 
                        ↓
                [Audit Hook: Input State]
                        ↓
                    [Dense Layer]
                        ↓
           [Audit Hook: Weights + Activation]
                        ↓
                      Output

These hooks capture things like:

  • The inputs at each layer
  • The actual weights being used
  • Post-activation values
  • For transformer models, even the attention weights from each head

This allows us to reconstruct and analyze every forward pass — either in real-time or post-execution.
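
If you want to picture what those hooks do, here is a minimal off-chain sketch using standard PyTorch forward hooks. The hashing scheme is an illustrative assumption; the production instrumentation lives in the converted on-chain representation rather than in Python, but the captured fields follow the same idea.

```python
import hashlib

import torch
import torch.nn as nn

def tensor_digest(t: torch.Tensor) -> str:
    """Deterministic SHA-256 digest of a tensor's raw contents."""
    return hashlib.sha256(t.detach().cpu().numpy().tobytes()).hexdigest()

audit_trail = []  # one entry per instrumented layer, per forward pass

def make_hook(name: str):
    def hook(module, inputs, output):
        audit_trail.append({
            "layer": name,
            "input_hash": tensor_digest(inputs[0]),
            "weight_hash": tensor_digest(module.weight),
            "output_hash": tensor_digest(output),
        })
    return hook

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
for name, layer in model.named_modules():
    if isinstance(layer, nn.Linear):
        layer.register_forward_hook(make_hook(name))

_ = model(torch.randn(1, 784))
for entry in audit_trail:
    print(entry["layer"], entry["output_hash"][:16])
```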


What does execution look like on-chain?

When the model runs on MetisVM, it doesn’t just compute blindly. It walks through a layer-by-layer state machine, where each stage is captured and cryptographically verified.

MetisVM Trace:
┌──────────────┐    ┌──────────────────────┐    ┌────────────────────┐
│ Input Layer  │───▶│ Audit Hook (Input)   │───▶│ Audit Hook (Output)│
│ [28×28 px]   │    │ - hash(input)        │    │ - hash(output)     │
└──────────────┘    │ - norm/range checks  │    └────────────────────┘
                    └──────────────────────┘

This ensures that what’s executing is the same as what was verified, and that there’s no tampering at runtime. Every transition is anchored by a state hash, which becomes part of the public blockchain record.
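
As a toy illustration of that anchoring, assume a chain of the form h_i = H(h_{i-1} || digest_i), where digest_i is a per-layer digest like the ones captured above. The exact commitment format used on MetisVM is not specified here, so treat this purely as a sketch.

```python
import hashlib

def chain_state_hashes(layer_digests: list[str], genesis: bytes = b"comreton-trace-v0") -> list[str]:
    """Fold per-layer digests into a running state hash; the last entry commits to the whole pass."""
    h = hashlib.sha256(genesis).digest()
    anchors = []
    for digest in layer_digests:
        h = hashlib.sha256(h + bytes.fromhex(digest)).digest()
        anchors.append(h.hex())
    return anchors

# Example: reuse the per-layer output hashes captured by the hooks in the previous sketch.
# trace = chain_state_hashes([e["output_hash"] for e in audit_trail])
# trace[-1] is the value that would be recorded on-chain for this inference.
```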


What kind of mathematical guarantees do you get?

Every forward pass emits a zk-SNARK proof, giving you full mathematical assurance without revealing the actual model internals.

On top of that, we compute gradient-based sensitivity analysis to help explain why the model predicted what it did, and in the case of classification we also emit confidence intervals alongside each result.
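
For the sensitivity part specifically, the underlying math is the standard gradient-based saliency computation sketched below; the softmax scores stand in for the confidence figures mentioned above, and the on-chain proof layer that wraps all of this is out of scope for the snippet.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

x = torch.randn(1, 784, requires_grad=True)
logits = model(x)
confidence = torch.softmax(logits, dim=-1)   # per-class confidence scores
predicted = logits.argmax(dim=-1)

# Sensitivity: d(score of predicted class) / d(input).
# Larger magnitude means that input feature influenced the prediction more.
logits[0, predicted].backward()
saliency = x.grad.abs().squeeze(0)

print("predicted class:", predicted.item())
print("confidence:", round(confidence[0, predicted].item(), 3))
print("most influential inputs:", saliency.topk(5).indices.tolist())
```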


So what does the user actually see?

This results in a model experience that feels more like debugging — and less like black-box magic.

When you get a result, you also receive:

  • A layer-by-layer visualization of what happened internally
  • A breakdown of feature importance (i.e., what input mattered most)
  • Attention maps if the model is a transformer
  • Confidence scores, and mathematical guarantees
  • The ability to verify everything via cryptographic proofs — publicly and forever

Compared to Hugging Face

To give a quick sense of where we differ:

| Feature | Hugging Face | Comreton AI |
| --- | --- | --- |
| Setup Time | 2–4 hours (env, CUDA, deps) | 0 seconds (on-chain ready) |
| Output Verification | Trust the README | Cryptographically provable |
| Explainability | None (black box) | Full layer-by-layer trace |
| Bias/Robustness Detection | Manual testing | Automated + auditable |
| Reproducibility | Depends on setup | Guaranteed (blockchain-backed) |
| Version Control | Git tags (mutable) | Immutable contract versions |
| Performance | Local hardware dependent | Gas-optimized execution on MetisVM |

So yeah, while there’s still a lot of polish left, this is the direction we’re headed — explainability, accountability, and openness baked into every step of the pipeline :grinning_face_with_smiling_eyes:


Thanks for the clear explanation! It’s great to hear Comreton is prioritizing transparency and trust by making models auditable on-chain. Looking forward to seeing how this improves user confidence, especially for non-technical users.


The approach with MetisVM and model optimizations sounds promising; excited to see how it evolves. Will be following closely :fire:


This is a really great idea that could change things, and an especially good example of how to build something that uses both Hyperion and Alith together.
