GM GM, apologies for the delayed response; I was in the middle of relocating and just got everything settled. Thanks again for your interest in our project and for the thoughtful questions. Let me address each of them below.
1) If the dataset is private, how do you ensure Auditors don’t misuse or leak it?
Great question. In our current model, auditors must undergo two layers of validation before they’re granted access:
- First, they are verified by us (the platform) based on identity and domain expertise.
- Second, they are validated by the DAO, ensuring community-level accountability.
While some degree of trust in auditors is still required, just as in real-world auditing, we are actively exploring ways to make this process more trustless.
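To make this concrete, here is a minimal TypeScript sketch of how the two-layer gate could work. Everything here is illustrative: names like `platformVerified`, `daoApproved`, and `canAccessDataset` are assumptions for the example, not our actual contracts or API.

```ts
// Hypothetical record for an auditor; field names are illustrative only.
interface AuditorRecord {
  address: string;
  platformVerified: boolean; // layer 1: identity + domain expertise, checked by the platform
  daoApproved: boolean;      // layer 2: community-level validation by the DAO
}

// Dataset access is granted only when BOTH layers have passed.
function canAccessDataset(auditor: AuditorRecord): boolean {
  return auditor.platformVerified && auditor.daoApproved;
}

// Example: verified by the platform but not yet approved by the DAO -> no access.
const candidate: AuditorRecord = {
  address: "0xabc...", // placeholder address
  platformVerified: true,
  daoApproved: false,
};
console.log(canAccessDataset(candidate)); // false
```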
2) Are Creators allowed to challenge an audit report if they think it’s unfair or inaccurate?
Yes, and we consider it a crucial safeguard.
We agree that auditors shouldn’t have unchecked authority. Creators will have the ability to challenge audit reports, either by:
- Requesting a re-audit (potentially by a different auditor or group of auditors), or
- Submitting a formal rebuttal that gets appended to the audit report for transparency (a rough sketch of both paths is just below).
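To illustrate, here is a rough TypeScript sketch of how the two challenge paths could be modeled. The types and the `applyChallenge` helper are hypothetical and only meant to show the flow, not our final schema.

```ts
// Hypothetical challenge model; "re-audit" and "rebuttal" mirror the two options above.
type ChallengeKind = "re-audit" | "rebuttal";

interface AuditReport {
  id: string;
  findings: string;
  rebuttals: string[]; // formal rebuttals appended for transparency
}

interface Challenge {
  reportId: string;
  kind: ChallengeKind;
  body: string; // the creator's rebuttal text, or the rationale for a re-audit
}

// A rebuttal is appended to the report so both sides stay visible;
// a re-audit request would instead route the report to a different
// auditor (or group of auditors), which we leave as a stub here.
function applyChallenge(report: AuditReport, challenge: Challenge): AuditReport {
  if (challenge.kind === "rebuttal") {
    return { ...report, rebuttals: [...report.rebuttals, challenge.body] };
  }
  // "re-audit": enqueue the report for a fresh audit (stubbed out in this sketch).
  return report;
}
```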
3) What kinds of use cases are you targeting first – trading bots, oracles, LLMs?
Our initial focus is on trading bots and LLMs. These use cases combine high impact with an urgent need for trust and verifiability.