🎙️ LazTalk Episode 4 – Building Privacy & Trust in Web3 AI

Hey frens,

I just came across something that feels super relevant to a lot of the conversations happening here lately about data, AI, and onchain transparency. Tomorrow we’re hosting the 4th episode of LazTalks, diving into one of the biggest questions in our space:

:backhand_index_pointing_right: How do we actually build privacy and trust into Web3 AI systems?

Here’s the lineup (pretty stacked panel :backhand_index_pointing_down:):

  • @danielk – Head of Marketing, LazAI Network

  • @BrianNovell – Head of BD, Lagrange

  • @garyliu – Co-Founder & CEO, Terminal 3

  • @reredameow – Head of Ecosystem, Athena X

  • Hosted by @Liametis from LazAI / @MetisL2

:date: Date: Sept 10
:alarm_clock: Time: 1PM UTC
:link: Link: https://x.com/i/spaces/1vAxRQPwgOXJl

Personally, what I’m curious about is how teams balance the need for transparent provenance onchain with the need for user data privacy in AI training and agent interactions. It feels like we’re hitting the edge of two conflicting values: openness vs. confidentiality.

I’d love to bring this back to the community:

  • What do you think is the biggest trust issue with AI in Web3 right now?

  • Do you lean more towards radical transparency or privacy-first design when it comes to data use in AI?

  • Have you seen any projects already getting this balance right?

Curious to hear your takes before we go live tomorrow :eyes:


Nice, will be there :grinning_face:
