AI + Trusted Execution Environment

This section of the Devnet provides AI developers and service providers with everything they need to build verifiable, cross-chain AI workflows using Pi Squared’s Verifiable Settlement Layer (VSL). It focuses on integrating AI agents with Trusted Execution Environments (TEEs) and submitting verifiable claims about their execution to the VSL.

The VSL-AI-app-examples repository contains working examples of AI clients or services running in a TEE, generating proof-backed claims, and settling them on the VSL network. This repo includes clear, step-by-step setup instructions, code examples, and test flows.

To get your hands dirty, you'll work with a web application that demonstrates how to run image classification or text prompts in a TEE. The example app includes both a frontend and a backend, so you can see how the AI client interacts with the TEE and submits claims to the VSL.

Getting Started

This demo shows how AI tasks, whether based on text prompts or image inputs, are processed securely within a Trusted Execution Environment (TEE), producing cryptographic claims that can be independently verified via the Verifiable Settlement Layer (VSL).

When you upload an image or enter a plain text prompt, the task is executed inside the TEE, and a signed claim is generated. This claim, which captures both the input and the model's output, is then posted to the VSL, where it can be validated and viewed.
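The way a claim binds the input and the model's output together can be sketched in a few lines of Python. This is an illustration only: a shared HMAC key stands in for the TEE's hardware-backed signing key and attestation report, and all the names below are hypothetical, not the demo's actual API.

```python
import hashlib
import hmac
import json

# Placeholder key; a real TEE signs with a hardware-backed key and
# emits a full attestation report instead of a bare HMAC.
TEE_KEY = b"demo-enclave-key"

def make_claim(task_input: str, model_output: str) -> dict:
    """Bind input and output together and sign the pair."""
    body = {"input": task_input, "output": model_output}
    payload = json.dumps(body, sort_keys=True).encode()
    signature = hmac.new(TEE_KEY, payload, hashlib.sha256).hexdigest()
    return {**body, "signature": signature}

def verify_claim(claim: dict) -> bool:
    """Recompute the signature over input + output and compare."""
    body = {"input": claim["input"], "output": claim["output"]}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(TEE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

claim = make_claim("a photo of a cat", "label: cat")
assert verify_claim(claim)

# Tampering with the output invalidates the claim.
assert not verify_claim({**claim, "output": "label: dog"})
```

Because the signature covers both fields, neither the input nor the output can be altered after the fact without the verification failing.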

The interface shows each interaction in real time, listing the task type, creation time, claim ID, and a button to view the result. It provides a clear view of how verifiable execution works in practice across various input types.

We have deployed a live version of this demo, ready for you to explore. You can check it out here: https://tee.pi2.network/.

Note that to play around with this demo, you need to sign in with your MetaMask wallet and have VSL tokens.

If you don't have VSL tokens, you can sign up for the Devnet here!

How it works

Here’s the step-by-step flow of what happens when you interact with the demo:

  1. User request: A user interacts with the frontend and submits a prompt or image.

  2. TEE execution: The request is sent to a backend client that routes it to a TEE-based attester, which runs the AI model and computes the result.

  3. Proof generation: The TEE generates an attestation report confirming the input, output, and program run.

  4. Claim submission: The backend generates a claim-proof pair (input, output, program + proof) and submits it to VSL.

  5. Verification: A verifier fetches the claim, verifies the attestation report, and posts a verification certificate to the VSL.

  6. Settlement: Once verified, the claim is considered settled, and a claim ID is returned.

  7. Frontend updates: The dashboard updates with the result, claim ID, and proof data. You can also explore the claim details and proof on the VSL Explorer.
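The seven steps above can be sketched as one small, self-contained model. This is illustrative Python only: in the real demo the attester, verifier, and VSL are separate services, and every name here is hypothetical.

```python
import hashlib
import json

def tee_execute(prompt: str) -> dict:
    """Steps 2-3: run the model in the 'TEE' and attest input, output, program."""
    output = f"result for: {prompt}"  # stand-in for actual model inference
    report = hashlib.sha256(f"{prompt}|{output}|demo-model".encode()).hexdigest()
    return {"input": prompt, "output": output,
            "program": "demo-model", "attestation": report}

def submit_claim(vsl: dict, claim: dict) -> str:
    """Step 4: post the claim-proof pair; here the claim ID is its hash."""
    digest = hashlib.sha256(json.dumps(claim, sort_keys=True).encode()).hexdigest()
    claim_id = "0x" + digest[:12]
    vsl[claim_id] = {"claim": claim, "status": "pending"}
    return claim_id

def verify_and_settle(vsl: dict, claim_id: str) -> None:
    """Steps 5-6: re-check the attestation and mark the claim settled."""
    claim = vsl[claim_id]["claim"]
    expected = hashlib.sha256(
        f"{claim['input']}|{claim['output']}|{claim['program']}".encode()
    ).hexdigest()
    if claim["attestation"] == expected:
        vsl[claim_id]["status"] = "settled"

vsl = {}  # stand-in for the VSL network's claim store
cid = submit_claim(vsl, tee_execute("classify this image"))
verify_and_settle(vsl, cid)
assert vsl[cid]["status"] == "settled"
```

Step 7 then corresponds to the frontend reading the settled claim (and its ID) back out of the store.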

Try it yourself

This demo offers two ways to explore how VSL verifies TEE attestation in practice. You can either:

  • Use CLI commands only for a leaner, terminal-based setup.

  • Run the full stack, which includes a backend server and frontend UI, for a more interactive experience.

Regardless of which path you choose, the general flow stays the same:

  1. Set up your environment and clone the demo repository.

  2. Start the attester (requires a Google Cloud Confidential VM).

  3. Start the verifier, configured to monitor the VSL network.

  4. Start the client or the backend and frontend, depending on your desired mode.

  5. Observe the verification process of claim-proof pairs across these components.

The VSL-AI-app-examples README provides all command-line instructions and .env file configurations for each component.
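As an illustration of what such a configuration covers, a .env file for one of the components might look roughly like the following. Every variable name and value below is a placeholder invented for this sketch; the actual names are the ones given in the VSL-AI-app-examples README.

```
# Placeholder sketch only -- see the VSL-AI-app-examples README
# for the real variable names and values.
VSL_RPC_URL=<devnet RPC endpoint>
ATTESTER_URL=<address of the TEE attester>
WALLET_PRIVATE_KEY=<your devnet wallet key>
```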

After this is set up, you'll be able to interact with the frontend of this application in your browser at http://localhost:3000.

The UI will look like this:

You can only interact with this browser interface when the frontend and backend have been started. If you start the client instead, all your interactions will be via the CLI.

The following steps will guide you to run an image classification task securely within a Trusted Execution Environment (TEE).

Step 1: Upload an image

  • Use the “Click or drag an image here to upload” section.

  • You can either drag an image into the area or click to open your file explorer.

  • Once uploaded, the system will display a preview label based on the image content.

Step 2: Confirm classification

  • Click the “Confirm” button to start the classification process in the TEE.

  • This will trigger the image classification service to run inside a TEE and generate a claim about the output (the label).

Step 3: View history of attestations

  • The History panel automatically refreshes.

  • It shows recent image classification events with the following details:

    • Created At: Timestamp of the classification request.

    • Type: img_class (short for image classification).

    • Status: Indicates if the request is completed.

    • Claim: A cryptographic claim ID (e.g., 0x56b8...46eb11) that references the result. Clicking on the claim ID will lead you to the VSL Explorer, where you’ll get more details about the claim.

    • Result: Click “View” to inspect the attested output and verify the result.

Each entry proves that the classification ran inside a TEE and that the label hasn’t been tampered with.
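The tamper-evidence property can be illustrated with a simple digest check. This is a sketch only: the real guarantee comes from verifying the TEE attestation report on the VSL, and the helper below is hypothetical.

```python
import hashlib

def entry_digest(label: str, image_bytes: bytes) -> str:
    """Bind the classified label to the exact image that produced it."""
    return hashlib.sha256(image_bytes + b"|" + label.encode()).hexdigest()

# The attester records the digest at classification time...
image = b"demo image bytes"
recorded = entry_digest("cat", image)

# ...and anyone can later recompute it to detect a swapped label.
assert entry_digest("cat", image) == recorded
assert entry_digest("dog", image) != recorded
```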

Although the steps above only cover image classification, the same steps apply to the LLM prompt: simply select 'Plain Text' instead of 'Image Classification' in the Type field.
