Verifiable Storage is more than a storage utility—it is the trust anchor that guarantees on-chain data really originated from your robot. By combining ROS 2 lifecycle management, DID-backed authentication, and IPFS persistence (with optional managed pinning), the bridge turns raw telemetry into signed, auditable evidence.

Why It Matters

  • Authenticity first – every ingest request is checked against the robot’s DID and RBAC grants before it ever touches the blockchain.
  • Tamper proof – payloads are hashed, pinned to IPFS, and referenced on-chain so downstream consumers can verify integrity independently (managed pinning services remain optional).
  • Production ready – lifecycle nodes, retries, and detailed status topics keep operators informed without leaving ROS tools.
Big picture: ROS 2 fleets can now publish verifiable data streams that regulators, partners, or marketplaces can trust instantly.

Architecture

Each stage pairs what happens with the verification hook it exposes:
  1. Ingest – A StorageIngest message arrives on peaq/storage/ingest. Verification hook: the bridge loads the robot wallet, derives its DID, and checks that it exists on-chain when require_did is enabled.
  2. Encode – The payload is hashed and optional files are pinned to IPFS (local or managed). Verification hook: the hash and DID are logged together, and RBAC rules (from the Access module) can gate who triggers ingest.
  3. Submit – The transaction is sent via the peaq storage pallet. Verification hook: it is signed by the robot’s keystore and tracked through peaq/tx_status.
  4. Finalize – The on-chain record references the IPFS CID. Verification hook: any consumer can fetch the CID, recompute the hash, and confirm the signing DID.

Launch the Bridge

ros2 launch peaq_ros2_core storage_bridge.launch.py \
  config_yaml:=/work/peaq_ros2_examples/config/peaq_robot.yaml \
  log_level:=INFO

ros2 lifecycle set /peaq_storage_bridge configure
ros2 lifecycle set /peaq_storage_bridge activate
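To drive the same transitions programmatically (for example from a fleet bootstrap script), a minimal rclpy sketch can call the node’s standard lifecycle change_state service; only the node name comes from this page, the rest is ordinary ROS 2 lifecycle plumbing:
# lifecycle_bootstrap.py – sketch: configure and activate the storage bridge
# through its standard lifecycle change_state service.
import rclpy
from rclpy.node import Node
from lifecycle_msgs.msg import Transition
from lifecycle_msgs.srv import ChangeState


def change_state(node, client, transition_id):
    # Request a single lifecycle transition and wait for the result.
    request = ChangeState.Request()
    request.transition.id = transition_id
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    return future.result() is not None and future.result().success


def main():
    rclpy.init()
    node = Node("storage_bridge_bootstrap")
    client = node.create_client(ChangeState, "/peaq_storage_bridge/change_state")
    client.wait_for_service()
    ok = (change_state(node, client, Transition.TRANSITION_CONFIGURE)
          and change_state(node, client, Transition.TRANSITION_ACTIVATE))
    node.get_logger().info("bridge active" if ok else "lifecycle transition failed")
    rclpy.shutdown()


if __name__ == "__main__":
    main()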

Verification-Centric Configuration

storage_bridge:
  robot:
    require_did: true            # verify robot DID is registered on peaq
  storage:
    mode: both                   # local_ipfs | pinata | both
    pinata:                       # placeholder name used by the default config
      jwt: "<IPFS_GATEWAY_JWT>"   # works with any compatible managed gateway
      gateway_url: "https://your-gateway.example.com/ipfs"
    local_ipfs:
      api_url: "http://127.0.0.1:5001"
      gateway_url: "http://127.0.0.1:8080/ipfs"
  retry:
    max_attempts: 3
    delay_seconds: 5.0

signature:
  algorithm: sr25519             # bridge signs payloads with the robot wallet
Secrets for managed gateways (JWT/API keys) and wallet passwords must stay out of source control—load them via environment variables before launch.
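A small pre-flight check (illustrative only, not part of the bridge) can refuse to launch when expected secrets are missing; the variable names below match the managed-gateway example later on this page:
# preflight_secrets.py – sketch: fail fast when expected secrets are not exported.
# Variable names follow the managed-gateway example below; adjust to your setup.
import os
import sys

REQUIRED = ["PEAQ_ROBOT_IPFS_JWT", "PEAQ_ROBOT_IPFS_GATEWAY"]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    sys.exit("refusing to launch, missing secrets: " + ", ".join(missing))
print("all required secrets present")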

IPFS Setup

Local IPFS (Kubo)

# Install Kubo
wget https://dist.ipfs.tech/kubo/v0.38.1/kubo_v0.38.1_linux-amd64.tar.gz
tar -xzf kubo_v0.38.1_linux-amd64.tar.gz
sudo bash kubo/install.sh

# Initialize and start
ipfs init
ipfs daemon
Update the config YAML to point at your node:
storage_bridge:
  storage:
    mode: local_ipfs
    local_ipfs:
      api_url: http://127.0.0.1:5001
      gateway_url: http://127.0.0.1:8080/ipfs
Optional quality-of-life settings:
  • local_ipfs.save_dir: cache directory for downloaded blobs
  • local_ipfs.pin_results: set to true to keep data pinned locally for quick replays
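Before activating the bridge, it is worth confirming the Kubo RPC API is reachable. The sketch below simply POSTs to Kubo’s standard /api/v0/id endpoint at the api_url configured above:
# check_ipfs.py – sketch: confirm the local Kubo node answers on its RPC API.
import json
import urllib.request

API_URL = "http://127.0.0.1:5001"  # matches local_ipfs.api_url above

# Kubo's RPC API expects POST; /api/v0/id returns the node's identity record.
request = urllib.request.Request(f"{API_URL}/api/v0/id", method="POST")
with urllib.request.urlopen(request, timeout=5) as response:
    info = json.load(response)
print("local IPFS node is up, peer ID:", info.get("ID"))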

Managed Gateway (Optional)

Use a third-party IPFS pinning/gateway provider (e.g., Pinata, web3.storage, NFT.storage) only if you need off-device persistence or public access. Example environment variables:
export PEAQ_ROBOT_IPFS_JWT="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
export PEAQ_ROBOT_IPFS_GATEWAY="https://your-gateway.example.com/ipfs"
Reference them in the YAML (the pinata block name is historical—you can still point it at any managed gateway):
storage_bridge:
  storage:
    mode: pinata
    pinata:
      jwt: "${PEAQ_ROBOT_IPFS_JWT}"
      gateway_url: "${PEAQ_ROBOT_IPFS_GATEWAY}"
      pin: true
      mode: upload
Using both local IPFS and a managed gateway provides redundancy—set mode: both to mirror uploads.
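When mirroring with mode: both, a quick redundancy check is to pull the same CID from both gateways and compare digests. The sketch below assumes the gateway URLs from the config above and a CID taken from peaq/storage/status:
# mirror_check.py – sketch: verify a CID is served identically by both gateways.
import hashlib
import urllib.request

CID = "<CID reported on peaq/storage/status>"   # fill in a real CID
GATEWAYS = [
    "http://127.0.0.1:8080/ipfs",               # local Kubo gateway
    "https://your-gateway.example.com/ipfs",    # managed gateway
]

digests = set()
for gateway in GATEWAYS:
    with urllib.request.urlopen(f"{gateway}/{CID}", timeout=30) as response:
        digests.add(hashlib.sha256(response.read()).hexdigest())

print("mirrors agree" if len(digests) == 1 else "gateways returned different bytes!")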

Access Control & Wallets

  • Ensure the wallet referenced in wallet.path has enough balance on the target network (fund via faucet for Agung).
  • Combine with RBAC by allow-listing roles under storage_bridge.robot.allowlist_roles when you want to restrict which robots can publish telemetry.

Publishing Verifiable Data

ros2 topic pub --once /peaq/storage/ingest \
  peaq_ros2_interfaces/msg/StorageIngest \
  '{key: "robot:telemetry", content: "{\"battery\": 0.87}", is_file: false}'

ros2 topic echo /peaq/storage/status
The storage status stream surfaces the CID, IPFS URL, transaction hash, and success state for each submission. Combine it with peaq/tx_status to follow the confirmation phases, which the logs tie back to the robot DID.
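Fleets that publish telemetry from code rather than the CLI can use a minimal rclpy publisher like the sketch below; it assumes only the StorageIngest fields shown in the CLI example above (key, content, is_file):
# publish_telemetry.py – sketch: publish a StorageIngest request from Python.
# Uses only the fields shown in the CLI example above: key, content, is_file.
import json
import time

import rclpy
from rclpy.node import Node
from peaq_ros2_interfaces.msg import StorageIngest


def main():
    rclpy.init()
    node = Node("telemetry_publisher")
    publisher = node.create_publisher(StorageIngest, "/peaq/storage/ingest", 10)
    time.sleep(1.0)  # give discovery a moment before the one-shot publish

    msg = StorageIngest()
    msg.key = "robot:telemetry"
    msg.content = json.dumps({"battery": 0.87})
    msg.is_file = False

    publisher.publish(msg)
    node.get_logger().info(f"published ingest request for key {msg.key}")
    rclpy.shutdown()


if __name__ == "__main__":
    main()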

Reading & Verifying Downstream

ros2 service call /peaq_core_node/storage/read \
  peaq_ros2_interfaces/srv/StoreReadData \
  '{key: "robot:telemetry"}'
The response includes the payload, IPFS CID, and the originating DID. Consumers can recompute the hash against the IPFS artifact and ensure it matches the on-chain record.

Manual Verification Checklist

  1. Fetch the CID from the service response.
  2. Retrieve the payload: ipfs cat <CID> or curl <gateway>/<CID>.
  3. Recompute the hash and compare with the value logged in storage bridge outputs.
  4. Confirm DID ownership using /peaq_core_node/identity/read.
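The checklist can be scripted. The sketch below covers steps 2 and 3, fetching the payload from a gateway and recomputing a digest for comparison with the bridge logs (SHA-256 is an assumption; use whatever algorithm the bridge reports):
# verify_record.py – sketch of checklist steps 2–3: fetch the CID and re-hash it.
# SHA-256 is assumed; compare against whatever digest the bridge actually logs.
import hashlib
import sys
import urllib.request

GATEWAY = "http://127.0.0.1:8080/ipfs"   # or your managed gateway


def verify(cid, expected_digest):
    with urllib.request.urlopen(f"{GATEWAY}/{cid}", timeout=30) as response:
        payload = response.read()
    digest = hashlib.sha256(payload).hexdigest()
    print("recomputed:", digest)
    return digest == expected_digest


if __name__ == "__main__":
    cid, expected = sys.argv[1], sys.argv[2]
    sys.exit(0 if verify(cid, expected) else 1)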

Automated Attestation

  • Attach mission metadata via the metadata_json field in StorageIngest so every record includes firmware versions or profile IDs.
  • Pair with the Access Control guides to revoke publishing rights instantly.
  • Leverage /tmp/storage_bridge_failures.jsonl and the replay scripts to prove that no data was dropped—even during outages.
python3 scripts/check_storage_failures.py --details
python3 scripts/retry_failed_storage.py --key robot:telemetry

Observability & Audit

  • Switch to JSON logs (PEAQ_ROBOT_LOG_FORMAT=json) for ingestion into SIEM or compliance tooling.
  • Track wallet-derived DID and CID pairs in your log pipeline to detect impersonation attempts.
  • Use ros2 lifecycle get /peaq_storage_bridge in your health probes; the node exports ready/active states so Kubernetes or fleet managers can react quickly.
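A liveness probe can shell out to the lifecycle CLI and require the active state, as in the sketch below (the node name matches the launch above; the rest is plain subprocess handling):
# lifecycle_probe.py – sketch: exit 0 only when the bridge reports "active".
# Suitable as a Kubernetes exec probe or a fleet-manager health check.
import subprocess
import sys

result = subprocess.run(
    ["ros2", "lifecycle", "get", "/peaq_storage_bridge"],
    capture_output=True, text=True, timeout=10,
)
state = result.stdout.strip()
print(state)
sys.exit(0 if result.returncode == 0 and state.startswith("active") else 1)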

Dashboard Pointers

  • peaq/storage/status: success vs failure counts
  • /tmp/storage_bridge_failures.jsonl: monitor size/age to detect backlogs
  • peaq/tx_status: confirmation latency per network
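A backlog check against the failure journal can be as simple as the sketch below, which flags the file when it grows beyond a size threshold or when entries sit unreplayed for too long (both thresholds are placeholders):
# backlog_check.py – sketch: alert when the failure journal grows or goes stale.
# Thresholds are placeholders; tune them to your fleet's publish rate.
import os
import sys
import time

PATH = "/tmp/storage_bridge_failures.jsonl"
MAX_BYTES = 1_000_000        # ~1 MB of queued failures
MAX_AGE_SECONDS = 15 * 60    # entries pending this long suggest replays stalled

if not os.path.exists(PATH):
    print("no failure journal – nothing queued")
    sys.exit(0)

size = os.path.getsize(PATH)
age = time.time() - os.path.getmtime(PATH)
if size > MAX_BYTES:
    sys.exit(f"backlog alert: journal is {size} bytes")
if size > 0 and age > MAX_AGE_SECONDS:
    sys.exit(f"backlog alert: {size} bytes pending for {age:.0f}s without replay")
print(f"failure journal ok ({size} bytes)")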
With verifiable telemetry in place, your ROS 2 fleet can supply zero-trust data to marketplaces, regulators, or partners. Continue with Event Streams to surface confirmations to autonomy stacks.