MoltChain Architecture

A technical deep-dive into how MoltChain is designed — from network gossip to state commitment. MoltChain is an agent-first, high-performance Layer-1 blockchain with WASM smart contracts, on-chain reputation, and a novel Proof of Contribution consensus mechanism.

Overview

MoltChain is structured as four distinct layers, each with a clear responsibility boundary. From bottom to top:

Network Layer (P2P)
Gossip protocol, peer discovery, block propagation, and RPC/WebSocket endpoints
Consensus Layer (Proof of Contribution)
Leader selection, 400ms slot production, PBFT finality, validator scoring
Execution Layer (WASM)
Wasmer runtime, contract execution, gas metering, host function interface
State Layer (RocksDB)
Account storage, contract data, block history, Merkle state roots

Each transaction flows downward through these layers: the Network layer receives it via RPC, Consensus orders it into a block, Execution runs the logic, and State persists the result. This separation allows each layer to be optimized independently.

Design Philosophy
MoltChain is designed for agent-first applications. The reputation system (MoltyID) is integrated at every layer — influencing mempool priority, gas pricing, rate limits, and validator selection. This makes the chain uniquely suited for autonomous agent economies.

Proof of Contribution

MoltChain uses Proof of Contribution (PoC), a leader-based consensus that combines economic stake with validator behavior scoring to select block producers.

Slot Production

Time is divided into 400ms slots. Each slot has a single designated leader responsible for producing a block. If the leader fails to produce within the window, the slot is skipped and the next leader takes over.
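Assuming slots are indexed from a genesis timestamp (a simplification — the real node derives time from the validator clock), mapping elapsed time to a slot is integer division by the 400ms slot duration:

```rust
pub const SLOT_DURATION_MS: u64 = 400;

/// Slot index for a given moment, measured in milliseconds since genesis.
/// Illustrative sketch; actual clock handling in the node is more involved.
pub fn slot_for(ms_since_genesis: u64) -> u64 {
    ms_since_genesis / SLOT_DURATION_MS
}
```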

Leader Selection

Leaders are selected via weighted random sampling where the weight is:

Formula
weight = stake_amount × contribution_score

contribution_score = (
    0.4 × uptime_ratio +        // % of slots online (trailing 1000 epochs)
    0.3 × attestation_ratio +   // % of votes cast correctly
    0.2 × block_success_ratio + // % of assigned blocks produced
    0.1 × community_score       // governance participation, bug reports, tooling
)

This means a validator with less stake but excellent uptime and contributions can compete with a larger, less-reliable validator.
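The weight formula above can be sketched directly in code (the struct and field names here are illustrative, not the chain's actual validator types):

```rust
/// Behavioral inputs, each already normalized to the 0.0–1.0 range.
/// Field names are illustrative; the real ValidatorInfo struct differs.
pub struct ContributionInputs {
    pub uptime_ratio: f64,
    pub attestation_ratio: f64,
    pub block_success_ratio: f64,
    pub community_score: f64,
}

/// Weighted sum from the Proof of Contribution formula above.
pub fn contribution_score(c: &ContributionInputs) -> f64 {
    0.4 * c.uptime_ratio
        + 0.3 * c.attestation_ratio
        + 0.2 * c.block_success_ratio
        + 0.1 * c.community_score
}

/// Sampling weight: stake scaled by behavior.
pub fn selection_weight(stake_amount: u64, c: &ContributionInputs) -> f64 {
    stake_amount as f64 * contribution_score(c)
}
```

For example, a validator staking 10,000 MOLT with a perfect score (1.0) outweighs one staking 15,000 MOLT with a 0.5 score.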

PBFT Finality

After a leader proposes a block, it enters a 3-phase PBFT (Practical Byzantine Fault Tolerance) voting round:

  1. Pre-prepare — Leader broadcasts the proposed block to all validators
  2. Prepare — Validators validate the block and broadcast a prepare vote
  3. Commit — Upon receiving 2/3+ prepare votes, validators broadcast a commit vote

A block is considered finalized once 2/3+ of validators (by stake weight) submit commit votes. Typical finality time is 800ms–1.2 seconds.

Fast Finality
Unlike probabilistic finality chains (Bitcoin, Ethereum PoW), MoltChain blocks are deterministically final once committed. There are no reorgs after finalization.
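The stake-weighted quorum test can be sketched in integer arithmetic to avoid floating-point edge cases (whether the exact 2/3 boundary counts is an implementation detail not specified here; this sketch requires strictly more than 2/3):

```rust
/// Returns true once commit votes cover more than 2/3 of total stake.
/// voted/total > 2/3  ⇔  3 × voted > 2 × total (widened to avoid overflow).
pub fn has_quorum(voted_stake: u64, total_stake: u64) -> bool {
    3 * (voted_stake as u128) > 2 * (total_stake as u128)
}
```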

Transaction Lifecycle

Every transaction follows a defined path from submission to state commitment:

  1. Submit — Client sends a signed transaction to any RPC node via sendTransaction. The node validates the signature, checks the nonce, and verifies the sender has sufficient balance for fees.
  2. Mempool — Valid transactions enter the mempool, a priority queue ordered by fee and sender reputation. Transactions from MoltyID Tier 4+ accounts enter the express lane with higher priority weighting (up to a 3× multiplier).
  3. Block Assembly — The current slot leader collects transactions from the mempool: express-lane transactions first, then the regular queue, up to the block gas limit (50M gas). The leader orders the transactions, computes a state-root preview, and constructs the block header.
  4. Sign & Broadcast — The leader signs the block with their validator key and broadcasts it to all peers via gossip.
  5. PBFT Vote — Validators independently execute all transactions in the block, verify that the state root matches, and participate in the 3-phase PBFT voting round.
  6. Finalize — Once 2/3+ commit votes are received, the block is marked as finalized. The finalized block header is gossiped to all nodes.
  7. State Commit — Each node writes the finalized state changes to RocksDB, updating account balances, contract storage, and validator records. The Merkle state root is sealed into the block.
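The fee-and-reputation ordering in step 2 can be sketched with integer arithmetic, using the per-tier multipliers from the MoltyID trust-tier table later in this document (the shape of the sort key itself is illustrative):

```rust
/// Tier-to-priority multiplier from the MoltyID trust-tier table, expressed
/// in hundredths (1.00× = 100) to stay in integer arithmetic.
pub fn priority_multiplier_x100(trust_tier: u8) -> u64 {
    match trust_tier {
        0 => 100, // Newcomer: 1.0×
        1 => 110, // Verified: 1.1×
        2 => 125, // Trusted: 1.25×
        3 => 150, // Established: 1.5×
        4 => 200, // Elite: 2.0×
        _ => 300, // Legendary: 3.0× (express-lane cap)
    }
}

/// Sort key for the mempool priority queue: higher drains first.
pub fn mempool_priority(fee: u64, trust_tier: u8) -> u64 {
    fee * priority_multiplier_x100(trust_tier) / 100
}
```

Note how a smaller fee from a high-tier sender can outrank a larger fee from a newcomer.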

State Model

MoltChain uses an account-based state model (not UTXO). All state is stored in RocksDB, organized into column families for efficient access patterns.

Column Families

Column Family | Key | Value | Description
accounts | Address (32 bytes) | Account struct | All user and program accounts
blocks | Slot number (u64) | Block struct | Block headers and transaction lists
transactions | Tx hash (32 bytes) | Transaction + Receipt | Full transaction data with execution results
contract_storage | Program ID + key | Arbitrary bytes | Per-contract key-value storage
validators | Validator pubkey | ValidatorInfo struct | Stake, score, vote history
metadata | String key | Varies | Chain config, genesis hash, epoch info

Account Structure

Every account on MoltChain has the following fields:

Rust
pub struct Account {
    /// Native MOLT balance in shells (1 MOLT = 1e9 shells)
    pub balance: u64,

    /// Transaction sequence number (prevents replay attacks)
    pub nonce: u64,

    /// MoltyID reputation score (0.0 – 100.0, stored as u32 * 100)
    pub reputation: u32,

    /// Amount of MOLT staked in validator or delegation
    pub staked: u64,

    /// MoltyID trust tier (0 = unverified, 1–6 = tier levels)
    pub trust_tier: u8,

    /// If this is a program account, the WASM bytecode hash
    pub program_hash: Option<[u8; 32]>,

    /// Arbitrary account data (up to 10 MB for program accounts)
    pub data: Vec<u8>,

    /// Account owner (system program or deploying program)
    pub owner: Pubkey,
}
Rent Exemption
Accounts must maintain a minimum balance proportional to their data size to remain rent-exempt. The formula is: min_balance = data_size × 6960 + 890880 shells. Accounts that fall below this threshold are garbage-collected after 2 epochs.
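The rent-exemption formula translates directly to code:

```rust
/// Minimum balance (in shells) for an account to remain rent-exempt,
/// per the formula above: data_size × 6960 + 890880.
pub fn rent_exempt_min_balance(data_size: u64) -> u64 {
    data_size * 6_960 + 890_880
}
```

An empty account needs 890,880 shells; each additional byte of data adds 6,960 shells.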

WASM Execution

MoltChain executes smart contracts using the Wasmer WebAssembly runtime with ahead-of-time (AOT) compilation for near-native performance. Contracts are compiled from Rust to wasm32-unknown-unknown and deployed as program accounts.

Host Functions

Contracts interact with the chain through a set of host functions injected into the WASM environment at runtime:

Function | Signature | Description
storage_get | (key_ptr, key_len) → (val_ptr, val_len) | Read a value from contract storage
storage_set | (key_ptr, key_len, val_ptr, val_len) | Write a value to contract storage
storage_delete | (key_ptr, key_len) | Remove a key from contract storage
get_caller | () → (addr_ptr) | Get the address of the transaction sender
get_block_slot | () → u64 | Current block slot number
get_block_timestamp | () → u64 | Current block Unix timestamp
emit_event | (topic_ptr, topic_len, data_ptr, data_len) | Emit an indexed event/log
transfer | (to_ptr, amount: u64) → i32 | Transfer MOLT from contract to address
cross_call | (program_ptr, method_ptr, args_ptr, args_len) → (ret_ptr, ret_len) | Call another deployed contract
get_balance | (addr_ptr) → u64 | Read the balance of any account
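Inside a contract these appear as raw pointer-based imports. The key/value semantics of the three storage functions can be modeled with an in-memory map — a test double for illustration, not the real host implementation:

```rust
use std::collections::HashMap;

/// In-memory stand-in for per-contract key-value storage, mirroring the
/// semantics of storage_get / storage_set / storage_delete.
#[derive(Default)]
pub struct MockStorage {
    map: HashMap<Vec<u8>, Vec<u8>>,
}

impl MockStorage {
    /// Write (or overwrite) a value under `key`.
    pub fn storage_set(&mut self, key: &[u8], val: &[u8]) {
        self.map.insert(key.to_vec(), val.to_vec());
    }

    /// Read a value; None if the key was never set or was deleted.
    pub fn storage_get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.map.get(key).cloned()
    }

    /// Remove a key from storage.
    pub fn storage_delete(&mut self, key: &[u8]) {
        self.map.remove(key);
    }
}
```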

Gas Metering

Every WASM instruction consumes gas. The runtime injects metering checkpoints at basic block boundaries during AOT compilation.

The per-transaction gas limit is 200,000 by default. The per-block gas limit is 50,000,000.
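A minimal sketch of the metering discipline — a per-transaction budget, a charge at each checkpoint, and an abort that consumes all remaining gas on exhaustion (the error type and method names here are illustrative):

```rust
/// Per-transaction gas budget, defaulting to the 200,000 limit above.
pub struct GasMeter {
    remaining: u64,
}

#[derive(Debug, PartialEq)]
pub struct OutOfGas;

impl GasMeter {
    pub fn new(limit: u64) -> Self {
        GasMeter { remaining: limit }
    }

    /// Called at each injected checkpoint; a failed charge aborts the
    /// transaction and, as described above, consumes all remaining gas.
    pub fn charge(&mut self, cost: u64) -> Result<(), OutOfGas> {
        if cost > self.remaining {
            self.remaining = 0;
            return Err(OutOfGas);
        }
        self.remaining -= cost;
        Ok(())
    }

    pub fn remaining(&self) -> u64 {
        self.remaining
    }
}
```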

Memory Model

Each contract execution gets an isolated linear memory space starting at 1 page (64 KB) and expandable to 256 pages (16 MB) via memory.grow. Memory is reset between calls — contracts cannot persist state in WASM memory across transactions. Use storage_get/storage_set for persistence.

Stack Depth Limit
WASM call stack depth is limited to 1024 frames to prevent stack overflow attacks. Cross-contract call depth is limited to 8 levels. Exceeding either limit aborts the transaction and consumes all gas.

MoltyID Integration

MoltyID is MoltChain's on-chain identity and reputation layer. It is deeply integrated into the protocol — not just an application-level feature. Trust tiers directly affect transaction processing, fees, and rate limits at the consensus and mempool layers.

Trust Tiers

Tier | Name | Score Range | Mempool Priority | Fee Discount | Rate Limit (tx/epoch)
0 | Newcomer | 0 – 99 | 1.0× | 0% | 100
1 | Verified | 100 – 499 | 1.1× | 0% | 100
2 | Trusted | 500 – 999 | 1.25× | 500–749: 5% · 750–999: 7.5% | 200
3 | Established | 1,000 – 4,999 | 1.5× | 10% | 500
4 | Elite | 5,000 – 9,999 | 2.0× | 10% | 500
5 | Legendary | 10,000+ | 3.0× | 10% | 500
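The score-to-tier mapping in the table translates to a straightforward range match:

```rust
/// Map a MoltyID reputation score to its trust tier, per the table above.
pub fn trust_tier(score: u32) -> u8 {
    match score {
        0..=99 => 0,        // Newcomer
        100..=499 => 1,     // Verified
        500..=999 => 2,     // Trusted
        1_000..=4_999 => 3, // Established
        5_000..=9_999 => 4, // Elite
        _ => 5,             // Legendary
    }
}
```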

How Reputation Affects the Stack

Building Reputation
Reputation increases through consistent on-chain activity: successful transactions, contract deployments, governance voting, staking, and validator operation. It decays slowly during inactivity (–0.1 per epoch of zero activity).
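Given the fixed-point encoding from the Account struct (display value × 100), the inactivity decay of 0.1 points per idle epoch is a saturating subtraction of 10 stored units — a sketch under that assumption:

```rust
/// Inactivity decay: 0.1 reputation points per epoch of zero activity.
/// Scores are stored fixed-point (display × 100), so 0.1 == 10 units.
/// Sketch only; assumes decay applies per whole idle epoch.
pub fn decay_reputation(stored_score: u32, idle_epochs: u32) -> u32 {
    stored_score.saturating_sub(idle_epochs.saturating_mul(10))
}
```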

Privacy & Security

MoltChain's privacy architecture is centered on shielded transactions, nullifier-based double-spend prevention, and ZK proofs for note validity. The current implementation uses Groth16-based verification for shielded flows in production deployments.

Implementation Status (Current)

Area | Status | Notes
Core ZK modules (core/src/zk/*) | Scaffolded | Pedersen, Merkle, note/key types, circuits, prover/verifier plumbing
Shielded pool contract (contracts/shielded_pool) | Production | Shielded state and transaction flows run with Groth16 proof verification
Runtime/RPC wiring | Partial | Explorer/Wallet-facing methods are expected, with staged backend integration in progress
Deployment/boot scripts | In progress | First-boot and deploy-manifest integration for the shielded pool is being completed

Target ZK Architecture

Shielded Transaction Flows

Security Properties

Explorer and Docs Separation

Explorer pages are being kept telemetry-first (stats, transactions, nullifier checks). Architecture education and implementation details live in the Website and Developer Portal so chain observability stays cleanly separated from protocol documentation.

Keypair Encryption at Rest

CLI-managed keypairs are encrypted with AES-256-GCM (authenticated encryption with 12-byte random nonce). An encryption_version field in the keypair JSON enables backwards compatibility: v1 = legacy XOR cipher, v2 = AES-GCM. Legacy keypairs are auto-upgraded on next use.
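An upgraded keypair file might look like the following. This layout is illustrative — only the encryption_version field is documented above; the other field names and the base64 placeholders are hypothetical:

```json
{
  "encryption_version": 2,
  "nonce": "base64-encoded 12-byte random nonce",
  "ciphertext": "base64-encoded AES-256-GCM output (includes auth tag)"
}
```

On load, a v1 file (legacy XOR cipher) is decrypted with the old scheme and rewritten as v2.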

Faucet CORS Policy

The testnet faucet enforces an explicit origin allowlist (localhost dev ports + production domains) with restricted HTTP methods (GET/POST/OPTIONS) — replacing the previous permissive policy.

Performance Benchmarks

Benchmarks measured with Criterion.rs on the reference hardware (Apple M-series, single-threaded). Source: core/benches/processor_bench.rs.

Benchmark | Result | Notes
Single transaction processing | ~28 µs | ~35,000 tx/s single-threaded
10-tx batch | ~280 µs | Linear scaling
50-tx batch | ~1.4 ms | No contention overhead
Block creation (100 tx) | ~3 ms | Includes Merkle root computation
Block creation (500 tx) | ~15 ms | Well within the 400ms slot target
Ed25519 signature verify | ~21 µs | ~47,000 verifies/s
Batch verify (100 sigs) | ~2.1 ms | Linear; no batching optimization yet
Running Benchmarks
Run cargo bench --bench processor_bench from the workspace root. Results are saved to target/criterion/ with HTML reports.

Network Topology

MoltChain's network layer handles peer discovery, data propagation, and client connections.

Gossip Protocol

Nodes communicate over a structured gossip protocol built on libp2p. Every node maintains connections to 8–12 peers. Data propagation uses eager push with a lazy pull fallback: new data is pushed immediately along active gossip links, while remaining peers receive only message announcements and pull the full payload on demand.

Average block propagation reaches 95% of nodes within 200ms (for a 1,000-node network).

Peer Discovery

New nodes discover peers through three mechanisms:

  1. Bootstrap nodes: Hardcoded seed nodes for initial connection (configurable per network)
  2. Peer exchange (PEX): Connected nodes periodically share their known peer lists
  3. DHT: A Kademlia-based distributed hash table for decentralized peer lookup

Bootstrap Nodes

Config
# Mainnet bootstrap nodes
bootstrap_peers = [
    "/dns4/boot1.moltchain.io/tcp/8001/p2p/12D3KooW...",
    "/dns4/boot2.moltchain.io/tcp/8001/p2p/12D3KooW...",
    "/dns4/boot3.moltchain.io/tcp/8001/p2p/12D3KooW...",
]

# Testnet bootstrap nodes
bootstrap_peers = [
    "/dns4/testnet-boot1.moltchain.io/tcp/8001/p2p/12D3KooW...",
    "/dns4/testnet-boot2.moltchain.io/tcp/8001/p2p/12D3KooW...",
]

Block Sync

When a node falls behind (e.g., after restart or joining late), it uses snapshot sync to catch up efficiently:

  1. Download the latest finalized state snapshot (compressed RocksDB checkpoint) from a trusted peer
  2. Verify the snapshot against the Merkle state root in the last finalized block header
  3. Apply any remaining blocks since the snapshot was created
  4. Switch to normal gossip-based block following

Full state snapshots are generated every 1,000 slots (~6.7 minutes) by archive nodes. Typical sync time for a new node is under 2 minutes for testnet.

RPC & WebSocket Endpoints

Every full node exposes two client-facing interfaces: an RPC endpoint and a WebSocket endpoint.

Rate Limiting
Public RPC endpoints enforce rate limits of 100 requests/second per IP. For higher throughput, run your own node or use MoltyID-authenticated endpoints which allow up to 1,000 req/s for Tier 4+ accounts.
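A fixed-window counter is the simplest way to sketch a per-IP limit like this (the production limiter's actual algorithm is not specified here; the window handling is illustrative):

```rust
use std::collections::HashMap;

/// Fixed-window per-IP rate limiter: at most `limit` requests per window.
/// Illustrative sketch only; not the node's actual limiter.
pub struct RateLimiter {
    limit: u32,
    counts: HashMap<String, (u64, u32)>, // ip -> (window id, count in window)
}

impl RateLimiter {
    pub fn new(limit: u32) -> Self {
        RateLimiter { limit, counts: HashMap::new() }
    }

    /// `window` identifies the current 1-second window (e.g. Unix seconds).
    /// Returns false when the request should be rejected.
    pub fn allow(&mut self, ip: &str, window: u64) -> bool {
        let entry = self.counts.entry(ip.to_string()).or_insert((window, 0));
        if entry.0 != window {
            *entry = (window, 0); // new window: reset the counter
        }
        if entry.1 < self.limit {
            entry.1 += 1;
            true
        } else {
            false
        }
    }
}
```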