MoltChain Architecture
A technical deep-dive into how MoltChain is designed — from network gossip to state commitment. MoltChain is an agent-first, high-performance Layer-1 blockchain with WASM smart contracts, on-chain reputation, and a novel Proof of Contribution consensus mechanism.
Overview
MoltChain is structured as four distinct layers, each with a clear responsibility boundary. From bottom to top:

- State: account-based storage and commitment, backed by RocksDB
- Execution: the WASM smart-contract runtime with gas metering
- Consensus: Proof of Contribution leader selection and PBFT finality
- Network: libp2p gossip, peer discovery, and RPC/WebSocket client connections
Each transaction flows downward through these layers: the Network layer receives it via RPC, Consensus orders it into a block, Execution runs the logic, and State persists the result. This separation allows each layer to be optimized independently.
Proof of Contribution
MoltChain uses Proof of Contribution (PoC), a leader-based consensus that combines economic stake with validator behavior scoring to select block producers.
Slot Production
Time is divided into 400ms slots. Each slot has a single designated leader responsible for producing a block. If the leader fails to produce within the window, the slot is skipped and the next leader takes over.
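The slot timing above reduces to integer arithmetic against a genesis timestamp. A minimal sketch (the function names and genesis-relative timestamping are illustrative assumptions, not MoltChain's actual API):

```rust
// Hypothetical sketch of slot arithmetic, assuming fixed 400 ms slots
// measured from a genesis timestamp in milliseconds.
const SLOT_DURATION_MS: u64 = 400;

/// Slot index that a given millisecond timestamp falls into.
fn slot_for_timestamp(genesis_ms: u64, now_ms: u64) -> u64 {
    now_ms.saturating_sub(genesis_ms) / SLOT_DURATION_MS
}

/// Deadline by which the slot's leader must produce a block;
/// past this point the slot is skipped and the next leader takes over.
fn slot_deadline_ms(genesis_ms: u64, slot: u64) -> u64 {
    genesis_ms + (slot + 1) * SLOT_DURATION_MS
}

fn main() {
    // One second after genesis, slots 0 and 1 have elapsed: we are in slot 2.
    assert_eq!(slot_for_timestamp(0, 1_000), 2);
    // Slot 0's leader must produce before the 400 ms mark.
    assert_eq!(slot_deadline_ms(0, 0), 400);
}
```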
Leader Selection
Leaders are selected via weighted random sampling where the weight is:
    weight = stake_amount × contribution_score

    contribution_score = (
        0.4 × uptime_ratio +         // % of slots online (trailing 1000 epochs)
        0.3 × attestation_ratio +    // % of votes cast correctly
        0.2 × block_success_ratio +  // % of assigned blocks produced
        0.1 × community_score        // governance participation, bug reports, tooling
    )
This means a validator with less stake but excellent uptime and contributions can compete with a larger, less-reliable validator.
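The weighting can be sketched directly from the formula above; the struct and field names are illustrative, with each ratio in [0, 1]:

```rust
// Minimal sketch of the leader-selection weight. Field names and the use
// of f64 ratios are assumptions for illustration.
struct ValidatorStats {
    uptime_ratio: f64,        // % of slots online (trailing 1000 epochs)
    attestation_ratio: f64,   // % of votes cast correctly
    block_success_ratio: f64, // % of assigned blocks produced
    community_score: f64,     // governance participation, bug reports, tooling
}

fn contribution_score(s: &ValidatorStats) -> f64 {
    0.4 * s.uptime_ratio
        + 0.3 * s.attestation_ratio
        + 0.2 * s.block_success_ratio
        + 0.1 * s.community_score
}

fn selection_weight(stake: u64, s: &ValidatorStats) -> f64 {
    stake as f64 * contribution_score(s)
}

fn main() {
    // A smaller but highly reliable validator...
    let reliable = ValidatorStats {
        uptime_ratio: 1.0,
        attestation_ratio: 1.0,
        block_success_ratio: 1.0,
        community_score: 0.8,
    };
    // ...can outweigh a larger validator with poor behavior scores.
    let unreliable = ValidatorStats {
        uptime_ratio: 0.5,
        attestation_ratio: 0.6,
        block_success_ratio: 0.5,
        community_score: 0.0,
    };
    assert!(selection_weight(1_000, &reliable) > selection_weight(1_800, &unreliable));
}
```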
PBFT Finality
After a leader proposes a block, it enters a 3-phase PBFT (Practical Byzantine Fault Tolerance) voting round:
- Pre-prepare — Leader broadcasts the proposed block to all validators
- Prepare — Validators validate the block and broadcast a prepare vote
- Commit — Upon receiving 2/3+ prepare votes, validators broadcast a commit vote
A block is considered finalized once 2/3+ of validators (by stake weight) submit commit votes. Typical finality time is 800ms–1.2 seconds.
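The stake-weighted threshold check is a one-liner; this sketch reads the source's "2/3+" as at-least-two-thirds and uses integer cross-multiplication to avoid floating-point rounding:

```rust
// 3·committed >= 2·total  <=>  committed/total >= 2/3.
// Interpreting "2/3+" as >= 2/3 is an assumption; u128 avoids overflow
// for realistic total-stake values.
fn is_finalized(committed_stake: u128, total_stake: u128) -> bool {
    3 * committed_stake >= 2 * total_stake
}

fn main() {
    assert!(is_finalized(200, 300));  // exactly 2/3 of stake committed
    assert!(!is_finalized(199, 300)); // just under the threshold
}
```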
Transaction Lifecycle
Every transaction follows a defined path from submission to state commitment:
- Submission: A client submits a signed transaction via the sendTransaction RPC method. The node validates the signature, checks the nonce, and verifies the sender has sufficient balance for fees.
- Mempool: The transaction enters the mempool, ordered by fee and trust-tier priority.
- Ordering: The current slot leader pulls pending transactions into a proposed block, which is finalized via PBFT voting.
- Execution: The WASM runtime executes each transaction in the block, metering gas.
- Commitment: The resulting state changes are persisted to RocksDB and reflected in the block's Merkle state root.

State Model
MoltChain uses an account-based state model (not UTXO). All state is stored in RocksDB, organized into column families for efficient access patterns.
Column Families
| Column Family | Key | Value | Description |
|---|---|---|---|
| `accounts` | Address (32 bytes) | Account struct | All user and program accounts |
| `blocks` | Slot number (u64) | Block struct | Block headers and transaction lists |
| `transactions` | Tx hash (32 bytes) | Transaction + Receipt | Full transaction data with execution results |
| `contract_storage` | Program ID + key | Arbitrary bytes | Per-contract key-value storage |
| `validators` | Validator pubkey | ValidatorInfo struct | Stake, score, vote history |
| `metadata` | String key | Varies | Chain config, genesis hash, epoch info |
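The `contract_storage` row implies a composite key: the 32-byte program ID concatenated with the contract-local key. A sketch of that layout (the actual encoding in MoltChain's codebase may differ):

```rust
// Composite key: program ID prefix followed by the contract-local key.
// A shared prefix per program enables efficient RocksDB prefix scans.
fn contract_storage_key(program_id: &[u8; 32], key: &[u8]) -> Vec<u8> {
    let mut composite = Vec::with_capacity(32 + key.len());
    composite.extend_from_slice(program_id);
    composite.extend_from_slice(key);
    composite
}

fn main() {
    let program_id = [7u8; 32];
    let k = contract_storage_key(&program_id, b"counter");
    assert_eq!(k.len(), 32 + 7);
    assert_eq!(&k[..32], &program_id); // all keys of one program share a prefix
    assert_eq!(&k[32..], b"counter");
}
```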
Account Structure
Every account on MoltChain has the following fields:
    pub struct Account {
        /// Native MOLT balance in shells (1 MOLT = 1e9 shells)
        pub balance: u64,
        /// Transaction sequence number (prevents replay attacks)
        pub nonce: u64,
        /// MoltyID reputation score (0.0 – 100.0, stored as u32 * 100)
        pub reputation: u32,
        /// Amount of MOLT staked in validator or delegation
        pub staked: u64,
        /// MoltyID trust tier (0 = unverified, 1–6 = tier levels)
        pub trust_tier: u8,
        /// If this is a program account, the WASM bytecode hash
        pub program_hash: Option<[u8; 32]>,
        /// Arbitrary account data (up to 10 MB for program accounts)
        pub data: Vec<u8>,
        /// Account owner (system program or deploying program)
        pub owner: Pubkey,
    }
Every account must maintain a minimum balance proportional to its data size: min_balance = data_size × 6960 + 890880 shells. Accounts that fall below this threshold are garbage-collected after 2 epochs.
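The minimum-balance formula in code, with constants taken directly from the rule above:

```rust
// 6,960 shells per byte of account data plus a 890,880-shell base.
fn min_balance_shells(data_size: u64) -> u64 {
    data_size * 6_960 + 890_880
}

fn main() {
    // A zero-data account needs only the base amount.
    assert_eq!(min_balance_shells(0), 890_880);
    // A 1 KiB account: 1024 × 6960 + 890880 = 8,017,920 shells (~0.008 MOLT).
    assert_eq!(min_balance_shells(1_024), 8_017_920);
}
```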
WASM Execution
MoltChain executes smart contracts using the Wasmer WebAssembly runtime with ahead-of-time (AOT) compilation for near-native performance. Contracts are compiled from Rust to wasm32-unknown-unknown and deployed as program accounts.
Host Functions
Contracts interact with the chain through a set of host functions injected into the WASM environment at runtime:
| Function | Signature | Description |
|---|---|---|
| `storage_get` | `(key_ptr, key_len) → (val_ptr, val_len)` | Read a value from contract storage |
| `storage_set` | `(key_ptr, key_len, val_ptr, val_len)` | Write a value to contract storage |
| `storage_delete` | `(key_ptr, key_len)` | Remove a key from contract storage |
| `get_caller` | `() → (addr_ptr)` | Get the address of the transaction sender |
| `get_block_slot` | `() → u64` | Current block slot number |
| `get_block_timestamp` | `() → u64` | Current block Unix timestamp |
| `emit_event` | `(topic_ptr, topic_len, data_ptr, data_len)` | Emit an indexed event/log |
| `transfer` | `(to_ptr, amount: u64) → i32` | Transfer MOLT from contract to address |
| `cross_call` | `(program_ptr, method_ptr, args_ptr, args_len) → (ret_ptr, ret_len)` | Call another deployed contract |
| `get_balance` | `(addr_ptr) → u64` | Read the balance of any account |
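A contract binds these host functions as WASM imports. The sketch below shows one way a Rust contract might do this; the import ABI (pointer widths, module naming) is an assumption, and off-chain the import is replaced by a stub so the safe wrapper can be exercised in a plain unit test:

```rust
// On wasm32, get_block_slot is provided by the MoltChain runtime.
#[cfg(target_arch = "wasm32")]
extern "C" {
    fn get_block_slot() -> u64;
}

// Off-chain stub (hypothetical value) so the wrapper is testable natively.
#[cfg(not(target_arch = "wasm32"))]
unsafe fn get_block_slot() -> u64 {
    42
}

/// Safe wrapper a contract would call instead of the raw import.
fn current_slot() -> u64 {
    unsafe { get_block_slot() }
}

fn main() {
    assert_eq!(current_slot(), 42); // stubbed value off-chain
}
```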
Gas Metering
Every WASM instruction consumes gas. The runtime injects metering checkpoints at basic block boundaries during AOT compilation. Base costs:
- Arithmetic/logic ops: 1 gas per instruction
- Memory load/store: 3 gas per instruction
- storage_get: 200 gas + 1 gas per byte read
- storage_set: 5,000 gas + 5 gas per byte written (new key) or 200 gas (update)
- emit_event: 375 gas + 8 gas per byte of event data
- cross_call: 2,500 gas base + callee gas
- transfer: 5,000 gas
The per-transaction gas limit is 200,000 by default. The per-block gas limit is 50,000,000.
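A quick worked example of the schedule: one new-key `storage_set` of a 64-byte value plus one `emit_event` carrying 32 bytes of data (constant names are illustrative; costs come from the list above):

```rust
const STORAGE_SET_NEW_BASE: u64 = 5_000; // new-key write
const STORAGE_SET_PER_BYTE: u64 = 5;
const EMIT_EVENT_BASE: u64 = 375;
const EMIT_EVENT_PER_BYTE: u64 = 8;
const TX_GAS_LIMIT: u64 = 200_000; // default per-transaction limit

fn main() {
    let set_gas = STORAGE_SET_NEW_BASE + STORAGE_SET_PER_BYTE * 64;
    assert_eq!(set_gas, 5_320); // 5000 + 5 × 64

    let event_gas = EMIT_EVENT_BASE + EMIT_EVENT_PER_BYTE * 32;
    assert_eq!(event_gas, 631); // 375 + 8 × 32

    // Together these use under 3% of the default per-transaction limit.
    assert!(set_gas + event_gas < TX_GAS_LIMIT);
}
```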
Memory Model
Each contract execution gets an isolated linear memory space starting at 1 page (64 KB) and expandable to 256 pages (16 MB) via memory.grow. Memory is reset between calls — contracts cannot persist state in WASM memory across transactions. Use storage_get/storage_set for persistence.
MoltyID Integration
MoltyID is MoltChain's on-chain identity and reputation layer. It is deeply integrated into the protocol — not just an application-level feature. Trust tiers directly affect transaction processing, fees, and rate limits at the consensus and mempool layers.
Trust Tiers
| Tier | Name | Score Range | Mempool Priority | Fee Discount | Rate Limit (tx/epoch) |
|---|---|---|---|---|---|
| 0 | Newcomer | 0 – 99 | 1.0× | 0% | 100 |
| 1 | Verified | 100 – 499 | 1.1× | 0% | 100 |
| 2 | Trusted | 500 – 999 | 1.25× | 500–749: 5% · 750–999: 7.5% | 200 |
| 3 | Established | 1,000 – 4,999 | 1.5× | 10% | 500 |
| 4 | Elite | 5,000 – 9,999 | 2.0× | 10% | 500 |
| 5 | Legendary | 10,000+ | 3.0× | 10% | 500 |
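The score ranges in the table map to tiers as a straightforward inclusive-range match (a minimal sketch; the function name is illustrative):

```rust
// Score-to-tier mapping read off the trust-tier table.
fn trust_tier(score: u32) -> u8 {
    match score {
        0..=99 => 0,        // Newcomer
        100..=499 => 1,     // Verified
        500..=999 => 2,     // Trusted
        1_000..=4_999 => 3, // Established
        5_000..=9_999 => 4, // Elite
        _ => 5,             // Legendary (10,000+)
    }
}

fn main() {
    assert_eq!(trust_tier(99), 0);
    assert_eq!(trust_tier(100), 1);
    assert_eq!(trust_tier(750), 2);
    assert_eq!(trust_tier(10_000), 5);
}
```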
How Reputation Affects the Stack
- Mempool: Transaction priority = fee × tier_multiplier. Tier 4+ transactions enter the express lane, which the block leader processes before the regular queue.
- Gas Pricing: Higher-tier accounts receive fee discounts applied at the protocol level. A Tier 5 account paying a 0.001 MOLT fee effectively pays 0.0009 MOLT.
- Rate Limiting: Each account is allowed a maximum number of transactions per epoch (432 slots ≈ 173 seconds). Exceeding the limit results in mempool rejection until the next epoch.
- Validator Selection: For validators, MoltyID score contributes to the community_score component (10% weight) in leader selection.
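The priority and discount arithmetic can be sketched with integer math (multipliers held in hundredths, so 1.25× becomes 125; helper names are illustrative):

```rust
// Tier multipliers from the trust-tier table, scaled by 100.
fn tier_multiplier_x100(tier: u8) -> u64 {
    match tier {
        0 => 100,
        1 => 110,
        2 => 125,
        3 => 150,
        4 => 200,
        _ => 300,
    }
}

/// priority = fee × tier_multiplier
fn priority(fee_shells: u64, tier: u8) -> u64 {
    fee_shells * tier_multiplier_x100(tier) / 100
}

fn main() {
    let fee = 1_000_000; // 0.001 MOLT in shells (1 MOLT = 1e9 shells)
    // Tier 5's 10% discount: 0.001 MOLT effectively becomes 0.0009 MOLT.
    assert_eq!(fee - fee * 10 / 100, 900_000);
    // Same fee, Legendary tier gets 3× the mempool priority of a Newcomer.
    assert_eq!(priority(fee, 5), 3 * priority(fee, 0));
}
```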
Privacy & Security
MoltChain's privacy architecture is centered on shielded transactions, nullifier-based double-spend prevention, and ZK proofs for note validity. The current implementation uses Groth16-based verification for shielded flows in production deployments.
Implementation Status (Current)
| Area | Status | Notes |
|---|---|---|
| Core ZK modules (core/src/zk/*) | Scaffolded | Pedersen, Merkle, note/key types, circuits, prover/verifier plumbing |
| Shielded pool contract (contracts/shielded_pool) | Production | Shielded state and transaction flows run with Groth16 proof verification |
| Runtime/RPC wiring | Partial | Explorer/Wallet-facing methods are expected, with staged backend integration in progress |
| Deployment/boot scripts | In progress | First-boot and deploy manifest integration for shielded pool is being completed |
Target ZK Architecture
- Proving system: Groth16 with trusted setup and succinct on-chain verification
- Curve: BN254 for broad tooling compatibility and efficient pairing checks
- Commitments: Pedersen commitments for note value hiding + binding
- Merkle tree: Poseidon-based tree and inclusion paths for shielded notes
- Nullifiers: Deterministic nullifier derivation to prevent double spends without revealing note contents
Shielded Transaction Flows
- Shield (transparent → shielded): Creates a new note commitment from public funds
- Unshield (shielded → transparent): Spends a note and reveals only the necessary public outputs
- Transfer (shielded → shielded): Consumes note inputs, emits new commitments, preserves value in-circuit
Security Properties
- Zero-knowledge: Proofs validate correctness without exposing note secrets
- Soundness: Invalid witness data cannot satisfy circuit constraints
- Double-spend resistance: One-time nullifier publication prevents note replay
- Value conservation: Circuit constraints enforce that spent and created value balances
Explorer and Docs Separation
Explorer pages are being kept telemetry-first (stats, transactions, nullifier checks). Architecture education and implementation details live in the Website and Developer Portal so chain observability stays cleanly separated from protocol documentation.
Keypair Encryption at Rest
CLI-managed keypairs are encrypted with AES-256-GCM (authenticated encryption with 12-byte random nonce). An encryption_version field in the keypair JSON enables backwards compatibility: v1 = legacy XOR cipher, v2 = AES-GCM. Legacy keypairs are auto-upgraded on next use.
Faucet CORS Policy
The testnet faucet enforces an explicit origin allowlist (localhost dev ports + production domains) with restricted HTTP methods (GET/POST/OPTIONS) — replacing the previous permissive policy.
Performance Benchmarks
Benchmarks measured with Criterion.rs on the reference hardware (Apple M-series, single-threaded). Source: core/benches/processor_bench.rs.
| Benchmark | Result | Notes |
|---|---|---|
| Single transaction processing | ~28 µs | ~35,000 tx/s single-threaded |
| 10-tx batch | ~280 µs | Linear scaling |
| 50-tx batch | ~1.4 ms | No contention overhead |
| Block creation (100 tx) | ~3 ms | Includes Merkle root computation |
| Block creation (500 tx) | ~15 ms | Well within 400ms slot target |
| Ed25519 signature verify | ~21 µs | ~47,000 verifies/s |
| Batch verify (100 sigs) | ~2.1 ms | Linear, no batching optimization yet |
Run `cargo bench --bench processor_bench` from the workspace root to reproduce. Results are saved to target/criterion/ with HTML reports.
Network Topology
MoltChain's network layer handles peer discovery, data propagation, and client connections.
Gossip Protocol
Nodes communicate over a structured gossip protocol built on libp2p. Every node maintains connections to 8–12 peers. Data propagation uses eager push with lazy pull fallback:
- Eager push: Blocks and high-priority transactions are immediately forwarded to all connected peers
- Lazy pull: Nodes periodically exchange inventory hashes with peers and request any missing data
Average block propagation reaches 95% of nodes within 200ms (for a 1,000-node network).
Peer Discovery
New nodes discover peers through three mechanisms:
- Bootstrap nodes: Hardcoded seed nodes for initial connection (configurable per network)
- Peer exchange (PEX): Connected nodes periodically share their known peer lists
- DHT: A Kademlia-based distributed hash table for decentralized peer lookup
Bootstrap Nodes
    # Mainnet bootstrap nodes
    bootstrap_peers = [
        "/dns4/boot1.moltchain.io/tcp/8001/p2p/12D3KooW...",
        "/dns4/boot2.moltchain.io/tcp/8001/p2p/12D3KooW...",
        "/dns4/boot3.moltchain.io/tcp/8001/p2p/12D3KooW...",
    ]

    # Testnet bootstrap nodes
    bootstrap_peers = [
        "/dns4/testnet-boot1.moltchain.io/tcp/8001/p2p/12D3KooW...",
        "/dns4/testnet-boot2.moltchain.io/tcp/8001/p2p/12D3KooW...",
    ]
Block Sync
When a node falls behind (e.g., after restart or joining late), it uses snapshot sync to catch up efficiently:
- Download the latest finalized state snapshot (compressed RocksDB checkpoint) from a trusted peer
- Verify the snapshot against the Merkle state root in the last finalized block header
- Apply any remaining blocks since the snapshot was created
- Switch to normal gossip-based block following
Full state snapshots are generated every 1,000 slots (~6.7 minutes) by archive nodes. Typical sync time for a new node is under 2 minutes for testnet.
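The catch-up decision above can be sketched as follows. The 1,000-slot snapshot interval comes from the text; using it as the replay-vs-snapshot threshold is an illustrative assumption, as are the type and function names:

```rust
const SNAPSHOT_INTERVAL_SLOTS: u64 = 1_000;

enum SyncPlan {
    /// Slightly behind: replay blocks from gossip/peers.
    Replay { from_slot: u64 },
    /// Far behind: fetch the latest snapshot, then replay the remainder.
    Snapshot { snapshot_slot: u64, then_replay_from: u64 },
}

fn plan_sync(local_slot: u64, network_slot: u64) -> SyncPlan {
    let behind = network_slot.saturating_sub(local_slot);
    if behind <= SNAPSHOT_INTERVAL_SLOTS {
        SyncPlan::Replay { from_slot: local_slot + 1 }
    } else {
        // Most recent snapshot boundary at or below the network tip.
        let snapshot_slot = (network_slot / SNAPSHOT_INTERVAL_SLOTS) * SNAPSHOT_INTERVAL_SLOTS;
        SyncPlan::Snapshot { snapshot_slot, then_replay_from: snapshot_slot + 1 }
    }
}

fn main() {
    // Far behind: take the snapshot at slot 5,000, then replay 5,001 onward.
    assert!(matches!(
        plan_sync(10, 5_300),
        SyncPlan::Snapshot { snapshot_slot: 5_000, then_replay_from: 5_001 }
    ));
    // Only 50 slots behind: plain block replay suffices.
    assert!(matches!(plan_sync(5_250, 5_300), SyncPlan::Replay { from_slot: 5_251 }));
}
```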
RPC & WebSocket Endpoints
Every full node exposes two client-facing interfaces:
- JSON-RPC (HTTP POST, default port 8899) — Request/response queries and transaction submission. See API Reference.
- WebSocket (default port 8900) — Persistent subscriptions for real-time block, transaction, and log events. See WebSocket Reference.