How Stellar's Roadmap to 5000 TPS Impacts Your API Integrations
The Stellar Development Foundation has outlined ambitious scalability goals: achieving approximately 5,000 transactions per second (TPS) on mainnet. That is roughly a 25-50x increase over today's typical throughput. Here's what this means for developers building on Stellar's API infrastructure.
Current State vs. Future State
Where We Are Today
Stellar mainnet currently processes around 100-200 TPS during peak periods, with a theoretical capacity of roughly 1,000 TPS.
Where We're Heading
The SDF's scalability roadmap targets roughly 5,000 TPS on mainnet, driven by larger ledgers and parallel transaction validation.
Technical Changes Driving Scale
1. Parallel Transaction Validation
Today, Stellar processes transactions sequentially within a ledger. The new architecture introduces parallel validation across shards:
Current Model:
┌─────────────────────────────────────────┐
│ Ledger N │
│ Tx1 → Tx2 → Tx3 → ... → TxN (sequential)│
└─────────────────────────────────────────┘
Future Model:
┌─────────────────────────────────────────┐
│ Ledger N │
│ ┌────────┐ ┌────────┐ ┌────────┐ │
│ │ Shard 1│ │ Shard 2│ │ Shard 3│ (parallel)
│ │Tx1,Tx4 │ │Tx2,Tx5 │ │Tx3,Tx6 │ │
│ └────────┘ └────────┘ └────────┘ │
└─────────────────────────────────────────┘

API Impact: Transaction ordering within a ledger may become non-deterministic for independent transactions.
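If ordering within a ledger becomes non-deterministic, any client logic that depends on a stable order should impose one explicitly rather than trusting response order. A minimal sketch (the `LedgerTx` shape is simplified; `paging_token` is the real Horizon field) sorting by paging token:

```typescript
// Simplified view of a Horizon transaction record.
interface LedgerTx {
  paging_token: string; // numeric string assigned by Horizon
  hash: string;
}

// Impose a stable order by paging_token instead of relying on the
// order transactions happen to arrive in. Compare as BigInt because
// lexicographic string comparison misorders tokens of different lengths.
function inStableOrder(txs: LedgerTx[]): LedgerTx[] {
  return [...txs].sort((a, b) => {
    const d = BigInt(a.paging_token) - BigInt(b.paging_token);
    return d < 0n ? -1 : d > 0n ? 1 : 0;
  });
}
```

This keeps downstream processing deterministic regardless of how the network schedules independent transactions.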
2. Expanded Ledger Capacity
To support 5,000 TPS with 5-second ledgers, each ledger must contain ~25,000 transactions:
| Metric | Current | 5K TPS Target |
|---|---|---|
| Transactions/ledger | ~500-1000 | ~25,000 |
| Ledger size | ~1MB | ~10-20MB |
| Operations/ledger | ~2000 | ~100,000 |
API Impact: Paginated endpoints will return far more pages per ledger, so clients that buffer an entire ledger in memory will need to stream results instead (see Update 1 below).
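The transactions-per-ledger figure in the table follows directly from the target TPS and the ledger close time; a one-liner makes the arithmetic explicit:

```typescript
// Transactions each ledger must hold = target TPS x seconds per ledger.
function txPerLedger(tps: number, ledgerSeconds: number): number {
  return tps * ledgerSeconds;
}

// 5,000 TPS with 5-second ledgers -> 25,000 transactions per ledger.
```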
3. Optimized Soroban VM
Smart contract execution is also being optimized for higher throughput and parallel execution.
API Impact: simulateTransaction responses will include new fields for parallel execution hints.
Preparing Your API Integrations
Update 1: Handle Increased Data Volume
Current approach (may break):
// This might timeout or OOM at 5K TPS
async function getAllTransactions(ledger: number) {
const response = await fetch(
`${HORIZON_URL}/ledgers/${ledger}/transactions?limit=200`
);
const data = await response.json();
// Simple pagination - loads everything into memory
let allTxs = data._embedded.records;
let nextUrl = data._links.next?.href;
while (nextUrl) {
const next = await fetch(nextUrl);
const nextData = await next.json();
allTxs = allTxs.concat(nextData._embedded.records);
nextUrl = nextData._links.next?.href;
}
return allTxs; // Could be 25,000+ records!
}

Future-proof approach:
// Stream-based processing
async function* streamTransactions(ledger: number) {
let cursor = '';
while (true) {
const url = new URL(`${HORIZON_URL}/ledgers/${ledger}/transactions`);
url.searchParams.set('limit', '200');
url.searchParams.set('order', 'asc');
if (cursor) url.searchParams.set('cursor', cursor);
const response = await fetch(url);
const data = await response.json();
const records = data._embedded.records;
for (const tx of records) {
yield tx; // Process one at a time
}
if (records.length < 200) break;
cursor = records[records.length - 1].paging_token;
}
}
// Usage - memory efficient
async function processLedger(ledger: number) {
for await (const tx of streamTransactions(ledger)) {
await processSingleTransaction(tx);
}
}

Update 2: Implement Robust Connection Pooling
At 5K TPS, your application will need efficient connection management:
// connection-pool.ts
// Note: passing `agent` assumes a fetch implementation that supports it,
// such as node-fetch; Node's built-in fetch (undici) takes a `dispatcher` instead.
import { Agent } from 'https';
const horizonAgent = new Agent({
keepAlive: true,
maxSockets: 50, // Cap concurrent sockets per host (Node's default is unbounded)
maxFreeSockets: 10,
timeout: 30000,
});
const rpcAgent = new Agent({
keepAlive: true,
maxSockets: 100, // Higher for RPC due to simulation load
maxFreeSockets: 20,
timeout: 60000, // Longer for complex simulations
});
export async function horizonFetch(path: string, options: RequestInit = {}) {
return fetch(`${HORIZON_URL}${path}`, {
...options,
// @ts-ignore - Node.js specific
agent: horizonAgent,
headers: {
'X-API-Key': process.env.LUMENQUERY_API_KEY!,
...options.headers,
},
});
}
export async function rpcFetch(method: string, params: any) {
return fetch(STELLAR_RPC_URL, {
method: 'POST',
// @ts-ignore - Node.js specific
agent: rpcAgent,
headers: {
'Content-Type': 'application/json',
'X-API-Key': process.env.LUMENQUERY_API_KEY!,
},
body: JSON.stringify({
jsonrpc: '2.0',
id: Date.now(),
method,
params,
}),
});
}

Update 3: Prepare for New Response Fields
The Stellar RPC may include new fields for scalability features:
// Future simulateTransaction response
interface SimulateTransactionResult {
// Existing fields
transactionData: string;
minResourceFee: string;
events: string[];
// New scalability-related fields (coming soon)
parallelizationHint?: {
conflictingAccounts: string[];
conflictingContracts: string[];
canParallelize: boolean;
};
executionMetrics?: {
cpuInstructions: number;
memoryBytes: number;
estimatedWallTime: number; // New: actual execution time estimate
};
}
// Future-proof parsing
function parseSimulation(result: any): SimulateTransactionResult {
return {
transactionData: result.transactionData,
minResourceFee: result.minResourceFee,
events: result.events || [],
// Gracefully handle new fields
parallelizationHint: result.parallelizationHint,
executionMetrics: result.executionMetrics,
};
}

Update 4: Implement Adaptive Rate Limiting
With increased throughput, API providers may adjust rate limits:
// adaptive-rate-limiter.ts
class AdaptiveRateLimiter {
private tokens: number;
private maxTokens: number;
private refillRate: number;
private lastRefill: number;
private backoffMultiplier: number = 1;
constructor(maxTokens = 100, refillRate = 10) {
this.tokens = maxTokens;
this.maxTokens = maxTokens;
this.refillRate = refillRate;
this.lastRefill = Date.now();
}
private refill() {
const now = Date.now();
const elapsed = (now - this.lastRefill) / 1000;
this.tokens = Math.min(
this.maxTokens,
this.tokens + elapsed * this.refillRate
);
this.lastRefill = now;
}
async acquire(): Promise<void> {
this.refill();
if (this.tokens < 1) {
const waitTime = ((1 - this.tokens) / this.refillRate) * 1000;
await new Promise(r => setTimeout(r, waitTime * this.backoffMultiplier));
this.refill();
}
this.tokens -= 1;
}
// Call when receiving 429 responses
backoff() {
this.backoffMultiplier = Math.min(this.backoffMultiplier * 2, 32);
}
// Call on successful requests
reset() {
this.backoffMultiplier = 1;
}
}
const limiter = new AdaptiveRateLimiter();
export async function rateLimitedFetch(url: string, options?: RequestInit) {
await limiter.acquire();
const response = await fetch(url, options);
if (response.status === 429) {
limiter.backoff();
throw new Error('Rate limited');
}
limiter.reset();
return response;
}

Infrastructure Implications
Running Your Own Nodes
At 5K TPS, node requirements increase significantly:
| Component | Current | 5K TPS Ready |
|---|---|---|
| CPU | 4 cores | 16+ cores |
| RAM | 16GB | 64GB+ |
| Storage | 2TB SSD | 10TB NVMe |
| Network | 100Mbps | 1Gbps+ |
| Horizon DB | 1TB | 5TB+ |
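The storage line in the table becomes clearer with a back-of-the-envelope calculation. Assuming the illustrative ledger sizes from the earlier table (these are not official requirements, and actual growth depends on retention settings and history archives), raw ledger history accumulates quickly:

```typescript
// Rough daily ledger-history growth, given a ledger size and close time.
function ledgerHistoryPerDayGB(ledgerSizeMB: number, ledgerSeconds: number): number {
  const ledgersPerDay = 86_400 / ledgerSeconds; // seconds in a day / close time
  return (ledgerSizeMB * ledgersPerDay) / 1024; // MB -> GB
}

// ~20MB ledgers every 5 seconds -> roughly 337 GB of raw history per day,
// which is why full-history storage requirements jump at 5K TPS.
```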
Managed Infrastructure Benefits
For most teams, managed infrastructure becomes more attractive at scale:
LumenQuery handles connection pooling, retry logic, regional failover, and rate-limit compliance automatically:
// Simple configuration for managed infrastructure
const config = {
horizonUrl: 'https://api.lumenquery.io',
rpcUrl: 'https://rpc.lumenquery.io',
apiKey: process.env.LUMENQUERY_API_KEY,
// Automatic handling of:
// - Connection pooling
// - Retry logic
// - Regional failover
// - Rate limit compliance
};

Timeline and Migration Path
Expected Rollout
| Phase | Timeline | Changes |
|---|---|---|
| Phase 1 | Q2 2026 | Increased ledger size (2MB → 5MB) |
| Phase 2 | Q3 2026 | Parallel validation (subset of txs) |
| Phase 3 | Q4 2026 | Full parallel processing |
| Phase 4 | 2027 | Sub-second finality exploration |
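Rather than hard-coding these rollout dates, clients can gate new behavior on what the RPC actually reports. A sketch using the real `getVersionInfo` RPC method; the `PARALLEL_VALIDATION_PROTOCOL` threshold below is a placeholder for illustration, not an announced protocol number:

```typescript
interface VersionInfo {
  version: string;
  protocolVersion: number;
}

// Placeholder threshold -- replace with the protocol version that
// actually ships parallel validation once it is announced.
const PARALLEL_VALIDATION_PROTOCOL = 24;

// Pure gating logic, kept separate so it is easy to test.
function gateByProtocol(info: VersionInfo): boolean {
  return info.protocolVersion >= PARALLEL_VALIDATION_PROTOCOL;
}

// Probe the RPC for its reported protocol version.
async function fetchVersionInfo(rpcUrl: string): Promise<VersionInfo> {
  const res = await fetch(rpcUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'getVersionInfo' }),
  });
  const { result } = (await res.json()) as { result: VersionInfo };
  return result;
}
```

Feature detection like this keeps integrations working whether a phase ships early, late, or behind schedule.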
Migration Checklist
✅ Now: adopt stream-based pagination and connection pooling (Updates 1-2)
✅ Q2 2026: load-test against larger ledgers and verify memory and timeout headroom
✅ Q3 2026: remove any reliance on intra-ledger transaction ordering; parse new simulation fields defensively (Update 3)
Conclusion
Stellar's path to 5,000 TPS is an exciting development that will unlock new use cases—high-frequency trading, mass micropayments, and enterprise-scale applications. Prepare your integrations now: stream instead of buffering, pool your connections, parse responses defensively, and back off adaptively when rate limited.
The future of Stellar is high-throughput, and your applications should be ready.
*Want infrastructure that scales with Stellar? LumenQuery is built for the 5K TPS future—automatic scaling, global distribution, and zero ops burden.*