The DLMM SDK employs sophisticated data composition patterns to transform raw blockchain data into application-ready structures. Understanding these patterns helps developers build more efficient, maintainable, and robust DeFi applications.

Why Data Composition Matters

Raw blockchain data is optimized for storage and consensus, not application consumption. DLMM pools store data across multiple accounts, use different numeric formats, and require complex calculations to derive meaningful metrics. Data composition bridges this gap.

The Raw Data Challenge

Blockchain Storage Reality:
  • Pool state scattered across multiple accounts
  • Numbers stored as big integers for precision
  • Bin data stored in compressed arrays
  • No human-readable token symbols or metadata
  • Time-sensitive data mixed with static configuration
Application Needs:
  • Single unified data structures
  • Human-readable numbers and formats
  • Real-time calculated metrics and analytics
  • Type-safe interfaces with validation
  • Cached and optimized data access patterns
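As a concrete example of the gap between storage format and application needs, here is a minimal sketch of converting a raw on-chain amount (a big integer in token base units) into a human-readable decimal string. The helper name and shape are illustrative, not part of the SDK:

```typescript
// Hypothetical helper: convert a raw on-chain amount (bigint, token base
// units) into a human-readable decimal string, given the token's decimals.
function toReadableAmount(raw: bigint, decimals: number): string {
  const base = 10n ** BigInt(decimals);
  const whole = raw / base;
  // Pad the fractional part to full width, then strip trailing zeros.
  const frac = (raw % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac ? `${whole}.${frac}` : whole.toString();
}

// 1_234_500_000 base units of a 9-decimal token → "1.2345"
console.log(toReadableAmount(1234500000n, 9));
```

The same idea generalizes to prices and liquidity figures: keep big integers internally for precision, and convert only at the presentation boundary.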

Composition Architecture Patterns

1. Layered Composition Pattern

The SDK uses a layered approach where each layer adds value and abstraction:
┌─────────────────────────────────────┐
│ Application Layer                   │
│ • Business logic                    │
│ • User interfaces                   │
│ • Analytics dashboards              │
└─────────────────────────────────────┘

┌─────────────────────────────────────┐
│ Composition Layer                   │
│ • Data aggregation                  │
│ • Metric calculations               │
│ • Format transformations            │
└─────────────────────────────────────┘

┌─────────────────────────────────────┐
│ SDK Abstraction Layer               │
│ • Type safety                       │
│ • Error handling                    │
│ • Caching strategies                │
└─────────────────────────────────────┘

┌─────────────────────────────────────┐
│ Blockchain Data Layer               │
│ • Pool accounts                     │
│ • Bin arrays                        │
│ • Position accounts                 │
└─────────────────────────────────────┘

2. Aggregation Pattern

Multiple data sources are combined efficiently.

Parallel Data Fetching:
// Instead of sequential calls (slow):
const poolInfo = await getPoolInfo();
const poolState = await getPoolState(); 
const binData = await getBinData();

// Use parallel aggregation (fast):
const [poolInfo, poolState, binData] = await Promise.all([
  getPoolInfo(),
  getPoolState(),
  getBinData()
]);

const composedData = aggregatePoolData(poolInfo, poolState, binData);
Why This Pattern Works:
  • Parallel Execution: Multiple RPC calls execute simultaneously
  • Failure Isolation: With Promise.allSettled, one failed call doesn’t block the others (plain Promise.all rejects as soon as any call fails)
  • Dependency Management: Clear separation of independent vs dependent data
  • Performance Optimization: Reduces total wait time by 60-80%
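Because `Promise.all` rejects as soon as any input rejects, failure isolation requires `Promise.allSettled`. A self-contained sketch, where the fetchers are local stubs standing in for the SDK calls above:

```typescript
// Local stubs standing in for real RPC-backed fetchers.
async function getPoolInfo() { return { name: "SOL/USDC" }; }
async function getPoolState(): Promise<{ activeId: number }> {
  throw new Error("RPC timeout"); // simulate one failing call
}
async function getBinData() { return [{ binId: 0, liquidity: 100n }]; }

// Failure-isolated aggregation: a rejected call yields null instead of
// aborting the whole composition.
async function aggregateWithIsolation() {
  const [info, state, bins] = await Promise.allSettled([
    getPoolInfo(), getPoolState(), getBinData(),
  ]);
  return {
    poolInfo: info.status === "fulfilled" ? info.value : null,
    poolState: state.status === "fulfilled" ? state.value : null,
    binData: bins.status === "fulfilled" ? bins.value : null,
  };
}
```

Callers can then decide per field whether a missing piece is fatal or merely degrades the view.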

3. Transformation Pipeline Pattern

Raw data flows through transformation stages:
class DataTransformationPipeline {
  // Stage 1: Raw data extraction
  extract(rawAccountData: any) {
    return {
      poolInfo: parsePoolAccount(rawAccountData.pool),
      binArrays: parseBinArrays(rawAccountData.bins),
      tokenMetadata: parseTokenMetadata(rawAccountData.tokens)
    };
  }

  // Stage 2: Type conversion and validation
  transform(extractedData: any) {
    return {
      poolInfo: validateAndConvertPoolInfo(extractedData.poolInfo),
      liquidity: calculateLiquidityMetrics(extractedData.binArrays),
      pricing: calculatePricingData(extractedData.poolInfo, extractedData.binArrays)
    };
  }

  // Stage 3: Composition and enrichment
  compose(transformedData: any) {
    return {
      ...transformedData,
      analytics: calculateAnalytics(transformedData),
      metadata: enrichWithMetadata(transformedData),
      timestamp: Date.now()
    };
  }
}
Pipeline Benefits:
  • Modularity: Each stage has single responsibility
  • Testability: Individual stages can be unit tested
  • Flexibility: Easy to modify or extend pipeline stages
  • Debugging: Clear visibility into transformation process
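The stage methods above depend on SDK-specific parsers. As a self-contained illustration of the same idea, stages can be composed as plain functions; all names here are toy stand-ins, not SDK calls:

```typescript
// A stage is just a function from one representation to the next.
type Stage<A, B> = (input: A) => B;

// Compose three stages (extract → transform → compose) into one pipeline.
function pipeline<A, B, C, D>(
  s1: Stage<A, B>, s2: Stage<B, C>, s3: Stage<C, D>
): Stage<A, D> {
  return (input) => s3(s2(s1(input)));
}

// Toy stages mirroring the extract/transform/compose shape above.
const run = pipeline(
  (raw: { price: string }) => ({ price: Number(raw.price) }),            // extract
  (x: { price: number }) => ({ price: x.price, doubled: x.price * 2 }),  // transform
  (x: { price: number; doubled: number }) => ({ ...x, timestamp: Date.now() }), // enrich
);
```

Each stage stays independently testable, and swapping or inserting a stage only touches the `pipeline(...)` call site.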

4. Caching Strategy Pattern

Different data types require different caching approaches:
class StrategicCacheManager {
  // Inject any backing store that supports set(key, value, { ttl: milliseconds })
  constructor(private cache: { set(key: string, value: unknown, opts: { ttl: number }): void }) {}
  // Static data: Cache for hours
  cacheMetadata(poolAddress: string, data: PoolMetadata) {
    this.cache.set(`metadata:${poolAddress}`, data, { ttl: 3600000 }); // 1 hour
  }

  // Dynamic data: Cache for minutes  
  cachePoolState(poolAddress: string, data: PoolState) {
    this.cache.set(`state:${poolAddress}`, data, { ttl: 60000 }); // 1 minute
  }

  // Volatile data: Cache for seconds
  cachePricing(poolAddress: string, data: PricingData) {
    this.cache.set(`pricing:${poolAddress}`, data, { ttl: 10000 }); // 10 seconds
  }
}
Why Tiered Caching Works:
  • Efficiency: Frequently accessed static data cached longer
  • Accuracy: Volatile data refreshed frequently
  • Resource Optimization: Balances performance vs freshness
  • Cost Reduction: Fewer expensive RPC calls
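A minimal backing cache for the tiered TTLs above might look like the following sketch; the `set(key, value, { ttl })` shape is an assumption, not a specific SDK API:

```typescript
// Minimal TTL cache: entries expire after their time-to-live and are
// evicted lazily on read.
class TTLCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  set(key: string, value: V, opts: { ttl: number }): void {
    this.store.set(key, { value, expiresAt: Date.now() + opts.ttl });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazy eviction on read
      return undefined;
    }
    return entry.value;
  }
}
```

A production cache would add size limits and active eviction, but the per-tier TTL idea is exactly this: the same store, different `ttl` per data class.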

Advanced Composition Patterns

5. Dependency Resolution Pattern

Complex data often depends on other data being fetched first:
class DependencyResolver {
  async resolvePoolData(poolAddress: string) {
    // Level 1: Independent data (parallel)
    const [poolInfo, rawBinData] = await Promise.all([
      this.fetchPoolInfo(poolAddress),
      this.fetchRawBinData(poolAddress)
    ]);

    // Level 2: Dependent data (requires Level 1)
    const [poolState, liquidityDistribution] = await Promise.all([
      this.fetchPoolState(poolAddress, poolInfo),
      this.calculateLiquidity(rawBinData, poolInfo)
    ]);

    // Level 3: Highly dependent data (requires Level 1 & 2)
    const pricingData = await this.calculatePricing(
      poolInfo, 
      poolState, 
      liquidityDistribution
    );

    return this.composeComplete(poolInfo, poolState, liquidityDistribution, pricingData);
  }
}
Dependency Resolution Benefits:
  • Optimal Parallelization: Maximum concurrent operations at each level
  • Clear Dependencies: Explicit dependency relationships
  • Error Handling: Failures at each level handled appropriately
  • Performance: Minimizes sequential bottlenecks

6. Partial Composition Pattern

Allow applications to request only needed data components:
interface CompositionOptions {
  includeLiquidityDistribution?: boolean;
  includePricingData?: boolean;
  binRange?: number;
  calculateMetrics?: string[];
}

class PartialComposer {
  async compose(poolAddress: string, options: CompositionOptions) {
    const result: any = {};

    // Always include basic pool info
    result.pool = await this.getPoolInfo(poolAddress);

    // Conditionally include expensive operations
    if (options.includeLiquidityDistribution) {
      result.liquidity = await this.getLiquidityDistribution(
        poolAddress, 
        options.binRange || 50
      );
    }

    if (options.includePricingData) {
      result.pricing = await this.getPricingData(poolAddress);
    }

    if (options.calculateMetrics) {
      result.metrics = await this.calculateSpecificMetrics(
        poolAddress, 
        options.calculateMetrics
      );
    }

    return result;
  }
}
Partial Composition Advantages:
  • Performance: Only fetch/calculate required data
  • Flexibility: Different views need different data granularity
  • Resource Efficiency: Reduce bandwidth and compute costs
  • User Experience: Faster loading for lightweight operations

7. Event-Driven Composition Pattern

Respond to data changes with smart recomposition:
import { EventEmitter } from 'events';

class EventDrivenComposer {
  private eventBus = new EventEmitter();

  constructor() {
    this.setupEventHandlers();
  }

  private setupEventHandlers() {
    // When pool state changes, update dependent calculations
    this.eventBus.on('poolStateChanged', async (poolAddress) => {
      await this.recomposePricingData(poolAddress);
      await this.updateLiquidityMetrics(poolAddress);
    });

    // When bin data changes, update distribution analysis
    this.eventBus.on('binDataChanged', async (poolAddress, binIds) => {
      await this.recomposeLiquidityDistribution(poolAddress, binIds);
    });

    // When new position created, update position analytics
    this.eventBus.on('positionCreated', async (poolAddress, positionId) => {
      await this.composePositionMetrics(poolAddress, positionId);
    });
  }

  async handleDataUpdate(dataType: string, poolAddress: string, newData: any) {
    // Update cached data
    await this.updateCache(dataType, poolAddress, newData);
    
    // Trigger dependent recomposition
    this.eventBus.emit(`${dataType}Changed`, poolAddress);
    
    // Notify subscribers
    this.notifySubscribers(poolAddress, dataType);
  }
}
Event-Driven Benefits:
  • Efficiency: Only recompute affected calculations
  • Consistency: Ensures all dependent data stays synchronized
  • Real-time Updates: Applications stay current with minimal overhead
  • Scalability: Handles complex dependency graphs gracefully

Why These Patterns Matter

For Application Performance

Without Composition Patterns:
// Naive approach - slow and error-prone
async function getPoolAnalytics(poolAddress: string) {
  const pool = new DLMM(connection, poolAddress);
  
  const poolInfo = await pool.getPoolInfo();         // ~200ms
  const poolState = await pool.getPoolState();       // ~200ms
  const activeId = poolState.activeId;
  const bin1 = await pool.getBin(activeId - 10);     // ~150ms
  const bin2 = await pool.getBin(activeId - 9);      // ~150ms
  // ... 18 more sequential bin calls ≈ 2,700ms
  const bin21 = await pool.getBin(activeId + 10);    // ~150ms
  
  // Manual calculations
  const price = calculatePrice(poolState.activeId);
  const liquidity = sumBinLiquidity([bin1, bin2, /* ... */, bin21]);
  
  return { poolInfo, poolState, price, liquidity }; // Total: ~3,550ms
}
With Composition Patterns:
// Optimized approach - fast and robust
async function getPoolAnalytics(poolAddress: string) {
  const composer = new OptimizedDataComposer();
  
  return await composer.composePoolData(poolAddress); // Total: ~800ms
}
Performance Improvement: total execution time drops by roughly 75-85%

For Code Maintainability

Pattern Benefits:
  • Single Responsibility: Each pattern handles one concern
  • Testability: Individual components easily unit tested
  • Extensibility: New data sources integrate cleanly
  • Error Handling: Centralized error recovery strategies

For Developer Experience

Consistent APIs:
// All composition methods follow same pattern
const poolData = await composer.composePoolData(poolAddress);
const userPortfolio = await composer.composeUserPortfolio(userAddress);
const marketAnalysis = await composer.composeMarketAnalysis(timeframe);
Type Safety:
// Composition ensures complete type safety
interface ComposedPoolData {
  pool: PoolInfo;      // Always present
  state: PoolState;    // Always present  
  liquidity?: LiquidityDistribution; // Optional
  pricing?: PricingData;             // Optional
}
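Since `liquidity` and `pricing` are optional, consumers should narrow before use. A hypothetical type guard (the field shapes are illustrative) makes this ergonomic:

```typescript
// Illustrative shapes mirroring the ComposedPoolData interface above.
interface LiquidityDistribution { totalBins: number }
interface ComposedPoolData {
  pool: { address: string };
  state: { activeId: number };
  liquidity?: LiquidityDistribution;
}

// Type guard: after this check, TypeScript knows `liquidity` is present.
function hasLiquidity(
  d: ComposedPoolData
): d is ComposedPoolData & { liquidity: LiquidityDistribution } {
  return d.liquidity !== undefined;
}

const data: ComposedPoolData = {
  pool: { address: "pool1" },
  state: { activeId: 100 },
};
// Inside `if (hasLiquidity(data))`, data.liquidity.totalBins is safe to read.
```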

Design Trade-offs and Decisions

Memory vs Speed

Trade-off: Caching improves speed but increases memory usage.
Decision: Tiered caching with size limits.
  • High-frequency data: Small cache with short TTL
  • Low-frequency data: Larger cache with longer TTL
  • Automatic eviction prevents memory overflow

Accuracy vs Performance

Trade-off: Real-time data is more accurate but slower to fetch.
Decision: Configurable staleness tolerance.
  • Critical operations: Always fetch fresh data
  • Analytics: Accept 30-60 second staleness
  • Overview displays: Accept 5-minute staleness
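One way to implement configurable staleness (illustrative, not a specific SDK API) is to wrap a fetcher so each caller passes its own tolerance, with `0` forcing a fresh fetch:

```typescript
type Entry<T> = { value: T; fetchedAt: number };

// Wrap a fetcher so each call site chooses its own staleness tolerance.
function makeStaleTolerantFetcher<T>(fetcher: () => Promise<T>) {
  let entry: Entry<T> | null = null;
  return async (maxAgeMs: number): Promise<T> => {
    if (entry && Date.now() - entry.fetchedAt < maxAgeMs) {
      return entry.value; // fresh enough for this caller's tolerance
    }
    entry = { value: await fetcher(), fetchedAt: Date.now() };
    return entry.value;
  };
}

// Critical path passes 0 (always refetch); analytics can pass 60_000.
```

The same fetcher then serves critical operations, analytics, and overview displays with one cache but three different freshness contracts.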

Complexity vs Flexibility

Trade-off: Simple APIs are easier to use but less flexible.
Decision: Layered API design.
  • Simple methods for common use cases
  • Advanced methods for custom requirements
  • Escape hatches for direct blockchain access

Implementation Guidelines

When to Use Each Pattern

  • Layered Composition: Always; it is the fundamental architecture
  • Aggregation: Multi-source data requirements
  • Transformation Pipeline: Complex data processing needs
  • Strategic Caching: Performance-critical applications
  • Dependency Resolution: Complex interdependent data
  • Partial Composition: Variable performance requirements
  • Event-Driven: Real-time applications with subscriptions

Anti-Patterns to Avoid

Sequential Data Fetching:
// ❌ Don't do this
const info = await getPoolInfo();
const state = await getPoolState();
const bins = await getBinData();
Over-Caching:
// ❌ Don't cache everything forever
cache.set(key, data, Infinity); // Never expires
Monolithic Composition:
// ❌ Don't put everything in one giant method
async function getEverything() {
  // 500 lines of mixed concerns
}
Ignoring Dependencies:
// ❌ Don't ignore data dependencies
const pricing = calculatePricing(); // Needs pool data first!
const poolData = getPoolData();     // Too late!

Future Evolution

Emerging Patterns

  • Stream-Based Composition: Real-time data streams with reactive updates
  • ML-Enhanced Caching: Machine learning to predict data access patterns
  • Cross-Chain Composition: Unified data from multiple blockchain sources
  • Decentralized Caching: Shared cache layers across applications

Scalability Considerations

As DeFi ecosystems grow, composition patterns must evolve:
  • Horizontal Scaling: Distribute composition across multiple services
  • Edge Computing: Push composition closer to users geographically
  • Protocol Aggregation: Compose data from multiple DeFi protocols
  • Interoperability: Standard composition interfaces across ecosystems
The DLMM SDK’s composition patterns provide a foundation for building scalable, performant DeFi applications while maintaining code quality and developer productivity. Understanding these patterns enables developers to make informed architectural decisions and leverage the full power of concentrated liquidity markets.