feat: add claimsync package and integrate with bridgesync#1539

Draft
joanestebanr wants to merge 7 commits into develop from feat/claim_syncs_required_by_aggsender-pm285

Conversation

@joanestebanr
Collaborator

  • Add new claimsync package with its own storage (SQLite), processor, embedded mode, and ClaimsReader interface
  • Remove duplicated claim event handlers from bridgesync/downloader.go; bridgesync now delegates to claimsync via ClaimsSyncProcessor interface
  • Add ProcessBlockWithTx(insertBlock bool) to allow shared tx reuse between bridgesync and claimsync (atomic writes)
  • Add ReorgWithTx to ClaimsSyncProcessor so bridgesync can call claimsync reorg within its own transaction
  • Add ClaimsReader interface grouping read-only claim queries

🔄 Changes Summary

  • [Brief description of what was added, fixed, or changed in this PR]

⚠️ Breaking Changes

  • 🛠️ Config: [Optional: Changes to TOML config]
  • 🔌 API/CLI: [Optional: Breaking interface changes]
  • 🗑️ Deprecated Features: [Optional: Removed features]

📋 Config Updates

  • 🧾 Diff/Config snippet: [Optional: Enumerate config changes/provide snippet of changes]

✅ Testing

  • 🤖 Automatic: [Optional: Enumerate E2E tests]
  • 🖱️ Manual: [Optional: Steps to verify]

🐞 Issues

  • Closes #[issue-number]

🔗 Related PRs

  • [Optional: Enumerate related pull requests]

📝 Notes

  • [Optional: design decisions, tradeoffs, or TODOs]

@joanestebanr joanestebanr self-assigned this Mar 11, 2026
@claude

claude bot commented Mar 11, 2026

Claude finished @joanestebanr's task in 3m 4s


Code Review Complete


🚀 What's New

This PR extracts claim event synchronization from bridgesync into a new standalone claimsync package. The claimsync package can run in two modes:

  • Standalone mode: Independent syncer with its own SQLite database, EVMDriver, and downloader
  • Embedded mode: Delegates to bridgesync's transaction for atomic writes, sharing the same sync lifecycle

Key additions:

  • New claimsync package with storage, processor, downloader, and embedded mode support
  • ClaimsSyncProcessor interface for bridgesync integration
  • ClaimsReader interface for read-only claim queries
  • Transaction-based coordination via ProcessBlockWithTx and ReorgWithTx
  • Removal of ~340 lines of duplicated claim handling from bridgesync/downloader.go

🐛 Bug Fixes

None - this is a refactoring PR.

📋 Config Updates

None - existing bridgesync config is reused.

⚠️ Breaking Changes

  • API: bridgesync.NewL2 now requires a ClaimsSyncProcessor parameter (can be nil for backward compatibility)
  • API: aggsender.New and NewAggsenderValidator now require a claimsync.ClaimSyncer parameter

Review Findings

🔴 Required Issues

1. Missing Unit Tests

Severity: High
Location: claimsync/ package

The new claimsync package (11 Go files, ~1300 lines) has zero unit tests. This is a significant gap for a core synchronization component.

Impact:

  • No validation of transaction atomicity between bridgesync and claimsync
  • Storage layer logic (compaction, queries) is untested
  • Event handler logic is untested
  • Reorg handling is untested

Recommendation: Add comprehensive unit tests covering:

  • Storage operations (Insert/Get/Delete with and without transactions)
  • Processor block handling and reorg logic
  • Event handlers for all claim types
  • Embedded vs standalone mode behavior
  • Transaction rollback scenarios



2. Incomplete Claim Migration from bridgesync

Severity: Medium
Location: bridgesync/processor.go:1704-1750

The claim event handling in bridgesync/processor.go still contains the old code path, flagged with TODO comments but only partially removed:

// TODO: remove
if event.Claim != nil {
    if p.claimEventsProcessor != nil {
        claimEvents = append(claimEvents, event)
    } else if err = meddler.Insert(tx, claimTableName, event.Claim); err != nil {
        // ... old code path
    }
}

Issues:

  1. The claimEvents slice is populated but only used for claims - not for UnsetClaim or SetClaim events
  2. There's a logic bug: claims are collected in a slice but then processed AFTER the event loop, meaning only the final block state is passed to claimEventsProcessor.ProcessBlockWithTx, not individual events
  3. The ProcessBlockWithTx call is placed inside the event loop, after the per-event handling, which is confusing

Location: bridgesync/processor.go:1769-1774

if p.claimEventsProcessor != nil {
    // This is called on every iteration but processes the entire block
    if err := p.claimEventsProcessor.ProcessBlockWithTx(ctx, tx, &block, false); err != nil {
        // ...
    }
}

Recommendation:

  1. The ProcessBlockWithTx call should be outside and after the event loop, called once per block
  2. Remove the TODO comments and fully commit to the new architecture
  3. Add integration tests to verify claim events flow correctly through both code paths



3. Error Handling: Database Close Not Checked

Severity: Medium
Location: claimsync/storage/storage.go:85

if err := migrations.RunMigrations(logger, database); err != nil {
    database.Close() //nolint:errcheck
    return nil, fmt.Errorf("claimsync storage: failed to run migrations: %w", err)
}

While the //nolint:errcheck is intentional, this pattern can hide issues. If Close() fails, the database file might be left in an inconsistent state.

Recommendation: Log the error even if not returned:

if err := migrations.RunMigrations(logger, database); err != nil {
    if closeErr := database.Close(); closeErr != nil {
        logger.Errorf("failed to close database after migration error: %v", closeErr)
    }
    return nil, fmt.Errorf("claimsync storage: failed to run migrations: %w", err)
}



🟡 Suggested Improvements

4. Inconsistent Null Handling Documentation

Location: claimsync/downloader.go:174-178

The code uses nil to represent NULL for fromAddress, but the comment and implementation pattern could be clearer:

// Skip expensive extraction - leave FromAddress as nil (will be stored as NULL)
txnSender = tx.From()
toAddr = tx.ToAddress()
logger.Debugf("Skipping FromAddress extraction for tx %s (SyncFromInBridges=false)", txHash.Hex())
return txnSender, nil, toAddr, nil

Recommendation: Add a comment explaining why NULL is acceptable and how it's handled downstream. This is important for maintainability.


5. ClaimSyncer Interface Redundancy

Location: claimsync/claimsync.go:25-28, claimsync/claimsync.go:39-43

type ClaimSyncer interface {
    Start(ctx context.Context)
}

type bridgeSyncClaimSyncer struct {
    bs *bridgesync.BridgeSync
}

func (b *bridgeSyncClaimSyncer) Start(_ context.Context) {}

The bridgeSyncClaimSyncer has an empty Start method because the underlying bridgesync is already started. This is a code smell.

Recommendation: Consider whether ClaimSyncer should just be *ClaimSync or if there's a better abstraction. The current design creates unnecessary indirection.


6. SQL Injection Prevention - Table Name Validation

Location: claimsync/storage/storage.go and throughout

The storage layer directly embeds table names in SQL queries. While these are constants, the pattern could be more defensive.

Current: Safe because table names are hardcoded constants
Recommendation: Document that table names are validated (or add validation if they become configurable in the future)


7. Missing Index on global_index

Location: claimsync/storage/migrations/claimsync0001.sql:12-32

The claim table has an index on (type, block_num) but not on global_index, which is used in:

  • GetClaimsByGlobalIndex (line 204-246 in storage.go)
  • UnsetClaim lookups (part of the compaction logic)

Impact: Query performance degradation as the claim table grows.

Recommendation: Add an index:

CREATE INDEX IF NOT EXISTS idx_claim_global_index ON claim (global_index);
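After adding the index, it is worth confirming SQLite actually picks it up for the lookup queries; EXPLAIN QUERY PLAN should report a SEARCH using the index rather than a full SCAN. Table and column names below are taken from this review and may need adjusting to the real schema:

```sql
-- Verify the new index is used for global_index lookups.
EXPLAIN QUERY PLAN
SELECT * FROM claim WHERE global_index = ?;
-- A plan mentioning "USING INDEX idx_claim_global_index" confirms it;
-- "SCAN claim" means the index is not being used.
```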



8. Compaction Logic Complexity

Location: claimsync/storage/storage.go:162-201, storage.go:204-246

The claim compaction queries are sophisticated (using CTEs, window functions) but quite complex. This makes them:

  • Hard to reason about correctness
  • Difficult to optimize
  • Challenging to debug when issues arise

Current behavior:

  • Claims with an unset_claim are returned uncompacted
  • Other claims are compacted: oldest metadata + newest proofs per global_index

Recommendation:

  1. Add extensive comments explaining the compaction logic and why each CTE is needed
  2. Consider breaking this into smaller, testable helper queries
  3. Add query performance tests with realistic data volumes
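To make the recommendation concrete, here is one possible shape of the compaction described above, with each step commented. This is an illustrative sketch, not the PR's actual query, and the column names (metadata, proof_local_exit, proof_rollup_exit) are assumptions:

```sql
-- Compaction sketch: per global_index, take metadata from the OLDEST row
-- and proofs from the NEWEST row, skipping claims that have been unset.
WITH ranked AS (
    SELECT
        c.*,
        -- rank rows within each global_index by age, both directions
        ROW_NUMBER() OVER (PARTITION BY c.global_index ORDER BY c.block_num ASC)  AS rn_oldest,
        ROW_NUMBER() OVER (PARTITION BY c.global_index ORDER BY c.block_num DESC) AS rn_newest
    FROM claim c
    -- claims with a matching unset_claim are excluded from compaction
    WHERE NOT EXISTS (
        SELECT 1 FROM unset_claim u WHERE u.global_index = c.global_index
    )
)
SELECT
    o.global_index,
    o.metadata,           -- from the oldest row
    n.proof_local_exit,   -- from the newest row
    n.proof_rollup_exit   -- from the newest row
FROM ranked o
JOIN ranked n
  ON n.global_index = o.global_index AND n.rn_newest = 1
WHERE o.rn_oldest = 1;
```

Splitting the query along these commented boundaries would also make each CTE individually testable, as recommended above.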

9. Context Timeout Pattern Inconsistency

Location: claimsync/processor.go:37-61 vs claimsync/processor.go:95-99

The processor has inconsistent context handling:

  • ProcessBlock: Creates a timeout context (withDatabaseTimeout)
  • GetLastProcessedBlock: Ignores the context parameter entirely
func (p *processor) GetLastProcessedBlock(_ context.Context) (uint64, error) {
    return p.storage.GetLastProcessedBlock(nil)
}

Recommendation: Either use the context consistently or document why it's intentionally ignored.


10. Potential Race Condition in Embedded Mode

Location: claimsync/embedded.go:83, bridgesync/downloader.go:62-64

In embedded mode, the ClaimQuerier is passed to build the appender:

appender, err := buildAppender(ctx, ethClient, reader, bridgeAddr,
    agglayerBridgeContract, agglayerBridgeL2Contract, isSovereign, logger)

The reader uses storage which could be accessed concurrently by:

  1. Bridgesync's processor reading data
  2. Claimsync's event handlers via GetBoundaryBlockForClaimType

Current mitigation: SQLite transactions provide isolation
Recommendation: Document the thread-safety guarantees and test concurrent access patterns


🟢 Optional Enhancements

11. Add Metrics for Claimsync

The new package would benefit from instrumentation:

  • Claims processed counter
  • Block processing latency histogram
  • Storage operation durations
  • Reorg events counter

This would help with observability in production.


12. Consider Graceful Shutdown

claimsync.Start() doesn't have a way to gracefully stop. The context cancellation will stop it, but there's no guarantee of clean shutdown.

Recommendation: Add a Stop() method that waits for in-flight operations to complete.


13. Document the Migration Path

For existing deployments with claim data in bridgesync's DB:

  • How should operators migrate?
  • Is there a data migration tool needed?
  • What happens if both old and new code paths are active?

Add a migration guide or document the cutover process.


Security Considerations

✅ SQL Injection: Safe - all queries use parameterized statements
✅ Transaction Safety: Proper use of deferred rollback with flag pattern
✅ Error Handling: Generally good, with proper error wrapping
⚠️ Input Validation: globalIndex nil check in GetClaimsByGlobalIndex (good), but could add more validation


Performance Considerations

⚠️ Database: New SQLite database means another file I/O bottleneck
✅ Transaction Batching: Processes entire blocks atomically (good)
⚠️ Query Performance: Compaction queries use window functions which can be expensive at scale - needs performance testing
✅ Connection Pooling: Reuses database connections appropriately


Architecture Assessment

✅ Strengths

  1. Clear separation of concerns - claim sync is now isolated from bridge sync
  2. Flexible deployment - supports both standalone and embedded modes
  3. Transaction atomicity - shared transaction ensures consistency
  4. Interface-driven design - ClaimsReader, ClaimStorager, ClaimsSyncProcessor interfaces enable testing
  5. Proper error handling - consistent error wrapping and logging

⚠️ Concerns

  1. Incomplete migration - old claim handling code still present in bridgesync
  2. No tests - critical gap for a synchronization component
  3. Missing documentation - no README or architecture doc for the new package
  4. Complex SQL - compaction queries need more explanation and testing

Summary

This PR represents a significant architectural improvement by extracting claim synchronization into its own package. The design is solid with good separation of concerns and flexible deployment options.

However, the PR is not ready to merge due to:

  1. 🔴 Zero test coverage for the new package
  2. 🔴 Incomplete migration with buggy event processing placement
  3. 🟡 Missing performance index on global_index

Recommendation: Address the Required issues before merging. The Suggested improvements can be follow-up PRs, but testing is mandatory.


Priority Summary

  • 🔴 Critical: Add unit tests, fix event processing loop bug
  • 🟡 Important: Add global_index index, remove TODOs
  • 🟢 Nice-to-have: Metrics, documentation, graceful shutdown

joanestebanr and others added 6 commits March 16, 2026 14:28
- Move Claim/Unclaim types from bridgesync to claimsync/types package
- Add ClaimSyncRPC server exposing l2claimsync_* JSON-RPC methods
- Add paged storage implementation for claimsync
- Add claimsync config and claim_data types
- Update aggsender interfaces to use claimsync.Claim instead of bridgesync.Claim
- Add GetNextBlockNumber and GeneratePreBuildParams to AggsenderBuilderFlow/FlowBaser interfaces
- Extend GetLastProcessedBlock signature to return (uint64, bool, error)
- Add op-pp local config and e2e bridge tests
- Regenerate all affected mocks

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
… bootstrap

- Move TrueFalseAutoMode to config/types as a struct with Mode string and
  Resolved *bool (not read from config); add type alias + var re-exports in bridgesync
- claimsync RPC: GetRPCServices() sets service name (l1claimsync/l2claimsync)
  based on syncerID stored in ClaimSync struct
- claimsync config: add AutoStart TrueFalseAutoMode field; resolved against
  BRIDGE/L1BRIDGESYNC or BRIDGE/L2BRIDGESYNC components at startup
- sync/EVMDriver: add SyncNextBlock(ctx, blockNum) to bootstrap the first block;
  returns ErrAlreadyBootstrapped if a block already exists (ignorable)
- claimsync: Start() passes InitialBlockNum to Sync() when AutoStart=true;
  syncNextBlockInfinite() retries bootstrap until success or ctx cancellation
- sync: add README.md documenting how to implement a new syncer

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>