feat: Add Multi-Endpoint Support with Automatic Retry and Failover #4225
Base branch: stage
Implements a reusable HTTP client that supports:
- Multiple endpoints with automatic failover
- Per-endpoint retry with exponential backoff (100ms, 200ms, 400ms, ...)
- Priority-based endpoint selection
- Remembering the last successful endpoint
- Configurable timeout and max retries

This utility will be used to enhance RPC/REST calls across the codebase to handle endpoint failures gracefully.

Key features:
- Gracefully handles `AbortSignal.timeout` availability (Node 17.3+)
- Sorts endpoints by priority (higher priority tried first)
- Provides `getCurrentEndpoint()` to check which endpoint is active
- Comprehensive error messages when all endpoints fail

Includes comprehensive test coverage (11 tests):
- Single/multiple endpoint support
- Retry and fallback behavior
- Priority sorting
- Error handling
- Endpoint memory (remembers the last successful endpoint)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
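The backoff schedule above (100 ms, then 200 ms, then 400 ms, doubling per retry) can be sketched as a small helper. This is an illustrative sketch only; the function name `backoffDelayMs` and the 100 ms base are assumptions taken from the sequence in this commit message, not the actual `MultiEndpointClient` code.

```typescript
// Hypothetical helper illustrating the exponential backoff schedule described
// above; the real MultiEndpointClient may compute delays differently.
function backoffDelayMs(retry: number, baseDelayMs: number = 100): number {
  // retry 0 -> 100 ms, retry 1 -> 200 ms, retry 2 -> 400 ms, ...
  return baseDelayMs * Math.pow(2, retry);
}
```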
Enhances createNodeQuery to use all available REST endpoints from chain asset lists instead of only the first one.

Changes:
- Iterates through all `chain.apis.rest` endpoints
- Retries each endpoint with exponential backoff before moving to the next
- Adds optional `maxRetries` (default: 3) and `timeout` (default: 5000ms) params
- Maintains backward compatibility; existing code works unchanged
- Gracefully handles `AbortSignal.timeout` availability

Endpoint source: the REST endpoints come from the chain's asset list (osmosis-labs/assetlists). Each chain can have multiple REST endpoints for redundancy.

This function will:
1. Try each endpoint in order from the `chain.apis.rest` array
2. Retry each endpoint up to `maxRetries` times with exponential backoff
3. Move to the next endpoint if all retries fail
4. Throw an error only if all endpoints have been exhausted

Benefits ALL queries using createNodeQuery:
- Balance queries (cosmos/bank/balances.ts)
- Fee estimation (osmosis/txfees/*.ts)
- Transaction simulation (cosmos/tx/simulate.ts)
- Staking queries (cosmos/staking/validators.ts)
- Governance queries (cosmos/governance/proposals.ts)

Test coverage:
- Updated 5 existing tests for backward compatibility
- Added 5 new tests for retry/fallback behavior
- All 10 tests passing
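The four-step behavior above (try each endpoint in order, retry with backoff, fall through, throw at exhaustion) can be sketched as follows. This is a simplified, hypothetical model of the logic; `fetchWithFailover` and `doFetch` are illustrative names, and the actual createNodeQuery implementation differs in its fetch plumbing and error typing.

```typescript
// Simplified sketch of "retry each endpoint, then fall through to the next".
// Parameter names mirror the described maxRetries/timeout options but are
// illustrative, not taken from the codebase.
async function fetchWithFailover<T>(
  endpoints: string[],
  doFetch: (endpoint: string) => Promise<T>,
  maxRetries: number = 3,
  baseDelayMs: number = 100
): Promise<T> {
  let lastError: unknown;
  for (const endpoint of endpoints) {
    for (let retry = 0; retry < maxRetries; retry++) {
      try {
        return await doFetch(endpoint); // success: return immediately
      } catch (e) {
        lastError = e;
        // exponential backoff before the next retry on this endpoint
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** retry)
        );
      }
    }
    // retries exhausted for this endpoint; move on to the next one
  }
  // all endpoints exhausted: surface the last error for debugging
  throw new Error(`All ${endpoints.length} endpoints failed: ${lastError}`);
}
```

With 2 endpoints and 2 retries each, a fully failing first endpoint costs 2 calls before the second endpoint is tried, matching the "2 endpoints × 2 retries" accounting used in the tests.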
Updates queryRPCStatus to accept either single endpoint (legacy) or
multiple endpoints with automatic retry/fallback.
Changes:
- New API: queryRPCStatus({ rpcUrls: string[] })
- Legacy API still works: queryRPCStatus({ restUrl: string })
- Uses MultiEndpointClient for automatic failover
- Maintains backward compatibility
This improves resilience for:
- IBC transfer time estimation
- Block height polling
- Chain status checks
Example usage:
```typescript
// Old (still works)
await queryRPCStatus({ restUrl: "https://rpc.osmosis.zone" });

// New (automatic failover)
await queryRPCStatus({
  rpcUrls: [
    "https://rpc.osmosis.zone",
    "https://osmosis-rpc.polkachu.com",
    "https://rpc-osmosis.blockapsis.com",
  ],
});
```
Implementation details:
- Detects which API is being used via "rpcUrls" in params
- Creates MultiEndpointClient with 3 retries and 5s timeout
- Handles both standard and non-standard RPC response formats
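The `"rpcUrls" in params` detection mentioned above can be modeled as a discriminated union with a type guard. This is a hedged sketch of the idea only; the type and function names below are illustrative assumptions, and the actual parameter types in rpc-status.ts may be shaped differently.

```typescript
// Hypothetical model of the dual API: legacy single-endpoint vs. new
// multi-endpoint parameters. Type names are illustrative assumptions.
type QueryRPCStatusParams =
  | { restUrl: string } // legacy API (still works)
  | { rpcUrls: string[] }; // new API with automatic failover

// Detect which API is in use via the "rpcUrls" key, as described above.
function usesMultiEndpoint(
  params: QueryRPCStatusParams
): params is { rpcUrls: string[] } {
  return "rpcUrls" in params;
}
```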
…tion
Updates IBC bridge provider to pass all RPC endpoints from chain config
to queryRPCStatus instead of only the first one.
Changes:
- Maps all chain.apis.rpc endpoints to array
- Passes rpcUrls array to queryRPCStatus for automatic failover
- Updates error messages to be more descriptive
Before:
```typescript
const fromRpc = fromChain?.apis.rpc[0]?.address;
const toRpc = toChain?.apis.rpc[0]?.address;
await queryRPCStatus({ restUrl: fromRpc });
await queryRPCStatus({ restUrl: toRpc });
```
After:
```typescript
const fromRpcUrls = fromChain?.apis.rpc.map((rpc) => rpc.address);
const toRpcUrls = toChain?.apis.rpc.map((rpc) => rpc.address);
await queryRPCStatus({ rpcUrls: fromRpcUrls });
await queryRPCStatus({ rpcUrls: toRpcUrls });
```
Impact:
- IBC transfer time estimates no longer fail if primary RPC is down
- Automatically tries all available RPC endpoints with retry logic
- Better user experience during network issues
- More accurate transfer time estimates with increased reliability
Location: estimateTransferTime() method at packages/bridge/src/ibc/index.ts:315
Enhances PollingStatusSubscription to accept single or multiple RPC URLs
with automatic failover during block polling.
Changes:
- Constructor now accepts string | string[] for rpc parameter
- Converts single string to array internally for consistent handling
- Uses new queryRPCStatus multi-endpoint API when multiple URLs provided
- Maintains backward compatibility with single URL
- Added validation to ensure at least one URL is provided
Benefits:
- Block polling continues even if primary RPC fails
- Automatic failover to alternative endpoints
- More resilient IBC timeout tracking
- Better user experience during network issues
Example usage:
```typescript
// Old (still works)
new PollingStatusSubscription("https://rpc.osmosis.zone");

// New (automatic failover)
new PollingStatusSubscription([
  "https://rpc.osmosis.zone",
  "https://osmosis-rpc.polkachu.com",
]);
```
Implementation details:
- Stores URLs in protected readonly rpcUrls array
- Detects single vs multiple URLs and calls appropriate queryRPCStatus API
- Enhances error logging to show number of endpoints being used
Location: packages/tx/src/poll-status.ts
…to TxTracer
Enhances TxTracer with comprehensive WebSocket failover logic to maintain
IBC transfer status tracking during network issues.
Changes:
- Constructor now accepts string | string[] for url parameter
- Automatic reconnection with exponential backoff (1s, 2s, 4s, 8s max)
- Tries each endpoint multiple times before moving to next
- After exhausting all endpoints, waits 10s before cycling back
- Preserves all subscriptions across reconnections
- Prevents reconnection on manual close
- Adds comprehensive logging for debugging
Reconnection flow:
1. Try endpoint 0 (maxReconnectAttempts times with backoff)
2. If all fail, try endpoint 1 (maxReconnectAttempts times)
3. If all fail, try endpoint 2 (maxReconnectAttempts times)
4. After all endpoints exhausted, wait 10s and cycle back to endpoint 0
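The per-attempt delays implied by the flow above (1 s, 2 s, 4 s, capped at 8 s) can be sketched as a small helper. This is a minimal sketch under stated assumptions; the function name, attempt counting, and cap parameter are illustrative, not the actual tracer.ts code.

```typescript
// Hypothetical sketch of the reconnect backoff schedule described above;
// the 8s cap and 1-based attempt numbering are assumptions from this
// commit message, not guaranteed to match tracer.ts exactly.
function reconnectDelayMs(attempt: number, capMs: number = 8000): number {
  // attempt 1 -> 1000 ms, 2 -> 2000 ms, 3 -> 4000 ms, 4+ -> capped at 8000 ms
  return Math.min(Math.pow(2, attempt - 1) * 1000, capMs);
}
```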
New state management:
- urls: readonly string[] - Array of WebSocket URLs
- currentUrlIndex: number - Tracks which endpoint is active
- reconnectAttempts: number - Counts retry attempts for current endpoint
- maxReconnectAttempts: number - Configurable (default: 3)
- isManualClose: boolean - Prevents auto-reconnect on user close
Event handlers:
- onOpen: Resets reconnect counter, re-subscribes all handlers
- onClose: Triggers reconnect logic unless manual close
- onError: Logs error and lets onClose handle reconnection
Benefits:
- IBC transfer status tracking continues during RPC issues
- Automatic recovery without user intervention
- Prevents lost WebSocket subscriptions
- Better visibility with console logging
Example usage:
```typescript
// Old (still works)
new TxTracer("https://rpc.osmosis.zone");

// New (automatic failover)
new TxTracer(
  ["https://rpc.osmosis.zone", "https://osmosis-rpc.polkachu.com"],
  "/websocket",
  { maxReconnectAttempts: 5 }
);
```
Location: packages/tx/src/tracer.ts
Assetlist currently doesn't contain multiple endpoints; this is to be added.
Walkthrough

Adds multi-endpoint HTTP/WebSocket support and failover: a new MultiEndpointClient with retry/backoff and timeout, integrated into server RPC status queries and node-query REST calls, bridge IBC RPC aggregation, transaction polling, and TxTracer WebSocket reconnection. Public APIs are mostly extended to accept single or multiple endpoints.
Sequence Diagram

```mermaid
sequenceDiagram
    participant App as Caller
    participant MEC as MultiEndpointClient
    participant EP1 as Endpoint A
    participant EP2 as Endpoint B
    participant Timer as Backoff Timer
    App->>MEC: fetch(path, options)
    Note over MEC: start with highest priority endpoint
    MEC->>EP1: HTTP request (with AbortSignal timeout)
    EP1--xMEC: Failure / Timeout
    MEC->>Timer: wait backoff delay
    Timer-->>MEC: delay elapsed
    MEC->>EP1: retry (per-endpoint)
    EP1--xMEC: Failure (retries exhausted)
    Note over MEC: switch to next endpoint
    MEC->>EP2: HTTP request
    EP2-->>MEC: Success (200)
    MEC->>MEC: remember EP2 as preferred
    MEC-->>App: return response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks: ✅ Passed (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (4)
packages/server/src/queries/__tests__/create-node-query.spec.ts (1)

208-225: Consider verifying the timeout signal is actually created with the custom value.

The test confirms `apiClient` is called but doesn't verify that the custom `timeout` of 10000ms is actually used. This is acceptable for now since testing `AbortSignal.timeout` behavior is difficult to mock, but you could add a note or verify the signal object if stricter validation is needed.

packages/tx/src/poll-status.ts (1)

55-59: Consider simplifying to always use the multi-endpoint API.

The conditional check `this.rpcUrls.length === 1` to decide between `restUrl` and `rpcUrls` params adds complexity. Since `queryRPCStatus({ rpcUrls: [...] })` handles single-element arrays correctly (as seen in rpc-status.ts lines 79-81), you could simplify this to always use the multi-endpoint path:

```diff
- const status =
-   this.rpcUrls.length === 1
-     ? await queryRPCStatus({ restUrl: this.rpcUrls[0] })
-     : await queryRPCStatus({ rpcUrls: this.rpcUrls });
+ const status = await queryRPCStatus({ rpcUrls: this.rpcUrls });
```

This would provide consistent retry/failover behavior even for single endpoints.

packages/utils/src/__tests__/multi-endpoint-client.spec.ts (1)

186-193: Test assumption may be affected by priority sorting.

The test at line 192 expects `getCurrentEndpoint()` to return `"https://endpoint1.com"`, but the `MultiEndpointClient` sorts endpoints by priority (higher first) during construction. If both endpoints have the same default priority (0), the sort is stable and endpoint1 remains first. This works correctly, but adding explicit `priority: 0` to both endpoints would make the test's intent clearer.

packages/server/src/queries/cosmos/rpc-status.ts (1)

83-90: Consider making retry/timeout options configurable.

The `maxRetries: 3` and `timeout: 5000` are hardcoded. While these are reasonable defaults, consider accepting them as optional parameters for callers who may need different values (e.g., faster timeouts for user-facing requests vs. longer timeouts for background jobs).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- packages/bridge/src/ibc/index.ts (1 hunk)
- packages/server/src/queries/__tests__/create-node-query.spec.ts (4 hunks)
- packages/server/src/queries/cosmos/rpc-status.ts (2 hunks)
- packages/server/src/queries/create-node-query.ts (3 hunks)
- packages/tx/src/poll-status.ts (3 hunks)
- packages/tx/src/tracer.ts (5 hunks)
- packages/utils/src/__tests__/multi-endpoint-client.spec.ts (1 hunk)
- packages/utils/src/index.ts (1 hunk)
- packages/utils/src/multi-endpoint-client.ts (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (7)
packages/server/src/queries/cosmos/rpc-status.ts (2)
- packages/utils/src/multi-endpoint-client.ts (1): createMultiEndpointClient (166-171)
- packages/utils/src/api-client.ts (1): apiClient (54-148)

packages/server/src/queries/__tests__/create-node-query.spec.ts (2)
- packages/utils/src/api-client.ts (1): apiClient (54-148)
- packages/server/src/queries/create-node-query.ts (1): createNodeQuery (28-133)

packages/utils/src/__tests__/multi-endpoint-client.spec.ts (2)
- packages/utils/src/api-client.ts (1): apiClient (54-148)
- packages/utils/src/multi-endpoint-client.ts (2): MultiEndpointClient (44-149), createMultiEndpointClient (166-171)

packages/tx/src/poll-status.ts (1)
- packages/server/src/queries/cosmos/rpc-status.ts (1): queryRPCStatus (70-110)

packages/utils/src/multi-endpoint-client.ts (1)
- packages/utils/src/api-client.ts (2): ClientOptions (34-36), apiClient (54-148)

packages/bridge/src/ibc/index.ts (4)
- packages/keplr-stores/src/chain/index.ts (1): rpc (270-272)
- packages/bridge/src/errors.ts (1): BridgeQuoteError (2-18)
- packages/server/src/queries/cosmos/rpc-status.ts (1): queryRPCStatus (70-110)
- packages/tx/src/poll-status.ts (1): calcAverageBlockTimeMs (101-146)

packages/server/src/queries/create-node-query.ts (2)
- packages/utils/src/async.ts (1): timeout (16-46)
- packages/utils/src/api-client.ts (1): apiClient (54-148)
🪛 GitHub Actions: Jest Tests Workflow
packages/tx/src/tracer.ts
[error] 126-126: TS2322: Type '(e: ErrorEvent) => void' is not assignable to type '(this: WebSocket, ev: Event) => any'.
🪛 GitHub Actions: Lint and Check Format Workflow
packages/utils/src/multi-endpoint-client.ts
[warning] 1-1: Code style issues found in the above file. Forgot to run Prettier?
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Summary
🔇 Additional comments (19)
packages/utils/src/index.ts (1)
20-20: LGTM! The new export follows the established alphabetical ordering and barrel export pattern.
packages/bridge/src/ibc/index.ts (1)

324-344: LGTM! Multi-endpoint RPC support correctly integrated.

The change properly aggregates all RPC URLs for both chains and validates their availability before querying. The parallel `Promise.all` call for block time estimation is efficient.

Minor consideration: if `chain.apis` or `chain.apis.rpc` is undefined/null, the optional chaining (`?.`) will return `undefined`, which gets coalesced to an empty array. This is safe behavior.

packages/server/src/queries/__tests__/create-node-query.spec.ts (1)

122-246: Comprehensive test coverage for multi-endpoint retry logic.

The test suite thoroughly covers key scenarios:
- Retry within same endpoint
- Fallback to next endpoint after exhausting retries
- Error propagation when all endpoints fail
- Custom timeout handling
- Missing endpoints validation
Good job ensuring the retry counts match expected behavior (e.g., 2 endpoints × 2 retries = 4 calls).
packages/tx/src/poll-status.ts (1)

14-30: LGTM! Backward-compatible constructor with proper validation.

Good approach normalizing the input to always store it as an array internally while accepting both single-string and array inputs for backward compatibility.
packages/utils/src/__tests__/multi-endpoint-client.spec.ts (1)

1-194: Excellent test coverage for the MultiEndpointClient.

The test suite comprehensively validates:
- Client instantiation with various configurations
- Single endpoint and multi-endpoint scenarios
- Retry behavior with proper call counting
- Endpoint fallback mechanics
- Last-successful-endpoint memory
- Priority-based ordering
- Custom timeout configuration
Well-structured and thorough.
packages/server/src/queries/cosmos/rpc-status.ts (1)

48-97: LGTM! Clean implementation with good documentation.

The discriminated union approach for backward compatibility is clean and type-safe. The JSDoc examples clearly illustrate both usage patterns. Response normalization correctly handles both standard and non-standard chain responses.
packages/server/src/queries/create-node-query.ts (3)

78-126: Solid implementation of retry with exponential backoff.

The nested loop structure is clear:
- Outer loop iterates through endpoints
- Inner loop handles retries per endpoint with exponential backoff
- Early return on success, comprehensive error on exhaustion

The logging strategy (only on the final retry per endpoint) minimizes noise while maintaining debuggability.

17-24: Clear documentation of the fallback mechanism.

The JSDoc effectively explains the endpoint fallback behavior, making the retry semantics easy to understand for consumers of this API.

86-88: Remove this comment — the URL constructor behavior concern does not apply to the actual usage patterns in this codebase.

The code uses path values without leading slashes (e.g., `"transfer/channel-208/uusdc"`), and the `new URL(pathStr, baseUrl)` constructor correctly preserves the base URL's path segments in this case. Additionally, no REST endpoints with path segments were found in the chain configurations, so there is no practical scenario where path truncation would occur.

packages/tx/src/tracer.ts (5)
61-67: LGTM! Well-structured multi-endpoint state management.

The state fields for managing multiple endpoints and reconnection are cleanly organized. Good use of `readonly` for the immutable endpoint array and sensible defaults.

74-93: LGTM! Backward-compatible constructor signature.

Good approach normalizing a single URL to an array for backward compatibility, with proper validation for empty URL arrays.

142-184: Solid reconnection logic with exponential backoff.

The endpoint cycling and backoff strategy is well-implemented. A few observations:
- The 10-second delay after exhausting all endpoints (lines 162-164) is reasonable.
- The backoff formula `Math.pow(2, attempts - 1) * 1000` gives 1s, 2s, 4s, capped at 8s — a sensible progression.

One minor note: when `reconnectAttempts` exceeds `maxReconnectAttempts`, you reset to 0 and switch endpoints. This means the first attempt on the new endpoint starts immediately (backoff of `Math.pow(2, 0) * 1000 = 1000ms`), which is fine, but consider whether a brief initial delay is desired after switching.

223-245: LGTM! Proper state reset and subscription restoration on reconnect.

Resetting `reconnectAttempts` and `isManualClose` on successful connection is correct. Re-subscribing to blocks and transactions ensures continuity after failover.

307-320: LGTM! Automatic reconnect on unexpected close.

Triggering reconnect on close (unless manual) correctly handles network disruptions. The `onError` → `onClose` sequence is handled appropriately, with reconnection in `onClose` only.

packages/utils/src/multi-endpoint-client.ts (5)

44-67: LGTM! Well-designed client initialization.

Good validation, priority-based sorting (descending), and sensible defaults. The spread operator for immutable sorting is a nice touch.

118-125: Minor: backoff delay skipped on the last retry may cause rapid endpoint switching.

When `retry === maxRetries - 1` and there are more endpoints, the code immediately proceeds to the next endpoint without delay. This is intentional per the comment, but could cause rapid-fire requests to multiple endpoints if all are failing. This is acceptable given the per-endpoint retry exhaustion, but worth noting for awareness.

128-133: LGTM! Clear error aggregation.

The error message provides useful debugging context with endpoint count, retry count, and the last error message.

166-171: LGTM! Clean factory function.

The flexible union type `EndpointConfig[] | { address: string }[]` accommodates both explicit configs and simpler address-only objects from chain asset lists.
90-99: [rewritten comment]
[classification tag]
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- packages/server/src/queries/create-node-query.ts (3 hunks)
- packages/tx/src/tracer.ts (6 hunks)
- packages/utils/src/multi-endpoint-client.ts (1 hunk)
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/tx/src/tracer.ts
- packages/server/src/queries/create-node-query.ts
🧰 Additional context used
🧬 Code graph analysis (1)
packages/utils/src/multi-endpoint-client.ts (1)
- packages/utils/src/api-client.ts (2): ClientOptions (34-36), apiClient (54-148)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Summary
🔇 Additional comments (4)
packages/utils/src/multi-endpoint-client.ts (4)

3-23: Well-documented interfaces with sensible defaults.

The `EndpointConfig` and `MultiEndpointOptions` interfaces are clearly documented. The default values in comments align with the implementation.

51-64: Solid constructor with proper validation and immutability.

Good practices observed: input validation, sorting by priority, a defensive copy with the spread operator to avoid mutating the original array, and sensible default values.

136-145: Accessor methods are properly encapsulated.

`getEndpoints()` correctly returns a defensive copy to prevent external mutation of the internal state.

163-168: Flexible factory function with good documentation.

The type signature `EndpointConfig[] | { address: string }[]` nicely accommodates different endpoint shapes from chain asset lists while maintaining type safety through TypeScript's structural typing.
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/utils/src/multi-endpoint-client.ts (1)

128-136: Consider making logging configurable.

While `console.warn` is acceptable for a frontend library, allowing consumers to provide a custom logger instance would improve integration with existing logging infrastructure.

You could add an optional `logger` to `MultiEndpointOptions`:

```typescript
export interface MultiEndpointOptions {
  /** ... existing options ... */
  logger?: {
    warn: (message: string, ...args: any[]) => void;
  };
}
```

Then use:

```typescript
(this.logger || console).warn(...);
```

This is purely optional and can be deferred.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- packages/utils/src/multi-endpoint-client.ts (1 hunk)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
- GitHub Check: test (22.x, proto-codecs)
- GitHub Check: test (22.x, unit)
- GitHub Check: test (22.x, tx)
- GitHub Check: test (22.x, server)
- GitHub Check: test (22.x, math)
- GitHub Check: test (22.x, utils)
- GitHub Check: test (22.x, stores)
- GitHub Check: test (22.x, bridge)
- GitHub Check: test (22.x, pools)
- GitHub Check: Summary
- GitHub Check: test (22.x, utils)
- GitHub Check: test (22.x, pools)
- GitHub Check: test (22.x, server)
- GitHub Check: wait-for-deployment
- GitHub Check: test (22.x, web)
- GitHub Check: test (22.x, stores)
- GitHub Check: test (22.x, bridge)
- GitHub Check: test (22.x, math)
- GitHub Check: lint-workspace
🔇 Additional comments (5)
packages/utils/src/multi-endpoint-client.ts (5)
1-23: LGTM! Well-designed interfaces with clear documentation.

The type definitions are clean and well-documented. The optional priority on `EndpointConfig` and the sensible defaults for `MultiEndpointOptions` provide good flexibility.

87-123: Excellent timeout handling with proper cleanup.

The implementation correctly provides a fallback for environments lacking `AbortSignal.timeout` using `AbortController` + `setTimeout`, and ensures the timeout is cleared in both success and error paths to prevent timer leaks. This addresses the previous review concern.

74-153: Robust retry and failover logic.

The fetch method correctly implements:
- Endpoint rotation starting from the last successful endpoint
- Per-endpoint retries with exponential backoff
- Efficient skipping of the backoff delay on the last retry when more endpoints are available
- Comprehensive error messages when all endpoints are exhausted

The logic is sound and well-implemented.

155-169: LGTM! Clean helper methods.

Both methods are well-implemented. `getEndpoints()` correctly returns a copy to prevent external mutations.

186-191: LGTM! Ergonomic factory function.

The factory function's union type `EndpointConfig[] | { address: string }[]` provides flexibility for callers while maintaining type safety through TypeScript's structural typing.
Chains failing current validation are: https://github.com/osmosis-labs/assetlists/actions/runs/20233835116
What is the purpose of the change:
Previously, the frontend used only the first RPC/REST endpoint from chain asset lists. When this primary endpoint failed or became slow, the application experienced:
Impact: Users were left unable to perform transactions when primary endpoints experienced issues, despite multiple backup endpoints being available in the chain configuration.
Linear Task
https://linear.app/osmosis/issue/FE-1550/endpoint-failover-for-ibc-transfers
Brief Changelog
Implemented comprehensive multi-endpoint support with automatic retry and failover across all RPC/REST operations:
Testing and Verifying
This change has been tested locally by rebuilding the website and verifying that content and links behave as expected.