Fix OAuth proxy refresh token storage for multi-instance deployments #2483

Conversation
- Use pluggable client_storage instead of local dict for refresh tokens
- Store refresh tokens by SHA-256 hash for defense in depth
- Remove unused access token and relationship mapping stores
- Simplify revocation logic
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/fastmcp/server/auth/oauth_proxy.py (1)
1337-1355: CRITICAL: Add client ID validation to prevent cross-client token usage.
The load_refresh_token method loads refresh token metadata by hash but does not validate that metadata.client_id matches the client.client_id of the requesting client. This is a security vulnerability: any client in possession of the token value can use a refresh token issued to a different client. This violates the OAuth security model, where refresh tokens must be bound to the client they were issued to.
Apply this diff to add client validation:
```diff
 async def load_refresh_token(
     self,
     client: OAuthClientInformationFull,
     refresh_token: str,
 ) -> RefreshToken | None:
     """Load refresh token metadata from distributed storage.

     Looks up by token hash and reconstructs the RefreshToken object.
     """
     token_hash = _hash_token(refresh_token)
     metadata = await self._refresh_token_store.get(key=token_hash)
     if not metadata:
         return None
+    # Verify token belongs to this client
+    if metadata.client_id != client.client_id:
+        logger.warning(
+            "Refresh token client_id mismatch: expected %s, got %s",
+            client.client_id,
+            metadata.client_id,
+        )
+        return None
     return RefreshToken(
         token=refresh_token,
         client_id=metadata.client_id,
         scopes=metadata.scopes,
         expires_at=metadata.expires_at,
     )
```
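The client-binding check proposed above can be exercised with a small standalone sketch. The `FakeStore`, `Metadata` class, and simplified synchronous `load_refresh_token` below are hypothetical stand-ins for the real FastMCP types, used only to illustrate why the lookup must reject tokens issued to a different client:

```python
import hashlib
from dataclasses import dataclass

def _hash_token(token: str) -> str:
    # One-way hash used as the storage key (mirrors the PR's approach).
    return hashlib.sha256(token.encode()).hexdigest()

@dataclass
class Metadata:
    client_id: str
    scopes: list

class FakeStore:
    """Illustrative in-memory stand-in for the pluggable backend."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def load_refresh_token(store, client_id, refresh_token):
    metadata = store.get(_hash_token(refresh_token))
    if metadata is None:
        return None
    # The proposed fix: reject tokens issued to a different client.
    if metadata.client_id != client_id:
        return None
    return metadata

store = FakeStore()
store.put(_hash_token("tok-123"), Metadata(client_id="client-a", scopes=["read"]))

assert load_refresh_token(store, "client-a", "tok-123") is not None
assert load_refresh_token(store, "client-b", "tok-123") is None  # cross-client use rejected
```

Without the `client_id` comparison, the second lookup would succeed, which is exactly the vulnerability described.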
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/fastmcp/server/auth/oauth_proxy.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
src/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.py: Python source code must use Python ≥3.10 with full type annotations
Never use bare except - be specific with exception types
Follow Ruff linting and Prettier formatting standards - run uv run prek run --all-files before committing
Prioritize readable, understandable code - clarity over cleverness, avoid obfuscated or confusing patterns
Files:
src/fastmcp/server/auth/oauth_proxy.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Run tests: Python 3.10 on windows-latest
🔇 Additional comments (4)
src/fastmcp/server/auth/oauth_proxy.py (4)
195-202: LGTM! The SHA-256 hashing implementation is appropriate for creating one-way token hashes for secure storage lookups. The approach provides defense in depth by ensuring tokens cannot be recovered from storage.
649-657: LGTM! The updated documentation accurately describes the new hash-based storage approach and clearly explains how pluggable storage enables horizontal scaling across multiple instances.
924-934: LGTM! The refresh token store initialization follows the established pattern for other storage adapters in this class and correctly configures the PydanticAdapter for hash-based metadata storage.
1617-1626: LGTM! The token revocation correctly implements hash-based deletion for refresh tokens and properly attempts upstream revocation when configured. The implementation aligns with the new distributed storage approach.
```python
class RefreshTokenMetadata(BaseModel):
    """Metadata for a refresh token, stored keyed by token hash.

    We store only metadata (not the token itself) for security - if storage
    is compromised, attackers get hashes they can't reverse into usable tokens.
    """

    client_id: str
    scopes: list[str]
    expires_at: int | None = None
    created_at: float
```
Fix type inconsistency for expires_at field.
The expires_at field is typed as int | None, but all other timestamp fields in this file (created_at, expires_at in other models) consistently use float. This type inconsistency could lead to type errors or confusion when working with timestamps.
Apply this diff to align with the codebase's timestamp convention:
```diff
 class RefreshTokenMetadata(BaseModel):
     """Metadata for a refresh token, stored keyed by token hash.

     We store only metadata (not the token itself) for security - if storage
     is compromised, attackers get hashes they can't reverse into usable tokens.
     """

     client_id: str
     scopes: list[str]
-    expires_at: int | None = None
+    expires_at: float | None = None
     created_at: float
```

🤖 Prompt for AI Agents
In src/fastmcp/server/auth/oauth_proxy.py around lines 182 to 193, the
RefreshTokenMetadata model declares expires_at as int | None which is
inconsistent with other timestamp fields using float; change the type annotation
to float | None so expires_at matches the codebase convention for timestamps and
update any related type hints/comments accordingly.
```diff
         # Store refresh token metadata (keyed by hash for security)
         if fastmcp_refresh_token:
-            self._refresh_tokens[fastmcp_refresh_token] = RefreshToken(
-                token=fastmcp_refresh_token,
-                client_id=client.client_id,
-                scopes=authorization_code.scopes,
-                expires_at=None,
+            await self._refresh_token_store.put(
+                key=_hash_token(fastmcp_refresh_token),
+                value=RefreshTokenMetadata(
+                    client_id=client.client_id,
+                    scopes=authorization_code.scopes,
+                    expires_at=None,
+                    created_at=time.time(),
+                ),
+                ttl=60 * 60 * 24 * 30,  # 30 days
             )
```
🛠️ Refactor suggestion | 🟠 Major
Use calculated expiry values for consistency.
The refresh token metadata storage uses a hardcoded 30-day TTL (line 1297) and sets expires_at to None (line 1294), even though refresh_token_expires_at and refresh_expires_in were calculated earlier (lines 1204-1220). This creates inconsistency with the upstream token storage (lines 1237-1242), which uses the calculated refresh_expires_in value.
Using different TTL values for related storage entries could cause the metadata to expire before or after the actual upstream tokens, potentially leading to inconsistent state.
Apply this diff to align the TTL and expiry tracking:
```diff
         # Store refresh token metadata (keyed by hash for security)
         if fastmcp_refresh_token:
             await self._refresh_token_store.put(
                 key=_hash_token(fastmcp_refresh_token),
                 value=RefreshTokenMetadata(
                     client_id=client.client_id,
                     scopes=authorization_code.scopes,
-                    expires_at=None,
+                    expires_at=refresh_token_expires_at,
                     created_at=time.time(),
                 ),
-                ttl=60 * 60 * 24 * 30,  # 30 days
+                ttl=refresh_expires_in,  # Match upstream token TTL
             )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```python
# Store refresh token metadata (keyed by hash for security)
if fastmcp_refresh_token:
    await self._refresh_token_store.put(
        key=_hash_token(fastmcp_refresh_token),
        value=RefreshTokenMetadata(
            client_id=client.client_id,
            scopes=authorization_code.scopes,
            expires_at=refresh_token_expires_at,
            created_at=time.time(),
        ),
        ttl=refresh_expires_in,  # Match upstream token TTL
    )
```
🤖 Prompt for AI Agents
In src/fastmcp/server/auth/oauth_proxy.py around lines 1287 to 1298, the refresh
token metadata uses a hardcoded 30-day TTL and sets expires_at=None which is
inconsistent with earlier calculations; change the metadata storage to use the
previously computed refresh_expires_in for ttl and refresh_token_expires_at (or
refresh_expires_at) for expires_at so the metadata lifetime matches the upstream
token lifetime, i.e. replace the hardcoded ttl and None expires_at with the
calculated values used earlier.
```python
# Store new refresh token metadata (keyed by hash)
await self._refresh_token_store.put(
    key=_hash_token(new_fastmcp_refresh),
    value=RefreshTokenMetadata(
        client_id=client.client_id,
        scopes=scopes,
        expires_at=None,
        created_at=time.time(),
    ),
    ttl=refresh_ttl,
)
```
🛠️ Refactor suggestion | 🟠 Major
Track actual token expiry in refresh token metadata.
Similar to the issue in exchange_authorization_code, the refresh token metadata stores expires_at=None (line 1532) even though the expiry information is available. While the TTL correctly uses refresh_ttl, storing the actual expiry timestamp in the metadata would:
- Allow proper expiry validation if needed
- Provide consistent state tracking
- Enable debugging and monitoring of token lifetimes
Apply this diff to store the calculated expiry:
```diff
         # Store new refresh token metadata (keyed by hash)
         await self._refresh_token_store.put(
             key=_hash_token(new_fastmcp_refresh),
             value=RefreshTokenMetadata(
                 client_id=client.client_id,
                 scopes=scopes,
-                expires_at=None,
+                expires_at=time.time() + refresh_ttl,
                 created_at=time.time(),
             ),
             ttl=refresh_ttl,
         )
```

🤖 Prompt for AI Agents
In src/fastmcp/server/auth/oauth_proxy.py around lines 1526 to 1536, the refresh
token metadata is being stored with expires_at=None; change this to record the
actual expiry timestamp by setting expires_at to time.time() + refresh_ttl when
refresh_ttl is not None (otherwise keep None), so metadata matches the TTL used
and mirrors the behavior in exchange_authorization_code.
Hey @jlowin, any update on when this will be available in the new release?
Fixes #2478 and #2479
The OAuth proxy was using a local in-memory dict for refresh token storage, which broke multi-instance deployments. Container A would issue a refresh token, but Container B couldn't find it when the client tried to refresh.
Changes:
- Use the pluggable client_storage backend (Redis, disk, etc.) instead of a local in-memory dict for refresh tokens
- Store refresh tokens keyed by SHA-256 hash for defense in depth
- Remove _access_token_store and relationship mappings (unused for validation; JWT + JTI mapping handles that)
- Simplify revocation logic

Security improvement: Previously refresh tokens were stored in plaintext. Now only metadata is stored, keyed by hash: