LRUCache
krishanthan4 edited this page Dec 16, 2025
Design and implement a Least Recently Used (LRU) cache that supports get and put operations in O(1) time complexity.
An LRU cache is a cache eviction policy that removes the least recently used item when the cache reaches its capacity. Items are ordered by their usage - most recently used items are kept, while least recently used items are evicted.
Requirements:
- Time Complexity: O(1) for both get and put operations
- Space Complexity: O(capacity)
- Key Operations: get, put
- Eviction Policy: Remove the least recently used item when at capacity
Behavior:
- When an item is accessed (get or put), it becomes the most recently used
- When the cache is full and a new item is added:
  - Remove the least recently used item
  - Add the new item as the most recently used
- Maintain the order of usage for efficient eviction
get(key):
- Return the value if the key exists (and mark it as most recently used)
- Return -1 if the key doesn't exist
put(key, value):
- If the key exists: update the value and mark it as most recently used
- If the key doesn't exist:
  - If at capacity: remove the LRU item
  - Add the new key-value pair as most recently used
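The get/put contract above can be demonstrated quickly with Java's built-in `LinkedHashMap` in access-order mode (a minimal sketch; the class name `LruSketch` and the `getOrMinusOne` helper are illustrative, and an interview would normally expect the hand-rolled HashMap + doubly linked list version described below):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU sketch using LinkedHashMap's access-order mode.
// Passing accessOrder=true makes both get and put refresh an entry's recency;
// overriding removeEldestEntry evicts the least recently used entry on insert.
class LruSketch extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    LruSketch(int capacity) {
        super(capacity, 0.75f, true); // true = order by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity; // evict the LRU entry once over capacity
    }

    // Match the problem's contract: return -1 for a missing key.
    int getOrMinusOne(int key) {
        Integer v = get(key);
        return v == null ? -1 : v;
    }
}
```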
Implement the following in LRUCache.java:
- `LRUCache(int capacity)` - Constructor
- `get(int key)` - Get value and update usage
- `put(int key, int value)` - Put value and handle eviction
- Helper methods for managing the doubly linked list
Recommended Approach: HashMap + Doubly Linked List
- HashMap: O(1) access to nodes by key
- Doubly Linked List: O(1) removal and insertion
  - Head: Most recently used
  - Tail: Least recently used
Test cases to cover:
- Basic put and get operations
- Eviction when at capacity
- Updating existing keys
- Access order (get makes item recent)
- Put on existing key updates it
- Edge case: capacity = 1
- Multiple evictions
Implementation hints:
- Use a HashMap to store key -> Node mapping
- Use a doubly linked list to maintain usage order
- Head of list = most recently used
- Tail of list = least recently used
- When accessing: move node to head
- When evicting: remove node from tail
Node structure:
- key
- value
- prev pointer
- next pointer
Main structure:
- HashMap<Integer, Node>
- head pointer (most recent)
- tail pointer (least recent)
- capacity
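The node and main structures above translate directly into fields (a fields-only skeleton under the names described in the hints; the class name `LRUCacheSkeleton` and the `size` helper are illustrative, and method bodies are omitted here):

```java
import java.util.HashMap;
import java.util.Map;

// Skeleton of the two structures described above (fields only).
class LRUCacheSkeleton {
    // Doubly linked list node holding both key and value;
    // storing the key is essential so an evicted tail node
    // can also be removed from the HashMap.
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final Map<Integer, Node> map = new HashMap<>(); // key -> node
    private Node head; // most recently used
    private Node tail; // least recently used
    private final int capacity;

    LRUCacheSkeleton(int capacity) { this.capacity = capacity; }

    int size() { return map.size(); } // illustrative helper
}
```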
Move to front:
- Remove node from current position
- Add node at head
Remove from tail:
- Get tail node
- Remove from HashMap
- Update tail pointer
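Putting the helper operations together, here is one possible implementation along the lines sketched above. It uses sentinel head/tail nodes (an implementation choice, not required by the problem) so the pointer updates never need null checks:

```java
import java.util.HashMap;
import java.util.Map;

class LRUCache {
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final Map<Integer, Node> map = new HashMap<>();
    private final Node head = new Node(0, 0); // sentinel: head.next is the MRU node
    private final Node tail = new Node(0, 0); // sentinel: tail.prev is the LRU node
    private final int capacity;

    public LRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public int get(int key) {
        Node node = map.get(key);
        if (node == null) return -1;
        moveToFront(node); // accessing a key makes it most recently used
        return node.value;
    }

    public void put(int key, int value) {
        Node node = map.get(key);
        if (node != null) {           // existing key: update value, refresh recency
            node.value = value;
            moveToFront(node);
            return;
        }
        if (map.size() == capacity) { // full: evict the LRU node at the tail
            Node lru = tail.prev;
            remove(lru);
            map.remove(lru.key);      // forgetting this line leaks memory
        }
        Node fresh = new Node(key, value);
        map.put(key, fresh);
        addAtFront(fresh);
    }

    // --- doubly linked list helpers, all O(1) ---

    private void remove(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void addAtFront(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    private void moveToFront(Node node) {
        remove(node);
        addAtFront(node);
    }
}
```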
Real-world applications:
- Web browser cache
- Database query cache
- Operating system page replacement
- CDN caching
- API rate limiting
Follow-up questions:
- How would you implement an LFU (Least Frequently Used) cache?
- How would you add a time-to-live (TTL) feature?
- How would you make this thread-safe?
- How would you implement this with only a HashMap (without doubly linked list)?
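On the thread-safety follow-up, one simple coarse-grained answer is to take a single lock on every operation, for example by wrapping an access-order `LinkedHashMap` in `Collections.synchronizedMap` (a sketch; the class name `SynchronizedLruCache` is illustrative, and a production cache would more likely use finer-grained locking or a library such as Caffeine):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Coarse-grained thread safety: every call takes one lock.
// Note: get() on an access-order LinkedHashMap mutates the internal list,
// so even reads must be synchronized -- synchronizedMap handles that
// because it locks all methods, reads included.
class SynchronizedLruCache {
    private final Map<Integer, Integer> map;

    SynchronizedLruCache(int capacity) {
        map = Collections.synchronizedMap(
            new LinkedHashMap<Integer, Integer>(capacity, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                    return size() > capacity;
                }
            });
    }

    int get(int key) {
        Integer v = map.get(key);
        return v == null ? -1 : v;
    }

    void put(int key, int value) {
        map.put(key, value);
    }
}
```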
Common mistakes:
- Forgetting to update usage order on get
- Not handling capacity = 1 edge case
- Memory leaks (not removing from HashMap when evicting)
- Incorrect pointer updates in doubly linked list
- Not checking if key exists before evicting
Variations:
- LFU Cache: Evict least frequently used
- FIFO Cache: Evict first inserted
- TTL Cache: Items expire after time
- 2Q Cache: Two queues with different policies
Related problems:
- Design a cache with expiration
- Implement HashMap
- Design a file system cache
- Implement browser history
Complexity summary:

| Operation | Time | Space |
|---|---|---|
| get() | O(1) | - |
| put() | O(1) | - |
| Overall | - | O(capacity) |