Implementing maxAvailableComponentSets for resource models #6912
base: master
Conversation
Summary of Changes

Hello @mszacillo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the cluster resource estimation capabilities by implementing an algorithm to determine the maximum number of complete component sets a cluster can support. By leveraging resource models and a greedy bin-packing strategy combined with binary search, it provides a more precise and performant method for capacity planning and workload placement.

Highlights
Code Review
This pull request implements a bin-packing based algorithm to calculate the maximum available component sets based on resource models. The implementation uses a first-fit decreasing approach, with a binary search to find the optimal number of sets. The code is well-structured and includes comprehensive tests. My feedback includes a few suggestions to improve the clarity of test comments and a potential improvement to the packing heuristic to better handle resources with different scales.
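For orientation, here is a minimal sketch of what a first-fit-decreasing feasibility check can look like. The modelNode type mirrors the one that appears later in this thread; the helpers fits, place, and feasibleFFD are illustrative names, not the PR's actual code, and the snippet assumes imports of "sort" and corev1 "k8s.io/api/core/v1".

type modelNode struct {
    cap map[corev1.ResourceName]int64
}

// fits reports whether demand fits into the node's remaining capacity.
func fits(node modelNode, demand map[corev1.ResourceName]int64) bool {
    for res, want := range demand {
        if node.cap[res] < want {
            return false
        }
    }
    return true
}

// place subtracts demand from the node's remaining capacity.
// Note: this mutates the node's map, so callers should pass fresh copies per check.
func place(node modelNode, demand map[corev1.ResourceName]int64) {
    for res, want := range demand {
        node.cap[res] -= want
    }
}

// feasibleFFD sorts items largest-first (here simply by CPU) and greedily
// places each one on the first node with enough remaining capacity.
func feasibleFFD(nodes []modelNode, items []map[corev1.ResourceName]int64) bool {
    sort.Slice(items, func(i, j int) bool {
        return items[i][corev1.ResourceCPU] > items[j][corev1.ResourceCPU]
    })
    for _, item := range items {
        placed := false
        for i := range nodes {
            if fits(nodes[i], item) {
                place(nodes[i], item)
                placed = true
                break
            }
        }
        if !placed {
            return false
        }
    }
    return true
}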
Codecov Report: ❌ Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##           master    #6912      +/-   ##
==========================================
+ Coverage   46.39%   46.46%    +0.07%
==========================================
  Files         697      698        +1
  Lines       47764    47887      +123
==========================================
+ Hits        22161    22253       +92
- Misses      23933    23949       +16
- Partials     1670     1685       +15
Flags with carried forward coverage won't be shown.
Thanks
Force-pushed from 3371695 to 8308268
/gemini review
Code Review
This pull request introduces a significant feature by implementing maxAvailableComponentSets based on resource models. The implementation uses a binary search over the number of sets and a first-fit decreasing algorithm for placement, which is a solid approach. The code is well-structured and includes comprehensive tests for the new logic. My review includes a few suggestions for improvement, mainly around maintainability and optimizing the packing heuristic.
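On the "resources with different scales" point: sorting replicas by a single raw quantity (e.g., millicores vs. bytes) can mis-rank items. One common remedy, sketched below as an assumption rather than the PR's code, is to rank each demand vector by its dominant share of total cluster capacity:

// dominantShare returns the largest fraction of total cluster capacity that
// this demand vector consumes in any single resource dimension, making
// differently scaled resources comparable. Illustrative sketch only.
func dominantShare(demand, totalCap map[corev1.ResourceName]int64) float64 {
    maxShare := 0.0
    for res, want := range demand {
        if total := totalCap[res]; total > 0 {
            if share := float64(want) / float64(total); share > maxShare {
                maxShare = share
            }
        }
    }
    return maxShare
}

Sorting items by dominantShare in descending order would then replace a single-resource sort key in the first-fit-decreasing pass.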
capCopy := make(map[corev1.ResourceName]int64, len(capTemplate))
for k, v := range capTemplate {
    capCopy[k] = v
}
This map copying logic is repeated in a few places in this file (e.g., expandReplicasOneSet at lines 272-275, and modelsFeasible at lines 287-290). To improve maintainability and reduce code duplication, consider extracting this into a private helper function.
For example:
func copyResourceMap(m map[corev1.ResourceName]int64) map[corev1.ResourceName]int64 {
    if m == nil {
        return nil
    }
    newMap := make(map[corev1.ResourceName]int64, len(m))
    for k, v := range m {
        newMap[k] = v
    }
    return newMap
}

Then you could replace this block with a single call: capCopy := copyResourceMap(capTemplate).
This map copying logic is repeated in a few places in this file
Yes, but compared to extracting it into a helper method, I prefer using util.Resource. First, it's a struct and doesn't require repeated copying; second, it encapsulates many resource-related operations, which can simplify the code.
Lines 29 to 37 in 2953630

type Resource struct {
    MilliCPU         int64
    Memory           int64
    EphemeralStorage int64
    AllowedPodNumber int64

    // ScalarResources
    ScalarResources map[corev1.ResourceName]int64
}
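As an illustration of why the struct helps: a Clone on this type only needs to deep-copy the ScalarResources map, since the fixed fields copy by value. This is a hypothetical sketch written against the struct above; karmada's util.Resource may already provide an equivalent.

// Clone returns a copy of the Resource. The int64 fields are copied by value;
// only the ScalarResources map needs an explicit copy. Hypothetical sketch.
func (r *Resource) Clone() *Resource {
    c := *r
    if r.ScalarResources != nil {
        c.ScalarResources = make(map[corev1.ResourceName]int64, len(r.ScalarResources))
        for k, v := range r.ScalarResources {
            c.ScalarResources[k] = v
        }
    }
    return &c
}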
Force-pushed from 8308268 to 0608885
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Signed-off-by: mszacillo <[email protected]>
Force-pushed from 0608885 to 67b0f24
RainbowMango left a comment
/assign
/gemini review
Code Review
This pull request implements getMaximumSetsBasedOnResourceModels to estimate the maximum number of component sets that can be scheduled on a cluster, based on its resource models. The implementation uses a binary search over the number of sets, and for each guess, it checks feasibility using a first-fit-decreasing bin packing algorithm. This is a solid approach, and the PR includes a comprehensive set of unit tests. I've found a critical issue in the buildModelNodes helper function, where it incorrectly assumes that resource models and allocatable modelings slices are sorted by grade, which can lead to incorrect calculations or panics. I've also found a minor issue with a confusing comment in one of the test cases. I've provided suggestions to fix both.
func buildModelNodes(cluster *clusterv1alpha1.Cluster) ([]modelNode, error) {
    if len(cluster.Spec.ResourceModels) == 0 {
        return nil, fmt.Errorf("resource model is inapplicable as no grades are defined")
    }

    if len(cluster.Spec.ResourceModels) > len(cluster.Status.ResourceSummary.AllocatableModelings) {
        // Shouldn't happen - status is malformed
        return nil, fmt.Errorf("resource model/status mismatch: %d grades in spec, %d in status",
            len(cluster.Spec.ResourceModels), len(cluster.Status.ResourceSummary.AllocatableModelings))
    }

    // Convert Spec.ResourceModels to a map of resource -> []MinByGrade
    minMap := convertToResourceModelsMinMap(cluster.Spec.ResourceModels)

    // Build nodes for each grade index i
    var nodes []modelNode
    for i := 0; i < len(cluster.Spec.ResourceModels); i++ {
        count := cluster.Status.ResourceSummary.AllocatableModelings[i].Count
        if count == 0 {
            continue
        }

        // Capacity vector for this grade = Min boundary of each resource at grade i (normalized)
        capTemplate := make(map[corev1.ResourceName]int64, len(minMap))
        for resName, mins := range minMap {
            if i >= len(mins) {
                // Model shape mismatch; treat as missing resource for this grade
                return nil, fmt.Errorf("resource model is inapplicable as missing resource %q in grade %d", string(resName), i)
            }
            capTemplate[resName] = quantityAsInt64(mins[i])
        }

        // Append `count` identical nodes of this grade
        for n := 0; n < count; n++ {
            // Copy capTemplate to each node
            capCopy := make(map[corev1.ResourceName]int64, len(capTemplate))
            for k, v := range capTemplate {
                capCopy[k] = v
            }
            nodes = append(nodes, modelNode{cap: capCopy})
        }
    }
    return nodes, nil
}
The current implementation of buildModelNodes assumes that cluster.Spec.ResourceModels and cluster.Status.ResourceSummary.AllocatableModelings slices are sorted by Grade and that grades are contiguous starting from 0. The API does not guarantee this, so iterating with an index i and using it as a grade can lead to incorrect results or panics if the slices are not sorted as expected.
To make the function robust, it's better to process these slices into maps keyed by grade. This avoids any assumptions about a priori sorting.
// buildModelNodes constructs identical nodes for each model grade using its Min vector,
// repeated `AllocatableModelings[i].Count` times.
func buildModelNodes(cluster *clusterv1alpha1.Cluster) ([]modelNode, error) {
    if len(cluster.Spec.ResourceModels) == 0 {
        return nil, fmt.Errorf("resource model is inapplicable as no grades are defined")
    }

    // Use maps to handle potentially unsorted model and allocatable slices.
    modelsByGrade := make(map[uint]clusterv1alpha1.ResourceModel)
    for _, model := range cluster.Spec.ResourceModels {
        modelsByGrade[model.Grade] = model
    }
    allocatableByGrade := make(map[uint]int)
    for _, allocatable := range cluster.Status.ResourceSummary.AllocatableModelings {
        allocatableByGrade[allocatable.Grade] = allocatable.Count
    }

    // Sort grades for deterministic node ordering.
    sortedGrades := make([]uint, 0, len(modelsByGrade))
    for grade := range modelsByGrade {
        sortedGrades = append(sortedGrades, grade)
    }
    sort.Slice(sortedGrades, func(i, j int) bool { return sortedGrades[i] < sortedGrades[j] })

    var nodes []modelNode
    for _, grade := range sortedGrades {
        model := modelsByGrade[grade]
        count := allocatableByGrade[grade] // Defaults to 0 if grade not in map.
        if count == 0 {
            continue
        }

        // Capacity vector for this grade.
        capTemplate := make(map[corev1.ResourceName]int64)
        for _, r := range model.Ranges {
            capTemplate[r.Name] = quantityAsInt64(r.Min)
        }

        // Append `count` identical nodes of this grade.
        for n := 0; n < count; n++ {
            capCopy := make(map[corev1.ResourceName]int64, len(capTemplate))
            for k, v := range capTemplate {
                capCopy[k] = v
            }
            nodes = append(nodes, modelNode{cap: capCopy})
        }
    }
    return nodes, nil
}

AllocatableModelings: []clusterv1alpha1.AllocatableModeling{
    {Count: 2}, // g0 nodes -> 2 JMs max
    {Count: 6}, // g1 nodes -> 6 TMs max
    {Count: 3}, // g2 nodes -> 3 JMs + 3 TMs (1 set)
This comment is a bit confusing. A g2 node has {4CPU, 12Gi} capacity and can fit one TM ({3CPU, 10Gi}) and one JM ({1CPU, 2Gi}). So 3 g2 nodes can accommodate 3 TMs and 3 JMs. This is not equivalent to '1 set' (which requires 1 JM and 3 TMs). To improve clarity, consider rephrasing the comment to describe the capacity of the nodes themselves.
- {Count: 3}, // g2 nodes -> 3 JMs + 3 TMs (1 set)
+ {Count: 3}, // g2 nodes -> can fit 1 JM and 1 TM each
Thanks @mszacillo — this is a tough task!
Admittedly, the
Thanks for the review @zhzhuang-zju !
Yes, binary search helps here. I implemented this method with the understanding that we are looking for an exact maximum of possible component sets, rather than an approximation, which would have been a lot simpler. We're looking for the max number of component sets between 0 and upperBound, with the knowledge that:
We could naively loop through all possible set numbers in [0, upperBound] and run the feasibility check on each, but given the rules listed above, binary search fits in quite well, since O(log N) < O(N).
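Concretely, because feasibility is monotone in the number of sets (if k sets fit, so does any smaller count), the search looks like the following sketch; feasible stands in for the PR's fit check and is not its actual signature.

// maxFeasibleSets returns the largest k in [0, upperBound] for which k
// complete component sets fit, given a monotone feasibility predicate.
func maxFeasibleSets(upperBound int, feasible func(sets int) bool) int {
    lo, hi := 0, upperBound
    for lo < hi {
        mid := (lo + hi + 1) / 2 // bias upward so the loop always terminates
        if feasible(mid) {
            lo = mid // mid sets fit; the answer is at least mid
        } else {
            hi = mid - 1 // mid sets don't fit; the answer is below mid
        }
    }
    return lo
}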
Not sure I fully follow. By full validations, you are referring to the feasibility fit check, right? Even though each iteration fully validates feasibility, binary search reduces how many times the check is done. As for the feasibility check's time complexity (with k = number of components), we sort each time, O(k log k), and then do a greedy packing, which is O(k * n * d), where n is the number of nodes and d the number of resources. Since the number of components and resources is generally small, the cost mostly depends on the number of nodes (larger clusters take longer to check). That said, I'm realizing that we can move the sorting outside of the feasibility check, since the score we assign depends only on the cluster. It then needs to be computed only once, and we can instead just set the replica count to match the number of sets we are running the feasibility check for. That would improve efficiency.
I'm happy to unify the calculation logic to make the code cleaner, but to me this sounds like a follow-up task. I could possibly even pick this up.
/assign
Binary search reduces the number of feasibility fit checks. However, is it possible to obtain the final result in a single feasibility fit check—for example, by assigning component sets one by one until we can no longer assign a complete component set?
Yes, you're absolutely right; this can be addressed as a follow-up task.
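A sketch of that single-pass idea, reusing the illustrative fits/place helpers from earlier in this thread (again an assumption, not the PR's code):

// countSetsIncrementally packs complete component sets one at a time onto the
// node snapshot until a full set no longer fits, avoiding repeated
// from-scratch feasibility checks.
func countSetsIncrementally(nodes []modelNode, setDemands []map[corev1.ResourceName]int64, upperBound int) int {
    sets := 0
    for sets < upperBound {
        // Tentatively place one full set; remember placements so a partial
        // fit can be rolled back.
        type placement struct {
            node   int
            demand map[corev1.ResourceName]int64
        }
        var placed []placement
        ok := true
        for _, demand := range setDemands {
            found := false
            for i := range nodes {
                if fits(nodes[i], demand) {
                    place(nodes[i], demand)
                    placed = append(placed, placement{node: i, demand: demand})
                    found = true
                    break
                }
            }
            if !found {
                ok = false
                break
            }
        }
        if !ok {
            // Roll back the partial set and stop.
            for _, p := range placed {
                for res, want := range p.demand {
                    nodes[p.node].cap[res] += want
                }
            }
            return sets
        }
        sets++
    }
    return sets
}

One caveat worth checking in that follow-up: packing one set at a time can, in principle, yield a different count than repacking all k sets jointly, since earlier placements constrain later ones.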
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR implements maxAvailableComponentSets within the general estimator, based on resource models. The packing algorithm is first-fit decreasing, a greedy approach to bin packing. To make the calculation of maxSets more efficient, we binary-search over the candidate set count, attempting at each step to fit the required number of replicas onto the existing nodes.
Which issue(s) this PR fixes:
Part of #6734
Does this PR introduce a user-facing change?: