2 changes: 1 addition & 1 deletion blog/kubecon-eu-2026-recap/index.md
@@ -75,7 +75,7 @@ This is why what appears to be a low-level problem has become one of the core is

HAMi Maintainer Xiao Zhang's talk started from a classic, long-standing problem in the Kubernetes community: **How can multiple containers share a GPU?**

-While this question seems specific, it actually points to a challenge the entire AI infrastructure ecosystem faces. Once you enter inference, batch processing, online serving, and multi-tenant mixed scenarios, GPUs can no longer be simply allocated in an "exclusive whole-card" manner.
+While this question seems specific, it actually points to a challenge the entire AI infrastructure ecosystem faces. Once you enter inference, batch processing, online serving, and multi-tenant mixed scenarios, GPUs can no longer be allocated in an "exclusive whole-card" manner.

The significance of this talk lies in putting HAMi's solution back into the original context of the Kubernetes community: not building an isolated solution from scratch, but addressing a long-standing upstream problem that hasn't been fully resolved.
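As a concrete illustration of the sharing model the paragraph above describes, the sketch below shows how a fractional GPU request could look from the Kubernetes API. It is a minimal sketch, not part of the talk: it assumes a cluster with HAMi installed, uses HAMi's extended resource names `nvidia.com/gpumem` and `nvidia.com/gpucores` as documented upstream, and the pod name, image, and quantities are illustrative assumptions.

```python
# Minimal sketch: request a slice of a GPU rather than a whole card.
# Assumes the HAMi device plugin and scheduler are installed; the extended
# resource names follow HAMi's documentation, and all quantities are
# illustrative.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="shared-gpu-demo"),  # hypothetical name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="inference",
                image="nvidia/cuda:12.4.0-base-ubuntu22.04",  # illustrative image
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={
                        "nvidia.com/gpu": "1",        # one GPU slice, not an exclusive card
                        "nvidia.com/gpumem": "3000",  # ~3 GB of device memory
                        "nvidia.com/gpucores": "30",  # ~30% of compute
                    }
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Several such pods can then be scheduled onto the same physical GPU, which is exactly the departure from "exclusive whole-card" allocation that the paragraph describes.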
