diff --git a/blog/kubecon-eu-2026-recap/index.md b/blog/kubecon-eu-2026-recap/index.md
index bc0edca4..87e4280a 100644
--- a/blog/kubecon-eu-2026-recap/index.md
+++ b/blog/kubecon-eu-2026-recap/index.md
@@ -75,7 +75,7 @@ This is why what appears to be a low-level problem has become one of the core is
 HAMi Maintainer Xiao Zhang's talk started from a classic, long-standing problem in the Kubernetes community: **How can multiple containers share a GPU?**
 
-While this question seems specific, it actually points to a challenge the entire AI infrastructure ecosystem faces. Once you enter inference, batch processing, online serving, and multi-tenant mixed scenarios, GPUs can no longer be simply allocated in an "exclusive whole-card" manner.
+While this question seems specific, it actually points to a challenge the entire AI infrastructure ecosystem faces. Once you enter inference, batch processing, online serving, and multi-tenant mixed scenarios, GPUs can no longer be allocated in an "exclusive whole-card" manner.
 
 The significance of this talk lies in putting HAMi's solution back into the original context of the Kubernetes community: not building an isolated solution from scratch, but addressing a long-standing upstream problem that hasn't been fully resolved.