Describe the bug
The pod-level resource tracking logic in `GetPods` does not recognize Intel or Gaudi GPU requests, leading to incomplete usage reporting. While the node inventory correctly identifies these accelerators, the pod views report 0 usage because only NVIDIA and AMD resource names are checked.
Environment
- OS: Linux
- KC Version: v0.2.x
- Kubernetes cluster type: OpenShift / EKS with Intel GPU or Gaudi accelerators.
Steps To Reproduce
- Deploy a pod that requests Intel GPUs (e.g., `gpu.intel.com/i915: 1`) or Gaudi accelerators (`habana.ai/gaudi: 1`).
- Navigate to the Pod Details or Workload view in the Console.
- Observe that the "GPU Requested" column or field shows `0`, despite the pod having a valid accelerator request.
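For reference, a minimal pod manifest that reproduces the issue might look like the following sketch (the pod and container names are illustrative; any image works, since only the resource request matters). Note that extended resources such as these must be set under `limits`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-repro            # illustrative name
spec:
  containers:
    - name: workload
      image: busybox         # any image; the request is what matters
      resources:
        limits:
          gpu.intel.com/i915: 1   # or habana.ai/gaudi: 1
```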
Expected Behavior
The console should track all supported accelerator types in pods, including Intel (`gpu.intel.com/i915`) and Habana/Gaudi (`habana.ai/gaudi`, `intel.com/gaudi`), to remain consistent with the node-level inventory tracker.
Browser Console / Backend Logs
Want to contribute?
Additional Context
The bug is located in `pkg/k8s/client_resources.go`:

```go
// Line 68: Missing Intel/Gaudi resource names
if resourceName == "nvidia.com/gpu" || resourceName == "amd.com/gpu" {
    ci.GPURequested = int(qty.Value())
}
```

This should be updated to include the additional resource names already supported in `client_gpu.go`.
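One possible shape for the fix, sketched below under the assumption that the accelerator names mirror those listed in this issue. The helper name `isGPUResource` and the map-based lookup are illustrative, not the actual code in `client_gpu.go`:

```go
package main

import "fmt"

// gpuResourceNames lists the extended-resource names treated as GPUs,
// matching the accelerator types named in this issue. Using a set keeps
// the pod-level check in sync with the node-level inventory as new
// vendors are added.
var gpuResourceNames = map[string]bool{
	"nvidia.com/gpu":     true,
	"amd.com/gpu":        true,
	"gpu.intel.com/i915": true,
	"habana.ai/gaudi":    true,
	"intel.com/gaudi":    true,
}

// isGPUResource reports whether an extended-resource name counts as a GPU.
func isGPUResource(name string) bool {
	return gpuResourceNames[name]
}

func main() {
	fmt.Println(isGPUResource("gpu.intel.com/i915")) // true
	fmt.Println(isGPUResource("cpu"))                // false
}
```

The `if resourceName == ...` chain at line 68 would then collapse to a single `isGPUResource(resourceName)` call.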