Kubernetes 1.34 Features Explained (scaleops.com)

🤖 AI Summary
Kubernetes 1.34 (“Of Wind & Will”) signals a pragmatic third era: instead of stacking new abstractions, it makes core primitives hardware- and topology-aware to better match production realities, especially for AI/ML and HPC workloads. The headline is Dynamic Resource Allocation (DRA) reaching GA: ResourceClaim objects and DeviceClass definitions let the scheduler reason about real device topology (NVLink, memory, product type) rather than opaque GPU counts. DRA requires enabling the DynamicResourceAllocation feature gate across the control plane and kubelets, plus a DRA-capable device plugin (e.g., an NVIDIA operator with DRA support). AllocateOnce semantics give predictable, topology-aware GPU assignment, reducing the “GPU lottery” and potentially doubling utilization.

Swap support (Beta) introduces controlled swap for Burstable pods (never for Guaranteed or BestEffort) via kubelet flags or KubeletConfiguration; monitor it through the Summary API or Prometheus (node_swap_usage_bytes, pod_swap_usage_bytes) and keep control plane nodes swap-free.

1.34 also promotes kernel-level signals into scheduling and autoscaling workflows. PSI (Beta) exposes cgroup pressure-stall metrics (via --feature-gates=KubeletPSI=true) so autoscalers can react to time spent waiting rather than raw CPU/memory usage, integrable through Prometheus Adapter (example metric: psi_memory_stall_seconds_total). Networking’s trafficDistribution (Beta) adds PreferSameZone and PreferSameNode to reduce cross-AZ egress and tail latency.

Security gains in-process Mutating Admission Policies (Beta) written in CEL, with no external webhooks; they are enabled via feature gates and runtime-config flags, simplifying safe defaults and reducing failure points. Together these features let platforms (e.g., ScaleOps) ingest richer signals for topology-aware placement, stable autoscaling, and continuous rightsizing.
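To get a feel for what the scheduler now reasons about, here is a minimal Go sketch that lists DeviceClass and ResourceClaim objects with client-go's dynamic client. The API group/version (resource.k8s.io/v1), the in-cluster credentials, and the "default" namespace are assumptions; the summary above does not spell out the API surface, so treat this as orientation rather than a reference.

```go
// list_dra.go: inspect the DRA objects (DeviceClass, ResourceClaim) that the
// 1.34 scheduler allocates against, instead of relying on opaque GPU counts.
// Assumes the GA API group/version resource.k8s.io/v1 and in-cluster config.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("load in-cluster config: %v", err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("create dynamic client: %v", err)
	}
	ctx := context.Background()

	// DeviceClasses are cluster-scoped: they describe categories of hardware
	// (e.g. a GPU product type) that ResourceClaims can request.
	classGVR := schema.GroupVersionResource{Group: "resource.k8s.io", Version: "v1", Resource: "deviceclasses"}
	classes, err := dyn.Resource(classGVR).List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list deviceclasses: %v", err)
	}
	for _, c := range classes.Items {
		fmt.Println("DeviceClass:", c.GetName())
	}

	// ResourceClaims are namespaced: each records which concrete devices were
	// allocated to a pod, which is what makes placement topology-aware.
	claimGVR := schema.GroupVersionResource{Group: "resource.k8s.io", Version: "v1", Resource: "resourceclaims"}
	claims, err := dyn.Resource(claimGVR).Namespace("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list resourceclaims: %v", err)
	}
	for _, rc := range claims.Items {
		fmt.Printf("ResourceClaim: %s/%s\n", rc.GetNamespace(), rc.GetName())
	}
}
```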
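For swap monitoring via the Summary API mentioned above, a hedged sketch: it proxies a node's /stats/summary endpoint through the API server using client-go and prints node-level swap usage. The swapUsageBytes field name is an assumption based on the kubelet stats schema (not stated in the summary), and the kubeconfig path and node-name argument are placeholders.

```go
// swap_summary.go: read node-level swap usage from the kubelet Summary API,
// proxied through the API server (GET /api/v1/nodes/<node>/proxy/stats/summary).
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Minimal view of the Summary API response; field names such as swapUsageBytes
// are assumed from the kubelet stats schema and may differ in your version.
type summary struct {
	Node struct {
		NodeName string `json:"nodeName"`
		Swap     struct {
			SwapUsageBytes uint64 `json:"swapUsageBytes"`
		} `json:"swap"`
	} `json:"node"`
}

func main() {
	nodeName := os.Args[1] // e.g. "worker-1"

	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("create clientset: %v", err)
	}

	// Fetch the kubelet's stats/summary for the node via the API server proxy.
	raw, err := client.CoreV1().RESTClient().
		Get().
		Resource("nodes").
		Name(nodeName).
		SubResource("proxy").
		Suffix("stats/summary").
		DoRaw(context.Background())
	if err != nil {
		log.Fatalf("fetch summary: %v", err)
	}

	var s summary
	if err := json.Unmarshal(raw, &s); err != nil {
		log.Fatalf("decode summary: %v", err)
	}
	fmt.Printf("node %s swap usage: %d bytes\n", s.Node.NodeName, s.Node.Swap.SwapUsageBytes)
}
```

Control plane nodes should stay swap-free, so a check like this belongs on worker nodes running Burstable workloads.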
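If PSI metrics are scraped into Prometheus, an autoscaling or rightsizing controller can query stall time directly instead of raw utilization. The sketch below uses the official Prometheus Go client to compute a five-minute rate of the article's example metric (psi_memory_stall_seconds_total); the metric name, label set, and Prometheus address are assumptions taken from or added around the summary, not verified names.

```go
// psi_query.go: query memory pressure-stall rate from Prometheus so a controller
// can react to time spent waiting rather than raw CPU/memory usage.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Placeholder address; point this at your Prometheus service.
	client, err := api.NewClient(api.Config{Address: "http://prometheus.monitoring:9090"})
	if err != nil {
		log.Fatalf("create Prometheus client: %v", err)
	}
	promAPI := promv1.NewAPI(client)

	// Metric name follows the article's example; adjust to whatever your
	// exporter actually emits once KubeletPSI is enabled.
	query := `sum by (node) (rate(psi_memory_stall_seconds_total[5m]))`

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		log.Fatalf("query failed: %v", err)
	}
	if len(warnings) > 0 {
		log.Printf("warnings: %v", warnings)
	}

	// A node spending a growing share of each second stalled on memory is a
	// stronger scale-out or rightsizing signal than utilization alone.
	fmt.Println(result.String())
}
```

The same query shape is what a Prometheus Adapter rule would expose to the HPA as a custom metric.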
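The trafficDistribution preference is just a field on the Service spec, so any client can set it. A minimal client-go sketch follows, using the string literal "PreferSameZone" described above (the named constant may vary by client-go version); the service name, namespace, selector, and kubeconfig path are illustrative.

```go
// same_zone_service.go: create a Service that asks dataplane components to
// prefer same-zone endpoints, reducing cross-AZ egress and tail latency.
package main

import (
	"context"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("create clientset: %v", err)
	}

	// "PreferSameZone" is the 1.34 beta value described in the article;
	// "PreferSameNode" would keep traffic on the local node where possible.
	preference := "PreferSameZone"

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Selector:            map[string]string{"app": "web"},
			Ports:               []corev1.ServicePort{{Port: 80}},
			TrafficDistribution: &preference,
		},
	}

	if _, err := client.CoreV1().Services("default").Create(context.Background(), svc, metav1.CreateOptions{}); err != nil {
		log.Fatalf("create service: %v", err)
	}
	log.Println("service web created with trafficDistribution=PreferSameZone")
}
```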