@@ -732,6 +732,33 @@ Are there any tests that were run/should be run to understand performance charac
and validate the declared limits?
-->

+ ###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
+
+ <!--
+ Focus not just on happy cases, but primarily on more pathological cases
+ (e.g. probes taking a minute instead of milliseconds, failed pods consuming resources, etc.).
+ If any of the resources can be exhausted, how this is mitigated with the existing limits
+ (e.g. pods per node) or new limits added by this KEP?
+ Are there any tests that were run/should be run to understand performance characteristics better
+ and validate the declared limits?
+ -->
+
+ The kubelet splits the host UID/GID space among pods, assigning each pod a
+ range to use for its user namespace mapping. The design allows for 65k pods
+ per node, and in the alpha phase the resource is limited to the minimum of
+ the kubelet's maxPods setting and 1024. This guarantees we are not
+ inadvertently exhausting the resource.
+
+ Container runtimes might use more disk space or inodes to chown the rootfs.
+ This is the case if they choose to support this feature without relying on
+ new Linux kernels (or while also supporting old kernels), as new kernels
+ allow idmap mounts, which add no overhead (neither space nor inodes).
+
+ For CRI-O and containerd, we are working to incrementally support all the
+ variations (idmap mounts, with no overhead; the overlayfs metacopy param,
+ which adds only inode overhead; and a full rootfs chown, which adds space
+ overhead) and to document them appropriately.
+
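The ID-space arithmetic above can be sketched as follows. This is an illustrative sketch, not the actual kubelet code: the function names and the assumption that each pod gets a full 65536-ID range out of the 32-bit host ID space are made up for illustration; only the alpha-phase cap of min(maxPods, 1024) comes from the text.

```go
package main

import "fmt"

// idsPerPod assumes each pod is assigned one full 16-bit ID range
// (hypothetical value, chosen so ~65k pods fit in the 32-bit space).
const idsPerPod = 65536

// maxPodsByIDSpace returns how many per-pod ranges fit in a 32-bit
// host UID/GID space: 2^32 / 65536 = 65536, i.e. the ~65k pods per
// node the design allows.
func maxPodsByIDSpace() uint64 {
	return (1 << 32) / idsPerPod
}

// alphaPodLimit applies the alpha-phase cap from the text: the
// minimum of the kubelet's maxPods setting and 1024.
func alphaPodLimit(maxPods uint64) uint64 {
	const alphaCap = 1024
	if maxPods < alphaCap {
		return maxPods
	}
	return alphaCap
}

// rangeForPod returns the first host ID of the range assigned to the
// n-th pod (0-indexed), skipping range 0, which covers the host's
// own IDs (again, a hypothetical allocation scheme).
func rangeForPod(n uint64) uint64 {
	return (n + 1) * idsPerPod
}

func main() {
	fmt.Println(maxPodsByIDSpace()) // 65536
	fmt.Println(alphaPodLimit(110)) // 110 (a typical maxPods value)
	fmt.Println(alphaPodLimit(5000)) // 1024
	fmt.Println(rangeForPod(0)) // 65536
}
```

With these numbers, the 1024-pod alpha cap uses at most 1024 of the ~65k available ranges, which is why the text can claim the resource is not inadvertently exhausted.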
### Troubleshooting

<!--