@@ -973,6 +973,33 @@ Think through this both in small and large cases, again with respect to the
[supported limits]: https://git.k8s.io/community//sig-scalability/configs-and-limits/thresholds.md
-->
+ ###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
+
+ <!--
+ Focus not just on happy cases, but primarily on more pathological cases
+ (e.g. probes taking a minute instead of milliseconds, failed pods consuming resources, etc.).
+ If any of the resources can be exhausted, how this is mitigated with the existing limits
+ (e.g. pods per node) or new limits added by this KEP?
+ Are there any tests that were run/should be run to understand performance characteristics better
+ and validate the declared limits?
+ -->
+
+ The kubelet splits the host UID/GID space across pods to use for their user
+ namespace mappings. The design allows for 65k pods per node, and in the alpha
+ phase the number of pods that can use the feature is limited to the minimum of
+ the kubelet's maxPods setting and 1024. This guarantees we are not
+ inadvertently exhausting the resource.
+
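+ To make the arithmetic concrete, here is a minimal sketch (the function names
+ and the 110 maxPods default used below are illustrative, not the kubelet
+ implementation) of how a per-pod ID range and the alpha-phase cap could be
+ derived:
+
+ ```go
+ // Illustrative sketch only: how a per-pod ID range could be carved out of the
+ // host UID/GID space. Names (podLimit, allocRange) are hypothetical.
+ package main
+
+ import "fmt"
+
+ const (
+ 	idsPerPod = 65536 // each pod gets a full 16-bit ID range
+ 	alphaCap  = 1024  // alpha-phase cap on pods using user namespaces
+ )
+
+ // podLimit returns how many pods may use user namespaces on this node:
+ // the minimum of the kubelet's maxPods setting and the alpha cap.
+ func podLimit(maxPodsSetting int) int {
+ 	if maxPodsSetting < alphaCap {
+ 		return maxPodsSetting
+ 	}
+ 	return alphaCap
+ }
+
+ // allocRange returns the host ID range assigned to the pod at index i.
+ // With 65536 IDs per pod, the 32-bit ID space fits ~65k pods, so the
+ // alpha cap of 1024 stays far from exhausting it.
+ func allocRange(i int) (firstID, length uint32) {
+ 	return uint32(i) * idsPerPod, idsPerPod
+ }
+
+ func main() {
+ 	fmt.Println("pods allowed:", podLimit(110)) // 110 is a typical maxPods value
+ 	first, n := allocRange(3)
+ 	fmt.Printf("pod 3 maps container IDs 0-%d to host IDs %d-%d\n", n-1, first, first+n-1)
+ }
+ ```
+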
993
+ Container runtimes might use more disk space or inodes to chown the rootfs if
+ they choose to support this feature without relying on new Linux kernels (or
+ while still supporting old kernels), as new kernels allow idmap mounts, which
+ add no overhead in either space or inodes.
+
+ For CRI-O and containerd, we are working to incrementally support all
+ variations (idmap mounts, with no overhead; the overlayfs metacopy parameter,
+ which adds only inode overhead; and a full rootfs chown, which has space
+ overhead) and to document them appropriately.
+
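+ As a rough illustration of why the chown fallback is the expensive variant,
+ here is a hypothetical Go sketch (the rootfs path, offset, and function name
+ are made up; this is not the CRI-O or containerd implementation) of a
+ recursive ownership shift into the pod's host ID range. On overlayfs each
+ chown forces a copy-up, costing disk space, or only inodes when the layer is
+ mounted with metacopy=on; with idmapped mounts this walk is unnecessary.
+
+ ```go
+ // Illustrative sketch only: the "full rootfs chown" fallback a runtime could
+ // use on kernels without idmapped mounts. Every file's owner is shifted into
+ // the pod's assigned host ID range.
+ package main
+
+ import (
+ 	"io/fs"
+ 	"log"
+ 	"os"
+ 	"path/filepath"
+ 	"syscall"
+ )
+
+ // shiftOwnership remaps the owner of every file under rootfs by adding
+ // hostIDOffset (the first host ID assigned to the pod) to its current IDs.
+ func shiftOwnership(rootfs string, hostIDOffset uint32) error {
+ 	return filepath.WalkDir(rootfs, func(path string, d fs.DirEntry, err error) error {
+ 		if err != nil {
+ 			return err
+ 		}
+ 		info, err := d.Info()
+ 		if err != nil {
+ 			return err
+ 		}
+ 		st, ok := info.Sys().(*syscall.Stat_t)
+ 		if !ok {
+ 			return nil // non-Linux stat; skip in this sketch
+ 		}
+ 		newUID := int(st.Uid + hostIDOffset)
+ 		newGID := int(st.Gid + hostIDOffset)
+ 		// Lchown so symlinks themselves are updated, not their targets.
+ 		return os.Lchown(path, newUID, newGID)
+ 	})
+ }
+
+ func main() {
+ 	// Hypothetical path and offset, for illustration only.
+ 	if err := shiftOwnership("/var/lib/example/rootfs", 196608); err != nil {
+ 		log.Fatal(err)
+ 	}
+ }
+ ```
+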
### Troubleshooting
<!--