@@ -579,14 +579,10 @@ use container runtime versions that have the needed changes.
 
 ##### Beta
 
-- Make plans on whether, when, and how to enable by default
-
-###### Open Questions
-
-- Should we reconsider making the mappings smaller by default?
-- Should we allow any way for users to for "more" IDs mapped? If yes, how many more and how?
-- Should we allow the user to ask for specific mappings?
-- Get review from VM container runtimes maintainers
+- Gather and address feedback from the community
+- Be able to configure UID/GID ranges to use for pods
+- Get review from VM container runtimes maintainers (not a blocker, as VM runtimes should just
+  ignore the field, but nice to have)
 
 ##### GA
 
@@ -1149,15 +1145,23 @@ KEPs can explore this path if they so want to.
 
 ### 64k mappings?
 
-We will start with mappings of 64K. Tim Hockin, however, has expressed
-concerns. See more info on [this Github discussion](https://github.com/kubernetes/enhancements/pull/3065#discussion_r781676224)
-SergeyKanzhelev [suggested a nice alternative](https://github.com/kubernetes/enhancements/pull/3065#discussion_r807408134),
-to limit the number of pods so we guarantee enough spare host UIDs in case we
-need them for the future. There is no final decision yet on how to handle this.
-For now we will limit the number of pods, so the wide mapping is not
-problematic, but [there are downsides to this too](https://github.com/kubernetes/enhancements/pull/3065#discussion_r812806223)
+We discussed using shorter mappings, and even allowing longer ones, in the past. The decision is
+to use 64k mappings (IDs 0-65535 are mapped/valid in the pod).
 
-For stateless pods this is of course not an issue.
+The reasons to consider smaller mappings were valid only before idmap mounts were merged into the
+kernel. However, idmap mounts have been in the kernel for some years now and we require them,
+making those reasons moot.
+
+The issue without idmap mounts, in previous iterations of this KEP, was that the IDs assigned to a
+pod had to be unique for every pod in the cluster, easily reaching a limit when the cluster is "big
+enough" and the UID space runs out. With idmap mounts, however, the IDs assigned to a pod only
+need to be unique within the node (and with 64k ranges, 64k pods are possible per node, so this is
+not really an issue). In other words, by using idmap mounts, we changed the ID limit from
+cluster-scoped to node-scoped.
+
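+To make the node-scoped reuse concrete, here is a minimal sketch of how an idmapped mount can be
+created with the Linux mount_setattr(2) API (Linux 5.12+), using golang.org/x/sys/unix. This is
+illustrative only, not the actual kubelet or runtime code; the idmapMount helper and the paths and
+PID in main are made up for the example:
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"golang.org/x/sys/unix"
+)
+
+// idmapMount exposes hostDir at targetDir through the ID mapping of the
+// user namespace referred to by usernsFD, so files owned by IDs 0-65535
+// on disk show up with the pod's IDs no matter which host range the pod
+// was assigned.
+func idmapMount(hostDir, targetDir string, usernsFD int) error {
+	// Clone the mount so its attributes can be changed without
+	// touching the original mount point.
+	treeFD, err := unix.OpenTree(unix.AT_FDCWD, hostDir, unix.OPEN_TREE_CLONE)
+	if err != nil {
+		return fmt.Errorf("open_tree %q: %w", hostDir, err)
+	}
+	defer unix.Close(treeFD)
+
+	// Apply the user namespace's ID mapping to the cloned mount.
+	attr := unix.MountAttr{
+		Attr_set:  unix.MOUNT_ATTR_IDMAP,
+		Userns_fd: uint64(usernsFD),
+	}
+	if err := unix.MountSetattr(treeFD, "", unix.AT_EMPTY_PATH, &attr); err != nil {
+		return fmt.Errorf("mount_setattr: %w", err)
+	}
+
+	// Attach the idmapped clone at the target path.
+	if err := unix.MoveMount(treeFD, "", unix.AT_FDCWD, targetDir,
+		unix.MOVE_MOUNT_F_EMPTY_PATH); err != nil {
+		return fmt.Errorf("move_mount %q: %w", targetDir, err)
+	}
+	return nil
+}
+
+func main() {
+	// Illustrative usage: borrow the user namespace of an already running
+	// pod process (pid 4242 is a placeholder) and idmap a volume to it.
+	usernsFD, err := unix.Open("/proc/4242/ns/user", unix.O_RDONLY|unix.O_CLOEXEC, 0)
+	if err != nil {
+		panic(err)
+	}
+	defer unix.Close(usernsFD)
+	if err := idmapMount("/var/lib/volumes/vol1", "/mnt/vol1", usernsFD); err != nil {
+		panic(err)
+	}
+	fmt.Println("idmapped mount created")
+}
+```
+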
+There are no use cases for longer mappings that we know of. The 16-bit range (0-65535) is what all
+POSIX tools we are aware of assume. If the need arises, longer mappings can be considered in a
+future KEP.
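+
+As a back-of-the-envelope check of the "64k pods per node" claim above, here is a sketch of a
+hypothetical per-node range allocator (the rangeAllocator type is invented for this example; it is
+not the kubelet's implementation): the 32-bit host ID space divided into 65536-ID blocks yields
+65536 disjoint per-pod ranges.
+
+```go
+package main
+
+import "fmt"
+
+const (
+	idsPerPod  = 1 << 16                       // 65536 IDs mapped into each pod (0-65535)
+	hostIDBits = 32                            // size of the host UID/GID space
+	maxRanges  = (1 << hostIDBits) / idsPerPod // 65536 disjoint ranges per node
+)
+
+// rangeAllocator tracks which 64k blocks of this node's ID space are in use.
+type rangeAllocator struct {
+	used map[uint32]bool // block index -> allocated
+}
+
+// allocate returns the first host ID of a free 64k block.
+func (a *rangeAllocator) allocate() (uint32, error) {
+	// Block 0 is skipped: it holds the host's own system IDs.
+	for block := uint32(1); block < maxRanges; block++ {
+		if !a.used[block] {
+			a.used[block] = true
+			return block * idsPerPod, nil
+		}
+	}
+	return 0, fmt.Errorf("node ID space exhausted after %d pods", maxRanges-1)
+}
+
+func main() {
+	a := &rangeAllocator{used: map[uint32]bool{}}
+	hostID, _ := a.allocate()
+	// The pod sees IDs 0-65535; on the host they are hostID..hostID+65535.
+	fmt.Printf("pod range: container 0-65535 -> host %d-%d\n",
+		hostID, hostID+idsPerPod-1)
+}
+```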
 
 ### Allow runtimes to pick the mapping?
 
@@ -1166,7 +1170,11 @@ mapping and have different runtimes pick different mappings. While KEP authors
 disagree on this, we still need to discuss it and settle on something. This was
 [raised here](https://github.com/kubernetes/enhancements/pull/3065#discussion_r798760382)
 
-This is not a blocker for the KEP, but it is something that can be changed later on.
+Furthermore, the concerns mentioned by Tim (some nodes having CRI-O, others having containerd,
+etc.) are handled correctly now: different nodes can use different container runtimes, and if a
+custom range needs to be used by the kubelet, that can be configured per-node.
+
+Therefore, this old concern is now resolved.
 
 <!--
 What other approaches did you consider, and why did you rule them out? These do