| -rw-r--r-- | keps/sig-network/0031-20181017-kube-proxy-services-optional.md | 7 |
1 file changed, 5 insertions, 2 deletions
diff --git a/keps/sig-network/0031-20181017-kube-proxy-services-optional.md b/keps/sig-network/0031-20181017-kube-proxy-services-optional.md
index d45e6bac..ce297523 100644
--- a/keps/sig-network/0031-20181017-kube-proxy-services-optional.md
+++ b/keps/sig-network/0031-20181017-kube-proxy-services-optional.md
@@ -49,7 +49,6 @@ The motivation for the enhancement is to allow higher scalability in large clust
 
 ### Goals
 
 The goal is to reduce the load on:
-* The apiserver sending all services and endpoints to all kube-proxy pods
 * The kube-proxy having to deserialize and process all services and endpoints
 * The backend system (e.g. iptables) for whichever proxy mode kube-proxy is using
@@ -70,12 +69,14 @@ As a cluster operator, operating a cluster using a service mesh I want to be abl
 
 #### Overview
 
-It is important for overall scalability that kube-proxy does not receive data for Service/Endpoints objects that it is not going to affect. This can reduce load on the apiserver, networking, and kube-proxy itself by never receiving the updates in the first place.
+It is important for overall scalability that kube-proxy does not receive data for Service/Endpoints objects that it is not going to affect. This can reduce load on the kube-proxy and the network by never receiving the updates in the first place.
 
 The proposal is to make this feature available by annotating the Service object with this label: `kube-proxy.kubernetes.io/disabled=true`. The associated Endpoints object will automatically inherit that label from the Service object as well. When this label is set, kube-proxy will behave as if that service does not exist. None of the functionality that kube-proxy provides will be available for that service.
 
+kube-proxy will properly implement this label both at object creation and on dynamic addition/removal/update of this label, either providing functionality or not for the service based on the latest version of the object.
+
 It is expected that this feature will mainly be used on large clusters with lots (>1000) of services. Any use of this feature in a smaller cluster will have negligible impact. The envisioned cluster that will make use of this feature looks something like the following:
@@ -96,6 +97,8 @@ The new design will simply add a LabelSelector filter to the shared informer fac
 + }))
 ```
 
+This code will also handle the dynamic label update case. When the label selector is matched (service is enabled) an 'add' event will be generated by the informer. When the label selector is not matched (service is disabled) a 'delete' event will be generated by the informer.
+
 #### Testing
 
 The following cases should be tested. In each case, make sure that services are added/removed from iptables (or other) as expected:
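
For context on the informer change referenced above, the following is a minimal sketch, assuming client-go's `NewSharedInformerFactoryWithOptions` and `WithTweakListOptions` options, of how a filter on the proposed `kube-proxy.kubernetes.io/disabled=true` label could be wired into kube-proxy's shared informer factory, and of how the resulting add/delete events would be observed. The KEP's own snippet is truncated in this diff; the package and helper names (`proxyconfig`, `newFilteredInformerFactory`, `registerServiceHandlers`) and the handler stubs are illustrative, not part of the KEP.

```go
package proxyconfig // illustrative package name

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newFilteredInformerFactory returns a shared informer factory whose List/Watch
// requests exclude objects labeled kube-proxy.kubernetes.io/disabled=true, so
// kube-proxy never receives those Services/Endpoints in the first place.
// A "key!=value" selector also matches objects that lack the key entirely,
// so unlabeled services keep working as before.
func newFilteredInformerFactory(client kubernetes.Interface, resync time.Duration) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactoryWithOptions(client, resync,
		informers.WithTweakListOptions(func(options *metav1.ListOptions) {
			// Keep only objects that do NOT carry the proposed label.
			options.LabelSelector = "kube-proxy.kubernetes.io/disabled!=true"
		}))
}

// registerServiceHandlers illustrates how the dynamic label case surfaces to
// kube-proxy: when a Service stops matching the selector (label added) the
// informer delivers a delete event, and when it matches again (label removed)
// it delivers an add event.
func registerServiceHandlers(factory informers.SharedInformerFactory) {
	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* program rules for this service */ },
		UpdateFunc: func(oldObj, newObj interface{}) { /* refresh rules */ },
		DeleteFunc: func(obj interface{}) { /* remove rules as if the service were deleted */ },
	})
}
```

Because the filtering happens in the List/Watch options sent to the apiserver, a label transition looks to the informer's consumers like the object appearing or disappearing, which is why the dynamic enable/disable case described in the KEP needs no extra handling beyond the normal add/delete paths.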
