| | | |
|---|---|---|
| author | Lee Verberne <verb@google.com> | 2017-10-25 17:13:22 +0200 |
| committer | Lee Verberne <verb@google.com> | 2018-03-22 10:24:18 +0100 |
| commit | caa072ead7529a74b55d4bd53209e6f34717fa78 | |
| tree | afdcdaffc5acdefa2da6b09d96a3271df63a7fc9 | |
| parent | e926accd78d2822b49589e511cd31a52b57c30b8 | |
Update API of Troubleshooting Ephemeral Containers
- Report both v1.Container & v1.ContainerStatus in PodStatus
- Persist v1.Container as a container runtime label
- Start ephemeral containers from the kubelet pod worker
| mode | path | lines |
|---|---|---|
| -rw-r--r-- | contributors/design-proposals/node/troubleshoot-running-pods.md | 441 |

1 file changed, 229 insertions, 212 deletions
diff --git a/contributors/design-proposals/node/troubleshoot-running-pods.md b/contributors/design-proposals/node/troubleshoot-running-pods.md
index 72c1cb77..2707a2b4 100644
--- a/contributors/design-proposals/node/troubleshoot-running-pods.md
+++ b/contributors/design-proposals/node/troubleshoot-running-pods.md

@@ -50,20 +50,21 @@ A solution to troubleshoot arbitrary container images MUST:
* require no administrative access to the node
* have an excellent user experience (i.e. should be a feature of the platform
  rather than config-time trickery)
-* have no *inherent* side effects to the running container image
+* have no _inherent_ side effects to the running container image
+* v1.Container must be available for inspection by admission controllers

## Feature Summary

Any new debugging functionality will require training users. We can ease the
transition by building on an existing usage pattern. We will create a new
command, `kubectl debug`, which parallels an existing command, `kubectl exec`.
-Whereas `kubectl exec` runs a *process* in a *container*, `kubectl debug` will
-be similar but run a *container* in a *pod*.
+Whereas `kubectl exec` runs a _process_ in a _container_, `kubectl debug` will
+be similar but run a _container_ in a _pod_.

-A container created by `kubectl debug` is a *Debug Container*. Just like a
+A container created by `kubectl debug` is a _Debug Container_. Just like a
process run by `kubectl exec`, a Debug Container is not part of the pod spec
and has no resource stored in the API. Unlike `kubectl exec`, a Debug Container
-*does* have status that is reported in `v1.PodStatus` and displayed by `kubectl
+_does_ have status that is reported in `v1.PodStatus` and displayed by `kubectl
describe pod`.

For example, the following command would attach to a newly created container in

@@ -82,22 +83,16 @@ kubectl debug target-pod

This creates an interactive shell in a pod which can examine and signal other
processes in the pod. It has access to the same network and IPC as processes in
-the pod. It can access the filesystem of other processes by `/proc/$PID/root`.
-As is already the case with regular containers, Debug Containers can enter
-arbitrary namespaces of another container via `nsenter` when run with
-`CAP_SYS_ADMIN`.
+the pod. When [process namespace sharing](https://features.k8s.io/495) is
+enabled, it can access the filesystem of other processes by `/proc/$PID/root`.
+Debug Containers can enter arbitrary namespaces of another visible container via
+`nsenter` when run with `CAP_SYS_ADMIN`.

-*Please see the User Stories section for additional examples and Alternatives
-Considered for the considerable list of other solutions we considered.*
+_Please see the User Stories section for additional examples and Alternatives
+Considered for the considerable list of other solutions we considered._

## Implementation Details

-The implementation of `kubectl debug` closely mirrors the implementation of
-`kubectl exec`, with most of the complexity implemented in the `kubelet`. How
-functionality like this best fits into Kubernetes API has been contentious. In
-order to make progress, we will start with the smallest possible API change,
-extending `/exec` to support Debug Containers, and iterate.
-
From the perspective of the user, there's a new command, `kubectl debug`, that
creates a Debug Container and attaches to its console.
We believe a new command will be less confusing for users than overloading
`kubectl exec` with a new

@@ -106,13 +101,88 @@ subsequently be used to reattach and is reported by `kubectl describe`.

### Kubernetes API Changes

-#### Chosen Solution: "exec++"
+There has been much discussion about how this fits best into the Kubernetes API.
+The consensus is for an imperative "debug this pod" action that's implemented
+mostly in the kubelet. In order to avoid new dependencies in the kubelet, this
+will be implemented in the Core API. Three possible implementations follow, and
+additional implementations that were evaluated and dismissed are at the end of
+this document.
+
+All of the proposed solutions implement the user-level concept of a _Debug
+Container_ using the API-level concept of an _Ephemeral Container_. The API
+doesn't prescribe how an Ephemeral Container is used. It could conceivably see
+use other than Debug Containers, but we don't currently have other use cases.
+
+#### Chosen Solution: POST to an Existing Pod
+
+We're modifying an existing pod, so this fits as a subresource of the target
+pod. We will create a new top-level object that contains a `v1.Container` and
+`POST` to the subresource to create the Debug Container. Since this is a `POST`,
+we cannot upgrade the connection to streaming if we want to continue supporting
+web socket clients.
+
+Rather than using an existing subresource like `/exec` and conditionally stream
+based on `PodExecOptions`, we will create a new subresource with consistent
+streaming behavior. A new subresource has the added benefit of being able to
+hide the interface entirely behind a feature flag by conditionally
+registering the new subresource.
+
+A `v1.Container` by itself lacks type and object metadata, so we will create a
+new type:
+
+```
+// EphemeralContainer describes a container to attach to a running pod for troubleshooting.
+type EphemeralContainer struct {
+    metav1.TypeMeta `json:",inline"`
+
+    // Spec describes the Ephemeral Container to be created.
+    Spec *Container `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
+
+    // Most recently observed status of the container.
+    // This data may not be up to date.
+    // Populated by the system.
+    // Read-only.
+    // +optional
+    Status *ContainerStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
+
+    // If set, the name of the container from PodSpec that this ephemeral container targets.
+    // If not set then the ephemeral container is run in whatever namespaces are shared
+    // for the pod.
+    TargetContainerName string `json:"targetContainerName,omitempty" protobuf:"bytes,4,opt,name=targetContainerName"`
+}
+```
-We will extend `v1.Pod`'s `/exec` subresource to support "executing" container
-images. The current `/exec` endpoint must implement `GET` to support streaming
-for all clients. We don't want to encode a (potentially large) `v1.Container` as
-an HTTP parameter, so we must extend `v1.PodExecOptions` with the specific
-fields required for creating a Debug Container:
+**Note that Ephemeral Containers are not regular containers and should not be
+used to build services.** They lack guarantees for resources or execution, and
+many of the fields of `v1.Container` will not be allowed for Debug Containers. A
+request will be rejected if any field is set other than the following
+whitelisted fields: `Name`, `Image`, `Command`, `Args`, `WorkingDir`, `Env`,
+`EnvFrom`, `ImagePullPolicy`, `SecurityContext`. `TTY` and `Stdin` are always
+enabled for Debug Containers and will be ignored.
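For illustration, a minimal `EphemeralContainer` body that stays within this whitelist could look like the sketch below. The type here is a trimmed-down copy of the one proposed above (status omitted), and the `debug-shell`, `busybox`, and `app` names are placeholders, not part of the proposal:

```go
package main

import (
	"encoding/json"
	"fmt"

	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EphemeralContainer mirrors the request half of the type proposed above; it is
// not part of any released Kubernetes client library.
type EphemeralContainer struct {
	metav1.TypeMeta     `json:",inline"`
	Spec                *v1.Container `json:"spec,omitempty"`
	TargetContainerName string        `json:"targetContainerName,omitempty"`
}

func main() {
	// Only whitelisted v1.Container fields are set; any other field would be
	// rejected by validation under this proposal.
	ec := EphemeralContainer{
		TypeMeta: metav1.TypeMeta{Kind: "EphemeralContainer", APIVersion: "v1"},
		Spec: &v1.Container{
			Name:            "debug-shell", // must not collide with an existing container name
			Image:           "busybox",     // subject to the same image policy as regular containers
			Command:         []string{"sh"},
			ImagePullPolicy: v1.PullIfNotPresent,
		},
		TargetContainerName: "app", // optional: target this container's namespaces
	}

	body, err := json.MarshalIndent(ec, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```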
+
+The new `/ephemeralcontainers` subresource allows the following:
+
+1. A `POST` of an `EphemeralContainer` to
+   `/api/v1/namespaces/$NS/pods/$POD_NAME/ephemeralcontainers` to create an
+   Ephemeral Container running in pod `$POD_NAME`.
+1. Stopping an Ephemeral Container **could be supported in the
+   future** by a `DELETE` of
+   `/api/v1/namespaces/$NS/pods/$POD_NAME/ephemeralcontainers/$NAME`.
+
+Once created, it is the responsibility of the client to watch for the
+`EphemeralContainer` to appear in the `PodStatus` and then attach to the console
+of a debug container using the existing attach endpoint,
+`/api/v1/namespaces/$NS/pods/$POD_NAME/attach`. Note that any output of the new
+container between its creation and subsequent attach will not be replayed and
+can only be viewed using `kubectl log`.
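To illustrate the client's side of this contract, a rough client-go sketch might look like the following. The `ephemeralcontainers` subresource is the one proposed above and does not exist in released clusters, and the watch-then-attach step is omitted:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// createEphemeralContainer POSTs a JSON-encoded EphemeralContainer (see the
// previous sketch) to the proposed subresource of a running pod. The caller is
// then responsible for watching PodStatus and attaching via the existing
// /attach subresource.
func createEphemeralContainer(cs *kubernetes.Clientset, namespace, pod string, body []byte) error {
	return cs.CoreV1().RESTClient().
		Post().
		Namespace(namespace).
		Resource("pods").
		Name(pod).
		SubResource("ephemeralcontainers"). // proposed subresource; not in released APIs
		Body(body).
		Do().
		Error()
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// body would be the JSON produced by the previous sketch.
	body := []byte(`{"kind":"EphemeralContainer","apiVersion":"v1","spec":{"name":"debug-shell","image":"busybox"}}`)
	if err := createEphemeralContainer(cs, "default", "target-pod", body); err != nil {
		fmt.Println("create ephemeral container:", err)
	}
}
```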
+
+#### Alternative 1: "exec++"
+
+A simpler change is to extend `v1.Pod`'s `/exec` subresource to support
+"executing" container images. The current `/exec` endpoint must implement `GET`
+to support streaming for all clients. We don't want to encode a (potentially
+large) `v1.Container` into a query string, so we must extend `v1.PodExecOptions`
+with the specific fields required for creating a Debug Container:

```
// PodExecOptions is the query options to a Pod's remote exec call

@@ -130,53 +200,16 @@ type PodExecOptions struct {
}
```

-After creating the Debug Container, the kubelet will upgrade the connection to
-streaming and perform an attach to the container's console. If disconnected, the
-Debug Container can be reattached using the pod's `/attach` endpoint with
-`EphemeralContainerName`.
+After creating the Ephemeral Container, the kubelet would upgrade the connection
+to streaming and perform an attach to the container's console. If disconnected,
+the Ephemeral Container could be reattached using the pod's `/attach` endpoint
+with `EphemeralContainerName`.

-Debug Containers cannot be removed via the API and instead the process must
-terminate. While not ideal, this parallels existing behavior of `kubectl exec`.
-To kill a Debug Container one would `attach` and exit the process interactively
-or create a new Debug Container to send a signal with `kill(1)` to the original
-process.
-
-#### Alternative 1: Debug Subresource
-
-Rather than extending an existing subresource, we could create a new,
-non-streaming `debug` subresource. We would create a new API Object:
-
-```
-// DebugContainer describes a container to attach to a running pod for troubleshooting.
-type DebugContainer struct {
-    metav1.TypeMeta
-    metav1.ObjectMeta
-
-    // Name is the name of the Debug Container. Its presence will cause
-    // exec to create a Debug Container rather than performing a runtime exec.
-    Name string `json:"name,omitempty" ...`
-
-    // Image is an optional container image name that will be used to for the Debug
-    // Container in the specified Pod with Command as ENTRYPOINT. If omitted a
-    // default image will be used.
-    Image string `json:"image,omitempty" ...`
-}
-```
-
-The pod would gain a new `/debug` subresource that allows the following:
-
-1. A `POST` of a `PodDebugContainer` to
-   `/api/v1/namespaces/$NS/pods/$POD_NAME/debug/$NAME` to create Debug
-   Container named `$NAME` running in pod `$POD_NAME`.
-1. A `DELETE` of `/api/v1/namespaces/$NS/pods/$POD_NAME/debug/$NAME` will stop
-   the Debug Container `$NAME` in pod `$POD_NAME`.
-
-Once created, a client would attach to the console of a debug container using
-the existing attach endpoint, `/api/v1/namespaces/$NS/pods/$POD_NAME/attach`.
-
-However, this pattern does not resemble any other current usage of the API, so
-we prefer to start with exec++ and reevaluate if we discover a compelling
-reason.
+Ephemeral Containers could not be removed via the API and instead the process
+must terminate. While not ideal, this parallels existing behavior of `kubectl
+exec`. To kill an Ephemeral Container one would `attach` and exit the process
+interactively or create a new Ephemeral Container to send a signal with
+`kill(1)` to the original process.

#### Alternative 2: Declarative Configuration

@@ -192,29 +225,11 @@ type EphemeralContainer struct {
    metav1.TypeMeta
    metav1.ObjectMeta

-    Spec EphemeralContainerSpec
+    Spec v1.Container
    Status v1.ContainerStatus
}
```

-`EphemeralContainerSpec` is similar to `v1.Container`, but contains only fields
-relevant to Debug Containers:
-
-```
-type EphemeralContainerSpec struct {
-    // Target is the pod in which to run the EphemeralContainer
-    // Required.
-    Target v1.ObjectReference
-
-    Name string
-    Image String
-    Command []string
-    Args []string
-    ImagePullPolicy PullPolicy
-    SecurityContext *SecurityContext
-}
-```
-
A new controller in the kubelet would watch for EphemeralContainers and
create/delete debug containers. `EphemeralContainer.Status` would be updated by
the kubelet at the same time it updates `ContainerStatus` for regular and init

@@ -222,66 +237,104 @@ containers. Clients would create a new `EphemeralContainer` object, wait for
it to be started and then attach using the pod's attach subresource and the name
of the `EphemeralContainer`.

-Debugging is inherently imperative, however, rather than a state for Kubernetes
-to enforce. Once a Debug Container is started it should not be automatically
-restarted, for example. This solution imposes additionally complexity and
-dependencies on the kubelet, but it's not yet clear if the complexity is
-justified.
+Debugging is inherently imperative, however, and not a desired state to
+describe. Once a Debug Container is started it should not be automatically
+restarted, for example. A declarative API adds new states for the kubelet to
+enforce, and SIG Node strongly prefers to minimize kubelet complexity.

-### Debug Container Status
+### Ephemeral Container Status

-The status of a Debug Container is reported in a new field in `v1.PodStatus`:
+`EphemeralContainer` is included in a new field in `PodStatus`:

```
type PodStatus struct {
    ...
-    EphemeralContainerStatuses []v1.ContainerStatus
+    // List of user-initiated ephemeral containers that have been run in this pod.
+    // +optional
+    EphemeralContainers []EphemeralContainer `json:"commands,omitempty" protobuf:"bytes,11,rep,name=ephemeralContainers"`
+
}
```

-This status is only populated for Debug Containers, but there's interest in
-tracking status for traditional exec in a similar manner.
-
-Note that `Command` and `Args` would have to be tracked in the status object
-because there is no spec for Debug Containers or exec. These must either be made
-available by the runtime or tracked by the kubelet. For Debug Containers this
-could be stored as runtime labels, but the kubelet currently has no method of
-storing state across restarts for exec. Solving this problem for exec is out of
-scope for Debug Containers, but we will look for a solution as we implement this
-feature.
-
-`EphemeralContainerStatuses` is populated by the kubelet in the same way as
-regular and init container statuses. This is sent to the API server and
-displayed by `kubectl describe pod`.
+The kubelet should be able to construct a complete `PodStatus` with no prior
+state using information stored in the container runtime.
+`EphemeralContainer.Status` introduces no new data, but the kubelet must also
+now populate `EphemeralContainer.Spec` &
+`EphemeralContainer.TargetContainerName`.
+
+The kubelet already persists container metadata as CRI
+[labels](https://github.com/kubernetes/kubernetes/blob/v1.10.0-alpha.0/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto#L606)
+and
+[annotations](https://github.com/kubernetes/kubernetes/blob/v1.10.0-alpha.0/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto#L613).
+The entire v1.Container used in the request will be serialized and stored as a
+runtime annotation. The value of `TargetContainerName` will be stored as a
+runtime label. Persisting this data in the runtime means it will survive kubelet
+restarts.
+
+At least for the Docker runtime, this is [an intended use of docker
+labels](https://docs.docker.com/engine/userguide/labels-custom-metadata/#value-guidelines).
+Docker does not document the maximum length of labels in its API. Empirically,
+it supports up to the 64K constraint of the docker client's `bufio.Scanner`
+size. Because the container spec may be examined in security sensitive contexts
+like admission control, we will conservatively limit the size of the spec to 32K
+and add a 32K minimum label length test to runtime qualification.
+
+`EphemeralContainer` is populated by the kubelet in the same way as regular
+container statuses. This is sent to the API server and displayed by `kubectl
+describe pod`.
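A sketch of the serialization and size check described above, assuming a hypothetical annotation key (a real key would follow the kubelet's existing `io.kubernetes.container.*` naming):

```go
package kuberuntime

import (
	"encoding/json"
	"fmt"

	"k8s.io/api/core/v1"
)

// containerSpecAnnotation is an assumed annotation key for the serialized
// ephemeral container spec; the actual key is not fixed by this proposal.
const containerSpecAnnotation = "io.kubernetes.container.spec"

// maxSpecSize is the conservative 32K limit discussed above, well under the
// 64K that Docker labels empirically support.
const maxSpecSize = 32 * 1024

// ephemeralContainerAnnotations serializes the requested container so the
// kubelet can rebuild EphemeralContainer.Spec from the runtime after a restart.
func ephemeralContainerAnnotations(spec *v1.Container) (map[string]string, error) {
	raw, err := json.Marshal(spec)
	if err != nil {
		return nil, err
	}
	if len(raw) > maxSpecSize {
		return nil, fmt.Errorf("serialized container spec is %d bytes, over the %d byte limit", len(raw), maxSpecSize)
	}
	return map[string]string{containerSpecAnnotation: string(raw)}, nil
}
```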
### Creating Debug Containers

-1. `kubectl` invokes the exec API as described in the preceding section.
-1. The API server checks for name collisions with existing containers, performs
-   admission control and proxies the connection to the kubelet's
-   `/exec/$NS/$POD_NAME/$CONTAINER_NAME` endpoint.
-1. The kubelet instructs the Runtime Manager to create a Debug Container.
-1. The runtime manager uses the existing `startContainer()` method to create a
-   container in an existing pod. `startContainer()` has one modification for
-   Debug Containers: it creates a new runtime label (e.g. a docker label) that
-   identifies this container as a Debug Container.
-1. After creating the container, the kubelet schedules an asynchronous update
-   of `PodStatus`. The update publishes the debug container status to the API
-   server at which point the Debug Container becomes visible via `kubectl
-   describe pod`.
-1. The kubelet will upgrade the connection to streaming and attach to the
-   container's console.
-
-Rather than performing the implicit attach the kubelet could return success to
-the client and require the client to perform an explicit attach, but the
-implicit attach maintains consistent semantics across `/exec` rather than
-varying behavior based on parameters.
+1. `kubectl` invokes the new API as described in the preceding section.
+1. The API server checks for name collisions with existing running containers
+   (in both `PodSpec` and `PodStatus.EphemeralContainers`), performs
+   validation, admission control and proxies the connection to the kubelet's
+   `/ephemeralContainers/$NS/$POD_NAME` endpoint.
+   1. Since a name collision could happen in the interval between container
+      creation and PodStatus being published to the API server, the kubelet
+      will perform an additional check for name collision.
+   1. It is permissible to replace an exited container with one of the same
+      name.
+1. The kubelet request handler opens an error channel and signals the pod's
+   sync worker with `UpdatePodOptions` that include the `EphemeralContainer` in
+   a new field and a callback in the existing
+   `UpdatePodOptions.OnCompleteFunc` (see the sketch after this list).
+   1. The pod sync worker runs the existing `syncPod()` with a new
+      `SyncPodType` of `SyncPodDebug`.
+   1. The request handler blocks (with timeout) on receiving an error from
+      `syncPod()` via the callback. During this time, `syncPod()` starts the
+      ephemeral container, including fetching an image if necessary, and
+      publishes a new `PodStatus`.
+   1. Timeout is configured by the cluster administrator and defaults to 2
+      minutes.
+1. `syncPod()` again checks for container name collision and starts an
+   ephemeral container via the new `kuberuntime.StartEphemeralContainer()`.
+   1. The `StartEphemeralContainer()` call uses the existing `startContainer()`
+      method, which gains support for targeting the namespaces of a container
+      by name.
+   1. `syncPod()` runs only from a dedicated pod worker, resolving any races
+      for container creation.
+   1. After initial creation, future invocations of `syncPod()` will publish
+      its ContainerStatus but otherwise ignore the Ephemeral Container. It
+      will exist for the life of the pod sandbox or until it exits and is
+      garbage collected. In no event will it be restarted.
+1. `syncPod()` then finishes a regular sync, publishing an updated PodStatus
+   (which includes the new `EphemeralContainer`) by its normal, existing means.
+   The pod worker sends its exit status to the request worker.
+1. The request worker receives a (hopefully `nil`) `error` and returns it to
+   the client.
+   1. `OnCompleteFunc` is not guaranteed to be called, so if the request
+      worker times out it will check the pod's `PodStatus` to see if the Debug
+      Container was started prior to returning an error. If this happens the
+      `EphemeralContainer.Spec` must be compared to verify it was the same one
+      as requested.
+1. The client performs an attach to the debug container's console.
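The request-handler half of this flow could be sketched as follows. Everything here is illustrative: the `example*` types stand in for the kubelet's real pod-worker plumbing, and only the error-channel and timeout pattern described in the steps above is meant to carry over:

```go
package kubelet

import (
	"errors"
	"time"
)

// Placeholder types standing in for the kubelet's real pod-worker plumbing.
type exampleUpdate struct {
	OnCompleteFunc func(error)
}

type examplePodWorkers interface {
	Update(podUID string, update exampleUpdate)
}

type exampleKubelet struct {
	podWorkers                examplePodWorkers
	ephemeralContainerTimeout time.Duration // defaults to 2 minutes per the steps above
}

// handleEphemeralContainer signals the pod's sync worker and then blocks, with
// a timeout, on the error delivered through OnCompleteFunc.
func (kl *exampleKubelet) handleEphemeralContainer(podUID string, update exampleUpdate) error {
	errCh := make(chan error, 1)
	update.OnCompleteFunc = func(err error) { errCh <- err }

	kl.podWorkers.Update(podUID, update) // runs syncPod() with the new SyncPodDebug type

	select {
	case err := <-errCh:
		return err // nil means syncPod() started the ephemeral container
	case <-time.After(kl.ephemeralContainerTimeout):
		// OnCompleteFunc is not guaranteed to run; the real handler would fall
		// back to checking PodStatus before returning an error.
		return errors.New("timed out waiting for ephemeral container to start")
	}
}
```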
The apiserver detects container name collisions with both containers in the pod
spec and other running Debug Containers by checking
-`EphemeralContainerStatuses`. In a race to create two Debug Containers with the
-same name, the API server will pass both requests and the kubelet must return an
-error to all but one request.
+`PodStatus.EphemeralContainers`. In a race to create two Debug Containers with
+the same name, the API server will pass both requests and the kubelet will
+reject one in the synchronized pod worker.

There are no limits on the number of Debug Containers that can be created in a
pod, but exceeding a pod's resource allocation may cause the pod to be evicted.

@@ -336,13 +389,13 @@ It's worth noting some things that do not change:

Debug Containers have no additional privileges above what is available to any
`v1.Container`. It's the equivalent of configuring an shell container in a pod
-spec but created on demand.
+spec except that it is created on demand.

-Admission plugins that guard `/exec` must be updated for the new parameters. In
+Admission plugins must be updated to guard `/ephemeralcontainers`. In
particular, they should enforce the same container image policy on the `Image`
parameter as is enforced for regular containers. During the alpha phase we will
additionally support a container image whitelist as a kubelet flag to allow
-cluster administrators to easily constraint debug container images.
+cluster administrators to easily constrain debug container images.
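The alpha-phase whitelist check on the kubelet side could be as simple as the sketch below; the flag name, exact-match semantics, and empty-means-unrestricted behavior are assumptions for illustration rather than part of the proposal:

```go
package kubelet

import "fmt"

// debugImageWhitelist would be populated from a kubelet flag; the flag name
// (e.g. --debug-container-image-whitelist) is an assumed example.
var debugImageWhitelist = map[string]bool{
	"busybox": true,
}

// validateDebugImage rejects ephemeral container images that are not in the
// whitelist; an empty whitelist means no restriction.
func validateDebugImage(image string) error {
	if len(debugImageWhitelist) == 0 || debugImageWhitelist[image] {
		return nil
	}
	return fmt.Errorf("image %q is not in the debug container whitelist", image)
}
```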
### Additional Consideration

@@ -356,15 +409,11 @@ cluster administrators to easily constraint debug container images.
and exists because Kubernetes has no mechanism to attach a container prior to
starting it. This larger issue will not be addressed by Debug Containers, but
Debug Containers would benefit from future improvements or work arounds.
-1. We do not want to describe Debug Containers using `v1.Container`. This is to
-   reinforce that Debug Containers are not general purpose containers by
-   limiting their configurability. Debug Containers should not be used to build
-   services.
-1. Debug Containers are of limited usefulness without a shared PID namespace.
-   If a pod is configured with isolated PID namespaces, the Debug Container
+1. Debug Containers should not be used to build services, which we've attempted
+   to reflect in the API.
+1. If a pod is configured with isolated PID namespaces, the Debug Container
   will join the PID namespace of the target container. Debug Containers will
-   not be available with runtimes that do not implement PID namespace sharing
-   in some form.
+   not be available with runtimes that do not implement PID namespace sharing.

## Implementation Plan

@@ -372,22 +421,32 @@ cluster administrators to easily constraint debug container images.
#### Goals and Non-Goals for Alpha Release

-We're targeting an alpha release in Kubernetes 1.9 that includes the following
+We're targeting an alpha release in Kubernetes 1.10 that includes the following
basic functionality:

* Support in the kubelet for creating debug containers in a running pod
-* A `kubectl debug` command to initiate a debug container
+* A `kubectl alpha debug` command to initiate a debug container
* `kubectl describe pod` will list status of debug containers running in a pod

Functionality will be hidden behind an alpha feature flag and disabled by
-default. The following are explicitly out of scope for the 1.9 alpha release:
+default. The following are explicitly out of scope for the alpha release, but
+must be resolved prior to beta release:

-* Exited Debug Containers will be garbage collected as regular containers and
-  may disappear from the list of Debug Container Statuses.
-* Security Context for the Debug Container is not configurable. It will always
-  be run with `CAP_SYS_PTRACE` and `CAP_SYS_ADMIN`.
-* Image pull policy for the Debug Container is not configurable. It will
-  always be run with `PullAlways`.
+* There's no guarantee that exited Debug Containers won't be garbage collected
+  as regular containers, so they may disappear from the list of
+  `EphemeralContainers`.
+* We could improve reliability of `UpdatePodOptions.OnCompleteFunc` by
+  prioritizing based on `SyncPodType`.
+
+#### Kubernetes API Changes
+
+The following changes must be implemented in the API:
+
+1. `v1.EphemeralContainer` will be added and `v1.PodStatus` will be extended as
+   described above.
+1. The new subresource will be added to the pods API, validation added and
+   proxied to the kubelet.
+1. The API server must check for Ephemeral Containers when validating `attach`.

#### kubelet Implementation

@@ -396,71 +455,30 @@ Performing this operation with a legacy (non-CRI) runtime will result in a not
implemented error. Implementation in the kubelet will be split into the
following steps:

-##### Step 1: Container Type
-
-The first step is to add a feature gate to ensure all changes are off by
-default. This will be added in the `pkg/features` `DefaultFeatureGate`.
-
-The runtime manager stores metadata about containers in the runtime via labels
-(e.g. docker labels). These labels are used to populate the fields of
-`kubecontainer.ContainerStatus`. Since the runtime manager needs to handle Debug
-Containers differently in a few situations, we must add a new piece of metadata
-to distinguish Debug Containers from regular containers.
-
-`startContainer()` will be updated to write a new label
-`io.kubernetes.container.type` to the runtime. Existing containers will be
-started with a type of `REGULAR` or `INIT`. When added in a subsequent step,
-Debug Containers will start with the type `EPHEMERAL`.
-
-##### Step 2: Creation and Handling of Debug Containers
-
-This step adds methods for creating debug containers, but doesn't yet modify the
-kubelet API. Since the runtime manager discards runtime (e.g. docker) labels
-after populating `kubecontainer.ContainerStatus`, the label value will be stored
-in a the new field `ContainerStatus.Type` so it can be used by `SyncPod()`.
-
-The kubelet gains a `RunDebugContainer()` method which accepts a `v1.Container`
-and passes it on to the Runtime Manager's `RunDebugContainer()` if implemented.
-Currently only the Generic Runtime Manager (i.e. the CRI) implements the
-`DebugContainerRunner` interface.
-
-The Generic Runtime Manager's `RunDebugContainer()` calls `startContainer()` to
-create the Debug Container. Additionally, `SyncPod()` is modified to skip Debug
-Containers unless the sandbox is restarted.
-
-##### Step 3: kubelet API changes
-
-The kubelet exposes the new functionality in its existing `/exec/` endpoint.
-`ServeExec()` constructs a `v1.Container` based on `PodExecOptions`, calls
-`RunDebugContainer()`, and performs the attach.
-
-##### Step 4: Reporting EphemeralContainerStatus
-
-The last major change to the kubelet is to populate
-v1.`PodStatus.EphemeralContainerStatuses` based on the
-`kubecontainer.ContainerStatus` for the Debug Container.
-
-#### Kubernetes API Changes
-
-There are two changes to be made to the Kubernetes, which will be made
-independently:
-
-1. `v1.PodExecOptions` must be extended with new fields.
-1. `v1.PodStatus` gains a new field to hold Debug Container statuses.
-
-In all cases, new fields will be prepended with `Alpha` for the duration of this
-feature's alpha status.
+1. New container metadata `ContainerType`, `ContainerSpec` &
+   `TargetContainerName` is stored using CRI labels and annotations.
+   `kubecontainer.ContainerStatus` will be extended with a `ContainerType`
+   field (possible values: `REGULAR`, `INIT` & `EPHEMERAL`) so a container can
+   be identified as a debug container (sketched below).
+1. `kuberuntimemanager` gains a new `StartEphemeralContainer()` which calls the
+   existing `startContainer()`.
+1. The kubelet gains a `RunDebugContainer()` method which accepts a
+   `v1.EphemeralContainer` and triggers a pod sync to create the debug
+   container. The existing `generateAPIPodStatus()` will be updated to also
+   populate `EphemeralContainers`.
+1. The kubelet API gains the new `/ephemeralContainers/` endpoint to create the
+   Debug Container.
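For the first item in this list, the new metadata could be expressed along these lines; the type label key comes from the earlier revision of this proposal, while the target label key is an assumed name:

```go
package kuberuntime

// ContainerType distinguishes ephemeral (debug) containers from the pod's
// declared containers when the kubelet rebuilds state from the runtime.
type ContainerType string

const (
	ContainerTypeRegular   ContainerType = "REGULAR"
	ContainerTypeInit      ContainerType = "INIT"
	ContainerTypeEphemeral ContainerType = "EPHEMERAL"
)

// CRI label keys for the new metadata. The type key matches the label named in
// the earlier draft of this proposal; the target key is illustrative only.
const (
	containerTypeLabel       = "io.kubernetes.container.type"
	targetContainerNameLabel = "io.kubernetes.container.targetContainerName"
)
```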
#### kubectl changes

In anticipation of this change, [#46151](https://pr.k8s.io/46151) added a
`kubectl alpha` command to contain alpha features. We will add `kubectl alpha
debug` to invoke Debug Containers. `kubectl` does not use feature gates, so
-`kubectl alpha debug` will be visible by default in `kubectl` 1.9 and return an
+`kubectl alpha debug` will be visible by default in `kubectl` 1.10 and return an
error when used on a cluster with the feature disabled.

-`kubectl describe pod` will report the contents of `EphemeralContainerStatuses`
-when not empty as it means the feature is enabled. The field will be hidden when
+`kubectl describe pod` will report the contents of `EphemeralContainers` when
+not empty as it means the feature is enabled. The field will be hidden when
empty.

## Appendices

@@ -729,8 +747,7 @@ coupling it with container images.
* [Pod Troubleshooting Tracking Issue](https://issues.k8s.io/27140)
* [CRI Tracking Issue](https://issues.k8s.io/28789)
* [CRI: expose optional runtime features](https://issues.k8s.io/32803)
-* [Resource QoS in
-  Kubernetes](resource-qos.md)
+* [Resource QoS in Kubernetes](resource-qos.md)
* Related Features
  * [#1615](https://issues.k8s.io/1615) - Shared PID Namespace across containers in a pod
