| field | value | date |
|---|---|---|
| author | Kubernetes Submit Queue `<k8s-merge-robot@users.noreply.github.com>` | 2017-09-11 10:26:45 -0700 |
| committer | GitHub `<noreply@github.com>` | 2017-09-11 10:26:45 -0700 |
| commit | 070f7f8b1e9b54acf1c97fd19b68c4ea31afb7c2 | |
| tree | d41a46420baf9bfe618f7e39bbc601de7e0ea80c | |
| parent | e6901b1adfd79227c9b04bd45b6f69d3c7794008 | |
| parent | 9e0b08529f602c06119fe191b83b477393febc7e | |
Merge pull request #1036 from kubernetes/fix-cpumanager-reservation-desc
Automatic merge from submit-queue
Fix note about reservations in cpu-manager.md.
cc @sjenning
/sig node
contributors/design-proposals/cpu-manager.md | 19 ++++---------------
1 file changed, 4 insertions(+), 15 deletions(-)
```diff
diff --git a/contributors/design-proposals/cpu-manager.md b/contributors/design-proposals/cpu-manager.md
index c36239b1..e102d2e2 100644
--- a/contributors/design-proposals/cpu-manager.md
+++ b/contributors/design-proposals/cpu-manager.md
@@ -152,20 +152,6 @@ reconcile frequency is set through a new Kubelet configuration value
 same duration as `--node-status-update-frequency` (which itself defaults to
 10 seconds at time of writing.)
 
-The number of CPUs that pods may run on can be implicitly controlled using the
-existing node-allocatable configuration settings. See the [node allocatable
-proposal document][node-allocatable] for details. The CPU manager will claim
-`ceiling(node.status.allocatable.cpu)` as the number of CPUs available to
-assign to pods, starting from the highest-numbered physical core and
-descending topologically. It is recommended to configure `kube-reserved`
-and `system-reserved` such that their sum is an integer when the CPU manager
-is enabled. This ensures that `node.status.allocatable.cpu` is also an
-integer.
-
-Operator documentation will be updated to explain how to configure the
-system to use the low-numbered physical cores for kube-reserved and
-system-reserved cgroups.
-
 Each policy is described below.
 
 #### Policy 1: "none" cpuset control [default]
@@ -191,7 +177,10 @@ becomes terminal.)
 The Kubelet requires the total CPU reservation from `--kube-reserved` and
 `--system-reserved` to be greater than zero when the static policy is
 enabled. This is because zero CPU reservation would allow the shared pool to
-become empty.
+become empty. The set of reserved CPUs is taken in order of ascending
+physical core ID. Operator documentation will be updated to explain how to
+configure the system to use the low-numbered physical cores for kube-reserved
+and system-reserved cgroups.
 
 Workloads that need to know their own CPU mask, e.g. for managing
 thread-level affinity, can read it from the virtual file `/proc/self/status`:
```
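The text added by this diff points workloads at `/proc/self/status` for their CPU mask. A minimal sketch of reading it, assuming only the standard Linux `Cpus_allowed_list` field in cpulist format (the `parse_cpus_allowed_list` helper and the sample data are illustrative, not part of the proposal):

```python
def parse_cpus_allowed_list(status_text: str) -> set:
    """Parse the Cpus_allowed_list field from /proc/<pid>/status content.

    The field uses the Linux cpulist format, e.g. "0-3,7" means
    CPUs 0, 1, 2, 3, and 7.
    """
    cpus = set()
    for line in status_text.splitlines():
        if line.startswith("Cpus_allowed_list:"):
            value = line.split(":", 1)[1].strip()
            for part in value.split(","):
                if "-" in part:
                    lo, hi = part.split("-")
                    cpus.update(range(int(lo), int(hi) + 1))
                else:
                    cpus.add(int(part))
    return cpus


# Sample fragment of a /proc/self/status file (hypothetical process).
sample = "Name:\tmyapp\nCpus_allowed:\t8f\nCpus_allowed_list:\t0-3,7\n"
print(sorted(parse_cpus_allowed_list(sample)))  # → [0, 1, 2, 3, 7]
```

On a real Linux system the input would come from `open("/proc/self/status").read()` instead of the sample string.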

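The requirement the diff describes, that the combined `--kube-reserved` and `--system-reserved` CPU reservation be nonzero under the static policy, could be satisfied with flags along these lines (an illustrative config fragment, not a recommendation; `--cpu-manager-policy` is the flag this proposal family introduces, and the 500m values are arbitrary):

```shell
# Reserve one full CPU in total so the shared pool can never become empty
# when the static CPU manager policy is enabled.
kubelet \
  --cpu-manager-policy=static \
  --kube-reserved=cpu=500m \
  --system-reserved=cpu=500m
```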