author     Wei Huang <wei.huang1@ibm.com>  2018-07-18 11:31:31 -0700
committer  Wei Huang <wei.huang1@ibm.com>  2018-07-18 11:31:31 -0700
commit     2defcec2749cd40b7337cf574c309f4cdf78ede6 (patch)
tree       6d06a08eacc35af69e35676350bb0f8f0e3cef37
parent     8e5f6e8594e856bdc047cf5e96c0c028cdca6e94 (diff)
fix a mis-placed section in schedule-DS-pod-by-scheduler.md
-rw-r--r--  contributors/design-proposals/scheduling/schedule-DS-pod-by-scheduler.md  21
1 file changed, 10 insertions, 11 deletions
diff --git a/contributors/design-proposals/scheduling/schedule-DS-pod-by-scheduler.md b/contributors/design-proposals/scheduling/schedule-DS-pod-by-scheduler.md
index c0c7dffa..c7038eac 100644
--- a/contributors/design-proposals/scheduling/schedule-DS-pod-by-scheduler.md
+++ b/contributors/design-proposals/scheduling/schedule-DS-pod-by-scheduler.md
@@ -45,22 +45,21 @@ This option is to leverage NodeAffinity feature to avoid introducing scheduler’
1. The DS controller filters nodes by nodeSelector, but does NOT check against the scheduler’s predicates (e.g. PodFitHostResources)
2. For each node, the DS controller creates a Pod with the following NodeAffinity
+   ```yaml
+   nodeAffinity:
+     requiredDuringSchedulingIgnoredDuringExecution:
+       nodeSelectorTerms:
+       - matchExpressions:
+         - key: kubernetes.io/hostname
+           operator: In
+           values:
+           - dest_hostname
+   ```
3. When syncing Pods, the DS controller maps nodes to Pods by this NodeAffinity to check whether a Pod has been started for each node
4. In the scheduler, DaemonSet Pods will stay pending if scheduling predicates fail. To avoid this, an appropriate priority must
   be set on all critical DaemonSet Pods; the scheduler will then preempt other Pods to ensure critical Pods are scheduled even when
   the cluster is under resource pressure (see the sketch after the diff).
-```yaml
-nodeAffinity:
- requiredDuringSchedulingIgnoredDuringExecution:
- - nodeSelectorTerms:
- matchExpressions:
- - key: kubernetes.io/hostname
- operator: in
- values:
- - dest_hostname
-```
-
## Reference
* [DaemonsetController can't feel it when node has more resources, e.g. other Pod exits](https://github.com/kubernetes/kubernetes/issues/46935)
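
For illustration, here is a minimal sketch of the kind of Pod the DS controller would create under this proposal, combining the per-node NodeAffinity from step 2 with the priority from step 4. The names (`example-ds-`, `node-1`) and the image are placeholders, and `system-node-critical` is assumed here as a suitable built-in PriorityClass for critical DaemonSet Pods.

```yaml
apiVersion: v1
kind: Pod
metadata:
  # Placeholder; the DS controller would derive the name from the DaemonSet.
  generateName: example-ds-
spec:
  # Step 4: give critical DaemonSet Pods a high priority so the scheduler can
  # preempt other Pods when the cluster is under resource pressure.
  priorityClassName: system-node-critical
  containers:
  - name: example
    image: registry.example.com/example:latest  # placeholder image
  # Step 2: pin the Pod to its destination node via required NodeAffinity,
  # instead of the DS controller setting spec.nodeName directly.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - node-1  # placeholder for the destination node's hostname
```

Because the Pod is pinned with required node affinity rather than a pre-set `spec.nodeName`, it still passes through the scheduler, so scheduling predicates and preemption apply to it like any other Pod.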