| field | value | detail |
|---|---|---|
| author | k8s-merge-robot <k8s.production.user@gmail.com> | 2016-05-21 19:37:15 -0700 |
| committer | k8s-merge-robot <k8s.production.user@gmail.com> | 2016-05-21 19:37:15 -0700 |
| commit | bbc5a56dc3692f58fcdd3c4a89380bfda72a9e15 (patch) | |
| tree | 482ab70352a4e4ed6bd63f9771d55cd6af077327 /node-performance-testing.md | |
| parent | a9712c656007b24d7aa504c947cda164fd56221b (diff) | |
| parent | c3d5cfb6c45213fd9645115f25322a26ecdcbc1e (diff) | |
Merge pull request #25531 from ingvagabund/introduce-memory-pressure-to-scheduler
Automatic merge from submit-queue
Introduce node memory pressure condition to scheduler
Following the work done by @derekwaynecarr in https://github.com/kubernetes/kubernetes/pull/21274, this introduces a memory pressure predicate for the scheduler.
Missing:
* write unit tests
* test the implementation
At the moment this is a heads-up for further discussion of how the new node memory pressure condition should be handled in the generic scheduler.
**Additional info**
* Based on [1], only best-effort pods are subject to filtering.
* Based on [2], a pod is best-effort "iff requests & limits are not specified for any resource across all containers".
[1] https://github.com/derekwaynecarr/kubernetes/blob/542668cc7998fe0acb315a43731e1f45ecdcc85b/docs/proposals/kubelet-eviction.md#scheduler
[2] https://github.com/kubernetes/kubernetes/pull/14943
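The filtering rule described above can be sketched as a standalone predicate. This is a simplified illustration, not the actual Kubernetes implementation: the `Pod`, `Container`, and `Node` types below are hypothetical stand-ins for the real API objects, and `checkNodeMemoryPressure` is a hypothetical name for the predicate being proposed.

```go
package main

import "fmt"

// ResourceList is a simplified stand-in for the Kubernetes resource map
// (e.g. {"memory": "64Mi"}); hypothetical, not the real API type.
type ResourceList map[string]string

// Container holds only the fields relevant to the best-effort check.
type Container struct {
	Requests ResourceList
	Limits   ResourceList
}

// Pod is a minimal stand-in for the API pod object.
type Pod struct {
	Name       string
	Containers []Container
}

// Node carries a boolean in place of the real MemoryPressure node condition.
type Node struct {
	Name                string
	UnderMemoryPressure bool
}

// isBestEffort mirrors the rule quoted from [2]: a pod is best-effort iff
// requests and limits are unset for every resource across all containers.
func isBestEffort(p Pod) bool {
	for _, c := range p.Containers {
		if len(c.Requests) > 0 || len(c.Limits) > 0 {
			return false
		}
	}
	return true
}

// checkNodeMemoryPressure sketches the proposed predicate: per [1], only
// best-effort pods are filtered out when the node reports memory pressure.
// It returns true when the pod may be scheduled onto the node.
func checkNodeMemoryPressure(p Pod, n Node) bool {
	if n.UnderMemoryPressure && isBestEffort(p) {
		return false
	}
	return true
}

func main() {
	pressured := Node{Name: "node-1", UnderMemoryPressure: true}
	bestEffort := Pod{Name: "be-pod", Containers: []Container{{}}}
	burstable := Pod{Name: "bu-pod", Containers: []Container{
		{Requests: ResourceList{"memory": "64Mi"}},
	}}

	fmt.Println(checkNodeMemoryPressure(bestEffort, pressured)) // filtered: false
	fmt.Println(checkNodeMemoryPressure(burstable, pressured))  // passes: true
}
```

Note that a pod with requests or limits on any container (burstable or guaranteed) is unaffected by the condition; only fully unconstrained pods are kept off a pressured node.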
Diffstat (limited to 'node-performance-testing.md')
0 files changed, 0 insertions, 0 deletions
