From c29b0c997a678d71b7ec10426047a12d9c8d2a43 Mon Sep 17 00:00:00 2001
From: Wiiliam Chang
Date: Fri, 3 Nov 2017 22:08:07 +0800
Subject: Fix some errors in control-plane-resilience.md

---
 .../design-proposals/multicluster/control-plane-resilience.md | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/contributors/design-proposals/multicluster/control-plane-resilience.md b/contributors/design-proposals/multicluster/control-plane-resilience.md
index 1e0a3baf..7b30f588 100644
--- a/contributors/design-proposals/multicluster/control-plane-resilience.md
+++ b/contributors/design-proposals/multicluster/control-plane-resilience.md
@@ -194,14 +194,13 @@ to do three things:
 * allocate a new node (not necessary if running etcd as a pod, in which case
 specific measures are required to prevent user pods from interfering with
 system pods, for example using node selectors as
-described in nodeSelector),
 * start an etcd replica on that new node, and
 * have the new replica recover the etcd state. In the case of local disk
 (which fails in concert with the machine), the etcd state must be recovered
 from the other replicas. This is called
-
-dynamic member addition.
+dynamic member addition.
 In the case of remote persistent disk, the etcd state can be recovered by
 attaching the remote persistent disk to the replacement node, thus the state is
@@ -210,8 +209,7 @@ recoverable even if all other replicas are down.
 
 There are also significant performance differences between local disks and
 remote persistent disks. For example, the
-sustained throughput local disks in GCE is approximately 20x that of remote
-disks.
+sustained throughput local disks in GCE is approximately 20x that of remote disks.
 
 Hence we suggest that self-healing be provided by remotely mounted persistent
 disks in non-performance critical, single-zone cloud deployments. For
--
cgit v1.2.3
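The node selectors referred to in the first list item are the standard Kubernetes way to pin system pods onto dedicated nodes so that user pods scheduled elsewhere cannot interfere with them. The sketch below (not part of the patch) builds a pod spec for an etcd replica constrained to nodes carrying a hypothetical dedicated=control-plane label; the pod name, namespace, image, and label value are illustrative assumptions, and the pod is only printed rather than submitted to an API server.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A system pod (an etcd replica) restricted via NodeSelector to nodes
	// labeled dedicated=control-plane, keeping it apart from user workloads.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "etcd-replica-0",
			Namespace: "kube-system",
		},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"dedicated": "control-plane"},
			Containers: []corev1.Container{{
				Name:  "etcd",
				Image: "quay.io/coreos/etcd:v3.2.9",
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		log.Fatalf("marshal pod: %v", err)
	}
	fmt.Println(string(out))
}
```

Constraining the replica this way only helps if ordinary user pods are also prevented from selecting the same nodes, which is the "specific measures" caveat in the patched text.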
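The "dynamic member addition" named in the third list item is etcd's runtime reconfiguration feature: the surviving replicas are told about the replacement member, which then starts with --initial-cluster-state=existing and catches up on the cluster state from its peers. As a rough sketch only, the example below issues that call through the etcd v3 Go client of that era (github.com/coreos/etcd/clientv3); the endpoint and peer URLs are placeholders, and the surrounding node-replacement logic of the federation control plane is not shown.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// Connect to the surviving etcd replicas (placeholder endpoints).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{
			"http://etcd-0.example.internal:2379",
			"http://etcd-1.example.internal:2379",
		},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Announce the replacement replica's peer URL to the cluster. The new
	// replica must then be started with --initial-cluster-state=existing so
	// that it joins the existing cluster instead of bootstrapping a new one.
	resp, err := cli.MemberAdd(ctx, []string{"http://etcd-2.example.internal:2380"})
	if err != nil {
		log.Fatalf("member add: %v", err)
	}
	log.Printf("added member %x; cluster now has %d members",
		resp.Member.ID, len(resp.Members))
}
```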