| author | joe2far <joe2farrell@gmail.com> | 2016-07-13 15:06:24 +0100 |
|---|---|---|
| committer | joe2far <joe2farrell@gmail.com> | 2016-07-13 15:06:24 +0100 |
| commit | fa027eea67872811a0715c7c9c9db31b3b55ad62 | |
| tree | 7ae64c32aa10d9e74a27870f05eab9e960362fd8 | |
| parent | 9813b6e476becc5bebb82bfc5be4fbfa56b31cdd | |
Fixed several typos
| -rw-r--r-- | control-plane-resilience.md | 2 |
| -rw-r--r-- | daemon.md | 2 |
| -rw-r--r-- | federated-services.md | 2 |
| -rw-r--r-- | indexed-job.md | 2 |
| -rw-r--r-- | nodeaffinity.md | 2 |
| -rw-r--r-- | security.md | 2 |
| -rw-r--r-- | taint-toleration-dedicated.md | 4 |

7 files changed, 8 insertions, 8 deletions
```diff
diff --git a/control-plane-resilience.md b/control-plane-resilience.md
index 9e7eecae..eb5f800e 100644
--- a/control-plane-resilience.md
+++ b/control-plane-resilience.md
@@ -179,7 +179,7 @@ well-bounded time period.
 Multiple stateless, self-hosted, self-healing API servers behind a HA
 load balancer, built out by the default "kube-up" automation on GCE,
 AWS and basic bare metal (BBM). Note that the single-host approach of
-hving etcd listen only on localhost to ensure that onyl API server can
+having etcd listen only on localhost to ensure that only API server can
 connect to it will no longer work, so alternative security will be
 needed in the regard (either using firewall rules, SSL certs, or
 something else). All necessary flags are currently supported to enable
diff --git a/daemon.md b/daemon.md
--- a/daemon.md
+++ b/daemon.md
@@ -174,7 +174,7 @@ upgradable, and more generally could not be managed through the API
 server interface.
 
 A third alternative is to generalize the Replication Controller. We would do
-something like: if you set the `replicas` field of the ReplicationConrollerSpec
+something like: if you set the `replicas` field of the ReplicationControllerSpec
 to -1, then it means "run exactly one replica on every node matching the
 nodeSelector in the pod template." The ReplicationController would pretend
 `replicas` had been set to some large number -- larger than the largest number
diff --git a/federated-services.md b/federated-services.md
index 124ff30a..46958146 100644
--- a/federated-services.md
+++ b/federated-services.md
@@ -505,7 +505,7 @@ depend on what scheduling policy is in force. In the above example, the
 scheduler created an equal number of replicas (2) in each of the three
 underlying clusters, to make up the total of 6 replicas required. To handle
 entire cluster failures, various approaches are possible, including:
-1. **simple overprovisioing**, such that sufficient replicas remain even if a
+1. **simple overprovisioning**, such that sufficient replicas remain even if a
    cluster fails. This wastes some resources, but is simple and reliable.
 2. **pod autoscaling**, where the replication controller in each cluster
    automatically and autonomously increases the number of
diff --git a/indexed-job.md b/indexed-job.md
index 63dafc7b..799f6b04 100644
--- a/indexed-job.md
+++ b/indexed-job.md
@@ -522,7 +522,7 @@ The index-only approach:
 - Requires that the user keep the *per completion parameters* in a separate
   storage, such as a configData or networked storage.
 - Makes no changes to the JobSpec.
-- Drawback: while in separate storage, they could be mutatated, which would have
+- Drawback: while in separate storage, they could be mutated, which would have
   unexpected effects.
 - Drawback: Logic for using index to lookup parameters needs to be in the Pod.
 - Drawback: CLIs and UIs are limited to using the "index" as the identity of a
diff --git a/nodeaffinity.md b/nodeaffinity.md
index 3c29d6fe..77bc6e91 100644
--- a/nodeaffinity.md
+++ b/nodeaffinity.md
@@ -62,7 +62,7 @@ scheduling requirements.
 rather than replacing `map[string]string`, due to backward compatibility
 requirements.)
 
-The affiniy specifications described above allow a pod to request various
+The affinity specifications described above allow a pod to request various
 properties that are inherent to nodes, for example "run this pod on a node
 with an Intel CPU" or, in a multi-zone cluster, "run this pod on a node in
 zone Z." ([This issue](https://github.com/kubernetes/kubernetes/issues/9044) describes
diff --git a/security.md b/security.md
index 0ed8f2f0..650a1b70 100644
--- a/security.md
+++ b/security.md
@@ -204,7 +204,7 @@ arbitrary containers on hosts, to gain access to any protected information
 stored in either volumes or in pods (such as access tokens or shared secrets
 provided as environment variables), to intercept and redirect traffic from
 running services by inserting middlemen, or to simply delete the entire history
-of the custer.
+of the cluster.
 
 As a general principle, access to the central data store should be restricted
 to the components that need full control over the system and which can apply
diff --git a/taint-toleration-dedicated.md b/taint-toleration-dedicated.md
index e896519f..c7126921 100644
--- a/taint-toleration-dedicated.md
+++ b/taint-toleration-dedicated.md
@@ -201,7 +201,7 @@ to both `NodeSpec` and `NodeStatus`. The value in `NodeStatus` is the union
 of the taints specified by various sources. For now, the only source is the
 `NodeSpec` itself, but in the future one could imagine a node inheriting
 taints from pods (if we were to allow taints to be attached to pods), from
-the node's startup coniguration, etc. The scheduler should look at the `Taints`
+the node's startup configuration, etc. The scheduler should look at the `Taints`
 in `NodeStatus`, not in `NodeSpec`.
 
 Taints and tolerations are not scoped to namespace.
@@ -305,7 +305,7 @@ Users should not start using taints and tolerations until the full
 implementation has been in Kubelet and the master for enough binary versions
 that we feel comfortable that we will not need to roll back either Kubelet or
 master to a version that does not support them. Longer-term we will use a
-progamatic approach to enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).
+programatic approach to enforcing this ([#4855](https://github.com/kubernetes/kubernetes/issues/4855)).
 
 ## Related issues
```
