author    Qinglan Peng <qinglanpeng@zju.edu.cn>    2016-12-10 09:55:47 +0800
committer Qinglan Peng <qinglanpeng@zju.edu.cn>    2016-12-10 09:55:47 +0800
commit    64f4b6bd0bf96f56b5a9c783c03df43ff0b225a4 (patch)
tree      64fd27d0f48fc30166e1e191c87321b4f95e10e2
parent    1713f56e908a87f28175dc57d644fcade9a9b498 (diff)
fix some typos
Signed-off-by: Qinglan Peng <qinglanpeng@zju.edu.cn>
-rw-r--r--  contributors/design-proposals/control-plane-resilience.md   6
-rw-r--r--  contributors/design-proposals/enhance-pluggable-policy.md   14
-rw-r--r--  contributors/design-proposals/federated-replicasets.md      14
-rw-r--r--  contributors/design-proposals/federated-services.md         2
-rw-r--r--  contributors/design-proposals/federation-phase-1.md         6
-rw-r--r--  contributors/design-proposals/ha_master.md                  2
-rw-r--r--  contributors/design-proposals/indexed-job.md                20
-rw-r--r--  contributors/design-proposals/selector-generation.md        2
8 files changed, 33 insertions, 33 deletions
diff --git a/contributors/design-proposals/control-plane-resilience.md b/contributors/design-proposals/control-plane-resilience.md
index 8193fd97..7b93cba8 100644
--- a/contributors/design-proposals/control-plane-resilience.md
+++ b/contributors/design-proposals/control-plane-resilience.md
@@ -110,13 +110,13 @@ well-bounded time period.
to machine failure to a small number of minutes per failure
(i.e. typically around "3 nines" availability), provided that:
1. cluster persistent state (i.e. etcd disks) is either:
- 1. truely persistent (i.e. remote persistent disks), or
+ 1. truly persistent (i.e. remote persistent disks), or
1. reconstructible (e.g. using etcd [dynamic member
addition](https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md#add-a-new-member)
or [backup and
recovery](https://github.com/coreos/etcd/blob/master/Documentation/admin_guide.md#disaster-recovery)).
1. and boot disks are either:
- 1. truely persistent (i.e. remote persistent disks), or
+ 1. truly persistent (i.e. remote persistent disks), or
1. reconstructible (e.g. using boot-from-snapshot,
boot-from-pre-configured-image or
boot-from-auto-initializing image).
@@ -210,7 +210,7 @@ recoverable even if all other replicas are down.
There are also significant performance differences between local disks and remote
persistent disks. For example, the
<A HREF="https://cloud.google.com/compute/docs/disks/#comparison_of_disk_types">
-sustained throughput local disks in GCE is approximatley 20x that of remote
+sustained throughput of local disks in GCE is approximately 20x that of remote
disks</A>.
Hence we suggest that self-healing be provided by remotely mounted persistent
diff --git a/contributors/design-proposals/enhance-pluggable-policy.md b/contributors/design-proposals/enhance-pluggable-policy.md
index 2468d3c1..ecc908ee 100644
--- a/contributors/design-proposals/enhance-pluggable-policy.md
+++ b/contributors/design-proposals/enhance-pluggable-policy.md
@@ -75,7 +75,7 @@ namespace" (see [ResourceAccessReview](#ResourceAccessReview) further down).
```go
// OLD
type Authorizer interface {
- Authorize(a Attributes) error
+ Authorize(a Attributes) error
}
```
@@ -85,7 +85,7 @@ type Authorizer interface {
// a particular action
type Authorizer interface {
// Authorize takes a Context (for namespace, user, and traceability) and
- // Attributes to make a policy determination.
+ // Attributes to make a policy determination.
// reason is an optional return value that can describe why a policy decision
// was made. Reasons are useful during debugging when trying to figure out
// why a user or group has access to perform a particular action.
@@ -99,7 +99,7 @@ type Authorizer interface {
// namespaces they are allowed to view instead of having to choose between
// listing them all or listing none.
type AuthorizerIntrospection interface {
- // GetAllowedSubjects takes a Context (for namespace and traceability) and
+ // GetAllowedSubjects takes a Context (for namespace and traceability) and
// Attributes to determine which users and groups are allowed to perform the
// described action in the namespace. This API enables the ResourceBasedReview
// requests below
@@ -156,7 +156,7 @@ corresponding return:
// POSTed like this
curl -X POST /apis/authorization.kubernetes.io/{version}/subjectAccessReviews -d @subject-access-review.json
-// or
+// or
accessReviewResult, err := Client.SubjectAccessReviews().Create(subjectAccessReviewObject)
// output
@@ -218,7 +218,7 @@ its corresponding return:
// POSTed like this
curl -X POST /apis/authorization.kubernetes.io/{version}/localSubjectAccessReviews -d @local-subject-access-review.json
-// or
+// or
accessReviewResult, err := Client.LocalSubjectAccessReviews().Create(localSubjectAccessReviewObject)
// output
@@ -327,7 +327,7 @@ type LocalSubjectAccessReviewResponse struct {
### ResourceAccessReview
-This set of APIs nswers the question: which users and groups can perform the
+This set of APIs answers the question: which users and groups can perform the
specified verb on the specified resourceKind. Given the Authorizer interface
described above, this endpoint can be implemented generically against any
Authorizer by calling the .GetAllowedSubjects() function.
@@ -366,7 +366,7 @@ corresponding return:
// POSTed like this
curl -X POST /apis/authorization.kubernetes.io/{version}/resourceAccessReviews -d @resource-access-review.json
-// or
+// or
accessReviewResult, err := Client.ResourceAccessReviews().Create(resourceAccessReviewObject)
// output
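For reference, the new `Authorizer` contract described in the hunks above (a `Context` plus `Attributes` in, an optional human-readable `reason` out) can be satisfied by a very small implementation. The sketch below is illustrative only: the `Context` and `Attributes` shapes are simplified stand-ins, since the real interfaces are only partially visible in this diff.

```go
package main

import (
	"errors"
	"fmt"
)

// Context and Attributes stand in for the interfaces referenced above; their
// exact shape here is an assumption for illustration, not the proposal's API.
type Context interface {
	Namespace() string
	User() string
}

type Attributes interface {
	GetVerb() string
	GetResource() string
}

type simpleContext struct{ ns, user string }

func (c simpleContext) Namespace() string { return c.ns }
func (c simpleContext) User() string      { return c.user }

type simpleAttributes struct{ verb, resource string }

func (a simpleAttributes) GetVerb() string     { return a.verb }
func (a simpleAttributes) GetResource() string { return a.resource }

// NamespaceAuthorizer allows any action inside one namespace and denies the
// rest, returning the optional human-readable reason described above.
type NamespaceAuthorizer struct{ Allowed string }

func (n NamespaceAuthorizer) Authorize(ctx Context, a Attributes) (reason string, err error) {
	if ctx.Namespace() == n.Allowed {
		return fmt.Sprintf("%s may %s %s in open namespace %q",
			ctx.User(), a.GetVerb(), a.GetResource(), n.Allowed), nil
	}
	return "", errors.New("denied: outside the allowed namespace")
}

func main() {
	authz := NamespaceAuthorizer{Allowed: "dev"}
	for _, ns := range []string{"dev", "prod"} {
		reason, err := authz.Authorize(simpleContext{ns, "alice"}, simpleAttributes{"list", "pods"})
		fmt.Printf("%s: reason=%q err=%v\n", ns, reason, err)
	}
}
```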
diff --git a/contributors/design-proposals/federated-replicasets.md b/contributors/design-proposals/federated-replicasets.md
index f1744ade..3443912d 100644
--- a/contributors/design-proposals/federated-replicasets.md
+++ b/contributors/design-proposals/federated-replicasets.md
@@ -43,7 +43,7 @@ that is a member of federation.
Federated ReplicaSet (FRS) - a ReplicaSet defined and running inside of the Federated K8S server.
Federated ReplicaSet Controller (FRSC) - A controller running inside
-of Federated K8S server that controlls FRS.
+of the Federated K8S server that controls FRS.
## User Experience
@@ -53,7 +53,7 @@ of Federated K8S server that controlls FRS.
cluster. They create a definition of a federated ReplicaSet on the
federated master and (local) ReplicaSets are automatically created
in each of the federation clusters. The number of replicas in each
- of the Local ReplicaSets is (perheps indirectly) configurable by
+ of the Local ReplicaSets is (perhaps indirectly) configurable by
the user.
+ [CUJ2] When the current number of replicas in a cluster drops below
the desired number and new replicas cannot be scheduled then they
@@ -131,7 +131,7 @@ FederatedReplicaSetPreferences {
Rebalance : true
Clusters : map[string]LocalReplicaSet {
"*" : LocalReplicaSet{ Weight: 1}
- }
+ }
}
```
@@ -151,7 +151,7 @@ FederatedReplicaSetPreferences {
Rebalance : true
Clusters : map[string]LocalReplicaSet {
"*" : LocalReplicaSet{ MaxReplicas: 2; Weight: 1}
- }
+ }
}
```
@@ -197,8 +197,8 @@ There is a global target for 50, however clusters require 60. So some clusters w
**Scenario 4**. I want to have an equal number of replicas in clusters A, B and C; however, don't put more than 20 replicas in cluster C.
```
-FederatedReplicaSetPreferences {
- Rebalance : true
+FederatedReplicaSetPreferences {
+ Rebalance : true
Clusters : map[string]LocalReplicaSet {
"*" : LocalReplicaSet{ Weight: 1}
"C" : LocalReplicaSet{ MaxReplicas: 20, Weight: 1}
@@ -254,7 +254,7 @@ FederatedReplicaSetPreferences {
Rebalance : false
Clusters : map[string]LocalReplicaSet {
"*" : LocalReplicaSet{ Weight: 1}
- }
+ }
}
```
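The `FederatedReplicaSetPreferences` examples above combine per-cluster weights, optional `MaxReplicas` caps, and a `"*"` wildcard. A minimal sketch of a proportional allocator over those inputs, assuming a heaviest-first remainder rule (this is an illustrative assumption, not the actual FRSC algorithm):

```go
package main

import (
	"fmt"
	"sort"
)

// LocalReplicaSetPref mirrors the Weight/MaxReplicas fields in the
// FederatedReplicaSetPreferences examples above.
type LocalReplicaSetPref struct {
	Weight      int
	MaxReplicas int // 0 means "no cap"
}

// distribute splits total replicas across clusters proportionally to weight,
// respecting MaxReplicas caps and handing out rounding remainders to the
// heaviest clusters first.
func distribute(total int, prefs map[string]LocalReplicaSetPref) map[string]int {
	out := map[string]int{}
	names := make([]string, 0, len(prefs))
	weightSum := 0
	for name, p := range prefs {
		names = append(names, name)
		weightSum += p.Weight
	}
	if weightSum <= 0 {
		return out
	}
	// Stable order: heaviest first, ties broken by name.
	sort.Slice(names, func(i, j int) bool {
		wi, wj := prefs[names[i]].Weight, prefs[names[j]].Weight
		if wi != wj {
			return wi > wj
		}
		return names[i] < names[j]
	})
	assigned := 0
	for _, name := range names {
		p := prefs[name]
		share := total * p.Weight / weightSum
		if p.MaxReplicas > 0 && share > p.MaxReplicas {
			share = p.MaxReplicas
		}
		out[name] = share
		assigned += share
	}
	// Hand out the remainder one replica at a time, skipping capped clusters;
	// stop when a full pass makes no progress (everything is capped).
	for assigned < total {
		progress := false
		for _, name := range names {
			if assigned == total {
				break
			}
			p := prefs[name]
			if p.MaxReplicas > 0 && out[name] >= p.MaxReplicas {
				continue
			}
			out[name]++
			assigned++
			progress = true
		}
		if !progress {
			break
		}
	}
	return out
}

func main() {
	// Scenario 4 above: equal weights for A, B and C, with C capped at 20.
	fmt.Println(distribute(150, map[string]LocalReplicaSetPref{
		"A": {Weight: 1},
		"B": {Weight: 1},
		"C": {Weight: 1, MaxReplicas: 20},
	}))
}
```

With 150 total replicas this prints `map[A:65 B:65 C:20]`: C is held at its cap and the overflow is rebalanced onto A and B.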
diff --git a/contributors/design-proposals/federated-services.md b/contributors/design-proposals/federated-services.md
index b9d51c43..aabdf6f7 100644
--- a/contributors/design-proposals/federated-services.md
+++ b/contributors/design-proposals/federated-services.md
@@ -480,7 +480,7 @@ entire cluster failures, various approaches are possible, including:
simple, but there is some delay in the autoscaling.
3. **federated replica migration**, where the Cluster Federation
control system detects the cluster failure and automatically
- increases the replica count in the remainaing clusters to make up
+ increases the replica count in the remaining clusters to make up
for the lost replicas in the failed cluster. This does not seem to
offer any benefits relative to pod autoscaling above, and is
arguably more complex to implement, but we note it here as a
diff --git a/contributors/design-proposals/federation-phase-1.md b/contributors/design-proposals/federation-phase-1.md
index 0a3a8f50..e6c54bf6 100644
--- a/contributors/design-proposals/federation-phase-1.md
+++ b/contributors/design-proposals/federation-phase-1.md
@@ -121,7 +121,7 @@ engine to decide how to split workloads among clusters. It creates a
Kubernetes Replication Controller on one or more underlying clusters,
and posts them back to `etcd` storage.
-One sublety worth noting here is that the scheduling decision is arrived at by
+One subtlety worth noting here is that the scheduling decision is arrived at by
combining the application-specific request from the user (which might
include, for example, placement constraints), and the global policy specified
by the federation administrator (for example, "prefer on-premise
@@ -306,7 +306,7 @@ cases it may be complex. For example:
Below is a sample of the YAML to create such a replication controller.
-```
+```
apiVersion: v1
kind: ReplicationController
metadata:
@@ -325,7 +325,7 @@ spec:
image: nginx
ports:
- containerPort: 80
- clusterSelector:
+ clusterSelector:
name in (Foo, Bar)
```
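The `clusterSelector` field in the sample above uses a set-based requirement, `name in (Foo, Bar)`. A toy matcher for just the `in` operator, assuming cluster labels are a flat string map (the real selector parser is more general than this sketch):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesClusterSelector evaluates a set-based requirement of the form
// "name in (Foo, Bar)" against a cluster's labels. Only the "in" operator is
// handled here, purely for illustration.
func matchesClusterSelector(selector string, labels map[string]string) bool {
	open := strings.Index(selector, "(")
	end := strings.Index(selector, ")")
	if open < 0 || end < open {
		return false
	}
	head := strings.TrimSpace(selector[:open]) // e.g. "name in"
	key := strings.TrimSpace(strings.TrimSuffix(head, "in"))
	for _, v := range strings.Split(selector[open+1:end], ",") {
		if labels[key] == strings.TrimSpace(v) {
			return true
		}
	}
	return false
}

func main() {
	sel := "name in (Foo, Bar)"
	fmt.Println(matchesClusterSelector(sel, map[string]string{"name": "Foo"})) // true
	fmt.Println(matchesClusterSelector(sel, map[string]string{"name": "Baz"})) // false
}
```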
diff --git a/contributors/design-proposals/ha_master.md b/contributors/design-proposals/ha_master.md
index d4cf26a9..2575aaf7 100644
--- a/contributors/design-proposals/ha_master.md
+++ b/contributors/design-proposals/ha_master.md
@@ -93,7 +93,7 @@ denominator that will work everywhere. Instead we will document various options
solution for different deployments. Below we list possible approaches:
1. `Managed DNS` - the user needs to specify a domain name during cluster creation. DNS entries will be managed
-automaticaly by the deployment tool that will be intergrated with solutions like Route53 (AWS)
+automatically by the deployment tool that will be integrated with solutions like Route53 (AWS)
or Google Cloud DNS (GCP). For load balancing we will have two options:
1.1. create an L4 load balancer in front of all apiservers and update the DNS name appropriately
1.2. use a round-robin DNS technique to access all apiservers directly
diff --git a/contributors/design-proposals/indexed-job.md b/contributors/design-proposals/indexed-job.md
index 5a089c22..bc2860b9 100644
--- a/contributors/design-proposals/indexed-job.md
+++ b/contributors/design-proposals/indexed-job.md
@@ -295,7 +295,7 @@ kubectl run process-matrix --image=my/matrix \
--per-completion-env=EC="15 15 31 31" \
--restart=OnFailure \
-- \
- /usr/local/bin/process_matrix_block -start_row $SR -end_row $ER -start_col $ER --end_col $EC
+  /usr/local/bin/process_matrix_block -start_row $SR -end_row $ER -start_col $SC -end_col $EC
```
### Composition With Workflows and ScheduledJob
@@ -522,7 +522,7 @@ The JobStatus is also not changed. The user can gauge the progress of the job by
the `.status.succeeded` count.
-#### Job Spec Compatilibity
+#### Job Spec Compatibility
A job spec written before this change will work exactly the same as before with
the new controller. The Pods it creates will have the same environment as
@@ -535,7 +535,7 @@ This is okay for a Beta resource.
#### Job Controller Changes
-The Job controller will maintain for each Job a data structed which
+The Job controller will maintain for each Job a data structure which
indicates the status of each completion index. We call this the
*scoreboard* for short. It is an array of length `.spec.completions`.
Elements of the array are `enum` type with possible values including
@@ -601,7 +601,7 @@ kubectl run say-number --image=busybox \
--completions=3 \
--completion-index-var-name=I \
-- \
- sh -c 'echo "My index is $I" && sleep 5'
+ sh -c 'echo "My index is $I" && sleep 5'
```
will run 3 pods to completion, each printing one of the following lines:
@@ -624,7 +624,7 @@ kubectl run say-fruit --image=busybox \
--per-completion-env=FRUIT="apple banana cherry" \
--per-completion-env=COLOR="green yellow red" \
-- \
- sh -c 'echo "Have a nice $COLOR $FRUIT" && sleep 5'
+ sh -c 'echo "Have a nice $COLOR $FRUIT" && sleep 5'
```
or equivalently:
@@ -637,7 +637,7 @@ kubectl run say-fruit --image=busybox \
--per-completion-env=FRUIT="$(cat fruits.txt)" \
--per-completion-env=COLOR="$(cat colors.txt)" \
-- \
- sh -c 'echo "Have a nice $COLOR $FRUIT" && sleep 5'
+ sh -c 'echo "Have a nice $COLOR $FRUIT" && sleep 5'
```
or similarly:
@@ -647,7 +647,7 @@ kubectl run say-fruit --image=busybox \
--per-completion-env=FRUIT=@fruits.txt \
--per-completion-env=COLOR=@colors.txt \
-- \
- sh -c 'echo "Have a nice $COLOR $FRUIT" && sleep 5'
+ sh -c 'echo "Have a nice $COLOR $FRUIT" && sleep 5'
```
will all run 3 pods in parallel. Index 0 pod will log:
@@ -691,7 +691,7 @@ kubectl run say-fruit --image=busybox \
--per-completion-env=FRUIT="apple banana cherry" \
--per-completion-env=COLOR="green yellow red" \
-- \
- sh -c 'echo "Have a nice $COLOR $FRUIT" && sleep 5'
+ sh -c 'echo "Have a nice $COLOR $FRUIT" && sleep 5'
```
First, kubectl generates the PodSpec as it normally does for `kubectl run`.
@@ -768,7 +768,7 @@ spec:
- '/etc/job-params.sh; echo "this is the rest of the command"'
volumeMounts:
- name: annotations
- mountPath: /etc
+ mountPath: /etc
- name: script
mountPath: /etc
volumes:
@@ -799,7 +799,7 @@ spec:
...
spec:
containers:
- - name: foo
+ - name: foo
...
env:
# following block added:
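The `--per-completion-env` examples in this file give each variable a space-separated list, and the pod with completion index i receives the i-th entry. A sketch of that mapping under the semantics described above (not kubectl's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// perCompletionEnv maps a completion index to its environment values: each
// variable carries a whitespace-separated list and the pod with completion
// index i gets the i-th entry, as in the --per-completion-env examples above.
func perCompletionEnv(index int, lists map[string]string) (map[string]string, error) {
	env := map[string]string{}
	for name, list := range lists {
		words := strings.Fields(list)
		if index < 0 || index >= len(words) {
			return nil, fmt.Errorf("%s has %d entries, no value for index %d", name, len(words), index)
		}
		env[name] = words[index]
	}
	return env, nil
}

func main() {
	lists := map[string]string{
		"FRUIT": "apple banana cherry",
		"COLOR": "green yellow red",
	}
	for i := 0; i < 3; i++ {
		env, _ := perCompletionEnv(i, lists)
		fmt.Printf("Have a nice %s %s\n", env["COLOR"], env["FRUIT"])
	}
}
```

Running it prints the three lines the proposal expects, starting with `Have a nice green apple` for index 0.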
diff --git a/contributors/design-proposals/selector-generation.md b/contributors/design-proposals/selector-generation.md
index efb32cf2..9b4b51fa 100644
--- a/contributors/design-proposals/selector-generation.md
+++ b/contributors/design-proposals/selector-generation.md
@@ -161,7 +161,7 @@ preserved (would have been nice to do so, but requires more complicated
solution).
3. Users who only created v1beta1 examples or v1 examples will never see the
existence of either field.
-4. Since v1beta1 are convertable to/from v1, the storage location (path in etcd)
+4. Since v1beta1 are convertible to/from v1, the storage location (path in etcd)
does not need to change, allowing scriptable rollforward/rollback.
# Future Work