| author | joe2far <joe2farrell@gmail.com> | 2016-07-13 15:06:24 +0100 |
|---|---|---|
| committer | joe2far <joe2farrell@gmail.com> | 2016-07-13 15:06:24 +0100 |
| commit | 75258a3392bd59c116df23a7485b52ffe7e49e38 | |
| tree | 81068f2204fa59b712d78fce54d7e4a0dbd7ee27 | |
| parent | 5248158166bcd4a010ab6cdbe83483c425ffc621 | |
Fixed several typos
| -rw-r--r-- | api-group.md | 2 |
| -rw-r--r-- | client-package-structure.md | 4 |
| -rw-r--r-- | federated-api-servers.md | 2 |
| -rw-r--r-- | federation-lite.md | 6 |
| -rw-r--r-- | flannel-integration.md | 2 |
| -rw-r--r-- | kubelet-eviction.md | 2 |
| -rw-r--r-- | kubelet-tls-bootstrap.md | 4 |
| -rw-r--r-- | service-discovery.md | 6 |
| -rw-r--r-- | templates.md | 4 |
9 files changed, 16 insertions, 16 deletions
diff --git a/api-group.md b/api-group.md
index 8a4a1a10..84bed0aa 100644
--- a/api-group.md
+++ b/api-group.md
@@ -105,7 +105,7 @@ Documentation for other releases can be found at
   Types in the unversioned package will not have the APIVersion field, but may
   retain the Kind field.
-  For backward compatibility, when hanlding the Status, the server will encode it to v1 if the client expects the Status to be encoded in v1, otherwise the server will send the unversioned#Status. If an error occurs before the version can be determined, the server will send the unversioned#Status.
+  For backward compatibility, when handling the Status, the server will encode it to v1 if the client expects the Status to be encoded in v1, otherwise the server will send the unversioned#Status. If an error occurs before the version can be determined, the server will send the unversioned#Status.
 * non-top-level common API objects:
diff --git a/client-package-structure.md b/client-package-structure.md
index 3d1e8505..aaedb2e1 100644
--- a/client-package-structure.md
+++ b/client-package-structure.md
@@ -198,7 +198,7 @@ sources AND out-of-tree destinations, so it will be useful for consuming
 out-of-tree APIs and for others to build custom clients into their own
 repositories.
-Typed clients will be constructabale given a ClientMux; the typed constructor will use
+Typed clients will be constructable given a ClientMux; the typed constructor will use
 the ClientMux to find or construct an appropriate RESTClient. Alternatively, a
 typed client should be constructable individually given a config, from which it
 will be able to construct the appropriate RESTClient.
@@ -342,7 +342,7 @@ changes for multiple releases, to give users time to transition.
 Once we release a clientset, we will not make interface changes to it. Users of
 that client will not have to change their code until they are deliberately
 upgrading their import. We probably will want to generate some sort of stub test
-with a clienset, to ensure that we don't change the interface.
+with a clientset, to ensure that we don't change the interface.
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
diff --git a/federated-api-servers.md b/federated-api-servers.md
index b7616ecc..073fddae 100644
--- a/federated-api-servers.md
+++ b/federated-api-servers.md
@@ -116,7 +116,7 @@ Cluster admins are also free to use any of the multiple open source API manageme
 provide a lot more functionality like: rate-limiting, caching, logging,
 transformations and authentication.
 In future, we can also use ingress. That will give cluster admins the flexibility to
-easily swap out the ingress controller by a Go reverse proxy, ngingx, haproxy
+easily swap out the ingress controller by a Go reverse proxy, nginx, haproxy
 or any other solution they might want.
 ### Storage
diff --git a/federation-lite.md b/federation-lite.md
index f9b1b741..5b702a24 100644
--- a/federation-lite.md
+++ b/federation-lite.md
@@ -38,7 +38,7 @@ Documentation for other releases can be found at
 ## Introduction
-Full Cluster Federation will offer sophisticated federation between multiple kuberentes
+Full Cluster Federation will offer sophisticated federation between multiple kubernetes
 clusters, offering true high-availability, multiple provider support &
 cloud-bursting, multiple region support etc. However, many users have
 expressed a desire for a "reasonably" high-available cluster, that runs in
@@ -73,7 +73,7 @@ advanced/experimental functionality, so the interface is not initially going
 to be particularly user-friendly. As we design the evolution of kube-up, we
 will make multiple zones better supported.
-For the initial implemenation, kube-up must be run multiple times, once for
+For the initial implementation, kube-up must be run multiple times, once for
 each zone. The first kube-up will take place as normal, but then for each
 additional zone the user must run kube-up again, specifying
 `KUBE_USE_EXISTING_MASTER=true` and `KUBE_SUBNET_CIDR=172.20.x.0/24`. This will then
@@ -226,7 +226,7 @@ Initially therefore, the GCE changes will be to:
 1. change kube-up to support creation of a cluster in multiple zones
 1. pass a flag enabling multi-AZ clusters with kube-up
-1. change the kuberentes cloud provider to iterate through relevant zones when resolving items
+1. change the kubernetes cloud provider to iterate through relevant zones when resolving items
 1. tag GCE PD volumes with the appropriate zone information
diff --git a/flannel-integration.md b/flannel-integration.md
index 9d89b443..d42428cd 100644
--- a/flannel-integration.md
+++ b/flannel-integration.md
@@ -141,7 +141,7 @@ The ick-iest part of this implementation is going to the the `GET /network/lease
 * On each change, figure out the lease for the node, construct a [lease watch result](https://github.com/coreos/flannel/blob/0bf263826eab1707be5262703a8092c7d15e0be4/subnet/subnet.go#L72), and send it down the watch with the RV from the node
 * Implement a lease list that does a similar translation
-I say this is gross without an api objet because for each node->lease translation one has to store and retrieve the node metadata sent by flannel (eg: VTEP) from node annotations. [Reference implementation](https://github.com/bprashanth/kubernetes/blob/network_vxlan/pkg/kubelet/flannel_server.go) and [watch proxy](https://github.com/bprashanth/kubernetes/blob/network_vxlan/pkg/kubelet/watch_proxy.go).
+I say this is gross without an api object because for each node->lease translation one has to store and retrieve the node metadata sent by flannel (eg: VTEP) from node annotations. [Reference implementation](https://github.com/bprashanth/kubernetes/blob/network_vxlan/pkg/kubelet/flannel_server.go) and [watch proxy](https://github.com/bprashanth/kubernetes/blob/network_vxlan/pkg/kubelet/watch_proxy.go).
 # Limitations
diff --git a/kubelet-eviction.md b/kubelet-eviction.md
index 17542af0..b6bbf44c 100644
--- a/kubelet-eviction.md
+++ b/kubelet-eviction.md
@@ -99,7 +99,7 @@ An eviction threshold is of the following form:
 * valid `operator` tokens are `<`
 * valid `quantity` tokens must match the quantity representation used by Kubernetes
-If threhold criteria are met, the `kubelet` will take pro-active action to attempt
+If threshold criteria are met, the `kubelet` will take pro-active action to attempt
 to reclaim the starved compute resource associated with the eviction signal.
 The `kubelet` will support soft and hard eviction thresholds.
diff --git a/kubelet-tls-bootstrap.md b/kubelet-tls-bootstrap.md
index bd0ecdba..961f2af3 100644
--- a/kubelet-tls-bootstrap.md
+++ b/kubelet-tls-bootstrap.md
@@ -87,7 +87,7 @@ type CertificateSigningRequest struct {
 	unversioned.TypeMeta `json:",inline"`
 	api.ObjectMeta `json:"metadata,omitempty"`
-	// The certificate request itself and any additonal information.
+	// The certificate request itself and any additional information.
 	Spec CertificateSigningRequestSpec `json:"spec,omitempty"`
 	// Derived information about the request.
@@ -105,7 +105,7 @@ type CertificateSigningRequestSpec struct {
 // This information is derived from the request by Kubernetes and cannot be
 // modified by users. All information is optional since it might not be
-// available in the underlying request. This is intented to aid approval
+// available in the underlying request. This is intended to aid approval
 // decisions.
 type CertificateSigningRequestStatus struct {
 	// Information about the requesting user (if relevant)
diff --git a/service-discovery.md b/service-discovery.md
index d18d9812..a59eadfb 100644
--- a/service-discovery.md
+++ b/service-discovery.md
@@ -50,7 +50,7 @@ however for the purpose of service discovery we can simplify this to the followi
 If a user and/or password is required then this information can be passed using Kubernetes Secrets. Kubernetes contains the host and port of each service but it lacks the scheme and path.
-`Service Path` - Every Service has one (or more) endpoint. As a rule the endpoint should be located at the root "/" of the localtion URL, i.e. `http://172.100.1.52/`. There are cases where this is not possible and the actual service endpoint could be located at `http://172.100.1.52/cxfcdi`. The Kubernetes metadata for a service does not capture the path part, making it hard to consume this service.
+`Service Path` - Every Service has one (or more) endpoint. As a rule the endpoint should be located at the root "/" of the location URL, i.e. `http://172.100.1.52/`. There are cases where this is not possible and the actual service endpoint could be located at `http://172.100.1.52/cxfcdi`. The Kubernetes metadata for a service does not capture the path part, making it hard to consume this service.
 `Service Scheme` - Services can be deployed using different schemes. Some popular schemes include `http`,`https`,`file`,`ftp` and `jdbc`.
@@ -62,7 +62,7 @@ The API of a service is the point of interaction with a service consumer. The de
 `Service Description Path` - To facilitate the consumption of the service by client, the location this document would be greatly helpful to the service consumer. In some cases the client side code can be generated from such a document. It is assumed that the service description document is published somewhere on the service endpoint itself.
-`Service Description Language` - A number of Definition Languages (DL) have been developed to describe the service. Some of examples are `WSDL`, `WADL` and `Swagger`. In order to consume a decription document it is good to know the type of DL used.
+`Service Description Language` - A number of Definition Languages (DL) have been developed to describe the service. Some of examples are `WSDL`, `WADL` and `Swagger`. In order to consume a description document it is good to know the type of DL used.
 ## Standard Service Annotations
@@ -92,7 +92,7 @@ The fragment below is taken from the service section of the kubernetes.json were
 ## Conclusion
-Five service annotations are proposed as a standard way to desribe a service endpoint. These five annotation are promoted as a Kubernetes standard, so that services can be discovered and a service catalog can be build to facilitate service consumers.
+Five service annotations are proposed as a standard way to describe a service endpoint. These five annotation are promoted as a Kubernetes standard, so that services can be discovered and a service catalog can be build to facilitate service consumers.
diff --git a/templates.md b/templates.md
index 06126437..f46f686a 100644
--- a/templates.md
+++ b/templates.md
@@ -274,7 +274,7 @@ which process templates are free to override this value based on user input.
 **Example Template**
 Illustration of a template which defines a service and replication controller with parameters to specialized
-the name of the top level objects, the number of replicas, and serveral environment variables defined on the
+the name of the top level objects, the number of replicas, and several environment variables defined on the
 pod template.
 ```
@@ -412,7 +412,7 @@ Instead they can invoke the k8s api directly.
 * **/templates** - the REST storage resource for storing and retrieving template objects, scoped within a namespace.
 Storing templates within k8s has the benefit of enabling template sharing and securing via the same roles/resources
-that are used to provide access control to other cluster resoures. It also enables sophisticated service catalog
+that are used to provide access control to other cluster resources. It also enables sophisticated service catalog
 flows in which selecting a service from a catalog results in a new instantiation of that service. (This is not the
 only way to implement such a flow, but it does provide a useful level of integration).
