author     halfcrazy <hackzhuyan@gmail.com>    2018-02-04 22:41:56 +0800
committer  halfcrazy <hackzhuyan@gmail.com>    2018-02-04 22:41:56 +0800
commit     ec3d22e7c5d6041aeca949f9a8f01a4596a5ab1f (patch)
tree       5f105d97c45df3746d9aa546e66916e0cfed892b /contributors
parent     f2ad4746f799f4dafc2f04d68331f845c2866828 (diff)
doc: fix some typo
Diffstat (limited to 'contributors')
-rw-r--r--  contributors/design-proposals/api-machinery/api-chunking.md                 2
-rw-r--r--  contributors/design-proposals/auth/kubectl-exec-plugins.md                  2
-rw-r--r--  contributors/design-proposals/autoscaling/vertical-pod-autoscaler.md        4
-rw-r--r--  contributors/design-proposals/instrumentation/events-redesign.md            2
-rw-r--r--  contributors/design-proposals/instrumentation/metrics-server.md             2
-rw-r--r--  contributors/design-proposals/multicluster/federation-clusterselector.md    2
-rw-r--r--  contributors/design-proposals/multicluster/federation-phase-1.md            2
-rw-r--r--  contributors/design-proposals/node/cri-windows.md                            2
-rw-r--r--  contributors/design-proposals/node/pod-resource-management.md               4
-rw-r--r--  contributors/design-proposals/node/sysctl.md                                 4
-rw-r--r--  contributors/design-proposals/resource-management/device-plugin.md         12
-rw-r--r--  contributors/design-proposals/scheduling/pod-priority-api.md                2
-rw-r--r--  contributors/design-proposals/scheduling/rescheduling-for-critical-pods.md  2
-rw-r--r--  contributors/design-proposals/storage/raw-block-pv.md                       6
-rw-r--r--  contributors/design-proposals/storage/volume-topology-scheduling.md         4
-rw-r--r--  contributors/devel/gubernator.md                                             2
-rw-r--r--  contributors/devel/kubemark-guide.md                                         2
-rw-r--r--  contributors/devel/staging.md                                                2
18 files changed, 29 insertions, 29 deletions
diff --git a/contributors/design-proposals/api-machinery/api-chunking.md b/contributors/design-proposals/api-machinery/api-chunking.md
index 4930192a..0a099fd3 100644
--- a/contributors/design-proposals/api-machinery/api-chunking.md
+++ b/contributors/design-proposals/api-machinery/api-chunking.md
@@ -130,7 +130,7 @@ GET /api/v1/pods?limit=500&continue=DEF...
Some clients may wish to follow a failed paged list with a full list attempt.
-The 5 minute default compaction interval for etcd3 bounds how long a list can run. Since clients may wish to perform processing over very large sets, increasing that timeout may make sense for large clusters. It should be possible to alter the interval at which compaction runs to accomodate larger clusters.
+The 5 minute default compaction interval for etcd3 bounds how long a list can run. Since clients may wish to perform processing over very large sets, increasing that timeout may make sense for large clusters. It should be possible to alter the interval at which compaction runs to accommodate larger clusters.
#### Types of clients and impact
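Editor's note: the api-chunking hunk above concerns paged LIST calls whose continue tokens are bounded by the etcd3 compaction window. As a point of reference, a minimal client-go sketch of consuming a list in chunks could look like the following; the kubeconfig handling, the 500-item limit, and the List signature (recent client-go takes a context) are illustrative assumptions, not part of the proposal.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (error handling kept minimal).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Page through pods 500 at a time, feeding the continue token back
	// until the server reports there are no more results.
	cont := ""
	for {
		list, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			Limit:    500,
			Continue: cont,
		})
		if err != nil {
			// A "410 Gone" here typically means the continue token expired
			// (e.g. the compaction window passed) and the list must restart.
			panic(err)
		}
		fmt.Printf("got %d pods in this chunk\n", len(list.Items))
		if list.Continue == "" {
			break
		}
		cont = list.Continue
	}
}
```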
diff --git a/contributors/design-proposals/auth/kubectl-exec-plugins.md b/contributors/design-proposals/auth/kubectl-exec-plugins.md
index 848e5562..dba6e7b7 100644
--- a/contributors/design-proposals/auth/kubectl-exec-plugins.md
+++ b/contributors/design-proposals/auth/kubectl-exec-plugins.md
@@ -160,7 +160,7 @@ type ExecAuthProviderConfig struct {
// to pass argument to the plugin.
Env []ExecEnvVar `json:"env"`
- // Prefered input version of the ExecInfo. The returned ExecCredentials MUST use
+ // Preferred input version of the ExecInfo. The returned ExecCredentials MUST use
// the same encoding version as the input.
APIVersion string `json:"apiVersion,omitempty"`
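Editor's note: the hunk above documents the preferred input version for the ExecInfo/ExecCredentials exchange with a credential plugin. To make the contract concrete, here is a self-contained sketch of a plugin emitting an ExecCredential on stdout; the struct is declared locally rather than imported, and the apiVersion string is illustrative of the shape only.

```go
package main

import (
	"encoding/json"
	"os"
)

// Local, illustrative mirror of the ExecCredential shape the proposal describes;
// the canonical types live in client-go's client.authentication.k8s.io group.
type execCredential struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Status     struct {
		Token string `json:"token"`
	} `json:"status"`
}

func main() {
	// A real plugin would obtain the token from its identity provider;
	// it is hard-coded here purely for illustration.
	cred := execCredential{
		APIVersion: "client.authentication.k8s.io/v1beta1",
		Kind:       "ExecCredential",
	}
	cred.Status.Token = "example-bearer-token"

	// The plugin must reply using the same encoding version it was invoked with.
	json.NewEncoder(os.Stdout).Encode(cred)
}
```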
diff --git a/contributors/design-proposals/autoscaling/vertical-pod-autoscaler.md b/contributors/design-proposals/autoscaling/vertical-pod-autoscaler.md
index b4b08704..13d235f8 100644
--- a/contributors/design-proposals/autoscaling/vertical-pod-autoscaler.md
+++ b/contributors/design-proposals/autoscaling/vertical-pod-autoscaler.md
@@ -336,7 +336,7 @@ type VerticalPodAutoscalerStatus {
StatusMessage string
}
-// UpdateMode controls when autoscaler applies changes to the pod resoures.
+// UpdateMode controls when autoscaler applies changes to the pod resources.
type UpdateMode string
const (
// UpdateModeOff means that autoscaler never changes Pod resources.
@@ -354,7 +354,7 @@ const (
// PodUpdatePolicy describes the rules on how changes are applied to the pods.
type PodUpdatePolicy struct {
- // Controls when autoscaler applies changes to the pod resoures.
+ // Controls when autoscaler applies changes to the pod resources.
// +optional
UpdateMode UpdateMode
}
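Editor's note: to make the UpdateMode semantics in the hunk above concrete, here is a hedged sketch of how an updater component might branch on the mode. The types are re-declared locally; only UpdateModeOff appears in the hunk itself, and the remaining constant names are assumptions based on how VPA later shipped.

```go
package main

import "fmt"

// UpdateMode controls when the autoscaler applies changes to pod resources
// (re-declared here for illustration only).
type UpdateMode string

const (
	UpdateModeOff      UpdateMode = "Off"      // never change pod resources
	UpdateModeInitial  UpdateMode = "Initial"  // set resources only at pod creation
	UpdateModeRecreate UpdateMode = "Recreate" // evict and recreate pods to apply changes
	UpdateModeAuto     UpdateMode = "Auto"     // let the autoscaler pick the mechanism
)

// applyPolicy sketches how an updater could interpret the mode for a running pod.
func applyPolicy(mode UpdateMode) {
	switch mode {
	case UpdateModeOff:
		fmt.Println("recommendation recorded, pod left untouched")
	case UpdateModeInitial:
		fmt.Println("resources applied only on pod (re)creation")
	case UpdateModeRecreate, UpdateModeAuto:
		fmt.Println("pod evicted so new resources take effect on restart")
	}
}

func main() {
	applyPolicy(UpdateModeAuto)
}
```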
diff --git a/contributors/design-proposals/instrumentation/events-redesign.md b/contributors/design-proposals/instrumentation/events-redesign.md
index 6540d61f..bf2ae606 100644
--- a/contributors/design-proposals/instrumentation/events-redesign.md
+++ b/contributors/design-proposals/instrumentation/events-redesign.md
@@ -359,7 +359,7 @@ There's ongoing effort for adding Event deduplication and teeing to the server s
Another effort to protect API server from too many Events by dropping requests servers side in admission plugin is worked on by @staebler.
## Considered alternatives for API changes
### Leaving current dedup mechanism but improve backoff behavior
-As we're going to move all semantic informations to fields, instead of passing some of them in message, we could just call it a day, and leave the deduplication logic as is. When doing that we'd need to depend on the client-recorder library on protecting API server, by using number of techniques, like batching, aggressive backing off and allowing admin to reduce number of Events emitted by the system. This solution wouldn't drastically reduce number of API requests and we'd need to hope that small incremental changes would be enough.
+As we're going to move all semantic information to fields, instead of passing some of them in message, we could just call it a day, and leave the deduplication logic as is. When doing that we'd need to depend on the client-recorder library on protecting API server, by using number of techniques, like batching, aggressive backing off and allowing admin to reduce number of Events emitted by the system. This solution wouldn't drastically reduce number of API requests and we'd need to hope that small incremental changes would be enough.
### Timestamp list as a dedup mechanism
Another considered solution was to store timestamps of Events explicitly instead of only count. This gives users more information, as people complain that current dedup logic is too strong and it's hard to "decompress" Event if needed. This change has clearly worse performance characteristic, but fixes the problem of "decompressing" Events and generally making deduplication lossless. We believe that individual repeated events are not interesting per se, what's interesting is when given series started and when it finished, which is how we ended with the current proposal.
diff --git a/contributors/design-proposals/instrumentation/metrics-server.md b/contributors/design-proposals/instrumentation/metrics-server.md
index 344addf6..9b7a8f0b 100644
--- a/contributors/design-proposals/instrumentation/metrics-server.md
+++ b/contributors/design-proposals/instrumentation/metrics-server.md
@@ -78,7 +78,7 @@ horizontally, though it’s rather complicated and is out of the scope of this d
Metrics server will be Kubernetes addon, create by kube-up script and managed by
[addon-manager](https://git.k8s.io/kubernetes/cluster/addons/addon-manager).
-Since there is a number of dependant components, it will be marked as a critical addon.
+Since there is a number of dependent components, it will be marked as a critical addon.
In the future when the priority/preemption feature is introduced we will migrate to use this
proper mechanism for marking it as a high-priority, system component.
diff --git a/contributors/design-proposals/multicluster/federation-clusterselector.md b/contributors/design-proposals/multicluster/federation-clusterselector.md
index c12e4233..9cc9f45f 100644
--- a/contributors/design-proposals/multicluster/federation-clusterselector.md
+++ b/contributors/design-proposals/multicluster/federation-clusterselector.md
@@ -77,5 +77,5 @@ The logic to determine if an object is sent to a Federated Cluster will have two
## Open Questions
-1. Should there be any special considerations for when dependant resources would not be forwarded together to a Federated Cluster.
+1. Should there be any special considerations for when dependent resources would not be forwarded together to a Federated Cluster.
1. How to improve usability of this feature long term. It will certainly help to give first class API support but easier ways to map labels or requirements to objects may be required.
diff --git a/contributors/design-proposals/multicluster/federation-phase-1.md b/contributors/design-proposals/multicluster/federation-phase-1.md
index 25d27ee6..85c10ddb 100644
--- a/contributors/design-proposals/multicluster/federation-phase-1.md
+++ b/contributors/design-proposals/multicluster/federation-phase-1.md
@@ -335,7 +335,7 @@ only supports a simple list of acceptable clusters. Workloads will be
evenly distributed on these acceptable clusters in phase one. After
phase one we will define syntax to represent more advanced
constraints, like cluster preference ordering, desired number of
-splitted workloads, desired ratio of workloads spread on different
+split workloads, desired ratio of workloads spread on different
clusters, etc.
Besides this explicit “clusterSelector” filter, a workload may have
diff --git a/contributors/design-proposals/node/cri-windows.md b/contributors/design-proposals/node/cri-windows.md
index 9793b2f8..e1a7f1fa 100644
--- a/contributors/design-proposals/node/cri-windows.md
+++ b/contributors/design-proposals/node/cri-windows.md
@@ -5,7 +5,7 @@
**Status**: Proposed
## Background
-Container Runtime Interface (CRI) defines [APIs and configuration types](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto) for kubelet to integrate various container runtimes. The Open Container Initiative (OCI) Runtime Specification defines [platform specific configuration](https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration), including Linux, Windows, and Solaris. Currently CRI only suppports Linux container configuration. This proposal is to bring the Memory & CPU resource restrictions already specified in OCI for Windows to CRI.
+Container Runtime Interface (CRI) defines [APIs and configuration types](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/v1alpha1/runtime/api.proto) for kubelet to integrate various container runtimes. The Open Container Initiative (OCI) Runtime Specification defines [platform specific configuration](https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration), including Linux, Windows, and Solaris. Currently CRI only supports Linux container configuration. This proposal is to bring the Memory & CPU resource restrictions already specified in OCI for Windows to CRI.
The Linux & Windows schedulers differ in design and the units used, but can accomplish the same goal of limiting resource consumption of individual containers.
diff --git a/contributors/design-proposals/node/pod-resource-management.md b/contributors/design-proposals/node/pod-resource-management.md
index cc1c43a9..91f07689 100644
--- a/contributors/design-proposals/node/pod-resource-management.md
+++ b/contributors/design-proposals/node/pod-resource-management.md
@@ -118,7 +118,7 @@ The following formula is used to convert CPU in millicores to cgroup values:
The `kubelet` will create a cgroup sandbox for each pod.
The naming convention for the cgroup sandbox is `pod<pod.UID>`. It enables
-the `kubelet` to associate a particular cgroup on the host filesytem
+the `kubelet` to associate a particular cgroup on the host filesystem
with a corresponding pod without managing any additional state. This is useful
when the `kubelet` restarts and needs to verify the cgroup filesystem.
@@ -433,7 +433,7 @@ eviction decisions for the unbounded QoS tiers (Burstable, BestEffort).
The following describes the cgroup representation of a node with pods
across multiple QoS classes.
-### Cgroup Hierachy
+### Cgroup Hierarchy
The following identifies a sample hierarchy based on the described design.
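Editor's note: the hunks above touch the `pod<pod.UID>` cgroup naming convention and the millicore-to-cgroup conversion the proposal references. A minimal sketch of both, using the kubelet's conventional cpu.shares and CFS quota math for illustration:

```go
package main

import "fmt"

const (
	sharesPerCPU  = 1024
	milliPerCPU   = 1000
	minShares     = 2
	quotaPeriodUs = 100000 // default CFS period in microseconds
)

// podCgroupName applies the pod<pod.UID> naming convention from the proposal.
func podCgroupName(podUID string) string {
	return fmt.Sprintf("pod%s", podUID)
}

// milliCPUToShares converts a CPU request in millicores to cpu.shares.
func milliCPUToShares(milliCPU int64) int64 {
	shares := milliCPU * sharesPerCPU / milliPerCPU
	if shares < minShares {
		return minShares
	}
	return shares
}

// milliCPUToQuota converts a CPU limit in millicores to a CFS quota (microseconds).
func milliCPUToQuota(milliCPU int64) int64 {
	return milliCPU * quotaPeriodUs / milliPerCPU
}

func main() {
	fmt.Println(podCgroupName("8dbb4a96-0000-4000-8000-000000000000"))
	fmt.Println(milliCPUToShares(250), milliCPUToQuota(250)) // 256 25000
}
```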
diff --git a/contributors/design-proposals/node/sysctl.md b/contributors/design-proposals/node/sysctl.md
index 8ab61b8c..4d6f505f 100644
--- a/contributors/design-proposals/node/sysctl.md
+++ b/contributors/design-proposals/node/sysctl.md
@@ -115,7 +115,7 @@ supports setting a number of whitelisted sysctls during the container creation p
Some real-world examples for the use of sysctls:
- PostgreSQL requires `kernel.shmmax` and `kernel.shmall` (among others) to be
- set to reasonable high values (compare [PostgresSQL Manual 17.4.1. Shared Memory
+ set to reasonable high values (compare [PostgreSQL Manual 17.4.1. Shared Memory
and Semaphores](http://www.postgresql.org/docs/9.1/static/kernel-resources.html)).
The default of 32 MB for shared memory is not reasonable for a database.
- RabbitMQ proposes a number of sysctl settings to optimize networking: https://www.rabbitmq.com/networking.html.
@@ -342,7 +342,7 @@ Issues:
* [x] **namespaced** in net ns
* [ ] **might have application influence** for high values as it limits the socket queue length
* [?] **No real evidence found until now for accounting**. The limit is checked by `sk_acceptq_is_full` at http://lxr.free-electrons.com/source/net/ipv4/tcp_ipv4.c#L1276. After that a new socket is created. Probably, the tcp socket buffer sysctls apply then, with their accounting, see below.
- * [ ] **very unreliable** tcp memory accounting. There have a been a number of attemps to drop that from the kernel completely, e.g. https://lkml.org/lkml/2014/9/12/401. On Fedora 24 (4.6.3) tcp accounting did not work at all, on Ubuntu 16.06 (4.4) it kind of worked in the root-cg, but in containers only values copied from the root-cg appeared.
+ * [ ] **very unreliable** tcp memory accounting. There have a been a number of attempts to drop that from the kernel completely, e.g. https://lkml.org/lkml/2014/9/12/401. On Fedora 24 (4.6.3) tcp accounting did not work at all, on Ubuntu 16.06 (4.4) it kind of worked in the root-cg, but in containers only values copied from the root-cg appeared.
- `net.ipv4.tcp_wmem`/`net.ipv4.tcp_wmem`/`net.core.rmem_max`/`net.core.wmem_max`: socket buffer sizes
* [ ] **not namespaced in net ns**, and they are not even available under `/sys/net`
- `net.ipv4.ip_local_port_range`: local tcp/udp port range
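Editor's note: the sysctl hunks above survey namespaced sysctls and cite PostgreSQL's shared-memory settings as motivation. For orientation, here is a hedged sketch of a pod requesting namespaced sysctls through the v1 PodSecurityContext.Sysctls field as it exists today; that field postdates this proposal and is shown here only to ground the discussion, not as the proposal's own mechanism.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A pod that asks for two namespaced sysctls via its security context.
	pod := corev1.Pod{
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{
					{Name: "kernel.shmmax", Value: "68719476736"},
					{Name: "net.core.somaxconn", Value: "1024"},
				},
			},
			Containers: []corev1.Container{
				{Name: "db", Image: "postgres:latest"},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.SecurityContext.Sysctls)
}
```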
diff --git a/contributors/design-proposals/resource-management/device-plugin.md b/contributors/design-proposals/resource-management/device-plugin.md
index 6c203369..4059d0f7 100644
--- a/contributors/design-proposals/resource-management/device-plugin.md
+++ b/contributors/design-proposals/resource-management/device-plugin.md
@@ -38,13 +38,13 @@ to Kubelet and monitor them without writing custom Kubernetes code.
We also want to provide a consistent and portable solution for users to
consume hardware devices across k8s clusters.
-This document describes a vendor independant solution to:
+This document describes a vendor independent solution to:
* Discovering and representing external devices
* Making these devices available to the containers, using these devices,
scrubbing and securely sharing these devices.
* Health Check of these devices
-Because devices are vendor dependant and have their own sets of problems
+Because devices are vendor dependent and have their own sets of problems
and mechanisms, the solution we describe is a plugin mechanism that may run
in a container deployed through the DaemonSets mechanism or in bare metal mode.
@@ -187,7 +187,7 @@ sockets and follow this simple pattern:
gRPC request)
2. Kubelet answers to the `RegisterRequest` with a `RegisterResponse`
containing any error Kubelet might have encountered
-3. The device plugin start it's gRPC server if it did not recieve an
+3. The device plugin start it's gRPC server if it did not receive an
error
## Unix Socket
@@ -242,7 +242,7 @@ service Registration {
// DevicePlugin is the service advertised by Device Plugins
service DevicePlugin {
// ListAndWatch returns a stream of List of Devices
- // Whenever a Device state change or a Device disapears, ListAndWatch
+ // Whenever a Device state change or a Device disappears, ListAndWatch
// returns the new list
rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}
@@ -282,7 +282,7 @@ message AllocateResponse {
}
// ListAndWatch returns a stream of List of Devices
-// Whenever a Device state change or a Device disapears, ListAndWatch
+// Whenever a Device state change or a Device disappears, ListAndWatch
// returns the new list
message ListAndWatchResponse {
repeated Device devices = 1;
@@ -485,7 +485,7 @@ spec:
Currently we require exact version match between Kubelet and Device Plugin.
API version is expected to be increased only upon incompatible API changes.
-Follow protobuf guidelines on versionning:
+Follow protobuf guidelines on versioning:
* Do not change ordering
* Do not remove fields or change types
* Add optional fields
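Editor's note: the device-plugin hunks above describe the registration handshake (the plugin dials the kubelet's registration socket, sends a RegisterRequest, and then starts its own gRPC server) and the ListAndWatch stream. A hedged sketch of the registration step follows; the `pluginapi` import path, the v1beta1 field names, and the socket paths are assumptions layered on top of the proposal's protobuf, not quoted from it.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1" // assumed import path
)

const (
	kubeletSocket = "unix:///var/lib/kubelet/device-plugins/kubelet.sock"
	pluginSocket  = "vendor-gpu.sock" // endpoint relative to the device-plugin directory
	resourceName  = "vendor.example.com/gpu"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Step 1: dial the kubelet's registration unix socket.
	conn, err := grpc.DialContext(ctx, kubeletSocket, grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatalf("cannot reach kubelet: %v", err)
	}
	defer conn.Close()

	// Step 2: send the RegisterRequest; the kubelet answers with any error it hit.
	client := pluginapi.NewRegistrationClient(conn)
	if _, err := client.Register(ctx, &pluginapi.RegisterRequest{
		Version:      pluginapi.Version,
		Endpoint:     pluginSocket,
		ResourceName: resourceName,
	}); err != nil {
		log.Fatalf("registration rejected: %v", err)
	}

	// Step 3 (not shown): start the plugin's own gRPC server and begin
	// streaming device state to the kubelet over ListAndWatch.
	log.Println("registered with kubelet")
}
```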
diff --git a/contributors/design-proposals/scheduling/pod-priority-api.md b/contributors/design-proposals/scheduling/pod-priority-api.md
index 8b5d7219..28cd414a 100644
--- a/contributors/design-proposals/scheduling/pod-priority-api.md
+++ b/contributors/design-proposals/scheduling/pod-priority-api.md
@@ -165,7 +165,7 @@ type PriorityClass struct {
metav1.ObjectMeta
// The value of this priority class. This is the actual priority that pods
- // recieve when they have the above name in their pod spec.
+ // receive when they have the above name in their pod spec.
Value int32
GlobalDefault bool
Description string
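Editor's note: the hunk above corrects the comment on PriorityClass.Value. For context, a hedged sketch of constructing such an object with the scheduling API as it later shipped (the GA scheduling/v1 group shown here postdates this proposal):

```go
package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pc := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "high-priority"},
		// Pods that reference this class by name in their spec receive this value.
		Value:         1000000,
		GlobalDefault: false,
		Description:   "For latency-critical workloads only.",
	}
	fmt.Printf("%s => %d\n", pc.Name, pc.Value)
}
```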
diff --git a/contributors/design-proposals/scheduling/rescheduling-for-critical-pods.md b/contributors/design-proposals/scheduling/rescheduling-for-critical-pods.md
index 835088fc..f899a08d 100644
--- a/contributors/design-proposals/scheduling/rescheduling-for-critical-pods.md
+++ b/contributors/design-proposals/scheduling/rescheduling-for-critical-pods.md
@@ -5,7 +5,7 @@
In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine
there is a bunch of addons which due to various reasons have to run on a regular cluster node, not the master.
Some of them are critical to have fully functional cluster: Heapster, DNS, UI. Users can break their cluster
-by evicting a critical addon (either manually or as a side effect of an other operation like upgrade)
+by evicting a critical addon (either manually or as a side effect of another operation like upgrade)
which possibly can become pending (for example when the cluster is highly utilized).
To avoid such situation we want to have a mechanism which guarantees that
critical addons are scheduled assuming the cluster is big enough.
diff --git a/contributors/design-proposals/storage/raw-block-pv.md b/contributors/design-proposals/storage/raw-block-pv.md
index 5edb5311..f5ced0a1 100644
--- a/contributors/design-proposals/storage/raw-block-pv.md
+++ b/contributors/design-proposals/storage/raw-block-pv.md
@@ -25,7 +25,7 @@ This document presents a proposal for managing raw block storage in Kubernetes u
# Value add to Kubernetes
By extending the API for volumes to specifically request a raw block device, we provide an explicit method for volume consumption,
- whereas previously any request for storage was always fulfilled with a formatted fileystem, even when the underlying storage was
+ whereas previously any request for storage was always fulfilled with a formatted filesystem, even when the underlying storage was
block. In addition, the ability to use a raw block device without a filesystem will allow
Kubernetes better support of high performance applications that can utilize raw block devices directly for their storage.
Block volumes are critical to applications like databases (MongoDB, Cassandra) that require consistent I/O performance
@@ -113,7 +113,7 @@ spec:
## Persistent Volume API Changes:
For static provisioning the admin creates the volume and also is intentional about how the volume should be consumed. For backwards
-compatibility, the absence of volumeMode will default to filesystem which is how volumes work today, which are formatted with a filesystem depending on the plug-in chosen. Recycling will not be a supported reclaim policy as it has been deprecated. The path value in the local PV definition would be overloaded to define the path of the raw block device rather than the fileystem path.
+compatibility, the absence of volumeMode will default to filesystem which is how volumes work today, which are formatted with a filesystem depending on the plug-in chosen. Recycling will not be a supported reclaim policy as it has been deprecated. The path value in the local PV definition would be overloaded to define the path of the raw block device rather than the filesystem path.
```
kind: PersistentVolume
apiVersion: v1
@@ -841,4 +841,4 @@ Feature: Discovery of block devices
Milestone 1: Dynamically provisioned PVs to dynamically allocated devices
- Milestone 2: Plugin changes with dynamic provisioning support (RBD, iSCSI, GCE, AWS & GlusterFS) \ No newline at end of file
+ Milestone 2: Plugin changes with dynamic provisioning support (RBD, iSCSI, GCE, AWS & GlusterFS)
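Editor's note: the raw-block hunks above explain that omitting volumeMode defaults to Filesystem, so Block must be requested explicitly. A minimal sketch of a claim asking for a raw block device, using today's core v1 types (which postdate this proposal); the storage request is omitted for brevity, so this is a shape illustration rather than a complete manifest.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Explicitly ask for a raw block device; leaving VolumeMode nil means Filesystem.
	block := corev1.PersistentVolumeBlock

	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "raw-block-pvc"},
		Spec: corev1.PersistentVolumeClaimSpec{
			VolumeMode:  &block,
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// Resource request omitted for brevity.
		},
	}
	fmt.Println(pvc.Name, *pvc.Spec.VolumeMode)
}
```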
diff --git a/contributors/design-proposals/storage/volume-topology-scheduling.md b/contributors/design-proposals/storage/volume-topology-scheduling.md
index 6ee21e3c..2603e225 100644
--- a/contributors/design-proposals/storage/volume-topology-scheduling.md
+++ b/contributors/design-proposals/storage/volume-topology-scheduling.md
@@ -102,7 +102,7 @@ type VolumeNodeAffinity struct {
The `Required` field is a hard constraint and indicates that the PersistentVolume
can only be accessed from Nodes that satisfy the NodeSelector.
-In the future, a `Preferred` field can be added to handle soft node contraints with
+In the future, a `Preferred` field can be added to handle soft node constraints with
weights, but will not be included in the initial implementation.
The advantages of this NodeAffinity field vs the existing method of using zone labels
@@ -492,7 +492,7 @@ if the API update fails, the cached updates need to be reverted and restored
with the actual API object. The cache will return either the cached-only
object, or the informer object, whichever one is latest. Informer updates
will always override the cached-only object. The new predicate and priority
-functions must get the objects from this cache intead of from the informer cache.
+functions must get the objects from this cache instead of from the informer cache.
This cache only stores pointers to objects and most of the time will only
point to the informer object, so the memory footprint per object is small.
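Editor's note: the first hunk above concerns the hard `Required` node constraint on a PersistentVolume. A hedged sketch of what that looks like with the core v1 types; the zone label key and values are illustrative assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hard constraint: this PV may only be accessed from nodes in the given zone.
	affinity := &corev1.VolumeNodeAffinity{
		Required: &corev1.NodeSelector{
			NodeSelectorTerms: []corev1.NodeSelectorTerm{{
				MatchExpressions: []corev1.NodeSelectorRequirement{{
					Key:      "topology.kubernetes.io/zone", // illustrative label key
					Operator: corev1.NodeSelectorOpIn,
					Values:   []string{"us-central1-a"},
				}},
			}},
		},
	}

	pv := corev1.PersistentVolume{
		Spec: corev1.PersistentVolumeSpec{NodeAffinity: affinity},
	}
	fmt.Printf("%+v\n", pv.Spec.NodeAffinity.Required.NodeSelectorTerms[0].MatchExpressions[0])
}
```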
diff --git a/contributors/devel/gubernator.md b/contributors/devel/gubernator.md
index 2a25ddd7..8d5a899c 100644
--- a/contributors/devel/gubernator.md
+++ b/contributors/devel/gubernator.md
@@ -20,7 +20,7 @@ test results.
Gubernator simplifies the debugging process and makes it easier to track down failures by automating many
steps commonly taken in searching through logs, and by offering tools to filter through logs to find relevant lines.
Gubernator automates the steps of finding the failed tests, displaying relevant logs, and determining the
-failed pods and the corresponing pod UID, namespace, and container ID.
+failed pods and the corresponding pod UID, namespace, and container ID.
It also allows for filtering of the log files to display relevant lines based on selected keywords, and
allows for multiple logs to be woven together by timestamp.
diff --git a/contributors/devel/kubemark-guide.md b/contributors/devel/kubemark-guide.md
index 2c404424..ce5727e8 100644
--- a/contributors/devel/kubemark-guide.md
+++ b/contributors/devel/kubemark-guide.md
@@ -124,7 +124,7 @@ and Scheduler talk with API server using insecure port 8080.</sub>
(We use gcr.io/ as our remote docker repository in GCE, should be different for other providers)
3. [One-off] Create and upload a Docker image for NodeProblemDetector (see kubernetes/node-problem-detector repo),
which is one of the containers in the HollowNode pod, besides HollowKubelet and HollowProxy. However we
- use it with a hollow config that esentially has an empty set of rules and conditions to be detected.
+ use it with a hollow config that essentially has an empty set of rules and conditions to be detected.
This step is required only for other cloud providers, as the docker image for GCE already exists on GCR.
4. Create secret which stores kubeconfig for use by HollowKubelet/HollowProxy, addons, and configMaps
for the HollowNode and the HollowNodeProblemDetector.
diff --git a/contributors/devel/staging.md b/contributors/devel/staging.md
index 8776cf02..79ae762f 100644
--- a/contributors/devel/staging.md
+++ b/contributors/devel/staging.md
@@ -12,7 +12,7 @@ At the time of this writing, this includes the branches
- release-1.8 / release-5.0,
- and release-1.9 / release-6.0
-of the follwing staging repos in the k8s.io org:
+of the following staging repos in the k8s.io org:
- api
- apiextensions-apiserver