 federation.md             |  2 +-
 multi-platform.md         |  2 +-
 protobuf.md               |  4 ++--
 resource-quota-scoping.md | 10 +++++-----
 runtimeconfig.md          |  2 +-
 templates.md              |  2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/federation.md b/federation.md
index 2508a327..0b2503f6 100644
--- a/federation.md
+++ b/federation.md
@@ -444,7 +444,7 @@ including discussion of:
 1. admission control
 1. initial placement of instances of a new
-service vs scheduling new instances of an existing service in response
+service vs. scheduling new instances of an existing service in response
 to auto-scaling
 1. rescheduling pods due to failure (response might be different
 depending on if it's failure of a node, rack, or whole AZ)
diff --git a/multi-platform.md b/multi-platform.md
index f278ee10..1d3633fb 100644
--- a/multi-platform.md
+++ b/multi-platform.md
@@ -227,7 +227,7 @@ These addons should also be converted to multiple platforms:
 
 ### Conflicts
 
-What should we do if there's a conflict between keeping e.g. `linux/ppc64le` builds vs merging a release blocker?
+What should we do if there's a conflict between keeping e.g. `linux/ppc64le` builds vs. merging a release blocker?
 In fact, we faced this problem while this proposal was being written; in [#25243](https://github.com/kubernetes/kubernetes/pull/25243).
 It is quite obvious that the release blocker is of higher priority.
diff --git a/protobuf.md b/protobuf.md
index 0e951827..cbedbe02 100644
--- a/protobuf.md
+++ b/protobuf.md
@@ -117,7 +117,7 @@ resembles:
   reduce the amount of memory garbage created during serialization and
   deserialization.
 * More efficient formats like Msgpack were considered, but they only offer
-  2x speed up vs the 10x observed for Protobuf
+  2x speed up vs. the 10x observed for Protobuf
 * gRPC was considered, but is a larger change that requires more core
   refactoring. This approach does not eliminate the possibility of
   switching to gRPC in the future.
@@ -356,7 +356,7 @@ deserialization of the remaining bytes into the `runtime.Unknown` type.
 ## Streaming wire format
 
 While the majority of Kubernetes APIs return single objects that can vary
-in type (Pod vs Status, PodList vs Status), the watch APIs return a stream
+in type (Pod vs. Status, PodList vs. Status), the watch APIs return a stream
 of identical objects (Events). At the time of this writing, this is the only
 current or anticipated streaming RESTful protocol (logging, port-forwarding,
 and exec protocols use a binary protocol over Websockets or SPDY).
diff --git a/resource-quota-scoping.md b/resource-quota-scoping.md
index 9647b95b..355b50bc 100644
--- a/resource-quota-scoping.md
+++ b/resource-quota-scoping.md
@@ -79,10 +79,10 @@ max number of active best-effort pods.
 In addition, the cluster-admin requires the ability to scope a quota
 that limits compute resources to exclude best-effort pods.
 
-### Ability to quota long-running vs bounded-duration compute resources
+### Ability to quota long-running vs. bounded-duration compute resources
 
 The cluster-admin may want to quota end-users separately
-based on long-running vs bounded-duration compute resources.
+based on long-running vs. bounded-duration compute resources.
 
 For example, a cluster-admin may offer more compute resources
 for long running pods that are expected to have a more permanent residence
@@ -94,7 +94,7 @@ request if there is no active traffic.
 An operator that wants to control density will offer lower quota limits
 for batch workloads than web applications.
 A classic example is a PaaS deployment where the cluster-admin may
-allow a separate budget for pods that run their web application vs pods that
+allow a separate budget for pods that run their web application vs. pods that
 build web applications.
 
 Another example is providing more quota to a database pod than a
@@ -105,8 +105,8 @@ pod that performs a database migration.
 * As a cluster-admin, I want the ability to quota
   * compute resource requests
   * compute resource limits
-  * compute resources for terminating vs non-terminating workloads
-  * compute resources for best-effort vs non-best-effort pods
+  * compute resources for terminating vs. non-terminating workloads
+  * compute resources for best-effort vs. non-best-effort pods
 
 ## Proposed Change
diff --git a/runtimeconfig.md b/runtimeconfig.md
index 51c46597..b41990aa 100644
--- a/runtimeconfig.md
+++ b/runtimeconfig.md
@@ -82,7 +82,7 @@ feature's owner(s). The following are suggested conventions:
   in each component to toggle on/off.
 - Alpha features should be disabled by default. Beta features may be enabled
   by default. Refer to docs/devel/api_changes.md#alpha-beta-and-stable-versions
-  for more detailed guidance on alpha vs beta.
+  for more detailed guidance on alpha vs. beta.
 
 ## Upgrade support
diff --git a/templates.md b/templates.md
index 22aa28e2..46951316 100644
--- a/templates.md
+++ b/templates.md
@@ -590,7 +590,7 @@ renaming parameters seems less likely than changing field paths.
 
 Openshift defines templates as a first class resource so they can be created/retrieved/etc via standard tools. This allows client tools to list available templates (available in the openshift cluster), allows existing resource security controls to be applied to templates, and generally provides a more integrated feel to templates. However there is no explicit requirement that for k8s to adopt templates, it must also adopt storing them in the cluster.
 
-### Processing templates (server vs client)
+### Processing templates (server vs. client)
 
 Openshift handles template processing via a server endpoint which consumes a template object from the client and returns the list of objects produced by processing the template. It is also possible to handle the entire template processing flow via the client, but this was deemed
