author     Rémy Léone <remy.leone@gmail.com>  2019-10-30 11:00:47 +0100
committer  GitHub <noreply@github.com>        2019-10-30 11:00:47 +0100
commit     f3f999e951983a1444984dbf155e29cda3fda734 (patch)
tree       16ddd12cfa9083a40dc1b38b8e5e5932d2cebac1
parent     52480aa8412cf68524b304eff735505ed247a1b0 (diff)
parent     7cd19957a39da5ca47b3c0d07ebbd3f5d47fa9a6 (diff)
Merge branch 'master' into patch-4
-rw-r--r--  communication/mailing-list-guidelines.md                  2
-rw-r--r--  communication/slack-config/sig-docs/docs-channels.yaml    1
-rw-r--r--  sig-architecture/production-readiness.md                 31
3 files changed, 31 insertions, 3 deletions
diff --git a/communication/mailing-list-guidelines.md b/communication/mailing-list-guidelines.md
index 64116908..580c30b4 100644
--- a/communication/mailing-list-guidelines.md
+++ b/communication/mailing-list-guidelines.md
@@ -53,7 +53,7 @@ to make groups simpler to manage. This has caused some breaks in certain groups
visibility settings related to SIG and WG Google Groups.
The instructions on how to fix from Google Groups for owners of the list:
Near the top right, click **Manage group**.
-- **Informtation** -> **Directory** -> **Edit the setting to set the desired
+- **Information** -> **Directory** -> **Edit the setting to set the desired
visibility for your group.** -> **Save**.
- This [link] has all the details related to these changes.
diff --git a/communication/slack-config/sig-docs/docs-channels.yaml b/communication/slack-config/sig-docs/docs-channels.yaml
index 8c765945..de67edb7 100644
--- a/communication/slack-config/sig-docs/docs-channels.yaml
+++ b/communication/slack-config/sig-docs/docs-channels.yaml
@@ -10,4 +10,5 @@ channels:
- name: kubernetes-docs-ko
- name: kubernetes-docs-pt
- name: kubernetes-docs-ru
+ - name: kubernetes-docs-vi
- name: kubernetes-docs-zh
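For orientation, each entry under `channels:` in docs-channels.yaml is simply a named Slack channel, and this hunk registers the new Vietnamese localization channel alongside the existing ones. Below is a minimal sketch of loading such a file; the `channelConfig`/`channel` types are assumptions for illustration, not the repository's actual slack-config tooling, and any fields above the hunk (file lines 1-9) are not modeled.

```go
// Hypothetical loader for a channel list like docs-channels.yaml.
// The types below are illustrative assumptions, not the real tooling
// that consumes communication/slack-config.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v2"
)

type channel struct {
	Name string `yaml:"name"`
}

type channelConfig struct {
	Channels []channel `yaml:"channels"`
}

func main() {
	data, err := os.ReadFile("communication/slack-config/sig-docs/docs-channels.yaml")
	if err != nil {
		panic(err)
	}
	var cfg channelConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	// With this commit applied, the list now also includes kubernetes-docs-vi.
	for _, ch := range cfg.Channels {
		fmt.Println(ch.Name)
	}
}
```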
diff --git a/sig-architecture/production-readiness.md b/sig-architecture/production-readiness.md
index 17ed49dc..9f953678 100644
--- a/sig-architecture/production-readiness.md
+++ b/sig-architecture/production-readiness.md
@@ -8,8 +8,7 @@ cause increased failures in production.
## Status
The process and questionnaire are currently under development as part of the
-[PRR KEP](https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/20190731-production-readiness-review-process.md), with a target that reviews will be needed for features
-going into 1.18.
+[PRR KEP][], with a target that reviews will be needed for features going into 1.18.
During the 1.17 cycle, the PRR team will be piloting the questionnaire and other
aspects of the process.
@@ -28,6 +27,30 @@ aspects of the process.
happens if it is subsequently upgraded again?
- Are there tests for this?
* Scalability
+  - Will enabling / using the feature result in any new API calls?
+    Describe them with their impact keeping in mind the [supported limits][]
+    (e.g. 5000 nodes per cluster, 100 pods/s churn) focusing mostly on:
+    - components listing and/or watching resources they didn't before
+    - API calls that may be triggered by changes of some Kubernetes
+      resources (e.g. update object X based on changes of object Y)
+    - periodic API calls to reconcile state (e.g. periodic fetching state,
+      heartbeats, leader election, etc.)
+  - Will enabling / using the feature result in supporting new API types?
+    How many objects of that type will be supported (and how that translates
+    to limitations for users)?
+  - Will enabling / using the feature result in increasing size or count
+    of the existing API objects?
+  - Will enabling / using the feature result in increasing time taken
+    by any operations covered by [existing SLIs/SLOs][] (e.g. by adding
+    additional work, introducing new steps in between, etc.)?
+    Please describe the details if so.
+  - Will enabling / using the feature result in non-negligible increase
+    of resource usage (CPU, RAM, disk IO, ...) in any components?
+    Things to keep in mind include: additional in-memory state, additional
+    non-trivial computations, excessive access to disks (including increased
+    log volume), significant amount of data sent and/or received over
+    network, etc. Think through this in both small and large cases, again
+    with respect to the [supported limits][].
* Rollout, Upgrade, and Rollback Planning
* Dependencies
- Does this feature depend on any specific services running in the cluster
@@ -49,3 +72,7 @@ aspects of the process.
- What are the most useful log messages and what logging levels do they require?
- What steps should be taken if SLOs are not being met to determine the
problem?
+
+[PRR KEP]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/20190731-production-readiness-review-process.md
+[supported limits]: https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md
+[existing SLIs/SLOs]: https://github.com/kubernetes/community/blob/master/sig-scalability/slos/slos.md#kubernetes-slisslos
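The scalability questions added above revolve around the API traffic and resource footprint a feature introduces. As a rough, non-normative illustration (not part of the PRR questionnaire or this commit), the sketch below shows a component that starts listing and watching Pods through a client-go shared informer with a periodic resync: each such informer costs an initial LIST plus a long-lived WATCH against the apiserver, and the resync adds periodic work proportional to the number of cached objects, which is exactly the kind of call pattern the questionnaire asks authors to enumerate against the supported scalability thresholds.

```go
// Illustrative only: a component that begins watching Pods, the kind of
// "new API calls" the scalability questions ask feature authors to describe.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Out-of-cluster config from ~/.kube/config; an in-cluster component
	// would use rest.InClusterConfig() instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One shared informer factory per component keeps LIST/WATCH calls bounded
	// no matter how many controllers consume the same cache. The resync period
	// controls the periodic reconciliation work mentioned in the questionnaire.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*v1.Pod)
			fmt.Printf("observed pod %s/%s\n", pod.Namespace, pod.Name)
		},
	})

	stopCh := make(chan struct{}) // closed on shutdown in a real component
	factory.Start(stopCh)         // issues the initial LIST, then opens a WATCH
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)
	select {} // keep watching until the process exits
}
```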