From 5068303adb8b254931bd893ea11c5957f212f0a5 Mon Sep 17 00:00:00 2001
From: Jan Safranek
Date: Tue, 28 Aug 2018 14:37:12 +0200
Subject: CSI: send pod information in NodePublishVolumeRequest

---
 .../container-storage-interface-pod-information.md | 48 ++++++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100644 contributors/design-proposals/storage/container-storage-interface-pod-information.md

diff --git a/contributors/design-proposals/storage/container-storage-interface-pod-information.md b/contributors/design-proposals/storage/container-storage-interface-pod-information.md
new file mode 100644
index 00000000..872f9d45
--- /dev/null
+++ b/contributors/design-proposals/storage/container-storage-interface-pod-information.md
@@ -0,0 +1,48 @@
+# Pod in CSI NodePublish request
+Author: @jsafrane
+
+## Goal
+* Pass Pod information (pod name/namespace/UID + service account) to CSI drivers in the `NodePublish` request as CSI volume attributes.
+
+## Motivation
+We'd like to move away from exec-based Flex to gRPC-based CSI volumes. In Flex, kubelet always passes `pod.namespace`, `pod.name`, `pod.uid` and `pod.spec.serviceAccountName` ("pod information") in every `mount` call. In the Kubernetes community we've seen some Flex drivers that use pod or service account information to authorize or audit usage of a volume, or to generate content of the volume tailored to the pod (e.g. https://github.com/Azure/kubernetes-keyvault-flexvol).
+
+CSI is agnostic to container orchestrators (such as Kubernetes, Mesos or Cloud Foundry) and as such does not understand the concept of pods and service accounts. [An enhancement of the CSI protocol](https://github.com/container-storage-interface/spec/pull/252) to pass "workload" (~pod) information from Kubernetes to the CSI driver has met some resistance.
+
+## High-level design
+We decided to pass the pod information as `NodePublishVolumeRequest.volume_attributes`.
+
+* Kubernetes passes pod information only to CSI drivers that explicitly require that information in their [`CSIDriver` instance](https://github.com/kubernetes/community/pull/2523). These drivers are tightly coupled to Kubernetes and may not work, or may require reconfiguration, on other container orchestrators. It is expected (but not required) that these drivers will provide ephemeral volumes similar to Secrets or ConfigMaps, extending Kubernetes secret or configuration sources.
+* Kubernetes will not pass pod information to CSI drivers that don't know or don't care about pods and service accounts. It is expected (but not required) that these drivers will provide real persistent storage. Such a CSI driver would reject a CSI call with pod information as invalid. This is the current behavior of Kubernetes and it will remain the default.
+
+## Detailed design
+
+### API changes
+No API changes.
+
+### CSI enhancement
+We don't need to change the CSI protocol in any way. It already allows kubelet to pass `pod.name`, `pod.uid` and `pod.spec.serviceAccountName` in the [`NodePublish` call as `volume_attributes`](https://github.com/container-storage-interface/spec/blob/master/spec.md#nodepublishvolume). `NodePublish` is roughly equivalent to the Flex `mount` call.
+
+The only thing we need to do is to **define** the names of the `volume_attributes` keys that CSI drivers can expect:
+ * `csi.storage.k8s.io/pod.name`: name of the pod that wants the volume.
+ * `csi.storage.k8s.io/pod.namespace`: namespace of the pod that wants the volume.
+ * `csi.storage.k8s.io/pod.uid`: uid of the pod that wants the volume.
+ * `csi.storage.k8s.io/serviceAccount.name`: name of the service account under which the pod operates. The namespace of the service account is the same as `pod.namespace`.
+
+Note that these attribute names are very similar to the [parameters we pass to the Flex volume plugin](https://github.com/kubernetes/kubernetes/blob/10688257e63e4d778c499ba30cddbc8c6219abe9/pkg/volume/flexvolume/driver-call.go#L55).
+
+### Kubelet
+Kubelet needs to create an informer to cache `CSIDriver` instances. It passes the informer to the CSI volume plugin as a new argument of [`ProbeVolumePlugins`](https://github.com/kubernetes/kubernetes/blob/43f805b7bdda7a5b491d34611f85c249a63d7f97/pkg/volume/csi/csi_plugin.go#L58).
+
+### CSI volume plugin
+In `SetUpAt()`, the CSI volume plugin checks the `CSIDriver` informer to see whether a `CSIDriver` instance exists for the CSI driver that handles the volume. If the instance exists and has `PodInfoRequiredOnMount` set, the volume plugin adds the `csi.storage.k8s.io/*` attributes to `volume_attributes` of the CSI volume. It blindly overwrites any existing values there.
+
+Kubelet and the volume plugin must tolerate the case when the CRD for `CSIDriver` has not been created (yet). In that case they fall back to the original behavior, i.e. they do not pass any pod information to CSI. We expect that CSI drivers will return a reasonable error code instead of mounting a wrong volume.
+
+TODO(jsafrane): check what a (shared?) informer does when it's created for a non-existing CRD. Will it start working automatically when the CRD is created? Or shall we retry creation of the informer every X seconds until the CRD is created? Alternatively, we may GET a fresh `CSIDriver` from the API server in `SetUpAt()`, without any informer.
+
+## Implementation
+
+* Alpha in 1.12 (behind `CSIPodInfo` feature gate)
+* Beta in 1.13 (behind `CSIPodInfo` feature gate)
+* GA 1.14?
-- cgit v1.2.3

From b8379c030be9794f076128b55e288e5317770f87 Mon Sep 17 00:00:00 2001
From: Jordan Liggitt
Date: Thu, 4 Oct 2018 10:45:26 -0400
Subject: Add tech_leads for sig-auth

---
 OWNERS_ALIASES     |  3 +++
 sig-auth/README.md | 10 ++++++++--
 sigs.yaml          | 16 ++++++++++------
 3 files changed, 21 insertions(+), 8 deletions(-)

diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index ae628ead..5db7a714 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -14,6 +14,9 @@ aliases:
    - mikedanese
    - enj
    - tallclair
+    - deads2k
+    - liggitt
+    - mikedanese
  sig-autoscaling-leads:
    - mwielgus
    - directxman12
diff --git a/sig-auth/README.md b/sig-auth/README.md
index b967ba7d..61b852cb 100644
--- a/sig-auth/README.md
+++ b/sig-auth/README.md
@@ -28,12 +28,18 @@ The Chairs of the SIG run operations and processes governing the SIG.
 * Mo Khan (**[@enj](https://github.com/enj)**), Red Hat
 * Tim Allclair (**[@tallclair](https://github.com/tallclair)**), Google

+### Technical Leads
+The Technical Leads of the SIG establish new subprojects, decommission existing
+subprojects, and resolve cross-subproject technical issues and decisions.
+ +* David Eads (**[@deads2k](https://github.com/deads2k)**), Red Hat +* Jordan Liggitt (**[@liggitt](https://github.com/liggitt)**), Google +* Mike Danese (**[@mikedanese](https://github.com/mikedanese)**), Google + ## Emeritus Leads * Eric Chiang (**[@ericchiang](https://github.com/ericchiang)**), Red Hat * Eric Tune (**[@erictune](https://github.com/erictune)**), Google -* David Eads (**[@deads2k](https://github.com/deads2k)**), Red Hat -* Jordan Liggitt (**[@liggitt](https://github.com/liggitt)**), Google ## Contact * [Slack](https://kubernetes.slack.com/messages/sig-auth) diff --git a/sigs.yaml b/sigs.yaml index 4008e7f8..6d8edda8 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -294,6 +294,16 @@ sigs: - name: Tim Allclair github: tallclair company: Google + tech_leads: + - name: David Eads + github: deads2k + company: Red Hat + - name: Jordan Liggitt + github: liggitt + company: Google + - name: Mike Danese + github: mikedanese + company: Google emeritus_leads: - name: Eric Chiang github: ericchiang @@ -301,12 +311,6 @@ sigs: - name: Eric Tune github: erictune company: Google - - name: David Eads - github: deads2k - company: Red Hat - - name: Jordan Liggitt - github: liggitt - company: Google meetings: - description: Regular SIG Meeting day: Wednesday -- cgit v1.2.3 From c46e07fabaeb3d6bc671705dd50c876bf2274a4b Mon Sep 17 00:00:00 2001 From: Chris Hoge Date: Fri, 18 Jan 2019 07:37:39 -0800 Subject: Update SIG-OpenStack leadership --- OWNERS_ALIASES | 2 -- sig-list.md | 2 +- sig-openstack/README.md | 5 +++++ sigs.yaml | 19 +++++++++++++------ 4 files changed, 19 insertions(+), 9 deletions(-) diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index b4327b7e..3666378c 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -75,8 +75,6 @@ aliases: - derekwaynecarr sig-openstack-leads: - hogepodge - - dklyle - - rjmorse sig-pm-leads: - apsinha - idvoretskyi diff --git a/sig-list.md b/sig-list.md index 82522fd3..3cb161e0 100644 --- a/sig-list.md +++ b/sig-list.md @@ -42,7 +42,7 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |[Multicluster](sig-multicluster/README.md)|multicluster|* [Christian Bell](https://github.com/csbell), Google
* [Quinton Hoole](https://github.com/quinton-hoole), Huawei
|* [Slack](https://kubernetes.slack.com/messages/sig-multicluster)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-multicluster)|* Regular SIG Meeting: [Tuesdays at 9:30 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* Federation v2 Working Group: [Wednesdays at 7:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Network](sig-network/README.md)|network|* [Tim Hockin](https://github.com/thockin), Google
* [Dan Williams](https://github.com/dcbw), Red Hat
* [Casey Davenport](https://github.com/caseydavenport), Tigera
|* [Slack](https://kubernetes.slack.com/messages/sig-network)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-network)|* Regular SIG Meeting: [Thursdays at 14:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Node](sig-node/README.md)|node|* [Dawn Chen](https://github.com/dchen1107), Google
* [Derek Carr](https://github.com/derekwaynecarr), Red Hat
|* [Slack](https://kubernetes.slack.com/messages/sig-node)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-node)|* Regular SIG Meeting: [Tuesdays at 10:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
-|[OpenStack](sig-openstack/README.md)|openstack|* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation
* [David Lyle](https://github.com/dklyle), Intel
* [Robert Morse](https://github.com/rjmorse), Ticketmaster
|* [Slack](https://kubernetes.slack.com/messages/sig-openstack)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-openstack)|* Regular SIG Meeting: [Wednesdays at 16:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/15UwgLbEyZyXXxVtsThcSuPiJru4CuqU9p3ttZSfTaY4/edit)
+|[OpenStack](sig-openstack/README.md)|openstack|* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation
|* [Slack](https://kubernetes.slack.com/messages/sig-openstack)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-openstack)|* Regular SIG Meeting: [Wednesdays at 16:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/15UwgLbEyZyXXxVtsThcSuPiJru4CuqU9p3ttZSfTaY4/edit)
|[PM](sig-pm/README.md)|pm|* [Aparna Sinha](https://github.com/apsinha), Google
* [Ihor Dvoretskyi](https://github.com/idvoretskyi), CNCF
* [Caleb Miles](https://github.com/calebamiles), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-pm)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-pm)|* Regular SIG Meeting: [Tuesdays at 18:30 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Release](sig-release/README.md)|release|* [Caleb Miles](https://github.com/calebamiles), Google
* [Stephen Augustus](https://github.com/justaugustus), VMware
* [Tim Pepper](https://github.com/tpepper), VMware
|* [Slack](https://kubernetes.slack.com/messages/sig-release)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-release)|* Regular SIG Meeting: [Tuesdays at 21:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Scalability](sig-scalability/README.md)|scalability|* [Wojciech Tyczynski](https://github.com/wojtek-t), Google
* [Shyam Jeedigunta](https://github.com/shyamjvs), AWS
|* [Slack](https://kubernetes.slack.com/messages/sig-scalability)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-scale)|* Regular SIG Meeting: [Thursdays at 17:30 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
diff --git a/sig-openstack/README.md b/sig-openstack/README.md index 78913846..09be8a1a 100644 --- a/sig-openstack/README.md +++ b/sig-openstack/README.md @@ -21,8 +21,13 @@ Coordinates the cross-community efforts of the OpenStack and Kubernetes communit The Chairs of the SIG run operations and processes governing the SIG. * Chris Hoge (**[@hogepodge](https://github.com/hogepodge)**), OpenStack Foundation + +## Emeritus Leads + * David Lyle (**[@dklyle](https://github.com/dklyle)**), Intel * Robert Morse (**[@rjmorse](https://github.com/rjmorse)**), Ticketmaster +* Steve Gordon (**[@xsgordon](https://github.com/xsgordon)**), Red Hat +* Ihor Dvoretskyi (**[@idvoretskyi](https://github.com/idvoretskyi)**), CNCF ## Contact * [Slack](https://kubernetes.slack.com/messages/sig-openstack) diff --git a/sigs.yaml b/sigs.yaml index 26462293..3b330490 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -1448,12 +1448,19 @@ sigs: - name: Chris Hoge github: hogepodge company: OpenStack Foundation - - name: David Lyle - github: dklyle - company: Intel - - name: Robert Morse - github: rjmorse - company: Ticketmaster + emeritus_leads: + - name: David Lyle + github: dklyle + company: Intel + - name: Robert Morse + github: rjmorse + company: Ticketmaster + - name: Steve Gordon + github: xsgordon + company: Red Hat + - name: Ihor Dvoretskyi + github: idvoretskyi + company: CNCF meetings: - description: Regular SIG Meeting day: Wednesday -- cgit v1.2.3 From cb9f1baf91d5543d590d397e4a44fed532f056dd Mon Sep 17 00:00:00 2001 From: Arnaud MAZIN Date: Tue, 22 Jan 2019 19:36:06 +0100 Subject: Added back Google slide deck --- icons/README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/icons/README.md b/icons/README.md index d7ef1312..20c8effb 100644 --- a/icons/README.md +++ b/icons/README.md @@ -74,6 +74,10 @@ There is 2 types of icons #### Exposed Pod with 3 replicas ![](./docs/k8s-exposed-pod.png) +### Slide Deck + +[Kubernetes_Icons_GSlide](https://docs.google.com/presentation/d/15h_MHjR2fzXIiGZniUdHok_FP07u1L8MAX5cN1r0j4U/edit) + ## License The Kubernetes Icons Set is licensed under a choice of either Apache-2.0 or CC-BY-4.0 (Creative Commons Attribution 4.0 International). 
The
-- cgit v1.2.3

From 28400fa1d9c9ead6318df5750d75254dc6b837e2 Mon Sep 17 00:00:00 2001
From: Arnaud MAZIN
Date: Tue, 22 Jan 2019 19:36:20 +0100
Subject: rb minor correction

---
 icons/png/resources/unlabeled/rb-128.png | Bin 7139 -> 7040 bytes
 icons/svg/resources/unlabeled/rb.svg     | 40 ++++---------------------------
 2 files changed, 5 insertions(+), 35 deletions(-)

diff --git a/icons/png/resources/unlabeled/rb-128.png b/icons/png/resources/unlabeled/rb-128.png
index 08395e32..21dfc7e5 100644
Binary files a/icons/png/resources/unlabeled/rb-128.png and b/icons/png/resources/unlabeled/rb-128.png differ
diff --git a/icons/svg/resources/unlabeled/rb.svg b/icons/svg/resources/unlabeled/rb.svg
index 09ea1a96..e9ce8e59 100644
--- a/icons/svg/resources/unlabeled/rb.svg
+++ b/icons/svg/resources/unlabeled/rb.svg
@@ -14,7 +14,7 @@
    viewBox="0 0 18.035334 17.500378"
    version="1.1"
    id="svg13826"
-   inkscape:version="0.91 r13725"
+   inkscape:version="0.92.4 5da689c313, 2019-01-14"
    sodipodi:docname="rb.svg">
@@ -26,15 +26,15 @@
      inkscape:pageopacity="0.0"
      inkscape:pageshadow="2"
      inkscape:zoom="11.313708"
-     inkscape:cx="3.0619877"
+     inkscape:cx="14.287308"
      inkscape:cy="20.247642"
      inkscape:document-units="mm"
      inkscape:current-layer="layer1"
      showgrid="false"
-     inkscape:window-width="1440"
-     inkscape:window-height="775"
+     inkscape:window-width="1920"
+     inkscape:window-height="1043"
      inkscape:window-x="0"
-     inkscape:window-y="1"
+     inkscape:window-y="0"
      inkscape:window-maximized="1"
      fit-margin-top="0"
      fit-margin-left="0"
@@ -90,35 +90,5 @@
        inkscape:connector-curvature="0"
        style="fill:#ffffff;fill-opacity:1;stroke-width:0.3861911" />
-
-
-
-
-
-- cgit v1.2.3

From 90585b57234a98637e426563afcc81a3c6122fdf Mon Sep 17 00:00:00 2001
From: Claudiu Belu
Date: Tue, 22 Jan 2019 19:50:08 -0800
Subject: Adds the [LinuxOnly] tag

If a test is known to be using Linux-specific features (e.g. `seLinuxOptions`)
or is unable to run on Windows nodes, it is labeled `[LinuxOnly]`. When using
Windows nodes, this tag should be added to the `skip` argument.

This tag was proposed in [1][2].

Depends-On: https://github.com/kubernetes/kubernetes/pull/73204

[1] https://groups.google.com/forum/#!topic/kubernetes-sig-testing/ii5584-Tkqk
[2] https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit#heading=h.ukbaidczvy3r
---
 contributors/devel/e2e-tests.md | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/contributors/devel/e2e-tests.md b/contributors/devel/e2e-tests.md
index 20698c49..e01a896f 100644
--- a/contributors/devel/e2e-tests.md
+++ b/contributors/devel/e2e-tests.md
@@ -574,6 +574,11 @@ test suite for [Conformance Testing](conformance-tests.md). This test must meet
 a number of [requirements](conformance-tests.md#conformance-test-requirements)
 to be eligible for this tag. This tag does not supersede any other labels.

+  - `[LinuxOnly]`: If a test is known to be using Linux-specific features
+(e.g. `seLinuxOptions`) or is unable to run on Windows nodes, it is labeled
+`[LinuxOnly]`. When using Windows nodes, this tag should be added to the
+`skip` argument.
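As an illustrative sketch only (this assumes the standard `hack/e2e.go` harness and Ginkgo's regex-based skip flag; the provider and flags below are assumptions to adapt to your environment), a run targeting Windows nodes might exclude these tests like so:

```sh
# Hypothetical invocation: skip every test tagged [LinuxOnly].
# The provider value is an assumption; substitute your own.
go run hack/e2e.go -- --provider=skeleton --test \
  --test_args="--ginkgo.skip=\[LinuxOnly\]"
```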
+ - The following tags are not considered to be exhaustively applied, but are intended to further categorize existing `[Conformance]` tests, or tests that are being considered as candidate for promotion to `[Conformance]` as we work to -- cgit v1.2.3 From 32c04528d6998934c95840ee5614f4ad9cfc24c9 Mon Sep 17 00:00:00 2001 From: eduartua Date: Wed, 23 Jan 2019 16:26:34 -0600 Subject: Deleted on-call-federation-build-cop.md --- contributors/devel/on-call-federation-build-cop.md | 109 --------------------- 1 file changed, 109 deletions(-) delete mode 100644 contributors/devel/on-call-federation-build-cop.md diff --git a/contributors/devel/on-call-federation-build-cop.md b/contributors/devel/on-call-federation-build-cop.md deleted file mode 100644 index c153b02a..00000000 --- a/contributors/devel/on-call-federation-build-cop.md +++ /dev/null @@ -1,109 +0,0 @@ -# Federation Buildcop Guide and Playbook - -Federation runs two classes of tests: CI and Pre-submits. - -## CI - -* These tests run on the HEADs of master and release branches (starting - from Kubernetes v1.7). -* As a result, they run on code that's already merged. -* As the name suggests, they run continuously. Currently, they are - configured to run at least once every 30 minutes. -* Federation CI tests run as periodic jobs on prow. -* CI jobs always run sequentially. In other words, no single CI job - can have two instances of the job running at the same time. -* Latest build results can be viewed in [testgrid](https://k8s-testgrid.appspot.com/sig-multicluster) - -### Configuration - -Configuration steps are described in https://github.com/kubernetes/test-infra#create-a-new-job. -Federation CI e2e job names are as below: -* master branch - `ci-federation-e2e-gce` and `ci-federation-e2e-gce-serial` -* 1.8 release branch - `ci-kubernetes-e2e-gce-federation-release-1-8` -* 1.7 release branch - `ci-kubernetes-e2e-gce-federation-release-1-7` - -Search for the above job names in various configuration files as below: - -* Prow config: https://git.k8s.io/test-infra/prow/config.yaml -* Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json -* Test grid config: https://git.k8s.io/test-infra/testgrid/config.yaml -* Job specific config: https://git.k8s.io/test-infra/jobs/env - -### Results - -Results of all the federation CI tests are listed in the corresponding -tabs on the Cluster Federation page in the testgrid. -https://k8s-testgrid.appspot.com/sig-multicluster - -### Playbook - -#### Triggering a new run - -Please ping someone who has access to the prow project and ask -them to click the `rerun` button from, for example -http://prow.k8s.io/?type=periodic&job=ci-federation-e2e-gce, -and execute the kubectl command. - -#### Quota cleanup - -Please ping someone who has access to the GCP project. Ask them to -look at the quotas and delete the leaked resources by clicking the -delete button corresponding to those leaked resources on Google Cloud -Console. - - -## Pre-submit - -* The pre-submit test is currently configured to run on the master - branch and any release branch that's 1.9 or newer. -* Multiple pre-submit jobs could be running in parallel(one per pr). -* Latest build results can be viewed in [testgrid](https://k8s-testgrid.appspot.com/presubmits-federation) -* We have following pre-submit jobs in federation - * bazel-test - Runs all the bazel test targets in federation. - * e2e-gce - Runs federation e2e tests on gce. - * verify - Runs federation unit, integration tests and few verify scripts. 
- -### Configuration - -Configuration steps are described in https://github.com/kubernetes/test-infra#create-a-new-job. -Federation pre-submit jobs have following names. -* bazel-test - `pull-federation-bazel-test` -* verify - `pull-federation-verify` -* e2e-gce - `pull-federation-e2e-gce` - -Search for the above job names in various configuration files as below: - -* Prow config: https://git.k8s.io/test-infra/prow/config.yaml -* Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json -* Test grid config: https://git.k8s.io/test-infra/testgrid/config.yaml -* Job specific config: https://git.k8s.io/test-infra/jobs/env - -### Results - -Aggregated results are available on the Gubernator dashboard page for -the federation pre-submit tests. - -https://k8s-gubernator.appspot.com/builds/kubernetes-jenkins/pr-logs/directory/pull-federation-e2e-gce - -### Metrics - -We track the flakiness metrics of all the pre-submit jobs and -individual tests that run against PRs in -[kubernetes/federation](https://github.com/kubernetes/federation). - -* The metrics that we track are documented in https://git.k8s.io/test-infra/metrics/README.md#metrics. -* Job-level metrics are available in http://storage.googleapis.com/k8s-metrics/job-flakes-latest.json. - -### Playbook - -#### Triggering a new run - -Use the `/test` command on the PR to re-trigger the test. The exact -incantation is: `/test pull-federation-e2e-gce` - -#### Quota cleanup - -Please ping someone who has access to `k8s-jkns-pr-bldr-e2e-gce-fdrtn` -GCP project. Ask them to look at the quotas and delete the leaked -resources by clicking the delete button corresponding to those leaked -resources on Google Cloud Console. -- cgit v1.2.3 From 891dede6501df6f618160a2303802f691a741e63 Mon Sep 17 00:00:00 2001 From: Aaron Crickenberger Date: Wed, 23 Jan 2019 15:05:30 -0800 Subject: Add nikhita to github admins team, adjust OWNERS grodrigues3 has graciously agreed to step down for nikhita on the team I will step in as subproject owner add the membership team as reviewers --- github-management/OWNERS | 16 ++++++++++------ github-management/README.md | 6 ++++-- 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/github-management/OWNERS b/github-management/OWNERS index cab8996b..a21d3f76 100644 --- a/github-management/OWNERS +++ b/github-management/OWNERS @@ -1,13 +1,17 @@ +# See the OWNERS docs at https://go.k8s.io/owners + +approvers: + - cblecker + - spiffxp + reviewers: - calebamiles - - cblecker - fejta - - grodrigues3 - idvoretskyi - - spiffxp -approvers: - - cblecker - - grodrigues3 + - justaugustus + - mrbobbytables + - nikhita + labels: - sig/contributor-experience - area/github-management diff --git a/github-management/README.md b/github-management/README.md index 76d206f0..74394e41 100644 --- a/github-management/README.md +++ b/github-management/README.md @@ -31,13 +31,13 @@ This team (**[@kubernetes/owners](https://github.com/orgs/kubernetes/teams/owner * Caleb Miles (**[@calebamiles](https://github.com/calebamiles)**, US Pacific) * Christoph Blecker (**[@cblecker](https://github.com/cblecker)**, CA Pacific) * Erick Fejta (**[@fejta](https://github.com/fejta)**, US Pacific) -* Garrett Rodrigues (**[@grodrigues3](https://github.com/grodrigues3)**, US Pacific) +* Nikhita Raghunath (**[@nikhita](https://github.com/nikhita)**, Indian Standard Time) * Ihor Dvoretskyi (**[@idvoretskyi](https://github.com/idvoretskyi)**, UA Eastern European) This team is responsible for holding Org Owner privileges over all the active 
Kubernetes orgs, and will take action in accordance with our policies and
procedures. All members of this team are subject to the Kubernetes
-[security embargo policy](https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md#embargo-policy).
+[security embargo policy].
 Nominations to this team will come from the Contributor Experience SIG, and
 require confirmation by the Steering Committee before taking effect. Time zones
@@ -110,3 +110,5 @@ repositories and organizations:
 - [label_sync](https://git.k8s.io/test-infra/label_sync): Add, modify, delete,
   and migrate labels across an entire organization based on a defined YAML
   configuration
+
+[security embargo policy]: https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/private-distributors-list.md#embargo-policy
-- cgit v1.2.3

From 4bfe3a67f0dbc089a60c0a0763eea2a7037fb3d5 Mon Sep 17 00:00:00 2001
From: Aaron Crickenberger
Date: Wed, 23 Jan 2019 10:24:26 -0800
Subject: wg-k8s-infra administrivia

Rename from k8s-infra-team and add youtube playlist
---
 sig-list.md            |  2 +-
 sigs.yaml              |  5 +++--
 wg-k8s-infra/README.md |  5 +++--
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/sig-list.md b/sig-list.md
index 301cb6f5..1ec4ba48 100644
--- a/sig-list.md
+++ b/sig-list.md
@@ -63,7 +63,7 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md)
|[Component Standard](wg-component-standard/README.md)||* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)
* [Dr. Stefan Schimanski](https://github.com/sttts), Red Hat
* [Michael Taufen](https://github.com/mtaufen), Google
|* [Slack](https://kubernetes.slack.com/messages/)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-component-standard)|* Regular WG Meeting: [Tuesdays at 08:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/18TsodX0fqQgViQ7HHUTAhiAwkf6bNhPXH4vNVTI7GwI)
|[Container Identity](wg-container-identity/README.md)||* [Clayton Coleman](https://github.com/smarterclayton), Red Hat
* [Greg Castle](https://github.com/destijl), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-container-identity)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-container-identity)|* Regular WG Meeting: [Wednesdays at 10:00 PDT (bi-weekly (On demand))](https://zoom.us/my/k8s.sig.auth)
|[IoT Edge](wg-iot-edge/README.md)||* [Cindy Xing](https://github.com/cindyxing), Huawei
* [Dejan Bosanac](https://github.com/dejanb), Red Hat
* [Preston Holmes](https://github.com/ptone), Google
* [Steve Wong](https://github.com/cantbewong), VMware
|* [Slack](https://kubernetes.slack.com/messages/wg-iot-edge)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-iot-edge)|* Regular WG Meeting: [Fridays at 16:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
-|[K8s Infra](wg-k8s-infra/README.md)|* Architecture
* Contributor Experience
* Release
* Testing
|* [Davanum Srinivas](https://github.com/dims), Huawei
* [Aaron Crickenberger](https://github.com/spiffxp), Google
|* [Slack](https://kubernetes.slack.com/messages/k8s-infra-team)
* [Mailing List](https://groups.google.com/forum/#!forum/k8s-infra-team)|* Regular WG Meeting: [Wednesdays at 8:30 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
+|[K8s Infra](wg-k8s-infra/README.md)|* Architecture
* Contributor Experience
* Release
* Testing
|* [Davanum Srinivas](https://github.com/dims), Huawei
* [Aaron Crickenberger](https://github.com/spiffxp), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-k8s-infra)
* [Mailing List](https://groups.google.com/forum/#!forum/wg-k8s-infra)|* Regular WG Meeting: [Wednesdays at 8:30 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Kubeadm Adoption](wg-kubeadm-adoption/README.md)||* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)
* [Justin Santa Barbara](https://github.com/justinsb)
|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular WG Meeting: [Tuesdays at 18:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Machine Learning](wg-machine-learning/README.md)||* [Vishnu Kannan](https://github.com/vishh), Google
* [Kenneth Owens](https://github.com/kow3ns), Google
* [Balaji Subramaniam](https://github.com/balajismaniam), Intel
* [Connor Doyle](https://github.com/ConnorDoyle), Intel
|* [Slack](https://kubernetes.slack.com/messages/wg-machine-learning)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-machine-learning)|* Regular WG Meeting: [Thursdays at 13:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Multitenancy](wg-multitenancy/README.md)||* [David Oppenheimer](https://github.com/davidopp), Google
* [Jessie Frazelle](https://github.com/jessfraz), Microsoft
|* [Slack](https://kubernetes.slack.com/messages/wg-multitenancy)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-multitenancy)|* Regular WG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
diff --git a/sigs.yaml b/sigs.yaml index 75843b04..1fb2a2c8 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -2473,6 +2473,7 @@ workinggroups: frequency: bi-weekly url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit archive_url: http://bit.ly/wg-k8s-infra-notes + recordings_url: http://bit.ly/wg-k8s-infra-playlist contact: - slack: k8s-infra-team # TODO(spiffxp): rename to wg-k8s-infra - mailing_list: https://groups.google.com/forum/#!forum/k8s-infra-team # TODO(spiffxp): rename to wg-k8s-infra + slack: wg-k8s-infra + mailing_list: https://groups.google.com/forum/#!forum/wg-k8s-infra diff --git a/wg-k8s-infra/README.md b/wg-k8s-infra/README.md index 2246aa65..d95e98f6 100644 --- a/wg-k8s-infra/README.md +++ b/wg-k8s-infra/README.md @@ -21,6 +21,7 @@ The [charter](charter.md) defines the scope and governance of the K8s Infra Work ## Meetings * Regular WG Meeting: [Wednesdays at 8:30 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=8:30&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](http://bit.ly/wg-k8s-infra-notes). + * [Meeting recordings](http://bit.ly/wg-k8s-infra-playlist). ## Organizers @@ -28,8 +29,8 @@ The [charter](charter.md) defines the scope and governance of the K8s Infra Work * Aaron Crickenberger (**[@spiffxp](https://github.com/spiffxp)**), Google ## Contact -* [Slack](https://kubernetes.slack.com/messages/k8s-infra-team) -* [Mailing list](https://groups.google.com/forum/#!forum/k8s-infra-team) +* [Slack](https://kubernetes.slack.com/messages/wg-k8s-infra) +* [Mailing list](https://groups.google.com/forum/#!forum/wg-k8s-infra) -- cgit v1.2.3 From ae49f47f7e94c0b938a025afc78f2642ef524b22 Mon Sep 17 00:00:00 2001 From: Jeffrey Sica Date: Thu, 24 Jan 2019 23:19:59 -0500 Subject: update sig-ui info --- OWNERS_ALIASES | 4 +++- sig-list.md | 2 +- sig-ui/README.md | 6 ++++-- sigs.yaml | 14 ++++++++++---- 4 files changed, 18 insertions(+), 8 deletions(-) diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index f778360d..24f54204 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -108,8 +108,10 @@ aliases: - stevekuznetsov - timothysc sig-ui-leads: - - danielromlein - floreks + - maciaszczykm + - danielromlein + - jeefy sig-vmware-leads: - frapposelli - cantbewong diff --git a/sig-list.md b/sig-list.md index 1ec4ba48..ef92512a 100644 --- a/sig-list.md +++ b/sig-list.md @@ -50,7 +50,7 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |[Service Catalog](sig-service-catalog/README.md)|service-catalog|* [Carolyn Van Slyck](https://github.com/carolynvs), Microsoft
* [Michael Kibbe](https://github.com/kibbles-n-bytes), Google
* [Jonathan Berkhahn](https://github.com/jberkhahn), IBM
* [Jay Boyd](https://github.com/jboyd01), Red Hat
|* [Slack](https://kubernetes.slack.com/messages/sig-service-catalog)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-service-catalog)|* Regular SIG Meeting: [Mondays at 13:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Storage](sig-storage/README.md)|storage|* [Saad Ali](https://github.com/saad-ali), Google
* [Bradley Childs](https://github.com/childsb), Red Hat
|* [Slack](https://kubernetes.slack.com/messages/sig-storage)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-storage)|* Regular SIG Meeting: [Thursdays at 9:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Testing](sig-testing/README.md)|testing|* [Aaron Crickenberger](https://github.com/spiffxp), Google
* [Erick Fejta](https://github.com/fejta), Google
* [Steve Kuznetsov](https://github.com/stevekuznetsov), Red Hat
* [Timothy St. Clair](https://github.com/timothysc), VMware
|* [Slack](https://kubernetes.slack.com/messages/sig-testing)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-testing)|* Regular SIG Meeting: [Tuesdays at 13:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* (testing-commons) Testing Commons: [Wednesdays at 07:30 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
-|[UI](sig-ui/README.md)|ui|* [Dan Romlein](https://github.com/danielromlein), Google
* [Sebastian Florek](https://github.com/floreks), Fujitsu
|* [Slack](https://kubernetes.slack.com/messages/sig-ui)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)|* Regular SIG Meeting: [Thursdays at 18:00 CET (Central European Time) (weekly)](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)
+|[UI](sig-ui/README.md)|ui|* [Sebastian Florek](https://github.com/floreks), Loodse
* [Marcin Maciaszczyk](https://github.com/maciaszczykm), Loodse
* [Dan Romlein](https://github.com/danielromlein), Google
* [Jeffrey Sica](https://github.com/jeefy), University of Michigan
|* [Slack](https://kubernetes.slack.com/messages/sig-ui)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)|* Regular SIG Meeting: [Thursdays at 18:00 CET (Central European Time) (bi-weekly)](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)
|[VMware](sig-vmware/README.md)|vmware|* [Fabio Rapposelli](https://github.com/frapposelli), VMware
* [Steve Wong](https://github.com/cantbewong), VMware
|* [Slack](https://kubernetes.slack.com/messages/sig-vmware)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware)|* Regular SIG Meeting: [Thursdays at 11:00 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* Cloud Provider vSphere monthly syncup: [Wednesdays at 09:00 PT (Pacific Time) (monthly - first Wednesday every month)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* Cluster API Provider vSphere bi-weekly syncup: [Wednesdays at 13:00 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Windows](sig-windows/README.md)|windows|* [Michael Michael](https://github.com/michmike), VMware
* [Patrick Lang](https://github.com/patricklang), Microsoft
|* [Slack](https://kubernetes.slack.com/messages/sig-windows)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-windows)|* Regular SIG Meeting: [Tuesdays at 12:30 Eastern Standard Time (EST) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
diff --git a/sig-ui/README.md b/sig-ui/README.md index 40c07ce7..16c9ada0 100644 --- a/sig-ui/README.md +++ b/sig-ui/README.md @@ -11,7 +11,7 @@ To understand how this file is generated, see https://git.k8s.io/community/gener Covers all things UI related. Efforts are centered around Kubernetes Dashboard: a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself. ## Meetings -* Regular SIG Meeting: [Thursdays at 18:00 CET (Central European Time)](https://groups.google.com/forum/#!forum/kubernetes-sig-ui) (weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=18:00&tz=CET%20%28Central%20European%20Time%29). +* Regular SIG Meeting: [Thursdays at 18:00 CET (Central European Time)](https://groups.google.com/forum/#!forum/kubernetes-sig-ui) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=18:00&tz=CET%20%28Central%20European%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/1PwHFvqiShLIq8ZpoXvE3dSUnOv1ts5BTtZ7aATuKd-E/edit?usp=sharing). ## Leadership @@ -19,8 +19,10 @@ Covers all things UI related. Efforts are centered around Kubernetes Dashboard: ### Chairs The Chairs of the SIG run operations and processes governing the SIG. +* Sebastian Florek (**[@floreks](https://github.com/floreks)**), Loodse +* Marcin Maciaszczyk (**[@maciaszczykm](https://github.com/maciaszczykm)**), Loodse * Dan Romlein (**[@danielromlein](https://github.com/danielromlein)**), Google -* Sebastian Florek (**[@floreks](https://github.com/floreks)**), Fujitsu +* Jeffrey Sica (**[@jeefy](https://github.com/jeefy)**), University of Michigan ## Contact * [Slack](https://kubernetes.slack.com/messages/sig-ui) diff --git a/sigs.yaml b/sigs.yaml index 1fb2a2c8..b7650e3b 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -2019,18 +2019,24 @@ sigs: label: ui leadership: chairs: + - name: Sebastian Florek + github: floreks + company: Loodse + - name: Marcin Maciaszczyk + github: maciaszczykm + company: Loodse - name: Dan Romlein github: danielromlein company: Google - - name: Sebastian Florek - github: floreks - company: Fujitsu + - name: Jeffrey Sica + github: jeefy + company: University of Michigan meetings: - description: Regular SIG Meeting day: Thursday time: "18:00" tz: "CET (Central European Time)" - frequency: weekly + frequency: bi-weekly url: https://groups.google.com/forum/#!forum/kubernetes-sig-ui archive_url: https://docs.google.com/document/d/1PwHFvqiShLIq8ZpoXvE3dSUnOv1ts5BTtZ7aATuKd-E/edit?usp=sharing recordings_url: -- cgit v1.2.3 From 350a52211873770326bc5aff1f15ba379a3171fd Mon Sep 17 00:00:00 2001 From: guineveresaenger Date: Thu, 24 Jan 2019 22:13:04 -0800 Subject: Add guineveresaenger to OWNERS files in contributors I would like to assist as a Reviewer in the rebuilding of our developer guide. I also think I might be of assistance approving contributor guide related issues. 
--- contributors/devel/OWNERS | 1 + contributors/guide/OWNERS | 1 + 2 files changed, 2 insertions(+) diff --git a/contributors/devel/OWNERS b/contributors/devel/OWNERS index c4d35842..9788673a 100644 --- a/contributors/devel/OWNERS +++ b/contributors/devel/OWNERS @@ -5,6 +5,7 @@ reviewers: - idvoretskyi - Phillels - spiffxp + - guineveresaenger approvers: - calebamiles - cblecker diff --git a/contributors/guide/OWNERS b/contributors/guide/OWNERS index a9abb261..b86ecfcd 100644 --- a/contributors/guide/OWNERS +++ b/contributors/guide/OWNERS @@ -8,6 +8,7 @@ reviewers: approvers: - castrojo - parispittman + - guineveresaenger labels: - sig/contributor-experience - area/contributor-guide -- cgit v1.2.3 From c6b5f21537f33cba6cdf4e58195c997a7daa94fe Mon Sep 17 00:00:00 2001 From: Dejan Bosanac Date: Wed, 23 Jan 2019 14:58:10 +0100 Subject: wg-iot-edge: Change meeting time and add APAC meeting --- sig-list.md | 2 +- sigs.yaml | 13 ++++++++++--- wg-iot-edge/README.md | 4 +++- 3 files changed, 14 insertions(+), 5 deletions(-) diff --git a/sig-list.md b/sig-list.md index 1ec4ba48..99d7a28c 100644 --- a/sig-list.md +++ b/sig-list.md @@ -62,7 +62,7 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |[Apply](wg-apply/README.md)||* [Daniel Smith](https://github.com/lavalamp), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-apply)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-apply)|* Regular WG Meeting: [Tuesdays at 9:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Component Standard](wg-component-standard/README.md)||* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)
* [Dr. Stefan Schimanski](https://github.com/sttts), Red Hat
* [Michael Taufen](https://github.com/mtaufen), Google
|* [Slack](https://kubernetes.slack.com/messages/)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-component-standard)|* Regular WG Meeting: [Tuesdays at 08:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/18TsodX0fqQgViQ7HHUTAhiAwkf6bNhPXH4vNVTI7GwI)
|[Container Identity](wg-container-identity/README.md)||* [Clayton Coleman](https://github.com/smarterclayton), Red Hat
* [Greg Castle](https://github.com/destijl), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-container-identity)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-container-identity)|* Regular WG Meeting: [Wednesdays at 10:00 PDT (bi-weekly (On demand))](https://zoom.us/my/k8s.sig.auth)
-|[IoT Edge](wg-iot-edge/README.md)||* [Cindy Xing](https://github.com/cindyxing), Huawei
* [Dejan Bosanac](https://github.com/dejanb), Red Hat
* [Preston Holmes](https://github.com/ptone), Google
* [Steve Wong](https://github.com/cantbewong), VMware
|* [Slack](https://kubernetes.slack.com/messages/wg-iot-edge)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-iot-edge)|* Regular WG Meeting: [Fridays at 16:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
+|[IoT Edge](wg-iot-edge/README.md)||* [Cindy Xing](https://github.com/cindyxing), Huawei
* [Dejan Bosanac](https://github.com/dejanb), Red Hat
* [Preston Holmes](https://github.com/ptone), Google
* [Steve Wong](https://github.com/cantbewong), VMware
|* [Slack](https://kubernetes.slack.com/messages/wg-iot-edge)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-iot-edge)|* Regular WG Meeting: [Wednesdays at 17:00 UTC (every four weeks)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* APAC WG Meeting: [Wednesdays at 5:00 UTC (every four weeks)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[K8s Infra](wg-k8s-infra/README.md)|* Architecture
* Contributor Experience
* Release
* Testing
|* [Davanum Srinivas](https://github.com/dims), Huawei
* [Aaron Crickenberger](https://github.com/spiffxp), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-k8s-infra)
* [Mailing List](https://groups.google.com/forum/#!forum/wg-k8s-infra)|* Regular WG Meeting: [Wednesdays at 8:30 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Kubeadm Adoption](wg-kubeadm-adoption/README.md)||* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)
* [Justin Santa Barbara](https://github.com/justinsb)
|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular WG Meeting: [Tuesdays at 18:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Machine Learning](wg-machine-learning/README.md)||* [Vishnu Kannan](https://github.com/vishh), Google
* [Kenneth Owens](https://github.com/kow3ns), Google
* [Balaji Subramaniam](https://github.com/balajismaniam), Intel
* [Connor Doyle](https://github.com/ConnorDoyle), Intel
|* [Slack](https://kubernetes.slack.com/messages/wg-machine-learning)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-machine-learning)|* Regular WG Meeting: [Thursdays at 13:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
diff --git a/sigs.yaml b/sigs.yaml index 1fb2a2c8..110fecca 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -2382,10 +2382,17 @@ workinggroups: company: VMWare meetings: - description: Regular WG Meeting - day: Friday - time: "16:00" + day: Wednesday + time: "17:00" tz: "UTC" - frequency: bi-weekly + frequency: every four weeks + url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit + archive_url: https://docs.google.com/document/d/1Yuwy9IO4X6XKq2wLW0pVZn5yHQxlyK7wdYBZBXRWiKI/edit?usp=sharing + - description: APAC WG Meeting + day: Wednesday + time: "5:00" + tz: "UTC" + frequency: every four weeks url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit archive_url: https://docs.google.com/document/d/1Yuwy9IO4X6XKq2wLW0pVZn5yHQxlyK7wdYBZBXRWiKI/edit?usp=sharing contact: diff --git a/wg-iot-edge/README.md b/wg-iot-edge/README.md index 3118344e..85a8e82c 100644 --- a/wg-iot-edge/README.md +++ b/wg-iot-edge/README.md @@ -11,7 +11,9 @@ To understand how this file is generated, see https://git.k8s.io/community/gener A Working Group dedicated to discussing, designing and documenting using Kubernetes for developing and deploying IoT and Edge specific applications ## Meetings -* Regular WG Meeting: [Fridays at 16:00 UTC](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=16:00&tz=UTC). +* Regular WG Meeting: [Wednesdays at 17:00 UTC](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (every four weeks). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=17:00&tz=UTC). + * [Meeting notes and Agenda](https://docs.google.com/document/d/1Yuwy9IO4X6XKq2wLW0pVZn5yHQxlyK7wdYBZBXRWiKI/edit?usp=sharing). +* APAC WG Meeting: [Wednesdays at 5:00 UTC](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (every four weeks). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=5:00&tz=UTC). * [Meeting notes and Agenda](https://docs.google.com/document/d/1Yuwy9IO4X6XKq2wLW0pVZn5yHQxlyK7wdYBZBXRWiKI/edit?usp=sharing). ## Organizers -- cgit v1.2.3 From facc916982919b205b42d32e9b562421b49d8398 Mon Sep 17 00:00:00 2001 From: "Jorge O. Castro" Date: Fri, 25 Jan 2019 11:33:44 -0500 Subject: Remove reference to SIG cluster ops, this was shut down long ago Signed-off-by: Jorge O. Castro --- OWNERS_ALIASES | 3 --- sig-list.md | 1 - sigs.yaml | 29 ----------------------------- 3 files changed, 33 deletions(-) diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index f778360d..44af0dc3 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -45,9 +45,6 @@ aliases: - roberthbailey - luxas - timothysc - sig-cluster-ops-leads: - - zehicle - - jdumars sig-contributor-experience-leads: - Phillels - parispittman diff --git a/sig-list.md b/sig-list.md index 99d7a28c..46c56dbc 100644 --- a/sig-list.md +++ b/sig-list.md @@ -33,7 +33,6 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |[CLI](sig-cli/README.md)|cli|* [Maciej Szulik](https://github.com/soltysh), Red Hat
* [Sean Sullivan](https://github.com/seans3), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-cli)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cli)|* Regular SIG Meeting: [Wednesdays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Cloud Provider](sig-cloud-provider/README.md)|cloud-provider|* [Andrew Sy Kim](https://github.com/andrewsykim), VMware
* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation
* [Jago Macleod](https://github.com/jagosan), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-cloud-provider)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider)|* Regular SIG Meeting: [Wednesdays at 1:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* (cloud-provider-extraction) Weekly Sync removing the in-tree cloud providers led by @cheftako and @d-nishi: [Thursdays at 13:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1KLsGGzNXQbsPeELCeF_q-f0h0CEGSe20xiwvcR2NlYM/edit)
|[Cluster Lifecycle](sig-cluster-lifecycle/README.md)|cluster-lifecycle|* [Robert Bailey](https://github.com/roberthbailey), Google
* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)
* [Timothy St. Clair](https://github.com/timothysc), VMware
|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular SIG Meeting: [Tuesdays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* kubeadm Office Hours: [Wednesdays at 09:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* Cluster API office hours: [Wednesdays at 10:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* Cluster API Provider Implementers' office hours (EMEA): [Wednesdays at 15:00 CEST (Central European Summer Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* Cluster API Provider Implementers' office hours (US West Coast): [Tuesdays at 12:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* Cluster API (AWS implementation) office hours: [Mondays at 10:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* kops Office Hours: [Fridays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* Kubespray Office Hours: [Wednesdays at 07:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
-|[Cluster Ops](sig-cluster-ops/README.md)|cluster-ops|* [Rob Hirschfeld](https://github.com/zehicle), RackN
* [Jaice Singer DuMars](https://github.com/jdumars), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-ops)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-ops)|* Regular SIG Meeting: [Thursdays at 20:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Contributor Experience](sig-contributor-experience/README.md)|contributor-experience|* [Elsie Phillips](https://github.com/Phillels), CoreOS
* [Paris Pittman](https://github.com/parispittman), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-contribex)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-contribex)|* Regular SIG Meeting: [Wednesdays at 9:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Docs](sig-docs/README.md)|docs|* [Andrew Chen](https://github.com/chenopis), Google
* [Zach Corleissen](https://github.com/zacharysarah), Linux Foundation
* [Jennifer Rondeau](https://github.com/bradamant3), VMware
|* [Slack](https://kubernetes.slack.com/messages/sig-docs)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)|* Regular SIG Meeting: [Tuesdays at 17:30 UTC (weekly - except fourth Tuesday every month)](https://docs.google.com/document/d/1zg6By77SGg90EVUrhDIhopjZlSDg2jCebU-Ks9cYx0w/edit)
* APAC SIG Meeting: [Wednesdays at 02:00 UTC (monthly - fourth Wednesday every month)](https://docs.google.com/document/d/1zg6By77SGg90EVUrhDIhopjZlSDg2jCebU-Ks9cYx0w/edit)
|[GCP](sig-gcp/README.md)|gcp|* [Adam Worrall](https://github.com/abgworrall), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-gcp)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-gcp)|* Regular SIG Meeting: [Thursdays at 16:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
diff --git a/sigs.yaml b/sigs.yaml
index 110fecca..edb37145 100644
--- a/sigs.yaml
+++ b/sigs.yaml
@@ -927,35 +927,6 @@ sigs:
       - name: minikube
         owners:
         - https://raw.githubusercontent.com/kubernetes/minikube/master/OWNERS
-  - name: Cluster Ops
-    dir: sig-cluster-ops
-    mission_statement: >
-      Promote operability and interoperability of Kubernetes clusters. We
-      focus on shared operations practices for Kubernetes clusters with a goal
-      to make Kubernetes broadly accessible with a common baseline reference.
-      We also organize operators as a sounding board and advocacy group.
-    charter_link:
-    label: cluster-ops
-    leadership:
-      chairs:
-      - name: Rob Hirschfeld
-        github: zehicle
-        company: RackN
-      - name: Jaice Singer DuMars
-        github: jdumars
-        company: Google
-    meetings:
-      - description: Regular SIG Meeting
-        day: Thursday
-        time: "20:00"
-        tz: "UTC"
-        frequency: biweekly
-        url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit
-        archive_url: https://docs.google.com/document/d/1IhN5v6MjcAUrvLd9dAWtKcGWBWSaRU8DNyPiof3gYMY/edit#
-        recordings_url: https://www.youtube.com/watch?v=7uyy37pCk4U&list=PL69nYSiGNLP3b38liicqy6fm2-jWT4FQR
-    contact:
-      slack: sig-cluster-ops
-      mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-ops
   - name: Contributor Experience
     dir: sig-contributor-experience
     mission_statement: >
-- cgit v1.2.3

From facc916982919b205b42d32e9b562421b49d8398 Mon Sep 17 00:00:00 2001
From: Bob Killen
Date: Fri, 25 Jan 2019 19:43:10 -0500
Subject: Add discuss guidelines

---
 communication/discuss-guidelines.md | 196 ++++++++++++++++++++++++++++++++++++
 communication/moderation.md         |   4 +
 communication/moderators.md         |  13 ++-
 3 files changed, 211 insertions(+), 2 deletions(-)
 create mode 100644 communication/discuss-guidelines.md

diff --git a/communication/discuss-guidelines.md b/communication/discuss-guidelines.md
new file mode 100644
index 00000000..2a1e90b3
--- /dev/null
+++ b/communication/discuss-guidelines.md
@@ -0,0 +1,196 @@
+# Discuss guidelines
+
+Discuss (discuss.kubernetes.io) is the Kubernetes community forum backed by
+the [Discourse] discussion platform. It serves as the primary communication
+platform for Kubernetes users, replacing the kubernetes-users mailing list in
+September 2018.
+
+Discuss, like other Kubernetes communication platforms, is public and searchable.
+Communication should be polite and respectful. Follow the general guideline of
+_"be excellent to each other"_.
+
+**Reference Links:**
+- [KEP 0007] - A community forum for Kubernetes
+- [Archive k-users] - kubernetes-users mailing list migrated to Discuss
+
+
+## Code of conduct
+
+Kubernetes adheres to the Cloud Native Computing Foundation's [Code of Conduct]
+throughout the project, including all communication mediums.
+
+
+## Privacy Policy
+
+Discuss adheres to the [Linux Foundation Privacy Policy].
+
+
+## Admins
+
+- Check the [centralized list of administrators][admins] for contact information.
+- Discuss administrators are listed on the [Discuss About page].
+
+To connect, please reach out to them using Discourse's built-in message system.
+If there is an issue with the platform itself, please use the
+[sig contributor experience mailing list] or the `#sig-contribex` Slack channel.
+
+---
+
+## General communication guidelines
+
+### PM (Private Message) conversations
+
+Please do not engage in proprietary, company-specific conversations in the
+Kubernetes Discourse instance.
This is meant for conversations around Kubernetes-related
+open source topics and the community. Proprietary conversations should
+occur in one of your company's communication platforms. As with all
+communication, please be mindful of appropriateness, professionalism, and
+applicability to the Kubernetes community.
+
+
+### Escalating and/or reporting a problem
+
+Discourse has a [built-in system for flagging inappropriate posts] that will
+notify the admins of a potentially bad post or conversation. If the post
+occurred during a period when one of the Admins may not be available, reach out
+to one of the [moderators][admins] in the closest timezone directly. As a
+moderator, they can flag the post, which will [unlist] it immediately until an
+Admin is available to review it.
+
+If there is an issue in one of the Regional Boards, engage with one of the
+Regional moderators as a first step. They will be able to add context and aid
+in the escalation process.
+
+If the problem is with one of the Admins or Moderators, reach out to one of the
+other Admins and describe the situation.
+
+If it is a [Code of Conduct] issue, contact conduct@kubernetes.io and describe
+the situation.
+
+---
+
+## Moderation
+
+Discourse has a built-in set of advanced auto-moderation capabilities that
+rely on their _"[user trust system][user-trust]"_. For example, newly created
+accounts are rate-limited on posting or replying to topics until their "trust
+level" increases. A user's trust level will increase based on a number of
+factors, including time spent on the forum, posts or replies made, likes
+received, or one of several other metrics.
+
+Moderators, both those for the General Forum and the Regional Boards, are manually
+promoted by an Admin to [Trust Level 4][user-trust]. With that come the full
+responsibilities of a board moderator.
+
+
+### Moderator expectations and guidelines
+
+Moderators should adhere to the general Kubernetes project
+[moderation guidelines].
+
+
+### Other moderator responsibilities
+
+#### Ingest queue
+
+Moderators have access to a private category called _"Ingest"_ that has topics
+posted automatically from a variety of Kubernetes/CNCF sources such as
+Kubernetes releases, Security Announcements, the [kubernetes.io blog], and other
+useful sources such as [Last Week in Kubernetes Development (LWKD)][lwkd].
+Moderators are encouraged to tag and move these articles to their relevant
+category.
+
+---
+
+## New category requests
+
+### Requesting a general category
+
+New category requests should be posted to the [Site Feedback and Help] section.
+Proposed categories should be community-focused and must be related to the
+Kubernetes project. They must **not** be company-specific, with the exception of
+cloud providers; however, their topics should not be related to proprietary
+information of the provider.
+
+Once a request has been made, you are encouraged to solicit user support for
+the category from the community. The [admins] will review the request; if two
+express their support for the category, it will be created.
+
+Once created, the _"About the Category"_ post should be updated with a brief
+description of the newly created category.
+
+
+### Requesting a SIG, WG, or sub-project category
+
+If you are associated with a [SIG, WG or sub-project] and would like a Discuss
+category to collaborate with others asynchronously, post a message with the
+category creation request to the [Site Feedback and Help] section.
An admin will reach out and provide you with a URL and mail address to use for
+your discussions.
+
+
+### Requesting a regional category
+
+The [Regional Discussions Category] is intended for those users who belong to a
+specific region or share a common language to openly interact and connect with
+each other in their native language.
+
+The anti-spam and anti-harassment features built into Discourse do not handle
+other languages as well as they do English. They can pick up on general spam
+but lack regional context. For this reason, the Regional categories require
+additional native-language moderators.
+
+To request the creation of a new Regional board, post the request to the top-level
+[Regional Discussions Category]. If possible, solicit additional support from
+the regional community and propose potential moderators. Before a Regional
+Board can be created, there must be at least one moderator, preferably two, with
+at least one in the Region's primary time zone.
+
+Once moderators have been selected, the Regional category can be created.
+
+The new board's first _"About the Category"_ post should
+contain the following text in both English and the region's language:
+```
+Welcome to the <region> category of the Kubernetes Forum! In here you can chat
+and discuss topics of interest to you about Kubernetes in [region language].
+This is a place to share Kubernetes-related news, projects, tools, blogs and
+more. This site is governed by the [CNCF Code of Conduct], and we are committed
+to making this a welcoming place for all. If you have any specific questions or
+concerns, please contact one of the moderators for the category listed
+below.
+
+**Moderator Team:**
+- <moderator> - <timezone>
+- <moderator> - <timezone>
+
+[CNCF Code of Conduct]: <link to the translated Code of Conduct>
+```
+
+The _"CNCF Code of Conduct"_ link should be linked to one of the
+[translated versions of the CNCF Code of Conduct]. If none is available, create
+an issue under the [CNCF foundation] project requesting the new translation,
+and link to the English version until a translated version is made available.
+
+Lastly, update the [discuss admins][admins] section in the [moderators.md][admins]
+list with the new region, the moderators, and their timezone.
+
+
+[Discourse]: https://discourse.org
+[KEP 0007]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-contributor-experience/0007-20180403-community-forum.md
+[archive k-users]: https://github.com/kubernetes/community/issues/2492
+[Code of Conduct]: https://github.com/cncf/foundation/blob/master/code-of-conduct.md
+[Linux Foundation Privacy Policy]: https://www.linuxfoundation.org/privacy/
+[admins]: ./moderators.md#discusskubernetesio
+[Discuss About page]: https://discuss.kubernetes.io/about
+[sig contributor experience mailing list]: https://groups.google.com/forum/#!forum/kubernetes-sig-contribex
+[built-in system for flagging inappropriate posts]: https://meta.discourse.org/t/what-are-flags-and-how-do-they-work/32783
+[unlist]: https://meta.discourse.org/t/what-is-the-difference-between-closed-unlisted-and-archived-topics/51238
+[user-trust]: https://blog.discourse.org/2018/06/understanding-discourse-trust-levels/
+[moderation guidelines]: https://github.com/kubernetes/community/blob/master/communication/moderation.md
+[kubernetes.io blog]: https://kubernetes.io/blog/
+[lwkd]: http://lwkd.info/
+[Site Feedback and Help]: https://discuss.kubernetes.io/c/site-feedback
+[SIG, WG or sub-project]: https://github.com/kubernetes/community/blob/master/sig-list.md
+[Regional Discussions Category]: https://discuss.kubernetes.io/c/regional-discussions
+[translated versions of the CNCF Code of Conduct]: https://github.com/cncf/foundation/tree/master/code-of-conduct-languages
+[CNCF foundation]: https://github.com/cncf/foundation
\ No newline at end of file
diff --git a/communication/moderation.md b/communication/moderation.md
index daf030fe..44c7021b 100644
--- a/communication/moderation.md
+++ b/communication/moderation.md
@@ -59,6 +59,10 @@ New members who post to a group will automatically have their messages put in a
 Moderators will receive emails when messages are in this queue and will process
 them accordingly.
+### Discuss + +- [Discuss Guidelines](./discuss-guidelines.md) + ### Slack - [Slack Guidelines](./slack-guidelines.md) diff --git a/communication/moderators.md b/communication/moderators.md index 42c6cf70..897868d2 100644 --- a/communication/moderators.md +++ b/communication/moderators.md @@ -33,9 +33,12 @@ See our [moderation guidelines](./moderating.md) for policies and recommendation - Bob Killen (@mrbobbytables) - ET - Jeffrey Sica (@jeefy) - ET -### Additional Moderators +### Regional category moderators -- Ihor Dvoretskyi (@idvoretskyi) - CET +- [Chinese] +- [German] +- [Italian] +- [Ukrainian] ## YouTube Channel @@ -60,3 +63,9 @@ See our [moderation guidelines](./moderating.md) for policies and recommendation - Paris Pittman (@parispittman) - PT - Jorge Castro (@castrojo) - ET + + +[Chinese]: https://discuss.kubernetes.io/t/about-the-chinese-category/2881 +[German]: https://discuss.kubernetes.io/t/about-the-german-category/3152 +[Italian]: https://discuss.kubernetes.io/t/about-the-italian-category/2917/2 +[Ukrainian]: https://discuss.kubernetes.io/t/about-the-ukrainian-category/2916 \ No newline at end of file -- cgit v1.2.3 From ca250cebf7d6ed6457ae07c66d4b3e35f15609a4 Mon Sep 17 00:00:00 2001 From: Jeffrey Sica Date: Mon, 28 Jan 2019 08:42:43 -0500 Subject: include new repo for dashboard metrics scraper --- sig-ui/README.md | 1 + sigs.yaml | 1 + 2 files changed, 2 insertions(+) diff --git a/sig-ui/README.md b/sig-ui/README.md index 16c9ada0..773d6639 100644 --- a/sig-ui/README.md +++ b/sig-ui/README.md @@ -35,6 +35,7 @@ The following subprojects are owned by sig-ui: - **dashboard** - Owners: - https://raw.githubusercontent.com/kubernetes/dashboard/master/OWNERS + - https://raw.githubusercontent.com/kubernetes-sigs/dashboard-metrics-scraper/master/OWNERS diff --git a/sigs.yaml b/sigs.yaml index b7650e3b..d75446b7 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -2047,6 +2047,7 @@ sigs: - name: dashboard owners: - https://raw.githubusercontent.com/kubernetes/dashboard/master/OWNERS + - https://raw.githubusercontent.com/kubernetes-sigs/dashboard-metrics-scraper/master/OWNERS - name: VMware dir: sig-vmware mission_statement: > -- cgit v1.2.3 From ae78ea337d624eabefc199ac86756894cd1300fd Mon Sep 17 00:00:00 2001 From: Christopher Hein Date: Tue, 22 Jan 2019 23:13:17 +0000 Subject: adding cadence starting for sig-aws Signed-off-by: Christopher Hein --- sigs.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sigs.yaml b/sigs.yaml index a5ddfdb1..2b23b1f1 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -520,7 +520,7 @@ sigs: day: Friday time: "9:00" tz: "PT (Pacific Time)" - frequency: biweekly + frequency: "biweekly 2019 start date: Jan. 11th" url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit archive_url: https://docs.google.com/document/d/1-i0xQidlXnFEP9fXHWkBxqySkXwJnrGJP9OGyP2_P14/edit contact: -- cgit v1.2.3 From 3a44a7e0587e6320d0eefef53ac7cb22d1167059 Mon Sep 17 00:00:00 2001 From: Christopher Hein Date: Tue, 22 Jan 2019 23:13:39 +0000 Subject: adding generated markdown Signed-off-by: Christopher Hein --- sig-aws/README.md | 2 +- sig-list.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/sig-aws/README.md b/sig-aws/README.md index f7eb2524..17c07c25 100644 --- a/sig-aws/README.md +++ b/sig-aws/README.md @@ -13,7 +13,7 @@ Covers maintaining, supporting, and using Kubernetes hosted on AWS Cloud. The [charter](charter.md) defines the scope and governance of the AWS Special Interest Group. 
## Meetings -* Regular SIG Meeting: [Fridays at 9:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:00&tz=PT%20%28Pacific%20Time%29). +* Regular SIG Meeting: [Fridays at 9:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (biweekly 2019 start date: Jan. 11th). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:00&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/1-i0xQidlXnFEP9fXHWkBxqySkXwJnrGJP9OGyP2_P14/edit). ## Leadership diff --git a/sig-list.md b/sig-list.md index b6ab80b0..9cbeaff2 100644 --- a/sig-list.md +++ b/sig-list.md @@ -27,7 +27,7 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |[Architecture](sig-architecture/README.md)|architecture|* [Brian Grant](https://github.com/bgrant0607), Google
* [Jaice Singer DuMars](https://github.com/jdumars), Google
* [Matt Farina](https://github.com/mattfarina), Samsung SDS
|* [Slack](https://kubernetes.slack.com/messages/sig-architecture)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-architecture)|* Regular SIG Meeting: [Thursdays at 19:00 UTC (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Auth](sig-auth/README.md)|auth|* [Mike Danese](https://github.com/mikedanese), Google
* [Mo Khan](https://github.com/enj), Red Hat
* [Tim Allclair](https://github.com/tallclair), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-auth)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-auth)|* Regular SIG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Autoscaling](sig-autoscaling/README.md)|autoscaling|* [Marcin Wielgus](https://github.com/mwielgus), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-autoscaling)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-autoscaling)|* Regular SIG Meeting: [Mondays at 14:00 UTC (biweekly/triweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
-|[AWS](sig-aws/README.md)|aws|* [Justin Santa Barbara](https://github.com/justinsb)
* [Kris Nova](https://github.com/kris-nova), VMware
* [Nishi Davidson](https://github.com/d-nishi), AWS
|* [Slack](https://kubernetes.slack.com/messages/sig-aws)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-aws)|* Regular SIG Meeting: [Fridays at 9:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
+|[AWS](sig-aws/README.md)|aws|* [Justin Santa Barbara](https://github.com/justinsb)
* [Kris Nova](https://github.com/kris-nova), VMware
* [Nishi Davidson](https://github.com/d-nishi), AWS
|* [Slack](https://kubernetes.slack.com/messages/sig-aws)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-aws)|* Regular SIG Meeting: [Fridays at 9:00 PT (Pacific Time) (biweekly 2019 start date: Jan. 11th)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Azure](sig-azure/README.md)|azure|* [Stephen Augustus](https://github.com/justaugustus), VMware
* [Dave Strebel](https://github.com/dstrebel), Microsoft
|* [Slack](https://kubernetes.slack.com/messages/sig-azure)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-azure)|* Regular SIG Meeting: [Wednesdays at 16:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Big Data](sig-big-data/README.md)|big-data|* [Anirudh Ramanathan](https://github.com/foxish), Rockset
* [Erik Erlandson](https://github.com/erikerlandson), Red Hat
* [Yinan Li](https://github.com/liyinan926), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-big-data)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-big-data)|* Regular SIG Meeting: [Wednesdays at 17:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[CLI](sig-cli/README.md)|cli|* [Maciej Szulik](https://github.com/soltysh), Red Hat
* [Sean Sullivan](https://github.com/seans3), Google
|* [Slack](https://kubernetes.slack.com/messages/sig-cli)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cli)|* Regular SIG Meeting: [Wednesdays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
-- cgit v1.2.3 From dd5f03ec493bf9ac887ce83b1015751e5f1c7755 Mon Sep 17 00:00:00 2001 From: eduartua Date: Mon, 28 Jan 2019 14:37:52 -0600 Subject: created sig-instrumentation folder and file instrumentation.md was placed in - URLs in k/community were updated --- contributors/devel/instrumentation.md | 215 --------------------- .../devel/sig-instrumentation/instrumentation.md | 215 +++++++++++++++++++++ 2 files changed, 215 insertions(+), 215 deletions(-) delete mode 100644 contributors/devel/instrumentation.md create mode 100644 contributors/devel/sig-instrumentation/instrumentation.md diff --git a/contributors/devel/instrumentation.md b/contributors/devel/instrumentation.md deleted file mode 100644 index b0a11193..00000000 --- a/contributors/devel/instrumentation.md +++ /dev/null @@ -1,215 +0,0 @@ -## Instrumenting Kubernetes - -The following references and outlines general guidelines for metric instrumentation -in Kubernetes components. Components are instrumented using the -[Prometheus Go client library](https://github.com/prometheus/client_golang). For non-Go -components. [Libraries in other languages](https://prometheus.io/docs/instrumenting/clientlibs/) -are available. - -The metrics are exposed via HTTP in the -[Prometheus metric format](https://prometheus.io/docs/instrumenting/exposition_formats/), -which is open and well-understood by a wide range of third party applications and vendors -outside of the Prometheus eco-system. - -The [general instrumentation advice](https://prometheus.io/docs/practices/instrumentation/) -from the Prometheus documentation applies. This document reiterates common pitfalls and some -Kubernetes specific considerations. - -Prometheus metrics are cheap as they have minimal internal memory state. Set and increment -operations are thread safe and take 10-25 nanoseconds (Go & Java). -Thus, instrumentation can and should cover all operationally relevant aspects of an application, -internal and external. - -## Quick Start - -The following describes the basic steps required to add a new metric (in Go). - -1. Import "github.com/prometheus/client_golang/prometheus". - -2. Create a top-level var to define the metric. For this, you have to: - - 1. Pick the type of metric. Use a Gauge for things you want to set to a -particular value, a Counter for things you want to increment, or a Histogram or -Summary for histograms/distributions of values (typically for latency). -Histograms are better if you're going to aggregate the values across jobs, while -summaries are better if you just want the job to give you a useful summary of -the values. - 2. Give the metric a name and description. - 3. Pick whether you want to distinguish different categories of things using -labels on the metric. If so, add "Vec" to the name of the type of metric you -want and add a slice of the label names to the definition. - - [Example](https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53) - ```go - requestCounter = prometheus.NewCounterVec( - prometheus.CounterOpts{ - Name: "apiserver_request_count", - Help: "Counter of apiserver requests broken out for each verb, API resource, client, and HTTP response code.", - }, - []string{"verb", "resource", "client", "code"}, - ) - ``` - -3. Register the metric so that prometheus will know to export it. 
- - [Example](https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78) - ```go - func init() { - prometheus.MustRegister(requestCounter) - prometheus.MustRegister(requestLatencies) - prometheus.MustRegister(requestLatenciesSummary) - } - ``` - -4. Use the metric by calling the appropriate method for your metric type (Set, -Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary), -first calling WithLabelValues if your metric has any labels - - [Example](https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87) - ```go - requestCounter.WithLabelValues(*verb, *resource, client, strconv.Itoa(*httpCode)).Inc() - ``` - - -## Instrumentation types - -Components have metrics capturing events and states that are inherent to their -application logic. Examples are request and error counters, request latency -histograms, or internal garbage collection cycles. Those metrics are instrumented -directly in the application code. - -Secondly, there are business logic metrics. Those are not about observed application -behavior but abstract system state, such as desired replicas for a deployment. -They are not directly instrumented but collected from otherwise exposed data. - -In Kubernetes they are generally captured in the [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics) -component, which reads them from the API server. -For this types of metric exposition, the -[exporter guidelines](https://prometheus.io/docs/instrumenting/writing_exporters/) -apply additionally. - -## Naming - -Metrics added directly by application or package code should have a unique name. -This avoids collisions of metrics added via dependencies. They also clearly -distinguish metrics collected with different semantics. This is solved through -prefixes: - -``` -_ -``` - -For example, suppose the kubelet instrumented its HTTP requests but also uses -an HTTP router providing its own implementation. Both expose metrics on total -http requests. They should be distinguishable as in: - -``` -kubelet_http_requests_total{path=”/some/path”,status=”200”} -routerpkg_http_requests_total{path=”/some/path”,status=”200”,method=”GET”} -``` - -As we can see they expose different labels and thus a naming collision would -not have been possible to resolve even if both metrics counted the exact same -requests. - -Resource objects that occur in names should inherit the spelling that is used -in kubectl, i.e. daemon sets are `daemonset` rather than `daemon_set`. - -## Dimensionality & Cardinality - -Metrics can often replace more expensive logging as they are time-aggregated -over a sampling interval. The [multidimensional data model](https://prometheus.io/docs/concepts/data_model/) -enables deep insights and all metrics should use those label dimensions -where appropriate. - -A common error that often causes performance issues in the ingesting metric -system is considering dimensions that inhibit or eliminate time aggregation -by being too specific. Typically those are user IDs or error messages. -More generally: one should know a comprehensive list of all possible values -for a label at instrumentation time. - -Notable exceptions are exporters like kube-state-metrics, which expose per-pod -or per-deployment metrics, which are theoretically unbound over time as one could -constantly create new ones, with new names. 
However, they have -a reasonable upper bound for a given size of infrastructure they refer to and -its typical frequency of changes. - -In general, “external” labels like pod or node name do not belong in the -instrumentation itself. They are to be attached to metrics by the collecting -system that has the external knowledge ([blog post](https://www.robustperception.io/target-labels-are-for-life-not-just-for-christmas/)). - -## Normalization - -Metrics should be normalized with respect to their dimensions. They should -expose the minimal set of labels, each of which provides additional information. -Labels that are composed from values of different labels are not desirable. -For example: - -``` -example_metric{pod=”abc”,container=”proxy”,container_long=”abc/proxy”} -``` - -It often seems feasible to add additional meta information about an object -to all metrics about that object, e.g.: - -``` -kube_pod_container_restarts{namespace=...,pod=...,container=...} -``` - -A common use case is wanting to look at such metrics w.r.t to the node the -pod is scheduled on. So it seems convenient to add a “node” label. - -``` -kube_pod_container_restarts{namespace=...,pod=...,container=...,node=...} -``` - -This however only caters to one specific query use case. There are many more -pieces of metadata that could be added, effectively blowing up the instrumentation. -They are also not guaranteed to be stable over time. What if pods at some -point can be live migrated? -Those pieces of information should be normalized into an info-level metric -([blog post](https://www.robustperception.io/exposing-the-software-version-to-prometheus/)), -which is always set to 1. For example: - -``` -kube_pod_info{pod=...,namespace=...,pod_ip=...,host_ip=..,node=..., ...} -``` - -The metric system can later denormalize those along the identifying labels -“pod” and “namespace” labels. This leads to... - -## Resource Referencing - -It is often desirable to correlate different metrics about a common object, -such as a pod. Label dimensions can be used to match up different metrics. -This is most easy if label names and values are following a common pattern. -For metrics exposed by the same application, that often happens naturally. - -For a system composed of several independent, and also pluggable components, -it makes sense to set cross-component standards to allow easy querying in -metric systems without extensive post-processing of data. -In Kubernetes, those are the resource objects such as deployments, -pods, or services and the namespace they belong to. - -The following should be consistently used: - -``` -example_metric_ccc{pod=”example-app-5378923”, namespace=”default”} -``` - -An object is referenced by its unique name in a label named after the resource -itself (i.e. `pod`/`deployment`/... and not `pod_name`/`deployment_name`) -and the namespace it belongs to in the `namespace` label. - -Note: namespace/name combinations are only unique at a certain point in time. -For time series this is given by the timestamp associated with any data point. -UUIDs are truly unique but not convenient to use in user-facing time series -queries. -They can still be incorporated using an info level metric as described above for -`kube_pod_info`. 
A query to a metric system selecting by UUID via a the info level
-metric could look as follows:
-
-```
-kube_pod_restarts and on(namespace, pod) kube_pod_info{uuid=”ABC”}
-```
-
diff --git a/contributors/devel/sig-instrumentation/instrumentation.md b/contributors/devel/sig-instrumentation/instrumentation.md
new file mode 100644
index 00000000..b0a11193
--- /dev/null
+++ b/contributors/devel/sig-instrumentation/instrumentation.md
@@ -0,0 +1,215 @@
+## Instrumenting Kubernetes
+
+The following outlines general guidelines and references for metric instrumentation
+in Kubernetes components. Components are instrumented using the
+[Prometheus Go client library](https://github.com/prometheus/client_golang). For non-Go
+components, [libraries in other languages](https://prometheus.io/docs/instrumenting/clientlibs/)
+are available.
+
+The metrics are exposed via HTTP in the
+[Prometheus metric format](https://prometheus.io/docs/instrumenting/exposition_formats/),
+which is open and well-understood by a wide range of third-party applications and vendors
+outside of the Prometheus ecosystem.
+
+The [general instrumentation advice](https://prometheus.io/docs/practices/instrumentation/)
+from the Prometheus documentation applies. This document reiterates common pitfalls and some
+Kubernetes-specific considerations.
+
+Prometheus metrics are cheap as they have minimal internal memory state. Set and increment
+operations are thread-safe and take 10-25 nanoseconds (Go & Java).
+Thus, instrumentation can and should cover all operationally relevant aspects of an application,
+internal and external.
+
+## Quick Start
+
+The following describes the basic steps required to add a new metric (in Go).
+
+1. Import "github.com/prometheus/client_golang/prometheus".
+
+2. Create a top-level var to define the metric. For this, you have to:
+
+   1. Pick the type of metric. Use a Gauge for things you want to set to a
+particular value, a Counter for things you want to increment, or a Histogram or
+Summary for histograms/distributions of values (typically for latency).
+Histograms are better if you're going to aggregate the values across jobs, while
+summaries are better if you just want the job to give you a useful summary of
+the values.
+   2. Give the metric a name and description.
+   3. Pick whether you want to distinguish different categories of things using
+labels on the metric. If so, add "Vec" to the name of the type of metric you
+want and add a slice of the label names to the definition.
+
+   [Example](https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53)
+   ```go
+   requestCounter = prometheus.NewCounterVec(
+       prometheus.CounterOpts{
+           Name: "apiserver_request_count",
+           Help: "Counter of apiserver requests broken out for each verb, API resource, client, and HTTP response code.",
+       },
+       []string{"verb", "resource", "client", "code"},
+   )
+   ```
+
+3. Register the metric so that prometheus will know to export it.
+
+   [Example](https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78)
+   ```go
+   func init() {
+       prometheus.MustRegister(requestCounter)
+       prometheus.MustRegister(requestLatencies)
+       prometheus.MustRegister(requestLatenciesSummary)
+   }
+   ```
+
+4. Use the metric by calling the appropriate method for your metric type (Set,
+Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary),
+first calling WithLabelValues if your metric has any labels
+
+   [Example](https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87)
+   ```go
+   requestCounter.WithLabelValues(*verb, *resource, client, strconv.Itoa(*httpCode)).Inc()
+   ```
+
+
+## Instrumentation types
+
+Components have metrics capturing events and states that are inherent to their
+application logic. Examples are request and error counters, request latency
+histograms, or internal garbage collection cycles. Those metrics are instrumented
+directly in the application code.
+
+Secondly, there are business logic metrics. Those are not about observed application
+behavior but abstract system state, such as desired replicas for a deployment.
+They are not directly instrumented but collected from otherwise exposed data.
+
+In Kubernetes they are generally captured in the [kube-state-metrics](https://github.com/kubernetes/kube-state-metrics)
+component, which reads them from the API server.
+For these types of metric exposition, the
+[exporter guidelines](https://prometheus.io/docs/instrumenting/writing_exporters/)
+apply additionally.
+
+## Naming
+
+Metrics added directly by application or package code should have a unique name.
+This avoids collisions of metrics added via dependencies. They also clearly
+distinguish metrics collected with different semantics. This is solved through
+prefixes:
+
+```
+<component>_<metric>
+```
+
+For example, suppose the kubelet instrumented its HTTP requests but also uses
+an HTTP router providing its own implementation. Both expose metrics on total
+http requests. They should be distinguishable as in:
+
+```
+kubelet_http_requests_total{path="/some/path",status="200"}
+routerpkg_http_requests_total{path="/some/path",status="200",method="GET"}
+```
+
+As we can see they expose different labels and thus a naming collision would
+not have been possible to resolve even if both metrics counted the exact same
+requests.
+
+Resource objects that occur in names should inherit the spelling that is used
+in kubectl, i.e. daemon sets are `daemonset` rather than `daemon_set`.
+
+## Dimensionality & Cardinality
+
+Metrics can often replace more expensive logging as they are time-aggregated
+over a sampling interval. The [multidimensional data model](https://prometheus.io/docs/concepts/data_model/)
+enables deep insights and all metrics should use those label dimensions
+where appropriate.
+
+A common error that often causes performance issues in the ingesting metric
+system is considering dimensions that inhibit or eliminate time aggregation
+by being too specific. Typically those are user IDs or error messages.
+More generally: one should know a comprehensive list of all possible values
+for a label at instrumentation time.
+
+Notable exceptions are exporters like kube-state-metrics, which expose per-pod
+or per-deployment metrics, which are theoretically unbound over time as one could
+constantly create new ones, with new names. However, they have
+a reasonable upper bound for a given size of infrastructure they refer to and
+its typical frequency of changes.
+
+In general, "external" labels like pod or node name do not belong in the
+instrumentation itself.
+They are to be attached to metrics by the collecting
+system that has the external knowledge ([blog post](https://www.robustperception.io/target-labels-are-for-life-not-just-for-christmas/)).
+
+## Normalization
+
+Metrics should be normalized with respect to their dimensions. They should
+expose the minimal set of labels, each of which provides additional information.
+Labels that are composed from values of different labels are not desirable.
+For example:
+
+```
+example_metric{pod="abc",container="proxy",container_long="abc/proxy"}
+```
+
+It often seems feasible to add additional meta information about an object
+to all metrics about that object, e.g.:
+
+```
+kube_pod_container_restarts{namespace=...,pod=...,container=...}
+```
+
+A common use case is wanting to look at such metrics w.r.t. the node the
+pod is scheduled on. So it seems convenient to add a "node" label.
+
+```
+kube_pod_container_restarts{namespace=...,pod=...,container=...,node=...}
+```
+
+This, however, only caters to one specific query use case. There are many more
+pieces of metadata that could be added, effectively blowing up the instrumentation.
+They are also not guaranteed to be stable over time. What if pods at some
+point can be live migrated?
+Those pieces of information should be normalized into an info-level metric
+([blog post](https://www.robustperception.io/exposing-the-software-version-to-prometheus/)),
+which is always set to 1. For example:
+
+```
+kube_pod_info{pod=...,namespace=...,pod_ip=...,host_ip=..,node=..., ...}
+```
+
+The metric system can later denormalize those along the identifying
+"pod" and "namespace" labels. This leads to...
+
+## Resource Referencing
+
+It is often desirable to correlate different metrics about a common object,
+such as a pod. Label dimensions can be used to match up different metrics.
+This is easiest if label names and values follow a common pattern.
+For metrics exposed by the same application, that often happens naturally.
+
+For a system composed of several independent, and also pluggable components,
+it makes sense to set cross-component standards to allow easy querying in
+metric systems without extensive post-processing of data.
+In Kubernetes, those are the resource objects such as deployments,
+pods, or services and the namespace they belong to.
+
+The following should be consistently used:
+
+```
+example_metric_ccc{pod="example-app-5378923", namespace="default"}
+```
+
+An object is referenced by its unique name in a label named after the resource
+itself (i.e. `pod`/`deployment`/... and not `pod_name`/`deployment_name`)
+and the namespace it belongs to in the `namespace` label.
+
+Note: namespace/name combinations are only unique at a certain point in time.
+For time series this is given by the timestamp associated with any data point.
+UUIDs are truly unique but not convenient to use in user-facing time series
+queries.
+They can still be incorporated using an info-level metric as described above for
+`kube_pod_info`.
+A query to a metric system selecting by UUID via the info-level
+metric could look as follows:
+
+```
+kube_pod_restarts and on(namespace, pod) kube_pod_info{uuid="ABC"}
+```
+
--
cgit v1.2.3


From 9cfd840e1cd9376f562662dfe8135d3042a1e4cd Mon Sep 17 00:00:00 2001
From: eduartua
Date: Mon, 28 Jan 2019 14:45:07 -0600
Subject: file logging.md was moved to the new sig-instrumentation folder -
 URLs in k/community were updated

---
 contributors/devel/README.md                      |  4 +--
 contributors/devel/instrumentation.md             |  3 ++
 contributors/devel/logging.md                     | 35 ++---------------
 contributors/devel/sig-instrumentation/logging.md | 34 ++++++++++++++++
 contributors/guide/coding-conventions.md          |  2 +-
 sig-instrumentation/charter.md                    |  2 +-
 6 files changed, 43 insertions(+), 37 deletions(-)
 create mode 100644 contributors/devel/instrumentation.md
 create mode 100644 contributors/devel/sig-instrumentation/logging.md

diff --git a/contributors/devel/README.md b/contributors/devel/README.md
index 626adaad..e943a7a1 100644
--- a/contributors/devel/README.md
+++ b/contributors/devel/README.md
@@ -32,12 +32,12 @@ Guide](http://kubernetes.io/docs/admin/).
 * **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of
 99.9% flake free tests. Here's how to run your tests many times.
 
-* **Logging Conventions** ([logging.md](logging.md)): Glog levels.
+* **Logging Conventions** ([logging.md](sig-instrumentation/logging.md)): Glog levels.
 
 * **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go
 pprof profiler to Kubernetes.
 
 * **Instrumenting Kubernetes with a new metric**
-  ([instrumentation.md](instrumentation.md)): How to add a new metrics to the
+  ([instrumentation.md](sig-instrumentation/instrumentation.md)): How to add a new metric to the
 Kubernetes code base.
 
 * **Coding Conventions** ([coding-conventions.md](../guide/coding-conventions.md)):
diff --git a/contributors/devel/instrumentation.md b/contributors/devel/instrumentation.md
new file mode 100644
index 00000000..110359b2
--- /dev/null
+++ b/contributors/devel/instrumentation.md
@@ -0,0 +1,3 @@
+This file has moved to https://git.k8s.io/community/contributors/devel/sig-instrumentation/instrumentation.md.
+
+This file is a placeholder to preserve links. Please remove by April 28, 2019 or the release of kubernetes 1.13, whichever comes first.
\ No newline at end of file
diff --git a/contributors/devel/logging.md b/contributors/devel/logging.md
index c4da6829..d857bc64 100644
--- a/contributors/devel/logging.md
+++ b/contributors/devel/logging.md
@@ -1,34 +1,3 @@
-## Logging Conventions
+This file has moved to https://git.k8s.io/community/contributors/devel/sig-instrumentation/logging.md.
-
-The following conventions for the klog levels to use.
-[klog](http://godoc.org/github.com/kubernetes/klog) is globally preferred to
-[log](http://golang.org/pkg/log/) for better runtime control.
-
-* klog.Errorf() - Always an error
-
-* klog.Warningf() - Something unexpected, but probably not an error
-
-* klog.Infof() has multiple levels:
-  * klog.V(0) - Generally useful for this to ALWAYS be visible to an operator
-    * Programmer errors
-    * Logging extra info about a panic
-    * CLI argument handling
-  * klog.V(1) - A reasonable default log level if you don't want verbosity.
-    * Information about config (listening on X, watching Y)
-    * Errors that repeat frequently that relate to conditions that can be corrected (pod detected as unhealthy)
-  * klog.V(2) - Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
-    * Logging HTTP requests and their exit code
-    * System state changing (killing pod)
-    * Controller state change events (starting pods)
-    * Scheduler log messages
-  * klog.V(3) - Extended information about changes
-    * More info about system state changes
-  * klog.V(4) - Debug level verbosity
-    * Logging in particularly thorny parts of code where you may want to come back later and check it
-  * klog.V(5) - Trace level verbosity
-    * Context to understand the steps leading up to errors and warnings
-    * More information for troubleshooting reported issues
-
-As per the comments, the practical default level is V(2). Developers and QE
-environments may wish to run at V(3) or V(4). If you wish to change the log
-level, you can pass in `-v=X` where X is the desired maximum level to log.
+This file is a placeholder to preserve links. Please remove by April 28, 2019 or the release of kubernetes 1.13, whichever comes first.
\ No newline at end of file
diff --git a/contributors/devel/sig-instrumentation/logging.md b/contributors/devel/sig-instrumentation/logging.md
new file mode 100644
index 00000000..c4da6829
--- /dev/null
+++ b/contributors/devel/sig-instrumentation/logging.md
@@ -0,0 +1,34 @@
+## Logging Conventions
+
+The following are the conventions for the klog levels to use.
+[klog](http://godoc.org/github.com/kubernetes/klog) is globally preferred to
+[log](http://golang.org/pkg/log/) for better runtime control.
+
+* klog.Errorf() - Always an error
+
+* klog.Warningf() - Something unexpected, but probably not an error
+
+* klog.Infof() has multiple levels:
+  * klog.V(0) - Generally useful for this to ALWAYS be visible to an operator
+    * Programmer errors
+    * Logging extra info about a panic
+    * CLI argument handling
+  * klog.V(1) - A reasonable default log level if you don't want verbosity.
+    * Information about config (listening on X, watching Y)
+    * Errors that repeat frequently that relate to conditions that can be corrected (pod detected as unhealthy)
+  * klog.V(2) - Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
+    * Logging HTTP requests and their exit code
+    * System state changing (killing pod)
+    * Controller state change events (starting pods)
+    * Scheduler log messages
+  * klog.V(3) - Extended information about changes
+    * More info about system state changes
+  * klog.V(4) - Debug level verbosity
+    * Logging in particularly thorny parts of code where you may want to come back later and check it
+  * klog.V(5) - Trace level verbosity
+    * Context to understand the steps leading up to errors and warnings
+    * More information for troubleshooting reported issues
+
+As per the comments, the practical default level is V(2). Developers and QE
+environments may wish to run at V(3) or V(4). If you wish to change the log
+level, you can pass in `-v=X` where X is the desired maximum level to log.
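+
+As a rough illustration of these levels, here is a minimal sketch in Go. It is
+not taken from any Kubernetes component; the pod name and messages are made up,
+and it assumes `k8s.io/klog` with its `InitFlags` helper to register the `-v`
+flag:
+
+```go
+package main
+
+import (
+	"flag"
+
+	"k8s.io/klog"
+)
+
+func main() {
+	klog.InitFlags(nil) // registers klog's flags (-v, -logtostderr, ...)
+	flag.Parse()        // e.g. run the binary with -v=2
+
+	// V(2): steady state information, the recommended default level.
+	klog.V(2).Infof("Killing pod %q", "example-pod")
+
+	// V(4): debug level verbosity for thorny code paths.
+	klog.V(4).Infof("Sync details for pod %q", "example-pod")
+
+	// Errors are always logged, regardless of -v.
+	klog.Errorf("pod %q detected as unhealthy", "example-pod")
+
+	klog.Flush()
+}
+```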
diff --git a/contributors/guide/coding-conventions.md b/contributors/guide/coding-conventions.md index 63cc18ce..ebabbcbf 100644 --- a/contributors/guide/coding-conventions.md +++ b/contributors/guide/coding-conventions.md @@ -61,7 +61,7 @@ following Go conventions - `stateLock`, `mapLock` etc. - [Kubectl conventions](/contributors/devel/kubectl-conventions.md) - - [Logging conventions](/contributors/devel/logging.md) + - [Logging conventions](/contributors/devel/sig-instrumentation/logging.md) ## Testing conventions diff --git a/sig-instrumentation/charter.md b/sig-instrumentation/charter.md index d767a706..b5cd7643 100644 --- a/sig-instrumentation/charter.md +++ b/sig-instrumentation/charter.md @@ -69,5 +69,5 @@ By SIG Technical Leads [sig-node]: https://github.com/kubernetes/community/tree/master/sig-node [sigs.yaml]: https://github.com/kubernetes/community/blob/master/sigs.yaml#L964-L1018 [Kubernetes Charter README]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/README.md -[instrumenting-kubernetes]: https://github.com/kubernetes/community/blob/master/contributors/devel/instrumentation.md +[instrumenting-kubernetes]: /contributors/devel/sig-instrumentation/instrumentation.md [core-metrics-pipeline]: https://kubernetes.io/docs/tasks/debug-application-cluster/core-metrics-pipeline/ -- cgit v1.2.3 From f439e4146080c9c8c1add6b2ff133efcc1eaa343 Mon Sep 17 00:00:00 2001 From: eduartua Date: Mon, 28 Jan 2019 14:59:09 -0600 Subject: file event-style-guide.md was moved to the new sig-instrumentation folder - URLs in k/community were updated --- contributors/devel/event-style-guide.md | 52 +--------------------- .../devel/sig-instrumentation/event-style-guide.md | 51 +++++++++++++++++++++ 2 files changed, 53 insertions(+), 50 deletions(-) create mode 100644 contributors/devel/sig-instrumentation/event-style-guide.md diff --git a/contributors/devel/event-style-guide.md b/contributors/devel/event-style-guide.md index bc4ba22b..52356d36 100644 --- a/contributors/devel/event-style-guide.md +++ b/contributors/devel/event-style-guide.md @@ -1,51 +1,3 @@ -# Event style guide - -Status: During Review - -Author: Marek Grabowski (gmarek@) - -## Why the guide? - -The Event API change proposal is the first step towards having useful Events in the system. Another step is to formalize the Event style guide, i.e. set of properties that developers need to ensure when adding new Events to the system. This is necessary to ensure that we have a system in which all components emit consistently structured Events. - -## When to emit an Event? - -Events are expected to provide important insights for the application developer/operator on the state of their application. Events relevant to cluster administrators are acceptable, as well, though they usually also have the option of looking at component logs. Events are much more expensive than logs, thus they're not expected to provide in-depth system debugging information. Instead concentrate on things that are important from the application developer's perspective. Events need to be either actionable, or be useful to understand past or future system's behavior. Events are not intended to drive automation. Watching resource status should be sufficient for controllers. - -Following are the guidelines for adding Events to the system. Those are not hard-and-fast rules, but should be considered by all contributors adding new Events and members doing reviews. -1. Emit events only when state of the system changes/attempts to change. 
Events "it's still running" are not interesting. Also, changes that do not add information beyond what is observable by watching the altered resources should not be duplicated as events. Note that adding a reason for some action that can't be inferred from the state change is considered additional information. -1. Limit Events to no more than one per change/attempt. There's no need for Events on "About to do X" AND "Did X"/"Failed to do X". Result is more interesting and implies an attempt. - 1. It may give impression that this gets tricky with scale events, e.g. Deployment scales ReplicaSet which creates/deletes Pods. For us those are 3 (or more) separate Events (3 different objects are affected) so it's fine to emit multiple Events. -1. When an error occurs that prevents a user application from starting or from enacting other normal system behavior, such as object creation, an Event should be emitted (e.g. invalid image). - 1. Note that Events are garbage collected so every user-actionable error needs to be surfaced via resource status as well. - 1. It's usually OK to emit failure Events for each failure. Dedup mechanism will deal with that. The exception is failures that are frequent but typically ephemeral and automatically repairable/recoverable, such as broken socket connections, in which case they should only be reported if persistent and unrepairable, in order to mitigate event spam. -1. When a user application stops running for any reason, an Event should be emitted (e.g. Pod evicted because Node is under memory pressure) -1. If it's a system-wide change of state that may impact currently running applications or have an may have severe impact on future workload schedulability, an Event should be emitted (e.g. Node became unreachable, 1. Failed to create route for Node). -1. If it doesn't fit any of above scenarios you should consider not emitting Event. - -## How to structure an Event? -New Event API tries to use more descriptive field names to influence how Events are structured. Event has following fields: -* Regarding -* Related -* ReportingController -* ReportingInstance -* Action -* Reason -* Type -* Note - -The Event should be structured in a way that following sentence "makes sense": -"Regarding : - ", e.g. -* Regarding Node X: BecameNotReady - NodeUnreachable -* Regarding Pod X: ScheduledOnNode Node Y - -* Regarding PVC X: BoundToNode Node Y - -* Regarding Pod X: KilledContainer Container Y - NodeMemoryPressure - -1. ReportingController is a type of a Controller reporting an Event, e.g. k8s.io/node-controller, k8s.io/kubelet. There will be a standard list for controller names for Kubernetes components. Third-party components must namespace themselves in the same manner as label keys. Validation ensures it's a proper qualified name. This shouldn’t be needed in order for users to understand the event, but is provided in case the controller’s logs need to be accessed for further debugging. -1. ReportingInstance is an identifier of the instance of the ReportingController which needs to uniquely identify it. I.e. host name can be used only for controllers that are guaranteed to be unique on the host. This requirement isn't met e.g. for scheduler, so it may need a secondary index. For singleton controllers use Node name (or hostname if controller is not running on the Node). Can have at most 128 alpha-numeric characters. -1. Regarding and Related are ObjectReferences. 
Regarding should represent the object that's implemented by the ReportingController, Related can contain additional information about another object that takes part in or is affected by the Action (see examples).
-1. Action is a low-cardinality (meaning that there's a restricted, predefined set of values allowed) CamelCase string field (i.e. its value has to be determined at compile time) that explains what happened with Regarding/what action did the ReportingController take in Regarding's name. The tuple of {ReportingController, Action, Reason} must be unique, such that a user could look up documentation. Can have at most 128 characters.
-1. Reason is a low-cardinality CamelCase string field (i.e. its value has to be determined at compile time) that explains why ReportingController took Action. Can have at most 128 characters.
-1. Type can be either "Normal" or "Warning". "Warning" types are reserved for Events that represent a situation that's not expected in a healthy cluster and/or healthy workload: something unexpected and/or undesirable, at least if it occurs frequently enough and/or for a long enough duration.
-1. Note can contain an arbitrary, high-cardinality, user readable summary of the Event. This field can lose data if deduplication is triggered. Can have at most 1024 characters.
+This file has moved to https://git.k8s.io/community/contributors/devel/sig-instrumentation/event-style-guide.md.
+
+This file is a placeholder to preserve links. Please remove by April 28, 2019 or the release of kubernetes 1.13, whichever comes first.
\ No newline at end of file
diff --git a/contributors/devel/sig-instrumentation/event-style-guide.md b/contributors/devel/sig-instrumentation/event-style-guide.md
new file mode 100644
index 00000000..bc4ba22b
--- /dev/null
+++ b/contributors/devel/sig-instrumentation/event-style-guide.md
@@ -0,0 +1,51 @@
+# Event style guide
+
+Status: During Review
+
+Author: Marek Grabowski (gmarek@)
+
+## Why the guide?
+
+The Event API change proposal is the first step towards having useful Events in the system. Another step is to formalize the Event style guide, i.e. the set of properties that developers need to ensure when adding new Events to the system. This is necessary to ensure that we have a system in which all components emit consistently structured Events.
+
+## When to emit an Event?
+
+Events are expected to provide important insights for the application developer/operator on the state of their application. Events relevant to cluster administrators are acceptable, as well, though they usually also have the option of looking at component logs. Events are much more expensive than logs, thus they're not expected to provide in-depth system debugging information. Instead, concentrate on things that are important from the application developer's perspective. Events need to be either actionable or useful for understanding the system's past or future behavior. Events are not intended to drive automation. Watching resource status should be sufficient for controllers.
+
+Following are the guidelines for adding Events to the system. Those are not hard-and-fast rules, but should be considered by all contributors adding new Events and members doing reviews.
+1. Emit events only when the state of the system changes/attempts to change. Events like "it's still running" are not interesting. Also, changes that do not add information beyond what is observable by watching the altered resources should not be duplicated as events.
Note that adding a reason for some action that can't be inferred from the state change is considered additional information.
+1. Limit Events to no more than one per change/attempt. There's no need for Events on "About to do X" AND "Did X"/"Failed to do X". Result is more interesting and implies an attempt.
+ 1. It may give the impression that this gets tricky with scale events, e.g. Deployment scales ReplicaSet which creates/deletes Pods. For us those are 3 (or more) separate Events (3 different objects are affected) so it's fine to emit multiple Events.
+1. When an error occurs that prevents a user application from starting or from enacting other normal system behavior, such as object creation, an Event should be emitted (e.g. invalid image).
+ 1. Note that Events are garbage collected so every user-actionable error needs to be surfaced via resource status as well.
+ 1. It's usually OK to emit failure Events for each failure. The dedup mechanism will deal with that. The exception is failures that are frequent but typically ephemeral and automatically repairable/recoverable, such as broken socket connections, in which case they should only be reported if persistent and unrepairable, in order to mitigate event spam.
+1. When a user application stops running for any reason, an Event should be emitted (e.g. Pod evicted because Node is under memory pressure).
+1. If it's a system-wide change of state that may impact currently running applications or may have a severe impact on future workload schedulability, an Event should be emitted (e.g. Node became unreachable, failed to create route for Node).
+1. If it doesn't fit any of the above scenarios, you should consider not emitting an Event.
+
+## How to structure an Event?
+The new Event API tries to use more descriptive field names to influence how Events are structured. An Event has the following fields:
+* Regarding
+* Related
+* ReportingController
+* ReportingInstance
+* Action
+* Reason
+* Type
+* Note
+
+The Event should be structured in a way that the following sentence "makes sense":
+"Regarding <regarding>: <action> <related> - <reason>", e.g.
+* Regarding Node X: BecameNotReady - NodeUnreachable
+* Regarding Pod X: ScheduledOnNode Node Y -
+* Regarding PVC X: BoundToNode Node Y -
+* Regarding Pod X: KilledContainer Container Y - NodeMemoryPressure
+
+1. ReportingController is a type of Controller reporting an Event, e.g. k8s.io/node-controller, k8s.io/kubelet. There will be a standard list of controller names for Kubernetes components. Third-party components must namespace themselves in the same manner as label keys. Validation ensures it's a proper qualified name. This shouldn’t be needed in order for users to understand the event, but is provided in case the controller’s logs need to be accessed for further debugging.
+1. ReportingInstance is an identifier of the instance of the ReportingController which needs to uniquely identify it. I.e. host name can be used only for controllers that are guaranteed to be unique on the host. This requirement isn't met e.g. for the scheduler, so it may need a secondary index. For singleton controllers use the Node name (or hostname if the controller is not running on a Node). Can have at most 128 alpha-numeric characters.
+1. Regarding and Related are ObjectReferences. Regarding should represent the object that's implemented by the ReportingController, Related can contain additional information about another object that takes part in or is affected by the Action (see examples).
+1. Action is a low-cardinality (meaning that there's a restricted, predefined set of values allowed) CamelCase string field (i.e. its value has to be determined at compile time) that explains what happened with Regarding/what action the ReportingController took in Regarding's name. The tuple of {ReportingController, Action, Reason} must be unique, such that a user could look up documentation. Can have at most 128 characters.
+1. Reason is a low-cardinality CamelCase string field (i.e. its value has to be determined at compile time) that explains why the ReportingController took the Action. Can have at most 128 characters.
+1. Type can be either "Normal" or "Warning". "Warning" types are reserved for Events that represent a situation that's not expected in a healthy cluster and/or healthy workload: something unexpected and/or undesirable, at least if it occurs frequently enough and/or for a long enough duration.
+1. Note can contain an arbitrary, high-cardinality, user-readable summary of the Event. This field can lose data if deduplication is triggered. Can have at most 1024 characters.
+
--
cgit v1.2.3


From 4971949b1e520b8d84a4c282bf028f9d22c256ed Mon Sep 17 00:00:00 2001
From: Aaron Small
Date: Mon, 14 Jan 2019 11:23:32 -0800
Subject: published accepted proposal and updated mailing lists

---
 sig-list.md                                        |  2 +-
 sigs.yaml                                          |  2 +-
 .../Atredis and Trail of Bits Proposal.pdf         | Bin 0 -> 437215 bytes
 wg-security-audit/README.md                        | 11 ++++++++---
 4 files changed, 10 insertions(+), 5 deletions(-)
 create mode 100644 wg-security-audit/Atredis and Trail of Bits Proposal.pdf

diff --git a/sig-list.md b/sig-list.md
index b6ab80b0..435bf11b 100644
--- a/sig-list.md
+++ b/sig-list.md
@@ -68,7 +68,7 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md)
|[Multitenancy](wg-multitenancy/README.md)||* [David Oppenheimer](https://github.com/davidopp), Google
* [Jessie Frazelle](https://github.com/jessfraz), Microsoft
|* [Slack](https://kubernetes.slack.com/messages/wg-multitenancy)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-multitenancy)|* Regular WG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Policy](wg-policy/README.md)||* [Howard Huang](https://github.com/hannibalhuang), Huawei
* [Torin Sandall](https://github.com/tsandall), Styra
* [Yisui Hu](https://github.com/easeway), Google
* [Erica von Buelow](https://github.com/ericavonb), Red Hat
* [Michael Elder](https://github.com/mdelder), IBM
|* [Slack](https://kubernetes.slack.com/messages/wg-policy)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-policy)|* Regular WG Meeting: [Wednesdays at 16:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Resource Management](wg-resource-management/README.md)||* [Vishnu Kannan](https://github.com/vishh), Google
* [Derek Carr](https://github.com/derekwaynecarr), Red Hat
|* [Slack](https://kubernetes.slack.com/messages/wg-resource-mgmt)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-resource-management)|* Regular WG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly (On demand))](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
-|[Security Audit](wg-security-audit/README.md)||* [Aaron Small](https://github.com/aasmall), Google
* [Joel Smith](https://github.com/joelsmith), Red Hat
* [Craig Ingram](https://github.com/cji), Salesforce
|* [Slack](https://kubernetes.slack.com/messages/wg-security-audit)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-audit)|* Regular WG Meeting: [Mondays at 13:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1RbC4SBZBlKth7IjYv_NaEpnmLGwMJ0ElpUOmsG-bdRA/edit)
+|[Security Audit](wg-security-audit/README.md)||* [Aaron Small](https://github.com/aasmall), Google
* [Joel Smith](https://github.com/joelsmith), Red Hat
* [Craig Ingram](https://github.com/cji), Salesforce
|* [Slack](https://kubernetes.slack.com/messages/wg-security-audit)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-security-audit)|* Regular WG Meeting: [Mondays at 13:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1RbC4SBZBlKth7IjYv_NaEpnmLGwMJ0ElpUOmsG-bdRA/edit)
diff --git a/sigs.yaml b/sigs.yaml index a5ddfdb1..b6f97e41 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -2401,7 +2401,7 @@ workinggroups: url: https://docs.google.com/document/d/1RbC4SBZBlKth7IjYv_NaEpnmLGwMJ0ElpUOmsG-bdRA/edit contact: slack: wg-security-audit - mailing_list: https://groups.google.com/forum/#!forum/kubernetes-wg-audit + mailing_list: https://groups.google.com/forum/#!forum/kubernetes-wg-security-audit - name: Component Standard dir: wg-component-standard mission_statement: > diff --git a/wg-security-audit/Atredis and Trail of Bits Proposal.pdf b/wg-security-audit/Atredis and Trail of Bits Proposal.pdf new file mode 100644 index 00000000..ca82ac39 Binary files /dev/null and b/wg-security-audit/Atredis and Trail of Bits Proposal.pdf differ diff --git a/wg-security-audit/README.md b/wg-security-audit/README.md index d1aa12c4..28baee11 100644 --- a/wg-security-audit/README.md +++ b/wg-security-audit/README.md @@ -21,14 +21,19 @@ Perform a security audit on k8s with a vendor and produce as artifacts a threat ## Contact * [Slack](https://kubernetes.slack.com/messages/wg-security-audit) -* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-wg-audit) +* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-wg-security-audit) ## Request For Proposals The RFP will be open between 2018/10/29 and 2018/11/30 and has been published [here](https://github.com/kubernetes/community/blob/master/wg-security-audit/RFP.md). -## Submission +## Vendor Selection + +The [RFP](https://github.com/kubernetes/community/blob/master/wg-security-audit/RFP.md) is now closed. The working group selected Trail of Atredis, a collaboration between [Trail of Bits](https://www.trailofbits.com/) and [Atredis Partners](https://www.atredis.com/) to perform the audit. + +## Mailing Lists + +* Sensitive communications regarding the audit shouls be sent to the [private variant of the mailing list](https://groups.google.com/forum/#!forum/kubernetes-wg-security-audit-private). -Submissions should be sent to the [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-wg-audit) -- cgit v1.2.3 From 791f74944bdcb36180e3803528b03431e4e14a26 Mon Sep 17 00:00:00 2001 From: Stephen Augustus Date: Mon, 28 Jan 2019 13:12:14 -0800 Subject: Update wg-security-audit/README.md Co-Authored-By: aasmall --- wg-security-audit/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/wg-security-audit/README.md b/wg-security-audit/README.md index 28baee11..93e2cad1 100644 --- a/wg-security-audit/README.md +++ b/wg-security-audit/README.md @@ -34,6 +34,6 @@ The [RFP](https://github.com/kubernetes/community/blob/master/wg-security-audit/ ## Mailing Lists -* Sensitive communications regarding the audit shouls be sent to the [private variant of the mailing list](https://groups.google.com/forum/#!forum/kubernetes-wg-security-audit-private). +* Sensitive communications regarding the audit should be sent to the [private variant of the mailing list](https://groups.google.com/forum/#!forum/kubernetes-wg-security-audit-private). 
-- cgit v1.2.3 From 3368adb2cc68e206bb2bb8cdbe92fe39b648e60b Mon Sep 17 00:00:00 2001 From: eduartua Date: Tue, 29 Jan 2019 11:03:22 -0600 Subject: /devel/sig-release folder created - file cherry-picks.md moved to /devel/sig-release - URLs updated --- contributors/devel/cherry-picks.md | 74 +------------------------- contributors/devel/sig-release/cherry-picks.md | 73 +++++++++++++++++++++++++ contributors/guide/contributor-cheatsheet.md | 2 +- contributors/guide/release-notes.md | 2 +- 4 files changed, 77 insertions(+), 74 deletions(-) create mode 100644 contributors/devel/sig-release/cherry-picks.md diff --git a/contributors/devel/cherry-picks.md b/contributors/devel/cherry-picks.md index 7769f970..f7284c73 100644 --- a/contributors/devel/cherry-picks.md +++ b/contributors/devel/cherry-picks.md @@ -1,73 +1,3 @@ -# Overview +This file has moved to https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md. -This document explains how cherry-picks are managed on release branches within -the kubernetes/kubernetes repository. -A common use case for this task is backporting PRs from master to release -branches. - -## Prerequisites - * [Contributor License Agreement](http://git.k8s.io/community/CLA.md) is - considered implicit for all code within cherry-pick pull requests, - **unless there is a large conflict**. - * A pull request merged against the master branch. - * [Release branch](https://git.k8s.io/release/docs/branching.md) exists. - * The normal git and GitHub configured shell environment for pushing to your - kubernetes `origin` fork on GitHub and making a pull request against a - configured remote `upstream` that tracks - "https://github.com/kubernetes/kubernetes.git", including `GITHUB_USER`. - * Have `hub` installed, which is most easily installed via `go get - github.com/github/hub` assuming you have a standard golang development - environment. - -## Initiate a Cherry-pick - * Run the [cherry-pick - script](https://git.k8s.io/kubernetes/hack/cherry_pick_pull.sh). - This example applies a master branch PR #98765 to the remote branch - `upstream/release-3.14`: `hack/cherry_pick_pull.sh upstream/release-3.14 - 98765` - * Be aware the cherry-pick script assumes you have a git remote called - `upstream` that points at the Kubernetes github org. - Please see our [recommended Git workflow](https://git.k8s.io/community/contributors/guide/github-workflow.md#workflow). - * You will need to run the cherry-pick script separately for each patch release you want to cherry-pick to. - - * Your cherry-pick PR will immediately get the `do-not-merge/cherry-pick-not-approved` label. - The [Branch Manager](https://git.k8s.io/sig-release/release-team/role-handbooks/branch-manager) - will triage PRs targeted to the next .0 minor release branch up until the - release, while the [Patch Release Team](https://git.k8s.io/sig-release/release-team/role-handbooks/patch-release-manager) - will handle all cherry-picks to patch releases. - Normal rules apply for code merge. - * Reviewers `/lgtm` and owners `/approve` as they deem appropriate. - * Milestones on cherry-pick PRs should be the milestone for the target - release branch (for example, milestone 1.11 for a cherry-pick onto - release-1.11). - * You can find the current release team members in the - [appropriate release folder](https://git.k8s.io/sig-release/releases) for the target release. - You may cc them with `<@githubusername>` on your cherry-pick PR. 
- -## Cherry-pick Review - -Cherry-pick pull requests have an additional requirement compared to normal pull -requests. -They must be approved specifically for cherry-pick by Approvers. -The [Branch Manager](https://git.k8s.io/sig-release/release-team/role-handbooks/branch-manager) -or the [Patch Release Team](https://git.k8s.io/sig-release/release-team/role-handbooks/patch-release-manager) -are the final authority on removing the `do-not-merge/cherry-pick-not-approved` -label and triggering a merge into the target branch. - -## Searching for Cherry-picks - -- [A sample search on kubernetes/kubernetes pull requests that are labeled as `cherry-pick-approved`](https://github.com/kubernetes/kubernetes/pulls?q=is%3Aopen+is%3Apr+label%3Acherry-pick-approved) - -- [A sample search on kubernetes/kubernetes pull requests that are labeled as `do-not-merge/cherry-pick-not-approved`](https://github.com/kubernetes/kubernetes/pulls?q=is%3Aopen+is%3Apr+label%3Ado-not-merge%2Fcherry-pick-not-approved) - - -## Troubleshooting Cherry-picks - -Contributors may encounter some of the following difficulties when initiating a cherry-pick. - -- A cherry-pick PR does not apply cleanly against an old release branch. -In that case, you will need to manually fix conflicts. - -- The cherry-pick PR includes code that does not pass CI tests. -In such a case you will have to fetch the auto-generated branch from your fork, amend the problematic commit and force push to the auto-generated branch. -Alternatively, you can create a new PR, which is noisier. +This file is a placeholder to preserve links. Please remove by April 29, 2019 or the release of kubernetes 1.13, whichever comes first. \ No newline at end of file diff --git a/contributors/devel/sig-release/cherry-picks.md b/contributors/devel/sig-release/cherry-picks.md new file mode 100644 index 00000000..7769f970 --- /dev/null +++ b/contributors/devel/sig-release/cherry-picks.md @@ -0,0 +1,73 @@ +# Overview + +This document explains how cherry-picks are managed on release branches within +the kubernetes/kubernetes repository. +A common use case for this task is backporting PRs from master to release +branches. + +## Prerequisites + * [Contributor License Agreement](http://git.k8s.io/community/CLA.md) is + considered implicit for all code within cherry-pick pull requests, + **unless there is a large conflict**. + * A pull request merged against the master branch. + * [Release branch](https://git.k8s.io/release/docs/branching.md) exists. + * The normal git and GitHub configured shell environment for pushing to your + kubernetes `origin` fork on GitHub and making a pull request against a + configured remote `upstream` that tracks + "https://github.com/kubernetes/kubernetes.git", including `GITHUB_USER`. + * Have `hub` installed, which is most easily installed via `go get + github.com/github/hub` assuming you have a standard golang development + environment. + +## Initiate a Cherry-pick + * Run the [cherry-pick + script](https://git.k8s.io/kubernetes/hack/cherry_pick_pull.sh). + This example applies a master branch PR #98765 to the remote branch + `upstream/release-3.14`: `hack/cherry_pick_pull.sh upstream/release-3.14 + 98765` + * Be aware the cherry-pick script assumes you have a git remote called + `upstream` that points at the Kubernetes github org. + Please see our [recommended Git workflow](https://git.k8s.io/community/contributors/guide/github-workflow.md#workflow). 
+ * You will need to run the cherry-pick script separately for each patch release you want to cherry-pick to. + + * Your cherry-pick PR will immediately get the `do-not-merge/cherry-pick-not-approved` label. + The [Branch Manager](https://git.k8s.io/sig-release/release-team/role-handbooks/branch-manager) + will triage PRs targeted to the next .0 minor release branch up until the + release, while the [Patch Release Team](https://git.k8s.io/sig-release/release-team/role-handbooks/patch-release-manager) + will handle all cherry-picks to patch releases. + Normal rules apply for code merge. + * Reviewers `/lgtm` and owners `/approve` as they deem appropriate. + * Milestones on cherry-pick PRs should be the milestone for the target + release branch (for example, milestone 1.11 for a cherry-pick onto + release-1.11). + * You can find the current release team members in the + [appropriate release folder](https://git.k8s.io/sig-release/releases) for the target release. + You may cc them with `<@githubusername>` on your cherry-pick PR. + +## Cherry-pick Review + +Cherry-pick pull requests have an additional requirement compared to normal pull +requests. +They must be approved specifically for cherry-pick by Approvers. +The [Branch Manager](https://git.k8s.io/sig-release/release-team/role-handbooks/branch-manager) +or the [Patch Release Team](https://git.k8s.io/sig-release/release-team/role-handbooks/patch-release-manager) +are the final authority on removing the `do-not-merge/cherry-pick-not-approved` +label and triggering a merge into the target branch. + +## Searching for Cherry-picks + +- [A sample search on kubernetes/kubernetes pull requests that are labeled as `cherry-pick-approved`](https://github.com/kubernetes/kubernetes/pulls?q=is%3Aopen+is%3Apr+label%3Acherry-pick-approved) + +- [A sample search on kubernetes/kubernetes pull requests that are labeled as `do-not-merge/cherry-pick-not-approved`](https://github.com/kubernetes/kubernetes/pulls?q=is%3Aopen+is%3Apr+label%3Ado-not-merge%2Fcherry-pick-not-approved) + + +## Troubleshooting Cherry-picks + +Contributors may encounter some of the following difficulties when initiating a cherry-pick. + +- A cherry-pick PR does not apply cleanly against an old release branch. +In that case, you will need to manually fix conflicts. + +- The cherry-pick PR includes code that does not pass CI tests. +In such a case you will have to fetch the auto-generated branch from your fork, amend the problematic commit and force push to the auto-generated branch. +Alternatively, you can create a new PR, which is noisier. diff --git a/contributors/guide/contributor-cheatsheet.md b/contributors/guide/contributor-cheatsheet.md index 180a368f..320cd980 100644 --- a/contributors/guide/contributor-cheatsheet.md +++ b/contributors/guide/contributor-cheatsheet.md @@ -20,7 +20,7 @@ A list of common resources when contributing to Kubernetes. 
- [GitHub labels](https://go.k8s.io/github-labels) - [Release Buckets](https://gcsweb.k8s.io/gcs/kubernetes-release/) - Developer Guide - - [Cherry Picking Guide](/contributors/devel/cherry-picks.md) + - [Cherry Picking Guide](/contributors/devel/sig-release/cherry-picks.md) - [Kubernetes Code Search](https://cs.k8s.io/), maintained by [@dims](https://github.com/dims) diff --git a/contributors/guide/release-notes.md b/contributors/guide/release-notes.md index 655dff1c..81dca597 100644 --- a/contributors/guide/release-notes.md +++ b/contributors/guide/release-notes.md @@ -30,4 +30,4 @@ For pull requests that don't need to be mentioned at release time, use the `/rel To see how to format your release notes, view the kubernetes/kubernetes [pull request template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. Pull Request titles and body comments can be modified at any time prior to the release to make them friendly for release notes. -Release notes apply to pull requests on the master branch. For cherry-pick pull requests, see the [cherry-pick instructions](/contributors/devel/cherry-picks.md). The only exception to these rules is when a pull request is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` label is required for that non-master pull request. +Release notes apply to pull requests on the master branch. For cherry-pick pull requests, see the [cherry-pick instructions](/contributors/devel/sig-release/cherry-picks.md). The only exception to these rules is when a pull request is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` label is required for that non-master pull request. -- cgit v1.2.3 From 5a4bdf39dbb198fc161303dcc9b71b855053f08d Mon Sep 17 00:00:00 2001 From: eduartua Date: Tue, 29 Jan 2019 11:18:55 -0600 Subject: getting-builds.md has been moved to /devel/sig-release - URLs updated --- contributors/devel/README.md | 2 +- contributors/devel/getting-builds.md | 49 +----------------------- contributors/devel/sig-release/getting-builds.md | 48 +++++++++++++++++++++++ 3 files changed, 51 insertions(+), 48 deletions(-) create mode 100644 contributors/devel/sig-release/getting-builds.md diff --git a/contributors/devel/README.md b/contributors/devel/README.md index 626adaad..ffeb47f4 100644 --- a/contributors/devel/README.md +++ b/contributors/devel/README.md @@ -15,7 +15,7 @@ Guide](http://kubernetes.io/docs/admin/). * **Pull Request Process** ([/contributors/guide/pull-requests.md](/contributors/guide/pull-requests.md)): When and why pull requests are closed. -* **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds including the latest builds that pass CI. +* **Getting Recent Builds** ([getting-builds.md](sig-release/getting-builds.md)): How to get recent builds including the latest builds that pass CI. * **Automated Tools** ([automation.md](automation.md)): Descriptions of the automation that is running on our github repository. diff --git a/contributors/devel/getting-builds.md b/contributors/devel/getting-builds.md index 0ae7031b..3e35fe73 100644 --- a/contributors/devel/getting-builds.md +++ b/contributors/devel/getting-builds.md @@ -1,48 +1,3 @@ -# Getting Kubernetes Builds +This file has moved to https://git.k8s.io/community/contributors/devel/sig-release/getting-builds.md. 
-You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) -to get a build or to use as a reference on how to get the most recent builds -with curl. With `get-build.sh` you can grab the most recent stable build, the -most recent release candidate, or the most recent build to pass our ci and gce -e2e tests (essentially a nightly build). - -Run `./hack/get-build.sh -h` for its usage. - -To get a build at a specific version (v1.1.1) use: - -```console -./hack/get-build.sh v1.1.1 -``` - -To get the latest stable release: - -```console -./hack/get-build.sh release/stable -``` - -Use the "-v" option to print the version number of a build without retrieving -it. For example, the following prints the version number for the latest ci -build: - -```console -./hack/get-build.sh -v ci/latest -``` - -You can also use the gsutil tool to explore the Google Cloud Storage release -buckets. Here are some examples: - -```sh -gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number -gsutil cat gs://kubernetes-release-dev/ci/latest-green.txt # output the latest ci version number that passed gce e2e -gsutil ls gs://kubernetes-release-dev/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release -gsutil ls gs://kubernetes-release/release # list all official releases and rcs -``` - -## Install `gsutil` - -Example installation: - -```console -$ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C /usr/local/src -$ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil -``` +This file is a placeholder to preserve links. Please remove by April 29, 2019 or the release of kubernetes 1.13, whichever comes first. \ No newline at end of file diff --git a/contributors/devel/sig-release/getting-builds.md b/contributors/devel/sig-release/getting-builds.md new file mode 100644 index 00000000..0ae7031b --- /dev/null +++ b/contributors/devel/sig-release/getting-builds.md @@ -0,0 +1,48 @@ +# Getting Kubernetes Builds + +You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh) +to get a build or to use as a reference on how to get the most recent builds +with curl. With `get-build.sh` you can grab the most recent stable build, the +most recent release candidate, or the most recent build to pass our ci and gce +e2e tests (essentially a nightly build). + +Run `./hack/get-build.sh -h` for its usage. + +To get a build at a specific version (v1.1.1) use: + +```console +./hack/get-build.sh v1.1.1 +``` + +To get the latest stable release: + +```console +./hack/get-build.sh release/stable +``` + +Use the "-v" option to print the version number of a build without retrieving +it. For example, the following prints the version number for the latest ci +build: + +```console +./hack/get-build.sh -v ci/latest +``` + +You can also use the gsutil tool to explore the Google Cloud Storage release +buckets. 
Here are some examples: + +```sh +gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number +gsutil cat gs://kubernetes-release-dev/ci/latest-green.txt # output the latest ci version number that passed gce e2e +gsutil ls gs://kubernetes-release-dev/ci/v0.20.0-29-g29a55cc/ # list the contents of a ci release +gsutil ls gs://kubernetes-release/release # list all official releases and rcs +``` + +## Install `gsutil` + +Example installation: + +```console +$ curl -sSL https://storage.googleapis.com/pub/gsutil.tar.gz | sudo tar -xz -C /usr/local/src +$ sudo ln -s /usr/local/src/gsutil/gsutil /usr/bin/gsutil +``` -- cgit v1.2.3 From f684fcb8dff1641f11924fb23fcb4732cf4a97c8 Mon Sep 17 00:00:00 2001 From: eduartua Date: Tue, 29 Jan 2019 11:23:48 -0600 Subject: file release.md has been moved to /devel/sig-release - URLs in k/community updated --- contributors/devel/release.md | 308 +----------------------------- contributors/devel/sig-release/release.md | 307 +++++++++++++++++++++++++++++ contributors/guide/issue-triage.md | 2 +- contributors/guide/pull-requests.md | 2 +- 4 files changed, 311 insertions(+), 308 deletions(-) create mode 100644 contributors/devel/sig-release/release.md diff --git a/contributors/devel/release.md b/contributors/devel/release.md index b4e9224e..9ce19241 100644 --- a/contributors/devel/release.md +++ b/contributors/devel/release.md @@ -1,307 +1,3 @@ -# Targeting Features, Issues and PRs to Release Milestones +This file has moved to https://git.k8s.io/community/contributors/devel/sig-release/release.md. -This document is focused on Kubernetes developers and contributors -who need to create a feature, issue, or pull request which targets a specific -release milestone. - -- [TL;DR](#tldr) -- [Definitions](#definitions) -- [The Release Cycle](#the-release-cycle) -- [Removal Of Items From The Milestone](#removal-of-items-from-the-milestone) -- [Adding An Item To The Milestone](#adding-an-item-to-the-milestone) - - [Milestone Maintainers](#milestone-maintainers) - - [Feature additions](#feature-additions) - - [Issue additions](#issue-additions) - - [PR Additions](#pr-additions) -- [Other Required Labels](#other-required-labels) - - [SIG Owner Label](#sig-owner-label) - - [Priority Label](#priority-label) - - [Issue Kind Label](#issue-kind-label) - -The process for shepherding features, issues, and pull requests -into a Kubernetes release spans multiple stakeholders: -* the feature, issue, or pull request owner -* SIG leadership -* the release team - -Information on workflows and interactions are described below. - -As the owner of a feature, issue, or pull request (PR), it is your -responsibility to ensure release milestone requirements are met. -Automation and the release team will be in contact with you if -updates are required, but inaction can result in your work being -removed from the milestone. Additional requirements exist when the -target milestone is a prior release (see [cherry pick -process](cherry-picks.md) for more information). - -## TL;DR - -If you want your PR to get merged, it needs the following required labels and milestones, represented here by the Prow /commands it would take to add them: - - - - - - - - - - - - - - - - - - - - -
-    <td></td>
-    <td>Normal Dev</td>
-    <td>Code Freeze</td>
-    <td>Post-Release</td>
-  </tr>
-  <tr>
-    <td></td>
-    <td>Weeks 1-8</td>
-    <td>Weeks 9-11</td>
-    <td>Weeks 11+</td>
-  </tr>
-  <tr>
-    <td>Required Labels</td>
-    <td>
-      <ul>
-        <li>/sig {name}</li>
-        <li>/kind {type}</li>
-        <li>/lgtm</li>
-        <li>/approved</li>
-      </ul>
-    </td>
-    <td>
-      <ul>
-        <li>/milestone {v1.y}</li>
-        <li>/sig {name}</li>
-        <li>/kind {bug, failing-test}</li>
-        <li>/priority critical-urgent</li>
-        <li>/lgtm</li>
-        <li>/approved</li>
-      </ul>
-    </td>
-    <td>
-      Return to 'Normal Dev' phase requirements:
-      <ul>
-        <li>/sig {name}</li>
-        <li>/kind {type}</li>
-        <li>/lgtm</li>
-        <li>/approved</li>
-      </ul>
-      Merges into the 1.y branch are now [via cherrypicks](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md), approved by release branch manager.
-    </td>
-  </tr>
-</table>
- -In the past there was a requirement for a milestone targeted pull -request to have an associated GitHub issue opened, but this is no -longer the case. Features are effectively GitHub issues or -[KEPs](https://git.k8s.io/community/keps) -which lead to subsequent PRs. The general labeling process should -be consistent across artifact types. - ---- - -## Definitions - -- *issue owners*: Creator, assignees, and user who moved the issue into a release milestone. -- *release team*: Each Kubernetes release has a team doing project - management tasks described - [here](https://git.k8s.io/sig-release/release-team/README.md). The - contact info for the team associated with any given release can be - found [here](https://git.k8s.io/sig-release/releases/). -- *Y days*: Refers to business days (using the location local to the release-manager M-F). -- *feature*: see "[Is My Thing a Feature?](http://git.k8s.io/features/README.md#is-my-thing-a-feature) -- *release milestone*: semantic version string or [GitHub milestone](https://help.github.com/articles/associating-milestones-with-issues-and-pull-requests/) referring to a release MAJOR.MINOR vX.Y version. See also [release versioning](http://git.k8s.io/community/contributors/design-proposals/release/versioning.md) -- *release branch*: Git branch "release-X.Y" created for the vX.Y milestone. Created at the time of the vX.Y-beta.0 release and maintained after the release for approximately 9 months with vX.Y.Z patch releases. - -## The Release Cycle - -![Image of one Kubernetes release cycle](release-cycle.png) - -Kubernetes releases currently happen four times per year. The release -process can be thought of as having three main phases: -* Feature Definition -* Implementation -* Stabilization - -But in reality this is an open source and agile project, with feature -planning and implementation happening at all times. Given the -project scale and globally distributed developer base, it is critical -to project velocity to not rely on a trailing stabilization phase and -rather have continuous integration testing which ensures the -project is always stable so that individual commits can be -flagged as having broken something. - -With ongoing feature definition through the year, some set of items -will bubble up as targeting a given release. The **enhancement freeze** -starts ~4 weeks into release cycle. By this point all intended -feature work for the given release has been defined in suitable -planning artifacts in conjunction with the Release Team's [enhancements -lead](https://git.k8s.io/sig-release/release-team/role-handbooks/enhancements/README.md). - -Implementation and bugfixing is ongoing across the cycle, but -culminates in a code freeze period: -* The **code freeze** starts in week ~10 and continues for ~2 weeks. - Only critical bug fixes are accepted into the release codebase. - -There are approximately two weeks following code freeze, and preceding -release, during which all remaining critical issues must be resolved -before release. This also gives time for documentation finalization. - -When the code base is sufficiently stable, the master branch re-opens -for general development and work begins there for the next release -milestone. Any remaining modifications for the current release are cherry -picked from master back to the release branch. The release is built from -the release branch. 
-
-Following release, the [Release Branch
-Manager](https://git.k8s.io/sig-release/release-team/role-handbooks/branch-manager/README.md)
-cherry picks additional critical fixes from the master branch for
-a period of around 9 months, leaving an overlap of three release
-versions of forward support. Thus, each release is part of a broader
-Kubernetes lifecycle:
-
-![Image of Kubernetes release lifecycle spanning three releases](release-lifecycle.png)
-
-## Removal Of Items From The Milestone
-
-Before getting too far into the process for adding an item to the
-milestone, please note:
-
-Members of the Release Team may remove Issues from the milestone
-if they or the responsible SIG determine that the issue is not
-actually blocking the release and is unlikely to be resolved in a
-timely fashion.
-
-Members of the Release Team may remove PRs from the milestone for
-any of the following, or similar, reasons:
-
-* PR is potentially de-stabilizing and is not needed to resolve a blocking issue;
-* PR is a new, late feature PR and has not gone through the features process or the exception process;
-* There is no responsible SIG willing to take ownership of the PR and resolve any follow-up issues with it;
-* PR is not correctly labelled;
-* Work has visibly halted on the PR and delivery dates are uncertain or late.
-
-While members of the Release Team will help with labelling and
-contacting SIG(s), it is the responsibility of the submitter to
-categorize PRs, and to secure support from the relevant SIG to
-guarantee that any breakage caused by the PR will be rapidly resolved.
-
-Where additional action is required, an attempt at human to human
-escalation will be made by the release team through the following
-channels:
-
-- Comment in GitHub mentioning the SIG team and SIG members as appropriate for the issue type
-- Emailing the SIG mailing list
-  - bootstrapped with group email addresses from the [community sig list](/sig-list.md)
-  - optionally also directly addressing SIG leadership or other SIG members
-- Messaging the SIG's Slack channel
-  - bootstrapped with the Slack channel and SIG leadership from the [community sig list](/sig-list.md)
-  - optionally directly "@" mentioning SIG leadership or others by handle
-
-## Adding An Item To The Milestone
-
-### Milestone Maintainers
-
-The members of the GitHub [“kubernetes-milestone-maintainers”
-team](https://github.com/orgs/kubernetes/teams/kubernetes-milestone-maintainers/members)
-are entrusted with the responsibility of specifying the release milestone on
-GitHub artifacts. This group is [maintained by
-SIG-Release](https://git.k8s.io/sig-release/release-team/README.md#milestone-maintainers)
-and has representation from the various SIGs' leadership.
-
-### Feature additions
-
-Feature planning and definition takes many forms today, but a typical
-example might be a large piece of work described in a
-[KEP](https://git.k8s.io/community/keps), with associated
-task issues in GitHub. When the plan has reached an implementable state and
-work is underway, the feature or parts thereof are targeted for an upcoming
-milestone by creating GitHub issues and marking them with the Prow "/milestone"
-command.
-
-For the first ~4 weeks into the release cycle, the release team's
-Enhancements Lead will interact with SIGs and feature owners via GitHub,
-Slack, and SIG meetings to capture all required planning artifacts.
- -If you have a feature to target for an upcoming release milestone, begin a -conversation with your SIG leadership and with that release's Enhancements -Lead. - -### Issue additions - -Issues are marked as targeting a milestone via the Prow -"/milestone" command. - -The release team's [Bug Triage -Lead](https://git.k8s.io/sig-release/release-team/role-handbooks/bug-triage/README.md) and overall community watch -incoming issues and triage them, as described in the contributor -guide section on [issue triage](/contributors/guide/issue-triage.md). - -Marking issues with the milestone provides the community better -visibility regarding when an issue was observed and by when the community -feels it must be resolved. During code freeze, to merge a PR it is required -that a release milestone is set. - -An open issue is no longer required for a PR, but open issues and -associated PRs should have synchronized labels. For example a high -priority bug issue might not have its associated PR merged if the PR is -only marked as lower priority. - -### PR Additions - -PRs are marked as targeting a milestone via the Prow -"/milestone" command. - -This is a blocking requirement during code freeze as described above. - -## Other Required Labels - -*Note* [Here is the list of labels and their use and purpose.](https://git.k8s.io/test-infra/label_sync/labels.md#labels-that-apply-to-all-repos-for-both-issues-and-prs) - -### SIG Owner Label - -The SIG owner label defines the SIG to which we escalate if a -milestone issue is languishing or needs additional attention. If -there are no updates after escalation, the issue may be automatically -removed from the milestone. - -These are added with the Prow "/sig" command. For example to add -the label indicating SIG Storage is responsible, comment with `/sig -storage`. - -### Priority Label - -Priority labels are used to determine an escalation path before -moving issues out of the release milestone. They are also used to -determine whether or not a release should be blocked on the resolution -of the issue. - -- `priority/critical-urgent`: Never automatically move out of a release milestone; continually escalate to contributor and SIG through all available channels. - - considered a release blocking issue - - code freeze: issue owner update frequency: daily - - would require a patch release if left undiscovered until after the minor release. -- `priority/important-soon`: Escalate to the issue owners and SIG owner; move out of milestone after several unsuccessful escalation attempts. - - not considered a release blocking issue - - would not require a patch release - - will automatically be moved out of the release milestone at code freeze after a 4 day grace period -- `priority/important-longterm`: Escalate to the issue owners; move out of the milestone after 1 attempt. - - even less urgent / critical than `priority/important-soon` - - moved out of milestone more aggressively than `priority/important-soon` - -### Issue/PR Kind Label - -The issue kind is used to help identify the types of changes going -into the release over time. This may allow the release team to -develop a better understanding of what sorts of issues we would -miss with a faster release cadence. - -For release targeted issues, including pull requests, one of the following -issue kind labels must be set: - -- `kind/api-change`: Adds, removes, or changes an API -- `kind/bug`: Fixes a newly discovered bug. -- `kind/cleanup`: Adding tests, refactoring, fixing old bugs. 
-- `kind/design`: Related to design -- `kind/documentation`: Adds documentation -- `kind/failing-test`: CI test case is failing consistently. -- `kind/feature`: New functionality. -- `kind/flake`: CI test case is showing intermittent failures. +This file is a placeholder to preserve links. Please remove by April 29, 2019 or the release of kubernetes 1.13, whichever comes first. \ No newline at end of file diff --git a/contributors/devel/sig-release/release.md b/contributors/devel/sig-release/release.md new file mode 100644 index 00000000..b4e9224e --- /dev/null +++ b/contributors/devel/sig-release/release.md @@ -0,0 +1,307 @@ +# Targeting Features, Issues and PRs to Release Milestones + +This document is focused on Kubernetes developers and contributors +who need to create a feature, issue, or pull request which targets a specific +release milestone. + +- [TL;DR](#tldr) +- [Definitions](#definitions) +- [The Release Cycle](#the-release-cycle) +- [Removal Of Items From The Milestone](#removal-of-items-from-the-milestone) +- [Adding An Item To The Milestone](#adding-an-item-to-the-milestone) + - [Milestone Maintainers](#milestone-maintainers) + - [Feature additions](#feature-additions) + - [Issue additions](#issue-additions) + - [PR Additions](#pr-additions) +- [Other Required Labels](#other-required-labels) + - [SIG Owner Label](#sig-owner-label) + - [Priority Label](#priority-label) + - [Issue Kind Label](#issue-kind-label) + +The process for shepherding features, issues, and pull requests +into a Kubernetes release spans multiple stakeholders: +* the feature, issue, or pull request owner +* SIG leadership +* the release team + +Information on workflows and interactions are described below. + +As the owner of a feature, issue, or pull request (PR), it is your +responsibility to ensure release milestone requirements are met. +Automation and the release team will be in contact with you if +updates are required, but inaction can result in your work being +removed from the milestone. Additional requirements exist when the +target milestone is a prior release (see [cherry pick +process](cherry-picks.md) for more information). + +## TL;DR + +If you want your PR to get merged, it needs the following required labels and milestones, represented here by the Prow /commands it would take to add them: + + + + + + + + + + + + + + + + + + + + +
+    <td></td>
+    <td>Normal Dev</td>
+    <td>Code Freeze</td>
+    <td>Post-Release</td>
+  </tr>
+  <tr>
+    <td></td>
+    <td>Weeks 1-8</td>
+    <td>Weeks 9-11</td>
+    <td>Weeks 11+</td>
+  </tr>
+  <tr>
+    <td>Required Labels</td>
+    <td>
+      <ul>
+        <li>/sig {name}</li>
+        <li>/kind {type}</li>
+        <li>/lgtm</li>
+        <li>/approved</li>
+      </ul>
+    </td>
+    <td>
+      <ul>
+        <li>/milestone {v1.y}</li>
+        <li>/sig {name}</li>
+        <li>/kind {bug, failing-test}</li>
+        <li>/priority critical-urgent</li>
+        <li>/lgtm</li>
+        <li>/approved</li>
+      </ul>
+    </td>
+    <td>
+      Return to 'Normal Dev' phase requirements:
+      <ul>
+        <li>/sig {name}</li>
+        <li>/kind {type}</li>
+        <li>/lgtm</li>
+        <li>/approved</li>
+      </ul>
+      Merges into the 1.y branch are now [via cherrypicks](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md), approved by release branch manager.
+    </td>
+  </tr>
+</table>
+ +In the past there was a requirement for a milestone targeted pull +request to have an associated GitHub issue opened, but this is no +longer the case. Features are effectively GitHub issues or +[KEPs](https://git.k8s.io/community/keps) +which lead to subsequent PRs. The general labeling process should +be consistent across artifact types. + +--- + +## Definitions + +- *issue owners*: Creator, assignees, and user who moved the issue into a release milestone. +- *release team*: Each Kubernetes release has a team doing project + management tasks described + [here](https://git.k8s.io/sig-release/release-team/README.md). The + contact info for the team associated with any given release can be + found [here](https://git.k8s.io/sig-release/releases/). +- *Y days*: Refers to business days (using the location local to the release-manager M-F). +- *feature*: see "[Is My Thing a Feature?](http://git.k8s.io/features/README.md#is-my-thing-a-feature) +- *release milestone*: semantic version string or [GitHub milestone](https://help.github.com/articles/associating-milestones-with-issues-and-pull-requests/) referring to a release MAJOR.MINOR vX.Y version. See also [release versioning](http://git.k8s.io/community/contributors/design-proposals/release/versioning.md) +- *release branch*: Git branch "release-X.Y" created for the vX.Y milestone. Created at the time of the vX.Y-beta.0 release and maintained after the release for approximately 9 months with vX.Y.Z patch releases. + +## The Release Cycle + +![Image of one Kubernetes release cycle](release-cycle.png) + +Kubernetes releases currently happen four times per year. The release +process can be thought of as having three main phases: +* Feature Definition +* Implementation +* Stabilization + +But in reality this is an open source and agile project, with feature +planning and implementation happening at all times. Given the +project scale and globally distributed developer base, it is critical +to project velocity to not rely on a trailing stabilization phase and +rather have continuous integration testing which ensures the +project is always stable so that individual commits can be +flagged as having broken something. + +With ongoing feature definition through the year, some set of items +will bubble up as targeting a given release. The **enhancement freeze** +starts ~4 weeks into release cycle. By this point all intended +feature work for the given release has been defined in suitable +planning artifacts in conjunction with the Release Team's [enhancements +lead](https://git.k8s.io/sig-release/release-team/role-handbooks/enhancements/README.md). + +Implementation and bugfixing is ongoing across the cycle, but +culminates in a code freeze period: +* The **code freeze** starts in week ~10 and continues for ~2 weeks. + Only critical bug fixes are accepted into the release codebase. + +There are approximately two weeks following code freeze, and preceding +release, during which all remaining critical issues must be resolved +before release. This also gives time for documentation finalization. + +When the code base is sufficiently stable, the master branch re-opens +for general development and work begins there for the next release +milestone. Any remaining modifications for the current release are cherry +picked from master back to the release branch. The release is built from +the release branch. 
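+
+For example, with the cherry-pick script described in the
+[cherry pick process](cherry-picks.md), a master branch PR #98765 could be
+proposed for the release-3.14 branch like this (the same illustrative values
+used in that guide):
+
+```console
+hack/cherry_pick_pull.sh upstream/release-3.14 98765
+```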
+
+Following release, the [Release Branch
+Manager](https://git.k8s.io/sig-release/release-team/role-handbooks/branch-manager/README.md)
+cherry picks additional critical fixes from the master branch for
+a period of around 9 months, leaving an overlap of three release
+versions of forward support. Thus, each release is part of a broader
+Kubernetes lifecycle:
+
+![Image of Kubernetes release lifecycle spanning three releases](release-lifecycle.png)
+
+## Removal Of Items From The Milestone
+
+Before getting too far into the process for adding an item to the
+milestone, please note:
+
+Members of the Release Team may remove Issues from the milestone
+if they or the responsible SIG determine that the issue is not
+actually blocking the release and is unlikely to be resolved in a
+timely fashion.
+
+Members of the Release Team may remove PRs from the milestone for
+any of the following, or similar, reasons:
+
+* PR is potentially de-stabilizing and is not needed to resolve a blocking issue;
+* PR is a new, late feature PR and has not gone through the features process or the exception process;
+* There is no responsible SIG willing to take ownership of the PR and resolve any follow-up issues with it;
+* PR is not correctly labelled;
+* Work has visibly halted on the PR and delivery dates are uncertain or late.
+
+While members of the Release Team will help with labelling and
+contacting SIG(s), it is the responsibility of the submitter to
+categorize PRs, and to secure support from the relevant SIG to
+guarantee that any breakage caused by the PR will be rapidly resolved.
+
+Where additional action is required, an attempt at human to human
+escalation will be made by the release team through the following
+channels:
+
+- Comment in GitHub mentioning the SIG team and SIG members as appropriate for the issue type
+- Emailing the SIG mailing list
+  - bootstrapped with group email addresses from the [community sig list](/sig-list.md)
+  - optionally also directly addressing SIG leadership or other SIG members
+- Messaging the SIG's Slack channel
+  - bootstrapped with the Slack channel and SIG leadership from the [community sig list](/sig-list.md)
+  - optionally directly "@" mentioning SIG leadership or others by handle
+
+## Adding An Item To The Milestone
+
+### Milestone Maintainers
+
+The members of the GitHub [“kubernetes-milestone-maintainers”
+team](https://github.com/orgs/kubernetes/teams/kubernetes-milestone-maintainers/members)
+are entrusted with the responsibility of specifying the release milestone on
+GitHub artifacts. This group is [maintained by
+SIG-Release](https://git.k8s.io/sig-release/release-team/README.md#milestone-maintainers)
+and has representation from the various SIGs' leadership.
+
+### Feature additions
+
+Feature planning and definition takes many forms today, but a typical
+example might be a large piece of work described in a
+[KEP](https://git.k8s.io/community/keps), with associated
+task issues in GitHub. When the plan has reached an implementable state and
+work is underway, the feature or parts thereof are targeted for an upcoming
+milestone by creating GitHub issues and marking them with the Prow "/milestone"
+command.
+
+For the first ~4 weeks into the release cycle, the release team's
+Enhancements Lead will interact with SIGs and feature owners via GitHub,
+Slack, and SIG meetings to capture all required planning artifacts.
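+
+As an illustrative sketch (the milestone and SIG below are made-up values),
+the commands from the TL;DR table above are simply posted as GitHub comments
+on the issue or PR, e.g.:
+
+```
+/milestone v1.14
+/sig storage
+/kind feature
+```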
+ +If you have a feature to target for an upcoming release milestone, begin a +conversation with your SIG leadership and with that release's Enhancements +Lead. + +### Issue additions + +Issues are marked as targeting a milestone via the Prow +"/milestone" command. + +The release team's [Bug Triage +Lead](https://git.k8s.io/sig-release/release-team/role-handbooks/bug-triage/README.md) and overall community watch +incoming issues and triage them, as described in the contributor +guide section on [issue triage](/contributors/guide/issue-triage.md). + +Marking issues with the milestone provides the community better +visibility regarding when an issue was observed and by when the community +feels it must be resolved. During code freeze, to merge a PR it is required +that a release milestone is set. + +An open issue is no longer required for a PR, but open issues and +associated PRs should have synchronized labels. For example a high +priority bug issue might not have its associated PR merged if the PR is +only marked as lower priority. + +### PR Additions + +PRs are marked as targeting a milestone via the Prow +"/milestone" command. + +This is a blocking requirement during code freeze as described above. + +## Other Required Labels + +*Note* [Here is the list of labels and their use and purpose.](https://git.k8s.io/test-infra/label_sync/labels.md#labels-that-apply-to-all-repos-for-both-issues-and-prs) + +### SIG Owner Label + +The SIG owner label defines the SIG to which we escalate if a +milestone issue is languishing or needs additional attention. If +there are no updates after escalation, the issue may be automatically +removed from the milestone. + +These are added with the Prow "/sig" command. For example to add +the label indicating SIG Storage is responsible, comment with `/sig +storage`. + +### Priority Label + +Priority labels are used to determine an escalation path before +moving issues out of the release milestone. They are also used to +determine whether or not a release should be blocked on the resolution +of the issue. + +- `priority/critical-urgent`: Never automatically move out of a release milestone; continually escalate to contributor and SIG through all available channels. + - considered a release blocking issue + - code freeze: issue owner update frequency: daily + - would require a patch release if left undiscovered until after the minor release. +- `priority/important-soon`: Escalate to the issue owners and SIG owner; move out of milestone after several unsuccessful escalation attempts. + - not considered a release blocking issue + - would not require a patch release + - will automatically be moved out of the release milestone at code freeze after a 4 day grace period +- `priority/important-longterm`: Escalate to the issue owners; move out of the milestone after 1 attempt. + - even less urgent / critical than `priority/important-soon` + - moved out of milestone more aggressively than `priority/important-soon` + +### Issue/PR Kind Label + +The issue kind is used to help identify the types of changes going +into the release over time. This may allow the release team to +develop a better understanding of what sorts of issues we would +miss with a faster release cadence. + +For release targeted issues, including pull requests, one of the following +issue kind labels must be set: + +- `kind/api-change`: Adds, removes, or changes an API +- `kind/bug`: Fixes a newly discovered bug. +- `kind/cleanup`: Adding tests, refactoring, fixing old bugs. 
+- `kind/design`: Related to design +- `kind/documentation`: Adds documentation +- `kind/failing-test`: CI test case is failing consistently. +- `kind/feature`: New functionality. +- `kind/flake`: CI test case is showing intermittent failures. diff --git a/contributors/guide/issue-triage.md b/contributors/guide/issue-triage.md index ff67ba3e..879648a9 100644 --- a/contributors/guide/issue-triage.md +++ b/contributors/guide/issue-triage.md @@ -206,7 +206,7 @@ block the release on it. A few days before release, we will probably move all that milestone in bulk. More information can be found in the developer guide section for -[targeting issues and PRs to a milestone release](/contributors/devel/release.md). +[targeting issues and PRs to a milestone release](/contributors/devel/sig-release/release.md). ## Closing issues Issues that are identified as a support request, duplicate, not-reproducible diff --git a/contributors/guide/pull-requests.md b/contributors/guide/pull-requests.md index a24310a6..a9c26086 100644 --- a/contributors/guide/pull-requests.md +++ b/contributors/guide/pull-requests.md @@ -115,7 +115,7 @@ The GitHub robots will add and remove the `do-not-merge/hold` label as you use t ## Pull Requests and the Release Cycle -If a pull request has been reviewed, but held or not approved, it might be due to the current phase in the [Release Cycle](/contributors/devel/release.md). Occasionally, a SIG may freeze their own code base when working towards a specific feature or goal that could impact other development. During this time, your pull request could remain unmerged while their release work is completed. +If a pull request has been reviewed, but held or not approved, it might be due to the current phase in the [Release Cycle](/contributors/devel/sig-release/release.md). Occasionally, a SIG may freeze their own code base when working towards a specific feature or goal that could impact other development. During this time, your pull request could remain unmerged while their release work is completed. If you feel your pull request is in this state, contact the appropriate [SIG](https://git.k8s.io/community/sig-list.md) or [SIG-Release](https://git.k8s.io/sig-release) for clarification. -- cgit v1.2.3 From be45fe1740776f5232abdd1c62eff745b00ba74f Mon Sep 17 00:00:00 2001 From: Michael Taufen Date: Tue, 29 Jan 2019 09:34:19 -0800 Subject: sigs.yaml update wg-component-standard --- sig-list.md | 2 +- sigs.yaml | 15 ++++++++++++--- wg-component-standard/README.md | 24 ++++++++++++++++++++++-- 3 files changed, 35 insertions(+), 6 deletions(-) diff --git a/sig-list.md b/sig-list.md index 199cf234..48b738a7 100644 --- a/sig-list.md +++ b/sig-list.md @@ -59,7 +59,7 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |------|------------------|-----------|---------|----------| |[App Def](wg-app-def/README.md)||* [Antoine Legrand](https://github.com/ant31), CoreOS
* [Bryan Liles](https://github.com/bryanl), VMware
* [Gareth Rushgrove](https://github.com/garethr), Docker
|* [Slack](https://kubernetes.slack.com/messages/wg-app-def)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-app-def)|* Regular WG Meeting: [Wednesdays at 16:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[Apply](wg-apply/README.md)||* [Daniel Smith](https://github.com/lavalamp), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-apply)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-apply)|* Regular WG Meeting: [Tuesdays at 9:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
-|[Component Standard](wg-component-standard/README.md)||* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)
* [Dr. Stefan Schimanski](https://github.com/sttts), Red Hat
* [Michael Taufen](https://github.com/mtaufen), Google
|* [Slack](https://kubernetes.slack.com/messages/)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-component-standard)|* Regular WG Meeting: [Tuesdays at 08:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/18TsodX0fqQgViQ7HHUTAhiAwkf6bNhPXH4vNVTI7GwI)
+|[Component Standard](wg-component-standard/README.md)|* Architecture
* API Machinery
* Cluster Lifecycle
|* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)
* [Dr. Stefan Schimanski](https://github.com/sttts), Red Hat
* [Michael Taufen](https://github.com/mtaufen), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-component-standard)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-component-standard)|* Regular WG Meeting: [Tuesdays at 08:30 PT (Pacific Time) (weekly)](https://zoom.us/j/705540322)
|[Container Identity](wg-container-identity/README.md)||* [Clayton Coleman](https://github.com/smarterclayton), Red Hat
* [Greg Castle](https://github.com/destijl), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-container-identity)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-container-identity)|* Regular WG Meeting: [Wednesdays at 10:00 PDT (bi-weekly (On demand))](https://zoom.us/my/k8s.sig.auth)
|[IoT Edge](wg-iot-edge/README.md)||* [Cindy Xing](https://github.com/cindyxing), Huawei
* [Dejan Bosanac](https://github.com/dejanb), Red Hat
* [Preston Holmes](https://github.com/ptone), Google
* [Steve Wong](https://github.com/cantbewong), VMWare
|* [Slack](https://kubernetes.slack.com/messages/wg-iot-edge)
* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-iot-edge)|* Regular WG Meeting: [Wednesdays at 17:00 UTC (every four weeks)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
* APAC WG Meeting: [Wednesdays at 5:00 UTC (every four weeks)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
|[K8s Infra](wg-k8s-infra/README.md)|* Architecture
* Contributor Experience
* Release
* Testing
|* [Davanum Srinivas](https://github.com/dims), Huawei
* [Aaron Crickenberger](https://github.com/spiffxp), Google
|* [Slack](https://kubernetes.slack.com/messages/wg-k8s-infra)
* [Mailing List](https://groups.google.com/forum/#!forum/wg-k8s-infra)|* Regular WG Meeting: [Wednesdays at 8:30 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)
diff --git a/sigs.yaml b/sigs.yaml index 6d502400..635b6065 100644 --- a/sigs.yaml +++ b/sigs.yaml @@ -2025,7 +2025,7 @@ sigs: - name: dashboard owners: - https://raw.githubusercontent.com/kubernetes/dashboard/master/OWNERS - - https://raw.githubusercontent.com/kubernetes-sigs/dashboard-metrics-scraper/master/OWNERS + - https://raw.githubusercontent.com/kubernetes-sigs/dashboard-metrics-scraper/master/OWNERS - name: VMware dir: sig-vmware mission_statement: > @@ -2411,11 +2411,11 @@ workinggroups: mailing_list: https://groups.google.com/forum/#!forum/kubernetes-wg-security-audit - name: Component Standard dir: wg-component-standard + label: component-standard mission_statement: > Develop a standard foundation (philosophy and libraries) for core Kubernetes components to build on top of. Areas to standardize include configuration (flags, ComponentConfig APIs, ...), status endpoints (healthz, configz, ...), integration points (delegated authn/z, ...), and logging. Details are outlined in KEP 0032: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/0032-create-a-k8s-io-component-repo.md. - charter_link: leadership: chairs: - name: Lucas Käldström @@ -2427,15 +2427,24 @@ workinggroups: - name: Michael Taufen github: mtaufen company: Google + stakeholder_sigs: + - Architecture + - API Machinery + - Cluster Lifecycle meetings: - description: Regular WG Meeting day: Tuesday time: "08:30" tz: "PT (Pacific Time)" frequency: weekly - url: https://docs.google.com/document/d/18TsodX0fqQgViQ7HHUTAhiAwkf6bNhPXH4vNVTI7GwI + url: https://zoom.us/j/705540322 + archive_url: https://docs.google.com/document/d/18TsodX0fqQgViQ7HHUTAhiAwkf6bNhPXH4vNVTI7GwI contact: + slack: wg-component-standard mailing_list: https://groups.google.com/forum/#!forum/kubernetes-wg-component-standard + teams: + - name: wg-component-standard + description: Component Standard Discussion - name: K8s Infra dir: wg-k8s-infra mission_statement: > diff --git a/wg-component-standard/README.md b/wg-component-standard/README.md index d6d08884..d65ceacc 100644 --- a/wg-component-standard/README.md +++ b/wg-component-standard/README.md @@ -10,8 +10,14 @@ To understand how this file is generated, see https://git.k8s.io/community/gener Develop a standard foundation (philosophy and libraries) for core Kubernetes components to build on top of. Areas to standardize include configuration (flags, ComponentConfig APIs, ...), status endpoints (healthz, configz, ...), integration points (delegated authn/z, ...), and logging. Details are outlined in KEP 0032: https://github.com/kubernetes/enhancements/blob/master/keps/sig-cluster-lifecycle/0032-create-a-k8s-io-component-repo.md. +## Stakeholder SIGs +* SIG Architecture +* SIG API Machinery +* SIG Cluster Lifecycle + ## Meetings -* Regular WG Meeting: [Tuesdays at 08:30 PT (Pacific Time)](https://docs.google.com/document/d/18TsodX0fqQgViQ7HHUTAhiAwkf6bNhPXH4vNVTI7GwI) (weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=08:30&tz=PT%20%28Pacific%20Time%29). +* Regular WG Meeting: [Tuesdays at 08:30 PT (Pacific Time)](https://zoom.us/j/705540322) (weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=08:30&tz=PT%20%28Pacific%20Time%29). + * [Meeting notes and Agenda](https://docs.google.com/document/d/18TsodX0fqQgViQ7HHUTAhiAwkf6bNhPXH4vNVTI7GwI). 
## Organizers @@ -20,8 +26,22 @@ Develop a standard foundation (philosophy and libraries) for core Kubernetes com * Michael Taufen (**[@mtaufen](https://github.com/mtaufen)**), Google ## Contact -* [Slack](https://kubernetes.slack.com/messages/) +* [Slack](https://kubernetes.slack.com/messages/wg-component-standard) * [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-wg-component-standard) +* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/wg%2Fcomponent-standard) + +## GitHub Teams + +The below teams can be mentioned on issues and PRs in order to get attention from the right people. +Note that the links to display team membership will only work if you are a member of the org. + +The google groups contain the archive of Github team notifications. +Mentioning a team on Github will CC its group. +Monitor these for Github activity if you are not a member of the team. + +| Team Name | Details | Google Groups | Description | +| --------- |:-------:|:-------------:| ----------- | +| @kubernetes/wg-component-standard | [link](https://github.com/orgs/kubernetes/teams/wg-component-standard) | [link](https://groups.google.com/forum/#!forum/kubernetes-wg-component-standard) | Component Standard Discussion | -- cgit v1.2.3 From 07aa22eca67efca038022c3b2d892d9cf202976e Mon Sep 17 00:00:00 2001 From: eduartua Date: Tue, 29 Jan 2019 12:10:17 -0600 Subject: folder /devel/sig-scalability created - file kubemark-guide.md moved to it --- contributors/devel/kubemark-guide.md | 257 +-------------------- .../devel/sig-scalability/kubemark-guide.md | 256 ++++++++++++++++++++ 2 files changed, 258 insertions(+), 255 deletions(-) create mode 100644 contributors/devel/sig-scalability/kubemark-guide.md diff --git a/contributors/devel/kubemark-guide.md b/contributors/devel/kubemark-guide.md index ce5727e8..a92b19f9 100644 --- a/contributors/devel/kubemark-guide.md +++ b/contributors/devel/kubemark-guide.md @@ -1,256 +1,3 @@ -# Kubemark User Guide - -## Introduction - -Kubemark is a performance testing tool which allows users to run experiments on -simulated clusters. The primary use case is scalability testing, as simulated -clusters can be much bigger than the real ones. The objective is to expose -problems with the master components (API server, controller manager or -scheduler) that appear only on bigger clusters (e.g. small memory leaks). - -This document serves as a primer to understand what Kubemark is, what it is not, -and how to use it. - -## Architecture - -At a very high level, a Kubemark cluster consists of two parts: a real master -and a set of “Hollow” Nodes. The prefix “Hollow” to any component means an -implementation/instantiation of the actual component with all “moving” -parts mocked out. The best example is HollowKubelet, which pretends to be an -ordinary Kubelet, but does not start anything, nor mount any volumes - it just -pretends that it does. More detailed design and implementation details are at the end -of this document. - -Currently, master components run on a dedicated machine as pods that are -created/managed by kubelet, which itself runs as either a systemd or a supervisord -service on the master VM depending on the VM distro (though currently it is -only systemd as we use a GCI image). Having a dedicated machine for the master -has a slight advantage over running the master components on an external cluster, -which is being able to completely isolate master resources from everything else.
-The HollowNodes on the other hand are run on an ‘external’ Kubernetes cluster -as pods in an isolated namespace (named kubemark). This idea of using pods on a -real cluster to behave (or act) as nodes on the kubemark cluster lies at the heart of -kubemark's design. - -## Requirements - -To run Kubemark you need a Kubernetes cluster (called `external cluster`) -for running all your HollowNodes and a dedicated machine for a master. -The master machine has to be directly routable from the HollowNodes. You also need -access to a Docker repository (which is gcr.io in the case of GCE) that has the -container images for etcd, hollow-node and node-problem-detector. - -Currently, scripts are written to be easily usable on GCE, but it should be -relatively straightforward to port them to different providers or bare metal. -There is an ongoing effort to refactor kubemark code into provider-specific (gce) -and provider-independent code, which should make it relatively simple to run -kubemark clusters on other cloud providers as well. - -## Common use cases and helper scripts - -Common workflow for Kubemark is: -- starting a Kubemark cluster (on GCE) -- running e2e tests on Kubemark cluster -- monitoring test execution and debugging problems -- turning down Kubemark cluster - -(For now) the descriptions include comments helpful for anyone who’ll -want to port Kubemark to different providers. -(Later) When the refactoring mentioned in the above section finishes, we would replace -these comments with a clean API that would allow kubemark to run on top of any provider. - -### Starting a Kubemark cluster - -To start a Kubemark cluster on GCE you need to create an external kubernetes -cluster (it can be GCE, GKE or anything else) by yourself, make sure that kubeconfig -points to it by default, build a kubernetes release (e.g. by running -`make quick-release`) and run the `test/kubemark/start-kubemark.sh` script. -This script will create a VM for the master (along with mounted PD and firewall rules set), -then start kubelet and run the pods for the master components. Following this, it -sets up the HollowNodes as Pods on the external cluster and does all the setup necessary -to let them talk to the kubemark apiserver. It will use the configuration stored in -`cluster/kubemark/config-default.sh` - you can tweak it however you want, but note that -some features may not be implemented yet, as implementation of Hollow components/mocks -will probably be lagging behind the ‘real’ one. For performance tests, the interesting variables -are `NUM_NODES` and `KUBEMARK_MASTER_SIZE`. After the start-kubemark script is finished, -you’ll have a ready Kubemark cluster, and a kubeconfig file for talking to the Kubemark -cluster is stored in `test/kubemark/resources/kubeconfig.kubemark`. - -Currently we're running HollowNode with a limit of 0.09 CPU core/pod and 220MB of memory. -However, if we also take into account the resources absorbed by default cluster addons -and fluentD running on the 'external' cluster, this limit becomes ~0.1 CPU core/pod, -thus allowing ~10 HollowNodes to run per core (on an "n1-standard-8" VM node). - -#### Behind the scenes details: - -The start-kubemark.sh script does quite a lot of things: - -- Prepare a master machine named MASTER_NAME (this variable's value should be set by this point): - (*the steps below use gcloud, and should be easy to do outside of GCE*) - 1. Creates a Persistent Disk for use by the master (one more for etcd-events, if flagged) - 2.
Creates a static IP address for the master in the cluster and assigns it to the variable MASTER_IP - 3. Creates a VM instance for the master, configured with the PD and IP created above. - 4. Sets a firewall rule on the master to open port 443\* for all TCP traffic by default. - -\* Port 443 is a secured port on the master machine which is used for all -external communication with the API server. In the last sentence *external* -means all traffic coming from other machines, including all the Nodes, not only -from outside of the cluster. Currently local components, i.e. the ControllerManager -and Scheduler, talk to the API server using the insecure port 8080. - -- [Optional to read] Establish necessary certs/keys required for setting up the PKI for kubemark cluster: - (*the steps below are independent of GCE and work for all providers*) - 1. Generate a randomly named temporary directory for storing PKI certs/keys which is delete-trapped on EXIT. - 2. Create a bearer token for 'admin' in master. - 3. Generate certificate for CA and (certificate + private-key) pair for each of master, kubelet and kubecfg. - 4. Generate kubelet and kubeproxy tokens for master. - 5. Write a kubeconfig locally to `test/kubemark/resources/kubeconfig.kubemark` for enabling local kubectl use. - -- Set up environment and start master components (through `start-kubemark-master.sh` script): - (*the steps below use gcloud for SSH and SCP to master, and should be easy to do outside of GCE*) - 1. SSH to the master machine and create a new directory (`/etc/srv/kubernetes`) and write all the - certs/keys/tokens/passwords to it. - 2. SCP all the master pod manifests, shell scripts (`start-kubemark-master.sh`, `configure-kubectl.sh`, etc), - config files for passing env variables (`kubemark-master-env.sh`) from the local machine to the master. - 3. SSH to the master machine and run the startup script `start-kubemark-master.sh` (and possibly others). - - Note: The directory structure and the functions performed by the startup script(s) can vary based on master distro. - We currently support the GCI image `gci-dev-56-8977-0-0` in GCE. - -- Set up and start HollowNodes (as pods) on the external cluster: - (*the steps below (except 2nd and 3rd) are independent of GCE and work for all providers*) - 1. Identify the right kubemark binary from the current kubernetes repo for the platform linux/amd64. - 2. Create a Docker image for HollowNode using this binary and upload it to a remote Docker repository. - (We use gcr.io/ as our remote docker repository in GCE; it should be different for other providers) - 3. [One-off] Create and upload a Docker image for NodeProblemDetector (see kubernetes/node-problem-detector repo), - which is one of the containers in the HollowNode pod, besides HollowKubelet and HollowProxy. However we - use it with a hollow config that essentially has an empty set of rules and conditions to be detected. - This step is required only for other cloud providers, as the docker image for GCE already exists on GCR. - 4. Create secret which stores kubeconfig for use by HollowKubelet/HollowProxy, addons, and configMaps - for the HollowNode and the HollowNodeProblemDetector. - 5. Create a ReplicationController for HollowNodes that starts them up, after replacing all variables in - the hollow-node_template.json resource. - 6. Wait until all HollowNodes are in the Running phase.
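For illustration, the final step above amounts to the following check against the external cluster. This is a minimal sketch using client-go; the kubeconfig path and the assumption that the default kubeconfig points at the external cluster are illustrative (the script itself drives this kind of check with shell tooling), while the `kubemark` namespace is the one described above:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig points at the 'external' cluster.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// HollowNodes run as pods in the isolated 'kubemark' namespace.
	pods, err := client.CoreV1().Pods("kubemark").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	running := 0
	for _, pod := range pods.Items {
		if pod.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	fmt.Printf("%d/%d hollow-node pods are Running\n", running, len(pods.Items))
}
```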
- -### Running e2e tests on Kubemark cluster - -To run a standard e2e test on the Kubemark cluster created in the previous step -you execute the `test/kubemark/run-e2e-tests.sh` script. It will configure ginkgo to -use the Kubemark cluster instead of the default one and start an e2e test. This -script should not need any changes to work on other cloud providers. - -By default (if nothing is passed to it) the script will run a Density '30 -test. If you want to run a different e2e test you just need to provide the flags you want to be -passed to the `hack/ginkgo-e2e.sh` script, e.g. `--ginkgo.focus="Load"` to run the -Load test. - -By default, at the end of each test, it will delete namespaces and everything -under them (e.g. events, replication controllers) on the Kubemark master, which takes -a lot of time. Such work isn't needed in most cases: if you delete your -Kubemark cluster after running `run-e2e-tests.sh`; you don't care about -namespace deletion performance, specifically related to etcd; etc. There is a -flag that enables you to avoid namespace deletion: `--delete-namespace=false`. -Adding the flag should let you see in logs: `Found DeleteNamespace=false, -skipping namespace deletion!` - -### Monitoring test execution and debugging problems - -Run-e2e-tests prints the same output on Kubemark as on an ordinary e2e cluster, but -if you need to dig deeper you need to learn how to debug HollowNodes and how -the Master machine (currently) differs from the ordinary one. - -If you need to debug the master machine you can do similar things as you do on your -ordinary master. The difference between the Kubemark setup and an ordinary setup is -that in Kubemark etcd is run as a plain docker container, and all master -components are run as normal processes. There's no Kubelet overseeing them. Logs -are stored in exactly the same place, i.e. the `/var/log/` directory. Because -binaries are not supervised by anything they won't be restarted in the case of a -crash. - -To help you debug from inside the cluster, the startup script puts a -`~/configure-kubectl.sh` script on the master. It downloads the `gcloud` and -`kubectl` tools and configures kubectl to work on the unsecured master port (useful -if there are problems with security). After the script is run you can use -the kubectl command from the master machine to play with the cluster. - -Debugging HollowNodes is a bit trickier: if you experience a problem on -one of them you need to learn which hollow-node pod corresponds to a given -HollowNode known by the Master. During self-registration HollowNodes provide -their cluster IPs as Names, which means that if you need to find a HollowNode -named `10.2.4.5` you just need to find a Pod in the external cluster with this -cluster IP. There's a helper script -`test/kubemark/get-real-pod-for-hollow-node.sh` that does this for you. - -When you have a Pod name you can use `kubectl logs` on the external cluster to get -logs, or use a `kubectl describe pod` call to find an external Node on which -this particular HollowNode is running so you can ssh to it. - -E.g. suppose you want to see the logs of the HollowKubelet on which pod `my-pod` is running.
-To do so you can execute: - -``` -$ kubectl --kubeconfig=kubernetes/test/kubemark/resources/kubeconfig.kubemark describe pod my-pod -``` - -This outputs the pod description, which includes a line: - -``` -Node: 1.2.3.4/1.2.3.4 -``` - -To learn the `hollow-node` pod corresponding to node `1.2.3.4` you use the -aforementioned script: - -``` -$ kubernetes/test/kubemark/get-real-pod-for-hollow-node.sh 1.2.3.4 -``` - -which will output the line: - -``` -hollow-node-1234 -``` - -Now you just use an ordinary kubectl command to get the logs: - -``` -kubectl --namespace=kubemark logs hollow-node-1234 -``` - -All those things should work exactly the same on all cloud providers. - -### Turning down Kubemark cluster - -On GCE you just need to execute the `test/kubemark/stop-kubemark.sh` script, which -will delete the HollowNode ReplicationController and all the resources for you. On -other providers you’ll need to delete all this stuff by yourself. As part of -the effort mentioned above to refactor kubemark into provider-independent and -provider-specific parts, the resource deletion logic specific to the provider -would move out into a clean API. - -## Some current implementation details and future roadmap - -Kubemark master uses exactly the same binaries as ordinary Kubernetes does. This -means that it will never be out of date. On the other hand HollowNodes use the -existing fake Kubelet (called SimpleKubelet), which mocks its runtime -manager with `pkg/kubelet/dockertools/fake_manager.go`, where most logic sits. -Because there's no easy way of mocking other managers (e.g. VolumeManager), they -are not supported in Kubemark (e.g. we can't schedule Pods with volumes in them -yet). - -We currently plan to extend kubemark along the following directions: -- As you may have noticed at places above, we aim to make kubemark more structured - and easy to run across various providers without having to tweak the setup scripts, - using a well-defined kubemark-provider API. -- Allow kubemark to run on various distros (GCI, debian, redhat, etc) for any - given provider. -- Make Kubemark performance on ci-tests mimic real cluster ci-tests on metrics such as - CPU, memory and network bandwidth usage, and realize this goal through measurable - objectives (e.g. the kubemark metric should vary no more than X% from the real - cluster metric). We could also use metrics reported by Prometheus. -- Improve logging of CI-test metrics (such as aggregated API call latencies, scheduling - call latencies, %ile for CPU/mem usage of different master components in density/load - tests) by packing them into well-structured artifacts instead of the (current) dumping - to logs. -- Create a Dashboard that allows easy viewing and comparison of these metrics across tests. +This file has moved to https://git.k8s.io/community/contributors/devel/sig-scalability/kubemark-guide.md. +This file is a placeholder to preserve links. Please remove by April 29, 2019 or the release of kubernetes 1.13, whichever comes first. \ No newline at end of file diff --git a/contributors/devel/sig-scalability/kubemark-guide.md b/contributors/devel/sig-scalability/kubemark-guide.md new file mode 100644 index 00000000..ce5727e8 --- /dev/null +++ b/contributors/devel/sig-scalability/kubemark-guide.md @@ -0,0 +1,256 @@ +# Kubemark User Guide + +## Introduction + +Kubemark is a performance testing tool which allows users to run experiments on +simulated clusters. The primary use case is scalability testing, as simulated +clusters can be much bigger than the real ones.
The objective is to expose +problems with the master components (API server, controller manager or +scheduler) that appear only on bigger clusters (e.g. small memory leaks). + +This document serves as a primer to understand what Kubemark is, what it is not, +and how to use it. + +## Architecture + +At a very high level, a Kubemark cluster consists of two parts: a real master +and a set of “Hollow” Nodes. The prefix “Hollow” to any component means an +implementation/instantiation of the actual component with all “moving” +parts mocked out. The best example is HollowKubelet, which pretends to be an +ordinary Kubelet, but does not start anything, nor mount any volumes - it just +pretends that it does. More detailed design and implementation details are at the end +of this document. + +Currently, master components run on a dedicated machine as pods that are +created/managed by kubelet, which itself runs as either a systemd or a supervisord +service on the master VM depending on the VM distro (though currently it is +only systemd as we use a GCI image). Having a dedicated machine for the master +has a slight advantage over running the master components on an external cluster, +which is being able to completely isolate master resources from everything else. +The HollowNodes on the other hand are run on an ‘external’ Kubernetes cluster +as pods in an isolated namespace (named kubemark). This idea of using pods on a +real cluster to behave (or act) as nodes on the kubemark cluster lies at the heart of +kubemark's design. + +## Requirements + +To run Kubemark you need a Kubernetes cluster (called `external cluster`) +for running all your HollowNodes and a dedicated machine for a master. +The master machine has to be directly routable from the HollowNodes. You also need +access to a Docker repository (which is gcr.io in the case of GCE) that has the +container images for etcd, hollow-node and node-problem-detector. + +Currently, scripts are written to be easily usable on GCE, but it should be +relatively straightforward to port them to different providers or bare metal. +There is an ongoing effort to refactor kubemark code into provider-specific (gce) +and provider-independent code, which should make it relatively simple to run +kubemark clusters on other cloud providers as well. + +## Common use cases and helper scripts + +Common workflow for Kubemark is: +- starting a Kubemark cluster (on GCE) +- running e2e tests on Kubemark cluster +- monitoring test execution and debugging problems +- turning down Kubemark cluster + +(For now) the descriptions include comments helpful for anyone who’ll +want to port Kubemark to different providers. +(Later) When the refactoring mentioned in the above section finishes, we would replace +these comments with a clean API that would allow kubemark to run on top of any provider. + +### Starting a Kubemark cluster + +To start a Kubemark cluster on GCE you need to create an external kubernetes +cluster (it can be GCE, GKE or anything else) by yourself, make sure that kubeconfig +points to it by default, build a kubernetes release (e.g. by running +`make quick-release`) and run the `test/kubemark/start-kubemark.sh` script. +This script will create a VM for the master (along with mounted PD and firewall rules set), +then start kubelet and run the pods for the master components. Following this, it +sets up the HollowNodes as Pods on the external cluster and does all the setup necessary +to let them talk to the kubemark apiserver.
It will use the configuration stored in +`cluster/kubemark/config-default.sh` - you can tweak it however you want, but note that +some features may not be implemented yet, as implementation of Hollow components/mocks +will probably be lagging behind the ‘real’ one. For performance tests, the interesting variables +are `NUM_NODES` and `KUBEMARK_MASTER_SIZE`. After the start-kubemark script is finished, +you’ll have a ready Kubemark cluster, and a kubeconfig file for talking to the Kubemark +cluster is stored in `test/kubemark/resources/kubeconfig.kubemark`. + +Currently we're running HollowNode with a limit of 0.09 CPU core/pod and 220MB of memory. +However, if we also take into account the resources absorbed by default cluster addons +and fluentD running on the 'external' cluster, this limit becomes ~0.1 CPU core/pod, +thus allowing ~10 HollowNodes to run per core (on an "n1-standard-8" VM node). + +#### Behind the scenes details: + +The start-kubemark.sh script does quite a lot of things: + +- Prepare a master machine named MASTER_NAME (this variable's value should be set by this point): + (*the steps below use gcloud, and should be easy to do outside of GCE*) + 1. Creates a Persistent Disk for use by the master (one more for etcd-events, if flagged) + 2. Creates a static IP address for the master in the cluster and assigns it to the variable MASTER_IP + 3. Creates a VM instance for the master, configured with the PD and IP created above. + 4. Sets a firewall rule on the master to open port 443\* for all TCP traffic by default. + +\* Port 443 is a secured port on the master machine which is used for all +external communication with the API server. In the last sentence *external* +means all traffic coming from other machines, including all the Nodes, not only +from outside of the cluster. Currently local components, i.e. the ControllerManager +and Scheduler, talk to the API server using the insecure port 8080. + +- [Optional to read] Establish necessary certs/keys required for setting up the PKI for kubemark cluster: + (*the steps below are independent of GCE and work for all providers*) + 1. Generate a randomly named temporary directory for storing PKI certs/keys which is delete-trapped on EXIT. + 2. Create a bearer token for 'admin' in master. + 3. Generate certificate for CA and (certificate + private-key) pair for each of master, kubelet and kubecfg. + 4. Generate kubelet and kubeproxy tokens for master. + 5. Write a kubeconfig locally to `test/kubemark/resources/kubeconfig.kubemark` for enabling local kubectl use. + +- Set up environment and start master components (through `start-kubemark-master.sh` script): + (*the steps below use gcloud for SSH and SCP to master, and should be easy to do outside of GCE*) + 1. SSH to the master machine and create a new directory (`/etc/srv/kubernetes`) and write all the + certs/keys/tokens/passwords to it. + 2. SCP all the master pod manifests, shell scripts (`start-kubemark-master.sh`, `configure-kubectl.sh`, etc), + config files for passing env variables (`kubemark-master-env.sh`) from the local machine to the master. + 3. SSH to the master machine and run the startup script `start-kubemark-master.sh` (and possibly others). + + Note: The directory structure and the functions performed by the startup script(s) can vary based on master distro. + We currently support the GCI image `gci-dev-56-8977-0-0` in GCE. + +- Set up and start HollowNodes (as pods) on the external cluster: + (*the steps below (except 2nd and 3rd) are independent of GCE and work for all providers*) + 1.
Identify the right kubemark binary from the current kubernetes repo for the platform linux/amd64. + 2. Create a Docker image for HollowNode using this binary and upload it to a remote Docker repository. + (We use gcr.io/ as our remote docker repository in GCE; it should be different for other providers) + 3. [One-off] Create and upload a Docker image for NodeProblemDetector (see kubernetes/node-problem-detector repo), + which is one of the containers in the HollowNode pod, besides HollowKubelet and HollowProxy. However we + use it with a hollow config that essentially has an empty set of rules and conditions to be detected. + This step is required only for other cloud providers, as the docker image for GCE already exists on GCR. + 4. Create secret which stores kubeconfig for use by HollowKubelet/HollowProxy, addons, and configMaps + for the HollowNode and the HollowNodeProblemDetector. + 5. Create a ReplicationController for HollowNodes that starts them up, after replacing all variables in + the hollow-node_template.json resource. + 6. Wait until all HollowNodes are in the Running phase. + +### Running e2e tests on Kubemark cluster + +To run a standard e2e test on the Kubemark cluster created in the previous step +you execute the `test/kubemark/run-e2e-tests.sh` script. It will configure ginkgo to +use the Kubemark cluster instead of the default one and start an e2e test. This +script should not need any changes to work on other cloud providers. + +By default (if nothing is passed to it) the script will run a Density '30 +test. If you want to run a different e2e test you just need to provide the flags you want to be +passed to the `hack/ginkgo-e2e.sh` script, e.g. `--ginkgo.focus="Load"` to run the +Load test. + +By default, at the end of each test, it will delete namespaces and everything +under them (e.g. events, replication controllers) on the Kubemark master, which takes +a lot of time. Such work isn't needed in most cases: if you delete your +Kubemark cluster after running `run-e2e-tests.sh`; you don't care about +namespace deletion performance, specifically related to etcd; etc. There is a +flag that enables you to avoid namespace deletion: `--delete-namespace=false`. +Adding the flag should let you see in logs: `Found DeleteNamespace=false, +skipping namespace deletion!` + +### Monitoring test execution and debugging problems + +Run-e2e-tests prints the same output on Kubemark as on an ordinary e2e cluster, but +if you need to dig deeper you need to learn how to debug HollowNodes and how +the Master machine (currently) differs from the ordinary one. + +If you need to debug the master machine you can do similar things as you do on your +ordinary master. The difference between the Kubemark setup and an ordinary setup is +that in Kubemark etcd is run as a plain docker container, and all master +components are run as normal processes. There's no Kubelet overseeing them. Logs +are stored in exactly the same place, i.e. the `/var/log/` directory. Because +binaries are not supervised by anything they won't be restarted in the case of a +crash. + +To help you debug from inside the cluster, the startup script puts a +`~/configure-kubectl.sh` script on the master. It downloads the `gcloud` and +`kubectl` tools and configures kubectl to work on the unsecured master port (useful +if there are problems with security). After the script is run you can use +the kubectl command from the master machine to play with the cluster.
+ +Debugging HollowNodes is a bit trickier: if you experience a problem on +one of them you need to learn which hollow-node pod corresponds to a given +HollowNode known by the Master. During self-registration HollowNodes provide +their cluster IPs as Names, which means that if you need to find a HollowNode +named `10.2.4.5` you just need to find a Pod in the external cluster with this +cluster IP. There's a helper script +`test/kubemark/get-real-pod-for-hollow-node.sh` that does this for you. + +When you have a Pod name you can use `kubectl logs` on the external cluster to get +logs, or use a `kubectl describe pod` call to find an external Node on which +this particular HollowNode is running so you can ssh to it. + +E.g. suppose you want to see the logs of the HollowKubelet on which pod `my-pod` is running. +To do so you can execute: + +``` +$ kubectl --kubeconfig=kubernetes/test/kubemark/resources/kubeconfig.kubemark describe pod my-pod +``` + +This outputs the pod description, which includes a line: + +``` +Node: 1.2.3.4/1.2.3.4 +``` + +To learn the `hollow-node` pod corresponding to node `1.2.3.4` you use the +aforementioned script: + +``` +$ kubernetes/test/kubemark/get-real-pod-for-hollow-node.sh 1.2.3.4 +``` + +which will output the line: + +``` +hollow-node-1234 +``` + +Now you just use an ordinary kubectl command to get the logs: + +``` +kubectl --namespace=kubemark logs hollow-node-1234 +``` + +All those things should work exactly the same on all cloud providers. + +### Turning down Kubemark cluster + +On GCE you just need to execute the `test/kubemark/stop-kubemark.sh` script, which +will delete the HollowNode ReplicationController and all the resources for you. On +other providers you’ll need to delete all this stuff by yourself. As part of +the effort mentioned above to refactor kubemark into provider-independent and +provider-specific parts, the resource deletion logic specific to the provider +would move out into a clean API. + +## Some current implementation details and future roadmap + +Kubemark master uses exactly the same binaries as ordinary Kubernetes does. This +means that it will never be out of date. On the other hand HollowNodes use the +existing fake Kubelet (called SimpleKubelet), which mocks its runtime +manager with `pkg/kubelet/dockertools/fake_manager.go`, where most logic sits. +Because there's no easy way of mocking other managers (e.g. VolumeManager), they +are not supported in Kubemark (e.g. we can't schedule Pods with volumes in them +yet). + +We currently plan to extend kubemark along the following directions: +- As you may have noticed at places above, we aim to make kubemark more structured + and easy to run across various providers without having to tweak the setup scripts, + using a well-defined kubemark-provider API. +- Allow kubemark to run on various distros (GCI, debian, redhat, etc) for any + given provider. +- Make Kubemark performance on ci-tests mimic real cluster ci-tests on metrics such as + CPU, memory and network bandwidth usage, and realize this goal through measurable + objectives (e.g. the kubemark metric should vary no more than X% from the real + cluster metric). We could also use metrics reported by Prometheus. +- Improve logging of CI-test metrics (such as aggregated API call latencies, scheduling + call latencies, %ile for CPU/mem usage of different master components in density/load + tests) by packing them into well-structured artifacts instead of the (current) dumping + to logs. +- Create a Dashboard that allows easy viewing and comparison of these metrics across tests.
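The lookup that `get-real-pod-for-hollow-node.sh` performs can also be expressed in a few lines of Go. This is an illustrative sketch only (the function name is invented, and the real helper is a shell script driving kubectl): since a HollowNode registers under its pod IP, the matching pod is the one in the `kubemark` namespace whose IP equals the node name.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// realPodForHollowNode is a hypothetical helper mirroring the script: it
// returns the name of the kubemark-namespace pod whose IP matches the
// HollowNode's name.
func realPodForHollowNode(client kubernetes.Interface, nodeName string) (string, error) {
	pods, err := client.CoreV1().Pods("kubemark").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, pod := range pods.Items {
		if pod.Status.PodIP == nodeName { // e.g. nodeName == "10.2.4.5"
			return pod.Name, nil
		}
	}
	return "", fmt.Errorf("no hollow-node pod found with IP %s", nodeName)
}
```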
+ -- cgit v1.2.3 From 4ed8007b76d7d4b0b97d96d430e9a022d06ee75d Mon Sep 17 00:00:00 2001 From: eduartua Date: Tue, 29 Jan 2019 12:24:48 -0600 Subject: file /devel/profiling.md moved to /devel/sig-scalability. All tombstone files created and URLs updated. --- contributors/devel/README.md | 2 +- contributors/devel/profiling.md | 77 +------------------- contributors/devel/sig-scalability/profiling.md | 76 ++++++++++++++++++++++ 3 files changed, 79 insertions(+), 76 deletions(-) create mode 100644 contributors/devel/sig-scalability/profiling.md diff --git a/contributors/devel/README.md b/contributors/devel/README.md index 626adaad..afbc7145 100644 --- a/contributors/devel/README.md +++ b/contributors/devel/README.md @@ -34,7 +34,7 @@ Guide](http://kubernetes.io/docs/admin/). * **Logging Conventions** ([logging.md](logging.md)): Glog levels. -* **Profiling Kubernetes** ([profiling.md](profiling.md)): How to plug in go pprof profiler to Kubernetes. +* **Profiling Kubernetes** ([profiling.md](sig-scalability/profiling.md)): How to plug in go pprof profiler to Kubernetes. * **Instrumenting Kubernetes with a new metric** ([instrumentation.md](instrumentation.md)): How to add a new metrics to the diff --git a/contributors/devel/profiling.md b/contributors/devel/profiling.md index f7c8b2e5..9951eb27 100644 --- a/contributors/devel/profiling.md +++ b/contributors/devel/profiling.md @@ -1,76 +1,3 @@ -# Profiling Kubernetes -This document explains how to plug in the profiler and how to profile Kubernetes services. To get familiar with the tools mentioned below, it is strongly recommended to read [Profiling Go Programs](https://blog.golang.org/profiling-go-programs). - -## Profiling library - -Go comes with a built-in 'net/http/pprof' profiling library and profiling web service. The way the service works is by binding the debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to the handy 'go tool pprof', which can graphically represent the result. - -## Adding profiling to the APIserver - -TL;DR: Add lines: - -```go -m.mux.HandleFunc("/debug/pprof/", pprof.Index) -m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) -m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) -``` - -to the `init(c *Config)` method in 'pkg/master/master.go' and import the 'net/http/pprof' package. - -In most use cases, to use the profiler service it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/kubelet/server/server.go' more servers are created and started as separate goroutines. The one that is usually serving external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding profiler handler functions to this multiplexer. This is exactly what the lines after the TL;DR do. - -## Connecting to the profiler - -Even when running the profiler, I found it not really straightforward to use 'go tool pprof' with it.
The problem is that, at least for dev purposes, certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is by creating an ssh tunnel from the kubernetes_master open unsecured port to some external server, and using this server as a proxy. To save everyone looking for the correct ssh flags, it is done by running: - -```sh -ssh kubernetes_master -L<local_port>:localhost:8080 -``` - -or an analogous one for your cloud provider. Afterwards you can e.g. run - -```sh -go tool pprof http://localhost:<local_port>/debug/pprof/profile -``` - -to get a 30 sec. CPU profile. - -## Contention profiling - -To enable contention profiling you need to add the line `rt.SetBlockProfileRate(1)` in addition to `m.mux.HandleFunc(...)` added before (`rt` stands for `runtime` in `master.go`). This enables the 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`. - -## Profiling in tests - -To gather a profile from a test, the HTTP interface is probably not suitable. Instead, you can add the `-cpuprofile` flag to your KUBE_TEST_ARGS, e.g. - -```sh -make test-integration WHAT="./test/integration/scheduler" KUBE_TEST_ARGS="-cpuprofile cpu.out" -go tool pprof cpu.out -``` - -See the ['go test' flags](https://golang.org/cmd/go/#hdr-Description_of_testing_flags) for how to capture other types of profiles. - -## Profiling in a benchmark test - -Gathering a profile from a benchmark test works in the same way as regular tests, but sometimes there may be expensive setup that you want excluded from the profile. (i.e. any time you would use `b.ResetTimer()`) - -To solve this problem, you can explicitly start the profile in your test code like so. - -```go -func BenchmarkMyFeature(b *testing.B) { - // Expensive test setup... - b.ResetTimer() - f, err := os.Create("bench_profile.out") - if err != nil { - log.Fatal("could not create profile file: ", err) - } - if err := pprof.StartCPUProfile(f); err != nil { - log.Fatal("could not start CPU profile: ", err) - } - defer pprof.StopCPUProfile() - // Rest of the test... -} -``` - -> Note: Code added to a test to gather CPU profiles should not be merged. It is meant to be temporary while you create and analyze profiles. +This file has moved to https://git.k8s.io/community/contributors/devel/sig-scalability/profiling.md. +This file is a placeholder to preserve links. Please remove by April 29, 2019 or the release of kubernetes 1.13, whichever comes first. \ No newline at end of file diff --git a/contributors/devel/sig-scalability/profiling.md b/contributors/devel/sig-scalability/profiling.md new file mode 100644 index 00000000..f7c8b2e5 --- /dev/null +++ b/contributors/devel/sig-scalability/profiling.md @@ -0,0 +1,76 @@ +# Profiling Kubernetes + +This document explains how to plug in the profiler and how to profile Kubernetes services. To get familiar with the tools mentioned below, it is strongly recommended to read [Profiling Go Programs](https://blog.golang.org/profiling-go-programs). + +## Profiling library + +Go comes with a built-in 'net/http/pprof' profiling library and profiling web service. The way the service works is by binding the debug/pprof/ subtree on a running webserver to the profiler. Reading from subpages of debug/pprof returns pprof-formatted profiles of the running binary. The output can be processed offline by the tool of choice, or used as an input to the handy 'go tool pprof', which can graphically represent the result. + +## Adding profiling to the APIserver
+ +TL;DR: Add lines: + +```go +m.mux.HandleFunc("/debug/pprof/", pprof.Index) +m.mux.HandleFunc("/debug/pprof/profile", pprof.Profile) +m.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol) +``` + +to the `init(c *Config)` method in 'pkg/master/master.go' and import the 'net/http/pprof' package. + +In most use cases, to use the profiler service it's enough to do 'import _ net/http/pprof', which automatically registers a handler in the default http.Server. A slight inconvenience is that the APIserver uses the default server for intra-cluster communication, so plugging the profiler into it is not really useful. In 'pkg/kubelet/server/server.go' more servers are created and started as separate goroutines. The one that is usually serving external traffic is secureServer. The handler for this traffic is defined in 'pkg/master/master.go' and stored in the Handler variable. It is created from an HTTP multiplexer, so the only thing that needs to be done is adding profiler handler functions to this multiplexer. This is exactly what the lines after the TL;DR do. + +## Connecting to the profiler + +Even when running the profiler, I found it not really straightforward to use 'go tool pprof' with it. The problem is that, at least for dev purposes, certificates generated for the APIserver are not signed by anyone trusted, and because secureServer serves only secure traffic it isn't straightforward to connect to the service. The best workaround I found is by creating an ssh tunnel from the kubernetes_master open unsecured port to some external server, and using this server as a proxy. To save everyone looking for the correct ssh flags, it is done by running: + +```sh +ssh kubernetes_master -L<local_port>:localhost:8080 +``` + +or an analogous one for your cloud provider. Afterwards you can e.g. run + +```sh +go tool pprof http://localhost:<local_port>/debug/pprof/profile +``` + +to get a 30 sec. CPU profile. + +## Contention profiling + +To enable contention profiling you need to add the line `rt.SetBlockProfileRate(1)` in addition to `m.mux.HandleFunc(...)` added before (`rt` stands for `runtime` in `master.go`). This enables the 'debug/pprof/block' subpage, which can be used as an input to `go tool pprof`. + +## Profiling in tests + +To gather a profile from a test, the HTTP interface is probably not suitable. Instead, you can add the `-cpuprofile` flag to your KUBE_TEST_ARGS, e.g. + +```sh +make test-integration WHAT="./test/integration/scheduler" KUBE_TEST_ARGS="-cpuprofile cpu.out" +go tool pprof cpu.out +``` + +See the ['go test' flags](https://golang.org/cmd/go/#hdr-Description_of_testing_flags) for how to capture other types of profiles. + +## Profiling in a benchmark test + +Gathering a profile from a benchmark test works in the same way as regular tests, but sometimes there may be expensive setup that you want excluded from the profile. (i.e. any time you would use `b.ResetTimer()`) + +To solve this problem, you can explicitly start the profile in your test code like so. + +```go +func BenchmarkMyFeature(b *testing.B) { + // Expensive test setup... + b.ResetTimer() + f, err := os.Create("bench_profile.out") + if err != nil { + log.Fatal("could not create profile file: ", err) + } + if err := pprof.StartCPUProfile(f); err != nil { + log.Fatal("could not start CPU profile: ", err) + } + defer pprof.StopCPUProfile() + // Rest of the test... +} +``` + +> Note: Code added to a test to gather CPU profiles should not be merged. It is meant to be temporary while you create and analyze profiles.
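The same in-code pattern extends to other profile types. As one example, a point-in-time heap snapshot can be written with `runtime/pprof`; this is a sketch with invented file and function names, and the `debug/pprof/heap` subpage serves the equivalent data over HTTP:

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

// writeHeapProfile captures a point-in-time heap profile. Calling
// runtime.GC() first makes the allocation data current before the
// snapshot is written.
func writeHeapProfile(path string) {
	f, err := os.Create(path)
	if err != nil {
		log.Fatal("could not create heap profile file: ", err)
	}
	defer f.Close()
	runtime.GC()
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal("could not write heap profile: ", err)
	}
}

func main() {
	writeHeapProfile("heap.out") // then inspect with: go tool pprof heap.out
}
```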
-- cgit v1.2.3 From 458953783bbf81db29a11e728642d0a6e9eccc1a Mon Sep 17 00:00:00 2001 From: eduartua Date: Tue, 29 Jan 2019 13:03:54 -0600 Subject: folder /devel/sig-scheduling has been created - file scheduler_algorithm.md moved to it - tombstone file created --- contributors/devel/scheduler_algorithm.md | 41 ++-------------------- .../devel/sig-scheduling/scheduler_algorithm.md | 40 +++++++++++++++++++++ 2 files changed, 42 insertions(+), 39 deletions(-) create mode 100644 contributors/devel/sig-scheduling/scheduler_algorithm.md diff --git a/contributors/devel/scheduler_algorithm.md b/contributors/devel/scheduler_algorithm.md index e6596b47..07af7e49 100644 --- a/contributors/devel/scheduler_algorithm.md +++ b/contributors/devel/scheduler_algorithm.md @@ -1,40 +1,3 @@ -# Scheduler Algorithm in Kubernetes - -For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. - -## Filtering the nodes - -The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: - -- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. Currently supported volumes are: AWS EBS, GCE PD, ISCSI and Ceph RBD. Only Persistent Volume Claims for those supported types are checked. Persistent Volumes added directly to pods are not evaluated and are not constrained by this policy. -- `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions. -- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../design-proposals/node/resource-qos.md). -- `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. -- `HostName`: Filter out all nodes except the one specified in the PodSpec's NodeName field. -- `MatchNodeSelector`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field and, as of Kubernetes v1.2, also match the `nodeAffinity` if present. See [here](https://kubernetes.io/docs/user-guide/node-selection/) for more details on both. -- `MaxEBSVolumeCount`: Ensure that the number of attached ElasticBlockStore volumes does not exceed a maximum value (by default, 39, since Amazon recommends a maximum of 40 with one of those 40 reserved for the root volume -- see [Amazon's documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#linux-specific-volume-limits)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. 
- `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. -- `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting memory pressure condition. Currently, no ``BestEffort`` pods should be placed on a node under memory pressure as it gets automatically evicted by kubelet. -- `CheckNodeDiskPressure`: Check if a pod can be scheduled on a node reporting disk pressure condition. Currently, no pods should be placed on a node under disk pressure as it gets automatically evicted by kubelet. - -The details of the above predicates can be found in [pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithmprovider/defaults/defaults.go). - -## Ranking the nodes - -The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10, with 10 representing "most preferred" and 0 "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2`, with weighting factors `weight1` and `weight2` respectively; the final score of some NodeA is: - - finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2) - -After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod. If more than one node has the same highest score, a random one among them is chosen. - -Currently, the Kubernetes scheduler provides some practical priority functions, including: - -- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. -- `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. -- `SelectorSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service, replication controller, or replica set on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes.
- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. -- `ImageLocalityPriority`: Nodes are prioritized based on locality of images requested by a pod. Nodes with larger size of already-installed packages required by the pod will be preferred over nodes with no already-installed packages required by the pod or a small total size of already-installed packages required by the pod. -- `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](https://kubernetes.io/docs/user-guide/node-selection/) for more details. - -The details of the above priority functions can be found in [pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize). +This file has moved to https://git.k8s.io/community/contributors/devel/sig-scheduling/scheduler_algorithm.md. +This file is a placeholder to preserve links. Please remove by April 29, 2019 or the release of kubernetes 1.13, whichever comes first. \ No newline at end of file diff --git a/contributors/devel/sig-scheduling/scheduler_algorithm.md b/contributors/devel/sig-scheduling/scheduler_algorithm.md new file mode 100644 index 00000000..e6596b47 --- /dev/null +++ b/contributors/devel/sig-scheduling/scheduler_algorithm.md @@ -0,0 +1,40 @@ +# Scheduler Algorithm in Kubernetes + +For each unscheduled Pod, the Kubernetes scheduler tries to find a node across the cluster according to a set of rules. A general introduction to the Kubernetes scheduler can be found at [scheduler.md](scheduler.md). In this document, the algorithm of how to select a node for the Pod is explained. There are two steps before a destination node of a Pod is chosen. The first step is filtering all the nodes and the second is ranking the remaining nodes to find a best fit for the Pod. + +## Filtering the nodes + +The purpose of filtering the nodes is to filter out the nodes that do not meet certain requirements of the Pod. For example, if the free resource on a node (measured by the capacity minus the sum of the resource requests of all the Pods that already run on the node) is less than the Pod's required resource, the node should not be considered in the ranking phase so it is filtered out. Currently, there are several "predicates" implementing different filtering policies, including: + +- `NoDiskConflict`: Evaluate if a pod can fit due to the volumes it requests, and those that are already mounted. Currently supported volumes are: AWS EBS, GCE PD, ISCSI and Ceph RBD. Only Persistent Volume Claims for those supported types are checked. Persistent Volumes added directly to pods are not evaluated and are not constrained by this policy. +- `NoVolumeZoneConflict`: Evaluate if the volumes a pod requests are available on the node, given the Zone restrictions. +- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod.
The free resource is measured by the capacity minus the sum of requests of all Pods on the node. To learn more about the resource QoS in Kubernetes, please check [QoS proposal](../design-proposals/node/resource-qos.md). +- `PodFitsHostPorts`: Check if any HostPort required by the Pod is already occupied on the node. +- `HostName`: Filter out all nodes except the one specified in the PodSpec's NodeName field. +- `MatchNodeSelector`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field and, as of Kubernetes v1.2, also match the `nodeAffinity` if present. See [here](https://kubernetes.io/docs/user-guide/node-selection/) for more details on both. +- `MaxEBSVolumeCount`: Ensure that the number of attached ElasticBlockStore volumes does not exceed a maximum value (by default, 39, since Amazon recommends a maximum of 40 with one of those 40 reserved for the root volume -- see [Amazon's documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html#linux-specific-volume-limits)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. +- `MaxGCEPDVolumeCount`: Ensure that the number of attached GCE PersistentDisk volumes does not exceed a maximum value (by default, 16, which is the maximum GCE allows -- see [GCE's documentation](https://cloud.google.com/compute/docs/disks/persistent-disks#limits_for_predefined_machine_types)). The maximum value can be controlled by setting the `KUBE_MAX_PD_VOLS` environment variable. +- `CheckNodeMemoryPressure`: Check if a pod can be scheduled on a node reporting memory pressure condition. Currently, no ``BestEffort`` pods should be placed on a node under memory pressure as it gets automatically evicted by kubelet. +- `CheckNodeDiskPressure`: Check if a pod can be scheduled on a node reporting disk pressure condition. Currently, no pods should be placed on a node under disk pressure as it gets automatically evicted by kubelet. + +The details of the above predicates can be found in [pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithmprovider/defaults/defaults.go). + +## Ranking the nodes + +The filtered nodes are considered suitable to host the Pod, and often more than one node remains. Kubernetes prioritizes the remaining nodes to find the "best" one for the Pod. The prioritization is performed by a set of priority functions. For each remaining node, a priority function gives a score which scales from 0-10, with 10 representing "most preferred" and 0 "least preferred". Each priority function is weighted by a positive number and the final score of each node is calculated by adding up all the weighted scores. For example, suppose there are two priority functions, `priorityFunc1` and `priorityFunc2`, with weighting factors `weight1` and `weight2` respectively; the final score of some NodeA is: + + finalScoreNodeA = (weight1 * priorityFunc1) + (weight2 * priorityFunc2) + +After the scores of all nodes are calculated, the node with the highest score is chosen as the host of the Pod.
If more than one node has the same highest score, a random one among them is chosen. + +Currently, the Kubernetes scheduler provides some practical priority functions, including: + +- `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption. +- `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed. +- `SelectorSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service, replication controller, or replica set on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes. +- `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label. +- `ImageLocalityPriority`: Nodes are prioritized based on locality of images requested by a pod. Nodes with larger size of already-installed packages required by the pod will be preferred over nodes with no already-installed packages required by the pod or a small total size of already-installed packages required by the pod. +- `NodeAffinityPriority`: (Kubernetes v1.2) Implements `preferredDuringSchedulingIgnoredDuringExecution` node affinity; see [here](https://kubernetes.io/docs/user-guide/node-selection/) for more details. + +The details of the above priority functions can be found in [pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithmprovider/defaults/defaults.go). As with predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize).
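To make the weighted sum concrete, here is a small worked example following the formula above, with invented weights and per-node scores:

    weight1 = 1, weight2 = 2
    finalScoreNodeA = (1 * 4) + (2 * 5) = 14
    finalScoreNodeB = (1 * 8) + (2 * 2) = 12

NodeA has the higher final score (14 vs. 12), so it would be chosen as the host.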
+ -- cgit v1.2.3 From 9a4bfdc2385379f3b4958b039706227381934d1a Mon Sep 17 00:00:00 2001 From: eduartua Date: Tue, 29 Jan 2019 13:08:31 -0600 Subject: file scheduler.md moved to /devel/sig-scheduling - URLs updated - tombstone file created --- .../scheduling/scheduler_extender.md | 2 +- contributors/devel/scheduler.md | 91 +--------------------- contributors/devel/sig-scheduling/scheduler.md | 90 +++++++++++++++++++++ 3 files changed, 93 insertions(+), 90 deletions(-) create mode 100644 contributors/devel/sig-scheduling/scheduler.md diff --git a/contributors/design-proposals/scheduling/scheduler_extender.md b/contributors/design-proposals/scheduling/scheduler_extender.md index de7a6259..bc65f9ba 100644 --- a/contributors/design-proposals/scheduling/scheduler_extender.md +++ b/contributors/design-proposals/scheduling/scheduler_extender.md @@ -2,7 +2,7 @@ There are three ways to add new scheduling rules (predicates and priority functions) to Kubernetes: (1) by adding these rules to the scheduler and -recompiling, [described here](/contributors/devel/scheduler.md), +recompiling, [described here](/contributors/devel/sig-scheduling/scheduler.md), (2) implementing your own scheduler process that runs instead of, or alongside of, the standard Kubernetes scheduler, (3) implementing a "scheduler extender" process that the standard Kubernetes scheduler calls out to as a final pass when diff --git a/contributors/devel/scheduler.md b/contributors/devel/scheduler.md index 486b04a9..6f2ae192 100644 --- a/contributors/devel/scheduler.md +++ b/contributors/devel/scheduler.md @@ -1,90 +1,3 @@ -# The Kubernetes Scheduler - -The Kubernetes scheduler runs as a process alongside the other master components such as the API server. -Its interface to the API server is to watch for Pods with an empty PodSpec.NodeName, -and for each Pod, it posts a binding indicating where the Pod should be scheduled. - -## Exploring the code - -We are dividing scheduler into three layers from high level: -- [cmd/kube-scheduler/scheduler.go](http://releases.k8s.io/HEAD/cmd/kube-scheduler/scheduler.go): - This is the main() entry that does initialization before calling the scheduler framework. -- [pkg/scheduler/scheduler.go](http://releases.k8s.io/HEAD/pkg/scheduler/scheduler.go): - This is the scheduler framework that handles stuff (e.g. binding) beyond the scheduling algorithm. -- [pkg/scheduler/core/generic_scheduler.go](http://releases.k8s.io/HEAD/pkg/scheduler/core/generic_scheduler.go): - The scheduling algorithm that assigns nodes for pods. - -## The scheduling algorithm - -``` -For given pod: - - +---------------------------------------------+ - | Schedulable nodes: | - | | - | +--------+ +--------+ +--------+ | - | | node 1 | | node 2 | | node 3 | | - | +--------+ +--------+ +--------+ | - | | - +-------------------+-------------------------+ - | - | - v - +-------------------+-------------------------+ - - Pred. filters: node 3 doesn't have enough resource - - +-------------------+-------------------------+ - | - | - v - +-------------------+-------------------------+ - | remaining nodes: | - | +--------+ +--------+ | - | | node 1 | | node 2 | | - | +--------+ +--------+ | - | | - +-------------------+-------------------------+ - | - | - v - +-------------------+-------------------------+ - - Priority function: node 1: p=2 - node 2: p=5 - - +-------------------+-------------------------+ - | - | - v - select max{node priority} = node 2 -``` - -The Scheduler tries to find a node for each Pod, one at a time. 
-- First it applies a set of "predicates" to filter out inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler will filter out nodes that don't have at least that much resources available (computed as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node). -- Second, it applies a set of "priority functions" -that rank the nodes that weren't filtered out by the predicate check. For example, it tries to spread Pods across nodes and zones while at the same time favoring the least (theoretically) loaded nodes (where "load" - in theory - is measured as the sum of the resource requests of the containers running on the node, divided by the node's capacity). -- Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in [pkg/scheduler/core/generic_scheduler.go](http://releases.k8s.io/HEAD/pkg/scheduler/core/generic_scheduler.go) - -### Predicates and priorities policies - -Predicates are a set of policies applied one by one to filter out inappropriate nodes. -Priorities are a set of policies applied one by one to rank nodes (that made it through the filter of the predicates). -By default, Kubernetes provides built-in predicates and priorities policies documented in [scheduler_algorithm.md](scheduler_algorithm.md). -The predicates and priorities code are defined in [pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/predicates/predicates.go) and [pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/priorities/) , respectively. - - -## Scheduler extensibility - -The scheduler is extensible: the cluster administrator can choose which of the pre-defined -scheduling policies to apply, and can add new ones. - -### Modifying policies - -The policies that are applied when scheduling can be chosen in one of two ways. -The default policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in -[pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithmprovider/defaults/defaults.go). -However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](https://git.k8s.io/examples/staging/scheduler-policy-config.json) for an example -config file. (Note that the config file format is versioned; the API is defined in [pkg/scheduler/api](http://releases.k8s.io/HEAD/pkg/scheduler/api/)). -Thus to add a new scheduling policy, you should modify [pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/predicates/predicates.go) or add to the directory [pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/priorities/), and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file. +This file has moved to https://git.k8s.io/community/contributors/devel/sig-scheduling/scheduler.md. +This file is a placeholder to preserve links. Please remove by April 29, 2019 or the release of kubernetes 1.13, whichever comes first. 
\ No newline at end of file diff --git a/contributors/devel/sig-scheduling/scheduler.md b/contributors/devel/sig-scheduling/scheduler.md new file mode 100644 index 00000000..486b04a9 --- /dev/null +++ b/contributors/devel/sig-scheduling/scheduler.md @@ -0,0 +1,90 @@ +# The Kubernetes Scheduler + +The Kubernetes scheduler runs as a process alongside the other master components such as the API server. +Its interface to the API server is to watch for Pods with an empty PodSpec.NodeName, +and for each Pod, it posts a binding indicating where the Pod should be scheduled. + +## Exploring the code + +We are dividing scheduler into three layers from high level: +- [cmd/kube-scheduler/scheduler.go](http://releases.k8s.io/HEAD/cmd/kube-scheduler/scheduler.go): + This is the main() entry that does initialization before calling the scheduler framework. +- [pkg/scheduler/scheduler.go](http://releases.k8s.io/HEAD/pkg/scheduler/scheduler.go): + This is the scheduler framework that handles stuff (e.g. binding) beyond the scheduling algorithm. +- [pkg/scheduler/core/generic_scheduler.go](http://releases.k8s.io/HEAD/pkg/scheduler/core/generic_scheduler.go): + The scheduling algorithm that assigns nodes for pods. + +## The scheduling algorithm + +``` +For given pod: + + +---------------------------------------------+ + | Schedulable nodes: | + | | + | +--------+ +--------+ +--------+ | + | | node 1 | | node 2 | | node 3 | | + | +--------+ +--------+ +--------+ | + | | + +-------------------+-------------------------+ + | + | + v + +-------------------+-------------------------+ + + Pred. filters: node 3 doesn't have enough resource + + +-------------------+-------------------------+ + | + | + v + +-------------------+-------------------------+ + | remaining nodes: | + | +--------+ +--------+ | + | | node 1 | | node 2 | | + | +--------+ +--------+ | + | | + +-------------------+-------------------------+ + | + | + v + +-------------------+-------------------------+ + + Priority function: node 1: p=2 + node 2: p=5 + + +-------------------+-------------------------+ + | + | + v + select max{node priority} = node 2 +``` + +The Scheduler tries to find a node for each Pod, one at a time. +- First it applies a set of "predicates" to filter out inappropriate nodes. For example, if the PodSpec specifies resource requests, then the scheduler will filter out nodes that don't have at least that much resources available (computed as the capacity of the node minus the sum of the resource requests of the containers that are already running on the node). +- Second, it applies a set of "priority functions" +that rank the nodes that weren't filtered out by the predicate check. For example, it tries to spread Pods across nodes and zones while at the same time favoring the least (theoretically) loaded nodes (where "load" - in theory - is measured as the sum of the resource requests of the containers running on the node, divided by the node's capacity). +- Finally, the node with the highest priority is chosen (or, if there are multiple such nodes, then one of them is chosen at random). The code for this main scheduling loop is in the function `Schedule()` in [pkg/scheduler/core/generic_scheduler.go](http://releases.k8s.io/HEAD/pkg/scheduler/core/generic_scheduler.go) + +### Predicates and priorities policies + +Predicates are a set of policies applied one by one to filter out inappropriate nodes. +Priorities are a set of policies applied one by one to rank nodes (that made it through the filter of the predicates). 
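+
+The toy Go program below sketches this filter-then-rank flow. It is not actual scheduler code: the node data, the 1-CPU predicate, and the free-CPU scoring rule are all made up for illustration.
+
+```go
+package main
+
+import "fmt"
+
+// node is a toy stand-in for a cluster node.
+type node struct {
+	name    string
+	freeCPU int
+}
+
+func main() {
+	nodes := []node{{"node1", 2}, {"node2", 8}, {"node3", 0}}
+
+	// Predicate phase: filter out nodes that cannot fit a pod requesting 1 CPU.
+	var feasible []node
+	for _, n := range nodes {
+		if n.freeCPU >= 1 {
+			feasible = append(feasible, n)
+		}
+	}
+	if len(feasible) == 0 {
+		fmt.Println("no feasible node")
+		return
+	}
+
+	// Priority phase: score each remaining node from 0-10 and pick the best.
+	best, bestScore := feasible[0], -1
+	for _, n := range feasible {
+		score := n.freeCPU
+		if score > 10 {
+			score = 10
+		}
+		if score > bestScore {
+			best, bestScore = n, score
+		}
+	}
+	fmt.Printf("scheduled on %s (score %d)\n", best.name, bestScore)
+}
+```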
+
+By default, Kubernetes provides built-in predicate and priority policies documented in [scheduler_algorithm.md](scheduler_algorithm.md).
+The predicates and priorities code are defined in [pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/predicates/predicates.go) and [pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/priorities/), respectively.
+
+
+## Scheduler extensibility
+
+The scheduler is extensible: the cluster administrator can choose which of the pre-defined
+scheduling policies to apply, and can add new ones.
+
+### Modifying policies
+
+The policies that are applied when scheduling can be chosen in one of two ways.
+The default policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in
+[pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithmprovider/defaults/defaults.go).
+However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](https://git.k8s.io/examples/staging/scheduler-policy-config.json) for an example
+config file. (Note that the config file format is versioned; the API is defined in [pkg/scheduler/api](http://releases.k8s.io/HEAD/pkg/scheduler/api/)).
+Thus, to add a new scheduling policy, you should modify [pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/predicates/predicates.go) or add to the directory [pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/priorities/), and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file.
+
-- cgit v1.2.3


From ab55d850b8342abe1b87eb1c8bf397c4d05daff4 Mon Sep 17 00:00:00 2001
From: eduartua
Date: Tue, 29 Jan 2019 15:04:02 -0600
Subject: file flexvolume.md moved to the new folder /devel/sig-storage -
 tombstone file created - URLs updated

---
 .../storage/container-storage-interface.md   |   8 +-
 .../storage/flexvolume-deployment.md         |   4 +-
 contributors/devel/flexvolume.md             | 156 +--------------------
 contributors/devel/sig-storage/flexvolume.md | 155 ++++++++++++++++++++
 sig-storage/volume-plugin-faq.md             |   4 +-
 5 files changed, 169 insertions(+), 158 deletions(-)
 create mode 100644 contributors/devel/sig-storage/flexvolume.md

diff --git a/contributors/design-proposals/storage/container-storage-interface.md b/contributors/design-proposals/storage/container-storage-interface.md
index 9a1b3d5e..9c4db8b8 100644
--- a/contributors/design-proposals/storage/container-storage-interface.md
+++ b/contributors/design-proposals/storage/container-storage-interface.md
@@ -29,7 +29,7 @@ Kubernetes volume plugins are currently “in-tree” meaning they are linked, c
 4. Volume plugins get full privileges of kubernetes components (kubelet and kube-controller-manager).
 5. Plugin developers are forced to make plugin source code available, and can not choose to release just a binary.
 
-The existing [Flex Volume](/contributors/devel/flexvolume.md) plugin attempted to address this by exposing an exec based API for mount/unmount/attach/detach. Although it enables third party storage vendors to write drivers out-of-tree, it requires access to the root filesystem of node and master machines in order to deploy the third party driver files.
+The existing [Flex Volume] plugin attempted to address this by exposing an exec based API for mount/unmount/attach/detach. Although it enables third party storage vendors to write drivers out-of-tree, it requires access to the root filesystem of node and master machines in order to deploy the third party driver files. Additionally, it doesn’t address another pain of in-tree volumes plugins: dependencies. Volume plugins tend to have many external requirements: dependencies on mount and filesystem tools, for example. These dependencies are assumed to be available on the underlying host OS, which often is not the case, and installing them requires direct machine access. There are efforts underway, for example https://github.com/kubernetes/community/pull/589, that are hoping to address this for in-tree volume plugins. But, enabling volume plugins to be completely containerized will make dependency management much easier. @@ -56,7 +56,7 @@ The objective of this document is to document all the requirements for enabling * Recommend deployment process for Kubernetes compatible, third-party CSI Volume drivers on a Kubernetes cluster. ## Non-Goals -* Replace [Flex Volume plugin](/contributors/devel/flexvolume.md) +* Replace [Flex Volume plugin] * The Flex volume plugin exists as an exec based mechanism to create “out-of-tree” volume plugins. * Because Flex drivers exist and depend on the Flex interface, it will continue to be supported with a stable API. * The CSI Volume plugin will co-exist with Flex volume plugin. @@ -777,3 +777,7 @@ Instead of creating a new `VolumeAttachment` object, another option we considere * List of nodes the volume was successfully attached to. We dismissed this approach because having attach/detach triggered by the creation/deletion of an object is much easier to manage (for both external-attacher and Kubernetes) and more robust (fewer corner cases to worry about). + + +[Flex Volume]: /contributors/devel/sig-storage/flexvolume.md +[Flex Volume plugin]: /contributors/devel/sig-storage/flexvolume.md \ No newline at end of file diff --git a/contributors/design-proposals/storage/flexvolume-deployment.md b/contributors/design-proposals/storage/flexvolume-deployment.md index 0b40748b..19b7ea63 100644 --- a/contributors/design-proposals/storage/flexvolume-deployment.md +++ b/contributors/design-proposals/storage/flexvolume-deployment.md @@ -10,7 +10,7 @@ Beginning in version 1.8, the Kubernetes Storage SIG is putting a stop to accept [CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md) provides a single interface that storage vendors can implement in order for their storage solutions to work across many different container orchestrators, and volume plugins are out-of-tree by design. This is a large effort, the full implementation of CSI is several quarters away, and there is a need for an immediate solution for storage vendors to continue adding volume plugins. -[Flexvolume](/contributors/devel/flexvolume.md) is an in-tree plugin that has the ability to run any storage solution by executing volume commands against a user-provided driver on the Kubernetes host, and this currently exists today. However, the process of setting up Flexvolume is very manual, pushing it out of consideration for many users. Problems include having to copy the driver to a specific location in each node, manually restarting kubelet, and user's limited access to machines. 
+[Flexvolume] is an in-tree plugin that has the ability to run any storage solution by executing volume commands against a user-provided driver on the Kubernetes host, and this currently exists today. However, the process of setting up Flexvolume is very manual, pushing it out of consideration for many users. Problems include having to copy the driver to a specific location in each node, manually restarting kubelet, and user's limited access to machines. An automated deployment technique is discussed in [Recommended Driver Deployment Method](#recommended-driver-deployment-method). The crucial change required to enable this method is allowing kubelet and controller manager to dynamically discover plugin changes. @@ -164,3 +164,5 @@ Cons: Does not guarantee every node has a pod running. Pod anti-affinity can be * How does this system work with containerized kubelet? * Are there any SELinux implications? + +[Flexvolume]: /contributors/devel/sig-storage/flexvolume.md \ No newline at end of file diff --git a/contributors/devel/flexvolume.md b/contributors/devel/flexvolume.md index 12c46382..36fe837d 100644 --- a/contributors/devel/flexvolume.md +++ b/contributors/devel/flexvolume.md @@ -1,155 +1,3 @@ -# Flexvolume +This file has moved to https://git.k8s.io/community/contributors/devel/sig-storage/flexvolume.md. -Flexvolume enables users to write their own drivers and add support for their volumes in Kubernetes. Vendor drivers should be installed in the volume plugin path on every node, and on master if the driver requires attach capability (unless `--enable-controller-attach-detach` Kubelet option is set to false, but this is highly discouraged because it is a legacy mode of operation). - -Flexvolume is a GA feature from Kubernetes 1.8 release onwards. - -## Prerequisites - -Install the vendor driver on all nodes (also on master nodes if "--enable-controller-attach-detach" Kubelet option is enabled) in the plugin path. Path for installing the plugin: `//`. The default plugin directory is `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`. It can be changed in kubelet via the `--volume-plugin-dir` flag, and in controller manager via the `--flex-volume-plugin-dir` flag. - -For example to add a `cifs` driver, by vendor `foo` install the driver at: `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/foo~cifs/cifs` - -The vendor and driver names must match flexVolume.driver in the volume spec, with '~' replaced with '/'. For example, if `flexVolume.driver` is set to `foo/cifs`, then the vendor is `foo`, and driver is `cifs`. - -## Dynamic Plugin Discovery -Beginning in v1.8, Flexvolume supports the ability to detect drivers on the fly. Instead of requiring drivers to exist at system initialization time or having to restart kubelet or controller manager, drivers can be installed, upgraded/downgraded, and uninstalled while the system is running. -For more information, please refer to the [design document](/contributors/design-proposals/storage/flexvolume-deployment.md). - -## Automated Plugin Installation/Upgrade -One possible way to install and upgrade your Flexvolume drivers is by using a DaemonSet. See [Recommended Driver Deployment Method](/contributors/design-proposals/storage/flexvolume-deployment.md#recommended-driver-deployment-method) for details, and see [here](https://git.k8s.io/examples/staging/volumes/flexvolume/deploy/) for an example. - -## Plugin details -The plugin expects the following call-outs are implemented for the backend drivers. Some call-outs are optional. 
Call-outs are invoked from Kubelet and Controller Manager. - -### Driver invocation model: - -#### Init: -Initializes the driver. Called during Kubelet & Controller manager initialization. On success, the function returns a capabilities map showing whether each Flexvolume capability is supported by the driver. -Current capabilities: -* `attach` - a boolean field indicating whether the driver requires attach and detach operations. This field is *required*, although for backward-compatibility the default value is set to `true`, i.e. requires attach and detach. -See [Driver output](#driver-output) for the capabilities map format. -``` - init -``` - -#### Attach: -Attach the volume specified by the given spec on the given node. On success, returns the device path where the device is attached on the node. Called from Controller Manager. - -This call-out does not pass "secrets" specified in Flexvolume spec. If your driver requires secrets, do not implement this call-out and instead use "mount" call-out and implement attach and mount in that call-out. - -``` - attach -``` - -#### Detach: -Detach the volume from the node. Called from Controller Manager. -``` - detach -``` - -#### Wait for attach: -Wait for the volume to be attached on the remote node. On success, the path to the device is returned. Called from Controller Manager. The timeout should be 10m (based on https://git.k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go#L88 ) - -``` - waitforattach -``` - -#### Volume is Attached: -Check the volume is attached on the node. Called from Controller Manager. - -``` - isattached -``` - -#### Mount device: -Mount device mounts the device to a global path which individual pods can then bind mount. Called only from Kubelet. - -This call-out does not pass "secrets" specified in Flexvolume spec. If your driver requires secrets, do not implement this call-out and instead use "mount" call-out and implement attach and mount in that call-out. - -``` - mountdevice -``` - -#### Unmount device: -Unmounts the global mount for the device. This is called once all bind mounts have been unmounted. Called only from Kubelet. - -``` - unmountdevice -``` -In addition to the user-specified options and [default JSON options](#default-json-options), the following options capturing information about the pod are passed through and generated automatically. - -``` -kubernetes.io/pod.name -kubernetes.io/pod.namespace -kubernetes.io/pod.uid -kubernetes.io/serviceAccount.name -``` - -#### Mount: -Mount the volume at the mount dir. This call-out defaults to bind mount for drivers which implement attach & mount-device call-outs. Called only from Kubelet. - -``` - mount -``` - -#### Unmount: -Unmount the volume. This call-out defaults to bind mount for drivers which implement attach & mount-device call-outs. Called only from Kubelet. - -``` - unmount -``` - -See [lvm] & [nfs] for a quick example on how to write a simple flexvolume driver. - -### Driver output: - -Flexvolume expects the driver to reply with the status of the operation in the -following format. - -``` -{ - "status": "", - "message": "", - "device": "" - "volumeName": "" - "attached": - "capabilities": - { - "attach": - } -} -``` - -### Default Json options - -In addition to the flags specified by the user in the Options field of the FlexVolumeSource, the following flags (set through their corresponding FlexVolumeSource fields) are also passed to the executable. -Note: Secrets are passed only to "mount/unmount" call-outs. 
-
-```
-"kubernetes.io/fsType":"<FS type>",
-"kubernetes.io/readwrite":"<rw>",
-"kubernetes.io/fsGroup":"<FS group>",
-"kubernetes.io/mountsDir":"<mounts dir>",
-"kubernetes.io/pvOrVolumeName":"<pv name or volume name>"
-
-"kubernetes.io/pod.name":"<pod name>",
-"kubernetes.io/pod.namespace":"<pod namespace>",
-"kubernetes.io/pod.uid":"<pod uid>",
-"kubernetes.io/serviceAccount.name":"<service account name>",
-
-"kubernetes.io/secret/key1":"<secret1>"
-...
-"kubernetes.io/secret/keyN":"<secretN>"
-```
-
-### Example of Flexvolume
-
-Please refer to the [Flexvolume example directory]. See [nginx-lvm.yaml] & [nginx-nfs.yaml] for a quick example on how to use Flexvolume in a pod.
-
-
-[lvm]: https://git.k8s.io/examples/staging/volumes/flexvolume/lvm
-[nfs]: https://git.k8s.io/examples/staging/volumes/flexvolume/nfs
-[nginx-lvm.yaml]: https://git.k8s.io/examples/staging/volumes/flexvolume/nginx-lvm.yaml
-[nginx-nfs.yaml]: https://git.k8s.io/examples/staging/volumes/flexvolume/nginx-nfs.yaml
-[Flexvolume example directory]: https://git.k8s.io/examples/staging/volumes/flexvolume/
+This file is a placeholder to preserve links. Please remove by April 29, 2019 or the release of kubernetes 1.13, whichever comes first.
\ No newline at end of file
diff --git a/contributors/devel/sig-storage/flexvolume.md b/contributors/devel/sig-storage/flexvolume.md
new file mode 100644
index 00000000..12c46382
--- /dev/null
+++ b/contributors/devel/sig-storage/flexvolume.md
@@ -0,0 +1,155 @@
+# Flexvolume
+
+Flexvolume enables users to write their own drivers and add support for their volumes in Kubernetes. Vendor drivers should be installed in the volume plugin path on every node, and also on master nodes if the driver requires attach capability (unless the `--enable-controller-attach-detach` Kubelet option is set to false, but this is highly discouraged because it is a legacy mode of operation).
+
+Flexvolume has been a GA feature since the Kubernetes 1.8 release.
+
+## Prerequisites
+
+Install the vendor driver on all nodes (also on master nodes if the "--enable-controller-attach-detach" Kubelet option is enabled) in the plugin path. The path for installing the plugin is `<plugin path>/<vendor~driver>/<driver>`. The default plugin directory is `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/`. It can be changed in kubelet via the `--volume-plugin-dir` flag, and in controller manager via the `--flex-volume-plugin-dir` flag.
+
+For example, to add a `cifs` driver by vendor `foo`, install the driver at: `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/foo~cifs/cifs`
+
+The vendor and driver names must match flexVolume.driver in the volume spec, with '~' replaced with '/'. For example, if `flexVolume.driver` is set to `foo/cifs`, then the vendor is `foo`, and the driver is `cifs`.
+
+## Dynamic Plugin Discovery
+Beginning in v1.8, Flexvolume supports the ability to detect drivers on the fly. Instead of requiring drivers to exist at system initialization time, or having to restart kubelet or controller manager, drivers can be installed, upgraded/downgraded, and uninstalled while the system is running.
+For more information, please refer to the [design document](/contributors/design-proposals/storage/flexvolume-deployment.md).
+
+## Automated Plugin Installation/Upgrade
+One possible way to install and upgrade your Flexvolume drivers is by using a DaemonSet. See [Recommended Driver Deployment Method](/contributors/design-proposals/storage/flexvolume-deployment.md#recommended-driver-deployment-method) for details, and see [here](https://git.k8s.io/examples/staging/volumes/flexvolume/deploy/) for an example.
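+
+For orientation, such a deployment DaemonSet might look roughly like the sketch below. This is illustrative only: the installer image (`example.com/flex-installer`), its name, and its behavior are hypothetical, and the host path must match your kubelet's plugin directory.
+
+```yaml
+# Illustrative sketch only -- the image and names are hypothetical.
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: flex-driver-installer
+  namespace: kube-system
+spec:
+  selector:
+    matchLabels:
+      name: flex-driver-installer
+  template:
+    metadata:
+      labels:
+        name: flex-driver-installer
+    spec:
+      containers:
+        - name: installer
+          # Assumed to copy the driver binary into /flexmnt/<vendor~driver>/<driver>.
+          image: example.com/flex-installer:latest
+          volumeMounts:
+            - name: flexvolume-dir
+              mountPath: /flexmnt
+      volumes:
+        - name: flexvolume-dir
+          hostPath:
+            # Default kubelet plugin directory; adjust if --volume-plugin-dir is set.
+            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
+```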
+
+## Plugin details
+The plugin expects the following call-outs to be implemented for the backend drivers. Some call-outs are optional. Call-outs are invoked from Kubelet and Controller Manager.
+
+### Driver invocation model:
+
+#### Init:
+Initializes the driver. Called during Kubelet & Controller Manager initialization. On success, the function returns a capabilities map showing whether each Flexvolume capability is supported by the driver.
+Current capabilities:
+* `attach` - a boolean field indicating whether the driver requires attach and detach operations. This field is *required*, although for backward compatibility the default value is set to `true`, i.e. the driver requires attach and detach.
+See [Driver output](#driver-output) for the capabilities map format.
+```
+<driver executable> init
+```
+
+#### Attach:
+Attach the volume specified by the given spec on the given node. On success, returns the device path where the device is attached on the node. Called from Controller Manager.
+
+This call-out does not pass "secrets" specified in the Flexvolume spec. If your driver requires secrets, do not implement this call-out; instead, use the "mount" call-out and implement attach and mount there.
+
+```
+<driver executable> attach <json options> <node name>
+```
+
+#### Detach:
+Detach the volume from the node. Called from Controller Manager.
+```
+<driver executable> detach <mount device> <node name>
+```
+
+#### Wait for attach:
+Wait for the volume to be attached on the remote node. On success, the path to the device is returned. Called from Controller Manager. The timeout should be 10m (based on https://git.k8s.io/kubernetes/pkg/kubelet/volumemanager/volume_manager.go#L88).
+
+```
+<driver executable> waitforattach <mount device> <json options>
+```
+
+#### Volume is Attached:
+Check whether the volume is attached on the node. Called from Controller Manager.
+
+```
+<driver executable> isattached <json options> <node name>
+```
+
+#### Mount device:
+Mount device mounts the device to a global path which individual pods can then bind mount. Called only from Kubelet.
+
+This call-out does not pass "secrets" specified in the Flexvolume spec. If your driver requires secrets, do not implement this call-out; instead, use the "mount" call-out and implement attach and mount there.
+
+```
+<driver executable> mountdevice <mount dir> <mount device> <json options>
+```
+
+#### Unmount device:
+Unmounts the global mount for the device. This is called once all bind mounts have been unmounted. Called only from Kubelet.
+
+```
+<driver executable> unmountdevice <mount device>
+```
+In addition to the user-specified options and [default JSON options](#default-json-options), the following options, capturing information about the pod, are passed through and generated automatically.
+
+```
+kubernetes.io/pod.name
+kubernetes.io/pod.namespace
+kubernetes.io/pod.uid
+kubernetes.io/serviceAccount.name
+```
+
+#### Mount:
+Mount the volume at the mount dir. This call-out defaults to bind mount for drivers which implement the attach & mount-device call-outs. Called only from Kubelet.
+
+```
+<driver executable> mount <mount dir> <json options>
+```
+
+#### Unmount:
+Unmount the volume. This call-out defaults to bind mount for drivers which implement the attach & mount-device call-outs. Called only from Kubelet.
+
+```
+<driver executable> unmount <mount dir>
+```
+
+See [lvm] & [nfs] for a quick example of how to write a simple Flexvolume driver.
+
+### Driver output:
+
+Flexvolume expects the driver to reply with the status of the operation in the
+following format.
+
+```
+{
+  "status": "<Success/Failure/Not supported>",
+  "message": "<Reason for success/failure>",
+  "device": "<Path to the device attached. This field is valid only for attach & waitforattach call-outs>"
+  "volumeName": "<Cluster wide unique name of the volume. Valid only for getvolumename call-out>"
+  "attached": <True/False (Return true if volume is attached on the node. Valid only for isattached call-out)>
+  "capabilities": <Only included as part of the Init response>
+  {
+    "attach": <True/False (Return true if the driver implements attach and detach)>
+  }
+}
+```
+
+### Default JSON options
+
+In addition to the flags specified by the user in the Options field of the FlexVolumeSource, the following flags (set through their corresponding FlexVolumeSource fields) are also passed to the executable.
+Note: Secrets are passed only to "mount/unmount" call-outs.
+
+```
+"kubernetes.io/fsType":"<FS type>",
+"kubernetes.io/readwrite":"<rw>",
+"kubernetes.io/fsGroup":"<FS group>",
+"kubernetes.io/mountsDir":"<mounts dir>",
+"kubernetes.io/pvOrVolumeName":"<pv name or volume name>"
+
+"kubernetes.io/pod.name":"<pod name>",
+"kubernetes.io/pod.namespace":"<pod namespace>",
+"kubernetes.io/pod.uid":"<pod uid>",
+"kubernetes.io/serviceAccount.name":"<service account name>",
+
+"kubernetes.io/secret/key1":"<secret1>"
+...
+"kubernetes.io/secret/keyN":"<secretN>"
+```
+
+### Example of Flexvolume
+
+Please refer to the [Flexvolume example directory]. See [nginx-lvm.yaml] & [nginx-nfs.yaml] for a quick example of how to use Flexvolume in a pod.
+
+
+[lvm]: https://git.k8s.io/examples/staging/volumes/flexvolume/lvm
+[nfs]: https://git.k8s.io/examples/staging/volumes/flexvolume/nfs
+[nginx-lvm.yaml]: https://git.k8s.io/examples/staging/volumes/flexvolume/nginx-lvm.yaml
+[nginx-nfs.yaml]: https://git.k8s.io/examples/staging/volumes/flexvolume/nginx-nfs.yaml
+[Flexvolume example directory]: https://git.k8s.io/examples/staging/volumes/flexvolume/
diff --git a/sig-storage/volume-plugin-faq.md b/sig-storage/volume-plugin-faq.md
index bae94897..5a9ba9d7 100644
--- a/sig-storage/volume-plugin-faq.md
+++ b/sig-storage/volume-plugin-faq.md
@@ -68,7 +68,7 @@ For more information on how to write and deploy a CSI Driver on Kubernetes, see
 
 FlexVolume is an out-of-tree plugin interface that has existed in Kubernetes since version 1.2 (before CSI). It uses an exec-based model to interface with drivers. FlexVolume driver binaries must be installed on host machines. Kubernetes performs volume operations by executing pre-defined commands in the FlexVolume API against the driver on the host. FlexVolume is GA as of Kubernetes 1.8. For more information about Flex, see:
 
-* https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md
+* [Flexvolume.md]
 
 **What are the limitations of FlexVolume?**
 
@@ -86,3 +86,5 @@ The Storage SIG suggests implementing a CSI driver if possible. CSI overcomes th
 
 If Flex Volume satisfies your requirements, there is no need to migrate to CSI. The Kubernetes Storage-SIG plans to continue to support and maintain the Flex Volume API. For those who would still like to migrate to CSI, there is an effort underway in the storage community to build a CSI adapter for FlexVolume. This will allow existing FlexVolume implementations to easily be containerized and deployed as a CSI plugin. See [this link](https://github.com/kubernetes-csi/drivers/tree/master/pkg/flexadapter) for details. However, the adapter will be a stop-gap solution, and if migration to CSI is the goal, we recommend writing a CSI driver from scratch to take full advantage of the API.
+ +[Flexvolume.md]: /contributors/devel/sig-storage/flexvolume.md -- cgit v1.2.3 From 00bcd08a568e083c324428d936dbdfb67ac1877b Mon Sep 17 00:00:00 2001 From: Arnaud MAZIN Date: Wed, 30 Jan 2019 03:22:00 +0100 Subject: This commit changes the Etcd icons in order to use the official logo --- .../infrastructure_components/labeled/etcd-128.png | Bin 8246 -> 9400 bytes .../unlabeled/etcd-128.png | Bin 7313 -> 8481 bytes .../svg/infrastructure_components/labeled/etcd.svg | 12 +++++------ .../infrastructure_components/unlabeled/etcd.svg | 22 ++++++++++----------- 4 files changed, 17 insertions(+), 17 deletions(-) diff --git a/icons/png/infrastructure_components/labeled/etcd-128.png b/icons/png/infrastructure_components/labeled/etcd-128.png index aa94bab7..abad4c9b 100644 Binary files a/icons/png/infrastructure_components/labeled/etcd-128.png and b/icons/png/infrastructure_components/labeled/etcd-128.png differ diff --git a/icons/png/infrastructure_components/unlabeled/etcd-128.png b/icons/png/infrastructure_components/unlabeled/etcd-128.png index 5febda69..9db197f2 100644 Binary files a/icons/png/infrastructure_components/unlabeled/etcd-128.png and b/icons/png/infrastructure_components/unlabeled/etcd-128.png differ diff --git a/icons/svg/infrastructure_components/labeled/etcd.svg b/icons/svg/infrastructure_components/labeled/etcd.svg index ecef1d2f..ead4b904 100644 --- a/icons/svg/infrastructure_components/labeled/etcd.svg +++ b/icons/svg/infrastructure_components/labeled/etcd.svg @@ -14,7 +14,7 @@ viewBox="0 0 18.035334 17.500378" version="1.1" id="svg13826" - inkscape:version="0.91 r13725" + inkscape:version="0.92.4 5da689c313, 2019-01-14" sodipodi:docname="etcd.svg"> @@ -78,9 +78,9 @@ id="text2066" y="16.811775" x="10.058525" - style="font-style:normal;font-weight:normal;font-size:10.58333302px;line-height:6.61458349px;font-family:Sans;letter-spacing:0px;word-spacing:0px;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:0.26458332px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" + style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#ffffff;fill-opacity:1;stroke:none;stroke-width:0.26458332px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" xml:space="preserve"> + style="fill:#326ce5;fill-opacity:1;stroke-width:0.03359316" + d="m 14.151044,9.9095586 c -0.04038,0.00338 -0.08097,0.00498 -0.124063,0.00498 -0.250136,0 -0.492818,-0.058652 -0.711745,-0.1660825 0.07289,-0.4181427 0.10387,-0.8393954 0.09575,-1.2591032 -0.237339,-0.3433203 -0.508606,-0.667255 -0.814843,-0.9656935 0.132864,-0.2490902 0.32925,-0.4634127 0.57445,-0.6153861 l 0.105252,-0.065067 -0.08204,-0.092515 C 12.771152,6.2747901 12.266747,5.9068496 11.694851,5.6571895 L 11.5808,5.6075745 11.55185,5.7281041 C 11.48366,6.0090413 11.341088,6.2624974 11.145373,6.4658689 A 6.5118438,6.5116843 0 0 0 9.9710601,5.9819652 6.5175545,6.5173949 0 0 0 8.7985778,6.4648611 C 8.6036684,6.2617263 8.4614333,6.0089071 8.3935399,5.7287764 L 8.3643457,5.6082468 8.2507324,5.6576286 C 7.6860579,5.9036961 7.1677436,6.2816139 6.7520236,6.7503999 l -0.082271,0.092718 0.1051489,0.065067 C 7.0193276,7.0595537 7.2151782,7.2726335 7.3478048,7.520346 7.042507,7.8176413 6.7715407,8.1405352 6.5343693,8.4822739 c -0.00924,0.4195399 0.020152,0.8438491 0.09339,1.2677893 -0.2178209,0.1063869 -0.4591566,0.1644022 -0.7076482,0.1644022 -0.0436,0 -0.084421,-0.00169 -0.1240626,-0.00488 l -0.1235557,-0.00944 0.011593,0.1233186 c 0.060535,0.624087 0.2543036,1.217636 
0.576264,1.764392 l 0.062818,0.106688 0.094396,-0.08009 a 1.613835,1.6137953 0 0 1 0.7638186,-0.357758 6.5460419,6.5458815 0 0 0 0.65373,1.064755 c 0.3973437,0.138973 0.8113179,0.242709 1.238863,0.304453 0.040986,0.282747 0.00835,0.575173 -0.1031951,0.845561 l -0.047022,0.114578 0.1209335,0.02665 c 0.3097342,0.06809 0.6222523,0.102794 0.9282233,0.102794 l 0.9279559,-0.102794 0.121073,-0.02665 -0.04714,-0.114784 c -0.111266,-0.27042 -0.143915,-0.563181 -0.102892,-0.845966 0.425832,-0.06182 0.838296,-0.165278 1.23416,-0.303845 a 6.5543737,6.5542129 0 0 0 0.65447,-1.065764 1.6210239,1.6209841 0 0 1 0.767547,0.358097 l 0.09435,0.07996 0.06255,-0.106424 c 0.3225,-0.54746 0.516269,-1.140977 0.575864,-1.764058 l 0.01164,-0.1231153 z m -2.851962,1.5081784 c -0.438999,0.119396 -0.884853,0.179654 -1.3280199,0.179654 -0.4443729,0 -0.8896575,-0.06024 -1.3290598,-0.179654 A 5.1483818,5.1482557 0 0 1 8.0731949,10.219483 C 7.9366054,9.7990039 7.8566858,9.3585035 7.8335063,8.9048685 8.1172363,8.5540584 8.4399375,8.2467518 8.796465,7.9880213 A 5.12705,5.1269243 0 0 1 9.9710621,7.3493885 5.147945,5.1478187 0 0 1 11.143411,7.9866434 c 0.357907,0.2601084 0.682016,0.5698014 0.967159,0.922792 -0.02433,0.4510478 -0.105214,0.8890325 -0.24201,1.3088066 a 5.1319879,5.1318621 0 0 1 -0.569478,1.199495 z M 10.333604,9.3014636 c 0,0.3101252 0.251348,0.5609972 0.561081,0.5609972 0.309632,0 0.560676,-0.2508035 0.560676,-0.5609972 0,-0.3089201 -0.251044,-0.5607306 -0.560676,-0.5607306 -0.309733,0 -0.561047,0.2518105 -0.561047,0.5607306 z m -0.7248791,0 c 0,0.3101252 -0.2512797,0.5609972 -0.561014,0.5609972 -0.309934,0 -0.5605081,-0.2508035 -0.5605081,-0.5609972 0,-0.3088187 0.250609,-0.5606294 0.5605439,-0.5606294 0.3097303,0 0.5610111,0.2518107 0.5610111,0.5606621 z" + id="path4590" /> diff --git a/icons/svg/infrastructure_components/unlabeled/etcd.svg b/icons/svg/infrastructure_components/unlabeled/etcd.svg index 1dff93b5..fe651b57 100644 --- a/icons/svg/infrastructure_components/unlabeled/etcd.svg +++ b/icons/svg/infrastructure_components/unlabeled/etcd.svg @@ -14,7 +14,7 @@ viewBox="0 0 18.035334 17.500378" version="1.1" id="svg13826" - inkscape:version="0.91 r13725" + inkscape:version="0.92.4 5da689c313, 2019-01-14" sodipodi:docname="etcd.svg"> @@ -25,17 +25,17 @@ borderopacity="1.0" inkscape:pageopacity="0.0" inkscape:pageshadow="2" - inkscape:zoom="8" - inkscape:cx="16.847496" - inkscape:cy="33.752239" + inkscape:zoom="5.6568543" + inkscape:cx="-40.594342" + inkscape:cy="20.272431" inkscape:document-units="mm" inkscape:current-layer="layer1" showgrid="false" - inkscape:window-width="1440" - inkscape:window-height="771" - inkscape:window-x="19" + inkscape:window-width="1920" + inkscape:window-height="1043" + inkscape:window-x="0" inkscape:window-y="0" - inkscape:window-maximized="0" + inkscape:window-maximized="1" fit-margin-top="0" fit-margin-left="0" fit-margin-right="0" @@ -81,9 +81,9 @@ inkscape:connector-curvature="0" sodipodi:nodetypes="ccccccccccccc" /> + style="fill:#326ce5;fill-opacity:1;stroke-width:0.03359297" + d="m 14.176587,10.528588 c -0.04038,0.0034 -0.08097,0.005 -0.124059,0.005 -0.250132,0 -0.492809,-0.05865 -0.711734,-0.166084 0.07289,-0.4181455 0.10387,-0.8394013 0.09574,-1.2591116 C 13.199199,8.765044 12.927938,8.4411072 12.621703,8.1426666 12.754564,7.8935747 12.950948,7.6792508 13.196144,7.5272763 l 0.10525,-0.065067 -0.08205,-0.092515 C 12.796721,6.8937935 12.292323,6.5258506 11.720437,6.2761888 L 11.60639,6.2265729 11.57743,6.3471033 C 11.50924,6.6280424 11.366669,6.8815004 
11.170955,7.0848732 A 6.5117284,6.5117284 0 0 0 9.9966791,6.6009667 6.517439,6.517439 0 0 0 8.824216,7.0838659 C 8.62931,6.8807297 8.4870773,6.6279087 8.4191852,6.3477762 L 8.3899915,6.2272457 8.2763801,6.2766278 C 7.7117155,6.5226971 7.1934105,6.9006174 6.777698,7.3694067 l -0.08227,0.092718 0.1051471,0.065067 c 0.2444216,0.1513707 0.4402687,0.364452 0.572893,0.612166 -0.3052923,0.2972974 -0.5762539,0.6201936 -0.8134211,0.9619347 -0.00924,0.419543 0.020151,0.8438553 0.093389,1.2677986 -0.2178171,0.106388 -0.4591485,0.164403 -0.7076358,0.164403 -0.0436,0 -0.084419,-0.0017 -0.1240603,-0.0049 l -0.1235535,-0.0094 0.011592,0.123319 c 0.060535,0.624091 0.2542991,1.217645 0.576254,1.764404 l 0.062816,0.10669 0.094395,-0.08009 a 1.6138063,1.6138063 0 0 1 0.7638051,-0.357764 6.545926,6.545926 0 0 0 0.6537186,1.064764 c 0.3973366,0.138974 0.8113033,0.242709 1.2388409,0.304453 0.040984,0.28275 0.00835,0.575178 -0.1031928,0.845567 l -0.047022,0.11458 0.1209322,0.02664 c 0.3097274,0.06809 0.6222418,0.102794 0.9282062,0.102794 l 0.9279404,-0.102794 0.121069,-0.02664 -0.04713,-0.114786 c -0.111264,-0.270421 -0.143912,-0.563184 -0.10289,-0.84597 0.425826,-0.06183 0.838281,-0.16528 1.234138,-0.303848 a 6.5542573,6.5542573 0 0 0 0.654458,-1.065772 1.6209952,1.6209952 0 0 1 0.767534,0.358101 l 0.09436,0.07995 0.06255,-0.106424 c 0.322492,-0.547464 0.516258,-1.140985 0.575852,-1.76407 l 0.01161,-0.123116 z m -2.85191,1.508189 c -0.438992,0.119396 -0.884837,0.179655 -1.3279961,0.179655 -0.4443658,0 -0.8896428,-0.06024 -1.3290378,-0.179655 A 5.1482907,5.1482907 0 0 1 8.0988459,10.838515 C 7.9622586,10.418032 7.8823406,9.9775285 7.8591614,9.5238899 8.1428867,9.173078 8.465582,8.8657685 8.8221033,8.6070365 A 5.1269592,5.1269592 0 0 1 9.9966809,7.9683993 5.1478538,5.1478538 0 0 1 11.169008,8.6056585 c 0.3579,0.26011 0.682005,0.569805 0.967144,0.9227983 -0.02432,0.4510515 -0.105214,0.8890392 -0.242007,1.3088162 a 5.131897,5.131897 0 0 1 -0.569468,1.199504 z m -0.96546,-2.1162886 c 0,0.3101276 0.251342,0.5610006 0.561069,0.5610006 0.309626,0 0.560667,-0.250805 0.560667,-0.5610006 0,-0.3089223 -0.251041,-0.5607348 -0.560667,-0.5607348 -0.309727,0 -0.561036,0.2518125 -0.561036,0.5607348 z m -0.724868,0 c 0,0.3101276 -0.2512761,0.5610006 -0.5610047,0.5610006 -0.3099283,0 -0.5604978,-0.250805 -0.5604978,-0.5610006 0,-0.3088214 0.2506046,-0.5606339 0.5605341,-0.5606339 0.309725,0 0.5610011,0.2518125 0.5610011,0.5606665 z" + id="path4590" /> -- cgit v1.2.3 From 75752cc3b63c12c55b460639cec40a1ae6c97926 Mon Sep 17 00:00:00 2001 From: Bob Killen Date: Mon, 21 Jan 2019 15:04:45 -0500 Subject: Add documentation style guide. --- contributors/guide/style-guide.md | 678 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 678 insertions(+) create mode 100644 contributors/guide/style-guide.md diff --git a/contributors/guide/style-guide.md b/contributors/guide/style-guide.md new file mode 100644 index 00000000..05ccbb04 --- /dev/null +++ b/contributors/guide/style-guide.md @@ -0,0 +1,678 @@ +--- +title: Documentation Style Guide +--- + +# Documentation Style Guide + +This style guide is for content in the Kubernetes github [community repository]. +It is an extension of the [Kubernetes documentation style-guide]. + +These are **guidelines**, not rules. Use your best judgement. 
+
+- [Cheatsheet](#cheatsheet)
+- [Content design, formatting, and language](#content-design-formatting-and-language)
+  - [Contact information](#contact-information)
+  - [Dates and times](#dates-and-times)
+  - [Diagrams, images and other assets](#diagrams-images-and-other-assets)
+  - [Document Layout](#document-layout)
+  - [Formatting text](#formatting-text)
+  - [Language, grammar, and tone](#language-grammar-and-tone)
+  - [Moving a document](#moving-a-document)
+  - [Punctuation](#punctuation)
+  - [Quotation](#quotation)
+- [Markdown formatting](#markdown-formatting)
+  - [Code Blocks](#code-blocks)
+  - [Emphasis](#emphasis)
+  - [Headings](#headings)
+  - [Horizontal Lines](#horizontal-lines)
+  - [Line Length](#line-length)
+  - [Links](#links)
+  - [Lists](#lists)
+  - [Metadata](#metadata)
+  - [Tables](#tables)
+- [Attribution](#attribution)
+
+
+## Cheatsheet
+
+### Cheatsheet: Content design, formatting, and language
+
+**[Contact information:](#contact-information)**
+- Use official Kubernetes contact information.
+
+**[Dates and times:](#dates-and-times)**
+- Format dates as `month day, year` (for example, December 13, 2018).
+- When conveying a date in numerical form, use the [ISO 8601] format: `yyyy-mm-dd`.
+- Use the 24-hour clock when referencing times.
+- Times for single events (example: KubeCon) should be expressed in an absolute
+  time zone such as Pacific Standard Time (PST) or Coordinated Universal Time
+  (UTC).
+- Times for recurring events should be expressed in a time zone that follows
+  Daylight Saving Time (DST), such as Pacific Time (PT) or Eastern Time (ET).
+- Supply a link to a globally available time zone converter service.
+  - `http://www.thetimezoneconverter.com/?t=