| author | Jeffrey Sica <jeef111x@gmail.com> | 2018-11-14 11:56:55 -0500 |
|---|---|---|
| committer | Jeffrey Sica <jeef111x@gmail.com> | 2018-11-14 11:56:55 -0500 |
| commit | 6ff4175b008331e596473f9c48bd9833afa8c9d9 (patch) | |
| tree | 9fb88888d85be1993db7108dc2827578e8d356a9 | |
| parent | 0f2075fe7a3f7d3286cc20fcdcf9d7a256af9151 (diff) | |
| parent | 49c5d9ad316d70d9c57d51b33f9dd9ccded9a166 (diff) | |
Merge branch 'master' of github.com:kubernetes/community into ui-charter
121 files changed, 3318 insertions, 424 deletions
@@ -13,6 +13,8 @@ It is important to read and understand this legal agreement. ## How do I sign? +If your work is done as an employee of your company, contact your company's legal department and ask to be put on the list of approved contributors for the Kubernetes CLA. Below, we have included steps for "Corporation signup" in case your company does not have a company agreement and would like to have one. + #### 1. Log in to the Linux Foundation ID Portal with Github Click one of: diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES index 280ac77f..8e8cf257 100644 --- a/OWNERS_ALIASES +++ b/OWNERS_ALIASES @@ -22,8 +22,9 @@ aliases: - d-nishi sig-azure-leads: - justaugustus - - shubheksha + - dstrebel - khenidak + - feiskyer sig-big-data-leads: - foxish - erikerlandson @@ -50,8 +51,8 @@ aliases: - grodrigues3 - cblecker sig-docs-leads: - - zacharysarah - chenopis + - zacharysarah - bradamant3 sig-gcp-leads: - abgworrall @@ -81,8 +82,9 @@ aliases: - idvoretskyi - calebamiles sig-release-leads: - - jdumars - calebamiles + - justaugustus + - tpepper sig-scalability-leads: - wojtek-t - countspongebob @@ -121,7 +123,10 @@ aliases: - smarterclayton - destijl wg-iot-edge-leads: + - cindyxing - dejanb + - ptone + - cantbewong wg-kubeadm-adoption-leads: - luxas - justinsb @@ -143,9 +148,9 @@ aliases: - vishh - derekwaynecarr wg-security-audit-leads: - - jessfraz - aasmall - joelsmith + - cji ## BEGIN CUSTOM CONTENT steering-committee: - bgrant0607 diff --git a/committee-steering/governance/README.md b/committee-steering/governance/README.md index b61ac493..b99d0d6c 100644 --- a/committee-steering/governance/README.md +++ b/committee-steering/governance/README.md @@ -18,7 +18,7 @@ All Kubernetes SIGs must define a charter defining the scope and governance of t 6. Send the SIG Charter out for review to steering@kubernetes.io. Include the subject "SIG Charter Proposal: YOURSIG" and a link to the PR in the body. 7. Typically expect feedback within a week of sending your draft. 
Expect longer time if it falls over an - event such as Kubecon or holidays. Make any necessary changes. + event such as KubeCon/CloudNativeCon or holidays. Make any necessary changes. 8. Once accepted, the steering committee will ratify the PR by merging it. ## Steps to update an existing SIG charter diff --git a/committee-steering/governance/sig-governance.md b/committee-steering/governance/sig-governance.md index 8a45d721..569b9d29 100644 --- a/committee-steering/governance/sig-governance.md +++ b/committee-steering/governance/sig-governance.md @@ -84,7 +84,7 @@ Subproject Owner Role. (this is different from a SIG or Organization Member). - SIG meets bi-weekly on zoom with agenda in meeting notes - *SHOULD* be facilitated by chairs unless delegated to specific Members -- SIG overview and deep-dive sessions organized for Kubecon +- SIG overview and deep-dive sessions organized for KubeCon/CloudNativeCon - *SHOULD* be organized by chairs unless delegated to specific Members - SIG updates to Kubernetes community meeting on a regular basis - *SHOULD* be presented by chairs unless delegated to specific Members diff --git a/communication/meeting-notes-archive/q1-2_2018_community_meeting_minutes.md b/communication/meeting-notes-archive/q1-2_2018_community_meeting_minutes.md index 7118b6eb..9a9d8e85 100644 --- a/communication/meeting-notes-archive/q1-2_2018_community_meeting_minutes.md +++ b/communication/meeting-notes-archive/q1-2_2018_community_meeting_minutes.md @@ -326,7 +326,7 @@ * Rolling out new contributor workshop + playground * Will have smaller summit in Shanghai (contact @jberkus) * Started planning for Seattle, will have an extra ½ day.
- * Registration will be going through kubecon site + * Registration will be going through KubeCon/CloudNativeCon site * Manage alacarte events at other people's conferences * Communication pipelines & moderation * Clean up spam @@ -380,7 +380,7 @@ * Github Groups [Jorge Castro] * [https://github.com/kubernetes/community/issues/2323](https://github.com/kubernetes/community/issues/2323) working to make current 303 groups in the org easier to manage * Shoutouts this week (Check in #shoutouts on slack) - * jberkus: To Jordan Liggitt for diagnosing & fixing the controller performance issue that has haunted us since last August, and to Julia Evans for reporting the original issue. + * jberkus: To Jordan Liggitt for diagnosing & fixing the controller performance issue that has haunted us since last August, and to Julia Evans for reporting the original issue. * Maulion: And another to @liggitt for always helping anyone with an auth question in all the channels with kindness * jdumars: @paris - thank you for all of your work helping to keep our community safe and inclusive! I know that you've spent countless hours refining our Zoom usage, documenting, testing, and generally being super proactive on this. * Nikhita: shoutout to @cblecker for excellent meme skills! @@ -612,7 +612,7 @@ * GitHub:[ https://github.com/YugaByte/yugabyte-db](https://github.com/YugaByte/yugabyte-db) * Docs:[ https://docs.yugabyte.com/](https://docs.yugabyte.com/) * Slides: https://www.slideshare.net/YugaByte - * Yugabyte is a database focusing on, planet scale, transactional and high availability. It implements many common database apis making it a drop in replacement for those DBs. Can run as a StatefulSet on k8s. Multiple db api paradigms can be used for one database. + * Yugabyte is a database focusing on planet scale, transactional workloads and high availability. It implements many common database APIs, making it a drop-in replacement for those DBs. Can run as a StatefulSet on k8s.
Multiple db api paradigms can be used for one database. * No Kubernetes operator yet, but it's in progress. * Answers from Q&A: * @jberkus - For q1 - YB is optimized for small reads and writes, but can also perform batch reads and writes efficiently - mostly oriented towards modern OLTP/user-facing applications. Example is using spark or presto on top for use-cases like iot, fraud detection, alerting, user-personalization, etc. @@ -778,7 +778,7 @@ * Aish Sundar - Shoutout to Benjamin Elder for adding Conformance test results to all Sig-release dashboards - master-blocking and all release branches. * Josh Berkus and Stephen Augustus - To Misty Stanley-Jones for aggressively and doggedly pursuing 1.11 documentation deadlines, which both gives folks earlier warning about docs needs and lets us bounce incomplete features earlier * Help Wanted - * Looking for Mandarin-speakers to help with new contributor workshop and other events at KubeCon Shanghai. If you can help, please contact @jberkus / [jberkus@redhat.com](mailto:jberkus@redhat.com) + * Looking for Mandarin-speakers to help with new contributor workshop and other events at KubeCon/CloudNativeCon Shanghai. If you can help, please contact @jberkus / [jberkus@redhat.com](mailto:jberkus@redhat.com) * [KEP-005](https://github.com/kubernetes/community/blob/master/keps/sig-contributor-experience/0005-contributor-site.md) - Contributor Site - ping [jorge@heptio.com](mailto:jorge@heptio.com) if you can help! * Meet Our Contributors (mentors on demand) * June 6th at 230p and 8pm **UTC** [https://git.k8s.io/community/mentoring/meet-our-contributors.md](https://git.k8s.io/community/mentoring/meet-our-contributors.md) @@ -852,7 +852,7 @@ * External projects: SIG has something like 20 projects and is breaking them apart, looking for owners and out of tree locations for them to better live. 
Projects should move to CSI, a kubernetes-sigs/* repo, a utility library, or EOL * [ 0:00 ] **Announcements** * <span style="text-decoration:underline;">Shoutouts this week</span> (Check in #shoutouts on slack) - * Big shoutout to @carolynvs for being welcoming and encouraging to newcomers, to @paris for all the community energy and dedication, and to all the panelists from the recent Kubecon diversity lunch for sharing their experiences. + * Big shoutout to @carolynvs for being welcoming and encouraging to newcomers, to @paris for all the community energy and dedication, and to all the panelists from the recent KubeCon/CloudNativeCon diversity lunch for sharing their experiences. * Big shoutout to @mike.splain for running the Boston Kubernetes meetup (9 so far!) * everyone at svcat is awesome and patient especially @carolynvs, @Jeremy Rickard & @jpeeler who all took time to help me when I hit some bumps on my first PR. * <span style="text-decoration:underline;">Help Wanted</span> @@ -962,7 +962,7 @@ * SIG Scalability is looking for contributors! * We need more contributor mentors! [Fill this out.](https://goo.gl/forms/17Fzwdm5V2TVWiwy2) * The next Meet Our Contributors (mentors on demand!) will be on June 6th. Check out kubernetes.io/community for time slots and to copy to your calendar. - * **Kubecon Follow Ups** + * **KubeCon/CloudNativeCon Follow Ups** * Videos and slides: [https://github.com/cloudyuga/kubecon18-eu](https://github.com/cloudyuga/kubecon18-eu) Thanks CloudYuga for this! * **Other** * Don't forget to check out [discuss.kubernetes.io](https://discuss.kubernetes.io/)! 
@@ -1014,7 +1014,7 @@ * Communication platform * Flow in github * [Developers Guide underway](https://github.com/kubernetes/community/issues/1919) under Contributor Docs subproject - * Contributor Experience Update [slide deck](https://docs.google.com/presentation/d/1KUbnP_Bl7ulLJ1evo-X_TdXhlvQWUyru4GuZm51YfjY/edit?usp=sharing) from KubeConEU [if you are in k-dev mailing list, you'll have access) + * Contributor Experience Update [slide deck](https://docs.google.com/presentation/d/1KUbnP_Bl7ulLJ1evo-X_TdXhlvQWUyru4GuZm51YfjY/edit?usp=sharing) from KubeCon/CloudNativeCon EU (if you are in the k-dev mailing list, you'll have access) * **Announcements:** * **Shoutouts!** * See someone doing something great in the community? Mention them in #shoutouts on slack and we'll mention them during the community meeting: @@ -1023,7 +1023,7 @@ * Tim Pepper to Aaron Crickenberger for being such a great leader on the project during recent months * Chuck Ha shouts out to the doc team - "Working on the website is such a good experience now that it's on hugo. Page rebuild time went from ~20 seconds to 60ms" :heart emoji: * Jason de Tiber would like to thank Leigh Capili (@stealthybox) for the hard work and long hours helping to fix kubeadm upgrade issues. (2nd shoutout in a row for Leigh!
-ed) - * Jorge Castro and Paris Pittman would like to thank Vanessa Heric and the rest of the CNCF/Linux Foundation personnel that helped us pull off another great Contributor Summit and Kubecon + * Jorge Castro and Paris Pittman would like to thank Vanessa Heric and the rest of the CNCF/Linux Foundation personnel that helped us pull off another great Contributor Summit and KubeCon/CloudNativeCon * [Top Stackoverflow Users](https://stackoverflow.com/tags/kubernetes/topusers) in the Kubernetes Tag for the month * Anton Kostenko, Nicola Ben, Maruf Tuhin, Jonah Benton, Const * Message from the docs team re: hugo transition: @@ -1076,7 +1076,7 @@ * 35k users with 5k weekly active users * Produced Quarterly * **SIG Updates:** - * **Thanks to test infra folks for labels** + * **Thanks to test infra folks for labels** * **Cluster Lifecycle [Tim St. Clair]** * Kubeadm * Steadily burning down against 1.11 @@ -1103,21 +1103,21 @@ * Slight changes to structure of object (Unify metrics sources) * Better e2e tests on all HPA functionality * Movement along the path to blocking HPA custom metrics e2e tests - * VPA work coming along, alpha soon (demo at KubeCon) - * Come say hi at KubeCon (Intro and Deep Dive, talks on HPA) + * VPA work coming along, alpha soon (demo at KubeCon/CloudNativeCon) + * Come say hi at KubeCon/CloudNativeCon (Intro and Deep Dive, talks on HPA) * **PM [Jaice Singer DuMars]** * Working on mechanisms to get feedback from the user community (playing with something like [http://kubernetes.report](http://kubernetes.report) -- in development, not ready for distro yet) - * Presenting at KubeCon 16:35 on Thursday ~ Ihor and Aparna + * Presenting at KubeCon/CloudNativeCon 16:35 on Thursday ~ Ihor and Aparna * Working on a charter draft * We actually represent three 'P' areas: product, project, and program * Help SIG focus on implementations * We're trying to look a * **Announcements:** - * **Kubecon next week, no community meeting! 
**\o/ + * **KubeCon/CloudNativeCon next week, no community meeting! **\o/ * **Last Chance to Register for the Contributor Summit - ** * Registration ends Fri, Apr 7th @ 7pm UTC - * Tuesday, May 1, day before Kubecon - * You must [register here](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) even if you've registered for Kubecon + * Tuesday, May 1, day before KubeCon/CloudNativeCon + * You must [register here](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) even if you've registered for KubeCon/CloudNativeCon * SIGs, remember to [put yourself down on the SIG Update sheet](https://docs.google.com/spreadsheets/d/1adztrJ05mQ_cjatYSnvyiy85KjuI6-GuXsRsP-T2R3k/edit#gid=1543199895) to give your 5 minute update that afternoon. * **Shoutouts!** * See someone doing something great in the community? Mention them in #shoutouts on slack and we'll mention them during the community meeting: @@ -1201,7 +1201,7 @@ * @cblecker for fielding so many issues and PRs. * <span style="text-decoration:underline;">Help Wanted?</span> * SIG UI is looking for more active contributors to revitalize the dashboard. Please join their [communication channels](https://github.com/kubernetes/community/blob/master/sig-ui/README.md) and attend the next meeting to announce your interest. 
- * <span style="text-decoration:underline;">KubeCon EU Update</span> + * <span style="text-decoration:underline;">KubeCon/CloudNativeCon EU Update</span> * Current contributor track session voting will be emailed to attendees today! * RSVP for Contributor Summit [[here]](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) * SIG Leads, please do your updates for the 5 minute updates @@ -1223,7 +1223,7 @@ * Support for kubeadm and minikube * Create issues on crio project on github * sig-node does not have plans to choose one yet - * Working on conformance to address implementations which should lead to choosing default implementation + * Working on conformance to address implementations which should lead to choosing default implementation * Choice is important since it would be used under scalability testing * Test data? Plan to publish results to testgrid, will supply results ASAP * Previously blocked on dashboard issue @@ -1301,8 +1301,8 @@ * 6 charters in flight working on charter, then going to other SIGs * [r/kubernetes: Ask Me Anything](https://www.reddit.com/r/kubernetes/comments/8b7f0x/we_are_kubernetes_developers_ask_us_anything/) - thanks everyone for participating, lots of user feedback, please have a look. * We'll likely do more of these in the future. - * [Kubernetes Contributor Summit @ Kubecon](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) - May 1 (jb) - * You need to register for this even if you already registered for Kubecon! Link to the form in the link above. + * [Kubernetes Contributor Summit @ KubeCon/CloudNativeCon](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) - May 1 (jb) + * You need to register for this even if you already registered for KubeCon/CloudNativeCon! Link to the form in the link above.
* New contributor/on-going contrib in morning and general tracks in afternoon * New CNCF Interactive Landscape: [https://landscape.cncf.io/](https://landscape.cncf.io/) (dan kohn) @@ -1322,7 +1322,7 @@ * creating docker registry and helm repos, pushing helm chart * CLI and web UI * Caching upstream repositories - * Walkthrough and Example: [https://jfrog.com/blog/control-your-kubernetes-voyage-with-artifactory/](https://jfrog.com/blog/control-your-kubernetes-voyage-with-artifactory/) & [https://github.com/jfrogtraining/kubernetes_example](https://github.com/jfrogtraining/kubernetes_example) + * Walkthrough and Example: [https://jfrog.com/blog/control-your-kubernetes-voyage-with-artifactory/](https://jfrog.com/blog/control-your-kubernetes-voyage-with-artifactory/) & [https://github.com/jfrogtraining/kubernetes_example](https://github.com/jfrogtraining/kubernetes_example) * Questions * Difference between commercial and free (and what's the cost) * Free only has maven support, is open source, commercial supports everything (including Kubernetes-related technologies, like Helm) @@ -1399,8 +1399,8 @@ * They will be migrated, with blog manager opening PRs as needed * SIG Service Catalog - bumped to 5/24 * **Announcements** - * [Kubernetes Contributor Summit @ Kubecon](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) - May 1 [Jorge Castro] - * You need to register for this even if you already registered for Kubecon! Link to the form in the link above. + * [Kubernetes Contributor Summit @ KubeCon/CloudNativeCon](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) - May 1 [Jorge Castro] + * You need to register for this even if you already registered for KubeCon/CloudNativeCon! Link to the form in the link above. 
* Current contributor track voting on topics will be emailed to attendees Monday * Reddit r/kubernetes AMA [Jorge Castro] * This next Tuesday: [https://www.reddit.com/r/kubernetes/comments/89gdv0/kubernetes_ama_will_be_on_10_april_tuesday/](https://www.reddit.com/r/kubernetes/comments/89gdv0/kubernetes_ama_will_be_on_10_april_tuesday/) @@ -1439,7 +1439,7 @@ * Generates join keys for kubeadm * Sends information like master election, cluster admin config file, etc back to shared data set * Resources: - * Kubecon Presentation [https://www.slideshare.net/rhirschfeld/kubecon-2017-zero-touch-kubernetes](https://www.slideshare.net/rhirschfeld/kubecon-2017-zero-touch-kubernetes) + * KubeCon/CloudNativeCon Presentation [https://www.slideshare.net/rhirschfeld/kubecon-2017-zero-touch-kubernetes](https://www.slideshare.net/rhirschfeld/kubecon-2017-zero-touch-kubernetes) * Longer Demo Video [https://www.youtube.com/watch?v=OMm6Oz1NF6I](https://www.youtube.com/watch?v=OMm6Oz1NF6I) * Digital Rebar:[https://github.com/digitalrebar/provision](https://github.com/digitalrebar/provision), * Project Site: [http://rebar.digital](http://rebar.digital) @@ -1453,7 +1453,7 @@ * Looking for contributors to answer questions, 2 slots * Reach out to @paris on Slack if you're interested in participating * Contributor Summit in Copenhagen May 1 - [registration](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/co-located-events/kubernetes-contributor-summit/) is live - * KubeCon Copenhagen (May 2-4) is **on track to sell out**. [Register](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/) + * KubeCon/CloudNativeCon Copenhagen (May 2-4) is **on track to sell out**. [Register](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/) * Shoutouts this week (from #shoutouts in slack): * @nabrahams who picked the 1.10 release notes as his first contribution. We literally could not have done this without him! 
* [ 0:15 ]** Kubernetes 1.10 Release Retrospective** @@ -1590,7 +1590,7 @@ * Registration for the Contributor Summit is now live: * See [this page](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/co-located-events/kubernetes-contributor-summit/) for details * Please register if you're planning on attending, we need this so we have the correct amount of food! - * Just registering for Kubecon is not enough! + * Just registering for KubeCon/CloudNativeCon is not enough! * [Office Hours Next Week!](https://github.com/kubernetes/community/blob/master/events/office-hours.md) * Volunteer developers needed to answer questions * [Helm Summit Videos](https://www.youtube.com/playlist?list=PL69nYSiGNLP3PlhEKrGA0oN4eY8c4oaAH&disable_polymer=true) are up. @@ -1628,7 +1628,7 @@ * [ 0:00 ] **Graph o' the Week **Zach Corleissen, SIG Docs * Weekly update on data from devstats.k8s.io * [https://k8s.devstats.cncf.io/d/44/time-metrics?orgId=1&var-period=w&var-repogroup_name=Docs&var-repogroup=docs&var-apichange=All&var-size_name=All&var-size=all&var-full_name=Kubernetes](https://k8s.devstats.cncf.io/d/44/time-metrics?orgId=1&var-period=w&var-repogroup_name=Docs&var-repogroup=docs&var-apichange=All&var-size_name=All&var-size=all&var-full_name=Kubernetes) - * Docs folks had vague anxiety (without concrete data) on their response times for issues and PRs. Devstats shows less than approx. 4 days initial response times during the last year, outside of a few spikes associated with holidays on the calendar and KubeCon. + * Docs folks had vague anxiety (without concrete data) on their response times for issues and PRs. Devstats shows less than approx. 4 days initial response times during the last year, outside of a few spikes associated with holidays on the calendar and KubeCon/CloudNativeCon. 
* Introduction of prow into kubernetes/website led to a demonstrable improvement in early 2018 * [ 0:00 ] **SIG Updates** * SIG Apps [Adnan Abdulhussein] (confirmed) @@ -1647,9 +1647,9 @@ * [Governance.md updated with subprojects](https://github.com/kubernetes/community/blob/master/governance.md#subprojects) * [WIP: Subproject Meta](https://docs.google.com/document/d/1FHauGII5LNVM-dZcNfzYZ-6WRs9RoPctQ4bw5dczrkk/edit#heading=h.2nslsje41be1) * [WIP: Charter FAQ (the "Why"s)](https://github.com/kubernetes/community/pull/1908) - * Reminder: [Contributor Summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit), 1 May, day before Kubecon + * Reminder: [Contributor Summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit), 1 May, day before KubeCon/CloudNativeCon * CNCF would like feedback on the draft blog post for 1.10 beta: - * [http://blog.kubernetes.io/2018/03/first-beta-version-of-kubernetes-1-10.html](http://blog.kubernetes.io/2018/03/first-beta-version-of-kubernetes-1-10.html) + * [https://kubernetes.io/blog/2018/03/first-beta-version-of-kubernetes-1-10/](https://kubernetes.io/blog/2018/03/first-beta-version-of-kubernetes-1-10/) * Please contact [Natasha Woods](mailto:nwoods@linuxfoundation.org) with your feedback * Shoutouts this week * See someone doing something great for the community? Mention them in #shoutouts on slack. 
@@ -1727,9 +1727,9 @@ * [ 0:00 ] <strong>Announcements</strong> * [Owner/Maintainer ](https://github.com/kubernetes/community/pull/1861/files)[pwittrock] * Maintainer is folding into Owner - * Reminder: Contributor Summit happens 1 May, day before Kubecon + * Reminder: Contributor Summit happens 1 May, day before KubeCon/CloudNativeCon * [https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) - * Kubecon price increase March 9 + * KubeCon/CloudNativeCon price increase March 9 * [https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/](https://events.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2018/) * Copenhagen May 2-4, 2018 * [Meet Our Contributors is next Weds!](https://github.com/kubernetes/community/blob/master/mentoring/meet-our-contributors.md) @@ -1772,7 +1772,7 @@ * SIG Cluster Lifecycle [First Last] * Not happening * [ 0:00 ] **Announcements** - * Reminder: Contributor Summit happens 1 May, day before Kubecon + * Reminder: Contributor Summit happens 1 May, day before KubeCon/CloudNativeCon * [https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) * Shoutouts this week * Zhonghu Xu - @hzxuzhonghu for many high quality apiserver APIs PRs @@ -1866,7 +1866,7 @@ * Roadshow! * F2F this Tuesday @ INDEX * Contributor Summit in Copenhagen - * May 1; registration will be on KubeCon site this week + * May 1; registration will be on KubeCon/CloudNativeCon site this week * New weekly meeting (from bi-weekly) same day / time (Weds @ 5pUTC) * SIG API Machinery [Daniel Smith](c) * Reminder: SIG-API doesn't own the API (that's SIG-architecture), but rather mechanics in API server, registry and discovery @@ -1876,7 +1876,7 @@ * [ 0:00 ] **Announcements** * Office hours next week! 
* [https://github.com/kubernetes/community/blob/master/events/office-hours.md](https://github.com/kubernetes/community/blob/master/events/office-hours.md) - * Reminder: Contributor Summit will be 1 May, the day before Kubecon EU: [https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) + * Reminder: Contributor Summit will be 1 May, the day before KubeCon/CloudNativeCon EU: [https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit](https://github.com/kubernetes/community/tree/master/events/2018/05-contributor-summit) * /lgtm, /approve and the principle of least surprise * [https://github.com/kubernetes/test-infra/issues/6589](https://github.com/kubernetes/test-infra/issues/6589) * Do we all need to use [the exact same code review process](https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md#the-code-review-process)? 
@@ -2076,11 +2076,11 @@ * contributing tests * cleaning up tests * what things are tested - * e2e framework + * e2e framework * Conformance * Please come participate * Kubernetes Documentation [User Journeys MVP](https://kubernetes.io/docs/home/) launched [Andrew Chen] - * Please give SIG Docs for feedback, still adding things later + * Please give SIG Docs feedback; still adding things later * Can contribute normally (join SIG docs for more information) * New landing page incorporating personas (users, contributors, operators) * Levels of knowledge (foundational, advanced, etc) @@ -2090,7 +2090,7 @@ * Feel free to comment offline or on the issue if you have comments * TL;DR: call it the "control plane" * Issue: [https://github.com/kubernetes/website/issues/6525](https://github.com/kubernetes/website/issues/6525) - * Contributor Summit for Kubecon EU [Jorge and Paris] + * Contributor Summit for KubeCon/CloudNativeCon EU [Jorge and Paris] * SAVE THE DATE: May 1, 2018 * [https://github.com/kubernetes/community/pull/1718](https://github.com/kubernetes/community/pull/1718) * #shoutouts - [Jorge Castro] @@ -2136,7 +2136,7 @@ * Breaking up the monolithic kubectl.
* * [ 0:00 ] **Announcements** - * SIG leads: register to offer intros and deep dives in SIG track at KubeCon Copenhagen (May 2-4): [overview](https://groups.google.com/forum/#!searchin/kubernetes-dev/kohn%7Csort:date/kubernetes-dev/5U-eNRBav2Q/g71MW47ZAgAJ), [signup](https://docs.google.com/forms/d/e/1FAIpQLSedSif6MwGfdI1-Rb33NRjTYwotQtIhNL7-ebtYQoDARPB2Tw/viewform) (1/31 deadline) + * SIG leads: register to offer intros and deep dives in SIG track at KubeCon/CloudNativeCon Copenhagen (May 2-4): [overview](https://groups.google.com/forum/#!searchin/kubernetes-dev/kohn%7Csort:date/kubernetes-dev/5U-eNRBav2Q/g71MW47ZAgAJ), [signup](https://docs.google.com/forms/d/e/1FAIpQLSedSif6MwGfdI1-Rb33NRjTYwotQtIhNL7-ebtYQoDARPB2Tw/viewform) (1/31 deadline) * [SIG Contributor Experience news: new lead, new meeting](https://groups.google.com/forum/#!topic/kubernetes-dev/65S1Y3IK8PQ) * [Meet Our Contributors ](https://github.com/kubernetes/community/blob/master/mentoring/meet-our-contributors.md)- Feb 7th [Paris] * 730a PST/ 3:30 pm UTC & 1pm PST / 9pm UTC @@ -2200,7 +2200,7 @@ * GSoC [Ihor D] * [https://github.com/cncf/soc](https://github.com/cncf/soc); [k8s gh](https://github.com/kubernetes/community/blob/master/mentoring/google-summer-of-code.md) * nikhita has volunteered to drive this program for Kubernetes - * SIG Intros & Deep Dives sessions registration at KubeCon & CloudNativeCon will be announced shortly (stay tuned!) + * SIG Intros & Deep Dives sessions registration at KubeCon/CloudNativeCon will be announced shortly (stay tuned!) * Changes to this meeting's format [Jorge Castro] * SIGs scheduled per cycle instead of adhoc * Demo changes diff --git a/communication/resources/README.md b/communication/resources/README.md new file mode 100644 index 00000000..6dd985f8 --- /dev/null +++ b/communication/resources/README.md @@ -0,0 +1,77 @@ +# Kubernetes Resources + +> A collection of resources organized by medium (e.g.
audio, text, video) + +## Table of Contents + +<!-- vim-markdown-toc GFM --> + +- [Contributions](#contributions) +- [Resources](#resources) + - [Audio](#audio) + - [Text](#text) + - [Video](#video) + - [Learning Resources](#learning-resources) + +<!-- vim-markdown-toc --> + +## Contributions + +If you would like to contribute to this list, please submit a PR and add `/sig contributor-experience` and `/assign @petermbenjamin`. + +The criteria for contributions are simple: + +- The resource must be related to Kubernetes. +- The resource must be free. +- Avoid undifferentiated search links (e.g. `https://example.com/search?q=kubernetes`), unless you can ensure the most relevant results (e.g. `https://example.com/search?q=kubernetes&category=technology`) + +## Resources + +### Audio + +- [PodCTL](https://twitter.com/PodCTL) +- [Kubernetes Podcast](https://kubernetespodcast.com) +- [The New Stack Podcasts](https://thenewstack.io/podcasts/) + +### Text + +- [Awesome Kubernetes](https://github.com/ramitsurana/awesome-kubernetes) +- [CNCF Blog](https://www.cncf.io/newsroom/blog/) +- [Dev.To](https://dev.to/t/kubernetes) +- [Heptio Blog](https://blog.heptio.com) +- [KubeTips](http://kubetips.com) +- [KubeWeekly](https://twitter.com/kubeweekly) +- [Kubedex](https://kubedex.com/category/blog/) +- [Kubernetes Blog](https://kubernetes.io/blog/) +- [Kubernetes Enhancements Repo](https://github.com/kubernetes/enhancements) +- [Kubernetes Forum](https://discuss.kubernetes.io) +- [Last Week in Kubernetes Development](http://lwkd.info) +- [Medium](https://medium.com/tag/kubernetes) +- [Reddit](https://www.reddit.com/r/kubernetes) +- [The New Stack: CI/CD With Kubernetes](https://thenewstack.io/ebooks/kubernetes/ci-cd-with-kubernetes/) +- [The New Stack: Kubernetes Deployment & Security Patterns](https://thenewstack.io/ebooks/kubernetes/kubernetes-deployment-and-security-patterns/) +- [The New Stack: Kubernetes Solutions 
Directory](https://thenewstack.io/ebooks/kubernetes/kubernetes-solutions-directory/) +- [The New Stack: State of Kubernetes Ecosystem](https://thenewstack.io/ebooks/kubernetes/state-of-kubernetes-ecosystem/) +- [The New Stack: Use-Cases for Kubernetes](https://thenewstack.io/ebooks/use-cases/use-cases-for-kubernetes/) +- [Weaveworks Blog](https://www.weave.works/blog/category/kubernetes/) + +### Video + +- [BrightTALK Webinars](https://www.brighttalk.com/search/?q=kubernetes) +- [Ceph YouTube Channel](https://www.youtube.com/channel/UCno-Fry25FJ7B4RycCxOtfw) +- [CNCF YouTube Channel](https://www.youtube.com/channel/UCvqbFHwN-nwalWPjPUKpvTA) +- [Heptio YouTube Channel](https://www.youtube.com/channel/UCjQU5ZI2mHswy7OOsii_URg) +- [Joe Hobot YouTube Channel](https://www.youtube.com/channel/UCdxEoi9hB617EDLEf8NWzkA) +- [Kubernetes YouTube Channel](https://www.youtube.com/channel/UCZ2bu0qutTOM0tHYa_jkIwg) +- [Lachlan Evenson YouTube Channel](https://www.youtube.com/channel/UCC5NsnXM2lE6kKfJKdQgsRQ) +- [Rancher YouTube Channel](https://www.youtube.com/channel/UCh5Xtp82q8wjijP8npkVTBA) +- [Rook YouTube Channel](https://www.youtube.com/channel/UCa7kFUSGO4NNSJV8MJVlJAA) +- [Tigera YouTube Channel](https://www.youtube.com/channel/UC8uN3yhpeBeerGNwDiQbcgw) +- [Weaveworks YouTube Channel](https://www.youtube.com/channel/UCmIz9ew1lA3-XDy5FqY-mrA/featured) + +### Learning Resources + +- [edx Courses](https://www.edx.org/course?search_query=kubernetes) +- [Katacoda Interactive Tutorials](https://www.katacoda.com) +- [Udacity Course](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) +- [Udemy Courses](https://www.udemy.com/courses/search/?courseLabel=&sort=relevance&q=kubernetes&price=price-free) diff --git a/community-membership.md b/community-membership.md index 51c76bb0..91467b99 100644 --- a/community-membership.md +++ b/community-membership.md @@ -68,7 +68,9 @@ Kubernetes organization to any related orgs automatically, but such is not the case 
currently. If you are a Kubernetes org member, you are implicitly eligible for membership in related orgs, and can request membership when it becomes relevant, by [opening an issue][membership request] against the kubernetes/org -repo, as above. +repo, as above. However, if you are a member of any of the related +[Kubernetes GitHub organizations] but not of the [Kubernetes org], +you will need explicit sponsorship for your membership request. ### Responsibilities and privileges @@ -226,6 +228,7 @@ The Maintainer role has been removed and replaced with a greater focus on [OWNER [contributor guide]: /contributors/guide/README.md [Kubernetes GitHub Admin team]: /github-management/README.md#github-administration-team [Kubernetes GitHub organizations]: /github-management#actively-used-github-organizations +[Kubernetes org]: https://github.com/kubernetes [kubernetes-dev@googlegroups.com]: https://groups.google.com/forum/#!forum/kubernetes-dev [kubernetes-sigs]: https://github.com/kubernetes-sigs [membership request]: https://github.com/kubernetes/org/issues/new?template=membership.md&title=REQUEST%3A%20New%20membership%20for%20%3Cyour-GH-handle%3E diff --git a/contributors/design-proposals/api-machinery/api-chunking.md b/contributors/design-proposals/api-machinery/api-chunking.md index 0a099fd3..a04c9ba4 100644 --- a/contributors/design-proposals/api-machinery/api-chunking.md +++ b/contributors/design-proposals/api-machinery/api-chunking.md @@ -89,7 +89,7 @@ Implementations that cannot offer consistent ranging (returning a set of results #### etcd3 -For etcd3 the continue token would contain a resource version (the snapshot that we are reading that is consistent across the entire LIST) and the start key for the next set of results. 
Upon receiving a valid continue token the apiserver would instruct etcd3 to retrieve the set of results at a given resource version, beginning at the provided start key, limited by the maximum number of requests provided by the continue token (or optionally, by a different limit specified by the client). If more results remain after reading up to the limit, the storage should calculate a continue token that would begin at the next possible key, and the continue token set on the returned list. +For etcd3 the continue token would contain a resource version (the snapshot that we are reading that is consistent across the entire LIST) and the start key for the next set of results. Upon receiving a valid continue token the apiserver would instruct etcd3 to retrieve the set of results at a given resource version, beginning at the provided start key, limited by the maximum number of requests provided by the continue token (or optionally, by a different limit specified by the client). If more results remain after reading up to the limit, the storage should calculate a continue token that would begin at the next possible key, and the continue token set on the returned list. The storage layer in the apiserver must apply consistency checking to the provided continue token to ensure that malicious users cannot trick the server into serving results outside of its range. The storage layer must perform defensive checking on the provided value, check for path traversal attacks, and have stable versioning for the continue token. diff --git a/contributors/design-proposals/api-machinery/auditing.md b/contributors/design-proposals/api-machinery/auditing.md index b4def584..2770f56d 100644 --- a/contributors/design-proposals/api-machinery/auditing.md +++ b/contributors/design-proposals/api-machinery/auditing.md @@ -35,7 +35,7 @@ while ## Constraints and Assumptions -* it is not the goal to implement all output formats one can imagine. 
The main goal is to be extensible with a clear golang interface. Implementations of e.g. CADF must be possible, but won't be discussed here. +* it is not the goal to implement all output formats one can imagine. The main goal is to be extensible with a clear golang interface. Implementations of e.g. CADF must be possible, but won't be discussed here. * dynamic loading of backends for new output formats are out of scope. ## Use Cases @@ -243,7 +243,7 @@ type PolicyRule struct { // An empty list implies every user. Users []string // The user groups this rule applies to. If a user is considered matching - // if the are a member of any of these groups + // if they are a member of any of these groups // An empty list implies every user group. UserGroups []string diff --git a/contributors/design-proposals/api-machinery/customresource-conversion-webhook.md b/contributors/design-proposals/api-machinery/customresource-conversion-webhook.md index a7a5c6ff..2b4aeb25 100644 --- a/contributors/design-proposals/api-machinery/customresource-conversion-webhook.md +++ b/contributors/design-proposals/api-machinery/customresource-conversion-webhook.md @@ -12,7 +12,7 @@ Thanks: @dbsmith, @deads2k, @sttts, @liggit, @enisoc ### Summary -This document proposes a detailed plan for adding support for version-conversion of Kubernetes resources defined via Custom Resource Definitions (CRD). The API Server is extended to call out to a webhook at appropriate parts of the handler stack for CRDs. +This document proposes a detailed plan for adding support for version-conversion of Kubernetes resources defined via Custom Resource Definitions (CRD). The API Server is extended to call out to a webhook at appropriate parts of the handler stack for CRDs. 
No new resources are added; the [CRD resource](https://github.com/kubernetes/kubernetes/blob/34383aa0a49ab916d74ea897cebc79ce0acfc9dd/staging/src/k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types.go#L187) is extended to include conversion information as well as multiple schema definitions, one for each apiVersion that is to be served. @@ -89,12 +89,12 @@ type CustomResourceDefinitionSpec struct { Version string Names CustomResourceDefinitionNames Scope ResourceScope - // This optional and correspond to the first version in the versions list + // Optional, can only be provided if per-version schema is not provided. Validation *CustomResourceValidation - // Optional, correspond to the first version in the versions list + // Optional, can only be provided if per-version subresource is not provided. Subresources *CustomResourceSubresources Versions []CustomResourceDefinitionVersion - // Optional, and correspond to the first version in the versions list + // Optional, can only be provided if per-version additionalPrinterColumns is not provided. AdditionalPrinterColumns []CustomResourceColumnDefinition Conversion *CustomResourceConversion @@ -104,9 +104,11 @@ type CustomResourceDefinitionVersion struct { Name string Served Boolean Storage Boolean - // These three fields should not be set for first item in Versions list + // Optional, can only be provided if top level validation is not provided. Schema *JSONSchemaProp + // Optional, can only be provided if top level subresource is not provided. Subresources *CustomResourceSubresources + // Optional, can only be provided if top level additionalPrinterColumns is not provided. AdditionalPrinterColumns []CustomResourceColumnDefinition } @@ -125,21 +127,49 @@ type CustomResourceConversionWebhook { } ``` -### Defaulting +### Top level fields to Per-Version fields -In case that there is no versions list, a single version with values defaulted to top level version will be created. 
That means a single version with a name set to spec.version. -All newly added per version fields (schema, additionalPrinterColumns or subresources) will be defaulted to the corresponding top level field except for the first version in the list that will remain empty. +In *CRD v1beta1* (apiextensions.k8s.io/v1beta1), per-version schema, additionalPrinterColumns, and subresources (each called X in this section) can be defined, and these validation rules will be applied to them: +* Either the top level X or per-version X can be set, but not both. This rule applies to each individual X, not the whole set. E.g. the top level schema can be set while per-version subresources are set. +* The per-version X values cannot all be the same. E.g. if all per-version schemas are the same, the CRD object will be rejected with an error message asking the user to use the top level schema. -### Validation +In *CRD v1* (apiextensions.k8s.io/v1), there will be only the versions list, with no top level X. The second validation rule guarantees a clean migration to v1. The conversion rules are: -To keep backward compatibility, the top level fields (schema, additionalPrinterColumns or subresources) stay the same and source of truth for first (top) version. The first item in the versions list must not set any of those fields. The plan is to use unified version list for v1. +*v1beta1->v1:* +* If the top level X is set in v1beta1, then it will be copied to all versions in v1. +* If per-version X are set in v1beta1, then they will be used as the per-version X in v1. + +*v1->v1beta1:* + +* If all per-version X are the same in v1, they will be copied to the top level X in v1beta1. +* Otherwise, they will be kept as per-version X in v1beta1. + +#### Alternative approaches considered + +First, a defaulting approach was considered, in which per-version fields would be defaulted to the corresponding top level fields,
but that would be a backward-incompatible change; quoting from the API [guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md#backward-compatibility-gotchas): + +> A single feature/property cannot be represented using multiple spec fields in the same API version simultaneously + +Hence defaulting, whether implicit or explicit, has the potential to break backward compatibility, as we would have two sets of fields representing the same feature. + +Other solutions that do not involve defaulting were also considered: + +* Field discriminator: Use `Spec.Conversion.Strategy` as a discriminator to decide which set of fields to use. This approach would work, but the proposed solution keeps the mutual exclusivity in a broader sense and is preferred. +* Per-version override: If a per-version X is specified, use it; otherwise use the top level X if provided. While, with careful validation and feature gating, this solution is also backward compatible, the overriding behaviour would need to be kept in CRD v1, which looks too complicated and not clean enough to keep for a v1 API. + +Refer to [this document](http://bit.ly/k8s-crd-per-version-defaulting) for more details and discussion of those solutions. ### Support Level The feature will be alpha in the first implementation and will have a feature gate that defaults to false. The roll-back story with a feature gate is much clearer: if the feature is alpha in kubernetes release Y (> X, where the feature is missing) and becomes beta in kubernetes release Z, it is not safe to use the feature and downgrade from Y to X, but the feature is alpha in Y, which is fine. It is safe to downgrade from Z to Y (given that we enable the feature gate in Y), and that is desirable as the feature is beta in Z. +On downgrading from Z to Y, stored CRDs can have per-version fields set.
While the feature gate can be off on Y (alpha cluster), it is dangerous to disable per-version schema validation or status subresources, as that makes the status field mutable and disables validation on CRs. Thus the feature gate in Y only protects adding per-version fields, not the actual behaviour. So if the feature gate is off in Y: + +* Per-version X cannot be set on CRD create (per-version fields are auto-cleared). +* Per-version X can only be set/changed on CRD update *if* the existing CRD object already has per-version X set. +This way, even if we downgrade from Z to Y, per-version validations and subresources will be honored. This will not be the case for webhook conversion itself. The feature gate will also protect the implementation of webhook conversion, and an alpha cluster with the feature gate disabled will return an error for CRDs with webhook conversion (that were created with a future version of the cluster). ### Rollback @@ -153,7 +183,7 @@ Users that need to rollback to version X (but may currently be running version Y 4. If the user rolls forward again, then custom resources will be served again. -If a user does not use the webhook feature but uses the versioned schema, additionalPrinterColumns, and/or subresources and rollback to a version that does not support them per version, any value set per version will be ignored and only values in top level spec.* will be honor. +If a user does not use the webhook feature but uses the versioned schema, additionalPrinterColumns, and/or subresources and rolls back to a version that does not support them per-version, any value set per-version will be ignored and only values in the top level spec.* will be honored.
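The rule in the bullets above (clear per-version fields on create when the gate is off; keep them on update only if the existing object already uses them) follows the common Kubernetes "drop disabled fields" pattern. A minimal Go sketch, with simplified stand-in types rather than the real CRD API types:

```go
package main

import "fmt"

// Version is a minimal stand-in for a CRD version entry; Schema stands in
// for the per-version field ("X") guarded by the feature gate.
type Version struct {
	Name   string
	Schema string // empty means unset
}

func hasPerVersionSchema(versions []Version) bool {
	for _, v := range versions {
		if v.Schema != "" {
			return true
		}
	}
	return false
}

// dropDisabledFields clears per-version fields when the gate is off, unless
// the existing (old) object already has them set -- which is what keeps
// per-version validation honored after a downgrade. oldV is nil on create.
func dropDisabledFields(gateEnabled bool, newV, oldV []Version) []Version {
	if gateEnabled || hasPerVersionSchema(oldV) {
		return newV // keep: gate on, or fields already in use before downgrade
	}
	for i := range newV {
		newV[i].Schema = "" // auto-clear on create/update with gate off
	}
	return newV
}

func main() {
	created := dropDisabledFields(false, []Version{{Name: "v1", Schema: "s"}}, nil)
	fmt.Println(created[0].Schema == "") // prints "true": cleared on create with gate off
}
```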
A kubernetes release that defines the types but gates them with an alpha feature gate, however, can keep these fields but ignore their value. @@ -233,10 +263,10 @@ For operations that need more than one conversion (e.g. LIST), no partial result No new caching is planned as part of this work, but the API Server may in the future cache webhook POST responses. Most API operations are reads. The most common kind of read is a watch. All watched objects are cached in memory. For CRDs, the cache -is per version. That is the result of having one [REST store object](https://github.com/kubernetes/kubernetes/blob/3cb771a8662ae7d1f79580e0ea9861fd6ab4ecc0/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go#L72) per version which +is per-version. That is the result of having one [REST store object](https://github.com/kubernetes/kubernetes/blob/3cb771a8662ae7d1f79580e0ea9861fd6ab4ecc0/staging/src/k8s.io/apiextensions-apiserver/pkg/registry/customresource/etcd.go#L72) per-version which was an arbitrary design choice but would be required for better caching with webhook conversion. In this model, each GVK is cached, regardless of whether some GVKs share storage. Thus, watches do not cause conversion. So, conversion webhooks will not add overhead to the watch path. Watch cache is per api server and eventually consistent. -Non-watch reads are also cached (if requested resourceVersion is 0 which is true for generated informers by default, but not for calls like `kubectl get ...`, namespace cleanup, etc). The cached objects are converted and per version (TODO: fact check). So, conversion webhooks will not add overhead here too. +Non-watch reads are also cached (if requested resourceVersion is 0 which is true for generated informers by default, but not for calls like `kubectl get ...`, namespace cleanup, etc). The cached objects are converted and per-version (TODO: fact check). So, conversion webhooks will not add overhead here either.
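The per-version caching described above can be pictured as a cache keyed by group/version/kind: each served version stores its own already-converted objects, so a cache hit never triggers the conversion webhook. A toy Go sketch only — the real mechanism is one REST store plus watch cache per version, not a plain map:

```go
package main

import "fmt"

// GVK keys the cache. Because each served version has its own store,
// objects are cached per version, already converted to that version.
type GVK struct{ Group, Version, Kind string }

type cache struct {
	objects map[GVK]map[string]string // GVK -> object name -> serialized object
}

func newCache() *cache { return &cache{objects: map[GVK]map[string]string{}} }

func (c *cache) put(k GVK, name, obj string) {
	if c.objects[k] == nil {
		c.objects[k] = map[string]string{}
	}
	c.objects[k][name] = obj
}

// get returns the already-converted object for the requested version;
// a hit means no conversion work on the read (or watch) path.
func (c *cache) get(k GVK, name string) (string, bool) {
	obj, ok := c.objects[k][name]
	return obj, ok
}

func main() {
	c := newCache()
	v1 := GVK{"example.com", "v1", "Widget"}
	c.put(v1, "w", `{"apiVersion":"example.com/v1"}`)
	_, hitOther := c.get(GVK{"example.com", "v2", "Widget"}, "w")
	fmt.Println(hitOther) // prints "false": each version is cached independently
}
```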
If in the future this proves to be a performance problem, we might need to add caching later. The Authorization and Authentication webhooks already use a simple scheme with APIserver-side caching and a single TTL for expiration. This has worked fine, so we can repeat this process. It does not require Webhook hosts to be aware of the caching. diff --git a/contributors/design-proposals/api-machinery/metadata-policy.md b/contributors/design-proposals/api-machinery/metadata-policy.md index 9d07186f..b9a78e36 100644 --- a/contributors/design-proposals/api-machinery/metadata-policy.md +++ b/contributors/design-proposals/api-machinery/metadata-policy.md @@ -20,7 +20,7 @@ admission controller that uses code, rather than configuration, to map the resource requests and limits of a pod to QoS, and attaches the corresponding annotation.) -We anticipate a number of other uses for `MetadataPolicy`, such as defaulting +We anticipate a number of other uses for `MetadataPolicy`, such as defaulting for labels and annotations, prohibiting/requiring particular labels or annotations, or choosing a scheduling policy within a scheduler. We do not discuss them in this doc. diff --git a/contributors/design-proposals/apps/controller_history.md b/contributors/design-proposals/apps/controller_history.md index 6e313ce8..2e1213ad 100644 --- a/contributors/design-proposals/apps/controller_history.md +++ b/contributors/design-proposals/apps/controller_history.md @@ -267,7 +267,7 @@ ControllerRevisions, this approach is reasonable. - A revision is considered to be live while any generated Object labeled with its `.Name` is live. - This method has the benefit of providing visibility, via the label, to - users with respect to the historical provenance of a generated Object. + users with respect to the historical provenance of a generated Object. - The primary drawback is the lack of support for using garbage collection to ensure that only non-live version snapshots are collected. 1. 
Controllers may also use the `OwnerReferences` field of the diff --git a/contributors/design-proposals/apps/deployment.md b/contributors/design-proposals/apps/deployment.md index 16c35dfe..81ef5e66 100644 --- a/contributors/design-proposals/apps/deployment.md +++ b/contributors/design-proposals/apps/deployment.md @@ -197,7 +197,7 @@ For example, consider the following case: Users can pause/cancel a rollout by doing a non-cascading deletion of the Deployment before it is complete. Recreating the same Deployment will resume it. For example, consider the following case: -- User creats a Deployment to perform a rolling-update for 10 pods from image:v1 to +- User creates a Deployment to perform a rolling-update for 10 pods from image:v1 to image:v2. - User then deletes the Deployment while the old and new RSs are at 5 replicas each. User will end up with 2 RSs with 5 replicas each. diff --git a/contributors/design-proposals/apps/selector-generation.md b/contributors/design-proposals/apps/selector-generation.md index e0b3bf22..2f3a6b49 100644 --- a/contributors/design-proposals/apps/selector-generation.md +++ b/contributors/design-proposals/apps/selector-generation.md @@ -61,7 +61,7 @@ think about it. about uniqueness, just labeling for user's own reasons. - Defaulting logic sets `job.spec.selector` to `matchLabels["controller-uid"]="$UIDOFJOB"` -- Defaulting logic appends 2 labels to the `.spec.template.metadata.labels`. +- Defaulting logic appends 2 labels to the `.spec.template.metadata.labels`. - The first label is controller-uid=$UIDOFJOB. - The second label is "job-name=$NAMEOFJOB". diff --git a/contributors/design-proposals/apps/statefulset-update.md b/contributors/design-proposals/apps/statefulset-update.md index b4089011..06fd291e 100644 --- a/contributors/design-proposals/apps/statefulset-update.md +++ b/contributors/design-proposals/apps/statefulset-update.md @@ -304,7 +304,7 @@ as follows. 
should be consistent with the version indicated by `Status.UpdateRevision`. 1. If the Pod does not meet either of the prior two conditions, and if ordinal is in the sequence `[0, .Spec.UpdateStrategy.Partition.Ordinal)`, - it should be consistent with the version indicated by + it should be consistent with the version indicated by `Status.CurrentRevision`. 1. Otherwise, the Pod should be consistent with the version indicated by `Status.UpdateRevision`. @@ -446,7 +446,7 @@ object if any of the following conditions are true. 1. `.Status.UpdateReplicas` is negative or greater than `.Status.Replicas`. ## Kubectl -Kubectl will use the `rollout` command to control and provide the status of +Kubectl will use the `rollout` command to control and provide the status of StatefulSet updates. - `kubectl rollout status statefulset <StatefulSet-Name>`: displays the status @@ -648,7 +648,7 @@ spec: ### Phased Roll Outs Users can create a canary using `kubectl apply`. The only difference between a [canary](#canaries) and a phased roll out is that the - `.Spec.UpdateStrategy.Partition.Ordinal` is set to a value less than + `.Spec.UpdateStrategy.Partition.Ordinal` is set to a value less than `.Spec.Replicas-1`. ```yaml @@ -810,7 +810,7 @@ intermittent compaction as a form of garbage collection. Applications that use log structured merge trees with size tiered compaction (e.g Cassandra) or append only B(+/*) Trees (e.g Couchbase) can temporarily double their storage requirement during compaction. If there is insufficient space for compaction -to progress, these applications will either fail or degrade until +to progress, these applications will either fail or degrade until additional capacity is added. 
While, if the user is using AWS EBS or GCE PD, there are valid manual workarounds to expand the size of a PD, it would be useful to automate the resize via updates to the StatefulSet's diff --git a/contributors/design-proposals/architecture/architecture.md b/contributors/design-proposals/architecture/architecture.md index 1c3971f9..ff46f81f 100644 --- a/contributors/design-proposals/architecture/architecture.md +++ b/contributors/design-proposals/architecture/architecture.md @@ -65,7 +65,7 @@ The project is committed to the following (aspirational) [design ideals](princip approach is key to the system’s self-healing and autonomic capabilities. * _Advance the state of the art_. While Kubernetes intends to support non-cloud-native applications, it also aspires to advance the cloud-native and DevOps state of the art, such as - in the [participation of applications in their own management](http://blog.kubernetes.io/2016/09/cloud-native-application-interfaces.html). + in the [participation of applications in their own management](https://kubernetes.io/blog/2016/09/cloud-native-application-interfaces/). However, in doing so, we strive not to force applications to lock themselves into Kubernetes APIs, which is, for example, why we prefer configuration over convention in the [downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api). diff --git a/contributors/design-proposals/architecture/declarative-application-management.md b/contributors/design-proposals/architecture/declarative-application-management.md index 14b3677c..a5fbdf24 100644 --- a/contributors/design-proposals/architecture/declarative-application-management.md +++ b/contributors/design-proposals/architecture/declarative-application-management.md @@ -30,7 +30,7 @@ What form should this configuration take in Kubernetes? 
The requirements are as * In particular, it should be straightforward (but not required) to manage declarative intent under **version control**, which is [standard industry best practice](http://martinfowler.com/bliki/InfrastructureAsCode.html) and what Google does internally. Version control facilitates reproducibility, reversibility, and an audit trail. Unlike generated build artifacts, configuration is primarily human-authored, or at least it is desirable for it to be human-readable, and it is typically changed with a human in the loop, as opposed to fully automated processes, such as autoscaling. Version control enables the use of familiar tools and processes for change control, review, and conflict resolution. -* Users need the ability to **customize** off-the-shelf configurations and to instantiate multiple **variants**, without crossing the [line into the ecosystem](https://docs.google.com/presentation/d/1oPZ4rznkBe86O4rPwD2CWgqgMuaSXguIBHIE7Y0TKVc/edit#slide=id.g21b1f16809_5_86) of [configuration domain-specific languages, platform as a service, functions as a service](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not), and so on, though users should be able to [layer such tools/systems on top](http://blog.kubernetes.io/2017/02/caas-the-foundation-for-next-gen-paas.html) of the mechanism, should they choose to do so.
+* Users need the ability to **customize** off-the-shelf configurations and to instantiate multiple **variants**, without crossing the [line into the ecosystem](https://docs.google.com/presentation/d/1oPZ4rznkBe86O4rPwD2CWgqgMuaSXguIBHIE7Y0TKVc/edit#slide=id.g21b1f16809_5_86) of [configuration domain-specific languages, platform as a service, functions as a service](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not), and so on, though users should be able to [layer such tools/systems on top](https://kubernetes.io/blog/2017/02/caas-the-foundation-for-next-gen-paas/) of the mechanism, should they choose to do so. * We need to develop clear **conventions**, **examples**, and mechanisms that foster **structure**, to help users understand how to combine Kubernetes’s flexible mechanisms in an effective manner. @@ -384,7 +384,7 @@ Consider more automation, such as autoscaling, self-configuration, etc. to reduc #### What about providing an intentionally restrictive simplified, tailored developer experience to streamline a specific use case, environment, workflow, etc.? -This is essentially a [DIY PaaS](http://blog.kubernetes.io/2017/02/caas-the-foundation-for-next-gen-paas.html). Write a configuration generator, either client-side or using CRDs ([example](https://github.com/pearsontechnology/environment-operator/blob/dev/User_Guide.md)). The effort involved to document the format, validate it, test it, etc. is similar to building a new API, but I could imagine someone eventually building a SDK to make that easier. +This is essentially a [DIY PaaS](https://kubernetes.io/blog/2017/02/caas-the-foundation-for-next-gen-paas/). Write a configuration generator, either client-side or using CRDs ([example](https://github.com/pearsontechnology/environment-operator/blob/dev/User_Guide.md)). The effort involved to document the format, validate it, test it, etc. 
is similar to building a new API, but I could imagine someone eventually building a SDK to make that easier. #### What about more sophisticated deployment orchestration? diff --git a/contributors/design-proposals/architecture/resource-management.md b/contributors/design-proposals/architecture/resource-management.md index 888bb21e..5b6d66b8 100644 --- a/contributors/design-proposals/architecture/resource-management.md +++ b/contributors/design-proposals/architecture/resource-management.md @@ -87,7 +87,7 @@ status: API groups may be exposed as a unified API surface while being served by distinct [servers](https://kubernetes.io/docs/tasks/access-kubernetes-api/setup-extension-api-server/) using [**aggregation**](https://kubernetes.io/docs/concepts/api-extension/apiserver-aggregation/), which is particularly useful for APIs with special storage needs. However, Kubernetes also supports [**custom resources**](https://kubernetes.io/docs/concepts/api-extension/custom-resources/) (CRDs), which enables users to define new types that fit the standard API conventions without needing to build and run another server. CRDs can be used to make systems declaratively and dynamically configurable in a Kubernetes-compatible manner, without needing another storage system. -Each API server supports a custom [discovery API](https://github.com/kubernetes/client-go/blob/master/discovery/discovery_client.go) to enable clients to discover available API groups, versions, and types, and also [OpenAPI](http://blog.kubernetes.io/2016/12/kubernetes-supports-openapi.html), which can be used to extract documentation and validation information about the resource types. 
+Each API server supports a custom [discovery API](https://github.com/kubernetes/client-go/blob/master/discovery/discovery_client.go) to enable clients to discover available API groups, versions, and types, and also [OpenAPI](https://kubernetes.io/blog/2016/12/kubernetes-supports-openapi/), which can be used to extract documentation and validation information about the resource types. See the [Kubernetes API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md ) for more details. diff --git a/contributors/design-proposals/auth/no-new-privs.md b/contributors/design-proposals/auth/no-new-privs.md index b467c35d..5c96c9d1 100644 --- a/contributors/design-proposals/auth/no-new-privs.md +++ b/contributors/design-proposals/auth/no-new-privs.md @@ -49,7 +49,7 @@ while creating containers, for example `docker run --security-opt=no_new_privs busybox`. Docker provides via their Go api an object named `ContainerCreateConfig` to -configure container creation parameters. In this object, there is a string +configure container creation parameters. In this object, there is a string array `HostConfig.SecurityOpt` to specify the security options. Client can utilize this field to specify the arguments for security options while creating new containers. diff --git a/contributors/design-proposals/auth/security_context.md b/contributors/design-proposals/auth/security_context.md index d7a3e458..360f5046 100644 --- a/contributors/design-proposals/auth/security_context.md +++ b/contributors/design-proposals/auth/security_context.md @@ -42,7 +42,7 @@ containers. In order to support external integration with shared storage, processes running in a Kubernetes cluster should be able to be uniquely identified by their Unix -UID, such that a chain of ownership can be established. Processes in pods will +UID, such that a chain of ownership can be established. 
Processes in pods will need to have consistent UID/GID/SELinux category labels in order to access shared disks. diff --git a/contributors/design-proposals/multi-platform.md b/contributors/design-proposals/multi-platform.md index 279d14cd..32258ab9 100644 --- a/contributors/design-proposals/multi-platform.md +++ b/contributors/design-proposals/multi-platform.md @@ -209,7 +209,7 @@ Go 1.5 introduced many changes. To name a few that are relevant to Kubernetes: - The garbage collector became more efficient (but also [confused our latency test](https://github.com/golang/go/issues/14396)). - `linux/arm64` and `linux/ppc64le` were added as new ports. - The `GO15VENDOREXPERIMENT` was started. We switched from `Godeps/_workspace` to the native `vendor/` in [this PR](https://github.com/kubernetes/kubernetes/pull/24242). - - It's not required to pre-build the whole standard library `std` when cross-compliling. [Details](#prebuilding-the-standard-library-std) + - It's not required to pre-build the whole standard library `std` when cross-compiling. [Details](#prebuilding-the-standard-library-std) - Builds are approximately twice as slow as earlier. That affects the CI. [Details](#releasing) - The native Go DNS resolver will suffice in the most situations. This makes static linking much easier. diff --git a/contributors/design-proposals/multicluster/federated-replicasets.md b/contributors/design-proposals/multicluster/federated-replicasets.md index 59459b1c..f6c5b1cb 100644 --- a/contributors/design-proposals/multicluster/federated-replicasets.md +++ b/contributors/design-proposals/multicluster/federated-replicasets.md @@ -348,7 +348,7 @@ to that LRS along with their current status and status change timestamp. + [I6] If a cluster is removed from the federation then the situation is equal to multiple [E4]. 
It is assumed that if a connection with a cluster is lost completely then the cluster is removed from the - the cluster list (or marked accordingly) so + cluster list (or marked accordingly) so [[E6]](#heading=h.in6ove1c1s8f) and [[E7]](#heading=h.37bnbvwjxeda) don't need to be handled. @@ -383,7 +383,7 @@ To calculate the (re)scheduling moves for a given FRS: 1. For each cluster FRSC calculates the number of replicas that are placed (not necessarily up and running) in the cluster and the number of replicas that failed to be scheduled. Cluster capacity is the difference between the -the placed and failed to be scheduled. +placed and failed to be scheduled. 2. Order all clusters by their weight and hash of the name so that every time we process the same replica-set we process the clusters in the same order. diff --git a/contributors/design-proposals/network/support_traffic_shaping_for_kubelet_cni.md b/contributors/design-proposals/network/support_traffic_shaping_for_kubelet_cni.md index 827df5a8..659bbf53 100644 --- a/contributors/design-proposals/network/support_traffic_shaping_for_kubelet_cni.md +++ b/contributors/design-proposals/network/support_traffic_shaping_for_kubelet_cni.md @@ -81,7 +81,7 @@ Kubelet would then populate the `runtimeConfig` section of the config when calli ### Pod Teardown -When we delete a pod, kubelet will bulid the runtime config for calling cni plugin `DelNetwork/DelNetworkList` API, which will remove this pod's bandwidth configuration. +When we delete a pod, kubelet will build the runtime config for calling cni plugin `DelNetwork/DelNetworkList` API, which will remove this pod's bandwidth configuration.
## Next step diff --git a/contributors/design-proposals/node/accelerator-monitoring.md b/contributors/design-proposals/node/accelerator-monitoring.md index 984ce656..5c247c19 100644 --- a/contributors/design-proposals/node/accelerator-monitoring.md +++ b/contributors/design-proposals/node/accelerator-monitoring.md @@ -53,7 +53,7 @@ type AcceleratorStats struct { // ID of the accelerator. device minor number? Or UUID? ID string `json:"id"` - // Total acclerator memory. + // Total accelerator memory. // unit: bytes MemoryTotal uint64 `json:"memory_total"` @@ -75,7 +75,7 @@ From the summary API, they will flow to heapster and stackdriver. ## Caveats - As mentioned before, this would add a requirement that cAdvisor and kubelet are dynamically linked. -- We would need to make sure that kubelet is able to access the nvml libraries. Some existing container based nvidia driver installers install drivers in a special directory. We would need to make sure that that directory is in kubelet’s `LD_LIBRARY_PATH`. +- We would need to make sure that kubelet is able to access the nvml libraries. Some existing container based nvidia driver installers install drivers in a special directory. We would need to make sure that directory is in kubelet’s `LD_LIBRARY_PATH`. ## Testing Plan - Adding unit tests and e2e tests to cAdvisor for this code. diff --git a/contributors/design-proposals/node/cri-windows.md b/contributors/design-proposals/node/cri-windows.md index 6589d985..0192f6c4 100644 --- a/contributors/design-proposals/node/cri-windows.md +++ b/contributors/design-proposals/node/cri-windows.md @@ -20,7 +20,7 @@ On the Windows platform, processes may be assigned to a job object, which can ha [#547](https://github.com/kubernetes/features/issues/547) ## Motivation -The goal is to start filling the gap of platform support in CRI, specifically for Windows platform. 
For example, currrently in dockershim Windows containers are scheduled using the default resource constraints and does not respect the resource requests and limits specified in POD. With this proposal, Windows containers will be able to leverage POD spec and CRI to allocate compute resource and respect restriction. +The goal is to start filling the gap of platform support in CRI, specifically for the Windows platform. For example, currently in dockershim Windows containers are scheduled using the default resource constraints and do not respect the resource requests and limits specified in the pod. With this proposal, Windows containers will be able to leverage the pod spec and CRI to allocate compute resources and respect those restrictions. ## Proposed design diff --git a/contributors/design-proposals/node/secret-configmap-downwardapi-file-mode.md b/contributors/design-proposals/node/secret-configmap-downwardapi-file-mode.md index 85ee9ccc..cdfe1e1c 100644 --- a/contributors/design-proposals/node/secret-configmap-downwardapi-file-mode.md +++ b/contributors/design-proposals/node/secret-configmap-downwardapi-file-mode.md @@ -169,7 +169,7 @@ Adding it there allows the user to change the mode bits of every file in the object, so it achieves the goal, while having the option to have a default and not specify all files in the object. -The are two downside: +There are two downsides: * The files are symlinks pointing to the real file, and only the real file's permissions are set. The symlink has the classic symlink permissions.

diff --git a/contributors/design-proposals/scheduling/scheduler-equivalence-class.md b/contributors/design-proposals/scheduling/scheduler-equivalence-class.md index fdc2e8d3..808de966 100644 --- a/contributors/design-proposals/scheduling/scheduler-equivalence-class.md +++ b/contributors/design-proposals/scheduling/scheduler-equivalence-class.md @@ -190,7 +190,7 @@ Please note with the change of predicates in subsequent development, this doc wi - **Invalid predicates:** - - `MaxPDVolumeCountPredicate` (only if the added/deleted PVC as a binded volume so it drops to the PV change case, otherwise it should not affect scheduler). + - `MaxPDVolumeCountPredicate` (only if the added/deleted PVC has a bound volume so it drops to the PV change case, otherwise it should not affect scheduler). - **Scope:** - All nodes (we don't know which node this PV will be attached to). @@ -229,14 +229,14 @@ Please note with the change of predicates in subsequent development, this doc wi - **Invalid predicates:** - `GeneralPredicates`. This invalidation should be done during `scheduler.assume(...)` because binding can be asynchronous. So we just optimistically invalidate the cached predicate result there, and if later this pod fails to bind, the following pods will go through the normal predicate functions and nothing breaks. - - No `MatchInterPodAffinity`: the scheduler will make sure newly binded pod will not break the existing inter pod affinity. So we does not need to invalidate MatchInterPodAffinity when pod added. But when a pod is deleted, existing inter pod affinity may become invalid. (e.g. this pod was preferred by some else, or vice versa). + - No `MatchInterPodAffinity`: the scheduler will make sure the newly bound pod will not break the existing inter pod affinity. So we do not need to invalidate MatchInterPodAffinity when a pod is added. But when a pod is deleted, existing inter pod affinity may become invalid (e.g. this pod was preferred by someone else, or vice versa).
- NOTE: assumptions above **will not** stand when we implement features like `RequiredDuringSchedulingRequiredDuringExecution`. - No `NoDiskConflict`: the newly scheduled pod fits with the existing pods on this node, so it will also fit the equivalence class of existing pods. - **Scope:** - - The node which the pod was binded with. + - The node where the pod is bound. @@ -252,7 +252,7 @@ Please note with the change of predicates in subsequent development, this doc wi - `MatchInterPodAffinity` if the pod's labels are updated. - **Scope:** - - The node which the pod was binded with + - The node where the pod is bound. @@ -270,7 +270,7 @@ Please note with the change of predicates in subsequent development, this doc wi - `NoDiskConflict` if the pod has a special volume like `RBD`, `ISCSI`, `GCEPersistentDisk` etc. - **Scope:** - - The node which the pod was binded with. + - The node where the pod is bound. ### 3.5 Node diff --git a/contributors/design-proposals/storage/volume-provisioning.md b/contributors/design-proposals/storage/volume-provisioning.md index c953fdff..316ec4f0 100644 --- a/contributors/design-proposals/storage/volume-provisioning.md +++ b/contributors/design-proposals/storage/volume-provisioning.md @@ -86,7 +86,7 @@ We propose that: ### Controller workflow for provisioning volumes -0. Kubernetes administator can configure name of a default StorageClass. This +0. Kubernetes administrator can configure the name of a default StorageClass. This StorageClass instance is then used when a user requests a dynamically provisioned volume, but does not specify a StorageClass.
In other words, `claim.Spec.Class == ""` diff --git a/contributors/devel/OWNERS b/contributors/devel/OWNERS index 4d6fca73..a6fd6e03 100644 --- a/contributors/devel/OWNERS +++ b/contributors/devel/OWNERS @@ -1,14 +1,16 @@ reviewers: - - grodrigues3 - - Phillels - - idvoretskyi - calebamiles - cblecker - - spiffxp -approvers: - grodrigues3 - - Phillels - idvoretskyi + - Phillels + - spiffxp +approvers: - calebamiles - cblecker + - grodrigues3 + - idvoretskyi + - lavalamp + - Phillels - spiffxp + - thockin diff --git a/contributors/devel/api-conventions.md b/contributors/devel/api-conventions.md index ad9b2c5a..80aeb1e7 100644 --- a/contributors/devel/api-conventions.md +++ b/contributors/devel/api-conventions.md @@ -306,34 +306,57 @@ response reduces the complexity of these clients. ##### Typical status properties **Conditions** represent the latest available observations of an object's -current state. Objects may report multiple conditions, and new types of -conditions may be added in the future. Therefore, conditions are represented -using a list/slice, where all have similar structure. +state. They are an extension mechanism intended to be used when the details of +an observation are not a priori known or would not apply to all instances of a +given Kind. For observations that are well known and apply to all instances, a +regular field is preferred. An example of a Condition that probably should +have been a regular field is Pod's "Ready" condition - it is managed by core +controllers, it is well understood, and it applies to all Pods. + +Objects may report multiple conditions, and new types of conditions may be +added in the future or by 3rd party controllers. Therefore, conditions are +represented using a list/slice, where all have similar structure. 
The `FooCondition` type for some resource type `Foo` may include a subset of the following fields, but must contain at least `type` and `status` fields: ```go - Type FooConditionType `json:"type" description:"type of Foo condition"` - Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"` + Type FooConditionType `json:"type" description:"type of Foo condition"` + Status ConditionStatus `json:"status" description:"status of the condition, one of True, False, Unknown"` + // +optional - LastHeartbeatTime unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"` + Reason *string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"` // +optional - LastTransitionTime unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"` + Message *string `json:"message,omitempty" description:"human-readable message indicating details about last transition"` + // +optional - Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"` + LastHeartbeatTime *unversioned.Time `json:"lastHeartbeatTime,omitempty" description:"last time we got an update on a given condition"` // +optional - Message string `json:"message,omitempty" description:"human-readable message indicating details about last transition"` + LastTransitionTime *unversioned.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"` ``` Additional fields may be added in the future. +Do not use fields that you don't need - simpler is better. + +Use of the `Reason` field is encouraged. + +Use the `LastHeartbeatTime` with great caution - frequent changes to this field +can cause a large fan-out effect for some resources. 
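To make these conventions concrete, here is a hedged sketch of a condition type for a hypothetical `Widget` resource. All names are invented, and real Kubernetes API types use `unversioned.Time`/`metav1.Time` rather than `time.Time`; this only illustrates the minimal `type`/`status` pair, the optional pointer fields, and the "absent means Unknown" lookup rule.

```go
package main

import (
	"fmt"
	"time"
)

// ConditionStatus mirrors the convention's three-valued status.
type ConditionStatus string

const (
	ConditionTrue    ConditionStatus = "True"
	ConditionFalse   ConditionStatus = "False"
	ConditionUnknown ConditionStatus = "Unknown"
)

// WidgetCondition is a hypothetical condition type following the conventions:
// required type/status, optional Reason/Message/LastTransitionTime.
type WidgetCondition struct {
	Type               string          `json:"type"`
	Status             ConditionStatus `json:"status"`
	Reason             *string         `json:"reason,omitempty"`
	Message            *string         `json:"message,omitempty"`
	LastTransitionTime *time.Time      `json:"lastTransitionTime,omitempty"`
}

// getCondition returns the status of the named condition; absence is treated
// the same as Unknown, per the convention.
func getCondition(conds []WidgetCondition, t string) ConditionStatus {
	for _, c := range conds {
		if c.Type == t {
			return c.Status
		}
	}
	return ConditionUnknown
}

func main() {
	reason := "QuotaExceeded"
	// "Degraded" follows the suggested abnormal-true polarity.
	conds := []WidgetCondition{{Type: "Degraded", Status: ConditionTrue, Reason: &reason}}
	fmt.Println(getCondition(conds, "Degraded")) // True
	fmt.Println(getCondition(conds, "Invalid"))  // Unknown (absent)
}
```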
+ Conditions should be added to explicitly convey properties that users and components care about rather than requiring those properties to be inferred from -other observations. +other observations. Once defined, the meaning of a Condition can not be +changed arbitrarily - it becomes part of the API, and has the same backwards- +and forwards-compatibility concerns of any other part of the API. Condition status values may be `True`, `False`, or `Unknown`. The absence of a -condition should be interpreted the same as `Unknown`. +condition should be interpreted the same as `Unknown`. How controllers handle +`Unknown` depends on the Condition in question. + +Condition types should indicate state in the "abnormal-true" polarity. For +example, if the condition indicates when a policy is invalid, the "is valid" +case is probably the norm, so the condition should be called "Invalid". In general, condition values may change back and forth, but some condition transitions may be monotonic, depending on the resource and condition type. diff --git a/contributors/devel/api_changes.md b/contributors/devel/api_changes.md index b4bc8c67..fa53adb2 100644 --- a/contributors/devel/api_changes.md +++ b/contributors/devel/api_changes.md @@ -95,9 +95,11 @@ backward-compatibly. Before talking about how to make API changes, it is worthwhile to clarify what we mean by API compatibility. Kubernetes considers forwards and backwards -compatibility of its APIs a top priority. +compatibility of its APIs a top priority. Compatibility is *hard*, especially +handling issues around rollback-safety. This is something every API change +must consider. 
-An API change is considered forward and backward-compatible if it: +An API change is considered compatible if it: * adds new functionality that is not required for correct behavior (e.g., does not add a new required field) @@ -107,24 +109,35 @@ does not add a new required field) * which fields are required and which are not * mutable fields do not become immutable * valid values do not become invalid + * explicitly invalid values do not become valid Put another way: -1. Any API call (e.g. a structure POSTed to a REST endpoint) that worked before -your change must work the same after your change. -2. Any API call that uses your change must not cause problems (e.g. crash or -degrade behavior) when issued against servers that do not include your change. -3. It must be possible to round-trip your change (convert to different API +1. Any API call (e.g. a structure POSTed to a REST endpoint) that succeeded +before your change must succeed after your change. +2. Any API call that does not use your change must behave the same as it did +before your change. +3. Any API call that uses your change must not cause problems (e.g. crash or +degrade behavior) when issued against API servers that do not include your +change. +4. It must be possible to round-trip your change (convert to different API versions and back) with no loss of information. -4. Existing clients need not be aware of your change in order for them to -continue to function as they did previously, even when your change is utilized. +5. Existing clients need not be aware of your change in order for them to +continue to function as they did previously, even when your change is in use. +6. It must be possible to roll back to a previous version of the API server that +does not include your change and have no impact on API objects which do not use +your change. API objects that use your change will be impacted in case of a +rollback.
-If your change does not meet these criteria, it is not considered strictly -compatible, and may break older clients, or result in newer clients causing -undefined behavior. +If your change does not meet these criteria, it is not considered compatible, +and may break older clients, or result in newer clients causing undefined +behavior. Such changes are generally disallowed, though exceptions have been +made in extreme cases (e.g. security or obvious bugs). -Let's consider some examples. In a hypothetical API (assume we're at version -v6), the `Frobber` struct looks something like this: +Let's consider some examples. + +In a hypothetical API (assume we're at version v6), the `Frobber` struct looks +something like this: ```go // API v6. @@ -134,7 +147,7 @@ type Frobber struct { } ``` -You want to add a new `Width` field. It is generally safe to add new fields +You want to add a new `Width` field. It is generally allowed to add new fields without changing the API version, so you can simply change it to: ```go @@ -146,29 +159,55 @@ type Frobber struct { } ``` -The onus is on you to define a sane default value for `Width` such that rule #1 -above is true - API calls and stored objects that used to work must continue to -work. +The onus is on you to define a sane default value for `Width` such that rules +#1 and #2 above are true - API calls and stored objects that used to work must +continue to work. For your next change you want to allow multiple `Param` values. You can not -simply change `Param string` to `Params []string` (without creating a whole new -API version) - that fails rules #1 and #2. You can instead do something like: +simply remove `Param string` and add `Params []string` (without creating a +whole new API version) - that fails rules #1, #2, #3, and #6. Nor can you +simply add `Params []string` and use it instead - that fails #2 and #6. + +You must instead define a new field and the relationship between that field and +the existing field(s). 
Start by adding the new plural field: ```go -// Still API v6, but kind of clumsy. +// Still API v6. type Frobber struct { Height int `json:"height"` Width int `json:"width"` Param string `json:"param"` // the first param - ExtraParams []string `json:"extraParams"` // additional params + Params []string `json:"params"` // all of the params } ``` -Now you can satisfy the rules: API calls that provide the old style `Param` -will still work, while servers that don't understand `ExtraParams` can ignore -it. This is somewhat unsatisfying as an API, but it is strictly compatible. - -Part of the reason for versioning APIs and for using internal structs that are +This new field must be inclusive of the singular field. In order to satisfy +the compatibility rules you must handle all the cases of version skew, multiple +clients, and rollbacks. This can be handled by defaulting or admission control +logic linking the fields together with context from the API operation to get as +close as possible to the user's intentions. + +Upon any mutating API operation: + * If only the singular field is specified (e.g. an older client), API logic + must populate plural[0] from the singular value, and de-dup the plural + field. + * If only the plural field is specified (e.g. a newer client), API logic must + populate the singular value from plural[0]. + * If both the singular and plural fields are specified, API logic must + validate that the singular value matches plural[0]. + * Any other case is an error and must be rejected. + +For this purpose "is specified" means the following: + * On a create or patch operation: the field is present in the user-provided input + * On an update operation: the field is present and has changed from the + current value + +Older clients that only know the singular field will continue to succeed and +produce the same results as before the change. Newer clients can use your +change without impacting older clients. 
The API server can be rolled back and +only objects that use your change will be impacted. + +Part of the reason for versioning APIs and for using internal types that are distinct from any one version is to handle growth like this. The internal representation can be implemented as: @@ -181,24 +220,26 @@ type Frobber struct { } ``` -The code that converts to/from versioned APIs can decode this into the somewhat -uglier (but compatible!) structures. Eventually, a new API version, let's call -it v7beta1, will be forked and it can use the clean internal structure. +The code that converts to/from versioned APIs can decode this into the +compatible structure. Eventually, a new API version, e.g. v7beta1, +will be forked and it can drop the singular field entirely. -We've seen how to satisfy rules #1 and #2. Rule #3 means that you can not +We've seen how to satisfy rules #1, #2, and #3. Rule #4 means that you can not extend one versioned API without also extending the others. For example, an API call might POST an object in API v7beta1 format, which uses the cleaner `Params` field, but the API server might store that object in trusty old v6 form (since v7beta1 is "beta"). When the user reads the object back in the v7beta1 API it would be unacceptable to have lost all but `Params[0]`. This means that, even though it is ugly, a compatible change must be made to the v6 -API. +API, as above. -However, this is very challenging to do correctly. It often requires multiple +For some changes, this can be challenging to do correctly. It may require multiple representations of the same information in the same API resource, which need to -be kept in sync in the event that either is changed. For example, let's say you -decide to rename a field within the same API version. In this case, you add -units to `height` and `width`. You implement this by adding duplicate fields: +be kept in sync should either be changed. 
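The singular/plural synchronization rules described earlier can be sketched as a single normalization step, using the hypothetical `Frobber` fields. This is illustrative only: real API machinery applies this in defaulting or admission logic, with knowledge of which fields the client actually specified in the request.

```go
package main

import (
	"errors"
	"fmt"
)

// normalizeParams links the legacy singular Param field to the new plural
// Params field on a mutating operation, per the rules above: singular-only
// populates plural[0]; plural-only populates the singular; both must agree;
// anything else is rejected. (Presence detection is simplified here to
// non-empty values.)
func normalizeParams(param string, params []string) (string, []string, error) {
	switch {
	case param != "" && len(params) == 0:
		// Older client: populate plural from singular (de-dup is trivial here).
		return param, []string{param}, nil
	case param == "" && len(params) > 0:
		// Newer client: populate singular from plural[0].
		return params[0], params, nil
	case param != "" && len(params) > 0:
		// Both specified: they must agree.
		if params[0] != param {
			return "", nil, errors.New("param must match params[0]")
		}
		return param, params, nil
	}
	return "", nil, errors.New("neither param nor params specified")
}

func main() {
	// An older client that only knows the singular field.
	p, ps, _ := normalizeParams("foo", nil)
	fmt.Println(p, ps)
}
```

With this in place, older clients writing only `Param` and newer clients writing only `Params` both observe a consistent object, which is what makes the version-skew and rollback cases tolerable.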
+ +For example, let's say you decide to rename a field within the same API +version. In this case, you add units to `height` and `width`. You implement +this by adding new fields: ```go type Frobber struct { @@ -211,17 +252,17 @@ type Frobber struct { You convert all of the fields to pointers in order to distinguish between unset and set to 0, and then set each corresponding field from the other in the -defaulting pass (e.g., `heightInInches` from `height`, and vice versa), which -runs just prior to conversion. That works fine when the user creates a resource -from a hand-written configuration -- clients can write either field and read -either field, but what about creation or update from the output of GET, or -update via PATCH (see -[In-place updates](https://kubernetes.io/docs/user-guide/managing-deployments/#in-place-updates-of-resources))? -In this case, the two fields will conflict, because only one field would be -updated in the case of an old client that was only aware of the old field (e.g., -`height`). - -Say the client creates: +defaulting logic (e.g. `heightInInches` from `height`, and vice versa). That +works fine when the user sends a hand-written configuration -- +clients can write either field and read either field. + +But what about creation or update from the output of a GET, or update via PATCH +(see [In-place updates](https://kubernetes.io/docs/user-guide/managing-deployments/#in-place-updates-of-resources))? +In these cases, the two fields will conflict, because only one field would be +updated in the case of an old client that was only aware of the old field +(e.g. `height`). + +Suppose the client creates: ```json { @@ -252,17 +293,16 @@ then PUTs back: } ``` -The update should not fail, because it would have worked before `heightInInches` -was added. +As per the compatibility rules, the update must not fail, because it would have +worked before the change.
## Backward compatibility gotchas -* A single feature/property cannot be represented using multiple spec fields in the same API version - simultaneously, as the example above shows. Only one field can be populated in any resource at a time, and the client - needs to be able to specify which field they expect to use (typically via API version), - on both mutation and read. Old clients must continue to function properly while only manipulating - the old field. New clients must be able to function properly while only manipulating the new - field. +* A single feature/property cannot be represented using multiple spec fields + simultaneously within an API version. Only one representation can be + populated at a time, and the client needs to be able to specify which field + they expect to use (typically via API version), on both mutation and read. As + above, older clients must continue to function properly. * A new representation, even in a new API version, that is more expressive than an old one breaks backward compatibility, since clients that only understood the @@ -283,7 +323,7 @@ was added. be set, it is acceptable to add a new option to the union if the [appropriate conventions](api-conventions.md#objects) were followed in the original object. Removing an option requires following the [deprecation process](https://kubernetes.io/docs/reference/deprecation-policy/). - + * Changing any validation rules always has the potential of breaking some client, since it changes the assumptions about part of the API, similar to adding new enum values. Validation rules on spec fields can neither be relaxed nor strengthened. Strengthening cannot be permitted because any requests that previously @@ -291,7 +331,7 @@ was added. of the API resource. Status fields whose writers are under our control (e.g., written by non-pluggable controllers), may potentially tighten validation, since that would cause a subset of previously valid values to be observable by clients. 
- + * Do not add a new API version of an existing resource and make it the preferred version in the same release, and do not make it the storage version. The latter is necessary so that a rollback of the apiserver doesn't render resources in etcd undecodable after rollback. @@ -308,16 +348,15 @@ was added. ## Incompatible API changes -There are times when this might be OK, but mostly we want changes that meet this -definition. If you think you need to break compatibility, you should talk to the -Kubernetes team first. +There are times when incompatible changes might be OK, but mostly we want +changes that meet the above definitions. If you think you need to break +compatibility, you should talk to the Kubernetes API reviewers first. Breaking compatibility of a beta or stable API version, such as v1, is unacceptable. Compatibility for experimental or alpha APIs is not strictly required, but breaking compatibility should not be done lightly, as it disrupts -all users of the feature. Experimental APIs may be removed. Alpha and beta API -versions may be deprecated and eventually removed wholesale, as described in the -[versioning document](../design-proposals/release/versioning.md). +all users of the feature. Alpha and beta API versions may be deprecated and +eventually removed wholesale, as described in the [deprecation policy](https://kubernetes.io/docs/reference/deprecation-policy/). If your change is going to be backward incompatible or might be a breaking change for API consumers, please send an announcement to diff --git a/contributors/devel/architectural-roadmap.md b/contributors/devel/architectural-roadmap.md index afe37b1a..04a9002a 100644 --- a/contributors/devel/architectural-roadmap.md +++ b/contributors/devel/architectural-roadmap.md @@ -761,7 +761,7 @@ therefore wouldn’t be considered to be part of Kubernetes. applications, but not for specific applications. 
* Platform as a Service: Kubernetes [provides a - foundation](http://blog.kubernetes.io/2017/02/caas-the-foundation-for-next-gen-paas.html) + foundation](https://kubernetes.io/blog/2017/02/caas-the-foundation-for-next-gen-paas/) for a multitude of focused, opinionated PaaSes, including DIY ones. diff --git a/contributors/devel/automation.md b/contributors/devel/automation.md index 8f661401..ccf2218a 100644 --- a/contributors/devel/automation.md +++ b/contributors/devel/automation.md @@ -49,4 +49,4 @@ during the original test. It would be good to file flakes as an The simplest way is to comment `/retest`. Any pushes of new code to the PR will automatically trigger a new test. No human -interraction is required. Note that if the PR has a `lgtm` label, it will be removed after the pushes. +interaction is required. Note that if the PR has a `lgtm` label, it will be removed after the pushes. diff --git a/contributors/devel/bazel.md b/contributors/devel/bazel.md index de80b4b2..991a0ac2 100644 --- a/contributors/devel/bazel.md +++ b/contributors/devel/bazel.md @@ -2,6 +2,12 @@ Building and testing Kubernetes with Bazel is supported but not yet default. +Bazel is used to run all Kubernetes PRs on [Prow](https://prow.k8s.io), +as remote caching enables significantly reduced build and test times. + +Some repositories (such as kubernetes/test-infra) have switched to using Bazel +exclusively for all build, test, and release workflows. + Go rules are managed by the [`gazelle`](https://github.com/bazelbuild/rules_go/tree/master/go/tools/gazelle) tool, with some additional rules managed by the [`kazel`](https://git.k8s.io/repo-infra/kazel) tool. These tools are called via the `hack/update-bazel.sh` script. @@ -9,13 +15,16 @@ These tools are called via the `hack/update-bazel.sh` script. Instructions for installing Bazel can be found [here](https://www.bazel.io/versions/master/docs/install.html). 
-Several `make` rules have been created for common operations: +Several convenience `make` rules have been created for common operations: -* `make bazel-build`: builds all binaries in tree -* `make bazel-test`: runs all unit tests -* `make bazel-test-integration`: runs all integration tests +* `make bazel-build`: builds all binaries in tree (`bazel build -- //... + -//vendor/...`) +* `make bazel-test`: runs all unit tests (`bazel test --config=unit -- //... + //hack:verify-all -//build/... -//vendor/...`) +* `make bazel-test-integration`: runs all integration tests (`bazel test + --config integration //test/integration/...`) * `make bazel-release`: builds release tarballs, Docker images (for server - components), and Debian images + components), and Debian images (`bazel build //build/release-tars`) You can also interact with Bazel directly; for example, to run all `kubectl` unit tests, run @@ -46,26 +55,6 @@ There are several bazel CI jobs: Similar jobs are run on all PRs; additionally, several of the e2e jobs use Bazel-built binaries when launching and testing Kubernetes clusters. -## Known issues - -[Cross-compilation is not currently supported](https://github.com/bazelbuild/rules_go/issues/70), -so all binaries will be built for the host OS and architecture running Bazel. -(For example, you can't currently target linux/amd64 from macOS or linux/s390x -from an amd64 machine.) - -Additionally, native macOS support is still a work in progress. Using Planter is -a possible workaround in the interim. - -[Bazel does not validate build environment](https://github.com/kubernetes/kubernetes/issues/51623), thus make sure that needed -tools and development packages are installed in the system. Bazel builds require presence of `make`, `gcc`, `g++`, `glibc and libstdc++ development headers` and `glibc static development libraries`. Please check your distribution for exact names of the packages. 
Examples for some commonly used distributions are below: - -| Dependency | Debian/Ubuntu | CentOS | OpenSuSE | -|:---------------------:|-------------------------------|--------------------------------|-----------------------------------------| -| Build essentials | `apt install build-essential` | `yum groupinstall development` | `zypper install -t pattern devel_C_C++` | -| GCC C++ | `apt install g++` | `yum install gcc-c++` | `zypper install gcc-c++` | -| GNU Libc static files | `apt install libc6-dev` | `yum install glibc-static` | `zypper install glibc-devel-static` | - - ## Updating `BUILD` files To update `BUILD` files, run: @@ -77,10 +66,10 @@ $ ./hack/update-bazel.sh To prevent Go rules from being updated, consult the [gazelle documentation](https://github.com/bazelbuild/rules_go/tree/master/go/tools/gazelle). -Note that much like Go files and `gofmt`, BUILD files have standardized, +Note that much like Go files and `gofmt`, `BUILD` files have standardized, opinionated style rules, and running `hack/update-bazel.sh` will format them for you. -If you want to auto-format BUILD files in your editor, using something like +If you want to auto-format `BUILD` files in your editor, use of [Buildifier](https://github.com/bazelbuild/buildtools/blob/master/buildifier/README.md) is recommended. @@ -90,6 +79,106 @@ Updating the `BUILD` file for a package will be required when: * A `BUILD` file has been updated and needs to be reformatted * A new `BUILD` file has been added (parent `BUILD` files will be updated) +## Known issues and limitations + +### [Cross-compilation of cgo is not currently natively supported](https://github.com/bazelbuild/rules_go/issues/1020) +All binaries are currently built for the host OS and architecture running Bazel. +(For example, you can't currently target linux/amd64 from macOS or linux/s390x +from an amd64 machine.) 
+ +The Go rules support cross-compilation of pure Go code using the `--platforms` +flag, and this is being used successfully in the kubernetes/test-infra repo. + +It may already be possible to cross-compile cgo code if a custom CC toolchain is +set up, possibly reusing the kube-cross Docker image, but this area needs +further exploration. + +### The CC toolchain is not fully hermetic +Bazel requires several tools and development packages to be installed in the system, including `gcc`, `g++`, `glibc and libstdc++ development headers` and `glibc static development libraries`. Please check your distribution for exact names of the packages. Examples for some commonly used distributions are below: + +| Dependency | Debian/Ubuntu | CentOS | OpenSuSE | +|:---------------------:|-------------------------------|--------------------------------|-----------------------------------------| +| Build essentials | `apt install build-essential` | `yum groupinstall development` | `zypper install -t pattern devel_C_C++` | +| GCC C++ | `apt install g++` | `yum install gcc-c++` | `zypper install gcc-c++` | +| GNU Libc static files | `apt install libc6-dev` | `yum install glibc-static` | `zypper install glibc-devel-static` | + +If any of these packages change, they may also cause spurious build failures +as described in [this issue](https://github.com/bazelbuild/bazel/issues/4907). + +An example error might look something like +``` +ERROR: undeclared inclusion(s) in rule '//vendor/golang.org/x/text/cases:go_default_library.cgo_c_lib': +this rule is missing dependency declarations for the following files included by 'vendor/golang.org/x/text/cases/linux_amd64_stripped/go_default_library.cgo_codegen~/_cgo_export.c': + '/usr/lib/gcc/x86_64-linux-gnu/7/include/stddef.h' +``` + +The only way to recover from this error is to force Bazel to regenerate its +automatically-generated CC toolchain configuration by running `bazel clean +--expunge`. 
+ +Improving cgo cross-compilation may help with all of this. + +### Changes to Go imports require updating BUILD files +The Go rules in `BUILD` and `BUILD.bazel` files must be updated any time files +are added or removed or Go imports are changed. These rules are automatically +maintained by `gazelle`, which is run via `hack/update-bazel.sh`, but this is +still a source of friction. + +[Autogazelle](https://github.com/bazelbuild/bazel-gazelle/tree/master/cmd/autogazelle) +is a new experimental tool which may reduce or remove the need for developers +to run `hack/update-bazel.sh`, but no work has yet been done to support it in +kubernetes/kubernetes. + +### Code coverage support is incomplete for Go +Bazel and the Go rules have limited support for code coverage. Running something +like `bazel coverage -- //... -//vendor/...` will run tests in coverage mode, +but no report summary is currently generated. It may be possible to combine +`bazel coverage` with +[Gopherage](https://github.com/kubernetes/test-infra/tree/master/gopherage), +however. + +### Kubernetes code generators are not fully supported +The make-based build system in kubernetes/kubernetes runs several code +generators at build time: +* [conversion-gen](https://github.com/kubernetes/code-generator/tree/master/cmd/conversion-gen) +* [deepcopy-gen](https://github.com/kubernetes/code-generator/tree/master/cmd/deepcopy-gen) +* [defaulter-gen](https://github.com/kubernetes/code-generator/tree/master/cmd/defaulter-gen) +* [openapi-gen](https://github.com/kubernetes/kube-openapi/tree/master/cmd/openapi-gen) +* [go-bindata](https://github.com/jteeuwen/go-bindata/tree/master/go-bindata) + +Of these, only `openapi-gen` and `go-bindata` are currently supported when +building Kubernetes with Bazel. + +The `go-bindata` generated code is produced by hand-written genrules.
+ +The other code generators use special build tags of the form `// +k8s:generator-name=arg`; for example, input files to the openapi-gen tool are +specified with `// +k8s:openapi-gen=true`. + +`kazel` is used to find all packages that require OpenAPI generation, and then a +handwritten genrule consumes this list of packages to run `openapi-gen`. + +For `openapi-gen`, a single output file is produced in a single Go package, which +makes this fairly compatible with Bazel. +All other Kubernetes code generators generally produce one output file per input +package, which is less compatible with the Bazel workflow. + +The make-based build system batches up all input packages into one call to the +code generator binary, but this is inefficient for Bazel's incrementality, as a +change in one package may result in unnecessarily recompiling many other +packages. +On the other hand, calling the code generator binary multiple times is less +efficient than calling it once, since many of the generators parse the tree for +Go type information and other metadata. + +One additional challenge is that many of the code generators add Go imports +which `gazelle` (and `autogazelle`) cannot infer, and so they must be +explicitly added as dependencies in the `BUILD` files. + +Kubernetes has even more code generators than this limited list, but the rest +are generally run as `hack/update-*.sh` scripts and checked into the repository, +and so are not immediately needed for Bazel parity. + ## Contacts For help or discussion, join the [#bazel](https://kubernetes.slack.com/messages/bazel) channel on Kubernetes Slack. diff --git a/contributors/devel/container-runtime-interface.md b/contributors/devel/container-runtime-interface.md index a408b60a..1a121c9e 100644 --- a/contributors/devel/container-runtime-interface.md +++ b/contributors/devel/container-runtime-interface.md @@ -51,7 +51,7 @@ The old, pre-CRI Docker integration was removed in 1.7.
## Specifications, design documents and proposals -The Kubernetes 1.5 [blog post on CRI](http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html) +The Kubernetes 1.5 [blog post on CRI](https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/) serves as a general introduction. diff --git a/contributors/devel/e2e-tests.md b/contributors/devel/e2e-tests.md index 861c6001..20698c49 100644 --- a/contributors/devel/e2e-tests.md +++ b/contributors/devel/e2e-tests.md @@ -666,6 +666,9 @@ If a behavior does not currently have coverage and a developer wishes to add a new e2e test, navigate to the ./test/e2e directory and create a new test using the existing suite as a guide. +**NOTE:** To build/run with tests in a new directory within ./test/e2e, add the +directory to the import list in ./test/e2e/e2e_test.go. + TODO(#20357): Create a self-documented example which has been disabled, but can be copied to create new tests and outlines the capabilities and libraries used. @@ -710,7 +713,7 @@ system to 30, 50, and 100 pods per node and measures the different characteristics of the system, such as throughput, api-latency, etc. For a good overview of how we analyze performance data, please read the -following [post](http://blog.kubernetes.io/2015/09/kubernetes-performance-measurements-and.html) +following [post](https://kubernetes.io/blog/2015/09/kubernetes-performance-measurements-and/) For developers who are interested in doing their own performance analysis, we recommend setting up [prometheus](http://prometheus.io/) for data collection, diff --git a/contributors/devel/godep.md b/contributors/devel/godep.md index ec798295..4b10a7d5 100644 --- a/contributors/devel/godep.md +++ b/contributors/devel/godep.md @@ -15,6 +15,19 @@ the tools. This doc will focus on predictability and reproducibility. +## Justifications for an update + +Before you update a dependency, take a moment to consider why it should be +updated.
Valid reasons include: + 1. We need new functionality that is in a later version. + 2. New or improved APIs in the dependency significantly improve Kubernetes code. + 3. Bugs were fixed that impact Kubernetes. + 4. Security issues were fixed even if they don't impact Kubernetes yet. + 5. Performance, scale, or efficiency was meaningfully improved. + 6. We need dependency A and there is a transitive dependency B. + 7. Kubernetes has an older level of a dependency that precludes working +with other projects in the ecosystem. + ## Theory of operation The `go` toolchain assumes a global workspace that hosts all of your Go code. diff --git a/contributors/devel/logging.md b/contributors/devel/logging.md index 12a719de..889518a6 100644 --- a/contributors/devel/logging.md +++ b/contributors/devel/logging.md @@ -23,8 +23,11 @@ The following conventions for the glog levels to use. * Scheduler log messages * glog.V(3) - Extended information about changes - * More info about system state changes - * glog.V(4) - Debug level verbosity (for now) + * glog.V(4) - Debug level verbosity * Logging in particularly thorny parts of code where you may want to come back later and check it + * glog.V(5) - Trace level verbosity + * Context to understand the steps leading up to errors and warnings + * More information for troubleshooting reported issues As per the comments, the practical default level is V(2). Developers and QE environments may wish to run at V(3) or V(4).
If you wish to change the log diff --git a/contributors/devel/on-call-federation-build-cop.md b/contributors/devel/on-call-federation-build-cop.md index 708c854a..c153b02a 100644 --- a/contributors/devel/on-call-federation-build-cop.md +++ b/contributors/devel/on-call-federation-build-cop.md @@ -26,7 +26,7 @@ Search for the above job names in various configuration files as below: * Prow config: https://git.k8s.io/test-infra/prow/config.yaml * Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json -* Test grid config: https://git.k8s.io/test-infra/testgrid/config/config.yaml +* Test grid config: https://git.k8s.io/test-infra/testgrid/config.yaml * Job specific config: https://git.k8s.io/test-infra/jobs/env ### Results @@ -75,7 +75,7 @@ Search for the above job names in various configuration files as below: * Prow config: https://git.k8s.io/test-infra/prow/config.yaml * Test job/bootstrap config: https://git.k8s.io/test-infra/jobs/config.json -* Test grid config: https://git.k8s.io/test-infra/testgrid/config/config.yaml +* Test grid config: https://git.k8s.io/test-infra/testgrid/config.yaml * Job specific config: https://git.k8s.io/test-infra/jobs/env ### Results diff --git a/contributors/devel/release.md b/contributors/devel/release.md index 49df8e4c..4cab95db 100644 --- a/contributors/devel/release.md +++ b/contributors/devel/release.md @@ -141,11 +141,11 @@ project is always stable so that individual commits can be flagged as having broken something. With ongoing feature definition through the year, some set of items -will bubble up as targeting a given release. The **feature freeze** +will bubble up as targeting a given release. The **enhancement freeze** starts ~4 weeks into release cycle. 
By this point all intended feature work for the given release has been defined in suitable -planning artifacts in conjunction with the Release Team's [features -lead](https://github.com/kubernetes/sig-release/tree/master/release-team/role-handbooks/features). +planning artifacts in conjunction with the Release Team's [enhancements +lead](https://git.k8s.io/sig-release/release-team/role-handbooks/enhancements/README.md). Implementation and bugfixing is ongoing across the cycle, but culminates in a code slush and code freeze period: @@ -232,11 +232,11 @@ milestone by creating GitHub issues and marking them with the Prow "/milestone" command. For the first ~4 weeks into the release cycle, the release team's -Features Lead will interact with SIGs and feature owners via GitHub, +Enhancements Lead will interact with SIGs and feature owners via GitHub, Slack, and SIG meetings to capture all required planning artifacts. If you have a feature to target for an upcoming release milestone, begin a -conversation with your SIG leadership and with that release's Features +conversation with your SIG leadership and with that release's Enhancements Lead. ### Issue additions diff --git a/contributors/devel/writing-good-e2e-tests.md b/contributors/devel/writing-good-e2e-tests.md index 2da64959..836479c2 100644 --- a/contributors/devel/writing-good-e2e-tests.md +++ b/contributors/devel/writing-good-e2e-tests.md @@ -61,7 +61,7 @@ making the assumption that your test can run a pod on every node in a cluster is not a safe assumption, as some other tests, running at the same time as yours, might have saturated one or more nodes in the cluster. Similarly, running a pod in the system namespace, and -assuming that that will increase the count of pods in the system +assuming that will increase the count of pods in the system namespace by one is not safe, as some other test might be creating or deleting pods in the system namespace at the same time as your test. 
If you do legitimately need to write a test like that, make sure to diff --git a/contributors/guide/README.md b/contributors/guide/README.md index 65f989d8..51f8f65d 100644 --- a/contributors/guide/README.md +++ b/contributors/guide/README.md @@ -247,7 +247,7 @@ If you're looking to run e2e tests on your own infrastructure, [kubetest](https: ## Issues Management or Triage Have you ever noticed the total number of [open issues](https://issues.k8s.io)? -Helping to manage or triage these open issues can be a great contributionand a great opportunity to learn about the various areas of the project. +Helping to manage or triage these open issues can be a great contribution and a great opportunity to learn about the various areas of the project. Triaging is the word we use to describe the process of adding multiple types of descriptive labels to GitHub issues, in order to speed up routing issues to the right folks. Refer to the [Issue Triage Guidelines](/contributors/guide/issue-triage.md) for more information. # Community diff --git a/contributors/guide/non-code-contributions.md b/contributors/guide/non-code-contributions.md index 8e728618..29bce79c 100644 --- a/contributors/guide/non-code-contributions.md +++ b/contributors/guide/non-code-contributions.md @@ -73,7 +73,7 @@ These are roles that are important to each and every SIG within the Kubernetes p - Editing PR text: release note, statement - Events - Organizing/helping run Face-to-Face meetings for SIGs/WGs/subprojects - - Putting together SIG Intros & Deep-dives for Kubecon + - Putting together SIG Intros & Deep-dives for KubeCon/CloudNativeCon #### Non-Code Tasks in Primarily-Code roles These are roles that are not code-based, but require knowledge of either general coding, or specific domain knowledge of the Kubernetes code base. 
diff --git a/contributors/guide/owners.md b/contributors/guide/owners.md index 74a43362..4d08ba1a 100644 --- a/contributors/guide/owners.md +++ b/contributors/guide/owners.md @@ -241,7 +241,7 @@ pieces of prow are used to implement the code review process above. - [plugin: assign](https://git.k8s.io/test-infra/prow/plugins/assign) - assigns GitHub users in response to `/assign` comments on a PR - unassigns GitHub users in response to `/unassign` comments on a PR -- [plugin: approve](https://git.k8s.io/test-infra/prow/plugins/assign) +- [plugin: approve](https://git.k8s.io/test-infra/prow/plugins/approve) - per-repo configuration: - `issue_required`: defaults to `false`; when `true`, require that the PR description link to an issue, or that at least one **approver** issues a `/approve no-issue` @@ -251,7 +251,7 @@ pieces of prow are used to implement the code review process above. OWNERS files has `/approve`'d - comments as required OWNERS files are satisfied - removes outdated approval status comments -- [plugin: blunderbuss](https://git.k8s.io/test-infra/prow/plugins/assign) +- [plugin: blunderbuss](https://git.k8s.io/test-infra/prow/plugins/blunderbuss) - determines **reviewers** and requests their reviews on PRs - [plugin: lgtm](https://git.k8s.io/test-infra/prow/plugins/lgtm) - adds the `lgtm` label when a **reviewer** comments `/lgtm` on a PR diff --git a/contributors/guide/pull-requests.md b/contributors/guide/pull-requests.md index b94dc2a0..a24310a6 100644 --- a/contributors/guide/pull-requests.md +++ b/contributors/guide/pull-requests.md @@ -78,9 +78,14 @@ Here's the process the pull request goes through on its way from submission to m 1. Reviewer suggests edits 1. Push edits to your pull request branch -1. Repeat the prior two steps as needed until reviewer(s) add `/lgtm` label +1. Repeat the prior two steps as needed until reviewer(s) add `/lgtm` label.
The `/lgtm` label, when applied by someone listed as a `reviewer` in the corresponding project `OWNERS` file, is a signal that the code has passed review from one or more trusted reviewers for that project. 1. (Optional) Some reviewers prefer that you squash commits at this step -1. Follow the bot suggestions to assign an OWNER who will add the `/approve` label to the pull request +1. Follow the bot suggestions to assign an OWNER who will add the `/approve` label to the pull request. The `/approve` label, when applied by someone listed as an `approver` in the corresponding project `OWNERS` file, is a signal that the code has passed final review and is ready to be automatically merged + +The behavior of Prow is configurable across projects. You should be aware of the following configurable behaviors. + +* If you are listed as an `approver` in the `OWNERS` file, an implicit `/approve` can be applied to your pull request. This can result in a merge being triggered by a `/lgtm` label. This is the configured behavior in many projects, including `kubernetes/kubernetes`. You can remove the implicit `/approve` with `/approve cancel` +* `/lgtm` can be configured so that a `/lgtm` from someone listed as both a `reviewer` and an `approver` will cause both labels to be applied. For `kubernetes/kubernetes` and many other projects this is _not_ the default behavior, and `/lgtm` is decoupled from `/approve` Once the tests pass and the reviewer adds the `lgtm` and `approved` labels, the pull request enters the final merge pool. The merge pool is needed to make sure no incompatible changes have been introduced by other pull requests since the tests were last run on your pull request.
<!-- TODO: create parallel instructions for reviewers --> diff --git a/contributors/guide/release-notes.md b/contributors/guide/release-notes.md index e9fcb4df..926f5946 100644 --- a/contributors/guide/release-notes.md +++ b/contributors/guide/release-notes.md @@ -22,14 +22,12 @@ For pull requests that require additional action from users switching to the new action required: your release note here ``` -For pull requests that don't need to be mentioned at release time, just write "NONE" (case insensitive): +For pull requests that don't need to be mentioned at release time, use the `/release-note-none` Prow command to add the `release-note-none` label to the PR. You can also write the string "NONE" as a release note in your PR description: ```release-note NONE ``` -The `/release-note-none` comment command can still be used as an alternative to writing "NONE" in the release-note block if it is left empty. +To see how to format your release notes, view the kubernetes/kubernetes [pull request template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. Pull Request titles and body comments can be modified at any time prior to the release to make them friendly for release notes. -To see how to format your release notes, view the kubernetes/kubernetes [pull request template](https://git.k8s.io/kubernetes/.github/PULL_REQUEST_TEMPLATE.md) for a brief example. pull request titles and body comments can be modified at any time prior to the release to make them friendly for release notes. - -Release notes apply to pull requests on the master branch. For cherry-pick pull requests, see the [cherry-pick instructions](contributors/devel/cherry-picks.md). The only exception to these rules is when a pull request is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` label is required for that non-master pull request.
\ No newline at end of file +Release notes apply to pull requests on the master branch. For cherry-pick pull requests, see the [cherry-pick instructions](contributors/devel/cherry-picks.md). The only exception to these rules is when a pull request is not a cherry-pick and is targeted directly to the non-master branch. In this case, a `release-note-*` label is required for that non-master pull request. diff --git a/events/2017/05-leadership-summit/announcement.md b/events/2017/05-leadership-summit/announcement.md index 99ed097b..6445b70a 100644 --- a/events/2017/05-leadership-summit/announcement.md +++ b/events/2017/05-leadership-summit/announcement.md @@ -1,6 +1,6 @@ This is an announcement for the 2017 Kubernetes Leadership Summit, which will occur on June 2nd, 2017 in San Jose, CA. This event will be similar to the [Kubernetes Developer's Summit](/events/2016/developer-summit-2016/Kubernetes_Dev_Summit.md) in November -2016, but involving a smaller smaller audience comprised solely of leaders and influencers of the community. These leaders and +2016, but involving a smaller audience comprised solely of leaders and influencers of the community. These leaders and influencers include the SIG leads, release managers, and representatives from several companies, including (but not limited to) Google, Red Hat, CoreOS, WeaveWorks, Deis, and Mirantis. diff --git a/events/2017/12-contributor-summit/breaking-up-the-monolith.md b/events/2017/12-contributor-summit/breaking-up-the-monolith.md index baf95727..35323e3f 100644 --- a/events/2017/12-contributor-summit/breaking-up-the-monolith.md +++ b/events/2017/12-contributor-summit/breaking-up-the-monolith.md @@ -61,7 +61,7 @@ Assumption: "big tangled ball of pasta" is hard to contribute to - jdumars: the vault provider thing was one of the better things that happened, it pushed us at MS to think about genericizing the solution, it pushed us to think about what's better for the community vs.
what's better for the provider - jdumars: flipside is we need to have a process where people can come up with a well accepted / adopted solution, the vault provider thing was one way of doing that - lavalamp: I tend to think that most extension points are special snowflakes and you can't have a generic process for adding a new extension point -- thockin: wandering back to kubernetes/kubrnetes "main point", looking at staging as "already broken out", are there other ones that we want to break out? +- thockin: wandering back to kubernetes/kubernetes "main point", looking at staging as "already broken out", are there other ones that we want to break out? - dims: kubeadm could move out if needed, could move it to staging for sure - thockin: so what about the rest? eg: kubelet, kube-proxy... do we think that people will concretely get benefits from that? or will that cause more pain - thockin: we recognize this will slow down things diff --git a/events/2017/12-contributor-summit/feature-roadmap-2018.md b/events/2017/12-contributor-summit/feature-roadmap-2018.md index d2ae01ac..7c955b54 100644 --- a/events/2017/12-contributor-summit/feature-roadmap-2018.md +++ b/events/2017/12-contributor-summit/feature-roadmap-2018.md @@ -1,4 +1,4 @@ -Contributor summit - Kubecon 2017 +Contributor summit - KubeCon/CloudNativeCon 2017 **@AUTHORS - CONNOR DOYLE** diff --git a/events/2018/05-contributor-summit/README.md b/events/2018/05-contributor-summit/README.md index a3eb54b9..df5c2a22 100644 --- a/events/2018/05-contributor-summit/README.md +++ b/events/2018/05-contributor-summit/README.md @@ -17,7 +17,7 @@ In some sense, the summit is a real-life extension of the community meetings and ## When and Where -- Tuesday, May 1, 2018 (before Kubecon EU) +- Tuesday, May 1, 2018 (before KubeCon/CloudNativeCon EU) - Bella Center, Copenhagen, Denmark - Registration and breakfast start at 8am in Room C1-M0 - Happy hour reception onsite to close at 5:30pm @@ -58,7 +58,7 @@ There is a [Slack
channel](https://kubernetes.slack.com/messages/contributor-sum | 7:00 | EmpowerHER event (offsite) | - SIG Updates (~5 minutes per SIG) - - 2 slides per SIG, focused on cross-SIG issues, not internal SIG discussions (those are for Kubecon) + - 2 slides per SIG, focused on cross-SIG issues, not internal SIG discussions (those are for KubeCon/CloudNativeCon) - Identify potential issues that might affect multiple SIGs across the project - One-to-many announcements about changes a SIG expects that might affect others - Track Leads @@ -68,6 +68,6 @@ There is a [Slack channel](https://kubernetes.slack.com/messages/contributor-sum ## Misc: -A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accommodate you. +A photographer and videographer will be onsite collecting b-roll and other shots for KubeCon/CloudNativeCon. If you would rather not be involved, please reach out to an organizer on the day of so we may accommodate you. Further details to be updated on this doc. Please check back for a complete guide. diff --git a/events/2018/05-contributor-summit/new-contributor-notes.md b/events/2018/05-contributor-summit/new-contributor-notes.md index 1858fd85..8ce9aa68 100644 --- a/events/2018/05-contributor-summit/new-contributor-notes.md +++ b/events/2018/05-contributor-summit/new-contributor-notes.md @@ -1,4 +1,4 @@ -# Kubernetes New Contributor Workshop - KubeCon EU 2018 - Notes +# Kubernetes New Contributor Workshop - KubeCon/CloudNativeCon EU 2018 - Notes Joining in the beginning was onboarding on a yacht Now is more onboarding a BIG cruise ship. @@ -110,7 +110,7 @@ Everything will be refactored (cleaning, move, merged,...) 
### Project -- [kubernetes/Community](https://github.com/kubernetes/Community): Kubecon, proposition, Code of conduct and Contribution guideline, SIG-list +- [kubernetes/Community](https://github.com/kubernetes/Community): KubeCon/CloudNativeCon, proposition, Code of conduct and Contribution guideline, SIG-list - [kubernetes/Features](https://github.com/kubernetes/Features): Features proposal for future release - [kubernetes/Steering](https://github.com/kubernetes/Steering) - [kubernetes/Test-Infra](https://github.com/kubernetes/Test-Infra): All related to test except Perf diff --git a/events/2018/12-contributor-summit/README.md b/events/2018/12-contributor-summit/README.md index 263c5918..9903a206 100644 --- a/events/2018/12-contributor-summit/README.md +++ b/events/2018/12-contributor-summit/README.md @@ -11,7 +11,9 @@ In some sense, the summit is a real-life extension of the community meetings and ## Registration -- [Form to pick tracks and RSVP for the Sunday evening event](https://goo.gl/X8YrRv) +The event is now full and is accepting a wait list on the below form. If you are a SIG/WG Chair, Tech Lead, or Subproject Owner, please reach out to community@kubernetes.io after filling out the wait list form. + +- [RSVP/Wait List Form](https://goo.gl/X8YrRv) - If you are planning on attending the New Contributor Track, [Sign the CLA](/CLA.md) if you have not done so already. This is not your KubeCon/CloudNativeCon ticket. You will need to register for the conference separately. @@ -26,6 +28,14 @@ This is not your KubeCon/CloudNativeCon ticket. You will need to register for th - 6th Floor, Washington State Convention Center, Seattle, WA (Signage will be present) +### Badge pick up +If you are not attending KubeCon/CnC but attending this event, please reach out to community@kubernetes.io for a separate process. + +You will need your KubeCon/CnC badge to get into Sunday and Monday events. 
Badge locations: +- participating hotels (TBA) +- atrium on the 4th floor of the Washington Convention Center +- Garage on Sunday night (convenient!) + ## Agenda Day 1 - [Garage](https://www.garagebilliards.com/) @@ -61,7 +71,7 @@ Day 2 - Washington Convention Center | Time | Main Track | New Contributor Summit | Docs Sprint | Track #1 | Track #2 | Track #3 | Track #4 | Contributor Lounge | | --- | :---: | :---: | :---: | :---: | :---: |:---: | :---: | :---: | | **Room** | 608/609 | 602/603/604 | 613 | 606 | 607 | 605 | 611 | 610 | -| 1:00pm | | Pull Request Practice | Docs Sprint | State of Developer Experience | KEP BoF | Networking BoF | *Unconference Slot | Open Space | +| 1:00pm | | Pull Request Practice | Docs Sprint | Automation and CI | KEP BoF | Networking BoF | *Unconference Slot | Open Space | | 1:50pm | 10 Minute Break | - | - | - | - | - | - | | | | 2:00pm | API Codebase Tour - @sttts | Testgrid tour, docs, membership | | Cluster lifecycle BoF | Release Management | *Unconference Slot | *Unconference Slot | | | | 2:50pm | 10 Minute Break | - | - | - | - | - | - | | | diff --git a/events/elections/2017/README.md b/events/elections/2017/README.md index 0d17f68c..e9436f49 100644 --- a/events/elections/2017/README.md +++ b/events/elections/2017/README.md @@ -37,7 +37,7 @@ If you believe you are a Member of Standing, please fill out [this form](https:/ ## DECISION The newly elected body will be announced in the weekly Kubernetes Community Meeting on October 5, 2017 at 10:00am US Pacific Time. [Please join us](https://groups.google.com/forum/#!forum/kubernetes-community-video-chat). -Following the meeting, the raw voting results and winners will be published on the [Kubernetes Blog](http://blog.kubernetes.io/).
For more information, definitions, and/or detailed election process, see full [steering committee charter](https://github.com/kubernetes/steering/blob/master/charter.md). diff --git a/events/elections/2018/README.md b/events/elections/2018/README.md index 84e468b6..fee5b758 100644 --- a/events/elections/2018/README.md +++ b/events/elections/2018/README.md @@ -151,5 +151,5 @@ Name | Organization/Company | GitHub [2017 candidate bios]: https://github.com/kubernetes/community/tree/master/events/elections/2017 [election officers]: https://github.com/kubernetes/community/tree/master/events/elections#election-officers [Kubernetes Community Meeting]: https://github.com/kubernetes/community/blob/master/events/community-meeting.md -[Kubernetes Blog]: http://blog.kubernetes.io/ +[Kubernetes Blog]: https://kubernetes.io/blog/ [eligible voters]: https://github.com/kubernetes/community/blob/master/events/elections/2018/voters.md diff --git a/generator/app.go b/generator/app.go index a75e2b8c..8fb97125 100644 --- a/generator/app.go +++ b/generator/app.go @@ -178,6 +178,7 @@ func getExistingContent(path string, fileFormat string) (string, error) { var funcMap = template.FuncMap{ "tzUrlEncode": tzUrlEncode, + "trimSpace": strings.TrimSpace, } // tzUrlEncode returns a url encoded string without the + shortcut. 
This is diff --git a/generator/sig_readme.tmpl b/generator/sig_readme.tmpl index 81646625..0d4d347f 100644 --- a/generator/sig_readme.tmpl +++ b/generator/sig_readme.tmpl @@ -62,7 +62,7 @@ The following subprojects are owned by sig-{{.Label}}: {{- range .Subprojects }} - **{{.Name}}** {{- if .Description }} - - Description: {{ .Description }} + - Description: {{ trimSpace .Description }} {{- end }} - Owners: {{- range .Owners }} diff --git a/github-management/README.md b/github-management/README.md index 043bb5da..76d206f0 100644 --- a/github-management/README.md +++ b/github-management/README.md @@ -44,6 +44,19 @@ require confirmation by the Steering Committee before taking effect. Time zones and country of origin should be considered when selecting membership, to ensure sufficient after North American business hours and holiday coverage. +### Other roles + +#### New Membership Coordinator + +New Membership Coordinators help serve as a friendly face to newer, prospective +community members, guiding them through the +[process](new-membership-procedure.md) to request membership to a Kubernetes +GitHub organization. 
+ +Our current coordinators are: +* Bob Killen (**[@mrbobbytables](https://github.com/mrbobbytables)**, US Eastern) +* Stephen Augustus (**[@justaugustus](https://github.com/justaugustus)**, US Eastern) + ## Project Owned Organizations The following organizations are currently known to be part of the Kubernetes diff --git a/hack/.spelling_failures b/hack/.spelling_failures index 7bc1a753..bf3b4eb3 100644 --- a/hack/.spelling_failures +++ b/hack/.spelling_failures @@ -1,2 +1,4 @@ events/elections/2017/ vendor/ +sig-contributor-experience/contribex-survey-2018.csv + diff --git a/keps/OWNERS b/keps/OWNERS index e3141c35..381efbc6 100644 --- a/keps/OWNERS +++ b/keps/OWNERS @@ -1,17 +1,14 @@ reviewers: - sig-architecture-leads - - jbeda - - bgrant0607 - - jdumars - calebamiles - idvoretskyi + - jbeda + - justaugustus approvers: - sig-architecture-leads - - jbeda - - bgrant0607 - - jdumars - calebamiles - idvoretskyi + - jbeda labels: - kind/kep - sig/architecture diff --git a/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md b/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md new file mode 100644 index 00000000..73b3344a --- /dev/null +++ b/keps/sig-auth/0000-20170814-bounding-self-labeling-kubelets.md @@ -0,0 +1,141 @@ +--- +kep-number: 0 +title: Bounding Self-Labeling Kubelets +authors: + - "@mikedanese" + - "@liggitt" +owning-sig: sig-auth +participating-sigs: + - sig-node + - sig-storage +reviewers: + - "@saad-ali" + - "@tallclair" +approvers: + - "@thockin" + - "@smarterclayton" +creation-date: 2017-08-14 +last-updated: 2018-10-31 +status: implementable +--- + +# Bounding Self-Labeling Kubelets + +## Motivation + +Today the node client has total authority over its own Node labels. +This ability is incredibly useful for the node auto-registration flow. +The kubelet reports a set of well-known labels, as well as additional +labels specified on the command line with `--node-labels`. 
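As an illustration of this current behavior (flag values hypothetical), a kubelet might register itself with arbitrary labels:

```shell
# Hypothetical kubelet invocation: on registration the node self-reports
# these labels, and nothing today restricts which labels it may claim.
kubelet --register-node=true \
  --node-labels=foo/dedicated=customer-info-app,example.com/gpu=true
```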
+ +While this distributed method of registration is convenient and expedient, it +has two problems that a centralized approach would not have. Minorly, it makes +management difficult. Instead of configuring labels in a centralized +place, we must configure `N` kubelet command lines. More significantly, the +approach greatly compromises security. Below are two straightforward escalations +on an initially compromised node that exhibit the attack vector. + +### Capturing Dedicated Workloads + +Suppose company `foo` needs to run an application that deals with PII on +dedicated nodes to comply with government regulation. A common mechanism for +implementing dedicated nodes in Kubernetes today is to set a label or taint +(e.g. `foo/dedicated=customer-info-app`) on the node and to select these +dedicated nodes in the workload controller running `customer-info-app`. + +Since nodes self-report labels upon registration, an intruder can easily +register a compromised node with label `foo/dedicated=customer-info-app`. The +scheduler will then bind `customer-info-app` to the compromised node, potentially +giving the intruder easy access to the PII. + +This attack also extends to secrets. Suppose company `foo` runs their outward +facing nginx on dedicated nodes to reduce exposure to the company's publicly +trusted server certificates. They use the secret mechanism to distribute the +serving certificate key. An intruder captures the dedicated nginx workload in +the same way and can now use the node certificate to read the company's serving +certificate key. + +## Proposal + +1.
Modify the `NodeRestriction` admission plugin to prevent Kubelets from self-setting labels +within the `k8s.io` and `kubernetes.io` namespaces *except for these specifically allowed labels/prefixes*: + + ``` + kubernetes.io/hostname + kubernetes.io/instance-type + kubernetes.io/os + kubernetes.io/arch + + beta.kubernetes.io/instance-type + beta.kubernetes.io/os + beta.kubernetes.io/arch + + failure-domain.beta.kubernetes.io/zone + failure-domain.beta.kubernetes.io/region + + failure-domain.kubernetes.io/zone + failure-domain.kubernetes.io/region + + [*.]kubelet.kubernetes.io/* + [*.]node.kubernetes.io/* + ``` + +2. Reserve and document the `node-restriction.kubernetes.io/*` label prefix for cluster administrators +that want to label their `Node` objects centrally for isolation purposes. + + > The `node-restriction.kubernetes.io/*` label prefix is reserved for cluster administrators + > to isolate nodes. These labels cannot be self-set by kubelets when the `NodeRestriction` + > admission plugin is enabled. + +This accomplishes the following goals: + +- continues allowing people to use arbitrary labels under their own namespaces any way they wish +- supports legacy labels kubelets are already adding +- provides a place under the `kubernetes.io` label namespace for node isolation labeling +- provide a place under the `kubernetes.io` label namespace for kubelets to self-label with kubelet and node-specific labels + +## Implementation Timeline + +v1.13: + +* Kubelet deprecates setting `kubernetes.io` or `k8s.io` labels via `--node-labels`, +other than the specifically allowed labels/prefixes described above, +and warns when invoked with `kubernetes.io` or `k8s.io` labels outside that set. 
+* NodeRestriction admission prevents kubelets from adding/removing/modifying `[*.]node-restriction.kubernetes.io/*` labels on Node *create* and *update* +* NodeRestriction admission prevents kubelets from adding/removing/modifying `kubernetes.io` or `k8s.io` +labels other than the specifically allowed labels/prefixes described above on Node *update* only + +v1.15: + +* Kubelet removes the ability to set `kubernetes.io` or `k8s.io` labels via `--node-labels` +other than the specifically allowed labels/prefixes described above (deprecation period +of 6 months for CLI elements of admin-facing components is complete) + +v1.17: + +* NodeRestriction admission prevents kubelets from adding/removing/modifying `kubernetes.io` or `k8s.io` +labels other than the specifically allowed labels/prefixes described above on Node *update* and *create* +(oldest supported kubelet running against a v1.17 apiserver is v1.15) + +## Alternatives Considered + +### File or flag-based configuration of the apiserver to allow specifying allowed labels + +* A fixed set of labels and label prefixes is simpler to reason about, and makes every cluster behave consistently +* File-based config isn't easily inspectable to be able to verify enforced labels +* File-based config isn't easily kept in sync in HA apiserver setups + +### API-based configuration of the apiserver to allow specifying allowed labels + +* A fixed set of labels and label prefixes is simpler to reason about, and makes every cluster behave consistently +* An API object that controls the allowed labels is a potential escalation path for a compromised node + +### Allow kubelets to add any labels they wish, and add NoSchedule taints if disallowed labels are added + +* To be robust, this approach would also likely involve a controller to automatically inspect labels and remove the NoSchedule taint. This seemed overly complex. 
Additionally, it was difficult to come up with a tainting scheme that preserved information about which labels were the cause. + +### Forbid all labels regardless of namespace except for a specifically allowed set + +* This was much more disruptive to existing usage of `--node-labels`. +* This was much more difficult to integrate with other systems allowing arbitrary topology labels like CSI. +* This placed restrictions on how labels outside the `kubernetes.io` and `k8s.io` label namespaces could be used, which didn't seem proper. diff --git a/keps/sig-cli/0024-kubectl-plugins.md b/keps/sig-cli/0024-kubectl-plugins.md index b9c158b4..a79fcc4e 100644 --- a/keps/sig-cli/0024-kubectl-plugins.md +++ b/keps/sig-cli/0024-kubectl-plugins.md @@ -107,12 +107,12 @@ See https://github.com/kubernetes/kubernetes/issues/53640 and https://github.com * Relay all information given to `kubectl` (via command line args) to plugins as-is. Plugins receive all arguments and flags provided by users and are responsible for adjusting their behavior accordingly. -* Provide a way to limit which command paths can and cannot be overriddden by plugins in the command tree. +* Provide a way to limit which command paths can and cannot be overridden by plugins in the command tree. ### Non-Goals * The new plugin mechanism will not be a "plugin installer" or wizard. It will not have specific or baked-in knowledge - regarding a plugin's location or composition, nor will it it provide a way to download or unpack plugins in a correct + regarding a plugin's location or composition, nor will it provide a way to download or unpack plugins in a correct location. * Plugin discovery is not a main focus of this mechanism. As such, it will not attempt to collect data about every plugin that exists in an environment. @@ -138,7 +138,7 @@ or case-handling in `kubectl`. In essence, a plugin binary must be able to run as a standalone process, completely independent of `kubectl`. 
-* When `kubectl` is executed with a subcommand _foo_ that does not exist exist in the command tree, it will attempt to look +* When `kubectl` is executed with a subcommand _foo_ that does not exist in the command tree, it will attempt to look for a filename `kubectl-foo` (`kubectl-foo.exe` on Windows) in the user's `PATH` and execute it, relaying all arguments given as well as all environment variables to the plugin child-process. diff --git a/keps/sig-cloud-provider/0002-cloud-controller-manager.md b/keps/sig-cloud-provider/0002-cloud-controller-manager.md index 8545d962..cb5a4073 100644 --- a/keps/sig-cloud-provider/0002-cloud-controller-manager.md +++ b/keps/sig-cloud-provider/0002-cloud-controller-manager.md @@ -197,7 +197,7 @@ Among these controller loops, the following are cloud provider dependent. The nodeIpamController uses the cloudprovider to handle cloud specific CIDR assignment of a node. Currently the only cloud provider using this functionality is GCE. So the current plan is to break this functionality out of the common -verion of the nodeIpamController. Most cloud providers can just run the default version of this controller. However any +version of the nodeIpamController. Most cloud providers can just run the default version of this controller. However any cloud provider which needs cloud specific version of this functionality and disable the default version running in the KCM and run their own version in the CCM. diff --git a/keps/sig-cloud-provider/0013-build-deploy-ccm.md b/keps/sig-cloud-provider/0013-build-deploy-ccm.md index ff0dd415..e0775180 100644 --- a/keps/sig-cloud-provider/0013-build-deploy-ccm.md +++ b/keps/sig-cloud-provider/0013-build-deploy-ccm.md @@ -198,7 +198,7 @@ manager framework its own K8s/K8s Staging repo. It should be generally possible for cloud providers to determine where a controller runs and even over-ride specific controller functionality. 
Please note that if a cloud provider exercises this possibility it is up to that cloud provider to keep their custom controller conformant to the K8s/K8s standard. This means any controllers may be run in either KCM -or CCM. As an example the NodeIpamController, will be shared acrosss K8s/K8s and K8s/cloud-provider-gce, both in the +or CCM. As an example the NodeIpamController, will be shared across K8s/K8s and K8s/cloud-provider-gce, both in the short and long term. Currently it needs to take a cloud provider to allow it to do GCE CIDR management. We could handle this by leaving the cloud provider interface with the controller manager framework code. The GCE controller manager could then inject the cloud provider for that controller. For everyone else (especially the KCM) NodeIpamController is @@ -256,7 +256,7 @@ With the additions needed in the short term to make this work; the Staging area - Sample-Controller When we complete the cloud provider work, several of the new modules in staging should be moving to their permanent new -home in the appropriate K8s/Cloud-provider repoas they will no longer be needed in the K8s/K8s repo. There are however +home in the appropriate K8s/Cloud-provider repos they will no longer be needed in the K8s/K8s repo. There are however other new modules we will add which continue to be needed by both K8s/K8s and K8s/Cloud-provider. Those modules will remain in Staging until the Staging initiative completes and they are moved into some other Kubernetes shared code repo. - Api diff --git a/keps/sig-cluster-lifecycle/0015-kubeadm-join-control-plane.md b/keps/sig-cluster-lifecycle/0015-kubeadm-join-control-plane.md index b69c5783..78c2546f 100644 --- a/keps/sig-cluster-lifecycle/0015-kubeadm-join-control-plane.md +++ b/keps/sig-cluster-lifecycle/0015-kubeadm-join-control-plane.md @@ -143,7 +143,7 @@ capabilities like e.g. kubeadm upgrade for HA clusters. 
- This proposal doesn't provide an automated solution for transferring the CA key and other required certs from one control-plane instance to the other. More specifically, this proposal doesn't address - the ongoing discussion about storage of kubeadm TLS assets in secrets and it it is not planned + the ongoing discussion about storage of kubeadm TLS assets in secrets and it is not planned to provide support for clusters with TLS stored in secrets (but nothing in this proposal should explicitly prevent to reconsider this in future). @@ -441,4 +441,4 @@ workflow we can provide better support for: instance instead of creating a new configMap from scratch). - Checking that the cluster/the kubeadm-config is properly configured for many control plane instances - Blocking users trying to create secondary control plane instances on clusters with configurations - we don't want to support as a SIG (e.g. HA with self-hosted control plane)
\ No newline at end of file + we don't want to support as a SIG (e.g. HA with self-hosted control plane) diff --git a/keps/sig-contributor-experience/0007-20180403-community-forum.md b/keps/sig-contributor-experience/0007-20180403-community-forum.md index 51aff94f..04ceb38b 100644 --- a/keps/sig-contributor-experience/0007-20180403-community-forum.md +++ b/keps/sig-contributor-experience/0007-20180403-community-forum.md @@ -102,7 +102,7 @@ The site would be forum.k8s.io, and would be linked to from the homepage and maj - Post announcements about related kubernetes projects - Give the ecosystem of tools around k8s a place to go and build communities around all the tools people are building. - "Jill's neat K8s project on github" is too small to have it's own official k8s presence, but it could be a post on a forum. -- Events section for meetups and Kubecon +- Events section for meetups and KubeCon/CloudNativeCon - Sub boards for meetup groups - Sub boards for non-english speaking community members - Developer section can include: @@ -143,14 +143,14 @@ The site would be forum.k8s.io, and would be linked to from the homepage and maj ### Risks and Mitigations - One more thing to check everyday(tm) - - User fatigue with mailing lists, discourse, slack, stackoverflow, youtube channel, kubecon, your local meetup, etc. + - User fatigue with mailing lists, discourse, slack, stackoverflow, youtube channel, KubeCon/CloudNativeCon, your local meetup, etc. - This is why I am proposing we investigate if we can replace the lists as well, two birds with one stone. - Lack of developer participation - The mailing lists work, how suitable is Discourse to replace a mailing list these days? CNCF has tried Discourse in the past. 
See [@cra's post](https://twitter.com/cra/status/981548716405547008) - [Discussion on the pros and cons of each](https://meta.discourse.org/t/discourse-vs-email-mailing-lists/54298) - We have enough churn and new Working Groups that we could pilot a few, opt-in for SIGs that want to try it? - A community forum is asynchronous, whereas chat is realtime. - - This doesn't solve our Slack lock-in concerns, but can be a good first step in being more active in running our own community properties so that we can build out own own resources. + - This doesn't solve our Slack lock-in concerns, but can be a good first step in being more active in running our own community properties so that we can build out own resources. - Ghost have [totally migrated to Discourse](https://twitter.com/johnonolan/status/980872508395188224?s=12) and shut down their Slack. - We should keep an eye on this and see what data we can gleam from this. Engage with Ghost community folks to see what lessons they've learned. - Not sure if getting rid of realtime chat entirely is a good idea either. @@ -176,7 +176,7 @@ After a _three month_ prototyping period SIG Contributor Experience will: - Determine if this is a better solution than what we have, and figure out where this would fit in the ecosystem - There is a strong desire that this would replace an existing support venue, SIG Contributor Experience will weigh the options. -- If this solution is not better than what we have, and we don't want to support yet another tool we we would shut the project down. +- If this solution is not better than what we have, and we don't want to support yet another tool we would shut the project down. - If we don't have enough information to draw a conclusion, we may decide to extend the evaluation period. - Site should have a moderation and administrative policies written down. 
diff --git a/keps/sig-network/0011-ipvs-proxier.md b/keps/sig-network/0011-ipvs-proxier.md index cb9bb6db..7e6e2328 100644 --- a/keps/sig-network/0011-ipvs-proxier.md +++ b/keps/sig-network/0011-ipvs-proxier.md @@ -149,7 +149,7 @@ There are 3 proxy modes in ipvs - NAT (masq), IPIP and DR. Only NAT mode support ```shell # ipvsadm -ln IP Virtual Server version 1.2.1 (size=4096) -Prot LocalAddress:Port Scheduler Flags +Port LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 10.102.128.4:3080 rr -> 10.244.0.235:8080 Masq 1 0 0 @@ -177,7 +177,7 @@ And, IPVS proxier will maintain 5 kubernetes-specific chains in nat table **1. kube-proxy start with --masquerade-all=true** If kube-proxy starts with `--masquerade-all=true`, the IPVS proxier will masquerade all traffic accessing service ClusterIP, which behaves same as what iptables proxier does. -Suppose there is a serivice with Cluster IP `10.244.5.1` and port `8080`: +Suppose there is a service with Cluster IP `10.244.5.1` and port `8080`: ```shell # iptables -t nat -nL diff --git a/keps/sig-network/0015-20180614-SCTP-support.md b/keps/sig-network/0015-20180614-SCTP-support.md index 96a89c26..4c16aaf4 100644 --- a/keps/sig-network/0015-20180614-SCTP-support.md +++ b/keps/sig-network/0015-20180614-SCTP-support.md @@ -45,7 +45,7 @@ superseded-by: The goal of the SCTP support feature is to enable the usage of the SCTP protocol in Kubernetes [Service][], [NetworkPolicy][], and [ContainerPort][]as an additional protocol value option beside the current TCP and UDP options. SCTP is an IETF protocol specified in [RFC4960][], and it is used widely in telecommunications network stacks. -Once SCTP support is added as a new protocol option those applications that require SCTP as L4 protocol on their interfaces can be deployed on Kubernetes clusters on a more straightforward way. 
For example they can use the native kube-dns based service discvery, and their communication can be controlled on the native NetworkPolicy way. +Once SCTP support is added as a new protocol option those applications that require SCTP as L4 protocol on their interfaces can be deployed on Kubernetes clusters on a more straightforward way. For example they can use the native kube-dns based service discovery, and their communication can be controlled on the native NetworkPolicy way. [Service]: https://kubernetes.io/docs/concepts/services-networking/service/ [NetworkPolicy]: @@ -68,7 +68,7 @@ It is also a goal to enable ingress SCTP connections from clients outside the Ku It is not a goal here to add SCTP support to load balancers that are provided by cloud providers. The Kubernetes side implementation will not restrict the usage of SCTP as the protocol for the Services with type=LoadBalancer, but we do not implement the support of SCTP into the cloud specific load balancer implementations. -It is not a goal to support multi-homed SCTP associations. Such a support also depends on the ability to manage multiple IP addresses for a pod, and in the case of Services with ClusterIP or NodePort the support of multi-homed assocations would also require the support of NAT for multihomed associations in the SCTP related NF conntrack modules. +It is not a goal to support multi-homed SCTP associations. Such a support also depends on the ability to manage multiple IP addresses for a pod, and in the case of Services with ClusterIP or NodePort the support of multi-homed associations would also require the support of NAT for multihomed associations in the SCTP related NF conntrack modules. ## Proposal @@ -148,7 +148,7 @@ spec: #### SCTP port accessible from outside the cluster -As a user of Kubernetes I want to have the option that clien applications that reside outside of the cluster can access my SCTP based services that run in the cluster. 
+As a user of Kubernetes I want to have the option that client applications that reside outside of the cluster can access my SCTP based services that run in the cluster. Example: ``` diff --git a/keps/sig-network/0030-nodelocal-dns-cache.md b/keps/sig-network/0030-nodelocal-dns-cache.md new file mode 100644 index 00000000..694a11e9 --- /dev/null +++ b/keps/sig-network/0030-nodelocal-dns-cache.md @@ -0,0 +1,215 @@ +--- +kep-number: 30 +title: NodeLocal DNS Cache +authors: + - "@prameshj" +owning-sig: sig-network +participating-sigs: + - sig-network +reviewers: + - "@thockin" + - "@bowei" + - "@johnbelamaric" + - "@sdodson" +approvers: + - "@thockin" + - "@bowei" +editor: TBD +creation-date: 2018-10-05 +last-updated: 2018-10-30 +status: provisional +--- + +# NodeLocal DNS Cache + +## Table of Contents + +* [Table of Contents](#table-of-contents) +* [Summary](#summary) +* [Motivation](#motivation) + * [Goals](#goals) + * [Non-Goals](#non-goals) +* [Proposal](#proposal) + * [Risks and Mitigations](#risks-and-mitigations) +* [Graduation Criteria](#graduation-criteria) +* [Rollout Plan](#rollout-plan) +* [Implementation History](#implementation-history) +* [Drawbacks [optional]](#drawbacks-optional) +* [Alternatives [optional]](#alternatives-optional) + +[Tools for generating]: https://github.com/ekalinin/github-markdown-toc + +## Summary + +This proposal aims to improve DNS performance by running a dns caching agent on cluster nodes as a Daemonset. In today's architecture, pods in ClusterFirst DNS mode reach out to a kube-dns serviceIP for DNS queries. This is translated to a kube-dns endpoint via iptables rules added by kube-proxy. With this new architecture, pods will reach out to the dns caching agent running on the same node, thereby avoiding iptables DNAT rules and connection tracking. The local caching agent will query kube-dns for cache misses of cluster hostnames(cluster.local suffix by default). 
+ + +## Motivation + +* With the current DNS architecture, it is possible that pods with the highest DNS QPS have to reach out to a different node, if there is no local kube-dns instance. +Having a local cache will help improve the latency in such scenarios. + +* Skipping iptables DNAT and connection tracking will help reduce [conntrack races](https://github.com/kubernetes/kubernetes/issues/56903) and avoid UDP DNS entries filling up conntrack table. + +* Connections from local caching agent to kube-dns can be upgraded to TCP. TCP conntrack entries will be removed on connection close in contrast with UDP entries that have to timeout ([default](https://www.kernel.org/doc/Documentation/networking/nf_conntrack-sysctl.txt) `nf_conntrack_udp_timeout` is 30 seconds) + +* Upgrading DNS queries from UDP to TCP would reduce tail latency attributed to dropped UDP packets and DNS timeouts usually up to 30s (3 retries + 10s timeout). Since the nodelocal cache listens for UDP DNS queries, applications don't need to be changed. + +* Metrics & visibility into dns requests at a node level. + +* Neg caching can be re-enabled, thereby reducing number of queries to kube-dns. + +* There are several open github issues proposing a local DNS Cache daemonset and scripts to run it: + * [https://github.com/kubernetes/kubernetes/issues/7470#issuecomment-248912603](https://github.com/kubernetes/kubernetes/issues/7470#issuecomment-248912603) + + * [https://github.com/kubernetes/kubernetes/issues/32749](https://github.com/kubernetes/kubernetes/issues/32749) + + * [https://github.com/kubernetes/kubernetes/issues/45363](https://github.com/kubernetes/kubernetes/issues/45363) + + +This shows that there is interest in the wider Kubernetes community for a solution similar to the proposal here. + + +### Goals + +Being able to run a dns caching agent as a Daemonset and get pods to use the local instance. Having visibility into cache stats and other metrics. 
+ +### Non-Goals + +* Providing a replacement for kube-dns/CoreDNS. +* Changing the underlying protocol for DNS (e.g. to gRPC) + +## Proposal + +A nodeLocal dns cache runs on all cluster nodes. It is managed as an add-on and runs as a Daemonset. All pods using clusterDNS will now talk to the nodeLocal cache, which will query kube-dns in case of cache misses in cluster's configured DNS suffix and for all reverse lookups (in-addr.arpa and ip6.arpa). User-configured stubDomains will be passed on to this local agent. +The node's resolv.conf will be used by this local agent for all other cache misses. One benefit of doing the non-cluster lookups on the nodes from which they are happening, rather than the kube-dns instances, is better use of per-node DNS resources in cloud. For instance, in a 10-node cluster with 3 kube-dns instances, the 3 nodes running kube-dns will end up resolving all external hostnames and can exhaust QPS quota. Spreading the queries across the 10 nodes will help alleviate this. + +#### Daemonset and Listen Interface for caching agent + +The caching agent daemonset runs in hostNetwork mode in kube-system namespace with a Priority Class of “system-node-critical”. It listens for dns requests on a dummy interface created on the host. A separate ip address is assigned to this dummy interface, so that requests to kube-dns or any other custom service are not incorrectly intercepted by the caching agent. This will be a link-local ip address selected by the user. Each cluster node will have this dummy interface. This ip address will be passed on to kubelet via the --cluster-dns flag, if the feature is enabled. + +The selected link-local IP will be handled specially because of the NOTRACK rules described in the section below. + +#### iptables NOTRACK + +NOTRACK rules are added for connections to and from the nodelocal dns ip. Additional rules are added in the FILTER table to whitelist these connections, since the INPUT and OUTPUT chains have a default DROP policy.
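The forwarding rule in the Proposal above — cluster-suffix names and reverse-lookup zones go to kube-dns, everything else to the upstream from the node's resolv.conf — can be sketched in Go as follows. The helper name `pickUpstream` is hypothetical; this is not the add-on's real code:

```go
package main

import (
	"fmt"
	"strings"
)

// pickUpstream decides where the node-local cache forwards a query on a
// cache miss: names under the cluster DNS suffix and the reverse-lookup
// zones go to kube-dns; everything else goes to the node's resolv.conf
// upstream. Illustrative sketch under assumed names.
func pickUpstream(qname, clusterSuffix string) string {
	q := strings.TrimSuffix(strings.ToLower(qname), ".")
	for _, zone := range []string{clusterSuffix, "in-addr.arpa", "ip6.arpa"} {
		if q == zone || strings.HasSuffix(q, "."+zone) {
			return "kube-dns"
		}
	}
	return "node-resolv-conf"
}

func main() {
	for _, q := range []string{
		"kubernetes.default.svc.cluster.local.",
		"10.0.244.10.in-addr.arpa.",
		"example.com.",
	} {
		fmt.Println(q, "->", pickUpstream(q, "cluster.local"))
	}
	// Output:
	// kubernetes.default.svc.cluster.local. -> kube-dns
	// 10.0.244.10.in-addr.arpa. -> kube-dns
	// example.com. -> node-resolv-conf
}
```

Stub-domain handling (which the proposal says is also passed to the local agent) would add further zone-to-upstream entries to the same lookup.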
+ +The nodelocal cache process will create the dummy interface and iptables rules . It gets the nodelocal dns ip as a parameter, performs setup and listens for dns requests. The Daemonset runs in privileged securityContext since it needs to create this dummy interface and add iptables rules. + The cache process will also periodically ensure that the dummy interface and iptables rules are present, in the background. Rules need to be checked in the raw table and filter table. Rules in these tables do not grow with number of valid services. Services with no endpoints will have rules added in filter table to drop packets destined to these ip. The resource usage for periodic iptables check was measured by creating 2k services with no endpoints and running the nodelocal caching agent. Peak memory usage was 20Mi for the caching agent when it was responding to queries along with the periodic checks. This was measured using `kubectl top` command. More details on the testing are in the following section. + +[Proposal presentation](https://docs.google.com/presentation/d/1c43cZqbVhGAlw3dSNQIOGuvQmDfKaA2yiAPRoYpa6iY), also shared at the sig-networking meeting on 2018-10-04 + +Slide 5 has a diagram showing how the new dns cache fits in. + +#### Choice of caching agent + +The current plan is to run CoreDNS by default. Benchmark [ tests](https://github.com/kubernetes/perf-tests/tree/master/dns) were run using [Unbound dns server](https://www.nlnetlabs.nl/projects/unbound/about/) and CoreDNS. 2 more tests were added to query for 20 different services and to query several external hostnames. + +Tests were run on a 1.9.7 cluster with 2 nodes on GCE, using Unbound 1.7.3 and CoreDNS 1.2.3. +Resource limits for nodelocaldns daemonset was CPU - 50m, Memory 25Mi + +Resource usage and QPS were measured with a nanny process for Unbound/CoreDNS plugin adding iptables rules and ensuring that the rules exist, every minute. 
+ +Caching was minimized in Unbound by setting: +msg-cache-size: 0 +rrset-cache-size: 0 +msg-cache-slabs:1 +rrset-cache-slabs:1 +Previous tests did not set the last 2 and there were quite a few unexpected cache hits. + +Caching was disabled in CoreDNS by skipping the cache plugin from Corefile. + +These are the results when dnsperf test was run with no QPS limit. In this mode, the tool sends queries until they start timing out. + +| Test Type | Program | Caching | QPS | +|-----------------------|---------|---------|------| +| Multiple services(20) | CoreDNS | Yes | 860 | +| Multiple services(20) | Unbound | Yes | 3030 | +| | | | | +| External queries | CoreDNS | Yes | 213 | +| External queries | Unbound | Yes | 115 | +| | | | | +| Single Service | CoreDNS | Yes | 834 | +| Single Service | Unbound | Yes | 3287 | +| | | | | +| Single NXDomain | CoreDNS | Yes | 816 | +| Single NXDomain | Unbound | Yes | 3136 | +| | | | | +| Multiple services(20) | CoreDNS | No | 859 | +| Multiple services(20) | Unbound | No | 1463 | +| | | | | +| External queries | CoreDNS | No | 180 | +| External queries | Unbound | No | 108 | +| | | | | +| Single Service | CoreDNS | No | 818 | +| Single Service | Unbound | No | 2992 | +| | | | | +| Single NXDomain | CoreDNS | No | 827 | +| Single NXDomain | Unbound | No | 2986 | + + +Peak memory usage was ~20 Mi for both Unbound and CoreDNS. + +For the single service and single NXDomain query, Unbound still had cache hits since caching could not be completely disabled. + +CoreDNS QPS was twice as much as Unbound for external queries. They were mostly unique hostnames from this file - [ftp://ftp.nominum.com/pub/nominum/dnsperf/data/queryfile-example-current.gz](ftp://ftp.nominum.com/pub/nominum/dnsperf/data/queryfile-example-current.gz) + +When multiple cluster services were queried with cache misses, Unbound was better(1463 vs 859), but not by a large factor. + +Unbound performs much better when all requests are cache hits. 
+ +CoreDNS will be the local cache agent in the first release, after considering these reasons: + +* Better QPS numbers for external hostname queries +* Single process, no need for a separate nanny process +* Prometheus metrics already available, also we can get per zone stats. Unbound gives consolidated stats. +* Easier to make changes to the source code + + It is possible to run any program as caching agent by modifying the daemonset and configmap spec. Publishing an image with Unbound DNS can be added as a follow-up. + +Based on the prototype/test results, these are the recommended defaults: +CPU request: 50m +Memory Limit: 25m + +CPU request can be dropped to a smaller value if QPS needs are lower. + +#### Metrics + +Per-zone metrics will be available via the metrics/prometheus plugin in CoreDNS. + + +### Risks and Mitigations + +Having the pods query the nodelocal cache introduces a single point of failure. + +* This is mitigated by having a livenessProbe to periodically ensure DNS is working. In case of upgrades, the recommendation is to drain the node before starting to upgrade the local instance. The user can also configure [customPodDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config) pointing to clusterDNS ip for pods that cannot handle DNS disruption during upgrade. + +* The Daemonset is assigned a PriorityClass of "system-node-critical", to ensure it is not evicted. + +* Populating both the nodelocal cache ip address and kube-dns ip address in resolv.conf is not a reliable option. Depending on underlying implementation, this can result in kube-dns being queried only if cache ip does not respond, or both queried simultaneously. + + +## Graduation Criteria +TODO + +## Rollout Plan +This feature will be launched with Alpha support in the first release. Master versions v1.13 and above will deploy the new add-on. Node versions v1.13 and above will have kubelet code to modify pods' resolv.conf.
Nodes running older versions will run the nodelocal daemonset, but it will not be used. The user can specify a custom dnsConfig to use this local cache dns server. + +## Implementation History + +* 2018-10-05 - Creation of the KEP +* 2018-10-30 - Follow up comments and choice of cache agent + +## Drawbacks [optional] + +Additional resource consumption for the Daemonset might not be necessary for clusters with low DNS QPS needs. + + +## Alternatives [optional] + +* The listen ip address for the dns cache could be a service ip. This ip address is obtained by creating a nodelocaldns service, with same endpoints as the clusterDNS service. Using the same endpoints as clusterDNS helps reduce DNS downtime in case of upgrades/restart. When no other special handling is provided, queries to the nodelocaldns ip will be served by kube-dns/CoreDNS pods. Kubelet takes the service name as an argument `--cluster-dns-svc=<namespace>/<svc name>`, looks up the ip address and populates pods' resolv.conf with this value instead of clusterDNS. +This approach works only for iptables mode of kube-proxy. This is because kube-proxy creates a dummy interface bound to all service IPs in ipvs mode and ipvs rules are added to load-balance between endpoints. The packet seems to get dropped if there are no endpoints. If there are endpoints, adding iptables rules does not bypass the ipvs loadbalancing rules. + +* A nodelocaldns service can be created with a hard requirement of same-node endpoint, once we have [this](https://github.com/kubernetes/community/pull/2846) supported. All the pods in the nodelocaldns daemonset will be endpoints, the one running locally will be selected. iptables rules to NOTRACK connections can still be added, in order to skip DNAT in the iptables kube-proxy implementation. + +* Instead of just a dns-cache, a full-fledged kube-dns instance can be run on all nodes. This will consume much more resources since each instance will also watch Services and Endpoints. 
diff --git a/keps/sig-node/0008-20180430-promote-sysctl-annotations-to-fields.md b/keps/sig-node/0008-20180430-promote-sysctl-annotations-to-fields.md index 8966b818..4a2090a1 100644 --- a/keps/sig-node/0008-20180430-promote-sysctl-annotations-to-fields.md +++ b/keps/sig-node/0008-20180430-promote-sysctl-annotations-to-fields.md @@ -160,7 +160,7 @@ With the `Sysctl` feature enabled, both sysctl fields in `Pod` and `PodSecurityP and the whitelist of unsafed sysctls are acknowledged. If disabled, the fields and the whitelist are just ignored. -[1] https://kubernetes.io/docs/reference/feature-gates/ +[1] https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/ ## Proposal @@ -202,7 +202,7 @@ type PodSecurityPolicySpec struct { ``` Following steps in [devel/api_changes.md#alpha-field-in-existing-api-version](https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md#alpha-field-in-existing-api-version) -during implemention. +during implementation. Validation checks implemented as part of [#27180](https://github.com/kubernetes/kubernetes/pull/27180). diff --git a/keps/sig-node/0030-20180906-quotas-for-ephemeral-storage.md b/keps/sig-node/0030-20180906-quotas-for-ephemeral-storage.md new file mode 100644 index 00000000..a6c5aaba --- /dev/null +++ b/keps/sig-node/0030-20180906-quotas-for-ephemeral-storage.md @@ -0,0 +1,807 @@ +--- +kep-number: 0 +title: Quotas for Ephemeral Storage +authors: + - "@RobertKrawitz" +owning-sig: sig-xxx +participating-sigs: + - sig-node +reviewers: + - TBD +approvers: + - "@dchen1107" + - "@derekwaynecarr" +editor: TBD +creation-date: yyyy-mm-dd +last-updated: yyyy-mm-dd +status: provisional +see-also: +replaces: +superseded-by: +--- + +# Quotas for Ephemeral Storage + +## Table of Contents +<!-- markdown-toc start - Don't edit this section. 
Run M-x markdown-toc-generate-toc again --> +**Table of Contents** + +- [Quotas for Ephemeral Storage](#quotas-for-ephemeral-storage) + - [Table of Contents](#table-of-contents) + - [Summary](#summary) + - [Project Quotas](#project-quotas) + - [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) + - [Future Work](#future-work) + - [Proposal](#proposal) + - [Control over Use of Quotas](#control-over-use-of-quotas) + - [Operation Flow -- Applying a Quota](#operation-flow----applying-a-quota) + - [Operation Flow -- Retrieving Storage Consumption](#operation-flow----retrieving-storage-consumption) + - [Operation Flow -- Removing a Quota.](#operation-flow----removing-a-quota) + - [Operation Notes](#operation-notes) + - [Selecting a Project ID](#selecting-a-project-id) + - [Determine Whether a Project ID Applies To a Directory](#determine-whether-a-project-id-applies-to-a-directory) + - [Return a Project ID To the System](#return-a-project-id-to-the-system) + - [Implementation Details/Notes/Constraints [optional]](#implementation-detailsnotesconstraints-optional) + - [Notes on Implementation](#notes-on-implementation) + - [Notes on Code Changes](#notes-on-code-changes) + - [Testing Strategy](#testing-strategy) + - [Risks and Mitigations](#risks-and-mitigations) + - [Graduation Criteria](#graduation-criteria) + - [Implementation History](#implementation-history) + - [Drawbacks [optional]](#drawbacks-optional) + - [Alternatives [optional]](#alternatives-optional) + - [Alternative quota-based implementation](#alternative-quota-based-implementation) + - [Alternative loop filesystem-based implementation](#alternative-loop-filesystem-based-implementation) + - [Infrastructure Needed [optional]](#infrastructure-needed-optional) + - [References](#references) + - [Bugs Opened Against Filesystem Quotas](#bugs-opened-against-filesystem-quotas) + - [CVE](#cve) + - [Other Security Issues Without CVE](#other-security-issues-without-cve) + - [Other Linux 
Quota-Related Bugs Since 2012](#other-linux-quota-related-bugs-since-2012) + +<!-- markdown-toc end --> + +[Tools for generating]: https://github.com/ekalinin/github-markdown-toc + +## Summary + +This proposal applies to the use of quotas for ephemeral-storage +metrics gathering. Use of quotas for ephemeral-storage limit +enforcement is a [non-goal](#non-goals), but as the architecture and +code will be very similar, there are comments interspersed related to +enforcement. _These comments will be italicized_. + +Local storage capacity isolation, aka ephemeral-storage, was +introduced into Kubernetes via +<https://github.com/kubernetes/features/issues/361>. It provides +support for capacity isolation of shared storage between pods, such +that a pod can be limited in its consumption of shared resources and +can be evicted if its consumption of shared storage exceeds that +limit. The limits and requests for shared ephemeral-storage are +similar to those for memory and CPU consumption. + +The current mechanism relies on periodically walking each ephemeral +volume (emptydir, logdir, or container writable layer) and summing the +space consumption. This method is slow, can be fooled, and has high +latency (i. e. a pod could consume a lot of storage prior to the +kubelet being aware of its overage and terminating it). + +The mechanism proposed here utilizes filesystem project quotas to +provide monitoring of resource consumption _and optionally enforcement +of limits._ Project quotas, initially in XFS and more recently ported +to ext4fs, offer a kernel-based means of monitoring _and restricting_ +filesystem consumption that can be applied to one or more directories. + +A prototype is in progress; see <https://github.com/kubernetes/kubernetes/pull/66928>. + +### Project Quotas + +Project quotas are a form of filesystem quota that apply to arbitrary +groups of files, as opposed to file user or group ownership. 
They +were first implemented in XFS, as described here: +<http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/xfs-quotas.html>. + +Project quotas for ext4fs were [proposed in late +2014](https://lwn.net/Articles/623835/) and added to the Linux kernel +in early 2016, with +commit +[391f2a16b74b95da2f05a607f53213fc8ed24b8e](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=391f2a16b74b95da2f05a607f53213fc8ed24b8e). +They were designed to be compatible with XFS project quotas. + +Each inode contains a 32-bit project ID, to which optionally quotas +(hard and soft limits for blocks and inodes) may be applied. The +total blocks and inodes for all files with the given project ID are +maintained by the kernel. Project quotas can be managed from +userspace by means of the `xfs_quota(8)` command in foreign filesystem +(`-f`) mode; the traditional Linux quota tools do not manipulate +project quotas. Programmatically, they are managed by the `quotactl(2)` +system call, using in part the standard quota commands and in part the +XFS quota commands; the man page implies incorrectly that the XFS +quota commands apply only to XFS filesystems. + +The project ID applied to a directory is inherited by files created +under it. Files cannot be (hard) linked across directories with +different project IDs. A file's project ID cannot be changed by a +non-privileged user, but a privileged user may use the `xfs_io(8)` +command to change the project ID of a file. + +Filesystems using project quotas may be mounted with quotas either +enforced or not; the non-enforcing mode tracks usage without enforcing +it. A non-enforcing project quota may be implemented on a filesystem +mounted with enforcing quotas by setting a quota too large to be hit. +The maximum size that can be set varies with the filesystem; on a +64-bit filesystem it is 2^63-1 bytes for XFS and 2^58-1 bytes for +ext4fs. 
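A non-enforcing quota is therefore just an enforcing quota set to a ceiling the workload can never hit, and that ceiling depends on the filesystem. A small Go sketch of how such a ceiling might be chosen (the helper name is invented for illustration; the constants are the limits quoted above):

```go
package main

import "fmt"

// maxNonEnforcingQuota returns the largest block-quota value, in bytes, that
// the given filesystem accepts -- effectively "unlimited", so the quota
// tracks usage without ever restricting it. Illustrative, not prototype code.
func maxNonEnforcingQuota(fsType string) (uint64, bool) {
	switch fsType {
	case "xfs":
		return 1<<63 - 1, true // 2^63-1 bytes
	case "ext4":
		return 1<<58 - 1, true // 2^58-1 bytes
	default:
		return 0, false // project quotas unsupported on this filesystem
	}
}

func main() {
	for _, fs := range []string{"xfs", "ext4", "tmpfs"} {
		limit, ok := maxNonEnforcingQuota(fs)
		fmt.Println(fs, limit, ok)
	}
}
```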
+ +Conventionally, project quota mappings are stored in `/etc/projects` and +`/etc/projid`; these files exist for user convenience and do not have +any direct importance to the kernel. `/etc/projects` contains a mapping +from project ID to directory/file; this can be a one to many mapping +(the same project ID can apply to multiple directories or files, but +any given directory/file can be assigned only one project ID). +`/etc/projid` contains a mapping from named projects to project IDs. + +This proposal utilizes hard project quotas for both monitoring _and +enforcement_. Soft quotas are of no utility; they allow for temporary +overage that, after a programmable period of time, is converted to the +hard quota limit. + + +## Motivation + +The mechanism presently used to monitor storage consumption involves +use of `du` and `find` to periodically gather information about +storage and inode consumption of volumes. This mechanism suffers from +a number of drawbacks: + +* It is slow. If a volume contains a large number of files, walking + the directory can take a significant amount of time. There has been + at least one known report of nodes becoming not ready due to volume + metrics: <https://github.com/kubernetes/kubernetes/issues/62917> +* It is possible to conceal a file from the walker by creating it and + removing it while holding an open file descriptor on it. POSIX + behavior is to not remove the file until the last open file + descriptor pointing to it is removed. This has legitimate uses; it + ensures that a temporary file is deleted when the processes using it + exit, and it minimizes the attack surface by not having a file that + can be found by an attacker. 
The following pod does this; it will + never be caught by the present mechanism: + +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: "diskhog" +spec: + containers: + - name: "perl" + resources: + limits: + ephemeral-storage: "2048Ki" + image: "perl" + command: + - perl + - -e + - > + my $file = "/data/a/a"; open OUT, ">$file" or die "Cannot open $file: $!\n"; unlink "$file" or die "cannot unlink $file: $!\n"; my $a="0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"; foreach my $i (0..200000000) { print OUT $a; }; sleep 999999 + volumeMounts: + - name: a + mountPath: /data/a + volumes: + - name: a + emptyDir: {} +``` +* It is reactive rather than proactive. It does not prevent a pod + from overshooting its limit; at best it catches it after the fact. + On a fast storage medium, such as NVMe, a pod may write 50 GB or + more of data before the housekeeping performed once per minute + catches up to it. If the primary volume is the root partition, this + will completely fill the partition, possibly causing serious + problems elsewhere on the system. This proposal does not address + this issue; _a future enforcing project would_. + +In many environments, these issues may not matter, but shared +multi-tenant environments need these issues addressed. + +### Goals + +These goals apply only to local ephemeral storage, as described in +<https://github.com/kubernetes/features/issues/361>. + +* Primary: improve performance of monitoring by using project quotas + in a non-enforcing way to collect information about storage + utilization of ephemeral volumes. +* Primary: detect storage used by pods that is concealed by deleted + files being held open. +* Primary: do not interfere with the more common user and group + quotas. + +### Non-Goals + +* Application to storage other than local ephemeral storage. +* Application to container copy-on-write layers. That will be managed + by the container runtime.
For a future project, we should work with + the runtimes to use quotas for their monitoring. +* Elimination of eviction as a means of enforcing ephemeral-storage + limits. Pods that hit their ephemeral-storage limit will still be + evicted by the kubelet even if their storage has been capped by + enforcing quotas. +* Enforcing node allocatable (limit over the sum of all pod's disk + usage, including e. g. images). +* Enforcing limits on total pod storage consumption by any means, such + that the pod would be hard restricted to the desired storage limit. + +### Future Work + +* _Enforce limits on per-volume storage consumption by using + enforced project quotas._ + +## Proposal + +This proposal applies project quotas to emptydir volumes on qualifying +filesystems (ext4fs and xfs with project quotas enabled). Project +quotas are applied by selecting an unused project ID (a 32-bit +unsigned integer), setting a limit on space and/or inode consumption, +and attaching the ID to one or more files. By default (and as +utilized herein), if a project ID is attached to a directory, it is +inherited by any files created under that directory. + +_If we elect to use the quota as enforcing, we impose a quota +consistent with the desired limit._ If we elect to use it as +non-enforcing, we impose a large quota that in practice cannot be +exceeded (2^63-1 bytes for XFS, 2^58-1 bytes for ext4fs). + +### Control over Use of Quotas + +At present, two feature gates control operation of quotas: + +* `LocalStorageCapacityIsolation` must be enabled for any use of + quotas. + +* `LocalStorageCapacityIsolationFSMonitoring` must be enabled in addition. If this is + enabled, quotas are used for monitoring, but not enforcement. At + present, this defaults to False, but the intention is that this will + default to True by initial release. 
+ +* _`LocalStorageCapacityIsolationFSEnforcement` must be enabled, in addition to + `LocalStorageCapacityIsolationFSMonitoring`, to use quotas for enforcement._ + +### Operation Flow -- Applying a Quota + +* Caller (emptydir volume manager or container runtime) creates an + emptydir volume, with an empty directory at a location of its + choice. +* Caller requests that a quota be applied to a directory. +* Determine whether a quota can be imposed on the directory, by asking + each quota provider (one per filesystem type) whether it can apply a + quota to the directory. If no provider claims the directory, an + error status is returned to the caller. +* Select an unused project ID ([see below](#selecting-a-project-id)). +* Set the desired limit on the project ID, in a filesystem-dependent + manner ([see below](#notes-on-implementation)). +* Apply the project ID to the directory in question, in a + filesystem-dependent manner. + +An error at any point results in no quota being applied and no change +to the state of the system. The caller in general should not assume a +priori that the attempt will be successful. It could choose to reject +a request if a quota cannot be applied, but at this time it will +simply ignore the error and proceed as today. + +### Operation Flow -- Retrieving Storage Consumption + +* Caller (kubelet metrics code, cadvisor, container runtime) asks the + quota code to compute the amount of storage used under the + directory. +* Determine whether a quota applies to the directory, in a + filesystem-dependent manner ([see below](#notes-on-implementation)). +* If so, determine how much storage or how many inodes are utilized, + in a filesystem dependent manner. + +If the quota code is unable to retrieve the consumption, it returns an +error status and it is up to the caller to utilize a fallback +mechanism (such as the directory walk performed today). + +### Operation Flow -- Removing a Quota. 
+ +* Caller requests that the quota be removed from a directory. +* Determine whether a project quota applies to the directory. +* Remove the limit from the project ID associated with the directory. +* Remove the association between the directory and the project ID. +* Return the project ID to the system to allow its use elsewhere ([see + below](#return-a-project-id-to-the-system)). +* Caller may delete the directory and its contents (normally it will). + +### Operation Notes + +#### Selecting a Project ID + +Project IDs are a shared space within a filesystem. If the same +project ID is assigned to multiple directories, the space consumption +reported by the quota will be the sum of that of all of the +directories. Hence, it is important to ensure that each directory is +assigned a unique project ID (unless it is desired to pool the storage +use of multiple directories). + +The canonical mechanism to record persistently that a project ID is +reserved is to store it in the `/etc/projid` (`projid[5]`) and/or +`/etc/projects` (`projects(5)`) files. However, it is possible to utilize +project IDs without recording them in those files; they exist for +administrative convenience but neither the kernel nor the filesystem +is aware of them. Other ways can be used to determine whether a +project ID is in active use on a given filesystem: + +* The quota values (in blocks and/or inodes) assigned to the project + ID are non-zero. +* The storage consumption (in blocks and/or inodes) reported under the + project ID are non-zero. + +The algorithm to be used is as follows: + +* Lock this instance of the quota code against re-entrancy. +* open and `flock()` the `/etc/project` and `/etc/projid` files, so that + other uses of this code are excluded. +* Start from a high number (the prototype uses 1048577). +* Iterate from there, performing the following tests: + * Is the ID reserved by this instance of the quota code? + * Is the ID present in `/etc/projects`? 
+ * Is the ID present in `/etc/projid`? + * Are the quota values and/or consumption reported by the kernel + non-zero? This test is restricted to 128 iterations to ensure + that a bug here or elsewhere does not result in an infinite loop + looking for a quota ID. +* If an ID has been found: + * Add it to an in-memory copy of `/etc/projects` and `/etc/projid` so + that any other uses of project quotas do not reuse it. + * Write temporary copies of `/etc/projects` and `/etc/projid` that are + `flock()`ed. + * If successful, rename the temporary files appropriately (if + rename of one succeeds but the other fails, we have a problem + that we cannot recover from, and the files may be inconsistent). +* Unlock `/etc/projid` and `/etc/projects`. +* Unlock this instance of the quota code. + +A minor variation of this is used if we want to reuse an existing +quota ID. + +#### Determine Whether a Project ID Applies To a Directory + +It is possible to determine whether a directory has a project ID +applied to it by requesting (via the `quotactl(2)` system call) the +project ID associated with the directory. While the specifics are +filesystem-dependent, the basic method is the same for at least XFS +and ext4fs. + +It is not possible to determine in a constant number of operations the +directory or directories to which a project ID is applied. It is possible to +determine whether a given project ID has been applied to an existing +directory or files (although those will not be known); the reported +consumption will be non-zero. + +The code records internally the project ID applied to a directory, but +it cannot always rely on this. In particular, if the kubelet has +exited and has been restarted (and hence the quota applying to the +directory should be removed), the map from directory to project ID is +lost. If it cannot find a map entry, it falls back on the approach +discussed above.
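The ID-selection scan described above reduces to a linear search for an ID that no source considers in use. A simplified in-memory sketch, with the in-process reservation table, the `/etc/projects`/`/etc/projid` contents, and the kernel query stubbed out as predicate functions (all names are illustrative; the real code also locks and rewrites the files):

```go
package main

import "fmt"

// The kernel-usage probe is bounded, mirroring the 128-iteration limit in
// the selection algorithm, so a bug cannot cause an unbounded scan.
const maxKernelProbes = 128

// findFreeProjectID scans upward from start for a project ID that is not
// reserved in-process, not listed in /etc/projects or /etc/projid, and not
// reported in use by the kernel.
func findFreeProjectID(start uint32, reserved, listed, kernelInUse func(uint32) bool) (uint32, error) {
	probes := 0
	for id := start; ; id++ {
		if reserved(id) || listed(id) {
			continue
		}
		if kernelInUse(id) {
			probes++
			if probes >= maxKernelProbes {
				return 0, fmt.Errorf("no free project ID after %d kernel probes", maxKernelProbes)
			}
			continue
		}
		return id, nil
	}
}

func main() {
	inEtcFiles := map[uint32]bool{1048577: true, 1048578: true}
	id, err := findFreeProjectID(1048577, // same starting point as the prototype
		func(id uint32) bool { return false },          // nothing reserved in-process
		func(id uint32) bool { return inEtcFiles[id] }, // /etc/projects + /etc/projid
		func(id uint32) bool { return id == 1048579 },  // kernel reports usage for one ID
	)
	fmt.Println(id, err)
}
```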
+ +#### Return a Project ID To the System + +The algorithm used to return a project ID to the system is very +similar to the algorithm used to select a project ID, except that no +new project ID is selected. It performs the same sequence of locking +`/etc/projects` and `/etc/projid`, editing copies of the files, and +restoring them. + +If the project ID is applied to multiple directories and the code can +determine that, it will not remove the project ID from `/etc/projid` +until the last reference is removed. While it is not anticipated in +this KEP that this mode of operation will be used, at least initially, +this can be detected even on kubelet restart by looking at the +reference count in `/etc/projects`. + + +### Implementation Details/Notes/Constraints [optional] + +#### Notes on Implementation + +The primary new interface defined is the quota interface in +`pkg/volume/util/quota/quota.go`. This defines five operations: + +* Does the specified directory support quotas? + +* Assign a quota to a directory. If a non-empty pod UID is provided, + the quota assigned is that of any other directories under this pod + UID; if an empty pod UID is provided, a unique quota is assigned. + +* Retrieve the consumption of the specified directory. If the quota + code cannot handle it efficiently, it returns an error and the + caller falls back on the existing mechanism. + +* Retrieve the inode consumption of the specified directory; same + description as above. + +* Remove quota from a directory. If a non-empty pod UID is passed, it + is checked against that recorded in-memory (if any). The quota is + removed from the specified directory. This can be used even if + AssignQuota has not been used; it inspects the directory and removes + the quota from it. This permits stale quotas from an interrupted + kubelet to be cleaned up. + +Two implementations are provided: `quota_linux.go` (for Linux) and +`quota_unsupported.go` (for other operating systems).
The latter +returns an error for all requests. + +As the quota mechanism is intended to support multiple filesystems, +and different filesystems require different low level code for +manipulating quotas, a provider is supplied that finds an appropriate +quota applier implementation for the filesystem in question. The low +level quota applier provides similar operations to the top level quota +code, with two exceptions: + +* No operation exists to determine whether a quota can be applied + (that is handled by the provider). + +* An additional operation is provided to determine whether a given + quota ID is in use within the filesystem (outside of `/etc/projects` + and `/etc/projid`). + +The two quota providers in the initial implementation are in +`pkg/volume/util/quota/extfs` and `pkg/volume/util/quota/xfs`. While +some quota operations do require different system calls, a lot of the +code is common, and factored into +`pkg/volume/util/quota/common/quota_linux_common_impl.go`. + +#### Notes on Code Changes + +The prototype for this project is mostly self-contained within +`pkg/volume/util/quota` and a few changes to +`pkg/volume/empty_dir/empty_dir.go`. However, a few changes were +required elsewhere: + +* The operation executor needs to pass the desired size limit to the + volume plugin where appropriate so that the volume plugin can impose + a quota. The limit is passed as 0 (do not use quotas), _positive + number (impose an enforcing quota if possible, measured in bytes),_ + or -1 (impose a non-enforcing quota, if possible) on the volume. + + This requires changes to + `pkg/volume/util/operationexecutor/operation_executor.go` (to add + `DesiredSizeLimit` to `VolumeToMount`), + `pkg/kubelet/volumemanager/cache/desired_state_of_world.go`, and + `pkg/kubelet/eviction/helpers.go` (the latter in order to determine + whether the volume is a local ephemeral one). 
+ +* The volume manager (in `pkg/volume/volume.go`) changes the + `Mounter.SetUp` and `Mounter.SetUpAt` interfaces to take a new + `MounterArgs` type rather than an `FsGroup` (`*int64`). This is to + allow passing the desired size and pod UID (in the event we choose + to implement quotas shared between multiple volumes; [see + below](#alternative-quota-based-implementation)). This required + small changes to all volume plugins and their tests, but will in the + future allow adding additional data without having to change code + other than that which uses the new information. + +#### Testing Strategy + +The quota code is by and large not very amenable to unit tests. While +there are simple unit tests for parsing the mounts file, and there +could be tests for parsing the projects and projid files, the real +work (and risk) involves interactions with the kernel and with +multiple instances of this code (e. g. in the kubelet and the runtime +manager, particularly under stress). It also requires setup in the +form of a prepared filesystem. It would be better served by +appropriate end-to-end tests. + +### Risks and Mitigations + +* The SIG raised the possibility of a container being unable to exit + if we enforce quotas and the quota interferes with writing the + log. This can be mitigated by either not applying a quota to the + log directory and using the du mechanism, or by applying a separate + non-enforcing quota to the log directory. + + As log directories are write-only by the container, and consumption + can be limited by other means (as the log is filtered by the + runtime), I do not consider the ability to write uncapped to the log + to be a serious exposure. + + Note in addition that even without quotas it is possible for writes + to fail due to lack of filesystem space, which is effectively (and + in some cases operationally) indistinguishable from exceeding quota, + so even at present code must be able to handle those situations.
+ +* Filesystem quotas may impact performance to an unknown degree. + Information on that is hard to come by in general, and one of the + reasons for using quotas is indeed to improve performance. If this + is a problem in the field, merely turning off quotas (or selectively + disabling project quotas) on the filesystem in question will avoid + the problem. Against the possibility that cannot be done + (because project quotas are needed for other purposes), we should + provide a way to disable use of quotas altogether via a feature + gate. + + A report <https://blog.pythonanywhere.com/110/> notes that an + unclean shutdown on Linux kernel versions between 3.11 and 3.17 can + result in a prolonged downtime while quota information is restored. + Unfortunately, [the link referenced + here](http://oss.sgi.com/pipermail/xfs/2015-March/040879.html) is no + longer available. + +* Bugs in the quota code could result in a variety of regression + behavior. For example, if a quota is incorrectly applied it could + result in ability to write no data at all to the volume. This could + be mitigated by use of non-enforcing quotas. XFS in particular + offers the `pqnoenforce` mount option that makes all quotas + non-enforcing. + + +## Graduation Criteria + +How will we know that this has succeeded? Gathering user feedback is +crucial for building high quality experiences and SIGs have the +important responsibility of setting milestones for stability and +completeness. Hopefully the content previously contained in [umbrella +issues][] will be tracked in the `Graduation Criteria` section. + +[umbrella issues]: N/A + +## Implementation History + +Major milestones in the life cycle of a KEP should be tracked in +`Implementation History`. 
Major milestones might include + +- the `Summary` and `Motivation` sections being merged signaling SIG + acceptance +- the `Proposal` section being merged signaling agreement on a + proposed design +- the date implementation started +- the first Kubernetes release where an initial version of the KEP was + available +- the version of Kubernetes where the KEP graduated to general + availability +- when the KEP was retired or superseded + +## Drawbacks [optional] + +* Use of quotas, particularly the less commonly used project quotas, + requires additional action on the part of the administrator. In + particular: + * ext4fs filesystems must be created with additional options that + are not enabled by default: +``` +mkfs.ext4 -O quota,project -Q usrquota,grpquota,prjquota _device_ +``` + * An additional option (`prjquota`) must be applied in `/etc/fstab` + * If the root filesystem is to be quota-enabled, it must be set in + the grub options. +* Use of project quotas for this purpose will preclude future use + within containers. + +## Alternatives [optional] + +I have considered two classes of alternatives: + +* Alternatives based on quotas, with different implementation + +* Alternatives based on loop filesystems without use of quotas + +### Alternative quota-based implementation + +Within the basic framework of using quotas to monitor and potentially +enforce storage utilization, there are a number of possible options: + +* Utilize per-volume non-enforcing quotas to monitor storage (the + first stage of this proposal). + + This mostly preserves the current behavior, but with more efficient + determination of storage utilization and the possibility of building + further on it. The one change from current behavior is the ability + to detect space used by deleted files. + +* Utilize per-volume enforcing quotas to monitor and enforce storage + (the second stage of this proposal). + + This allows partial enforcement of storage limits. 
As local storage + capacity isolation works at the level of the pod, and we have no + control of user utilization of ephemeral volumes, we would have to + give each volume a quota of the full limit. For example, if a pod + had a limit of 1 MB but had four ephemeral volumes mounted, it would + be possible for storage utilization to reach (at least temporarily) + 4MB before being capped. + +* Utilize per-pod enforcing user or group quotas to enforce storage + consumption, and per-volume non-enforcing quotas for monitoring. + + This would offer the best of both worlds: a fully capped storage + limit combined with efficient reporting. However, it would require + each pod to run under a distinct UID or GID. This may prevent pods + from using setuid or setgid or their variants, and would interfere + with any other use of group or user quotas within Kubernetes. + +* Utilize per-pod enforcing quotas to monitor and enforce storage. + + This allows for full enforcement of storage limits, at the expense + of being able to efficiently monitor per-volume storage + consumption. As there have already been reports of monitoring + causing trouble, I do not advise this option. + + A variant of this would report (1/N) storage for each covered + volume, so with a pod with a 4MiB quota and 1MiB total consumption, + spread across 4 ephemeral volumes, each volume would report a + consumption of 256 KiB. Another variant would change the API to + report statistics for all ephemeral volumes combined. I do not + advise this option. + +### Alternative loop filesystem-based implementation + +Another way of isolating storage is to utilize filesystems of +pre-determined size, using the loop filesystem facility within Linux. +It is possible to create a file and run `mkfs(8)` on it, and then to +mount that filesystem on the desired directory. This both limits the +storage available within that directory and enables quick retrieval of +it via `statfs(2)`. 
+ +Cleanup of such a filesystem involves unmounting it and removing the +backing file. + +The backing file can be created as a sparse file, and the `discard` +option can be used to return unused space to the system, allowing for +thin provisioning. + +I conducted preliminary investigations into this. While at first it +appeared promising, it turned out to have multiple critical flaws: + +* If the filesystem is mounted without the `discard` option, it can + grow to the full size of the backing file, negating any possibility + of thin provisioning. If the file is created dense in the first + place, there is never any possibility of thin provisioning without + use of `discard`. + + If the backing file is created densely, it additionally may require + significant time to create if the ephemeral limit is large. + +* If the filesystem is mounted `nosync`, and is sparse, it is possible + for writes to succeed and then fail later with I/O errors when + synced to the backing storage. This will lead to data corruption + that cannot be detected at the time of write. + + This can easily be reproduced by e. g. creating a 64MB filesystem + and within it creating a 128MB sparse file and building a filesystem + on it. When that filesystem is in turn mounted, writes to it will + succeed, but I/O errors will be seen in the log and the file will be + incomplete: + +``` +# mkdir /var/tmp/d1 /var/tmp/d2 +# dd if=/dev/zero of=/var/tmp/fs1 bs=4096 count=1 seek=16383 +# mkfs.ext4 /var/tmp/fs1 +# mount -o nosync -t ext4 /var/tmp/fs1 /var/tmp/d1 +# dd if=/dev/zero of=/var/tmp/d1/fs2 bs=4096 count=1 seek=32767 +# mkfs.ext4 /var/tmp/d1/fs2 +# mount -o nosync -t ext4 /var/tmp/d1/fs2 /var/tmp/d2 +# dd if=/dev/zero of=/var/tmp/d2/test bs=4096 count=24576 + ...will normally succeed... +# sync + ...fails with I/O error!... 
+```
+
+* If the filesystem is mounted `sync`, all writes to it are
+  immediately committed to the backing store, and the `dd` operation
+  above fails as soon as it fills up `/var/tmp/d1`. However,
+  performance is drastically slowed, particularly with small writes;
+  with 1K writes, I observed performance degradation in some cases
+  exceeding three orders of magnitude.
+
+  I performed a test comparing writing 64 MB to a base (partitioned)
+  filesystem, to a loop filesystem without `sync`, and a loop
+  filesystem with `sync`. Total I/O was sufficient to run for at least
+  5 seconds in each case. All filesystems involved were XFS. Loop
+  filesystems were 128 MB and dense. Times are in seconds. The
+  erratic behavior (e.g. the 65536 case) was observed
+  repeatedly, although the exact amount of time and which I/O sizes
+  were affected varied. The underlying device was an HP EX920 1TB
+  NVMe SSD.
+
+| I/O Size | Partition | Loop w/o sync | Loop w/sync |
+| ---: | ---: | ---: | ---: |
+| 1024 | 0.104 | 0.120 | 140.390 |
+| 4096 | 0.045 | 0.077 | 21.850 |
+| 16384 | 0.045 | 0.067 | 5.550 |
+| 65536 | 0.044 | 0.061 | 20.440 |
+| 262144 | 0.043 | 0.087 | 0.545 |
+| 1048576 | 0.043 | 0.055 | 7.490 |
+| 4194304 | 0.043 | 0.053 | 0.587 |
+
+  The only potentially viable combination in my view would be a dense
+  loop filesystem without sync, but that would render any thin
+  provisioning impossible.
+
+## Infrastructure Needed [optional]
+
+* Decision: who is responsible for quota management of all volume
+  types (and especially ephemeral volumes of all types). At present,
+  emptydir volumes are managed by the kubelet and logdirs and writable
+  layers by either the kubelet or the runtime, depending upon the
+  choice of runtime. Beyond the specific proposal that the runtime
+  should manage quotas for volumes it creates, there are broader
+  issues that I request assistance from the SIG in addressing.
+
+* Location of the quota code.
If the quotas for different volume
+  types are to be managed by different components, each such component
+  needs access to the quota code. The code is substantial and should
+  not be copied; it would more appropriately be vendored.
+
+## References
+
+### Bugs Opened Against Filesystem Quotas
+
+The following is a list of known security issues referencing
+filesystem quotas on Linux, and other bugs referencing filesystem
+quotas in Linux since 2012. These bugs are not necessarily in the
+quota system.
+
+#### CVE
+
+* *CVE-2012-2133* Use-after-free vulnerability in the Linux kernel
+  before 3.3.6, when huge pages are enabled, allows local users to
+  cause a denial of service (system crash) or possibly gain privileges
+  by interacting with a hugetlbfs filesystem, as demonstrated by a
+  umount operation that triggers improper handling of quota data.
+
+  The issue is actually related to huge pages, not quotas
+  specifically. The demonstration of the vulnerability resulted in
+  incorrect handling of quota data.
+
+* *CVE-2012-3417* The good_client function in rquotad (rquota_svc.c)
+  in Linux DiskQuota (aka quota) before 3.17 invokes the hosts_ctl
+  function the first time without a host name, which might allow
+  remote attackers to bypass TCP Wrappers rules in hosts.deny
+  (related to rpc.rquotad).
+
+  This issue is related to remote quota handling, which is not the use
+  case for the proposal at hand.
+
+#### Other Security Issues Without CVE
+
+* [Linux Kernel Quota Flaw Lets Local Users Exceed Quota Limits and
+  Create Large Files](https://securitytracker.com/id/1002610)
+
+  A setuid root binary inheriting file descriptors from an
+  unprivileged user process may write to the file without respecting
+  quota limits. If this issue is still present, it would allow a
+  setuid process to exceed any enforcing limits, but does not affect
+  the quota accounting (use of quotas for monitoring).
+
+### Other Linux Quota-Related Bugs Since 2012
+
+* [ext4: report delalloc reserve as non-free in statfs mangled by
+  project quota](https://lore.kernel.org/patchwork/patch/884530/)
+
+  The fix for this bug, merged in Feb. 2018, properly accounts for
+  reserved but not committed space in project quotas. At this point I
+  have not determined the impact of this issue.
+
+* [XFS quota doesn't work after rebooting because of
+  crash](https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1461730)
+
+  This bug resulted in XFS quotas not working after a crash or forced
+  reboot. Under this proposal, Kubernetes would fall back to du for
+  monitoring should a bug of this nature manifest itself again.
+
+* [quota can show incorrect filesystem
+  name](https://bugzilla.redhat.com/show_bug.cgi?id=1326527)
+
+  This issue, which will not be fixed, results in the quota command
+  possibly printing an incorrect filesystem name when used on remote
+  filesystems. It is a display issue with the quota command, not a
+  quota bug at all, and does not result in incorrect quota information
+  being reported. As this proposal does not utilize the quota command,
+  rely on filesystem names, or currently use quotas on remote
+  filesystems, it should not be affected by this bug.
+
+In addition, e2fsprogs has received numerous fixes over the years.
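For reference, the du-style fallback mentioned above amounts to walking the volume and summing file sizes. A minimal sketch (Python here for brevity; real `du` also accounts for block usage and hard links, which this ignores):

```python
import os
import tempfile

def du_bytes(root):
    """Walk `root`, summing apparent file sizes -- roughly what a du-based
    monitor does. Cost grows with the number of files, unlike a single
    quota or statfs(2) query."""
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                continue  # file disappeared mid-walk; skip it
    return total

# Demo on a throwaway directory containing one 1000-byte file:
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "a"), "wb") as f:
    f.write(b"x" * 1000)
print(du_bytes(demo))  # -> 1000
```

The walk-based cost is exactly why this is the fallback rather than the primary monitoring mechanism.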
diff --git a/keps/sig-node/compute-device-assignment.md b/keps/sig-node/compute-device-assignment.md
new file mode 100644
index 00000000..1ce72617
--- /dev/null
+++ b/keps/sig-node/compute-device-assignment.md
@@ -0,0 +1,150 @@
+---
+kep-number: 18
+title: Kubelet endpoint for device assignment observation details
+authors:
+  - "@dashpole"
+  - "@vikaschoudhary16"
+owning-sig: sig-node
+reviewers:
+  - "@thockin"
+  - "@derekwaynecarr"
+  - "@dchen1107"
+  - "@vishh"
+approvers:
+  - "@sig-node-leads"
+editors:
+  - "@dashpole"
+  - "@vikaschoudhary16"
+creation-date: "2018-07-19"
+last-updated: "2018-07-19"
+status: provisional
+---
+# Kubelet endpoint for device assignment observation details
+
+Table of Contents
+=================
+* [Abstract](#abstract)
+* [Background](#background)
+* [Objectives](#objectives)
+* [User Journeys](#user-journeys)
+  * [Device Monitoring Agents](#device-monitoring-agents)
+* [Changes](#changes)
+* [Potential Future Improvements](#potential-future-improvements)
+* [Alternatives Considered](#alternatives-considered)
+
+## Abstract
+In this document we will discuss the motivation and code changes required for introducing a kubelet endpoint to expose device-to-container bindings.
+
+## Background
+[Device Monitoring](https://docs.google.com/document/d/1NYnqw-HDQ6Y3L_mk85Q3wkxDtGNWTxpsedsgw4NgWpg/edit?usp=sharing) requires external agents to be able to determine the set of devices in-use by containers and attach pod and container metadata for these devices.
+
+## Objectives
+
+* To remove current device-specific knowledge from the kubelet, such as [accelerator metrics](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/stats/v1alpha1/types.go#L229)
+* To enable future use-cases requiring device-specific knowledge to be out-of-tree
+
+## User Journeys
+
+### Device Monitoring Agents
+
+* As a _Cluster Administrator_, I provide a set of devices from various vendors in my cluster.
Each vendor independently maintains their own agent, so I run monitoring agents only for devices I provide. Each agent adheres to the [node monitoring guidelines](https://docs.google.com/document/d/1_CdNWIjPBqVDMvu82aJICQsSCbh2BR-y9a8uXjQm4TI/edit?usp=sharing), so I can use a compatible monitoring pipeline to collect and analyze metrics from a variety of agents, even though they are maintained by different vendors.
+* As a _Device Vendor_, I manufacture devices and I have deep domain expertise in how to run and monitor them. Because I maintain my own Device Plugin implementation, as well as Device Monitoring Agent, I can provide consumers of my devices an easy way to consume and monitor my devices without requiring open-source contributions. The Device Monitoring Agent doesn't have any dependencies on the Device Plugin, so I can decouple monitoring from device lifecycle management. My Device Monitoring Agent works by periodically querying the `/devices/<ResourceName>` endpoint to discover which devices are being used, and to get the container/pod metadata associated with the metrics:
+
+
+
+
+## Changes
+
+Add a v1alpha1 Kubelet GRPC service, at `/var/lib/kubelet/pod-resources/kubelet.sock`, which returns information about the kubelet's assignment of devices to containers. It obtains this information from the internal state of the kubelet's Device Manager.
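For illustration, once a monitoring agent has fetched this state, it would typically invert it into a device-ID index before attaching pod and container metadata to its metrics. A hedged sketch in Python over a plain dict whose field names mirror the proto messages defined below (the pod and device names are made up):

```python
# Illustrative only: index a ListPodResourcesResponse-shaped structure
# by device ID. Field names mirror the proto in this KEP; sample values
# are hypothetical.
def index_devices(list_response):
    """Map each device ID to the pod/container metadata that owns it."""
    index = {}
    for pod in list_response["pod_resources"]:
        for container in pod["containers"]:
            for dev in container["devices"]:
                for device_id in dev["device_ids"]:
                    index[device_id] = {
                        "pod": pod["name"],
                        "namespace": pod["namespace"],
                        "container": container["name"],
                        "resource": dev["resource_name"],
                    }
    return index

sample = {
    "pod_resources": [
        {"name": "gpu-pod", "namespace": "default", "containers": [
            {"name": "main", "devices": [
                {"resource_name": "nvidia.com/gpu", "device_ids": ["GPU-0"]},
            ]},
        ]},
    ],
}
print(index_devices(sample)["GPU-0"]["pod"])  # -> gpu-pod
```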
The GRPC Service returns a single ListPodResourcesResponse, which is shown in proto below:
+```protobuf
+// PodResources is a service provided by the kubelet that provides information about the
+// node resources consumed by pods and containers on the node
+service PodResources {
+    rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
+}
+
+// ListPodResourcesRequest is the request made to the PodResources service
+message ListPodResourcesRequest {}
+
+// ListPodResourcesResponse is the response returned by List function
+message ListPodResourcesResponse {
+    repeated PodResources pod_resources = 1;
+}
+
+// PodResources contains information about the node resources assigned to a pod
+message PodResources {
+    string name = 1;
+    string namespace = 2;
+    repeated ContainerResources containers = 3;
+}
+
+// ContainerResources contains information about the resources assigned to a container
+message ContainerResources {
+    string name = 1;
+    repeated ContainerDevices devices = 2;
+}
+
+// ContainerDevices contains information about the devices assigned to a container
+message ContainerDevices {
+    string resource_name = 1;
+    repeated string device_ids = 2;
+}
+```
+
+### Potential Future Improvements
+
+* Add `ListAndWatch()` function to the GRPC endpoint so monitoring agents don't need to poll.
+* Add identifiers for other resources used by pods to the `PodResources` message.
+  * For example, persistent volume location on disk
+
+## Alternatives Considered
+
+### Add v1alpha1 Kubelet GRPC service, at `/var/lib/kubelet/pod-resources/kubelet.sock`, which returns a list of [CreateContainerRequest](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/cri/runtime/v1alpha2/api.proto#L734)s used to create containers.
+* Pros:
+  * Reuse an existing API for describing containers rather than inventing a new one
+* Cons:
+  * It ties the endpoint to the CreateContainerRequest, and may prevent us from adding other information we want in the future
+  * It does not contain any additional information that will be useful to monitoring agents other than device assignments, and contains lots of irrelevant information for this use-case.
+* Notes:
+  * Does not include any reference to resource names. Monitoring agents must identify devices by the device or environment variables passed to the pod or container.
+
+### Add a field to Pod Status.
+* Pros:
+  * Allows for observation of container to device bindings local to the node through the `/pods` endpoint
+* Cons:
+  * Only consumed locally, which doesn't justify an API change
+  * Device Bindings are immutable after allocation, and are _debatably_ observable (they can be "observed" from the local checkpoint file). Device bindings are generally a poor fit for status.
+
+### Use the Kubelet Device Manager Checkpoint file
+* Allows for observability of device to container bindings through what exists in the checkpoint file
+  * Requires adding additional metadata to the checkpoint file as required by the monitoring agent
+* Requires implementing versioning for the checkpoint file, and handling version skew between readers and the kubelet
+* Future modifications to the checkpoint file are more difficult.
+
+### Add a field to the Pod Spec:
+* A new object `ComputeDevice` will be defined and a new variable `ComputeDevices` will be added in the `Container` (Spec) object which will represent a list of `ComputeDevice` objects.
+```golang
+// ComputeDevice describes the devices assigned to this container for a given ResourceName
+type ComputeDevice struct {
+	// DeviceIDs is the list of devices assigned to this container
+	DeviceIDs []string
+	// ResourceName is the name of the compute resource
+	ResourceName string
+}
+
+// Container represents a single container that is expected to be run on the host.
+type Container struct {
+	...
+	// ComputeDevices contains the devices assigned to this container
+	// This field is alpha-level and is only honored by servers that enable the ComputeDevices feature.
+	// +optional
+	ComputeDevices []ComputeDevice
+	...
+}
+```
+* During Kubelet pod admission, if `ComputeDevices` is found non-empty, the specified devices will be allocated; otherwise, behaviour will remain the same as it is today.
+* Before starting the pod, the kubelet writes the assigned `ComputeDevices` back to the pod spec.
+  * Note: Writing to the API Server and waiting to observe the updated pod spec in the kubelet's pod watch may add significant latency to pod startup.
+* Allows devices to potentially be assigned by a custom scheduler.
+* Serves as a permanent record of device assignments for the kubelet, and eliminates the need for the kubelet to maintain this state locally.
+
diff --git a/mentoring/OWNERS b/mentoring/OWNERS
index d9b6c416..23413868 100644
--- a/mentoring/OWNERS
+++ b/mentoring/OWNERS
@@ -1,5 +1,6 @@
 reviewers:
 - parispittman
+- nikhita
 approvers:
 - parispittman
 - sig-contributor-experience-leads
diff --git a/mentoring/group-mentee-guide.md b/mentoring/group-mentee-guide.md
index 5430550d..4a4c7d12 100644
--- a/mentoring/group-mentee-guide.md
+++ b/mentoring/group-mentee-guide.md
@@ -23,7 +23,7 @@ Familiarize yourself with the [community membership requirements doc](/community
 These topics will be covered during bi-weekly standups/workshops. The suggested activities will be covered in the mentee's normal day to day. Know something that should be added?
Start a convo/add a PR - your comments are appreciated. ### Current Member Cohort Topics -* Effective communication in our our ecosystem +* Effective communication in our ecosystem * Kubernetes Governance 101 (what's a SIG?, OWNERS files, steering committee, etc.) * Identifying & understanding issue backlog and prioritization * Contributing to testing (how to run tests and create new ones) diff --git a/sig-apps/README.md b/sig-apps/README.md index b14670a4..e16aa66d 100644 --- a/sig-apps/README.md +++ b/sig-apps/README.md @@ -10,6 +10,8 @@ To understand how this file is generated, see https://git.k8s.io/community/gener Covers deploying and operating applications in Kubernetes. We focus on the developer and devops experience of running applications in Kubernetes. We discuss how to define and run apps in Kubernetes, demo relevant tools and projects, and discuss areas of friction that can lead to suggesting improvements or feature requests. +The [charter](charter.md) defines the scope and governance of the Apps Special Interest Group. + ## Meetings * Regular SIG Meeting: [Mondays at 9:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:00&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/1LZLBGW2wRDwAfdBNHJjFfk9CFoyZPcIYGWU7R1PQ3ng/edit#). diff --git a/sig-apps/charter.md b/sig-apps/charter.md new file mode 100644 index 00000000..9c15be14 --- /dev/null +++ b/sig-apps/charter.md @@ -0,0 +1,53 @@ +# SIG Apps Charter + +This charter adheres to the conventions described in the [Kubernetes Charter README] and uses +the Roles and Organization Management outlined in [sig-governance]. + +## Scope + +SIG Apps covers developing, deploying, and operating applications on Kubernetes with a focus on the application developer and application operator experience. 
+
+### In scope
+
+#### Code, Binaries and Services
+
+- APIs used for running applications (e.g., Workloads API)
+- Tools and documentation to aid in ecosystem tool interoperability around apps (e.g., Application CRD/Controller)
+- Grandfathered-in tools used to aid in the development and management of workloads (e.g., Kompose)
+
+#### Cross-cutting and Externally Facing Processes
+
+- A discussion platform for solving app development and management problems
+- Represent the needs and persona of application developers and operators
+
+### Out of scope
+
+- Code ownership of ecosystem tools. Discussion of the tools is in scope but ownership of them is outside the scope of Kubernetes aside from legacy situations
+- Do not recommend one way to do things (e.g., picking a template language)
+- Do not endorse one particular ecosystem tool
+
+## Roles and Organization Management
+
+This SIG adheres to the Roles and Organization Management outlined in [sig-governance]
+and opts-in to updates and modifications to [sig-governance].
+
+### Additional responsibilities of Chairs
+
+- Report the SIG status at events and community meetings wherever possible
+- Actively promote diversity and inclusion in the SIG
+- Uphold the Kubernetes Code of Conduct, especially in terms of personal behavior and responsibility
+- Chairs oversee the subproject creation process
+
+### Deviations from [sig-governance]
+
+- Generic technical leads are not appropriate for this SIG because sub-projects maintain their own processes
+- Chairs follow the Technical Leads process in the subproject creation process
+- Proposing and making decisions MAY be done without the use of KEPs so long as the decision is documented in a linkable medium.
+
+### Subproject Creation
+
+SIG Chairs follow the Technical Leads process defined in [sig-governance].
+
+[sig-governance]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md
+[sig-subprojects]: https://github.com/kubernetes/community/blob/master/sig-YOURSIG/README.md#subprojects
+[Kubernetes Charter README]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/README.md
diff --git a/sig-apps/minutes/2016-07-20.md b/sig-apps/minutes/2016-07-20.md
index c2968734..01d7bf70 100644
--- a/sig-apps/minutes/2016-07-20.md
+++ b/sig-apps/minutes/2016-07-20.md
@@ -2,7 +2,7 @@
 - Michelle Noorali gave an introduction and overview of the agenda
 - Janet Kuo gave an overview of Deployment features
-  - See her [blog post](http://blog.kubernetes.io/2016/04/using-deployment-objects-with.html )
+  - See her [blog post](https://kubernetes.io/blog/2016/04/using-deployment-objects-with/)
 - She used [minikube](https://github.com/kubernetes/minikube) for the local cluster set up during her demo
 - Saad Ali gave an overview of Volume features and things to look forward to around Volumes
 - Check out his [presentation](https://docs.google.com/presentation/d/17w7GqwGE8kO9WPNAO1qC8NyS7dRw_oLBwqKznD9WqUs)
diff --git a/sig-architecture/README.md b/sig-architecture/README.md
index 7daf1b82..87aa8b05 100644
--- a/sig-architecture/README.md
+++ b/sig-architecture/README.md
@@ -33,28 +33,36 @@ The Chairs of the SIG run operations and processes governing the SIG.
## Subprojects The following subprojects are owned by sig-architecture: -- **api** +- **architecture-and-api-governance** + - Description: [Described below](#architecture-and-api-governance) - Owners: + - https://raw.githubusercontent.com/kubernetes/community/master/contributors/design-proposals/architecture/OWNERS + - https://raw.githubusercontent.com/kubernetes-sigs/architecture-tracking/master/OWNERS - https://raw.githubusercontent.com/kubernetes/api/master/OWNERS - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/OWNERS -- **kubernetes-template-project** - - Owners: - - https://raw.githubusercontent.com/kubernetes/kubernetes-template-project/master/OWNERS -- **spartakus** - - Owners: - - https://raw.githubusercontent.com/kubernetes-incubator/spartakus/master/OWNERS -- **steering** +- **conformance-definition** + - Description: [Described below](#conformance-definition) - Owners: - - https://raw.githubusercontent.com/kubernetes/steering/master/OWNERS -- **architecture-tracking** + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/test/conformance/testdata/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/test/conformance/OWNERS +- **kep-adoption-and-reviews** + - Description: [Described below](#kep-adoption-and-reviews) - Owners: - - https://raw.githubusercontent.com/kubernetes-sigs/architecture-tracking/master/OWNERS -- **universal-utils** + - https://raw.githubusercontent.com/kubernetes/community/master/keps/OWNERS +- **code-organization** + - Description: [Described below](#code-organization) - Owners: + - https://raw.githubusercontent.com/kubernetes/contrib/master/OWNERS - https://raw.githubusercontent.com/kubernetes/utils/master/OWNERS -- **contrib** + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/vendor/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/third_party/OWNERS + - 
https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/OWNERS
+- **steering**
+  - Description: Placeholder until sigs.yaml supports committees as first-class groups. These repos are owned by the kubernetes steering committee, which is a wholly separate entity from SIG Architecture
   - Owners:
-    - https://raw.githubusercontent.com/kubernetes/contrib/master/OWNERS
+    - https://raw.githubusercontent.com/kubernetes/steering/master/OWNERS
+    - https://raw.githubusercontent.com/kubernetes-incubator/spartakus/master/OWNERS
+    - https://raw.githubusercontent.com/kubernetes/kubernetes-template-project/master/OWNERS

## GitHub Teams
@@ -77,12 +85,45 @@ Note that the links to display team membership will only work if you are a membe
 * [Charter](charter.md)

-## Processes owned and tracked by the SIG
+# Details about SIG-Architecture sub-projects
+
+## Architecture and API Governance
+
+Establishing and documenting design principles, documenting and evolving the system architecture, reviewing, curating, and documenting new extension patterns
+
+Establishing and documenting conventions for system and user-facing APIs, defining and operating the API review process, validating final API implementation consistency, co-owning top-level API directories with API machinery; maintaining, evolving, and enforcing the deprecation policy
+
+* [Kubernetes Design and Architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/architecture.md)
+* [Design principles](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/principles.md)
+* [API conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md)
+* [API Review process](https://github.com/kubernetes/community/blob/master/sig-architecture/api-review-process.md)
+* [Deprecation policy](https://kubernetes.io/docs/reference/deprecation-policy/)
+
+Please see the [Design
documentation](https://github.com/kubernetes-sigs/architecture-tracking/projects/4) and [API Reviews](https://github.com/kubernetes-sigs/architecture-tracking/projects/3) tracking boards to follow the work of this sub-project. Please reach out to folks in the [OWNERS](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/OWNERS) file if you are interested in joining this effort.
+
+## Conformance Definition
+
+Reviewing, approving, and driving changes to the conformance test suite; reviewing, guiding, and creating new conformance profiles
+
+* [Conformance Tests](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/conformance.txt)
+* [Test Guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/conformance-tests.md)
+
+Please see the [Conformance Test Review](https://github.com/kubernetes-sigs/architecture-tracking/projects/1) tracking board to follow the work for this sub-project. Please reach out to folks in the [OWNERS](https://github.com/kubernetes/kubernetes/blob/master/test/conformance/testdata/OWNERS) file if you are interested in joining this effort. This sub-project also overlaps significantly with the [Kubernetes Software Conformance Working Group](https://github.com/cncf/k8s-conformance/blob/master/README-WG.md). The GitHub group [cncf-conformance-wg](https://github.com/orgs/kubernetes/teams/cncf-conformance-wg) enumerates the folks on this working group.
Look for the `area/conformance` label in the kubernetes repositories to mark [issues](https://github.com/kubernetes/kubernetes/issues?q=is%3Aissue+is%3Aopen+label%3Aarea%2Fconformance) and [PRs](https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+is%3Aopen+label%3Aarea%2Fconformance) + +## KEP Adoption and Reviews + +Develop and drive technical enhancement review process + +* [KEP Process](https://github.com/kubernetes/community/blob/master/keps/0001-kubernetes-enhancement-proposal-process.md) +* [Template](https://github.com/kubernetes/community/blob/master/keps/0000-kep-template.md) + +Please see the [KEP Tracking](https://github.com/kubernetes-sigs/architecture-tracking/projects/2) board to follow the work of this sub-project. Please reach out to folks in the [OWNERS](https://github.com/kubernetes/community/blob/master/keps/OWNERS) file if you are interested in joining this effort. + +## Code Organization + +Overall code organization, including github repositories and branching methodology, top-level and pkg OWNERS of kubernetes/kubernetes, vendoring -[Architecture Tracking Repository](https://github.com/kubernetes-sigs/architecture-tracking/) +Please reach out to folks in the [OWNERS](https://github.com/kubernetes/kubernetes/blob/master/vendor/OWNERS) file if you are interested in joining this effort. 
-* [API Reviews](https://github.com/kubernetes-sigs/architecture-tracking/projects/3) -* [KEP Reviews](https://github.com/kubernetes-sigs/architecture-tracking/projects/2) -* [Conformance Test Review](https://github.com/kubernetes-sigs/architecture-tracking/projects/1) <!-- END CUSTOM CONTENT --> diff --git a/sig-architecture/api-review-process.md b/sig-architecture/api-review-process.md index a7c281c0..989de05f 100644 --- a/sig-architecture/api-review-process.md +++ b/sig-architecture/api-review-process.md @@ -20,7 +20,7 @@ Because expert reviewer bandwidth is extremely limited, the process provides a c * Maintain the high standards of the project, including positive user interactions with APIs -* Provide review regardless of method of API defininition (built-in, Extension API Server, or Custom Resource Definition) +* Provide review regardless of method of API definition (built-in, Extension API Server, or Custom Resource Definition) * Provide review over both tightly coupled external projects and in-tree API changes. diff --git a/sig-auth/README.md b/sig-auth/README.md index eb616882..b967ba7d 100644 --- a/sig-auth/README.md +++ b/sig-auth/README.md @@ -40,6 +40,89 @@ The Chairs of the SIG run operations and processes governing the SIG. * [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-auth) * [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fauth) +## Subprojects + +The following subprojects are owned by sig-auth: +- **audit-logging** + - Description: Kubernetes API support for audit logging. 
+ - Owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/auditregistration/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/apis/audit/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/audit/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/plugin/pkg/audit/OWNERS +- **authenticators** + - Description: Kubernetes API support for authentication. + - Owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubeapiserver/authenticator/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/auth/authenticator/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/kubernetes/typed/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/listers/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/pkg/apis/clientauthentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/plugin/pkg/client/auth/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/tools/auth/OWNERS +- **authorizers** + 
- Description: Kubernetes API support for authorization. + - Owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/rbac/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubeapiserver/authorizer/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubectl/cmd/auth/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/rbac/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/auth/authorizer/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/rbac/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/kubernetes/typed/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/kubernetes/typed/rbac/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/listers/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/listers/rbac/OWNERS +- **certificates** + - Description: Certificates APIs and client infrastructure to support PKI. 
+ - Owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/certificates/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/certificates/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/certificates/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/authentication/request/x509/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/util/cert/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/util/certificate/OWNERS +- **encryption-at-rest** + - Description: API storage support for storing data encrypted at rest in etcd. + - Owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/server/options/encryptionconfig/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/OWNERS +- **node-identity-and-isolation** + - Description: Node identity management (co-owned with sig-lifecycle), and authorization restrictions for isolating workloads on separate nodes (co-owned with sig-node). + - Owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/certificates/approver/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubelet/certificate/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/noderestriction/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/auth/authorizer/node/OWNERS +- **policy-management** + - Description: API validation and policies enforced during admission, such as PodSecurityPolicy. Excludes run-time policies like NetworkPolicy and Seccomp. 
+ - Owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/imagepolicy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/policy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/security/podsecuritypolicy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/policy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/imagepolicy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/policy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/imagepolicy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/security/podsecuritypolicy/OWNERS +- **service-accounts** + - Description: Infrastructure implementing Kubernetes service account based workload identity. + - Owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/serviceaccount/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubelet/token/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/serviceaccount/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/serviceaccount/OWNERS + ## GitHub Teams The below teams can be mentioned on issues and PRs in order to get attention from the right people. diff --git a/sig-autoscaling/README.md b/sig-autoscaling/README.md index 294f69ff..60159c9a 100644 --- a/sig-autoscaling/README.md +++ b/sig-autoscaling/README.md @@ -8,7 +8,7 @@ To understand how this file is generated, see https://git.k8s.io/community/gener ---> # Autoscaling Special Interest Group -Covers development and maintenance of componets for automated scaling in Kubernetes. 
This includes automated vertical and horizontal pod autoscaling, initial resource estimation, cluster-proportional system component autoscaling, and autoscaling of Kubernetes clusters themselves. +Covers development and maintenance of components for automated scaling in Kubernetes. This includes automated vertical and horizontal pod autoscaling, initial resource estimation, cluster-proportional system component autoscaling, and autoscaling of Kubernetes clusters themselves. ## Meetings * Regular SIG Meeting: [Mondays at 14:00 UTC](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (biweekly/triweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=14:00&tz=UTC). diff --git a/sig-aws/README.md b/sig-aws/README.md index 20436738..e2fb1054 100644 --- a/sig-aws/README.md +++ b/sig-aws/README.md @@ -10,6 +10,8 @@ To understand how this file is generated, see https://git.k8s.io/community/gener Covers maintaining, supporting, and using Kubernetes hosted on AWS Cloud. +The [charter](charter.md) defines the scope and governance of the AWS Special Interest Group. + ## Meetings * Regular SIG Meeting: [Fridays at 9:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:00&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/1-i0xQidlXnFEP9fXHWkBxqySkXwJnrGJP9OGyP2_P14/edit). 
@@ -43,6 +45,9 @@ The following subprojects are owned by sig-aws: - **aws-encryption-provider** - Owners: - https://raw.githubusercontent.com/kubernetes-sigs/aws-encryption-provider/master/OWNERS +- **aws-ebs-csi-driver** + - Owners: + - https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/OWNERS ## GitHub Teams diff --git a/sig-aws/charter.md b/sig-aws/charter.md new file mode 100644 index 00000000..8212cc67 --- /dev/null +++ b/sig-aws/charter.md @@ -0,0 +1,56 @@ +# SIG AWS Charter + +This charter adheres to the conventions described in the [Kubernetes Charter README] and uses the Roles and Organization Management outlined in [sig-governance]. + +## Scope + +SIG AWS is responsible for the creation and maintenance of subprojects (features/innovations) necessary to integrate AWS services for the operation and management of Kubernetes on AWS. SIG AWS also acts as a forum where Kubernetes on AWS users/developers can raise feature requests and support issues. SIG leads, in collaboration with SIG members, will make a best effort to triage known problems within one or two release cycles of an issue being reported. SIG AWS, in collaboration with SIG-Testing, SIG-Scalability and SIG-Docs, is responsible for the integration and maintenance of tests (e2e, periodic jobs, postsubmit jobs, etc.), scale tests (load, density), and documentation for the scope within the purview of this charter.
+ +### In scope + +Link to SIG [subprojects](https://github.com/kubernetes/community/tree/master/sig-aws#subprojects) + +#### Code, Binaries and Services + +Kubernetes integrations specific to AWS including: +- Integrations, interfaces, libraries and extension points for all AWS services such as IAM, storage, networking, loadbalancers, registry, security, monitoring/logging at the instance or container level +- Tools for Kubernetes APIs to work with AWS services including Amazon EKS +- Prow, testgrid, perf dashboard integrations to expand and maintain testing (e2e, jobs) and scale-testing (load, density) on AWS and Amazon EKS +- Support users on their issues and feature requests +- Documentation for all things Kubernetes on AWS + +#### Cross-cutting and Externally Facing Processes + +- Consult with other SIGs and the community on how to apply mechanisms owned by SIG + AWS. Examples include: + - Review escalation implications of feature and API designs as it relates to core Kubernetes components (etcd, kubelet, apiserver, controller manager, scheduler) + - CSI, CNI, CRI implementation and design + - Cloud provider implementation and design + - Best practices for hardening add-ons or other external integrations such as KMS, LB, others. + - Implementing and hardening tests, scale tests and documentation + +### Out of scope + +SIG AWS is not for discussing bugs or feature requests outside the scope of Kubernetes. For example, SIG AWS should not be used to discuss or resolve support requests related to AWS Services. It should also not be used to discuss topics that other, more specialized SIGs own (to avoid overlap). 
Examples of such scenarios include: +- Specification of CSI, CRI interfaces, cloudprovider binary (prefer: sig-storage, sig-node and sig-cloudprovider) +- Container runtime (prefer: sig-node and sig-network) +- Resource quota (prefer: sig-scheduling) +- Resource availability (prefer: sig-apimachinery, sig-network, sig-node) +- Detailed design and scope of tests or tooling to run tests (prefer: sig-testing) +- Detailed design and scope of scale tests or tooling to run scale tests (prefer: sig-scalability) +- Troubleshooting and maintenance of test jobs related to kops (prefer: sig-cluster-lifecycle) +- Reporting specific vulnerabilities in Kubernetes. Please report using these instructions: https://kubernetes.io/security/ + +## Roles and Organization Management + +This SIG adheres to the Roles and Organization Management outlined in [sig-governance] +and opts-in to updates and modifications to [sig-governance]. + +### Subproject Creation + +SIG AWS delegates subproject approval to Chairs. Chairs also act as Technical Leads in SIG AWS. See [Subproject creation - Option 1]. + +[sig-governance]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md +[sig-subprojects]: https://github.com/kubernetes/community/blob/master/sig-aws/README.md#subprojects +[Kubernetes Charter README]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/README.md +[Subproject creation - Option 1]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md#subproject-creation diff --git a/sig-azure/README.md b/sig-azure/README.md index 660aec79..0b2ea439 100644 --- a/sig-azure/README.md +++ b/sig-azure/README.md @@ -23,13 +23,14 @@ The [charter](charter.md) defines the scope and governance of the Azure Special The Chairs of the SIG run operations and processes governing the SIG.
* Stephen Augustus (**[@justaugustus](https://github.com/justaugustus)**), Red Hat -* Shubheksha Jalan (**[@shubheksha](https://github.com/shubheksha)**), Microsoft +* Dave Strebel (**[@dstrebel](https://github.com/dstrebel)**), Microsoft ### Technical Leads The Technical Leads of the SIG establish new subprojects, decommission existing subprojects, and resolve cross-subproject technical issues and decisions. * Kal Khenidak (**[@khenidak](https://github.com/khenidak)**), Microsoft +* Pengfei Ni (**[@feiskyer](https://github.com/feiskyer)**), Microsoft ## Contact * [Slack](https://kubernetes.slack.com/messages/sig-azure) diff --git a/sig-azure/charter.md b/sig-azure/charter.md index bf282276..87f0324f 100644 --- a/sig-azure/charter.md +++ b/sig-azure/charter.md @@ -53,7 +53,7 @@ _With regards to leadership roles i.e., Chairs, Technical Leads, and Subproject - SIG meets bi-weekly on zoom with agenda in meeting notes - SHOULD be facilitated by chairs unless delegated to specific Members -- SIG overview and deep-dive sessions organized for Kubecon +- SIG overview and deep-dive sessions organized for KubeCon/CloudNativeCon - SHOULD be organized by chairs unless delegated to specific Members - Contributing instructions defined in the SIG CONTRIBUTING.md @@ -97,4 +97,4 @@ Issues impacting multiple subprojects in the SIG should be resolved by SIG Techn [super-majority]: https://en.wikipedia.org/wiki/Supermajority#Two-thirds_vote [KEP]: https://github.com/kubernetes/community/blob/master/keps/0000-kep-template.md [sigs.yaml]: https://github.com/kubernetes/community/blob/master/sigs.yaml#L1454 -[OWNERS]: contributors/devel/owners.md
\ No newline at end of file +[OWNERS]: contributors/devel/owners.md diff --git a/sig-big-data/resources.md b/sig-big-data/resources.md index 1c21ff1a..c7b3b2d0 100644 --- a/sig-big-data/resources.md +++ b/sig-big-data/resources.md @@ -1,15 +1,61 @@ # Resources +## Kubernetes integration status by big data product + ### Spark -* [Spark on Kubernetes Design Proposal](https://docs.google.com/document/d/1_bBzOZ8rKiOSjQg78DXOA3ZBIo_KkDJjqxVuq0yXdew/edit#) -* [Spark Dynamic Allocation Proposal](https://docs.google.com/document/d/1S9OMnFaeSf_UUxWpMpvC7ERcWx-jDr2g85MWri3Hccc/edit?usp=sharing) -* [SPARK-JIRA](https://issues.apache.org/jira/browse/SPARK-18278) -* [Kubernetes Issue #34377](https://github.com/kubernetes/kubernetes/issues/34377) -* [External Repository](https://github.com/apache-spark-on-k8s/spark) + +[Apache Spark](https://spark.apache.org) is a distributed data processing framework. + +##### Status + +Kubernetes is supported as a mainline Spark scheduler since [release 2.3](https://spark.apache.org/releases/spark-release-2-3-0.html), see [the detailed documentation](https://spark.apache.org/docs/latest/running-on-kubernetes.html). +That work was done after the [Spark on Kubernetes original Design Proposal](https://docs.google.com/document/d/1_bBzOZ8rKiOSjQg78DXOA3ZBIo_KkDJjqxVuq0yXdew/edit#) +in the [apache-spark-on-k8s git repo](https://github.com/apache-spark-on-k8s/spark). + +##### Activities + +Enhancements are under development, with a good overview given [in this blog post](https://databricks.com/blog/2018/09/26/whats-new-for-apache-spark-on-kubernetes-in-the-upcoming-apache-spark-2-4-release.html). + +* Work is underway for Spark 2.4 to improve support and integration with HDFS. 
+ * Design Document: [How Spark on Kubernetes will access Secure HDFS](https://docs.google.com/document/d/1RBnXD9jMDjGonOdKJ2bA1lN4AAV_1RwpU_ewFuCNWKg/edit#heading=h.verdza2f4fyd) +* Shuffle service design + * Design Document [Improving Spark Shuffle Reliability](https://docs.google.com/document/d/1uCkzGGVG17oGC6BJ75TpzLAZNorvrAU3FRd2X-rVHSM/edit) + * JIRA issue [SPARK-25299: Use remote storage for persisting shuffle data](https://issues.apache.org/jira/browse/SPARK-25299) ### HDFS + +[Apache Hadoop HDFS](https://hadoop.apache.org/hdfs) is a distributed file system, the persistence layer for Hadoop. + +##### Status + +TODO, e.g. "No release yet." + +##### Activities + * [Data Locality Doc](https://docs.google.com/document/d/1TAC6UQDS3M2sin2msFcZ9UBBQFyyz4jFKWw5BM54cQo/edit) -* [External Repository](https://github.com/apache-spark-on-k8s/kubernetes-HDFS) +* ["HDFS on Kubernetes" git repository including Helm charts](https://github.com/apache-spark-on-k8s/kubernetes-HDFS) ### Airflow + +[Apache Airflow](https://airflow.apache.org) is a platform to programmatically author, schedule and monitor workflows. + +##### Status + +The [Kubernetes executor](https://airflow.apache.org/kubernetes.html) was introduced in Airflow [release 1.10.0](https://github.com/apache/incubator-airflow/blob/master/CHANGELOG.txt), with support for Kubernetes 1.10. + +##### Activities + * [Airflow roadmap](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71013666) + +### Flink + +[Apache Flink](https://flink.apache.org) is a distributed data processing framework. + +##### Status + +Flink 1.6 supports [running a session or job cluster on Kubernetes](https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html).
+ +##### Activities + +* [Native support for Kubernetes as a Flink runtime](https://issues.apache.org/jira/browse/FLINK-9953) +* [Lyft is working on an operator](https://lists.apache.org/thread.html/aa941030440c1d9e34c35c0caf5ddd2456755337fc34a4edebb32929@%3Cdev.flink.apache.org%3E) diff --git a/sig-cli/migrated-from-wiki/roadmap-kubectl.md b/sig-cli/migrated-from-wiki/roadmap-kubectl.md index 4fbf7a84..e0a9adcd 100644 --- a/sig-cli/migrated-from-wiki/roadmap-kubectl.md +++ b/sig-cli/migrated-from-wiki/roadmap-kubectl.md @@ -13,7 +13,7 @@ If you'd like to contribute, please read the [conventions](/contributors/devel/k ### Add new commands / subcommands / flags * [Simplify support for multiple files](https://github.com/kubernetes/kubernetes/issues/24649) * Manifest that can specify multiple files / http(s) URLs - * [Default manifest manifest](https://github.com/kubernetes/kubernetes/issues/3268) (ala Dockerfile or Makefile) + * [Default manifest](https://github.com/kubernetes/kubernetes/issues/3268) (ala Dockerfile or Makefile) * Unpack archive (tgz, zip) and then invoke “-f” on that directory * URL shortening via default URL prefix * [Imperative `set` commands](https://github.com/kubernetes/kubernetes/issues/21648) diff --git a/sig-cloud-provider/README.md b/sig-cloud-provider/README.md index 00e1c564..52cf9b5e 100644 --- a/sig-cloud-provider/README.md +++ b/sig-cloud-provider/README.md @@ -50,6 +50,11 @@ The following subprojects are owned by sig-cloud-provider: - **cloud-provider-vsphere** - Owners: - https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/OWNERS +- **cloud-provider-extraction** + - Owners: + - https://raw.githubusercontent.com/kubernetes/community/master/sig-cloud-provider/cloud-provider-extraction/OWNERS + - Meetings: + - Weekly Sync removing the in-tree cloud providers led by @cheftako and @d-nishi: [Thursdays at 13:30 PT (Pacific Time)](https://docs.google.com/document/d/1KLsGGzNXQbsPeELCeF_q-f0h0CEGSe20xiwvcR2NlYM/edit) 
(weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=13:30&tz=PT%20%28Pacific%20Time%29). ## GitHub Teams diff --git a/sig-cloud-provider/cloud-provider-extraction/OWNERS b/sig-cloud-provider/cloud-provider-extraction/OWNERS new file mode 100644 index 00000000..77ce3290 --- /dev/null +++ b/sig-cloud-provider/cloud-provider-extraction/OWNERS @@ -0,0 +1,8 @@ +reviewers: + - cheftako + - d-nishi +approvers: + - cheftako + - d-nishi +labels: + - sig/cloud-provider diff --git a/sig-cluster-lifecycle/README.md b/sig-cluster-lifecycle/README.md index e41f2abd..3ef22062 100644 --- a/sig-cluster-lifecycle/README.md +++ b/sig-cluster-lifecycle/README.md @@ -31,6 +31,8 @@ The Cluster Lifecycle SIG examines how we should change Kubernetes to make it ea * [Meeting recordings](https://www.youtube.com/playlist?list=PL69nYSiGNLP29D0nYgAGWt1ZFqS9Z7lw4). * kops Office Hours: [Fridays at 09:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=09:00&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/12QkyL0FkNbWPcLFxxRGSPt_tNPBHbmni3YLY-lHny7E/edit). +* Kubespray Office Hours: [Wednesdays at 07:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=07:00&tz=PT%20%28Pacific%20Time%29). + * [Meeting notes and Agenda](https://docs.google.com/document/d/1oDI1rTwla393k6nEMkqz0RU9rUl3J1hov0kQfNcl-4o/edit). 
## Leadership @@ -58,6 +60,9 @@ The following subprojects are owned by sig-cluster-lifecycle: - **cluster-api-provider-aws** - Owners: - https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/master/OWNERS +- **cluster-api-provider-digitalocean** + - Owners: + - https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-digitalocean/master/OWNERS - **cluster-api-provider-gcp** - Owners: - https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-gcp/master/OWNERS @@ -80,6 +85,7 @@ The following subprojects are owned by sig-cluster-lifecycle: - Owners: - https://raw.githubusercontent.com/kubernetes/kubeadm/master/OWNERS - https://raw.githubusercontent.com/kubernetes/kubernetes/master/cmd/kubeadm/OWNERS + - https://raw.githubusercontent.com/kubernetes/cluster-bootstrap/master/OWNERS - **kubeadm-dind-cluster** - Owners: - https://raw.githubusercontent.com/kubernetes-sigs/kubeadm-dind-cluster/master/OWNERS diff --git a/sig-contributor-experience/README.md b/sig-contributor-experience/README.md index 1dbe34ce..f989cd18 100644 --- a/sig-contributor-experience/README.md +++ b/sig-contributor-experience/README.md @@ -56,7 +56,7 @@ The following subprojects are owned by sig-contributor-experience: - https://raw.githubusercontent.com/kubernetes-sigs/contributor-site/master/OWNERS - **devstats** - Owners: - - Phillels + - https://raw.githubusercontent.com/kubernetes/community/master/sig-contributor-experience/devstats/OWNERS - **k8s.io** - Owners: - https://raw.githubusercontent.com/kubernetes/k8s.io/master/OWNERS diff --git a/sig-contributor-experience/contribex-survey-2018.csv b/sig-contributor-experience/contribex-survey-2018.csv new file mode 100644 index 00000000..fb63b2f8 --- /dev/null +++ b/sig-contributor-experience/contribex-survey-2018.csv @@ -0,0 +1,161 @@ +Respondent ID,Collector ID,End Date,Contributing Length,Level of Contributor Ladder,Interested in next level?,World Region,Time Zone,Contribute to other OSS?,Blocker: 
Code/Doc review,Blocker: Communication,Blocker: GH tools&processes (not our customized tooling),Blocker: Finding a/the right SIG,"Blocker: Our CI, labels, and crafted customized automation",Blocker: Debugging test failures,Blocker: Finding issues to work on,Blocker: Setting up dev env,Blocker: Having PRs rejected,Useful: /retest of flakes (fejta-bot) ,Useful: labeling of stale issues (fejta-bot),"Useful: issue commands like /assign, /kind bug (k8s-ci-robot)","Useful: PR commands like /approve, /lint (k8s-ci-robot) ",Useful: merging of approved PRs (k8s-merge-robot and k8s-ci-bot),Least Useful Tool/Something that needs to be automated,current notification volume and utility,Which areas could use additional automation?,Upstream supported at employer?,"How often do you contribute upstream (code, docs, issue triage, etc.)?",Contribute: code to k/k,Contribute: code in a k/* GH org,Contribute: Docs,Contribute: Testing and CI,Contribute: Events&Advocacy,Contribute: Community & PM; SIG Chair etc.,"Contribute: Plugins & Drivers (CSI, CNI, cloud providers)","Contribute: Related projects (Kubeadm, Helm, container runtimes, etc.)",Contribute: Not yet,Contribute: Other,Make project easier to contribute?,Attended: KC EU 2017,Attended: KC NA 17,Attended: KC EU 18,Attending: KC CN 18,Attending: KC NA 18,Attending: KC EU 19,"Attended: Ecosystem events, eg. 
Helm Summit",Attended: Other confs with a Kubernetes track (like DockerCon or ContainerDay),Does not attend conferences,How to make ContribSummits more valuable?,Attended: # of ContribSummits,Useful@ThursMtg: Demo,Useful@ThursMtg: KEP,Useful@ThursMtg: DevStats,Useful@ThursMtg: Release,Useful@ThursMtg: SIG Updates,Useful@ThursMtg: Announcements,Useful@ThursMtg: Shoutouts,Misc Thurs Mtg feedback,Most Important Project: Mentoring programs,Most Important Prj: GH Mgmt,Most Important Proj: Contributor Summits,Most Important Proj: Contributor Site,Most Important Proj: CommPlatform Consolidation/other options,Most Important Proj: DevStats,Most Important Proj: Keeping community safe,What projects are missing?,Generic Project Groupings from BN,Use freq: Google Groups/Mailing Lists,Use freq: Slack,Use freq: discuss.kubernetes.io,Use freq: Zoom Mtgs,"Use freq: GH (comments, issues, prs)","Use freq: Unofficial(Twitter, Reddit, etc.)",Use freq: SO,Use freq: YT Recordings,"Use freq: GDocs/Forms/Sheets, etc (meeting agendas, etc)",Check for news: k-dev ML,Check for news: discuss.kubernetes.io,Check for news: contribex ML,Check for news: Slack,Check for news: Twitter read first ,Check for news: A dedicated contributor site read first,Check for news: Kubernetes blog read first ,Check for news: k/community repo in GH (Issues and/or PRs) read first,Check for news: Other,Value in Slack?,HelpWanted &/or GoodFirstIssue label usage?,HelpWanted&GoodFirstIssue Other Comments,Interested in mentoring GSoC or Outreachy?,Watched or participated in MoC?,Is MoC useful?,Mentoring Blocker?
+10249522900,216437765,10/01/2018 2:16:15 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+03:00,2-4,1,4,1,1,1,1,5,2,2,1,n/a,1,n/a,n/a,"So far haven't been able to use most of the commands, since org membership process has met some issues.",Lots of notifications but they are useful,Pinging reviewers to either approve or disapprove PRs and issues.,"Yes, it’s part of my job",Several times a month,Core code inside of kubernetes/kubernetes,n/a,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"I would have time to contribute, but so far me asking for things to do has met only silence on Slack channels. Would be nice if mentoring would be set up.",n/a,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,N/A,1,1,4,2,4,4,4,2,N/A,1,n/a,n/a,n/a,n/a,n/a,n/a,N/A,,4,5,2,3,5,1,1,3,5,n/a,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Other (please specify):,At SIG Node the labels aren't really used.,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10248186342,216437765,09/30/2018 10:00:19 AM,Just started,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",Europe,UTC+01:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10247634817,216437765,09/29/2018 9:53:32 PM,1-2 years,Approver,"Yes, but not sure I have time.",North America,UTC-08:00,2-4,3,1,4,4,2,2,3,4,2,1,1,1,n/a,1,/lgtm triggers /approve if I’m an approved? Madness! A weaker action should not trigger a stronger action based on my social status.,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),Don’t know,"Yes, it’s part of my job",Several times a month,n/a,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,Use Gerrit,Kubecon Europe 2017,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,1,2,4,4,4,4,5,5,N/A,n/a,2,n/a,4,n/a,n/a,7,"Better, clearer documentation that’s organized",Better contributor documentation,3,5,1,5,4,2,1,1,5,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10247081052,216437765,09/29/2018 7:54:55 AM,2-3 years,Approver,"Yes, doing it on my own.",North America,UTC+06:00,2-4,2,4,2,1,4,2,1,1,1,n/a,n/a,n/a,1,1,"fejta-bot, I have never seen it respark conversation",Way too many notifications with no benefits,Automation onboarding,"No, but I’m able to use “free” time at work",A few times a week,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,n/a,0,3,5,3,4,5,5,3,No,n/a,n/a,n/a,4,n/a,n/a,n/a,N/A,,2,5,1,5,5,3,2,5,5,n/a,n/a,n/a,n/a,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,2,Employer doesn't support spending time mentoring
+10246592845,216437765,09/28/2018 11:06:00 PM,Just started,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",Asia,UTC+05:30,"None, Kubernetes is my first one!",1,1,1,1,1,2,2,3,1,1,1,1,1,1,NA,Lots of notifications but they are useful,"An issue can have multiple PRs. But with current automation, issue gets closed if only one PR is closed. Need automation to check if all associated issues are closed and keep track of count.","No, but I’m able to use “free” time at work",A few times a week,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Could not think of any...,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,3,3,3,3,3,3,3,N/A,1,2,n/a,4,5,n/a,n/a,N/A,,3,3,1,1,5,2,2,5,3,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,A dedicated contributor site,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Other (please specify):,Didn't create issue,"A - Yes, I would love to mentor one or both programs",No,3,Don't know enough to mentor
+10246539346,216437765,09/28/2018 10:23:01 PM,3+ years,Subproject Owner,"Yes, but would like mentorship.",North America,UTC-05:00,One more,2,4,2,4,2,2,4,1,2,1,n/a,1,1,1,n/a,Right notifications at the right frequency,non cloud infrastructure,"Yes, it’s part of my job",A few times a year,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,n/a,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,n/a,n/a,n/a,5,5,5,n/a,It is already better. :),n/a,n/a,n/a,n/a,5,n/a,7,N/A,,4,4,1,3,4,1,2,2,2,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10246240889,216437765,09/28/2018 8:03:43 PM,3+ years,Approver,"No, I'm already an owner",North America,UTC-03:30,4+,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10246162130,216437765,09/28/2018 7:29:33 PM,Less than 6 months,Approver,"Yes, but not sure I have time.",North America,UTC-05:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10246160102,216437765,09/28/2018 7:53:04 PM,Less than 6 months,Org Member,"Yes, but would like mentorship.",North America,UTC-08:00,"None, Kubernetes is my first one!",2,4,1,2,1,1,4,1,3,1,n/a,1,n/a,1,n/a,Lots of notifications but they are useful,e2e tests,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,n/a,n/a,Testing & Infrastructure,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon China 2018,Kubecon North America 2018,n/a,n/a,n/a,n/a,i have to experience it first,0,5,5,3,3,4,2,2,"have more feedback/discussion on PR, Pull request",1,n/a,3,n/a,n/a,6,n/a,deep dive of all components is needed,Deep Dives and Code Base Tour Videos/Content,1,2,1,4,4,1,2,3,4,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,Kubernetes blog,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"A - Yes, I would love to mentor one or both programs",No,1,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10245935128,216437765,09/28/2018 6:25:08 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Asia,UTC+08:00,One more,4,5,4,3,4,3,3,5,3,n/a,n/a,1,1,n/a,N/A,Right notifications at the right frequency,N/A,It’s entirely on my own time,Don’t know yet,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Don’t contribute yet, hoping to start soon",n/a,N/A,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,5,5,5,5,5,5,5,N/A,1,n/a,n/a,n/a,n/a,n/a,n/a,N/A,,3,3,3,3,3,1,3,2,3,n/a,n/a,n/a,n/a,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"A - Yes, I would love to mentor one or both programs",Yes,3,Other (please specify):
+10243938380,216437765,09/27/2018 9:17:45 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+02:00,One more,3,3,2,3,2,4,4,2,4,1,n/a,1,1,1,Auto target release assignment to PRs (to provide a priority in review/merge process),Lots of notifications but they are useful,Dashboards | Finding a mentor | Navigating code base / dependencies -> documentation or visualization.,"No, but I’m able to use “free” time at work",Several times a month,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,N/A,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,I did not participate in one yet. Happy to help during the next one in Europe.,0,4,4,5,4,5,5,4,"At the beginning I thought there should be more context for new comers. Now I have mixed feelings, because that would promote being inclusive at the risk of becoming boring for regulars",1,2,n/a,4,n/a,n/a,n/a,"A place where I can find the list of items contrib-experience is working on, a bit more detailed plans and scopes as well as the priorities driving their execution. ",ContribEx proj mgmt,4,5,2,4,3,2,2,3,4,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10243887058,216437765,09/27/2018 8:34:56 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-07:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10243855578,216437765,09/27/2018 8:22:50 PM,Just started,Had no idea this was even a thing,Not really,North America,UTC-04:00,4+,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10243848503,216437765,09/27/2018 8:37:14 PM,Less than 6 months,Org Member,"Yes, doing it on my own.",North America,UTC-07:00,"None, Kubernetes is my first one!",1,1,1,1,1,3,1,1,2,1,n/a,1,n/a,1,I think they're all useful.,Lots of notifications but they are useful,"Can we have a tool to smartly detect whether a PR has unnecessary un-squashed commits? Like this one https://github.com/kubernetes/kubernetes/pull/68289 has 3 commits, but should be squashed into 1. It will make people question k8s's code quality (or process) when they see those non-sense commits in the main branch history.","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,Advocacy and events,n/a,n/a,n/a,n/a,n/a,I think it's friendly enough to new contributors.,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,4,5,5,4,2,3,3,No at this moment.,n/a,n/a,n/a,4,n/a,n/a,n/a,N/A,,2,5,1,2,5,2,1,1,2,n/a,n/a,n/a,Slack,Twitter,A dedicated contributor site,Kubernetes blog,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",Yes,4,Not enough time in the week
+10243846001,216437765,09/27/2018 8:39:44 PM,Less than 6 months,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-08:00,2-4,2,2,4,3,3,3,2,2,2,1,1,1,1,1,"the /assign is a bit confusing. In here you need to assign to someone who is a org member.. However, in generic terms if i am working on a Issue then I would think I would /assign it to myself. I think the overall github workflow is my least favorite part. I would rather prefer something like gerrit frontend for reviews and merges. A lot of other open source projects use that mechanism. for .e.g openstack. And I think that is surely a much better way to do the review/rebase/merge process.",Right notifications at the right frequency,I think the current set of automation is fine.,"Yes, it’s part of my job",A few times a week,n/a,n/a,n/a,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,no suggestions at the moment,n/a,Kubecon North America 2017,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Based on the one conference I did attend, I loved it",1,3,3,3,3,3,3,3,nothing now,n/a,n/a,n/a,n/a,5,n/a,n/a,not sure at the moment,,3,5,1,4,4,1,1,3,4,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",Yes,4,Don't know enough to mentor
+10243783540,216437765,09/27/2018 8:08:16 PM,2-3 years,Subproject Owner,"Yes, but not sure I have time.",North America,UTC-07:00,4+,5,5,3,3,4,5,4,5,1,1,1,n/a,1,1,"Disagreement on what some of the labels and process are, and the difference in how they are per project.",Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),suggesting people to add as reviewers and approvers. We cannot bottleneck on the same 7 or so people.,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,Make it easier to get permissions to do things. Make it easy to have people help other people and get to the point where the have the permissions. We're having massive sprawl because it's easier to do things outside of the project than it is to do it inside.,n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,I think they're good.,3,1,1,1,1,1,1,1,"I stopped showing up after the 40th version of ""here's how we at XYZ company install kubernetes""",1,2,n/a,n/a,n/a,6,n/a,GETTING MORE PEOPLE INTO REVIEWERS/APPROVERS. ,Reviewer and Approver Growth,5,5,2,4,5,3,2,3,5,n/a,n/a,n/a,n/a,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),likely that this never reaches me,no value for anyone,Yes,n/a,"B - No, I can’t/don’t want to",No,3,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10243694700,216437765,09/27/2018 8:01:24 PM,1-2 years,Org Member,"Yes, but not sure I have time.",North America,UTC-04:00,"None, Kubernetes is my first one!",2,2,2,2,2,3,3,4,2,1,1,1,1,1,"For reasons that are currently being debated, we've had trouble in sig-multicluster with the coupling of /approve /lgtm labels that have lead to unexpected merging of PRs. It seems that a better system is being developed through the conversation currently taking place. Otherwise, all of these tools have been very useful in my experience.","Right notifications are being made, but too frequently",I'm not sure,I’m a student,Several times a month,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"I have been lucky to have great support in getting to a point where I have a working development environment, introductions into the community and mentoring during my ramp up to contributing regularly to kubernetes (which was far more frequent than currently) so at this point, I don't feel like there are ways the project could make contributing easier for me.",Kubecon Europe 2017,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,1,4,4,4,4,4,4,5,"No, I think it's useful.",1,n/a,n/a,n/a,n/a,n/a,7,New Contributor playground. I'm not sure why it's not there but I think it's important to make sure there is support for new contributors to get up and running.,New contributor playground,4,5,2,4,4,2,1,3,4,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",Yes,5,Not enough time in the week
+10243216424,216437765,09/27/2018 4:26:23 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-08:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10243001543,216437765,09/27/2018 2:53:40 PM,Less than 6 months,Had no idea this was even a thing,"Yes, but not sure I have time.",North America,UTC-05:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10242890962,216437765,09/27/2018 2:31:05 PM,6 months -1 year,Org Member,"Yes, doing it on my own.",Europe,UTC+02:00,One more,3,3,2,2,1,1,3,3,3,1,n/a,1,1,1,N/A,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),"Ability to subscribe for certain PRs, issues and test failures","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,Test automation (CI/CD) is just awful right now. Its UX is bad and I can hardly find why a test is failing. The flakyness of tests is an issue too.,n/a,n/a,Kubecon Europe 2018,n/a,n/a,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,3,1,1,3,3,3,3,N/A,1,n/a,n/a,n/a,n/a,n/a,n/a,N/A,,2,5,1,4,5,1,1,4,4,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,Zoom meetings,"yes, for both users and contributors",Yes,n/a,"A - Yes, I would love to mentor one or both programs",No,3,Don't know enough to mentor
+10242540704,216437765,09/27/2018 10:51:38 AM,6 months -1 year,Org Member,"Yes, but would like mentorship.",Asia,UTC+08:00,2-4,1,1,1,1,1,1,3,1,1,1,1,1,1,1,I hope we can send a email or notification to assigners if the PR n/a for some days.,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),I often absences from the SIG meetings due to the time zone. So I hope I can receive an email or notification of the meeting summary document or video.,"Yes, it’s part of my job",A few times a week,Core code inside of kubernetes/kubernetes,n/a,n/a,Testing & Infrastructure,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon China 2018,n/a,n/a,n/a,n/a,n/a,I hope we can have more Contributor Summits in China.,n/a,5,5,4,4,5,5,4,I hope we can output a summary document of each meeting.,n/a,n/a,n/a,4,n/a,n/a,n/a,Nothing,,5,5,3,3,5,1,5,3,5,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"A - Yes, I would love to mentor one or both programs",Yes,5,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10241906398,216437765,09/27/2018 2:12:15 AM,3+ years,Subproject Owner,"No, I'm already an owner",North America,UTC-07:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10241687270,216437765,09/27/2018 12:29:17 AM,2-3 years,Org Member,"Yes, but would like mentorship.",North America,UTC-07:00,One more,2,4,4,2,5,5,5,5,5,1,1,1,1,1,/lint I never knew it existed. hence did not know how to use it offline.,Way too many notifications with no benefits,- Ability to build and test as a developer the way current test-infra is building.,"Yes, it’s part of my job",Don’t know yet,n/a,n/a,n/a,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,"Better release management and compatibility matrix of components involved around the project (such as compatiblity matrix for kubernetes, containerd and CRI plugins, CSI, CNI, helm, docker) etc.,",Kubecon Europe 2017,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,Last year the mentors were helping in career mentoring. Not mentoring people in being a better contributor summit. I think we need to focus more on helping people be a better contributor.,2,1,2,3,1,2,1,1,The notes writing is sometimes vague if you miss the meeting. You have to watch the recordings. May be we could do a better job at pre-writing some sections that the speaker wants to cover in the notes section so that we have more time while write live notes.,1,2,3,n/a,5,n/a,7,Offline communities such as meetups need better guidelines and toolset.,Meetups,3,1,3,1,2,2,3,2,1,n/a,n/a,n/a,Slack,Twitter,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",Yes,2,Employer doesn't support spending time mentoring
+10241675978,216437765,09/27/2018 12:06:28 AM,2-3 years,Org Member,"Yes, but would like mentorship.",North America,UTC-07:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10241578617,216437765,09/26/2018 11:58:02 PM,1-2 years,Reviewer,"Yes, doing it on my own.",North America,UTC-08:00,4+,2,3,2,1,1,5,2,2,2,1,1,1,1,1,"""/retest"" why is this needed SOOOO OFTEN? Also could it self report when manually run ten times in a row on PR that there's a bug somewhere?",Right notifications at the right frequency,"Noting which line(s) in which OWNERS file(s) are seeming overworked...ie: if somebody's getting 5000 notifications via GitHub each day maybe it's not the notifications that's the problem but rather ownership and delegation? Acknowledging that individuals are buried in notifications, how does one go about understanding how to raise a priority interrupt on somebody or some SIG? Email address for direct email, Slack ID for ping, timezone, vacation/presence, etc. are reasonably obfuscated. A message of ""can anybody help with issue XX or PR YY"" in a SIG's slack channel seems in my experience to have less 50% chance of triggering action. Adding the SIG leaders or appropriate OWNERS to a @mention there similarly find low rate of success, but perhaps this is inactivity in specific SIGs. Are SIG leads using the https://k8s-gubernator.appspot.com/pr dashboard for their own queue or is it also too full of to-do items? Could a similar dashboard show thing things for example priority/critical-urgent, especially if they're ""languishing"" for some metric thereof? 
Could ""active contributor in area, but not in reviewer or approver list"" be tracked and suggested quarterly to SIG leadership for promotion, for example around the same time as their SIG update at The Community Meeting, where they can then announce how they're pro-actively growing their contributor base and solving ""The Approver Problem""?","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,"While there is undeniably friction (and important focus on continual improvement), for a project as large and complex as kubernetes I just want to say a huge kudos to those who've done work to make contributing generally easy. The number one pain point I hear about from experienced developers newly coming to k8s is confusion about the state of presubmit test automation and why things feel so flakey in CI. Things like joining groups/channels/meetings, reading docs, make a GitHub account, sign the CLA, and even the custom labeling are sufficiently clear to a newcomer who has done some development before and reads what the bot tells them in response to their actions. But when they make a first even trivial docs or code fix to validate they understood the process enough to get a PR merged...and it all fails, they don't know what to do and they can't be expected (yet) as newcomers to go triage cross-project infrastructure flakes. Overall improving CI signal so it more consistently yields true positives and true negatives is a need.",Kubecon Europe 2017,Kubecon North America 2017,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,Kubecon Europe 2019,"Ecosystem events, eg. 
Helm Summit",n/a,n/a,"continue to hone the content toward being completely pre-canned and reusable, modulo some small bits of annual updating to capture changes in community process, repo layout, etc.",1,4,4,4,3,5,3,3,Bring more consistency to the SIG updates and up the quality by requiring the SIG leadership presenting to have materials pre-shared instead of talking rapidly off the cuff with a volunteer trying to scribe on a topic for which they're not a subject matter expert.,1,n/a,3,n/a,n/a,n/a,n/a,"A corporate focus, but pragmatic. As the project grows, one dimension of its next level growth will be more corporate involvement from the 70+(?!) vendors/distributors/hosting providers. Many of these newcomers will have little open source experience, yet they will have been tasked with accomplishing some product related goal (implement a feature, enable platform specific CI, measure conformance, etc.). They may need special guidance so they do not overwhelm or undermine the project an instead can constructively and pragmatically help their sponsor companies navigate deeper kubernetes engagement. It's also important to not become OpenStack.",Onboarding vendor contributors in a nondisruptive way,4,5,3,5,5,2,1,4,5,kubernetes-dev mailing list,n/a,n/a,Slack,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",Yes,5,Not enough time in the week
+10241229093,216437765,09/26/2018 9:10:35 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+02:00,"None, Kubernetes is my first one!",2,1,2,3,1,2,2,1,1,n/a,n/a,1,1,1,/lint,Right notifications at the right frequency,don't know yet,I’m a student,Don’t know yet,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,N/A,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,n/a,n/a,n/a,3,3,3,n/a,N/A,n/a,2,3,n/a,n/a,n/a,n/a,N/A,,2,5,1,2,4,1,4,3,2,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Other (please specify):,Only contributed,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10241066609,216437765,09/26/2018 8:13:49 PM,2-3 years,Subproject Owner,"No, I'm already an owner",Europe,UTC+03:00,One more,5,3,2,3,3,4,4,2,2,1,n/a,1,1,1,"On a day-to-day basis I benefit the least from the automatic labeling of stale issues, but on a project level it's a good thing so I don't mind it.",Lots of notifications but they are useful,I don't know at the moment,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,Kubecon Europe 2017,Kubecon North America 2017,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,Kubecon Europe 2019,n/a,n/a,n/a,"N/A, I think the current format is good",3,3,4,3,4,5,4,3,N/A,1,n/a,n/a,n/a,n/a,n/a,n/a,N/A,,3,5,2,4,5,3,2,4,5,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"A - Yes, I would love to mentor one or both programs",No,3,"Nothing, am already a mentor"
+10240775393,216437765,09/26/2018 6:31:08 PM,1-2 years,Subproject Owner,"Yes, but not sure I have time.",North America,UTC-08:00,2-4,2,3,1,3,2,2,4,4,2,1,1,1,1,1,"All the tooling mentioned above is useful. I wish process/tooling for development was better/faster. For example, faster build and release.",Way too many notifications with no benefits,"Local - build, release, and test - faster, requiring less resources, more automated.","Yes, it’s part of my job",A few times a year,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,n/a,n/a,n/a,n/a,n/a,Find better ways to leverage new community member's desire to contribute.,n/a,n/a,Kubecon Europe 2018,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,"Coordinate with events, perhaps have it the same day as tutorials - in the afternoon. I like it when sig-updates/meetings are paired with the conference. For example, kubecon EU.",1,4,2,2,3,3,4,2,"Email agenda or at least sig-updates to a list, so that I can determine whether to attend or watch the session later without requiring me to go check the agenda.",1,2,3,n/a,n/a,n/a,7,"Improving build times or local development. More tests, which should probably have its own track. More on how users (not developers) can contribute.",Better contributor documentation,5,4,3,3,4,1,3,3,3,kubernetes-dev mailing list,Dedicated discuss.k8s.io forum for contributors,Contributor Experience mailing list,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10240715265,216437765,09/26/2018 6:01:32 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Asia,UTC-05:00,"None, Kubernetes is my first one!",1,2,3,4,5,2,1,2,2,1,n/a,n/a,n/a,n/a,n/a,Way too many notifications with no benefits,n/a,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,as,Kubecon Europe 2017,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,as,as,1,2,2,2,1,1,1,as,1,n/a,n/a,n/a,n/a,n/a,n/a,as,,1,2,2,2,2,2,1,2,2,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",n/a,n/a,n/a,n/a,n/a,n/a
+10240539297,216437765,09/30/2018 6:13:18 AM,2-3 years,Approver,"Yes, doing it on my own.",Asia,UTC+08:00,4+,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10240281768,216437765,09/26/2018 3:09:56 PM,Less than 6 months,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+02:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10240177938,216437765,09/26/2018 2:37:07 PM,3+ years,Approver,"Yes, but not sure I have time.",Europe,UTC+02:00,2-4,5,3,2,2,2,4,2,1,2,1,n/a,1,1,1,all tool are resonably useful,Way too many notifications with no benefits,"Cross-repo e2e tests (""stable"" kubernetes/kubernetes with another kubernetes/repo master, ""stable"" kubernetes/repo with kubernetes/kubernetes master, both master and so on).","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,"Faster reviews! In my SIG, I can find a reviewer easily, but anything cross-SIG is really painful to me. Even as a seasoned Kubernetes contributor I need to beg for reviews and approvals.",Kubecon Europe 2017,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,n/a,2,4,3,3,3,3,3,3,n/a,n/a,n/a,3,n/a,n/a,n/a,n/a,"Some ""developer news"" - often I find out that the way we did things in the past does not work any longer and a new process was established. These changes should be documented somewhere and announced on kubernetes-devel. Examples from 1.12: - CRD is now preferred for new API objects - coordination.k8s.io/Lease is now preferred for leader-election ",Contributor / Dev Announcement News,5,4,1,4,5,5,1,3,5,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10239891071,216437765,09/26/2018 11:25:00 AM,Just started,Org Member,"Yes, but would like mentorship.",Europe,UTC+01:00,One more,2,2,2,2,2,1,3,3,3,1,1,1,1,1,"Not clear what permissions are needed to issue commands like /assign, /labels, /milestone etc",Lots of notifications but they are useful,- Consolidation of stale/frozen issues - Better issue visualisation based on SIG areas - More automation on sig-release process overall,It’s entirely on my own time,Every day,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,1,3,3,3,3,3,3,3,n/a,1,2,n/a,4,n/a,n/a,n/a,"Code tour of the Kubernetes repo, but very detailed / deep dive. A tree of directories with text around them, a series of videos.",Deep Dives and Code Base Tour Videos/Content,5,5,2,4,4,1,1,2,4,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,3,Don't know enough to mentor
+10239602078,216437765,09/26/2018 6:31:38 AM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-07:00,One more,1,1,1,1,1,1,1,1,1,n/a,n/a,n/a,n/a,1,N/A,Lots of notifications but they are useful,N/A,"Yes, it’s part of my job",Don’t know yet,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Don’t contribute yet, hoping to start soon",n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,3,3,3,3,3,3,3,N/A,1,2,n/a,n/a,n/a,n/a,n/a,N/A,,1,1,1,1,1,1,1,1,1,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,no value for anyone,"No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,1,Don't know enough to mentor
+10239376480,216437765,09/26/2018 3:55:33 AM,1-2 years,Subproject Owner,"No, I'm already an owner",Asia,UTC+08:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10239217297,216437765,09/26/2018 2:20:46 AM,2-3 years,"I’m not an org member yet, but working on it","Yes, doing it on my own.",North America,UTC-08:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10239165899,216437765,09/26/2018 2:02:08 AM,3+ years,Subproject Owner,"No, I'm already an owner",North America,UTC-08:00,"None, Kubernetes is my first one!",1,1,1,1,1,1,1,1,1,1,n/a,n/a,n/a,1,n/a,"Right notifications are being made, but too frequently",management of github notifications,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,n/a,2,2,2,3,5,4,5,1,n/a,n/a,2,3,n/a,n/a,n/a,n/a,n/a,,4,5,1,4,4,1,2,2,4,kubernetes-dev mailing list,n/a,n/a,n/a,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,1,Not enough time in the week
+10239107208,216437765,09/26/2018 1:37:53 AM,1-2 years,Subproject Owner,"No, I'm already an owner",North America,UTC-07:00,One more,2,1,3,2,1,3,1,1,1,1,1,n/a,n/a,1,"Maybe /approve, /lint is least useful, but they are all useful in their own ways.",Way too many notifications with no benefits,None comes to my mind,"Yes, it’s part of my job",A few times a week,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,"Github notifications can be improved. For example, right now github sends almost 1 email per test failure for PRs. An email for test failure of PR that I review is not important for me, but when there is comment or a new commit, I want to be notified. There is no good way of distinguishing these notifications. Github diff tool is not good. It does not show diffs compared to the previously reviewed code either. I think improvements to Github could possibly be one of the most useful areas of improvement.",n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,1,3,2,2,4,3,3,3,N/A,n/a,2,n/a,4,n/a,n/a,n/a,-,,4,5,2,3,5,1,1,3,4,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,SIG mailing list,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,3,"Nothing, am already a mentor"
+10239063862,216437765,09/26/2018 1:06:40 AM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-07:00,One more,2,2,2,2,2,2,1,1,1,n/a,n/a,n/a,1,1,Allow cc of non-k8s members (for PRs integrating with other environments or projects),Lots of notifications but they are useful,Testing,"Yes, it’s part of my job",Don’t know yet,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,4,4,3,4,3,3,3,N/A,1,n/a,n/a,n/a,n/a,n/a,n/a,N/A,,1,1,1,1,1,1,1,1,1,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,no value for anyone,"No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,1,Don't know enough to mentor
+10239007975,216437765,09/26/2018 12:40:37 AM,2-3 years,Subproject Owner,"No, I'm already an owner",North America,UTC-08:00,4+,4,4,3,2,3,4,3,4,3,n/a,1,n/a,n/a,n/a,In general all of the custom k8s bot stuff is confusing and frustrating. I’ve noticed my contributions have gone down because I just don’t understand it or trust it. Just seems really over engineered.,Lots of notifications but they are useful,Creating new repositories,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,Advocacy and events,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,Lose the monorepo,Kubecon Europe 2017,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,Less presentations more productivity - nobody wants another lecture from elite Googlers,4,2,2,1,4,4,4,1,Nope it’s really good,n/a,n/a,n/a,n/a,n/a,n/a,7,All the technical things that are wrong with k8s,,5,5,1,5,4,5,1,1,4,kubernetes-dev mailing list,n/a,n/a,Slack,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,3,Not enough time in the week
+10238882946,216437765,09/25/2018 11:54:10 PM,Less than 6 months,Org Member,"Yes, doing it on my own.",North America,UTC-07:00,One more,1,1,1,1,1,3,2,1,1,1,1,1,1,1,I find the toolset absolutely amazing!! I wish other projects can borrow these tools and bots :-),Lots of notifications but they are useful,Some sort of reminder to reviewers/approvers beyond say 72 hrs if they have not had a chance to look/leave comments/LGTM/Approve,"No, but I’m able to use “free” time at work",Several times a month,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,Nothing right now,n/a,3,3,3,3,3,3,3,n/a,n/a,n/a,3,4,n/a,n/a,n/a,Nothing that I can think of,,5,3,1,2,4,1,1,1,2,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10238632679,216437765,09/25/2018 10:10:18 PM,1-2 years,Org Member,"Yes, but would like mentorship.",North America,UTC-05:00,2-4,2,2,1,3,2,3,3,1,1,n/a,n/a,1,1,1,fejta-bot looks a bit too chatty,Right notifications at the right frequency,n/a,"No, but I’m able to use “free” time at work",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,maybe not easier but definitely more interesting lihe sig-apimachinery,n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,n/a,n/a,maybe better sig representation?,3,1,2,2,1,1,2,2,n/a,1,n/a,3,4,n/a,n/a,n/a,n/a,,2,1,2,1,1,2,2,2,2,kubernetes-dev mailing list,Dedicated discuss.k8s.io forum for contributors,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10238560621,216437765,09/25/2018 9:41:43 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+02:00,2-4,3,4,3,5,2,3,5,4,4,1,1,1,1,1,Na,Lots of notifications but they are useful,Na,It’s entirely on my own time,A few times a week,n/a,n/a,n/a,Testing & Infrastructure,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,Well explained issues if is lookong for first contribution.,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2019,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,I would like to attend if have in some key cities like berlin. Ad hoc not only in kubercon. I can help to organize,0,4,4,4,4,4,4,4,Na,1,2,3,4,n/a,n/a,7,Na,,2,3,2,3,3,1,3,4,2,n/a,n/a,Contributor Experience mailing list,Slack,n/a,A dedicated contributor site,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10238483974,216437765,09/25/2018 9:14:16 PM,2-3 years,Org Member,"No, I'm already an owner",North America,UTC-07:00,One more,3,1,2,2,1,4,3,1,1,n/a,n/a,1,1,1,retest bot might be spammy,Lots of notifications but they are useful,hunt down issue owners :-),"Yes, it’s part of my job",Every day,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,n/a,n/a,n/a,n/a,some sort of open discussions?,1,3,3,3,3,3,3,3,n/a,n/a,2,3,n/a,n/a,n/a,n/a,n/a,,3,5,1,3,5,3,1,2,2,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",Yes,4,Not enough time in the week
+10238478706,216437765,09/25/2018 9:27:19 PM,6 months -1 year,Reviewer,"Yes, but would like mentorship.",North America,UTC+08:00,One more,4,4,3,2,2,3,4,4,3,n/a,n/a,1,1,1,"fejta-bot. The few issues I've seen have been a thread of it marking an issue as stale, someone removing that designation, then it marking the issue as stale again (and sometimes repeat). It'd be great if the Netlify previews would link to the pages that had been changed instead of just linking to the homepage.",Way too many notifications with no benefits,See 9,"Yes, it’s part of my job",A few times a week,n/a,n/a,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,1,3,3,3,4,4,3,2,"Some demos drag on a little too much, but at the same time, there are others I think are too short. Maybe a stronger vetting process?",n/a,n/a,3,n/a,5,n/a,7,More doc contributions from CNCF members. It's not on the list.,Help get doc contributors from CNCF,5,5,1,3,4,2,2,3,5,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, just users","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10238468406,216437765,09/25/2018 8:59:31 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Asia,UTC-04:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10237728459,216437765,09/25/2018 4:26:18 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+02:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10237255415,216437765,09/25/2018 12:20:10 PM,Less than 6 months,Org Member,Not really,Europe,UTC+01:00,4+,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10237216533,216437765,09/25/2018 12:07:13 PM,Less than 6 months,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-05:00,"None, Kubernetes is my first one!",5,5,3,3,3,5,5,3,1,1,n/a,1,1,1,The /retest command could be better. Incorporating additional signals that can be used to inform contributors when something is likely flaky or not could improve the DX. This is especially relevant to folks who are new to the ecosystem like myself and haven't figured out how to navigate our way yet.,Way too many notifications with no benefits,Not sure; too new to the community.,"Yes, it’s part of my job",Don’t know yet,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,3,3,3,3,3,3,3,N/A,1,n/a,n/a,4,n/a,n/a,7,N/A,,1,1,1,3,3,1,1,1,3,n/a,n/a,n/a,n/a,n/a,A dedicated contributor site,n/a,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,1,Don't know enough to mentor
+10236778977,216437765,09/25/2018 5:26:19 AM,6 months -1 year,Org Member,"Yes, doing it on my own.",North America,UTC-05:00,One more,1,1,2,2,3,4,1,4,1,n/a,1,1,1,1,It would be great in k/website if you could regen netlify test with /retest or similar,Way too many notifications with no benefits,Not sure,"Yes, it’s part of my job",Several times a month,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Easier instructions for building the latest k/k master for testing.,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/a,0,3,3,2,5,4,4,3,I think it's great! Maybe a tips and tricks section for better contributing / involvement / using k8s,1,n/a,3,n/a,5,n/a,n/a,"Make onboarding and helping out easier, training, videos etc",Better contributor documentation,2,5,3,4,4,2,1,3,2,n/a,n/a,n/a,Slack,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,4,Don't know enough to mentor
+10236642556,216437765,09/25/2018 3:29:34 AM,Just started,Had no idea this was even a thing,"Yes, but not sure I have time.",North America,UTC-07:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10236528658,216437765,09/25/2018 2:24:53 AM,6 months -1 year,Reviewer,Not really,North America,UTC-07:00,2-4,5,4,3,1,2,3,3,2,1,n/a,n/a,1,1,1,Stale issues,Lots of notifications but they are useful,Cherry-picking a fix into multiple target branches Figuring out how to add Netlify to /retest,"Yes, it’s part of my job",A few times a week,n/a,n/a,Documentation,n/a,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,Other,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,2,2,2,5,4,4,5,Maybe a SIG of the week,1,n/a,3,4,n/a,n/a,7,I can't think of anything.,,5,5,1,3,4,1,1,2,3,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, just contributors",Yes,n/a,"B - No, I can’t/don’t want to",Yes,3,Not enough time in the week
+10236308069,216437765,09/25/2018 12:20:21 AM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC±00:00,One more,5,3,1,1,1,4,4,3,1,1,1,1,1,1,I don't like the noise that people create by posting comments that only contain bot commands. I wish I could turn notifications for these comments off,"Right notifications are being made, but too frequently",I don't know yet,It’s entirely on my own time,Several times a month,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,5,5,4,5,4,3,4,N/A,1,n/a,n/a,n/a,n/a,n/a,n/a,Need better guides for contributors who just start their dive into the project,Better contributor documentation,5,4,1,2,4,1,2,2,2,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",Yes,3,Don't know enough to mentor
+10236209837,216437765,09/24/2018 11:24:06 PM,Less than 6 months,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",North America,UTC-07:00,2-4,2,4,2,2,2,2,2,2,2,1,1,1,1,1,n/a,Right notifications at the right frequency,not enough context for an opinion,"Yes, it’s part of my job",A few times a year,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,n/a,0,3,3,3,4,4,4,4,zoom does not work on my computer. Is there an alternative?,n/a,n/a,n/a,4,n/a,n/a,n/a,not sure yet,,3,3,2,3,3,1,4,4,3,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10236079987,216437765,09/24/2018 10:25:30 PM,1-2 years,Org Member,"Yes, doing it on my own.",North America,UTC-07:00,"None, Kubernetes is my first one!",3,3,2,2,2,3,2,2,2,1,n/a,1,1,n/a,N/A,"Right notifications are being made, but too frequently",N/A,"Yes, it’s part of my job",Several times a month,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,N/A,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,4,4,5,4,4,5,4,N/A,1,2,n/a,n/a,5,n/a,n/a,n/a,,2,2,1,2,5,3,1,1,5,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,n/a,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Employer doesn't support spending time mentoring
+10235937400,216437765,09/24/2018 9:47:42 PM,1-2 years,Org Member,"Yes, but not sure I have time.",North America,UTC-08:00,One more,3,3,4,1,3,3,3,4,3,n/a,n/a,1,1,1,Sometimes the stale issue robot is a bit annoying.,Way too many notifications with no benefits,N/A,"Yes, it’s part of my job",Several times a month,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,Testing & Infrastructure,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,Make notifications better. Fixing https://github.com/kubernetes/test-infra/issues/1723 should help,n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,"Instead of having just one track, like we had in EU 2018, have a lot of tracks, like we had in NA 2017.",2,3,3,3,3,3,3,3,I don't attend it.,n/a,2,n/a,n/a,n/a,n/a,n/a,N/A,,5,2,1,2,4,3,2,2,4,kubernetes-dev mailing list,n/a,n/a,n/a,Twitter,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,1,Not enough time in the week
+10235874254,216437765,09/24/2018 9:04:03 PM,1-2 years,Had no idea this was even a thing,"Yes, doing it on my own.",North America,UTC-08:00,One more,5,3,3,3,3,1,2,4,3,1,n/a,n/a,n/a,n/a,/lint maybe? I've never used it.,Way too many notifications with no benefits,Feels like more could be done to make sure we have the right owners (ensure files are kept up-to-date w/r/t to who is actually most involved in code and review) throughout the tree,"Yes, it’s part of my job",A few times a week,Core code inside of kubernetes/kubernetes,n/a,n/a,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon China 2018,Kubecon North America 2018,n/a,n/a,n/a,n/a,n/a,0,4,2,2,3,4,4,3,More discussion of sig arch leads on project direction and key issues,1,2,n/a,n/a,n/a,n/a,7,don't know,,2,2,1,3,5,2,1,1,4,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10235813487,216437765,09/24/2018 8:38:52 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-07:00,4+,1,1,1,2,2,4,4,2,2,1,n/a,1,n/a,1,Stale issue labeler,Way too many notifications with no benefits,"More help in identifying issues for various groups (docs, core, etc.)","No, but I’m able to use “free” time at work",A few times a week,n/a,n/a,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,"Easier path to becoming a member of the community to vote, shape structure, etc. Right now it's too exclusive.",n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,More time with core members,0,5,4,4,5,3,2,1,Make it a bit longer with better segmentation between topics,1,n/a,3,4,5,6,n/a,"Team check-in to see how everyone is doing in a psychological sense (are you stressed, was this a productive week, etc.) Having this data might help to identify burnout and what causes the community stress.",Burnout education,2,5,3,2,5,5,5,2,2,n/a,n/a,n/a,Slack,Twitter,A dedicated contributor site,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"A - Yes, I would love to mentor one or both programs",Yes,5,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10235772397,216437765,09/24/2018 8:20:24 PM,3+ years,Subproject Owner,"No, I'm already an owner",North America,UTC-07:00,2-4,4,4,2,2,2,5,1,2,2,1,1,1,1,1,"Automatically notifying test owners that their tests are breaking, kicking out their tests if they don't fix them",Way too many notifications with no benefits,Better filtering of notifications,"Yes, it’s part of my job",Every day,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,A better notification dashboard,n/a,Kubecon North America 2017,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,n/a,"Ecosystem events, eg. Helm Summit",n/a,n/a,I'm happy with where it's at,3,3,4,3,4,5,4,5,Updates on releases other than the release currently under development: what is the schedule for patch releases?,1,2,3,n/a,5,6,7,Can't think of anything right now,,4,5,1,4,5,1,1,2,3,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,Slack,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,4,Not enough time in the week
+10235761961,216437765,09/24/2018 8:12:31 PM,1-2 years,Had no idea this was even a thing,"Yes, but not sure I have time.",North America,UTC-08:00,4+,2,3,4,2,2,3,2,5,1,1,1,1,n/a,n/a,ci-robot,Lots of notifications but they are useful,Not sure.,"Yes, it’s part of my job",Every day,n/a,n/a,n/a,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,n/a,n/a,Kubecon Europe 2019,n/a,n/a,n/a,N/A,1,4,3,2,4,3,3,2,Not sure.,n/a,n/a,3,n/a,5,n/a,n/a,Not sure.,,4,1,1,2,4,3,2,1,4,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,no value for anyone,"No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10235755196,216437765,09/24/2018 8:04:47 PM,6 months -1 year,Org Member,"Yes, but not sure I have time.",North America,UTC-08:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10235714842,216437765,09/24/2018 7:56:56 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-07:00,2-4,1,1,1,1,1,2,2,2,1,1,1,1,1,1,issue/PR command,Right notifications at the right frequency,list all issues/pr that relates to me,"Yes, it’s part of my job",A few times a year,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon China 2018,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,3,3,3,3,3,3,3,N/A,1,2,3,n/a,n/a,n/a,n/a,N/A,,5,1,1,5,5,1,1,5,1,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10235691040,216437765,09/24/2018 7:43:41 PM,2-3 years,Subproject Owner,"Yes, doing it on my own.",North America,UTC-07:00,4+,3,3,5,4,2,3,2,1,3,1,1,n/a,n/a,1,PR Commands,Way too many notifications with no benefits,Notifications/Communication,"Yes, it’s part of my job",A few times a week,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,Advocacy and events,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,More breakout sessions,1,1,3,3,2,3,1,1,N/A,1,2,3,n/a,5,n/a,7,N/A,,5,5,5,4,5,4,1,2,5,kubernetes-dev mailing list,n/a,n/a,Slack,Twitter,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, just users",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,4,Not enough time in the week
+10235587169,216437765,09/24/2018 7:04:12 PM,Less than 6 months,"I’m not an org member yet, but working on it","Yes, doing it on my own.",North America,UTC-05:00,2-4,1,1,1,1,2,2,1,1,1,1,n/a,n/a,1,1,Haven't needed the stale issues bot.,Right notifications at the right frequency,Automated PR assignment,"Yes, it’s part of my job",Don’t know yet,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,No,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,5,3,3,4,3,4,4,N/A,n/a,2,n/a,n/a,n/a,n/a,7,N/A,,1,1,1,3,5,1,5,4,2,n/a,n/a,n/a,n/a,n/a,n/a,Kubernetes blog,k/community repo in GH (Issues and/or PRs),n/a,"yes, just contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10235454962,216437765,09/24/2018 6:05:38 PM,Less than 6 months,Org Member,"Yes, doing it on my own.",Europe,UTC+02:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10235434280,216437765,09/24/2018 6:13:43 PM,6 months -1 year,"I’m not an org member yet, but working on it",Not really,North America,UTC-07:00,4+,3,5,1,5,2,2,1,1,1,1,n/a,n/a,1,1,"/kind and /assign are presented as mandatory to a new contributor while they have no way of understanding how to make good use of it. I wish the process had a sense of how green a contributor is, and would fallback to a more guided experience for newcomers.",Way too many notifications with no benefits,"code quality metrics: coverage, complexity, security, ...","Yes, it’s part of my job",Several times a month,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,responding to communication attempts like design discussions and proposals on github.,n/a,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,1,4,2,1,4,3,4,1,no,1,n/a,3,n/a,5,n/a,7,N/A,,5,2,1,1,4,1,2,1,4,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,Kubernetes blog,n/a,n/a,no value for anyone,"No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,1,Don't know enough to mentor
+10235423189,216437765,09/24/2018 6:09:18 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",Europe,UTC+02:00,4+,4,4,1,1,1,2,5,1,1,1,n/a,1,1,1,Retesting for flakes is great,Way too many notifications with no benefits,detecting which PR broke build and revert,"Yes, it’s part of my job",A few times a week,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,n/a,n/a,"More organized networking, e.g. Open Space like picking topics to discuss and selecting by majority and discussing in groups etc",1,1,1,1,2,2,1,1,Been once,1,2,3,n/a,n/a,n/a,n/a,Don't know,,1,5,3,1,2,1,1,4,2,n/a,n/a,n/a,n/a,Twitter,n/a,n/a,n/a,n/a,"yes, just users",Not as much as I should because I forget,n/a,"A - Yes, I would love to mentor one or both programs",No,3,Not enough time in the week
+10234448880,216437765,09/24/2018 4:43:28 AM,3+ years,Had no idea this was even a thing,"Yes, but not sure I have time.",North America,UTC-08:00,4+,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10234376003,216437765,09/24/2018 3:10:32 AM,Just started,Had no idea this was even a thing,Not really,South America,UTC+14:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10234309100,216437765,09/24/2018 1:46:04 AM,Just started,"I’m not an org member yet, but working on it","Yes, doing it on my own.",North America,UTC-07:00,2-4,3,1,1,2,1,5,4,5,2,1,1,1,1,1,They are all super useful.,Right notifications at the right frequency,Not sure,It’s entirely on my own time,A few times a year,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"For my first contribution, I could not for the life of me find instructions on how to run tests on the source code. Running the `make` file was failing. I found no documentation to get me unstuck. After much looking I found the `testing.md` file. Ran the commands and got failures, and again not finding any documentation that would help me move on. I'm aware that we have the Slack channel. I propose, however, that running make files and tests on the fresh source code should be a slam dunk (with proper documentation).",n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,Have material for newcomers. But I haven't attended on yet so this might be true already.,0,,4,4,4,4,4,3,No,1,n/a,n/a,4,n/a,n/a,n/a,Not sure.,,2,5,4,4,4,5,3,4,3,n/a,n/a,n/a,Slack,Twitter,A dedicated contributor site,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,4,Not enough time in the week
+10232672428,216437765,09/22/2018 5:41:11 AM,Just started,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",Europe,UTC+02:00,4+,1,1,1,1,1,1,1,1,1,1,1,1,1,1,"automatic merging, i think a human review is more important",Right notifications at the right frequency,deployment,"No, but I’m able to use “free” time at work",Several times a month,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Don’t contribute yet, hoping to start soon",n/a,i don't known,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,n/a,0,3,3,3,5,3,5,4,n.a,1,2,n/a,4,5,6,7,i don't know,,2,2,3,1,4,2,4,3,2,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,no value for anyone,Yes,n/a,"A - Yes, I would love to mentor one or both programs",No,2,Not enough time in the week
+10231145970,216437765,09/21/2018 2:48:58 PM,Less than 6 months,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",Europe,UTC+01:00,One more,1,1,1,1,2,3,1,3,1,1,n/a,n/a,n/a,1,I have not had chance yet to explore all the tools.,Lots of notifications but they are useful,I was fine with the level of automation.,"Yes, it’s part of my job",Haven’t contributed in a while,Core code inside of kubernetes/kubernetes,n/a,Documentation,n/a,Advocacy and events,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,Not really.,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,5,4,5,5,5,5,5,No,1,2,n/a,n/a,n/a,n/a,n/a,Nothing,,2,5,1,2,5,5,3,3,3,n/a,n/a,n/a,n/a,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10230167769,216437765,09/21/2018 1:38:45 AM,Just started,Had no idea this was even a thing,"Yes, doing it on my own.",North America,UTC-06:00,2-4,1,2,2,2,2,3,2,1,1,1,n/a,n/a,n/a,1,N\A,Lots of notifications but they are useful,not enough experience to answer this question,"No, but I’m able to use “free” time at work",Several times a month,n/a,n/a,n/a,n/a,Advocacy and events,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,4,4,1,4,4,5,3,thank you for the notes and recordings: they're really useful.,n/a,n/a,3,4,5,n/a,n/a,not sure.,,5,4,1,4,2,2,1,3,3,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10229628055,216437765,09/20/2018 8:43:23 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Asia,UTC+05:30,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10229617568,216437765,09/20/2018 8:59:11 PM,2-3 years,Reviewer,"Yes, doing it on my own.",North America,UTC-07:00,2-4,2,3,1,3,2,4,2,5,1,1,n/a,1,1,1,I wish there was some way we could auto-assign issues/PRs to SIGs,Way too many notifications with no benefits,See above.,"Yes, it’s part of my job",Every day,n/a,n/a,n/a,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,"Better build/test env instructions, tools, for my own build env.",Kubecon Europe 2017,Kubecon North America 2017,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,Kubecon Europe 2019,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,no,4,2,5,3,5,5,4,5,We might consider changing the time; current 10am time is very hard for some timezones.,1,n/a,n/a,4,n/a,n/a,7,NA,,3,5,5,5,5,4,2,2,4,n/a,Dedicated discuss.k8s.io forum for contributors,n/a,n/a,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, just contributors",Not as much as I should because I forget,n/a,"A - Yes, I would love to mentor one or both programs",Yes,4,Not enough time in the week
+10229456886,216437765,09/20/2018 7:39:15 PM,2-3 years,Org Member,"Yes, but not sure I have time.",North America,UTC-08:00,4+,4,2,3,2,4,4,2,3,3,n/a,n/a,1,1,n/a,NA,Lots of notifications but they are useful,Not sure,It’s entirely on my own time,Haven’t contributed in a while,n/a,n/a,Documentation,n/a,Advocacy and events,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,Add extra hours to the day,n/a,Kubecon North America 2017,n/a,n/a,n/a,n/a,n/a,n/a,n/a,More people able to attend,2,4,2,1,5,4,4,3,NA,1,n/a,3,n/a,n/a,n/a,7,Better onboarding for new users to become contributors.,Better contributor documentation,5,4,4,2,2,5,3,2,2,kubernetes-dev mailing list,n/a,n/a,n/a,Twitter,n/a,n/a,n/a,n/a,"yes, just users",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,1,Employer doesn't support spending time mentoring
+10228000437,216437765,09/20/2018 4:31:04 AM,2-3 years,"I’m not an org member yet, but working on it","Yes, doing it on my own.",North America,UTC-05:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10227941733,216437765,09/20/2018 3:59:48 AM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Asia,UTC+08:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10227903504,216437765,09/20/2018 8:14:19 PM,3+ years,Subproject Owner,"No, I'm already an owner",North America,UTC-07:00,One more,5,3,3,1,2,3,1,2,1,1,n/a,n/a,1,1,Automatic labeling of stale issues - I undo the bot's changes more often than not.,Way too many notifications with no benefits,"Concept of ""PR attention set"". I often find that PRs with 2+ approvers assigned are less likely to get approved, than when there is a single approver. My theory is that people opt to wait for the other person to take a first pass, or want to give them an opportunity to give an opinion before giving a final approval. If instead there were multiple people assigned, but only 1 person was ""up to bat"" at a time, it might help move PRs forward. Likewise, once I've approved something, I don't necessarily want to follow up on every review comment after.","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,Community & Project management; SIG Chair etc.,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,Other,"PR reviews are currently my biggest pain point, both as a code contributor (receiving reviews) and a reviewer. One of the challenges is growing our reviewer pool while still maintaining a high quality bar. To that end, I think we could do a much better job with staged reviews. For example, more junior members do a first pass review before handing off to senior members for final review & approval. I believe this was the intention with the reviewer & approver split, but in my experience this hasn't panned out, and needs more automation to work (see above suggestion about assigning a single reviewer). Another way to do staged reviews is have reviewers who focus on different aspects, such as: go readability & language idioms, comment language (we have many non-native english speakers), testing, etc. 
However, I don't want to need 7 different reviewers to sign off on my 10 line PR, so this approach would need more careful consideration.",n/a,Kubecon North America 2017,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,n/a,n/a,n/a,n/a,"Focus more on community & contributor issues. The kubecon EU 2018 summit had a lot of technical content, which while great is what the main conference is for. I would prefer to spend more time discussing the problems & challenges our community faces. This probably means heavy involvement by the steering committee & sig-contribex",2,3,3,3,3,3,3,3,"I never attend the community meeting unless I'm presenting, so please disregard my answers to the previous question. Why do I never attend? I simply have too many meetings, and need to draw the line somewhere. The community meeting is less likely to be directly relevant to me, compared with sig & working group meetings.",n/a,2,n/a,4,5,n/a,7,"Not to be a broken record, but more emphasis on improving the review bottleneck.",Reviewer and Approver Growth,5,4,1,4,5,5,2,1,4,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),Word of mouth,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10227754132,216437765,09/20/2018 2:10:48 AM,6 months -1 year,Approver,"Yes, doing it on my own.",North America,UTC-07:00,"None, Kubernetes is my first one!",3,2,3,2,3,4,2,5,2,1,1,1,1,1,"Stale issues feel least useful, because I often wish there were a better way of being notified an issue I created was stale.",Way too many notifications with no benefits,"Org membership in all k8s orgs, including k8s-sigs",It’s complicated,Every day,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,Other,I'm very outgoing and outspoken and it has been easy for me to find at least someone willing to show me the path. I often wonder if that has been the experience for others.,n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,"I am no sure about me, but I'm hoping to be able to roll out a ""speed dating"" event between potential contributors and SIGs to help people connect with opportunities.",1,3,4,2,4,4,4,5,"I wish the demos were less ""product show-and-tell"" and more ""general tutorials/walkthroughs""! For example, a testgrid walk through, a ""how to set up your dev env"" walkthrough, a minikube tutorial walkthrough, etc. Not every week but some weeks?",1,2,3,n/a,5,n/a,7,Detangling aspects of Kubernetes from the monorepo and greater repo cross-reference,,2,5,1,4,5,2,1,2,4,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",Yes,4,Other (please specify):
+10227655092,216437765,09/20/2018 12:53:42 AM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+02:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10227633920,216437765,09/20/2018 12:42:18 AM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, doing it on my own.",Europe,UTC+02:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10227498388,216437765,09/20/2018 12:23:37 AM,1-2 years,Org Member,"Yes, doing it on my own.",Europe,UTC+01:00,One more,1,2,1,1,1,4,3,1,3,n/a,n/a,1,1,1,Fejta-bot for flakes can flood you when having /approve /lgtm,Right notifications at the right frequency,Nothing i can think of,"No, but I’m able to use “free” time at work",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,More cross-SIG discussions and brainstorming,1,3,3,5,5,5,5,4,One of the best communities I’ve seen in the OSS landscape,1,n/a,n/a,4,5,n/a,n/a,Nothing,,3,5,2,5,5,4,1,5,5,n/a,n/a,n/a,Slack,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",Yes,5,Not enough time in the week
+10227329768,216437765,09/19/2018 10:29:58 PM,2-3 years,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+02:00,2-4,3,2,2,2,4,3,4,2,3,n/a,n/a,n/a,1,n/a,Fejta-bot,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),N/a,"Yes, it’s part of my job",Several times a month,n/a,n/a,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,n/a,n/a,Kubecon Europe 2019,n/a,n/a,n/a,N/A,0,4,3,3,4,4,4,3,"Too many groups, picking the right groups could be difficult sometime",1,2,n/a,n/a,n/a,n/a,7,N/A,,2,5,4,2,3,2,2,3,2,n/a,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10227062641,216437765,09/19/2018 9:00:11 PM,Less than 6 months,Approver,"Yes, but not sure I have time.",Europe,UTC+01:00,One more,2,1,1,1,2,2,3,2,1,1,1,n/a,n/a,1,"/assign and /kind are mostly abused, /approve could be integrated with GitHub reviews.",Right notifications at the right frequency,Infra (I'm talking specifically for Kubernetes/ingress-nginx),"No, but I’m able to use “free” time at work",Several times a month,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,Advocacy and events,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,n/a (haven't attended yet),0,4,5,3,3,5,4,3,"Nothing that comes to my mind, need to attend more.",1,n/a,3,4,n/a,n/a,7,n/a,,1,5,1,3,4,5,3,2,1,kubernetes-dev mailing list,n/a,n/a,n/a,Twitter,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,3,Employer doesn't support spending time mentoring
+10227015112,216437765,09/19/2018 8:08:51 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",Europe,UTC+01:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10226936255,216437765,09/19/2018 7:50:29 PM,2-3 years,Reviewer,"Yes, doing it on my own.",North America,UTC-07:00,"None, Kubernetes is my first one!",1,3,3,1,4,1,1,1,1,1,1,1,1,1,I wish reviewers were not assigned until after tests pass.,Way too many notifications with no benefits,Is it possible to get email notifications for specific test failures?,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,3,3,3,3,3,3,3,N/A,n/a,n/a,n/a,n/a,n/a,6,n/a,Nothing,,2,5,1,3,5,5,1,2,5,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,2,Not enough time in the week
+10226800235,216437765,09/19/2018 6:59:36 PM,1-2 years,Had no idea this was even a thing,Not really,North America,UTC-08:00,2-4,3,3,1,2,2,3,3,3,2,1,n/a,n/a,n/a,1,All are pretty useful,Right notifications at the right frequency,None I know of,"No, but I’m able to use “free” time at work",Haven’t contributed in a while,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,4,4,2,5,5,3,1,Haven't attended enough to give good feedback,n/a,n/a,n/a,4,5,n/a,n/a,n/a,,1,4,1,1,3,1,2,2,1,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10226795801,216437765,09/19/2018 6:51:29 PM,1-2 years,Org Member,"Yes, but not sure I have time.",Europe,UTC+01:00,4+,3,2,1,1,1,1,4,1,1,n/a,n/a,n/a,1,1,automated issue closing,Way too many notifications with no benefits,listing related issues,"No, but I’m able to use “free” time at work",A few times a year,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,Kubecon Europe 2017,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,not sure,1,1,3,1,3,3,2,1,n/a,1,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,3,5,2,3,5,1,2,2,5,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,1,Not enough time in the week
+10224032912,216437765,09/18/2018 6:48:35 PM,Just started,"I’m not an org member yet, but working on it","Yes, doing it on my own.",North America,UTC-08:00,4+,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10223696648,216437765,09/18/2018 4:44:00 PM,Less than 6 months,Org Member,"Yes, but would like mentorship.",Asia,UTC+05:30,2-4,2,4,3,4,3,3,5,1,1,1,1,1,1,1,N/A,"Right notifications are being made, but too frequently",Issue labeling could benefit from additional automation.,It’s entirely on my own time,Every day,Core code inside of kubernetes/kubernetes,n/a,Documentation,n/a,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,"I'd really like some mentorship to get started. K8s is a beast and it's very easy to get lost. I've been looking for stuff to work on for a while now, but it gets very confusing very quickly!",n/a,n/a,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,n/a,n/a,n/a,n/a,"I attended the beginner track last time, this time I'm planning to attend the more experienced track as I've been familiar with it for a while! I'll make sure to reach out post that :)",1,2,4,3,4,3,3,4,N/A,1,n/a,3,4,n/a,n/a,7,I'd really like to see some focus on the developer guide. It feels like it's all over the place atm.,Better contributor documentation/developer guide,2,5,3,3,5,3,1,3,3,n/a,Dedicated discuss.k8s.io forum for contributors,n/a,Slack,Twitter,A dedicated contributor site,n/a,n/a,n/a,"yes, for both users and contributors",Other (please specify):,I've not filed too many issues to be able to answer this correctly,"A - Yes, I would love to mentor one or both programs",No,3,Don't know enough to mentor
+10223602767,216437765,09/18/2018 3:50:36 PM,Less than 6 months,Org Member,"Yes, but would like mentorship.",Asia,UTC+05:30,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10222598172,216437765,09/18/2018 1:57:57 AM,Just started,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",North America,UTC-06:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10222294684,216437765,09/17/2018 11:12:38 PM,3+ years,Approver,"No, I'm already an owner",North America,UTC-08:00,4+,2,4,4,1,1,5,2,4,2,1,1,1,1,1,n/a,Way too many notifications with no benefits,n/a,"Yes, it’s part of my job",A few times a week,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,Kubecon Europe 2017,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,n/a,1,3,3,3,3,3,3,3,n/a,n/a,2,n/a,n/a,n/a,n/a,n/a,n/a,,5,2,1,3,5,2,1,1,5,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"A - Yes, I would love to mentor one or both programs",Yes,4,Not enough time in the week
+10222070141,216437765,09/17/2018 9:27:34 PM,Just started,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",North America,UTC-04:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10222067162,216437765,09/17/2018 9:35:53 PM,2-3 years,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",North America,UTC-08:00,2-4,1,3,3,2,2,2,2,2,2,1,n/a,1,1,1,The automatic stale is a but tough to deal with sometimes. As it feels like issues are being railroaded out of sight.,Lots of notifications but they are useful,Not Sure.,"Yes, it’s part of my job",Several times a month,n/a,n/a,Documentation,n/a,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,Other,n/a,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,5,5,3,3,3,4,5,It's actually pretty great.,1,n/a,3,4,n/a,n/a,7,N/A,,2,5,2,4,4,1,3,2,3,n/a,n/a,n/a,Slack,Twitter,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",Yes,4,"Nothing, am already a mentor"
+10222048043,216437765,09/17/2018 9:25:18 PM,6 months -1 year,Reviewer,Not really,North America,UTC-04:00,"None, Kubernetes is my first one!",2,3,4,2,3,4,1,2,2,1,n/a,1,n/a,1,"I've never seen the flakes retested automatically, I just wish they would be",Way too many notifications with no benefits,"Flake reporting is very manual, so I almost never do it.","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,n/a,0,4,3,2,2,1,2,2,"I don't usually attend, if I'm honest.",1,n/a,n/a,n/a,n/a,n/a,7,Managing the firehose of notifications,Notification control (communication pipelines),3,5,1,3,4,1,1,1,3,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,2,Not enough time in the week
+10222047195,216437765,09/17/2018 9:37:34 PM,Just started,Had no idea this was even a thing,"Yes, but would like mentorship.",North America,UTC+13:00,"None, Kubernetes is my first one!",5,5,5,5,5,5,5,5,5,n/a,n/a,n/a,n/a,1,-,Lots of notifications but they are useful,-,It’s complicated,Don’t know yet,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Don’t contribute yet, hoping to start soon",n/a,"I am someone who has never contributed but would like to. Overall, I think something like Google's Summer of Code except on a part time level for full time employees where people can get support and some hand holding through processes / targeting issues would be immensely useful. Its one thing to say ""documentation is useful"" and that's great. But even after understanding the processes of merging a PR there's a HUGE leap to contribute to actual Kuberenetes code. There's historical context on why things are implemented the way they are. Then gaining enough context on which issues to target is difficult. And even further then tackling the actual code. This needs a lot of initial investment on individual contributors.",n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,I would love contributor workshops for people who haven't contributed to kubernetes yet. Not sure if this is under the realm of contributor summit or not.,0,n/a,5,n/a,n/a,n/a,n/a,n/a,-,1,n/a,n/a,4,n/a,n/a,7,"I don't think this is missing per se. But I'd love to be paired with someone in the open source community that has sort of a project manager role. I have the competency to jump into a code base. 
What I lack is the context on what in the community is prioritized, what issues aren't time sensitive for new contributors, what issues I can take my time on, and someone to ask technical questions.",Issue Triage,1,4,1,3,5,3,1,1,5,n/a,n/a,n/a,n/a,Twitter,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Other (please specify):,I don't file issues,"B - No, I can’t/don’t want to",No,3,Other (please specify):
+10221989907,216437765,09/17/2018 8:53:16 PM,6 months -1 year,Had no idea this was even a thing,"Yes, but would like mentorship.",North America,UTC-06:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10221978535,216437765,09/17/2018 8:48:12 PM,Just started,Had no idea this was even a thing,"Yes, but not sure I have time.",North America,UTC-05:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10221960887,216437765,09/17/2018 8:40:25 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-05:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10221836566,216437765,09/17/2018 8:22:05 PM,1-2 years,Reviewer,"Yes, doing it on my own.",North America,UTC-04:00,4+,2,2,1,2,2,3,5,2,1,n/a,1,1,1,1,i think the retest is the least useful because I've never actually seen it work and I've had to manually re-run flakes...but knowing there is something that *should* rerun flakes makes me think I could do this better.,Lots of notifications but they are useful,"Github emails, maybe some kind of gmail filter creator that asks you what you want to hear about and then generates a filter for you? Too many emails from github though.","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,n/a,Documentation,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,"Being able to find good first issues to help new folks with on boarding is by far, in my opinion, the biggest problem with contributing to kubernetes today.",n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,2,3,3,2,1,3,1,It's nice when I get time to attend it,1,n/a,n/a,n/a,n/a,n/a,7,"Ways to identify new issues. There was talk of a `good-first-issue` label, but that hasn't taken off yet. In order to get more people into the community we need to make it much easier for folks to find areas of interest that need improving. Maybe each sig could have a standard way of identifying good first issues. Maybe I'm looking for a sig trello board. I'm not sure, but joining a sig takes a long time to get up to speed. 
I usually have to attend several weeks of meetings before I can really figure out what's going on/what area of focus a sig has, etc.",Issue Triage,2,5,1,5,5,2,2,2,4,n/a,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Employer doesn't support spending time mentoring
+10221816526,216437765,09/17/2018 7:44:51 PM,Less than 6 months,Had no idea this was even a thing,"Yes, but would like mentorship.",North America,UTC-07:00,"None, Kubernetes is my first one!",2,3,3,3,1,3,4,4,1,1,1,1,1,1,N/A,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),N/A,"Yes, it’s part of my job",Every day,n/a,n/a,Documentation,n/a,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,n/a,3,2,1,3,3,3,3,n/a,1,2,n/a,n/a,n/a,n/a,n/a,N/A,,3,5,1,5,5,1,1,4,4,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,n/a,n/a,no value for anyone,Yes,n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10221801147,216437765,09/17/2018 8:27:52 PM,2-3 years,Subproject Owner,"No, I'm already an owner",North America,UTC-05:00,4+,3,3,5,3,3,5,3,3,2,1,1,1,1,1,N/A,Way too many notifications with no benefits,unsure,"Yes, it’s part of my job",A few times a week,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,Improve the ability to triage test failures and be able to easily reproduce locally.,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,3,4,4,5,5,5,3,"No, I think that it does a good job of balancing information sharing and being respectful of people's time.",n/a,2,3,4,5,n/a,7,Improving the ability to identify the cause of test failures and the ability to reproduce test failures locally.,,4,5,1,4,4,2,1,3,5,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10221794836,216437765,09/17/2018 7:38:40 PM,3+ years,Approver,"No, I'm already an owner",North America,UTC-05:00,4+,4,2,3,1,2,4,2,1,4,n/a,n/a,n/a,n/a,1,The automation is byzantine and UX is not simple.,Way too many notifications with no benefits,Reassign reviewers/approvers on a timeout of a couple of weeks and no feedback.,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,Other,Flowchart on the process for new contributors.,n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,"1/2 updates, 1/2 free form sig discussions.",All of them,5,4,1,4,3,4,4,Please don't monopolize folks time talking about byzantine automation changes unless there is a UX change that would affect their day 2 day lives.,1,n/a,n/a,4,n/a,n/a,n/a,N/A,,3,4,3,4,5,2,1,2,3,kubernetes-dev mailing list,Dedicated discuss.k8s.io forum for contributors,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,2,Not enough time in the week
+10221534874,216437765,09/24/2018 5:42:59 PM,1-2 years,Org Member,"Yes, doing it on my own.",North America,UTC+03:00,2-4,1,1,1,1,2,2,1,1,1,1,1,1,1,1,They'all are useful.,Lots of notifications but they are useful,n/a,"Yes, it’s part of my job",A few times a week,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2017,Kubecon North America 2017,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,n/a,n/a,n/a,n/a,n/a,1,1,2,2,1,2,2,2,n/a,n/a,n/a,n/a,4,n/a,n/a,n/a,n/a,,1,2,2,2,1,3,3,2,2,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,n/a,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,4,Not enough time in the week
+10221178738,216437765,09/17/2018 3:21:02 PM,2-3 years,Approver,"Yes, but would like mentorship.",Asia,UTC+08:00,"None, Kubernetes is my first one!",3,2,1,1,2,2,2,1,1,1,1,1,1,1,"automatic labeling of stale issues. Just close issues, not really help solving problems.",Way too many notifications with no benefits,NA,"Yes, it’s part of my job",Several times a month,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,Advocacy and events,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2017,n/a,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,Make the planning part more visible.,1,3,n/a,n/a,3,3,n/a,n/a,Have more Asia time-zone friendly meeting.,1,n/a,3,4,5,n/a,n/a,na,,4,3,2,2,4,2,1,4,4,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, just contributors","No, because I don't think my issues qualify",n/a,"A - Yes, I would love to mentor one or both programs",No,2,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10221032938,216437765,09/17/2018 1:09:01 PM,3+ years,Subproject Owner,"Yes, doing it on my own.",Europe,UTC+02:00,2-4,1,1,1,1,1,1,1,1,1,1,1,1,1,1,-,Lots of notifications but they are useful,"The problem itself is on the gh side, dealing with their notifications is cumbersome at the level k8s currently is. I have multiple filters in my inbox, but still the volume of emails is significantly large. ","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,n/a,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2017,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,-,1,5,5,5,5,5,5,5,-,1,n/a,3,n/a,5,n/a,7,-,,5,5,1,4,5,2,2,3,3,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10216867455,216437765,09/14/2018 9:31:24 AM,6 months -1 year,Reviewer,"Yes, but would like mentorship.",Asia,UTC+05:30,One more,1,1,1,1,1,3,2,3,1,1,1,1,1,1,n/a,Lots of notifications but they are useful,n/a,It’s entirely on my own time,A few times a week,n/a,n/a,Documentation,n/a,n/a,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,None.,n/a,n/a,n/a,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,n/a,n/a,N/A,n/a,5,5,4,5,5,5,4,none,1,n/a,3,4,n/a,n/a,n/a,not that i can think of.,,3,5,2,3,2,2,2,3,4,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"A - Yes, I would love to mentor one or both programs",No,4,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10216171567,216437765,09/14/2018 12:09:37 AM,1-2 years,Org Member,"Yes, but not sure I have time.",North America,UTC-07:00,One more,2,3,2,2,1,2,2,2,2,1,1,n/a,n/a,1,I've found /approve to not be that useful since GitHub's improved review features duplicate that.,Lots of notifications but they are useful,n/a,"Yes, it’s part of my job",A few times a week,n/a,n/a,Documentation,n/a,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,"Perhaps greater discoverability for sub-projects, e.g. those in kubernetes-sigs/ repo",Kubecon Europe 2017,n/a,n/a,n/a,Kubecon North America 2018,n/a,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,The unconference style SIG sessions (deep-dive) were much later in the evening at the last KubeCon in Copenhagen. We find these sessions particularly useful for SIG Apps due to the large amount of things it covers. It would be good to have these at a more accessible time.,2,5,4,3,4,4,5,4,n/a,1,2,3,n/a,5,n/a,7,n/a,,4,5,2,4,5,5,3,4,4,kubernetes-dev mailing list,n/a,n/a,Slack,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"A - Yes, I would love to mentor one or both programs",No,3,Just not getting around to it
+10215312899,216437765,09/13/2018 5:44:35 PM,3+ years,Subproject Owner,"No, I'm already an owner",North America,UTC-05:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10214900593,216437765,09/13/2018 3:05:12 PM,1-2 years,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-05:00,"None, Kubernetes is my first one!",4,3,1,5,3,4,5,5,2,n/a,n/a,1,1,n/a,N/A,Right notifications at the right frequency,Unsure right now,"Yes, it’s part of my job",Several times a month,n/a,n/a,Documentation,n/a,Advocacy and events,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,More guided mentorship,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,5,3,3,4,5,5,4,none,1,n/a,3,4,n/a,n/a,n/a,none,,4,5,5,4,5,3,4,4,4,n/a,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",Yes,5,Don't know enough to mentor
+10214567885,216437765,09/13/2018 11:40:39 AM,Less than 6 months,Had no idea this was even a thing,"Yes, but not sure I have time.",Europe,UTC+01:00,4+,2,3,1,3,1,4,4,4,1,1,1,1,1,1,Don't know,Right notifications at the right frequency,Don't know,It’s complicated,A few times a year,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Don't know,n/a,n/a,Kubecon Europe 2018,n/a,n/a,Kubecon Europe 2019,n/a,n/a,n/a,N/A,0,3,3,3,3,3,3,3,No,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10213539213,216437765,09/12/2018 11:17:55 PM,2-3 years,Org Member,Not really,North America,UTC+08:00,4+,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10213442456,216437765,09/12/2018 11:09:23 PM,1-2 years,Approver,"Yes, but would like mentorship.",Europe,UTC+02:00,"None, Kubernetes is my first one!",3,1,2,2,1,4,2,2,2,1,1,1,1,1,Sometimes the amount of GitHub notification is overhelming. It will be great to have an enhanced gubernetor PR/issue dashboard to keep things under control with less effort,Lots of notifications but they are useful,"Integration test, build and packaging, hacks/validation","No, but I’m able to use “free” time at work",A few times a week,Core code inside of kubernetes/kubernetes,n/a,Documentation,Testing & Infrastructure,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,Help me find a work where I can works OSS full time,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Contributor summit is really valuable and deserve more space into the kubeconf Circus. Helps/Facilitation should be provided to contributors not backed by companies,1,4,4,5,5,4,5,5,You are doing a great work!,1,2,3,n/a,n/a,6,7,"Cncf ambassador program, I have no news about it Job boards, marching contributors careers and companies needs",,2,5,1,3,5,1,1,4,4,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,n/a,Sig meetings,"yes, for both users and contributors",Yes,n/a,"A - Yes, I would love to mentor one or both programs",No,3,Employer doesn't support spending time mentoring
+10212981609,216437765,09/12/2018 7:31:19 PM,6 months -1 year,Org Member,"Yes, doing it on my own.",North America,UTC-07:00,One more,2,1,3,4,4,2,2,3,2,n/a,n/a,1,1,1,N/A,"Right notifications are being made, but too frequently",N/A,"No, but I’m able to use “free” time at work",Several times a month,n/a,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,3,3,2,5,5,5,3,N/A,1,2,n/a,4,5,n/a,n/a,N/A,,4,5,1,3,4,1,2,2,3,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10212885572,216437765,09/12/2018 7:24:05 PM,Just started,Had no idea this was even a thing,Not really,Asia,UTC+05:30,2-4,1,1,2,2,2,3,2,2,1,1,1,1,1,1,No one at the moment,Right notifications at the right frequency,Test failure guidance maybe,It’s entirely on my own time,A few times a week,Core code inside of kubernetes/kubernetes,n/a,n/a,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,More hands-on workshops,,1,1,1,1,1,1,1,Nope,1,n/a,3,4,5,n/a,n/a,Nothing I can think of,,3,5,2,1,4,1,3,4,1,kubernetes-dev mailing list,n/a,n/a,Slack,Twitter,A dedicated contributor site,Kubernetes blog,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,2,Don't know enough to mentor
+10212715497,216437765,09/12/2018 6:02:27 PM,1-2 years,Subproject Owner,"No, I'm already an owner",North America,UTC-06:00,4+,4,4,1,3,3,2,4,2,2,1,n/a,n/a,1,1,"I need help sometimes as a maintainer identifying pull requests that have stalled out and need intervention or more attention. Maybe normalizing that things need help by adding a /bump command and making it super clear for new contributors, and in our bots first reply (the approve bot comment that you see immediately after a PR is submitted) saying ""Hey if this gets stuck or you are confused, just reply /bump"" so that people don't worry over how to find a nice way to get help?",Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),"More comments on 10. I need help watching for: * New PRs and issues (without watching the entire repo and every . single . comment) * Separating out replies to an issue that I commented on once, or was mentioned on once, vs notifications for when someone is actually mentioning my name _right now_. Basically GH's email notifications are a fire hose and the noise is much higher than the signal. So I'm losing a lot of important things because my email filtering foo isn't strong enough.","Yes, it’s part of my job",Every day,n/a,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,"I'd really appreciate help getting email notifications for relevant events without getting emailed for every single comment. GH is letting me down, octobox doesn't quite help either, I use the PR dashboard but that requires me to check, and it still pops things up for my attention that don't really need any action on my part.",n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,"Ecosystem events, eg. Helm Summit",n/a,n/a,I wanted more interaction with the SIGs. At KubeCon EU '18 it was in a presenter format which wasn't at all what I was hoping for.,1,2,4,1,5,4,4,2,N/A,1,2,n/a,4,,n/a,n/a,N/A,,2,5,2,4,5,2,1,1,4,n/a,n/a,n/a,n/a,Twitter,A dedicated contributor site,n/a,n/a,Something that is announcements only,"yes, just users",Yes,n/a,"A - Yes, I would love to mentor one or both programs",Yes,3,Not enough time in the week
+10212675874,216437765,09/12/2018 5:32:07 PM,Just started,Had no idea this was even a thing,"Yes, but not sure I have time.",Europe,UTC+01:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10212308706,216437765,09/12/2018 9:33:55 PM,Less than 6 months,Had no idea this was even a thing,"Yes, but would like mentorship.",Europe,UTC±00:00,4+,1,2,3,3,3,2,2,2,2,n/a,n/a,1,n/a,n/a,automatic stale issues,Lots of notifications but they are useful,e2e testing,"No, but I’m able to use “free” time at work",A few times a year,n/a,n/a,Documentation,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)","Don’t contribute yet, hoping to start soon",n/a,Nope,Kubecon Europe 2017,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,It does feel sometimes that the contributor community is pretty insular at the conferences.,2,2,2,2,2,2,2,2,N/A,1,n/a,3,n/a,n/a,n/a,n/a,Overall SIG processes for new or people on the periphery,,1,3,1,1,4,1,2,3,1,n/a,n/a,n/a,Slack,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,2,Just not getting around to it
+10212242335,216437765,09/12/2018 2:21:53 PM,3+ years,Approver,"Yes, doing it on my own.",Europe,UTC+02:00,4+,1,4,4,2,1,3,1,2,2,1,1,1,1,1,/label by far. There is no autocompletion. I am lucky to have the permission to use the Github UI directly instead.,Way too many notifications with no benefits,"Escalation of approval requests. Approver often do not read all Github notifications. Often it is not clear whether the didn't see the notification, have no time, are out of office or simply don't like the PR but hesitate to comment. Especially from Europe this can be very frustrating with US approvers because pinging them directly is often in evening hours.","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,Advocacy and events,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2017,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,"Ecosystem events, eg. Helm Summit",n/a,n/a,Unconference style instead of talks. Less steering commitee. Less Sig-Lead mini presentations. More discussion about core topics (to be collected in advance and voted for).,3,1,4,2,2,3,4,4,-,n/a,2,3,n/a,5,n/a,7,-,,2,5,1,3,5,4,2,3,4,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"A - Yes, I would love to mentor one or both programs",Yes,5,"Nothing, am already a mentor"
+10212214310,216437765,09/12/2018 1:53:46 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",Europe,UTC±00:00,4+,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10212010909,216437765,09/12/2018 11:47:01 AM,1-2 years,Org Member,"Yes, but not sure I have time.",Europe,UTC+02:00,2-4,2,1,2,2,1,2,4,3,1,1,n/a,1,1,1,Labeling stale issues is useful if people care about them... most of the time I think they don't have enough time to do so.,Lots of notifications but they are useful,Squash commits (I think it's coming...),It’s entirely on my own time,A few times a week,n/a,Code inside of another repo in the kubernetes/* GitHub organization,n/a,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2017,n/a,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,4,5,3,5,5,5,5,"I like them like that, I only wish I had more time to attend.",1,n/a,n/a,n/a,5,n/a,n/a,N/A,,2,5,1,4,4,1,1,3,3,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",Yes,3,Don't know enough to mentor
+10212003967,216437765,09/12/2018 11:49:00 AM,6 months -1 year,Org Member,"Yes, doing it on my own.",Europe,UTC+01:00,2-4,3,3,2,2,3,2,4,2,1,1,1,1,1,1,"I hate it, when my PR fails on ""stupid"" ./hack/verify-* issues ... but I don't have a solution for that ...",Way too many notifications with no benefits,-,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,"Make prow / test-infra / kubetest / ... easier to understand, extend, and use. Make vendoring stuff easier (how can my repo vendor in parts of k/k, then itself be vendored into k/k without being a stagung repo)",n/a,n/a,Kubecon Europe 2018,n/a,Kubecon North America 2018,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,5,4,3,4,5,4,3,-,1,n/a,n/a,4,,6,7,One thing that is also important to me: Not only focus on US (e.g. timezone wise) but also other regions,Globalization,4,5,1,4,5,2,1,3,4,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,random SIG meeting where this is brought up,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10211956716,216437765,09/12/2018 11:00:02 AM,Less than 6 months,"I’m not an org member yet, but working on it","Yes, doing it on my own.",Europe,UTC+02:00,One more,4,3,2,1,2,1,1,2,1,n/a,n/a,1,1,1,Automatic retesting of flakes didn't seem to trigger on my PRs,Right notifications at the right frequency,IDK,"Yes, it’s part of my job",A few times a week,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,N/A,0,3,3,3,3,3,3,3,No,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10211906690,216437765,09/12/2018 10:25:40 AM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, doing it on my own.",Europe,UTC+01:00,2-4,2,1,5,3,3,3,3,2,2,1,1,n/a,1,1,-,Lots of notifications but they are useful,-,"Yes, it’s part of my job",Several times a month,n/a,n/a,Documentation,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,less SIGs,n/a,n/a,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,n/a,n/a,"walk through real code contributions in groups, come prepared with real issues suitable for that",1,5,4,3,3,3,2,2,-,1,2,3,4,5,n/a,n/a,-,,4,4,3,3,4,1,3,3,4,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,n/a,n/a,n/a,n/a,n/a,n/a,"yes, just contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10211899789,216437765,09/12/2018 10:20:12 AM,6 months -1 year,Org Member,"Yes, doing it on my own.",Asia,UTC+08:00,2-4,1,1,1,1,1,2,1,1,3,1,1,1,1,1,fejta-bot is too slow.,Way too many notifications with no benefits,The membership process.,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,n/a,n/a,Testing & Infrastructure,n/a,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,"No, i think the contributing process is clear to me.",n/a,n/a,n/a,Kubecon China 2018,n/a,n/a,n/a,n/a,n/a,N/A,N/A,2,2,3,3,,3,2,N/A,1,2,3,n/a,n/a,n/a,7,N/A,,2,2,2,3,5,1,3,1,3,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,Kubernetes blog,k/community repo in GH (Issues and/or PRs),n/a,no value for anyone,Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10211823751,216437765,09/12/2018 9:31:07 AM,1-2 years,Reviewer,"Yes, doing it on my own.",Europe,UTC+01:00,4+,5,2,1,2,1,1,1,1,2,1,1,1,1,1,none,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),"if a sig is labeled on a PR, members should get notification (e.g. auto mention sig-xxx-pr-reviews) sice sigs should be autolabeled by touching the code this would lead to better transparency ","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,get rid of Bazel - it is unnecessary barrier causing most merge conflicts on PRs forcing rebases (loosing lgtms and waiting for someone to re-review),n/a,n/a,Kubecon Europe 2018,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,n/a,n/a,make it 2 days :),1,1,3,1,4,4,5,4,none,n/a,2,3,n/a,n/a,n/a,7,n/a,,4,4,1,3,5,1,1,2,3,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,2,Not enough time in the week
+10211753707,216437765,09/12/2018 7:58:03 AM,Less than 6 months,"I’m not an org member yet, but working on it","Yes, doing it on my own.",North America,UTC-08:00,2-4,3,2,1,1,1,4,2,2,1,1,n/a,n/a,n/a,1,Automatic labeling of stale issues is the least useful feature.,Lots of notifications but they are useful,n/a,"Yes, it’s part of my job",A few times a week,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Better responsiveness from reviewers of PRs Reduce test flakiness,n/a,n/a,n/a,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,3,3,3,3,3,3,3,N/A,n/a,n/a,3,4,n/a,n/a,n/a,N/A,,3,5,2,4,5,1,2,2,3,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10211648722,216437765,09/12/2018 6:24:52 AM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Asia,UTC+05:30,2-4,1,3,1,4,4,4,4,2,3,n/a,1,1,1,1,"User need to remember each command, it could have been a bit easier if UI buttons were also present ",Lots of notifications but they are useful,not aware much of such other areas.. will share later on if come across..,"Yes, it’s part of my job",Every day,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Don’t contribute yet, hoping to start soon",Other,I came across a form to ask for mentor which was filled in.. but still I could not find any response for same.. it could have been better if someone can give a brief details on how to start contribution. different sig... their environment setup requirement. Actually these details would have helped me to choose the right SIG for me and have helped me to head start things... Looking forward for some response for form which I filled in regarding mentor.. ,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,5,4,4,4,3,4,3,I have started looking into it a couple of weeks ago.. will share these feedback in future..,1,2,3,n/a,n/a,n/a,7,There must be some spoc person for each SIG where we can ask for just some basic question(answers may be in form of email/docs etc),,3,5,3,4,2,1,3,4,3,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,Slack,n/a,A dedicated contributor site,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",Yes,3,Other (please specify):
+10211508705,216437765,09/12/2018 4:47:00 AM,2-3 years,Subproject Owner,"Yes, but would like mentorship.",Asia,UTC+08:00,2-4,4,4,1,1,1,3,3,1,1,1,1,1,1,1,"All above tools are useful, but it's better have a bot to ping approvers automatically if the PR hasn't got reviewed for a period.",Way too many notifications with no benefits,ping approvers,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,Advocacy and events,n/a,"Plugins & Drivers (CSI, CNI, cloud providers)","Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,n/a,Kubecon China 2018,n/a,n/a,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,1,5,5,5,5,5,5,5,Timezone is a main blocker for me. Maybe hold the meeting in multiple times?,,2,3,4,5,n/a,n/a,N/A,,3,5,3,2,5,2,2,3,4,n/a,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Yes,n/a,"A - Yes, I would love to mentor one or both programs",Yes,4,Not enough time in the week
+10211308450,216437765,09/12/2018 2:18:54 AM,Less than 6 months,"I’m not an org member yet, but working on it",Not really,Europe,UTC+01:00,2-4,2,2,2,3,3,3,4,4,3,n/a,n/a,1,1,n/a,I do not have much experience with these tools yet,Way too many notifications with no benefits,"I am too inexperienced to really say. The notifications are many, still trying to grasp it.","No, but I’m able to use “free” time at work",A few times a year,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,3,3,3,3,3,3,3,N/A,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10211251973,216437765,09/12/2018 2:17:31 AM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Asia,UTC+08:00,4+,2,1,1,2,3,1,3,1,1,1,1,1,1,1," k8s-ci-bot so simple and useful. Perhaps we need a document auto generate tool from source code comment, like Sphinx.",Right notifications at the right frequency,Messages archive? Maybe. I wish we can archive message about projects.,"Yes, it’s part of my job",Every day,n/a,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,It feels great now.,n/a,n/a,n/a,Kubecon China 2018,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,"Not currently, can learn a lot of content.",2,4,3,4,5,4,2,4,Not yet.,1,2,n/a,n/a,5,n/a,n/a,Not yet. Thanks.,,5,3,2,3,5,3,4,3,4,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,n/a,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"A - Yes, I would love to mentor one or both programs",Yes,3,Don't know enough to mentor
+10211204730,216437765,09/12/2018 1:21:13 AM,1-2 years,Had no idea this was even a thing,"Yes, but would like mentorship.",Europe,UTC±00:00,4+,2,5,3,3,3,4,3,2,3,1,1,n/a,n/a,1,some issue commands may be something that could be done manually with less noise,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),Some push to reviewers before coming stale,"Yes, it’s part of my job",A few times a year,n/a,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,More information I guess,0,4,3,2,5,4,4,2,Nope,1,n/a,3,n/a,5,n/a,7,N/A,,2,3,4,1,5,1,5,3,4,n/a,n/a,n/a,n/a,Twitter,A dedicated contributor site,Kubernetes blog,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
+10211174308,216437765,09/12/2018 1:06:14 AM,Less than 6 months,Reviewer,"Yes, but not sure I have time.",Europe,UTC+01:00,4+,3,4,1,2,1,4,3,3,3,1,n/a,1,1,1,Commands provide visibility,Way too many notifications with no benefits,no opinion,"No, but I’m able to use “free” time at work",Every day,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,I just want to say that the community is very friendly and welcoming - and that is a great start to get people on board!,n/a,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,N/A,0,3,3,3,3,2,2,3,Give the scale at question 19 - is 1 good or bad?,,2,n/a,4,n/a,n/a,n/a,N/A,,5,5,1,3,5,3,2,1,3,n/a,Dedicated discuss.k8s.io forum for contributors,n/a,Slack,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,1,Don't know enough to mentor
+10211083820,216437765,09/12/2018 12:31:46 AM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but not sure I have time.",Europe,UTC+02:00,2-4,1,2,1,2,2,3,4,3,2,1,n/a,n/a,1,n/a,"Automatic labeling is a bit misleading, but I think that's because the labels were changed recently. I'd like that if you close one issues all PR related to that issues will closed",Lots of notifications but they are useful,I think that some kind of automatic standardization checks for all the related repos to the project should be useful.,"No, but I’m able to use “free” time at work",Every day,n/a,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other,"I think that's impossible for newbie and part time contributors to catch up with the pace of the core products, however, I see lot of opportunities handling the low hanging fruit on new/incubator projects, helping with the bug triage and system administrations",n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,0,2,2,2,2,2,2,2,Continue to iterate and listen to the community,1,2,n/a,4,5,6,7,Centralize all that information,Better documentation,5,4,2,2,3,1,1,1,3,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10210970675,216437765,09/11/2018 11:26:17 PM,Just started,Reviewer,"Yes, but would like mentorship.",Asia,UTC+05:30,"None, Kubernetes is my first one!",3,2,3,1,4,4,3,4,1,n/a,1,1,n/a,n/a,-,Right notifications at the right frequency,-,"Yes, it’s part of my job",Several times a month,n/a,n/a,Documentation,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,Need more easy test environment deployment,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,-,,4,3,4,4,4,4,5,-,1,n/a,n/a,4,5,n/a,n/a,-,,1,1,3,2,4,2,3,4,1,n/a,Dedicated discuss.k8s.io forum for contributors,n/a,n/a,n/a,n/a,Kubernetes blog,n/a,n/a,no value for anyone,"No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",Yes,3,Not enough time in the week
+10210741104,216437765,09/11/2018 9:32:40 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-08:00,One more,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10210700667,216437765,09/11/2018 9:22:02 PM,3+ years,Subproject Owner,"No, I'm already an owner",North America,UTC-07:00,2-4,1,1,3,1,1,1,1,4,5,1,1,1,1,1,Requiring this survey question,Way too many notifications with no benefits,Fewer survey requirements,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,No,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,No,2,3,3,3,3,3,3,5,No,n/a,n/a,n/a,n/a,n/a,6,n/a,Making 1 mean I like the tool is extremely odd,,3,5,1,3,5,1,2,1,5,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Other (please specify):,Sometimes but don't think they work well,"B - No, I can’t/don’t want to",Yes,4,Wasn't connected with mentoring opportunities (reach out to us to get engaged!)
+10210653661,216437765,09/11/2018 8:55:20 PM,2-3 years,Org Member,Not really,North America,UTC-05:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10210647774,216437765,09/11/2018 9:03:27 PM,1-2 years,Had no idea this was even a thing,"Yes, but would like mentorship.",North America,UTC-06:00,2-4,3,3,3,3,3,3,3,3,3,n/a,1,1,1,1,Not sure,Lots of notifications but they are useful,Not sure,"Yes, it’s part of my job",A few times a year,n/a,n/a,Documentation,n/a,Advocacy and events,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,Other,Not sure,n/a,Kubecon North America 2017,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,N/A,1,5,4,3,3,3,4,2,N/A,1,2,3,n/a,n/a,n/a,n/a,N/A,,4,4,4,2,2,5,3,4,4,kubernetes-dev mailing list,n/a,n/a,Slack,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"A - Yes, I would love to mentor one or both programs",No,3,Don't know enough to mentor
+10210583897,216437765,09/11/2018 8:26:56 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-07:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10210532937,216437765,09/11/2018 8:17:59 PM,3+ years,Approver,"No, I'm already an owner",North America,UTC+08:00,"None, Kubernetes is my first one!",2,2,2,2,2,3,2,2,2,1,1,1,1,1,I'd like a /squash command that would take a PR and squash it for me. This would remove the one blocker from contributing to docs and KEPs purely from the Github UI.,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),Squashing via github command.,"Yes, it’s part of my job",Several times a month,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,Other,n/a,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,no,4,5,4,3,2,2,2,2,no,n/a,2,n/a,n/a,n/a,n/a,n/a,nope,,3,1,1,3,3,3,2,1,5,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,no value for anyone,"No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,1,Not enough time in the week
+10210530309,216437765,09/11/2018 8:20:07 PM,3+ years,Approver,Not really,Europe,UTC+02:00,One more,3,2,2,2,1,4,3,3,2,1,n/a,1,1,1,I wish cherrypicks were automated,Way too many notifications with no benefits,Cherrypick process,"Yes, it’s part of my job",Several times a month,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,Automate cherrypicking,n/a,n/a,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,No opinion,1,4,5,3,4,3,3,3,No opinion.,1,2,n/a,4,n/a,n/a,n/a,Nothing,,3,2,1,3,5,1,1,3,3,n/a,n/a,n/a,n/a,n/a,n/a,Kubernetes blog,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,3,"Nothing, am already a mentor"
+10210524466,216437765,09/11/2018 8:11:33 PM,Just started,Had no idea this was even a thing,"Yes, but not sure I have time.",North America,UTC-08:00,"None, Kubernetes is my first one!",1,2,2,1,1,1,2,2,1,1,n/a,n/a,n/a,n/a,-,Lots of notifications but they are useful,-,It’s entirely on my own time,Don’t know yet,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Don’t contribute yet, hoping to start soon",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,-,0,5,5,4,3,3,3,3,-,n/a,n/a,n/a,n/a,n/a,6,n/a,-,,2,3,3,1,2,2,2,2,2,n/a,n/a,n/a,n/a,n/a,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",Yes,4,Don't know enough to mentor
+10210517563,216437765,09/11/2018 8:01:32 PM,Just started,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-07:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10210505379,216437765,09/11/2018 8:11:10 PM,3+ years,Approver,"No, I'm already an owner",North America,UTC-05:00,4+,2,3,1,2,2,5,1,2,1,n/a,n/a,1,1,1,"automatic retest is frequently overaggressive, continuously testing a PR that has never passed tests automated tooling around cherry-pick and release branch management would be very helpful",Lots of notifications but they are useful,"release branch management (cherry-picks, tracking when a fix needs to be backported, when backport is complete, etc)","Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,"Plugins & Drivers (CSI, CNI, cloud providers)",n/a,n/a,n/a,n/a,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,"clarify the intent of each session (information distribution, brainstorming, feedback gathering, etc). each type of session is valuable, but confusing the intent often leads to competing emphases between presenters/attendees",4,2,2,2,4,3,4,2,n/a,1,n/a,n/a,n/a,5,n/a,n/a,making common contributor pain points visible to owning sigs and ensuring that feedback loop informs sig actions,,3,5,1,3,5,2,4,2,4,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",Yes,3,Not enough time in the week
+10210487075,216437765,09/11/2018 8:04:00 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, doing it on my own.",North America,UTC-05:00,"None, Kubernetes is my first one!",3,1,2,2,3,3,4,4,1,n/a,n/a,n/a,n/a,1,N/A,Lots of notifications but they are useful,N/A,"Yes, it’s part of my job",A few times a week,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,N/A,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,5,4,2,4,4,4,4,N/A,1,n/a,n/a,n/a,n/a,n/a,7,N/A,,4,5,2,3,4,1,1,1,4,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I didn't know they were there",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10210482937,216437765,09/11/2018 7:57:46 PM,1-2 years,Reviewer,"Yes, but would like mentorship.",Asia,UTC+05:30,"None, Kubernetes is my first one!",4,2,2,2,1,4,5,3,4,1,1,1,1,1,NA,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),NA,It’s entirely on my own time,Every day,Core code inside of kubernetes/kubernetes,n/a,Documentation,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,"More hallway discussions, round table discussions and explanations about overall project architecture, goals, etc",1,2,5,4,5,4,5,5,"I think it's been mentioned previously but if SIG updates could include places where the SIG is looking for contributions or if they could point out good first issues, that would really awesome!",1,2,3,4,5,6,7,NA,,5,5,4,4,5,4,3,4,5,kubernetes-dev mailing list,Dedicated discuss.k8s.io forum for contributors,Contributor Experience mailing list,Slack,Twitter,n/a,Kubernetes blog,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",Yes,5,"Nothing, am already a mentor"
+10210478715,216437765,09/11/2018 7:54:05 PM,3+ years,Approver,Not really,North America,UTC-04:00,2-4,3,3,2,1,1,2,1,1,2,1,1,1,1,1,"If I had to choose, automatic labeling of stale issues. They're all useful, but the others are more important IMHO.",Lots of notifications but they are useful,Not sure,"Yes, it’s part of my job",A few times a year,Core code inside of kubernetes/kubernetes,n/a,n/a,n/a,n/a,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,n/a,n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,Have more dedicated face to face time with other community members (hackathon style),2,4,4,2,3,3,4,4,N/A - I don't regularly attend,1,2,n/a,n/a,n/a,n/a,7,Not sure,,2,5,2,3,3,1,1,2,2,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,3,"Nothing, am already a mentor"
+10210468433,216437765,09/11/2018 7:57:49 PM,Less than 6 months,Org Member,"Yes, doing it on my own.",North America,UTC-08:00,"None, Kubernetes is my first one!",4,3,1,2,5,2,1,1,1,1,n/a,n/a,n/a,n/a,"Issue commands are noisy, awkward, and sometimes unintuitive.",Lots of notifications but they are useful,¯\_(ツ)_/¯,"Yes, it’s part of my job",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,Testing & Infrastructure,n/a,n/a,n/a,n/a,n/a,n/a,Tests that don't flake repeatedly on every PR.,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,None,N/A,0,3,3,3,3,3,3,3,"I've not gone (and there was no N/A option, and the question as mandatory).",n/a,n/a,n/a,4,n/a,n/a,n/a,Reducing test flakes!,,5,5,1,3,4,1,1,1,3,kubernetes-dev mailing list,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors","No, because I don't think my issues qualify",n/a,"B - No, I can’t/don’t want to",No,3,Don't know enough to mentor
+10208654544,216437765,09/11/2018 12:30:19 AM,Less than 6 months,Org Member,"Yes, but not sure I have time.",Europe,UTC+02:00,2-4,1,1,2,2,1,3,4,2,2,n/a,n/a,1,1,1,"Mailing test failures is way too spammy. If you have a PR with failing tests, you can easily get way too much emails and it becomes hard to navigate.","Right notifications are being made, but too frequently",Nothing at all. All the most frequent task are automated.,"Yes, it’s part of my job",Several times a month,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,"Easier to find issues to work on. Looking by the issue tracker, it can be find to hard a right one by yourself. Usually, you first need to get in contact with SIG members and to see what you can help with with. Some ""automation"" around that part could be useful.",n/a,n/a,Kubecon Europe 2018,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,"Not sure :) I liked the format of contributor summer we had this year on KubeCon EU, and would definitely love something similar on upcoming conferences.",1,5,5,5,5,5,5,5,Community meeting is the best meeting :D,1,n/a,3,4,n/a,n/a,7,"Nothing at all, I think it covers the most important points.",,2,5,3,4,4,5,1,3,5,n/a,n/a,n/a,Slack,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,5,Don't know enough to mentor
+10208576582,216437765,09/11/2018 8:25:13 AM,1-2 years,Reviewer,"Yes, but not sure I have time.",North America,UTC-04:00,"None, Kubernetes is my first one!",5,5,1,2,5,2,2,3,3,n/a,n/a,1,n/a,n/a,n/a,Lots of notifications but they are useful,n/a,I’m a student,A few times a week,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon Europe 2017,Kubecon North America 2017,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,n/a,1,3,2,4,3,3,4,5,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10208225442,216437765,09/28/2018 9:09:36 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",Europe,UTC+01:00,"None, Kubernetes is my first one!",n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10208136069,216437765,09/10/2018 8:39:41 PM,6 months -1 year,"I’m not an org member yet, but working on it","Yes, but would like mentorship.",North America,UTC-07:00,"None, Kubernetes is my first one!",1,1,5,3,4,1,4,4,1,n/a,1,1,n/a,n/a,"Since all my contribution is documentation, I don't find any use in retesting, though I'm confident it would be if I was writing code.","Right notifications are being made, but too frequently",Maybe importing and correlation of documentation from outside sources,It’s complicated,Don’t know yet,n/a,n/a,Documentation,n/a,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,I'd love some review-able tutorials on github usage. It comes up just often enough for me to be rusty every time I use it.,n/a,Kubecon North America 2017,Kubecon Europe 2018,Kubecon China 2018,Kubecon North America 2018,Kubecon Europe 2019,n/a,Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,Maybe some active mentoring/pairing of new contributors with veterans to provide color during the contributor summits.,1,4,4,3,4,5,5,4,no,1,2,3,4,5,n/a,n/a,Maybe even more diversity and outreach programs?,Diversity Initatives,2,5,4,5,1,4,1,2,5,n/a,n/a,Contributor Experience mailing list,Slack,Twitter,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Other (please specify):,"No, because I'm not really filing many issues.","B - No, I can’t/don’t want to",Yes,5,Employer doesn't support spending time mentoring
+10208101835,216437765,09/10/2018 8:05:32 PM,6 months -1 year,Org Member,"Yes, but not sure I have time.",Europe,UTC+02:00,2-4,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10208029121,216437765,09/10/2018 7:44:22 PM,2-3 years,Subproject Owner,"No, I'm already an owner",North America,UTC-08:00,4+,1,1,1,1,1,1,1,1,1,1,1,1,1,1,N/A,Lots of notifications but they are useful,n/a,"No, but I’m able to use “free” time at work",Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,n/a,Testing & Infrastructure,n/a,Community & Project management; SIG Chair etc.,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,n/a,Nope,n/a,n/a,n/a,n/a,Kubecon North America 2018,n/a,n/a,n/a,n/a,N/A,0,1,3,2,5,3,5,5,Nope,n/a,2,n/a,4,5,n/a,n/a,N/A,,5,5,2,4,5,1,1,3,4,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,n/a,n/a,n/a,n/a,n/a,n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",Yes,4,Not enough time in the week
+10208020798,216437765,09/26/2018 4:25:29 AM,3+ years,Subproject Owner,"Yes, but not sure I have time.",South America,UTC-04:00,4+,3,3,3,3,3,3,3,3,3,1,n/a,1,1,n/a,.,Not enough notifications and I frequently miss important things (e.g when my review/approval is needed),.,"Yes, it’s part of my job",Every day,n/a,Code inside of another repo in the kubernetes/* GitHub organization,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Other,n/a,n/a,Kubecon North America 2017,Kubecon Europe 2018,n/a,n/a,n/a,n/a,n/a,n/a,N/A,2,4,3,3,3,3,3,3,N/A,n/a,2,n/a,n/a,n/a,6,n/a,N/A,,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a
+10204125563,216437765,09/08/2018 6:20:19 AM,1-2 years,Subproject Owner,"Yes, doing it on my own.",North America,UTC-05:00,One more,3,2,3,2,1,3,3,3,2,1,1,1,1,1,n/a,Lots of notifications but they are useful,"KEP / Features / requests for new features, specifically for new Contributors or passerbys","Yes, it’s part of my job",Every day,n/a,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,Community & Project management; SIG Chair etc.,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,n/a,Kubecon North America 2018,Kubecon Europe 2019,"Ecosystem events, eg. Helm Summit",Other conferences with a Kubernetes track (like DockerCon or ContainerDay),n/a,"Haven't been to one yet, but a strong focus on all types of Contributors (e.g., non-code) and having leaders from those segments available is key.",0,4,5,4,5,5,5,5,I think it's pretty great. Would like to see more around product roadmap from SIGs,1,2,3,4,5,6,7,Process around creating a better feedback loop for non-contributors requesting improvements. Triage to delivery.,,5,5,3,5,5,3,1,3,5,kubernetes-dev mailing list,n/a,n/a,Slack,Twitter,n/a,n/a,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Not as much as I should because I forget,n/a,"B - No, I can’t/don’t want to",Yes,5,Not enough time in the week
+10203989035,216437765,09/08/2018 4:15:43 AM,1-2 years,Org Member,"Yes, doing it on my own.",Europe,UTC+03:00,2-4,3,3,3,1,4,5,1,3,2,n/a,1,1,1,1,n/a,Right notifications at the right frequency,n/a,n/a,Every day,Core code inside of kubernetes/kubernetes,Code inside of another repo in the kubernetes/* GitHub organization,Documentation,Testing & Infrastructure,Advocacy and events,n/a,n/a,"Related projects (Kubeadm, Helm, container runtimes, etc.)",n/a,Other,"less test flakes, faster code reviews",n/a,n/a,Kubecon Europe 2018,n/a,n/a,Kubecon Europe 2019,n/a,n/a,n/a,n/a,n/a,4,3,2,4,3,3,3,n/a,n/a,2,n/a,n/a,n/a,n/a,n/a,n/a,,5,5,3,5,5,1,1,5,5,kubernetes-dev mailing list,n/a,Contributor Experience mailing list,Slack,n/a,n/a,Kubernetes blog,k/community repo in GH (Issues and/or PRs),n/a,"yes, for both users and contributors",Yes,n/a,"B - No, I can’t/don’t want to",No,3,Not enough time in the week
\ No newline at end of file diff --git a/sig-contributor-experience/contribex-survey-2018.md b/sig-contributor-experience/contribex-survey-2018.md new file mode 100644 index 00000000..b549cbf0 --- /dev/null +++ b/sig-contributor-experience/contribex-survey-2018.md @@ -0,0 +1,146 @@ +Please read before using the data. + +| Data | Info | +| --- | --- | +Title | Kubernetes Contributor Experience Survey 2018 +Authors | @parispittman, @jberkus, and many contributor experience members +Tool Used | SurveyMonkey; @idvoretskyi entered the survey into the tool from the CNCF account and exported the data +Start | September 08, 2018 (soft launch on Slack); September 11, 2018 (full launch on kubernetes-sig-contribex@googlegroups.com) +End | October 1, 2018 +Subject(s) | automation, community meeting, mentoring, communication, demographic information about contributors, events +Language | English +Data Processing | All personal identifiers have been removed. 73 respondents provided their email addresses for follow-up; those addresses have been scrubbed. +Format | .csv +File Name | contribex-survey-2018.csv + +Many column headers have been changed because of their length, to provide context, or because they produced a two-header column from ordinal scale and ranking questions. The changes are documented below. + +Some values represent a range of feelings/opinions. Check the question to find out the descriptive range. (ex: 1=least useful, 5=most useful) + + +### Two-header column changes: + +Columns: K-S +Question: Please rate the below parts of the contribution process by how challenging they are, from 1 (not a problem) to 5 (a frequent blocker) + +Columns: T-X +Question: Which of the following tooling do you find useful? + +Columns: Y, AA, AN, AX, BG, BO +Question: open-ended questions; may need to assign values to capture trends + +Columns: AD-AM +Question: What areas of Kubernetes do you contribute to? Please check all that apply. 
+ +Columns: AO-AW +Question: What conferences have you previously attended or are planning to attend? + +Columns: AZ-BF +Question: How useful do you find each section of the Thursday's Community Meeting? (1 least useful; 5 most useful) + +Columns: BH-BN +Question: Some of the major projects we are working on are listed below, check one that is most important to you that we carry through to completion. + +Columns: BP-BX +Question: Of our various communications channels, please rate which ones you use and/or check most frequently on a 1-5 scale, where 1 is “never”, 3 is “several times a month” and 5 is “every day”. + +Columns: BY-CG +Question: Which of these channels is most likely to reach you first for news about decisions, changes, additions, and/or announcements to the contributor process or community matters? + +### Full Question List: + +1. How long have you been contributing to Kubernetes? +2. What level of the Contributor Ladder do you consider yourself to be on? +3. Are you interested in advancing to the next level of the Contributor Ladder? +4. What region of the world are you in? +5. What timezone are you most often in? (Check your UTC offset here) +6. How many other open source projects not in the Kubernetes ecosystem do you contribute to? +7. Please rate the below parts of the contribution process by how challenging they are, from 1 (not a problem) to 5 (a frequent blocker): +Code/Documentation review +Communication +GitHub tools and processes (not our customized tooling) +Finding the right SIG for your contributions +Our CI, labels, and crafted customized automation +Debugging test failures +Finding appropriate issues to work on +Setting up development environment +Having PRs rejected +8. Which of the following tooling do you find useful? 
+automatic /retest of flakes (fejta-bot) +automatic labeling of stale issues (fejta-bot) +issue commands like /assign, /kind bug (k8s-ci-robot) +PR commands like /approve, /lint (k8s-ci-robot) +automatic merging of approved PRs (k8s-merge-robot and k8s-ci-bot) +9. What tool above is the least useful and why? Wish something was automated that isn’t? List it here. +10. How do you perceive the current notification volume and utility? +11. Which areas could use additional automation? +12. Does your employer support your contributions to Kubernetes? +13. How often do you contribute upstream (code, docs, issue triage, etc.)? +14. What areas of Kubernetes do you contribute to? Please check all that apply. +Core code inside of kubernetes/kubernetes +Code inside of another repo in the kubernetes/* GitHub organization +Documentation +Testing & Infrastructure +Advocacy and events +Community & Project management; SIG Chair etc. +Plugins & Drivers (CSI, CNI, cloud providers) +Related projects (Kubeadm, Helm, container runtimes, etc.) +Don’t contribute yet, hoping to start soon +15. Are there specific ways the project could make contributing easier for you? +16. What conferences have you previously attended or are planning to attend? +Kubecon Europe 2017 +Kubecon North America 2017 +Kubecon Europe 2018 +Kubecon China 2018 +Kubecon North America 2018 +Kubecon Europe 2019 +Ecosystem events, eg. Helm Summit +Other conferences with a Kubernetes track (like DockerCon or ContainerDay) +None +17. Do you have any suggestions on how to make the Contributor Summits more valuable to you (N/A if not applicable)? +18. How many Kubernetes Contributor Summits have you attended? +19. How useful do you find each section of the Thursday's Community Meeting? (1 least useful; 5 most useful) +Demo +KEP of the Week +Devstats Chart of the Week +Release Updates +SIG Updates +Announcements +Shoutouts +20. Any feedback on how the community meeting can be better? +21. 
Some of the major projects we are working on are listed below, check one that is most important to you that we carry through to completion: +Mentoring programs for all levels +GitHub Management +Delivering valuable contributor summits at relevant events +Launching a contributor site for a one stop shop for tailored project news, info, docs, and calendar +Discovery and planning around communication and collaboration platforms to lead to potential centralization and/or consolidation +Improving DevStats +Keeping our community safe on our various communication platforms through moderation guidelines and new approaches +22. What is missing from that list entirely? Why? +23. Of our various communications channels, please rate which ones you use and/or check most frequently on a 1-5 scale, where 1 is “never”, 3 is “several times a month” and 5 is “every day”. +Google Groups/Mailing Lists +Slack +discuss.kubernetes.io +Zoom video conferencing/meetings +Discussions on Github Issues and PRs +Unofficial channels (IRC, Hangouts, Twitter, etc.) +StackOverflow +YouTube recordings (community meetings, SIG/WG meetings, etc.) +Google Docs/Forms/Sheets, etc (meeting agendas, etc) +24. Which of these channels is most likely to reach you first for news about decisions, changes, additions, and/or announcements to the contributor process or community matters? +kubernetes-dev mailing list +Dedicated discuss.k8s.io forum for contributors +Contributor Experience mailing list +Slack +Twitter +A dedicated contributor site +Kubernetes blog +k/community repo in GH (Issues and/or PRs) +25. Do you think Slack adds value to the project for users and/or contributors? +26. Have you ever used the Help Wanted and/or Good First Issue labels on issues you file to find contributors? +27. Are you interested in mentoring a Kubernetes upstream Intern for Outreachy or Google Summer of Code? We are also looking for organizations to sponsor if your employer is interested. +28. 
Have you watched or participated in an episode of our YouTube mentoring series Meet Our Contributors? +29. How useful did you find Meet Our Contributors? (1 - not useful at all; 5 - extremely useful) If you have suggestions on improvements, leave those in the feedback box at the end of the survey. +30. What remains a blocker to becoming a mentor? +31. Would you like us to follow up with you about any of your answers above? If so, share your email address here: +32. Do you have any comments, questions, or clarifications for your answers on this survey? Leave the general feedback here:
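Before analyzing the data, the literal string "n/a" used throughout the CSV for unanswered questions should be normalized to a real missing value, or it will skew counts and averages. A minimal Python sketch of that cleanup step; the two sample rows are copied from the dataset above but truncated to their first ten columns, so real column positions must be checked against the header mapping in this document:

```python
import csv
import io

# Two rows from contribex-survey-2018.csv, truncated to ten columns for brevity
# (respondent id, collector id, date, tenure, ladder level, advancement interest,
#  region, timezone, other-project count, first rating column).
sample = (
    '10210532937,216437765,09/11/2018 8:17:59 PM,3+ years,Approver,'
    '"No, I\'m already an owner",North America,UTC+08:00,n/a,2\n'
    '10208225442,216437765,09/28/2018 9:09:36 PM,6 months -1 year,'
    '"I\'m not an org member yet, but working on it",'
    '"Yes, but would like mentorship.",Europe,UTC+01:00,n/a,n/a\n'
)

def load_rows(fh):
    """Yield survey rows with the literal 'n/a' sentinel mapped to None."""
    for row in csv.reader(fh):
        yield [None if cell.strip().lower() == "n/a" else cell for cell in row]

rows = list(load_rows(io.StringIO(sample)))
```

For the real file, pass an open handle to contribex-survey-2018.csv instead of the StringIO sample; ordinal answers (the 1-5 ratings) will still need an `int()` conversion after the "n/a" filtering.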
\ No newline at end of file diff --git a/sig-contributor-experience/devstats/OWNERS b/sig-contributor-experience/devstats/OWNERS new file mode 100644 index 00000000..e4043397 --- /dev/null +++ b/sig-contributor-experience/devstats/OWNERS @@ -0,0 +1,16 @@ +# see https://go.k8s.io/owners + +reviewers: + - Phillels + - dims + - jberkus + - nikhita + - parispittman + - spiffxp +approvers: + - Phillels + - lukaszgryglicki + - jberkus + - spiffxp +labels: + - sig/contributor-experience diff --git a/sig-contributor-experience/devstats/README.md b/sig-contributor-experience/devstats/README.md new file mode 100644 index 00000000..28c396ef --- /dev/null +++ b/sig-contributor-experience/devstats/README.md @@ -0,0 +1,18 @@ +# devstats + +This file documents the devstats subproject. We are responsible for +continued advocacy of CNCF's devstats project within the kubernetes +community. The kubernetes project has a significant number of metrics +and workflows that are unique amongst CNCF projects, thus we are more +actively involved in ongoing development and maintenance of meaningful +devstats metrics and dashboards than most other CNCF projects. + +## Things we have done in the past + +- Graph of the Week at kubernetes community meetings + - TODO: list of meetings and graphs presented? +- Added descriptions to each of the devstats dashboards +- Adjusted repo groups to be generated from sigs.yaml instead of the + previous subjective/opaque groupings +- Consulted with the devstats maintainers to suggest new metrics and + new dashboards diff --git a/sig-contributor-experience/migrated-from-wiki/README.md b/sig-contributor-experience/migrated-from-wiki/README.md deleted file mode 100644 index c34a79fe..00000000 --- a/sig-contributor-experience/migrated-from-wiki/README.md +++ /dev/null @@ -1 +0,0 @@ -The content in here has been migrated from https://github.com/kubernetes/community/wiki and is likely severely out of date. 
Please contact this SIG if you have questions or ideas about where this content should go. diff --git a/sig-contributor-experience/migrated-from-wiki/effective-reviewable.md b/sig-contributor-experience/migrated-from-wiki/effective-reviewable.md deleted file mode 100644 index 1df0337c..00000000 --- a/sig-contributor-experience/migrated-from-wiki/effective-reviewable.md +++ /dev/null @@ -1,12 +0,0 @@ -*Or, one weird trick to make Reviewable awesome* - -The Kubernetes team is still new to _Reviewable_. As you discover new cool features and workflows, add them here. Once we have built up a good number of tricks we can reorganize this list. - -- Hold off on publishing comments (using the "Publish" button) until you have completed your review. [(source)](@pwittrock) -- When leaving comments, select a "disposition" (the button with your profile picture) to indicate whether the comment requires resolution, or is just advisory and hence requires no response. [(source)](@pwittrock) -- Change a comment's "disposition" to "close" those to which the author didn't respond explicitly but did address with satisfactory changes to the code. Otherwise, the comment hangs out there awaiting a response; in contrast to GitHub's review system, _Reviewable_ doesn't consider a change to the target line to be a sufficient indicator of resolution or obsolescence, which is a safer design. Use the <kbd>y</kbd> to acknowledge the current comment, which indicates that no further response is necessary. -- To "collapse" a whole file in the multi-file view, click the rightmost value in the revision range control. This is effectively saying, "Show no diffs." -- Use the red/green "eye" icon to indicate completion of and to keep track of which files you have reviewed. The <kbd>x</kbd> keyboard shortcut toggles the completion status of the file currently focused in the status bar across the top. 
-- Use the <kbd>p</kbd> and <kbd>n</kbd> keys to navigate to the previous and next unreviewed file—that is, those whose status is a red circle with a crossed white eye icon, meaning incomplete, as opposed to those with a green circle with a white eye, meaning complete. -- Use the <kbd>j</kbd> and <kbd>k</kbd> keys to navigate to the previous and next comment. Use <kbd>S-j</kbd> and <kbd>S-k</kbd> to navigate between the previous and next _unaddressed_ comment. Usually as the reviewer, you use the latter to go back and check on whether your previous suggestions were addressed. -- Reply with `+lgtm` to apply the "LGTM" label directly from _Reviewable_.
\ No newline at end of file diff --git a/sig-docs/README.md b/sig-docs/README.md index 46506d63..705a7c65 100644 --- a/sig-docs/README.md +++ b/sig-docs/README.md @@ -23,8 +23,8 @@ Covers documentation, doc processes, and doc publishing for Kubernetes. ### Chairs The Chairs of the SIG run operations and processes governing the SIG. -* Zach Corleissen (**[@zacharysarah](https://github.com/zacharysarah)**), Linux Foundation * Andrew Chen (**[@chenopis](https://github.com/chenopis)**), Google +* Zach Corleissen (**[@zacharysarah](https://github.com/zacharysarah)**), Linux Foundation * Jennifer Rondeau (**[@bradamant3](https://github.com/bradamant3)**), Heptio ## Contact @@ -49,10 +49,11 @@ Note that the links to display team membership will only work if you are a membe | Team Name | Details | Description | | --------- |:-------:| ----------- | -| @kubernetes/sig-docs-maintainers | [link](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers) | Documentation Maintainers | -| @kubernetes/sig-docs-pr-reviews | [link](https://github.com/orgs/kubernetes/teams/sig-docs-pr-reviews) | Documentation PR Reviewers | -| @kubernetes/sig-docs-ko-owners | [link](https://github.com/orgs/kubernetes/teams/sig-docs-ko-owners) | Korean L10n Repository Owners | -| @kubernetes/sig-docs-ja-owners | [link](https://github.com/orgs/kubernetes/teams/sig-docs-ja-owners) | Japanese L10n Repository Owners | +| @kubernetes/sig-docs-maintainers | [link](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers) | Documentation maintainers | +| @kubernetes/sig-docs-pr-reviews | [link](https://github.com/orgs/kubernetes/teams/sig-docs-pr-reviews) | Documentation PR reviews | +| @kubernetes/sig-docs-ko-owners | [link](https://github.com/orgs/kubernetes/teams/sig-docs-ko-owners) | Korean localization | +| @kubernetes/sig-docs-ja-owners | [link](https://github.com/orgs/kubernetes/teams/sig-docs-ja-owners) | Japanese localization | +| @kubernetes/sig-docs-zh-owners | 
[link](https://github.com/orgs/kubernetes/teams/sig-docs-zh-owners) | Chinese localization | <!-- BEGIN CUSTOM CONTENT --> ## Goals diff --git a/sig-governance.md b/sig-governance.md index 4f6e857c..b9a6760f 100644 --- a/sig-governance.md +++ b/sig-governance.md @@ -48,7 +48,20 @@ Guidelines for drafting a SIG Charter can be found [here](/committee-steering/go * Leads should [subscribe to the kubernetes-sig-leads mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-leads) * Submit a PR to add a row for the SIG to the table in the kubernetes/community README.md file, to create a kubernetes/community directory, and to add any SIG-related docs, schedules, roadmaps, etc. to your new kubernetes/community/SIG-foo directory. -#### Creating a mailing list +### Discussion Platforms + +Your SIG needs a place to discuss topics asynchronously. You have two options: a traditional mailing list via Google Groups, or a category on [discuss.kubernetes.io](https://discuss.kubernetes.io). The main difference is that Groups is primarily email-based with a web UI tacked on, while Discuss is primarily a web UI with email tacked on. The other difference is that your SIG/WG is responsible for moderating its own Google Group, whereas a Discuss category relies on the usual community moderation. + +- Working Groups, due to their temporary nature, are strongly encouraged to use an existing SIG mailing list if appropriate, or otherwise a Discuss category for less management overhead. +- SIGs, due to their use of calendars and Zoom accounts, are strongly encouraged to use a traditional mailing list. + +Choose one: + +#### Create a Category + +Post a message asking for a category in the [Site Feedback and Help](https://discuss.kubernetes.io/c/site-feedback) section; a moderator will create your category and provide you with a URL and a mail address to post to. 
+ +#### Creating a Google Group Create a Google Group at [https://groups.google.com/forum/#!creategroup](https://groups.google.com/forum/#!creategroup), following this procedure: @@ -57,13 +70,16 @@ Each SIG must have two discussion groups with the following settings. - kubernetes-sig-foo (the discussion group): - Anyone can view content. - Anyone can join. - - Anyone can post. + - Moderate messages from non-members of the group. - Only members can view the list of members. - kubernetes-sig-foo-leads (list for the leads, to be used with Zoom and Calendars) - Only members can view group content. - Anyone can apply to join. - - Anyone can post. + - Moderate messages from non-members of the group. - Only members can view the list of members. -- Groups should be created as e-mail lists with at least three owners (including parispittman at google.com and ihor.dvoretskyi at gmail.com); -- To add the owners, visit the Group Settings (drop-down menu on the right side), select Direct Add Members on the left side and add Paris and Ihor via email address (with a suitable welcome message); in Members/All Members select Ihor and Paris and assign to an "owner role" +- Groups should be created as e-mail lists with at least three owners (including parispittman at google.com, jorge@heptio.com, and ihor@cncf.io) +- To add the owners, visit the Group Settings (drop-down menu on the right side), select Direct Add Members on the left side, and add Paris, Jorge, and Ihor via email address (with a suitable welcome message); in Members/All Members, select Paris, Jorge, and Ihor and assign them the "owner" role - Set "View topics", "Post", "Join the Group" permissions to be "Public"; + +Familiarize yourself with the project's [moderation guidelines](https://github.com/kubernetes/community/blob/master/communication/moderation.md). Chairs should be aware that a new group will require an initial investment of moderation time while the group establishes itself. 
+ diff --git a/sig-list.md b/sig-list.md index 438977b1..faf1fe25 100644 --- a/sig-list.md +++ b/sig-list.md @@ -28,14 +28,14 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |[Auth](sig-auth/README.md)|auth|* [Mike Danese](https://github.com/mikedanese), Google<br>* [Mo Khan](https://github.com/enj), Red Hat<br>* [Tim Allclair](https://github.com/tallclair), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-auth)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-auth)|* Regular SIG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Autoscaling](sig-autoscaling/README.md)|autoscaling|* [Marcin Wielgus](https://github.com/mwielgus), Google<br>* [Solly Ross](https://github.com/directxman12), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-autoscaling)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-autoscaling)|* Regular SIG Meeting: [Mondays at 14:00 UTC (biweekly/triweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[AWS](sig-aws/README.md)|aws|* [Justin Santa Barbara](https://github.com/justinsb)<br>* [Kris Nova](https://github.com/kris-nova), Heptio<br>* [Nishi Davidson](https://github.com/d-nishi), AWS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-aws)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-aws)|* Regular SIG Meeting: [Fridays at 9:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> -|[Azure](sig-azure/README.md)|azure|* [Stephen Augustus](https://github.com/justaugustus), Red Hat<br>* [Shubheksha Jalan](https://github.com/shubheksha), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-azure)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-azure)|* 
Regular SIG Meeting: [Wednesdays at 16:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> +|[Azure](sig-azure/README.md)|azure|* [Stephen Augustus](https://github.com/justaugustus), Red Hat<br>* [Dave Strebel](https://github.com/dstrebel), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-azure)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-azure)|* Regular SIG Meeting: [Wednesdays at 16:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Big Data](sig-big-data/README.md)|big-data|* [Anirudh Ramanathan](https://github.com/foxish), Rockset<br>* [Erik Erlandson](https://github.com/erikerlandson), Red Hat<br>* [Yinan Li](https://github.com/liyinan926), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-big-data)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-big-data)|* Regular SIG Meeting: [Wednesdays at 17:00 UTC (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[CLI](sig-cli/README.md)|cli|* [Maciej Szulik](https://github.com/soltysh), Red Hat<br>* [Sean Sullivan](https://github.com/seans3), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cli)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cli)|* Regular SIG Meeting: [Wednesdays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> -|[Cloud Provider](sig-cloud-provider/README.md)|cloud-provider|* [Andrew Sy Kim](https://github.com/andrewsykim), DigitalOcean<br>* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation<br>* [Jago Macleod](https://github.com/jagosan), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cloud-provider)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider)|* Regular 
SIG Meeting: [Wednesdays at 1:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> -|[Cluster Lifecycle](sig-cluster-lifecycle/README.md)|cluster-lifecycle|* [Robert Bailey](https://github.com/roberthbailey), Google<br>* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)<br>* [Timothy St. Clair](https://github.com/timothysc), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular SIG Meeting: [Tuesdays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* kubeadm Office Hours: [Wednesdays at 09:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API office hours: [Wednesdays at 10:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API Provider Implementers' office hours (EMEA): [Wednesdays at 15:00 CEST (Central European Summer Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API Provider Implementers' office hours (US West Coast): [Tuesdays at 12:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API (AWS implementation) office hours: [Mondays at 10:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* kops Office Hours: [Fridays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> +|[Cloud Provider](sig-cloud-provider/README.md)|cloud-provider|* [Andrew Sy Kim](https://github.com/andrewsykim), 
DigitalOcean<br>* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation<br>* [Jago Macleod](https://github.com/jagosan), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cloud-provider)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider)|* Regular SIG Meeting: [Wednesdays at 1:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* (cloud-provider-extraction) Weekly Sync removing the in-tree cloud providers led by @cheftako and @d-nishi: [Thursdays at 13:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1KLsGGzNXQbsPeELCeF_q-f0h0CEGSe20xiwvcR2NlYM/edit)<br> +|[Cluster Lifecycle](sig-cluster-lifecycle/README.md)|cluster-lifecycle|* [Robert Bailey](https://github.com/roberthbailey), Google<br>* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)<br>* [Timothy St. Clair](https://github.com/timothysc), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular SIG Meeting: [Tuesdays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* kubeadm Office Hours: [Wednesdays at 09:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API office hours: [Wednesdays at 10:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API Provider Implementers' office hours (EMEA): [Wednesdays at 15:00 CEST (Central European Summer Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API Provider Implementers' office hours (US West Coast): [Tuesdays at 12:00 PT (Pacific Time) 
(weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API (AWS implementation) office hours: [Mondays at 10:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* kops Office Hours: [Fridays at 09:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Kubespray Office Hours: [Wednesdays at 07:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Cluster Ops](sig-cluster-ops/README.md)|cluster-ops|* [Rob Hirschfeld](https://github.com/zehicle), RackN<br>* [Jaice Singer DuMars](https://github.com/jdumars), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-ops)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-ops)|* Regular SIG Meeting: [Thursdays at 20:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Contributor Experience](sig-contributor-experience/README.md)|contributor-experience|* [Elsie Phillips](https://github.com/Phillels), CoreOS<br>* [Paris Pittman](https://github.com/parispittman), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-contribex)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-contribex)|* Regular SIG Meeting: [Wednesdays at 9:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> -|[Docs](sig-docs/README.md)|docs|* [Zach Corleissen](https://github.com/zacharysarah), Linux Foundation<br>* [Andrew Chen](https://github.com/chenopis), Google<br>* [Jennifer Rondeau](https://github.com/bradamant3), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-docs)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)|* Regular SIG Meeting: 
[Tuesdays at 17:30 UTC (weekly - except fourth Tuesday every month)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* APAC SIG Meeting: [Wednesdays at 02:00 UTC (monthly - fourth Wednesday every month)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> +|[Docs](sig-docs/README.md)|docs|* [Andrew Chen](https://github.com/chenopis), Google<br>* [Zach Corleissen](https://github.com/zacharysarah), Linux Foundation<br>* [Jennifer Rondeau](https://github.com/bradamant3), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-docs)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)|* Regular SIG Meeting: [Tuesdays at 17:30 UTC (weekly - except fourth Tuesday every month)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* APAC SIG Meeting: [Wednesdays at 02:00 UTC (monthly - fourth Wednesday every month)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[GCP](sig-gcp/README.md)|gcp|* [Adam Worrall](https://github.com/abgworrall), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-gcp)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-gcp)|* Regular SIG Meeting: [Thursdays at 16:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[IBMCloud](sig-ibmcloud/README.md)|ibmcloud|* [Khalid Ahmed](https://github.com/khahmed), IBM<br>* [Richard Theis](https://github.com/rtheis), IBM<br>* [Sahdev Zala](https://github.com/spzala), IBM<br>|* [Slack](https://kubernetes.slack.com/messages/sig-ibmcloud)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-ibmcloud)|* Regular SIG Meeting: [Wednesdays at 14:00 EST (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> 
|[Instrumentation](sig-instrumentation/README.md)|instrumentation|* [Piotr Szczesniak](https://github.com/piosz), Google<br>* [Frederic Branczyk](https://github.com/brancz), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-instrumentation)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-instrumentation)|* Regular SIG Meeting: [Thursdays at 17:30 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> @@ -44,14 +44,14 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |[Node](sig-node/README.md)|node|* [Dawn Chen](https://github.com/dchen1107), Google<br>* [Derek Carr](https://github.com/derekwaynecarr), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-node)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-node)|* Regular SIG Meeting: [Tuesdays at 10:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[OpenStack](sig-openstack/README.md)|openstack|* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation<br>* [David Lyle](https://github.com/dklyle), Intel<br>* [Robert Morse](https://github.com/rjmorse), Ticketmaster<br>|* [Slack](https://kubernetes.slack.com/messages/sig-openstack)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-openstack)|* Regular SIG Meeting: [Wednesdays at 16:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/15UwgLbEyZyXXxVtsThcSuPiJru4CuqU9p3ttZSfTaY4/edit)<br> |[PM](sig-pm/README.md)|pm|* [Aparna Sinha](https://github.com/apsinha), Google<br>* [Ihor Dvoretskyi](https://github.com/idvoretskyi), CNCF<br>* [Caleb Miles](https://github.com/calebamiles), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-pm)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-pm)|* Regular SIG Meeting: [Tuesdays at 18:30 UTC 
(biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> -|[Release](sig-release/README.md)|release|* [Jaice Singer DuMars](https://github.com/jdumars), Google<br>* [Caleb Miles](https://github.com/calebamiles), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-release)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-release)|* Regular SIG Meeting: [Tuesdays at 21:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> +|[Release](sig-release/README.md)|release|* [Caleb Miles](https://github.com/calebamiles), Google<br>* [Stephen Augustus](https://github.com/justaugustus), Red Hat<br>* [Tim Pepper](https://github.com/tpepper), VMware<br>|* [Slack](https://kubernetes.slack.com/messages/sig-release)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-release)|* Regular SIG Meeting: [Tuesdays at 21:00 UTC (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Scalability](sig-scalability/README.md)|scalability|* [Wojciech Tyczynski](https://github.com/wojtek-t), Google<br>* [Bob Wise](https://github.com/countspongebob), AWS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-scalability)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-scale)|* Regular SIG Meeting: [Thursdays at 16:30 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> -|[Scheduling](sig-scheduling/README.md)|scheduling|* [Bobby (Babak) Salamat](https://github.com/bsalamat), Google<br>* [Klaus Ma](https://github.com/k82cn), IBM<br>|* [Slack](https://kubernetes.slack.com/messages/sig-scheduling)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-scheduling)|* 10AM PT Meeting: [Thursdays at 17:00 UTC (biweekly starting Thursday June 7, 
2018)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* 5PM PT Meeting: [Thursdays at 24:00 UTC (biweekly starting Thursday June 14, 2018)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> +|[Scheduling](sig-scheduling/README.md)|scheduling|* [Bobby (Babak) Salamat](https://github.com/bsalamat), Google<br>* [Klaus Ma](https://github.com/k82cn), Huawei<br>|* [Slack](https://kubernetes.slack.com/messages/sig-scheduling)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-scheduling)|* 10AM PT Meeting: [Thursdays at 17:00 UTC (biweekly starting Thursday June 7, 2018)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* 5PM PT Meeting: [Thursdays at 24:00 UTC (biweekly starting Thursday June 14, 2018)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Service Catalog](sig-service-catalog/README.md)|service-catalog|* [Carolyn Van Slyck](https://github.com/carolynvs), Microsoft<br>* [Michael Kibbe](https://github.com/kibbles-n-bytes), Google<br>* [Doug Davis](https://github.com/duglin), IBM<br>* [Jay Boyd](https://github.com/jboyd01), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-service-catalog)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-service-catalog)|* Regular SIG Meeting: [Mondays at 13:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Storage](sig-storage/README.md)|storage|* [Saad Ali](https://github.com/saad-ali), Google<br>* [Bradley Childs](https://github.com/childsb), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-storage)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-storage)|* Regular SIG Meeting: [Thursdays at 9:00 PT (Pacific Time) 
(biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Testing](sig-testing/README.md)|testing|* [Aaron Crickenberger](https://github.com/spiffxp), Google<br>* [Erick Feja](https://github.com/fejta), Google<br>* [Steve Kuznetsov](https://github.com/stevekuznetsov), Red Hat<br>* [Timothy St. Clair](https://github.com/timothysc), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-testing)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-testing)|* Regular SIG Meeting: [Tuesdays at 13:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* (testing-commons) Testing Commons: [Wednesdays at 07:30 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[UI](sig-ui/README.md)|ui|* [Dan Romlein](https://github.com/danielromlein), Google<br>* [Sebastian Florek](https://github.com/floreks), Fujitsu<br>|* [Slack](https://kubernetes.slack.com/messages/sig-ui)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)|* Regular SIG Meeting: [Thursdays at 18:00 CET (Central European Time) (weekly)](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)<br> -|[VMware](sig-vmware/README.md)|vmware|* [Fabio Rapposelli](https://github.com/frapposelli), VMware<br>* [Steve Wong](https://github.com/cantbewong), VMware<br>|* [Slack](https://kubernetes.slack.com/messages/sig-vmware)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware)|* Regular SIG Meeting: [Thursdays at 18:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cloud Provider vSphere monthly syncup: [Wednesdays at 16:00 UTC (monthly - first Wednesday every month)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API Provider vSphere bi-weekly 
syncup: [Wednesdays at 18:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> +|[VMware](sig-vmware/README.md)|vmware|* [Fabio Rapposelli](https://github.com/frapposelli), VMware<br>* [Steve Wong](https://github.com/cantbewong), VMware<br>|* [Slack](https://kubernetes.slack.com/messages/sig-vmware)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware)|* Regular SIG Meeting: [Thursdays at 11:00 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cloud Provider vSphere monthly syncup: [Wednesdays at 09:00 PT (Pacific Time) (monthly - first Wednesday every month)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br>* Cluster API Provider vSphere bi-weekly syncup: [Wednesdays at 13:00 PT (Pacific Time) (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Windows](sig-windows/README.md)|windows|* [Michael Michael](https://github.com/michmike), Apprenda<br>* [Patrick Lang](https://github.com/patricklang), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-windows)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-windows)|* Regular SIG Meeting: [Tuesdays at 12:30 Eastern Standard Time (EST) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> ### Master Working Group List @@ -61,13 +61,13 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md) |[App Def](wg-app-def/README.md)|* [Antoine Legrand](https://github.com/ant31), CoreOS<br>* [Bryan Liles](https://github.com/bryanl), Heptio<br>* [Gareth Rushgrove](https://github.com/garethr), Docker<br>|* [Slack](https://kubernetes.slack.com/messages/wg-app-def)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-app-def)|* Regular WG Meeting: [Wednesdays at 
16:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Apply](wg-apply/README.md)|* [Daniel Smith](https://github.com/lavalamp), Google<br>|* [Slack](https://kubernetes.slack.com/messages/wg-apply)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-apply)|* Regular WG Meeting: [Tuesdays at 9:30 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Container Identity](wg-container-identity/README.md)|* [Clayton Coleman](https://github.com/smarterclayton), Red Hat<br>* [Greg Castle](https://github.com/destijl), Google<br>|* [Slack](https://kubernetes.slack.com/messages/wg-container-identity)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-container-identity)|* Regular WG Meeting: [Wednesdays at 10:00 PDT (bi-weekly (On demand))](https://zoom.us/my/k8s.sig.auth)<br> -|[IoT Edge](wg-iot-edge/README.md)|* [Dejan Bosanac](https://github.com/dejanb), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/wg-iot-edge)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-iot-edge)|* Regular WG Meeting: [Fridays at 15:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> +|[IoT Edge](wg-iot-edge/README.md)|* [Cindy Xing](https://github.com/cindyxing), Huawei<br>* [Dejan Bosanac](https://github.com/dejanb), Red Hat<br>* [Preston Holmes](https://github.com/ptone), Google<br>* [Steve Wong](https://github.com/cantbewong), VMWare<br>|* [Slack](https://kubernetes.slack.com/messages/wg-iot-edge)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-iot-edge)|* Regular WG Meeting: [Fridays at 15:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Kubeadm Adoption](wg-kubeadm-adoption/README.md)|* [Lucas Käldström](https://github.com/luxas), Luxas Labs 
(occasionally contracting for Weaveworks)<br>* [Justin Santa Barbara](https://github.com/justinsb)<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular WG Meeting: [Tuesdays at 18:00 UTC (bi-weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Machine Learning](wg-machine-learning/README.md)|* [Vishnu Kannan](https://github.com/vishh), Google<br>* [Kenneth Owens](https://github.com/kow3ns), Google<br>* [Balaji Subramaniam](https://github.com/balajismaniam), Intel<br>* [Connor Doyle](https://github.com/ConnorDoyle), Intel<br>|* [Slack](https://kubernetes.slack.com/messages/wg-machine-learning)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-machine-learning)|* Regular WG Meeting: [Thursdays at 13:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Multitenancy](wg-multitenancy/README.md)|* [David Oppenheimer](https://github.com/davidopp), Google<br>* [Jessie Frazelle](https://github.com/jessfraz), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/wg-multitenancy)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-multitenancy)|* Regular WG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Policy](wg-policy/README.md)|* [Howard Huang](https://github.com/hannibalhuang), Huawei<br>* [Torin Sandall](https://github.com/tsandall), Styra<br>* [Yisui Hu](https://github.com/easeway), Google<br>* [Erica von Buelow](https://github.com/ericavonb), Red Hat<br>* [Michael Elder](https://github.com/mdelder), IBM<br>|* [Slack](https://kubernetes.slack.com/messages/wg-policy)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-policy)|* Regular WG Meeting: 
[Wednesdays at 16:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> |[Resource Management](wg-resource-management/README.md)|* [Vishnu Kannan](https://github.com/vishh), Google<br>* [Derek Carr](https://github.com/derekwaynecarr), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/wg-resource-mgmt)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-resource-management)|* Regular WG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly (On demand))](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit)<br> -|[Security Audit](wg-security-audit/README.md)|* [Jessie Frazelle](https://github.com/jessfraz), Microsoft<br>* [Aaron Small](https://github.com/aasmall), Google<br>* [Joel Smith](https://github.com/joelsmith), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/)<br>* [Mailing List]()|* Regular WG Meeting: [Mondays at 13:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1RbC4SBZBlKth7IjYv_NaEpnmLGwMJ0ElpUOmsG-bdRA/edit)<br> +|[Security Audit](wg-security-audit/README.md)|* [Aaron Small](https://github.com/aasmall), Google<br>* [Joel Smith](https://github.com/joelsmith), Red Hat<br>* [Craig Ingram](https://github.com/cji), Salesforce<br>|* [Slack](https://kubernetes.slack.com/messages/)<br>* [Mailing List]()|* Regular WG Meeting: [Mondays at 13:00 PT (Pacific Time) (weekly)](https://docs.google.com/document/d/1RbC4SBZBlKth7IjYv_NaEpnmLGwMJ0ElpUOmsG-bdRA/edit)<br> <!-- BEGIN CUSTOM CONTENT --> <!-- END CUSTOM CONTENT --> diff --git a/sig-multicluster/README.md b/sig-multicluster/README.md index 24f3c4a1..62f368cc 100644 --- a/sig-multicluster/README.md +++ b/sig-multicluster/README.md @@ -10,6 +10,8 @@ To understand how this file is generated, see https://git.k8s.io/community/gener A Special Interest Group focused on solving common challenges related to the management of multiple Kubernetes clusters, and 
applications that exist therein. The SIG will be responsible for designing, discussing, implementing and maintaining API’s, tools and documentation related to multi-cluster administration and application management. This includes not only active automated approaches such as Cluster Federation, but also those that employ batch workflow-style continuous deployment systems like Spinnaker and others. Standalone building blocks for these and other similar systems (for example a cluster registry), and proposed changes to kubernetes core where appropriate will also be in scope. +The [charter](charter.md) defines the scope and governance of the Multicluster Special Interest Group. + ## Meetings * Regular SIG Meeting: [Tuesdays at 9:30 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:30&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/18mk62nOXE_MCSSnb4yJD_8UadtzJrYyJxFwbrgabHe8/edit). diff --git a/sig-multicluster/charter.md b/sig-multicluster/charter.md new file mode 100644 index 00000000..89df3865 --- /dev/null +++ b/sig-multicluster/charter.md @@ -0,0 +1,47 @@ +# SIG Multicluster Charter + +This charter adheres to the conventions described in the [Kubernetes Charter README] and uses +the Roles and Organization Management outlined in [sig-governance]. + +## Scope + +The scope of SIG Multicluster is limited to the following subprojects: + +- The [cluster-registry](https://github.com/kubernetes/cluster-registry) +- Kubernetes federation: + - [Federation v2](https://github.com/kubernetes-sigs/federation-v2) + - [Federation v1](https://github.com/kubernetes/federation) +- [Kubemci](https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress) + +### In scope + +See [SIG README]. 
+ +#### Code, Binaries and Services + +SIG Multicluster code and binaries are limited to those from one of the SIG subprojects. + +#### Cross-cutting and Externally Facing Processes + +- Consult with other SIGs and the community on how the in-scope mechanisms + should work and integrate with other areas of the wider Kubernetes ecosystem + +### Out of scope + +- Software that creates or manages the lifecycle of Kubernetes clusters + +## Roles and Organization Management + +This SIG adheres to the Roles and Organization Management outlined in [sig-governance] +and opts in to updates and modifications to [sig-governance]. + +### Subproject Creation + +SIG Multicluster delegates subproject approval to Technical Leads. See [Subproject creation - Option 1]. + +[sig-governance]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md +[sig-subprojects]: https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md#subprojects +[sigs.yaml]: https://github.com/kubernetes/community/blob/master/sigs.yaml#L1042 +[Kubernetes Charter README]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/README.md +[Subproject creation - Option 1]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md#subproject-creation +[SIG README]: https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md
\ No newline at end of file diff --git a/sig-release/README.md b/sig-release/README.md index 948a77bb..c61343e1 100644 --- a/sig-release/README.md +++ b/sig-release/README.md @@ -9,6 +9,8 @@ To understand how this file is generated, see https://git.k8s.io/community/gener # Release Special Interest Group +The [charter](charter.md) defines the scope and governance of the Release Special Interest Group. + ## Meetings * Regular SIG Meeting: [Tuesdays at 21:00 UTC](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=21:00&tz=UTC). * [Meeting notes and Agenda](https://docs.google.com/document/d/1Fu6HxXQu8wl6TwloGUEOXVzZ1rwZ72IAhglnaAMCPqA/edit?usp=sharing). @@ -19,8 +21,13 @@ To understand how this file is generated, see https://git.k8s.io/community/gener ### Chairs The Chairs of the SIG run operations and processes governing the SIG. -* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Google * Caleb Miles (**[@calebamiles](https://github.com/calebamiles)**), Google +* Stephen Augustus (**[@justaugustus](https://github.com/justaugustus)**), Red Hat +* Tim Pepper (**[@tpepper](https://github.com/tpepper)**), VMware + +## Emeritus Leads + +* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Google ## Contact * [Slack](https://kubernetes.slack.com/messages/sig-release) @@ -60,7 +67,7 @@ Note that the links to display team membership will only work if you are a membe | @kubernetes/sig-release-test-failures | [link](https://github.com/orgs/kubernetes/teams/sig-release-test-failures) | Test Failures and Triage | <!-- BEGIN CUSTOM CONTENT --> -[SIG Release][] has moved! +[SIG Release] has moved! 
[SIG Release]: https://github.com/kubernetes/sig-release <!-- END CUSTOM CONTENT --> diff --git a/sig-release/charter.md b/sig-release/charter.md new file mode 100644 index 00000000..7c8b08ea --- /dev/null +++ b/sig-release/charter.md @@ -0,0 +1,85 @@ +# SIG Release Charter + +This charter adheres to the conventions described in the [Kubernetes Charter README] and uses the Roles and Organization Management outlined in [sig-governance]. + +## Scope + +- Production of Kubernetes releases on a reliable schedule +- Ensure there is a consistent group of community members in place to support the release process across time +- Provide guidance and tooling to facilitate the production of automated releases +- Serve as a tightly integrated partner with other SIGs to empower SIGs to integrate their repositories into the release process + +### In scope + +- Ensuring quality Kubernetes releases + - Defining and staffing release roles to manage the resolution of release blocking criteria + - Defining and driving development processes (e.g. merge queues, cherrypicks) and release processes + (e.g. burndown meetings, cutting beta releases) with the intent of meeting the release schedule + - Managing the creation of release specific artifacts, including: + - Code branches + - Binary artifacts + - Release notes +- Continually improving release and development processes + - Working closely with SIG Contributor Experience to define and build tools to facilitate release process (e.g. 
dashboards) + - Working closely with SIG Testing to determine and implement tests, automation, and labeling required for stable releases + - Working with downstream communities responsible for packaging Kubernetes releases + - Working with other SIGs to agree upon the responsibilities of their SIG with respect to the release + - Defining and collecting metrics related to the release in order to measure progress over each release + - Facilitating release retrospectives +- Collaborating with downstream communities which build artifacts from Kubernetes releases + +### Out of scope + +#### Support + +SIG Release itself is not responsible for end user support or creation of patches for support streams. There are support forums where end users can ask questions and report bugs; subject matter experts in other SIGs triage and address issues and, when necessary, mark bug fixes for inclusion in a patch release. + +## Roles and Organization Management + +This SIG adheres to the Roles and Organization Management outlined in [sig-governance] and opts-in to updates and modifications to [sig-governance]. + +Specifically, the common guidelines (see: [sig-governance]) for continuity of membership within roles in the SIG are followed. + +### Deviations from [sig-governance] + +- SIG Release subprojects have subproject chairs +- SIG Release does not have top-level SIG Technical Leads. With few exceptions, technical decisions should be handled within the scope of the relevant SIG Release subproject. + +#### SIG Membership + +SIG Release has a concept of membership. SIG members can be occasionally called on to assist with decision making, especially as it relates to gathering historical context around existing policies and enacting new policy. + +SIG Release membership is represented by membership in the `sig-release` GitHub team. 
+ +SIG Release membership is defined as the set of Kubernetes contributors included in: +- All SIG Release top-level subproject OWNERS files +- All documented former Release Team members holding Lead roles e.g., Enhancements Lead, Patch Release Team + +Subproject `approvers` and incoming Release Team Leads should ensure that new members are added to the `sig-release` GitHub team. + +SIG Release Chairs will periodically review the `sig-release` GitHub team to ensure it remains accurate and up-to-date. + +All SIG Release roles will be filled by SIG Release members, except where explicitly defined in other policy. A notable exception to this would be Release Team Shadows. + +It may be implied, given the criteria for SIG membership, but to be explicit: +- SIG Release membership is representative of people who actively contribute to the health of the SIG. Given that, those members should also be enabled to help drive SIG Release policy. +- SIG Chairs should represent the sentiment of and facilitate the decision making by SIG Members. + +### Subproject Creation + +Subprojects must be created by [KEP] proposal and accepted by [lazy-consensus]. 
+ +In the event that lazy consensus cannot be reached: +- Fall back to a majority decision by SIG Chairs +- SIG Release Members may override the majority decision of SIG Chairs by a supermajority (60%) + +Additional requirements: +- KEP must establish subproject chairs +- [sigs.yaml] must be updated to include subproject information and OWNERS files with subproject chairs + + +[KEP]: /keps/0000-kep-template.md +[Kubernetes Charter README]: /committee-steering/governance/README.md +[lazy-consensus]: http://communitymgt.wikia.com/wiki/Lazy_consensus +[sig-governance]: /committee-steering/governance/sig-governance.md +[sigs.yaml]: /sigs.yaml diff --git a/sig-scalability/processes/formal-scalability-processes.md b/sig-scalability/processes/formal-scalability-processes.md index 997eec47..d215d01c 100644 --- a/sig-scalability/processes/formal-scalability-processes.md +++ b/sig-scalability/processes/formal-scalability-processes.md @@ -6,7 +6,7 @@ _by Shyam JVS, Google Inc_ ## Introduction -Scalability is a very crucial aspect of kubernetes and has allowed many customers to adopt it with confidence. K8s [started scaling to 5000](http://blog.kubernetes.io/2017/03/scalability-updates-in-kubernetes-1.6.html) nodes beginning from release 1.6. Building and maintaining a performant and scalable system needs conscious efforts from the whole developer community. Lack of solid measures have caused problems (both scalability and release-related) in the past - for e.g during [release-1.7](https://github.com/kubernetes/kubernetes/issues/47344), [release-1.8](https://github.com/kubernetes/kubernetes/issues/53255) and [in general](https://github.com/kubernetes/kubernetes/issues/56062). We need them to ensure that the effort is well-streamlined with proper checks and balances in place. Of course they may evolve over time to suit the community/project’s needs better. +Scalability is a crucial aspect of Kubernetes and has allowed many customers to adopt it with confidence. 
K8s [started scaling to 5000](https://kubernetes.io/blog/2017/03/scalability-updates-in-kubernetes-1.6) nodes beginning from release 1.6. Building and maintaining a performant and scalable system needs conscious efforts from the whole developer community. Lack of solid measures has caused problems (both scalability and release-related) in the past, e.g. during [release-1.7](https://github.com/kubernetes/kubernetes/issues/47344), [release-1.8](https://github.com/kubernetes/kubernetes/issues/53255) and [in general](https://github.com/kubernetes/kubernetes/issues/56062). We need such measures to ensure that the effort is well-streamlined with proper checks and balances in place. Of course they may evolve over time to suit the community/project’s needs better. ## Goal @@ -69,7 +69,7 @@ About 60% of scalability regressions are caught by these medium-scale jobs ([sou ### Testing / Post-submit phase -This phase constitutes the final layer of protection against regressions before cutting the release. We already have scalability CI jobs in place for this. The spectrum of scale they cover is quite wide, ranging from 100-node to 5000-node clusters (both for kubemark and real clusters). However, what what we need additionally is: +This phase constitutes the final layer of protection against regressions before cutting the release. We already have scalability CI jobs in place for this. The spectrum of scale they cover is quite wide, ranging from 100-node to 5000-node clusters (both for kubemark and real clusters). 
However, what we need additionally is: The ability for crucial scalability jobs to block submit-queue (with manual unblock ability)\ ([relevant feature request](https://github.com/kubernetes/kubernetes/issues/53255))\ diff --git a/sig-scalability/slos/dns_programming_latency.md b/sig-scalability/slos/dns_programming_latency.md new file mode 100644 index 00000000..bec37dfb --- /dev/null +++ b/sig-scalability/slos/dns_programming_latency.md @@ -0,0 +1,48 @@ +## DNS programming latency SLIs/SLOs details + +### Definition + +| Status | SLI | SLO | +| --- | --- | --- | +| __WIP__ | Latency of programming a single in-cluster dns instance, measured from when service spec or list of its `Ready` pods change to when it is reflected in that dns instance, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile of (99th percentiles across all dns instances) per cluster-day <= X | + +### User stories +- As a user of vanilla Kubernetes, I want some guarantee how quickly in-cluster +DNS will start resolving service name to its newly started backends. +- As a user of vanilla Kubernetes, I want some guarantee how quickly in-cluster +DNS will stop resolving service name to its removed (or unhealthy) backends. +- As a user of vanilla Kubernetes, I want some guarantee how quickly newly +created services will be resolvable via in-cluster DNS. + +### Other notes +- We are consciously focusing on in-cluster DNS for the purpose of this SLI, +as external DNS resolution clearly depends on cloud provider or environment +in which the cluster is running (it is hard to set the SLO for it). + +### Caveats +- The SLI is formulated for a single DNS instance, even though that value +itself is not very interesting for the user. +If there are multiple DNS instances in the cluster, the aggregation across +them is done only at the SLO level (and only that gives a value that is +interesting for the user). 
The reason for doing it this way is the feasibility of +efficiently computing it: + - if we were doing aggregation at the SLI level (i.e. the SLI would be + formulated like "... reflected in in-cluster DNS and visible from 99% + of DNS instances"), computing that SLI would be extremely + difficult. It's because in order to decide e.g. whether pod transition to + Ready state is reflected, we would have to know when exactly it was reflected + in 99% of DNS instances. That requires tracking metrics on + per-change basis (which we can't do efficiently). + - we admit that the SLO is a bit weaker in that form (i.e. it doesn't necessarily + force that a given change is reflected in 99% of programmers with a given + 99th percentile latency), but it's a close enough approximation. + +### How to measure the SLI. +The [network programming latency](./network_programming_latency.md) SLI is +formulated in almost exactly the same way. As a result, the methodology for +measuring the SLI here is exactly the same and can be found +[here](./network_programming_latency.md#how-to-measure-the-sli). + +### Test scenario + +__TODO: Describe test scenario.__ diff --git a/sig-scalability/slos/network_programming_latency.md b/sig-scalability/slos/network_programming_latency.md index 38eeebf0..dc1dace2 100644 --- a/sig-scalability/slos/network_programming_latency.md +++ b/sig-scalability/slos/network_programming_latency.md @@ -60,7 +60,7 @@ this update: already present at storage layer, so it won't be hard to propagate that. 1. The in-cluster load-balancing programmer will export a prometheus metric once done with programming. The latency of the operation is defined as -difference betweem timestamp of then whe operation is done and timestamp +difference between the timestamp of when the operation is done and the timestamp recorded in the newly introduced annotation. 
#### Caveats diff --git a/sig-scalability/slos/pod_startup_latency.md b/sig-scalability/slos/pod_startup_latency.md index 04fdd63b..7c52777e 100644 --- a/sig-scalability/slos/pod_startup_latency.md +++ b/sig-scalability/slos/pod_startup_latency.md @@ -38,7 +38,7 @@ is heavily application-dependent (and doesn't depend on Kubernetes itself). not obvious. We decided for the semantic of "when all its containers are reported as started and observed via watch", because: - we require all containers to be started (not e.g. the first one) to ensure - that the pod is started. We need to ensure that pontential regressions like +that the pod is started. We need to ensure that potential regressions like linearization of container startups within a pod will be caught by this SLI. - note that we don't require all containers to be running - if some of them finished before the last one was started that is also fine. It is just diff --git a/sig-scalability/slos/slos.md b/sig-scalability/slos/slos.md index f1d56c7d..33d87eac 100644 --- a/sig-scalability/slos/slos.md +++ b/sig-scalability/slos/slos.md @@ -27,7 +27,7 @@ Our SLIs/SLOs need to have the following properties: arcane knowledge. We may also introduce internal (for developers only) SLIs, that may be useful -for understanding performance characterstic of the system, but for which +for understanding performance characteristics of the system, but for which we don't provide any guarantees for users (and thus don't require them to be that easily understandable). @@ -89,7 +89,7 @@ MUST satisfy thresholds defined in [thresholds file][]. ## Kubernetes SLIs/SLOs The currently existing SLIs/SLOs are enough to guarantee that cluster isn't -completely dead. However, the are not enough to satisfy user's needs in most +completely dead. However, they are not enough to satisfy users' needs in most of the cases. 
We are looking into extending the set of SLIs/SLOs to cover more parts of @@ -107,6 +107,7 @@ Prerequisite: Kubernetes cluster is available and serving. | __Official__ | Latency of non-streaming read-only API calls for every (resource, scope) pair, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, for every (resource, scope) pair, excluding virtual and aggregated resources and Custom Resource Definitions, 99th percentile per cluster-day<sup>[1](#footnote1)</sup> (a) <= 1s if `scope=resource` (b) <= 5s if `scope=namespace` (c) <= 30s if `scope=cluster` | [Details](./api_call_latency.md) | | __Official__ | Startup latency of stateless and schedulable pods, excluding time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via watch, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile per cluster-day<sup>[1](#footnote1)</sup> <= 5s | [Details](./pod_startup_latency.md) | | __WIP__ | Latency of programming a single (e.g. iptables on a given node) in-cluster load balancing mechanism, measured from when service spec or list of its `Ready` pods change to when it is reflected in load balancing mechanism, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile of (99th percentiles across all programmers (e.g. 
iptables)) per cluster-day<sup>[1](#footnote1)</sup> <= X | [Details](./network_programming_latency.md) | +| __WIP__ | Latency of programming a single in-cluster dns instance, measured from when service spec or list of its `Ready` pods change to when it is reflected in that dns instance, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile of (99th percentiles across all dns instances) per cluster-day <= X | [Details](./dns_programming_latency.md) | <a name="footnote1">\[1\]</a> For the purpose of visualization it will be a sliding window. However, for the purpose of reporting the SLO, it means one diff --git a/sig-scheduling/README.md b/sig-scheduling/README.md index b5e54571..dbfd263a 100644 --- a/sig-scheduling/README.md +++ b/sig-scheduling/README.md @@ -21,7 +21,7 @@ To understand how this file is generated, see https://git.k8s.io/community/gener The Chairs of the SIG run operations and processes governing the SIG. * Bobby (Babak) Salamat (**[@bsalamat](https://github.com/bsalamat)**), Google -* Klaus Ma (**[@k82cn](https://github.com/k82cn)**), IBM +* Klaus Ma (**[@k82cn](https://github.com/k82cn)**), Huawei ## Contact * [Slack](https://kubernetes.slack.com/messages/sig-scheduling) diff --git a/sig-storage/contributing.md b/sig-storage/contributing.md index 0c443c03..2e275e12 100644 --- a/sig-storage/contributing.md +++ b/sig-storage/contributing.md @@ -5,16 +5,16 @@ We recommend the following presentations, docs, and videos to help get familiar | Date | Title | Link | Description | | --- | --- | --- | --- | | - | Persistent Volume Framework | [Doc](http://kubernetes.io/docs/user-guide/persistent-volumes/) | Public user docs for Kubernetes Persistent Volume framework. -| 2018 May 03 | SIG Storage Intro | [Video](https://www.youtube.com/watch?v=GvrTl2T-Tts&list=PLj6h78yzYM2N8GdbjmhVU65KYm_68qBmo&index=164&t=0s) | An overview of SIG Storage By Saad Ali at Kubecon EU 2018. 
| -| 2018 May 04 | Kubernetes Storage Lingo 101 | [Video](https://www.youtube.com/watch?v=uSxlgK1bCuA&t=0s&index=300&list=PLj6h78yzYM2N8GdbjmhVU65KYm_68qBmo) | An overview of various terms used in Kubernetes storage and what they mean by Saad Ali at Kubecon EU 2018.| +| 2018 May 03 | SIG Storage Intro | [Video](https://www.youtube.com/watch?v=GvrTl2T-Tts&list=PLj6h78yzYM2N8GdbjmhVU65KYm_68qBmo&index=164&t=0s) | An overview of SIG Storage By Saad Ali at KubeCon/CloudNativeCon EU 2018. | +| 2018 May 04 | Kubernetes Storage Lingo 101 | [Video](https://www.youtube.com/watch?v=uSxlgK1bCuA&t=0s&index=300&list=PLj6h78yzYM2N8GdbjmhVU65KYm_68qBmo) | An overview of various terms used in Kubernetes storage and what they mean by Saad Ali at KubeCon/CloudNativeCon EU 2018.| | 2017 May 18 | Storage Classes & Dynamic Provisioning in Kubernetes |[Video](https://youtu.be/qktFhjJmFhg)| Intro to the basic Kubernetes storage concepts for users (direct volume reference, PV/PVC, and dynamic provisioning). | -| 2017 March 29 | Dynamic Provisioning and Storage Classes in Kubernetes |[Blog post](http://blog.kubernetes.io/2017/03/dynamic-provisioning-and-storage-classes-kubernetes.html)| Overview of Dynamic Provisioning and Storage Classes in Kubernetes at GA. | -| 2017 March 29 | How Kubernetes Storage Works | [Slides](https://docs.google.com/presentation/d/1Yl5JKifcncn0gSZf3e1dWspd8iFaWObLm9LxCaXZJIk/edit?usp=sharing) | Overview for developers on how Kubernetes storage works for KubeCon EU 2017 by Saad Ali +| 2017 March 29 | Dynamic Provisioning and Storage Classes in Kubernetes |[Blog post](https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/)| Overview of Dynamic Provisioning and Storage Classes in Kubernetes at GA. 
| +| 2017 March 29 | How Kubernetes Storage Works | [Slides](https://docs.google.com/presentation/d/1Yl5JKifcncn0gSZf3e1dWspd8iFaWObLm9LxCaXZJIk/edit?usp=sharing) | Overview for developers on how Kubernetes storage works for KubeCon/CloudNativeCon EU 2017 by Saad Ali | 2017 February 17 | Overview of Dynamic Provisioning for SIG Apps | [Video](https://youtu.be/NXUHmxXytUQ?t=10m33s) | Overview of Storage Classes and Dynamic Provisioning for SIG Apps -| 2016 October 7 | Dynamic Provisioning and Storage Classes in Kubernetes |[Blog post](http://blog.kubernetes.io/2016/10/dynamic-provisioning-and-storage-in-kubernetes.html)| Overview of Dynamic Provisioning and Storage Classes in Kubernetes at Beta. | +| 2016 October 7 | Dynamic Provisioning and Storage Classes in Kubernetes |[Blog post](https://kubernetes.io/blog/2016/10/dynamic-provisioning-and-storage-in-kubernetes/)| Overview of Dynamic Provisioning and Storage Classes in Kubernetes at Beta. | | 2016 July 26 | Overview of Basic Volume for SIG Apps | [Video](https://youtu.be/DrLGxkFdDNc?t=11m19s) | Overview of Basic Volume for SIG Apps -| 2016 March 25 | The State of State | [Video](https://www.youtube.com/watch?v=jsTQ24CLRhI&index=6&list=PLosInM-8doqcBy3BirmLM4S_pmox6qTw3) | The State of State at KubeCon EU 2016 by Matthew Bates -| 2016 March 25 | Kubernetes Storage 101 | [Video](https://www.youtube.com/watch?v=ZqTHe6Xj0Ek&list=PLosInM-8doqcBy3BirmLM4S_pmox6qTw3&index=38) | Kubernetes Storage 101 at KubeCon EU 2016 by Erin Boyd +| 2016 March 25 | The State of State | [Video](https://www.youtube.com/watch?v=jsTQ24CLRhI&index=6&list=PLosInM-8doqcBy3BirmLM4S_pmox6qTw3) | The State of State at KubeCon/CloudNativeCon EU 2016 by Matthew Bates +| 2016 March 25 | Kubernetes Storage 101 | [Video](https://www.youtube.com/watch?v=ZqTHe6Xj0Ek&list=PLosInM-8doqcBy3BirmLM4S_pmox6qTw3&index=38) | Kubernetes Storage 101 at KubeCon/CloudNativeCon EU 2016 by Erin Boyd Keep in mind that these artifacts reflect the state of the art at 
the time they were created. In Kubernetes we try very hard to maintain backwards compatibility, but Kubernetes is a fast moving project and we do add features going forward. Attending the Storage SIG meetings and following the Storage SIG Google group are both good ways of continually staying up to speed. diff --git a/sig-storage/volume-plugin-faq.md b/sig-storage/volume-plugin-faq.md index 1dc66a9a..bae94897 100644 --- a/sig-storage/volume-plugin-faq.md +++ b/sig-storage/volume-plugin-faq.md @@ -51,7 +51,7 @@ Container Storage Interface (CSI) is a standardized mechanism for Container Orch For more information about CSI, see: -* http://blog.kubernetes.io/2018/01/introducing-container-storage-interface.html +* https://kubernetes.io/blog/2018/01/introducing-container-storage-interface/ * [kubernetes-csi.github.io/docs](http://kubernetes-csi.github.io/docs) **What are the limitations of CSI?** diff --git a/sig-testing/README.md b/sig-testing/README.md index e6a814e1..385d6f66 100644 --- a/sig-testing/README.md +++ b/sig-testing/README.md @@ -35,7 +35,6 @@ The Chairs of the SIG run operations and processes governing the SIG. The following subprojects are owned by sig-testing: - **kind** - Description: Kubernetes IN Docker. Run Kubernetes test clusters on your local machine using Docker containers as nodes. 
- - Owners: - https://raw.githubusercontent.com/kubernetes-sigs/kind/master/OWNERS - **repo-publishing** @@ -43,7+42,6 @@ The following subprojects are owned by sig-testing: - https://raw.githubusercontent.com/kubernetes/publishing-bot/master/OWNERS - **testing-commons** - Description: The Testing Commons is a subproject within the Kubernetes sig-testing community interested in code structure, layout, and execution of common test code used throughout the kubernetes project - - Owners: - https://raw.githubusercontent.com/kubernetes-sigs/testing_frameworks/master/OWNERS - https://raw.githubusercontent.com/kubernetes/kubernetes/master/test/OWNERS diff --git a/sig-testing/charter.md b/sig-testing/charter.md new file mode 100644 index 00000000..d3cb74f0 --- /dev/null +++ b/sig-testing/charter.md @@ -0,0 +1,142 @@ +# SIG Testing Charter + +This charter adheres to the conventions described in the +[Kubernetes Charter README] and uses the Roles and Organization Management +outlined in [sig-governance]. + +## Scope + +SIG Testing is interested in effective testing of Kubernetes and automating +away project toil. We focus on creating and running tools and infrastructure +that make it easier for the community to write and run tests, and to +contribute, analyze and act upon test results. + +Although we are not responsible for ongoing test maintenance (see +[Out of Scope] below), we will act as an escalation point of last resort for +remediation if it is clear that misbehaving tests are harming the immediate +health of the project. 
+ +### In scope + +#### Code, Binaries and Services + +- Project CI and workflow automation via tools such as [prow] and [tide] +- Infrastructure to support running project CI at scale, including tools + such as [boskos], [ghproxy] and [greenhouse] +- Providing a place and schema in which to upload test results for + contributors who wish to provide additional test results not generated + by the project's CI +- Extraction, display and analysis of test artifacts via tools like + [gubernator], [kettle], [testgrid], [triage] and [velodrome] +- Configuration management of jobs and ensuring they use a consistent + process via tools such as [job configs], [kubetest] +- Tools that facilitate configuration management of GitHub such as + [peribolos] and [label_sync] +- Tools that facilitate local testing of kubernetes such as [greenhouse] + and [kind] +- Jobs that automate away project toil, such as [periodic jobs that run as + @fejta-bot] +- Ensuring all of the above is kept running on a best effort basis +- Tools, frameworks and libraries that make it possible to write tests against + kubernetes such as e2e\* or integration test frameworks. + + \* Note that while we are the current de facto owners of the kubernetes e2e + test framework, we are not staffed to actively maintain or rewrite it and + welcome contributors looking to take on this responsibility. + +#### Cross-cutting and Externally Facing Processes + +##### Ongoing Support + +- The [Release Team test-infra role] is staffed by a member of SIG Testing, as + such their responsibilities are within the scope of this SIG, including + the maintenance of release jobs +- We actively collaborate with SIG Contributor Experience, often producing + tooling that they are responsible for using to implement policies and + processes that they own, e.g. 
the GitHub Administration subproject uses + [peribolos] and [label_sync] to reduce the toil involved +- We reserve the right to halt automation and infrastructure that we own, + or disable tests that we don't own if the project as a whole is being + impacted +- We are actively assisting with the transition of project infrastructure to + the CNCF and enabling non-Googlers to support this + +##### Deploying Changes + +We aspire to remain agile and deploy quickly, while ensuring a disruption-free +experience for project contributors. As such, the amount of notice we provide +and the amount of consensus we seek is driven by our estimation of risk. We +don't currently define risk in terms of objective metrics, so here is a rough +description of the guidelines we follow. We anticipate refining these over +time. + +- **Low risk** changes do not break existing contributor workflows, are easy + to roll back, and impact at most a few project repos or SIGs. These should + be reviewed by another member of SIG Testing or the affected SIG(s), + preferably an approver. + +- **Medium risk** changes may impact existing contributor workflows, should be + easy to roll back, and may impact all of the project's repos. These should + be shared with SIG Contributor Experience, and may require a lazy consensus + issue with [kubernetes-dev@] notice. + +- **High risk** changes likely break existing contributor workflows, may be + difficult to roll back, and likely impact all of the project's repos. These + require a consultation with SIG Contributor Experience, and a lazy consensus + issue with [kubernetes-dev@] notice. + +### Out of Scope + +- We are not responsible for writing, fixing, or actively troubleshooting tests + for features or subprojects owned by other SIGs +- We are not responsible for ongoing maintenance of the project's CI Signal, + as this is driven by tests and jobs owned by other SIGs. We do however have + an interest in producing tools to help improve the signal. 
+ +## Roles and Organization Management + +This SIG adheres to the Roles and Organization Management outlined in +[sig-governance] and opts-in to updates and modifications to [sig-governance]. + +### Deviations from [sig-governance] + +- Chairs also fulfill the role of Tech Lead +- Proposing and making decisions _MAY_ be done without the use of KEPs so long + as the decision is documented in a linkable medium. We prefer to use issues + on [kubernetes/test-infra] to document technical decisions, and mailing list + threads on [kubernetes-sig-testing@] to document administrative decisions on + leadership, meetings and subprojects. +- We do not consistently review sig-testing testgrid dashboards as part of our + meetings + +### Subproject Creation + +Subprojects are created by Tech Leads following the process defined in [sig-governance]. + + +[sig-governance]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md +[Kubernetes Charter README]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/README.md +[lazy consensus]: http://en.osswiki.info/concepts/lazy_consensus + +[periodic jobs that run as @fejta-bot]: https://git.k8s.io/test-infra/config/jobs/kubernetes/test-infra/fejta-bot-periodics.yaml +[boskos]: https://git.k8s.io/test-infra/boskos +[ghproxy]: https://git.k8s.io/test-infra/ghproxy +[greenhouse]: https://git.k8s.io/test-infra/greenhouse +[gubernator]: http://k8s-gubernator.appspot.com +[job configs]: https://git.k8s.io/test-infra/config/jobs +[kettle]: https://git.k8s.io/test-infra/kettle +[kind]: https://github.com/kubernetes-sigs/kind +[kubetest]: https://git.k8s.io/test-infra/kubetest +[label_sync]: https://git.k8s.io/test-infra/label_sync +[peribolos]: https://git.k8s.io/test-infra/prow/cmd/peribolos +[planter]: https://git.k8s.io/test-infra/planter +[prow]: https://prow.k8s.io +[testgrid]: https://testgrid.k8s.io +[tide]: https://prow.k8s.io/tide +[triage]: 
https://go.k8s.io/triage +[velodrome]: https://velodrome.k8s.io + +[Release Team test-infra role]: https://git.k8s.io/sig-release/release-team/role-handbooks/test-infra +[kubernetes-dev@]: https://groups.google.com/forum/#!forum/kubernetes-dev +[kubernetes-sig-testing@]: https://groups.google.com/forum/#!forum/kubernetes-sig-testing +[kubernetes/test-infra]: https://git.k8s.io/test-infra diff --git a/sig-vmware/README.md b/sig-vmware/README.md index 4f7d185a..4a085a7f 100644 --- a/sig-vmware/README.md +++ b/sig-vmware/README.md @@ -11,13 +11,13 @@ To understand how this file is generated, see https://git.k8s.io/community/gener Bring together members of the VMware and Kubernetes community to maintain, support and run Kubernetes on VMware platforms. ## Meetings -* Regular SIG Meeting: [Thursdays at 18:00 UTC](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=18:00&tz=UTC). +* Regular SIG Meeting: [Thursdays at 11:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=11:00&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/1RV0nVtlPoAtM0DQwNYxYCC9lHfiHpTNatyv4bek6XtA/edit?usp=sharing). * [Meeting recordings](https://www.youtube.com/playlist?list=PLutJyDdkKQIqKv-Zq8WbyibQtemChor9y). -* Cloud Provider vSphere monthly syncup: [Wednesdays at 16:00 UTC](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (monthly - first Wednesday every month). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=16:00&tz=UTC). +* Cloud Provider vSphere monthly syncup: [Wednesdays at 09:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (monthly - first Wednesday every month). 
[Convert to your timezone](http://www.thetimezoneconverter.com/?t=09:00&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/1B0NmmKVh8Ea5hnNsbUsJC7ZyNCsq_6NXl5hRdcHlJgY/edit?usp=sharing). * [Meeting recordings](https://www.youtube.com/playlist?list=PLutJyDdkKQIpOT4bOfuO3MEMHvU1tRqyR). -* Cluster API Provider vSphere bi-weekly syncup: [Wednesdays at 18:00 UTC](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=18:00&tz=UTC). +* Cluster API Provider vSphere bi-weekly syncup: [Wednesdays at 13:00 PT (Pacific Time)](https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=13:00&tz=PT%20%28Pacific%20Time%29). * [Meeting notes and Agenda](https://docs.google.com/document/d/1jQrQiOW75uWraPk4b_LWtCTHwT7EZwrWWwMdxeWOEvk/edit?usp=sharing). * [Meeting recordings](https://www.youtube.com/playlist?list=PLutJyDdkKQIovV-AONxMa2cyv-_5LAYiu). diff --git a/sig-windows/charter.md b/sig-windows/charter.md new file mode 100644 index 00000000..0c76c4a1 --- /dev/null +++ b/sig-windows/charter.md @@ -0,0 +1,48 @@ +# SIG Windows Charter + +This charter adheres to the conventions described in the [Kubernetes Charter README] and uses +the Roles and Organization Management outlined in [sig-governance]. + +## Scope + +The scope of SIG Windows is the operation of Kubernetes on the Windows operating system. +This includes maintaining the interface between Kubernetes and containers on Windows +as well as maintaining the pieces of Kubernetes (e.g. the kube-proxy) where there is a +Windows specific implementation. + +### In scope + +#### Code, Binaries and Services + +- Windows specific code in all parts of the codebase. 
+- Testing of Windows specific features and clusters + +#### Cross-cutting and Externally Facing Processes + +- Work with other SIGs on areas where Windows and Linux (and possibly other OSes in the future) deviate from one another in terms of functionality. + + +## Roles and Organization Management + +This SIG adheres to the Roles and Organization Management outlined in [sig-governance] +and opts-in to updates and modifications to [sig-governance]. + +### Additional responsibilities of Chairs + +None + +### Additional responsibilities of Tech Leads + +None + +### Deviations from [sig-governance] + +None + +### Subproject Creation + +Federation of Subprojects + +[sig-governance]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance.md +[sig-subprojects]: https://github.com/kubernetes/community/blob/master/sig-YOURSIG/README.md#subprojects +[Kubernetes Charter README]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/README.md
\ No newline at end of file @@ -113,7 +113,7 @@ sigs: We discuss how to define and run apps in Kubernetes, demo relevant tools and projects, and discuss areas of friction that can lead to suggesting improvements or feature requests. - charter_link: + charter_link: charter.md label: apps leadership: chairs: @@ -233,28 +233,37 @@ sigs: - name: sig-architecture-test-failures description: Test Failures and Triage subprojects: - - name: api + - name: architecture-and-api-governance + description: "[Described below](#architecture-and-api-governance)" owners: + - https://raw.githubusercontent.com/kubernetes/community/master/contributors/design-proposals/architecture/OWNERS + - https://raw.githubusercontent.com/kubernetes-sigs/architecture-tracking/master/OWNERS - https://raw.githubusercontent.com/kubernetes/api/master/OWNERS - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/OWNERS - - name: kubernetes-template-project + - name: conformance-definition + description: "[Described below](#conformance-definition)" owners: - - https://raw.githubusercontent.com/kubernetes/kubernetes-template-project/master/OWNERS - - name: spartakus + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/test/conformance/testdata/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/test/conformance/OWNERS + - name: kep-adoption-and-reviews + description: "[Described below](#kep-adoption-and-reviews)" owners: - - https://raw.githubusercontent.com/kubernetes-incubator/spartakus/master/OWNERS - - name: steering - owners: - - https://raw.githubusercontent.com/kubernetes/steering/master/OWNERS - - name: architecture-tracking - owners: - - https://raw.githubusercontent.com/kubernetes-sigs/architecture-tracking/master/OWNERS - - name: universal-utils + - https://raw.githubusercontent.com/kubernetes/community/master/keps/OWNERS + - name: code-organization + description: "[Described below](#code-organization)" owners: + - 
https://raw.githubusercontent.com/kubernetes/contrib/master/OWNERS # solely to steward moving code _out_ of here to more appropriate places - https://raw.githubusercontent.com/kubernetes/utils/master/OWNERS - - name: contrib # solely to steward moving code _out_ of here to more appropriate places + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/vendor/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/third_party/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/OWNERS + - name: steering + description: Placeholder until sigs.yaml supports committees as first-class groups. These repos are owned by the kubernetes steering committee, which is a wholly separate entity from SIG Architecture owners: - - https://raw.githubusercontent.com/kubernetes/contrib/master/OWNERS + - https://raw.githubusercontent.com/kubernetes/steering/master/OWNERS + - https://raw.githubusercontent.com/kubernetes-incubator/spartakus/master/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes-template-project/master/OWNERS + - name: Auth dir: sig-auth mission_statement: > @@ -316,10 +325,103 @@ sigs: description: Design Proposals - name: sig-auth-test-failures description: Test Failures and Triage + subprojects: + - name: audit-logging + description: > + Kubernetes API support for audit logging. + owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/auditregistration/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/apis/audit/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/audit/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/plugin/pkg/audit/OWNERS + - name: authenticators + description: > + Kubernetes API support for authentication. 
+ owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubeapiserver/authenticator/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/auth/authenticator/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/kubernetes/typed/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/listers/authentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/pkg/apis/clientauthentication/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/plugin/pkg/client/auth/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/tools/auth/OWNERS + - name: authorizers + description: > + Kubernetes API support for authorization. 
+ owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/rbac/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubeapiserver/authorizer/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubectl/cmd/auth/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/rbac/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/auth/authorizer/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/rbac/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/plugin/pkg/authorizer/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/kubernetes/typed/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/kubernetes/typed/rbac/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/listers/authorization/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/listers/rbac/OWNERS + - name: certificates + description: > + Certificates APIs and client infrastructure to support PKI. 
+ owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/certificates/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/certificates/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/certificates/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/authentication/request/x509/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/util/cert/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/util/certificate/OWNERS + - name: encryption-at-rest + description: > + API storage support for storing data encrypted at rest in etcd. + owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/server/options/encryptionconfig/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/OWNERS + - name: node-identity-and-isolation + description: > + Node identity management (co-owned with sig-lifecycle), and + authorization restrictions for isolating workloads on separate nodes + (co-owned with sig-node). + owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/certificates/approver/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubelet/certificate/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/noderestriction/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/auth/authorizer/node/OWNERS + - name: policy-management + description: > + API validation and policies enforced during admission, such as + PodSecurityPolicy. Excludes run-time policies like NetworkPolicy and + Seccomp. 
+ owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/imagepolicy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/policy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/security/podsecuritypolicy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/policy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/imagepolicy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/apis/policy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/imagepolicy/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/security/podsecuritypolicy/OWNERS + - name: service-accounts + description: > + Infrastructure implementing Kubernetes service account based workload + identity. + owners: + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/serviceaccount/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubelet/token/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/serviceaccount/OWNERS + - https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/serviceaccount/OWNERS - name: Autoscaling dir: sig-autoscaling mission_statement: > - Covers development and maintenance of componets for automated scaling in + Covers development and maintenance of components for automated scaling in Kubernetes. This includes automated vertical and horizontal pod autoscaling, initial resource estimation, cluster-proportional system component autoscaling, and autoscaling of Kubernetes clusters themselves. @@ -385,7 +487,7 @@ sigs: dir: sig-aws mission_statement: > Covers maintaining, supporting, and using Kubernetes hosted on AWS Cloud. 
- charter_link: + charter_link: charter.md label: aws leadership: chairs: @@ -424,6 +526,9 @@ sigs: - name: aws-encryption-provider owners: - https://raw.githubusercontent.com/kubernetes-sigs/aws-encryption-provider/master/OWNERS + - name: aws-ebs-csi-driver + owners: + - https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/OWNERS - name: Azure dir: sig-azure mission_statement: > @@ -436,13 +541,16 @@ sigs: - name: Stephen Augustus github: justaugustus company: Red Hat - - name: Shubheksha Jalan - github: shubheksha + - name: Dave Strebel + github: dstrebel company: Microsoft tech_leads: - name: Kal Khenidak github: khenidak company: Microsoft + - name: Pengfei Ni + github: feiskyer + company: Microsoft meetings: - description: Regular SIG Meeting day: Wednesday @@ -650,6 +758,16 @@ sigs: - name: cloud-provider-vsphere owners: - https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/OWNERS + - name: cloud-provider-extraction + owners: + - https://raw.githubusercontent.com/kubernetes/community/master/sig-cloud-provider/cloud-provider-extraction/OWNERS + meetings: + - description: Weekly Sync removing the in-tree cloud providers led by @cheftako and @d-nishi + day: Thursday + time: "13:30" + tz: "PT (Pacific Time)" + frequency: weekly + url: https://docs.google.com/document/d/1KLsGGzNXQbsPeELCeF_q-f0h0CEGSe20xiwvcR2NlYM/edit - name: Cluster Lifecycle dir: sig-cluster-lifecycle mission_statement: > @@ -724,6 +842,13 @@ sigs: frequency: biweekly url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit archive_url: https://docs.google.com/document/d/12QkyL0FkNbWPcLFxxRGSPt_tNPBHbmni3YLY-lHny7E/edit + - description: Kubespray Office Hours + day: Wednesday + time: "07:00" + tz: "PT (Pacific Time)" + frequency: biweekly + url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit + archive_url: 
https://docs.google.com/document/d/1oDI1rTwla393k6nEMkqz0RU9rUl3J1hov0kQfNcl-4o/edit contact: slack: sig-cluster-lifecycle mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle @@ -742,6 +867,9 @@ sigs: - name: cluster-api-provider-aws owners: - https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/master/OWNERS + - name: cluster-api-provider-digitalocean + owners: + - https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-digitalocean/master/OWNERS - name: cluster-api-provider-gcp owners: - https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-gcp/master/OWNERS @@ -764,6 +892,7 @@ sigs: owners: - https://raw.githubusercontent.com/kubernetes/kubeadm/master/OWNERS - https://raw.githubusercontent.com/kubernetes/kubernetes/master/cmd/kubeadm/OWNERS + - https://raw.githubusercontent.com/kubernetes/cluster-bootstrap/master/OWNERS - name: kubeadm-dind-cluster owners: - https://raw.githubusercontent.com/kubernetes-sigs/kubeadm-dind-cluster/master/OWNERS @@ -874,7 +1003,7 @@ sigs: - https://raw.githubusercontent.com/kubernetes-sigs/contributor-site/master/OWNERS - name: devstats owners: - - Phillels + - https://raw.githubusercontent.com/kubernetes/community/master/sig-contributor-experience/devstats/OWNERS - name: k8s.io owners: - https://raw.githubusercontent.com/kubernetes/k8s.io/master/OWNERS @@ -893,12 +1022,12 @@ sigs: label: docs leadership: chairs: - - name: Zach Corleissen - github: zacharysarah - company: Linux Foundation - name: Andrew Chen github: chenopis company: Google + - name: Zach Corleissen + github: zacharysarah + company: Linux Foundation - name: Jennifer Rondeau github: bradamant3 company: Heptio @@ -924,13 +1053,15 @@ sigs: mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-docs teams: - name: sig-docs-maintainers - description: Documentation Maintainers + description: Documentation maintainers - name: sig-docs-pr-reviews - description: 
Documentation PR Reviewers + description: Documentation PR reviews - name: sig-docs-ko-owners - description: Korean L10n Repository Owners + description: Korean localization - name: sig-docs-ja-owners - description: Japanese L10n Repository Owners + description: Japanese localization + - name: sig-docs-zh-owners + description: Chinese localization subprojects: - name: reference-docs owners: @@ -1086,7 +1217,7 @@ sigs: systems like Spinnaker and others. Standalone building blocks for these and other similar systems (for example a cluster registry), and proposed changes to kubernetes core where appropriate will also be in scope. - charter_link: + charter_link: charter.md label: multicluster leadership: chairs: @@ -1373,16 +1504,23 @@ sigs: - https://raw.githubusercontent.com/kubernetes/features/master/OWNERS - name: Release dir: sig-release - charter_link: + charter_link: charter.md label: release leadership: chairs: - - name: Jaice Singer DuMars - github: jdumars - company: Google - name: Caleb Miles github: calebamiles company: Google + - name: Stephen Augustus + github: justaugustus + company: Red Hat + - name: Tim Pepper + github: tpepper + company: VMware + emeritus_leads: + - name: Jaice Singer DuMars + github: jdumars + company: Google meetings: - description: Regular SIG Meeting day: Tuesday @@ -1495,7 +1633,7 @@ sigs: company: Google - name: Klaus Ma github: k82cn - company: IBM + company: Huawei meetings: - description: 10AM PT Meeting day: Thursday @@ -1802,24 +1940,24 @@ sigs: meetings: - description: Regular SIG Meeting day: Thursday - time: "18:00" - tz: "UTC" + time: "11:00" + tz: "PT (Pacific Time)" frequency: bi-weekly url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit archive_url: https://docs.google.com/document/d/1RV0nVtlPoAtM0DQwNYxYCC9lHfiHpTNatyv4bek6XtA/edit?usp=sharing recordings_url: https://www.youtube.com/playlist?list=PLutJyDdkKQIqKv-Zq8WbyibQtemChor9y - description: Cloud Provider vSphere monthly 
syncup day: Wednesday - time: "16:00" - tz: "UTC" + time: "09:00" + tz: "PT (Pacific Time)" frequency: monthly - first Wednesday every month url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit archive_url: https://docs.google.com/document/d/1B0NmmKVh8Ea5hnNsbUsJC7ZyNCsq_6NXl5hRdcHlJgY/edit?usp=sharing recordings_url: https://www.youtube.com/playlist?list=PLutJyDdkKQIpOT4bOfuO3MEMHvU1tRqyR - description: Cluster API Provider vSphere bi-weekly syncup day: Wednesday - time: "18:00" - tz: "UTC" + time: "13:00" + tz: "PT (Pacific Time)" frequency: bi-weekly url: https://docs.google.com/document/d/1FQx0BPlkkl1Bn0c9ocVBxYIKojpmrS1CFP5h0DI68AE/edit archive_url: https://docs.google.com/document/d/1jQrQiOW75uWraPk4b_LWtCTHwT7EZwrWWwMdxeWOEvk/edit?usp=sharing @@ -2107,9 +2245,18 @@ workinggroups: A Working Group dedicated to discussing, designing and documenting using Kubernetes for developing and deploying IoT and Edge specific applications leadership: chairs: + - name: Cindy Xing + github: cindyxing + company: Huawei - name: Dejan Bosanac github: dejanb company: Red Hat + - name: Preston Holmes + github: ptone + company: Google + - name: Steve Wong + github: cantbewong + company: VMWare meetings: - description: Regular WG Meeting day: Friday @@ -2128,15 +2275,15 @@ workinggroups: charter_link: leadership: chairs: - - name: Jessie Frazelle - github: jessfraz - company: Microsoft - name: Aaron Small github: aasmall company: Google - name: Joel Smith github: joelsmith company: Red Hat + - name: Craig Ingram + github: cji + company: Salesforce meetings: - description: Regular WG Meeting day: Monday diff --git a/wg-cluster-api/OWNERS b/wg-cluster-api/OWNERS deleted file mode 100644 index f5e18dba..00000000 --- a/wg-cluster-api/OWNERS +++ /dev/null @@ -1,6 +0,0 @@ -reviewers: - - wg-cluster-api-leads -approvers: - - wg-cluster-api-leads -labels: - - wg/cluster-api diff --git a/wg-cluster-api/README.md b/wg-cluster-api/README.md deleted file 
mode 100644 index 5bceee8c..00000000 --- a/wg-cluster-api/README.md +++ /dev/null @@ -1,28 +0,0 @@ -<!--- -This is an autogenerated file! - -Please do not edit this file directly, but instead make changes to the -sigs.yaml file in the project root. - -To understand how this file is generated, see https://git.k8s.io/community/generator/README.md ----> -# Cluster API Working Group - -Define a portable API that represents a Kubernetes cluster. The API will contain the control plane and its configuration and the underlying infrastructure (nodes, node pools, etc). - -## Meetings -* Regular WG Meeting: [s at ]() (). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=&tz=). - * [Meeting notes and Agenda](https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit). - -## Organizers - -* Kris Nova (**[@kris-nova](https://github.com/kris-nova)**), Heptio -* Robert Bailey (**[@roberthbailey](https://github.com/roberthbailey)**), Google - -## Contact -* [Slack](https://kubernetes.slack.com/messages/cluster-api) -* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle) - -<!-- BEGIN CUSTOM CONTENT --> - -<!-- END CUSTOM CONTENT --> diff --git a/wg-iot-edge/README.md b/wg-iot-edge/README.md index 43c28e2b..0a204461 100644 --- a/wg-iot-edge/README.md +++ b/wg-iot-edge/README.md @@ -16,7 +16,10 @@ A Working Group dedicated to discussing, designing and documenting using Kuberne ## Organizers +* Cindy Xing (**[@cindyxing](https://github.com/cindyxing)**), Huawei * Dejan Bosanac (**[@dejanb](https://github.com/dejanb)**), Red Hat +* Preston Holmes (**[@ptone](https://github.com/ptone)**), Google +* Steve Wong (**[@cantbewong](https://github.com/cantbewong)**), VMWare ## Contact * [Slack](https://kubernetes.slack.com/messages/wg-iot-edge) diff --git a/wg-security-audit/README.md b/wg-security-audit/README.md index fd3abda3..9c87e9e3 100644 --- a/wg-security-audit/README.md +++ b/wg-security-audit/README.md @@ 
-15,14 +15,16 @@ Perform a security audit on k8s with a vendor and produce as artifacts a threat ## Organizers -* Jessie Frazelle (**[@jessfraz](https://github.com/jessfraz)**), Microsoft * Aaron Small (**[@aasmall](https://github.com/aasmall)**), Google * Joel Smith (**[@joelsmith](https://github.com/joelsmith)**), Red Hat +* Craig Ingram (**[@cji](https://github.com/cji)**), Salesforce ## Contact * [Slack](https://kubernetes.slack.com/messages/) * [Mailing list]() <!-- BEGIN CUSTOM CONTENT --> - +## Request For Proposals + +The RFP will be open between 2018/10/29 and 2018/11/26 and has been published [here](https://github.com/kubernetes/community/blob/master/wg-security-audit/RFP.md). <!-- END CUSTOM CONTENT --> diff --git a/wg-security-audit/RFP.md b/wg-security-audit/RFP.md new file mode 100644 index 00000000..633ccb19 --- /dev/null +++ b/wg-security-audit/RFP.md @@ -0,0 +1,118 @@ +# Request for Proposal + +## Kubernetes Third Party Security Audit + +The Kubernetes Third-Party Audit Working Group (working group, henceforth) is soliciting proposals from select Information Security vendors for a comprehensive security audit of the Kubernetes Project. + +### Eligible Vendors + +Only the following vendors will be permitted to submit proposals: + +- NCC Group +- Trail of Bits +- Cure53 +- Bishop Fox +- Insomnia +- Atredis Partners + +If your proposal includes sub-contractors, please include relevant details from their firm such as CVs, past works, etc. + +### RFP Process + +This RFP will be open between 2018/10/29 and 2018/11/26. + +The working group will answer questions for the first two weeks of this period. + +Questions can be submitted [here](https://docs.google.com/forms/d/e/1FAIpQLSd5rXSDYQ0KMjzSEGxv0pkGxInkdW1NEQHvUJpxgX3y0o9IEw/viewform?usp=sf_link). All questions will be answered publicly in this document. + +Proposals must include CVs, resumes, and/or example reports from staff that will be working on the project. 
+ +- 2018/10/29: RFP Open, Question period open +- 2018/11/12: Question period closes +- 2018/11/26: RFP Closes +- 2018/12/04: The working group will announce vendor selection + +## Audit Scope + +The scope of the audit is the most recent release (1.12) of the core [Kubernetes project](https://github.com/kubernetes/kubernetes). + +- Findings within the [bug bounty program](https://github.com/kubernetes/community/blob/master/contributors/guide/bug-bounty.md) scope are in scope. + + We want the focus of the audit to be on bugs in Kubernetes. While Kubernetes relies upon container runtimes such as Docker and CRI-O, we aren't looking for (for example) container escapes that rely upon bugs in the container runtime (unless, for example, the escape is made possible by a defect in the way that Kubernetes sets up the container). + +### Focus Areas + +The Kubernetes Third-Party Audit Working Group is specifically interested in the following areas. Proposals should indicate their level of expertise in these fields as it relates to Kubernetes. + +- Networking +- Cryptography +- Authentication & Authorization (including Role Based Access Controls) +- Secrets management +- Multi-tenancy isolation: Specifically soft (non-hostile co-tenants) + +### Out of Scope + +Findings specifically excluded from the [bug bounty program](https://github.com/kubernetes/community/blob/master/contributors/guide/bug-bounty.md) scope are out of scope. + +## Methodology + +We are allowing 8 weeks for the audit; the start date can be negotiated after vendor selection. We recognize that November and December can be very high utilization periods for security vendors. + +The audit should not be treated as a penetration test or red team exercise. It should be comprehensive and not end with the first successful exploit or critical vulnerability. + +The vendor should perform both source code analysis as well as live evaluation of Kubernetes. 
+ +The vendor should document the Kubernetes configuration and architecture that the audit was performed against for the creation of an "audited reference architecture" artifact. The working group must approve this configuration before the audit continues. + +The working group will establish a 60-minute kick-off meeting to answer any initial questions and explain Kubernetes architecture. + +The working group will be available weekly to meet with the selected vendor and will provide subject matter experts for requested components. + +The vendor must report urgent security issues immediately to both the working group and security@kubernetes.io. + +## Confidentiality and Embargo + +All information gathered and artifacts created as a part of the audit must not be shared outside the vendor or the working group without the explicit consent of the working group. + +## Artifacts + +The audit should result in the following artifacts, which will be made public after any sensitive security issues are mitigated. + +- Findings report, including an executive summary + +- Audited reference architecture specification. Should take the form of a summary and associated configuration yaml files. + +- Formal threat model + +- Any proof of concept exploits that we can use to investigate and fix defects + +- Retrospective white paper(s) on important security considerations in Kubernetes + + *This artifact can be provided up to 3 weeks after the deadline for the others.* + + - E.g. [NCC Group: Understanding hardening linux containers](https://www.nccgroup.trust/globalassets/our-research/us/whitepapers/2016/april/ncc_group_understanding_hardening_linux_containers-1-1.pdf) + - E.g. 
[NCC Group: Abusing Privileged and Unprivileged Linux + Containers](https://www.nccgroup.trust/globalassets/our-research/us/whitepapers/2016/june/container_whitepaper.pdf) + +## Q & A + +| # | Question | Answer | +|---|----------|--------| +| 1 | The RFP says that any area included in the out of scope section of the k8s bug bounty programme is not in-scope of this review. There are some areas which are out of scope of the bug bounty which would appear to be relatively core to k8s, for example Kubernetes on Windows. Can we have 100% confirmation that these areas are out of scope? | Yes. If you encounter a vulnerability in Kubernetes' use of an out-of-scope element, like etcd or the container network interface (to Calico, Weave, Flannel, ...), that is in scope. If you encounter a direct vulnerability in a third-party component during the audit you should follow the embargo section of the RFP. | +| 2 | On the subject of target Distribution and configuration option review:<br> The RFP mentions an "audited reference architecture".<br> - Is the expectation that this will be based on a specific k8s install mechanism (e.g. kubeadm)? <br> - On a related note is it expected that High Availability configurations (e.g. multiple control plane nodes) should be included.<br> - The assessment mentions Networking as a focus area. Should a specific set of network plugins (e.g. weave, calico, flannel) be considered as in-scope or are all areas outside of the core Kubernetes code for this out of scope.<br> - Where features of Kubernetes have been deprecated but not removed in 1.12, should they be considered in-scope or not? | 1. No, we are interested in the final topology -- the installation mechanism, as well as its default configuration, is tangential. The purpose is to contextualise the findings.<br>2. High-availability configurations should be included. 
To confine the level of effort, the vendor could create one single-master configuration and one high-availability configuration.<br>3. All plugins are out of scope per the bug bounty scope -- for clarification regarding the interface to plug-ins, please see the previous question.<br> 4. Deprecated features should be considered out of scope | +| 3 | On the subject of dependencies:<br>- Will any of the project dependencies be in scope for the assessment? (e.g. https://github.com/kubernetes/kubernetes/blob/master/Godeps/Godeps.json) | Project dependencies are in scope in the sense that they are **allowed** to be tested, but they should not be considered a **required** testing area. We would be interested in cases where Kubernetes is exploitable due to a vulnerability in a project dependency. Vulnerabilities found in third-party dependencies should follow the embargo section of the RFP.| +| 4 | Is the 8 weeks mentioned in the scope intended to be a limit on effort applied to the review, or just the timeframe for the review to occur in? | This is only a restriction on time frame, but is not intended to convey level of effort. | +| 5| Will the report be released in its entirety after the issues have been remediated? | Yes. | +| 6| What goals must be met to make this project a success? | We have several goals in mind:<br>1) Document a full and complete understanding of Kubernetes’ dataflow.<br>2) Achieve a reasonable understanding of potential vulnerability vectors for subsequent research.<br>3) Creation of artifacts that help third parties make a practical assessment of Kubernetes’ security position.<br>4) Eliminate design and architecture-level vulnerabilities.<br>5) Discover the most significant vulnerabilities, in both number and severity. | +| 7 | Would you be open to two firms partnering on the proposal? 
| Yes; however, both firms should collaborate on the proposal and individual contributors should all provide C.V.s or past works.| +| 8| From a deliverables perspective, will the final report (aside from the whitepaper) be made public? | Yes. | +| 9| The bug bounty document states the following is in scope, "Community maintained stable cloud platform plugins", however will the scope of the assessment include review of the cloud providers' k8s implementation? Reference of cloud providers: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/ | Cloud provider-specific issues are excluded from the scope. | +| 10| The bug bounty doc lists supply chain attacks as in scope and also says, "excluding social engineering attacks against maintainers". We can assume phishing these individuals is out of scope, but does the exclusion of social engineering against maintainers include all attacks involving individuals? For example, if we were to discover that one of these developers accidentally committed their SSH keys to a git repo unassociated with k8s and we could use these keys to gain access to the k8s project. Is that in scope? | Attacks against individual developers, such as the example provided, are out of scope for this engagement. | +| 11| While suppression of logs is explicitly in scope, is log injection also in scope? | Log injection is in scope for the purposes of this audit.| +| 12| Are all the various networking implementations in scope for the assessment? Ref: https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model | Please refer to question 1. | +| 13| What does the working group mean by a formal threat model? 
Would STRIDE be a formal threat model in that sense? | A formal threat model should include a comprehensive dataflow diagram that shows data moving between different trust levels and assesses threats to that data using a system like STRIDE as the data moves between each process/component. Many good examples are present in *Threat Modeling: Designing for Security* by Adam Shostack. |
+| 14 | Does Kubernetes use any non-standard Go signing libraries? | An initial investigation has not uncovered any; however, with a code base as large as Kubernetes, it is possible. |
+| 15 | Does Kubernetes implement any cryptographic primitives on its own, i.e. primitives that are not part of the standard libraries? | An initial investigation has not uncovered any; however, with a code base as large as Kubernetes, it is possible. |
+| 16 | Presuming that live testing is part of the project, how does the working group see the "audited reference architecture" being defined? Is there a representative deployment, or a document describing a "default installation" that you foresee the engagement team using to inform the buildout of a test environment? | The purpose of the reference architecture is to define and document the configuration against which live testing was performed. It should be generated collaboratively with the working group at the beginning of the project. We will want it to represent at least a common configuration, as in practice Kubernetes itself has no default configuration. It should take the form of a document detailing the set-up and configuration steps the vendor took to create their environment, ensuring an easily repeatable reference implementation. |
+| 17 | The RFP describes "networking and multi-tenancy isolation" as one of the focus areas.<br/><br/>Can you describe for us what these terms mean to you? Can you also help us understand how you define a soft non-hostile co-tenant?
Is a _hostile_ co-tenant also in scope? | By networking we mean vulnerabilities related to communication within and to/from the cluster: container-to-container, pod-to-pod, pod-to-service, and external-to-internal communications as described in [the networking documentation](https://kubernetes.io/docs/concepts/cluster-administration/networking/).<br/><br/>Soft multi-tenancy means a single cluster shared by applications or groups within the same company or organization, with fewer restrictions than a hard multi-tenant platform such as a PaaS, which hosts multiple distinct and potentially hostile competing customers on a single cluster and therefore requires stricter security assumptions. These definitions may vary by group and use case, but the idea is that you can have a cluster with multiple groups, each with their own namespace, isolated by networking/storage/RBAC roles. |
+| 18 | In the Artifacts section, you describe a Formal Threat Model as one of the outputs of the engagement. Can you expound on what this means to you? Are there any representative public examples you could point us to? | Please refer to question 13. |
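
The namespace isolation discussed in question 17 can be sketched with a standard Kubernetes `NetworkPolicy`. This is an illustrative fragment only, not part of the audit scope: the namespace name `team-a` is hypothetical, and enforcement assumes the cluster's network plugin supports `NetworkPolicy`.

```yaml
# Hypothetical sketch: restrict ingress for all pods in the "team-a"
# namespace so that only pods in the same namespace may connect.
# Requires a network plugin that enforces NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: team-a
spec:
  podSelector: {}          # selects every pod in the team-a namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # allows traffic only from pods within team-a
```

Combined with per-namespace RBAC roles and storage quotas, a policy like this is one way groups sharing a soft multi-tenant cluster can be kept from reaching each other's workloads.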
