authorIhor Dvoretskyi <ihor@linux.com>2018-06-05 14:52:40 +0000
committerIhor Dvoretskyi <ihor@linux.com>2018-06-05 14:52:40 +0000
commitbed39ba418bde30d95bdcb969bc13dfbd779621c (patch)
tree56b519975c95fd10cea63fffb8524fe49423dfc8
parent759cd201e0d76ee30c8b7aa5e750620917fd8c00 (diff)
parent920d87ea659ca1fe238b3bdb9c94e4d834451fdb (diff)
sig-list.md updated
Signed-off-by: Ihor Dvoretskyi <ihor@linux.com>
-rw-r--r-- Gopkg.lock 8
-rw-r--r-- Gopkg.toml 6
-rw-r--r-- OWNERS_ALIASES 9
-rw-r--r-- SECURITY_CONTACTS 13
-rw-r--r-- committee-steering/governance/README.md 59
-rw-r--r-- committee-steering/governance/sig-governance-requirements.md 2
-rw-r--r-- committee-steering/governance/sig-governance-template-short.md 30
-rw-r--r-- communication/README.md 15
-rw-r--r-- communication/moderation.md 63
-rw-r--r-- communication/zoom-guidelines.md 45
-rw-r--r-- community-membership.md 2
-rw-r--r-- contributors/design-proposals/api-machinery/aggregated-api-servers.md 2
-rw-r--r-- contributors/design-proposals/apps/controller_history.md 2
-rw-r--r-- contributors/design-proposals/apps/daemonset-update.md 2
-rw-r--r-- contributors/design-proposals/apps/statefulset-update.md 2
-rw-r--r-- contributors/design-proposals/auth/proc-mount-type.md 93
-rw-r--r-- contributors/design-proposals/network/support_traffic_shaping_for_kubelet_cni.md 89
-rw-r--r-- contributors/design-proposals/node/cri-windows.md 2
-rw-r--r-- contributors/design-proposals/node/kubelet-cri-logging.md 17
-rw-r--r-- contributors/design-proposals/node/node-usernamespace-remapping.md 209
-rw-r--r-- contributors/design-proposals/scheduling/rescheduling.md 2
-rw-r--r-- contributors/design-proposals/scheduling/taint-node-by-condition.md 7
-rw-r--r-- contributors/design-proposals/storage/container-storage-interface.md 2
-rw-r--r-- contributors/design-proposals/storage/grow-volume-size.md 2
-rw-r--r-- contributors/design-proposals/storage/pv-to-rbd-mapping.md 2
-rw-r--r-- contributors/design-proposals/storage/svcacct-token-volume-source.md 148
-rw-r--r-- contributors/design-proposals/storage/volume-topology-scheduling.md 777
-rw-r--r-- contributors/devel/api_changes.md 13
-rw-r--r-- contributors/devel/coding-conventions.md 3
-rw-r--r-- contributors/devel/development.md 4
-rw-r--r-- contributors/devel/faster_reviews.md 4
-rw-r--r-- contributors/devel/flexvolume.md 8
-rw-r--r-- contributors/devel/go-code.md 3
-rw-r--r-- contributors/devel/owners.md 4
-rw-r--r-- contributors/devel/pull-requests.md 4
-rw-r--r-- contributors/devel/release/OWNERS 8
-rw-r--r-- contributors/devel/release/README.md 3
-rw-r--r-- contributors/devel/release/issues.md 3
-rw-r--r-- contributors/devel/release/patch-release-manager.md 3
-rw-r--r-- contributors/devel/release/patch_release.md 3
-rw-r--r-- contributors/devel/release/scalability-validation.md 3
-rw-r--r-- contributors/devel/release/testing.md 3
-rw-r--r-- contributors/devel/scalability-good-practices.md 4
-rw-r--r-- contributors/devel/scheduler.md 2
-rw-r--r-- contributors/devel/security-release-process.md 3
-rw-r--r-- contributors/guide/README.md 2
-rw-r--r-- contributors/guide/contributor-cheatsheet.md 13
-rw-r--r-- contributors/guide/github-workflow.md 16
-rw-r--r-- contributors/new-contributor-playground/OWNERS 14
-rw-r--r-- contributors/new-contributor-playground/README.md 12
-rw-r--r-- contributors/new-contributor-playground/hello-from-copenhagen.md 4
-rw-r--r-- contributors/new-contributor-playground/new-contributor-notes.md 350
-rw-r--r-- contributors/new-contributor-playground/new-contributors.md 5
-rw-r--r-- events/2016/developer-summit-2016/application_service_definition_notes.md 2
-rw-r--r-- events/2018/05-contributor-summit/README.md 41
-rw-r--r-- events/2018/05-contributor-summit/clientgo-notes.md 139
-rw-r--r-- events/2018/05-contributor-summit/crds-notes.md 92
-rw-r--r-- events/2018/05-contributor-summit/devtools-notes.md 63
-rw-r--r-- events/2018/05-contributor-summit/networking-notes.md 129
-rw-r--r-- events/2018/05-contributor-summit/new-contributor-workshop.md 99
-rw-r--r-- events/2018/05-contributor-summit/steering-update.md 13
-rw-r--r-- events/community-meeting.md 8
-rw-r--r-- events/office-hours.md 4
-rw-r--r-- generator/README.md 2
-rw-r--r-- keps/0008-20180430-promote-sysctl-annotations-to-fields.md 225
-rw-r--r-- keps/0009-node-heartbeat.md 392
-rw-r--r-- keps/NEXT_KEP_NUMBER 2
-rw-r--r-- keps/sig-cli/0008-kustomize.md 222
-rw-r--r-- keps/sig-cloud-provider/0002-cloud-controller-manager.md (renamed from keps/0002-controller-manager.md) 129
-rw-r--r-- keps/sig-cluster-lifecycle/0008-20180504-kubeadm-config-beta.md 145
-rw-r--r-- keps/sig-network/0010-20180314-coredns-GA-proposal.md 126
-rw-r--r-- keps/sig-network/0011-ipvs-proxier.md 574
-rw-r--r-- keps/sig-network/0012-20180518-coredns-default-proposal.md 88
-rw-r--r-- mentoring/meet-our-contributors.md 18
-rw-r--r-- org-owners-guide.md 2
-rw-r--r-- setting-up-cla-check.md 2
-rw-r--r-- sig-api-machinery/README.md 18
-rw-r--r-- sig-architecture/README.md 2
-rw-r--r-- sig-azure/README.md 18
-rw-r--r-- sig-azure/charter.md 100
-rw-r--r-- sig-cli/README.md 3
-rw-r--r-- sig-cloud-provider/CHARTER.md 100
-rw-r--r-- sig-cloud-provider/OWNERS 6
-rw-r--r-- sig-cloud-provider/README.md 75
-rw-r--r-- sig-cluster-ops/README.md 2
-rw-r--r-- sig-contributor-experience/projects.md 2
-rw-r--r-- sig-governance.md 6
-rw-r--r-- sig-list.md 21
-rw-r--r-- sig-multicluster/README.md 2
-rw-r--r-- sig-release/README.md 2
-rw-r--r-- sig-scalability/README.md 35
-rw-r--r-- sig-scalability/blogs/scalability-regressions-case-studies.md 2
-rw-r--r-- sig-scalability/slis/apimachinery_slis.md 196
-rw-r--r-- sig-scalability/slos/api_call_latency.md 47
-rw-r--r-- sig-scalability/slos/api_extensions_latency.md 6
-rw-r--r-- sig-scalability/slos/extending_slo.md 72
-rw-r--r-- sig-scalability/slos/pod_startup_latency.md 54
-rw-r--r-- sig-scalability/slos/slos.md 148
-rw-r--r-- sig-scalability/slos/system_throughput.md 28
-rw-r--r-- sig-scalability/slos/throughput_burst_slo.md 26
-rw-r--r-- sig-scalability/slos/watch_latency.md 17
-rw-r--r-- sig-scalability/tools/performance-comparison-tool.md 112
-rw-r--r-- sig-scheduling/README.md 3
-rw-r--r-- sig-storage/contributing.md 2
-rw-r--r-- sig-testing/README.md 2
-rw-r--r-- sig-vmware/README.md 31
-rw-r--r-- sigs.yaml 164
-rw-r--r-- vendor/github.com/client9/misspell/.gitignore 33
-rw-r--r-- vendor/github.com/client9/misspell/.travis.yml 11
-rw-r--r-- vendor/github.com/client9/misspell/Dockerfile 37
-rw-r--r-- vendor/github.com/client9/misspell/Makefile 84
-rw-r--r-- vendor/github.com/client9/misspell/README.md 416
-rw-r--r-- vendor/github.com/client9/misspell/benchmark_test.go 105
-rw-r--r-- vendor/github.com/client9/misspell/case_test.go 42
-rw-r--r-- vendor/github.com/client9/misspell/cmd/misspell/main.go 1
-rw-r--r-- vendor/github.com/client9/misspell/falsepositives_test.go 136
-rw-r--r-- vendor/github.com/client9/misspell/goreleaser.yml 29
-rwxr-xr-x vendor/github.com/client9/misspell/install-misspell.sh 318
-rw-r--r-- vendor/github.com/client9/misspell/legal.go 1
-rw-r--r-- vendor/github.com/client9/misspell/mime_test.go 39
-rw-r--r-- vendor/github.com/client9/misspell/notwords_test.go 27
-rw-r--r-- vendor/github.com/client9/misspell/replace_test.go 119
-rwxr-xr-x vendor/github.com/client9/misspell/scripts/commit-msg.sh 2
-rwxr-xr-x vendor/github.com/client9/misspell/scripts/goreleaser.sh 3
-rwxr-xr-x vendor/github.com/client9/misspell/scripts/pre-commit.sh 2
-rwxr-xr-x vendor/github.com/client9/misspell/scripts/update-godownloader.sh 9
-rw-r--r-- vendor/github.com/client9/misspell/stringreplacer_test.gox 421
-rw-r--r-- vendor/github.com/client9/misspell/url_test.go 105
-rw-r--r-- vendor/github.com/client9/misspell/words_test.go 35
-rw-r--r-- vendor/gopkg.in/yaml.v2/.travis.yml 9
-rw-r--r-- vendor/gopkg.in/yaml.v2/NOTICE 13
-rw-r--r-- vendor/gopkg.in/yaml.v2/README.md 133
-rw-r--r-- vendor/gopkg.in/yaml.v2/apic.go 55
-rw-r--r-- vendor/gopkg.in/yaml.v2/decode.go 240
-rw-r--r-- vendor/gopkg.in/yaml.v2/decode_test.go 1017
-rw-r--r-- vendor/gopkg.in/yaml.v2/emitterc.go 11
-rw-r--r-- vendor/gopkg.in/yaml.v2/encode.go 136
-rw-r--r-- vendor/gopkg.in/yaml.v2/encode_test.go 501
-rw-r--r-- vendor/gopkg.in/yaml.v2/example_embedded_test.go 41
-rw-r--r-- vendor/gopkg.in/yaml.v2/readerc.go 20
-rw-r--r-- vendor/gopkg.in/yaml.v2/resolve.go 80
-rw-r--r-- vendor/gopkg.in/yaml.v2/scannerc.go 29
-rw-r--r-- vendor/gopkg.in/yaml.v2/sorter.go 9
-rw-r--r-- vendor/gopkg.in/yaml.v2/suite_test.go 12
-rw-r--r-- vendor/gopkg.in/yaml.v2/writerc.go 65
-rw-r--r-- vendor/gopkg.in/yaml.v2/yaml.go 125
-rw-r--r-- vendor/gopkg.in/yaml.v2/yamlh.go 30
-rw-r--r-- wg-apply/README.md 2
-rw-r--r-- wg-cloud-provider/cloud-provider-requirements.md 87
-rw-r--r-- wg-container-identity/README.md 2
-rwxr-xr-x wg-machine-learning/README.md 16
151 files changed, 5700 insertions, 4831 deletions
diff --git a/Gopkg.lock b/Gopkg.lock
index 734b7b2b..53091521 100644
--- a/Gopkg.lock
+++ b/Gopkg.lock
@@ -7,14 +7,14 @@
".",
"cmd/misspell"
]
- revision = "59894abde931a32630d4e884a09c682ed20c5c7c"
- version = "v0.3.0"
+ revision = "b90dc15cfd220ecf8bbc9043ecb928cef381f011"
+ version = "v0.3.4"
[[projects]]
- branch = "v2"
name = "gopkg.in/yaml.v2"
packages = ["."]
- revision = "eb3733d160e74a9c7e442f435eb3bea458e1d19f"
+ revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
+ version = "v2.2.1"
[solve-meta]
analyzer-name = "dep"
diff --git a/Gopkg.toml b/Gopkg.toml
index dbcf1103..a9c8f5c1 100644
--- a/Gopkg.toml
+++ b/Gopkg.toml
@@ -1,2 +1,6 @@
-
required = ["github.com/client9/misspell/cmd/misspell"]
+
+[prune]
+ go-tests = true
+ unused-packages = true
+ non-go = true
diff --git a/OWNERS_ALIASES b/OWNERS_ALIASES
index add87e14..7f183878 100644
--- a/OWNERS_ALIASES
+++ b/OWNERS_ALIASES
@@ -21,9 +21,10 @@ aliases:
- kris-nova
- countspongebob
sig-azure-leads:
- - slack
+ - justaugustus
+ - shubheksha
+ - khenidak
- colemickens
- - jdumars
sig-big-data-leads:
- foxish
- erikerlandson
@@ -31,6 +32,10 @@ aliases:
- soltysh
- pwittrock
- AdoHe
+ sig-cloud-provider-leads:
+ - andrewsykim
+ - hogepodge
+ - jagosan
sig-cluster-lifecycle-leads:
- lukemarsden
- roberthbailey
diff --git a/SECURITY_CONTACTS b/SECURITY_CONTACTS
new file mode 100644
index 00000000..81091860
--- /dev/null
+++ b/SECURITY_CONTACTS
@@ -0,0 +1,13 @@
+# Defined below are the security contacts for this repo.
+#
+# They are the contact point for the Product Security Team to reach out
+# to for triaging and handling of incoming issues.
+#
+# The below names agree to abide by the
+# [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
+# and will be removed and replaced if they violate that agreement.
+#
+# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
+# INSTRUCTIONS AT https://kubernetes.io/security/
+
+cblecker
diff --git a/committee-steering/governance/README.md b/committee-steering/governance/README.md
index c08fe079..7321c055 100644
--- a/committee-steering/governance/README.md
+++ b/committee-steering/governance/README.md
@@ -1,23 +1,32 @@
-# SIG Governance Template
+# SIG Charter Guide
-## Goals
+All Kubernetes SIGs must define a charter defining the scope and governance of the SIG.
-The following documents outline recommendations and requirements for SIG governance structure and provide
-template documents for SIGs to adapt. The goals are to define the baseline needs for SIGs to self govern
-and organize in a way that addresses the needs of the core Kubernetes project.
+- The scope must define what areas the SIG is responsible for directing and maintaining.
+- The governance must outline the responsibilities within the SIG as well as the roles
+ owning those responsibilities.
-The documents are focused on:
+## Steps to create a SIG charter
-- Outlining organizational responsibilities
-- Outlining organizational roles
-- Outlining processes and tools
+1. Copy the template into a new file under community/sig-*YOURSIG*/charter.md ([sig-architecture example])
+2. Read the [Recommendations and requirements] so you have context for the template
+3. Customize your copy of the template for your SIG. Feel free to make adjustments as needed.
+4. Update [sigs.yaml] with the individuals holding the roles as defined in the template.
+5. Add subprojects owned by your SIG to the [sigs.yaml]
+6. Create a pull request with a draft of your charter.md and sigs.yaml changes. Communicate it within your SIG
+ and get feedback as needed.
+7. Send the SIG Charter out for review to steering@kubernetes.io. Include the subject "SIG Charter Proposal: YOURSIG"
+ and a link to the PR in the body.
+8. Typically expect feedback within a week of sending your draft. Expect a longer wait if it falls over an
+ event such as KubeCon or holidays. Make any necessary changes.
+9. Once accepted, the steering committee will ratify the PR by merging it.
-Specific attention has been given to:
+## Steps to update an existing SIG charter
-- The role of technical leadership
-- The role of operational leadership
-- Process for agreeing upon technical decisions
-- Process for ensuring technical assets remain healthy
+- For significant changes, or any changes that could impact other SIGs, such as the scope, create a
+ PR and send it to the steering committee for review with the subject: "SIG Charter Update: YOURSIG"
+- For minor updates that only impact issues or areas within the scope of the SIG, the SIG Chairs should
+ facilitate the change.
## How to use the templates
@@ -35,6 +44,26 @@ and project.
- [Short Template]
+## Goals
+
+The following documents outline recommendations and requirements for SIG charters and provide
+template documents for SIGs to adapt. The goals are to define the baseline needs for SIGs to
+self govern and exercise ownership over an area of the Kubernetes project.
+
+The documents are focused on:
+
+- Defining SIG scope
+- Outlining organizational responsibilities
+- Outlining organizational roles
+- Outlining processes and tools
+
+Specific attention has been given to:
+
+- The role of technical leadership
+- The role of operational leadership
+- Process for agreeing upon technical decisions
+- Process for ensuring technical assets remain healthy
+
## FAQ
See [frequently asked questions]
@@ -42,3 +71,5 @@ See [frequently asked questions]
[Recommendations and requirements]: sig-governance-requirements.md
[Short Template]: sig-governance-template-short.md
[frequently asked questions]: FAQ.md
+[sigs.yaml]: https://github.com/kubernetes/community/blob/master/sigs.yaml
+[sig-architecture example]: ../../sig-architecture/charter.md
diff --git a/committee-steering/governance/sig-governance-requirements.md b/committee-steering/governance/sig-governance-requirements.md
index ac398e1e..e5e621c0 100644
--- a/committee-steering/governance/sig-governance-requirements.md
+++ b/committee-steering/governance/sig-governance-requirements.md
@@ -64,7 +64,7 @@ All technical assets *MUST* be owned by exactly 1 SIG subproject. The following
- *SHOULD* define a level of commitment for decisions that have gone through the formal process
(e.g. when is a decision revisited or reversed)
-- *MUST* How technical assets of project remain healthy and can be released
+*MUST* define how the technical assets of the project remain healthy and can be released
- Publicly published signals used to determine if code is in a healthy and releasable state
- Commitment and process to *only* release when signals say code is releasable
- Commitment and process to ensure assets are in a releasable state for milestones / releases
diff --git a/committee-steering/governance/sig-governance-template-short.md b/committee-steering/governance/sig-governance-template-short.md
index 98aed04d..620d764a 100644
--- a/committee-steering/governance/sig-governance-template-short.md
+++ b/committee-steering/governance/sig-governance-template-short.md
@@ -1,8 +1,23 @@
-# SIG Governance Template (Short Version)
+# SIG YOURSIG Charter
+
+This charter adheres to the conventions described in the [Kubernetes Charter README].
+
+## Scope
+
+This section defines the scope of things that would fall under ownership by this SIG.
+It must be used when determining whether subprojects should fall into this SIG.
+
+### In scope
+
+Outline of what falls into the scope of this SIG
+
+### Out of scope
+
+Outline of things that could be confused as falling into this SIG but don't
## Roles
-Membership for roles tracked in: <link to OWNERS file>
+Membership for roles tracked in: [sigs.yaml]
- Chair
- Run operations and processes governing the SIG
@@ -39,7 +54,7 @@ Membership for roles tracked in: <link to OWNERS file>
- *MAY* select additional subproject owners through a [super-majority] vote amongst subproject owners. This
*SHOULD* be supported by a majority of subproject contributors (through [lazy-consensus] with fallback on voting).
- Number: 3-5
- - Defined in [sigs.yaml] [OWNERS] files
+ - Defined in [OWNERS] files that are specified in [sigs.yaml]
- Members
- *MUST* maintain health of at least one subproject or the health of the SIG
@@ -50,6 +65,14 @@ Membership for roles tracked in: <link to OWNERS file>
- *MAY* participate in decision making for the subprojects they hold roles in
- Includes all reviewers and approvers in [OWNERS] files for subprojects
+- Security Contact
+ - *MUST* be a contact point for the Product Security Team to reach out to for
+ triaging and handling of incoming issues
+ - *MUST* accept the [Embargo Policy](https://github.com/kubernetes/sig-release/blob/master/security-release-process-documentation/security-release-process.md#embargo-policy)
+ - Defined in `SECURITY_CONTACTS` files; this is only relevant to the root file in
+ the repository. There is a template
+ [here](https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS)
+
## Organizational management
- SIG meets bi-weekly on zoom with agenda in meeting notes
@@ -120,3 +143,4 @@ Issues impacting multiple subprojects in the SIG should be resolved by either:
[KEP]: https://github.com/kubernetes/community/blob/master/keps/0000-kep-template.md
[sigs.yaml]: https://github.com/kubernetes/community/blob/master/sigs.yaml#L1454
[OWNERS]: contributors/devel/owners.md
+[Kubernetes Charter README]: https://github.com/kubernetes/community/blob/master/committee-steering/governance/README.md
diff --git a/communication/README.md b/communication/README.md
index 078c8bd5..fd6f0a52 100644
--- a/communication/README.md
+++ b/communication/README.md
@@ -10,7 +10,7 @@ The Kubernetes community abides by the [CNCF code of conduct]. Here is an excer
## SIGs
-Kubernetes encompasses many projects, organized into [SIGs](sig-list.md).
+Kubernetes encompasses many projects, organized into [SIGs](/sig-list.md).
Some communication has moved into SIG-specific channels - see
a given SIG subdirectory for details.
@@ -41,11 +41,15 @@ please [file an issue].
## Mailing lists
-Development announcements and discussions appear on the Google group
-[kubernetes-dev] (send mail to `kubernetes-dev@googlegroups.com`).
+Kubernetes mailing lists are hosted through Google Groups. To
+receive these lists' emails,
+[join](https://support.google.com/groups/answer/1067205) the groups
+relevant to you, as you would any other Google Group.
-Users trade notes on the Google group
-[kubernetes-users] (send mail to `kubernetes-users@googlegroups.com`).
+* [kubernetes-announce] broadcasts major project announcements such as releases and security issues
+* [kubernetes-dev] hosts development announcements and discussions around developing kubernetes itself
+* [kubernetes-users] is where kubernetes users trade notes
+* Additional Google groups exist and can be joined for discussion related to each SIG and Working Group. These are linked from the [SIG list](/sig-list.md).
## Accessing community documents
@@ -92,6 +96,7 @@ Kubernetes is the main focus of CloudNativeCon/KubeCon, held every spring in Eur
[iCal url]: https://calendar.google.com/calendar/ical/cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com/public/basic.ics
[Kubernetes Community Meeting Agenda]: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY/edit#
[kubernetes-community-video-chat]: https://groups.google.com/forum/#!forum/kubernetes-community-video-chat
+[kubernetes-announce]: https://groups.google.com/forum/#!forum/kubernetes-announce
[kubernetes-dev]: https://groups.google.com/forum/#!forum/kubernetes-dev
[kubernetes-users]: https://groups.google.com/forum/#!forum/kubernetes-users
[kubernetes.slackarchive.io]: https://kubernetes.slackarchive.io
diff --git a/communication/moderation.md b/communication/moderation.md
new file mode 100644
index 00000000..31f43376
--- /dev/null
+++ b/communication/moderation.md
@@ -0,0 +1,63 @@
+# Moderation on Kubernetes Communications Channels
+
+This page describes the rules and best practices for people chosen to moderate Kubernetes communications channels.
+This includes Slack, the mailing lists, and _any communication tool_ used in an official manner by the project.
+
+## Roles and Responsibilities
+
+As part of volunteering to become a moderator you are now a representative of the Kubernetes community, and it is your responsibility to remain aware of your contributions in this space.
+These responsibilities apply to all Kubernetes official channels.
+
+Moderators _MUST_:
+
+- Take action as specified by these Kubernetes Moderator Guidelines.
+ - You are empowered to take _immediate action_ when there is a violation. You do not need to wait for review or approval if an egregious violation has occurred. Make a judgement call based on our Code of Conduct and Values (see below).
+ - Removing a bad actor or content from the medium is preferable to letting it sit there.
+- Abide by the documented tasks and actions required of moderators.
+- Ensure that the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) is in effect on all official Kubernetes communication channels.
+- Become familiar with the [Kubernetes Community Values](https://github.com/kubernetes/steering/blob/master/values.md).
+- Take care of spam as soon as possible, which may mean taking action by removing a member from that resource.
+- Foster a safe and productive environment by being aware of the cultural differences that may exist between Kubernetes community members.
+- Understand that you might be contacted by moderators, community managers, and other users via private email or a direct message.
+- Report egregious behavior to steering@k8s.io.
+
+Moderators _SHOULD_:
+
+- Exercise compassion and empathy when communicating and collaborating with other community members.
+- Understand the difference between a user abusing the resource and a user just having difficulty expressing comments and questions in English.
+- Be an example and role model to others in the community.
+- Remember to check in and recognize if you need to take a break when you become frustrated or find yourself in a heated debate.
+- Help your colleagues if you recognize them in one of the [stages of burnout](https://opensource.com/business/15/12/avoid-burnout-live-happy).
+- Be helpful and have fun!
+
+## Violations
+
+The Kubernetes [Steering Committee](https://github.com/kubernetes/steering) will have the final authority regarding escalated moderation matters. Violations of the Code of Conduct will be handled on a case-by-case basis. Depending on severity, this can range up to and including removal of the person from the community, though this is extremely rare.
+
+## Specific Guidelines
+
+These guidelines are for tool-specific policies that don't fit under a general umbrella.
+
+### Mailing Lists
+
+
+### Slack
+
+- [Slack Guidelines](./slack-guidelines.md)
+
+### Zoom
+
+- [Zoom Guidelines](./zoom-guidelines.md)
+
+
+### References and Resources
+
+Thanks to the following projects for making their moderation guidelines public, allowing us to build on the shoulders of giants.
+Moderators are encouraged to study how other projects moderate and to learn from them in order to improve our guidelines:
+
+- Mozilla's [Forum Moderation Guidelines](https://support.mozilla.org/en-US/kb/moderation-guidelines)
+- OASIS [How to Moderate a Mailing List](https://www.oasis-open.org/khelp/kmlm/user_help/html/mailing_list_moderation.html)
+- Community Spark's [How to effectively moderate forums](http://www.communityspark.com/how-to-effectively-moderate-forums/)
+- [5 tips for more effective community moderation](https://www.socialmediatoday.com/social-business/5-tips-more-effective-community-moderation)
+- [8 Helpful Moderation Tips for Community Managers](https://sproutsocial.com/insights/tips-community-managers/)
+- [Setting Up Community Guidelines for Moderation](https://www.getopensocial.com/blog/community-management/setting-community-guidelines-moderation)
diff --git a/communication/zoom-guidelines.md b/communication/zoom-guidelines.md
new file mode 100644
index 00000000..18aec1e1
--- /dev/null
+++ b/communication/zoom-guidelines.md
@@ -0,0 +1,45 @@
+# Zoom Guidelines
+
+Zoom is the main video communication platform for Kubernetes.
+It is used for running the [community meeting](https://github.com/kubernetes/community/blob/master/events/community-meeting.md) and SIG meetings.
+Since the Zoom meetings are open to the general public, a Zoom host has to moderate a meeting if a person is in violation of the code of conduct.
+
+These guidelines are meant as a tool to help Kubernetes members manage their Zoom resources.
+Check the main [moderation](./moderation.md) page for more information on other tools and general moderation guidelines.
+
+## Code of Conduct
+Kubernetes adheres to the Cloud Native Computing Foundation's [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md) throughout the project, and this includes all communication mediums.
+
+## Moderation
+
+Zoom has documentation on how to use their moderation tools:
+
+- https://support.zoom.us/hc/en-us/articles/201362603-Host-Controls-in-a-Meeting
+
+Check the "Screen Share Controls" (via the ^ next to Share Screen): select who can share in your meeting, and whether you want only the host or any participant to be able to start a new share when someone else is sharing.
+
+You can also put an attendee on hold, which allows the host(s) to temporarily remove an attendee from the meeting.
+
+Unfortunately, Zoom doesn't have the ability to ban or block people from joining - especially if they have the invitation to that channel and the meeting id is publicly known.
+
+It is required that a host be comfortable with how to use these moderation tools. It is strongly encouraged that at least two people in a given SIG are comfortable with the moderation tools.
+
+## Meeting Archive Videos
+
+If a violation has been addressed by a host and it has been recorded by Zoom, the video should be edited before being posted on the [Kubernetes channel](https://www.youtube.com/c/kubernetescommunity).
+
+Contact [SIG Contributor Experience](https://github.com/kubernetes/community/tree/master/sig-contributor-experience) if you need help to edit a video before posting it to the public.
+
+## Admins
+
+- @parispittman
+- @castrojo
+
+Each SIG should have at least one person with a paid Zoom account.
+See the [SIG Creation procedure](https://github.com/kubernetes/community/blob/master/sig-governance.md#sig-creation-procedure) document on how to set up an initial account.
+
+The Zoom licenses are managed by the [CNCF Service Desk](https://github.com/cncf/servicedesk).
+
+## Escalating and/or Reporting a Problem
+
+Issues that cannot be handled via normal moderation can be escalated to the [Kubernetes steering committee](https://github.com/kubernetes/steering).
diff --git a/community-membership.md b/community-membership.md
index f87594a6..ea4e1a9a 100644
--- a/community-membership.md
+++ b/community-membership.md
@@ -222,7 +222,7 @@ The following apply to the subproject for which one would be an owner.
**Status:** Removed
-The Maintainer role has been removed and replaced with a greater focus on [owner](#owner)s.
+The Maintainer role has been removed and replaced with a greater focus on [OWNERS].
[code reviews]: contributors/devel/collab.md
[community expectations]: contributors/guide/community-expectations.md
diff --git a/contributors/design-proposals/api-machinery/aggregated-api-servers.md b/contributors/design-proposals/api-machinery/aggregated-api-servers.md
index c5f8ca1a..d436c6b9 100644
--- a/contributors/design-proposals/api-machinery/aggregated-api-servers.md
+++ b/contributors/design-proposals/api-machinery/aggregated-api-servers.md
@@ -31,7 +31,7 @@ aggregated servers.
* Developers should be able to write their own API server and cluster admins
should be able to add them to their cluster, exposing new APIs at runtime. All
of this should not require any change to the core kubernetes API server.
-* These new APIs should be seamless extension of the core kubernetes APIs (ex:
+* These new APIs should be seamless extensions of the core kubernetes APIs (ex:
they should be operated upon via kubectl).
## Non Goals
diff --git a/contributors/design-proposals/apps/controller_history.md b/contributors/design-proposals/apps/controller_history.md
index af58fad2..6e313ce8 100644
--- a/contributors/design-proposals/apps/controller_history.md
+++ b/contributors/design-proposals/apps/controller_history.md
@@ -390,7 +390,7 @@ the following command.
### Rollback
-For future work, `kubeclt rollout undo` can be implemented in the general case
+For future work, `kubectl rollout undo` can be implemented in the general case
as an extension of the [above](#viewing-history ).
```bash
diff --git a/contributors/design-proposals/apps/daemonset-update.md b/contributors/design-proposals/apps/daemonset-update.md
index aea7e244..f4ce1256 100644
--- a/contributors/design-proposals/apps/daemonset-update.md
+++ b/contributors/design-proposals/apps/daemonset-update.md
@@ -42,7 +42,7 @@ Here are some potential requirements that haven't been covered by this proposal:
- Uptime is critical for each pod of a DaemonSet during an upgrade (e.g. the time
from a DaemonSet pods being killed to recreated and healthy should be < 5s)
- Each DaemonSet pod can still fit on the node after being updated
-- Some DaemonSets require the node to be drained before the DeamonSet's pod on it
+- Some DaemonSets require the node to be drained before the DaemonSet's pod on it
is updated (e.g. logging daemons)
- DaemonSet's pods are implicitly given higher priority than non-daemons
- DaemonSets can only be operated by admins (i.e. people who manage nodes)
diff --git a/contributors/design-proposals/apps/statefulset-update.md b/contributors/design-proposals/apps/statefulset-update.md
index 27d3000f..b4089011 100644
--- a/contributors/design-proposals/apps/statefulset-update.md
+++ b/contributors/design-proposals/apps/statefulset-update.md
@@ -747,7 +747,7 @@ kubectl rollout undo statefulset web
### Rolling Forward
Rolling back is usually the safest, and often the fastest, strategy to mitigate
deployment failure, but rolling forward is sometimes the only practical solution
-for stateful applications (e.g. A users has a minor configuration error but has
+for stateful applications (e.g. A user has a minor configuration error but has
already modified the storage format for the application). Users can use
sequential `kubectl apply`'s to update the StatefulSet's current
[target state](#target-state). The StatefulSet's `.Spec.GenerationPartition`
diff --git a/contributors/design-proposals/auth/proc-mount-type.md b/contributors/design-proposals/auth/proc-mount-type.md
new file mode 100644
index 00000000..073fc23e
--- /dev/null
+++ b/contributors/design-proposals/auth/proc-mount-type.md
@@ -0,0 +1,93 @@
+# ProcMount/ProcMountType Option
+
+## Background
+
+Currently the way docker and most other container runtimes work is by masking
+and setting as read-only certain paths in `/proc`. This is to prevent data
+from being exposed into a container that should not be. However, there are
+certain use-cases where it is necessary to turn this off.
+
+## Motivation
+
+For end-users who would like to run unprivileged containers using user namespaces
+_nested inside_ CRI containers, we need an option to have a `ProcMount`. That is,
+we need an option to explicitly turn off the masking and read-only setting
+of paths so that we can
+mount `/proc` in the nested container as an unprivileged user.
+
+Please see the following filed issues for more information:
+- [opencontainers/runc#1658](https://github.com/opencontainers/runc/issues/1658#issuecomment-373122073)
+- [moby/moby#36597](https://github.com/moby/moby/issues/36597)
+- [moby/moby#36644](https://github.com/moby/moby/pull/36644)
+
+Please also see the [use case for building images securely in kubernetes](https://github.com/jessfraz/blog/blob/master/content/post/building-container-images-securely-on-kubernetes.md).
+
+Unmasking the paths in `/proc` really only makes sense when a user is
+nesting
+unprivileged containers with user namespaces; otherwise it would expose more
+information than is necessary to the program running in the container spawned
+by kubernetes.
+
+The main use case for this option is to run
+[genuinetools/img](https://github.com/genuinetools/img) inside a kubernetes
+container. That program then launches sub-containers that take advantage of
+user namespaces, re-mask /proc, and set /proc as read-only. Therefore there
+is no concern with having an unmasked proc open in the top level container.
+
+It should be noted that this is different from the host /proc. It is still
+a newly mounted /proc; the container runtime just will not mask the paths.
+
+Since the only use case for this option is to run unprivileged nested
+containers, this option should only be allowed or used if the user in the
+container is not `root`. This can be easily enforced with `MustRunAs`.
+Since the user inside is still unprivileged, doing things to `/proc` would
+be off limits regardless, since Linux user support already prevents this.
+
+## Existing SecurityContext objects
+
+Kubernetes defines `SecurityContext` for `Container` and `PodSecurityContext`
+for `PodSpec`. `SecurityContext` objects define the related security options
+for Kubernetes containers, e.g. selinux options.
+
+To support "ProcMount" options in Kubernetes, it is proposed to make
+the following changes:
+
+## Changes of SecurityContext objects
+
+Add a new `string` type named `ProcMountType` that will hold the viable
+options for a new `procMount` field in the `SecurityContext`
+definition.
+
+By default, `procMount` is `default`, i.e. the same behavior as today where
+the paths are masked.
+
+This will look like the following in the spec:
+
+```go
+type ProcMountType string
+
+const (
+ // DefaultProcMount uses the container runtime default ProcType. Most
+ // container runtimes mask certain paths in /proc to avoid accidental security
+ // exposure of special devices or information.
+ DefaultProcMount ProcMountType = "Default"
+
+ // UnmaskedProcMount bypasses the default masking behavior of the container
+ // runtime and ensures the newly created /proc for the container stays intact
+ // with no modifications.
+ UnmaskedProcMount ProcMountType = "Unmasked"
+)
+
+procMount *ProcMountType
+```
+
+This requires changes to the CRI runtime integrations so that
+kubelet will add the specific `unmasked` or `whatever_it_is_named` option.
+
+## Pod Security Policy changes
+
+A new `[]ProcMountType{}` field named `allowedProcMounts` will be added to the Pod
+Security Policy as well to gate which ProcMountTypes a user is allowed to
+set. This field will default to `[]ProcMountType{ DefaultProcMount }`.
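+
+As a rough illustration of the gating logic (the names follow this proposal;
+the helper and its signature are assumptions, not a prescribed implementation),
+the PSP check might look like:
+
+```go
+// allowedProcMount reports whether the requested ProcMountType is permitted
+// by the policy. An empty allow-list falls back to DefaultProcMount only.
+func allowedProcMount(requested ProcMountType, allowed []ProcMountType) bool {
+	if len(allowed) == 0 {
+		return requested == DefaultProcMount
+	}
+	for _, a := range allowed {
+		if a == requested {
+			return true
+		}
+	}
+	return false
+}
+```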
diff --git a/contributors/design-proposals/network/support_traffic_shaping_for_kubelet_cni.md b/contributors/design-proposals/network/support_traffic_shaping_for_kubelet_cni.md
new file mode 100644
index 00000000..827df5a8
--- /dev/null
+++ b/contributors/design-proposals/network/support_traffic_shaping_for_kubelet_cni.md
@@ -0,0 +1,89 @@
+# Support traffic shaping for CNI network plugin
+
+Version: Alpha
+
+Authors: @m1093782566
+
+## Motivation and background
+
+Currently the kubenet code supports applying basic traffic shaping during pod setup. This will happen if bandwidth-related annotations have been added to the pod's metadata, for example:
+
+```json
+{
+ "kind": "Pod",
+ "metadata": {
+ "name": "iperf-slow",
+ "annotations": {
+ "kubernetes.io/ingress-bandwidth": "10M",
+ "kubernetes.io/egress-bandwidth": "10M"
+ }
+ }
+}
+```
+
+Our current implementation uses `linux tc` to add a download (ingress) and an upload (egress) rate limiter using 1 root `qdisc`, 2 `class`es (one for ingress and one for egress) and 2 `filter`s (one for ingress and one for egress, attached to the ingress and egress classes respectively).
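+
+For illustration, the setup amounts to a handful of `tc` invocations of
+roughly the following shape (the exact arguments are illustrative, not the
+literal kubenet commands):
+
+```go
+package tcshape
+
+import (
+	"fmt"
+	"os/exec"
+)
+
+// shapeInterface sketches the qdisc/class/filter setup described above for
+// one direction of traffic; rate is a tc rate string such as "10mbit".
+func shapeInterface(iface, rate string) error {
+	cmds := [][]string{
+		// one root qdisc
+		{"qdisc", "add", "dev", iface, "root", "handle", "1:", "htb", "default", "30"},
+		// one class carrying the rate limit
+		{"class", "add", "dev", iface, "parent", "1:", "classid", "1:1", "htb", "rate", rate},
+		// one filter steering traffic into the class
+		{"filter", "add", "dev", iface, "parent", "1:", "protocol", "ip", "prio", "1",
+			"u32", "match", "ip", "dst", "0.0.0.0/0", "flowid", "1:1"},
+	}
+	for _, args := range cmds {
+		if out, err := exec.Command("tc", args...).CombinedOutput(); err != nil {
+			return fmt.Errorf("tc %v failed: %v: %s", args, err, out)
+		}
+	}
+	return nil
+}
+```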
+
+Kubelet CNI code doesn't support it yet, though CNI has already added a [traffic shaping plugin](https://github.com/containernetworking/plugins/tree/master/plugins/meta/bandwidth). We can replicate the behavior we have today in kubenet for the kubelet CNI network plugin if we feel this is an important feature.
+
+## Goal
+
+Support traffic shaping for CNI network plugin in Kubernetes.
+
+## Non-goal
+
+Requiring CNI plugins to implement this sort of traffic shaping guarantee.
+
+## Proposal
+
+If kubelet starts up with `network-plugin = cni` and the user has enabled traffic shaping via the network plugin configuration, it would then populate the `runtimeConfig` section of the config when calling the `bandwidth` plugin.
+
+Traffic shaping in the Kubelet CNI network plugin can work with the ptp and bridge network plugins.
+
+### Pod Setup
+
+When we create a pod with bandwidth configuration in its metadata, for example,
+
+```json
+{
+ "kind": "Pod",
+ "metadata": {
+ "name": "iperf-slow",
+ "annotations": {
+ "kubernetes.io/ingress-bandwidth": "10M",
+ "kubernetes.io/egress-bandwidth": "10M"
+ }
+ }
+}
+```
+
+Kubelet would first parse the ingress and egress bandwidth values and transform them to Kbps, because both `ingressRate` and `egressRate` in the cni bandwidth plugin are in Kbps. A user would add something like this to their CNI config list if they want to enable traffic shaping via the plugin:
+
+```json
+{
+ "type": "bandwidth",
+ "capabilities": {"trafficShaping": true}
+}
+```
+
+Kubelet would then populate the `runtimeConfig` section of the config when calling the `bandwidth` plugin:
+
+```json
+{
+ "type": "bandwidth",
+ "runtimeConfig": {
+ "trafficShaping": {
+ "ingressRate": "X",
+ "egressRate": "Y"
+ }
+ }
+}
+```
+
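+The annotation-to-Kbps conversion described above could be sketched as follows
+(assuming the usual `resource.Quantity` parsing; the helper name is illustrative):
+
+```go
+package bandwidth
+
+import "k8s.io/apimachinery/pkg/api/resource"
+
+// toKbps converts an annotation value such as "10M" (bits per second) into
+// the Kbps unit expected by the CNI bandwidth plugin.
+func toKbps(annotation string) (int64, error) {
+	q, err := resource.ParseQuantity(annotation)
+	if err != nil {
+		return 0, err
+	}
+	return q.Value() / 1000, nil
+}
+```
+
+For `"10M"`, this yields `10000` Kbps.
+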
+### Pod Teardown
+
+When we delete a pod, kubelet will build the runtime config for calling the cni plugin `DelNetwork/DelNetworkList` API, which will remove this pod's bandwidth configuration.
+
+## Next step
+
+* Support ingress and egress burst bandwidth in Pod.
+* Graduate annotations to Pod Spec.
diff --git a/contributors/design-proposals/node/cri-windows.md b/contributors/design-proposals/node/cri-windows.md
index e1a7f1fa..6589d985 100644
--- a/contributors/design-proposals/node/cri-windows.md
+++ b/contributors/design-proposals/node/cri-windows.md
@@ -85,7 +85,7 @@ The implementation will mainly be in two parts:
In both parts, we need to implement:
* Fork code for Windows from Linux.
-* Convert from Resources.Requests and Resources.Limits to Windows configuration in CRI, and convert from Windows configration in CRI to container configuration.
+* Convert from Resources.Requests and Resources.Limits to Windows configuration in CRI, and convert from Windows configuration in CRI to container configuration.
To implement resource controls for Windows containers, refer to [this MSDN documentation](https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/resource-controls) and [Docker's conversion to OCI spec](https://github.com/moby/moby/blob/master/daemon/oci_windows.go).
diff --git a/contributors/design-proposals/node/kubelet-cri-logging.md b/contributors/design-proposals/node/kubelet-cri-logging.md
index a19ff3f5..12d0624d 100644
--- a/contributors/design-proposals/node/kubelet-cri-logging.md
+++ b/contributors/design-proposals/node/kubelet-cri-logging.md
@@ -142,11 +142,22 @@ extend this by maintaining a metadata file in the pod directory.
**Log format**
The runtime should decorate each log entry with a RFC 3339Nano timestamp
-prefix, the stream type (i.e., "stdout" or "stderr"), and ends with a newline.
+prefix, the stream type (i.e., "stdout" or "stderr"), the tags of the log
+entry, and the log content, which ends with a newline.
+The `tags` field can support multiple tags, delimited by `:`. Currently, only
+one tag is defined in CRI to support multi-line log entries: partial or full.
+Partial (`P`) is used when a log entry is split into multiple lines by the
+runtime, and the entry has not ended yet. Full (`F`) indicates that the log
+entry is completed -- it is either a single-line entry, or this is the last
+line of the multiple-line entry.
+
+For example,
```
-2016-10-06T00:17:09.669794202Z stdout The content of the log entry 1
-2016-10-06T00:17:10.113242941Z stderr The content of the log entry 2
+2016-10-06T00:17:09.669794202Z stdout F The content of the log entry 1
+2016-10-06T00:17:09.669794202Z stdout P First line of log entry 2
+2016-10-06T00:17:09.669794202Z stdout P Second line of the log entry 2
+2016-10-06T00:17:10.113242941Z stderr F Last line of the log entry 2
```
With the knowledge, kubelet can parse the logs and serve them for `kubectl
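
A parser for entries in this format might look like the following (a minimal
sketch, not the actual kubelet implementation):

```go
package crilog

import (
	"fmt"
	"strings"
	"time"
)

// Entry is one decoded CRI log line.
type Entry struct {
	Time    time.Time
	Stream  string // "stdout" or "stderr"
	Tag     string // "P" (partial) or "F" (full)
	Content string
}

// Parse splits "<RFC3339Nano> <stream> <tags> <content>" into an Entry.
func Parse(line string) (Entry, error) {
	parts := strings.SplitN(line, " ", 4)
	if len(parts) != 4 {
		return Entry{}, fmt.Errorf("malformed log line: %q", line)
	}
	ts, err := time.Parse(time.RFC3339Nano, parts[0])
	if err != nil {
		return Entry{}, err
	}
	return Entry{Time: ts, Stream: parts[1], Tag: parts[2], Content: parts[3]}, nil
}
```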
diff --git a/contributors/design-proposals/node/node-usernamespace-remapping.md b/contributors/design-proposals/node/node-usernamespace-remapping.md
new file mode 100644
index 00000000..75cb0888
--- /dev/null
+++ b/contributors/design-proposals/node/node-usernamespace-remapping.md
@@ -0,0 +1,209 @@
+# Support Node-Level User Namespaces Remapping
+
+- [Summary](#summary)
+- [Motivation](#motivation)
+- [Goals](#goals)
+- [Non-Goals](#non-goals)
+- [User Stories](#user-stories)
+- [Proposal](#proposal)
+- [Future Work](#future-work)
+- [Risks and Mitigations](#risks-and-mitigations)
+- [Graduation Criteria](#graduation-criteria)
+- [Alternatives](#alternatives)
+
+
+_Authors:_
+
+* Mrunal Patel &lt;mpatel@redhat.com&gt;
+* Jan Pazdziora &lt;jpazdziora@redhat.com&gt;
+* Vikas Choudhary &lt;vichoudh@redhat.com&gt;
+
+## Summary
+Container security consists of many different kernel features that work together to make containers secure. User namespaces are one such feature that enables interesting possibilities for containers by allowing them to be root inside the container while not being root on the host. This gives more capabilities to the containers while protecting the host from the container being root, and adds one more layer to container security.
+In this proposal we discuss:
+- use-cases/user-stories that benefit from this enhancement
+- implementation design and scope for alpha release
+- long-term roadmap to fully support this feature beyond alpha
+
+## Motivation
+From user_namespaces(7):
+> User namespaces isolate security-related identifiers and attributes, in particular, user IDs and group IDs, the root directory, keys, and capabilities. A process's user and group IDs can be different inside and outside a user namespace. In particular, a process can have a normal unprivileged user ID outside a user namespace while at the same time having a user ID of 0 inside the namespace; in other words, the process has full privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace.
+
+In order to run Pods with software which expects to run as root or with elevated privileges, while still containing the processes and protecting both the Nodes and other Pods, the Linux kernel mechanism of user namespaces can be used to make the processes in the Pods view their environment as having the privileges, while on the host (Node) level these processes appear without privileges, or with privileges only affecting processes in the same Pods.
+
+The purpose of using user namespaces in Kubernetes is to let the processes in Pods think they run as one uid set when in fact they run as different “real” uids on the Nodes.
+
+In this text, almost everything said about uids can also be applied to gids.
+
+## Goals
+Enable user namespace support in a kubernetes cluster so that workloads that work today also work with user namespaces enabled at runtime. Furthermore, make workloads that require a root/privileged user inside the container safer for the node by using the additional security of user namespaces. Containers will run in a user namespace different from the user namespace of the underlying host.
+
+## Non-Goals
+- Supporting pod/container-level user namespace isolation. There can be images using different users, but on the node, pods/containers running with these images will share a common user namespace remapping configuration. In other words, all containers on a node share a common user-namespace range.
+- Remote volume support, e.g. NFS
+
+## User Stories
+- As a cluster admin, I want to protect the node from rogue container process(es) running inside pod containers with root privileges. If such a process is able to break out into the node, it could be a security issue.
+- As a cluster admin, I want to support all the images irrespective of what user/group that image is using.
+- As a cluster admin, I want to allow some pods to disable user namespaces if they require elevated privileges.
+
+## Proposal
+The proposal is to support user-namespaces for pod containers. This can be done at two levels:
+- Node-level: This proposal explains this part in detail.
+- Namespace-Level/Pod-level: The plan is to target this in the future due to missing support in low-level system components such as runtimes and the kernel. More on this in the `Future Work` section.
+
+Node-level user-namespace support means that, if the feature is enabled, all pods on a node will share a common user-namespace and a common UID (and GID) range (which is a subset of the node's total UIDs (and GIDs)). This common user-namespace is the runtime's default user-namespace range, which is remapped to the containers' UIDs (and GIDs), starting with the first UID as the container's 'root'.
+In general Linux convention, UID(or GID) mapping consists of three parts:
+1. Host (U/G)ID: First (U/G)ID of the range on the host that is being remapped to the (U/G)IDs in the container user-namespace
+2. Container (U/G)ID: First (U/G)ID of the range in the container namespace and this is mapped to the first (U/G)ID on the host(mentioned in previous point).
+3. Count/Size: Total number of consecutive mapping between host and container user-namespaces, starting from the first one (including) mentioned above.
+
+As an example, take `host_id 1000, container_id 0, size 10`.
+In this case, 1000 to 1009 on the host will be mapped to 0 to 9 inside the container.
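+
+The mapping arithmetic is a straight offset; a small illustration (the helper
+is hypothetical, written against a single contiguous mapping):
+
+```go
+// hostUID translates a container UID to the corresponding host UID;
+// ok is false when the UID falls outside the mapped range.
+func hostUID(containerID, hostID, size, uid uint32) (mapped uint32, ok bool) {
+	if uid < containerID || uid >= containerID+size {
+		return 0, false
+	}
+	return hostID + (uid - containerID), true
+}
+```
+
+For the example above, `hostUID(0, 1000, 10, 0)` returns `1000, true`.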
+
+User-namespace support should be enabled only when the container runtime on the node supports user-namespace remapping and it is enabled in its configuration. To enable user-namespaces, a feature-gate flag will need to be passed to Kubelet like this: `--feature-gates="NodeUserNamespace=true"`
+
+A new CRI API, `GetRuntimeConfigInfo`, will be added. Kubelet will use this API:
+- To verify that user-namespace remapping is enabled at the runtime. If it is found disabled, kubelet will fail to start.
+- To determine the default user-namespace range at the runtime, the starting UID of which is mapped to UID '0' of the container.
+
+### Volume Permissions
+Kubelet will change the file ownership, i.e. `chown`, at `/var/lib/kubelet/pods` prior to any container start so that file permissions are updated according to the remapped UID and GID.
+This proposal will work only for local volumes and not with remote volumes such as NFS.
+
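+A simplistic sketch of that step (illustrative only; a real implementation
+would need to be more careful, e.g. about preserving per-file ownership
+offsets within the range):
+
+```go
+package podvolumes
+
+import (
+	"os"
+	"path/filepath"
+)
+
+// chownPodDir recursively hands a pod directory over to the remapped range;
+// rootUID/rootGID are the host IDs mapped to UID/GID 0 in the container.
+func chownPodDir(dir string, rootUID, rootGID int) error {
+	return filepath.Walk(dir, func(path string, _ os.FileInfo, err error) error {
+		if err != nil {
+			return err
+		}
+		return os.Chown(path, rootUID, rootGID)
+	})
+}
+```
+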
+### How to disable `NodeUserNamespace` for a specific pod
+This can be done in two ways:
+- **Alpha:** Implicitly using host namespace for the pod containers
+This support is already present (currently it seems broken, will be fixed) in Kubernetes as experimental functionality, which can be enabled using `--feature-gates="ExperimentalHostUserNamespaceDefaulting=true"`.
+If Pod-Security-Policy is configured to allow the following to be requested by a pod, host user-namespace will be enabled for the container:
+ - host namespaces (pid, ipc, net)
+ - non-namespaced capabilities (mknod, sys_time, sys_module)
+ - the pod contains a privileged container or uses host path volumes.
+ - https://github.com/kubernetes/kubernetes/commit/d0d78f478ce0fb9d5e121db3b7c6993b482af82c#diff-a53fa76e941e0bdaee26dcbc435ad2ffR437 introduced via https://github.com/kubernetes/kubernetes/commit/d0d78f478ce0fb9d5e121db3b7c6993b482af82c.
+
+- **Beta:** Explicit API to request host user-namespace in pod spec
+ This is being targeted under Beta graduation plans.
+
+### CRI API Changes
+Proposed CRI API changes:
+
+```golang
+// Runtime service defines the public APIs for remote container runtimes
+service RuntimeService {
+ // Version returns the runtime name, runtime version, and runtime API version.
+ rpc Version(VersionRequest) returns (VersionResponse) {}
+ …….
+ …….
+ // GetRuntimeConfigInfo returns the configuration details of the runtime.
+ rpc GetRuntimeConfigInfo(GetRuntimeConfigInfoRequest) returns (GetRuntimeConfigInfoResponse) {}
+}
+// LinuxIDMapping represents a single user namespace mapping in Linux.
+message LinuxIDMapping {
+ // container_id is the starting id for the mapping inside the container.
+ uint32 container_id = 1;
+ // host_id is the starting id for the mapping on the host.
+ uint32 host_id = 2;
+ // size is the length of the mapping.
+ uint32 size = 3;
+}
+
+message LinuxUserNamespaceConfig {
+ // is_enabled, if true indicates that user-namespaces are supported and enabled in the container runtime
+ bool is_enabled = 1;
+ // uid_mappings is an array of user id mappings.
+ repeated LinuxIDMapping uid_mappings = 2;
+ // gid_mappings is an array of group id mappings.
+ repeated LinuxIDMapping gid_mappings = 3;
+}
+message GetRuntimeConfig {
+ LinuxUserNamespaceConfig user_namespace_config = 1;
+}
+
+message GetRuntimeConfigInfoRequest {}
+
+message GetRuntimeConfigInfoResponse {
+ GetRuntimeConfig runtime_config = 1;
+}
+
+...
+
+// NamespaceOption provides options for Linux namespaces.
+message NamespaceOption {
+ // Network namespace for this container/sandbox.
+ // Note: There is currently no way to set CONTAINER scoped network in the Kubernetes API.
+ // Namespaces currently set by the kubelet: POD, NODE
+ NamespaceMode network = 1;
+ // PID namespace for this container/sandbox.
+ // Note: The CRI default is POD, but the v1.PodSpec default is CONTAINER.
+ // The kubelet's runtime manager will set this to CONTAINER explicitly for v1 pods.
+ // Namespaces currently set by the kubelet: POD, CONTAINER, NODE
+ NamespaceMode pid = 2;
+ // IPC namespace for this container/sandbox.
+ // Note: There is currently no way to set CONTAINER scoped IPC in the Kubernetes API.
+ // Namespaces currently set by the kubelet: POD, NODE
+ NamespaceMode ipc = 3;
+ // User namespace for this container/sandbox.
+ // Note: There is currently no way to set CONTAINER scoped user namespace in the Kubernetes API.
+ // The container runtime should ignore this if user namespace is NOT enabled.
+ // POD is the default value. Kubelet will set it to NODE when trying to use host user-namespace
+ // Namespaces currently set by the kubelet: POD, NODE
+ NamespaceMode user = 4;
+}
+
+```
+
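+On startup, kubelet's use of the new call could look roughly like this (a
+sketch only: `RuntimeService` stands in for the generated CRI client, the
+message types are the Go forms of the proto above, and `errors` is from the
+standard library):
+
+```go
+// RuntimeService is a stand-in for the generated CRI client interface.
+type RuntimeService interface {
+	GetRuntimeConfigInfo(*GetRuntimeConfigInfoRequest) (*GetRuntimeConfigInfoResponse, error)
+}
+
+// verifyUserNamespaces fails fast when the NodeUserNamespace feature gate is
+// on but the runtime reports user-namespace remapping as disabled.
+func verifyUserNamespaces(rs RuntimeService) (*LinuxUserNamespaceConfig, error) {
+	resp, err := rs.GetRuntimeConfigInfo(&GetRuntimeConfigInfoRequest{})
+	if err != nil {
+		return nil, err
+	}
+	cfg := resp.RuntimeConfig.UserNamespaceConfig
+	if cfg == nil || !cfg.IsEnabled {
+		return nil, errors.New("user-namespace remapping is not enabled in the container runtime")
+	}
+	return cfg, nil
+}
+```
+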
+### Runtime Support
+- Docker: Here is the [user-namespace documentation](https://docs.docker.com/engine/security/userns-remap/) and this is the [implementation PR](https://github.com/moby/moby/pull/12648)
+ - Concerns:
+Docker API does not provide the user-namespace mapping. Therefore, to handle the `GetRuntimeConfigInfo` API, changes will be done in `dockershim` to read the system files `/etc/subuid` and `/etc/subgid` to figure out the default user-namespace mapping (see the sketch after this list). The `/info` api will be used to figure out if user-namespace is enabled, and `Docker Root Dir` will be used to figure out the host uid mapped to uid `0` in the container, e.g. `Docker Root Dir: /var/lib/docker/2131616.2131616` shows that host uid `2131616` will be mapped to uid `0`.
+- CRI-O: https://github.com/kubernetes-incubator/cri-o/pull/1519
+- Containerd: https://github.com/containerd/containerd/blob/129167132c5e0dbd1b031badae201a432d1bd681/container_opts_unix.go#L149
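+
+A sketch of the `/etc/subuid` / `/etc/subgid` parsing mentioned in the Docker
+item above (the file format is `name:startID:count`, one entry per line; the
+helper is illustrative):
+
+```go
+package dockershim
+
+import (
+	"bufio"
+	"fmt"
+	"io"
+	"strconv"
+	"strings"
+)
+
+// parseSubID returns the first start/count remap entry for user in an
+// /etc/subuid- or /etc/subgid-style stream.
+func parseSubID(r io.Reader, user string) (start, count uint32, err error) {
+	sc := bufio.NewScanner(r)
+	for sc.Scan() {
+		f := strings.Split(strings.TrimSpace(sc.Text()), ":")
+		if len(f) != 3 || f[0] != user {
+			continue
+		}
+		s, err1 := strconv.ParseUint(f[1], 10, 32)
+		c, err2 := strconv.ParseUint(f[2], 10, 32)
+		if err1 != nil || err2 != nil {
+			continue
+		}
+		return uint32(s), uint32(c), nil
+	}
+	if err := sc.Err(); err != nil {
+		return 0, 0, err
+	}
+	return 0, 0, fmt.Errorf("no subordinate ID entry for %q", user)
+}
+```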
+
+### Implementation Roadmap
+#### Phase 1: Support in Kubelet, Alpha, [Target: Kubernetes v1.11]
+- Add feature gate `NodeUserNamespace`, disabled by default
+- Add new CRI API, `GetRuntimeConfigInfo()`
+- Add logic in Kubelet to handle pod creation, which includes parsing the GetRuntimeConfigInfo response and changing file permissions in /var/lib/kubelet with the learned userns mapping.
+- Add changes in dockershim to implement GetRuntimeConfigInfo() for docker runtime
+- Add changes in CRI-O to implement userns support and GetRuntimeConfigInfo() support
+- Unit test cases
+- e2e tests
+
+#### Phase 2: Beta Support [Target: Kubernetes v1.12]
+- PSP integration
+- Grow ExperimentalHostUserNamespaceDefaulting from an experimental feature gate to a Kubelet flag
+- API changes to allow a pod to request HostUserNamespace in the pod spec
+- e2e tests
+
+### References
+- Default host user namespace via experimental flag
+ - https://github.com/kubernetes/kubernetes/pull/31169
+- Enable userns support for containers launched by kubelet
+ - https://github.com/kubernetes/features/issues/127
+- Track Linux User Namespaces in the Pod Security Policy
+ - https://github.com/kubernetes/kubernetes/issues/59152
+- Add support for experimental-userns-remap-root-uid and experimental-userns-remap-root-gid options to match the remapping used by the container runtime.
+ - https://github.com/kubernetes/kubernetes/pull/55707
+- rkt User Namespaces Background
+ - https://coreos.com/rkt/docs/latest/devel/user-namespaces.html
+
+## Future Work
+### Namespace-Level/Pod-Level user-namespace support
+There is no runtime today which supports creating containers with a specified user namespace configuration. For example, see the discussion related to this support in Docker: https://github.com/moby/moby/issues/28593.
+Once the user-namespace feature in the runtimes has evolved to support a container's request for a specific user-namespace mapping (UID and GID range), we can extend the current Node-Level user-namespace support in Kubernetes to support Namespace-level isolation (or, if desired, even pod-level isolation) by dividing and allocating the mapping learned from the runtime among Kubernetes namespaces (or pods, if desired). From an end-user UI perspective, we don't expect any change in the UI related to user namespaces support.
+### Remote Volumes
+Remote volume support should be investigated and targeted in the future once support exists at the lower infra layers.
+
+
+## Risks and Mitigations
+The main risk with this change stems from the fact that processes in Pods will run with different “real” uids than they used to, while expecting the original uids for operations on the Nodes or for consistent access to shared persistent storage.
+- This can be mitigated by turning the feature on gradually, per-Pod or per Kubernetes namespace.
+- For the Kubernetes cluster Pods (that provide the Kubernetes functionality), testing of their behaviour and ability to run in user-namespaced setups is crucial.
+
+## Graduation Criteria
+- PSP integration
+- API changes to allow a pod to request the host user namespace using, for example, `HostUserNamespace: True`, in the pod spec
+- e2e tests
+
+## Alternatives
+User Namespace mappings can be passed explicitly through kubelet flags, similar to https://github.com/kubernetes/kubernetes/pull/55707, but we do not prefer this option because it is very prone to misconfiguration.
diff --git a/contributors/design-proposals/scheduling/rescheduling.md b/contributors/design-proposals/scheduling/rescheduling.md
index db960934..32d86a27 100644
--- a/contributors/design-proposals/scheduling/rescheduling.md
+++ b/contributors/design-proposals/scheduling/rescheduling.md
@@ -28,7 +28,7 @@ implied. However, describing the process as "moving" the pod is approximately ac
and easier to understand, so we will use this terminology in the document.
We use the term "rescheduling" to describe any action the system takes to move an
-already-running pod. The decision may be made and executed by any component; we wil
+already-running pod. The decision may be made and executed by any component; we will
introduce the concept of a "rescheduler" component later, but it is not the only
component that can do rescheduling.
diff --git a/contributors/design-proposals/scheduling/taint-node-by-condition.md b/contributors/design-proposals/scheduling/taint-node-by-condition.md
index 550e9cd9..2e352d4f 100644
--- a/contributors/design-proposals/scheduling/taint-node-by-condition.md
+++ b/contributors/design-proposals/scheduling/taint-node-by-condition.md
@@ -19,8 +19,8 @@ In addition to this, with taint-based-eviction, the Node Controller already tain
| ------------------ | ------------------ | ------------ | -------- |
|Ready |True | - | |
| |False | NoExecute | node.kubernetes.io/not-ready |
-| |Unknown | NoExecute | node.kubernetes.io/unreachable |
-|OutOfDisk |True | NoSchedule | node.kubernetes.io/out-of-disk |
+| |Unknown | NoExecute | node.kubernetes.io/unreachable |
+|OutOfDisk |True | NoSchedule | node.kubernetes.io/out-of-disk |
| |False | - | |
| |Unknown | - | |
|MemoryPressure |True | NoSchedule | node.kubernetes.io/memory-pressure |
@@ -32,6 +32,9 @@ In addition to this, with taint-based-eviction, the Node Controller already tain
|NetworkUnavailable |True | NoSchedule | node.kubernetes.io/network-unavailable |
| |False | - | |
| |Unknown | - | |
+|PIDPressure |True | NoSchedule | node.kubernetes.io/pid-pressure |
+| |False | - | |
+| |Unknown | - | |
For example, if a CNI network is not detected on the node (e.g. a network is unavailable), the Node Controller will taint the node with `node.kubernetes.io/network-unavailable=:NoSchedule`. This will then allow users to add a toleration to their `PodSpec`, ensuring that the pod can be scheduled to this node if necessary. If the kubelet did not update the node’s status after a grace period, the Node Controller will only taint the node with `node.kubernetes.io/unreachable`; it will not taint the node with any unknown condition.
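+
+For illustration, such a toleration expressed with the `k8s.io/api/core/v1` Go types might look like the following sketch (not prescriptive):
+
+```go
+// tolerateNetworkUnavailable lets a pod tolerate the
+// node.kubernetes.io/network-unavailable:NoSchedule taint.
+var tolerateNetworkUnavailable = v1.Toleration{
+  Key:      "node.kubernetes.io/network-unavailable",
+  Operator: v1.TolerationOpExists,
+  Effect:   v1.TaintEffectNoSchedule,
+}
+```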
diff --git a/contributors/design-proposals/storage/container-storage-interface.md b/contributors/design-proposals/storage/container-storage-interface.md
index 1522539a..27e10bd1 100644
--- a/contributors/design-proposals/storage/container-storage-interface.md
+++ b/contributors/design-proposals/storage/container-storage-interface.md
@@ -314,7 +314,7 @@ The attach/detach controller,running as part of the kube-controller-manager bina
When the controller decides to attach a CSI volume, it will call the in-tree CSI volume plugin’s attach method. The in-tree CSI volume plugin’s attach method will do the following:
1. Create a new `VolumeAttachment` object (defined in the “Communication Channels” section) to attach the volume.
- * The name of the of the `VolumeAttachment` object will be `pv-<SHA256(PVName+NodeName)>`.
+ * The name of the `VolumeAttachment` object will be `pv-<SHA256(PVName+NodeName)>`.
* `pv-` prefix is used to allow using other scheme(s) for inline volumes in the future, with their own prefix.
* SHA256 hash is to reduce length of `PVName` plus `NodeName` string, each of which could be max allowed name length (hexadecimal representation of SHA256 is 64 characters).
* `PVName` is `PV.name` of the attached PersistentVolume.
diff --git a/contributors/design-proposals/storage/grow-volume-size.md b/contributors/design-proposals/storage/grow-volume-size.md
index 4fb53292..a968d91c 100644
--- a/contributors/design-proposals/storage/grow-volume-size.md
+++ b/contributors/design-proposals/storage/grow-volume-size.md
@@ -198,7 +198,7 @@ we have considered following options:
Cons:
* I don't know if there is a pattern that exists in kube today for shipping shell scripts that are called out from code in Kubernetes. Flex is
- different because, none of the flex scripts are shipped with Kuberntes.
+ different because, none of the flex scripts are shipped with Kubernetes.
3. Ship resizing tools in a container.
diff --git a/contributors/design-proposals/storage/pv-to-rbd-mapping.md b/contributors/design-proposals/storage/pv-to-rbd-mapping.md
index a64a1018..8071cbbe 100644
--- a/contributors/design-proposals/storage/pv-to-rbd-mapping.md
+++ b/contributors/design-proposals/storage/pv-to-rbd-mapping.md
@@ -55,7 +55,7 @@ the RBD image.
### Pros
- Simple to implement
- Does not cause regression in RBD image names, which remains same as earlier.
-- The metada information is not immediately visible to RBD admins
+- The metadata information is not immediately visible to RBD admins
### Cons
- NA
diff --git a/contributors/design-proposals/storage/svcacct-token-volume-source.md b/contributors/design-proposals/storage/svcacct-token-volume-source.md
new file mode 100644
index 00000000..3069e677
--- /dev/null
+++ b/contributors/design-proposals/storage/svcacct-token-volume-source.md
@@ -0,0 +1,148 @@
+# Service Account Token Volumes
+
+Authors:
+ @smarterclayton
+ @liggitt
+ @mikedanese
+
+## Summary
+
+Kubernetes is able to provide pods with unique identity tokens that can prove
+the caller is a particular pod to a Kubernetes API server. These tokens are
+injected into pods as secrets. This document proposes a new distribution
+mechanism with support for [improved service account tokens][better-tokens]
+and explores how to migrate from the existing mechanism backwards compatibly.
+
+## Motivation
+
+Many workloads running on Kubernetes need to prove to external parties who they
+are in order to participate in a larger application environment. This identity
+must be attested to by the orchestration system in a way that allows a third
+party to trust that an arbitrary container on the cluster is who it says it is.
+In addition, infrastructure running on top of Kubernetes needs a simple
+mechanism to communicate with the Kubernetes APIs and to provide more complex
+tooling. Finally, a significant set of security challenges are associated with
+storing service account tokens as secrets in Kubernetes and limiting the methods
+whereby malicious parties can get access to these tokens will reduce the risk of
+platform compromise.
+
+As a platform, Kubernetes should evolve to allow identity management systems to
+provide more powerful workload identity without breaking existing use cases, and
+provide a simple out of the box workload identity that is sufficient to cover
+the requirements of bootstrapping low-level infrastructure running on
+Kubernetes. We expect other systems to cover the more advanced scenarios,
+and see this effort as necessary glue to allow more powerful systems to succeed.
+
+With this feature, we hope to provide a backwards compatible replacement for
+service account tokens that strengthens the security and improves the
+scalability of the platform.
+
+## Proposal
+
+Kubernetes should implement a ServiceAccountToken volume projection that
+maintains a service account token requested by the node from the TokenRequest
+API.
+
+### Token Volume Projection
+
+A new volume projection will be implemented with an API that closely matches the
+TokenRequest API.
+
+```go
+type ProjectedVolumeSource struct {
+ Sources []VolumeProjection
+ DefaultMode *int32
+}
+
+type VolumeProjection struct {
+ Secret *SecretProjection
+ DownwardAPI *DownwardAPIProjection
+ ConfigMap *ConfigMapProjection
+ ServiceAccountToken *ServiceAccountTokenProjection
+}
+
+// ServiceAccountTokenProjection represents a projected service account token
+// volume. This projection can be used to insert a service account token into
+// the pod's runtime filesystem for use against APIs (Kubernetes API Server or
+// otherwise).
+type ServiceAccountTokenProjection struct {
+ // Audience is the intended audience of the token. A recipient of a token
+ // must identify itself with an identifier specified in the audience of the
+ // token, and otherwise should reject the token. The audience defaults to the
+ // identifier of the apiserver.
+ Audience string
+ // ExpirationSeconds is the requested duration of validity of the service
+ // account token. As the token approaches expiration, the kubelet volume
+ // plugin will proactively rotate the service account token. The kubelet will
+ // start trying to rotate the token if the token is older than 80 percent of
+ // its time to live or if the token is older than 24 hours. Defaults to 1 hour
+ // and must be at least 10 minutes.
+ ExpirationSeconds int64
+ // Path is the relative path of the file to project the token into.
+ Path string
+}
+```
+
+A volume plugin implemented in the kubelet will project a service account token
+sourced from the TokenRequest API into volumes created from
+ProjectedVolumeSources. As the token approaches expiration, the kubelet volume
+plugin will proactively rotate the service account token. The kubelet will start
+trying to rotate the token if the token is older than 80 percent of its time to
+live or if the token is older than 24 hours.
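+
+As a rough sketch (not the kubelet's actual code), that refresh condition could look like:
+
+```go
+// requiresRefresh reports whether a projected token should be rotated:
+// after 80 percent of its time to live has elapsed, or once the token
+// is older than 24 hours.
+func requiresRefresh(now, issuedAt, expires time.Time) bool {
+  ttl := expires.Sub(issuedAt)
+  if now.After(issuedAt.Add(time.Duration(float64(ttl) * 0.8))) {
+    return true
+  }
+  return now.After(issuedAt.Add(24 * time.Hour))
+}
+```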
+
+To replace the current service account token secrets, we also need to inject the
+cluster's CA certificate bundle. Initially we will publish this data to a
+ConfigMap per namespace and reference it using a ConfigMapProjection.
+
+A projected volume source that is equivalent to the current service account
+secret:
+
+```yaml
+sources:
+- serviceAccountToken:
+ expirationSeconds: 3153600000 # 100 years
+ path: token
+- configMap:
+ name: kube-cacrt
+ items:
+ - key: ca.crt
+ path: ca.crt
+- downwardAPI:
+ items:
+ - path: namespace
+    fieldRef:
+      fieldPath: metadata.namespace
+```
+
+
+This fixes one scalability issue with the current service account token
+deployment model where secret GETs are a large portion of overall apiserver
+traffic.
+
+A projected volume source that requests a token for vault and Istio CA:
+
+```yaml
+sources:
+- serviceAccountToken:
+ path: vault-token
+ audience: vault
+- serviceAccountToken:
+ path: istio-token
+ audience: ca.istio.io
+```
+
+### Alternatives
+
+1. Instead of implementing a service account token volume projection, we could
+ implement all injection as a flex volume or CSI plugin.
+ 1. Both flex volume and CSI are alpha and are unlikely to graduate soon.
+ 1. Virtual kubelets (like Fargate or ACS) may not be able to run flex
+ volumes.
+ 1. Service account tokens are a fundamental part of our API.
+1. Remove service accounts and service account tokens completely from core, use
+ an alternate mechanism that sits outside the platform.
+ 1. Other core features need service account integration, leading to all
+ users needing to install this extension.
+ 1. Complicates installation for the majority of users.
+
+
+[better-tokens]: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/bound-service-account-tokens.md
diff --git a/contributors/design-proposals/storage/volume-topology-scheduling.md b/contributors/design-proposals/storage/volume-topology-scheduling.md
index 2603e225..402ca0f9 100644
--- a/contributors/design-proposals/storage/volume-topology-scheduling.md
+++ b/contributors/design-proposals/storage/volume-topology-scheduling.md
@@ -1,24 +1,36 @@
# Volume Topology-aware Scheduling
-Authors: @msau42
+Authors: @msau42, @lichuqiang
This document presents a detailed design for making the default Kubernetes
scheduler aware of volume topology constraints, and making the
PersistentVolumeClaim (PVC) binding aware of scheduling decisions.
+## Definitions
+* Topology: Rules to describe accessibility of an object with respect to
+ location in a cluster.
+* Domain: A grouping of locations within a cluster. For example, 'node1',
+ 'rack10', 'zone5'.
+* Topology Key: A description of a general class of domains. For example,
+ 'node', 'rack', 'zone'.
+* Hierarchical domain: Domain that can be fully encompassed in a larger domain.
+ For example, the 'zone1' domain can be fully encompassed in the 'region1'
+ domain.
+* Failover domain: A domain that a workload intends to run in at a later time.
## Goals
-* Allow a Pod to request one or more topology-constrained Persistent
-Volumes (PV) that are compatible with the Pod's other scheduling
-constraints, such as resource requirements and affinity/anti-affinity
-policies.
-* Support arbitrary PV topology constraints (i.e. node,
-rack, zone, foo, bar).
-* Support topology constraints for statically created PVs and dynamically
-provisioned PVs.
+* Allow topology to be specified for both pre-provisioned and dynamically
+  provisioned PersistentVolumes so that the Kubernetes scheduler can correctly
+  place a Pod using such a volume on an appropriate node.
+* Support arbitrary PV topology domains (i.e. node, rack, zone, foo, bar)
+ without encoding each as first class objects in the Kubernetes API.
+* Allow the Kubernetes scheduler to influence where a volume is provisioned or
+ which pre-provisioned volume to bind to based on scheduling constraints on the
+ Pod requesting a volume, such as Pod resource requirements and
+ affinity/anti-affinity policies.
* No scheduling latency performance regression for Pods that do not use
-topology-constrained PVs.
-
+ PVs with topology.
+* Allow administrators to restrict allowed topologies per StorageClass.
## Non Goals
* Fitting a pod after the initial PVC binding has been completed.
@@ -36,13 +48,34 @@ operator to schedule them together. Another alternative is to merge the two
pods into one.
* For two+ pods non-simultaneously sharing a PVC, this scenario could be
handled by pod priorities and preemption.
+* Provisioning multi-domain volumes where all the domains will be able to run
+ the workload. For example, provisioning a multi-zonal volume and making sure
+ the pod can run in all zones.
+ * Scheduler cannot make decisions based off of future resource requirements,
+ especially if those resources can fluctuate over time. For applications that
+ use such multi-domain storage, the best practice is to either:
+ * Configure cluster autoscaling with enough resources to accommodate
+ failing over the workload to any of the other failover domains.
+ * Manually configure and overprovision the failover domains to
+ accommodate the resource requirements of the workload.
+* Scheduler supporting volume topologies that are independent of the node's
+ topologies.
+ * The Kubernetes scheduler only handles topologies with respect to the
+ workload and the nodes it runs on. If a storage system is deployed on an
+ independent topology, it will be up to the provisioner to correctly spread the
+ volumes for a workload. This could be facilitated as a separate feature
+ by:
+ * Passing the Pod's OwnerRef to the provisioner, and the provisioner
+ spreading volumes for Pods with the same OwnerRef
+ * Adding Volume Anti-Affinity policies, and passing those to the
+ provisioner.
## Problem
Volumes can have topology constraints that restrict the set of nodes that the
volume can be accessed on. For example, a GCE PD can only be accessed from a
single zone, and a local disk can only be accessed from a single node. In the
-future, there could be other topology constraints, such as rack or region.
+future, there could be other topology domains, such as rack or region.
A pod that uses such a volume must be scheduled to a node that fits within the
volume’s topology constraints. In addition, a pod can have further constraints
@@ -70,16 +103,21 @@ binding happens without considering if multiple PVCs are related, it is very lik
for the two PVCs to be bound to local disks on different nodes, making the pod
unschedulable.
* For multizone clusters and deployments requesting multiple dynamically provisioned
-zonal PVs, each PVC Is provisioned independently, and is likely to provision each PV
-In different zones, making the pod unschedulable.
+zonal PVs, each PVC is provisioned independently, and is likely to provision each PV
+in different zones, making the pod unschedulable.
To solve the issue of initial volume binding and provisioning causing an impossible
pod placement, volume binding and provisioning should be more tightly coupled with
pod scheduling.
-## New Volume Topology Specification
-To specify a volume's topology constraints in Kubernetes, the PersistentVolume
+## Volume Topology Specification
+First, volumes need a way to express topology constraints against nodes. Today, it
+is done for zonal volumes by having explicit logic to process zone labels on the
+PersistentVolume. However, this is not easily extendable for volumes with other
+topology keys.
+
+Instead, to support a generic specification, the PersistentVolume
object will be extended with a new NodeAffinity field that specifies the
constraints. It will closely mirror the existing NodeAffinity type used by
Pods, but we will use a new type so that we will not be bound by existing and
@@ -107,18 +145,27 @@ weights, but will not be included in the initial implementation.
The advantages of this NodeAffinity field vs the existing method of using zone labels
on the PV are:
-* We don't need to expose first-class labels for every topology domain.
-* Implementation does not need to be updated every time a new topology domain
+* We don't need to expose first-class labels for every topology key.
+* Implementation does not need to be updated every time a new topology key
is added to the cluster.
* NodeSelector is able to express more complex topology with ANDs and ORs.
+* NodeAffinity aligns with how topology is represented with other Kubernetes
+ resources.
Some downsides include:
* You can have a proliferation of Node labels if you are running many different
kinds of volume plugins, each with their own topology labeling scheme.
+* The NodeSelector is more expressive than what most storage providers will
+ need. Most storage providers only need a single topology key with
+ one or more domains. Non-hierarchical domains may present implementation
+ challenges, and it will be difficult to express all the functionality
+ of a NodeSelector in a non-Kubernetes specification like CSI.
### Example PVs with NodeAffinity
#### Local Volume
+In this example, the volume can only be accessed from nodes that have the
+label key `kubernetes.io/hostname` and label value `node-1`.
```
apiVersion: v1
kind: PersistentVolume
@@ -141,6 +188,9 @@ spec:
```
#### Zonal Volume
+In this example, the volume can only be accessed from nodes that have the
+label key `failure-domain.beta.kubernetes.io/zone` and label value
+`us-central1-a`.
```
apiVersion: v1
kind: PersistentVolume
@@ -164,6 +214,9 @@ spec:
```
#### Multi-Zonal Volume
+In this example, the volume can only be accessed from nodes that have the
+label key `failure-domain.beta.kubernetes.io/zone` and label value
+`us-central1-a` OR `us-central1-b`.
```
apiVersion: v1
kind: PersistentVolume
@@ -187,19 +240,154 @@ spec:
- us-central1-b
```
-### Default Specification
-Existing admission controllers and dynamic provisioners for zonal volumes
-will be updated to specify PV NodeAffinity in addition to the existing zone
-and region labels. This will handle newly created PV objects.
+#### Multi Label Volume
+In this example, the volume needs two labels to uniquely identify the topology.
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: rack-volume-1
+spec:
+ capacity:
+ storage: 100Gi
+ storageClassName: my-class
+ csi:
+ driver: my-rack-storage-driver
+ volumeHandle: my-vol
+ volumeAttributes:
+ foo: bar
+ nodeAffinity:
+ required:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: failure-domain.beta.kubernetes.io/zone
+ operator: In
+ values:
+ - us-central1-a
+ - key: foo.io/rack
+ operator: In
+ values:
+ - rack1
+```
+
+### Zonal PV Upgrade and Downgrade
+Upgrading of zonal PVs to use the new PV.NodeAffinity API can be phased in as
+follows:
+
+1. Update PV label admission controllers to specify the new PV.NodeAffinity. New
+ PVs created will automatically use the new PV.NodeAffinity. Existing PVs are
+ not updated yet, so on a downgrade, existing PVs are unaffected. New PVCs
+ should be deleted and recreated if there were problems with this feature.
+2. Once PV.NodeAffinity is GA, deprecate the VolumeZoneChecker scheduler
+ predicate. Add a zonal PV upgrade controller to convert existing PVs. At this
+ point, if there are issues with this feature, then on a downgrade, the
+ VolumeScheduling feature would also need to be disabled.
+3. After deprecation period, remove VolumeZoneChecker predicate and PV upgrade
+ controller.
+
+The zonal PV upgrade controller will convert existing PVs that rely on the
+label-based zonal scheduling logic to use PV.NodeAffinity. It will keep the
+existing labels for backwards compatibility.
+
+For example, this zonal volume:
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: zonal-volume-1
+ labels:
+ failure-domain.beta.kubernetes.io/zone: us-central1-a
+ failure-domain.beta.kubernetes.io/region: us-central1
+spec:
+ capacity:
+ storage: 100Gi
+ storageClassName: my-class
+ gcePersistentDisk:
+ diskName: my-disk
+ fsType: ext4
+```
+
+will be converted to:
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: zonal-volume-1
+ labels:
+ failure-domain.beta.kubernetes.io/zone: us-central1-a
+ failure-domain.beta.kubernetes.io/region: us-central1
+spec:
+ capacity:
+ storage: 100Gi
+ storageClassName: my-class
+ gcePersistentDisk:
+ diskName: my-disk
+ fsType: ext4
+ nodeAffinity:
+ required:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: failure-domain.beta.kubernetes.io/zone
+ operator: In
+ values:
+ - us-central1-a
+ - key: failure-domain.beta.kubernetes.io/region
+ operator: In
+ values:
+ - us-central1
+```
-Existing PV objects will have to be upgraded to use the new NodeAffinity field.
-This does not have to occur instantaneously, and can be updated within the
-deprecation period.
+### Multi-Zonal PV Upgrade
+The zone label on multi-zonal volumes needs to be specially parsed (see the sketch after the examples below).
-TODO: This can be done through one of the following methods:
-- Manual updates/scripts
-- cluster/update-storage-objects.sh?
-- A new PV update controller
+For example, this multi-zonal volume:
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: multi-zonal-volume-1
+ labels:
+ failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
+ failure-domain.beta.kubernetes.io/region: us-central1
+spec:
+ capacity:
+ storage: 100Gi
+ storageClassName: my-class
+ gcePersistentDisk:
+ diskName: my-disk
+ fsType: ext4
+```
+
+will be converted to:
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: multi-zonal-volume-1
+ labels:
+ failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
+ failure-domain.beta.kubernetes.io/region: us-central1
+spec:
+ capacity:
+ storage: 100Gi
+ storageClassName: my-class
+ gcePersistentDisk:
+ diskName: my-disk
+ fsType: ext4
+ nodeAffinity:
+ required:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: failure-domain.beta.kubernetes.io/zone
+ operator: In
+ values:
+ - us-central1-a
+ - us-central1-b
+ - key: failure-domain.beta.kubernetes.io/region
+ operator: In
+ values:
+ - us-central1
+```
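+
+For illustration, the special parsing amounts to unpacking the `__` delimited label value (a sketch, not the upgrade controller's actual code):
+
+```
+// zonesFromLabel unpacks a multi-zonal zone label such as
+// "us-central1-a__us-central1-b" into its individual zones.
+func zonesFromLabel(value string) []string {
+  return strings.Split(value, "__")
+}
+```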
### Bound PVC Enforcement
For PVCs that are already bound to a PV with NodeAffinity, enforcement is
@@ -225,82 +413,362 @@ Both binding decisions of:
will be considered by the scheduler, so that all of a Pod's scheduling
constraints can be evaluated at once.
-The rest of this document describes the detailed design for implementing this
-new volume binding behavior.
-
-
-## New Volume Binding Design
-The design can be broken up into a few areas:
-* User-facing API to invoke new behavior
-* Integrating PV binding with pod scheduling
-* Binding multiple PVCs as a single transaction
-* Recovery from kubelet rejection of pod
-* Making dynamic provisioning topology-aware
-
-For the alpha phase, only the user-facing API and PV binding and scheduler
-integration are necessary. The remaining areas can be handled in beta and GA
-phases.
+The detailed design for implementing this new volume binding behavior will be
+described later in the scheduler integration section.
-### User-facing API
-In alpha, this feature is controlled by a feature gate, VolumeScheduling, and
-must be configured in the kube-scheduler and kube-controller-manager.
+## Delayed Volume Binding
+Today, volume binding occurs immediately once a PersistentVolumeClaim is
+created. In order for volume binding to take into account all of a pod's other scheduling
+constraints, volume binding must be delayed until a Pod is being scheduled.
-A new StorageClass field will be added to control the volume binding behavior.
+A new StorageClass field `BindingMode` will be added to control the volume
+binding behavior.
```
type StorageClass struct {
...
- VolumeBindingMode *VolumeBindingMode
+ BindingMode *BindingMode
}
-type VolumeBindingMode string
+type BindingMode string
const (
- VolumeBindingImmediate VolumeBindingMode = "Immediate"
- VolumeBindingWaitForFirstConsumer VolumeBindingMode = "WaitForFirstConsumer"
+ BindingImmediate BindingMode = "Immediate"
+ BindingWaitForFirstConsumer BindingMode = "WaitForFirstConsumer"
)
```
-`VolumeBindingImmediate` is the default and current binding method.
+`BindingImmediate` is the default and current binding method.
-This approach allows us to introduce the new binding behavior gradually and to
-be able to maintain backwards compatibility without deprecation of previous
-behavior. However, it has a few downsides:
+This approach allows us to:
+* Introduce the new binding behavior gradually.
+* Maintain backwards compatibility without deprecation of previous
+ behavior. Any automation that waits for PVCs to be bound before scheduling Pods
+ will not break.
+* Support scenarios where volume provisioning for globally accessible volume
+  types could take a long time and is planned well in advance of workload
+  deployment.
+
+However, it has a few downsides:
* StorageClass will be required to get the new binding behavior, even if dynamic
-provisioning is not used (in the case of local storage).
-* We have to maintain two different paths for volume binding.
+ provisioning is not used (in the case of local storage).
+* We have to maintain two different code paths for volume binding.
* We will be depending on the storage admin to correctly configure the
-StorageClasses for the volume types that need the new binding behavior.
+ StorageClasses for the volume types that need the new binding behavior.
* User experience can be confusing because PVCs could have different binding
-behavior depending on the StorageClass configuration. We will mitigate this by
-adding a new PVC event to indicate if binding will follow the new behavior.
+ behavior depending on the StorageClass configuration. We will mitigate this by
+ adding a new PVC event to indicate if binding will follow the new behavior.
+
+
+## Dynamic Provisioning with Topology
+To make dynamic provisioning aware of pod scheduling decisions, delayed volume
+binding must also be enabled. The scheduler will pass its selected node to the
+dynamic provisioner, and the provisioner will create a volume in the topology
+domain that the selected node is part of. The domain depends on the volume
+plugin. Zonal volume plugins will create the volume in the zone where the
+selected node is in. The local volume plugin will create the volume on the
+selected node.
-### Integrating binding with scheduling
-For the alpha phase, the focus is on static provisioning of PVs to support
-persistent local storage.
+### End to End Zonal Example
+This is an example of the most common use case for provisioning zonal volumes.
+For this use case, the user's specs are unchanged. Only one change
+to the StorageClass is needed to enable delayed volume binding.
+1. Admin sets up StorageClass, setting up delayed volume binding.
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: standard
+provisioner: kubernetes.io/gce-pd
+bindingMode: WaitForFirstConsumer
+parameters:
+ type: pd-standard
+```
+2. Admin launches provisioner. For in-tree plugins, nothing needs to be done.
+3. User creates PVC. Nothing changes in the spec, although now the PVC won't be
+ immediately bound.
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-pvc
+spec:
+ storageClassName: standard
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Gi
+```
+4. User creates Pod. Nothing changes in the spec.
+```
+apiVersion: v1
+kind: Pod
+metadata:
+ name: my-pod
+spec:
+ containers:
+ ...
+ volumes:
+ - name: my-vol
+ persistentVolumeClaim:
+ claimName: my-pvc
+```
+5. Scheduler picks a node that can satisfy the Pod and
+ [passes it](#pv-controller-changes) to the provisioner.
+6. Provisioner dynamically provisions a PV that can be accessed from
+ that node.
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: volume-1
+spec:
+ capacity:
+ storage: 100Gi
+ storageClassName: standard
+ gcePersistentDisk:
+ diskName: my-disk
+ fsType: ext4
+ nodeAffinity:
+ required:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: failure-domain.beta.kubernetes.io/zone
+ operator: In
+ values:
+ - us-central1-a
+```
+7. Pod gets scheduled to the node.
+
+
+### Restricting Topology
+For the common use case, volumes will be provisioned in whatever topology domain
+the scheduler has decided is best to run the workload. Users may impose further
+restrictions by setting label/node selectors, and pod affinity/anti-affinity
+policies on their Pods. All those policies will be taken into account when
+dynamically provisioning a volume.
+
+While less common, administrators may want to further restrict what topology
+domains are available to a StorageClass. To support these administrator
+policies, an AllowedTopology field can also be specified in the
+StorageClass to restrict the topology domains for dynamic provisioning.
+This is not expected to be a common use case, and there are some caveats,
+described below.
+
+```
+type StorageClass struct {
+ ...
+
+ // Restrict the node topologies where volumes can be dynamically provisioned.
+ // Each volume plugin defines its own supported topology specifications.
+ // Each entry in AllowedTopologies is ORed.
+ AllowedTopologies []TopologySelector
+}
+
+type TopologySelector struct {
+ // Topology must meet all of the TopologySelectorLabelRequirements
+ // These requirements are ANDed.
+ MatchLabelExpressions []TopologySelectorLabelRequirement
+}
+
+// Topology requirement expressed as Node labels.
+type TopologySelectorLabelRequirement struct {
+ // Topology label key
+ Key string
+ // Topology must match at least one of the label Values for the given label Key.
+ // Each entry in Values is ORed.
+ Values []string
+}
+```
+
+A nil value means there are no topology restrictions. A scheduler predicate
+will evaluate a non-nil value when considering dynamic provisioning for a node.
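+
+As a sketch of the matching semantics, using the TopologySelector types defined above (entries ORed, label requirements within an entry ANDed, values for a key ORed):
+
+```
+func nodeMatchesAllowedTopologies(nodeLabels map[string]string, allowed []TopologySelector) bool {
+  if allowed == nil {
+    // nil means no topology restrictions
+    return true
+  }
+  for _, sel := range allowed {
+    if matchesSelector(nodeLabels, sel) {
+      return true
+    }
+  }
+  return false
+}
+
+func matchesSelector(nodeLabels map[string]string, sel TopologySelector) bool {
+  for _, req := range sel.MatchLabelExpressions {
+    value, ok := nodeLabels[req.Key]
+    if !ok || !containsString(req.Values, value) {
+      return false
+    }
+  }
+  return true
+}
+
+func containsString(values []string, v string) bool {
+  for _, s := range values {
+    if s == v {
+      return true
+    }
+  }
+  return false
+}
+```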
+
+The AllowedTopologies will also be provided to provisioners as a new field, detailed in
+the provisioner section. Provisioners can use the allowed topology information
+in the following scenarios:
+* StorageClass is using the default immediate binding mode. This is the
+ legacy topology-unaware behavior. In this scenario, the volume could be
+ provisioned in a domain that cannot run the Pod since it doesn't take any
+ scheduler input.
+* For volumes that span multiple domains, the AllowedTopologies can restrict those
+ additional domains. However, special care must be taken to avoid specifying
+ conflicting topology constraints in the Pod. For example, the administrator could
+ restrict a multi-zonal volume to zones 'zone1' and 'zone2', but the Pod could have
+ constraints that restrict it to 'zone1' and 'zone3'. If 'zone1'
+ fails, the Pod cannot be scheduled to the intended failover zone.
+
+Note that if delayed binding is enabled and the volume spans only a single domain,
+then the AllowedTopologies can be ignored by the provisioner because the
+scheduler would have already taken it into account when it selects the node.
+
+Kubernetes will leave validation and enforcement of the AllowedTopologies content up
+to the provisioner.
+
+Support in the GCE PD and AWS EBS provisioners for the existing `zone` and `zones`
+parameters will not be deprecated due to the CSI in-tree migration requirement
+of CSI plugins supporting all the previous functionality of in-tree plugins, and
+CSI plugin versioning being independent of Kubernetes versions.
+
+Admins must already create a new StorageClass with delayed volume binding to use
+this feature, so the documentation can encourage use of the AllowedTopologies
+instead of existing zone parameters. A plugin-specific admission controller
+can also validate that both zone and AllowedTopologies are not specified,
+although the CSI plugin should still be robust to handle this configuration
+error.
+
+#### Alternatives
+A new restricted TopologySelector is used here instead of reusing
+VolumeNodeAffinity because the provisioning operation requires
+allowed topologies to be explicitly enumerated, while NodeAffinity and
+NodeSelectors allow for non-explicit expressions of topology values (i.e.,
+operators NotIn, Exists, DoesNotExist, Gt, Lt). It would be difficult for
+provisioners to evaluate all the expressions without having to enumerate all the
+Nodes in the cluster.
+
+Another alternative is to have a list of allowed PV topologies, where each PV
+topology is exactly the same as a single PV topology. This expression can become
+very verbose for volume types that have multi-dimensional topologies or multiple
+selections. As an example, for a multi-zonal volume that needs to select
+two zones, if an administrator wants to restrict the selection to 4 zones, then
+all 6 combinations need to be explicitly enumerated.
+
+Another alternative is to expand ResourceQuota to support topology constraints.
+However, ResourceQuota is currently only evaluated during admission, and not
+scheduling.
+
+#### Zonal Example
+This example restricts the volumes provisioned to zones us-central1-a and
+us-central1-b.
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: zonal-class
+provisioner: kubernetes.io/gce-pd
+parameters:
+ type: pd-standard
+allowedTopologies:
+- matchLabelExpressions:
+ - key: failure-domain.beta.kubernetes.io/zone
+ values:
+ - us-central1-a
+ - us-central1-b
+```
+
+#### Multi-Zonal Example
+This example restricts the volume's primary and failover zones
+to us-central1-a, us-central1-b and us-central1-c. The regional PD
+provisioner will pick two out of the three zones to provision in.
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: multi-zonal-class
+provisioner: kubernetes.io/gce-pd
+parameters:
+ type: pd-standard
+ replication-type: regional-pd
+allowedTopologies:
+- matchLabelExpressions:
+ - key: failure-domain.beta.kubernetes.io/zone
+ values:
+ - us-central1-a
+ - us-central1-b
+ - us-central1-c
+```
+
+Topologies that are incompatible with the storage provider parameters
+will be enforced by the provisioner. For example, dynamic provisioning
+of regional PDs will fail if provisioning is restricted to fewer than
+two zones in all regions. This configuration will cause provisioning to fail:
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: multi-zonal-class
+provisioner: kubernetes.io/gce-pd
+parameters:
+ type: pd-standard
+ replication-type: regional-pd
+allowedTopologies:
+- matchLabelExpressions:
+ - key: failure-domain.beta.kubernetes.io/zone
+ values:
+ - us-central1-a
+```
+
+#### Multi Label Example
+This example restricts the volume's topology to nodes that
+have the following labels:
+
+* "zone: us-central1-a" and "rack: rack1" or,
+* "zone: us-central1-b" and "rack: rack1" or,
+* "zone: us-central1-b" and "rack: rack2"
+
+```
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+ name: something-fancy
+provisioner: rack-based-provisioner
+parameters:
+allowedTopologies:
+- matchLabelExpressions:
+ - key: zone
+ values:
+ - us-central1-a
+ - key: rack
+ values:
+ - rack1
+- matchLabelExpressions:
+ - key: zone
+ values:
+ - us-central1-b
+ - key: rack
+ values:
+ - rack1
+ - rack2
+```
+
+
+## Feature Gates
+The PersistentVolume.NodeAffinity and StorageClass.BindingMode fields will be
+controlled by the VolumeScheduling feature gate, and must be configured in the
+kube-scheduler, kube-controller-manager, and all kubelets.
+
+The StorageClass.AllowedTopologies field will be controlled
+by the DynamicProvisioningScheduling feature gate, and must be configured in the
+kube-scheduler and kube-controller-manager.
+
+
+## Integrating volume binding with pod scheduling
For the new volume binding mode, the proposed new workflow is:
-1. Admin statically creates PVs and/or StorageClasses.
+1. Admin pre-provisions PVs and/or StorageClasses.
2. User creates unbound PVC and there are no prebound PVs for it.
3. **NEW:** PVC binding and provisioning is delayed until a pod is created that
references it.
4. User creates a pod that uses the PVC.
5. Pod starts to get processed by the scheduler.
-6. **NEW:** A new predicate function, called MatchUnboundPVCs, will look at all of
-a Pod’s unbound PVCs, and try to find matching PVs for that node based on the
-PV topology. If there are no matching PVs, then it checks if dynamic
-provisioning is possible for that node.
+6. **NEW:** A new predicate function, called CheckVolumeBinding, will process
+both bound and unbound PVCs of the Pod. It will validate the VolumeNodeAffinity
+for bound PVCs. For unbound PVCs, it will try to find matching PVs for that node
+based on the PV NodeAffinity. If there are no matching PVs, then it checks if
+dynamic provisioning is possible for that node based on StorageClass
+AllowedTopologies.
7. **NEW:** The scheduler continues to evaluate priorities. A new priority
-function, called PrioritizeUnboundPVCs, will get the PV matches per PVC per
+function, called PrioritizeVolumes, will get the PV matches per PVC per
node, and compute a priority score based on various factors.
8. **NEW:** After evaluating all the existing predicates and priorities, the
-scheduler will pick a node, and call a new assume function, AssumePVCs,
+scheduler will pick a node, and call a new assume function, AssumePodVolumes,
passing in the Node. The assume function will check if any binding or
provisioning operations need to be done. If so, it will update the PV cache to
-mark the PVs with the chosen PVCs.
+mark the PVs with the chosen PVCs and queue the Pod for volume binding.
9. **NEW:** If PVC binding or provisioning is required, we do NOT AssumePod.
-Instead, a new bind function, BindPVCs, will be called asynchronously, passing
+Instead, a new bind function, BindPodVolumes, will be called asynchronously, passing
in the selected node. The bind function will prebind the PV to the PVC, or
trigger dynamic provisioning. Then, it always sends the Pod through the
scheduler again for reasons explained later.
@@ -328,15 +796,18 @@ avoid these error conditions are to:
* Separate out volumes that the user prebinds from the volumes that are
available for the system to choose from by StorageClass.
-#### PV Controller Changes
+### PV Controller Changes
When the feature gate is enabled, the PV controller needs to skip binding
unbound PVCs with VolumBindingWaitForFirstConsumer and no prebound PVs
to let it come through the scheduler path.
Dynamic provisioning will also be skipped if
-VolumBindingWaitForFirstConsumer is set. The scheduler will signal to
+VolumeBindingWaitForFirstConsumer is set. The scheduler will signal to
the PV controller to start dynamic provisioning by setting the
-`annStorageProvisioner` annotation in the PVC.
+`annSelectedNode` annotation in the PVC. If provisioning fails, the PV
+controller can signal back to the scheduler to retry dynamic provisioning by
+removing the `annSelectedNode` annotation. For external provisioners, the
+external provisioner needs to remove the annotation.
No other state machine changes are required. The PV controller continues to
handle the remaining scenarios without any change.
@@ -344,14 +815,39 @@ handle the remaining scenarios without any change.
The methods to find matching PVs for a claim and prebind PVs need to be
refactored for use by the new scheduler functions.
-#### Scheduler Changes
+### Dynamic Provisioning interface changes
+The dynamic provisioning interfaces will be updated to pass in:
+* selectedNode, when late binding is enabled on the StorageClass
+* allowedTopologies, when it is set in the StorageClass
+
+If selectedNode is set, the provisioner should get its appropriate topology
+labels from the Node object, and provision a volume based on those topology
+values. In the common use case for a volume supporting a single topology domain,
+if selectedNode is set, then allowedTopologies can be ignored by the provisioner.
+However, multi-domain volume provisioners may still need to look at
+allowedTopologies to restrict the remaining domains.
+
+In-tree provisioners:
+```
+Provision(selectedNode *v1.Node, allowedTopologies []storagev1.TopologySelector) (*v1.PersistentVolume, error)
+```
+
+External provisioners:
+* selectedNode will be represented by the PVC annotation "volume.alpha.kubernetes.io/selectedNode".
+ Value is the name of the node.
+* allowedTopologies must be obtained by looking at the StorageClass for the PVC.
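+
+For illustration, an external provisioner could read the scheduler's selection like this (a sketch; only the annotation key above comes from this proposal):
+
+```
+// annSelectedNode is the PVC annotation named above.
+const annSelectedNode = "volume.alpha.kubernetes.io/selectedNode"
+
+// selectedNodeName returns the node the scheduler picked for this claim,
+// if the annotation has been set. Sketch only.
+func selectedNodeName(pvc *v1.PersistentVolumeClaim) (string, bool) {
+  name, ok := pvc.Annotations[annSelectedNode]
+  return name, ok
+}
+```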
-##### Predicate
+#### New Permissions
+Provisioners will need to be able to get Node and StorageClass objects.
+
+### Scheduler Changes
+
+#### Predicate
A new predicate function checks all of a Pod's unbound PVCs can be satisfied
by existing PVs or dynamically provisioned PVs that are
topologically-constrained to the Node.
```
-MatchUnboundPVCs(pod *v1.Pod, node *v1.Node) (canBeBound bool, err error)
+CheckVolumeBinding(pod *v1.Pod, node *v1.Node) (canBeBound bool, err error)
```
1. If all the Pod’s PVCs are bound, return true.
2. Otherwise try to find matching PVs for all of the unbound PVCs in order of
@@ -361,69 +857,72 @@ decreasing requested capacity.
5. Temporarily cache this PV choice for the PVC per Node, for fast
processing later in the priority and bind functions.
6. Return true if all PVCs are matched.
-7. If there are still unmatched PVCs, check if dynamic provisioning is possible.
-For this alpha phase, the provisioner is not topology aware, so the predicate
-will just return true if there is a provisioner specified in the StorageClass
-(internal or external).
+7. If there are still unmatched PVCs, check if dynamic provisioning is possible
+   by evaluating StorageClass.AllowedTopologies. If so, temporarily cache this
+   decision for the PVC per Node (see the sketch after this list).
8. Otherwise return false.
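+
+A sketch of that flow (the helper names are illustrative, not the actual scheduler code):
+
+```
+func CheckVolumeBinding(pod *v1.Pod, node *v1.Node) (bool, error) {
+  for _, pvc := range unboundPVCs(pod) {
+    if pv := findMatchingPV(pvc, node); pv != nil {
+      cacheMatch(pvc, pv, node) // reused by priority and assume
+      continue
+    }
+    // No matching pre-provisioned PV: check whether dynamic
+    // provisioning is allowed for this node's topology.
+    if !provisioningPossible(pvc, node) {
+      return false, nil
+    }
+    cacheProvisioningDecision(pvc, node)
+  }
+  return true, nil
+}
+```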
-##### Priority
+#### Priority
After all the predicates run, there is a reduced set of Nodes that can fit a
Pod. A new priority function will rank the remaining nodes based on the
unbound PVCs and their matching PVs.
```
-PrioritizeUnboundPVCs(pod *v1.Pod, filteredNodes HostPriorityList) (rankedNodes HostPriorityList, err error)
+PrioritizeVolumes(pod *v1.Pod, filteredNodes HostPriorityList) (rankedNodes HostPriorityList, err error)
```
1. For each Node, get the cached PV matches for the Pod’s PVCs.
2. Compute a priority score for the Node using the following factors:
1. How close the PVC’s requested capacity and PV’s capacity are.
- 2. Matching static PVs is preferred over dynamic provisioning because we
+ 2. Matching pre-provisioned PVs is preferred over dynamic provisioning because we
assume that the administrator has specifically created these PVs for
the Pod.
TODO (beta): figure out weights and exact calculation
-##### Assume
+#### Assume
Once all the predicates and priorities have run, then the scheduler picks a
Node. Then we can bind or provision PVCs for that Node. For better scheduler
performance, we’ll assume that the binding will likely succeed, and update the
-PV cache first. Then the actual binding API update will be made
+PV and PVC caches first. Then the actual binding API update will be made
asynchronously, and the scheduler can continue processing other Pods.
-For the alpha phase, the AssumePVCs function will be directly called by the
+For the alpha phase, the AssumePodVolumes function will be directly called by the
scheduler. We’ll consider creating a generic scheduler interface in a
subsequent phase.
```
-AssumePVCs(pod *v1.Pod, node *v1.Node) (pvcBindingRequired bool, err error)
+AssumePodVolumes(pod *v1.Pod, node *v1.Node) (pvcBindingRequired bool, err error)
```
1. If all the Pod’s PVCs are bound, return false.
-2. For static PV binding:
+2. For pre-provisioned PV binding:
1. Get the cached matching PVs for the PVCs on that Node.
2. Validate the actual PV state.
3. Mark PV.ClaimRef in the PV cache.
4. Cache the PVs that need binding in the Pod object.
3. For in-tree and external dynamic provisioning:
- 1. Cache the PVCs that need provisioning in the Pod object.
-4. Return true.
+ 1. Mark the PVC annSelectedNode in the PVC cache.
+ 2. Cache the PVCs that need provisioning in the Pod object.
+4. Return true.
+
+#### Bind
+If AssumePodVolumes returns pvcBindingRequired, then the Pod is queued for volume
+binding and provisioning. A separate goroutine will process this queue and
+call the BindPodVolumes function.
-##### Bind
-If AssumePVCs returns pvcBindingRequired, then the BindPVCs function is called
-as a go routine. Otherwise, we can continue with assuming and binding the Pod
+Otherwise, we can continue with assuming and binding the Pod
to the Node.
-For the alpha phase, the BindUnboundPVCs function will be directly called by the
+For the alpha phase, the BindPodVolumes function will be directly called by the
scheduler. We’ll consider creating a generic scheduler interface in a subsequent
phase.
```
-BindUnboundPVCs(pod *v1.Pod, node *v1.Node) (err error)
+BindPodVolumes(pod *v1.Pod, node *v1.Node) (err error)
```
-1. For static PV binding:
+1. For pre-provisioned PV binding:
1. Prebind the PV by updating the `PersistentVolume.ClaimRef` field.
2. If the prebind fails, revert the cache updates.
2. For in-tree and external dynamic provisioning:
- 1. Set `annStorageProvisioner` on the PVC.
+ 1. Set `annSelectedNode` on the PVC.
3. Send Pod back through scheduling, regardless of success or failure.
1. In the case of success, we need one more pass through the scheduler in
order to evaluate other volume predicates that require the PVC to be bound, as
@@ -433,16 +932,16 @@ described below.
TODO: pv controller has a high resync frequency, do we need something similar
for the scheduler too
-##### Access Control
-Scheduler will need PV update permissions for prebinding static PVs, and PVC
-modify permissions for triggering dynamic provisioning.
+#### Access Control
+The scheduler will need PV update permissions for prebinding pre-provisioned PVs, and PVC
+update permissions for triggering dynamic provisioning.
-##### Pod preemption considerations
-The MatchUnboundPVs predicate does not need to be re-evaluated for pod
+#### Pod preemption considerations
+The CheckVolumeBinding predicate does not need to be re-evaluated for pod
preemption. Preempting a pod that uses a PV will not free up capacity on that
node because the PV lifecycle is independent of the Pod’s lifecycle.
-##### Other scheduler predicates
+#### Other scheduler predicates
Currently, there are a few existing scheduler predicates that require the PVC
to be bound. The bound assumption needs to be changed in order to work with
this new workflow.
@@ -452,7 +951,7 @@ running predicates? One possible way is to mark at the beginning of scheduling
a Pod if all PVCs were bound. Then we can check if a second scheduler pass is
needed.
-###### Max PD Volume Count Predicate
+##### Max PD Volume Count Predicate
This predicate checks the maximum number of PDs per node is not exceeded. It
needs to be integrated into the binding decision so that we don’t bind or
provision a PV if it’s going to cause the node to exceed the max PD limit. But
@@ -460,7 +959,7 @@ until it is integrated, we need to make one more pass in the scheduler after all
the PVCs are bound. The current copy of the predicate in the default scheduler
has to remain to account for the already-bound volumes.
-###### Volume Zone Predicate
+##### Volume Zone Predicate
This predicate makes sure that the zone label on a PV matches the zone label of
the node. If the volume is not bound, this predicate can be ignored, as the
binding logic will take into account zone constraints on the PV.
@@ -475,18 +974,18 @@ This predicate needs to remain in the default scheduler to handle the
already-bound volumes using the old zonal labeling. It can be removed once that
mechanism is deprecated and unsupported.
-###### Volume Node Predicate
+##### Volume Node Predicate
This is a new predicate added in 1.7 to handle the new PV node affinity. It
evaluates the node affinity against the node’s labels to determine if the pod
can be scheduled on that node. If the volume is not bound, this predicate can
be ignored, as the binding logic will take into account the PV node affinity.
-##### Caching
+#### Caching
There are two new caches needed in the scheduler.
The first cache is for handling the PV/PVC API binding updates occurring
-asynchronously with the main scheduler loop. `AssumePVCs` needs to store
-the updated API objects before `BindUnboundPVCs` makes the API update, so
+asynchronously with the main scheduler loop. `AssumePodVolumes` needs to store
+the updated API objects before `BindPodVolumes` makes the API update, so
that future binding decisions will not choose any assumed PVs. In addition,
if the API update fails, the cached updates need to be reverted and restored
with the actual API object. The cache will return either the cached-only
@@ -507,6 +1006,8 @@ all the volume predicates are fully run once all PVCs are bound.
* Caching PV matches per node decisions that the predicate had made. This is
an optimization to avoid walking through all the PVs again in priority and
assume functions.
+* Caching PVC dynamic provisioning decisions per node that the predicate had
+ made.
#### Performance and Optimizations
Let:
@@ -524,14 +1025,7 @@ PVs for every node, so its running time is O(NV).
A few optimizations can be made to improve the performance:
-1. Optimizing for PVs that don’t use node affinity (to prevent performance
-regression):
- 1. Index the PVs by StorageClass and only search the PV list with matching
-StorageClass.
- 2. Keep temporary state in the PVC cache if we previously succeeded or
-failed to match PVs, and if none of the PVs have node affinity. Then we can
-skip PV matching on subsequent nodes, and just return the result of the first
-attempt.
+1. PVs that don’t use node affinity should not be using delayed binding.
2. Optimizing for PVs that have node affinity:
1. When a static PV is created, if node affinity is present, evaluate it
against all the nodes. For each node, keep an in-memory map of all its PVs
@@ -541,7 +1035,7 @@ match against the PVs in the node’s PV map instead of the cluster-wide PV list
For the alpha phase, the optimizations are not required. However, they should
be required for beta and GA.
-#### Packaging
+### Packaging
The new bind logic that is invoked by the scheduler can be packaged in a few
ways:
* As a library to be directly called in the default scheduler
@@ -556,7 +1050,7 @@ for more race conditions due to the caches being out of sync.
because the scheduler’s cache and PV controller’s cache have different interfaces
and private methods.
-##### Extender cons
+#### Extender cons
However, the cons of the extender approach outweighs the cons of the library
approach.
@@ -578,18 +1072,18 @@ Kubernetes.
With all this complexity, the library approach is the most feasible in a single
release time frame, and aligns better with the current Kubernetes architecture.
-#### Downsides
+### Downsides
-##### Unsupported Use Cases
+#### Unsupported Use Cases
The following use cases will not be supported for PVCs with a StorageClass with
-VolumeBindingWaitForFirstConsumer:
+BindingWaitForFirstConsumer:
* Directly setting Pod.Spec.NodeName
* DaemonSets
These two use cases will bypass the default scheduler and thus will not
trigger PV binding.
-##### Custom Schedulers
+#### Custom Schedulers
Custom schedulers, controllers and operators that handle pod scheduling and want
to support this new volume binding mode will also need to handle the volume
binding decision.
@@ -604,7 +1098,7 @@ easier for custom schedulers to include in their own implementation.
In general, many advanced scheduling features have been added into the default
scheduler, such that it is becoming more difficult to run without it.
-##### HA Master Upgrades
+#### HA Master Upgrades
HA masters adds a bit of complexity to this design because the active scheduler
process and active controller-manager (PV controller) process can be on different
nodes. That means during an HA master upgrade, the scheduler and controller-manager
@@ -624,9 +1118,9 @@ all dependencies are at the required versions.
For alpha, this is not concerning, but it needs to be solved by GA.
-#### Other Alternatives Considered
+### Other Alternatives Considered
-##### One scheduler function
+#### One scheduler function
An alternative design considered was to do the predicate, priority and bind
functions all in one function at the end right before Pod binding, in order to
reduce the number of passes we have to make over all the PVs. However, this
@@ -641,7 +1135,7 @@ on a Node that the higher priority pod still cannot run on due to PVC
requirements. For that reason, the PVC binding decision needs to be have its
predicate function separated out and evaluated with the rest of the predicates.
-##### Pull entire PVC binding into the scheduler
+#### Pull entire PVC binding into the scheduler
The proposed design only has the scheduler initiating the binding transaction
by prebinding the PV. An alternative is to pull the whole two-way binding
transaction into the scheduler, but there are some complex scenarios that
@@ -653,7 +1147,7 @@ scheduler’s Pod sync loop cannot handle:
Handling these scenarios in the scheduler’s Pod sync loop is not possible, so
they have to remain in the PV controller.
-##### Keep all PVC binding in the PV controller
+#### Keep all PVC binding in the PV controller
Instead of initiating PV binding in the scheduler, have the PV controller wait
until the Pod has been scheduled to a Node, and then try to bind based on the
chosen Node. A new scheduling predicate is still needed to filter and match
@@ -685,7 +1179,7 @@ can make a lot of wrong decisions after the restart.
evaluated. To solve this, all the volume predicates need to also be built into
the PV controller when matching possible PVs.
-##### Move PVC binding to kubelet
+#### Move PVC binding to kubelet
Looking into the future, with the potential for NUMA-aware scheduling, you could
have a sub-scheduler on each node to handle the pod scheduling within a node. It
could make sense to have the volume binding as part of this sub-scheduler, to make
@@ -699,7 +1193,7 @@ to just that node, but for zonal storage, it could see all the PVs in that zone.
In addition, the sub-scheduler is just a thought at this point, and there are no
concrete proposals in this area yet.
-### Binding multiple PVCs in one transaction
+## Binding multiple PVCs in one transaction
There are no plans to handle this, but a possible solution is presented here if the
need arises in the future. Since the scheduler is serialized, a partial binding
failure should be a rare occurrence and would only be caused if there is a user or
@@ -720,25 +1214,12 @@ If scheduling fails, update all bound PVCs with an annotation,
are clean. Scheduler and kubelet needs to reject pods with PVCs that are
undergoing rollback.
-### Recovering from kubelet rejection of pod
+## Recovering from kubelet rejection of pod
We can use the same rollback mechanism as above to handle this case.
If kubelet rejects a pod, it will go back to scheduling. If the scheduler
cannot find a node for the pod, then it will encounter scheduling failure and
initiate the rollback.
-### Making dynamic provisioning topology aware
-TODO (beta): Design details
-
-For alpha, we are not focusing on this use case. But it should be able to
-follow the new workflow closely with some modifications.
-* The FindUnboundPVCs predicate function needs to get provisionable capacity per
-topology dimension from the provisioner somehow.
-* The PrioritizeUnboundPVCs priority function can add a new priority score factor
-based on available capacity per node.
-* The BindUnboundPVCs bind function needs to pass in the node to the provisioner.
-The internal and external provisioning APIs need to be updated to take in a node
-parameter.
-
## Testing
@@ -752,7 +1233,7 @@ parameter.
* Multiple PVCs specified in a pod
* Positive: Enough local PVs available on a single node
* Negative: Not enough local PVs available on a single node
-* Fallback to dynamic provisioning if unsuitable static PVs
+* Fallback to dynamic provisioning if unsuitable pre-provisioned PVs
### Unit tests
* All PVCs found a match on first node. Verify match is best suited based on
diff --git a/contributors/devel/api_changes.md b/contributors/devel/api_changes.md
index 2440902e..303c43a8 100644
--- a/contributors/devel/api_changes.md
+++ b/contributors/devel/api_changes.md
@@ -365,10 +365,14 @@ being required otherwise.
### Edit defaults.go
If your change includes new fields for which you will need default values, you
-need to add cases to `pkg/apis/<group>/<version>/defaults.go` (the core v1 API
-is special, its defaults.go is at `pkg/api/v1/defaults.go`. For simplicity, we
-will not mention this special case in the rest of the article). Of course, since
-you have added code, you have to add a test:
+need to add cases to `pkg/apis/<group>/<version>/defaults.go`.
+
+*Note:* In the past the core v1 API
+was special. Its `defaults.go` used to live at `pkg/api/v1/defaults.go`.
+If you see code referencing that path, you can be sure it's outdated. The core
+v1 API now lives at `pkg/apis/core/v1/defaults.go`, which follows the above convention.
+
+Of course, since you have added code, you have to add a test:
`pkg/apis/<group>/<version>/defaults_test.go`.
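+
+For illustration, a minimal sketch of such a defaulting function and its test, using a hypothetical `Frobber` type (the names below are placeholders, not part of any real API group):
+
+```go
+package v1
+
+// Hypothetical type, for illustration only.
+type FrobberSpec struct {
+    Replicas *int32 `json:"replicas,omitempty"`
+}
+
+type Frobber struct {
+    Spec FrobberSpec `json:"spec"`
+}
+
+// SetDefaults_Frobber follows the SetDefaults_<Type> naming convention
+// that the defaulter generator picks up from defaults.go.
+func SetDefaults_Frobber(obj *Frobber) {
+    if obj.Spec.Replicas == nil {
+        one := int32(1)
+        obj.Spec.Replicas = &one // an unset replica count defaults to 1
+    }
+}
+```
+
+and, in `defaults_test.go`:
+
+```go
+package v1
+
+import "testing"
+
+func TestSetDefaultsFrobber(t *testing.T) {
+    obj := &Frobber{}
+    SetDefaults_Frobber(obj)
+    if obj.Spec.Replicas == nil || *obj.Spec.Replicas != 1 {
+        t.Errorf("expected Replicas to default to 1, got %v", obj.Spec.Replicas)
+    }
+}
+```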
Do use pointers to scalars when you need to distinguish between an unset value
@@ -601,7 +605,6 @@ Due to the fast changing nature of the project, the following content is probabl
to generate protobuf IDL and marshallers.
* You must add the new version to
[cmd/kube-apiserver/app#apiVersionPriorities](https://github.com/kubernetes/kubernetes/blob/v1.8.0-alpha.2/cmd/kube-apiserver/app/aggregator.go#L172)
- to let the aggregator list it. This list will be removed before release 1.8.
* You must setup storage for the new version in
[pkg/registry/group_name/rest](https://github.com/kubernetes/kubernetes/blob/v1.8.0-alpha.2/pkg/registry/authentication/rest/storage_authentication.go)
diff --git a/contributors/devel/coding-conventions.md b/contributors/devel/coding-conventions.md
deleted file mode 100644
index 23775c55..00000000
--- a/contributors/devel/coding-conventions.md
+++ /dev/null
@@ -1,3 +0,0 @@
-This document has been moved to https://git.k8s.io/community/contributors/guide/coding-conventions.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/development.md b/contributors/devel/development.md
index cd3f84b7..29ba0bd7 100644
--- a/contributors/devel/development.md
+++ b/contributors/devel/development.md
@@ -134,7 +134,9 @@ development environment, please [set one up](http://golang.org/doc/code.html).
| 1.5, 1.6 | 1.7 - 1.7.5 |
| 1.7 | 1.8.1 |
| 1.8 | 1.8.3 |
-| 1.9+ | 1.9.1 |
+| 1.9 | 1.9.1 |
+| 1.10 | 1.9.1 |
+| 1.11+ | 1.10.1 |
Ensure your GOPATH and PATH have been configured in accordance with the Go
environment instructions.
diff --git a/contributors/devel/faster_reviews.md b/contributors/devel/faster_reviews.md
deleted file mode 100644
index d0fe7e37..00000000
--- a/contributors/devel/faster_reviews.md
+++ /dev/null
@@ -1,4 +0,0 @@
-The contents of this file have been moved to https://git.k8s.io/community/contributors/guide/pull-requests.md.
- <!--
- This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
- -->
diff --git a/contributors/devel/flexvolume.md b/contributors/devel/flexvolume.md
index 627e88b1..1dfc9668 100644
--- a/contributors/devel/flexvolume.md
+++ b/contributors/devel/flexvolume.md
@@ -132,7 +132,7 @@ Note: Secrets are passed only to "mount/unmount" call-outs.
See [nginx-lvm.yaml] & [nginx-nfs.yaml] for a quick example on how to use Flexvolume in a pod.
-[lvm]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/lvm
-[nfs]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nfs
-[nginx-lvm.yaml]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nginx-lvm.yaml
-[nginx-nfs.yaml]: https://git.k8s.io/kubernetes/examples/volumes/flexvolume/nginx-nfs.yaml
+[lvm]: https://git.k8s.io/examples/staging/volumes/flexvolume/lvm
+[nfs]: https://git.k8s.io/examples/staging/volumes/flexvolume/nfs
+[nginx-lvm.yaml]: https://git.k8s.io/examples/staging/volumes/flexvolume/nginx-lvm.yaml
+[nginx-nfs.yaml]: https://git.k8s.io/examples/staging/volumes/flexvolume/nginx-nfs.yaml
diff --git a/contributors/devel/go-code.md b/contributors/devel/go-code.md
deleted file mode 100644
index 4454e400..00000000
--- a/contributors/devel/go-code.md
+++ /dev/null
@@ -1,3 +0,0 @@
-This document's content has been rolled into https://git.k8s.io/community/contributors/guide/coding-conventions.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/owners.md b/contributors/devel/owners.md
deleted file mode 100644
index 1be75e5f..00000000
--- a/contributors/devel/owners.md
+++ /dev/null
@@ -1,4 +0,0 @@
-This document has been moved to https://git.k8s.io/community/contributors/guide/owners.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
-
diff --git a/contributors/devel/pull-requests.md b/contributors/devel/pull-requests.md
deleted file mode 100644
index c793df8c..00000000
--- a/contributors/devel/pull-requests.md
+++ /dev/null
@@ -1,4 +0,0 @@
-This file has been moved to https://git.k8s.io/community/contributors/guide/pull-requests.md.
-<!--
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
---> \ No newline at end of file
diff --git a/contributors/devel/release/OWNERS b/contributors/devel/release/OWNERS
deleted file mode 100644
index afb042fa..00000000
--- a/contributors/devel/release/OWNERS
+++ /dev/null
@@ -1,8 +0,0 @@
-reviewers:
- - saad-ali
- - pwittrock
- - steveperry-53
- - chenopis
- - spiffxp
-approvers:
- - sig-release-leads
diff --git a/contributors/devel/release/README.md b/contributors/devel/release/README.md
deleted file mode 100644
index d6eb9d6c..00000000
--- a/contributors/devel/release/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/README.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/release/issues.md b/contributors/devel/release/issues.md
deleted file mode 100644
index cccf12e9..00000000
--- a/contributors/devel/release/issues.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/issues.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/release/patch-release-manager.md b/contributors/devel/release/patch-release-manager.md
deleted file mode 100644
index da1290e5..00000000
--- a/contributors/devel/release/patch-release-manager.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The original content of this file has been migrated to https://git.k8s.io/sig-release/release-process-documentation/release-team-guides/patch-release-manager-playbook.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/release/patch_release.md b/contributors/devel/release/patch_release.md
deleted file mode 100644
index 1b074759..00000000
--- a/contributors/devel/release/patch_release.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/patch_release.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/release/scalability-validation.md b/contributors/devel/release/scalability-validation.md
deleted file mode 100644
index 8a943227..00000000
--- a/contributors/devel/release/scalability-validation.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/scalability-validation.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/release/testing.md b/contributors/devel/release/testing.md
deleted file mode 100644
index 2ae76112..00000000
--- a/contributors/devel/release/testing.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/testing.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/scalability-good-practices.md b/contributors/devel/scalability-good-practices.md
deleted file mode 100644
index ef274c27..00000000
--- a/contributors/devel/scalability-good-practices.md
+++ /dev/null
@@ -1,4 +0,0 @@
-This document has been moved to https://git.k8s.io/community/contributors/guide/scalability-good-practices.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
-
diff --git a/contributors/devel/scheduler.md b/contributors/devel/scheduler.md
index d8da4631..486b04a9 100644
--- a/contributors/devel/scheduler.md
+++ b/contributors/devel/scheduler.md
@@ -84,7 +84,7 @@ scheduling policies to apply, and can add new ones.
The policies that are applied when scheduling can be chosen in one of two ways.
The default policies used are selected by the functions `defaultPredicates()` and `defaultPriorities()` in
[pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithmprovider/defaults/defaults.go).
-However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](http://releases.k8s.io/HEAD/examples/scheduler-policy-config.json) for an example
+However, the choice of policies can be overridden by passing the command-line flag `--policy-config-file` to the scheduler, pointing to a JSON file specifying which scheduling policies to use. See [examples/scheduler-policy-config.json](https://git.k8s.io/examples/staging/scheduler-policy-config.json) for an example
config file. (Note that the config file format is versioned; the API is defined in [pkg/scheduler/api](http://releases.k8s.io/HEAD/pkg/scheduler/api/)).
Thus to add a new scheduling policy, you should modify [pkg/scheduler/algorithm/predicates/predicates.go](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/predicates/predicates.go) or add to the directory [pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/pkg/scheduler/algorithm/priorities/), and either register the policy in `defaultPredicates()` or `defaultPriorities()`, or use a policy config file.
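+
+For reference, such a policy config file looks roughly like this (a minimal sketch; the predicate and priority names must match ones registered in the scheduler, and the linked example file is authoritative):
+
+```json
+{
+  "kind": "Policy",
+  "apiVersion": "v1",
+  "predicates": [
+    {"name": "PodFitsHostPorts"},
+    {"name": "PodFitsResources"},
+    {"name": "NoDiskConflict"}
+  ],
+  "priorities": [
+    {"name": "LeastRequestedPriority", "weight": 1},
+    {"name": "BalancedResourceAllocation", "weight": 1}
+  ]
+}
+```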
diff --git a/contributors/devel/security-release-process.md b/contributors/devel/security-release-process.md
deleted file mode 100644
index e0b55f68..00000000
--- a/contributors/devel/security-release-process.md
+++ /dev/null
@@ -1,3 +0,0 @@
-The original content of this file has been migrated to https://git.k8s.io/sig-release/security-release-process-documentation/security-release-process.md
-
-This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/guide/README.md b/contributors/guide/README.md
index ddb65111..ad5cf2e1 100644
--- a/contributors/guide/README.md
+++ b/contributors/guide/README.md
@@ -208,7 +208,7 @@ If you haven't noticed by now, we have a large, lively, and friendly open-source
## Events
-Kubernetes is the main focus of CloudNativeCon/KubeCon, held twice per year in EMEA and in North America. Information about these and other community events is available on the CNCF [events](https://www.cncf.io/events/) pages.
+Kubernetes is the main focus of KubeCon + CloudNativeCon, held three times per year in China, Europe, and North America. Information about these and other community events is available on the CNCF [events](https://www.cncf.io/events/) pages.
### Meetups
diff --git a/contributors/guide/contributor-cheatsheet.md b/contributors/guide/contributor-cheatsheet.md
index 8f21cd84..e9591afc 100644
--- a/contributors/guide/contributor-cheatsheet.md
+++ b/contributors/guide/contributor-cheatsheet.md
@@ -17,10 +17,11 @@ A list of common resources when contributing to Kubernetes.
- [Gubernator Dashboard - k8s.reviews](https://k8s-gubernator.appspot.com/pr)
- [Submit Queue](https://submit-queue.k8s.io)
- [Bot commands](https://go.k8s.io/bot-commands)
-- [Release Buckets](http://gcsweb.k8s.io/gcs/kubernetes-release/)
+- [GitHub labels](https://go.k8s.io/github-labels)
+- [Release Buckets](https://gcsweb.k8s.io/gcs/kubernetes-release/)
- Developer Guide
- - [Cherry Picking Guide](/contributors/devel/cherry-picks.md) - [Queue](http://cherrypick.k8s.io/#/queue)
-- [https://k8s-code.appspot.com/](https://k8s-code.appspot.com/) - Kubernetes Code Search, maintained by [@dims](https://github.com/dims)
+ - [Cherry Picking Guide](/contributors/devel/cherry-picks.md) - [Queue](https://cherrypick.k8s.io/#/queue)
+- [Kubernetes Code Search](https://cs.k8s.io/), maintained by [@dims](https://github.com/dims)
## SIGs and Working Groups
@@ -39,8 +40,10 @@ A list of common resources when contributing to Kubernetes.
## Tests
- [Current Test Status](https://prow.k8s.io/)
-- [Aggregated Failures](https://storage.googleapis.com/k8s-gubernator/triage/index.html)
-- [Test Grid](https://k8s-testgrid.appspot.com/)
+- [Aggregated Failures](https://go.k8s.io/triage)
+- [Test Grid](https://testgrid.k8s.io)
+- [Test Health](https://go.k8s.io/test-health)
+- [Test History](https://go.k8s.io/test-history)
## Other
diff --git a/contributors/guide/github-workflow.md b/contributors/guide/github-workflow.md
index a1429258..ac747abc 100644
--- a/contributors/guide/github-workflow.md
+++ b/contributors/guide/github-workflow.md
@@ -74,6 +74,22 @@ git checkout -b myfeature
Then edit code on the `myfeature` branch.
#### Build
+The following section is a quick start on how to build Kubernetes locally; for more detailed information, see [kubernetes/build](https://git.k8s.io/kubernetes/build/README.md).
+The best way to validate your current setup is to build a small part of Kubernetes. This way you can address issues without waiting for the full build to complete. To build a specific part of Kubernetes, use the `WHAT` environment variable to let the build scripts know you want to build only a certain package/executable.
+
+```sh
+make WHAT=cmd/${package_you_want}
+```
+
+*Note:* This applies to all top-level folders under kubernetes/cmd.
+
+So for the CLI, you can run:
+
+```sh
+make WHAT=cmd/kubectl
+```
+
+If everything checks out, you will have an executable in the `_output/bin` directory to play around with.
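+
+For example, assuming the `kubectl` build above succeeded, you can sanity-check the result with:
+
+```sh
+_output/bin/kubectl version --client
+```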
*Note:* If you are using `CDPATH`, you must either start it with a leading colon, or unset the variable. The make rules and scripts to build require the current directory to come first on the CD search path in order to properly navigate between directories.
diff --git a/contributors/new-contributor-playground/OWNERS b/contributors/new-contributor-playground/OWNERS
new file mode 100644
index 00000000..8a6b7bb7
--- /dev/null
+++ b/contributors/new-contributor-playground/OWNERS
@@ -0,0 +1,14 @@
+reviewers:
+ - parispittman
+ - guineveresaenger
+ - jberkus
+ - errordeveloper
+ - tpepper
+ - spiffxp
+approvers:
+ - parispittman
+ - guineveresaenger
+ - jberkus
+ - errordeveloper
+labels:
+ - area/new-contributor-track
diff --git a/contributors/new-contributor-playground/README.md b/contributors/new-contributor-playground/README.md
new file mode 100644
index 00000000..b5946581
--- /dev/null
+++ b/contributors/new-contributor-playground/README.md
@@ -0,0 +1,12 @@
+# Welcome to KubeCon Copenhagen's New Contributor Track!
+
+Hello new contributors!
+
+This subfolder of [kubernetes/community](https://github.com/kubernetes/community) will be used as a safe space for participants in the New Contributor Onboarding Track to familiarize themselves with (some of) the Kubernetes Project's review and pull request processes.
+
+The label associated with this track is `area/new-contributor-track`.
+
+*If you are not currently attending or organizing this event, please DO NOT create issues or pull requests against this section of the community repo.*
+
+A [Youtube playlist](https://www.youtube.com/playlist?list=PL69nYSiGNLP3M5X7stuD7N4r3uP2PZQUx) of this workshop has been posted, and an outline mapping content to videos can be found [here](http://git.k8s.io/community/events/2018/05-contributor-summit).
+
diff --git a/contributors/new-contributor-playground/hello-from-copenhagen.md b/contributors/new-contributor-playground/hello-from-copenhagen.md
new file mode 100644
index 00000000..29467efd
--- /dev/null
+++ b/contributors/new-contributor-playground/hello-from-copenhagen.md
@@ -0,0 +1,4 @@
+# Hello from Copenhagen!
+
+Hello everyone who's attending the Contributor Summit at KubeCon + CloudNativeCon in Copenhagen!
+Great to see so many amazing people interested in contributing to Kubernetes :)
diff --git a/contributors/new-contributor-playground/new-contributor-notes.md b/contributors/new-contributor-playground/new-contributor-notes.md
new file mode 100644
index 00000000..1858fd85
--- /dev/null
+++ b/contributors/new-contributor-playground/new-contributor-notes.md
@@ -0,0 +1,350 @@
+# Kubernetes New Contributor Workshop - KubeCon EU 2018 - Notes
+
+Joining at the beginning was like onboarding onto a yacht;
+now it is more like onboarding onto a BIG cruise ship.
+
+It will be a hard schedule; let's hope we can achieve everything.
+Sig-contributor-experience -> from non-member contributor to Owner
+
+## SIG presentation
+
+- SIG-docs & SIG-contributor-experience: **Docs and website** contribution
+- SIG-testing: **Testing** contribution
+- SIG-\* (*depends on the area to contribute on*): **Code** contribution
+
+**=> Find your first topics**: bug, feature, learning, community development and documentation
+
+Table exercise: Introduce yourself and give a tip on where you want to contribute in Kubernetes
+
+
+## Communication in the community
+
+The Kubernetes community is like a capybara: community members are really cool with everyone, and they come from a lot of different backgrounds.
+
+- Ask tech questions on Slack and Stack Overflow, not on GitHub
+- A lot of discussion will be involved when GitHub issues and PRs are opened. Don't get frustrated
+- Stay patient, because there are a lot of contributions
+
+When in doubt, **ask on Slack**
+
+Other communication channels:
+
+- Community meetings
+- Mailing lists
+- @ on Github
+- Office Hour
+- Kubernetes meetups https://www.meetup.com/topics/kubernetes
+
+On https://kubernetes.io/community there is the schedule for all the SIG/working group meetings.
+If you want to join or create a meetup, go to **slack#sig-contribex**.
+
+## SIG - Special Interest Group
+
+Semi-autonomous teams:
+- Own leaders & charters
+- Responsible for their code, GitHub repos, Slack channels, mailing lists, and meetings
+
+### Types
+
+[SIG List](https://github.com/kubernetes/community/blob/master/sig-list.md)
+
+1. Feature areas
+ - sig-auth
+ - sig-apps
+ - sig-autoscaling
+ - sig-big-data
+ - sig-cli
+ - sig-multicluster
+ - sig-network
+ - sig-node
+ - sig-scalability
+ - sig-scheduling
+ - sig-service-catalog
+ - sig-storage
+ - sig-ui
+2. Plumbing
+ - sig-cluster-lifecycle
+ - sig-api-machinery
+ - sig-instrumentation
+3. Cloud Providers *(currently working on moving cloudprovider code out of Core)*
+ - sig-aws
+ - sig-azure
+ - sig-gcp
+ - sig-ibmcloud
+ - sig-openstack
+4. Meta
+ - sig-architecture: For all general architectural decisions
+ - sig-contributor-experience: Helping the contributor and community experience
+ - sig-product-management: Long-term decisions
+ - sig-release
+ - sig-testing: In charge of all the test for Kubernetes
+5. Docs
+ - sig-docs: for documentation and website
+
+## Working groups and "Subproject"
+
+From working group to "subproject".
+
+For specific tools (e.g. Helm), goals (e.g. Resource Management), or areas (e.g. Machine Learning).
+
+Working groups change around more frequently than SIGs, and some might be temporary.
+
+- wg-app-def
+- wg-apply
+- wg-cloud-provider
+- wg-cluster-api
+- wg-container-identity
+- ...
+
+### Picking the right SIG:
+1. Figure out which area you would like to contribute to
+2. Find out which SIG / WG / subproject covers that (tip: ask on #sig-contribex Slack channel)
+3. Join that SIG / WG / subproject (you should also join the main SIG when joining a WG / subproject)
+
+## Tour of the repositories
+
+Everything will be refactored (cleaned, moved, merged, ...)
+
+### Core repository
+- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes)
+
+### Project
+
+- [kubernetes/Community](https://github.com/kubernetes/Community): KubeCon, proposals, Code of Conduct and contribution guidelines, SIG list
+- [kubernetes/Features](https://github.com/kubernetes/Features): Feature proposals for future releases
+- [kubernetes/Steering](https://github.com/kubernetes/Steering)
+- [kubernetes/Test-Infra](https://github.com/kubernetes/Test-Infra): Everything related to testing except perf
+- [kubernetes/Perf-Tests](https://github.com/kubernetes/Perf-Tests): performance tests
+
+### Docs/Website
+
+- website
+- kubernetes-cn
+- kubernetes-ko
+
+### Developer Tools
+
+- sample-controller*
+- sample-apiserver*
+- code-generator*
+- k8s.io
+- kubernetes-template-project: For new github repo
+
+### Staging repositories
+
+Mirrors of core parts for easy vendoring
+
+### SIG repositories
+
+- release
+- federation
+- autoscaler
+
+### Cloud Providers
+
+No AWS
+
+### Tools & Products
+
+- kubeadm
+- kubectl
+- kops
+- helm
+- charts
+- kompose
+- ingress-nginx
+- minikube
+- dashboard
+- heapster
+- kubernetes-anywhere
+- kube-openapi
+
+### 2nd Namespace: Kubernetes-sigs
+
+Too many places for random/incubation stuff.
+No working path for **promotion/deprecation**
+
+In the future:
+1. start in Kubernetes-sigs
+2. SIGs determine when and how the project will be **promoted/deprecated**
+
+Those repositories can have their own rules:
+- Approval
+- Ownership
+- ...
+
+## Contribution
+
+### First Bug report
+
+```
+- Bug or Feature
+
+- What happened
+
+- How to reproduce
+
+```
+
+### Issues as specifications
+
+
+Most k8s changes start with an issue:
+
+- Feature proposal
+- API changes proposal
+- Specification
+
+### From Issue to Code/Docs
+
+1. Start with an issue
+2. Apply all appropriate labels
+3. cc SIG leads and concerned devs
+4. Raise the issue at a SIG meeting or on mailing list
+5. If *Lazy consensus*, submit a PR
+
+### Required labels https://github.com/kubernetes/test-infra/blob/master/label_sync/labels.md
+
+#### On creation
+- `sig/*`: the SIG the issue belongs to
+- `kind/*`:
+ - bug
+ - feature
+ - documentation
+ - design
+ - failing-test
+
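+Labels are usually applied by commenting on the issue with bot commands, e.g. (a sketch; see https://go.k8s.io/bot-commands for the full list):
+
+```
+/sig node
+/kind bug
+```
+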
+#### For issues closed as part of **triage**
+
+- `triage/duplicate`
+- `triage/needs-information`
+- `triage/support`
+- `triage/unreproducible`
+- `triage/unresolved`
+
+#### Priority
+
+- `priority/critical-urgent`
+- `priority/important-soon`
+- `priority/important-longterm`
+- `priority/backlog`
+- `priority/awaiting-evidence`
+
+#### Area
+
+Free-form, for the issue's dedicated area
+
+- `area/kubectl`
+- `area/api`
+- `area/dns`
+- `area/platform/gcp`
+
+#### help-wanted
+
+Currently mostly complicated issues
+
+#### SOON
+
+`good-first-issue`
+
+## Making a contribution by Pull Request
+
+We will go through the typical PR process on kubernetes repos.
+
+We will play there: [community/contributors/new-contributor-playground at master · kubernetes/community · GitHub](https://github.com/kubernetes/community/tree/master/contributors/new-contributor-playground)
+
+1. When we contribute to any kubernetes repository, **fork it**
+
+2. Do your modification in your fork
+```
+$ git clone git@github.com:jgsqware/community.git $GOPATH/src/github.com/kubernetes/community
+$ git remote add upstream https://github.com/kubernetes/community.git
+$ git remote -v
+origin git@github.com:jgsqware/community.git (fetch)
+origin git@github.com:jgsqware/community.git (push)
+upstream git@github.com:kubernetes/community.git (fetch)
+upstream git@github.com:kubernetes/community.git (push)
+$ git checkout -b kubecon
+Switched to a new branch 'kubecon'
+
+## MAKE YOUR MODIFICATIONS IN THE CODE ##
+
+$ git add contributors/new-contributor-playground/new-contributor-playground-xyz.md
+$ git commit
+
+
+### IN YOUR COMMIT EDITOR ###
+
+ Adding a new contributors file
+
+ We are currently experimenting with the PR process in the kubernetes repository.
+
+$ git push -u origin kubecon
+```
+
+3. Create a Pull request via Github
+4. If needed, sign the CLA to make your contribution valid
+5. Read the `k8s-ci-robot` message and `/assign @reviewer` recommended by the `k8s-ci-robot`
+6. wait for a `LGTM` label from one of the `OWNER/reviewers`
+7. wait for approval from one of `OWNER/approvers`
+8. `k8s-ci-robot` will automatically merge the PR
+
+`needs-ok-to-test` is applied to pull requests from non-member contributors; a member must approve running the tests before the pull request can be validated
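+
+In practice these steps are driven by bot commands in PR comments (an annotated sketch; see https://go.k8s.io/bot-commands for the full list):
+
+```
+/assign @some-reviewer   <- ask a suggested reviewer to take a look
+/ok-to-test              <- a member allows CI to run on a non-member's PR
+/lgtm                    <- a reviewer adds the lgtm label
+/approve                 <- an approver signs off; the bot can then merge
+```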
+
+## Test infrastructure
+
+> How the bot tells you when you mess up
+
+At the end of a PR there is a bunch of tests.
+There are 2 types:
+ - required: always run, and needed to pass to validate the PR (e.g. end-to-end tests)
+ - not required: needed only in specific conditions (e.g. when modifying only a specific part of the code)
+
+If something failed, click on `details` and check the test failure logs to see what happened.
+There is a `junit-XX.log` with the list of tests executed and an `e2e-xxxxx` folder with all the component logs.
+To check whether the test failed because of your PR or another one, you can click on the **TOP** `pull-request-xxx` link; you will see the test grid and can check whether your failing test is failing in other PRs too.
+
+If you want to retrigger the test manually, you can comment the PR with `/retest` and `k8s-ci-robot` will retrigger the tests.
+
+## SIG-Docs contribution
+
+Anyone can contribute to docs.
+
+### Kubernetes docs
+
+- Website URL
+- Github Repository
+- k8s slack: #sig-docs
+
+### Working with docs
+
+Docs use `k8s-ci-robot`. Approval process is the same as for any k8s repo.
+In docs, the `master` branch is the current version of the docs, so always branch from `master`; it's continuously deployed.
+For docs for a specific release, branch from `release-1.X`.
+
+## Local build and Test
+
+The code: [kubernetes/kubernetes]
+The process: [kubernetes/community]
+
+### Dev Env
+
+You need:
+- Go
+- Docker
+
+
+- Lots of RAM and CPU, and 10 GB of space
+- best to use Linux
+- place your k8s repo fork in:
+ - `$GOPATH/src/k8s.io/kubernetes`
+- `cd $GOPATH/src/k8s.io/kubernetes`
+- build: `./build/run.sh make`
+ - Build is incremental; keep running `./build/run.sh make` until it works
+- To build a single component: `make WHAT=cmd/kubectl`
+- Building kubectl on a Mac for Linux: `KUBE_BUILD_PLATFORMS=linux/amd64 make WHAT=cmd/kubectl`
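+
+Putting it together (a sketch, assuming the fork is placed as above and Docker is running):
+
+```sh
+cd $GOPATH/src/k8s.io/kubernetes
+./build/run.sh make                   # full containerized build; rerun until it works
+./build/run.sh make WHAT=cmd/kubectl  # rebuild just one component
+```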
+
+There is `build` documentation there: https://git.k8s.io/kubernetes/build
+
+### Testing
+There is `test` documentation there: https://git.k8s.io/community/contributors/guide
diff --git a/contributors/new-contributor-playground/new-contributors.md b/contributors/new-contributor-playground/new-contributors.md
new file mode 100644
index 00000000..6604eb65
--- /dev/null
+++ b/contributors/new-contributor-playground/new-contributors.md
@@ -0,0 +1,5 @@
+# Hello everyone!
+
+Please feel free to talk amongst yourselves or ask questions if you need help.
+
+First commit at kubecon from @mitsutaka \ No newline at end of file
diff --git a/events/2016/developer-summit-2016/application_service_definition_notes.md b/events/2016/developer-summit-2016/application_service_definition_notes.md
index e8f4c0c5..8cf3bb9d 100644
--- a/events/2016/developer-summit-2016/application_service_definition_notes.md
+++ b/events/2016/developer-summit-2016/application_service_definition_notes.md
@@ -16,7 +16,7 @@ We need the 80% case, Fabric8 is a good example of this. We need a good set of
We also need to look at how to get developer feedback on this so that we're building what they need. Pradeepto did a comparison of Kompose vs. Docker Compose for simplicity/usability.
-One of the things we're discussing the Kompose API. We want to get rid of this and supply something which people can use directly with kuberntes. A bunch of shops only have developers. Someone asked though what's so complicated with Kube definitions. Have we identified what gives people trouble with this? We push too many concepts on developers too quickly. We want some high-level abstract types which represent the 95% use case. Then we could decompose these to the real types.
+One of the things we're discussing is the Kompose API. We want to get rid of this and supply something which people can use directly with kubernetes. A bunch of shops only have developers. Someone asked though what's so complicated with Kube definitions. Have we identified what gives people trouble with this? We push too many concepts on developers too quickly. We want some high-level abstract types which represent the 95% use case. Then we could decompose these to the real types.
What's the gap between compose files and the goal? As an example, say you want to run a webserver pod. You have to deal with ingress, and service, and replication controller, and a bunch of other things. What's the equivalent of "docker run" which is easy to get. The critical thing is how fast you can learn it.
diff --git a/events/2018/05-contributor-summit/README.md b/events/2018/05-contributor-summit/README.md
index 8240888e..a3eb54b9 100644
--- a/events/2018/05-contributor-summit/README.md
+++ b/events/2018/05-contributor-summit/README.md
@@ -13,35 +13,40 @@ In some sense, the summit is a real-life extension of the community meetings and
## Registration
- [Sign the CLA](/CLA.md) if you have not done so already.
-- [Fill out this Google Form](https://goo.gl/forms/TgoUiqbqZLkyZSZw1)
+- [Fill out this Google Form](https://goo.gl/forms/TgoUiqbqZLkyZSZw1) - Registration is now <b>closed</b>.
## When and Where
- Tuesday, May 1, 2018 (before Kubecon EU)
-- Bella Center
-- Copenhagen, Denmark
+- Bella Center, Copenhagen, Denmark
+- Registration and breakfast start at 8am in Room C1-M0
+- Happy hour reception onsite to close at 5:30pm
-All day event with a happy hour reception to close
-## Agenda
+There is a [Slack channel](https://kubernetes.slack.com/messages/contributor-summit) (#contributor-summit) for you to use during the summit to pass URLs, notes, reserve the hallway track room, etc.
+
+
+## Agenda
### Morning
-| Time | Track One | Track Two | Track Three |
-| ----------- | ------------------------------- | ---------------------------- | -------------- |
-| 8:00 | Registration and Breakfast | | |
-| 9:00-9:15 | Welcome and Introduction | | |
-| 9:15-9:30 | Steering Committee Update | | |
+| Time | Track One - Room: C1-M1 | Track Two - Room: C1-M2 | Track Three - Room: B4-M5 |
+| ----------- | ------------------------------- | ---------------------------- | -------------- |
+| 8:00 | Registration and Breakfast - <b>Room: C1-M0</b> | | |
+| 9:00-9:15 | | Welcome and Introduction | |
+| 9:15-9:30 | | Steering Committee Update | |
+| | | | |
+| | [New Contributor Workshop](/events/2018/05-contributor-summit/new-contributor-workshop.md) | Current Contributor Workshop | Docs Sprint |
| | | | |
-| | New Contributor Workshop | Current Contributor Workshop | Docs Sprint |
-| | | | |
-| 9:30-10:00 | Session | Unconference | |
-| 10:00-10:50 | Session | Unconference | |
+| 9:30-10:00 | Part 1 | What's next in networking? Lead: thockin | |
+| 10:00-10:50 | Part 2 | CRDs and Aggregation - future and pain points. Lead: sttts | |
| 10:50-11:00 | B R E A K | B R E A K | |
-| 11:00-12:00 | Session | Unconference | |
-| 12:00-1:00 | Session | Unconference | |
+| 11:00-12:00 | Part 3 | client-go and API extensions. Lead: munnerz | |
+| 12:00-1:00 | Part 4 | Developer Tools. Leads: errordeveloper and r2d4 | |
| 1:00-2:00 | Lunch (Provided) | Lunch (Provided) | |
+*Note: The New Contributor Workshop will be a single continuous training, rather than being divided into sessions as the Current Contributor track is. New contributors should plan to stay for the whole 3 hours. [Outline here](/events/2018/05-contributor-summit/new-contributor-workshop.md).*
+
### Afternoon
| Time | Track One |
@@ -55,9 +60,9 @@ All day event with a happy hour reception to close
- SIG Updates (~5 minutes per SIG)
- 2 slides per SIG, focused on cross-SIG issues, not internal SIG discussions (those are for Kubecon)
- Identify potential issues that might affect multiple SIGs across the project
- - One-to-many announcements about changes a SIG expects that might affect others
+ - One-to-many announcements about changes a SIG expects that might affect others
- Track Leads
- - New Contributor Workshop - Josh Berkus
+ - New Contributor Workshop - Josh Berkus, Guinevere Saenger, Ilya Dmitrichenko
- Current Contributor Workshop - Paris Pittman
- SIG Updates - Jorge Castro
diff --git a/events/2018/05-contributor-summit/clientgo-notes.md b/events/2018/05-contributor-summit/clientgo-notes.md
new file mode 100644
index 00000000..98695813
--- /dev/null
+++ b/events/2018/05-contributor-summit/clientgo-notes.md
@@ -0,0 +1,139 @@
+# Client-go
+**Lead:** munnerz with assist from lavalamp
+**Slides:** combined with the CRD session [here](https://www.dropbox.com/s/n2fczhlbnoabug0/API%20extensions%20contributor%20summit.pdf?dl=0) (CRD is first; client-go is after)
+**Thanks to our notetakers:** kragniz, mrbobbytales, directxman12, onyiny-ang
+
+## Goals for the Session
+
+* What is currently painful when building a controller
+* Questions around best practices
+* As someone new:
+ * What is hard to grasp?
+* As someone experienced:
+ * What important bits of info do you think are critical
+
+
+## Pain points when building controller
+* A lot of boilerplate
+ * Work queues
+ * HasSynced functions
+ * Re-queuing
+* Lack of deep documentation in these areas
+ * Some documentation exists, but focused on k/k core
+* Securing webhooks & APIServers
+* Validation schemas
+* TLS, the number of certs is a pain point
+ * It is hard right now, the internal k8s CA has been used a bit.
+ * OpenShift has a 'serving cert controller' that will generate a cert based on an annotation; it might possibly be integrated upstream.
+ * Election has been problematic, and the Scaling API is low-level and hard to use; it doesn't work well if a resource has multiple meanings of scale (e.g. multiple pools of nodes)
+* Registering CRDs, what's the best way to go about it?
+ * No best way to do it, but has been deployed with application
+ * Personally, deploy the CRDs first for RBAC reasons
+* Declarative API on one end that has to be translated to a transactional API on the other end (e.g. ingress). Controller trying to change quite a few things.
+ * You can do locking, but it has to be built.
+* Q: how do you deal with "rolling back" if the underlying infrastructure
+ that you're describing says no on an operation?
+ * A: use validating webhook?
+ * A: use status to keep track of things?
+ * A: two types of controllers: `kube --> kube` and `kube --> external`,
+ they work differently
+ * A: Need a record that keeps track of things in progress. e.g. status. Need more info on how to properly tackle this problem.
+
+
+## Best practices
+(discussion may be shown by Q: for question or A: for audience or answer)
+* How do you keep external resources up to date with Kubernetes resources?
+ * A: the original intention was to use the sync period on the controller if
+ you watch external resources, use that
+ * Should you set resync period to never if you're not dealing with
+ external resources?
+ * A: Yes, it's not a bug if watch fails to deliver things right
+ * A: controller automatically relists on connection issues, resync
+ interval is *only* for external resources
+ * maybe should be renamed to make it clear it's for external resources
+* how many times to update status per sync?
+ * A: use status conditions to communicate "fluffy" status to user
+ (messages, what might be blocked, etc, in HPA), use fields to
+ communicate "crunchy" status (last numbers we saw, last metrics, state
+ I need later).
+* How do I generate nice docs (markdown instead of swagger)
+ * A: kubebuilder (kubernetes-sigs/kubebuilder) generates docs out of the
+ box
+ * A: Want to have IDL pipeline that runs on native types to run on CRDs,
+ run on docs generator
+* Conditions vs fields
+ * used to check a pod's state
+ * "don't use conditions too much"; other features require the use of conditions, status is unsure
+ * What does condition mean in this context
+ * Additional fields that can have `ready` with a msg, represents `state`.
+ * Limit on states that the object can be in.
+ * Use conditions to reflect the state of the world, is something blocked etc.
+ * Conditions were created to allow for mixed mode of clients, old clients can ignore some conditions while new clients can follow them. Designed to make it easier to extend status without breaking clients.
+* Validating webhooks vs OpenAPI schema
+* Can we write a test that spins up main API server in process?
+ * Can do that currently in some k/k tests, but not easy to consume
+ * vendoring is hard
+ * Currently have a bug where you have to serve aggregated APIs on 443,
+ so that might complicate things
+* How are people testing extensions?
+ * Anyone reusing upstream dind cluster?
+ * People looking for a good way to test them.
+ * kube-builder uses the sig-testing framework to bring up a local control plane and use that to test against. (@pwittrock)
+* How do you start cluster for e2es?
+ * Spin up a full cluster with kubeadm and run tests against that
+ * integration tests -- pull in packages that will build the clusters
+* Q: what CIs are you using?
+ * A: Circle CI and then spin up new VMs to host cluster
+ * Mirantis has a tool for a multi-node dind cluster for testing
+* #testing-commons channel on Slack. 27-page document on this -- link will be put in slides
+* Deploying and managing Validating/Mutating webhooks?
+ * how complex should they be?
+* When to use subresources?
+ * Are people switching to api agg to use this today?
+ * Really just for status and scale
+ * Why not use subresources today with scale?
+ * multiple replicas fields
+ * doesn't fit polymorphic structure that exists
+ * pwittrock@: kubectl side, scale
+ * want to push special kubectl verbs into subresources to make kubectl
+ more tolerant to version skew
+
+## Other Questions
+
+* Q: Client-go generated listers, what is the reason for two separate interfaces to retrieve from client and cache?
+ * A: historical, but some things are better done local vs on the server.
+* issues: client-set interface allows you to pass special options that allow you to do interesting stuff on the API server which isn't necessarily possible in the lister.
+ * started as same function call and then diverged
+ * lister gives you slice of pointers
+ * clientset gives you a slice of not pointers
+ * a lot of people would take return from clientset and then convert it to a slice of pointers so the listers helped avoid having to do deep copies every time. TLDR: interfaces are not identical
+* Where should questions go on this topic for now?
+ * A: most goes to sig-api-machinery right now
+ * A: Controller-related stuff would probably be best for sig-apps
+* Q: Staleness of data, how are people dealing with keeping data up to date with external data?
+ * A: Specify sync period on your informer, will put everything through the loop and hit external resources.
+* Q: With strictly kubernetes resources, should your sync period be never? aka does the watch return everything.
+ * A: The watch should return everything and should be used if its strictly k8s in and k8s out, no need to set the sync period.
+* Q: What about controllers in other languages than go?
+ * A: [metacontroller](https://github.com/GoogleCloudPlatform/metacontroller) There are client libs in other languages, missing piece is work queue,
+ informer, etc
+* Cluster API controllers cluster, machineset, deployment, have a copy of
+ deployment code for machines. Can we move this code into a library?
+ * A: it's a lot of work, someone needs to do it
+ * A: Janet Kuo is a good person to talk to (worked on getting core workloads
+ API to GA) about opinions on all of this
+* Node name duplication caused issues with AWS and long-term caches
+ * make sure to store UIDs if you cache across reboot
+
+## Moving Forwards
+* How do we share/disseminate knowledge (SIG PlatformDev?)
+ * Most SIGs maintain their own controllers
+ * Wiki? Developer Docs working group?
+ * Existing docs focus on in-tree development. Dedicated 'extending kubernetes' section?
+* Git-book being developed for kubebuilder (book.kubebuilder.io); would appreciate feedback @pwittrock
+* API extensions authors meetups?
+* How do we communicate this knowledge for core kubernetes controllers
+ * Current-day: code review, hallway conversations
+* Working group for platform development kit?
+* Q: where should we discuss/have real time conversations?
+ * A: #sig-apimachinery, or maybe #sig-apps in slack (or mailing lists) for the workloads controllers
diff --git a/events/2018/05-contributor-summit/crds-notes.md b/events/2018/05-contributor-summit/crds-notes.md
new file mode 100644
index 00000000..a07094b8
--- /dev/null
+++ b/events/2018/05-contributor-summit/crds-notes.md
@@ -0,0 +1,92 @@
+# CRDs - future and pain points
+**Lead:** sttts
+**Slides:** combined with the client-go session [here](https://www.dropbox.com/s/n2fczhlbnoabug0/API%20extensions%20contributor%20summit.pdf?dl=0)
+**Thanks to our notetakers:** mrbobbytales, kragniz, tpepper, and onyiny-ang
+
+## outlook - aggregation
+* API stable since 1.10. There is a lack of tools and library support.
+* GSoC project with @xmudrii: share etcd storage
+ * `kubectl create etcdstorage your api-server`
+* Store custom data in etcd
+
+## outlook custom resources
+
+1.11:
+* alpha: multiple versions with/without conversion
+* alpha: pruning - blocker for GA - unspecified fields are removed
+ * deep change of semantics of custom resources
+ * from JSON blob store to schema based storage
+* alpha: defaulting - defaults from openapi validation schema are applied
+* alpha: graceful deletion - (maybe? PR exists)
+* alpha: server side printing columns for `kubectl get` customization
+* beta: subresources - alpha in 1.10
+* will have additionalProperties with extensible string map
+ * mutually exclusive with properties
+
+1.12
+* multiple versions with declarative field renames
+* strict create mode (issue #5889)
+
+Missing from Roadmap:
+ - Additional Properties: Forbid additional fields
+ - Unknown fields are silently dropped instead of erroring
+ - Istio used CRD extensively: proto requires some kind of verification and CRDs are JSON
+ - currently planning to go to GA without proto support
+ - possibly in the longer term to plan
+ - Resource Quotas for Custom Resources
+ - doable, we know how but not currently implemented
+ - Defaulting: mutating webhook will default things when they are written
+ - Is Validation going to be required in the future
+ - poll the audience!
+ - gauging general sense of validation requirements (who wants them, what's missing?)
+ - missing: references to core types aren't allowed/can't be defined -- this can lead to versioning complications
+ - limit CRDs cluster-wide such that they don't affect all namespaces
+ - no good discussion about how to improve this yet
+ - feel free to start one!
+ - Server-side printing columns, per resource type, need to come from the server -- the client could be on a different version than the server and highlight the wrong columns
+
+Autoscaling is alpha today, hopefully beta in 1.11
+
+## The Future: Versioning
+* Most-asked feature, coming... but slowly
+* two types, "noConversion" and "Declarative Conversion"
+* "NoConversion" versioning
+ * maybe in 1.11
+ * ONLY change is apiGroup
+ * Run multiple versions at the same time, they are not converted
+
+* "Declarative Conversion" 1.12
+* declarative rename, e.g.
+```
+spec:
+ group: kubecon.io
+ version: v1
+ conversions:
+ declarative:
+ renames:
+ from: v1pha1
+ to: v1
+ old: spec.foo
+ new: bar
+```
+* Support for webhook?
+ * not currently, very hard to implement
+ * complex problem for end user
+ * current need is really only changing for single fields
+ * Trying to avoid complexity by adding a lot of conversions
+
+## Questions:
+* When should someone move to their own API Server
+ * At the moment, telling people to start with CRDs. Move to an aggregated API server if you need custom versioning or other specific use-cases.
+* How do I update everything to a new object version?
+ * Have to touch every object.
+* Is protobuf support coming in the future?
+ * possibly, likely yes
+* update on resource quotas for CRDs
+ * PoC PR currently out; it's doable, just not quite done
+* Is validation field going to be required?
+ * Eventually, yes? Some work being done to make CRDs work well with `kubectl apply`
+* Can CRDs be cluster-wide but viewable by only some users?
+ * It's been discussed, but hasn't been tackled.
+* Is there support for CRDs in kubectl output?
+ * server side printing columns will make things easier for client tooling output. Versioning is important for client vs server versioning.
diff --git a/events/2018/05-contributor-summit/devtools-notes.md b/events/2018/05-contributor-summit/devtools-notes.md
new file mode 100644
index 00000000..c22477e4
--- /dev/null
+++ b/events/2018/05-contributor-summit/devtools-notes.md
@@ -0,0 +1,63 @@
+# Developer Tools:
+**Leads:** errordeveloper, r2d4
+**Slides:** n/a
+**Thanks to our notetakers:** mrbobbytales, onyiny-ang
+
+What APIs should we target, what parts of the developer workflow haven't been covered yet?
+
+* Do you think the Developer tools for Kubernetes is a solved problem?
+ * A: No
+
+### Long form responses from SIG Apps survey
+* Need to talk about developer experience
+* Kubernetes Community can do a lot more in helping evangelize Software development workflow, including CI/CD. Just expecting some guidelines on the more productive ways to write software that runs in k8s.
+* Although my sentiment is neutral on kube, it is getting better as more tools are emerging to allow my devs to stick to app development and not get distracted by kube items. There is a lot of tooling available which is a dual edge sword, these tools range greatly in usability robustness and security. So it takes a lot of effort to...
+
+### Current State of Developer Experience
+* Many Tools
+* Mostly incompatible
+* Few end-to-end workflows
+
+### Comments and Questions
+* Idea from scaffold to normalize the interface for builders, be able to swap them out behind the scenes.
+* Possible to formalize these as CRDs?
+* Lots of choices, helm, other templating, kompose etc..
+* So much flexibility in the Kubernetes API that it can become complicated for new developers coming up.
+ * Debug containers might make things easier for developers to work through building and troubleshooting their app.
+* Domains and workflow are so different from companies that everyone has their own opinionated solution.
+* Lots of work being done in the app def working group to define what an app is.
+* app CRD work should make things easier for developers.
+* Break out developer workflow into stages and try and work through expanding them, e.g. develop/debug
+* debug containers are looking to be used both in prod and developer workflows
+* Tool in sig-cli called kustomize, was previously 'konflate'?
+* Hard to talk about all these topics as there isn't the language to talk about these classes of tools.
+* @jacob investigation into application definition: re: phases, it's not just build, deploy, debug; it's build, deploy, lifecycle, debug. Managing lifecycle is still a problem, '1-click deploy' doesn't handle lifecycle.
+* @Bryan Liles: thoughts about why this is hard:
+ * kubectl helm apply objects in different orders
+ * objects vs abstractions
+ * some people love [ksonnet](https://ksonnet.io/), some hate it. Kubernetes concepts are introduced differently to different people so not everyone is starting with the same base. Thus, some tools are harder for some people to grasp than others. Shout out to everyone who's trying to work through it
+* Being tied to one tool breaks compatibility across providers.
+* Debug containers are great for break-glass scenarios
+* CoreOS had an operator that handled the entire stack, additional objects could be created and certain metrics attached.
+ * Everything is open source now, etcd, prometheus operator
+* Tools are applying things in different orders, and this can be a problem across tooling
+* People who depend on startup order also tend to have reliability problems, as they have their own operational problems; they should try and engineer around it.
+* Can be hard if going crazy on high-level abstractions, can make things overly complicated and there are a slew of constraints in play.
+* Ordering constraints are needed for certain garbage collection tasks, having ordering may actually be useful.
+* Some groups have avoided high-level DSLs because people should understand readiness/liveness probes etc. Developers may have a learning curve, but it's worthwhile when troubleshooting and getting into the weeds.
+* Lots of people don't want to get into it at all, they want to put in a few details on a db etc and get it.
+* Maybe standardize on a set of labels for things that should be managed as a group. Helm is one implementation, but it should go beyond helm.
+ * There is a PR that is out there that might take care of some of this.
+* Everyone has their own "style" when it comes to this space.
+* Break the phases and components in the development and deployment workflow into sub-problems and they may be able to actually be tackled. Right now the community seems to tackling everything at once and developing different tools to do the same thing.
+* build UI that displays the whole thing as a list and allows easy creation/destruction of cluster
+ * avoid tools that would prevent portability
+* objects rendered to file somehow: happens at runtime, additional operator that takes care of the stack
+ * 3, 4 minor upgrades without breakage
+* @Daniel Smith: start up order problems = probably bigger problems, order shouldn't need to matter but in the real world sometimes it does
+* platform team, internal paths team (TSL like theme), etc. In some cases it's best to go crazy focusing on the abstractions--whole lot of plumbing that needs to happen to get everything working properly
+* Well defined order of creation may not be a bad thing. ie. ensure objects aren't created that are immediately garbage collected.
+* Taking a step back from being contributors and put on developer hats to consider the tool sprawl that exists and is not necessarily compatible across different aspects of kubernetes. Is there anyway to consolidate them and make them more standardized?
+* Split into sub-problems
+
+## How can we get involved?
+- SIG-Apps - join the conversation on slack, mailing list, or weekly Monday meeting
diff --git a/events/2018/05-contributor-summit/networking-notes.md b/events/2018/05-contributor-summit/networking-notes.md
new file mode 100644
index 00000000..d1220fde
--- /dev/null
+++ b/events/2018/05-contributor-summit/networking-notes.md
@@ -0,0 +1,129 @@
+# Networking
+**Lead:** thockin
+**Slides:** [here](https://docs.google.com/presentation/d/1Qb2fbyTClpl-_DYJtNSReIllhetlOSxFWYei4Zt0qFU/edit#slide=id.g2264d16f0b_0_14)
+**Thanks to our notetakers:** onyiny-ang, mrbobbytales, tpepper
+
+
+This session is not declaring what's being implemented next, but rather laying out the problems that loom.
+
+## Coming soon
+ - kube-proxy with IPVS
+ - currently beta
+ - core DNS replacing kube DNS
+ - currently beta
+ - pod "ready++"
+ - allow external systems to participate in rolling updates. Say your load-balancer takes 5-10 seconds to program, when you bring up new pod and take down old pod the load balancer has lost old backends but hasn't yet added new backends. The external dependency like this becomes a gating pod decorator.
+ - adds configuration to pod to easily verify readiness
+ - design agreed upon, alpha (maybe) in 1.11
+
+## Ingress
+* The lowest common-denominator API. This is really limiting for users, especially compared to modern software L7 proxies.
+* annotation model of markup limits portability
+* ingress survey reports:
+ * people want portability
+ * everyone uses non-portable features…
+ * 2018 L7 requirements are dramatically higher than what they were and many vendors don’t support that level of functionality.
+* Possible Solution? Routes
+ * openshift uses routes
+ * heptio prototyping routes currently
+* All things considered, requirements are driving it closer and closer to Istio.
+One possibility: poach some of the ideas and make them Kubernetes-native.
+
+## Istio
+(as a potential solution)
+- maturing rapidly with good APIs and support
+- Given that plus istio is not part of kubernetes, it's unlikely near term to become a default or required part of a k8s deployment. The general ideas around istio style service mesh could be more native in k8s.
+
+## Topology and node-local Services
+- demand for node-local network and service discovery but how to go about it?
+ - e.g. “I want to talk to the logging daemon on my current host”
+ - special-case topology?
+ - client-side choice
+- These types of services should not be a service proper.
+
+## Multi-network
+
+- certain scenarios demand multi-network
+- A pod can be in multiple networks at once. You might have different quality of service on different networks (eg: fast/expensive, slower/cheaper), or different connectivity (eg: the rack-internal network).
+- Tackling scenarios like NFV
+- need deeper changes like multiple pod IPs but also need to avoid repeating old mistakes
+- SIG-Network WG designing a PoC -- If interested jump on SIG-network WG weekly call
+- Q: Would this PoC help if virtual-kubelets were used to span cloud providers? Spanning latency domains in networks is also complicated. Many parts of k8s are chatty, assuming a cluster internal low-latency connectivity.
+
+## Net Plugins vs Device Plugins
+- These plugins do not coordinate today and are difficult to work around
+- gpu that is also an infiniband device
+- causes problems because network and device are very different with verbs etc
+- problems encountered with having to schedule devices and network together at the same time:
+  “I want a gpu on this host that has a gpu attached and I want it to be the same device”
+- PoC available to make this work, but it's rough and a problem right now.
+- Resources WG and networking SIG are discussing this challenging problem
+- SIGs/WGs. Conversation may feel like a cycle, but @thockin feels it is a spiral that is slowly converging and he has a doc he can share covering the evolving thinking.
+
+## Net Plugins, gRPC, Services
+- tighter coupling between netplugins and kube-proxy could be useful
+- grpc is awesome for plugins, why not use a grpc network plugin
+- pass services to network plugin to bypass kube-proxy, give more awareness to the network plugin and enable more functionality.
+
+## IPv6
+- beta but **no** support for dual-stack (v4 & v6 at the same time)
+- Need deeper changes like multiple pod IPs (need to change the pod API--see Multi-network)
+- https://github.com/kubernetes/features/issues/563
+
+## Services v3
+
+- Services + Endpoints have a grab-bag of features which is not ideal; "grew organically"
+- Need to start segmenting the "core" API group
+ - write API in a way that is more obvious
+ - split things out and reflect it in API
+- Opportunity to rethink and refactor:
+ - Endpoints -> Endpoint?
+ - split the grouping construct from the “gazintas”
+ - virtualIP, network, dns name moves into the service
+ - EOL troublesome features
+ - port remapping
+
+## DNS Reboot
+- We abuse DNS and mess up our DNS schema
+ - it's possible to write queries in DNS that take over names
+ - @thockin has a doc with more information about the details of this
+ - Why can't I use more than 6 web domains? bugzilla circa 1996
+- problem: it's possible to write queries in DNS that write over names
+ - create a namespace called “com” and an app named “google” and it’ll cause a problem
+- “svc” is an artifact and should not be a part of dns
+- issues with certain underlying libraries
+- Changing it is hard (if we care about compatibility)
+- Can we fix DNS spec or use "enlightened" DNS servers
+ - Smart proxies on behalf of pods that do the searching and become a “better” dns
+- External DNS
+ - creates DNS entries in an external system (e.g. Route53)
+ - currently in the incubator; unsure of status and whether it will move out, so the path forward is unclear
+
+## Perf and Scalability
+- iptables is crufty; an nftables implementation should be better.
+- an eBPF implementation (e.g. Cilium) has potential
+
+## Questions:
+
+- Consistent mechanism to continue progress but maintain backwards compatibility
+- External DNS was not mentioned -- blue/green traffic switching
+ - synchronizes Kubernetes resources into external DNS services
+ - it's in the incubator right now (the incubator process itself is deprecated)
+ - unsure of the future trajectory
+ - widely used in production
+ - relies sometimes on annotations and ingress
+- Q: Device plugins. . .spiraling around and hoping for eventual convergence/simplification
+ - A: For resource management on device/net plugins it feels like things are going in a spiral, but progress is being made; it is a very difficult problem and it is hard to keep all design points tracked. Trying to come to consensus on it all.
+- Q: Would CoreDNS be the best place for the plugins and other modes for DNS proxy, etc.?
+ - loss of packets are a problem -- long tail of latency
+ - encourage cloud providers to support gRPC
+- Q: With the issues talked about earlier, why can’t istio be integrated natively?
+ - A: Istio can't be required/default: still green
+ - today we can't proclaim that Kubernetes must support Istio
+ - probably not enough community support this year (not everyone is using it at this point)
+- Q: Thoughts on k8s v2?
+ - A: Things will not just be turned off; they must be phased out over the course of years, especially for services which have been core for some time.
+
+## Takeaways:
+- This is not a comprehensive list of everything that is up and coming
+- A lot of work went into all of these projects
diff --git a/events/2018/05-contributor-summit/new-contributor-workshop.md b/events/2018/05-contributor-summit/new-contributor-workshop.md
new file mode 100644
index 00000000..9a45b06f
--- /dev/null
+++ b/events/2018/05-contributor-summit/new-contributor-workshop.md
@@ -0,0 +1,99 @@
+# Kubernetes Summit: New Contributor Workshop
+
+*This was presented as one continuous 3-hour training with a break. For purposes of live coding exercises, participants were asked to bring a laptop with git installed.*
+
+This course was captured on video, and the playlist can be found [here](https://www.youtube.com/playlist?list=PL69nYSiGNLP3M5X7stuD7N4r3uP2PZQUx).
+
+*Course Playlist [Part One](https://www.youtube.com/watch?v=obyAKf39H38&list=PL69nYSiGNLP3M5X7stuD7N4r3uP2PZQUx&t=0s&index=1):*
+* Opening
+ * Welcome contributors
+ * Who this is for
+ * Program
+ * The contributor ladder
+* CLA signing
+ * Why we have a CLA
+ * Going through the signing process
+* Choose Your Own Adventure: Figuring out where to contribute
+ * Docs & Website
+ * Testing
+ * Community management
+ * Code
+ * Main code
+ * Drivers, platforms, plugins, subprojects
+ * Finding your first topic
+ * Things that fit into your work at work
+ * Interest match
+ * Skills match
+ * Choose your own adventure exercise
+* Let's talk: Communication
+ * Importance of communication
+ * Community standards and courtesy
+ * Mailing Lists (esp Kube-dev)
+ * Slack
+ * Github Issues & PRs
+ * Zoom meetings & calendar
+ * Office hours, MoC, other events
+ * Meetups
+ * Communication exercise
+* The SIG system
+ * What are SIGs and WGs
+ * Finding the right SIG
+ * Most active SIGs
+ * SIG Membership, governance
+ * WGs and Subprojects
+* Repositories
+ * Tour de Repo
+ * Core Repo
+ * Website/docs
+ * Testing
+ * Other core repos
+ * Satellite Repos
+ * Owners files
+ * Repo membership
+* BREAK (20min)
+
+*Course Playlist [Part Two](https://www.youtube.com/watch?v=PERboIaNdcI&list=PL69nYSiGNLP3M5X7stuD7N4r3uP2PZQUx&t=0s&index=2):*
+* Contributing by Issue: Josh (15 min) (1:42)
+ * Finding the right repo
+ * What makes a good issue
+ * Issues as spec for changes
+ * Labels
+ * label framework
+ * required labels
+ * Following up and communication
+* Contributing by PR (with walkthrough)
+ * bugs vs. features vs. KEP
+ * PR approval process
+ * More Labels
+ * Finding a reviewer
+ * Following-up and communication
+ * On you: rebasing, test troubleshooting
+* Test infrastructure
+ * Automated tests
+ * Understanding test failures
+* Doc Contributions
+ * Upcoming changes to docs
+ * Building docs locally
+ * Doc review process
+
+*Course Playlist [Part Three](https://www.youtube.com/watch?v=Z3pLlp6nckI&list=PL69nYSiGNLP3M5X7stuD7N4r3uP2PZQUx&t=0s&index=3):*
+
+* Code Contributions: Build and Test
+ * Local core kubernetes build
+ * Running unit tests
+ * Troubleshooting build problems
+* Releases
+ * Brief on Release schedule
+ * Release schedule details
+ * Release Team Opportunities (shadows)
+* Going beyond
+ * Org membership
+ * Meetups & CNCF ambassador
+ * Mentorship opportunities
+ * Group Mentoring
+ * GSOC/Outreachy
+ * Release Team
+ * Meet Our Contributors
+ * 1-on-1 ad-hoc mentoring
+ * Kubernetes beginner tutorials
+ * Check your own progress on devstats
diff --git a/events/2018/05-contributor-summit/steering-update.md b/events/2018/05-contributor-summit/steering-update.md
new file mode 100644
index 00000000..00941b55
--- /dev/null
+++ b/events/2018/05-contributor-summit/steering-update.md
@@ -0,0 +1,13 @@
+# Steering Committee Update
+**Leads:** pwittrock, timothysc
+**Thanks to our notetaker:** tpepper
+
+* incubation is deprecated, "associated" projects are a thing
+* WG are horizontal across SIGs and are ephemeral. Subprojects own a piece
+ of code and relate to a SIG. Example: SIG-Cluster-Lifecycle with
+ kubeadm, kops, etc. under it.
+* SIG charters: PR a proposed new SIG with the draft charter. Discussion
+ can then happen on GitHub around the evolving charter. This is cleaner
+ and more efficient than discussing on mailing list.
+* K8s values doc updated by Sarah Novotny
+* changes to voting roles and rules are in the works
diff --git a/events/community-meeting.md b/events/community-meeting.md
index 5a9ae68a..e3d8a7ab 100644
--- a/events/community-meeting.md
+++ b/events/community-meeting.md
@@ -1,12 +1,10 @@
# Kubernetes Weekly Community Meeting
-We have PUBLIC and RECORDED [weekly meeting](https://zoom.us/my/kubernetescommunity) every Thursday at 6pm UTC (1pm EST / 10am PST)
+We have PUBLIC and RECORDED [weekly meeting](https://zoom.us/my/kubernetescommunity) every Thursday at [5pm UTC](https://www.google.com/search?q=5pm+UTC).
-Map that to your local time with this [timezone table](https://www.google.com/search?q=1800+in+utc)
+See it on the web at [calendar.google.com](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles), or paste this [iCal url](https://calendar.google.com/calendar/ical/cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com/public/basic.ics) into any [iCal client](https://en.wikipedia.org/wiki/ICalendar). Do NOT copy the meetings over to your personal calendar; you will miss meeting updates. Instead use your client's calendaring feature to say you are attending the meeting so that any changes made to meetings will be reflected on your personal calendar.
-See it on the web at [calendar.google.com](https://calendar.google.com/calendar/embed?src=cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com&ctz=America/Los_Angeles) , or paste this [iCal url](https://calendar.google.com/calendar/ical/cgnt364vd8s86hr2phapfjc6uk%40group.calendar.google.com/public/basic.ics) into any [iCal client](https://en.wikipedia.org/wiki/ICalendar). Do NOT copy the meetings over to a your perosnal calendar, you will miss meeting updates. Instead use your client's calendaring feature to say you are attending the meeting so that any changes made to meetings will be reflected on your personal calendar.
-
-All meetings are archived on the [Youtube Channel](https://www.youtube.com/watch?v=onlFHICYB4Q&list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ)
+All meetings are archived on the [Youtube Channel](https://www.youtube.com/playlist?list=PL69nYSiGNLP1pkHsbPjzAewvMgGUpkCnJ).
Quick links:
diff --git a/events/office-hours.md b/events/office-hours.md
index cdd443a5..11ebca0e 100644
--- a/events/office-hours.md
+++ b/events/office-hours.md
@@ -6,8 +6,8 @@ Office Hours is a live stream where we answer live questions about Kubernetes fr
Third Wednesday of every month, there are two sessions:
-- European Edition: [2pm UTC](https://www.timeanddate.com/worldclock/fixedtime.html?msg=Kubernetes+Office+Hours+%28European+Edition%29&iso=20171115T14&p1=136&ah=1)
-- Western Edition: [9pm UTC](https://www.timeanddate.com/worldclock/fixedtime.html?msg=Kubernetes+Office+Hours+%28Western+Edition%29&iso=20171115T13&p1=1241)
+- European Edition: [1pm UTC](https://www.google.com/search?q=1pm+UTC)
+- Western Edition: [8pm UTC](https://www.google.com/search?q=8pm+UTC)
Tune into the [Kubernetes YouTube Channel](https://www.youtube.com/c/KubernetesCommunity/live) to follow along.
diff --git a/generator/README.md b/generator/README.md
index b00c8f1b..b75fa49d 100644
--- a/generator/README.md
+++ b/generator/README.md
@@ -16,7 +16,7 @@ The documentation follows a template and uses the values from [`sigs.yaml`](/sig
**Time Zone gotcha**:
Time zones make everything complicated.
-And Daylight Savings time makes it even more complicated.
+And Daylight Saving time makes it even more complicated.
Meetings are specified with a time zone and we generate a link to http://www.thetimezoneconverter.com/ so that people can easily convert it to their local time zone.
To make this work you need to specify the time zone in a way that that web site recognizes.
Practically, that means US pacific time must be `PT (Pacific Time)`.
diff --git a/keps/0008-20180430-promote-sysctl-annotations-to-fields.md b/keps/0008-20180430-promote-sysctl-annotations-to-fields.md
new file mode 100644
index 00000000..8966b818
--- /dev/null
+++ b/keps/0008-20180430-promote-sysctl-annotations-to-fields.md
@@ -0,0 +1,225 @@
+---
+kep-number: 8
+title: Promote sysctl annotations to fields
+authors:
+ - "@ingvagabund"
+owning-sig: sig-node
+participating-sigs:
+ - sig-auth
+reviewers:
+ - "@sjenning"
+ - "@derekwaynecarr"
+approvers:
+ - "@sjenning"
+ - "@derekwaynecarr"
+editor:
+creation-date: 2018-04-30
+last-updated: 2018-05-02
+status: provisional
+see-also:
+replaces:
+superseded-by:
+---
+
+# Promote sysctl annotations to fields
+
+## Table of Contents
+
+* [Promote sysctl annotations to fields](#promote-sysctl-annotations-to-fields)
+ * [Table of Contents](#table-of-contents)
+ * [Summary](#summary)
+ * [Motivation](#motivation)
+ * [Promote annotations to fields](#promote-annotations-to-fields)
+ * [Promote --experimental-allowed-unsafe-sysctls kubelet flag to kubelet config api option](#promote---experimental-allowed-unsafe-sysctls-kubelet-flag-to-kubelet-config-api-option)
+ * [Gate the feature](#gate-the-feature)
+ * [Proposal](#proposal)
+ * [User Stories](#user-stories)
+ * [Implementation Details/Notes/Constraints](#implementation-detailsnotesconstraints)
+ * [Risks and Mitigations](#risks-and-mitigations)
+ * [Graduation Criteria](#graduation-criteria)
+ * [Implementation History](#implementation-history)
+
+## Summary
+
+Setting `sysctl` parameters through annotations has provided a successful way
+to define better constraints for running applications.
+The `sysctl` feature has been tested by a number of people without any serious
+complaints. Promoting the annotations to fields (i.e. to beta) is another step in moving the
+`sysctl` feature closer to a stable API.
+
+Currently, the `sysctl` feature provides the `security.alpha.kubernetes.io/sysctls` and `security.alpha.kubernetes.io/unsafe-sysctls` annotations, which can be used
+in the following way:
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: sysctl-example
+ annotations:
+ security.alpha.kubernetes.io/sysctls: kernel.shm_rmid_forced=1
+ security.alpha.kubernetes.io/unsafe-sysctls: net.ipv4.route.min_pmtu=1000,kernel.msgmax=1 2 3
+ spec:
+ ...
+ ```
+
+ The goal is to transition into native fields on pods:
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: sysctl-example
+ spec:
+ securityContext:
+ sysctls:
+ - name: kernel.shm_rmid_forced
+ value: 1
+ - name: net.ipv4.route.min_pmtu
+ value: 1000
+ unsafe: true
+ - name: kernel.msgmax
+ value: "1 2 3"
+ unsafe: true
+ ...
+ ```
+
+The `sysctl` design document with more details and rationale is available at [design-proposals/node/sysctl.md](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/sysctl.md#pod-api-changes).
+
+## Motivation
+
+As mentioned in [contributors/devel/api_changes.md#alpha-field-in-existing-api-version](https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md#alpha-field-in-existing-api-version):
+
+> Previously, annotations were used for experimental alpha features, but are no longer recommended for several reasons:
+>
+> They expose the cluster to "time-bomb" data added as unstructured annotations against an earlier API server (https://issue.k8s.io/30819)
+> They cannot be migrated to first-class fields in the same API version (see the issues with representing a single value in multiple places in backward compatibility gotchas)
+>
+> The preferred approach adds an alpha field to the existing object, and ensures it is disabled by default:
+>
+> ...
+
+The annotations as a means to set `sysctl` are no longer necessary.
+The original intent of annotations was to provide additional description of Kubernetes
+objects through metadata.
+It's time to separate the ability to annotate from the ability to change sysctl settings,
+so a cluster operator can draw a clear distinction between experimental and supported usage
+of the feature.
+
+### Promote annotations to fields
+
+* Introduce native `sysctl` fields in pods through `spec.securityContext.sysctl` field as:
+
+ ```yaml
+ sysctl:
+ - name: SYSCTL_PATH_NAME
+ value: SYSCTL_PATH_VALUE
+ unsafe: true # optional field
+ ```
+
+* Introduce native `sysctl` fields in [PSP](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) as:
+
+ ```yaml
+ apiVersion: v1
+ kind: PodSecurityPolicy
+ metadata:
+ name: psp-example
+ spec:
+ sysctls:
+ - kernel.shmmax
+ - kernel.shmall
+ - net.*
+ ```
+
+ More examples at [design-proposals/node/sysctl.md#allowing-only-certain-sysctls](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/node/sysctl.md#allowing-only-certain-sysctls)
+
+### Promote `--experimental-allowed-unsafe-sysctls` kubelet flag to kubelet config api option
+
+As there is no longer a need to consider the `sysctl` feature experimental,
+the list of unsafe sysctls can be configured accordingly through:
+
+```go
+// KubeletConfiguration contains the configuration for the Kubelet
+type KubeletConfiguration struct {
+ ...
+ // Whitelist of unsafe sysctls or unsafe sysctl patterns (ending in *).
+ // Default: nil
+ // +optional
+ AllowedUnsafeSysctls []string `json:"allowedUnsafeSysctls,omitempty"`
+}
+```
+
+Upstream issue: https://github.com/kubernetes/kubernetes/issues/61669
+
+### Gate the feature
+
+As the `sysctl` feature stabilizes, it's time to gate the feature [1] and enable it by default.
+
+* Expected feature gate key: `Sysctls`
+* Expected default value: `true`
+
+With the `Sysctls` feature gate enabled, both the sysctl fields in `Pod` and `PodSecurityPolicy`
+and the whitelist of unsafe sysctls are acknowledged.
+If disabled, the fields and the whitelist are just ignored.
+
+[1] https://kubernetes.io/docs/reference/feature-gates/
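+
+For illustration, a kubelet configuration file enabling the gate together with a whitelist of
+unsafe sysctls might look like the sketch below (the `kubelet.config.k8s.io/v1beta1` version
+string and the `featureGates` field are assumptions here; only `allowedUnsafeSysctls` and the
+`Sysctls` gate come from this KEP):
+
+```yaml
+apiVersion: kubelet.config.k8s.io/v1beta1
+kind: KubeletConfiguration
+featureGates:
+  Sysctls: true            # proposed gate, expected to default to true
+allowedUnsafeSysctls:      # proposed promotion of --experimental-allowed-unsafe-sysctls
+- "kernel.msg*"
+- "net.ipv4.route.min_pmtu"
+```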
+
+## Proposal
+
+The concrete changes, user stories, and implementation notes follow.
+
+### User Stories
+
+* As a cluster admin, I want to have the `sysctl` feature versioned so I can ensure backward compatibility
+ and proper transformation between the versioned and internal representations.
+* As a cluster admin, I want to be confident the `sysctl` feature is stable enough and well supported so
+ that applications are properly isolated.
+* As a cluster admin, I want to be able to apply the `sysctl` constraints on the cluster level so
+ I can define the default constraints for all pods.
+
+### Implementation Details/Notes/Constraints
+
+Extending `SecurityContext` struct with `Sysctls` field:
+
+```go
+// PodSecurityContext holds pod-level security attributes and common container settings.
+// Some fields are also present in container.securityContext. Field values of
+// container.securityContext take precedence over field values of PodSecurityContext.
+type PodSecurityContext struct {
+ ...
+ // Sysctls is a white list of allowed sysctls in a pod spec.
+ Sysctls []Sysctl `json:"sysctls,omitempty"`
+}
+```
+
+Extending `PodSecurityPolicySpec` struct with `Sysctls` field:
+
+```go
+// PodSecurityPolicySpec defines the policy enforced on sysctls.
+type PodSecurityPolicySpec struct {
+ ...
+ // Sysctls is a white list of allowed sysctls in a pod spec.
+ Sysctls []Sysctl `json:"sysctls,omitempty"`
+}
+```
+
+Follow the steps in [devel/api_changes.md#alpha-field-in-existing-api-version](https://github.com/kubernetes/community/blob/master/contributors/devel/api_changes.md#alpha-field-in-existing-api-version)
+during implementation.
+
+Validation checks were implemented as part of [#27180](https://github.com/kubernetes/kubernetes/pull/27180).
+
+### Risks and Mitigations
+
+We need to ensure backward compatibility, i.e. object specifications with `sysctl` annotations
+must still work after the graduation.
+
+## Graduation Criteria
+
+* API changes allowing the pod-scoped `sysctl`s to be configured via the `spec.securityContext` field
+* API changes allowing the cluster-scoped `sysctl`s to be configured via the `PodSecurityPolicy` object
+* Promote the `--experimental-allowed-unsafe-sysctls` kubelet flag to a kubelet config API option
+* Feature gate enabled by default
+* e2e tests
+
+## Implementation History
+
+The `sysctl` feature is tracked as part of [features#34](https://github.com/kubernetes/features/issues/34).
+This is one of the goals to promote the annotations to fields.
diff --git a/keps/0009-node-heartbeat.md b/keps/0009-node-heartbeat.md
new file mode 100644
index 00000000..ab8bf5bc
--- /dev/null
+++ b/keps/0009-node-heartbeat.md
@@ -0,0 +1,392 @@
+---
+kep-number: 9
+title: Efficient Node Heartbeat
+authors:
+ - "@wojtek-t"
+ - "with input from @bgrant0607, @dchen1107, @yujuhong, @lavalamp"
+owning-sig: sig-node
+participating-sigs:
+ - sig-scalability
+ - sig-apimachinery
+ - sig-scheduling
+reviewers:
+ - "@deads2k"
+ - "@lavalamp"
+approvers:
+ - "@dchen1107"
+ - "@derekwaynecarr"
+editor: TBD
+creation-date: 2018-04-27
+last-updated: 2018-04-27
+status: implementable
+see-also:
+ - https://github.com/kubernetes/kubernetes/issues/14733
+ - https://github.com/kubernetes/kubernetes/pull/14735
+replaces:
+ - n/a
+superseded-by:
+ - n/a
+---
+
+# Efficient Node Heartbeats
+
+## Table of Contents
+
+* [Efficient Node Heartbeats](#efficient-node-heartbeats)
+ * [Table of Contents](#table-of-contents)
+ * [Summary](#summary)
+ * [Motivation](#motivation)
+ * [Goals](#goals)
+ * [Non-Goals](#non-goals)
+ * [Proposal](#proposal)
+ * [Risks and Mitigations](#risks-and-mitigations)
+ * [Graduation Criteria](#graduation-criteria)
+ * [Implementation History](#implementation-history)
+ * [Alternatives](#alternatives)
+ * [Dedicated “heartbeat” object instead of “leader election” one](#dedicated-heartbeat-object-instead-of-leader-election-one)
+ * [Events instead of dedicated heartbeat object](#events-instead-of-dedicated-heartbeat-object)
+ * [Reuse the Component Registration mechanisms](#reuse-the-component-registration-mechanisms)
+ * [Split Node object into two parts at etcd level](#split-node-object-into-two-parts-at-etcd-level)
+ * [Delta compression in etcd](#delta-compression-in-etcd)
+ * [Replace etcd with other database](#replace-etcd-with-other-database)
+
+## Summary
+
+Node heartbeats are necessary for the correct functioning of a Kubernetes cluster.
+This proposal makes them significantly cheaper from both a scalability and a
+performance perspective.
+
+## Motivation
+
+While running different scalability tests we observed that in big enough clusters
+(more than 2000 nodes) with a non-trivial number of images used by pods on all
+nodes (10-15), we were hitting the etcd limits for its database size. That effectively
+means that etcd enters "alert mode" and stops accepting all write requests.
+
+The underlying root cause is a combination of:
+
+- etcd keeping both the current state and a transaction log with copy-on-write
+- node heartbeats being potentially very large objects (note that images
+ are only one potential problem; the second is volumes, and customers
+ want to mount 100+ volumes to a single node) - they may easily exceed 15kB;
+ even though the patch sent over the network is small, in etcd we store the
+ whole Node object
+- Kubelet sending heartbeats every 10s
+
+This proposal presents a proper solution for that problem.
+
+
+Note that currently (by default):
+
+- Lack of NodeStatus update for `<node-monitor-grace-period>` (default: 40s)
+ results in NodeController marking node as NotReady (pods are no longer
+ scheduled on that node)
+- Lack of NodeStatus updates for `<pod-eviction-timeout>` (default: 5m)
+ results in NodeController starting pod evictions from that node
+
+We would like to preserve that behavior.
+
+
+### Goals
+
+- Reduce size of etcd by making node heartbeats cheaper
+
+### Non-Goals
+
+The following are nice-to-haves, but not primary goals:
+
+- Reduce resource usage (cpu/memory) of control plane (e.g. due to processing
+ less and/or smaller objects)
+- Reduce watch-related load on Node objects
+
+## Proposal
+
+We propose introducing a new `Lease` built-in API in the newly created API group
+`coordination.k8s.io`. To make it easily reusable for other purposes it will
+be namespaced. Its schema will be as follows:
+
+```go
+type Lease struct {
+ metav1.TypeMeta `json:",inline"`
+ // Standard object's metadata.
+ // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // +optional
+ ObjectMeta metav1.ObjectMeta `json:"metadata,omitempty"`
+
+ // Specification of the Lease.
+ // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // +optional
+ Spec LeaseSpec `json:"spec,omitempty"`
+}
+
+type LeaseSpec struct {
+ HolderIdentity string `json:"holderIdentity"`
+ LeaseDurationSeconds int32 `json:"leaseDurationSeconds"`
+ AcquireTime metav1.MicroTime `json:"acquireTime"`
+ RenewTime metav1.MicroTime `json:"renewTime"`
+ LeaseTransitions int32 `json:"leaseTransitions"`
+}
+```
+
+The Spec is effectively the already existing (and thus proven) [LeaderElectionRecord][].
+The only difference is using `MicroTime` instead of `Time` for better precision.
+That would hopefully allow us to go directly to Beta.
+
+We will use that object to represent node heartbeat - for each Node there will
+be a corresponding `Lease` object with Name equal to Node name in a newly
+created dedicated namespace (we considered using `kube-system` namespace but
+decided that it's already too overloaded).
+That namespace should be created automatically (similarly to "default" and
+"kube-system", probably by NodeController) and never be deleted (so that nodes
+don't require permission for it).
+
+We considered using a CRD instead of a built-in API. However, even though CRDs are
+`the new way` for creating new APIs, they don't yet have versioning support
+and are significantly less performant (due to the current lack of protobuf support).
+We also don't know whether we could seamlessly transition storage from a CRD
+to a built-in API if we ran into a performance or any other problems.
+As a result, we decided to proceed with built-in API.
+
+
+With this new API in place, we will change Kubelet so that:
+
+1. Kubelet periodically computes NodeStatus every 10s (as it does now), but that will
+ be independent from reporting status
+1. Kubelet reports NodeStatus if:
+ - there was a meaningful change in it (initially we can probably assume that every
+ change is meaningful, including e.g. images on the node)
+ - or it didn’t report it over the last `node-status-update-period` seconds
+1. Kubelet creates and periodically updates its own Lease object, and the frequency
+ of those updates is independent from the NodeStatus update frequency (see the
+ sketch of a per-node Lease just below).
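+
+As a rough sketch, a per-node heartbeat object could then look as follows (the API version
+string and the dedicated namespace name are illustrative assumptions; the proposal leaves
+the namespace name open):
+
+```yaml
+apiVersion: coordination.k8s.io/v1beta1  # assumed versioning of the new group
+kind: Lease
+metadata:
+  name: node-1                 # equal to the Node name
+  namespace: kube-node-lease   # hypothetical dedicated namespace
+spec:
+  holderIdentity: node-1
+  leaseDurationSeconds: 40     # e.g. aligned with <node-monitor-grace-period>
+  renewTime: "2018-05-01T10:00:10.000000Z"
+```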
+
+In the meantime, we will change `NodeController` to treat both updates of the NodeStatus
+object and updates of the new `Lease` object corresponding to a given
+node as a health signal from that node's Kubelet. This will make it work for both old
+and new Kubelets.
+
+We should also:
+
+1. audit all other existing core controllers to verify whether they also require
+ similar changes in their logic ([ttl controller][] being one of the examples)
+1. change the controller manager to auto-register the new `Lease` API
+1. ensure that the `Lease` object is deleted when the corresponding node is
+ deleted (probably via owner references)
+1. [out-of-scope] migrate all LeaderElection code to use the new `Lease` API
+
+Once all the code changes are done, we will:
+
+1. start updating the `Lease` object every 10s by default, at the same time
+ reducing the frequency of NodeStatus updates initially to 40s by default.
+ We will reduce it further later.
+ Note that this doesn't reduce the frequency with which Kubelet sends "meaningful"
+ changes - it only impacts the frequency of "lastHeartbeatTime" changes.
+ <br> TODO: That still results in higher average QPS. It should be acceptable but
+ needs to be verified.
+1. announce that we are going to reduce frequency of NodeStatus updates further
+ and give people 1-2 releases to switch their code to use `Lease`
+ object (if they relied on frequent NodeStatus changes)
+1. further reduce the NodeStatus update frequency to no less often than once per
+ minute.
+ We can’t stop periodically updating NodeStatus as it would be an API-breaking change,
+ but it’s fine to reduce its frequency (though we should continue writing it at
+ least once per eviction period).
+
+
+To be considered:
+
+1. We may consider reducing frequency of NodeStatus updates to once every 5 minutes
+ (instead of 1 minute). That would help with performance/scalability even more.
+ Caveats:
+ - NodeProblemDetector is currently updating (some) node conditions every 1 minute
+ (unconditionally, because lastHeartbeatTime always changes). To make the reduction
+ of the NodeStatus update frequency really useful, we should also change NPD to
+ work in a similar mode (check periodically if a condition changed, but report only
+ when something changed or no status was reported for a given time) and decrease
+ its reporting frequency too.
+ - In general, we recommend keeping the frequencies of NodeStatus reporting in both
+ Kubelet and NodeProblemDetector in sync (once all changes are done), and
+ that should be reflected in the [NPD documentation][].
+ - Note that reducing the frequency to 1 minute already gives us almost a 6x improvement.
+ It seems more than enough for any foreseeable future, assuming we won’t
+ significantly increase the size of the Node object.
+ Note that if we keep adding node conditions owned by other components, the
+ number of writes to the Node object will go up. But that issue is separate from
+ this proposal.
+
+Other notes:
+
+1. An additional advantage of using Lease for that purpose would be the
+ ability to exclude it from the audit profile and thus reduce the audit log footprint.
+
+[LeaderElectionRecord]: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/client-go/tools/leaderelection/resourcelock/interface.go#L37
+[ttl controller]: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/ttl/ttl_controller.go#L155
+[NPD documentation]: https://kubernetes.io/docs/tasks/debug-application-cluster/monitor-node-health/
+[kubernetes/kubernetes#63667]: https://github.com/kubernetes/kubernetes/issues/63677
+
+### Risks and Mitigations
+
+Reducing the default frequency of NodeStatus updates may potentially break clients
+relying on frequent Node object updates. However, in non-managed solutions, customers
+will still be able to restore the previous behavior by setting appropriate flag values.
+Thus, changing the defaults to what we recommend is the path to go with.
+
+## Graduation Criteria
+
+The API can be immediately promoted to Beta, as it is effectively a copy of
+the already existing LeaderElectionRecord. It will be promoted to GA once it has spent
+a sufficient amount of time as Beta with no changes.
+
+The changes in component logic (Kubelet, NodeController) should be done behind
+a feature gate. We suggest enabling the gate by default once the feature is
+implemented.
+
+## Implementation History
+
+- YYYY-MM-DD: KEP Summary, Motivation and Proposal merged
+
+## Alternatives
+
+We considered a number of alternatives; the most important ones are mentioned below.
+
+### Dedicated “heartbeat” object instead of “leader election” one
+
+Instead of introducing and using the “lease” object, we considered
+introducing a dedicated “heartbeat” object for that purpose. Apart from that,
+all the details of the solution remain pretty much the same.
+
+Pros:
+
+- Conceptually easier to understand what the object is for
+
+Cons:
+
+- Introduces a new, narrow-purpose API. Lease is already used by other
+ components, implemented using annotations on Endpoints and ConfigMaps.
+
+### Events instead of dedicated heartbeat object
+
+Instead of introducing a dedicated object, we considered using the “Event” object
+for that purpose. At a high level the solution looks very similar.
+The differences from the initial proposal are:
+
+- we use existing “Event” api instead of introducing a new API
+- we create a dedicated namespace; events that should be treated as a health
+ signal by NodeController will be written by Kubelets (unconditionally) to that
+ namespace
+- NodeController will be watching only Events from that namespace to avoid
+ processing all events in the system (the volume of all events will be huge)
+- dedicated namespace also helps with security - we can give access to write to
+ that namespace only to Kubelets
+
+Pros:
+
+- No need to introduce a new API
+ - We could use that approach much earlier as a result.
+- We already need to optimize event throughput - the separate etcd instance we have
+ for them may help with tuning
+- Low-risk roll-forward/roll-back: no new objects are involved (the node controller
+ starts watching events, kubelet just reduces the frequency of heartbeats)
+
+Cons:
+
+- Events are conceptually “best-effort” in the system:
+ - they may be silently dropped in case of problems in the system (the event recorder
+ library doesn’t retry on errors, e.g. to not make things worse when the control plane
+ is starved)
+ - currently, components reporting events don’t even know whether reporting succeeded (the
+ library is built in a way that you throw the event into it and are not notified whether
+ it was successfully submitted or not).
+ A Kubelet sending any other update has full control over how/whether to retry errors.
+ - the lack of fairness mechanisms means that even when some events are being successfully
+ sent, there is no guarantee that any event from a given Kubelet will be submitted
+ over a given time period.
+ So this would require a different mechanism of reporting those “heartbeat” events.
+- Once we have a “request priority” concept, events should probably have the lowest one,
+ while node heartbeats are some of the most important data in the system. Even though no
+ particular heartbeat is important, the guarantee that some heartbeats will be successfully
+ sent is crucial (not delivering any of them will result in unnecessary
+ evictions or not-scheduling to a given node). So heartbeats should be of the highest
+ priority.
+- No core component in the system is currently watching events
+ - it would make the system’s operation harder to explain
+- Users watch Node objects for heartbeats (even though we didn’t recommend it).
+ Introducing a new object for the purpose of heartbeats will allow those users to
+ migrate, while using events for that purpose breaks that ability. (Watching events
+ may also put us in a tough situation for performance reasons.)
+- Deleting all events (e.g. event etcd failure + playbook response) should continue to
+ not cause a catastrophic failure and the design will need to account for this.
+
+### Reuse the Component Registration mechanisms
+
+Kubelet is one of the control-plane components (a shared controller). Some time ago, the Component
+Registration proposal converged into three parts:
+
+- Introducing an API for registering non-pod endpoints, including readiness information: #18610
+- Changing endpoints controller to also watch those endpoints
+- Identifying some of those endpoints as “components”
+
+We could reuse that mechanism to represent Kubelets via the non-pod endpoints API.
+
+Pros:
+
+- Utilizes the desired API
+
+Cons:
+
+- Requires introducing that new API
+- Stabilizing the API would take some time
+- Implementing that API requires multiple changes in different components
+
+### Split Node object into two parts at etcd level
+
+We may stick to the existing Node API and solve the problem at the storage layer. At a
+high level, this means splitting the Node object into two parts in etcd (the frequently
+modified part and the rest).
+
+Pros:
+
+- No need to introduce new API
+- No need to change any components other than kube-apiserver
+
+Cons:
+
+- Very complicated to support watch
+- Not very generic (e.g. splitting Spec and Status doesn’t help; it needs to be exactly the
+ heartbeat part)
+- [minor] Doesn’t reduce amount of data that should be processed in the system (writes,
+ reads, watches, …)
+
+### Delta compression in etcd
+
+An alternative to the above is solving this completely at the etcd layer. To
+achieve that, instead of storing full updates in the etcd transaction log, we would just
+store “deltas” and snapshot the whole object only every X seconds/minutes.
+
+Pros:
+
+- Doesn’t require any changes to any Kubernetes components
+
+Cons:
+
+- Computing deltas is tricky (etcd doesn’t understand the Kubernetes data model, and
+ the delta between two protobuf-encoded objects is not necessarily small)
+- May require a major rewrite of etcd code and not even be accepted by its maintainers
+- More expensive computationally to get an object in a given resource version (which
+ is what e.g. watch is doing)
+
+### Replace etcd with other database
+
+Instead of using etcd, we may also consider using some other open-source solution.
+
+Pros:
+
+- Doesn’t require new API
+
+Cons:
+
+- We don’t even know if there exists a solution that solves our problems and can be used.
+- Migration will take us years.
diff --git a/keps/NEXT_KEP_NUMBER b/keps/NEXT_KEP_NUMBER
index 45a4fb75..b1bd38b6 100644
--- a/keps/NEXT_KEP_NUMBER
+++ b/keps/NEXT_KEP_NUMBER
@@ -1 +1 @@
-8
+13
diff --git a/keps/sig-cli/0008-kustomize.md b/keps/sig-cli/0008-kustomize.md
new file mode 100644
index 00000000..e014896d
--- /dev/null
+++ b/keps/sig-cli/0008-kustomize.md
@@ -0,0 +1,222 @@
+---
+kep-number: 8
+title: Kustomize
+authors:
+ - "@pwittrock"
+ - "@monopole"
+owning-sig: sig-cli
+participating-sigs:
+ - sig-cli
+reviewers:
+ - "@droot"
+approvers:
+ - "@maciej"
+editor: "@droot"
+creation-date: 2018-05-05
+last-updated: 2018-05-05
+status: implemented
+see-also:
+ - n/a
+replaces:
+ - kinflate # Old name for kustomize
+superseded-by:
+ - n/a
+---
+
+# Kustomize
+
+## Table of Contents
+
+- [Kustomize](#kustomize)
+ - [Table of Contents](#table-of-contents)
+ - [Summary](#summary)
+ - [Motivation](#motivation)
+ - [Goals](#goals)
+ - [Non-Goals](#non-goals)
+ - [Proposal](#proposal)
+ - [Implementation Details/Notes/Constraints [optional]](#implementation-detailsnotesconstraints-optional)
+ - [Risks and Mitigations](#risks-and-mitigations)
+ - [Risks of Not Having a Solution](#risks-of-not-having-a-solution)
+ - [Graduation Criteria](#graduation-criteria)
+ - [Implementation History](#implementation-history)
+ - [Drawbacks](#drawbacks)
+ - [Alternatives](#alternatives)
+ - [FAQ](#faq)
+
+## Summary
+
+Declarative specification of Kubernetes objects is the recommended way to manage Kubernetes
+production workloads; however, gaps in the kubectl tooling force users to write their own scripting and
+tooling to augment the declarative tools with preprocessing transformations.
+While most of these transformations already exist as imperative kubectl commands, they are not natively accessible
+from a declarative workflow.
+
+This KEP describes how `kustomize` addresses this problem by providing a declarative format that gives users
+access to the imperative kubectl commands they are already familiar with, natively from declarative workflows.
+
+## Motivation
+
+The kubectl command provides a cli for:
+
+- accessing the Kubernetes APIs through json or yaml configuration
+- porcelain commands for generating and transforming configuration from command-line flags
+
+Examples:
+
+- Generate a configmap or secret from a text or binary file
+ - `kubectl create configmap`, `kubectl create secret`
+ - Users can manage their configmaps and secrets as text and binary files
+
+- Create or update fields that cut across other fields and objects
+ - `kubectl label`, `kubectl annotate`
+ - Users can add and update labels for all objects composing an application
+
+- Transform an existing declarative configuration without forking it
+ - `kubectl patch`
+ - Users may generate multiple variations of the same workload
+
+- Transform live resources arbitrarily without auditing
+ - `kubectl edit`
+
+To create a Secret from a binary file, users must first base64 encode the binary file and then create a Secret yaml
+config from the resulting data. Because the source of truth is actually the binary file, not the config,
+users must write scripting and tooling to keep the two sources consistent.
+
+Instead, users should be able to access the simple, but necessary, functionality available in the imperative
+kubectl commands from their declarative workflow.
+
+### Long-standing issues
+
+Kustomize addresses a number of long standing issues in kubectl.
+
+- Declarative enumeration of multiple files [kubernetes/kubernetes#24649](https://github.com/kubernetes/kubernetes/issues/24649)
+- Declarative configmap and secret creation: [kubernetes/kubernetes#24744](https://github.com/kubernetes/kubernetes/issues/24744), [kubernetes/kubernetes#30337](https://github.com/kubernetes/kubernetes/issues/30337)
+- Configmap rollouts: [kubernetes/kubernetes#22368](https://github.com/kubernetes/kubernetes/issues/22368)
+ - [Example in kustomize](https://github.com/kubernetes-sigs/kustomize/tree/master/examples/helloWorld#how-this-works-with-kustomize)
+- Name/label scoping and safer pruning: [kubernetes/kubernetes#1698](https://github.com/kubernetes/kubernetes/issues/1698)
+ - [Example in kustomize](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/breakfast.md#demo-configure-breakfast)
+- Template-free add-on customization: [kubernetes/kubernetes#23233](https://github.com/kubernetes/kubernetes/issues/23233)
+ - [Example in kustomize](https://github.com/kubernetes-sigs/kustomize/tree/master/examples/helloWorld#staging-kustomization)
+
+### Goals
+
+- Declarative support for defining ConfigMaps and Secrets generated from binary and text files
+- Declarative support for adding or updating cross-cutting fields
+ - labels & selectors
+ - annotations
+ - names (as transformation of the original name)
+- Declarative support for applying patches to transform arbitrary fields
+ - use strategic-merge-patch format
+- Ease of integration with CI/CD systems that maintain configuration in a version control repository
+ as a single source of truth, and take action (build, test, deploy, etc.) when that truth changes (gitops).
+
+### Non-Goals
+
+#### Exposing every imperative kubectl command in a declarative fashion
+
+The scope of kustomize is limited only to functionality gaps that would otherwise prevent users from
+defining their workloads in a purely declarative manner (e.g. without writing scripts to perform pre-processing
+or linting). Commands such as `kubectl run`, `kubectl create deployment` and `kubectl edit` are unnecessary
+in a declarative workflow because a Deployment can easily be managed as declarative config.
+
+#### Providing a simpler facade on top of the Kubernetes APIs
+
+The community has developed a number of facades in front of the Kubernetes APIs using
+templates or DSLs. Attempting to provide an alternative interface to the Kubernetes API is
+a non-goal. Instead the focus is on:
+
+- Facilitating simple cross-cutting transformations on the raw config that would otherwise require other tooling such
+ as *sed*
+- Generating configuration when the source of truth resides elsewhere
+- Patching existing configuration with transformations
+
+## Proposal
+
+### Capabilities
+
+**Note:** This proposal has already been implemented in `github.com/kubernetes/kubectl`.
+
+Define a new meta config format called *kustomization.yaml*.
+
+#### *kustomization.yaml* will allow users to reference config files
+
+- Path to config yaml file (similar to `kubectl apply -f <file>`)
+- Urls to config yaml file (similar to `kubectl apply -f <url>`)
+- Path to *kustomization.yaml* file (takes the output of running kustomize)
+
+#### *kustomization.yaml* will allow users to generate configs from files
+
+- ConfigMap (`kubectl create configmap`)
+- Secret (`kubectl create secret`)
+
+#### *kustomization.yaml* will allow users to apply transformations to configs
+
+- Label (`kubectl label`)
+- Annotate (`kubectl annotate`)
+- Strategic-Merge-Patch (`kubectl patch`)
+- Name-Prefix
+
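+A minimal *kustomization.yaml* combining these capabilities might look like the sketch
+below (the field names are illustrative of the capabilities above and may not match the
+exact schema shipped by kustomize):
+
+```yaml
+# reference existing config files
+resources:
+- deployment.yaml
+- service.yaml
+# generate a ConfigMap from a file (cf. `kubectl create configmap`)
+configMapGenerator:
+- name: app-config
+  files:
+  - app.properties
+# cross-cutting transformations
+namePrefix: staging-
+commonLabels:
+  app: hello
+# strategic-merge-patch transformation (cf. `kubectl patch`)
+patches:
+- replica-patch.yaml
+```
+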
+### UX
+
+Kustomize will also contain subcommands to facilitate authoring *kustomization.yaml*.
+
+#### Edit
+
+The edit subcommands will allow users to modify the *kustomization.yaml* through cli commands containing
+helpful messaging and documentation.
+
+- Add ConfigMap - like `kubectl create configmap` but declarative in *kustomization.yaml*
+- Add Secret - like `kubectl create secret` but declarative in *kustomization.yaml*
+- Add Resource - adds a file reference to *kustomization.yaml*
+- Set NamePrefix - adds NamePrefix declaration to *kustomization.yaml*
+
+#### Diff
+
+The diff subcommand will allow users to see a diff of the original and transformed configuration files
+
+- Generated config (configmap) will show the files as created
+- Transformations (name prefix) will show the files as modified
+
+### Implementation Details/Notes/Constraints [optional]
+
+Kustomize has already been implemented in the `github.com/kubernetes/kubectl` repo, and should be moved to a
+separate repo for the subproject.
+
+Kustomize was initially developed as its own CLI; once it has matured, it should be published
+as a subcommand of kubectl or as a statically linked plugin. It should also be more tightly integrated with apply.
+
+- Create the *kustomize* sig-cli subproject and update sigs.yaml
+- Move the existing kustomize code from `github.com/kubernetes/kubectl` to `github.com/kubernetes-sigs/kustomize`
+
+### Risks and Mitigations
+
+
+### Risks of Not Having a Solution
+
+By not providing a viable option for working directly with Kubernetes APIs as json or
+yaml config, we risk the ecosystem becoming fragmented with various bespoke API facades.
+By ensuring the raw Kubernetes API json or yaml is a usable approach for declaratively
+managing applications, even tools that do not use the Kubernetes API as their native format can
+better work with one another through transformation to a common format.
+
+## Graduation Criteria
+
+- Dogfood kustomize by either:
+ - moving one or more of our own (OSS Kubernetes) services to it.
+ - getting user feedback from one or more mid or large application deployments using kustomize.
+- Publish kustomize as a subcommand of kubectl.
+
+## Implementation History
+
+kustomize was implemented in the kubectl repo before subprojects became a first class thing in Kubernetes.
+The code has been fully implemented, but it must be moved to a proper location.
+
+## Drawbacks
+
+
+## Alternatives
+
+1. Users write their own bespoke scripts to generate and transform the config before it is applied.
+2. Users don't work with the API directly, and use or develop DSLs for interacting with Kubernetes.
+
+## FAQ
diff --git a/keps/0002-controller-manager.md b/keps/sig-cloud-provider/0002-cloud-controller-manager.md
index 1316b123..c3ce25c2 100644
--- a/keps/0002-controller-manager.md
+++ b/keps/sig-cloud-provider/0002-cloud-controller-manager.md
@@ -4,18 +4,22 @@ title: Cloud Provider Controller Manager
authors:
- "@cheftako"
- "@calebamiles"
+ - "@hogepodge"
owning-sig: sig-apimachinery
participating-sigs:
- sig-apps
- sig-aws
- sig-azure
+ - sig-cloud-provider
- sig-gcp
- sig-network
- sig-openstack
- sig-storage
reviewers:
- - "@wlan0"
+ - "@andrewsykim"
- "@calebamiles"
+ - "@hogepodge"
+ - "@jagosan"
approvers:
- "@thockin"
editor: TBD
@@ -41,16 +45,21 @@ replaces:
- [API Server Changes](#api-server-changes)
- [Volume Management Changes](#volume-management-changes)
- [Deployment Changes](#deployment-changes)
+ - [Implementation Details/Notes/Constraints](#implementation-detailsnotesconstraints)
+ - [Repository Requirements](#repository-requirements)
+ - [Notes for Repository Requirements](#notes-for-repository-requirements)
+ - [Repository Timeline](#repository-timeline)
- [Security Considerations](#security-considerations)
- [Graduation Criteria](#graduation-criteria)
- [Graduation to Beta](#graduation-to-beta)
- [Process Goals](#process-goals)
+ - [Implementation History](#implementation-history)
- [Alternatives](#alternatives)
## Summary
We want to remove any cloud provider specific logic from the kubernetes/kubernetes repo. We want to restructure the code
-to make is easy for any cloud provider to extend the kubernetes core in a consistent manner for their cloud. New cloud
+to make it easy for any cloud provider to extend the kubernetes core in a consistent manner for their cloud. New cloud
providers should look at the [Creating a Custom Cluster from Scratch](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider)
and the [cloud provider interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go#L31)
which will need to be implemented.
@@ -208,8 +217,8 @@ taints.
### API Server Changes
-Finally, in the kube-apiserver, the cloud provider is used for transferring SSH keys to all of the nodes, and within an a
-dmission controller for setting labels on persistent volumes.
+Finally, in the kube-apiserver, the cloud provider is used for transferring SSH keys to all of the nodes, and within an
+admission controller for setting labels on persistent volumes.
Kube-apiserver uses the cloud provider for two purposes
@@ -220,7 +229,7 @@ Kube-apiserver uses the cloud provider for two purposes
Volumes need cloud providers, but they only need **specific** cloud providers. The majority of volume management logic
resides in the controller manager. These controller loops need to be moved into the cloud-controller manager. The cloud
-controller manager also needs a mechanism to read parameters for initilization from cloud config. This can be done via
+controller manager also needs a mechanism to read parameters for initialization from cloud config. This can be done via
config maps.
There are two entirely different approach to refactoring volumes -
@@ -257,6 +266,102 @@ In case of the cloud-controller-manager, the deployment should be deleted using
kubectl delete -f cloud-controller-manager.yml
```
+### Implementation Details/Notes/Constraints
+
+#### Repository Requirements
+
+**This is a proposed structure, and may change during the 1.11 release cycle.
+WG-Cloud-Provider will work with individual sigs to refine these requirements
+to maintain consistency while meeting the technical needs of the provider
+maintainers**
+
+Each cloud provider hosted within the `kubernetes` organization shall have a
+single repository named `kubernetes/cloud-provider-<provider_name>`. Those
+repositories shall have the following structure:
+
+* A `cloud-controller-manager` subdirectory that contains the implementation
+ of the provider-specific cloud controller.
+* A `docs` subdirectory.
+* A `docs/cloud-controller-manager.md` file that describes the options and
+ usage of the cloud controller manager code.
+* A `docs/testing.md` file that describes how the provider code is tested.
+* A `Makefile` with a `test` entrypoint to run the provider tests.
+
+Additionally, the repository should have:
+
+* A `docs/getting-started.md` file that describes the installation and basic
+ operation of the cloud controller manager code.
+
+Where the provider has additional capabilities, the repository should have
+the following subdirectories that contain the common features:
+
+* `dns` for DNS provider code.
+* `cni` for the Container Network Interface (CNI) driver.
+* `csi` for the Container Storage Interface (CSI) driver.
+* `flex` for the Flex Volume driver.
+* `installer` for custom installer code.
+
+Each repository may have additional directories and files that are used for
+additional features, including but not limited to:
+
+* Other provider specific testing.
+* Additional documentation, including examples and developer documentation.
+* Dependencies on provider-hosted or other external code.
+
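+Putting these requirements together, a conforming repository might be laid out as follows
+(the provider name and the optional subdirectories are only illustrative):
+
+```
+cloud-provider-openstack/
+├── Makefile                         # with a `test` entrypoint
+├── cloud-controller-manager/        # provider-specific cloud controller implementation
+├── docs/
+│   ├── cloud-controller-manager.md  # options and usage
+│   ├── testing.md                   # how the provider code is tested
+│   └── getting-started.md           # recommended
+├── dns/                             # optional: DNS provider code
+├── cni/                             # optional: CNI driver
+├── csi/                             # optional: CSI driver
+└── flex/                            # optional: Flex Volume driver
+```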
+
+##### Notes for Repository Requirements
+
+The purpose of these requirements is to define a common structure for the
+cloud provider repositories owned by current and future cloud provider SIGs.
+In accordance with the
+[WG-Cloud-Provider Charter](https://docs.google.com/document/d/1m4Kvnh_u_9cENEE9n1ifYowQEFSgiHnbw43urGJMB64/edit#)
+to "define a set of common expected behaviors across cloud providers", this
+proposal defines the location and structure of commonly expected code.
+
+As each provider can and will have additional features that go beyond expected
+common code, requirements only apply to the location of the
+following code:
+
+* Cloud Controller Manager implementations.
+* Documentation.
+
+This document may be amended with additional locations that relate to enabling
+consistent upstream testing, independent storage drivers, and other code with
+common integration hooks.
+
+The development of the
+[Cloud Controller Manager](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager)
+and
+[Cloud Provider Interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go)
+has enabled the provider SIGs to develop external providers that
+capture the core functionality of the upstream providers. By defining the
+expected locations and naming conventions of the external provider code,
+we will create a consistent experience for:
+
+* Users of the providers, who will have easily understandable conventions for
+ discovering and using all of the providers.
+* SIG-Docs, who will have a common hook for building or linking to externally
+ managed documentation.
+* SIG-Testing, who will be able to use common entry points for enabling
+ provider-specific e2e testing.
+* Future cloud provider authors, who will have a common framework and examples
+ from which to build and share their code base.
+
+##### Repository Timeline
+
+To facilitate community development, providers named in the
+[Makes SIGs responsible for implementations of `CloudProvider`](https://github.com/kubernetes/community/pull/1862)
+patch can immediately migrate their external provider work into their named
+repositories.
+
+Each provider will work to implement the required structure during the
+Kubernetes 1.11 development cycle, with conformance by the 1.11 release.
+WG-Cloud-Provider may actively change repository requirements during the
+1.11 release cycle to respond to collective SIG technical needs.
+
+After the 1.11 release all current and new provider implementations must
+conform with the requirements outlined in this document.
+
### Security Considerations
Make sure that you consider the impact of this feature from the point of view of Security.
@@ -307,6 +412,20 @@ is proposed to
- serve as a repository for user experience reports related to Cloud Providers
which live within the Kubernetes GitHub organization or desire to do so
+Major milestones:
+
+- March 18, 2018: Accepted proposal for repository requirements.
+
The ultimate intention of WG Cloud Provider is to prevent multiple classes
of software purporting to be an implementation of the Cloud Provider interface
from fracturing the Kubernetes Community while also ensuring that new Cloud
diff --git a/keps/sig-cluster-lifecycle/0008-20180504-kubeadm-config-beta.md b/keps/sig-cluster-lifecycle/0008-20180504-kubeadm-config-beta.md
new file mode 100644
index 00000000..f2009693
--- /dev/null
+++ b/keps/sig-cluster-lifecycle/0008-20180504-kubeadm-config-beta.md
@@ -0,0 +1,145 @@
+---
+kep-number: draft-20180412
+title: Kubeadm Config Draft
+authors:
+ - "@liztio"
+owning-sig: sig-cluster-lifecycle
+participating-sigs: []
+reviewers:
+ - "@timothysc"
+approvers:
+ - TBD
+editor: TBD
+creation-date: 2018-04-12
+last-updated: 2018-04-12
+status: draft
+see-also: []
+replaces: []
+superseded-by: []
+---
+
+# Kubeadm Config to Beta
+
+## Table of Contents
+
+<!-- markdown-toc start - Don't edit this section. Run M-x markdown-toc-refresh-toc -->
+**Table of Contents**
+
+- [Kubeadm Config to Beta](#kubeadm-config-to-beta)
+ - [Table of Contents](#table-of-contents)
+ - [Summary](#summary)
+ - [Motivation](#motivation)
+ - [Goals](#goals)
+ - [Non-Goals](#non-goals)
+ - [Proposal](#proposal)
+ - [User Stories [optional]](#user-stories-optional)
+ - [As a user upgrading with Kubeadm, I want the upgrade process to not fail with unfamiliar configuration.](#as-a-user-upgrading-with-kubeadm-i-want-the-upgrade-process-to-not-fail-with-unfamiliar-configuration)
+ - [As an infrastructure system using kubeadm, I want to be able to write configuration files that always work.](#as-an-infrastructure-system-using-kubeadm-i-want-to-be-able-to-write-configuration-files-that-always-work)
+ - [Implementation Details/Notes/Constraints](#implementation-detailsnotesconstraints)
+ - [Risks and Mitigations](#risks-and-mitigations)
+ - [Graduation Criteria](#graduation-criteria)
+ - [Implementation History](#implementation-history)
+ - [Alternatives](#alternatives)
+
+<!-- markdown-toc end -->
+
+## Summary
+
+Kubeadm uses MasterConfiguration for two distinct but similar operations: initialising a new cluster and upgrading an existing cluster.
+The former is typically created by hand by an administrator.
+It is stored on disk and passed to `kubeadm init` via a command line flag.
+The latter is produced by kubeadm using supplied configuration files, command line options, and internal defaults.
+It will be stored in a ConfigMap so upgrade operations can find it.
+
+Right now the configuration format is unversioned.
+This means configuration file formats can change between kubeadm versions and there's no safe way to update the configuration format.
+
+We propose a stable versioning of this configuration, `v1alpha2` and eventually `v1beta1`.
+Version information will be _mandatory_ going forward, both for user-generated configuration files and machine-generated configuration maps.
+
+There is an [existing document][config] describing current Kubernetes best practices around component configuration.
+
+[config]: https://docs.google.com/document/d/1FdaEJUEh091qf5B98HM6_8MS764iXrxxigNIdwHYW9c/edit#heading=h.nlhhig66a0v6
+
+## Motivation
+
+After 1.10.0, we discovered a bug in the upgrade process.
+The `MasterConfiguration` embedded a [struct that had changed][proxyconfig], which caused a backwards-incompatible change to the configuration format.
+This caused `kubeadm upgrade` to fail, because a newer version of kubeadm was attempting to deserialise an older version of the struct.
+
+Because the configuration is often written and read by different versions of kubeadm compiled against different versions of Kubernetes,
+it's very important for this configuration file to be well-versioned.
+
+[proxyconfig]: https://github.com/kubernetes/kubernetes/commit/57071d85ee2c27332390f0983f42f43d89821961
+
+### Goals
+
+* kubeadm init fails if a configuration file isn't versioned
+* the config map written out contains a version
+* the configuration struct does not embed any other structs
+* existing configuration files are converted on upgrade to a known, stable version
+* structs should be sparsely populated
+* all structs should have reasonable defaults so an empty config is still sensible
+
+### Non-Goals
+
+* kubeadm is able to read and write configuration files for older and newer versions of kubernetes than it was compiled with
+* substantially changing the schema of the `MasterConfiguration`
+
+## Proposal
+
+The concrete proposal is as follows.
+
+1. Immediately start writing Kind and Version information into the `MasterConfiguration` struct.
+2. Define the previous (1.9) version of the struct as `v1alpha1`.
+3. Duplicate the KubeProxyConfig struct that caused the schema change, adding the old version to the `v1alpha1` struct.
+4. Create a new `v1alpha2` directory mirroring the existing [`v1alpha1`][v1alpha1], which matches the 1.10 schema.
+   This version need not duplicate the file as well.
+5. Warn users if their configuration files do not have a version and kind.
+6. Use [apimachinery's conversion][conversion] library to design migrations from the old (v1alpha1) version to the new (v1alpha2) version.
+7. Determine the changes for v1beta1.
+8. With v1beta1, enforce the presence of version numbers in config files and ConfigMaps, erroring if not present.
+
+[conversion]: https://godoc.org/k8s.io/apimachinery/pkg/conversion
+[v1alpha1]: https://github.com/kubernetes/kubernetes/tree/d7d4381961f4eb2a4b581160707feb55731e324e/cmd/kubeadm/app/apis/kubeadm
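+
+As a sketch of the conversion step, the pattern looks roughly like the following. The types below are simplified stand-ins for the real kubeadm API types; the real conversions follow the function signatures used with the apimachinery conversion package:
+
+```go
+package conversion
+
+// Simplified stand-ins for the real kubeadm API types; field names are
+// illustrative only.
+type V1alpha1MasterConfiguration struct {
+    KubernetesVersion string
+    // The embedded, externally-owned KubeProxyConfiguration is elided.
+}
+
+type V1alpha2MasterConfiguration struct {
+    KubernetesVersion string
+}
+
+// ConvertV1alpha1ToV1alpha2 copies fields explicitly, one by one, so that a
+// change to a struct elsewhere in the project cannot silently change the
+// serialized configuration format.
+func ConvertV1alpha1ToV1alpha2(in *V1alpha1MasterConfiguration, out *V1alpha2MasterConfiguration) error {
+    out.KubernetesVersion = in.KubernetesVersion
+    return nil
+}
+```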
+
+### User Stories [optional]
+
+#### As a user upgrading with Kubeadm, I want the upgrade process to not fail with unfamiliar configuration.
+
+In the past, the haphazard nature of the versioning system meant it was hard to provide strong guarantees between versions.
+Implementing strong version guarantees means any given configuration generated in the past by kubeadm will work with a future version of kubeadm.
+Deprecations can happen in the future in well-regulated ways.
+
+#### As an infrastructure system using kubeadm, I want to be able to write configuration files that always work.
+
+Having a configuration file format that changes without notice makes it very difficult to write software that integrates with kubeadm.
+By providing strong version guarantees, we can ensure that the files these tools produce will work with a given version of kubeadm.
+
+### Implementation Details/Notes/Constraints
+
+The incident that caused the breakage in alpha wasn't a field changed in kubeadm; it was a struct [referenced][struct] inside the `MasterConfiguration` struct.
+By completely owning our own configuration, we ensure that changes in the rest of the project can't unknowingly affect us.
+When we do need to interface with the rest of the project, we will do so explicitly in code and be protected by the compiler.
+
+[struct]: https://github.com/kubernetes/kubernetes/blob/d7d4381961f4eb2a4b581160707feb55731e324e/cmd/kubeadm/app/apis/kubeadm/v1alpha1/types.go#L285
+
+### Risks and Mitigations
+
+Moving to a strongly versioned configuration from a weakly versioned one must be done carefully so as not to break kubeadm for existing users.
+We can start requiring versions of the existing `v1alpha1` format, issuing warnings to users when Version and Kind aren't present.
+These fields can be used today; they're simply ignored.
+In the future, we could require them, and transition to using `v1alpha2`.
+
+## Graduation Criteria
+
+This KEP can be considered complete once all currently supported versions of Kubeadm write out `v1beta1`-version structs.
+
+## Implementation History
+
+## Alternatives
+
+Rather than creating our own copies of all structs in the `MasterConfiguration` struct, we could instead continue embedding the structs.
+To provide our guarantees, we would have to invest a lot more in automated testing for upgrades.
diff --git a/keps/sig-network/0010-20180314-coredns-GA-proposal.md b/keps/sig-network/0010-20180314-coredns-GA-proposal.md
new file mode 100644
index 00000000..54494eea
--- /dev/null
+++ b/keps/sig-network/0010-20180314-coredns-GA-proposal.md
@@ -0,0 +1,126 @@
+---
+kep-number: 10
+title: Graduate CoreDNS to GA
+authors:
+ - "@johnbelamaric"
+ - "@rajansandeep"
+owning-sig: sig-network
+participating-sigs:
+ - sig-cluster-lifecycle
+reviewers:
+ - "@bowei"
+ - "@thockin"
+approvers:
+ - "@thockin"
+editor: "@rajansandeep"
+creation-date: 2018-03-21
+last-updated: 2018-05-18
+status: provisional
+see-also: https://github.com/kubernetes/community/pull/2167
+---
+
+# Graduate CoreDNS to GA
+
+## Table of Contents
+
+* [Summary](#summary)
+* [Motivation](#motivation)
+ * [Goals](#goals)
+ * [Non-Goals](#non-goals)
+* [Proposal](#proposal)
+ * [Use Cases](#use-cases)
+* [Graduation Criteria](#graduation-criteria)
+* [Implementation History](#implementation-history)
+
+## Summary
+
+CoreDNS is a sister CNCF project and is the successor to SkyDNS, on which kube-dns is based. It is a flexible, extensible
+authoritative DNS server and directly integrates with the Kubernetes API. It can serve as cluster DNS,
+complying with the [dns spec](https://git.k8s.io/dns/docs/specification.md). As an independent project,
+it is more actively developed than kube-dns and offers performance and functionality beyond what kube-dns has. For more details, see the [introductory presentation](https://docs.google.com/presentation/d/1v6Coq1JRlqZ8rQ6bv0Tg0usSictmnN9U80g8WKxiOjQ/edit#slide=id.g249092e088_0_181), or [coredns.io](https://coredns.io), or the [CNCF webinar](https://youtu.be/dz9S7R8r5gw).
+
+Currently, we are following the road-map defined [here](https://github.com/kubernetes/features/issues/427). CoreDNS is Beta in Kubernetes v1.10, where it can be installed as an alternative to kube-dns.
+The purpose of this proposal is to graduate CoreDNS to GA.
+
+## Motivation
+
+* CoreDNS is more flexible and extensible than kube-dns.
+* CoreDNS is easily extensible and maintainable using a plugin architecture.
+* CoreDNS has fewer moving parts than kube-dns, taking advantage of the plugin architecture, making it a single executable and single process.
+* It is written in Go, making it memory-safe (kube-dns includes dnsmasq which is not).
+* CoreDNS has [better performance](https://github.com/kubernetes/community/pull/1100#issuecomment-337747482) than [kube-dns](https://github.com/kubernetes/community/pull/1100#issuecomment-338329100) in terms of greater QPS, lower latency, and lower memory consumption.
+
+### Goals
+
+* Graduate CoreDNS to GA.
+* Make CoreDNS available as an image in a Kubernetes repository (To Be Defined) and ensure a workflow/process to update the CoreDNS versions in the future.
+ May be deferred to [next KEP](https://github.com/kubernetes/community/pull/2167) if goal not achieved in time.
+* Provide a kube-dns to CoreDNS upgrade path with configuration translation in `kubeadm`.
+* Provide a CoreDNS to CoreDNS upgrade path in `kubeadm`.
+
+### Non-Goals
+
+* Translation of CoreDNS ConfigMap back to kube-dns (i.e., downgrade).
+* Translation configuration of kube-dns to equivalent CoreDNS that is defined outside of the kube-dns ConfigMap. For example, modifications to the manifest or `dnsmasq` configuration.
+* Fate of kube-dns in future releases, i.e. deprecation path.
+* Making [CoreDNS the default](https://github.com/kubernetes/community/pull/2167) in every installer.
+
+## Proposal
+
+The proposed solution is to enable the selection of CoreDNS as a GA cluster service discovery DNS for Kubernetes.
+Some of the most used deployment tools have been upgraded by the CoreDNS team, in cooperation with the owners of these tools, to be able to deploy CoreDNS:
+* kubeadm
+* kube-up
+* minikube
+* kops
+
+For other tools, each maintainer would have to add support for upgrading to CoreDNS.
+
+### Use Cases
+
+* CoreDNS supports all functionality of kube-dns and also addresses [several use-cases kube-dns lacks](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/coredns.md#use-cases). Some of these use cases are as follows:
+ * Supporting [Autopath](https://coredns.io/plugins/autopath/), which reduces the high query load caused by the long DNS search path in Kubernetes.
+ * Making an alias for an external name [#39792](https://github.com/kubernetes/kubernetes/issues/39792)
+
+* By default, the user experience would be unchanged. For more advanced uses, existing users would need to modify the ConfigMap that contains the CoreDNS configuration file.
+* Since CoreDNS supports more features than kube-dns, there will be no path to retain the CoreDNS configuration in case a user wants to switch back to kube-dns.
+
+#### Configuring CoreDNS
+
+The CoreDNS configuration file is called a `Corefile` and syntactically is the same as a [Caddyfile](https://caddyserver.com/docs/caddyfile). The file consists of multiple stanzas called _server blocks_.
+Each of these represents a set of zones for which that server block should respond, along with the list of plugins to apply to a given request. More details on this can be found in the
+[Corefile Explained](https://coredns.io/2017/07/23/corefile-explained/) and [How Queries Are Processed](https://coredns.io/2017/06/08/how-queries-are-processed-in-coredns/) blog entries.
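+
+For illustration, a minimal Corefile for cluster DNS might look like the following sketch (plugin names as used in the CoreDNS Kubernetes deployments; treat the specific values as placeholders):
+
+```
+.:53 {
+    errors
+    health
+    kubernetes cluster.local in-addr.arpa ip6.arpa {
+        pods insecure
+        fallthrough in-addr.arpa ip6.arpa
+    }
+    prometheus :9153
+    proxy . /etc/resolv.conf
+    cache 30
+}
+```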
+
+The following can be expected when CoreDNS is graduated to GA.
+
+#### Kubeadm
+
+* The CoreDNS feature-gates flag will be marked as GA.
+* As the kubeadm maintainers chose to deploy CoreDNS as the default cluster DNS for Kubernetes 1.11:
+  * CoreDNS will be installed by default in a fresh install of Kubernetes via kubeadm.
+  * For users upgrading Kubernetes via kubeadm, CoreDNS will be installed by default whether the user had kube-dns or CoreDNS in the previous Kubernetes version.
+  * In case a user wants to install kube-dns instead of CoreDNS, they have to set the CoreDNS feature gate to false: `--feature-gates=CoreDNS=false`
+* When choosing to install CoreDNS, the configmap of a previously installed kube-dns will be automatically translated to the equivalent CoreDNS configmap.
+
+#### Kube-up
+
+* CoreDNS will be installed when the environment variable `CLUSTER_DNS_CORE_DNS` is set to `true`. The default value is `false`.
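+
+For illustration (assuming the standard `cluster/kube-up.sh` entrypoint):
+
+```shell
+CLUSTER_DNS_CORE_DNS=true ./cluster/kube-up.sh
+```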
+
+#### Minikube
+
+* CoreDNS to be an option in the add-on manager, with CoreDNS disabled by default.
+
+## Graduation Criteria
+
+* Verify that all e2e conformance and DNS related tests (xxx-kubernetes-e2e-gce, ci-kubernetes-e2e-gce-gci-ci-master and filtered by `--ginkgo.skip=\\[Slow\\]|\\[Serial\\]|\\[Disruptive\\]|\\[Flaky\\]|\\[Feature:.+\\]`) run successfully for CoreDNS.
+ No test that passes with kube-dns should fail with CoreDNS.
+* Add CoreDNS as part of the e2e Kubernetes scale runs and ensure tests are not failing.
+* Extend [perf-tests](https://github.com/kubernetes/perf-tests/tree/master/dns) for CoreDNS.
+* Add dedicated DNS-related tests in the e2e scalability test suite [Feature:performance].
+
+## Implementation History
+
+* 20170912 - [Feature proposal](https://github.com/kubernetes/features/issues/427) for CoreDNS to be implemented as the default DNS in Kubernetes.
+* 20171108 - Successfully released [CoreDNS as an Alpha feature-gate in Kubernetes v1.9](https://github.com/kubernetes/kubernetes/pull/52501).
+* 20180226 - CoreDNS graduation to Incubation in CNCF.
+* 20180305 - Support for Kube-dns configmap translation and move up [CoreDNS to Beta](https://github.com/kubernetes/kubernetes/pull/58828) for Kubernetes v1.10.
diff --git a/keps/sig-network/0011-ipvs-proxier.md b/keps/sig-network/0011-ipvs-proxier.md
new file mode 100644
index 00000000..4d25ab7f
--- /dev/null
+++ b/keps/sig-network/0011-ipvs-proxier.md
@@ -0,0 +1,574 @@
+---
+kep-number: TBD
+title: IPVS Load Balancing Mode in Kubernetes
+status: implemented
+authors:
+ - "@rramkumar1"
+owning-sig: sig-network
+reviewers:
+ - "@thockin"
+ - "@m1093782566"
+approvers:
+ - "@thockin"
+ - "@m1093782566"
+editor:
+ - "@thockin"
+ - "@m1093782566"
+creation-date: 2018-03-21
+---
+
+# IPVS Load Balancing Mode in Kubernetes
+
+**Note: This is a retroactive KEP. Credit goes to @m1093782566, @haibinxie, and @quinton-hoole for all information & design in this KEP.**
+
+**Important References: https://github.com/kubernetes/community/pull/692/files**
+
+## Table of Contents
+
+* [Summary](#summary)
+* [Motivation](#motivation)
+ * [Goals](#goals)
+ * [Non\-goals](#non-goals)
+* [Proposal](#proposal)
+ * [Kube-Proxy Parameter Changes](#kube-proxy-parameter-changes)
+ * [Build Changes](#build-changes)
+ * [Deployment Changes](#deployment-changes)
+ * [Design Considerations](#design-considerations)
+ * [IPVS service network topology](#ipvs-service-network-topology)
+ * [Port remapping](#port-remapping)
+ * [Falling back to iptables](#falling-back-to-iptables)
+ * [Supporting NodePort service](#supporting-nodeport-service)
+ * [Supporting ClusterIP service](#supporting-clusterip-service)
+ * [Supporting LoadBalancer service](#supporting-loadbalancer-service)
+ * [Session Affinity](#session-affinity)
+ * [Cleaning up inactive rules](#cleaning-up-inactive-rules)
+ * [Sync loop pseudo code](#sync-loop-pseudo-code)
+* [Graduation Criteria](#graduation-criteria)
+* [Implementation History](#implementation-history)
+* [Drawbacks](#drawbacks)
+* [Alternatives](#alternatives)
+
+## Summary
+
+We are building a new implementation of kube-proxy on top of IPVS (IP Virtual Server).
+
+## Motivation
+
+As Kubernetes grows in usage, the scalability of its resources becomes more and more
+important. In particular, the scalability of services is paramount to the adoption of Kubernetes
+by developers/companies running large workloads. Kube Proxy, the building block of service routing
+has relied on the battle-hardened iptables to implement the core supported service types such as
+ClusterIP and NodePort. However, iptables struggles to scale to tens of thousands of services because
+it is designed purely for firewalling purposes and is based on in-kernel rule chains. On the
+other hand, IPVS is specifically designed for load balancing and uses more efficient data structures
+under the hood. For more information on the performance benefits of IPVS vs. iptables, take a look
+at these [slides](https://docs.google.com/presentation/d/1BaIAywY2qqeHtyGZtlyAp89JIZs59MZLKcFLxKE6LyM/edit?usp=sharing).
+
+### Goals
+
+* Improve the performance of services
+
+### Non-goals
+
+None
+
+### Challenges and Open Questions [optional]
+
+None
+
+
+## Proposal
+
+### Kube-Proxy Parameter Changes
+
+***Parameter: --proxy-mode***
+In addition to the existing userspace and iptables modes, IPVS mode is configured via --proxy-mode=ipvs. In the initial implementation, it implicitly uses IPVS [NAT](http://www.linuxvirtualserver.org/VS-NAT.html) mode.
+
+***Parameter: --ipvs-scheduler***
+A new kube-proxy parameter will be added to specify the IPVS load balancing algorithm, with the parameter being --ipvs-scheduler. If it’s not configured, then round-robin (rr) is the default value. If it’s incorrectly configured, then kube-proxy will exit with an error message.
+ * rr: round-robin
+ * lc: least connection
+ * dh: destination hashing
+ * sh: source hashing
+ * sed: shortest expected delay
+ * nq: never queue
+
+For more details, refer to http://kb.linuxvirtualserver.org/wiki/Ipvsadm
+
+In the future, we can implement a service-specific scheduler (potentially via annotation), which has higher priority and overwrites this value.
+
+***Parameter: --cleanup-ipvs***
+Similar to the --cleanup-iptables parameter, if true, clean up the IPVS configuration and iptables rules that were created in IPVS mode.
+
+***Parameter: --ipvs-sync-period***
+Maximum interval of how often IPVS rules are refreshed (e.g. '5s', '1m'). Must be greater than 0.
+
+***Parameter: --ipvs-min-sync-period***
+Minimum interval of how often the IPVS rules are refreshed (e.g. '5s', '1m'). Must be greater than 0.
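+
+Taken together, an IPVS-mode invocation might look like the following sketch (flag values here are placeholders, not recommendations):
+
+```shell
+kube-proxy --proxy-mode=ipvs \
+  --ipvs-scheduler=rr \
+  --ipvs-sync-period=30s \
+  --ipvs-min-sync-period=5s
+```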
+
+
+### Build Changes
+
+No changes at all. The IPVS implementation is built on the [docker/libnetwork](https://godoc.org/github.com/docker/libnetwork/ipvs) IPVS library, which is a pure-golang implementation that talks to the kernel via socket communication.
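+
+For illustration, a minimal sketch of driving that library directly, roughly as the proxier does for a ClusterIP service (treat the exact API names and signatures as assumptions about the libnetwork ipvs package, not a definitive usage):
+
+```go
+package main
+
+import (
+    "log"
+    "net"
+    "syscall"
+
+    "github.com/docker/libnetwork/ipvs"
+)
+
+func main() {
+    // Open a handle to the kernel IPVS subsystem.
+    handle, err := ipvs.New("")
+    if err != nil {
+        log.Fatal(err)
+    }
+    // One virtual service for the cluster IP...
+    svc := &ipvs.Service{
+        Address:       net.ParseIP("10.102.128.4"),
+        Protocol:      syscall.IPPROTO_TCP,
+        Port:          3080,
+        SchedName:     ipvs.RoundRobin,
+        AddressFamily: syscall.AF_INET,
+    }
+    if err := handle.NewService(svc); err != nil {
+        log.Fatal(err)
+    }
+    // ...and one destination (a NAT/masq real server) per endpoint.
+    dst := &ipvs.Destination{
+        Address: net.ParseIP("10.244.0.235"),
+        Port:    8080,
+        Weight:  1,
+    }
+    if err := handle.NewDestination(svc, dst); err != nil {
+        log.Fatal(err)
+    }
+}
+```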
+
+### Deployment Changes
+
+IPVS kernel module installation is beyond the scope of Kubernetes. It’s assumed that IPVS kernel modules are installed on the node before kube-proxy runs. When kube-proxy starts, if the proxy mode is IPVS, kube-proxy will validate whether IPVS modules are installed on the node; if they are not installed, kube-proxy will fall back to the iptables proxy mode.
+
+### Design Considerations
+
+#### IPVS service network topology
+
+We will create a dummy interface and assign all Kubernetes service ClusterIPs to it (the default name is `kube-ipvs0`). For example,
+
+```shell
+# ip link add kube-ipvs0 type dummy
+# ip addr
+...
+73: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
+ link/ether 26:1f:cc:f8:cd:0f brd ff:ff:ff:ff:ff:ff
+
+#### Assume 10.102.128.4 is service Cluster IP
+# ip addr add 10.102.128.4/32 dev kube-ipvs0
+...
+73: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
+ link/ether 1a:ce:f5:5f:c1:4d brd ff:ff:ff:ff:ff:ff
+ inet 10.102.128.4/32 scope global kube-ipvs0
+ valid_lft forever preferred_lft forever
+```
+
+Note that the relationship between a Kubernetes service and an IPVS service is `1:N`. Consider a Kubernetes service that has more than one access IP. For example, an External IP type service has 2 access IPs (Cluster IP and External IP). The IPVS proxier will then create 2 IPVS services - one for the Cluster IP and one for the External IP.
+
+The relationship between a Kubernetes endpoint and an IPVS destination is `1:1`.
+Deletion of a Kubernetes service will trigger deletion of the corresponding IPVS services and of the addresses bound to the dummy interface.
+
+
+#### Port remapping
+
+There are 3 proxy modes in IPVS - NAT (masq), IPIP and DR. Only NAT mode supports port remapping, so we will use IPVS NAT mode. The following example shows IPVS mapping service port `3080` to container port `8080`.
+
+```shell
+# ipvsadm -ln
+IP Virtual Server version 1.2.1 (size=4096)
+Prot LocalAddress:Port Scheduler Flags
+ -> RemoteAddress:Port Forward Weight ActiveConn InActConn
+TCP 10.102.128.4:3080 rr
+ -> 10.244.0.235:8080 Masq 1 0 0
+ -> 10.244.1.237:8080 Masq 1 0 0
+
+```
+
+#### Falling back to iptables
+
+The IPVS proxier will employ iptables for packet filtering, SNAT, and supporting NodePort type services. Specifically, the IPVS proxier will fall back on iptables in the following 4 scenarios.
+
+* kube-proxy starts with --masquerade-all=true
+* Specify cluster CIDR in kube-proxy startup
+* Load Balancer Source Ranges is specified for LB type service
+* Support NodePort type service
+
+And the IPVS proxier will maintain 5 Kubernetes-specific chains in the nat table:
+
+- KUBE-POSTROUTING
+- KUBE-MARK-MASQ
+- KUBE-MARK-DROP
+- KUBE-SERVICES
+- KUBE-NODEPORTS
+
+`KUBE-POSTROUTING`, `KUBE-MARK-MASQ`, and `KUBE-MARK-DROP` are maintained by kubelet, so the IPVS proxier won't create them. The IPVS proxier will make sure the chains `KUBE-SERVICES` and `KUBE-NODEPORTS` exist in its sync loop.
+
+**1. kube-proxy starts with --masquerade-all=true**
+
+If kube-proxy starts with `--masquerade-all=true`, the IPVS proxier will masquerade all traffic accessing service ClusterIP, behaving the same as the iptables proxier.
+Suppose there is a service with Cluster IP `10.244.5.1` and port `8080`:
+
+```shell
+# iptables -t nat -nL
+
+Chain PREROUTING (policy ACCEPT)
+target prot opt source destination
+KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
+
+Chain OUTPUT (policy ACCEPT)
+target prot opt source destination
+KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
+
+Chain POSTROUTING (policy ACCEPT)
+target prot opt source destination
+KUBE-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
+
+Chain KUBE-POSTROUTING (1 references)
+target prot opt source destination
+MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
+
+Chain KUBE-MARK-DROP (0 references)
+target prot opt source destination
+MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
+
+Chain KUBE-MARK-MASQ (6 references)
+target prot opt source destination
+MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
+
+Chain KUBE-SERVICES (2 references)
+target prot opt source destination
+KUBE-MARK-MASQ tcp -- 0.0.0.0/0 10.244.5.1 /* default/foo:http cluster IP */ tcp dpt:8080
+```
+
+**2. Specify cluster CIDR in kube-proxy startup**
+
+If kube-proxy starts with `--cluster-cidr=<cidr>`, the IPVS proxier will masquerade off-cluster traffic accessing service ClusterIP, behaving the same as the iptables proxier.
+Suppose kube-proxy is provided with the cluster CIDR `10.244.16.0/24`, and the service Cluster IP is `10.244.5.1` with port `8080`:
+
+```shell
+# iptables -t nat -nL
+
+Chain PREROUTING (policy ACCEPT)
+target prot opt source destination
+KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
+
+Chain OUTPUT (policy ACCEPT)
+target prot opt source destination
+KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
+
+Chain POSTROUTING (policy ACCEPT)
+target prot opt source destination
+KUBE-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
+
+Chain KUBE-POSTROUTING (1 references)
+target prot opt source destination
+MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
+
+Chain KUBE-MARK-DROP (0 references)
+target prot opt source destination
+MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
+
+Chain KUBE-MARK-MASQ (6 references)
+target prot opt source destination
+MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
+
+Chain KUBE-SERVICES (2 references)
+target prot opt source destination
+KUBE-MARK-MASQ tcp -- !10.244.16.0/24 10.244.5.1 /* default/foo:http cluster IP */ tcp dpt:8080
+```
+
+**3. Load Balancer Source Ranges is specified for LB type service**
+
+When the service's `LoadBalancerStatus.ingress.IP` is not empty and the service's `LoadBalancerSourceRanges` is specified, the IPVS proxier will install iptables rules like those shown below.
+
+Suppose service's `LoadBalancerStatus.ingress.IP` is `10.96.1.2` and service's `LoadBalancerSourceRanges` is `10.120.2.0/24`:
+
+```shell
+# iptables -t nat -nL
+
+Chain PREROUTING (policy ACCEPT)
+target prot opt source destination
+KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
+
+Chain OUTPUT (policy ACCEPT)
+target prot opt source destination
+KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
+
+Chain POSTROUTING (policy ACCEPT)
+target prot opt source destination
+KUBE-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
+
+Chain KUBE-POSTROUTING (1 references)
+target prot opt source destination
+MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
+
+Chain KUBE-MARK-DROP (0 references)
+target prot opt source destination
+MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
+
+Chain KUBE-MARK-MASQ (6 references)
+target prot opt source destination
+MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
+
+Chain KUBE-SERVICES (2 references)
+target prot opt source destination
+ACCEPT tcp -- 10.120.2.0/24 10.96.1.2 /* default/foo:http loadbalancer IP */ tcp dpt:8080
+DROP tcp -- 0.0.0.0/0 10.96.1.2 /* default/foo:http loadbalancer IP */ tcp dpt:8080
+```
+
+**4. Support NodePort type service**
+
+Please check the section below.
+
+#### Supporting NodePort service
+
+For supporting NodePort type services, the IPVS proxier will recruit the existing implementation in the iptables proxier. For example,
+
+```shell
+# kubectl describe svc nginx-service
+Name: nginx-service
+...
+Type: NodePort
+IP: 10.101.28.148
+Port: http 3080/TCP
+NodePort: http 31604/TCP
+Endpoints: 172.17.0.2:80
+Session Affinity: None
+
+# iptables -t nat -nL
+
+[root@100-106-179-225 ~]# iptables -t nat -nL
+Chain PREROUTING (policy ACCEPT)
+target prot opt source destination
+KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
+
+Chain OUTPUT (policy ACCEPT)
+target prot opt source destination
+KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
+
+Chain KUBE-SERVICES (2 references)
+target prot opt source destination
+KUBE-MARK-MASQ tcp -- !172.16.0.0/16 10.101.28.148 /* default/nginx-service:http cluster IP */ tcp dpt:3080
+KUBE-SVC-6IM33IEVEEV7U3GP tcp -- 0.0.0.0/0 10.101.28.148 /* default/nginx-service:http cluster IP */ tcp dpt:3080
+KUBE-NODEPORTS all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
+
+Chain KUBE-NODEPORTS (1 references)
+target prot opt source destination
+KUBE-MARK-MASQ tcp -- 0.0.0.0/0 0.0.0.0/0 /* default/nginx-service:http */ tcp dpt:31604
+KUBE-SVC-6IM33IEVEEV7U3GP tcp -- 0.0.0.0/0 0.0.0.0/0 /* default/nginx-service:http */ tcp dpt:31604
+
+Chain KUBE-SVC-6IM33IEVEEV7U3GP (2 references)
+target prot opt source destination
+KUBE-SEP-Q3UCPZ54E6Q2R4UT all -- 0.0.0.0/0 0.0.0.0/0 /* default/nginx-service:http */
+Chain KUBE-SEP-Q3UCPZ54E6Q2R4UT (1 references)
+target prot opt source destination
+KUBE-MARK-MASQ all -- 172.17.0.2 0.0.0.0/0 /* default/nginx-service:http */
+DNAT
+```
+
+#### Supporting ClusterIP service
+
+When creating a ClusterIP type service, the IPVS proxier will do 3 things:
+
+* make sure the dummy interface exists on the node
+* bind the service cluster IP to the dummy interface
+* create an IPVS service whose address corresponds to the Kubernetes service Cluster IP
+
+For example,
+
+```shell
+# kubectl describe svc nginx-service
+Name: nginx-service
+...
+Type: ClusterIP
+IP: 10.102.128.4
+Port: http 3080/TCP
+Endpoints: 10.244.0.235:8080,10.244.1.237:8080
+Session Affinity: None
+
+# ip addr
+...
+73: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
+ link/ether 1a:ce:f5:5f:c1:4d brd ff:ff:ff:ff:ff:ff
+ inet 10.102.128.4/32 scope global kube-ipvs0
+ valid_lft forever preferred_lft forever
+
+# ipvsadm -ln
+IP Virtual Server version 1.2.1 (size=4096)
+Prot LocalAddress:Port Scheduler Flags
+ -> RemoteAddress:Port Forward Weight ActiveConn InActConn
+TCP 10.102.128.4:3080 rr
+ -> 10.244.0.235:8080 Masq 1 0 0
+ -> 10.244.1.237:8080 Masq 1 0 0
+```
+
+#### Supporting LoadBalancer service
+
+The IPVS proxier will NOT bind the LB's ingress IP to the dummy interface. When creating a LoadBalancer type service, the IPVS proxier will do 4 things:
+
+- Make sure the dummy interface exists on the node
+- Bind the service cluster IP to the dummy interface
+- Create an IPVS service whose address corresponds to the Kubernetes service Cluster IP
+- Iterate over the LB's ingress IPs, creating an IPVS service whose address corresponds to each ingress IP
+
+For example,
+
+```shell
+# kubectl describe svc nginx-service
+Name: nginx-service
+...
+IP: 10.102.128.4
+Port: http 3080/TCP
+Endpoints: 10.244.0.235:8080
+Session Affinity: None
+
+#### Only bind Cluster IP to dummy interface
+# ip addr
+...
+73: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN qlen 1000
+ link/ether 1a:ce:f5:5f:c1:4d brd ff:ff:ff:ff:ff:ff
+ inet 10.102.128.4/32 scope global kube-ipvs0
+ valid_lft forever preferred_lft forever
+
+#### Suppose the LB's ingress IPs are {10.96.1.2, 10.96.1.3}. The IPVS proxier will create 1 IPVS service for the cluster IP and 2 IPVS services for the LB's ingress IPs. Each IPVS service has its own destination.
+# ipvsadm -ln
+IP Virtual Server version 1.2.1 (size=4096)
+Prot LocalAddress:Port Scheduler Flags
+ -> RemoteAddress:Port Forward Weight ActiveConn InActConn
+TCP 10.102.128.4:3080 rr
+ -> 10.244.0.235:8080 Masq 1 0 0
+TCP 10.96.1.2:3080 rr
+ -> 10.244.0.235:8080 Masq 1 0 0
+TCP 10.96.1.3:3080 rr
+ -> 10.244.0.235:8080 Masq 1 0 0
+```
+
+Since there is a need to support access control for `LB.ingress.IP`, the IPVS proxier will fall back on iptables. iptables will drop any packet whose source is not within `LB.LoadBalancerSourceRanges`. For example,
+
+```shell
+# iptables -A KUBE-SERVICES -d {ingress.IP} -p tcp --dport {service.Port} -s {LB.LoadBalancerSourceRanges} -j ACCEPT
+```
+
+When a packet reaches the end of the chain, the IPVS proxier will drop it.
+
+```shell
+# iptables -A KUBE-SERVICES -d {ingress.IP} -p tcp --dport {service.Port} -j KUBE-MARK-DROP
+```
+
+#### Supporting Only NodeLocal Endpoints
+
+Similar to the iptables proxier, when a service has the "Only NodeLocal Endpoints" annotation, the IPVS proxier will only proxy traffic to endpoints on the local node.
+
+```shell
+# kubectl describe svc nginx-service
+Name: nginx-service
+...
+IP: 10.102.128.4
+Port: http 3080/TCP
+Endpoints: 10.244.0.235:8080, 10.244.1.235:8080
+Session Affinity: None
+
+#### Assume only endpoint 10.244.0.235:8080 is in the same host with kube-proxy
+
+#### There should be 1 destination for ipvs service.
+[root@SHA1000130405 home]# ipvsadm -ln
+IP Virtual Server version 1.2.1 (size=4096)
+Prot LocalAddress:Port Scheduler Flags
+ -> RemoteAddress:Port Forward Weight ActiveConn InActConn
+TCP 10.102.128.4:3080 rr
+ -> 10.244.0.235:8080 Masq 1 0 0
+```
+
+#### Session Affinity
+
+IPVS supports client IP session affinity (persistent connection). When a service specifies session affinity, the IPVS proxier will set a timeout value (180min=10800s by default) in the IPVS service.
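+
+In the Service API, session affinity is requested via the `spec.sessionAffinity` field. A minimal sketch (names are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx-service
+spec:
+  sessionAffinity: ClientIP
+  selector:
+    app: nginx
+  ports:
+  - name: http
+    port: 3080
+    targetPort: 8080
+```
+
+The resulting IPVS service then carries the persistence timeout. For example,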
+
+```shell
+# kubectl describe svc nginx-service
+Name: nginx-service
+...
+IP: 10.102.128.4
+Port: http 3080/TCP
+Session Affinity: ClientIP
+
+# ipvsadm -ln
+IP Virtual Server version 1.2.1 (size=4096)
+Prot LocalAddress:Port Scheduler Flags
+ -> RemoteAddress:Port Forward Weight ActiveConn InActConn
+TCP 10.102.128.4:3080 rr persistent 10800
+```
+
+#### Cleaning up inactive rules
+
+It seems difficult to distinguish whether an IPVS service was created by the IPVS proxier or by another process. Currently we assume IPVS rules will be created only by the IPVS proxier on a node, so we can clear all IPVS rules on the node. We should add warnings in documentation and flag comments.
+
+#### Sync loop pseudo code
+
+Similar to the iptables proxier, the IPVS proxier will do a full sync loop at a configured interval. Also, each update to a Kubernetes service or endpoint will trigger an IPVS service or destination update. For example,
+
+* Creating a Kubernetes service will trigger creating a new IPVS service.
+* Updating a Kubernetes service (for instance, changing session affinity) will trigger updating an existing IPVS service.
+* Deleting a Kubernetes service will trigger deleting an IPVS service.
+* Adding an endpoint for a Kubernetes service will trigger adding a destination for an existing IPVS service.
+* Updating an endpoint for a Kubernetes service will trigger updating a destination for an existing IPVS service.
+* Deleting an endpoint for a Kubernetes service will trigger deleting a destination for an existing IPVS service.
+
+Any IPVS service or destination update will send an update command to the kernel via socket communication, which won't take a service down.
+
+The sync loop pseudo code is shown below:
+
+```go
+func (proxier *Proxier) syncProxyRules() {
+  // On any service or endpoint update, sync IPVS rules and, where needed, iptables rules.
+  // Ensure the dummy interface exists; create it if missing.
+  for _, svcInfo := range proxier.serviceMap {
+    // Capture the cluster IP:
+    //   construct an IPVS service from svcInfo,
+    //   set the session affinity flag and timeout value if session affinity is specified,
+    //   bind the cluster IP to the dummy interface,
+    //   call the libnetwork API to create the IPVS service and destinations.
+
+    // Capture external IPs:
+    //   if an external IP is local, hold svcInfo.Port so IPVS rules can be installed on it,
+    //   construct an IPVS service from svcInfo (session affinity as above),
+    //   call the libnetwork API to create the IPVS service and destinations.
+
+    // Capture load-balancer ingress:
+    for _, ingress := range svcInfo.LoadBalancerStatus.Ingress {
+      if ingress.IP != "" {
+        if len(svcInfo.LoadBalancerSourceRanges) != 0 {
+          // Install the access-control iptables rules.
+        }
+        // Construct an IPVS service from svcInfo (session affinity as above) and
+        // call the libnetwork API to create the IPVS service and destinations.
+      }
+    }
+
+    // Capture nodeports:
+    if svcInfo.NodePort != 0 {
+      // Fall back on iptables, recruiting the existing iptables proxier implementation.
+    }
+  }
+
+  // After processing all services: call the libnetwork API to clean up legacy
+  // IPVS services that are no longer active, unbind stale service addresses
+  // from the dummy interface, and clean up legacy iptables chains and rules.
+}
+```
+
+## Graduation Criteria
+
+### Beta -> GA
+
+The following requirements should be met before moving from Beta to GA. It is
+suggested to file an issue which tracks all the action items.
+
+- [ ] Testing
+ - [ ] 48 hours of green e2e tests.
+ - [ ] Flakes must be identified and filed as issues.
+ - [ ] Integrate with scale tests. Failures should be filed as issues.
+- [ ] Development work
+ - [ ] Identify all pending changes/refactors. Release blockers must be prioritized and fixed.
+ - [ ] Identify all bugs. Release blocking bugs must be identified and fixed.
+- [ ] Docs
+ - [ ] All user-facing documentation must be updated.
+
+### GA -> Future
+
+__TODO__
+
+## Implementation History
+
+**In chronological order**
+
+1. https://github.com/kubernetes/kubernetes/pull/46580
+
+2. https://github.com/kubernetes/kubernetes/pull/52528
+
+3. https://github.com/kubernetes/kubernetes/pull/54219
+
+4. https://github.com/kubernetes/kubernetes/pull/57268
+
+5. https://github.com/kubernetes/kubernetes/pull/58052
+
+
+## Drawbacks [optional]
+
+None
+
+## Alternatives [optional]
+
+None
diff --git a/keps/sig-network/0012-20180518-coredns-default-proposal.md b/keps/sig-network/0012-20180518-coredns-default-proposal.md
new file mode 100644
index 00000000..f4540704
--- /dev/null
+++ b/keps/sig-network/0012-20180518-coredns-default-proposal.md
@@ -0,0 +1,88 @@
+---
+kep-number: 11
+title: Switch CoreDNS to the default DNS
+authors:
+ - "@johnbelamaric"
+ - "@rajansandeep"
+owning-sig: sig-network
+participating-sigs:
+ - sig-cluster-lifecycle
+reviewers:
+ - "@bowei"
+ - "@thockin"
+approvers:
+ - "@thockin"
+editor: "@rajansandeep"
+creation-date: 2018-05-18
+last-updated: 2018-05-18
+status: provisional
+---
+
+# Switch CoreDNS to the default DNS
+
+## Table of Contents
+
+* [Summary](#summary)
+* [Goals](#goals)
+* [Proposal](#proposal)
+ * [Use Cases](#use-cases)
+* [Graduation Criteria](#graduation-criteria)
+* [Implementation History](#implementation-history)
+
+## Summary
+
+CoreDNS is now well-established in Kubernetes as a DNS service, having started as an alpha feature in Kubernetes v1.9 and reached GA in v1.11.
+After successfully implementing the road-map defined [here](https://github.com/kubernetes/features/issues/427), CoreDNS is GA in Kubernetes v1.11 and can be installed as an alternative to kube-dns in tools like kubeadm, kops, minikube and kube-up.
+Following the [KEP to graduate CoreDNS to GA](https://github.com/kubernetes/community/pull/1956), the purpose of this proposal is to make CoreDNS the default DNS for Kubernetes, replacing kube-dns.
+
+## Goals
+* Make CoreDNS the default DNS for Kubernetes for all the remaining install tools (kube-up, kops, minikube).
+* Make CoreDNS available as an image in a Kubernetes repository (To Be Defined) and ensure a workflow/process to update the CoreDNS versions in the future.
+ This goal is carried over from the [previous KEP](https://github.com/kubernetes/community/pull/1956), in case it cannot be completed there.
+
+## Proposal
+
+The proposed solution is to enable CoreDNS as the default cluster service discovery DNS for Kubernetes.
+Some of the most used deployment tools will be upgraded by the CoreDNS team, in cooperation with the owners of these tools, to be able to deploy CoreDNS as default:
+* kubeadm (already done for Kubernetes v1.11)
+* kube-up
+* minikube
+* kops
+
+For other tools, each maintainer would have to add the upgrade to CoreDNS.
+
+### Use Cases
+
+Use cases for CoreDNS have been well defined in the [previous KEP](https://github.com/kubernetes/community/pull/1956).
+The following can be expected when CoreDNS is made the default DNS.
+
+#### Kubeadm
+
+* CoreDNS is already the default DNS as of Kubernetes v1.11 and shall continue to be the default DNS.
+* In case users want to install kube-dns instead of CoreDNS, they have to set the CoreDNS feature gate to false: `--feature-gates=CoreDNS=false`
+
+#### Kube-up
+
+* CoreDNS will now become the default DNS.
+* To install kube-dns in place of CoreDNS, set the environment variable `CLUSTER_DNS_CORE_DNS` to `false`.
+
+#### Minikube
+
+* CoreDNS to be enabled by default in the add-on manager, with kube-dns disabled by default.
+
+#### Kops
+
+* CoreDNS will now become the default DNS.
+
+## Graduation Criteria
+
+* Add CoreDNS image in a Kubernetes repository (To Be Defined) and ensure a workflow/process to update the CoreDNS versions in the future.
+* Have a certain number (To Be Defined) of clusters of significant size (To Be Defined) adopting and running CoreDNS as their default DNS.
+
+## Implementation History
+
+* 20170912 - [Feature proposal](https://github.com/kubernetes/features/issues/427) for CoreDNS to be implemented as the default DNS in Kubernetes.
+* 20171108 - Successfully released [CoreDNS as an Alpha feature-gate in Kubernetes v1.9](https://github.com/kubernetes/kubernetes/pull/52501).
+* 20180226 - CoreDNS graduation to Incubation in CNCF.
+* 20180305 - Support for Kube-dns configmap translation and move up [CoreDNS to Beta](https://github.com/kubernetes/kubernetes/pull/58828) for Kubernetes v1.10.
+* 20180515 - CoreDNS was added as [GA and the default DNS in kubeadm](https://github.com/kubernetes/kubernetes/pull/63509) for Kubernetes v1.11.
diff --git a/mentoring/meet-our-contributors.md b/mentoring/meet-our-contributors.md
index 7e3aea7c..72f4a922 100644
--- a/mentoring/meet-our-contributors.md
+++ b/mentoring/meet-our-contributors.md
@@ -1,4 +1,4 @@
-# Meet Our Contributors - Ask Us Anything!
+# Meet Our Contributors - Ask Us Anything!
When Slack seems like it’s going too fast, and you just need a quick answer from a human...
@@ -6,18 +6,18 @@ Meet Our Contributors gives you a monthly one-hour opportunity to ask questions
## When:
Every first Wednesday of the month at the following times. Grab a copy of the calendar to yours from [kubernetes.io/community](https://kubernetes.io/community/)
-* 03:30pm UTC
-* 09:00pm UTC
+* 02:30pm UTC
+* 08:00pm UTC
-Tune into the [Kubernetes YouTube Channel](https://www.youtube.com/c/KubernetesCommunity/live) to follow along with video and [#meet-our-contributors](https://kubernetes.slack.com/messages/meet-our-contributors) on Slack for questions and discourse.
+Tune into the [Kubernetes YouTube Channel](https://www.youtube.com/c/KubernetesCommunity/live) to follow along with video and [#meet-our-contributors](https://kubernetes.slack.com/messages/meet-our-contributors) on Slack for questions and discourse.
-## What’s on-topic:
+## What’s on-topic:
* How our contributors got started with k8s
* Advice for getting attention on your PR
* GitHub tooling and automation
* Your first commit
* kubernetes/community
-* Testing
+* Testing
## What’s off-topic:
* End-user questions (Check out [#office-hours](https://kubernetes.slack.com/messages/office-hours) on slack and details [here](/events/office-hours.md))
@@ -33,15 +33,13 @@ Questions will be on a first-come, first-served basis. First half will be dedica
### Code snip / PR for peer code review / Suggestion for part of codebase walk through:
* At least 24 hours before the session to slack channel (#meet-our-contributors)
-Problems will be picked based on time commitment needed, skills of the reviewer, and if a large amount are submitted, need for the project.
+Problems will be picked based on time commitment needed, skills of the reviewer, and, if a large number are submitted, need for the project.
## Call for Volunteers:
-Contributors - [sign up to answer questions!](https://goo.gl/uhEJ33)
+Contributors - [sign up to answer questions!](https://goo.gl/uhEJ33)
Expectations of volunteers:
* Be on 5 mins early. You can look at questions in the queue by joining the #meet-our-contributors slack channel to give yourself some prep.
* Expect questions about the contribution process, membership, navigating the kubernetes seas, testing, and general questions about you and your path to open source/kubernetes. It's ok if you don't know the answer!
* We will be using video chat (zoom but live streaming through YouTube) but voice only is fine if you are more comfortable with that.
* Be willing to provide suggestions and feedback to make this better!
-
-
diff --git a/org-owners-guide.md b/org-owners-guide.md
index 530611dc..d0217f1d 100644
--- a/org-owners-guide.md
+++ b/org-owners-guide.md
@@ -70,7 +70,7 @@ Each organization should have the following teams:
- `foo-reviewers`: granted read access to the `foo` repo; intended to be used as
a notification mechanism for interested/active contributors for the `foo` repo
- a `bots` team
- - should contain bots such as @k8s-ci-robot and @linuxfoundation that are
+ - should contain bots such as @k8s-ci-robot and @thelinuxfoundation that are
necessary for org and repo automation
- an `owners` team
- should be populated by everyone who has `owner` privileges to the org
diff --git a/setting-up-cla-check.md b/setting-up-cla-check.md
index b988f4fb..bb344190 100644
--- a/setting-up-cla-check.md
+++ b/setting-up-cla-check.md
@@ -21,7 +21,7 @@ the Linux Foundation CNCF CLA check for your repositories, please read on.
- Pull request: checked
- Issue comment: checked
- Active: checked
-1. Add the [@linuxfoundation](https://github.com/linuxfoundation) GitHub user as an **Owner**
+1. Add the [@thelinuxfoundation](https://github.com/thelinuxfoundation) GitHub user as an **Owner**
to your organization or repo to ensure the CLA status can be applied on PR's
1. After you send an invite, contact the [Linux Foundation](mailto:helpdesk@rt.linuxfoundation.org); and cc [Chris Aniszczyk](mailto:caniszczyk@linuxfoundation.org), [Ihor Dvoretskyi](mailto:ihor@cncf.io), [Eric Searcy](mailto:eric@linuxfoundation.org) (to ensure that the invite gets accepted).
1. Finally, open up a test PR to check that:
diff --git a/sig-api-machinery/README.md b/sig-api-machinery/README.md
index be681156..16d8ee9c 100644
--- a/sig-api-machinery/README.md
+++ b/sig-api-machinery/README.md
@@ -47,34 +47,34 @@ The following subprojects are owned by sig-api-machinery:
- **universal-machinery**
- Owners:
- https://raw.githubusercontent.com/kubernetes/apimachinery/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/apimachinery/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apimachinery/OWNERS
- **server-frameworks**
- Owners:
- https://raw.githubusercontent.com/kubernetes/apiserver/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/apiserver/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/OWNERS
- **server-crd**
- Owners:
- https://raw.githubusercontent.com/kubernetes/apiextensions-apiserver/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/apiextensions-apiserver/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiextensions-apiserver/OWNERS
- **server-api-aggregation**
- Owners:
- https://raw.githubusercontent.com/kubernetes/kube-aggregator/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/kube-aggregator/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/kube-aggregator/OWNERS
- **server-sdk**
- Owners:
- https://raw.githubusercontent.com/kubernetes/sample-apiserver/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/sample-apiserver/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/sample-apiserver/OWNERS
- https://raw.githubusercontent.com/kubernetes/sample-controller/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/sample-controller/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/sample-controller/OWNERS
- https://raw.githubusercontent.com/kubernetes-incubator/apiserver-builder/master/OWNERS
- **idl-schema-client-pipeline**
- Owners:
- https://raw.githubusercontent.com/kubernetes/gengo/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/code-generator/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/code-generator/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/code-generator/OWNERS
- https://raw.githubusercontent.com/kubernetes/kube-openapi/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/api/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/api/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/OWNERS
- https://raw.githubusercontent.com/kubernetes-client/gen/master/OWNERS
- **kubernetes-clients**
- Owners:
@@ -90,7 +90,7 @@ The following subprojects are owned by sig-api-machinery:
- https://raw.githubusercontent.com/kubernetes-client/typescript/master/OWNERS
- https://raw.githubusercontent.com/kubernetes-incubator/client-python/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/client-go/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/client-go/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/OWNERS
- **universal-utils**
- Owners:
- https://raw.githubusercontent.com/kubernetes/utils/master/OWNERS
diff --git a/sig-architecture/README.md b/sig-architecture/README.md
index df3724d7..4e37d2b6 100644
--- a/sig-architecture/README.md
+++ b/sig-architecture/README.md
@@ -21,7 +21,7 @@ The Architecture SIG maintains and evolves the design principles of Kubernetes,
The Chairs of the SIG run operations and processes governing the SIG.
* Brian Grant (**[@bgrant0607](https://github.com/bgrant0607)**), Google
-* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Microsoft
+* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Google
## Contact
* [Slack](https://kubernetes.slack.com/messages/sig-architecture)
diff --git a/sig-azure/README.md b/sig-azure/README.md
index 9811e2a1..fc2686cf 100644
--- a/sig-azure/README.md
+++ b/sig-azure/README.md
@@ -11,7 +11,7 @@ To understand how this file is generated, see https://git.k8s.io/community/gener
A Special Interest Group for building, deploying, maintaining, supporting, and using Kubernetes on Azure.
## Meetings
-* Regular SIG Meeting: [Wednesdays at 16:00 UTC](https://zoom.us/j/2015551212) (weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=16:00&tz=UTC).
+* Regular SIG Meeting: [Wednesdays at 16:00 UTC](https://zoom.us/j/2015551212) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=16:00&tz=UTC).
* [Meeting notes and Agenda](https://docs.google.com/document/d/1SpxvmOgHDhnA72Z0lbhBffrfe9inQxZkU9xqlafOW9k/edit).
* [Meeting recordings](https://www.youtube.com/watch?v=yQLeUKi_dwg&list=PL69nYSiGNLP2JNdHwB8GxRs2mikK7zyc4).
@@ -20,9 +20,15 @@ A Special Interest Group for building, deploying, maintaining, supporting, and u
### Chairs
The Chairs of the SIG run operations and processes governing the SIG.
-* Jason Hansen (**[@slack](https://github.com/slack)**), Microsoft
+* Stephen Augustus (**[@justaugustus](https://github.com/justaugustus)**), Red Hat
+* Shubheksha Jalan (**[@shubheksha](https://github.com/shubheksha)**), Microsoft
+
+### Technical Leads
+The Technical Leads of the SIG establish new subprojects, decommission existing
+subprojects, and resolve cross-subproject technical issues and decisions.
+
+* Kal Khenidak (**[@khenidak](https://github.com/khenidak)**), Microsoft
* Cole Mickens (**[@colemickens](https://github.com/colemickens)**), Red Hat
-* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Microsoft
## Contact
* [Slack](https://kubernetes.slack.com/messages/sig-azure)
@@ -47,7 +53,13 @@ Monitor these for Github activity if you are not a member of the team.
| Team Name | Details | Google Groups | Description |
| --------- |:-------:|:-------------:| ----------- |
+| @kubernetes/sig-azure-api-reviews | [link](https://github.com/orgs/kubernetes/teams/sig-azure-api-reviews) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-azure-api-reviews) | API Changes and Reviews |
+| @kubernetes/sig-azure-bugs | [link](https://github.com/orgs/kubernetes/teams/sig-azure-bugs) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-azure-bugs) | Bug Triage and Troubleshooting |
+| @kubernetes/sig-azure-feature-requests | [link](https://github.com/orgs/kubernetes/teams/sig-azure-feature-requests) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-azure-feature-requests) | Feature Requests |
| @kubernetes/sig-azure-misc | [link](https://github.com/orgs/kubernetes/teams/sig-azure-misc) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-azure-misc) | General Discussion |
+| @kubernetes/sig-azure-pr-reviews | [link](https://github.com/orgs/kubernetes/teams/sig-azure-pr-reviews) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-azure-pr-reviews) | PR Reviews |
+| @kubernetes/sig-azure-proposals | [link](https://github.com/orgs/kubernetes/teams/sig-azure-proposals) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-azure-proposals) | Design Proposals |
+| @kubernetes/sig-azure-test-failures | [link](https://github.com/orgs/kubernetes/teams/sig-azure-test-failures) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-azure-test-failures) | Test Failures and Triage |
<!-- BEGIN CUSTOM CONTENT -->
diff --git a/sig-azure/charter.md b/sig-azure/charter.md
new file mode 100644
index 00000000..c31a2bad
--- /dev/null
+++ b/sig-azure/charter.md
@@ -0,0 +1,100 @@
+# SIG Azure Charter
+
+_The following is a charter for the Kubernetes Special Interest Group for Azure. It delineates the roles of SIG leadership, SIG members, as well as the organizational processes for the SIG, both as they relate to project management and technical processes for SIG subprojects._
+
+## Roles
+
+### SIG Chairs
+
+- Run operations and processes governing the SIG
+- Seed members established at SIG founding
+- Chairs MAY decide to step down at any time and propose a replacement. Use lazy consensus amongst chairs with fallback on majority vote to accept the proposal. This SHOULD be supported by a majority of SIG Members.
+- Chairs MAY select additional chairs through a [super-majority] vote amongst chairs. This SHOULD be supported by a majority of SIG Members.
+- Chairs MUST remain active in the role and are automatically removed from the position if they are unresponsive for > 3 months and MAY be removed if not proactively working with other chairs to fulfill responsibilities. Coordinated leaves of absence serve as an exception to this requirement.
+- Number: 2 - 3
+- Defined in [sigs.yaml]
+
+### SIG Technical Leads
+
+- Establish new subprojects
+- Decommission existing subprojects
+- Resolve X-Subproject technical issues and decisions
+- Technical Leads MUST remain active in the role and are automatically removed from the position if they are unresponsive for > 3 months and MAY be removed if not proactively working with other chairs to fulfill responsibilities. Coordinated leaves of absence serve as an exception to this requirement.
+- Number: 2 - 3
+- Defined in [sigs.yaml]
+
+### Subproject Owners
+
+- Scoped to a subproject defined in [sigs.yaml]
+- Seed members established at subproject founding
+- MUST be an escalation point for technical discussions and decisions in the subproject
+- MUST set milestone priorities or delegate this responsibility
+- MUST remain active in the role and are automatically removed from the position if they are unresponsive for > 3 months. Coordinated leaves of absence serve as an exception to this requirement.
+- MAY be removed if not proactively working with other Subproject Owners to fulfill responsibilities.
+- MAY decide to step down at any time and propose a replacement. Use [lazy-consensus] amongst subproject owners with fallback on majority vote to accept the proposal. This SHOULD be supported by a majority of subproject contributors (those having some role in the subproject).
+- MAY select additional subproject owners through a [super-majority] vote amongst subproject owners. This SHOULD be supported by a majority of subproject contributors (through [lazy-consensus] with fallback on voting).
+- Number: 3 - 5
+- Defined in [sigs.yaml] [OWNERS] files
+
+**IMPORTANT**
+
+_With regards to leadership roles i.e., Chairs, Technical Leads, and Subproject Owners, we MUST, as a SIG, ensure that positions are held by a committee of members across a diverse set of companies. This allows for thoughtful discussion and structural management that can serve the needs of every consumer of Kubernetes on Azure._
+
+### Members
+
+- MUST maintain health of at least one subproject or the health of the SIG
+- MUST show sustained contributions to at least one subproject or to the SIG
+- SHOULD hold some documented role or responsibility in the SIG and / or at least one subproject (e.g. reviewer, approver, etc)
+- MAY build new functionality for subprojects
+- MAY participate in decision making for the subprojects they hold roles in
+- Includes all reviewers and approvers in [OWNERS] files for subprojects
+
+## Organizational management
+
+- SIG meets bi-weekly on zoom with agenda in meeting notes
+ - SHOULD be facilitated by chairs unless delegated to specific Members
+- SIG overview and deep-dive sessions organized for Kubecon
+ - SHOULD be organized by chairs unless delegated to specific Members
+- Contributing instructions defined in the SIG CONTRIBUTING.md
+
+### Project management
+
+#### Subproject creation
+
+Subprojects may be created by [KEP] proposal and accepted by [lazy-consensus] with fallback on majority vote of SIG Technical Leads. The result SHOULD be supported by the majority of SIG members.
+
+- KEP MUST establish subproject owners
+- [sigs.yaml] MUST be updated to include subproject information and [OWNERS] files with subproject owners
+- Where subprojects processes differ from the SIG governance, they MUST document how
+ - e.g., if subprojects release separately - they must document how release and planning is performed
+
+Subprojects must define how releases are performed and milestones are set.
+
+Example:
+- Release milestones
+ - Follows the kubernetes/kubernetes release milestones and schedule
+ - Priorities for upcoming release are discussed during the SIG meeting following the preceding release and shared through a PR. Priorities are finalized before feature freeze.
+- Code and artifacts are published as part of the kubernetes/kubernetes release
+
+### Technical processes
+
+Subprojects of the SIG MUST use the following processes unless explicitly following alternatives they have defined.
+
+- Proposing and making decisions
+ - Proposals sent as [KEP] PRs and published to Google group as announcement
+ - Follow [KEP] decision making process
+
+- Test health
+ - Canonical health of code published to
+ - Consistently broken tests automatically send an alert to
+ - SIG members are responsible for responding to broken tests alert. PRs that break tests should be rolled back if not fixed within 24 hours (business hours).
+ - Test dashboard checked and reviewed at the start of each SIG meeting. Owners assigned for any broken tests and followed up during the next SIG meeting.
+
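+For reference, a [KEP] PR typically opens with YAML front matter along these lines (abbreviated from the KEP template; the values shown are placeholders):
+
+```yaml
+---
+kep-number: 0
+title: Descriptive Title of the Enhancement
+authors:
+  - "@your-github-handle"
+owning-sig: sig-azure
+status: provisional
+---
+```
+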
+Issues impacting multiple subprojects in the SIG should be resolved by SIG Technical Leads, with fallback to consensus of subproject owners.
+
+[lazy-consensus]: http://communitymgt.wikia.com/wiki/Lazy_consensus
+[super-majority]: https://en.wikipedia.org/wiki/Supermajority#Two-thirds_vote
+[KEP]: https://github.com/kubernetes/community/blob/master/keps/0000-kep-template.md
+[sigs.yaml]: https://github.com/kubernetes/community/blob/master/sigs.yaml#L1454
+[OWNERS]: contributors/devel/owners.md \ No newline at end of file
diff --git a/sig-cli/README.md b/sig-cli/README.md
index 6c3a1892..f44c4e86 100644
--- a/sig-cli/README.md
+++ b/sig-cli/README.md
@@ -40,6 +40,9 @@ The following subprojects are owned by sig-cli:
- Owners:
- https://raw.githubusercontent.com/kubernetes/kubectl/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubectl/OWNERS
+- **kustomize**
+ - Owners:
+ - https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/OWNERS
## GitHub Teams
diff --git a/sig-cloud-provider/CHARTER.md b/sig-cloud-provider/CHARTER.md
new file mode 100644
index 00000000..1e5d6016
--- /dev/null
+++ b/sig-cloud-provider/CHARTER.md
@@ -0,0 +1,100 @@
+# SIG Cloud Provider Charter
+
+## Mission
+The Cloud Provider SIG ensures that the Kubernetes ecosystem is evolving in a way that is neutral to all (public and private) cloud providers. It will be responsible for establishing standards and requirements that must be met by all providers to ensure optimal integration with Kubernetes.
+
+## Subprojects & Areas of Focus
+
+* Maintaining parts of the Kubernetes project that allow Kubernetes to integrate with the underlying provider. These include but are not limited to:
+ * [cloud provider interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go)
+ * [cloud-controller-manager](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager)
+ * Deployment tooling which has historically resided under [cluster/](https://github.com/kubernetes/kubernetes/tree/release-1.11/cluster)
+* Code ownership for all cloud providers that fall under the kubernetes organization and have opted to be subprojects of SIG Cloud Provider. Following the guidelines around subprojects, we anticipate providers will have full autonomy to maintain their own repositories; however, official code ownership will still belong to SIG Cloud Provider.
+ * [cloud-provider-azure](https://github.com/kubernetes/cloud-provider-azure)
+ * [cloud-provider-gcp](https://github.com/kubernetes/cloud-provider-gcp)
+ * [cloud-provider-openstack](https://github.com/kubernetes/cloud-provider-openstack)
+ * [cloud-provider-vsphere](https://github.com/kubernetes/cloud-provider-vsphere)
+* Standards for documentation that should be included by all providers.
+* Defining processes/standards for E2E tests that should be reported by all providers.
+* Developing future functionality in Kubernetes to support use cases common to all providers while also allowing custom and pluggable implementations when required, some examples include but are not limited to:
+ * Extendable node statuses and machine states based on provider
+ * Extendable node address types based on provider
+ * See also [Cloud Controller Manager KEP](https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md)
+* The collection of user experience reports from Kubernetes operators running on provider subprojects, and the delivery of roadmap information to SIG PM.
+
+## Organizational Management
+
+* Six months after this charter is first ratified, it MUST be reviewed and re-approved by the SIG in order to evaluate the assumptions made in its initial drafting
+* SIG meets bi-weekly on Zoom with agenda in meeting notes.
+ * SHOULD be facilitated by chairs unless delegated to specific Members
+* The SIG MUST make a best effort to provide leadership opportunities to individuals who represent different races, national origins, ethnicities, genders, abilities, sexual preferences, ages, backgrounds, levels of educational achievement, and socioeconomic statuses
+
+## Subproject Creation
+
+Each Kubernetes provider will (eventually) be a subproject under SIG Cloud Provider. To add new subprojects (providers), SIG Cloud Provider will maintain an open list of requirements that must be satisfied.
+The current requirements can be seen [here](https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md#repository-requirements). Each provider subproject is entitled to create 1..N repositories related to cluster turn-up or operation on their platform, subject to technical standards set by SIG Cloud Provider.
+Creation of a repository SHOULD follow the KEP process to preserve the motivation for the repository and any additional instructions for how other SIGs (e.g., SIG Documentation and SIG Release) should interact with the repository.
+
+Subprojects that fall under SIG Cloud Provider may also be features in Kubernetes that are requested or needed by all, or at least a large majority of, providers. The creation process for these subprojects will follow the usual KEP process.
+
+## Subproject Retirement
+
+Subprojects representing Kubernetes providers may be retired if they fail to satisfy the requirements for more than 6 months. Final decisions for retirement should be supported by a majority of SIG members using [lazy consensus](http://communitymgt.wikia.com/wiki/Lazy_consensus). Once retired, any code related to that provider will be archived into the kubernetes-retired organization.
+
+Subprojects representing Kubernetes features may be retired at any point given a lack of development or a lack of demand. Final decisions for retirement should be supported by a majority of SIG members, ideally from every provider. Once retired, any code related to that subproject will be archived into the kubernetes-retired organization.
+
+
+## Technical Processes
+Subprojects (providers) of the SIG MUST use the following processes unless explicitly following alternatives they have defined.
+
+* Proposals will be sent as [KEP](https://github.com/kubernetes/community/blob/master/keps/0000-kep-template.md) PRs, and published to the official group mailing list as an announcement
+* Proposals, once submitted, SHOULD be placed on the next full meeting agenda
+* Decisions within the scope of individual subprojects should be made by lazy consensus by subproject owners, with fallback to majority vote by subproject owners; if a decision can’t be made, it should be escalated to the SIG Chairs
+* Issues impacting multiple subprojects in the SIG should be resolved by consensus of the owners of the involved subprojects; if a decision can’t be made, it should be escalated to the SIG Chairs
+
+## Roles
+The following roles are required for the SIG to function properly. In the event that any role is unfilled, the SIG will make a best effort to fill it. Any decisions reliant on a missing role will be postponed until the role is filled.
+
+
+### Chairs
+* 3 chairs are required
+* Run operations and processes governing the SIG
+* An initial set of chairs was established at the time the SIG was founded.
+* Chairs MAY decide to step down at any time and propose a replacement, who must be approved by all of the other chairs. This SHOULD be supported by a majority of SIG Members.
+* Chairs MAY select additional chairs using lazy consensus amongst SIG Members.
+* Chairs MUST remain active in the role; they are automatically removed from the position if unresponsive for > 3 months, and MAY be removed by consensus of the other Chairs and members if not proactively working with other Chairs to fulfill responsibilities.
+* Chairs WILL be asked to step down in the event of inappropriate behavior or code of conduct issues.
+* SIG Cloud Provider cannot have more than one chair from any one company.
+
+### Subproject/Provider Owners
+* There should be at least 1 representative per subproject/provider (though 3 is recommended to avoid deadlock) as specified in the OWNERS file of each cloud provider repository.
+* MUST be an escalation point for technical discussions and decisions in the subproject/provider
+* MUST set milestone priorities or delegate this responsibility
+* MUST remain active in the role; they are automatically removed from the position if unresponsive for > 3 months, and MAY be removed by consensus of other subproject owners and Chairs if not proactively working with other Subproject Owners to fulfill responsibilities.
+* MAY decide to step down at any time and propose a replacement. This can be done by updating the OWNERS file for any subprojects.
+* MAY select additional subproject owners by updating the OWNERS file (see the sketch below).
+* WILL be asked to step down in the event of inappropriate behavior or code of conduct issues.
+
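+For illustration, adding or replacing an owner amounts to a change to the repository's OWNERS file, which might look roughly like this (a sketch; the handles are hypothetical):
+
+```yaml
+# OWNERS file at the root of a cloud provider repository
+approvers:
+  - provider-owner-alice    # hypothetical subproject owner
+  - provider-owner-bob      # hypothetical subproject owner
+reviewers:
+  - provider-reviewer-carol # hypothetical reviewer
+```
+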
+### SIG Members
+
+Approvers and reviewers in the OWNERS files of all subprojects under SIG Cloud Provider.
+
+## Long Term Goals
+
+The long term goal of SIG Cloud Provider is to promote a vendor-neutral ecosystem for our community. Vendors wanting to support Kubernetes should feel as empowered to do so
+as any of today’s existing cloud providers; more importantly, the SIG will ensure a high-quality user experience across providers. The SIG will act as a central group for developing
+the Kubernetes project in a way that ensures all providers share common privileges and responsibilities. Below are some concrete goals for how SIG Cloud Provider plans to accomplish this.
+
+### Consolidating Existing Cloud SIGs
+
+SIG Cloud Provider will aim to eventually consolidate existing cloud provider SIGs and have each provider instead form a subproject under it. The subprojects would drive the development of
+individual providers and work closely with SIG Cloud Provider to ensure compatibility with Kubernetes. With this model, code ownership for new and existing providers will belong to SIG Cloud Provider,
+limiting SIG sprawl as more providers support Kubernetes. Existing SIGs representing cloud providers are highly encouraged to opt in as subprojects under SIG Cloud Provider but are not required to do so.
+When a SIG opts in, it will work with SIG Cloud Provider to ensure a smooth transition, typically over the course of 3 release cycles.
+
+### Supporting New Cloud Providers
+
+One of the primary goals of SIG Cloud Provider is to become an entrypoint for new providers wishing to support Kubernetes on their platform and to ensure technical excellence from each of those providers.
+SIG Cloud Provider will accomplish this by maintaining documentation around how new providers can get started and managing the set of requirements that must be met to onboard them. In addition to
+onboarding new providers, the entire lifecycle of providers will also fall under the responsibility of SIG Cloud Provider, which may involve clean-up work if a provider decides to no longer support Kubernetes.
+
diff --git a/sig-cloud-provider/OWNERS b/sig-cloud-provider/OWNERS
new file mode 100644
index 00000000..1c2834b1
--- /dev/null
+++ b/sig-cloud-provider/OWNERS
@@ -0,0 +1,6 @@
+reviewers:
+ - sig-cloud-provider-leads
+approvers:
+ - sig-cloud-provider-leads
+labels:
+ - sig/cloud-provider
diff --git a/sig-cloud-provider/README.md b/sig-cloud-provider/README.md
new file mode 100644
index 00000000..a5a7119d
--- /dev/null
+++ b/sig-cloud-provider/README.md
@@ -0,0 +1,75 @@
+<!---
+This is an autogenerated file!
+
+Please do not edit this file directly, but instead make changes to the
+sigs.yaml file in the project root.
+
+To understand how this file is generated, see https://git.k8s.io/community/generator/README.md
+-->
+# Cloud Provider Special Interest Group
+
+Ensures that the Kubernetes ecosystem is evolving in a way that is neutral to all (public and private) cloud providers. It will be responsible for establishing standards and requirements that must be met by all providers to ensure optimal integration with Kubernetes.
+
+## Meetings
+* Regular SIG Meeting: [Wednesdays at 10:00 PT (Pacific Time)](https://zoom.us/my/sigcloudprovider) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=10:00&tz=PT%20%28Pacific%20Time%29).
+ * [Meeting notes and Agenda](TODO).
+ * [Meeting recordings](TODO).
+
+## Leadership
+
+### Chairs
+The Chairs of the SIG run operations and processes governing the SIG.
+
+* Andrew Sy Kim (**[@andrewsykim](https://github.com/andrewsykim)**), DigitalOcean
+* Chris Hoge (**[@hogepodge](https://github.com/hogepodge)**), OpenStack Foundation
+* Jago Macleod (**[@jagosan](https://github.com/jagosan)**), Google
+
+## Contact
+* [Slack](https://kubernetes.slack.com/messages/sig-cloud-provider)
+* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider)
+* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fcloud-provider)
+
+## Subprojects
+
+The following subprojects are owned by sig-cloud-provider:
+- **kubernetes-cloud-provider**
+ - Owners:
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/cmd/cloud-controller-manager/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/cloud/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/cloudprovider/OWNERS
+- **cloud-provider-azure**
+ - Owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-azure/master/OWNERS
+- **cloud-provider-gcp**
+ - Owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-gcp/master/OWNERS
+- **cloud-provider-openstack**
+ - Owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/OWNERS
+- **cloud-provider-vsphere**
+ - Owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/OWNERS
+
+## GitHub Teams
+
+The below teams can be mentioned on issues and PRs in order to get attention from the right people.
+Note that the links to display team membership will only work if you are a member of the org.
+
+The google groups contain the archive of Github team notifications.
+Mentioning a team on Github will CC its group.
+Monitor these for Github activity if you are not a member of the team.
+
+| Team Name | Details | Google Groups | Description |
+| --------- |:-------:|:-------------:| ----------- |
+| @kubernetes/sig-cloud-provider-api-reviews | [link](https://github.com/orgs/kubernetes/teams/sig-cloud-provider-api-reviews) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider-api-reviews) | API Changes and Reviews |
+| @kubernetes/sig-cloud-provider-bugs | [link](https://github.com/orgs/kubernetes/teams/sig-cloud-provider-bugs) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider-bugs) | Bug Triage and Troubleshooting |
+| @kubernetes/sig-cloud-provider-feature-requests | [link](https://github.com/orgs/kubernetes/teams/sig-cloud-provider-feature-requests) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider-feature-requests) | Feature Requests |
+| @kubernetes/sig-cloud-provider-maintainers | [link](https://github.com/orgs/kubernetes/teams/sig-cloud-provider-maintainers) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider-maintainers) | Cloud Providers Maintainers |
+| @kubernetes/sig-cloud-providers-misc | [link](https://github.com/orgs/kubernetes/teams/sig-cloud-providers-misc) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-providers-misc) | General Discussion |
+| @kubernetes/sig-cloud-provider-pr-reviews | [link](https://github.com/orgs/kubernetes/teams/sig-cloud-provider-pr-reviews) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider-pr-reviews) | PR Reviews |
+| @kubernetes/sig-cloud-provider-proposals | [link](https://github.com/orgs/kubernetes/teams/sig-cloud-provider-proposals) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider-proposals) | Design Proposals |
+| @kubernetes/sig-cloud-provider-test-failures | [link](https://github.com/orgs/kubernetes/teams/sig-cloud-provider-test-failures) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider-test-failures) | Test Failures and Triage |
+
+<!-- BEGIN CUSTOM CONTENT -->
+
+<!-- END CUSTOM CONTENT -->
diff --git a/sig-cluster-ops/README.md b/sig-cluster-ops/README.md
index e8244532..026a68ab 100644
--- a/sig-cluster-ops/README.md
+++ b/sig-cluster-ops/README.md
@@ -21,7 +21,7 @@ Promote operability and interoperability of Kubernetes clusters. We focus on sha
The Chairs of the SIG run operations and processes governing the SIG.
* Rob Hirschfeld (**[@zehicle](https://github.com/zehicle)**), RackN
-* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Microsoft
+* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Google
## Contact
* [Slack](https://kubernetes.slack.com/messages/sig-cluster-ops)
diff --git a/sig-contributor-experience/projects.md b/sig-contributor-experience/projects.md
index 17bb4fb0..641f6c38 100644
--- a/sig-contributor-experience/projects.md
+++ b/sig-contributor-experience/projects.md
@@ -23,7 +23,7 @@ Project | Owner(s)/Lead(s) | Description | Q1, Q2, Later
[Meet Our Contributors](/mentoring/meet-our-contributors.md) | @parispittman | Monthly web series similar to user office hours that allows anyone to ask new and current contributors questions about our process, ecosystem, or their stories in open source | Q1 - ongoing
[Outreachy](/mentoring/README.md) | @parispittman | Document new features, create new conceptual content, create new user paths | Q1
[Google Summer of Code](/mentoring/google-summer-of-code.md) | @nikhita | Kubernetes participation in Google Summer of Code for students | Q1 - ongoing
-["Buddy" Program](https://github.com/kubernetes/community/issues/1803) | @parispittman, @chrisshort | 1 hour 1:1 sessions for new and current contributors to have dedicated time; meet our contributors but personal | Q2
+["Buddy" Program](https://github.com/kubernetes/community/issues/1803) | @parispittman, @chris-short | 1 hour 1:1 sessions for new and current contributors to have dedicated time; meet our contributors but personal | Q2
## Contributor Documentation
Ensure the contribution process is well documented, discoverable, and consistent across repos to deliver the best contributor experience.
diff --git a/sig-governance.md b/sig-governance.md
index f2e0e3e5..254dab69 100644
--- a/sig-governance.md
+++ b/sig-governance.md
@@ -22,8 +22,8 @@ In order to standardize Special Interest Group efforts, create maximum transpare
### Prerequisites
-* Propose the new SIG publicly, including a brief mission statement, by emailing kubernetes-dev@googlegroups.com and kubernetes-users@googlegroups.com, then wait a couple of days for feedback
-* Ask a repo maintainer to create a github label, if one doesn't already exist: sig/foo
+* Propose the new SIG publicly, including a brief mission statement, by emailing kubernetes-dev@googlegroups.com and kubernetes-users@googlegroups.com, then wait a couple of days for feedback.
+* Ask a repo maintainer to create a github label, if one doesn't already exist: sig/foo.
* Request a new [kubernetes.slack.com](http://kubernetes.slack.com) channel (#sig-foo) from the #slack-admins channel. New users can join at [slack.kubernetes.io](http://slack.kubernetes.io).
* Slack activity is archived at [kubernetes.slackarchive.io](http://kubernetes.slackarchive.io). To start archiving a new channel invite the slackarchive bot to the channel via `/invite @slackarchive`
* Organize video meetings as needed. No need to wait for the [Weekly Community Video Conference](community/README.md) to discuss. Please report summary of SIG activities there.
@@ -54,7 +54,7 @@ Create Google Groups at [https://groups.google.com/forum/#!creategroup](https://
* Create groups using the name conventions below;
* Groups should be created as e-mail lists with at least three owners (including sarahnovotny at google.com and ihor.dvoretskyi at gmail.com);
* To add the owners, visit the Group Settings (drop-down menu on the right side), select Direct Add Members on the left side and add Sarah and Ihor via email address (with a suitable welcome message); in Members/All Members select Ihor and Sarah and assign to an "owner role";
-* Set "View topics", "Post", "Join the Group" permissions to be "Public"
+* Set "View topics", "Post", "Join the Group" permissions to be "Public";
Name convention:
diff --git a/sig-list.md b/sig-list.md
index b69f16d6..a7a8dce2 100644
--- a/sig-list.md
+++ b/sig-list.md
@@ -24,15 +24,16 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md)
|------|-------|--------|---------|----------|
|[API Machinery](sig-api-machinery/README.md)|api-machinery|* [Daniel Smith](https://github.com/lavalamp), Google<br>* [David Eads](https://github.com/deads2k), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-api-machinery)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-api-machinery)|* Regular SIG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/apimachinery)<br>
|[Apps](sig-apps/README.md)|apps|* [Matt Farina](https://github.com/mattfarina), Samsung SDS<br>* [Adnan Abdulhussein](https://github.com/prydonius), Bitnami<br>* [Kenneth Owens](https://github.com/kow3ns), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-apps)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-apps)|* Regular SIG Meeting: [Mondays at 9:00 PT (Pacific Time) (weekly)](https://zoom.us/my/sig.apps)<br>* (charts) Charts Chat: [Tuesdays at 9:00 PT (Pacific Time) (biweekly)](https://zoom.us/j/166909412)<br>* (helm) Helm Developer call: [Thursdays at 9:30 PT (Pacific Time) (weekly)](https://zoom.us/j/4526666954)<br>
-|[Architecture](sig-architecture/README.md)|architecture|* [Brian Grant](https://github.com/bgrant0607), Google<br>* [Jaice Singer DuMars](https://github.com/jdumars), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-architecture)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-architecture)|* Regular SIG Meeting: [Thursdays at 15:30 UTC (weekly)](https://zoom.us/j/9690526922)<br>
+|[Architecture](sig-architecture/README.md)|architecture|* [Brian Grant](https://github.com/bgrant0607), Google<br>* [Jaice Singer DuMars](https://github.com/jdumars), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-architecture)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-architecture)|* Regular SIG Meeting: [Thursdays at 15:30 UTC (weekly)](https://zoom.us/j/9690526922)<br>
|[Auth](sig-auth/README.md)|auth|* [Eric Chiang](https://github.com/ericchiang), Red Hat<br>* [Jordan Liggitt](https://github.com/liggitt), Red Hat<br>* [Tim Allclair](https://github.com/tallclair), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-auth)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-auth)|* Regular SIG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/k8s.sig.auth)<br>
|[Autoscaling](sig-autoscaling/README.md)|autoscaling|* [Marcin Wielgus](https://github.com/mwielgus), Google<br>* [Solly Ross](https://github.com/directxman12), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-autoscaling)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-autoscaling)|* Regular SIG Meeting: [Mondays at 14:00 UTC (biweekly/triweekly)](https://zoom.us/my/k8s.sig.autoscaling)<br>
|[AWS](sig-aws/README.md)|aws|* [Justin Santa Barbara](https://github.com/justinsb)<br>* [Kris Nova](https://github.com/kris-nova), Heptio<br>* [Bob Wise](https://github.com/countspongebob), AWS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-aws)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-aws)|* Regular SIG Meeting: [Fridays at 9:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/k8ssigaws)<br>
-|[Azure](sig-azure/README.md)|azure|* [Jason Hansen](https://github.com/slack), Microsoft<br>* [Cole Mickens](https://github.com/colemickens), Red Hat<br>* [Jaice Singer DuMars](https://github.com/jdumars), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-azure)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-azure)|* Regular SIG Meeting: [Wednesdays at 16:00 UTC (weekly)](https://zoom.us/j/2015551212)<br>
+|[Azure](sig-azure/README.md)|azure|* [Stephen Augustus](https://github.com/justaugustus), Red Hat<br>* [Shubheksha Jalan](https://github.com/shubheksha), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-azure)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-azure)|* Regular SIG Meeting: [Wednesdays at 16:00 UTC (biweekly)](https://zoom.us/j/2015551212)<br>
|[Big Data](sig-big-data/README.md)|big-data|* [Anirudh Ramanathan](https://github.com/foxish), Google<br>* [Erik Erlandson](https://github.com/erikerlandson), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-big-data)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-big-data)|* Regular SIG Meeting: [Wednesdays at 17:00 UTC (weekly)](https://zoom.us/my/sig.big.data)<br>
|[CLI](sig-cli/README.md)|cli|* [Maciej Szulik](https://github.com/soltysh), Red Hat<br>* [Phillip Wittrock](https://github.com/pwittrock), Google<br>* [Tony Ado](https://github.com/AdoHe), Alibaba<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cli)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cli)|* Regular SIG Meeting: [Wednesdays at 09:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/sigcli)<br>
+|[Cloud Provider](sig-cloud-provider/README.md)|cloud-provider|* [Andrew Sy Kim](https://github.com/andrewsykim), DigitalOcean<br>* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation<br>* [Jago Macleod](https://github.com/jagosan), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cloud-provider)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider)|* Regular SIG Meeting: [Wednesdays at 10:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/sigcloudprovider)<br>
|[Cluster Lifecycle](sig-cluster-lifecycle/README.md)|cluster-lifecycle|* [Luke Marsden](https://github.com/lukemarsden), Weave<br>* [Robert Bailey](https://github.com/roberthbailey), Google<br>* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)<br>* [Timothy St. Clair](https://github.com/timothysc), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular SIG Meeting: [Tuesdays at 09:00 PT (Pacific Time) (weekly)](https://zoom.us/j/166836%E2%80%8B624)<br>* kubeadm Office Hours: [Wednesdays at 09:00 PT (Pacific Time) (weekly)](https://zoom.us/j/166836%E2%80%8B624)<br>* Cluster API working group: [Wednesdays at 10:00 PT (Pacific Time) (weekly)](https://zoom.us/j/166836%E2%80%8B624)<br>* kops Office Hours: [Fridays at 09:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/k8ssigaws)<br>
-|[Cluster Ops](sig-cluster-ops/README.md)|cluster-ops|* [Rob Hirschfeld](https://github.com/zehicle), RackN<br>* [Jaice Singer DuMars](https://github.com/jdumars), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-ops)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-ops)|* Regular SIG Meeting: [Thursdays at 20:00 UTC (biweekly)](https://zoom.us/j/297937771)<br>
+|[Cluster Ops](sig-cluster-ops/README.md)|cluster-ops|* [Rob Hirschfeld](https://github.com/zehicle), RackN<br>* [Jaice Singer DuMars](https://github.com/jdumars), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-ops)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-ops)|* Regular SIG Meeting: [Thursdays at 20:00 UTC (biweekly)](https://zoom.us/j/297937771)<br>
|[Contributor Experience](sig-contributor-experience/README.md)|contributor-experience|* [Elsie Phillips](https://github.com/Phillels), CoreOS<br>* [Paris Pittman](https://github.com/parispittman), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-contribex)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-contribex)|* Regular SIG Meeting: [Wednesdays at 9:30 PT (Pacific Time) (weekly)](https://zoom.us/j/7658488911)<br>
|[Docs](sig-docs/README.md)|docs|* [Zach Corleissen](https://github.com/zacharysarah), Linux Foundation<br>* [Andrew Chen](https://github.com/chenopis), Google<br>* [Jared Bhatti](https://github.com/jaredbhatti), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-docs)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-docs)|* Regular SIG Meeting: [Tuesdays at 17:30 UTC (weekly)](https://zoom.us/j/678394311)<br>
|[GCP](sig-gcp/README.md)|gcp|* [Adam Worrall](https://github.com/abgworrall), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-gcp)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-gcp)|* Regular SIG Meeting: [Thursdays at 16:00 UTC (biweekly)](https://zoom.us/j/761149873)<br>
@@ -42,15 +43,21 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md)
|[Network](sig-network/README.md)|network|* [Tim Hockin](https://github.com/thockin), Google<br>* [Dan Williams](https://github.com/dcbw), Red Hat<br>* [Casey Davenport](https://github.com/caseydavenport), Tigera<br>|* [Slack](https://kubernetes.slack.com/messages/sig-network)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-network)|* Regular SIG Meeting: [Thursdays at 14:00 PT (Pacific Time) (biweekly)](https://zoom.us/j/5806599998)<br>
|[Node](sig-node/README.md)|node|* [Dawn Chen](https://github.com/dchen1107), Google<br>* [Derek Carr](https://github.com/derekwaynecarr), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-node)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-node)|* Regular SIG Meeting: [Tuesdays at 10:00 PT (Pacific Time) (weekly)](https://zoom.us/j/4799874685)<br>
|[OpenStack](sig-openstack/README.md)|openstack|* [Chris Hoge](https://github.com/hogepodge), OpenStack Foundation<br>* [David Lyle](https://github.com/dklyle), Intel<br>* [Robert Morse](https://github.com/rjmorse), Ticketmaster<br>|* [Slack](https://kubernetes.slack.com/messages/sig-openstack)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-openstack)|* Regular SIG Meeting: [Wednesdays at 16:00 PT (Pacific Time) (biweekly)](https://zoom.us/j/417251241)<br>
|[PM](sig-pm/README.md)|pm|* [Aparna Sinha](https://github.com/apsinha), Google<br>* [Ihor Dvoretskyi](https://github.com/idvoretskyi), CNCF<br>* [Caleb Miles](https://github.com/calebamiles), Google<br>|* [Slack](https://kubernetes.slack.com/messages/kubernetes-pm)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-pm)|* Regular SIG Meeting: [Tuesdays at 18:30 UTC (biweekly)](https://zoom.us/j/845373595)<br>
-|[Release](sig-release/README.md)|release|* [Jaice Singer DuMars](https://github.com/jdumars), Microsoft<br>* [Caleb Miles](https://github.com/calebamiles), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-release)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-release)|* Regular SIG Meeting: [Tuesdays at 21:00 UTC (biweekly)](https://zoom.us/j/664772523)<br>
-|[Scalability](sig-scalability/README.md)|scalability|* [Wojciech Tyczynski](https://github.com/wojtek-t), Google<br>* [Bob Wise](https://github.com/countspongebob), Samsung SDS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-scalability)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-scale)|* Regular SIG Meeting: [Thursdays at 16:30 UTC (bi-weekly)](https://zoom.us/j/989573207)<br>
+|[Release](sig-release/README.md)|release|* [Jaice Singer DuMars](https://github.com/jdumars), Google<br>* [Caleb Miles](https://github.com/calebamiles), Google<br>|* [Slack](https://kubernetes.slack.com/messages/sig-release)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-release)|* Regular SIG Meeting: [Tuesdays at 21:00 UTC (biweekly)](https://zoom.us/j/664772523)<br>
+|[Scalability](sig-scalability/README.md)|scalability|* [Wojciech Tyczynski](https://github.com/wojtek-t), Google<br>* [Bob Wise](https://github.com/countspongebob), AWS<br>|* [Slack](https://kubernetes.slack.com/messages/sig-scalability)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-scale)|* Regular SIG Meeting: [Thursdays at 16:30 UTC (bi-weekly)](https://zoom.us/j/989573207)<br>
|[Scheduling](sig-scheduling/README.md)|scheduling|* [Bobby (Babak) Salamat](https://github.com/bsalamat), Google<br>* [Klaus Ma](https://github.com/k82cn), IBM<br>|* [Slack](https://kubernetes.slack.com/messages/sig-scheduling)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-scheduling)|* Regular SIG Meeting: [Thursdays at 20:00 UTC (biweekly)](https://zoom.us/j/7767391691)<br>
|[Service Catalog](sig-service-catalog/README.md)|service-catalog|* [Paul Morie](https://github.com/pmorie), Red Hat<br>* [Aaron Schlesinger](https://github.com/arschles), Microsoft<br>* [Ville Aikas](https://github.com/vaikas-google), Google<br>* [Doug Davis](https://github.com/duglin), IBM<br>|* [Slack](https://kubernetes.slack.com/messages/sig-service-catalog)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-service-catalog)|* Regular SIG Meeting: [Mondays at 13:00 PT (Pacific Time) (weekly)](https://zoom.us/j/7201225346)<br>
|[Storage](sig-storage/README.md)|storage|* [Saad Ali](https://github.com/saad-ali), Google<br>* [Bradley Childs](https://github.com/childsb), Red Hat<br>|* [Slack](https://kubernetes.slack.com/messages/sig-storage)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-storage)|* Regular SIG Meeting: [Thursdays at 9:00 PT (Pacific Time) (biweekly)](https://zoom.us/j/614261834)<br>
-|[Testing](sig-testing/README.md)|testing|* [Aaron Crickenberger](https://github.com/spiffxp), Samsung SDS<br>* [Erick Feja](https://github.com/fejta), Google<br>* [Steve Kuznetsov](https://github.com/stevekuznetsov), Red Hat<br>* [Timothy St. Clair](https://github.com/timothysc), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-testing)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-testing)|* Regular SIG Meeting: [Tuesdays at 13:00 PT (Pacific Time) (weekly)](https://zoom.us/my/k8s.sig.testing)<br>* (testing-commons) Testing Commons: [Wednesdays at 07:30 PT (Pacific Time) (bi-weekly)](https://zoom.us/my/k8s.sig.testing)<br>
+|[Testing](sig-testing/README.md)|testing|* [Aaron Crickenberger](https://github.com/spiffxp)<br>* [Erick Feja](https://github.com/fejta), Google<br>* [Steve Kuznetsov](https://github.com/stevekuznetsov), Red Hat<br>* [Timothy St. Clair](https://github.com/timothysc), Heptio<br>|* [Slack](https://kubernetes.slack.com/messages/sig-testing)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-testing)|* Regular SIG Meeting: [Tuesdays at 13:00 PT (Pacific Time) (weekly)](https://zoom.us/my/k8s.sig.testing)<br>* (testing-commons) Testing Commons: [Wednesdays at 07:30 PT (Pacific Time) (bi-weekly)](https://zoom.us/my/k8s.sig.testing)<br>
|[UI](sig-ui/README.md)|ui|* [Dan Romlein](https://github.com/danielromlein), Google<br>* [Sebastian Florek](https://github.com/floreks), Fujitsu<br>|* [Slack](https://kubernetes.slack.com/messages/sig-ui)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)|* Regular SIG Meeting: [Thursdays at 18:00 CET (Central European Time) (weekly)](https://groups.google.com/forum/#!forum/kubernetes-sig-ui)<br>
-|[VMware](sig-vmware/README.md)|vmware|* [Fabio Rapposelli](https://github.com/frapposelli), VMware<br>* [Steve Wong](https://github.com/cantbewong), VMware<br>|* [Slack](https://kubernetes.slack.com/messages/sig-vmware)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware)|* Regular SIG Meeting: [Thursdays at 18:00 UTC (bi-weekly)](https://zoom.us/j/183662780)<br>
+|[VMware](sig-vmware/README.md)|vmware|* [Fabio Rapposelli](https://github.com/frapposelli), VMware<br>* [Steve Wong](https://github.com/cantbewong), VMware<br>|* [Slack](https://kubernetes.slack.com/messages/sig-vmware)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware)|* Regular SIG Meeting: [Thursdays at 18:00 UTC (bi-weekly)](https://zoom.us/j/183662780)<br>* Cloud Provider vSphere weekly syncup: [Wednesdays at 16:30 UTC (weekly)](https://zoom.us/j/584244729)<br>
|[Windows](sig-windows/README.md)|windows|* [Michael Michael](https://github.com/michmike), Apprenda<br>* [Patrick Lang](https://github.com/patricklang), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/sig-windows)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-windows)|* Regular SIG Meeting: [Tuesdays at 12:30 Eastern Standard Time (EST) (weekly)](https://zoom.us/my/sigwindows)<br>
### Master Working Group List
@@ -58,10 +65,10 @@ When the need arises, a [new SIG can be created](sig-creation-procedure.md)
| Name | Organizers | Contact | Meetings |
|------|------------|---------|----------|
|[App Def](wg-app-def/README.md)|* [Antoine Legrand](https://github.com/ant31), CoreOS<br>* [Bryan Liles](https://github.com/bryanl), Heptio<br>* [Gareth Rushgrove](https://github.com/garethr), Docker<br>|* [Slack](https://kubernetes.slack.com/messages/wg-app-def)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-app-def)|* Regular WG Meeting: [Wednesdays at 16:00 UTC (bi-weekly)](https://zoom.us/j/748123863)<br>
-|[Apply](wg-apply/README.md)|* [Daniel Smith](https://github.com/lavalamp), Google<br>|* [Slack](https://kubernetes.slack.com/messages/wg-apply)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-apply)|* Regular WG Meeting: [Tuesdays at 9:30 PT (Pacific Time) (weekly)]()<br>
+|[Apply](wg-apply/README.md)|* [Daniel Smith](https://github.com/lavalamp), Google<br>|* [Slack](https://kubernetes.slack.com/messages/wg-apply)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-apply)|* Regular WG Meeting: [Tuesdays at 9:30 PT (Pacific Time) (weekly)](https://zoom.us/my/apimachinery)<br>
|[Cloud Provider](wg-cloud-provider/README.md)|* [Sidhartha Mani](https://github.com/wlan0), Caascade Labs<br>* [Jago Macleod](https://github.com/jagosan), Google<br>|* [Slack](https://kubernetes.slack.com/messages/wg-cloud-provider)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-cloud-provider)|* Regular WG Meeting: [Wednesdays at 10:00 PT (Pacific Time) (weekly)](https://zoom.us/my/cloudprovider)<br>
|[Cluster API](wg-cluster-api/README.md)|* [Kris Nova](https://github.com/kris-nova), Heptio<br>* [Robert Bailey](https://github.com/roberthbailey), Google<br>|* [Slack](https://kubernetes.slack.com/messages/cluster-api)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular WG Meeting: [s at ()]()<br>
-|[Container Identity](wg-container-identity/README.md)|* [Clayton Coleman](https://github.com/smarterclayton), Red Hat<br>* [Greg Gastle](https://github.com/destijl), Google<br>|* [Slack](https://kubernetes.slack.com/messages/wg-container-identity)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-container-identity)|* Regular WG Meeting: [Tuesdays at 15:00 UTC (bi-weekly (On demand))](TBD)<br>
+|[Container Identity](wg-container-identity/README.md)|* [Clayton Coleman](https://github.com/smarterclayton), Red Hat<br>* [Greg Castle](https://github.com/destijl), Google<br>|* [Slack](https://kubernetes.slack.com/messages/wg-container-identity)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-container-identity)|* Regular WG Meeting: [Tuesdays at 15:00 UTC (bi-weekly (On demand))](TBD)<br>
|[Kubeadm Adoption](wg-kubeadm-adoption/README.md)|* [Lucas Käldström](https://github.com/luxas), Luxas Labs (occasionally contracting for Weaveworks)<br>* [Justin Santa Barbara](https://github.com/justinsb)<br>|* [Slack](https://kubernetes.slack.com/messages/sig-cluster-lifecycle)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)|* Regular WG Meeting: [Tuesdays at 18:00 UTC (bi-weekly)](https://zoom.us/j/166836%E2%80%8B624)<br>
|[Machine Learning](wg-machine-learning/README.md)|* [Vishnu Kannan](https://github.com/vishh), Google<br>* [Kenneth Owens](https://github.com/kow3ns), Google<br>* [Balaji Subramaniam](https://github.com/balajismaniam), Intel<br>* [Connor Doyle](https://github.com/ConnorDoyle), Intel<br>|* [Slack](https://kubernetes.slack.com/messages/wg-machine-learning)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-machine-learning)|* Regular WG Meeting: [Thursdays at 13:00 PT (Pacific Time) (biweekly)](https://zoom.us/j/4799874685)<br>
|[Multitenancy](wg-multitenancy/README.md)|* [David Oppenheimer](https://github.com/davidopp), Google<br>* [Jessie Frazelle](https://github.com/jessfraz), Microsoft<br>|* [Slack](https://kubernetes.slack.com/messages/wg-multitenancy)<br>* [Mailing List](https://groups.google.com/forum/#!forum/kubernetes-wg-multitenancy)|* Regular WG Meeting: [Wednesdays at 11:00 PT (Pacific Time) (biweekly)](https://zoom.us/my/k8s.sig.auth)<br>
diff --git a/sig-multicluster/README.md b/sig-multicluster/README.md
index 78328fd4..aeb09ffe 100644
--- a/sig-multicluster/README.md
+++ b/sig-multicluster/README.md
@@ -8,7 +8,7 @@ To understand how this file is generated, see https://git.k8s.io/community/gener
-->
# Multicluster Special Interest Group
-A Special Interest Group focussed on solving common challenges related to the management of multiple Kubernetes clusters, and applications that exist therein. The SIG will be responsible for designing, discussing, implementing and maintaining API’s, tools and documentation related to multi-cluster administration and application management. This includes not only active automated approaches such as Cluster Federation, but also those that employ batch workflow-style continuous deployment systems like Spinnaker and others. Standalone building blocks for these and other similar systems (for example a cluster registry), and proposed changes to kubernetes core where appropriate will also be in scope.
+A Special Interest Group focused on solving common challenges related to the management of multiple Kubernetes clusters, and applications that exist therein. The SIG will be responsible for designing, discussing, implementing and maintaining API’s, tools and documentation related to multi-cluster administration and application management. This includes not only active automated approaches such as Cluster Federation, but also those that employ batch workflow-style continuous deployment systems like Spinnaker and others. Standalone building blocks for these and other similar systems (for example a cluster registry), and proposed changes to kubernetes core where appropriate will also be in scope.
## Meetings
* Regular SIG Meeting: [Tuesdays at 9:30 PT (Pacific Time)](https://zoom.us/my/k8s.mc) (biweekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:30&tz=PT%20%28Pacific%20Time%29).
diff --git a/sig-release/README.md b/sig-release/README.md
index 96fc31aa..adcf7608 100644
--- a/sig-release/README.md
+++ b/sig-release/README.md
@@ -19,7 +19,7 @@ To understand how this file is generated, see https://git.k8s.io/community/gener
### Chairs
The Chairs of the SIG run operations and processes governing the SIG.
-* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Microsoft
+* Jaice Singer DuMars (**[@jdumars](https://github.com/jdumars)**), Google
* Caleb Miles (**[@calebamiles](https://github.com/calebamiles)**), Google
## Contact
diff --git a/sig-scalability/README.md b/sig-scalability/README.md
index a22134e2..e6a979cb 100644
--- a/sig-scalability/README.md
+++ b/sig-scalability/README.md
@@ -23,7 +23,7 @@ For more details about our objectives please review our [Scaling And Performance
The Chairs of the SIG run operations and processes governing the SIG.
* Wojciech Tyczynski (**[@wojtek-t](https://github.com/wojtek-t)**), Google
-* Bob Wise (**[@countspongebob](https://github.com/countspongebob)**), Samsung SDS
+* Bob Wise (**[@countspongebob](https://github.com/countspongebob)**), AWS
## Contact
* [Slack](https://kubernetes.slack.com/messages/sig-scalability)
@@ -62,32 +62,17 @@ Monitor these for Github activity if you are not a member of the team.
<!-- BEGIN CUSTOM CONTENT -->
## Upcoming 2018 Meeting Dates
- * 1/18
- * 2/1
- * 2/15
- * 3/1
- * 3/15
- * 3/29
- * 4/12
- * 4/26
- * 5/10
- * 5/24
* 6/7
* 6/21
+ * 7/5
+ * 7/19
+ * 8/2
+ * 8/16
+ * 8/30
+ * 9/13
+ * 9/27
-## Scalability SLOs
+## Scalability/performance SLIs and SLOs
-We officially support two different SLOs:
-
-1. "API-responsiveness":
- 99% of all API calls return in less than 1s
-
-1. "Pod startup time:
- 99% of pods (with pre-pulled images) start within 5s
-
-This should be valid on appropriate hardware up to a 5000 node cluster with 30 pods/node. We eventually want to expand that to 100 pods/node.
-
-For more details how do we measure those, you can look at: http://blog.kubernetes.io/2015_09_01_archive.html
-
-We are working on refining existing SLOs and defining more for other areas of the system.
+Check out [SLIs/SLOs page](./slos/slos.md).
<!-- END CUSTOM CONTENT -->
diff --git a/sig-scalability/blogs/scalability-regressions-case-studies.md b/sig-scalability/blogs/scalability-regressions-case-studies.md
index 686a2bf8..31a757df 100644
--- a/sig-scalability/blogs/scalability-regressions-case-studies.md
+++ b/sig-scalability/blogs/scalability-regressions-case-studies.md
@@ -37,4 +37,4 @@ This document is a compilation of some interesting scalability/performance regre
- On many occasions our scalability tests caught critical/risky bugs which were missed by most other tests. If not caught, those could've seriously jeopardized production-readiness of k8s.
- SIG-Scalability has caught/fixed several important issues that span across various components, features and SIGs.
- Around 60% of times (possibly even more), we catch scalability regressions with just our medium-scale (and fast) tests, i.e gce-100 and kubemark-500. Making them run as presubmits should act as a strong shield against regressions.
-- Majority of the remaining ones are caught by our large-scale (and slow) tests, i.e kubemark-5k and gce-2k. Making them as post-submit blokcers (given they're "usually" quite healthy) should act as a second layer of protection against regressions.
+- Majority of the remaining ones are caught by our large-scale (and slow) tests, i.e kubemark-5k and gce-2k. Making them as post-submit blockers (given they're "usually" quite healthy) should act as a second layer of protection against regressions.
diff --git a/sig-scalability/slis/apimachinery_slis.md b/sig-scalability/slis/apimachinery_slis.md
deleted file mode 100644
index 512548ee..00000000
--- a/sig-scalability/slis/apimachinery_slis.md
+++ /dev/null
@@ -1,196 +0,0 @@
-# API-machinery SLIs and SLOs
-
-The document was converted from [Google Doc]. Please refer to the original for
-extended commentary and discussion.
-
-## Background
-
-Scalability is an important aspect of the Kubernetes. However, Kubernetes is
-such a large system that we need to manage users expectations in this area.
-To achieve it, we are in process of redefining what does it mean that
-Kubernetes supports X-node clusters - this doc describes the high-level
-proposal. In this doc we are describing API-machinery related SLIs we would
-like to introduce and suggest which of those should eventually have a
-corresponding SLO replacing current "99% of API calls return in under 1s" one.
-
-The SLOs we are proposing in this doc are our goal - they may not be currently
-satisfied. As a result, while in the future we would like to block the release
-when we are violating SLOs, we first need to understand where exactly we are
-now, define and implement proper tests and potentially improve the system.
-Only once this is done, we may try to introduce a policy of blocking the
-release on SLO violation. But this is out of scope of this doc.
-
-
-### SLIs and SLOs proposal
-
-Below we introduce all SLIs and SLOs we would like to have in the api-machinery
-area. A bunch of those are not easy to understand for users, as they are
-designed for developers or performance tracking of higher level
-user-understandable SLOs. The user-oriented one (which we want to publicly
-announce) are additionally highlighted with bold.
-
-### Prerequisite
-
-Kubernetes cluster is available and serving.
-
-### Latency<sup>[1](#footnote1)</sup> of API calls for single objects
-
-__***SLI1: Non-streaming API calls for single objects (POST, PUT, PATCH, DELETE,
-GET) latency for every (resource, verb) pair, measured as 99th percentile over
-last 5 minutes***__
-
-__***SLI2: 99th percentile for (resource, verb) pairs \[excluding virtual and
-aggregated resources and Custom Resource Definitions\] combined***__
-
-__***SLO: In default Kubernetes installation, 99th percentile of SLI2
-per cluster-day<sup>[2](#footnote2)</sup> <= 1s***__
-
-User stories:
-- As a user of vanilla Kubernetes, I want some guarantee how quickly I get the
-response from an API call.
-- As an administrator of Kubernetes cluster, if I know characteristics of my
-external dependencies of apiserver (e.g custom admission plugins, webhooks and
-initializers) I want to be able to provide guarantees for API calls latency to
-users of my cluster
-
-Background:
-- We obviously can’t give any guarantee in general, because cluster
-administrators are allowed to register custom admission plugins, webhooks
-and/or initializers, which we don’t have any control about and they obviously
-impact API call latencies.
-- As a result, we define the SLIs to be very generic (no matter how your
-cluster is set up), but we provide SLO only for default installations (where we
-have control over what apiserver is doing). This doesn’t provide a false
-impression, that we provide guarantee no matter how the cluster is setup and
-what is installed on top of it.
-- At the same time, API calls are part of pretty much every non-trivial workflow
-in Kubernetes, so this metric is a building block for less trivial SLIs and
-SLOs.
-
-Other notes:
-- The SLO has to be satisfied independently from from the used encoding. This
-makes the mix of client important while testing. However, we assume that all
-`core` components communicate with apiserver with protocol buffers (otherwise
-the SLO doesn’t have to be satisfied).
-- In case of GET requests, user has an option to opt-in for accepting
-potentially stale data (the request is then served from cache and not hitting
-underlying storage). However, the SLO has to be satisfied even if all requests
-ask for up-to-date data, which again makes careful choice of requests in tests
-important while testing.
-
-
-### Latency of API calls for multiple objects
-
-__***SLI1: Non-streaming API calls for multiple objects (LIST) latency for
-every (resource, verb) pair, measure as 99th percentile over last 5 minutes***__
-
-__***SLI2: 99th percentile for (resource, verb) pairs [excluding virtual and
-aggregated resources and Custom Resource Definitions] combined***__
-
-__***SLO1: In default Kubernetes installation, 99th percentile of SLI2 per
-cluster-day***__
-- __***is <= 1s if total number of objects of the same type as resource in the
-system <= X***__
-- __***is <= 5s if total number of objects of the same type as resource in the
-system <= Y***__
-- __***is <= 30s if total number of objects of the same types as resource in the
-system <= Z***__
-
-User stories:
-- As a user of vanilla Kubernetes, I want some guarantee how quickly I get the
-response from an API call.
-- As an administrator of Kubernetes cluster, if I know characteristics of my
-external dependencies of apiserver (e.g custom admission plugins, webhooks and
-initializers) I want to be able to provide guarantees for API calls latency to
-users of my cluster.
-
-Background:
-- On top of arguments from latency of API calls for single objects, LIST
-operations are crucial part of watch-related frameworks, which in turn are
-responsible for overall system performance and responsiveness.
-- The above SLO is user-oriented and may have significant buffer in threshold.
-In fact, the latency of the request should be proportional to the amount of
-work to do (which in our case is number of objects of a given type (potentially
-in a requested namespace if specified)) plus some constant overhead. For better
-tracking of performance, we define the other SLIs which are supposed to be
-purely internal (developer-oriented)
-
-
-_SLI3: Non-streaming API calls for multiple objects (LIST) latency minus 1s
-(maxed with 0) divided by number of objects in the collection
-<sup>[3](#footnote3)</sup> (which may be many more than the number of returned
-objects) for every (resource, verb) pair, measured as 99th percentile over
-last 5 minutes._
-
-_SLI4: 99th percentile for (resource, verb) pairs [excluding virtual and
-aggregated resources and Custom Resource Definitions] combined_
-
-_SLO2: In default Kubernetes installation, 99th percentile of SLI4 per
-cluster-day <= Xms_
-
-
-### Watch latency
-
-_SLI1: API-machinery watch latency (measured from the moment when object is
-stored in database to when it’s ready to be sent to all watchers), measured
-as 99th percentile over last 5 minutes_
-
-_SLO1 (developer-oriented): 99th percentile of SLI1 per cluster-day <= Xms_
-
-User stories:
-- As an administrator, if system is slow, I would like to know if the root
-cause is slow api-machinery or something farther the path (lack of network
-bandwidth, slow or cpu-starved controllers, ...).
-
-Background:
-- Pretty much all control loops in Kubernetes are watch-based, so slow watch
-means slow system in general. As a result, we want to give some guarantees on
-how fast it is.
-- Note that how we measure it, silently assumes no clock-skew in case of HA
-clusters.
-
-
-### Admission plugin latency
-
-_SLI1: Admission latency for each admission plugin type, measured as 99th
-percentile over last 5 minutes_
-
-User stories:
-- As an administrator, if API calls are slow, I would like to know if this is
-because slow admission plugins and if so which ones are responsible.
-
-
-### Webhook latency
-
-_SLI1: Webhook call latency for each webhook type, measured as 99th percentile
-over last 5 minutes_
-
-User stories:
-- As an administrator, if API calls are slow, I would like to know if this is
-because slow webhooks and if so which ones are responsible.
-
-
-### Initializer latency
-
-_SLI1: Initializer latency for each initializer, measured as 99th percentile
-over last 5 minutes_
-
-User stories:
-- As an administrator, if API calls are slow, I would like to know if this is
-because of slow initializers and if so which ones are responsible.
-
----
-<a name="footnote1">\[1\]</a>By latency of API call in this doc we mean time
-from the moment when apiserver gets the request to last byte of response sent
-to the user.
-
-<a name="footnote2">\[2\]</a> For the purpose of visualization it will be a
-sliding window. However, for the purpose of reporting the SLO, it means one
-point per day (whether SLO was satisfied on a given day or not).
-
-<a name="footnote3">\[3\]</a>A collection contains: (a) all objects of that
-type for cluster-scoped resources, (b) all object of that type in a given
-namespace for namespace-scoped resources.
-
-
-[Google Doc]: https://docs.google.com/document/d/1Q5qxdeBPgTTIXZxdsFILg7kgqWhvOwY8uROEf0j5YBw/edit#
diff --git a/sig-scalability/slos/api_call_latency.md b/sig-scalability/slos/api_call_latency.md
new file mode 100644
index 00000000..65d7dc26
--- /dev/null
+++ b/sig-scalability/slos/api_call_latency.md
@@ -0,0 +1,47 @@
+## API call latency SLIs/SLOs details
+
+### User stories
+- As a user of vanilla Kubernetes, I want some guarantee of how quickly I get
+the response from an API call.
+- As an administrator of a Kubernetes cluster, if I know the characteristics of
+the external dependencies of my apiserver (e.g. custom admission plugins,
+webhooks and initializers), I want to be able to provide guarantees for API
+call latency to the users of my cluster.
+
+### Other notes
+- We obviously can’t give any guarantee in general, because cluster
+administrators are allowed to register custom admission plugins, webhooks
+and/or initializers, over which we have no control and which obviously
+impact API call latencies.
+- As a result, we define the SLIs to be very generic (no matter how your
+cluster is set up), but we provide the SLO only for default installations
+(where we have control over what the apiserver is doing). This avoids giving
+the false impression that we provide guarantees no matter how the cluster is
+set up and what is installed on top of it.
+- At the same time, API calls are part of pretty much every non-trivial workflow
+in Kubernetes, so this metric is a building block for less trivial SLIs and
+SLOs.
+- The SLO for latency of read-only API calls of a given type may have a
+significant buffer in its threshold. In fact, the latency of a request should
+be proportional to the amount of work to do (which is the number of objects of
+a given type in a given scope) plus some constant overhead. For better tracking
+of performance, we may want to define a purely internal SLI of "latency per
+object" (see the sketch below). However, that isn't in near-term plans.
+
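+An earlier revision of these SLO docs expressed the "latency per object" idea
+as: LIST latency minus 1s (floored at zero), divided by the number of objects
+in the collection (which may be many more than the number of returned
+objects). A minimal Go sketch of that formula (the function name and the 1s
+constant are illustrative, not an implemented metric):
+
+```go
+package slisketch
+
+import "time"
+
+// perObjectListLatency subtracts an assumed constant overhead from an observed
+// LIST latency, floors the result at zero, and normalizes it by the number of
+// objects in the collection.
+func perObjectListLatency(observed time.Duration, objectsInCollection int) time.Duration {
+	const overhead = 1 * time.Second // assumed constant per-request overhead
+	work := observed - overhead
+	if work < 0 || objectsInCollection == 0 {
+		return 0
+	}
+	return work / time.Duration(objectsInCollection)
+}
+```
+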
+### Caveats
+- The SLO has to be satisfied independently of the encoding used in
+user-originated requests. This makes the mix of clients important while
+testing. However, we assume that all `core` components communicate with the
+apiserver using protocol buffers.
+- In the case of GET requests, the user has the option to opt in to accepting
+potentially stale data (served from cache; see the sketch below), and the SLO
+again has to be satisfied independently of that. This makes the careful choice
+of requests in tests important.
+
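+As an illustration of the stale-read opt-in, a client can ask the apiserver to
+serve a list from its watch cache (instead of reading through from etcd) by
+setting `resourceVersion="0"`. A minimal client-go sketch (the helper name is
+ours, and the exact `List` signature varies across client-go versions):
+
+```go
+package slisketch
+
+import (
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+)
+
+// listFromCache issues a read that accepts potentially stale data:
+// resourceVersion="0" lets the apiserver answer from its cache.
+func listFromCache(c kubernetes.Interface, namespace string) error {
+	_, err := c.CoreV1().Pods(namespace).List(metav1.ListOptions{ResourceVersion: "0"})
+	return err
+}
+```
+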
+### TODOs
+- We may consider treating `non-namespaced` resources as a separate bucket in
+the future. However, it may not make sense if their number is comparable with
+that of `namespaced` ones.
+
+### Test scenario
+
+__TODO: Describe test scenario.__
diff --git a/sig-scalability/slos/api_extensions_latency.md b/sig-scalability/slos/api_extensions_latency.md
new file mode 100644
index 00000000..2681422c
--- /dev/null
+++ b/sig-scalability/slos/api_extensions_latency.md
@@ -0,0 +1,6 @@
+## API call extension points latency SLIs details
+
+### User stories
+- As an administrator, if API calls are slow, I would like to know if this is
+because of slow extension points (admission plugins, webhooks, initializers)
+and, if so, which ones are responsible for it.
diff --git a/sig-scalability/slos/extending_slo.md b/sig-scalability/slos/extending_slo.md
deleted file mode 100644
index 5cbbb87f..00000000
--- a/sig-scalability/slos/extending_slo.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# Extended Kubernetes scalability SLOs
-
-## Goal
-The goal of this effort is to extend SLOs which Kubernetes cluster has to meet to support given number of Nodes. As of April 2017 we have only two SLOs:
-- API-responsiveness: 99% of all API calls return in less than 1s
-- Pod startup time: 99% of Pods (with pre-pulled images) start within 5s
-which are enough to guarantee that cluster doesn't feel completely dead, but not enough to guarantee that it satisfies user's needs.
-
-We're going to define more SLOs based on most important indicators, and standardize the format in which we speak about our objectives. Our SLOs need to have two properties:
-- They need to be testable, i.e. we need to have a benchmark to measure if it's met,
-- They need to be expressed in a way that's possible to understand by a user not intimately familiar with the system internals, i.e. formulation can't depend on some arcane knowledge.
-
-On the other hand we do not require that:
-- SLOs are possible to monitor in a running cluster, i.e. not all SLOs need to be easily translatable to SLAs. Being able to benchmark is enough for us.
-
-## Split metrics from environment
-Currently what me measure and how we measure it is tightly coupled. This means that we don't have good environmental constraint suggestions for users (e.g. how many Pods per Namespace we support, how many Endpoints per Service, how to setup the cluster etc.). We need to decide on what's reasonable and make the environment explicit.
-
-## Split SLOs by kind
-Current SLOs implicitly assume that the cluster is in a "steady state". By this we mean that we assume that there's only some, limited, number of things going during benchmarking. We need to make this assumption explicit and split SLOs into two categories: steady-state SLOs and burst SLOs.
-
-## Steady state SLOs
-With steady state SLO we want to give users the data about system's behavior during normal operation. We define steady state by limiting the churn on the cluster.
-
-This includes current SLOs:
-- API call latency
-- E2e Pod startup latency
-
-By churn we understand a measure of amount changes happening in the cluster. Its formal(-ish) definition will follow, but informally it can be thought about as number of user-issued requests per second plus number of pods affected by those requests.
-
-More formally churn per second is defined as:
-```
-#Pod creations + #PodSpec updates + #user originated requests in a given second
-```
-The last part is necessary only to get rid of situations when user is spamming API server with various requests. In ordinary circumstances we expect it to be in the order of 1-2.
-
-## Burst SLOs
-With burst SLOs we want to give user idea on how system behaves under the heavy load, i.e. when one want the system to do something as quickly as possible, not caring too much about response time for a single request. Note that this voids all steady-state SLOs.
-
-This includes the new SLO:
-- Pod startup throughput
-
-## Environment
-A Kubernetes cluster in which we benchmark SLOs needs to meet the following criteria:
-- Run a single appropriately sized master machine
-- Main etcd runs as a single instance on the master machine
-- Events are stored in a separate etcd instance running on the master machine
-- Kubernetes version is at least 1.X.Y
-- Components configuration = _?_
-
-_TODO: NEED AN HA CONFIGURATION AS WELL_
-
-## SLO template
-All our performance SLOs should be defined using the following template:
-
----
-
-# SLO: *TL;DR description of the SLO*
-## (Burst|Steady state) foo bar SLO
-
-### Summary
-_One-two sentences describing the SLO, that's possible to understand by the majority of the community_
-
-### User Stories
-_A Few user stories showing in what situations users might be interested in this SLO, and why other ones are not enough_
-
-## Full definition
-### Test description
-_Precise description of test scenario, including maximum number of Pods per Controller, objects per namespace, and anything else that even remotely seems important_
-
-### Formal definition (can be skipped if the same as title/summary)
-_Precise and as formal as possible definition of SLO. This does not necessarily need to be easily understandable by layman_
diff --git a/sig-scalability/slos/pod_startup_latency.md b/sig-scalability/slos/pod_startup_latency.md
new file mode 100644
index 00000000..f8943a45
--- /dev/null
+++ b/sig-scalability/slos/pod_startup_latency.md
@@ -0,0 +1,54 @@
+## Pod startup latency SLI/SLO details
+
+### User stories
+- As a user of vanilla Kubernetes, I want some guarantee of how quickly my
+pods will be started.
+
+### Other notes
+- Only schedulable and stateless pods contribute to the SLI:
+  - If there is no space in the cluster to place the pod, there is not much
+    we can do about it (that is a task for the Cluster Autoscaler, which
+    should have separate SLIs/SLOs).
+  - If placing a pod requires preempting other pods, the time that takes may
+    heavily depend on the application (e.g. on the graceful termination period
+    of the preempted pods). We don't want that to contribute to this SLI.
+  - Mounting the disks required by non-stateless pods may also take
+    non-negligible time, not fully dependent on Kubernetes.
+- We are explicitly excluding image pulling from the SLI. This is because it
+highly depends on the locality of the image, the performance characteristics
+of the image registry (e.g. throughput), the image size itself, etc. Since we
+have no control over any of those (and all of them would significantly affect
+the SLI), we decided to simply exclude it.
+- We are also explicitly excluding the time to run init containers, as, again,
+this is heavily application-dependent (and doesn't depend on Kubernetes itself).
+- The answer to the question "when should a pod be considered started" is
+also not obvious. We decided on the semantics of "when all its containers are
+reported as started and observed via watch" (see the sketch after this list),
+because:
+  - we require all containers to be started (not e.g. just the first one) to
+    ensure that the pod as a whole is started. We need to ensure that
+    potential regressions, such as linearization of container startups within
+    a pod, will be caught by this SLI.
+  - note that we don't require all containers to be running - if some of them
+    finished before the last one was started, that is also fine. It is just
+    required that all of them have been started (at least once).
+  - we don't want to rely on "readiness checks", because they heavily
+    depend on the application. If the application takes a couple of minutes to
+    initialize before it starts responding to readiness checks, that shouldn't
+    count towards Kubernetes performance.
+  - even if your application has started, many control loops in Kubernetes
+    will not fire before they observe that. If Kubelet is not able to report
+    the status for some reason, other parts of the system have no way to learn
+    about it - this is why the reporting part is so important here.
+  - since watch is so central to Kubernetes (and many control loops are
+    triggered by specific watch events), observing the status of the pod is
+    also part of the SLI (as this is the moment when the next control loops
+    can potentially be fired).
+
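+A minimal Go sketch of the "all containers started and observed via watch"
+condition, using `k8s.io/api` types (the helper names are ours, not an
+implemented metric):
+
+```go
+package slisketch
+
+import (
+	"time"
+
+	v1 "k8s.io/api/core/v1"
+)
+
+// allContainersStarted reports whether every container in the pod has been
+// started at least once: a Running container has started, and a Terminated
+// container must have been started earlier as well.
+func allContainersStarted(pod *v1.Pod) bool {
+	if len(pod.Status.ContainerStatuses) != len(pod.Spec.Containers) {
+		return false
+	}
+	for _, cs := range pod.Status.ContainerStatuses {
+		if cs.State.Running == nil && cs.State.Terminated == nil {
+			return false
+		}
+	}
+	return true
+}
+
+// startupLatency is the per-pod SLI value: time from the pod's creation
+// timestamp to the local receive time of the first watch event for which
+// allContainersStarted returns true.
+func startupLatency(pod *v1.Pod, observedAt time.Time) time.Duration {
+	return observedAt.Sub(pod.CreationTimestamp.Time)
+}
+```
+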
+### TODOs
+- We should try to provide guarantees for non-stateless pods too (though the
+threshold may be higher for them).
+- Revisit whether we want the "watch pod status" part to be included in the SLI.
+
+### Test scenario
+
+__TODO: Describe test scenario.__
diff --git a/sig-scalability/slos/slos.md b/sig-scalability/slos/slos.md
new file mode 100644
index 00000000..946aa612
--- /dev/null
+++ b/sig-scalability/slos/slos.md
@@ -0,0 +1,148 @@
+# Kubernetes scalability and performance SLIs/SLOs
+
+## What does Kubernetes guarantee?
+
+One of the important aspects of Kubernetes is its scalability and performance
+characteristics. As a Kubernetes user or an operator/administrator of a
+cluster, you would expect some guarantees in those areas.
+
+The goal of this doc is to organize the guarantees that Kubernetes provides
+in these areas.
+
+## What do we require from SLIs/SLOs?
+
+We are going to define more SLIs and SLOs based on the most important indicators
+in the system.
+
+Our SLOs need to have the following properties:
+- <b> They need to be testable </b> <br/>
+ That means that we need to have a benchmark to measure if it's met.
+- <b> They need to be understandable for users </b> <br/>
+ In particular, they need to be understandable for people not familiar
+ with the system internals, i.e. their formulation can't depend on some
+ arcane knowledge.
+
+However, we may introduce some internal (developer-only) SLIs that may be
+useful for understanding the performance characteristics of the system, but
+for which we don't provide any guarantees to users and which thus may not be
+fully understandable to them.
+
+On the other hand, we do NOT require that our SLOs:
+- are measurable in a running cluster (though that's desired if possible) <br/>
+  In other words, not all SLOs need to be easily translatable to SLAs.
+  Being able to benchmark them is enough for us.
+
+## Types of SLOs
+
+While SLIs are very generic and don't really depend on anything (they just
+define what we measure and how), the same is not true for SLOs.
+SLOs provide guarantees, and satisfying them may depend on meeting some
+specific requirements.
+
+As a result, we build our SLOs in a "you promise, we promise" format.
+That means that we provide you a guarantee only if you satisfy the
+requirements that we put on you.
+
+As a consequence, we introduce two types of SLOs.
+
+### Steady state SLOs
+
+With steady state SLOs, we provide guarantees about the system's behavior
+during normal operation. We are able to provide many more guarantees in that
+situation.
+
+```Definition
+We define the system to be in steady state when the cluster churn per second is <= 20, where
+
+churn = #(Pod spec creations/updates/deletions) + #(user originated requests) in a given second
+```
+
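+For example (with illustrative numbers): in a second with 12 Pod creations,
+5 PodSpec updates and 3 user-originated requests, churn = 12 + 5 + 3 = 20,
+so that second still qualifies as steady state; one more request in the same
+second would push the cluster out of it.
+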
+### Burst SLO
+
+With burst SLOs, we provide guarantees on how the system behaves under heavy
+load (when the user wants the system to do something as quickly as possible,
+not caring too much about response time).
+
+## Environment
+
+In order to meet the SLOs, the system must run in an environment satisfying
+the following criteria:
+- Runs one or more appropriately sized master machines
+- Main etcd runs on the master machine(s)
+- Events are stored in a separate etcd instance running on the master machine(s)
+- Kubernetes version is at least X.Y.Z
+- ...
+
+__TODO: Document other necessary configuration.__
+
+## Thresholds
+
+To make the cluster eligible for the SLOs, users also can't have too many
+objects in their clusters. More concretely, the numbers of different objects
+in the cluster MUST satisfy the thresholds defined in the [thresholds file][].
+
+[thresholds file]: https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md
+
+
+## Kubernetes SLIs/SLOs
+
+The currently existing SLIs/SLOs are enough to guarantee that the cluster
+isn't completely dead. However, they are not enough to satisfy users' needs
+in most cases.
+
+We are looking into extending the set of SLIs/SLOs to cover more parts of
+Kubernetes.
+
+```
+Prerequisite: Kubernetes cluster is available and serving.
+```
+
+### Steady state SLIs/SLOs
+
+| Status | SLI | SLO | User stories, test scenarios, ... |
+| --- | --- | --- | --- |
+| __Official__ | Latency<sup>[1](#footnote1)</sup> of mutating<sup>[2](#footnote2)</sup> API calls for single objects for every (resource, verb) pair, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, for every (resource, verb) pair, excluding virtual and aggregated resources and Custom Resource Definitions, 99th percentile per cluster-day<sup>[3](#footnote3)</sup> <= 1s | [Details](./api_call_latency.md) |
+| __Official__ | Latency<sup>[1](#footnote1)</sup> of non-streaming read-only<sup>[4](#footnote4)</sup> API calls for every (resource, scope<sup>[5](#footnote5)</sup>) pair, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, for every (resource, scope) pair, excluding virtual and aggregated resources and Custom Resource Definitions, 99th percentile per cluster-day (a) <= 1s if `scope=resource` (b) <= 5s if `scope=namespace` (c) <= 30s if `scope=cluster` | [Details](./api_call_latency.md) |
+| __Official__ | Startup latency of stateless<sup>[6](#footnote6)</sup> and schedulable<sup>[7](#footnote7)</sup> pods, excluding time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via watch, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile per cluster-day <= 5s | [Details](./pod_startup_latency.md) |
+
+<a name="footnote1">\[1\]</a>By latency of API call in this doc we mean time
+from the moment when apiserver gets the request to last byte of response sent
+to the user.
+
+<a name="footnote2">\[2\]</a>By mutating API calls we mean POST, PUT, DELETE
+and PATCH.
+
+<a name="footnote3">\[3\]</a> For the purpose of visualization it will be a
+sliding window. However, for the purpose of reporting the SLO, it means one
+point per day (whether SLO was satisfied on a given day or not).
+
+<a name="footnote4">\[4\]</a>By non-streaming read-only API calls we mean GET
+requests without `watch=true` option set. (Note that in Kubernetes internally
+it translates to both GET and LIST calls).
+
+<a name="footnote5">\[5\]</a>A scope of a request can be either (a) `resource`
+if the request is about a single object, (b) `namespace` if it is about objects
+from a single namespace or (c) `cluster` if it spawns objects from multiple
+namespaces.
+
+<a name="footnode6">[6\]</a>A `stateless pod` is defined as a pod that doesn't
+mount volumes with sources other than secrets, config maps, downward API and
+empty dir.
+
+<a name="footnode7">[7\]</a>By schedulable pod we mean a pod that can be
+scheduled in the cluster without causing any preemption.
+
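+To make footnote 6 concrete, here is a minimal Go sketch using `k8s.io/api`
+types (the helper name is ours, not an implemented check):
+
+```go
+package slisketch
+
+import v1 "k8s.io/api/core/v1"
+
+// isStateless reports whether every volume the pod mounts comes from a
+// secret, config map, downward API or empty dir source.
+func isStateless(pod *v1.Pod) bool {
+	for _, vol := range pod.Spec.Volumes {
+		if vol.Secret == nil && vol.ConfigMap == nil &&
+			vol.DownwardAPI == nil && vol.EmptyDir == nil {
+			return false
+		}
+	}
+	return true
+}
+```
+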
+### Burst SLIs/SLOs
+
+| Status | SLI | SLO | User stories, test scenarios, ... |
+| --- | --- | --- | --- |
+| WIP | Time to start 30\*#nodes pods, measured from test scenario start until the last Pod is observed as ready | Benchmark: with all images present on all Nodes, 99th percentile <= X minutes | [Details](./system_throughput.md) |
+
+### Other SLIs
+
+| Status | SLI | User stories, ... |
+| --- | --- | --- |
+| WIP | Watch latency for every resource (measured from the moment when an object is stored in the database to when it's ready to be sent to all watchers), measured as 99th percentile over last 5 minutes | TODO |
+| WIP | Admission latency for each admission plugin type, measured as 99th percentile over last 5 minutes | [Details](./api_extensions_latency.md) |
+| WIP | Webhook call latency for each webhook type, measured as 99th percentile over last 5 minutes | [Details](./api_extensions_latency.md) |
+| WIP | Initializer latency for each initializer, measured as 99th percentile over last 5 minutes | [Details](./api_extensions_latency.md) |
+
diff --git a/sig-scalability/slos/system_throughput.md b/sig-scalability/slos/system_throughput.md
new file mode 100644
index 00000000..5691b46d
--- /dev/null
+++ b/sig-scalability/slos/system_throughput.md
@@ -0,0 +1,28 @@
+## System throughput SLI/SLO details
+
+### User stories
+- As a user, I want a guarantee that my workload of X pods can be started
+  within a given time.
+- As a user, I want to understand how quickly I can react to a dramatic
+  change in workload profile when my workload exhibits very bursty behavior
+  (e.g. a shop during a Black Friday Sale).
+- As a user, I want a guarantee of how quickly I can recreate the whole setup
+  in case of a serious disaster that brings the whole cluster down.
+
+### Test scenario
+- Start with a healthy (all nodes ready, all cluster addons already running)
+  cluster with N (>0) pause pods already running per node.
+- Create a number of `Namespaces` and a number of `Deployments` in each of them.
+- All `Namespaces` should be isomorphic, possibly excluding the last one, which
+  should run all pods that didn't fit in the previous ones.
+- A single namespace should run 5000 `Pods` in the following configuration
+  (see the sketch below):
+  - one big `Deployment` running ~1/3 of all `Pods` from this `Namespace`
+  - medium `Deployments`, each with 120 `Pods`, in total running ~1/3 of all
+    `Pods` from this `Namespace`
+  - small `Deployments`, each with 10 `Pods`, in total running ~1/3 of all
+    `Pods` from this `Namespace`
+- Each `Deployment` should be covered by a single `Service`.
+- Each `Pod` in any `Deployment` contains two pause containers and mounts one
+  `Secret` (other than the default `ServiceAccount` token) and one `ConfigMap`.
+  Additionally, it has resource requests set and doesn't use any advanced
+  scheduling features or init containers.
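+
+A rough Go sketch of the resulting per-namespace breakdown (the numbers and
+rounding choices are illustrative; the exact split is whatever makes each
+group roughly a third):
+
+```go
+package main
+
+import "fmt"
+
+// breakdown prints one possible deployment mix for a namespace of the given
+// size: 120-pod medium deployments and 10-pod small deployments each hold
+// roughly a third of the pods, and one big deployment takes the remainder.
+func breakdown(podsPerNamespace int) {
+	third := podsPerNamespace / 3
+	medium := third / 120
+	small := third / 10
+	big := podsPerNamespace - medium*120 - small*10
+	fmt.Printf("1 big deployment with %d pods\n", big)
+	fmt.Printf("%d medium deployments with 120 pods each\n", medium)
+	fmt.Printf("%d small deployments with 10 pods each\n", small)
+}
+
+func main() { breakdown(5000) }
+```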
diff --git a/sig-scalability/slos/throughput_burst_slo.md b/sig-scalability/slos/throughput_burst_slo.md
deleted file mode 100644
index e579acb1..00000000
--- a/sig-scalability/slos/throughput_burst_slo.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# SLO: Kubernetes cluster of size at least X is able to start Y Pods in Z minutes
-**This is a WIP SLO doc - something that we want to meet, but we may not be there yet**
-
-## Burst Pod Startup Throughput SLO
-### User Stories
-- User is running a workload of X total pods and wants to ensure that it can be started in Y time.
-- User is running a system that exhibits very bursty behavior (e.g. shop during Black Friday Sale) and wants to understand how quickly they can react to a dramatic change in workload profile.
-- User is running a huge serving app on a huge cluster. He wants to know how quickly he can recreate his whole setup in case of a serious disaster which will bring the whole cluster down.
-
-Current steady state SLOs are do not provide enough data to make these assessments about burst behavior.
-## SLO definition (full)
-### Test setup
-Standard performance test kubernetes setup, as describe in [the doc](../extending_slo.md#environment).
-### Test scenario is following:
-- Start with a healthy (all nodes ready, all cluster addons already running) cluster with N (>0) running pause Pods/Node.
-- Create a number of Deployments that run X Pods and Namespaces necessary to create them.
-- All namespaces should be isomorphic, possibly excluding last one which should run all Pods that didn't fit in the previous ones.
-- Single Namespace should run at most 5000 Pods in the following configuration:
- - one big Deployment running 1/3 of all Pods from this Namespace (1667 for 5000 Pod Namespace)
- - medium Deployments, each of which is not running more than 120 Pods, running in total 1/3 of all Pods from this Namespace (14 Deployments with 119 Pods each for 5000 Pod Namespace)
- - small Deployments, each of which is not running more than 10 Pods, running in total 1/3 of all Pods from this Namespace (238 Deployments with 7 Pods each for 5000 Pod Namespace)
-- Each Deployment is covered by a single Service.
-- Each Pod in any Deployment contains two pause containers, one secret other than ServiceAccount and one ConfigMap, has resource request set and doesn't use any advanced scheduling features (Affinities, etc.) or init containers.
-- Measure the time between starting the test and moment when last Pod is started according to it's Kubelet. Note that pause container is ready just after it's started, which may not be true for more complex containers that use nontrivial readiness probes.
-### Definition
-Kubernetes cluster of size at least X adhering to the environment definition, when running the specified test, 99th percentile of time necessary to start Y pods from the time when user created all controllers to the time when Kubelet starts the last Pod from the set is no greater than Z minutes, assuming that all images are already present on all Nodes. \ No newline at end of file
diff --git a/sig-scalability/slos/watch_latency.md b/sig-scalability/slos/watch_latency.md
new file mode 100644
index 00000000..2e698b4b
--- /dev/null
+++ b/sig-scalability/slos/watch_latency.md
@@ -0,0 +1,17 @@
+## Watch latency SLI details
+
+### User stories
+- As an administrator, if Kubernetes is slow, I would like to know whether the
+root cause is slow api-machinery (slow watch) or something further down the
+path (lack of network bandwidth, slow or cpu-starved controllers, ...).
+
+### Other notes
+- Pretty much all control loops in Kubernetes are watch-based. As a result,
+a slow watch means a slow system in general.
+- Note that the way we measure it silently assumes no clock skew in clusters
+with multiple masters.
+
+### TODOs
+- Longer term, we would like to provide some guarantees on watch latency
+(e.g. 99th percentile of the SLI per cluster-day <= Xms). However, we are
+not there yet.
diff --git a/sig-scalability/tools/performance-comparison-tool.md b/sig-scalability/tools/performance-comparison-tool.md
deleted file mode 100644
index 650b1f93..00000000
--- a/sig-scalability/tools/performance-comparison-tool.md
+++ /dev/null
@@ -1,112 +0,0 @@
-# Performance Comparison Tool
-
-_by Shyam JVS, Google Inc (reviewed by Marek Grabowski & Wojciech Tyczysnki)_
-
-## BACKGROUND
-
-Kubemark is a performance testing tool which we use to run simulated kubernetes clusters. The primary use case is scalability testing, as creating simulated clusters is faster and requires less resources than creating real ones. For more information about kubemark, take a look into the [doc](https://github.com/kubernetes/community/blob/master/contributors/devel/kubemark-guide.md).
-
-## OBJECTIVE
-
-After recent updates, kubemark caught up with the setup of real clusters w.r.t. having all core cluster components, some add-ons and some daemons. Now we want to be able to say confidently that kubemark really reflects performance problems/bottlenecks in real clusters. Thus, our goals are to:
-
-- Make kubemark mimic real clusters in performance enough to allow reasoning about meeting performance SLOs using kubemark runs.
-- Formalize the notion of “similar performance” and set up a tool for doing the comparison.
-
-## DESIGN OVERVIEW
-
-We assume that we want to benchmark a test T across two variants A and B. For the benchmarking to be meaningful, these two variants should be running in a similar environment (eg. one in real-cluster and one in kubemark) and at identical scale (eg. both run 2k nodes). At a high-level, the tool should:
-
-- *choose the set of runs* of tests T executed on both A and B environments to use for comparison,
-- *obtain the relevant metrics* for the runs chosen for comparison,
-- *compute the similarity* for each individual metric, across both the samples,
-- *compute overall similarity* of A and B, using similarity values of all metrics.
-
-Final output of the tool will be the answer to the question "are environments A and B similar enough with respect to chosen metrics" given some notion of similarity. The result will contain similarity measure for each metric and a similarity measure for the whole test. E.g.
-
-```
-Performance comparison results:
-API call latencies:
-GET Pod: 0.95
-PUT Pod: 0.92
-...
-E2e Pod startup latencies: 0.99
-Total similarity measure: 0.95
-```
-
-## DESIGN DETAILS
-
-Performance Comparison Tool's infrastructure is designed to be easily extensible and portable. It'll allow for easy modification/extension of default logic and it'll be possible to run it on any environment that can build go binaries, and have access to relevant data.
-
-It'll consist of a single binary that will be able to read series of test results either from Google Cloud Storage buckets or from local disk, extract relevant metrics from those results, compute given similarity function for all metrics, and finally combine those similarities in the final result.
-
-Moving parts of the system are:
-
-- tests to compare (including the source: local or GCS)
-- set of metrics to compare
-- definition of similarity measure for single metrics
-- definition of similarity measure for whole test (combined metrics)
-
-Below we discuss default choices we made.
-
-### Choosing tests to compare
-
-When running the comparison we need to decide on which tests to include and how to get their data. In the first version of our comparison tool we support only GCS and local sources for data with a well defined structure. We expect to have a bucket/directory with results for each run of the test. Each of those subdirectories need to have dedicated files for metrics for those runs in some well-defined format (like json). We'll expose a flag that'll allow for choosing only a subset of runs (subdirectories) to read.
-
-By default we'll use GCS source and the last ‘n’ (TBD) runs of either tests for comparing. ‘n’ will be a configurable parameter of the tool as it could vary depending on the requirements of various tests.
-
-### Choosing set of metrics to compare
-
-User will be able to support a set of metrics to include into comparison. The only requirement for those metrics is to have a single numeric value.
-
-In the initial version we'll default to the following metrics that are most directly visible in k8s performance:
-
-- Percentiles (90%, 95%, 99%) of pod startup latency
-- Percentiles (90%, 95%, 99%) of api request serving latency (split by resource and verb)
-
-The framework itself will be extensible, so it'll be easy to add new metrics. In the future we plan to add:
-
-- etcd request latencies
-- pod startup throughput
-- resource usage of control-plane components
-
-Because performance results tend to vary a lot, especially when metrics are small (e.g. API call latencies of low tens of milliseconds) due to various processes that happen on the machine (most notably go garbage collector running), before doing any comparison we need to reduce the noise in the data. Those normalization procedures will be defined in the code for each supported metric. For initial ones we're going to set a cutoff threshold and substitute all values smaller than it with the threshold:
-
-- for API call latencies it'll be 50ms
-- for e2e Pod startup latency it'll be 1s
-
-### What do we mean by similarity between single metric series?
-
-For each metric we're considering we'll get a series of results, which we'll treat as a series of experiments from a single probability distribution. We have one such series for either tests we want to compare. The question we want to answer is whether their underlying distributions are "similar enough".
-
-Initially we’ll use a simple test to determine if the metrics are similar. We find the ratio of the metric’s averages from either series and check if that ratio is in the interval \[0.66, 1.50\] (i.e. one does not differ from the other by more than 33%). We deem the metric as matched if and only if It lies in the interval. We can switch to more advanced statistical tests on distributions of these metrics in future if needed.
-
-### What do we mean by similarity for whole test?
-
-Once we have calculated the similarity measures for all the metrics in the metrics set, we need to decide how to compute combined similarity score.
-
-We classify each metric as matched/mismatched based on the above test. We could have more than just binary classification in future if needed. Finally, the overall comparison result would be computed as:
-
-- PASS, if at least 90% of the metrics in our set matched
-- FAIL, otherwise
-
-(Note: If a metric consistently mismatches across multiple rounds of comparison, it needs fixing)
-
-## RELEVANCE & SCOPE
-
-- This tool can benefit the community in the following ways:
- - Having this tool as open-source would make the process of testing on simulated clusters and claims about performance on real clusters using performance on simulated clusters more clear and transparent.
- - Since performance on simulated clusters indicates the kubernetes side of performance rather than that on the side of the underlying provider infra, it can help the community / kubernetes providers be assured that there indeed are no scalability problems on the side of kubernetes.
-- This tool can be extended in future for other use cases like:
- - Compare two different samples of runs from the same test to see which metrics have improved / degraded over time.
- - Run comparison with more advanced statistical tests that validate hypotheses about similarity of the underlying distributions of the metric series and see if the distributions follow some known family of distribution functions.
-
---------------------------
-
-**NOTES FOR INTERESTED CONTRIBUTORS**
-
-This tool has been implemented and the code for it lies [here](https://github.com/kubernetes/perf-tests/tree/master/benchmark). Further, we have setup an [automated CI job](https://k8s-testgrid.appspot.com/perf-tests#kubemark-100-benchmark) that runs this benchmark periodically and compares the metrics across our 100-node kubemark and 100-node real-cluster runs from the last 24 hrs.
-
-If you want to contribute to this tool, file bugs or help with understanding/resolving differences we’re currently observing across kubemark and real-cluster (e.g [#44701](https://github.com/kubernetes/kubernetes/issues/44701)), ping us on “sig-scale” kubernetes slack channel and/or write an email to `kubernetes-sig-scale@googlegroups.com`.
-
-We have some interesting challenges in store for you, that span multiple parts of the system.
diff --git a/sig-scheduling/README.md b/sig-scheduling/README.md
index f1276a2f..b75d77a0 100644
--- a/sig-scheduling/README.md
+++ b/sig-scheduling/README.md
@@ -43,6 +43,9 @@ The following subprojects are owned by sig-scheduling:
- Owners:
- https://raw.githubusercontent.com/kubernetes/kubernetes/master/cmd/kube-scheduler/OWNERS
- https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/scheduler/OWNERS
+- **poseidon**
+ - Owners:
+ - https://raw.githubusercontent.com/kubernetes-sigs/poseidon/master/OWNERS
## GitHub Teams
diff --git a/sig-storage/contributing.md b/sig-storage/contributing.md
index 37ee510a..c2d596e2 100644
--- a/sig-storage/contributing.md
+++ b/sig-storage/contributing.md
@@ -5,6 +5,8 @@ We recommend the following presentations, docs, and videos to help get familiar
| Date | Title | Link | Description |
| --- | --- | --- | --- |
| - | Persistent Volume Framework | [Doc](http://kubernetes.io/docs/user-guide/persistent-volumes/) | Public user docs for Kubenretes Persistent Volume framework.
+| 2018 May 03 | SIG Storage Intro | [Video](https://www.youtube.com/watch?v=GvrTl2T-Tts&list=PLj6h78yzYM2N8GdbjmhVU65KYm_68qBmo&index=164&t=0s) | An overview of SIG Storage by Saad Ali at KubeCon EU 2018. |
+| 2018 May 04 | Kubernetes Storage Lingo 101 | [Video](https://www.youtube.com/watch?v=uSxlgK1bCuA&t=0s&index=300&list=PLj6h78yzYM2N8GdbjmhVU65KYm_68qBmo) | An overview of various terms used in Kubernetes storage and what they mean, by Saad Ali at KubeCon EU 2018. |
| 2017 May 18 | Storage Classes & Dynamic Provisioning in Kubernetes |[Video](https://youtu.be/qktFhjJmFhg)| Intro to the basic Kubernetes storage concepts for users (direct volume reference, PV/PVC, and dynamic provisioning). |
| 2017 March 29 | Dynamic Provisioning and Storage Classes in Kubernetes |[Blog post](http://blog.kubernetes.io/2017/03/dynamic-provisioning-and-storage-classes-kubernetes.html)| Overview of Dynamic Provisioning and Storage Classes in Kubernetes at GA. |
| 2017 March 29 | How Kubernetes Storage Works | [Slides](https://docs.google.com/presentation/d/1Yl5JKifcncn0gSZf3e1dWspd8iFaWObLm9LxCaXZJIk/edit?usp=sharing) | Overview for developers on how Kubernetes storage works for KubeCon EU 2017 by Saad Ali
diff --git a/sig-testing/README.md b/sig-testing/README.md
index 2381d8b2..e79f8e95 100644
--- a/sig-testing/README.md
+++ b/sig-testing/README.md
@@ -20,7 +20,7 @@ Interested in how we can most effectively test Kubernetes. We're interested spec
### Chairs
The Chairs of the SIG run operations and processes governing the SIG.
-* Aaron Crickenberger (**[@spiffxp](https://github.com/spiffxp)**), Samsung SDS
+* Aaron Crickenberger (**[@spiffxp](https://github.com/spiffxp)**)
* Erick Feja (**[@fejta](https://github.com/fejta)**), Google
* Steve Kuznetsov (**[@stevekuznetsov](https://github.com/stevekuznetsov)**), Red Hat
* Timothy St. Clair (**[@timothysc](https://github.com/timothysc)**), Heptio
diff --git a/sig-vmware/README.md b/sig-vmware/README.md
index e0c830c2..cfdc3192 100644
--- a/sig-vmware/README.md
+++ b/sig-vmware/README.md
@@ -13,6 +13,10 @@ Bring together members of the VMware and Kubernetes community to maintain, suppo
## Meetings
* Regular SIG Meeting: [Thursdays at 18:00 UTC](https://zoom.us/j/183662780) (bi-weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=18:00&tz=UTC).
* [Meeting notes and Agenda](https://docs.google.com/document/d/1RV0nVtlPoAtM0DQwNYxYCC9lHfiHpTNatyv4bek6XtA/edit?usp=sharing).
+ * [Meeting recordings](https://www.youtube.com/playlist?list=PLutJyDdkKQIqKv-Zq8WbyibQtemChor9y).
+* Cloud Provider vSphere weekly syncup: [Wednesdays at 16:30 UTC](https://zoom.us/j/584244729) (weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=16:30&tz=UTC).
+ * [Meeting notes and Agenda](https://docs.google.com/document/d/1B0NmmKVh8Ea5hnNsbUsJC7ZyNCsq_6NXl5hRdcHlJgY/edit?usp=sharing).
+ * [Meeting recordings](https://www.youtube.com/playlist?list=PLutJyDdkKQIpOT4bOfuO3MEMHvU1tRqyR).
## Leadership
@@ -27,6 +31,33 @@ The Chairs of the SIG run operations and processes governing the SIG.
* [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware)
* [Open Community Issues/PRs](https://github.com/kubernetes/community/labels/sig%2Fvmware)
+## Subprojects
+
+The following subprojects are owned by sig-vmware:
+- **cloud-provider-vsphere**
+ - Owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/OWNERS
+
+## GitHub Teams
+
+The below teams can be mentioned on issues and PRs in order to get attention from the right people.
+Note that the links to display team membership will only work if you are a member of the org.
+
+The Google Groups contain the archive of GitHub team notifications.
+Mentioning a team on GitHub will CC its group.
+Monitor these for GitHub activity if you are not a member of the team.
+
+| Team Name | Details | Google Groups | Description |
+| --------- |:-------:|:-------------:| ----------- |
+| @kubernetes/sig-vmware-api-reviews | [link](https://github.com/orgs/kubernetes/teams/sig-vmware-api-reviews) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware-api-reviews) | API Changes and Reviews |
+| @kubernetes/sig-vmware-bugs | [link](https://github.com/orgs/kubernetes/teams/sig-vmware-bugs) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware-bugs) | Bug Triage and Troubleshooting |
+| @kubernetes/sig-vmware-feature-requests | [link](https://github.com/orgs/kubernetes/teams/sig-vmware-feature-requests) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware-feature-requests) | Feature Requests |
+| @kubernetes/sig-vmware-members | [link](https://github.com/orgs/kubernetes/teams/sig-vmware-members) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware-members) | Release Team Members |
+| @kubernetes/sig-vmware-misc | [link](https://github.com/orgs/kubernetes/teams/sig-vmware-misc) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware-misc) | General Discussion |
+| @kubernetes/sig-vmware-pr-reviews | [link](https://github.com/orgs/kubernetes/teams/sig-vmware-pr-reviews) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware-pr-reviews) | PR Reviews |
+| @kubernetes/sig-vmware-proposals | [link](https://github.com/orgs/kubernetes/teams/sig-vmware-proposals) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware-proposals) | Design Proposals |
+| @kubernetes/sig-vmware-test-failures | [link](https://github.com/orgs/kubernetes/teams/sig-vmware-test-failures) | [link](https://groups.google.com/forum/#!forum/kubernetes-sig-vmware-test-failures) | Test Failures and Triage |
+
<!-- BEGIN CUSTOM CONTENT -->
<!-- END CUSTOM CONTENT -->
diff --git a/sigs.yaml b/sigs.yaml
index 4818035f..267c6e9d 100644
--- a/sigs.yaml
+++ b/sigs.yaml
@@ -59,34 +59,34 @@ sigs:
- name: universal-machinery # i.e., both client and server
owners:
- https://raw.githubusercontent.com/kubernetes/apimachinery/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/apimachinery/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apimachinery/OWNERS
- name: server-frameworks
owners:
- https://raw.githubusercontent.com/kubernetes/apiserver/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/apiserver/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiserver/OWNERS
- name: server-crd
owners:
- https://raw.githubusercontent.com/kubernetes/apiextensions-apiserver/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/apiextensions-apiserver/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/apiextensions-apiserver/OWNERS
- name: server-api-aggregation
owners:
- https://raw.githubusercontent.com/kubernetes/kube-aggregator/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/kube-aggregator/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/kube-aggregator/OWNERS
- name: server-sdk
owners:
- https://raw.githubusercontent.com/kubernetes/sample-apiserver/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/sample-apiserver/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/sample-apiserver/OWNERS
- https://raw.githubusercontent.com/kubernetes/sample-controller/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/sample-controller/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/sample-controller/OWNERS
- https://raw.githubusercontent.com/kubernetes-incubator/apiserver-builder/master/OWNERS
- name: idl-schema-client-pipeline
owners:
- https://raw.githubusercontent.com/kubernetes/gengo/master/OWNERS # possibly should be totally separate
- https://raw.githubusercontent.com/kubernetes/code-generator/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/code-generator/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/code-generator/OWNERS
- https://raw.githubusercontent.com/kubernetes/kube-openapi/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/api/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/api/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/OWNERS
- https://raw.githubusercontent.com/kubernetes-client/gen/master/OWNERS
- name: kubernetes-clients
owners:
@@ -102,7 +102,7 @@ sigs:
- https://raw.githubusercontent.com/kubernetes-client/typescript/master/OWNERS
- https://raw.githubusercontent.com/kubernetes-incubator/client-python/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/client-go/master/OWNERS
- - https://raw.githubusercontent.com/kubernetes/kubernetes/staging/src/k8s.io/client-go/master/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/client-go/OWNERS
- name: universal-utils # There is no reason why this is in api-machinery
owners:
- https://raw.githubusercontent.com/kubernetes/utils/master/OWNERS
@@ -235,7 +235,7 @@ sigs:
company: Google
- name: Jaice Singer DuMars
github: jdumars
- company: Microsoft
+ company: Google
meetings:
- description: Regular SIG Meeting
day: Thursday
@@ -413,21 +413,25 @@ sigs:
label: azure
leadership:
chairs:
- - name: Jason Hansen
- github: slack
+ - name: Stephen Augustus
+ github: justaugustus
+ company: Red Hat
+ - name: Shubheksha Jalan
+ github: shubheksha
+ company: Microsoft
+ tech_leads:
+ - name: Kal Khenidak
+ github: khenidak
company: Microsoft
- name: Cole Mickens
github: colemickens
company: Red Hat
- - name: Jaice Singer DuMars
- github: jdumars
- company: Microsoft
meetings:
- description: Regular SIG Meeting
day: Wednesday
time: "16:00"
tz: "UTC"
- frequency: weekly
+ frequency: biweekly
url: https://zoom.us/j/2015551212
archive_url: https://docs.google.com/document/d/1SpxvmOgHDhnA72Z0lbhBffrfe9inQxZkU9xqlafOW9k/edit
recordings_url: https://www.youtube.com/watch?v=yQLeUKi_dwg&list=PL69nYSiGNLP2JNdHwB8GxRs2mikK7zyc4
@@ -435,8 +439,20 @@ sigs:
slack: sig-azure
mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-azure
teams:
+ - name: sig-azure-api-reviews
+ description: API Changes and Reviews
+ - name: sig-azure-bugs
+ description: Bug Triage and Troubleshooting
+ - name: sig-azure-feature-requests
+ description: Feature Requests
- name: sig-azure-misc
description: General Discussion
+ - name: sig-azure-pr-reviews
+ description: PR Reviews
+ - name: sig-azure-proposals
+ description: Design Proposals
+ - name: sig-azure-test-failures
+ description: Test Failures and Triage
subprojects:
- name: cloud-provider-azure
owners:
@@ -541,6 +557,76 @@ sigs:
owners:
- https://raw.githubusercontent.com/kubernetes/kubectl/master/OWNERS
- https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/kubectl/OWNERS
+ - name: kustomize
+ owners:
+ # "owners" entry
+ - https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/OWNERS
+ - name: Cloud Provider
+ dir: sig-cloud-provider
+ mission_statement: >
+ Ensures that the Kubernetes ecosystem is evolving in a way that is neutral to all
+ (public and private) cloud providers. It will be responsible for establishing
+ standards and requirements that must be met by all providers to ensure optimal
+ integration with Kubernetes.
+ label: cloud-provider
+ leadership:
+ chairs:
+ - name: Andrew Sy Kim
+ github: andrewsykim
+ company: DigitalOcean
+ - name: Chris Hoge
+ github: hogepodge
+ company: OpenStack Foundation
+ - name: Jago Macleod
+ github: jagosan
+ company: Google
+ meetings:
+ - description: Regular SIG Meeting
+ day: Wednesday
+ time: "10:00"
+ tz: "PT (Pacific Time)"
+ frequency: biweekly
+ url: https://zoom.us/my/sigcloudprovider
+ archive_url: TODO
+ recordings_url: TODO
+ contact:
+ slack: sig-cloud-provider
+ mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-cloud-provider
+ teams:
+ - name: sig-cloud-provider-api-reviews
+ description: API Changes and Reviews
+ - name: sig-cloud-provider-bugs
+ description: Bug Triage and Troubleshooting
+ - name: sig-cloud-provider-feature-requests
+ description: Feature Requests
+ - name: sig-cloud-provider-maintainers
+ description: Cloud Providers Maintainers
+ - name: sig-cloud-providers-misc
+ description: General Discussion
+ - name: sig-cloud-provider-pr-reviews
+ description: PR Reviews
+ - name: sig-cloud-provider-proposals
+ description: Design Proposals
+ - name: sig-cloud-provider-test-failures
+ description: Test Failures and Triage
+ subprojects:
+ - name: kubernetes-cloud-provider
+ owners:
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/cmd/cloud-controller-manager/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/cloud/OWNERS
+ - https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/cloudprovider/OWNERS
+ - name: cloud-provider-azure
+ owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-azure/master/OWNERS
+ - name: cloud-provider-gcp
+ owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-gcp/master/OWNERS
+ - name: cloud-provider-openstack
+ owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/OWNERS
+ - name: cloud-provider-vsphere
+ owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/OWNERS
- name: Cluster Lifecycle
dir: sig-cluster-lifecycle
mission_statement: >
@@ -655,7 +741,7 @@ sigs:
company: RackN
- name: Jaice Singer DuMars
github: jdumars
- company: Microsoft
+ company: Google
meetings:
- description: Regular SIG Meeting
day: Thursday
@@ -914,7 +1000,7 @@ sigs:
- name: Multicluster
dir: sig-multicluster
mission_statement: >
- A Special Interest Group focussed on solving common challenges related to the
+ A Special Interest Group focused on solving common challenges related to the
management of multiple Kubernetes clusters, and applications that exist therein.
The SIG will be responsible for designing, discussing, implementing and maintaining
API’s, tools and documentation related to multi-cluster administration and application
@@ -1216,7 +1302,7 @@ sigs:
chairs:
- name: Jaice Singer DuMars
github: jdumars
- company: Microsoft
+ company: Google
- name: Caleb Miles
github: calebamiles
company: Google
@@ -1283,7 +1369,7 @@ sigs:
company: Google
- name: Bob Wise
github: countspongebob
- company: Samsung SDS
+ company: AWS
meetings:
- description: Regular SIG Meeting
day: Thursday
@@ -1372,6 +1458,9 @@ sigs:
owners:
- https://raw.githubusercontent.com/kubernetes/kubernetes/master/cmd/kube-scheduler/OWNERS
- https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/scheduler/OWNERS
+ - name: poseidon
+ owners:
+ - https://raw.githubusercontent.com/kubernetes-sigs/poseidon/master/OWNERS
- name: Service Catalog
dir: sig-service-catalog
mission_statement: >
@@ -1488,7 +1577,6 @@ sigs:
chairs:
- name: Aaron Crickenberger
github: spiffxp
- company: Samsung SDS
- name: Erick Feja
github: fejta
company: Google
@@ -1599,10 +1687,39 @@ sigs:
frequency: bi-weekly
url: https://zoom.us/j/183662780
archive_url: https://docs.google.com/document/d/1RV0nVtlPoAtM0DQwNYxYCC9lHfiHpTNatyv4bek6XtA/edit?usp=sharing
- recordings_url:
+ recordings_url: https://www.youtube.com/playlist?list=PLutJyDdkKQIqKv-Zq8WbyibQtemChor9y
+ - description: Cloud Provider vSphere weekly syncup
+ day: Wednesday
+ time: "16:30"
+ tz: "UTC"
+ frequency: weekly
+ url: https://zoom.us/j/584244729
+ archive_url: https://docs.google.com/document/d/1B0NmmKVh8Ea5hnNsbUsJC7ZyNCsq_6NXl5hRdcHlJgY/edit?usp=sharing
+ recordings_url: https://www.youtube.com/playlist?list=PLutJyDdkKQIpOT4bOfuO3MEMHvU1tRqyR
contact:
slack: sig-vmware
mailing_list: https://groups.google.com/forum/#!forum/kubernetes-sig-vmware
+ teams:
+ - name: sig-vmware-api-reviews
+ description: API Changes and Reviews
+ - name: sig-vmware-bugs
+ description: Bug Triage and Troubleshooting
+ - name: sig-vmware-feature-requests
+ description: Feature Requests
+ - name: sig-vmware-members
+ description: Release Team Members
+ - name: sig-vmware-misc
+ description: General Discussion
+ - name: sig-vmware-pr-reviews
+ description: PR Reviews
+ - name: sig-vmware-proposals
+ description: Design Proposals
+ - name: sig-vmware-test-failures
+ description: Test Failures and Triage
+ subprojects:
+ - name: cloud-provider-vsphere
+ owners:
+ - https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/OWNERS
- name: Windows
dir: sig-windows
mission_statement: >
@@ -1669,7 +1786,7 @@ workinggroups:
- name: Clayton Coleman
github: smarterclayton
company: Red Hat
- - name: Greg Gastle
+ - name: Greg Castle
github: destijl
company: Google
meetings:
@@ -1778,6 +1895,7 @@ workinggroups:
time: "9:30"
tz: "PT (Pacific Time)"
frequency: weekly
+ url: https://zoom.us/my/apimachinery
contact:
slack: wg-apply
mailing_list: https://groups.google.com/forum/#!forum/kubernetes-wg-apply
diff --git a/vendor/github.com/client9/misspell/.gitignore b/vendor/github.com/client9/misspell/.gitignore
deleted file mode 100644
index aaca9b8f..00000000
--- a/vendor/github.com/client9/misspell/.gitignore
+++ /dev/null
@@ -1,33 +0,0 @@
-dist/
-bin/
-
-# editor turds
-*~
-*.gz
-*.bz2
-*.csv
-
-# Compiled Object files, Static and Dynamic libs (Shared Objects)
-*.o
-*.a
-*.so
-
-# Folders
-_obj
-_test
-
-# Architecture specific extensions/prefixes
-*.[568vq]
-[568vq].out
-
-*.cgo1.go
-*.cgo2.c
-_cgo_defun.c
-_cgo_gotypes.go
-_cgo_export.*
-
-_testmain.go
-
-*.exe
-*.test
-*.prof
diff --git a/vendor/github.com/client9/misspell/.travis.yml b/vendor/github.com/client9/misspell/.travis.yml
deleted file mode 100644
index 36a50df6..00000000
--- a/vendor/github.com/client9/misspell/.travis.yml
+++ /dev/null
@@ -1,11 +0,0 @@
-sudo: required
-dist: trusty
-language: go
-go:
- - 1.8.3
-git:
- depth: 1
-script:
- - make -e ci
-after_success:
- - test -n "$TRAVIS_TAG" && ./scripts/goreleaser.sh
diff --git a/vendor/github.com/client9/misspell/Dockerfile b/vendor/github.com/client9/misspell/Dockerfile
deleted file mode 100644
index 1b6486ec..00000000
--- a/vendor/github.com/client9/misspell/Dockerfile
+++ /dev/null
@@ -1,37 +0,0 @@
-FROM golang:1.8.1-alpine
-MAINTAINER https://github.com/client9/misspell
-
-# cache buster
-RUN echo 3
-
-# git is needed for "go get" below
-RUN apk add --no-cache git make
-
-# these are my standard testing / linting tools
-RUN /bin/true \
- && go get -u github.com/alecthomas/gometalinter \
- && gometalinter --install \
- && rm -rf /go/src /go/pkg
-#
-# * SCOWL word list
-#
-# Downloads
-# http://wordlist.aspell.net/dicts/
-# --> http://app.aspell.net/create
-#
-
-# use en_US large size
-# use regular size for others
-ENV SOURCE_US_BIG http://app.aspell.net/create?max_size=70&spelling=US&max_variant=2&diacritic=both&special=hacker&special=roman-numerals&download=wordlist&encoding=utf-8&format=inline
-
-# should be able tell difference between English variations using this
-ENV SOURCE_US http://app.aspell.net/create?max_size=60&spelling=US&max_variant=1&diacritic=both&download=wordlist&encoding=utf-8&format=inline
-ENV SOURCE_GB_ISE http://app.aspell.net/create?max_size=60&spelling=GBs&max_variant=2&diacritic=both&download=wordlist&encoding=utf-8&format=inline
-ENV SOURCE_GB_IZE http://app.aspell.net/create?max_size=60&spelling=GBz&max_variant=2&diacritic=both&download=wordlist&encoding=utf-8&format=inline
-ENV SOURCE_CA http://app.aspell.net/create?max_size=60&spelling=CA&max_variant=2&diacritic=both&download=wordlist&encoding=utf-8&format=inline
-
-RUN /bin/true \
- && mkdir /scowl-wl \
- && wget -O /scowl-wl/words-US-60.txt ${SOURCE_US} \
- && wget -O /scowl-wl/words-GB-ise-60.txt ${SOURCE_GB_ISE}
-
diff --git a/vendor/github.com/client9/misspell/Makefile b/vendor/github.com/client9/misspell/Makefile
deleted file mode 100644
index 0ccf7486..00000000
--- a/vendor/github.com/client9/misspell/Makefile
+++ /dev/null
@@ -1,84 +0,0 @@
-CONTAINER=nickg/misspell
-
-install: ## install misspell into GOPATH/bin
- go install ./cmd/misspell
-
-build: hooks ## build and lint misspell
- go install ./cmd/misspell
- gometalinter \
- --vendor \
- --deadline=60s \
- --disable-all \
- --enable=vet \
- --enable=golint \
- --enable=gofmt \
- --enable=goimports \
- --enable=gosimple \
- --enable=staticcheck \
- --enable=ineffassign \
- --exclude=/usr/local/go/src/net/lookup_unix.go \
- ./...
- go test .
-
-test: ## run all tests
- go test .
-
-# the grep in line 2 is to remove misspellings in the spelling dictionary
-# that trigger false positives!!
-falsepositives: /scowl-wl
- cat /scowl-wl/words-US-60.txt | \
- grep -i -v -E "payed|Tyre|Euclidian|nonoccurence|dependancy|reenforced|accidently|surprize|dependance|idealogy|binominal|causalities|conquerer|withing|casette|analyse|analogue|dialogue|paralyse|catalogue|archaeolog|clarinettist|catalyses|cancell|chisell|ageing|cataloguing" | \
- misspell -debug -error
- cat /scowl-wl/words-GB-ise-60.txt | \
- grep -v -E "payed|nonoccurence|withing" | \
- misspell -locale=UK -debug -error
-# cat /scowl-wl/words-GB-ize-60.txt | \
-# grep -v -E "withing" | \
-# misspell -debug -error
-# cat /scowl-wl/words-CA-60.txt | \
-# grep -v -E "withing" | \
-# misspell -debug -error
-
-bench: ## run benchmarks
- go test -bench '.*'
-
-clean: ## clean up time
- rm -rf dist/ bin/
- go clean ./...
- git gc --aggressive
-
-ci: ## run test like travis-ci does, requires docker
- docker run --rm \
- -v $(PWD):/go/src/github.com/client9/misspell \
- -w /go/src/github.com/client9/misspell \
- ${CONTAINER} \
- make build falsepositives
-
-docker-build: ## build a docker test image
- docker build -t ${CONTAINER} .
-
-docker-pull: ## pull latest test image
- docker pull ${CONTAINER}
-
-docker-console: ## log into the test image
- docker run --rm -it \
- -v $(PWD):/go/src/github.com/client9/misspell \
- -w /go/src/github.com/client9/misspell \
- ${CONTAINER} sh
-
-.git/hooks/pre-commit: scripts/pre-commit.sh
- cp -f scripts/pre-commit.sh .git/hooks/pre-commit
-.git/hooks/commit-msg: scripts/commit-msg.sh
- cp -f scripts/commit-msg.sh .git/hooks/commit-msg
-hooks: .git/hooks/pre-commit .git/hooks/commit-msg ## install git precommit hooks
-
-.PHONY: help ci console docker-build bench
-
-# https://www.client9.com/self-documenting-makefiles/
-help:
- @awk -F ':|##' '/^[^\t].+?:.*?##/ {\
- printf "\033[36m%-30s\033[0m %s\n", $$1, $$NF \
- }' $(MAKEFILE_LIST)
-.DEFAULT_GOAL=help
-.PHONY=help
-
diff --git a/vendor/github.com/client9/misspell/README.md b/vendor/github.com/client9/misspell/README.md
deleted file mode 100644
index f7a2e8b4..00000000
--- a/vendor/github.com/client9/misspell/README.md
+++ /dev/null
@@ -1,416 +0,0 @@
-[![Build Status](https://travis-ci.org/client9/misspell.svg?branch=master)](https://travis-ci.org/client9/misspell) [![Go Report Card](https://goreportcard.com/badge/github.com/client9/misspell)](https://goreportcard.com/report/github.com/client9/misspell) [![GoDoc](https://godoc.org/github.com/client9/misspell?status.svg)](https://godoc.org/github.com/client9/misspell) [![Coverage](http://gocover.io/_badge/github.com/client9/misspell)](http://gocover.io/github.com/client9/misspell) [![license](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](https://raw.githubusercontent.com/client9/misspell/master/LICENSE)
-
-Correct commonly misspelled English words... quickly.
-
-### Install
-
-
-If you just want a binary and to start using `misspell`:
-
-```
-curl -L -o ./install-misspell.sh https://git.io/misspell
-sh ./install-misspell.sh
-```
-
-This will install it as `./bin/misspell`. You can adjust the download location using the `-b` flag. File a ticket if you want another platform supported.
-
-
-If you use [Go](https://golang.org/), the best way to run `misspell` is by using [gometalinter](#gometalinter). Otherwise, install `misspell` the old-fashioned way:
-
-```
-go get -u github.com/client9/misspell/cmd/misspell
-```
-
-and the `misspell` binary will be in `$GOPATH/bin`.
-
-### Usage
-
-
-```bash
-$ misspell all.html your.txt important.md files.go
-your.txt:42:10 found "langauge" a misspelling of "language"
-
-# ^ file, line, column
-```
-
-```
-$ misspell -help
-Usage of misspell:
- -debug
- Debug matching, very slow
- -error
- Exit with 2 if misspelling found
- -f string
- 'csv', 'sqlite3' or custom Golang template for output
- -i string
- ignore the following corrections, comma separated
- -j int
- Number of workers, 0 = number of CPUs
- -legal
- Show legal information and exit
- -locale string
- Correct spellings using locale preferences for US or UK. Default is to use a neutral variety of English. Setting locale to US will correct the British spelling of 'colour' to 'color'
- -o string
- output file or [stderr|stdout|] (default "stdout")
- -q Do not emit misspelling output
- -source string
- Source mode: auto=guess, go=golang source, text=plain or markdown-like text (default "auto")
- -w Overwrite file with corrections (default is just to display)
-```
-
-## FAQ
-
-* [Automatic Corrections](#correct)
-* [Converting UK spellings to US](#locale)
-* [Using pipes and stdin](#stdin)
-* [Golang special support](#golang)
-* [gometalinter support](#gometalinter)
-* [CSV Output](#csv)
-* [Using SQLite3](#sqlite)
-* [Changing output format](#output)
-* [Checking a folder recursively](#recursive)
-* [Performance](#performance)
-* [Known Issues](#issues)
-* [Debugging](#debug)
-* [False Negatives and missing words](#missing)
-* [Origin of Word Lists](#words)
-* [Software License](#license)
-* [Problem statement](#problem)
-* [Other spelling correctors](#others)
-* [Other ideas](#otherideas)
-
-<a name="correct"></a>
-### How can I make the corrections automatically?
-
-Just add the `-w` flag!
-
-```
-$ misspell -w all.html your.txt important.md files.go
-your.txt:9:21:corrected "langauge" to "language"
-
-# ^booyah
-```
-
-<a name="locale"></a>
-### How do I convert British spellings to American (or vice-versa)?
-
-Add the `-locale US` flag!
-
-```bash
-$ misspell -locale US important.txt
-important.txt:10:20 found "colour" a misspelling of "color"
-```
-
-Add the `-locale UK` flag!
-
-```bash
-$ echo "My favorite color is blue" | misspell -locale UK
-stdin:1:3:found "favorite color" a misspelling of "favourite colour"
-```
-
-Help is appreciated as I'm neither British nor an
-expert in the English language.
-
-<a name="recursive"></a>
-### How do you check an entire folder recursively?
-
-Just list a directory you'd like to check:
-
-```bash
-misspell .
-misspell aDirectory anotherDirectory aFile
-```
-
-You can also run misspell recursively using the following shell tricks:
-
-```bash
-misspell directory/**/*
-```
-
-or
-
-```bash
-find . -type f | xargs misspell
-```
-
-You can select a type of file as well. The following example selects all `.txt` files that are *not* in the `vendor` directory:
-
-```bash
-find . -type f -name '*.txt' | grep -v vendor/ | xargs misspell -error
-```
-
-<a name="stdin"></a>
-### Can I use pipes or `stdin` for input?
-
-Yes!
-
-Print messages to `stderr` only:
-
-```bash
-$ echo "zeebra" | misspell
-stdin:1:0:found "zeebra" a misspelling of "zebra"
-```
-
-Print messages to `stderr`, and corrected text to `stdout`:
-
-```bash
-$ echo "zeebra" | misspell -w
-stdin:1:0:corrected "zeebra" to "zebra"
-zebra
-```
-
-Only print the corrected text to `stdout`:
-
-```bash
-$ echo "zeebra" | misspell -w -q
-zebra
-```
-
-<a name="golang"></a>
-### Are there special rules for golang source files?
-
-Yes! If the file ends in `.go`, then misspell will only check spelling in
-comments.
-
-If you want to force a file to be checked as a golang source, use `-source=go`
-on the command line. Conversely, you can check a golang source as if it were
-pure text by using `-source=text`. You might want to do this since many
-variable names have misspellings in them!
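-
-For instance (file names here are illustrative):
-
-```bash
-# check identifiers too, not just comments
-misspell -source=text server.go
-
-# check a Ruby file using the comments-only rules
-misspell -source=go app.rb
-```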
-
-### Can I check only comments in other programming languages?
-
-I'm told that using `-source=go` works well for Ruby, JavaScript, Java, C, and
-C++.
-
-It doesn't work well for python and bash.
-
-<a name="gometalinter"></a>
-### Does this work with gometalinter?
-
-[gometalinter](https://github.com/alecthomas/gometalinter) runs
-multiple golang linters. Starting on [2016-06-12](https://github.com/alecthomas/gometalinter/pull/134)
-gometalinter supports `misspell` natively but it is disabled by default.
-
-```bash
-# update your copy of gometalinter
-go get -u github.com/alecthomas/gometalinter
-
-# install updates and misspell
-gometalinter --install --update
-```
-
-To use, just enable `misspell`
-
-```
-gometalinter --enable misspell ./...
-```
-
-Note that gometalinter only checks golang files, and uses the default options
-of `misspell`.
-
-You may wish to run this on your plaintext (.txt) and/or markdown files too.
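-
-For example, as a separate step (paths are illustrative):
-
-```bash
-misspell -error docs/*.md *.txt
-```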
-
-
-<a name="csv"></a>
-### How can I get CSV output?
-
-Using `-f csv`, the output is standard comma-separated values with headers in the first row.
-
-```
-misspell -f csv *
-file,line,column,typo,corrected
-"README.md",9,22,langauge,language
-"README.md",47,25,langauge,language
-```
-
-<a name="sqlite"></a>
-### How can I export to SQLite3?
-
-Using `-f sqlite`, the output is a [sqlite3](https://www.sqlite.org/index.html) dump-file.
-
-```bash
-$ misspell -f sqlite * > /tmp/misspell.sql
-$ cat /tmp/misspell.sql
-
-PRAGMA foreign_keys=OFF;
-BEGIN TRANSACTION;
-CREATE TABLE misspell(
- "file" TEXT,
- "line" INTEGER,i
- "column" INTEGER,i
- "typo" TEXT,
- "corrected" TEXT
-);
-INSERT INTO misspell VALUES("install.txt",202,31,"immediatly","immediately");
-# etc...
-COMMIT;
-```
-
-```bash
-$ sqlite3 -init /tmp/misspell.sql :memory: 'select count(*) from misspell'
-1
-```
-
-With some tricks you can directly pipe output to sqlite3 by using `-init /dev/stdin`:
-
-```
-misspell -f sqlite * | sqlite3 -init /dev/stdin -column -cmd '.width 60 15' ':memory:' \
- 'select substr(file,35),typo,count(*) as count from misspell group by file, typo order by count desc;'
-```
-
-<a name="ignore"></a>
-### How can I ignore rules?
-
-Using the `-i "comma,separated,rules"` flag you can specify corrections to ignore.
-
-For example, if you were to run `misspell -w -error -source=text` against a document that contains the string `Guy Finkelshteyn Braswell`, misspell would change the text to `Guy Finkelstheyn Bras well`. To determine
-the rules to ignore, revert the change and run again with the `-debug` flag. You can then see
-that the corrections were `htey -> they` and `aswell -> as well`. To ignore these two rules, add `-i "htey,aswell"` to
-your command. With debug mode on, misspell prints the corrections but no longer makes them.
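-
-Putting that together (a sketch, using the rules above):
-
-```bash
-misspell -w -error -source=text -i "htey,aswell" document.txt
-```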
-
-<a name="output"></a>
-### How can I change the output format?
-
-Using the `-f template` flag you can pass in a
-[golang text template](https://golang.org/pkg/text/template/) to format the output.
-
-One can use `printf "%q" VALUE` to safely quote a value.
-
-The default template is compatible with [gometalinter](https://github.com/alecthomas/gometalinter)
-```
-{{ .Filename }}:{{ .Line }}:{{ .Column }}:corrected {{ printf "%q" .Original }} to "{{ printf "%q" .Corrected }}"
-```
-
-To just print probable misspellings:
-
-```
--f '{{ .Original }}'
-```
-
-<a name="problem"></a>
-### What problem does this solve?
-
-This corrects commonly misspelled English words in computer source
-code, and other text-based formats (`.txt`, `.md`, etc).
-
-It is designed to run quickly so it can be
-used as a [pre-commit hook](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks)
-with minimal burden on the developer.
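-
-A minimal pre-commit hook might look like this (a sketch; adjust to taste):
-
-```bash
-#!/bin/sh
-# .git/hooks/pre-commit: fail the commit if staged files contain misspellings
-git diff --cached --name-only --diff-filter=ACM | xargs misspell -error
-```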
-
-It does not work with binary formats (e.g. Word documents).
-
-It is not a complete spell-checking program nor a grammar checker.
-
-<a name="others"></a>
-### What are other misspelling correctors and what's wrong with them?
-
-Some other misspelling correctors:
-
-* https://github.com/vlajos/misspell_fixer
-* https://github.com/lyda/misspell-check
-* https://github.com/lucasdemarchi/codespell
-
-They all work but had problems that prevented me from using them at scale:
-
-* slow: all of the above check one misspelling at a time (i.e. linearly) using regexps
-* not MIT/Apache2 licensed (or equivalent)
-* have dependencies that don't work for me (python3, bash, linux sed, etc)
-* don't understand American vs. British English and sometimes make unwelcome "corrections"
-
-That said, they might be perfect for you and many have more features
-than this project!
-
-<a name="performance"></a>
-### How fast is it?
-
-Misspell is easily 100x to 1000x faster than other spelling correctors. You
-should be able to check and correct 1000 files in under 250ms.
-
-This uses the mighty power of golang's
-[strings.Replacer](https://golang.org/pkg/strings/#Replacer) which is
-an implementation (or variation) of the
-[Aho–Corasick algorithm](https://en.wikipedia.org/wiki/Aho–Corasick_algorithm).
-It matches multiple substrings *simultaneously*.
-
-In addition, it uses multiple CPU cores to work on multiple files.
-
-<a name="issues"></a>
-### What problems does it have?
-
-Unlike the other projects, this doesn't know what a "word" is. There may be
-more false positives and false negatives due to this. On the other hand, it
-sometimes catches things others don't.
-
-Either way, please file bugs and we'll fix them!
-
-Since it makes corrections in parallel, it can be non-obvious exactly
-which word was corrected.
-
-<a name="debug"></a>
-### It's making mistakes. How can I debug?
-
-Run using the `-debug` flag on the file you want. It should then print what word
-it is trying to correct. Then [file a
-bug](https://github.com/client9/misspell/issues) describing the problem.
-Thanks!
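-
-For example (file name is illustrative):
-
-```bash
-misspell -debug troublesome.txt
-```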
-
-<a name="missing"></a>
-### Why is it making mistakes or missing items in golang files?
-
-The matching function is *case-sensitive*, so variable names made of multiple
-words in either all-upper or all-lower case can sometimes cause false
-positives. For instance, a variable named `bodyreader` could trigger a false
-positive, since it contains `yrea`, which could be corrected to `year`.
-Other problems happen if the variable name uses an English contraction that
-should use an apostrophe. The best way of avoiding this is to follow the
-[Effective Go naming
-conventions](https://golang.org/doc/effective_go.html#mixed-caps) and use
-[camelCase](https://en.wikipedia.org/wiki/CamelCase) for variable names. You
-can check your code using [golint](https://github.com/golang/lint).
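-
-A quick way to see the difference (a sketch; the exact rules may vary):
-
-```bash
-echo "bodyreader" | misspell -debug   # may flag "yrea" -> "year"
-echo "bodyReader" | misspell -debug   # camelCase avoids the partial match
-```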
-
-<a name="license"></a>
-### What license is this?
-
-The main code is [MIT](https://github.com/client9/misspell/blob/master/LICENSE).
-
-Misspell also makes use of the Golang standard library and contains a modified version of Golang's [strings.Replacer](https://golang.org/pkg/strings/#Replacer),
-which is covered under a [BSD License](https://github.com/golang/go/blob/master/LICENSE). Type `misspell -legal` for more details or see [legal.go](https://github.com/client9/misspell/blob/master/legal.go).
-
-<a name="words"></a>
-### Where do the word lists come from?
-
-It started with a word list from
-[Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines).
-Unfortunately, this list had to be highly edited as many of the words are
-obsolete or based on mistakes made on mechanical typewriters (I'm guessing).
-
-Additional words were added based on actual mistakes seen in
-the wild (meaning self-generated).
-
-Variations of UK and US spellings are based on many sources including:
-
-* http://www.tysto.com/uk-us-spelling-list.html (with heavy editing, many are incorrect)
-* http://www.oxforddictionaries.com/us/words/american-and-british-spelling-american (excellent site but incomplete)
-* Diffing US and UK [scowl dictionaries](http://wordlist.aspell.net)
-
-American English is more accepting of spelling variations than is British
-English, so "what is American or not" is subject to opinion. Corrections and help welcome.
-
-<a name="otherideas"></a>
-### What are some other enhancements that could be done?
-
-Here are some ideas for enhancements:
-
-*Capitalization of proper nouns* could be added (e.g. weekday and month names, country names, language names).
-
-*Opinionated US spellings* US English has a number of words with alternate
-spellings. Think [adviser vs.
-advisor](http://grammarist.com/spelling/adviser-advisor/). While "advisor" is not wrong, the opinionated US
-locale would correct "advisor" to "adviser".
-
-*Versioning* Some type of versioning is needed so reporting mistakes and errors is easier.
-
-*Feedback* Mistakes would be sent to some server for aggregation and feedback review.
-
-*Contractions and Apostrophes* This would optionally correct "isnt" to
-"isn't", etc.
diff --git a/vendor/github.com/client9/misspell/benchmark_test.go b/vendor/github.com/client9/misspell/benchmark_test.go
deleted file mode 100644
index d8126db3..00000000
--- a/vendor/github.com/client9/misspell/benchmark_test.go
+++ /dev/null
@@ -1,105 +0,0 @@
-package misspell
-
-import (
- "bytes"
- "io/ioutil"
- "testing"
-)
-
-var (
- sampleClean string
- sampleDirty string
- tmpCount int
- tmp string
- rep *Replacer
-)
-
-func init() {
-
- buf := bytes.Buffer{}
- for i := 0; i < len(DictMain); i += 2 {
- buf.WriteString(DictMain[i+1] + " ")
- if i%5 == 0 {
- buf.WriteString("\n")
- }
- }
- sampleClean = buf.String()
- sampleDirty = sampleClean + DictMain[0] + "\n"
- rep = New()
-}
-
-// BenchmarkCleanString takes a clean string (one with no errors)
-func BenchmarkCleanString(b *testing.B) {
- b.ResetTimer()
- b.ReportAllocs()
- var updated string
- var diffs []Diff
- var count int
- for n := 0; n < b.N; n++ {
- updated, diffs = rep.Replace(sampleClean)
- count += len(diffs)
- }
-
- // prevent compiler optimizations
- tmpCount = count
- tmp = updated
-}
-
-func discardDiff(_ Diff) {
- tmpCount++
-}
-
-// BenchmarkCleanStream takes a clean reader (no misspells) and outputs to a buffer
-func BenchmarkCleanStream(b *testing.B) {
- b.ResetTimer()
- b.ReportAllocs()
- tmpCount = 0
- buf := bytes.NewBufferString(sampleClean)
- out := bytes.NewBuffer(make([]byte, 0, len(sampleClean)+100))
- for n := 0; n < b.N; n++ {
- buf.Reset()
- buf.WriteString(sampleClean)
- out.Reset()
- rep.ReplaceReader(buf, out, discardDiff)
- }
-}
-
-// BenchmarkCleanStreamDiscard takes a clean reader and discards output
-func BenchmarkCleanStreamDiscard(b *testing.B) {
- b.ResetTimer()
- b.ReportAllocs()
-
- buf := bytes.NewBufferString(sampleClean)
- tmpCount = 0
- for n := 0; n < b.N; n++ {
- buf.Reset()
- buf.WriteString(sampleClean)
- rep.ReplaceReader(buf, ioutil.Discard, discardDiff)
- }
-}
-
-// BenchmarkDirtyString takes a dirty string (one with a misspelling)
-func BenchmarkDirtyString(b *testing.B) {
- b.ResetTimer()
- b.ReportAllocs()
- var updated string
- var diffs []Diff
- var count int
- for n := 0; n < b.N; n++ {
- updated, diffs = rep.Replace(sampleDirty)
- count += len(diffs)
- }
-
- // prevent compiler optimizations
- tmpCount = count
- tmp = updated
-}
-
-func BenchmarkCompile(b *testing.B) {
- r := New()
- b.ReportAllocs()
- b.ResetTimer()
- for n := 0; n < b.N; n++ {
- r.Compile()
- }
-}
diff --git a/vendor/github.com/client9/misspell/case_test.go b/vendor/github.com/client9/misspell/case_test.go
deleted file mode 100644
index 1705cf07..00000000
--- a/vendor/github.com/client9/misspell/case_test.go
+++ /dev/null
@@ -1,42 +0,0 @@
-package misspell
-
-import (
- "reflect"
- "testing"
-)
-
-func TestCaseStyle(t *testing.T) {
- cases := []struct {
- word string
- want WordCase
- }{
- {"lower", CaseLower},
- {"what's", CaseLower},
- {"UPPER", CaseUpper},
- {"Title", CaseTitle},
- {"CamelCase", CaseUnknown},
- {"camelCase", CaseUnknown},
- }
-
- for pos, tt := range cases {
- got := CaseStyle(tt.word)
- if tt.want != got {
- t.Errorf("Case %d %q: want %v got %v", pos, tt.word, tt.want, got)
- }
- }
-}
-
-func TestCaseVariations(t *testing.T) {
- cases := []struct {
- word string
- want []string
- }{
- {"that's", []string{"that's", "That's", "THAT'S"}},
- }
- for pos, tt := range cases {
- got := CaseVariations(tt.word, CaseStyle(tt.word))
- if !reflect.DeepEqual(tt.want, got) {
- t.Errorf("Case %d %q: want %v got %v", pos, tt.word, tt.want, got)
- }
- }
-}
diff --git a/vendor/github.com/client9/misspell/cmd/misspell/main.go b/vendor/github.com/client9/misspell/cmd/misspell/main.go
index 3d2c2b4d..174d79d8 100644
--- a/vendor/github.com/client9/misspell/cmd/misspell/main.go
+++ b/vendor/github.com/client9/misspell/cmd/misspell/main.go
@@ -1,3 +1,4 @@
+// The misspell command corrects commonly misspelled English words in source files.
package main
import (
diff --git a/vendor/github.com/client9/misspell/falsepositives_test.go b/vendor/github.com/client9/misspell/falsepositives_test.go
deleted file mode 100644
index 445cb2d1..00000000
--- a/vendor/github.com/client9/misspell/falsepositives_test.go
+++ /dev/null
@@ -1,136 +0,0 @@
-package misspell
-
-import (
- "testing"
-)
-
-func TestFalsePositives(t *testing.T) {
- cases := []string{
- "importEnd",
- "drinkeries",
- "subscripting",
- "unprojected",
- "updaters",
- "templatize",
- "requesters",
- "requestors",
- "replicaset",
- "parallelise",
- "parallelize",
- "perceptron", // http://foldoc.org/perceptron
- "perceptrons", // ^^
- "convertors", // alt spelling
- "adventurers",
- " s.svc.GetObject ",
- "infinitie.net",
- "foo summaries\n",
- "thru",
- "publically",
- "6YUO5", // base64
- "cleaner", // triggered by "cleane->cleanser" and partial word FP
- " http.Redirect(w, req, req.URL.Path, http.StatusFound) ",
- "url is http://zeebra.com ",
- "path is /zeebra?zeebra=zeebra ",
- "Malcom_McLean",
- "implementor", // alt spelling, see https://github.com/client9/misspell/issues/46
- "searchtypes",
- " witness",
- "returndata",
- "UNDERSTOOD",
- "textinterface",
- " committed ",
- "committed",
- "Bengali",
- "Portuguese",
- "scientists",
- "causally",
- "embarrassing",
- "setuptools", // python package
- "committing",
- "guises",
- "disguise",
- "begging",
- "cmo",
- "cmos",
- "borked",
- "hadn't",
- "Iceweasel",
- "summarised",
- "autorenew",
- "travelling",
- "republished",
- "fallthru",
- "pruning",
- "deb.VersionDontCare",
- "authtag",
- "intrepid",
- "usefully",
- "there",
- "definite",
- "earliest",
- "Japanese",
- "international",
- "excellent",
- "gracefully",
- "carefully",
- "class",
- "include",
- "process",
- "address",
- "attempt",
- "large",
- "although",
- "specific",
- "taste",
- "against",
- "successfully",
- "unsuccessfully",
- "occurred",
- "agree",
- "controlled",
- "publisher",
- "strategy",
- "geoposition",
- "paginated",
- "happened",
- "relative",
- "computing",
- "language",
- "manual",
- "token",
- "into",
- "nothing",
- "datatool",
- "propose",
- "learnt",
- "tolerant",
- "whitehat",
- "monotonic",
- "comprised",
- "indemnity",
- "flattened",
- "interrupted",
- "inotify",
- "occasional",
- "forging",
- "ampersand",
- "decomposition",
- "commit",
- "programmer", // "grammer"
- // "requestsinserted",
- "seeked", // technical word
- "bodyreader", // variable name
- "cantPrepare", // variable name
- "dontPrepare", // variable name
- "\\nto", // https://github.com/client9/misspell/issues/93
- "4f8b42c22dd3729b519ba6f68d2da7cc5b2d606d05daed5ad5128cc03e6c6358", // https://github.com/client9/misspell/issues/97
- }
- r := New()
- r.Debug = true
- for casenum, tt := range cases {
- got, _ := r.Replace(tt)
- if got != tt {
- t.Errorf("%d: %q got converted to %q", casenum, tt, got)
- }
- }
-}
diff --git a/vendor/github.com/client9/misspell/goreleaser.yml b/vendor/github.com/client9/misspell/goreleaser.yml
deleted file mode 100644
index 2bd738f8..00000000
--- a/vendor/github.com/client9/misspell/goreleaser.yml
+++ /dev/null
@@ -1,29 +0,0 @@
-# goreleaser.yml
-# https://github.com/goreleaser/goreleaser
-build:
- main: cmd/misspell/main.go
- binary: misspell
- ldflags: -s -w -X main.version={{.Version}}
- goos:
- - darwin
- - linux
- - windows
- goarch:
- - amd64
- env:
- - CGO_ENABLED=0
- ignore:
- - goos: darwin
- goarch: 386
- - goos: windows
- goarch: 386
-
-archive:
- name_template: "{{ .Binary }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
- replacements:
- amd64: 64bit
- 386: 32bit
- darwin: mac
-
-snapshot:
- name_template: SNAPSHOT-{{.Commit}}
diff --git a/vendor/github.com/client9/misspell/install-misspell.sh b/vendor/github.com/client9/misspell/install-misspell.sh
deleted file mode 100755
index 8e0ff5d9..00000000
--- a/vendor/github.com/client9/misspell/install-misspell.sh
+++ /dev/null
@@ -1,318 +0,0 @@
-#!/bin/sh
-set -e
-# Code generated by godownloader. DO NOT EDIT.
-#
-
-usage() {
- this=$1
- cat <<EOF
-$this: download go binaries for client9/misspell
-
-Usage: $this [-b] bindir [version]
- -b sets bindir or installation directory, default "./bin"
- [version] is a version number from
- https://github.com/client9/misspell/releases
- If version is missing, an attempt will be made to find the latest version.
-
-Generated by godownloader
- https://github.com/goreleaser/godownloader
-
-EOF
- exit 2
-}
-
-parse_args() {
- #BINDIR is ./bin unless set by ENV,
- # overridden by the -b flag below
-
- BINDIR=${BINDIR:-./bin}
- while getopts "b:h?" arg; do
- case "$arg" in
- b) BINDIR="$OPTARG" ;;
- h | \?) usage "$0" ;;
- esac
- done
- shift $((OPTIND - 1))
- VERSION=$1
-}
-# this function wraps all the destructive operations.
-# if a curl|bash pipe cuts off the end of the script due to
-# network issues, either nothing will happen or the shell will
-# raise a syntax error, preventing half-done work
-execute() {
- TMPDIR=$(mktmpdir)
- echo "$PREFIX: downloading ${TARBALL_URL}"
- http_download "${TMPDIR}/${TARBALL}" "${TARBALL_URL}"
-
- echo "$PREFIX: verifying checksums"
- http_download "${TMPDIR}/${CHECKSUM}" "${CHECKSUM_URL}"
- hash_sha256_verify "${TMPDIR}/${TARBALL}" "${TMPDIR}/${CHECKSUM}"
-
- (cd "${TMPDIR}" && untar "${TARBALL}")
- install -d "${BINDIR}"
- install "${TMPDIR}/${BINARY}" "${BINDIR}/"
- echo "$PREFIX: installed as ${BINDIR}/${BINARY}"
-}
-is_supported_platform() {
- platform=$1
- found=1
- case "$platform" in
- darwin/amd64) found=0 ;;
- linux/amd64) found=0 ;;
- esac
- case "$platform" in
- darwin/386) found=1 ;;
- esac
- return $found
-}
-check_platform() {
- if is_supported_platform "$PLATFORM"; then
- # optional logging goes here
- true
- else
- echo "${PREFIX}: platform $PLATFORM is not supported. Make sure this script is up-to-date and file request at https://github.com/${PREFIX}/issues/new"
- exit 1
- fi
-}
-adjust_version() {
- if [ -z "${VERSION}" ]; then
- echo "$PREFIX: checking GitHub for latest version"
- VERSION=$(github_last_release "$OWNER/$REPO")
- fi
- # if version starts with 'v', remove it
- VERSION=${VERSION#v}
-}
-adjust_format() {
- # change format (tar.gz or zip) based on ARCH
- true
-}
-adjust_os() {
- # adjust archive name based on OS
- case ${OS} in
- 386) OS=32bit ;;
- amd64) OS=64bit ;;
- darwin) OS=mac ;;
- esac
- true
-}
-adjust_arch() {
- # adjust archive name based on ARCH
- case ${ARCH} in
- 386) ARCH=32bit ;;
- amd64) ARCH=64bit ;;
- darwin) ARCH=mac ;;
- esac
- true
-}
-
-cat /dev/null <<EOF
-------------------------------------------------------------------------
-https://github.com/client9/shlib - portable posix shell functions
-Public domain - http://unlicense.org
-https://github.com/client9/shlib/blob/master/LICENSE.md
-but credit (and pull requests) appreciated.
-------------------------------------------------------------------------
-EOF
-is_command() {
- command -v "$1" >/dev/null
-}
-uname_os() {
- os=$(uname -s | tr '[:upper:]' '[:lower:]')
- echo "$os"
-}
-uname_arch() {
- arch=$(uname -m)
- case $arch in
- x86_64) arch="amd64" ;;
- x86) arch="386" ;;
- i686) arch="386" ;;
- i386) arch="386" ;;
- aarch64) arch="arm64" ;;
- armv5*) arch="arm5" ;;
- armv6*) arch="arm6" ;;
- armv7*) arch="arm7" ;;
- esac
- echo ${arch}
-}
-uname_os_check() {
- os=$(uname_os)
- case "$os" in
- darwin) return 0 ;;
- dragonfly) return 0 ;;
- freebsd) return 0 ;;
- linux) return 0 ;;
- android) return 0 ;;
- nacl) return 0 ;;
- netbsd) return 0 ;;
- openbsd) return 0 ;;
- plan9) return 0 ;;
- solaris) return 0 ;;
- windows) return 0 ;;
- esac
- echo "$0: uname_os_check: internal error '$(uname -s)' got converted to '$os' which is not a GOOS value. Please file bug at https://github.com/client9/shlib"
- return 1
-}
-uname_arch_check() {
- arch=$(uname_arch)
- case "$arch" in
- 386) return 0 ;;
- amd64) return 0 ;;
- arm64) return 0 ;;
- armv5) return 0 ;;
- armv6) return 0 ;;
- armv7) return 0 ;;
- ppc64) return 0 ;;
- ppc64le) return 0 ;;
- mips) return 0 ;;
- mipsle) return 0 ;;
- mips64) return 0 ;;
- mips64le) return 0 ;;
- s390x) return 0 ;;
- amd64p32) return 0 ;;
- esac
- echo "$0: uname_arch_check: internal error '$(uname -m)' got converted to '$arch' which is not a GOARCH value. Please file bug report at https://github.com/client9/shlib"
- return 1
-}
-untar() {
- tarball=$1
- case "${tarball}" in
- *.tar.gz | *.tgz) tar -xzf "${tarball}" ;;
- *.tar) tar -xf "${tarball}" ;;
- *.zip) unzip "${tarball}" ;;
- *)
- echo "Unknown archive format for ${tarball}"
- return 1
- ;;
- esac
-}
-mktmpdir() {
- test -z "$TMPDIR" && TMPDIR="$(mktemp -d)"
- mkdir -p "${TMPDIR}"
- echo "${TMPDIR}"
-}
-http_download() {
- local_file=$1
- source_url=$2
- header=$3
- headerflag=''
- destflag=''
- if is_command curl; then
- cmd='curl --fail -sSL'
- destflag='-o'
- headerflag='-H'
- elif is_command wget; then
- cmd='wget -q'
- destflag='-O'
- headerflag='--header'
- else
- echo "http_download: unable to find wget or curl"
- return 1
- fi
- if [ -z "$header" ]; then
- $cmd $destflag "$local_file" "$source_url"
- else
- $cmd $headerflag "$header" $destflag "$local_file" "$source_url"
- fi
-}
-github_api() {
- local_file=$1
- source_url=$2
- header=""
- case "$source_url" in
- https://api.github.com*)
- test -z "$GITHUB_TOKEN" || header="Authorization: token $GITHUB_TOKEN"
- ;;
- esac
- http_download "$local_file" "$source_url" "$header"
-}
-github_last_release() {
- owner_repo=$1
- giturl="https://api.github.com/repos/${owner_repo}/releases/latest"
- html=$(github_api - "$giturl")
- version=$(echo "$html" | grep -m 1 "\"tag_name\":" | cut -f4 -d'"')
- test -z "$version" && return 1
- echo "$version"
-}
-hash_sha256() {
- TARGET=${1:-/dev/stdin}
- if is_command gsha256sum; then
- hash=$(gsha256sum "$TARGET") || return 1
- echo "$hash" | cut -d ' ' -f 1
- elif is_command sha256sum; then
- hash=$(sha256sum "$TARGET") || return 1
- echo "$hash" | cut -d ' ' -f 1
- elif is_command shasum; then
- hash=$(shasum -a 256 "$TARGET" 2>/dev/null) || return 1
- echo "$hash" | cut -d ' ' -f 1
- elif is_command openssl; then
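- # openssl prints "SHA256(<file>)= <hash>"; the hash is the second field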
- hash=$(openssl dgst -sha256 "$TARGET") || return 1
- echo "$hash" | cut -d ' ' -f 2
- else
- echo "hash_sha256: unable to find command to compute sha-256 hash"
- return 1
- fi
-}
-hash_sha256_verify() {
- TARGET=$1
- checksums=$2
- if [ -z "$checksums" ]; then
- echo "hash_sha256_verify: checksum file not specified in arg2"
- return 1
- fi
- BASENAME=${TARGET##*/}
- want=$(grep "${BASENAME}" "${checksums}" 2>/dev/null | tr '\t' ' ' | cut -d ' ' -f 1)
- if [ -z "$want" ]; then
- echo "hash_sha256_verify: unable to find checksum for '${TARGET}' in '${checksums}'"
- return 1
- fi
- got=$(hash_sha256 "$TARGET")
- if [ "$want" != "$got" ]; then
- echo "hash_sha256_verify: checksum for '$TARGET' did not verify ${want} vs $got"
- return 1
- fi
-}
-cat /dev/null <<EOF
-------------------------------------------------------------------------
-End of functions from https://github.com/client9/shlib
-------------------------------------------------------------------------
-EOF
-
-OWNER=client9
-REPO=misspell
-BINARY=misspell
-FORMAT=tar.gz
-OS=$(uname_os)
-ARCH=$(uname_arch)
-PREFIX="$OWNER/$REPO"
-PLATFORM="${OS}/${ARCH}"
-GITHUB_DOWNLOAD=https://github.com/${OWNER}/${REPO}/releases/download
-
-uname_os_check "$OS"
-uname_arch_check "$ARCH"
-
-parse_args "$@"
-
-check_platform
-
-adjust_version
-
-adjust_format
-
-adjust_os
-
-adjust_arch
-
-echo "$PREFIX: found version ${VERSION} for ${OS}/${ARCH}"
-
-NAME=${BINARY}_${VERSION}_${OS}_${ARCH}
-TARBALL=${NAME}.${FORMAT}
-TARBALL_URL=${GITHUB_DOWNLOAD}/v${VERSION}/${TARBALL}
-CHECKSUM=${REPO}_checksums.txt
-CHECKSUM_URL=${GITHUB_DOWNLOAD}/v${VERSION}/${CHECKSUM}
-
-# Adjust binary name if windows
-if [ "$OS" = "windows" ]; then
- BINARY="${BINARY}.exe"
-fi
-
-execute
diff --git a/vendor/github.com/client9/misspell/legal.go b/vendor/github.com/client9/misspell/legal.go
index da0e6bd4..20076974 100644
--- a/vendor/github.com/client9/misspell/legal.go
+++ b/vendor/github.com/client9/misspell/legal.go
@@ -1,3 +1,4 @@
+// Package misspell corrects commonly misspelled English words in source files.
package misspell
// Legal provides licensing info.
diff --git a/vendor/github.com/client9/misspell/mime_test.go b/vendor/github.com/client9/misspell/mime_test.go
deleted file mode 100644
index 26acc06e..00000000
--- a/vendor/github.com/client9/misspell/mime_test.go
+++ /dev/null
@@ -1,39 +0,0 @@
-package misspell
-
-import (
- "testing"
-)
-
-func TestIsBinaryFile(t *testing.T) {
- cases := []struct {
- path string
- want bool
- }{
- {"foo.png", true},
- {"foo.PNG", true},
- {"README", false},
- {"foo.txt", false},
- }
-
- for num, tt := range cases {
- if isBinaryFilename(tt.path) != tt.want {
- t.Errorf("Case %d: %s was not %v", num, tt.path, tt.want)
- }
- }
-}
-
-func TestIsSCMPath(t *testing.T) {
- cases := []struct {
- path string
- want bool
- }{
- {"foo.png", false},
- {"foo/.git/whatever", true},
- }
-
- for num, tt := range cases {
- if isSCMPath(tt.path) != tt.want {
- t.Errorf("Case %d: %s was not %v", num, tt.path, tt.want)
- }
- }
-}
diff --git a/vendor/github.com/client9/misspell/notwords_test.go b/vendor/github.com/client9/misspell/notwords_test.go
deleted file mode 100644
index e52e1aab..00000000
--- a/vendor/github.com/client9/misspell/notwords_test.go
+++ /dev/null
@@ -1,27 +0,0 @@
-package misspell
-
-import (
- "testing"
-)
-
-func TestNotWords(t *testing.T) {
- cases := []struct {
- word string
- want string
- }{
- {" /foo/bar abc", " abc"},
- {"X/foo/bar abc", "X/foo/bar abc"},
- {"[/foo/bar] abc", "[ ] abc"},
- {"/", "/"},
- {"x nickg@client9.xxx y", "x y"},
- {"x infinitie.net y", "x y"},
- {"(s.svc.GetObject(", "( ("},
- {"\\nto", " to"},
- }
- for pos, tt := range cases {
- got := RemoveNotWords(tt.word)
- if got != tt.want {
- t.Errorf("%d want %q got %q", pos, tt.want, got)
- }
- }
-}
diff --git a/vendor/github.com/client9/misspell/replace_test.go b/vendor/github.com/client9/misspell/replace_test.go
deleted file mode 100644
index 538f5bad..00000000
--- a/vendor/github.com/client9/misspell/replace_test.go
+++ /dev/null
@@ -1,119 +0,0 @@
-package misspell
-
-import (
- "strings"
- "testing"
-)
-
-func TestReplaceIgnore(t *testing.T) {
- cases := []struct {
- ignore string
- text string
- }{
- {"knwo,gae", "https://github.com/Unknwon, github.com/hnakamur/gaesessions"},
- }
- for line, tt := range cases {
- r := New()
- r.RemoveRule(strings.Split(tt.ignore, ","))
- r.Compile()
- got, _ := r.Replace(tt.text)
- if got != tt.text {
- t.Errorf("%d: Replace files want %q got %q", line, tt.text, got)
- }
- }
-}
-
-func TestReplaceLocale(t *testing.T) {
- cases := []struct {
- orig string
- want string
- }{
- {"The colours are pretty", "The colors are pretty"},
- {"summaries", "summaries"},
- }
-
- r := New()
- r.AddRuleList(DictAmerican)
- r.Compile()
- for line, tt := range cases {
- got, _ := r.Replace(tt.orig)
- if got != tt.want {
- t.Errorf("%d: ReplaceLocale want %q got %q", line, tt.orig, got)
- }
- }
-}
-
-func TestReplace(t *testing.T) {
- cases := []struct {
- orig string
- want string
- }{
- {"I live in Amercia", "I live in America"},
- {"grill brocoli now", "grill broccoli now"},
- {"There is a zeebra", "There is a zebra"},
- {"foo other bar", "foo other bar"},
- {"ten fiels", "ten fields"},
- {"Closeing Time", "Closing Time"},
- {"closeing Time", "closing Time"},
- {" TOOD: foobar", " TODO: foobar"},
- {" preceed ", " precede "},
- {"preceeding", "preceding"},
- {"functionallity", "functionality"},
- }
- r := New()
- for line, tt := range cases {
- got, _ := r.Replace(tt.orig)
- if got != tt.want {
- t.Errorf("%d: Replace files want %q got %q", line, tt.orig, got)
- }
- }
-}
-
-func TestCheckReplace(t *testing.T) {
- r := Replacer{
- engine: NewStringReplacer("foo", "foobar", "runing", "running"),
- corrected: map[string]string{
- "foo": "foobar",
- "runing": "running",
- },
- }
-
- s := "nothing at all"
- news, diffs := r.Replace(s)
- if s != news || len(diffs) != 0 {
- t.Errorf("Basic recheck failed: %q vs %q", s, news)
- }
-
- //
- // Test single, correct replacements
- //
- s = "foo"
- news, diffs = r.Replace(s)
- if news != "foobar" || len(diffs) != 1 || diffs[0].Original != "foo" && diffs[0].Corrected != "foobar" && diffs[0].Column != 0 {
- t.Errorf("basic recheck1 failed %q vs %q", s, news)
- }
- s = "foo junk"
- news, diffs = r.Replace(s)
- if news != "foobar junk" || len(diffs) != 1 || diffs[0].Original != "foo" && diffs[0].Corrected != "foobar" && diffs[0].Column != 0 {
- t.Errorf("basic recheck2 failed %q vs %q, %v", s, news, diffs[0])
- }
-
- s = "junk foo"
- news, diffs = r.Replace(s)
- if news != "junk foobar" || len(diffs) != 1 || diffs[0].Original != "foo" && diffs[0].Corrected != "foobar" && diffs[0].Column != 5 {
- t.Errorf("basic recheck3 failed: %q vs %q", s, news)
- }
-
- s = "junk foo junk"
- news, diffs = r.Replace(s)
- if news != "junk foobar junk" || len(diffs) != 1 || diffs[0].Original != "foo" && diffs[0].Corrected != "foobar" && diffs[0].Column != 5 {
- t.Errorf("basic recheck4 failed: %q vs %q", s, news)
- }
-
- // Incorrect replacements
- s = "food pruning"
- news, _ = r.Replace(s)
- if news != s {
- t.Errorf("incorrect.Correctedacement failed: %q vs %q", s, news)
- }
-}
diff --git a/vendor/github.com/client9/misspell/scripts/commit-msg.sh b/vendor/github.com/client9/misspell/scripts/commit-msg.sh
deleted file mode 100755
index 3655bd00..00000000
--- a/vendor/github.com/client9/misspell/scripts/commit-msg.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-#!/bin/sh -ex
-misspell -error "$1"
diff --git a/vendor/github.com/client9/misspell/scripts/goreleaser.sh b/vendor/github.com/client9/misspell/scripts/goreleaser.sh
deleted file mode 100755
index 99a1bd1e..00000000
--- a/vendor/github.com/client9/misspell/scripts/goreleaser.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/sh -e
-# autorelease based on tag
-test -n "$TRAVIS_TAG" && curl -sL https://git.io/goreleaser | bash
diff --git a/vendor/github.com/client9/misspell/scripts/pre-commit.sh b/vendor/github.com/client9/misspell/scripts/pre-commit.sh
deleted file mode 100755
index 291c45ad..00000000
--- a/vendor/github.com/client9/misspell/scripts/pre-commit.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-#!/bin/sh -ex
-make ci
diff --git a/vendor/github.com/client9/misspell/scripts/update-godownloader.sh b/vendor/github.com/client9/misspell/scripts/update-godownloader.sh
deleted file mode 100755
index 8d933e2f..00000000
--- a/vendor/github.com/client9/misspell/scripts/update-godownloader.sh
+++ /dev/null
@@ -1,9 +0,0 @@
-#!/bin/sh -ex
-#
-# This updates the 'godownloader-*.sh' scripts from upstream
-# This is done manually
-#
-SOURCE=https://raw.githubusercontent.com/goreleaser/godownloader/master/samples
-curl --fail -o godownloader-misspell.sh "$SOURCE/godownloader-misspell.sh"
-chmod a+x godownloader-misspell.sh
-
diff --git a/vendor/github.com/client9/misspell/stringreplacer_test.gox b/vendor/github.com/client9/misspell/stringreplacer_test.gox
deleted file mode 100644
index 70da997f..00000000
--- a/vendor/github.com/client9/misspell/stringreplacer_test.gox
+++ /dev/null
@@ -1,421 +0,0 @@
-// Copyright 2009 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package misspell_test
-
-import (
- "bytes"
- "fmt"
- "strings"
- "testing"
-
- . "github.com/client9/misspell"
-)
-
-var htmlEscaper = NewStringReplacer(
- "&", "&amp;",
- "<", "&lt;",
- ">", "&gt;",
- `"`, "&quot;",
- "'", "&apos;",
-)
-
-var htmlUnescaper = NewStringReplacer(
- "&amp;", "&",
- "&lt;", "<",
- "&gt;", ">",
- "&quot;", `"`,
- "&apos;", "'",
-)
-
-// The http package's old HTML escaping function.
-func oldHTMLEscape(s string) string {
- s = strings.Replace(s, "&", "&amp;", -1)
- s = strings.Replace(s, "<", "&lt;", -1)
- s = strings.Replace(s, ">", "&gt;", -1)
- s = strings.Replace(s, `"`, "&quot;", -1)
- s = strings.Replace(s, "'", "&apos;", -1)
- return s
-}
-
-var capitalLetters = NewStringReplacer("a", "A", "b", "B")
-
-// TestReplacer tests the replacer implementations.
-func TestReplacer(t *testing.T) {
- type testCase struct {
- r *StringReplacer
- in, out string
- }
- var testCases []testCase
-
- // str converts 0xff to "\xff". This isn't just string(b) since that converts to UTF-8.
- str := func(b byte) string {
- return string([]byte{b})
- }
- var s []string
-
- // inc maps "\x00"->"\x01", ..., "a"->"b", "b"->"c", ..., "\xff"->"\x00".
- for i := 0; i < 256; i++ {
- s = append(s, str(byte(i)), str(byte(i+1)))
- }
- inc := NewStringReplacer(s...)
-
- // Test cases with 1-byte old strings, 1-byte new strings.
- testCases = append(testCases,
- testCase{capitalLetters, "brad", "BrAd"},
- testCase{capitalLetters, strings.Repeat("a", (32<<10)+123), strings.Repeat("A", (32<<10)+123)},
- testCase{capitalLetters, "", ""},
-
- testCase{inc, "brad", "csbe"},
- testCase{inc, "\x00\xff", "\x01\x00"},
- testCase{inc, "", ""},
-
- testCase{NewStringReplacer("a", "1", "a", "2"), "brad", "br1d"},
- )
-
- // repeat maps "a"->"a", "b"->"bb", "c"->"ccc", ...
- s = nil
- for i := 0; i < 256; i++ {
- n := i + 1 - 'a'
- if n < 1 {
- n = 1
- }
- s = append(s, str(byte(i)), strings.Repeat(str(byte(i)), n))
- }
- repeat := NewStringReplacer(s...)
-
- // Test cases with 1-byte old strings, variable length new strings.
- testCases = append(testCases,
- testCase{htmlEscaper, "No changes", "No changes"},
- testCase{htmlEscaper, "I <3 escaping & stuff", "I &lt;3 escaping &amp; stuff"},
- testCase{htmlEscaper, "&&&", "&amp;&amp;&amp;"},
- testCase{htmlEscaper, "", ""},
-
- testCase{repeat, "brad", "bbrrrrrrrrrrrrrrrrrradddd"},
- testCase{repeat, "abba", "abbbba"},
- testCase{repeat, "", ""},
-
- testCase{NewStringReplacer("a", "11", "a", "22"), "brad", "br11d"},
- )
-
- // The remaining test cases have variable length old strings.
-
- testCases = append(testCases,
- testCase{htmlUnescaper, "&amp;amp;", "&amp;"},
- testCase{htmlUnescaper, "&lt;b&gt;HTML&apos;s neat&lt;/b&gt;", "<b>HTML's neat</b>"},
- testCase{htmlUnescaper, "", ""},
-
- testCase{NewStringReplacer("a", "1", "a", "2", "xxx", "xxx"), "brad", "br1d"},
-
- testCase{NewStringReplacer("a", "1", "aa", "2", "aaa", "3"), "aaaa", "1111"},
-
- testCase{NewStringReplacer("aaa", "3", "aa", "2", "a", "1"), "aaaa", "31"},
- )
-
- // gen1 has multiple old strings of variable length. There is no
- // overall non-empty common prefix, but some pairwise common prefixes.
- gen1 := NewStringReplacer(
- "aaa", "3[aaa]",
- "aa", "2[aa]",
- "a", "1[a]",
- "i", "i",
- "longerst", "most long",
- "longer", "medium",
- "long", "short",
- "xx", "xx",
- "x", "X",
- "X", "Y",
- "Y", "Z",
- )
- testCases = append(testCases,
- testCase{gen1, "fooaaabar", "foo3[aaa]b1[a]r"},
- testCase{gen1, "long, longerst, longer", "short, most long, medium"},
- testCase{gen1, "xxxxx", "xxxxX"},
- testCase{gen1, "XiX", "YiY"},
- testCase{gen1, "", ""},
- )
-
- // gen2 has multiple old strings with no pairwise common prefix.
- gen2 := NewStringReplacer(
- "roses", "red",
- "violets", "blue",
- "sugar", "sweet",
- )
- testCases = append(testCases,
- testCase{gen2, "roses are red, violets are blue...", "red are red, blue are blue..."},
- testCase{gen2, "", ""},
- )
-
- // gen3 has multiple old strings with an overall common prefix.
- gen3 := NewStringReplacer(
- "abracadabra", "poof",
- "abracadabrakazam", "splat",
- "abraham", "lincoln",
- "abrasion", "scrape",
- "abraham", "isaac",
- )
- testCases = append(testCases,
- testCase{gen3, "abracadabrakazam abraham", "poofkazam lincoln"},
- testCase{gen3, "abrasion abracad", "scrape abracad"},
- testCase{gen3, "abba abram abrasive", "abba abram abrasive"},
- testCase{gen3, "", ""},
- )
-
- // foo{1,2,3,4} have multiple old strings with an overall common prefix
- // and 1- or 2- byte extensions from the common prefix.
- foo1 := NewStringReplacer(
- "foo1", "A",
- "foo2", "B",
- "foo3", "C",
- )
- foo2 := NewStringReplacer(
- "foo1", "A",
- "foo2", "B",
- "foo31", "C",
- "foo32", "D",
- )
- foo3 := NewStringReplacer(
- "foo11", "A",
- "foo12", "B",
- "foo31", "C",
- "foo32", "D",
- )
- foo4 := NewStringReplacer(
- "foo12", "B",
- "foo32", "D",
- )
- testCases = append(testCases,
- testCase{foo1, "fofoofoo12foo32oo", "fofooA2C2oo"},
- testCase{foo1, "", ""},
-
- testCase{foo2, "fofoofoo12foo32oo", "fofooA2Doo"},
- testCase{foo2, "", ""},
-
- testCase{foo3, "fofoofoo12foo32oo", "fofooBDoo"},
- testCase{foo3, "", ""},
-
- testCase{foo4, "fofoofoo12foo32oo", "fofooBDoo"},
- testCase{foo4, "", ""},
- )
-
- // genAll maps "\x00\x01\x02...\xfe\xff" to "[all]", amongst other things.
- allBytes := make([]byte, 256)
- for i := range allBytes {
- allBytes[i] = byte(i)
- }
- allString := string(allBytes)
- genAll := NewStringReplacer(
- allString, "[all]",
- "\xff", "[ff]",
- "\x00", "[00]",
- )
- testCases = append(testCases,
- testCase{genAll, allString, "[all]"},
- testCase{genAll, "a\xff" + allString + "\x00", "a[ff][all][00]"},
- testCase{genAll, "", ""},
- )
-
- // Test cases with empty old strings.
-
- blankToX1 := NewStringReplacer("", "X")
- blankToX2 := NewStringReplacer("", "X", "", "")
- blankHighPriority := NewStringReplacer("", "X", "o", "O")
- blankLowPriority := NewStringReplacer("o", "O", "", "X")
- blankNoOp1 := NewStringReplacer("", "")
- blankNoOp2 := NewStringReplacer("", "", "", "A")
- blankFoo := NewStringReplacer("", "X", "foobar", "R", "foobaz", "Z")
- testCases = append(testCases,
- testCase{blankToX1, "foo", "XfXoXoX"},
- testCase{blankToX1, "", "X"},
-
- testCase{blankToX2, "foo", "XfXoXoX"},
- testCase{blankToX2, "", "X"},
-
- testCase{blankHighPriority, "oo", "XOXOX"},
- testCase{blankHighPriority, "ii", "XiXiX"},
- testCase{blankHighPriority, "oiio", "XOXiXiXOX"},
- testCase{blankHighPriority, "iooi", "XiXOXOXiX"},
- testCase{blankHighPriority, "", "X"},
-
- testCase{blankLowPriority, "oo", "OOX"},
- testCase{blankLowPriority, "ii", "XiXiX"},
- testCase{blankLowPriority, "oiio", "OXiXiOX"},
- testCase{blankLowPriority, "iooi", "XiOOXiX"},
- testCase{blankLowPriority, "", "X"},
-
- testCase{blankNoOp1, "foo", "foo"},
- testCase{blankNoOp1, "", ""},
-
- testCase{blankNoOp2, "foo", "foo"},
- testCase{blankNoOp2, "", ""},
-
- testCase{blankFoo, "foobarfoobaz", "XRXZX"},
- testCase{blankFoo, "foobar-foobaz", "XRX-XZX"},
- testCase{blankFoo, "", "X"},
- )
-
- // single string replacer
-
- abcMatcher := NewStringReplacer("abc", "[match]")
-
- testCases = append(testCases,
- testCase{abcMatcher, "", ""},
- testCase{abcMatcher, "ab", "ab"},
- testCase{abcMatcher, "abc", "[match]"},
- testCase{abcMatcher, "abcd", "[match]d"},
- testCase{abcMatcher, "cabcabcdabca", "c[match][match]d[match]a"},
- )
-
- // Issue 6659 cases (more single string replacer)
-
- noHello := NewStringReplacer("Hello", "")
- testCases = append(testCases,
- testCase{noHello, "Hello", ""},
- testCase{noHello, "Hellox", "x"},
- testCase{noHello, "xHello", "x"},
- testCase{noHello, "xHellox", "xx"},
- )
-
- // No-arg test cases.
-
- nop := NewStringReplacer()
- testCases = append(testCases,
- testCase{nop, "abc", "abc"},
- testCase{nop, "", ""},
- )
-
- // Run the test cases.
-
- for i, tc := range testCases {
- if s := tc.r.Replace(tc.in); s != tc.out {
- t.Errorf("%d. strings.Replace(%q) = %q, want %q", i, tc.in, s, tc.out)
- }
- var buf bytes.Buffer
- n, err := tc.r.WriteString(&buf, tc.in)
- if err != nil {
- t.Errorf("%d. WriteString: %v", i, err)
- continue
- }
- got := buf.String()
- if got != tc.out {
- t.Errorf("%d. WriteString(%q) wrote %q, want %q", i, tc.in, got, tc.out)
- continue
- }
- if n != len(tc.out) {
- t.Errorf("%d. WriteString(%q) wrote correct string but reported %d bytes; want %d (%q)",
- i, tc.in, n, len(tc.out), tc.out)
- }
- }
-}
-
-type errWriter struct{}
-
-func (errWriter) Write(p []byte) (n int, err error) {
- return 0, fmt.Errorf("unwritable")
-}
-
-func BenchmarkGenericNoMatch(b *testing.B) {
- str := strings.Repeat("A", 100) + strings.Repeat("B", 100)
- generic := NewStringReplacer("a", "A", "b", "B", "12", "123") // varying lengths forces generic
- for i := 0; i < b.N; i++ {
- generic.Replace(str)
- }
-}
-
-func BenchmarkGenericMatch1(b *testing.B) {
- str := strings.Repeat("a", 100) + strings.Repeat("b", 100)
- generic := NewStringReplacer("a", "A", "b", "B", "12", "123")
- for i := 0; i < b.N; i++ {
- generic.Replace(str)
- }
-}
-
-func BenchmarkGenericMatch2(b *testing.B) {
- str := strings.Repeat("It&apos;s &lt;b&gt;HTML&lt;/b&gt;!", 100)
- for i := 0; i < b.N; i++ {
- htmlUnescaper.Replace(str)
- }
-}
-
-func benchmarkSingleString(b *testing.B, pattern, text string) {
- r := NewStringReplacer(pattern, "[match]")
- b.SetBytes(int64(len(text)))
- b.ResetTimer()
- for i := 0; i < b.N; i++ {
- r.Replace(text)
- }
-}
-
-func BenchmarkSingleMaxSkipping(b *testing.B) {
- benchmarkSingleString(b, strings.Repeat("b", 25), strings.Repeat("a", 10000))
-}
-
-func BenchmarkSingleLongSuffixFail(b *testing.B) {
- benchmarkSingleString(b, "b"+strings.Repeat("a", 500), strings.Repeat("a", 1002))
-}
-
-func BenchmarkSingleMatch(b *testing.B) {
- benchmarkSingleString(b, "abcdef", strings.Repeat("abcdefghijklmno", 1000))
-}
-
-func BenchmarkByteByteNoMatch(b *testing.B) {
- str := strings.Repeat("A", 100) + strings.Repeat("B", 100)
- for i := 0; i < b.N; i++ {
- capitalLetters.Replace(str)
- }
-}
-
-func BenchmarkByteByteMatch(b *testing.B) {
- str := strings.Repeat("a", 100) + strings.Repeat("b", 100)
- for i := 0; i < b.N; i++ {
- capitalLetters.Replace(str)
- }
-}
-
-func BenchmarkByteStringMatch(b *testing.B) {
- str := "<" + strings.Repeat("a", 99) + strings.Repeat("b", 99) + ">"
- for i := 0; i < b.N; i++ {
- htmlEscaper.Replace(str)
- }
-}
-
-func BenchmarkHTMLEscapeNew(b *testing.B) {
- str := "I <3 to escape HTML & other text too."
- for i := 0; i < b.N; i++ {
- htmlEscaper.Replace(str)
- }
-}
-
-func BenchmarkHTMLEscapeOld(b *testing.B) {
- str := "I <3 to escape HTML & other text too."
- for i := 0; i < b.N; i++ {
- oldHTMLEscape(str)
- }
-}
-
-func BenchmarkByteStringReplacerWriteString(b *testing.B) {
- str := strings.Repeat("I <3 to escape HTML & other text too.", 100)
- buf := new(bytes.Buffer)
- for i := 0; i < b.N; i++ {
- htmlEscaper.WriteString(buf, str)
- buf.Reset()
- }
-}
-
-func BenchmarkByteReplacerWriteString(b *testing.B) {
- str := strings.Repeat("abcdefghijklmnopqrstuvwxyz", 100)
- buf := new(bytes.Buffer)
- for i := 0; i < b.N; i++ {
- capitalLetters.WriteString(buf, str)
- buf.Reset()
- }
-}
-
-// BenchmarkByteByteReplaces compares byteByteImpl against multiple Replaces.
-func BenchmarkByteByteReplaces(b *testing.B) {
- str := strings.Repeat("a", 100) + strings.Repeat("b", 100)
- for i := 0; i < b.N; i++ {
- strings.Replace(strings.Replace(str, "a", "A", -1), "b", "B", -1)
- }
-}
diff --git a/vendor/github.com/client9/misspell/url_test.go b/vendor/github.com/client9/misspell/url_test.go
deleted file mode 100644
index 0cf9ce26..00000000
--- a/vendor/github.com/client9/misspell/url_test.go
+++ /dev/null
@@ -1,105 +0,0 @@
-package misspell
-
-import (
- "strings"
- "testing"
-)
-
-// Test suite partially from https://mathiasbynens.be/demo/url-regex
-//
-func TestStripURL(t *testing.T) {
- cases := []string{
- "HTTP://FOO.COM/BLAH_BLAH",
- "http://foo.com/blah_blah",
- "http://foo.com/blah_blah/",
- "http://foo.com/blah_blah_(wikipedia)",
- "http://foo.com/blah_blah_(wikipedia)_(again)",
- "http://www.example.com/wpstyle/?p=364",
- "https://www.example.com/foo/?bar=baz&inga=42&quux",
- "http://✪df.ws/123",
- "http://userid:password@example.com:8080",
- "http://userid:password@example.com:8080/",
- "http://userid@example.com",
- "http://userid@example.com/",
- "http://userid@example.com:8080",
- "http://userid@example.com:8080/",
- "http://userid:password@example.com",
- "http://userid:password@example.com/",
- "http://142.42.1.1/",
- "http://142.42.1.1:8080/",
- "http://➡.ws/䨹",
- "http://⌘.ws",
- "http://⌘.ws/",
- "http://foo.com/blah_(wikipedia)#cite-1",
- "http://foo.com/blah_(wikipedia)_blah#cite-1",
- "http://foo.com/unicode_(✪)_in_parens",
- "http://foo.com/(something)?after=parens",
- "http://☺.damowmow.com/a",
- "http://code.google.com/events/#&product=browser",
- "http://j.mp",
- "ftp://foo.bar/baz",
- "http://foo.bar/?q=Test%20URL-encoded%20stuff",
- "http://مثال.إختبار",
- "http://例子.测试",
- "http://उदाहरण.परीक्षा",
- "http://-.~_!$&'()*+,;=:%40:80%2f::::::@example.com",
- "http://1337.net",
- "http://a.b-c.de",
- "http://223.255.255.254",
- }
-
- for num, tt := range cases {
- got := strings.TrimSpace(StripURL(tt))
- if len(got) != 0 {
- t.Errorf("case %d: unable to match %q", num, tt)
- }
- }
-
- cases = []string{
- "http://",
- "http://.",
- "http://..",
- "http://../",
- "http://?",
- "http://??",
- "http://??/",
- "http://#",
- "http://##",
- "http://##/",
- "http://foo.bar?q=Spaces should be encoded",
- "//",
- "//a",
- "///a",
- "///",
- "http:///a",
- "foo.com",
- "rdar://1234",
- "h://test",
- "http:// shouldfail.com",
- ":// should fail",
- "http://foo.bar/foo(bar)baz quux",
- "ftps://foo.bar/",
- //"http://-error-.invalid/",
- //"http://a.b--c.de/",
- //"http://-a.b.co",
- //"http://a.b-.co",
- //"http://0.0.0.0",
- //"http://10.1.1.0",
- //"http://10.1.1.255",
- //"http://224.1.1.1",
- //"http://1.1.1.1.1",
- //"http://123.123.123",
- //"http://3628126748",
- "http://.www.foo.bar/",
- //"http://www.foo.bar./",
- "http://.www.foo.bar./",
- //"http://10.1.1.1",
- }
-
- for num, tt := range cases {
- got := strings.TrimSpace(StripURL(tt))
- if len(got) == 0 {
- t.Errorf("case %d: incorrect match %q", num, tt)
- }
- }
-}
diff --git a/vendor/github.com/client9/misspell/words_test.go b/vendor/github.com/client9/misspell/words_test.go
deleted file mode 100644
index 31fcf284..00000000
--- a/vendor/github.com/client9/misspell/words_test.go
+++ /dev/null
@@ -1,35 +0,0 @@
-package misspell
-
-import (
- "sort"
- "testing"
-)
-
-type sortByLen []string
-
-func (a sortByLen) Len() int { return len(a) }
-func (a sortByLen) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
-func (a sortByLen) Less(i, j int) bool {
- if len(a[i]) == len(a[j]) {
- // if words are same size, then use
- // normal alphabetical order
- return a[i] < a[j]
- }
- // INVERTED -- biggest words first
- return len(a[i]) > len(a[j])
-}
-
-func TestWordSort(t *testing.T) {
- if len(DictMain)%2 == 1 {
- t.Errorf("Dictionary is a not a multiple of 2")
- }
- words := make([]string, 0, len(DictMain)/2)
- for i := 0; i < len(DictMain); i += 2 {
- words = append(words, DictMain[i])
- }
- if !sort.IsSorted(sortByLen(words)) {
- t.Errorf("Words not sorted by len, by alpha!")
- t.Errorf("Words.go is autogenerated -- do not edit.")
- t.Errorf("File issue instead.")
- }
-}
diff --git a/vendor/gopkg.in/yaml.v2/.travis.yml b/vendor/gopkg.in/yaml.v2/.travis.yml
deleted file mode 100644
index 004172a2..00000000
--- a/vendor/gopkg.in/yaml.v2/.travis.yml
+++ /dev/null
@@ -1,9 +0,0 @@
-language: go
-
-go:
- - 1.4
- - 1.5
- - 1.6
- - tip
-
-go_import_path: gopkg.in/yaml.v2
diff --git a/vendor/gopkg.in/yaml.v2/NOTICE b/vendor/gopkg.in/yaml.v2/NOTICE
new file mode 100644
index 00000000..866d74a7
--- /dev/null
+++ b/vendor/gopkg.in/yaml.v2/NOTICE
@@ -0,0 +1,13 @@
+Copyright 2011-2016 Canonical Ltd.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
diff --git a/vendor/gopkg.in/yaml.v2/README.md b/vendor/gopkg.in/yaml.v2/README.md
deleted file mode 100644
index 7a512d67..00000000
--- a/vendor/gopkg.in/yaml.v2/README.md
+++ /dev/null
@@ -1,133 +0,0 @@
-# YAML support for the Go language
-
-Introduction
-------------
-
-The yaml package enables Go programs to comfortably encode and decode YAML
-values. It was developed within [Canonical](https://www.canonical.com) as
-part of the [juju](https://juju.ubuntu.com) project, and is based on a
-pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML)
-C library to parse and generate YAML data quickly and reliably.
-
-Compatibility
--------------
-
-The yaml package supports most of YAML 1.1 and 1.2, including support for
-anchors, tags, map merging, etc. Multi-document unmarshalling is not yet
-implemented, and base-60 floats from YAML 1.1 are purposefully not
-supported since they're a poor design and are gone in YAML 1.2.
-
-Installation and usage
-----------------------
-
-The import path for the package is *gopkg.in/yaml.v2*.
-
-To install it, run:
-
- go get gopkg.in/yaml.v2
-
-API documentation
------------------
-
-If opened in a browser, the import path itself leads to the API documentation:
-
- * [https://gopkg.in/yaml.v2](https://gopkg.in/yaml.v2)
-
-API stability
--------------
-
-The package API for yaml v2 will remain stable as described in [gopkg.in](https://gopkg.in).
-
-
-License
--------
-
-The yaml package is licensed under the Apache License 2.0. Please see the LICENSE file for details.
-
-
-Example
--------
-
-Some more examples can be found in the "examples" folder.
-
-```Go
-package main
-
-import (
- "fmt"
- "log"
-
- "gopkg.in/yaml.v2"
-)
-
-var data = `
-a: Easy!
-b:
- c: 2
- d: [3, 4]
-`
-
-type T struct {
- A string
- B struct {
- RenamedC int `yaml:"c"`
- D []int `yaml:",flow"`
- }
-}
-
-func main() {
- t := T{}
-
- err := yaml.Unmarshal([]byte(data), &t)
- if err != nil {
- log.Fatalf("error: %v", err)
- }
- fmt.Printf("--- t:\n%v\n\n", t)
-
- d, err := yaml.Marshal(&t)
- if err != nil {
- log.Fatalf("error: %v", err)
- }
- fmt.Printf("--- t dump:\n%s\n\n", string(d))
-
- m := make(map[interface{}]interface{})
-
- err = yaml.Unmarshal([]byte(data), &m)
- if err != nil {
- log.Fatalf("error: %v", err)
- }
- fmt.Printf("--- m:\n%v\n\n", m)
-
- d, err = yaml.Marshal(&m)
- if err != nil {
- log.Fatalf("error: %v", err)
- }
- fmt.Printf("--- m dump:\n%s\n\n", string(d))
-}
-```
-
-This example will generate the following output:
-
-```
---- t:
-{Easy! {2 [3 4]}}
-
---- t dump:
-a: Easy!
-b:
- c: 2
- d: [3, 4]
-
-
---- m:
-map[a:Easy! b:map[c:2 d:[3 4]]]
-
---- m dump:
-a: Easy!
-b:
- c: 2
- d:
- - 3
- - 4
-```
-
diff --git a/vendor/gopkg.in/yaml.v2/apic.go b/vendor/gopkg.in/yaml.v2/apic.go
index 95ec014e..1f7e87e6 100644
--- a/vendor/gopkg.in/yaml.v2/apic.go
+++ b/vendor/gopkg.in/yaml.v2/apic.go
@@ -2,7 +2,6 @@ package yaml
import (
"io"
- "os"
)
func yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) {
@@ -48,9 +47,9 @@ func yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err
return n, nil
}
-// File read handler.
-func yaml_file_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
- return parser.input_file.Read(buffer)
+// Reader read handler.
+func yaml_reader_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) {
+ return parser.input_reader.Read(buffer)
}
// Set a string input.
@@ -64,12 +63,12 @@ func yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) {
}
// Set a file input.
-func yaml_parser_set_input_file(parser *yaml_parser_t, file *os.File) {
+func yaml_parser_set_input_reader(parser *yaml_parser_t, r io.Reader) {
if parser.read_handler != nil {
panic("must set the input source only once")
}
- parser.read_handler = yaml_file_read_handler
- parser.input_file = file
+ parser.read_handler = yaml_reader_read_handler
+ parser.input_reader = r
}
// Set the source encoding.
@@ -81,14 +80,13 @@ func yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) {
}
// Create a new emitter object.
-func yaml_emitter_initialize(emitter *yaml_emitter_t) bool {
+func yaml_emitter_initialize(emitter *yaml_emitter_t) {
*emitter = yaml_emitter_t{
buffer: make([]byte, output_buffer_size),
raw_buffer: make([]byte, 0, output_raw_buffer_size),
states: make([]yaml_emitter_state_t, 0, initial_stack_size),
events: make([]yaml_event_t, 0, initial_queue_size),
}
- return true
}
// Destroy an emitter object.
@@ -102,9 +100,10 @@ func yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
return nil
}
-// File write handler.
-func yaml_file_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
- _, err := emitter.output_file.Write(buffer)
+// yaml_writer_write_handler uses emitter.output_writer to write the
+// emitted text.
+func yaml_writer_write_handler(emitter *yaml_emitter_t, buffer []byte) error {
+ _, err := emitter.output_writer.Write(buffer)
return err
}
@@ -118,12 +117,12 @@ func yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]by
}
// Set a file output.
-func yaml_emitter_set_output_file(emitter *yaml_emitter_t, file io.Writer) {
+func yaml_emitter_set_output_writer(emitter *yaml_emitter_t, w io.Writer) {
if emitter.write_handler != nil {
panic("must set the output target only once")
}
- emitter.write_handler = yaml_file_write_handler
- emitter.output_file = file
+ emitter.write_handler = yaml_writer_write_handler
+ emitter.output_writer = w
}
// Set the output encoding.
@@ -252,41 +251,41 @@ func yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) {
//
// Create STREAM-START.
-func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) bool {
+func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) {
*event = yaml_event_t{
typ: yaml_STREAM_START_EVENT,
encoding: encoding,
}
- return true
}
// Create STREAM-END.
-func yaml_stream_end_event_initialize(event *yaml_event_t) bool {
+func yaml_stream_end_event_initialize(event *yaml_event_t) {
*event = yaml_event_t{
typ: yaml_STREAM_END_EVENT,
}
- return true
}
// Create DOCUMENT-START.
-func yaml_document_start_event_initialize(event *yaml_event_t, version_directive *yaml_version_directive_t,
- tag_directives []yaml_tag_directive_t, implicit bool) bool {
+func yaml_document_start_event_initialize(
+ event *yaml_event_t,
+ version_directive *yaml_version_directive_t,
+ tag_directives []yaml_tag_directive_t,
+ implicit bool,
+) {
*event = yaml_event_t{
typ: yaml_DOCUMENT_START_EVENT,
version_directive: version_directive,
tag_directives: tag_directives,
implicit: implicit,
}
- return true
}
// Create DOCUMENT-END.
-func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) bool {
+func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) {
*event = yaml_event_t{
typ: yaml_DOCUMENT_END_EVENT,
implicit: implicit,
}
- return true
}
///*
@@ -348,7 +347,7 @@ func yaml_sequence_end_event_initialize(event *yaml_event_t) bool {
}
// Create MAPPING-START.
-func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) bool {
+func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) {
*event = yaml_event_t{
typ: yaml_MAPPING_START_EVENT,
anchor: anchor,
@@ -356,15 +355,13 @@ func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte
implicit: implicit,
style: yaml_style_t(style),
}
- return true
}
// Create MAPPING-END.
-func yaml_mapping_end_event_initialize(event *yaml_event_t) bool {
+func yaml_mapping_end_event_initialize(event *yaml_event_t) {
*event = yaml_event_t{
typ: yaml_MAPPING_END_EVENT,
}
- return true
}
// Destroy an event object.
@@ -471,7 +468,7 @@ func yaml_event_delete(event *yaml_event_t) {
// } context
// tag_directive *yaml_tag_directive_t
//
-// context.error = YAML_NO_ERROR // Eliminate a compliler warning.
+// context.error = YAML_NO_ERROR // Eliminate a compiler warning.
//
// assert(document) // Non-NULL document object is expected.
//
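The apic.go changes above swap the `*os.File`-specific read/write handlers for generic `io.Reader`/`io.Writer` plumbing. This is the internal support for the streaming `yaml.NewDecoder`/`yaml.NewEncoder` API that this version of the package appears to introduce (the exported wrappers live in yaml.go, outside these hunks), and it also supersedes the removed README's note that multi-document unmarshalling was not yet implemented. A minimal sketch, assuming those exported constructors:

```Go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"strings"

	"gopkg.in/yaml.v2"
)

func main() {
	// Decode a multi-document stream from any io.Reader.
	dec := yaml.NewDecoder(strings.NewReader("a: 1\n---\na: 2\n"))
	for {
		var doc map[string]int
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatalf("error: %v", err)
		}
		fmt.Println(doc["a"]) // 1, then 2
	}

	// Encode straight to an io.Writer instead of building a []byte.
	var buf bytes.Buffer
	enc := yaml.NewEncoder(&buf)
	if err := enc.Encode(map[string]int{"a": 3}); err != nil {
		log.Fatalf("error: %v", err)
	}
	enc.Close() // flushes the emitter
	fmt.Print(buf.String())
}
```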
diff --git a/vendor/gopkg.in/yaml.v2/decode.go b/vendor/gopkg.in/yaml.v2/decode.go
index db1f5f20..e4e56e28 100644
--- a/vendor/gopkg.in/yaml.v2/decode.go
+++ b/vendor/gopkg.in/yaml.v2/decode.go
@@ -4,6 +4,7 @@ import (
"encoding"
"encoding/base64"
"fmt"
+ "io"
"math"
"reflect"
"strconv"
@@ -22,19 +23,22 @@ type node struct {
kind int
line, column int
tag string
- value string
- implicit bool
- children []*node
- anchors map[string]*node
+ // For an alias node, alias holds the resolved alias.
+ alias *node
+ value string
+ implicit bool
+ children []*node
+ anchors map[string]*node
}
// ----------------------------------------------------------------------------
// Parser, produces a node tree out of a libyaml event stream.
type parser struct {
- parser yaml_parser_t
- event yaml_event_t
- doc *node
+ parser yaml_parser_t
+ event yaml_event_t
+ doc *node
+ doneInit bool
}
func newParser(b []byte) *parser {
@@ -42,21 +46,30 @@ func newParser(b []byte) *parser {
if !yaml_parser_initialize(&p.parser) {
panic("failed to initialize YAML emitter")
}
-
if len(b) == 0 {
b = []byte{'\n'}
}
-
yaml_parser_set_input_string(&p.parser, b)
+ return &p
+}
- p.skip()
- if p.event.typ != yaml_STREAM_START_EVENT {
- panic("expected stream start event, got " + strconv.Itoa(int(p.event.typ)))
+func newParserFromReader(r io.Reader) *parser {
+ p := parser{}
+ if !yaml_parser_initialize(&p.parser) {
+ panic("failed to initialize YAML emitter")
}
- p.skip()
+ yaml_parser_set_input_reader(&p.parser, r)
return &p
}
+func (p *parser) init() {
+ if p.doneInit {
+ return
+ }
+ p.expect(yaml_STREAM_START_EVENT)
+ p.doneInit = true
+}
+
func (p *parser) destroy() {
if p.event.typ != yaml_NO_EVENT {
yaml_event_delete(&p.event)
@@ -64,16 +77,35 @@ func (p *parser) destroy() {
yaml_parser_delete(&p.parser)
}
-func (p *parser) skip() {
- if p.event.typ != yaml_NO_EVENT {
- if p.event.typ == yaml_STREAM_END_EVENT {
- failf("attempted to go past the end of stream; corrupted value?")
+// expect consumes an event from the event stream and
+// checks that it's of the expected type.
+func (p *parser) expect(e yaml_event_type_t) {
+ if p.event.typ == yaml_NO_EVENT {
+ if !yaml_parser_parse(&p.parser, &p.event) {
+ p.fail()
}
- yaml_event_delete(&p.event)
+ }
+ if p.event.typ == yaml_STREAM_END_EVENT {
+ failf("attempted to go past the end of stream; corrupted value?")
+ }
+ if p.event.typ != e {
+ p.parser.problem = fmt.Sprintf("expected %s event but got %s", e, p.event.typ)
+ p.fail()
+ }
+ yaml_event_delete(&p.event)
+ p.event.typ = yaml_NO_EVENT
+}
+
+// peek peeks at the next event in the event stream,
+// puts the results into p.event and returns the event type.
+func (p *parser) peek() yaml_event_type_t {
+ if p.event.typ != yaml_NO_EVENT {
+ return p.event.typ
}
if !yaml_parser_parse(&p.parser, &p.event) {
p.fail()
}
+ return p.event.typ
}
func (p *parser) fail() {
@@ -81,6 +113,10 @@ func (p *parser) fail() {
var line int
if p.parser.problem_mark.line != 0 {
line = p.parser.problem_mark.line
+ // Scanner errors don't advance the line number before returning the error.
+ if p.parser.error == yaml_SCANNER_ERROR {
+ line++
+ }
} else if p.parser.context_mark.line != 0 {
line = p.parser.context_mark.line
}
@@ -103,7 +139,8 @@ func (p *parser) anchor(n *node, anchor []byte) {
}
func (p *parser) parse() *node {
- switch p.event.typ {
+ p.init()
+ switch p.peek() {
case yaml_SCALAR_EVENT:
return p.scalar()
case yaml_ALIAS_EVENT:
@@ -118,7 +155,7 @@ func (p *parser) parse() *node {
// Happens when attempting to decode an empty buffer.
return nil
default:
- panic("attempted to parse unknown event: " + strconv.Itoa(int(p.event.typ)))
+ panic("attempted to parse unknown event: " + p.event.typ.String())
}
}
@@ -134,19 +171,20 @@ func (p *parser) document() *node {
n := p.node(documentNode)
n.anchors = make(map[string]*node)
p.doc = n
- p.skip()
+ p.expect(yaml_DOCUMENT_START_EVENT)
n.children = append(n.children, p.parse())
- if p.event.typ != yaml_DOCUMENT_END_EVENT {
- panic("expected end of document event but got " + strconv.Itoa(int(p.event.typ)))
- }
- p.skip()
+ p.expect(yaml_DOCUMENT_END_EVENT)
return n
}
func (p *parser) alias() *node {
n := p.node(aliasNode)
n.value = string(p.event.anchor)
- p.skip()
+ n.alias = p.doc.anchors[n.value]
+ if n.alias == nil {
+ failf("unknown anchor '%s' referenced", n.value)
+ }
+ p.expect(yaml_ALIAS_EVENT)
return n
}
@@ -156,29 +194,29 @@ func (p *parser) scalar() *node {
n.tag = string(p.event.tag)
n.implicit = p.event.implicit
p.anchor(n, p.event.anchor)
- p.skip()
+ p.expect(yaml_SCALAR_EVENT)
return n
}
func (p *parser) sequence() *node {
n := p.node(sequenceNode)
p.anchor(n, p.event.anchor)
- p.skip()
- for p.event.typ != yaml_SEQUENCE_END_EVENT {
+ p.expect(yaml_SEQUENCE_START_EVENT)
+ for p.peek() != yaml_SEQUENCE_END_EVENT {
n.children = append(n.children, p.parse())
}
- p.skip()
+ p.expect(yaml_SEQUENCE_END_EVENT)
return n
}
func (p *parser) mapping() *node {
n := p.node(mappingNode)
p.anchor(n, p.event.anchor)
- p.skip()
- for p.event.typ != yaml_MAPPING_END_EVENT {
+ p.expect(yaml_MAPPING_START_EVENT)
+ for p.peek() != yaml_MAPPING_END_EVENT {
n.children = append(n.children, p.parse(), p.parse())
}
- p.skip()
+ p.expect(yaml_MAPPING_END_EVENT)
return n
}
@@ -187,7 +225,7 @@ func (p *parser) mapping() *node {
type decoder struct {
doc *node
- aliases map[string]bool
+ aliases map[*node]bool
mapType reflect.Type
terrors []string
strict bool
@@ -198,11 +236,13 @@ var (
durationType = reflect.TypeOf(time.Duration(0))
defaultMapType = reflect.TypeOf(map[interface{}]interface{}{})
ifaceType = defaultMapType.Elem()
+ timeType = reflect.TypeOf(time.Time{})
+ ptrTimeType = reflect.TypeOf(&time.Time{})
)
func newDecoder(strict bool) *decoder {
d := &decoder{mapType: defaultMapType, strict: strict}
- d.aliases = make(map[string]bool)
+ d.aliases = make(map[*node]bool)
return d
}
@@ -251,7 +291,7 @@ func (d *decoder) callUnmarshaler(n *node, u Unmarshaler) (good bool) {
//
// If n holds a null value, prepare returns before doing anything.
func (d *decoder) prepare(n *node, out reflect.Value) (newout reflect.Value, unmarshaled, good bool) {
- if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "" && n.implicit) {
+ if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "~" || n.value == "" && n.implicit) {
return out, false, false
}
again := true
@@ -308,16 +348,13 @@ func (d *decoder) document(n *node, out reflect.Value) (good bool) {
}
func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
- an, ok := d.doc.anchors[n.value]
- if !ok {
- failf("unknown anchor '%s' referenced", n.value)
- }
- if d.aliases[n.value] {
+ if d.aliases[n] {
+ // TODO this could actually be allowed in some circumstances.
failf("anchor '%s' value contains itself", n.value)
}
- d.aliases[n.value] = true
- good = d.unmarshal(an, out)
- delete(d.aliases, n.value)
+ d.aliases[n] = true
+ good = d.unmarshal(n.alias, out)
+ delete(d.aliases, n)
return good
}
@@ -329,7 +366,7 @@ func resetMap(out reflect.Value) {
}
}
-func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
+func (d *decoder) scalar(n *node, out reflect.Value) bool {
var tag string
var resolved interface{}
if n.tag == "" && !n.implicit {
@@ -353,9 +390,26 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
}
return true
}
- if s, ok := resolved.(string); ok && out.CanAddr() {
- if u, ok := out.Addr().Interface().(encoding.TextUnmarshaler); ok {
- err := u.UnmarshalText([]byte(s))
+ if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {
+ // We've resolved to exactly the type we want, so use that.
+ out.Set(resolvedv)
+ return true
+ }
+ // Perhaps we can use the value as a TextUnmarshaler to
+ // set its value.
+ if out.CanAddr() {
+ u, ok := out.Addr().Interface().(encoding.TextUnmarshaler)
+ if ok {
+ var text []byte
+ if tag == yaml_BINARY_TAG {
+ text = []byte(resolved.(string))
+ } else {
+ // We let any value be unmarshaled into TextUnmarshaler.
+ // That might be more lax than we'd like, but the
+ // TextUnmarshaler itself should bowl out any dubious values.
+ text = []byte(n.value)
+ }
+ err := u.UnmarshalText(text)
if err != nil {
fail(err)
}
@@ -366,46 +420,54 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
case reflect.String:
if tag == yaml_BINARY_TAG {
out.SetString(resolved.(string))
- good = true
- } else if resolved != nil {
+ return true
+ }
+ if resolved != nil {
out.SetString(n.value)
- good = true
+ return true
}
case reflect.Interface:
if resolved == nil {
out.Set(reflect.Zero(out.Type()))
+ } else if tag == yaml_TIMESTAMP_TAG {
+ // It looks like a timestamp but for backward compatibility
+ // reasons we set it as a string, so that code that unmarshals
+ // timestamp-like values into interface{} will continue to
+ // see a string and not a time.Time.
+ // TODO(v3) Drop this.
+ out.Set(reflect.ValueOf(n.value))
} else {
out.Set(reflect.ValueOf(resolved))
}
- good = true
+ return true
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
switch resolved := resolved.(type) {
case int:
if !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
- good = true
+ return true
}
case int64:
if !out.OverflowInt(resolved) {
out.SetInt(resolved)
- good = true
+ return true
}
case uint64:
if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
- good = true
+ return true
}
case float64:
if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) {
out.SetInt(int64(resolved))
- good = true
+ return true
}
case string:
if out.Type() == durationType {
d, err := time.ParseDuration(resolved)
if err == nil {
out.SetInt(int64(d))
- good = true
+ return true
}
}
}
@@ -414,44 +476,49 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
case int:
if resolved >= 0 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
- good = true
+ return true
}
case int64:
if resolved >= 0 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
- good = true
+ return true
}
case uint64:
if !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
- good = true
+ return true
}
case float64:
if resolved <= math.MaxUint64 && !out.OverflowUint(uint64(resolved)) {
out.SetUint(uint64(resolved))
- good = true
+ return true
}
}
case reflect.Bool:
switch resolved := resolved.(type) {
case bool:
out.SetBool(resolved)
- good = true
+ return true
}
case reflect.Float32, reflect.Float64:
switch resolved := resolved.(type) {
case int:
out.SetFloat(float64(resolved))
- good = true
+ return true
case int64:
out.SetFloat(float64(resolved))
- good = true
+ return true
case uint64:
out.SetFloat(float64(resolved))
- good = true
+ return true
case float64:
out.SetFloat(resolved)
- good = true
+ return true
+ }
+ case reflect.Struct:
+ if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() {
+ out.Set(resolvedv)
+ return true
}
case reflect.Ptr:
if out.Type().Elem() == reflect.TypeOf(resolved) {
@@ -459,13 +526,11 @@ func (d *decoder) scalar(n *node, out reflect.Value) (good bool) {
elem := reflect.New(out.Type().Elem())
elem.Elem().Set(reflect.ValueOf(resolved))
out.Set(elem)
- good = true
+ return true
}
}
- if !good {
- d.terror(n, tag, out)
- }
- return good
+ d.terror(n, tag, out)
+ return false
}
func settableValueOf(i interface{}) reflect.Value {
@@ -482,6 +547,10 @@ func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
switch out.Kind() {
case reflect.Slice:
out.Set(reflect.MakeSlice(out.Type(), l, l))
+ case reflect.Array:
+ if l != out.Len() {
+ failf("invalid array: want %d elements but got %d", out.Len(), l)
+ }
case reflect.Interface:
// No type hints. Will have to use a generic sequence.
iface = out
@@ -500,7 +569,9 @@ func (d *decoder) sequence(n *node, out reflect.Value) (good bool) {
j++
}
}
- out.Set(out.Slice(0, j))
+ if out.Kind() != reflect.Array {
+ out.Set(out.Slice(0, j))
+ }
if iface.IsValid() {
iface.Set(out)
}
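The `reflect.Array` cases above add fixed-size Go arrays as decode targets, with a hard length check in place of the slice path's resizing. A quick sketch of the resulting behaviour, assuming the error text shown in the hunk:

```Go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	// Fixed-size arrays are now valid unmarshal targets.
	var rgb [3]int
	if err := yaml.Unmarshal([]byte("[255, 128, 0]"), &rgb); err != nil {
		panic(err)
	}
	fmt.Println(rgb) // [255 128 0]

	// A length mismatch fails outright rather than truncating or padding.
	err := yaml.Unmarshal([]byte("[1, 2]"), &rgb)
	fmt.Println(err) // yaml: invalid array: want 3 elements but got 2
}
```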
@@ -561,7 +632,7 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
}
e := reflect.New(et).Elem()
if d.unmarshal(n.children[i+1], e) {
- out.SetMapIndex(k, e)
+ d.setMapIndex(n.children[i+1], out, k, e)
}
}
}
@@ -569,6 +640,14 @@ func (d *decoder) mapping(n *node, out reflect.Value) (good bool) {
return true
}
+func (d *decoder) setMapIndex(n *node, out, k, v reflect.Value) {
+ if d.strict && out.MapIndex(k) != zeroValue {
+ d.terrors = append(d.terrors, fmt.Sprintf("line %d: key %#v already set in map", n.line+1, k.Interface()))
+ return
+ }
+ out.SetMapIndex(k, v)
+}
+
func (d *decoder) mappingSlice(n *node, out reflect.Value) (good bool) {
outt := out.Type()
if outt.Elem() != mapItemType {
@@ -616,6 +695,10 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
elemType = inlineMap.Type().Elem()
}
+ var doneFields []bool
+ if d.strict {
+ doneFields = make([]bool, len(sinfo.FieldsList))
+ }
for i := 0; i < l; i += 2 {
ni := n.children[i]
if isMerge(ni) {
@@ -626,6 +709,13 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
continue
}
if info, ok := sinfo.FieldsMap[name.String()]; ok {
+ if d.strict {
+ if doneFields[info.Id] {
+ d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s already set in type %s", ni.line+1, name.String(), out.Type()))
+ continue
+ }
+ doneFields[info.Id] = true
+ }
var field reflect.Value
if info.Inline == nil {
field = out.Field(info.Num)
@@ -639,9 +729,9 @@ func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) {
}
value := reflect.New(elemType).Elem()
d.unmarshal(n.children[i+1], value)
- inlineMap.SetMapIndex(name, value)
+ d.setMapIndex(n.children[i+1], inlineMap, name, value)
} else if d.strict {
- d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s not found in struct %s", n.line+1, name.String(), out.Type()))
+ d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s not found in type %s", ni.line+1, name.String(), out.Type()))
}
}
return true
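Taken together, the decode.go changes give strict mode real duplicate detection: `setMapIndex` rejects keys already present in a map, and `doneFields` rejects struct fields set twice, with the error text now reading "in type" rather than "in struct" and pointing at the offending key's own line via `ni.line`. A small sketch of how this should surface through `yaml.UnmarshalStrict`:

```Go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	// Plain Unmarshal silently keeps the last duplicate...
	var loose map[string]int
	_ = yaml.Unmarshal([]byte("a: 1\na: 2\n"), &loose)
	fmt.Println(loose["a"]) // 2

	// ...while UnmarshalStrict now reports it as an unmarshal error.
	var strict map[string]int
	err := yaml.UnmarshalStrict([]byte("a: 1\na: 2\n"), &strict)
	fmt.Println(err)
	// yaml: unmarshal errors:
	//   line 2: key "a" already set in map
}
```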
diff --git a/vendor/gopkg.in/yaml.v2/decode_test.go b/vendor/gopkg.in/yaml.v2/decode_test.go
deleted file mode 100644
index 713b1ee9..00000000
--- a/vendor/gopkg.in/yaml.v2/decode_test.go
+++ /dev/null
@@ -1,1017 +0,0 @@
-package yaml_test
-
-import (
- "errors"
- . "gopkg.in/check.v1"
- "gopkg.in/yaml.v2"
- "math"
- "net"
- "reflect"
- "strings"
- "time"
-)
-
-var unmarshalIntTest = 123
-
-var unmarshalTests = []struct {
- data string
- value interface{}
-}{
- {
- "",
- &struct{}{},
- }, {
- "{}", &struct{}{},
- }, {
- "v: hi",
- map[string]string{"v": "hi"},
- }, {
- "v: hi", map[string]interface{}{"v": "hi"},
- }, {
- "v: true",
- map[string]string{"v": "true"},
- }, {
- "v: true",
- map[string]interface{}{"v": true},
- }, {
- "v: 10",
- map[string]interface{}{"v": 10},
- }, {
- "v: 0b10",
- map[string]interface{}{"v": 2},
- }, {
- "v: 0xA",
- map[string]interface{}{"v": 10},
- }, {
- "v: 4294967296",
- map[string]int64{"v": 4294967296},
- }, {
- "v: 0.1",
- map[string]interface{}{"v": 0.1},
- }, {
- "v: .1",
- map[string]interface{}{"v": 0.1},
- }, {
- "v: .Inf",
- map[string]interface{}{"v": math.Inf(+1)},
- }, {
- "v: -.Inf",
- map[string]interface{}{"v": math.Inf(-1)},
- }, {
- "v: -10",
- map[string]interface{}{"v": -10},
- }, {
- "v: -.1",
- map[string]interface{}{"v": -0.1},
- },
-
- // Simple values.
- {
- "123",
- &unmarshalIntTest,
- },
-
- // Floats from spec
- {
- "canonical: 6.8523e+5",
- map[string]interface{}{"canonical": 6.8523e+5},
- }, {
- "expo: 685.230_15e+03",
- map[string]interface{}{"expo": 685.23015e+03},
- }, {
- "fixed: 685_230.15",
- map[string]interface{}{"fixed": 685230.15},
- }, {
- "neginf: -.inf",
- map[string]interface{}{"neginf": math.Inf(-1)},
- }, {
- "fixed: 685_230.15",
- map[string]float64{"fixed": 685230.15},
- },
- //{"sexa: 190:20:30.15", map[string]interface{}{"sexa": 0}}, // Unsupported
- //{"notanum: .NaN", map[string]interface{}{"notanum": math.NaN()}}, // Equality of NaN fails.
-
- // Bools from spec
- {
- "canonical: y",
- map[string]interface{}{"canonical": true},
- }, {
- "answer: NO",
- map[string]interface{}{"answer": false},
- }, {
- "logical: True",
- map[string]interface{}{"logical": true},
- }, {
- "option: on",
- map[string]interface{}{"option": true},
- }, {
- "option: on",
- map[string]bool{"option": true},
- },
- // Ints from spec
- {
- "canonical: 685230",
- map[string]interface{}{"canonical": 685230},
- }, {
- "decimal: +685_230",
- map[string]interface{}{"decimal": 685230},
- }, {
- "octal: 02472256",
- map[string]interface{}{"octal": 685230},
- }, {
- "hexa: 0x_0A_74_AE",
- map[string]interface{}{"hexa": 685230},
- }, {
- "bin: 0b1010_0111_0100_1010_1110",
- map[string]interface{}{"bin": 685230},
- }, {
- "bin: -0b101010",
- map[string]interface{}{"bin": -42},
- }, {
- "decimal: +685_230",
- map[string]int{"decimal": 685230},
- },
-
- //{"sexa: 190:20:30", map[string]interface{}{"sexa": 0}}, // Unsupported
-
- // Nulls from spec
- {
- "empty:",
- map[string]interface{}{"empty": nil},
- }, {
- "canonical: ~",
- map[string]interface{}{"canonical": nil},
- }, {
- "english: null",
- map[string]interface{}{"english": nil},
- }, {
- "~: null key",
- map[interface{}]string{nil: "null key"},
- }, {
- "empty:",
- map[string]*bool{"empty": nil},
- },
-
- // Flow sequence
- {
- "seq: [A,B]",
- map[string]interface{}{"seq": []interface{}{"A", "B"}},
- }, {
- "seq: [A,B,C,]",
- map[string][]string{"seq": []string{"A", "B", "C"}},
- }, {
- "seq: [A,1,C]",
- map[string][]string{"seq": []string{"A", "1", "C"}},
- }, {
- "seq: [A,1,C]",
- map[string][]int{"seq": []int{1}},
- }, {
- "seq: [A,1,C]",
- map[string]interface{}{"seq": []interface{}{"A", 1, "C"}},
- },
- // Block sequence
- {
- "seq:\n - A\n - B",
- map[string]interface{}{"seq": []interface{}{"A", "B"}},
- }, {
- "seq:\n - A\n - B\n - C",
- map[string][]string{"seq": []string{"A", "B", "C"}},
- }, {
- "seq:\n - A\n - 1\n - C",
- map[string][]string{"seq": []string{"A", "1", "C"}},
- }, {
- "seq:\n - A\n - 1\n - C",
- map[string][]int{"seq": []int{1}},
- }, {
- "seq:\n - A\n - 1\n - C",
- map[string]interface{}{"seq": []interface{}{"A", 1, "C"}},
- },
-
- // Literal block scalar
- {
- "scalar: | # Comment\n\n literal\n\n \ttext\n\n",
- map[string]string{"scalar": "\nliteral\n\n\ttext\n"},
- },
-
- // Folded block scalar
- {
- "scalar: > # Comment\n\n folded\n line\n \n next\n line\n * one\n * two\n\n last\n line\n\n",
- map[string]string{"scalar": "\nfolded line\nnext line\n * one\n * two\n\nlast line\n"},
- },
-
- // Map inside interface with no type hints.
- {
- "a: {b: c}",
- map[interface{}]interface{}{"a": map[interface{}]interface{}{"b": "c"}},
- },
-
- // Structs and type conversions.
- {
- "hello: world",
- &struct{ Hello string }{"world"},
- }, {
- "a: {b: c}",
- &struct{ A struct{ B string } }{struct{ B string }{"c"}},
- }, {
- "a: {b: c}",
- &struct{ A *struct{ B string } }{&struct{ B string }{"c"}},
- }, {
- "a: {b: c}",
- &struct{ A map[string]string }{map[string]string{"b": "c"}},
- }, {
- "a: {b: c}",
- &struct{ A *map[string]string }{&map[string]string{"b": "c"}},
- }, {
- "a:",
- &struct{ A map[string]string }{},
- }, {
- "a: 1",
- &struct{ A int }{1},
- }, {
- "a: 1",
- &struct{ A float64 }{1},
- }, {
- "a: 1.0",
- &struct{ A int }{1},
- }, {
- "a: 1.0",
- &struct{ A uint }{1},
- }, {
- "a: [1, 2]",
- &struct{ A []int }{[]int{1, 2}},
- }, {
- "a: 1",
- &struct{ B int }{0},
- }, {
- "a: 1",
- &struct {
- B int "a"
- }{1},
- }, {
- "a: y",
- &struct{ A bool }{true},
- },
-
- // Some cross type conversions
- {
- "v: 42",
- map[string]uint{"v": 42},
- }, {
- "v: -42",
- map[string]uint{},
- }, {
- "v: 4294967296",
- map[string]uint64{"v": 4294967296},
- }, {
- "v: -4294967296",
- map[string]uint64{},
- },
-
- // int
- {
- "int_max: 2147483647",
- map[string]int{"int_max": math.MaxInt32},
- },
- {
- "int_min: -2147483648",
- map[string]int{"int_min": math.MinInt32},
- },
- {
- "int_overflow: 9223372036854775808", // math.MaxInt64 + 1
- map[string]int{},
- },
-
- // int64
- {
- "int64_max: 9223372036854775807",
- map[string]int64{"int64_max": math.MaxInt64},
- },
- {
- "int64_max_base2: 0b111111111111111111111111111111111111111111111111111111111111111",
- map[string]int64{"int64_max_base2": math.MaxInt64},
- },
- {
- "int64_min: -9223372036854775808",
- map[string]int64{"int64_min": math.MinInt64},
- },
- {
- "int64_neg_base2: -0b111111111111111111111111111111111111111111111111111111111111111",
- map[string]int64{"int64_neg_base2": -math.MaxInt64},
- },
- {
- "int64_overflow: 9223372036854775808", // math.MaxInt64 + 1
- map[string]int64{},
- },
-
- // uint
- {
- "uint_min: 0",
- map[string]uint{"uint_min": 0},
- },
- {
- "uint_max: 4294967295",
- map[string]uint{"uint_max": math.MaxUint32},
- },
- {
- "uint_underflow: -1",
- map[string]uint{},
- },
-
- // uint64
- {
- "uint64_min: 0",
- map[string]uint{"uint64_min": 0},
- },
- {
- "uint64_max: 18446744073709551615",
- map[string]uint64{"uint64_max": math.MaxUint64},
- },
- {
- "uint64_max_base2: 0b1111111111111111111111111111111111111111111111111111111111111111",
- map[string]uint64{"uint64_max_base2": math.MaxUint64},
- },
- {
- "uint64_maxint64: 9223372036854775807",
- map[string]uint64{"uint64_maxint64": math.MaxInt64},
- },
- {
- "uint64_underflow: -1",
- map[string]uint64{},
- },
-
- // float32
- {
- "float32_max: 3.40282346638528859811704183484516925440e+38",
- map[string]float32{"float32_max": math.MaxFloat32},
- },
- {
- "float32_nonzero: 1.401298464324817070923729583289916131280e-45",
- map[string]float32{"float32_nonzero": math.SmallestNonzeroFloat32},
- },
- {
- "float32_maxuint64: 18446744073709551615",
- map[string]float32{"float32_maxuint64": float32(math.MaxUint64)},
- },
- {
- "float32_maxuint64+1: 18446744073709551616",
- map[string]float32{"float32_maxuint64+1": float32(math.MaxUint64 + 1)},
- },
-
- // float64
- {
- "float64_max: 1.797693134862315708145274237317043567981e+308",
- map[string]float64{"float64_max": math.MaxFloat64},
- },
- {
- "float64_nonzero: 4.940656458412465441765687928682213723651e-324",
- map[string]float64{"float64_nonzero": math.SmallestNonzeroFloat64},
- },
- {
- "float64_maxuint64: 18446744073709551615",
- map[string]float64{"float64_maxuint64": float64(math.MaxUint64)},
- },
- {
- "float64_maxuint64+1: 18446744073709551616",
- map[string]float64{"float64_maxuint64+1": float64(math.MaxUint64 + 1)},
- },
-
- // Overflow cases.
- {
- "v: 4294967297",
- map[string]int32{},
- }, {
- "v: 128",
- map[string]int8{},
- },
-
- // Quoted values.
- {
- "'1': '\"2\"'",
- map[interface{}]interface{}{"1": "\"2\""},
- }, {
- "v:\n- A\n- 'B\n\n C'\n",
- map[string][]string{"v": []string{"A", "B\nC"}},
- },
-
- // Explicit tags.
- {
- "v: !!float '1.1'",
- map[string]interface{}{"v": 1.1},
- }, {
- "v: !!null ''",
- map[string]interface{}{"v": nil},
- }, {
- "%TAG !y! tag:yaml.org,2002:\n---\nv: !y!int '1'",
- map[string]interface{}{"v": 1},
- },
-
- // Non-specific tag (Issue #75)
- {
- "v: ! test",
- map[string]interface{}{"v": "test"},
- },
-
- // Anchors and aliases.
- {
- "a: &x 1\nb: &y 2\nc: *x\nd: *y\n",
- &struct{ A, B, C, D int }{1, 2, 1, 2},
- }, {
- "a: &a {c: 1}\nb: *a",
- &struct {
- A, B struct {
- C int
- }
- }{struct{ C int }{1}, struct{ C int }{1}},
- }, {
- "a: &a [1, 2]\nb: *a",
- &struct{ B []int }{[]int{1, 2}},
- }, {
- "b: *a\na: &a {c: 1}",
- &struct {
- A, B struct {
- C int
- }
- }{struct{ C int }{1}, struct{ C int }{1}},
- },
-
- // Bug #1133337
- {
- "foo: ''",
- map[string]*string{"foo": new(string)},
- }, {
- "foo: null",
- map[string]string{"foo": ""},
- }, {
- "foo: null",
- map[string]interface{}{"foo": nil},
- },
-
- // Ignored field
- {
- "a: 1\nb: 2\n",
- &struct {
- A int
- B int "-"
- }{1, 0},
- },
-
- // Bug #1191981
- {
- "" +
- "%YAML 1.1\n" +
- "--- !!str\n" +
- `"Generic line break (no glyph)\n\` + "\n" +
- ` Generic line break (glyphed)\n\` + "\n" +
- ` Line separator\u2028\` + "\n" +
- ` Paragraph separator\u2029"` + "\n",
- "" +
- "Generic line break (no glyph)\n" +
- "Generic line break (glyphed)\n" +
- "Line separator\u2028Paragraph separator\u2029",
- },
-
- // Struct inlining
- {
- "a: 1\nb: 2\nc: 3\n",
- &struct {
- A int
- C inlineB `yaml:",inline"`
- }{1, inlineB{2, inlineC{3}}},
- },
-
- // Map inlining
- {
- "a: 1\nb: 2\nc: 3\n",
- &struct {
- A int
- C map[string]int `yaml:",inline"`
- }{1, map[string]int{"b": 2, "c": 3}},
- },
-
- // bug 1243827
- {
- "a: -b_c",
- map[string]interface{}{"a": "-b_c"},
- },
- {
- "a: +b_c",
- map[string]interface{}{"a": "+b_c"},
- },
- {
- "a: 50cent_of_dollar",
- map[string]interface{}{"a": "50cent_of_dollar"},
- },
-
- // Duration
- {
- "a: 3s",
- map[string]time.Duration{"a": 3 * time.Second},
- },
-
- // Issue #24.
- {
- "a: <foo>",
- map[string]string{"a": "<foo>"},
- },
-
- // Base 60 floats are obsolete and unsupported.
- {
- "a: 1:1\n",
- map[string]string{"a": "1:1"},
- },
-
- // Binary data.
- {
- "a: !!binary gIGC\n",
- map[string]string{"a": "\x80\x81\x82"},
- }, {
- "a: !!binary |\n " + strings.Repeat("kJCQ", 17) + "kJ\n CQ\n",
- map[string]string{"a": strings.Repeat("\x90", 54)},
- }, {
- "a: !!binary |\n " + strings.Repeat("A", 70) + "\n ==\n",
- map[string]string{"a": strings.Repeat("\x00", 52)},
- },
-
- // Ordered maps.
- {
- "{b: 2, a: 1, d: 4, c: 3, sub: {e: 5}}",
- &yaml.MapSlice{{"b", 2}, {"a", 1}, {"d", 4}, {"c", 3}, {"sub", yaml.MapSlice{{"e", 5}}}},
- },
-
- // Issue #39.
- {
- "a:\n b:\n c: d\n",
- map[string]struct{ B interface{} }{"a": {map[interface{}]interface{}{"c": "d"}}},
- },
-
- // Custom map type.
- {
- "a: {b: c}",
- M{"a": M{"b": "c"}},
- },
-
- // Support encoding.TextUnmarshaler.
- {
- "a: 1.2.3.4\n",
- map[string]net.IP{"a": net.IPv4(1, 2, 3, 4)},
- },
- {
- "a: 2015-02-24T18:19:39Z\n",
- map[string]time.Time{"a": time.Unix(1424801979, 0).In(time.UTC)},
- },
-
- // Encode empty lists as zero-length slices.
- {
- "a: []",
- &struct{ A []int }{[]int{}},
- },
-
- // UTF-16-LE
- {
- "\xff\xfe\xf1\x00o\x00\xf1\x00o\x00:\x00 \x00v\x00e\x00r\x00y\x00 \x00y\x00e\x00s\x00\n\x00",
- M{"ñoño": "very yes"},
- },
- // UTF-16-LE with surrogate.
- {
- "\xff\xfe\xf1\x00o\x00\xf1\x00o\x00:\x00 \x00v\x00e\x00r\x00y\x00 \x00y\x00e\x00s\x00 \x00=\xd8\xd4\xdf\n\x00",
- M{"ñoño": "very yes 🟔"},
- },
-
- // UTF-16-BE
- {
- "\xfe\xff\x00\xf1\x00o\x00\xf1\x00o\x00:\x00 \x00v\x00e\x00r\x00y\x00 \x00y\x00e\x00s\x00\n",
- M{"ñoño": "very yes"},
- },
- // UTF-16-BE with surrogate.
- {
- "\xfe\xff\x00\xf1\x00o\x00\xf1\x00o\x00:\x00 \x00v\x00e\x00r\x00y\x00 \x00y\x00e\x00s\x00 \xd8=\xdf\xd4\x00\n",
- M{"ñoño": "very yes 🟔"},
- },
-
- // YAML Float regex shouldn't match this
- {
- "a: 123456e1\n",
- M{"a": "123456e1"},
- }, {
- "a: 123456E1\n",
- M{"a": "123456E1"},
- },
-}
-
-type M map[interface{}]interface{}
-
-type inlineB struct {
- B int
- inlineC `yaml:",inline"`
-}
-
-type inlineC struct {
- C int
-}
-
-func (s *S) TestUnmarshal(c *C) {
- for i, item := range unmarshalTests {
- c.Logf("test %d: %q", i, item.data)
- t := reflect.ValueOf(item.value).Type()
- var value interface{}
- switch t.Kind() {
- case reflect.Map:
- value = reflect.MakeMap(t).Interface()
- case reflect.String:
- value = reflect.New(t).Interface()
- case reflect.Ptr:
- value = reflect.New(t.Elem()).Interface()
- default:
- c.Fatalf("missing case for %s", t)
- }
- err := yaml.Unmarshal([]byte(item.data), value)
- if _, ok := err.(*yaml.TypeError); !ok {
- c.Assert(err, IsNil)
- }
- if t.Kind() == reflect.String {
- c.Assert(*value.(*string), Equals, item.value)
- } else {
- c.Assert(value, DeepEquals, item.value)
- }
- }
-}
-
-func (s *S) TestUnmarshalNaN(c *C) {
- value := map[string]interface{}{}
- err := yaml.Unmarshal([]byte("notanum: .NaN"), &value)
- c.Assert(err, IsNil)
- c.Assert(math.IsNaN(value["notanum"].(float64)), Equals, true)
-}
-
-var unmarshalErrorTests = []struct {
- data, error string
-}{
- {"v: !!float 'error'", "yaml: cannot decode !!str `error` as a !!float"},
- {"v: [A,", "yaml: line 1: did not find expected node content"},
- {"v:\n- [A,", "yaml: line 2: did not find expected node content"},
- {"a: *b\n", "yaml: unknown anchor 'b' referenced"},
- {"a: &a\n b: *a\n", "yaml: anchor 'a' value contains itself"},
- {"value: -", "yaml: block sequence entries are not allowed in this context"},
- {"a: !!binary ==", "yaml: !!binary value contains invalid base64 data"},
- {"{[.]}", `yaml: invalid map key: \[\]interface \{\}\{"\."\}`},
- {"{{.}}", `yaml: invalid map key: map\[interface\ \{\}\]interface \{\}\{".":interface \{\}\(nil\)\}`},
- {"%TAG !%79! tag:yaml.org,2002:\n---\nv: !%79!int '1'", "yaml: did not find expected whitespace"},
-}
-
-func (s *S) TestUnmarshalErrors(c *C) {
- for _, item := range unmarshalErrorTests {
- var value interface{}
- err := yaml.Unmarshal([]byte(item.data), &value)
- c.Assert(err, ErrorMatches, item.error, Commentf("Partial unmarshal: %#v", value))
- }
-}
-
-var unmarshalerTests = []struct {
- data, tag string
- value interface{}
-}{
- {"_: {hi: there}", "!!map", map[interface{}]interface{}{"hi": "there"}},
- {"_: [1,A]", "!!seq", []interface{}{1, "A"}},
- {"_: 10", "!!int", 10},
- {"_: null", "!!null", nil},
- {`_: BAR!`, "!!str", "BAR!"},
- {`_: "BAR!"`, "!!str", "BAR!"},
- {"_: !!foo 'BAR!'", "!!foo", "BAR!"},
- {`_: ""`, "!!str", ""},
-}
-
-var unmarshalerResult = map[int]error{}
-
-type unmarshalerType struct {
- value interface{}
-}
-
-func (o *unmarshalerType) UnmarshalYAML(unmarshal func(v interface{}) error) error {
- if err := unmarshal(&o.value); err != nil {
- return err
- }
- if i, ok := o.value.(int); ok {
- if result, ok := unmarshalerResult[i]; ok {
- return result
- }
- }
- return nil
-}
-
-type unmarshalerPointer struct {
- Field *unmarshalerType "_"
-}
-
-type unmarshalerValue struct {
- Field unmarshalerType "_"
-}
-
-func (s *S) TestUnmarshalerPointerField(c *C) {
- for _, item := range unmarshalerTests {
- obj := &unmarshalerPointer{}
- err := yaml.Unmarshal([]byte(item.data), obj)
- c.Assert(err, IsNil)
- if item.value == nil {
- c.Assert(obj.Field, IsNil)
- } else {
- c.Assert(obj.Field, NotNil, Commentf("Pointer not initialized (%#v)", item.value))
- c.Assert(obj.Field.value, DeepEquals, item.value)
- }
- }
-}
-
-func (s *S) TestUnmarshalerValueField(c *C) {
- for _, item := range unmarshalerTests {
- obj := &unmarshalerValue{}
- err := yaml.Unmarshal([]byte(item.data), obj)
- c.Assert(err, IsNil)
- c.Assert(obj.Field, NotNil, Commentf("Pointer not initialized (%#v)", item.value))
- c.Assert(obj.Field.value, DeepEquals, item.value)
- }
-}
-
-func (s *S) TestUnmarshalerWholeDocument(c *C) {
- obj := &unmarshalerType{}
- err := yaml.Unmarshal([]byte(unmarshalerTests[0].data), obj)
- c.Assert(err, IsNil)
- value, ok := obj.value.(map[interface{}]interface{})
- c.Assert(ok, Equals, true, Commentf("value: %#v", obj.value))
- c.Assert(value["_"], DeepEquals, unmarshalerTests[0].value)
-}
-
-func (s *S) TestUnmarshalerTypeError(c *C) {
- unmarshalerResult[2] = &yaml.TypeError{[]string{"foo"}}
- unmarshalerResult[4] = &yaml.TypeError{[]string{"bar"}}
- defer func() {
- delete(unmarshalerResult, 2)
- delete(unmarshalerResult, 4)
- }()
-
- type T struct {
- Before int
- After int
- M map[string]*unmarshalerType
- }
- var v T
- data := `{before: A, m: {abc: 1, def: 2, ghi: 3, jkl: 4}, after: B}`
- err := yaml.Unmarshal([]byte(data), &v)
- c.Assert(err, ErrorMatches, ""+
- "yaml: unmarshal errors:\n"+
- " line 1: cannot unmarshal !!str `A` into int\n"+
- " foo\n"+
- " bar\n"+
- " line 1: cannot unmarshal !!str `B` into int")
- c.Assert(v.M["abc"], NotNil)
- c.Assert(v.M["def"], IsNil)
- c.Assert(v.M["ghi"], NotNil)
- c.Assert(v.M["jkl"], IsNil)
-
- c.Assert(v.M["abc"].value, Equals, 1)
- c.Assert(v.M["ghi"].value, Equals, 3)
-}
-
-type proxyTypeError struct{}
-
-func (v *proxyTypeError) UnmarshalYAML(unmarshal func(interface{}) error) error {
- var s string
- var a int32
- var b int64
- if err := unmarshal(&s); err != nil {
- panic(err)
- }
- if s == "a" {
- if err := unmarshal(&b); err == nil {
- panic("should have failed")
- }
- return unmarshal(&a)
- }
- if err := unmarshal(&a); err == nil {
- panic("should have failed")
- }
- return unmarshal(&b)
-}
-
-func (s *S) TestUnmarshalerTypeErrorProxying(c *C) {
- type T struct {
- Before int
- After int
- M map[string]*proxyTypeError
- }
- var v T
- data := `{before: A, m: {abc: a, def: b}, after: B}`
- err := yaml.Unmarshal([]byte(data), &v)
- c.Assert(err, ErrorMatches, ""+
- "yaml: unmarshal errors:\n"+
- " line 1: cannot unmarshal !!str `A` into int\n"+
- " line 1: cannot unmarshal !!str `a` into int32\n"+
- " line 1: cannot unmarshal !!str `b` into int64\n"+
- " line 1: cannot unmarshal !!str `B` into int")
-}
-
-type failingUnmarshaler struct{}
-
-var failingErr = errors.New("failingErr")
-
-func (ft *failingUnmarshaler) UnmarshalYAML(unmarshal func(interface{}) error) error {
- return failingErr
-}
-
-func (s *S) TestUnmarshalerError(c *C) {
- err := yaml.Unmarshal([]byte("a: b"), &failingUnmarshaler{})
- c.Assert(err, Equals, failingErr)
-}
-
-type sliceUnmarshaler []int
-
-func (su *sliceUnmarshaler) UnmarshalYAML(unmarshal func(interface{}) error) error {
- var slice []int
- err := unmarshal(&slice)
- if err == nil {
- *su = slice
- return nil
- }
-
- var intVal int
- err = unmarshal(&intVal)
- if err == nil {
- *su = []int{intVal}
- return nil
- }
-
- return err
-}
-
-func (s *S) TestUnmarshalerRetry(c *C) {
- var su sliceUnmarshaler
- err := yaml.Unmarshal([]byte("[1, 2, 3]"), &su)
- c.Assert(err, IsNil)
- c.Assert(su, DeepEquals, sliceUnmarshaler([]int{1, 2, 3}))
-
- err = yaml.Unmarshal([]byte("1"), &su)
- c.Assert(err, IsNil)
- c.Assert(su, DeepEquals, sliceUnmarshaler([]int{1}))
-}
-
-// From http://yaml.org/type/merge.html
-var mergeTests = `
-anchors:
- list:
- - &CENTER { "x": 1, "y": 2 }
- - &LEFT { "x": 0, "y": 2 }
- - &BIG { "r": 10 }
- - &SMALL { "r": 1 }
-
-# All the following maps are equal:
-
-plain:
- # Explicit keys
- "x": 1
- "y": 2
- "r": 10
- label: center/big
-
-mergeOne:
- # Merge one map
- << : *CENTER
- "r": 10
- label: center/big
-
-mergeMultiple:
- # Merge multiple maps
- << : [ *CENTER, *BIG ]
- label: center/big
-
-override:
- # Override
- << : [ *BIG, *LEFT, *SMALL ]
- "x": 1
- label: center/big
-
-shortTag:
- # Explicit short merge tag
- !!merge "<<" : [ *CENTER, *BIG ]
- label: center/big
-
-longTag:
- # Explicit merge long tag
- !<tag:yaml.org,2002:merge> "<<" : [ *CENTER, *BIG ]
- label: center/big
-
-inlineMap:
- # Inlined map
- << : {"x": 1, "y": 2, "r": 10}
- label: center/big
-
-inlineSequenceMap:
- # Inlined map in sequence
- << : [ *CENTER, {"r": 10} ]
- label: center/big
-`
-
-func (s *S) TestMerge(c *C) {
- var want = map[interface{}]interface{}{
- "x": 1,
- "y": 2,
- "r": 10,
- "label": "center/big",
- }
-
- var m map[interface{}]interface{}
- err := yaml.Unmarshal([]byte(mergeTests), &m)
- c.Assert(err, IsNil)
- for name, test := range m {
- if name == "anchors" {
- continue
- }
- c.Assert(test, DeepEquals, want, Commentf("test %q failed", name))
- }
-}
-
-func (s *S) TestMergeStruct(c *C) {
- type Data struct {
- X, Y, R int
- Label string
- }
- want := Data{1, 2, 10, "center/big"}
-
- var m map[string]Data
- err := yaml.Unmarshal([]byte(mergeTests), &m)
- c.Assert(err, IsNil)
- for name, test := range m {
- if name == "anchors" {
- continue
- }
- c.Assert(test, Equals, want, Commentf("test %q failed", name))
- }
-}
-
-var unmarshalNullTests = []func() interface{}{
- func() interface{} { var v interface{}; v = "v"; return &v },
- func() interface{} { var s = "s"; return &s },
- func() interface{} { var s = "s"; sptr := &s; return &sptr },
- func() interface{} { var i = 1; return &i },
- func() interface{} { var i = 1; iptr := &i; return &iptr },
- func() interface{} { m := map[string]int{"s": 1}; return &m },
- func() interface{} { m := map[string]int{"s": 1}; return m },
-}
-
-func (s *S) TestUnmarshalNull(c *C) {
- for _, test := range unmarshalNullTests {
- item := test()
- zero := reflect.Zero(reflect.TypeOf(item).Elem()).Interface()
- err := yaml.Unmarshal([]byte("null"), item)
- c.Assert(err, IsNil)
- if reflect.TypeOf(item).Kind() == reflect.Map {
- c.Assert(reflect.ValueOf(item).Interface(), DeepEquals, reflect.MakeMap(reflect.TypeOf(item)).Interface())
- } else {
- c.Assert(reflect.ValueOf(item).Elem().Interface(), DeepEquals, zero)
- }
- }
-}
-
-func (s *S) TestUnmarshalSliceOnPreset(c *C) {
- // Issue #48.
- v := struct{ A []int }{[]int{1}}
- yaml.Unmarshal([]byte("a: [2]"), &v)
- c.Assert(v.A, DeepEquals, []int{2})
-}
-
-func (s *S) TestUnmarshalStrict(c *C) {
- v := struct{ A, B int }{}
-
- err := yaml.UnmarshalStrict([]byte("a: 1\nb: 2"), &v)
- c.Check(err, IsNil)
- err = yaml.Unmarshal([]byte("a: 1\nb: 2\nc: 3"), &v)
- c.Check(err, IsNil)
- err = yaml.UnmarshalStrict([]byte("a: 1\nb: 2\nc: 3"), &v)
- c.Check(err, ErrorMatches, "yaml: unmarshal errors:\n line 1: field c not found in struct struct { A int; B int }")
-}
-
-//var data []byte
-//func init() {
-// var err error
-// data, err = ioutil.ReadFile("/tmp/file.yaml")
-// if err != nil {
-// panic(err)
-// }
-//}
-//
-//func (s *S) BenchmarkUnmarshal(c *C) {
-// var err error
-// for i := 0; i < c.N; i++ {
-// var v map[string]interface{}
-// err = yaml.Unmarshal(data, &v)
-// }
-// if err != nil {
-// panic(err)
-// }
-//}
-//
-//func (s *S) BenchmarkMarshal(c *C) {
-// var v map[string]interface{}
-// yaml.Unmarshal(data, &v)
-// c.ResetTimer()
-// for i := 0; i < c.N; i++ {
-// yaml.Marshal(&v)
-// }
-//}
diff --git a/vendor/gopkg.in/yaml.v2/emitterc.go b/vendor/gopkg.in/yaml.v2/emitterc.go
index 41de8b85..a1c2cc52 100644
--- a/vendor/gopkg.in/yaml.v2/emitterc.go
+++ b/vendor/gopkg.in/yaml.v2/emitterc.go
@@ -2,6 +2,7 @@ package yaml
import (
"bytes"
+ "fmt"
)
// Flush the buffer if needed.
@@ -664,7 +665,7 @@ func yaml_emitter_emit_node(emitter *yaml_emitter_t, event *yaml_event_t,
return yaml_emitter_emit_mapping_start(emitter, event)
default:
return yaml_emitter_set_emitter_error(emitter,
- "expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS")
+ fmt.Sprintf("expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS, but got %v", event.typ))
}
}
@@ -842,7 +843,7 @@ func yaml_emitter_select_scalar_style(emitter *yaml_emitter_t, event *yaml_event
return true
}
-// Write an achor.
+// Write an anchor.
func yaml_emitter_process_anchor(emitter *yaml_emitter_t) bool {
if emitter.anchor_data.anchor == nil {
return true
@@ -995,9 +996,9 @@ func yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool {
space_break = false
preceded_by_whitespace = false
- followed_by_whitespace = false
- previous_space = false
- previous_break = false
+ followed_by_whitespace = false
+ previous_space = false
+ previous_break = false
)
emitter.scalar_data.value = value
diff --git a/vendor/gopkg.in/yaml.v2/encode.go b/vendor/gopkg.in/yaml.v2/encode.go
index 84f84995..a14435e8 100644
--- a/vendor/gopkg.in/yaml.v2/encode.go
+++ b/vendor/gopkg.in/yaml.v2/encode.go
@@ -3,12 +3,14 @@ package yaml
import (
"encoding"
"fmt"
+ "io"
"reflect"
"regexp"
"sort"
"strconv"
"strings"
"time"
+ "unicode/utf8"
)
type encoder struct {
@@ -16,25 +18,39 @@ type encoder struct {
event yaml_event_t
out []byte
flow bool
+ // doneInit holds whether the initial stream_start_event has been
+ // emitted.
+ doneInit bool
}
-func newEncoder() (e *encoder) {
- e = &encoder{}
- e.must(yaml_emitter_initialize(&e.emitter))
+func newEncoder() *encoder {
+ e := &encoder{}
+ yaml_emitter_initialize(&e.emitter)
yaml_emitter_set_output_string(&e.emitter, &e.out)
yaml_emitter_set_unicode(&e.emitter, true)
- e.must(yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING))
- e.emit()
- e.must(yaml_document_start_event_initialize(&e.event, nil, nil, true))
- e.emit()
return e
}
-func (e *encoder) finish() {
- e.must(yaml_document_end_event_initialize(&e.event, true))
+func newEncoderWithWriter(w io.Writer) *encoder {
+ e := &encoder{}
+ yaml_emitter_initialize(&e.emitter)
+ yaml_emitter_set_output_writer(&e.emitter, w)
+ yaml_emitter_set_unicode(&e.emitter, true)
+ return e
+}
+
+func (e *encoder) init() {
+ if e.doneInit {
+ return
+ }
+ yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING)
e.emit()
+ e.doneInit = true
+}
+
+func (e *encoder) finish() {
e.emitter.open_ended = false
- e.must(yaml_stream_end_event_initialize(&e.event))
+ yaml_stream_end_event_initialize(&e.event)
e.emit()
}
@@ -44,9 +60,7 @@ func (e *encoder) destroy() {
func (e *encoder) emit() {
// This will internally delete the e.event value.
- if !yaml_emitter_emit(&e.emitter, &e.event) && e.event.typ != yaml_DOCUMENT_END_EVENT && e.event.typ != yaml_STREAM_END_EVENT {
- e.must(false)
- }
+ e.must(yaml_emitter_emit(&e.emitter, &e.event))
}
func (e *encoder) must(ok bool) {
@@ -59,13 +73,28 @@ func (e *encoder) must(ok bool) {
}
}
+func (e *encoder) marshalDoc(tag string, in reflect.Value) {
+ e.init()
+ yaml_document_start_event_initialize(&e.event, nil, nil, true)
+ e.emit()
+ e.marshal(tag, in)
+ yaml_document_end_event_initialize(&e.event, true)
+ e.emit()
+}
+
func (e *encoder) marshal(tag string, in reflect.Value) {
- if !in.IsValid() {
+ if !in.IsValid() || in.Kind() == reflect.Ptr && in.IsNil() {
e.nilv()
return
}
iface := in.Interface()
- if m, ok := iface.(Marshaler); ok {
+ switch m := iface.(type) {
+ case time.Time, *time.Time:
+ // Although time.Time implements TextMarshaler,
+ // we don't want to treat it as a string for YAML
+ // purposes because YAML has special support for
+ // timestamps.
+ case Marshaler:
v, err := m.MarshalYAML()
if err != nil {
fail(err)
@@ -75,31 +104,34 @@ func (e *encoder) marshal(tag string, in reflect.Value) {
return
}
in = reflect.ValueOf(v)
- } else if m, ok := iface.(encoding.TextMarshaler); ok {
+ case encoding.TextMarshaler:
text, err := m.MarshalText()
if err != nil {
fail(err)
}
in = reflect.ValueOf(string(text))
+ case nil:
+ e.nilv()
+ return
}
switch in.Kind() {
case reflect.Interface:
- if in.IsNil() {
- e.nilv()
- } else {
- e.marshal(tag, in.Elem())
- }
+ e.marshal(tag, in.Elem())
case reflect.Map:
e.mapv(tag, in)
case reflect.Ptr:
- if in.IsNil() {
- e.nilv()
+ if in.Type() == ptrTimeType {
+ e.timev(tag, in.Elem())
} else {
e.marshal(tag, in.Elem())
}
case reflect.Struct:
- e.structv(tag, in)
- case reflect.Slice:
+ if in.Type() == timeType {
+ e.timev(tag, in)
+ } else {
+ e.structv(tag, in)
+ }
+ case reflect.Slice, reflect.Array:
if in.Type().Elem() == mapItemType {
e.itemsv(tag, in)
} else {
@@ -191,10 +223,10 @@ func (e *encoder) mappingv(tag string, f func()) {
e.flow = false
style = yaml_FLOW_MAPPING_STYLE
}
- e.must(yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style))
+ yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style)
e.emit()
f()
- e.must(yaml_mapping_end_event_initialize(&e.event))
+ yaml_mapping_end_event_initialize(&e.event)
e.emit()
}
@@ -240,23 +272,36 @@ var base60float = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\.[0
func (e *encoder) stringv(tag string, in reflect.Value) {
var style yaml_scalar_style_t
s := in.String()
- rtag, rs := resolve("", s)
- if rtag == yaml_BINARY_TAG {
- if tag == "" || tag == yaml_STR_TAG {
- tag = rtag
- s = rs.(string)
- } else if tag == yaml_BINARY_TAG {
+ canUsePlain := true
+ switch {
+ case !utf8.ValidString(s):
+ if tag == yaml_BINARY_TAG {
failf("explicitly tagged !!binary data must be base64-encoded")
- } else {
+ }
+ if tag != "" {
failf("cannot marshal invalid UTF-8 data as %s", shortTag(tag))
}
+ // It can't be encoded directly as YAML so use a binary tag
+ // and encode it as base64.
+ tag = yaml_BINARY_TAG
+ s = encodeBase64(s)
+ case tag == "":
+ // Check to see if it would resolve to a specific
+ // tag when encoded unquoted. If it doesn't,
+ // there's no need to quote it.
+ rtag, _ := resolve("", s)
+ canUsePlain = rtag == yaml_STR_TAG && !isBase60Float(s)
}
- if tag == "" && (rtag != yaml_STR_TAG || isBase60Float(s)) {
- style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
- } else if strings.Contains(s, "\n") {
+ // Note: it's possible for user code to emit invalid YAML
+ // if it explicitly specifies a tag and a string containing
+ // text that's incompatible with that tag.
+ switch {
+ case strings.Contains(s, "\n"):
style = yaml_LITERAL_SCALAR_STYLE
- } else {
+ case canUsePlain:
style = yaml_PLAIN_SCALAR_STYLE
+ default:
+ style = yaml_DOUBLE_QUOTED_SCALAR_STYLE
}
e.emitScalar(s, "", tag, style)
}
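The rewritten `stringv` makes the quoting decision explicit: strings that aren't valid UTF-8 are base64-encoded under `!!binary`, strings that would resolve to another tag when emitted plain (numbers, booleans, base-60 lookalikes) are double-quoted, multi-line strings stay literal, and everything else is emitted plain. Roughly, under those rules:

```Go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	out, err := yaml.Marshal(map[string]string{
		"text":   "plain",
		"num":    "10",       // would resolve as !!int, so it gets quoted
		"binary": "\x80\x81", // invalid UTF-8, so it becomes !!binary
	})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// binary: !!binary gIE=
	// num: "10"
	// text: plain
}
```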
@@ -281,9 +326,20 @@ func (e *encoder) uintv(tag string, in reflect.Value) {
e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
}
+func (e *encoder) timev(tag string, in reflect.Value) {
+ t := in.Interface().(time.Time)
+ s := t.Format(time.RFC3339Nano)
+ e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE)
+}
+
func (e *encoder) floatv(tag string, in reflect.Value) {
- // FIXME: Handle 64 bits here.
- s := strconv.FormatFloat(float64(in.Float()), 'g', -1, 32)
+ // Issue #352: When formatting, use the precision of the underlying value
+ precision := 64
+ if in.Kind() == reflect.Float32 {
+ precision = 32
+ }
+
+ s := strconv.FormatFloat(in.Float(), 'g', -1, precision)
switch s {
case "+Inf":
s = ".inf"
diff --git a/vendor/gopkg.in/yaml.v2/encode_test.go b/vendor/gopkg.in/yaml.v2/encode_test.go
deleted file mode 100644
index 84099bd3..00000000
--- a/vendor/gopkg.in/yaml.v2/encode_test.go
+++ /dev/null
@@ -1,501 +0,0 @@
-package yaml_test
-
-import (
- "fmt"
- "math"
- "strconv"
- "strings"
- "time"
-
- . "gopkg.in/check.v1"
- "gopkg.in/yaml.v2"
- "net"
- "os"
-)
-
-var marshalIntTest = 123
-
-var marshalTests = []struct {
- value interface{}
- data string
-}{
- {
- nil,
- "null\n",
- }, {
- &struct{}{},
- "{}\n",
- }, {
- map[string]string{"v": "hi"},
- "v: hi\n",
- }, {
- map[string]interface{}{"v": "hi"},
- "v: hi\n",
- }, {
- map[string]string{"v": "true"},
- "v: \"true\"\n",
- }, {
- map[string]string{"v": "false"},
- "v: \"false\"\n",
- }, {
- map[string]interface{}{"v": true},
- "v: true\n",
- }, {
- map[string]interface{}{"v": false},
- "v: false\n",
- }, {
- map[string]interface{}{"v": 10},
- "v: 10\n",
- }, {
- map[string]interface{}{"v": -10},
- "v: -10\n",
- }, {
- map[string]uint{"v": 42},
- "v: 42\n",
- }, {
- map[string]interface{}{"v": int64(4294967296)},
- "v: 4294967296\n",
- }, {
- map[string]int64{"v": int64(4294967296)},
- "v: 4294967296\n",
- }, {
- map[string]uint64{"v": 4294967296},
- "v: 4294967296\n",
- }, {
- map[string]interface{}{"v": "10"},
- "v: \"10\"\n",
- }, {
- map[string]interface{}{"v": 0.1},
- "v: 0.1\n",
- }, {
- map[string]interface{}{"v": float64(0.1)},
- "v: 0.1\n",
- }, {
- map[string]interface{}{"v": -0.1},
- "v: -0.1\n",
- }, {
- map[string]interface{}{"v": math.Inf(+1)},
- "v: .inf\n",
- }, {
- map[string]interface{}{"v": math.Inf(-1)},
- "v: -.inf\n",
- }, {
- map[string]interface{}{"v": math.NaN()},
- "v: .nan\n",
- }, {
- map[string]interface{}{"v": nil},
- "v: null\n",
- }, {
- map[string]interface{}{"v": ""},
- "v: \"\"\n",
- }, {
- map[string][]string{"v": []string{"A", "B"}},
- "v:\n- A\n- B\n",
- }, {
- map[string][]string{"v": []string{"A", "B\nC"}},
- "v:\n- A\n- |-\n B\n C\n",
- }, {
- map[string][]interface{}{"v": []interface{}{"A", 1, map[string][]int{"B": []int{2, 3}}}},
- "v:\n- A\n- 1\n- B:\n - 2\n - 3\n",
- }, {
- map[string]interface{}{"a": map[interface{}]interface{}{"b": "c"}},
- "a:\n b: c\n",
- }, {
- map[string]interface{}{"a": "-"},
- "a: '-'\n",
- },
-
- // Simple values.
- {
- &marshalIntTest,
- "123\n",
- },
-
- // Structures
- {
- &struct{ Hello string }{"world"},
- "hello: world\n",
- }, {
- &struct {
- A struct {
- B string
- }
- }{struct{ B string }{"c"}},
- "a:\n b: c\n",
- }, {
- &struct {
- A *struct {
- B string
- }
- }{&struct{ B string }{"c"}},
- "a:\n b: c\n",
- }, {
- &struct {
- A *struct {
- B string
- }
- }{},
- "a: null\n",
- }, {
- &struct{ A int }{1},
- "a: 1\n",
- }, {
- &struct{ A []int }{[]int{1, 2}},
- "a:\n- 1\n- 2\n",
- }, {
- &struct {
- B int "a"
- }{1},
- "a: 1\n",
- }, {
- &struct{ A bool }{true},
- "a: true\n",
- },
-
- // Conditional flag
- {
- &struct {
- A int "a,omitempty"
- B int "b,omitempty"
- }{1, 0},
- "a: 1\n",
- }, {
- &struct {
- A int "a,omitempty"
- B int "b,omitempty"
- }{0, 0},
- "{}\n",
- }, {
- &struct {
- A *struct{ X, y int } "a,omitempty,flow"
- }{&struct{ X, y int }{1, 2}},
- "a: {x: 1}\n",
- }, {
- &struct {
- A *struct{ X, y int } "a,omitempty,flow"
- }{nil},
- "{}\n",
- }, {
- &struct {
- A *struct{ X, y int } "a,omitempty,flow"
- }{&struct{ X, y int }{}},
- "a: {x: 0}\n",
- }, {
- &struct {
- A struct{ X, y int } "a,omitempty,flow"
- }{struct{ X, y int }{1, 2}},
- "a: {x: 1}\n",
- }, {
- &struct {
- A struct{ X, y int } "a,omitempty,flow"
- }{struct{ X, y int }{0, 1}},
- "{}\n",
- }, {
- &struct {
- A float64 "a,omitempty"
- B float64 "b,omitempty"
- }{1, 0},
- "a: 1\n",
- },
-
- // Flow flag
- {
- &struct {
- A []int "a,flow"
- }{[]int{1, 2}},
- "a: [1, 2]\n",
- }, {
- &struct {
- A map[string]string "a,flow"
- }{map[string]string{"b": "c", "d": "e"}},
- "a: {b: c, d: e}\n",
- }, {
- &struct {
- A struct {
- B, D string
- } "a,flow"
- }{struct{ B, D string }{"c", "e"}},
- "a: {b: c, d: e}\n",
- },
-
- // Unexported field
- {
- &struct {
- u int
- A int
- }{0, 1},
- "a: 1\n",
- },
-
- // Ignored field
- {
- &struct {
- A int
- B int "-"
- }{1, 2},
- "a: 1\n",
- },
-
- // Struct inlining
- {
- &struct {
- A int
- C inlineB `yaml:",inline"`
- }{1, inlineB{2, inlineC{3}}},
- "a: 1\nb: 2\nc: 3\n",
- },
-
- // Map inlining
- {
- &struct {
- A int
- C map[string]int `yaml:",inline"`
- }{1, map[string]int{"b": 2, "c": 3}},
- "a: 1\nb: 2\nc: 3\n",
- },
-
- // Duration
- {
- map[string]time.Duration{"a": 3 * time.Second},
- "a: 3s\n",
- },
-
- // Issue #24: bug in map merging logic.
- {
- map[string]string{"a": "<foo>"},
- "a: <foo>\n",
- },
-
- // Issue #34: marshal unsupported base 60 floats quoted for compatibility
- // with old YAML 1.1 parsers.
- {
- map[string]string{"a": "1:1"},
- "a: \"1:1\"\n",
- },
-
- // Binary data.
- {
- map[string]string{"a": "\x00"},
- "a: \"\\0\"\n",
- }, {
- map[string]string{"a": "\x80\x81\x82"},
- "a: !!binary gIGC\n",
- }, {
- map[string]string{"a": strings.Repeat("\x90", 54)},
- "a: !!binary |\n " + strings.Repeat("kJCQ", 17) + "kJ\n CQ\n",
- },
-
- // Ordered maps.
- {
- &yaml.MapSlice{{"b", 2}, {"a", 1}, {"d", 4}, {"c", 3}, {"sub", yaml.MapSlice{{"e", 5}}}},
- "b: 2\na: 1\nd: 4\nc: 3\nsub:\n e: 5\n",
- },
-
- // Encode unicode as utf-8 rather than in escaped form.
- {
- map[string]string{"a": "你好"},
- "a: 你好\n",
- },
-
- // Support encoding.TextMarshaler.
- {
- map[string]net.IP{"a": net.IPv4(1, 2, 3, 4)},
- "a: 1.2.3.4\n",
- },
- {
- map[string]time.Time{"a": time.Unix(1424801979, 0)},
- "a: 2015-02-24T18:19:39Z\n",
- },
-
- // Ensure strings containing ": " are quoted (reported as PR #43, but not reproducible).
- {
- map[string]string{"a": "b: c"},
- "a: 'b: c'\n",
- },
-
- // Containing hash mark ('#') in string should be quoted
- {
- map[string]string{"a": "Hello #comment"},
- "a: 'Hello #comment'\n",
- },
- {
- map[string]string{"a": "你好 #comment"},
- "a: '你好 #comment'\n",
- },
-}
-
-func (s *S) TestMarshal(c *C) {
- defer os.Setenv("TZ", os.Getenv("TZ"))
- os.Setenv("TZ", "UTC")
- for _, item := range marshalTests {
- data, err := yaml.Marshal(item.value)
- c.Assert(err, IsNil)
- c.Assert(string(data), Equals, item.data)
- }
-}
-
-var marshalErrorTests = []struct {
- value interface{}
- error string
- panic string
-}{{
- value: &struct {
- B int
- inlineB ",inline"
- }{1, inlineB{2, inlineC{3}}},
- panic: `Duplicated key 'b' in struct struct \{ B int; .*`,
-}, {
- value: &struct {
- A int
- B map[string]int ",inline"
- }{1, map[string]int{"a": 2}},
- panic: `Can't have key "a" in inlined map; conflicts with struct field`,
-}}
-
-func (s *S) TestMarshalErrors(c *C) {
- for _, item := range marshalErrorTests {
- if item.panic != "" {
- c.Assert(func() { yaml.Marshal(item.value) }, PanicMatches, item.panic)
- } else {
- _, err := yaml.Marshal(item.value)
- c.Assert(err, ErrorMatches, item.error)
- }
- }
-}
-
-func (s *S) TestMarshalTypeCache(c *C) {
- var data []byte
- var err error
- func() {
- type T struct{ A int }
- data, err = yaml.Marshal(&T{})
- c.Assert(err, IsNil)
- }()
- func() {
- type T struct{ B int }
- data, err = yaml.Marshal(&T{})
- c.Assert(err, IsNil)
- }()
- c.Assert(string(data), Equals, "b: 0\n")
-}
-
-var marshalerTests = []struct {
- data string
- value interface{}
-}{
- {"_:\n hi: there\n", map[interface{}]interface{}{"hi": "there"}},
- {"_:\n- 1\n- A\n", []interface{}{1, "A"}},
- {"_: 10\n", 10},
- {"_: null\n", nil},
- {"_: BAR!\n", "BAR!"},
-}
-
-type marshalerType struct {
- value interface{}
-}
-
-func (o marshalerType) MarshalText() ([]byte, error) {
- panic("MarshalText called on type with MarshalYAML")
-}
-
-func (o marshalerType) MarshalYAML() (interface{}, error) {
- return o.value, nil
-}
-
-type marshalerValue struct {
- Field marshalerType "_"
-}
-
-func (s *S) TestMarshaler(c *C) {
- for _, item := range marshalerTests {
- obj := &marshalerValue{}
- obj.Field.value = item.value
- data, err := yaml.Marshal(obj)
- c.Assert(err, IsNil)
- c.Assert(string(data), Equals, string(item.data))
- }
-}
-
-func (s *S) TestMarshalerWholeDocument(c *C) {
- obj := &marshalerType{}
- obj.value = map[string]string{"hello": "world!"}
- data, err := yaml.Marshal(obj)
- c.Assert(err, IsNil)
- c.Assert(string(data), Equals, "hello: world!\n")
-}
-
-type failingMarshaler struct{}
-
-func (ft *failingMarshaler) MarshalYAML() (interface{}, error) {
- return nil, failingErr
-}
-
-func (s *S) TestMarshalerError(c *C) {
- _, err := yaml.Marshal(&failingMarshaler{})
- c.Assert(err, Equals, failingErr)
-}
-
-func (s *S) TestSortedOutput(c *C) {
- order := []interface{}{
- false,
- true,
- 1,
- uint(1),
- 1.0,
- 1.1,
- 1.2,
- 2,
- uint(2),
- 2.0,
- 2.1,
- "",
- ".1",
- ".2",
- ".a",
- "1",
- "2",
- "a!10",
- "a/2",
- "a/10",
- "a~10",
- "ab/1",
- "b/1",
- "b/01",
- "b/2",
- "b/02",
- "b/3",
- "b/03",
- "b1",
- "b01",
- "b3",
- "c2.10",
- "c10.2",
- "d1",
- "d12",
- "d12a",
- }
- m := make(map[interface{}]int)
- for _, k := range order {
- m[k] = 1
- }
- data, err := yaml.Marshal(m)
- c.Assert(err, IsNil)
- out := "\n" + string(data)
- last := 0
- for i, k := range order {
- repr := fmt.Sprint(k)
- if s, ok := k.(string); ok {
- if _, err = strconv.ParseFloat(repr, 32); s == "" || err == nil {
- repr = `"` + repr + `"`
- }
- }
- index := strings.Index(out, "\n"+repr+":")
- if index == -1 {
- c.Fatalf("%#v is not in the output: %#v", k, out)
- }
- if index < last {
- c.Fatalf("%#v was generated before %#v: %q", k, order[i-1], out)
- }
- last = index
- }
-}
diff --git a/vendor/gopkg.in/yaml.v2/example_embedded_test.go b/vendor/gopkg.in/yaml.v2/example_embedded_test.go
deleted file mode 100644
index c8b241d5..00000000
--- a/vendor/gopkg.in/yaml.v2/example_embedded_test.go
+++ /dev/null
@@ -1,41 +0,0 @@
-package yaml_test
-
-import (
- "fmt"
- "log"
-
- "gopkg.in/yaml.v2"
-)
-
-// An example showing how to unmarshal embedded
-// structs from YAML.
-
-type StructA struct {
- A string `yaml:"a"`
-}
-
-type StructB struct {
- // Embedded structs are not treated as embedded in YAML by default. To do that,
- // add the ",inline" annotation below
- StructA `yaml:",inline"`
- B string `yaml:"b"`
-}
-
-var data = `
-a: a string from struct A
-b: a string from struct B
-`
-
-func ExampleUnmarshal_embedded() {
- var b StructB
-
- err := yaml.Unmarshal([]byte(data), &b)
- if err != nil {
- log.Fatal("cannot unmarshal data: %v", err)
- }
- fmt.Println(b.A)
- fmt.Println(b.B)
- // Output:
- // a string from struct A
- // a string from struct B
-}
diff --git a/vendor/gopkg.in/yaml.v2/readerc.go b/vendor/gopkg.in/yaml.v2/readerc.go
index f4507917..7c1f5fac 100644
--- a/vendor/gopkg.in/yaml.v2/readerc.go
+++ b/vendor/gopkg.in/yaml.v2/readerc.go
@@ -93,9 +93,18 @@ func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {
panic("read handler must be set")
}
+ // [Go] This function was changed to guarantee the requested length size at EOF.
+ // [Go] The fact that we need to do this is pretty awful, but the description
+ // above implies that this is the case, and there are tests that rely on it.
+
// If the EOF flag is set and the raw buffer is empty, do nothing.
if parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) {
- return true
+ // [Go] ACTUALLY! Read the documentation of this function above.
+ // This is just broken. To return true, we need to have the
+ // given length in the buffer. Not doing that means every single
+ // check that calls this function to make sure the buffer has a
+ // given length is, in Go, panicking or, in C, accessing invalid memory.
+ //return true
}
// Return if the buffer contains enough characters.
@@ -389,6 +398,15 @@ func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool {
break
}
}
+ // [Go] Read the documentation of this function above. To return true,
+ // we need to have the given length in the buffer. Not doing that means
+ // every single check that calls this function to make sure the buffer
+ // has a given length is, in Go, panicking or, in C, accessing invalid memory.
+ // This happens here due to the EOF above breaking early.
+ for buffer_len < length {
+ parser.buffer[buffer_len] = 0
+ buffer_len++
+ }
parser.buffer = parser.buffer[:buffer_len]
return true
}
diff --git a/vendor/gopkg.in/yaml.v2/resolve.go b/vendor/gopkg.in/yaml.v2/resolve.go
index 232313cc..6c151db6 100644
--- a/vendor/gopkg.in/yaml.v2/resolve.go
+++ b/vendor/gopkg.in/yaml.v2/resolve.go
@@ -6,7 +6,7 @@ import (
"regexp"
"strconv"
"strings"
- "unicode/utf8"
+ "time"
)
type resolveMapItem struct {
@@ -75,7 +75,7 @@ func longTag(tag string) string {
func resolvableTag(tag string) bool {
switch tag {
- case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG:
+ case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG, yaml_TIMESTAMP_TAG:
return true
}
return false
@@ -92,6 +92,19 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
switch tag {
case "", rtag, yaml_STR_TAG, yaml_BINARY_TAG:
return
+ case yaml_FLOAT_TAG:
+ if rtag == yaml_INT_TAG {
+ switch v := out.(type) {
+ case int64:
+ rtag = yaml_FLOAT_TAG
+ out = float64(v)
+ return
+ case int:
+ rtag = yaml_FLOAT_TAG
+ out = float64(v)
+ return
+ }
+ }
}
failf("cannot decode %s `%s` as a %s", shortTag(rtag), in, shortTag(tag))
}()
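As a quick illustration of what the new `yaml_FLOAT_TAG` case buys (a sketch, not part of the patch): a scalar that resolves as an integer but carries an explicit `!!float` tag is now coerced to `float64` instead of failing with a decode error.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	var m map[string]interface{}
	// "1" alone resolves to an int, but the explicit !!float tag
	// now coerces it to float64 rather than raising a decode error.
	if err := yaml.Unmarshal([]byte("f: !!float 1\n"), &m); err != nil {
		panic(err)
	}
	fmt.Printf("%T\n", m["f"]) // float64
}
```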
@@ -125,6 +138,15 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
case 'D', 'S':
// Int, float, or timestamp.
+ // Only try to parse the value as a timestamp if it is unquoted or
+ // carries an explicit !!timestamp tag.
+ if tag == "" || tag == yaml_TIMESTAMP_TAG {
+ t, ok := parseTimestamp(in)
+ if ok {
+ return yaml_TIMESTAMP_TAG, t
+ }
+ }
+
plain := strings.Replace(in, "_", "", -1)
intv, err := strconv.ParseInt(plain, 0, 64)
if err == nil {
@@ -158,28 +180,20 @@ func resolve(tag string, in string) (rtag string, out interface{}) {
return yaml_INT_TAG, uintv
}
} else if strings.HasPrefix(plain, "-0b") {
- intv, err := strconv.ParseInt(plain[3:], 2, 64)
+ intv, err := strconv.ParseInt("-" + plain[3:], 2, 64)
if err == nil {
- if intv == int64(int(intv)) {
- return yaml_INT_TAG, -int(intv)
+ if true || intv == int64(int(intv)) {
+ return yaml_INT_TAG, int(intv)
} else {
- return yaml_INT_TAG, -intv
+ return yaml_INT_TAG, intv
}
}
}
- // XXX Handle timestamps here.
-
default:
panic("resolveTable item not yet handled: " + string(rune(hint)) + " (with " + in + ")")
}
}
- if tag == yaml_BINARY_TAG {
- return yaml_BINARY_TAG, in
- }
- if utf8.ValidString(in) {
- return yaml_STR_TAG, in
- }
- return yaml_BINARY_TAG, encodeBase64(in)
+ return yaml_STR_TAG, in
}
// encodeBase64 encodes s as base64 that is broken up into multiple lines
@@ -206,3 +220,39 @@ func encodeBase64(s string) string {
}
return string(out[:k])
}
+
+// This is a subset of the formats allowed by the regular expression
+// defined at http://yaml.org/type/timestamp.html.
+var allowedTimestampFormats = []string{
+ "2006-1-2T15:4:5.999999999Z07:00", // RCF3339Nano with short date fields.
+ "2006-1-2t15:4:5.999999999Z07:00", // RFC3339Nano with short date fields and lower-case "t".
+ "2006-1-2 15:4:5.999999999", // space separated with no time zone
+ "2006-1-2", // date only
+ // Notable exception: time.Parse cannot handle: "2001-12-14 21:59:43.10 -5"
+ // from the set of examples.
+}
+
+// parseTimestamp parses s as a timestamp string and
+// returns the parsed timestamp, reporting whether parsing succeeded.
+// Timestamp formats are defined at http://yaml.org/type/timestamp.html
+func parseTimestamp(s string) (time.Time, bool) {
+ // TODO write code to check all the formats supported by
+ // http://yaml.org/type/timestamp.html instead of using time.Parse.
+
+ // Quick check: all date formats start with YYYY-.
+ i := 0
+ for ; i < len(s); i++ {
+ if c := s[i]; c < '0' || c > '9' {
+ break
+ }
+ }
+ if i != 4 || i == len(s) || s[i] != '-' {
+ return time.Time{}, false
+ }
+ for _, format := range allowedTimestampFormats {
+ if t, err := time.Parse(format, s); err == nil {
+ return t, true
+ }
+ }
+ return time.Time{}, false
+}
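A sketch of the decoding behavior this enables (illustrative only): unquoted scalars matching one of the formats above resolve to `time.Time`, while quoting keeps them as plain strings, per the tag check in `resolve`.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	var m map[string]interface{}
	doc := "a: 2015-02-24T18:19:39Z\nb: \"2015-02-24\"\n"
	if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
		panic(err)
	}
	// The unquoted scalar resolves to a time.Time; quoting suppresses
	// timestamp resolution, so "b" remains a string.
	fmt.Printf("%T %T\n", m["a"], m["b"]) // time.Time string
}
```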
diff --git a/vendor/gopkg.in/yaml.v2/scannerc.go b/vendor/gopkg.in/yaml.v2/scannerc.go
index 07448445..077fd1dd 100644
--- a/vendor/gopkg.in/yaml.v2/scannerc.go
+++ b/vendor/gopkg.in/yaml.v2/scannerc.go
@@ -871,12 +871,6 @@ func yaml_parser_save_simple_key(parser *yaml_parser_t) bool {
required := parser.flow_level == 0 && parser.indent == parser.mark.column
- // A simple key is required only when it is the first token in the current
- // line. Therefore it is always allowed. But we add a check anyway.
- if required && !parser.simple_key_allowed {
- panic("should not happen")
- }
-
//
// If the current position may start a simple key, save it.
//
@@ -2475,6 +2469,10 @@ func yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, si
}
}
+ if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
+ return false
+ }
+
// Check if we are at the end of the scalar.
if single {
if parser.buffer[parser.buffer_pos] == '\'' {
@@ -2487,10 +2485,6 @@ func yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, si
}
// Consume blank characters.
- if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) {
- return false
- }
-
for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) {
if is_blank(parser.buffer, parser.buffer_pos) {
// Consume a space or a tab character.
@@ -2592,19 +2586,10 @@ func yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) b
// Consume non-blank characters.
for !is_blankz(parser.buffer, parser.buffer_pos) {
- // Check for 'x:x' in the flow context. TODO: Fix the test "spec-08-13".
- if parser.flow_level > 0 &&
- parser.buffer[parser.buffer_pos] == ':' &&
- !is_blankz(parser.buffer, parser.buffer_pos+1) {
- yaml_parser_set_scanner_error(parser, "while scanning a plain scalar",
- start_mark, "found unexpected ':'")
- return false
- }
-
// Check for indicators that may end a plain scalar.
if (parser.buffer[parser.buffer_pos] == ':' && is_blankz(parser.buffer, parser.buffer_pos+1)) ||
(parser.flow_level > 0 &&
- (parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == ':' ||
+ (parser.buffer[parser.buffer_pos] == ',' ||
parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == '[' ||
parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' ||
parser.buffer[parser.buffer_pos] == '}')) {
@@ -2656,10 +2641,10 @@ func yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) b
for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) {
if is_blank(parser.buffer, parser.buffer_pos) {
- // Check for tab character that abuse indentation.
+ // Check for tab characters that abuse indentation.
if leading_blanks && parser.mark.column < indent && is_tab(parser.buffer, parser.buffer_pos) {
yaml_parser_set_scanner_error(parser, "while scanning a plain scalar",
- start_mark, "found a tab character that violate indentation")
+ start_mark, "found a tab character that violates indentation")
return false
}
diff --git a/vendor/gopkg.in/yaml.v2/sorter.go b/vendor/gopkg.in/yaml.v2/sorter.go
index 5958822f..4c45e660 100644
--- a/vendor/gopkg.in/yaml.v2/sorter.go
+++ b/vendor/gopkg.in/yaml.v2/sorter.go
@@ -51,6 +51,15 @@ func (l keyList) Less(i, j int) bool {
}
var ai, bi int
var an, bn int64
+ if ar[i] == '0' || br[i] == '0' {
+ for j := i-1; j >= 0 && unicode.IsDigit(ar[j]); j-- {
+ if ar[j] != '0' {
+ an = 1
+ bn = 1
+ break
+ }
+ }
+ }
for ai = i; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ {
an = an*10 + int64(ar[ai]-'0')
}
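The effect of the leading-zero handling is easiest to see through the ordered-output expectations in the removed test above, e.g. that `b/1` sorts before `b/01`, which sorts before `b/2`. A sketch:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	data, err := yaml.Marshal(map[string]int{"b/01": 0, "b/2": 0, "b/1": 0})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(data))
	// Expected key ordering, per the test above:
	// b/1: 0
	// b/01: 0
	// b/2: 0
}
```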
diff --git a/vendor/gopkg.in/yaml.v2/suite_test.go b/vendor/gopkg.in/yaml.v2/suite_test.go
deleted file mode 100644
index c5cf1ed4..00000000
--- a/vendor/gopkg.in/yaml.v2/suite_test.go
+++ /dev/null
@@ -1,12 +0,0 @@
-package yaml_test
-
-import (
- . "gopkg.in/check.v1"
- "testing"
-)
-
-func Test(t *testing.T) { TestingT(t) }
-
-type S struct{}
-
-var _ = Suite(&S{})
diff --git a/vendor/gopkg.in/yaml.v2/writerc.go b/vendor/gopkg.in/yaml.v2/writerc.go
index 190362f2..a2dde608 100644
--- a/vendor/gopkg.in/yaml.v2/writerc.go
+++ b/vendor/gopkg.in/yaml.v2/writerc.go
@@ -18,72 +18,9 @@ func yaml_emitter_flush(emitter *yaml_emitter_t) bool {
return true
}
- // If the output encoding is UTF-8, we don't need to recode the buffer.
- if emitter.encoding == yaml_UTF8_ENCODING {
- if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil {
- return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error())
- }
- emitter.buffer_pos = 0
- return true
- }
-
- // Recode the buffer into the raw buffer.
- var low, high int
- if emitter.encoding == yaml_UTF16LE_ENCODING {
- low, high = 0, 1
- } else {
- high, low = 1, 0
- }
-
- pos := 0
- for pos < emitter.buffer_pos {
- // See the "reader.c" code for more details on UTF-8 encoding. Note
- // that we assume that the buffer contains a valid UTF-8 sequence.
-
- // Read the next UTF-8 character.
- octet := emitter.buffer[pos]
-
- var w int
- var value rune
- switch {
- case octet&0x80 == 0x00:
- w, value = 1, rune(octet&0x7F)
- case octet&0xE0 == 0xC0:
- w, value = 2, rune(octet&0x1F)
- case octet&0xF0 == 0xE0:
- w, value = 3, rune(octet&0x0F)
- case octet&0xF8 == 0xF0:
- w, value = 4, rune(octet&0x07)
- }
- for k := 1; k < w; k++ {
- octet = emitter.buffer[pos+k]
- value = (value << 6) + (rune(octet) & 0x3F)
- }
- pos += w
-
- // Write the character.
- if value < 0x10000 {
- var b [2]byte
- b[high] = byte(value >> 8)
- b[low] = byte(value & 0xFF)
- emitter.raw_buffer = append(emitter.raw_buffer, b[0], b[1])
- } else {
- // Write the character using a surrogate pair (check "reader.c").
- var b [4]byte
- value -= 0x10000
- b[high] = byte(0xD8 + (value >> 18))
- b[low] = byte((value >> 10) & 0xFF)
- b[high+2] = byte(0xDC + ((value >> 8) & 0xFF))
- b[low+2] = byte(value & 0xFF)
- emitter.raw_buffer = append(emitter.raw_buffer, b[0], b[1], b[2], b[3])
- }
- }
-
- // Write the raw buffer.
- if err := emitter.write_handler(emitter, emitter.raw_buffer); err != nil {
+ if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil {
return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error())
}
emitter.buffer_pos = 0
- emitter.raw_buffer = emitter.raw_buffer[:0]
return true
}
diff --git a/vendor/gopkg.in/yaml.v2/yaml.go b/vendor/gopkg.in/yaml.v2/yaml.go
index bf18884e..de85aa4c 100644
--- a/vendor/gopkg.in/yaml.v2/yaml.go
+++ b/vendor/gopkg.in/yaml.v2/yaml.go
@@ -9,6 +9,7 @@ package yaml
import (
"errors"
"fmt"
+ "io"
"reflect"
"strings"
"sync"
@@ -81,12 +82,58 @@ func Unmarshal(in []byte, out interface{}) (err error) {
}
// UnmarshalStrict is like Unmarshal except that any fields that are found
-// in the data that do not have corresponding struct members will result in
+// in the data that do not have corresponding struct members, or any
+// duplicated mapping keys, will result in
// an error.
func UnmarshalStrict(in []byte, out interface{}) (err error) {
return unmarshal(in, out, true)
}
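A minimal sketch of the duplicated-key behavior now documented here:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

func main() {
	var v map[string]int
	// Plain Unmarshal keeps the last value for a repeated key;
	// UnmarshalStrict reports the duplicate as an error.
	err := yaml.UnmarshalStrict([]byte("a: 1\na: 2\n"), &v)
	fmt.Println(err != nil) // true
}
```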
+// A Decoder reads and decodes YAML values from an input stream.
+type Decoder struct {
+ strict bool
+ parser *parser
+}
+
+// NewDecoder returns a new decoder that reads from r.
+//
+// The decoder introduces its own buffering and may read
+// data from r beyond the YAML values requested.
+func NewDecoder(r io.Reader) *Decoder {
+ return &Decoder{
+ parser: newParserFromReader(r),
+ }
+}
+
+// SetStrict sets whether strict decoding behaviour is enabled when
+// decoding items in the data (see UnmarshalStrict). By default, decoding is not strict.
+func (dec *Decoder) SetStrict(strict bool) {
+ dec.strict = strict
+}
+
+// Decode reads the next YAML-encoded value from its input
+// and stores it in the value pointed to by v.
+//
+// See the documentation for Unmarshal for details about the
+// conversion of YAML into a Go value.
+func (dec *Decoder) Decode(v interface{}) (err error) {
+ d := newDecoder(dec.strict)
+ defer handleErr(&err)
+ node := dec.parser.parse()
+ if node == nil {
+ return io.EOF
+ }
+ out := reflect.ValueOf(v)
+ if out.Kind() == reflect.Ptr && !out.IsNil() {
+ out = out.Elem()
+ }
+ d.unmarshal(node, out)
+ if len(d.terrors) > 0 {
+ return &TypeError{d.terrors}
+ }
+ return nil
+}
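A minimal usage sketch for the new streaming API (illustrative, assuming the API as declared above): decode documents in a loop until Decode returns io.EOF.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"strings"

	"gopkg.in/yaml.v2"
)

func main() {
	dec := yaml.NewDecoder(strings.NewReader("a: 1\n---\na: 2\n"))
	dec.SetStrict(true) // opt in to UnmarshalStrict-style checking
	for {
		var doc struct {
			A int `yaml:"a"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of stream
		} else if err != nil {
			log.Fatalf("decode: %v", err)
		}
		fmt.Println(doc.A) // 1, then 2
	}
}
```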
+
func unmarshal(in []byte, out interface{}, strict bool) (err error) {
defer handleErr(&err)
d := newDecoder(strict)
@@ -110,8 +157,8 @@ func unmarshal(in []byte, out interface{}, strict bool) (err error) {
// of the generated document will reflect the structure of the value itself.
// Maps and pointers (to struct, string, int, etc) are accepted as the in value.
//
-// Struct fields are only unmarshalled if they are exported (have an upper case
-// first letter), and are unmarshalled using the field name lowercased as the
+// Struct fields are only marshalled if they are exported (have an upper case
+// first letter), and are marshalled using the field name lowercased as the
// default key. Custom keys may be defined via the "yaml" name in the field
// tag: the content preceding the first comma is used as the key, and the
// following comma-separated options are used to tweak the marshalling process.
@@ -125,7 +172,10 @@ func unmarshal(in []byte, out interface{}, strict bool) (err error) {
//
// omitempty Only include the field if it's not set to the zero
// value for the type or to empty slices or maps.
-// Does not apply to zero valued structs.
+// Zero valued structs will be omitted if all their public
+// fields are zero, unless they implement an IsZero
+// method (see the IsZeroer interface type), in which
+// case the field will be omitted if that method returns true.
//
// flow Marshal using a flow style (useful for structs,
// sequences and maps).
@@ -140,7 +190,7 @@ func unmarshal(in []byte, out interface{}, strict bool) (err error) {
// For example:
//
// type T struct {
-// F int "a,omitempty"
+// F int `yaml:"a,omitempty"`
// B int
// }
// yaml.Marshal(&T{B: 2}) // Returns "b: 2\n"
@@ -150,12 +200,47 @@ func Marshal(in interface{}) (out []byte, err error) {
defer handleErr(&err)
e := newEncoder()
defer e.destroy()
- e.marshal("", reflect.ValueOf(in))
+ e.marshalDoc("", reflect.ValueOf(in))
e.finish()
out = e.out
return
}
+// An Encoder writes YAML values to an output stream.
+type Encoder struct {
+ encoder *encoder
+}
+
+// NewEncoder returns a new encoder that writes to w.
+// The Encoder should be closed after use to flush all data
+// to w.
+func NewEncoder(w io.Writer) *Encoder {
+ return &Encoder{
+ encoder: newEncoderWithWriter(w),
+ }
+}
+
+// Encode writes the YAML encoding of v to the stream.
+// If multiple items are encoded to the stream, the
+// second and subsequent documents will be preceded
+// by a "---" document separator, but the first will not.
+//
+// See the documentation for Marshal for details about the conversion of Go
+// values to YAML.
+func (e *Encoder) Encode(v interface{}) (err error) {
+ defer handleErr(&err)
+ e.encoder.marshalDoc("", reflect.ValueOf(v))
+ return nil
+}
+
+// Close closes the encoder by writing any remaining data.
+// It does not write a stream terminating string "...".
+func (e *Encoder) Close() (err error) {
+ defer handleErr(&err)
+ e.encoder.finish()
+ return nil
+}
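And the matching encoder side (again a sketch): each Encode call after the first is preceded by a "---" separator, and Close flushes the stream.

```go
package main

import (
	"log"
	"os"

	"gopkg.in/yaml.v2"
)

func main() {
	enc := yaml.NewEncoder(os.Stdout)
	for _, v := range []map[string]int{{"a": 1}, {"a": 2}} {
		if err := enc.Encode(v); err != nil {
			log.Fatalf("encode: %v", err)
		}
	}
	if err := enc.Close(); err != nil { // flush; no trailing "..." is written
		log.Fatalf("close: %v", err)
	}
	// Output:
	// a: 1
	// ---
	// a: 2
}
```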
+
func handleErr(err *error) {
if v := recover(); v != nil {
if e, ok := v.(yamlError); ok {
@@ -211,6 +296,9 @@ type fieldInfo struct {
Num int
OmitEmpty bool
Flow bool
+ // Id holds the unique field identifier, so we can cheaply
+ // check for field duplicates without maintaining an extra map.
+ Id int
// Inline holds the field index if the field is part of an inlined struct.
Inline []int
@@ -290,6 +378,7 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
} else {
finfo.Inline = append([]int{i}, finfo.Inline...)
}
+ finfo.Id = len(fieldsList)
fieldsMap[finfo.Key] = finfo
fieldsList = append(fieldsList, finfo)
}
@@ -311,11 +400,16 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
return nil, errors.New(msg)
}
+ info.Id = len(fieldsList)
fieldsList = append(fieldsList, info)
fieldsMap[info.Key] = info
}
- sinfo = &structInfo{fieldsMap, fieldsList, inlineMap}
+ sinfo = &structInfo{
+ FieldsMap: fieldsMap,
+ FieldsList: fieldsList,
+ InlineMap: inlineMap,
+ }
fieldMapMutex.Lock()
structMap[st] = sinfo
@@ -323,8 +417,23 @@ func getStructInfo(st reflect.Type) (*structInfo, error) {
return sinfo, nil
}
+// IsZeroer is used to check whether an object is zero to
+// determine whether it should be omitted when marshaling
+// with the omitempty flag. One notable implementation
+// is time.Time.
+type IsZeroer interface {
+ IsZero() bool
+}
+
func isZero(v reflect.Value) bool {
- switch v.Kind() {
+ kind := v.Kind()
+ if z, ok := v.Interface().(IsZeroer); ok {
+ if (kind == reflect.Ptr || kind == reflect.Interface) && v.IsNil() {
+ return true
+ }
+ return z.IsZero()
+ }
+ switch kind {
case reflect.String:
return len(v.String()) == 0
case reflect.Interface, reflect.Ptr:
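To make the omitempty interaction concrete, a hypothetical type implementing IsZeroer (the type and field names here are invented for illustration):

```go
package main

import (
	"fmt"
	"time"

	"gopkg.in/yaml.v2"
)

// Window is a hypothetical type; its IsZero method satisfies yaml.IsZeroer,
// so omitempty consults it instead of inspecting the public fields.
type Window struct {
	Start, End time.Time
}

func (w Window) IsZero() bool { return w.Start.IsZero() && w.End.IsZero() }

type Config struct {
	Name   string `yaml:"name"`
	Window Window `yaml:"window,omitempty"`
}

func main() {
	out, err := yaml.Marshal(&Config{Name: "x"})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints "name: x\n"; the zero-valued Window is omitted
}
```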
diff --git a/vendor/gopkg.in/yaml.v2/yamlh.go b/vendor/gopkg.in/yaml.v2/yamlh.go
index 3caeca04..e25cee56 100644
--- a/vendor/gopkg.in/yaml.v2/yamlh.go
+++ b/vendor/gopkg.in/yaml.v2/yamlh.go
@@ -1,6 +1,7 @@
package yaml
import (
+ "fmt"
"io"
)
@@ -239,6 +240,27 @@ const (
yaml_MAPPING_END_EVENT // A MAPPING-END event.
)
+var eventStrings = []string{
+ yaml_NO_EVENT: "none",
+ yaml_STREAM_START_EVENT: "stream start",
+ yaml_STREAM_END_EVENT: "stream end",
+ yaml_DOCUMENT_START_EVENT: "document start",
+ yaml_DOCUMENT_END_EVENT: "document end",
+ yaml_ALIAS_EVENT: "alias",
+ yaml_SCALAR_EVENT: "scalar",
+ yaml_SEQUENCE_START_EVENT: "sequence start",
+ yaml_SEQUENCE_END_EVENT: "sequence end",
+ yaml_MAPPING_START_EVENT: "mapping start",
+ yaml_MAPPING_END_EVENT: "mapping end",
+}
+
+func (e yaml_event_type_t) String() string {
+ if e < 0 || int(e) >= len(eventStrings) {
+ return fmt.Sprintf("unknown event %d", e)
+ }
+ return eventStrings[e]
+}
+
// The event structure.
type yaml_event_t struct {
@@ -521,9 +543,9 @@ type yaml_parser_t struct {
read_handler yaml_read_handler_t // Read handler.
- input_file io.Reader // File input data.
- input []byte // String input data.
- input_pos int
+ input_reader io.Reader // File input data.
+ input []byte // String input data.
+ input_pos int
eof bool // EOF flag
@@ -632,7 +654,7 @@ type yaml_emitter_t struct {
write_handler yaml_write_handler_t // Write handler.
output_buffer *[]byte // String output data.
- output_file io.Writer // File output data.
+ output_writer io.Writer // File output data.
buffer []byte // The working buffer.
buffer_pos int // The current position of the buffer.
diff --git a/wg-apply/README.md b/wg-apply/README.md
index d65fd61a..9da25f03 100644
--- a/wg-apply/README.md
+++ b/wg-apply/README.md
@@ -12,7 +12,7 @@ Improve the state of declarative object management by fixing `kubectl apply`, mo
Resources can be found in [this Google drive folder](https://drive.google.com/drive/folders/1wlpgkS2gFZXdp4x2WlRsfUBxkFlt2Gx0)
## Meetings
-* Regular WG Meeting: [Tuesdays at 9:30 PT (Pacific Time)]() (weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:30&tz=PT%20%28Pacific%20Time%29).
+* Regular WG Meeting: [Tuesdays at 9:30 PT (Pacific Time)](https://zoom.us/my/apimachinery) (weekly). [Convert to your timezone](http://www.thetimezoneconverter.com/?t=9:30&tz=PT%20%28Pacific%20Time%29).
## Organizers
diff --git a/wg-cloud-provider/cloud-provider-requirements.md b/wg-cloud-provider/cloud-provider-requirements.md
deleted file mode 100644
index ca16e81f..00000000
--- a/wg-cloud-provider/cloud-provider-requirements.md
+++ /dev/null
@@ -1,87 +0,0 @@
-# Conventions for Cloud Provider Repositories
-
-The purpose of this document is to define a common structure for the cloud
-provider repositories owned by current and future cloud provider SIGs[1]. In
-accordance with the WG-Cloud-Provider Charter[2] to "define a set of common
-expected behaviors across cloud providers", this proposal defines the location
-and structure of commonly expected code.
-
-As each provider can and will have additional features that go beyond expected
-common code, this document is only prescriptive to the location of the
-following code:
-
-* Cloud Controller Manager implementations.
-* Documentation.
-
-This document may be amended with additional locations that relate to enabling
-consistent upstream testing, independent storage drivers, and other code with
-common integration hooks.
-
-## Motivation
-
-The development of the Cloud Controller Manager[3] and Cloud Provider
-Interface[4] has enabled the provider SIGs to develop external providers that
-capture the core functionality of the upstream providers. By defining the
-expected locations and naming conventions of where the external provider code
-is, we continue in creating a consistent experience for:
-
-* Users of the providers, who will have easily understandable conventions for
- discovering and using all of the providers.
-* SIG-Docs, who will have a common hook for building or linking to externally
- managed documentation.
-* SIG-Testing, who will be able to use common entry points for enabling
- provider-specific e2e testing.
-* Future cloud provider authors, who will have a common framework and examples
- from which to build and share their code base.
-
-## Requirements
-
-Each cloud provider hosted within the `kubernetes` organization shall have a
-single repository named `kubernetes/cloud-provider-<provider_name>`. Those
-repositories shall have the following structure:
-
-* A `cloud-controller-manager` subdirectory that contains the implementation
- of the provider-specific cloud controller.
-* A `docs` subdirectory.
-* A `docs/cloud-controller-manager.md` file that describes the options and
- usage of the cloud controller manager code.
-* A `tests` subdirectory that contains testing code.
-
-Additionally, the repository should have:
-
-* A `docs/getting-started.md` file that describes the installation and basic
- operation of the cloud controller manager code.
-
-Where the provider has additional capabilities, the repository should have
-the following subdirectories that contain the common features:
-
-* `dns` for DNS provider code.
-* `cni` for the Container Network Interface (CNI) driver.
-* `flex` for the Flex Volume driver.
-* `installer` for custom installer code.
-
-Each repository may have additional directories and files that are used for
-additional features that include but are not limited to:
-
-* Other provider specific testing.
-* Additional documentation, including examples and developer documentation.
-* Dependencies on provider-hosted or other external code.
-
-## Timeline
-
-To facilitate community development, providers named in the `Make SIGs
-responsible for implementations of CloudProvider` patch[1] can immediately
-migrate their external provider work into their named repositories.
-
-Each provider will work to implement the required structure during the
-Kubernetes 1.11 development cycle, with conformance by the 1.11 release.
-
-After the 1.11 release all current and new provider implementations must
-conform with the requirements outlined in this document.
-
-## References
-
-1. [Makes SIGs responsible for implementations of `CloudProvider`](https://github.com/kubernetes/community/pull/1862)
-2. [Cloud Provider Working Group Proposal](https://docs.google.com/document/d/1m4Kvnh_u_9cENEE9n1ifYowQEFSgiHnbw43urGJMB64/edit#)
-3. [Cloud Controller Manager](https://github.com/kubernetes/kubernetes/tree/master/cmd/cloud-controller-manager)
-4. [Cloud Provider Interface](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go)
diff --git a/wg-container-identity/README.md b/wg-container-identity/README.md
index 623f3c1b..a36ca76c 100644
--- a/wg-container-identity/README.md
+++ b/wg-container-identity/README.md
@@ -17,7 +17,7 @@ Ensure containers are able to interact with external systems and acquire secure
## Organizers
* Clayton Coleman (**[@smarterclayton](https://github.com/smarterclayton)**), Red Hat
-* Greg Gastle (**[@destijl](https://github.com/destijl)**), Google
+* Greg Castle (**[@destijl](https://github.com/destijl)**), Google
## Contact
* [Slack](https://kubernetes.slack.com/messages/wg-container-identity)
diff --git a/wg-machine-learning/README.md b/wg-machine-learning/README.md
index a5796d46..b9c6efbd 100755
--- a/wg-machine-learning/README.md
+++ b/wg-machine-learning/README.md
@@ -30,9 +30,17 @@ A working group dedicated towards making Kubernetes work best for Machine Learni
The charter for this working group as [proposed](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kubernetes-dev/lOeMjOLilxI/wuQayFDvCQAJ) is as follows:
- * Asses the state of the art for ML workloads on K8s
- * Identify pain points users currently have with ML on k8s
- * Identify, prioritize and execute on improving k8s to better support ML workloads in the near, medium, and long term.
+ - Assess the state of the art for ML workloads on K8s
+ - Identify pain points users currently have with ML on k8s
+ - Identify, prioritize and execute on improving k8s to better support ML workloads in the near, medium, and long term.
+
+## Goals
+
+Topics include, but are not limited to:
+
+ - Ease the workflow from source changes to execution, as it is a common barrier to entry.
+ - Scheduler enhancements such as improved bin packing for accelerators, job queueing, fair sharing and gang scheduling.
+ - Runtime enhancements such as job data loading (common data set sizes range from tens of gigabytes to terabytes), accelerator support, persisting job output (ML workloads can run for days and rely heavily on checkpointing), and multi-tenancy and job isolation (dealing with potentially sensitive data sets).
+ - Job management such as experiment tracking (including enabling hyperparameter tuning systems) and scaling and deployment aspects of inference workloads.
-TODO: Finalize and update the charter after the initial kick off meeting on 3/1/2018.
<!-- END CUSTOM CONTENT -->