author     Kubernetes Submit Queue <k8s-merge-robot@users.noreply.github.com>  2017-12-18 13:23:35 -0800
committer  GitHub <noreply@github.com>  2017-12-18 13:23:35 -0800
commit     28ace4dfc6043450e5261ba117bfcedb5bea6d28 (patch)
tree       426b523a844402cec470f0b5316272bf1af83344
parent     e27b668c98c015267211b7c5091d09d3401ba902 (diff)
parent     49f87d3a3ac0b01ebdd2d43abc0edc31b9e9d2c2 (diff)
Merge pull request #1525 from spiffxp/tombstone-sig-release-content
Automatic merge from submit-queue. Tombstone content moved to kubernetes/sig-release xref https://github.com/kubernetes/sig-release/pull/48
-rw-r--r--  contributors/devel/release/OWNERS  8
-rw-r--r--  contributors/devel/release/README.md  119
-rw-r--r--  contributors/devel/release/issues.md  211
-rw-r--r--  contributors/devel/release/patch-release-manager.md  259
-rw-r--r--  contributors/devel/release/patch_release.md  93
-rw-r--r--  contributors/devel/release/scalability-validation.md  137
-rw-r--r--  contributors/devel/release/testing.md  176
7 files changed, 14 insertions(+), 989 deletions(-)
diff --git a/contributors/devel/release/OWNERS b/contributors/devel/release/OWNERS
index 6a3bfd86..afb042fa 100644
--- a/contributors/devel/release/OWNERS
+++ b/contributors/devel/release/OWNERS
@@ -3,10 +3,6 @@ reviewers:
- pwittrock
- steveperry-53
- chenopis
- - sig-release
+ - spiffxp
approvers:
- - saad-ali
- - pwittrock
- - steveperry-53
- - chenopis
- - sig-release
+ - sig-release-leads
diff --git a/contributors/devel/release/README.md b/contributors/devel/release/README.md
index 1ed7327d..d6eb9d6c 100644
--- a/contributors/devel/release/README.md
+++ b/contributors/devel/release/README.md
@@ -1,118 +1,3 @@
-# Kubernetes Release Roles
-**Table of Contents**
-* [Patch Release Manager](#patch-release-manager)
-* [Kubernetes Release Management Team for Major/Minor Releases](#kubernetes-release-management-team-for-majorminor-releases)
-* [Individual Contributors](#individual-contributors)
+The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/README.md
-This document captures the requirements and duties of the individuals responsible for Kubernetes releases.
-
-As documented in the [Kubernetes Versioning doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md), there are 3 types of Kubernetes releases:
-* Major (x.0.0)
-* Minor (x.x.0)
-* Patch (x.x.x)
-
-Major and minor releases are managed by a **Kubernetes Release Management Team**, and patch releases are managed by the **Patch Release Manager**. Exact roles and duties are defined below.
-
-## Patch Release Manager
-
-Patch releases are managed by the **Patch Release Manager**. Duties of the patch release manager include:
-* Ensuring the release branch (e.g. `release-1.5`) remains in a healthy state.
- * If the build breaks or any CI for the release branch becomes unhealthy due to a bad merge or infrastructure issue, ensure that actions are taken ASAP to bring it back to a healthy state.
-* Reviewing and approving [cherry picks](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md) to the release branch.
- * Patch releases should not contain new features, so ensure that cherry-picks are for bug/security fixes only.
- * Cherry picks should not destabilize the branch, so ensure that either the PR has had time to stabilize in master or will have time to stabilize in the release branch before the next patch release is cut.
-* Setting the exact schedule (and cadence) for patch releases and actually cutting the [releases](https://github.com/kubernetes/kubernetes/releases).
-
-See the [Patch Release Manager Playbook](patch-release-manager.md) for more details.
-
-Current and past patch release managers are listed [here](https://github.com/kubernetes/community/wiki).
-
-## Kubernetes Release Management Team for Major/Minor Releases
-
-Major and Minor releases are managed by the **Kubernetes Release Management Team** which is responsible for ensuring Kubernetes releases go out on time (as scheduled) and with high quality (stable, with no major bugs).
-
-Roles and responsibilities within the Kubernetes Release Management Team are as follows.
-
-#### Release Management Team Lead
-The Release Management Team Lead is the person ultimately responsible for ensuring the release goes out on time and with high quality. All the roles defined below report to the Release Management Team Lead.
-* Establishes and communicates responsibilities and deadlines to release management team members, developers/feature owners, SIG leads, etc.
-* Escalates and unblocks any issue that may jeopardise the release schedule or quality as quickly as possible.
-* Finds people to take ownership of any release blocking issues that are not getting adequate attention.
-* Keeps track of, and widely communicates, the status of the release (including status of all sub-leads, all release blockers, etc) and all deadlines leading up to release.
-* Manages [exception](https://github.com/kubernetes/features/blob/master/EXCEPTIONS.md) process for features that want to merge after code freeze.
-
-#### Release Branch Manager
-* Manages (initiates and enforces) code freeze on main branch as scheduled for the release.
- * Ensures no new features are merged after code complete, unless they've been approved by the [exception process](https://github.com/kubernetes/features/blob/master/EXCEPTIONS.md).
-* Cuts the `release-x.x` branch at the appropriate time during the milestone.
-* Ensures release branch (e.g. `release-1.5`) remains in a healthy state for the duration of the major or minor release.
- * If the build breaks, or any CI for the release branch becomes unhealthy due to a bad merge or infrastructure issue, ensures that actions are taken ASAP to bring it back to a healthy state.
-* Initiates automatic fast-forwards of the release branch to pick up all changes from master branch, when appropriate.
-* Reviews and approves [cherry picks](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md) to the release branch.
- * Ensures only bug/security fixes (but no new features) are cherry-picked after code complete unless approved by the [exception process](https://github.com/kubernetes/features/blob/master/EXCEPTIONS.md).
- * Ensures that cherry-picks do not destabilize the branch by either giving the PR enough time to stabilize in master or giving it enough time to stabilize in the release branch before cutting the release.
-* Cuts the actual [release](https://github.com/kubernetes/kubernetes/releases).
-
-#### Docs Lead
-* Sets docs related deadlines for developers and works with Release Management Team Lead to ensure they are widely communicated.
-* Sets up release branch for docs.
-* Pings feature owners to ensure that release docs are created on time.
-* Reviews/merges release doc PRs.
-* Merges the docs release branch to master to make release docs live as soon as the release is official.
-
-#### Features Lead
-* Compiles the major themes, new features, known issues, actions required, notable changes to existing behavior, deprecations, etc. and edits them into a release doc checked in to the feature repository (ready to go out with the release).
-* Collects and prepares the release notes
-
-#### Bug Triage Lead
-* Figures out which bugs (whether manually created or automatically generated) should be tracked for the release.
-* Ensures all bugs being tracked for the release have owners that are responsive.
-* Ensures all bugs are triaged as blocking or non-blocking.
-* Ensures all bugs that are blocking are being actively worked on, especially after code complete.
-
-#### Test Infra Lead
-* Sets up and maintains all CI for the release branch.
-
-#### Automated Upgrade Testing Lead
-* Ensures that automated upgrade tests provide a clear go/no-go signal for the release.
-* Tracks and finds owners for all issues with automated upgrade tests.
-
-#### Manual Upgrade Testing Lead
-* Ensures that any gaps in automated upgrade testing are covered by manual upgrade testing.
-* Organizes the manual upgrade testing efforts, including setting up instructions for manual testing, finding manual testing volunteers, and ensuring any issues discovered are communicated widely and fixed quickly.
-
-#### Testing Lead
-* Ensures that all non-upgrade test CI provides a clear go/no-go signal for the release.
-* Tracks and finds owners to fix any issues with any (non-upgrade) tests.
-
-## Individual Contributors
-
-Release responsibilities of individual contributors to the Kubernetes project are captured below.
-
-### Patch Release
-
-#### Cherry Picks
-If you have a patch that needs to be ported back to a previous release (meaning it is a critical bug/security fix), once it is merged to the Kubernetes `master` branch:
-* Mark your PR with the milestone corresponding to the release you want to port back to (e.g. `v1.5`), and add the `cherrypick-candidate` label to it.
-* The Patch Release Manager will then review the PR and if it is ok for cherry-picking, will apply a `cherrypick-approved` label to it.
-* Once your PR has been marked with the `cherrypick-approved` label by the Patch Release Manager, you should prepare a cherry-pick to the requested branch following the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md#how-do-cherrypick-candidates-make-it-to-the-release-branch).
-
-### Major/Minor Release
-
-#### Propose and Track Feature
-If you are developing a feature for Kubernetes, make sure that an issue is open in the [features repository](https://github.com/kubernetes/features/issues). If you are targeting a particular release, make sure the issue is marked with the corresponding release milestone.
-
-Ensure that all code for your feature is written, tested, reviewed, and merged before the code freeze date for the target release.
-
-During the code freeze period, fix any bugs discovered with your feature, and write feature documentation.
-
-##### Writing Feature Documentation
-
-1. Make sure your feature for the upcoming release is on the release tracking board (e.g. [link](https://docs.google.com/spreadsheets/d/1AFksRDgAt6BGA3OjRNIiO3IyKmA-GU7CXaxbihy48ns/edit?usp=sharing) for 1.8).
-2. Create a PR with documentation for your feature in the [documents repo](https://github.com/kubernetes/kubernetes.github.io).
- * **Your PR should target the release branch (e.g. [`release-1.8`](https://github.com/kubernetes/kubernetes.github.io/tree/release-1.8)), not the [`master`](https://github.com/kubernetes/kubernetes.github.io/tree/master) branch.**
- * Any changes to the master branch become live on https://kubernetes.io/docs/ as soon as they are merged, and for releases we do not want documentation to go live until the release is cut.
-3. Add a link to your docs PR in the release tracking board, and notify the docs lead for the release (e.g. [Steve Perry](https://github.com/steveperry-53) for 1.8).
-4. The docs lead will review your PR and give you feedback.
-5. Once approved, the docs lead will merge your PR into the release branch.
-6. When the release is cut, the docs lead will push the docs release branch to master, making your docs live on https://kubernetes.io/docs/.
+This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
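For contributors following the cherry-pick flow that the removed README described (milestone plus `cherrypick-candidate` label, then a cherry-pick PR once `cherrypick-approved` is applied), the sketch below shows what the final step typically looks like. It assumes a kubernetes/kubernetes checkout with an `upstream` remote pointing at the main repository; the PR number and branch are placeholders, and the exact invocation of the script may differ between releases.

```sh
# Create an automated cherry-pick PR of master PR #12345 onto release-1.5
# (GITHUB_USER, the branch, and the PR number are placeholders).
export GITHUB_USER=<your-github-username>
hack/cherry_pick_pull.sh upstream/release-1.5 12345
```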
diff --git a/contributors/devel/release/issues.md b/contributors/devel/release/issues.md
index 819b6e41..cccf12e9 100644
--- a/contributors/devel/release/issues.md
+++ b/contributors/devel/release/issues.md
@@ -1,210 +1,3 @@
-# Targeting issues and PRs to release milestones
+The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/issues.md
-This document describes how to target issues and PRs to a release.
-The process for shepherding issues into the release by the owner, release team, and GitHub bot
-is outlined below.
-
-## Definitions
-
-- *issue owners*: creator, assignees, and user who moved the issue into a release milestone
-- *Y days*: refers to business days (using the location local to the release-manager M-F)
-- *code slush*: starts when the master branch only accepts PRs for the release milestone. No additional feature development is merged after this point.
-- *code freeze*: starts 2 weeks after code slush. Only critical bug fixes are accepted into the release codebase.
-
-## Requirements for adding an issue to the milestone
-
-**Note**: Issues with unmet label requirements will automatically be removed from the release milestone.
-
-When adding an issue to a milestone, the Kubernetes bot will check that the following
-labels are set, and comment on the issue with the appropriate instructions. The
-bot will attempt to contact the issue creator 3 times (over 3 days)
-before automatically removing the issue from the milestone.
-
-Label categories:
-
-- SIG label owner
-- Priority
-- Issue type
-
-### SIG owner label
-
-The SIG owner label defines the SIG to which the bot will escalate if the issue is not resolved
-or updated by the deadline. If there are no updates after escalation, the
-issue may be automatically removed from the milestone.
-
-e.g. `sig/node`, `sig/multicluster`, `sig/apps`, `sig/network`
-
-**Note:**
- - For test-infrastructure issues use `sig/testing`.
- - For GKE and GCE issues use `sig/gcp` once it is created, and `sig/cluster-lifecycle` until then.
-
-### Priority
-
-The priority label is used by the bot to determine the escalation path before moving an issue
-out of the release milestone. It is also used to determine whether or not a release should be
-blocked on the resolution of the issue.
-
-- `priority/critical-urgent`: Never automatically move out of a release milestone; continually escalate to contributor and SIG through all available channels.
- - considered a release blocking issue
- - code slush: issue owner update frequency: every 3 days
- - code freeze: issue owner update frequency: daily
- - would require a patch release if left undiscovered until after the minor release.
-- `priority/important-soon`: Escalate to the issue owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
- - not considered a release blocking issue
- - would not require a patch release
- - will automatically be moved out of the release milestone at code freeze
-- `priority/important-longterm`: Escalate to the issue owners; move out of the milestone after 1 attempt.
- - even less urgent / critical than `priority/important-soon`
- - moved out of milestone more aggressively than `priority/important-soon`
-
-### Issue type
-
-The issue type is used to help identify the types of changes going into the release over time.
-This will allow us to develop a better understanding of what sorts of issues we would miss
-with a faster release cadence.
-
-This will also be used to escalate to the correct SIG GitHub team.
-
-- `kind/bug`: Fixes a newly discovered bug.
- - were not known issues at the start of the development period.
-- `kind/feature`: New functionality.
-- `kind/cleanup`: Adding tests, refactoring, fixing old bugs.
-
-## Bot communication
-
-The bot will communicate the state of an issue in the active milestone via comments and labels.
-
-### Comments
-
-All bot comments will mention issue owners and link to this doc. Bot
-comments in the workflow section of this document show only the
-message, but should be presumed to include the header and footer in the
-following example:
-
-```
-@pwittrock @droot
-
-<message>
-
-Additional instructions available [here](<link to this doc>)
-```
-
-### Labels
-
-The following labels are used by the bot to track the state of an
-issue in the milestone:
-
- - milestone/incomplete-labels - one or more of the required `kind/`, `priority`/ or `sig/` labels are missing
- - milestone/needs-approval - the `status/approved-for-milestone` label is missing
- - milestone/needs-attention - a status label is missing or an update is required
- - milestone/removed - the issue was removed from the milestone
-
-These labels are mutually exclusive - only one will appear on an issue at once.
-
-## Workflow
-
-1. An issue is added to the current release milestone (either through creation or update)
- - Bot checks to make sure all required labels are set on the issue
- - If any labels are missing, the bot comments listing the missing labels and applies the `milestone/incomplete-labels` label.
- ```
- **Action required**: Issue is missing the following required labels. Set the labels or the issue
- will be moved out of the milestone within 3 days.
-
- - priority
- - severity
- ```
- - **If required labels are not applied within 3 days of being moved to the milestone, the bot will move the issue out of the milestone and apply the `milestone/removed` label (unless the issue is critical-urgent).**
- - If the required labels are present, the bot checks whether the issue has the `status/approved-for-milestone` label.
- - If the approved label is not present, the bot comments indicating that the label must be applied by a SIG maintainer and applies the `milestone/needs-approval` label.
- ```
- **Action required**: This issue must have the `status/approved-for-milestone` label applied
- by a SIG maintainer.
- ```
- - If the approved label is present, the bot comments summarizing the label state and removes the other `milestone/*` labels.
- ```
- Issue label settings:
-
- sig/node: Issue will be escalated to SIG node if needed
- priority/critical-urgent: Never automatically move out of a release milestone.
- Escalate to SIG and contributor through all available channels.
- kind/bug: Fixes a bug.
- ```
- - **If the approved label is not applied within 7 days of the `milestone/needs-approval` label being applied, the bot will move the issue out of the milestone and apply the `milestone/removed` label (unless the issue is critical-urgent).**
-2. If labels change, the bot checks that the needed labels are present and updates its comment and labeling to reflect the issue's current state.
-3. Code slush
- - All issues are required to have a status label - one of `status/in-review` or `status/in-progress`.
- - If an issue does not have a status label, the bot comments indicating the required action and applies the `milestone/needs-attention` label.
- ```
- **Action required**: Must specify at most one of `status/in-review` or `status/in-progress`.
- ```
- - **priority/important- issues**
- - The bot includes a warning in the issue comment that the issue will be moved out of the milestone at code freeze.
- ```
- **Note**: This issue must be resolved or labeled as priority/critical-urgent by
- <date of code freeze> or it will automatically be moved out of the milestone
- ```
- - **priority/critical-urgent issues**
- - The bot includes a warning in the issue comment that the issue must be updated regularly.
- ```
- **Note**: This issue is marked as priority/critical-urgent, and is expected to be updated at
- least every 3 days.
- ```
- - If an issue hasn't been updated for more than 3 days, the bot comments and adds the `milestone/needs-attention` label.
- ```
- **Action Required**: This issue is marked as priority/critical-urgent, but has not been updated
- in 3 days. Please provide an update.
- ```
- - Owner updates can be a short ACK, but should include an ETA for completion and any risk factors.
- ```
- ACK. In progress
- ETA: DD/MM/YYYY
- Risks: Complicated fix required
- ```
-
- ```
- ACK. In progress
- ETA: ???
- Risks: Root cause unknown.
- ```
-4. Code freeze
- - **priority/important- issues**
- - The bot removes non-blocker issues from the milestone, comments as to why this was done, and adds the `milestone/removed` label.
- ```
- **Important**: Code freeze is in effect and only issues with priority/critical-urgent may remain
- in the active milestone. Removing it from the milestone.
- ```
- - **priority/critical-urgent issues**
- - If an issue has not been updated within 2 days, the bot comments and adds the `milestone/needs-attention` label.
-
-## Escalation
-
-SIGs will have issues needing attention escalated through the following channels
-
-- Comment mentioning the sig team appropriate for the issue type
-- Email the SIG googlegroup list
- - bootstrapped with the emails from the [community sig list](https://github.com/kubernetes/community/blob/master/sig-list.md)
- - may be configured to email an alternate googlegroup
- - may be configured to directly email SIG leads or other SIG members
-- Message the SIG slack channel, mentioning the SIG leads
- - bootstrapped with the slack channel and SIG leads from the [community sig list](https://github.com/kubernetes/community/blob/master/sig-list.md)
- - may be configured to message an alternate slack channel and users
-
-## Issues tracked in other repos
-
-Some issues are filed in repos outside the [kubernetes repo]. The bot must also be run against these
-repos and follow the same pattern. The release team can query issues across repos in the kubernetes org using
-a query like [this](https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+milestone%3Av1.7+org%3Akubernetes+)
-
-If the bot is not set up against the split repo, the repo owners should set up an umbrella tracking issue
-in the kubernetes/kubernetes repo and aggregate the status.
-
-`Release 1.<minor version> blocking umbrella: <repo name> (size: <number of open issues>)`
-
-It must also include:
-
-- a link to the repo with a query for issues in the milestone. See [this](https://github.com/kubernetes/kubectl/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20milestone%3Av1.7) example.
-- a list of unresolved issues blocking the release. See
-[this](https://github.com/kubernetes/kubernetes/issues/47747) example.
-
-
-[kubernetes repo]: https://github.com/kubernetes/kubernetes
+This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
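The milestone bot described in the removed issues.md requires one label from each of the `sig/`, `priority/`, and `kind/` categories. As a rough illustration of that check (the label categories come from the document above; the function itself is a hypothetical sketch, not the real munge-bot code):

```sh
# Report which required label categories are missing from an issue,
# given its labels as a whitespace-separated list.
missing_milestone_labels() {
  local labels="$1"   # e.g. "sig/node kind/bug"
  local missing=""
  for prefix in sig/ priority/ kind/; do
    echo "$labels" | tr ' ' '\n' | grep -q "^${prefix}" || missing="$missing ${prefix%/}"
  done
  if [ -z "$missing" ]; then echo "all required labels present"; else echo "missing:${missing}"; fi
}

missing_milestone_labels "sig/node kind/bug"                           # missing: priority
missing_milestone_labels "sig/cli priority/critical-urgent kind/bug"   # all required labels present
```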
diff --git a/contributors/devel/release/patch-release-manager.md b/contributors/devel/release/patch-release-manager.md
index 1e444fb9..da1290e5 100644
--- a/contributors/devel/release/patch-release-manager.md
+++ b/contributors/devel/release/patch-release-manager.md
@@ -1,258 +1,3 @@
-# Patch Release Manager Playbook
-
-This is a playbook intended to guide new patch release managers.
-It consists of opinions and recommendations from former patch release managers.
-
-Note that patch release managers are ultimately responsible for carrying out
-their [duties](README.md#patch-release-manager) in whatever manner they deem
-best for the project.
-The playbook is more what you call "guidelines" than actual rules.
-
-## Getting started
-
-* Add yourself to the [Release Manager table](https://github.com/kubernetes/community/wiki)
- so the community knows you're the point of contact.
-* Ask a maintainer to add you to the [kubernetes-release-managers](https://github.com/orgs/kubernetes/teams/kubernetes-release-managers/members)
- team so you have write access to the main repository.
-* Ask to be added to the [kubernetes-security](https://groups.google.com/forum/#!forum/kubernetes-security)
- mailing list.
-* Ask to be given access to post to the [kubernetes-announce](https://groups.google.com/forum/#!forum/kubernetes-announce)
- and [kubernetes-dev-announce](https://groups.google.com/forum/#!forum/kubernetes-dev-announce)
- mailing lists.
-* Sync up with the outgoing release branch manager to take ownership of any
- lingering issues on the branch.
-* Run [anago](https://github.com/kubernetes/release) in mock mode to get prompts
- for setting up your environment, and familiarize yourself with the tool.
-
-## Cherrypick requests
-
-As a patch release manager, you are responsible for reviewing
-[cherrypicks](../cherry-picks.md) on your release branch.
-
-You can find candidate PRs in the [cherrypick queue dashboard](http://cherrypick.k8s.io/#/queue).
-Once a cherrypick PR is created and ready for your review, it should show up in
-a GitHub search such as [`is:pr is:open base:release-1.6`](https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr%20is%3Aopen%20base%3Arelease-1.6).
-
-As an example of the kind of load to expect, there were about 150 cherrypick PRs
-against the `release-1.6` branch in the 3 months between v1.6.0 and v1.7.0.
-
-For each cherrypick request:
-
-1. **Decide if it meets the criteria for a cherrypick**
-
- Make sure the PR author has supplied enough information to answer:
-
- * What bug does this fix?
- (e.g. *feature X was already launched but doesn't work as intended*)
- * What is the scope of users affected?
- (e.g. *anyone who uses feature X*)
- * How big is the impact on affected users?
- (e.g. *pods using X fail to start*)
- * How have you verified the fix works and is safe?
- (e.g. *added new regression test*)
-
- Ask the PR author for details if these are missing and not obvious.
- If you aren't sure what to do, escalate to the relevant SIGs.
-
- **Notes**
-
- * Version bumps (e.g. v0.5.1 -> v0.5.2) for dependencies with their own
- release cycles (e.g. kube-dns, autoscaler, ingress controllers, etc.)
- deserve special attention because it's hard to see what's changing.
- In the past, such bumps have been a significant source of regressions in
- the stable release branch.
-
- Check the release notes for the dependency to make sure there are no new
- behaviors that could destabilize the release branch.
- Ideally you should only accept version bumps whose release deltas contain
- only changes that you would have approved individually, if they had been
- part of the Kubernetes release cycle.
-
- However, this gets tricky when there are fixes you need for your branch
- that are tied up with other changes. Ask the cherrypick requester for
- context on the other changes and use your best judgment.
-
- * Historically (up through at least 1.6), patch release managers have
- occasionally granted exceptions to the "no new features" rule for
- cherrypicks that are confined to plugins like cloudproviders
- (e.g. vSphere, Azure) and volumes (e.g. Portworx).
-
- However, we required that these exceptions be approved by the plugin
- owners, who were asked to `/approve` through the normal `OWNERS` process
- (despite it being a cherrypick PR).
-
-1. **Make sure it has an appropriate release note**
-
- [Good release notes](https://github.com/kubernetes/community/issues/484)
- are particularly important for patch releases because cluster admins expect
- the release branch to remain stable and need to know exactly what changed.
- Take care to ensure every cherrypick that deserves a release note has one
- *before you approve it* or else the change may fall through the cracks at
- release cut time.
-
- Also make sure the release note expresses the change from a user's
- perspective, not from the perspective of someone contributing to Kubernetes.
- Think about what the user would experience when hitting the problem,
- not the implementation details of the root cause.
-
- For example:
-
- User perspective (good) | Code perspective (bad)
- ----------------------- | ----------------------
- *"Fix kubelet crash when Node detaches old volumes after restart."* | *"Call initStuff() before startLoop() to prevent race condition."*
-
- Ask the PR author for context if it's not clear to you what the release note
- should say.
-
- Lastly, make sure the release note is located where the [relnotes](https://github.com/kubernetes/release/blob/master/relnotes)
- script will find it:
-
- * If the cherrypick PR comes from a branch called `automated-cherry-pick-of-*`,
- then the release notes are taken from each parent PR (possibly more than one)
- and the cherrypick PR itself is ignored.
-
- Make sure the cherrypick PR and parent PRs have the `release-note` label.
-
- * Otherwise, the release note is taken from the cherrypick PR.
-
- Make sure the cherrypick PR has the `release-note` label.
-
- **Notes**
-
- * Almost all changes that are important enough to cherrypick are important
- enough that we should inform users about them when they upgrade.
-
- Rare exceptions include test-only changes or follow-ups to a previous
- cherrypick whose release note already explains all the intended changes.
-
-1. **Approve for cherrypick**
-
- PRs on release branches follow a different review process than those on the
- `master` branch.
- Patch release managers review every PR on the release branch,
- but the focus is just on ensuring the above criteria are met.
- The code itself was already reviewed, assuming it's copied from `master`.
-
- * For an *automated cherrypick* (created with `hack/cherry_pick_pull.sh`),
- you can directly apply the `approved` label as long as the parent PR was
- approved and merged into `master`.
- If the parent PR hasn't merged yet, leave a comment explaining that you
- will wait for it before approving the cherrypick.
- We don't want the release branch to get out of sync if the parent PR changes.
-
- Then comment `/lgtm` to apply the `lgtm` label and notify the author
- you've reviewed the cherrypick request.
-
- * For a *manual patch or cherrypick* (not a direct copy of a PR already merged
- on `master`), leave a comment explaining that it needs to get
- LGTM+Approval through the usual review process.
-
- You don't need to do anything special to fall back to this process.
- The bot will suggest reviewers and approvers just like on `master`.
-
- Finally, apply the `cherrypick-approved` label and remove the `do-not-merge`
- label to tell the bot that this PR is allowed to merge into a release
- branch.
-
- Note that the PR will not actually merge until it meets the usual criteria
- enforced by the merge bot (`lgtm` + `approved` labels, required presubmits,
- etc.) and makes its way through the submit queue.
- To give cherrypick PRs priority over other PRs in the submit queue,
- make sure the PR is in the `vX.Y` release milestone, and that the milestone
- has a due date.
-
-## Branch health
-
-Keep an eye on approved cherrypick PRs to make sure they aren't getting blocked
-on presubmits that are failing across the whole branch.
-Also periodically check the [testgrid](https://k8s-testgrid.appspot.com)
-dashboard for your release branch to make sure the continuous jobs are healthy.
-
-Escalate to test owners or [sig-testing](https://github.com/kubernetes/community/tree/master/sig-testing)/[test-infra](https://github.com/kubernetes/test-infra)
-as needed to diagnose failures.
-
-## Release timing
-
-The general guideline is to leave about 2 to 4 weeks between patch releases on
-a given minor release branch.
-The lower bound is intended to avoid upgrade churn for cluster administrators,
-and to allow patches time to undergo testing on `master` and on the release
-branch.
-The upper bound is intended to avoid making users wait too long for fixes that
-are ready to go.
-
-The actual timing is up to the patch release manager, who should take into
-account input from cherrypick PR authors and SIGs.
-For example, some bugs may be serious enough, and have a clear enough fix,
-to trigger a new patch release immediately.
-
-You should attend the [sig-release](https://github.com/kubernetes/community/tree/master/sig-release)
-meetings whenever possible to give updates on activity in your release branch
-(bugs, tests, cherrypicks, etc.) and discuss release timing.
-
-When you have a plan for the next patch release, send an announcement
-([example](https://groups.google.com/forum/#!topic/kubernetes-dev-announce/HGYsjOFtcdU))
-to [kubernetes-dev@googlegroups.com](https://groups.google.com/forum/#!forum/kubernetes-dev)
-(and *BCC* [kubernetes-dev-announce@googlegroups.com](https://groups.google.com/forum/#!forum/kubernetes-dev-announce))
-several working days in advance.
-You can generate a preview of the release notes with the [relnotes](https://github.com/kubernetes/release/blob/master/relnotes)
-script ([example usage](https://gist.github.com/enisoc/058bf0feddf6bffd8e25aa72f9dc38d6)).
-
-## Release cut
-
-A few days before you plan to cut a patch release, put a temporary freeze on
-cherrypick requests by removing the `cherrypick-approved` label from any PR that
-isn't ready to merge.
-Leave a comment explaining that a freeze is in effect until after the release.
-
-The freeze serves several purposes:
-
-1. It ensures a minimum time period during which problems with the accepted
- patches may be discovered by people testing on `master`, or by continuous
- test jobs on the release branch.
-
-1. It allows the continuous jobs to catch up with `HEAD` on the release branch.
- Note that you cannot cut a patch release from any point other than `HEAD`
- on the release branch; for example, you can't cut at the last green build.
-
-1. It allows slow test jobs like "serial", which has a period of many hours,
- to run several times at `HEAD` to ensure they pass consistently.
-
-On the day before the planned release, run a mock build with `anago` to make
-sure the tooling is ready.
-If the mock goes well and the tests are healthy, run the real cut the next day.
-
-After the release cut, reapply the `cherrypick-approved` label to any PRs that
-had it before the freeze, and go through the backlog of new cherrypicks.
-
-### Hotfix release
-
-A normal patch release rolls up everything that merged into the release branch
-since the last patch release.
-Sometimes it's necessary to cut an emergency hotfix release that contains only
-one specific change relative to the last patch release.
-For example, we may need to fix a severe bug quickly without taking on the added
-risk of allowing other changes in.
-
-In this case, you would create a new, three-part branch of the form
-`release-X.Y.Z`, which [branches from a tag](https://github.com/kubernetes/release/blob/master/docs/branching.md#branching-from-a-tag)
-called `vX.Y.Z`.
-You would then use the normal cherrypick PR flow, except that you target PRs at
-the `release-X.Y.Z` branch instead of `release-X.Y`.
-This lets you exclude the rest of the changes that already went into
-`release-X.Y` since the `vX.Y.Z` tag was cut.
-
-Make sure you communicate clearly in your release plan announcement that some
-changes on the release branch will be excluded, and will have to wait until the
-next patch release.
-
-### Security release
-
-The Product Security Team (PST) will contact you if a security release is needed
-on your branch.
-In contrast to a normal release, you should not make any public announcements
-or push tags or release artifacts to public repositories until the PST tells you to.
-
-See the [Security Release Process](../security-release-process.md) doc for more
-details.
+The original content of this file has been migrated to https://git.k8s.io/sig-release/release-process-documentation/release-team-guides/patch-release-manager-playbook.md
+This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
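The removed playbook points patch release managers at a GitHub search such as `is:pr is:open base:release-1.6` to find cherry-pick PRs awaiting review. A rough equivalent using the GitHub search API is sketched below (the branch name is an example, the `cherrypick-approved` label name is taken from the playbook above, and an authenticated request avoids rate limits):

```sh
# All open PRs targeting the release branch.
curl -s "https://api.github.com/search/issues?q=is:pr+is:open+repo:kubernetes/kubernetes+base:release-1.6" \
  | jq -r '.items[] | "#\(.number)\t\(.title)"'

# Only those already approved for cherry-pick, e.g. when re-applying labels after a pre-release freeze.
curl -s "https://api.github.com/search/issues?q=is:pr+is:open+repo:kubernetes/kubernetes+base:release-1.6+label:cherrypick-approved" \
  | jq -r '.items[] | "#\(.number)\t\(.title)"'
```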
diff --git a/contributors/devel/release/patch_release.md b/contributors/devel/release/patch_release.md
index d04d96f9..1b074759 100644
--- a/contributors/devel/release/patch_release.md
+++ b/contributors/devel/release/patch_release.md
@@ -1,92 +1,3 @@
-# Building Kubernetes patch releases
+The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/patch_release.md
-This document describes the process for creating Kubernetes patch releases and communicating the
-status of known issues impacting the current release.
-
-## Process
-
-After a Kubernetes minor release, patch releases are managed by the Kubernetes patch release manager.
-The patch release manager works with SIGs to coordinate merging cherrypicks into the release branch,
-then cutting and publishing a new patch release.
-
-## Communicating fixes to the community
-
-The canonical status of issues in a patch release will be communicated through GitHub issues.
-
-To make it easy for the community to understand the status of each patch release, GitHub issues
-must follow a consistent format and contain basic information about the issue.
-
-GitHub issues targeted at patch releases must contain the following items:
-
-### Priority label
-
-The priority label describes the urgency and severity of the issue.
-
-- `priority/critical-urgent`: the issue is severe enough that it requires an immediate patch release
- (storage or network issues that can cause data corruption or outages for instance)
-- `priority/important-soon`: the issue should be fixed in a patch but does not need to be fixed immediately
-
-### Sig label
-
-SIG labels should be applied for all SIGs involved in the issue / resolution.
-
-- `sig/cli`
-- `sig/node`
-
-### Summary template
-
-The issue description should be kept up-to-date by the issue owner throughout the resolution process. Items
-discovered as part of triage should be reflected in the issue description.
-
-While much of this information may exist as discussion comments on the issue,
-providing the information in an easily understandable format and location
-makes it much easier to quickly understand the state of the upcoming patch release.
-
-The template will start with a section describing which releases the issue was introduced and resolved in. The
-format will be machine parsable so that bots can apply labels and generate reports using the information in
-the issue.
-
-```release
-introduced-in=vX.Y.Z
-```
-
-```release
-resolved-in=vX.Y.Z
-resolved-in=vX.Y+1.Z
-```
-
-The rest of the template is as follows:
-
-```sh
-## Symptoms
-
-What users are experiencing
-
-## Root cause
-
-The technical cause of the symptoms including the list of components / binaries.
-
-e.g.
-
-Binaries:
-- kubectl
-
-Kubectl was incorrectly calculating the patch for apply. When diffing foo...
-
-## Impact
-
-Why this is important enough to warrant a patch vs waiting until the next minor release
-
-## Resolution
-
-How the issue will be (was) fixed
-
-## PRs
-
-- #23456
-```
-
-## Communicating security fixes to the community
-
-Due to the sensitive nature of security fixes, their details may be omitted from the GitHub issue, which may
-simply state that the owner is working with the [product security team](https://github.com/kubernetes/community/blob/master/contributors/devel/security-release-process.md) on a resolution.
+This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
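The `introduced-in=` / `resolved-in=` keys in the release blocks above are intended to be machine parsable. As a small illustration of how a bot or report generator might consume them (the key names come from the removed template; `issue-body.md` is a hypothetical local copy of an issue description):

```sh
# Extract the release metadata lines from a saved issue description.
grep -E '^(introduced|resolved)-in=v[0-9]+\.[0-9]+\.[0-9]+' issue-body.md | sort -u
# Example output:
#   introduced-in=v1.8.1
#   resolved-in=v1.8.3
#   resolved-in=v1.9.0
```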
diff --git a/contributors/devel/release/scalability-validation.md b/contributors/devel/release/scalability-validation.md
index 08ecf09b..8a943227 100644
--- a/contributors/devel/release/scalability-validation.md
+++ b/contributors/devel/release/scalability-validation.md
@@ -1,136 +1,3 @@
-# Scalability validation of the release
+The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/scalability-validation.md
-Scalability is an integral part of k8s. Numerous issues were identified during release 1.7 while running tests on large clusters (2k-5k nodes). A majority of these manifested only in large clusters - more info under umbrella issue [#47344]. The issues ranged from large cluster setup (both cloud-provider specific and independent ones) to test-infra problems to previously uncaught regressions with performance and resource usage and so on.
-
-We started [supporting] 5000-node clusters from k8s 1.6 and need to make sure this scale is supported in future releases too. We did it this time by running performance and correctness tests under different configurations manually. But this testing procedure needs to be automated, concrete & well-maintained.
-
-## Goals
-
-In this document, we address the following process-related problems wrt scale testing:
-
-- Automate large cluster tests with a reasonable frequency
-- Concretely define the testing configuration used for the release
-- Make scalability validation a release prerequisite
-- Clearly lay down responsibilities for scale tests
-
-## Non-Goals
-
-This document does not intend to:
-
-- Define the set of tests that comprise scalability and correctness suite
-- Define SLIs/SLOs (that’s discussed [here]) and thresholds for the tests
-- Discuss particular performance issues we’re facing wrt different releases
-
-While these are interesting from a scalability perspective, they are not testing process issues but details of the tests themselves, and they can change without requiring many changes to the process.
-
-## Proposed design
-
-For each of the problems above we propose solutions and discuss some caveats. This is an open discussion and better solutions may evolve in future.
-
-### Automate large cluster tests
-
-Currently, there are 2 kinds of tests needed to validate the claim “kubernetes supports X-node clusters”. These are:
-
-- Correctness tests - e2e tests verifying expected system behaviours
-- Performance tests - e2e tests for stress-testing the system perf ([Feature:Performance])
-
-We need to run them on 5k-node clusters, but they’re:
-
-- Time-consuming (12-24 hrs)
-- Expensive (tens of thousands of core hours per run)
-- Blocking other large tests (quota limitations + only one large test project available viz. 'kubernetes-scale')
-
-So we don’t want to run them too frequently. On the other hand, running them too infrequently means late identification and piling up of regressions. So we choose the following middle ground:
-
-- Performance tests on 2k-node/5k-node GCE clusters alternatingly from Mon-Sat
- - would give us one performance run from each day to help catch regressions fast
- - running 2k-node on alternating days gives time for 5k-node correctness tests to run on those days
- - many of the performance regressions on 5k-node should also be seen on 2k-node (albeit a smaller version probably)
-- Correctness tests on 2k-node/5k-node GCE clusters alternatingly from Mon-Sat
- - would give us one correctness run from each day to help catch regressions fast
- - running 2k-node on alternating days gives time for 5k-node performance tests to run on those days
- - many of the correctness regressions on 5k-node should also be seen on 2k-node
-- Performance tests on 2k-node GKE cluster on Sun
- - would give us a performance run for sunday too
- - would also additionally help verify performance of GKE
-- Correctness tests on 2k-node GKE cluster on Sun
- - would give us a correctness run for sunday too
- - would also additionally help verify correctness of GKE
-
-Here's the proposed schedule (may be fine-tuned later based on test health / release schedule):
-(B = release-blocking job)
-
-| Day | | |
-| ------------- |:-------------:| -----:|
-| Mon | 5k-node performance @ 00:01 PT (B) | 2k-node correctness @ 22:01 PT |
-| Tue | 2k-node performance @ 05:01 PT | 5k-node correctness @ 14:01 PT (B) |
-| Wed | 5k-node performance @ 00:01 PT (B) | 2k-node correctness @ 22:01 PT |
-| Thu | 2k-node performance @ 05:01 PT | 5k-node correctness @ 14:01 PT (B) |
-| Fri | 5k-node performance @ 00:01 PT (B) | 2k-node correctness @ 22:01 PT |
-| Sat | 2k-node performance @ 05:01 PT | 5k-node correctness @ 14:01 PT (B) |
-| Sun | 'GKE' 2k-node performance @ 05:01 PT | 'GKE' 2k-node correctness @ 15:01 PT |
-
-Note: The above schedule is subject to change based on job health, release requirements, etc. You should find it up-to-date in this [calendar].
-
-Why this schedule?
-
-- 5k tests might need special attention in case of failures, so they should mostly run on weekdays (EDIT: Given that they're quite stable now, we're trying to run them on weekends too)
-- Running a large-scale performance job and a large-scale correctness job each day would:
- - help catch regressions on a daily basis
- - help verify fixes with low latency
- - ensure a good release signal
-- Running large scale tests on GKE once a week would help verify GKE setup also, at no real loss of signal ideally
-
-Why run GKE tests at all?
-
-Google is currently using a single project for scalability testing, on both GCE and GKE. As a result we need to schedule them together. There's a plan for CNCF becoming responsible for funding k8s testing, and GCE/GKE tests would be separated to different projects when that happens, with only GCE being funded by them. This ensures fairness across all cloud providers.
-
-### Concretely define test configuration
-
-This is a relatively minor issue, but it is important that we clearly define the test configuration we use for the release. E.g. there was confusion this time around the testing of k8s services, the machine type, and the number of nodes we used (we tested 4k instead of 5k due to a CIDR-setup problem). For ref - [#47344] [#47865]. To solve this, we need to document it using the below template in a file named scalability-validation-report.md placed under kubernetes/features/release-<N>. This file should be linked from the scalability section in the release's CHANGELOG.md.
-
-```
-Validated large cluster performance under the following configuration:
-- Cloud-provider - [GCE / GKE / ..]
-- No. of nodes - [5k (desired) / 4k / ..]
-- Node size, OS, disk size/type
-- Master size, OS, disk size/type
-- Any non-default config used - (monitoring with stackdriver, logging with elasticsearch, etc)
-- Any important test details - (services disabled in load test, pods increased in density test, etc)
-- <job-name, run#> of the validating run (to know other specific details from the logs)
-
-Validated large cluster correctness under the following configuration:
-- <similar to above>
-
-Misc:
-<Any important scalability insights/issues/improvements in the release>
-```
-
-### Make scalability validation a release prerequisite
-
-The model we followed this time was to create an umbrella issue ([#47344]) for scalability testing and label it as a release-blocker. While it helped block the release, it didn’t receive enough traction from individual SIGs as scale tests were not part of the release-blocking suite. As a result, the onus for fixing issues fell almost entirely on sig-scalability due to time constraints. Thus the obvious requirement here is to make the 5k-node tests release blockers. This, along with test automation, ensures failures are identified quickly and get traction from relevant SIGs.
-
-### Clearly lay down responsibilities for scale tests
-
-Responsibilities that lie with sig-scalability:
-
-- Setting and tuning the schedule of the tests on CI
-- Ensuring the project is healthy and quotas are sufficient
-- Documenting the testing configuration for the release
-- Identifying and fixing performance test failures (triaging/delegating as needed)
-
-Responsibilities lying with other SIGs/teams as applicable (could be sig-scalability too):
-
-- Fixing failing correctness test - Owner/SIG for the e2e test (as specified by [test_owners.csv])
-- Fixing performance regression - Owner/SIG of relevant component (as delegated by sig-scalability)
-- Testing infrastructure issues - @sig-testing
-- K8s-specific large-cluster setup issues - @sig-cluster-lifecycle
-- GKE-specific large-cluster setup issues - @goog-gke
-
-
-[#47344]: https://github.com/kubernetes/kubernetes/issues/47344
-[supporting]: http://blog.kubernetes.io/2017/03/scalability-updates-in-kubernetes-1.6.html
-[here]: https://docs.google.com/document/d/15rD6XBtKyvXXifkRAsAVFBqEGApQxDRWM3H1bZSBsKQ
-[#47865]: https://github.com/kubernetes/kubernetes/issues/47865
-[test_owners.csv]: https://github.com/kubernetes/kubernetes/blob/master/test/test_owners.csv
-[calendar]: https://calendar.google.com/calendar?cid=Z29vZ2xlLmNvbV9tNHA3bG1jODVubGlmazFxYzRnNTRqZjg4a0Bncm91cC5jYWxlbmRhci5nb29nbGUuY29t
+This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
diff --git a/contributors/devel/release/testing.md b/contributors/devel/release/testing.md
index e12a8914..2ae76112 100644
--- a/contributors/devel/release/testing.md
+++ b/contributors/devel/release/testing.md
@@ -1,175 +1,3 @@
-# Kubernetes test sustaining engineering
+The original content of this file has been migrated to https://git.k8s.io/sig-release/ephemera/testing.md
-This document describes how Kubernetes automated tests are maintained as part
-of the development process.
-
-## Definitions
-
-The following definitions are for tests continuously run as part of CI.
-
-- *test*
- - *artifact*: a row in [test grid]
- - a single test run as part of a test job
- - may be either an e2e test or an integration / unit test
-- *test job*
- - *artifact*: a tab in [test grid]
- - a collection of tests that are run together in a shared environment, which may:
- - run in a specific environment - e.g. [gce, gke, aws], [cvm, gci]
- - run under specific conditions - e.g. [upgrade, version skew, soak, serial]
- - test a specific component - e.g. [federation, node]
-- *test infrastructure*
- - not directly shown in the test grid
- - libraries and infrastructure common across tests
-- *test failure*
- - persistently failing test runs for a given test
-- *test flake*
- - periodically failing test runs for a given test
-
-## Ownership
-
-Each test must have an escalation point (email + slack). The escalation point is responsible for
-keeping the test healthy. Fixes for test failures caused by areas of ownership outside the
-responsibility of the escalation point should be coordinated with other teams by the
-test escalation point.
-
-Escalation points are expected to be responsive within 24 hours, and prioritize test failure
-issues over other issues.
-
-### test
-
-Each test must have an owning SIG or group that serves as the escalation point for flakes and failures.
-The name of the owner should be present in the test name so that it is displayed in the test grid.
-
-Owners are expected to maintain a dashboard of the tests that they own and
-maintain the test health.
-
-**Note:** e2e test owners are present in the test name
-
-### test job
-
-Each test job must have an owning SIG or group that is responsible for the health of the test job. The
-owner may also serve as an escalation point for issues impacting a test only in that specific test job
-(passing in other test jobs). e.g. If a test only fails on aws or only on gke test jobs, the test job
-owner and test owner must identify the owner for resolving the failure.
-
-Owners of test jobs are expected to maintain a dashboard of the test jobs they own and
-maintain the test job health.
-
-SIGs should update the [job config] and mark the tests that they own.
-
-### test infrastructure
-
-Issues with underlying test infrastructure (e.g. prow) should be escalated to sig/testing.
-
-## Monitoring project wide test health
-
-Dashboards for Kubernetes release-blocking tests are present on the [test grid].
-
-The following dashboards are expected to remain healthy throughout the development cycle.
-
-- [release-master-blocking](https://k8s-testgrid.appspot.com/release-master-blocking)
- - Tests run against the master branch
-- [1.7-master-upgrade & 1.6-master-upgrade](https://k8s-testgrid.appspot.com/master-upgrade)
- - Upgrade a cluster from 1.7 to the master branch and run tests
- - Upgrade a cluster from 1.6 to the master branch and run tests
-- [1.7-master-kubectl-skew](https://k8s-testgrid.appspot.com/master-kubectl-skew)
- - Run tests skewing the master and kubectl by +1/-1 version
-
-## Triaging ownership for test failures
-
-When a test is failing, it must be quickly escalated to the correct owner. Tests that
-are left to fail for days or weeks become toxic and create noise in the system health
-metrics.
-
-Each SIG is expected to ensure that the release blocking tests that belong to the SIG remain
-perpetually healthy by monitoring the test grid and escalating failures.
-
-Failing tests that are not being addressed can be escalated by following the
-[sig escalation](#sig-test-escalation) path.
-
-*Tests without a responsive owner should be assigned a new owner or disabled.*
-
-### test failure
-
-A test is failing.
-
-*Symptom*: A row in the test grid is consistently failing across multiple jobs
-
-*How to check for symptom*: Go to the [triage tool], and
-search for the failing test by name. Check to see if it is failing across
-multiple jobs, or just one.
-
-*Action*: Escalate to the owning SIG present in the test name (e.g. SIG-cli)
-
-### test job failure
-
-A test *job* is unhealthy causing multiple unrelated tests to fail.
-
-*Symptom*: Multiple unrelated rows in the test grid are consistently failing in a single job,
-but passing in other jobs.
-
-*How to check for symptom*: Go to the [test grid]. Are a bunch of tests failing or just a couple? Are
-those tests passing on other jobs?
-
-*Action*: Escalate to the owning SIG for the test job.
-
-### test failure (only on specific jobs)
-
-A test is failing, but only on specific jobs.
-
-*Symptom*: A row in the test grid is consistently failing on a single job, but passing on other jobs.
-
-*How to check for symptom*: Go to the [triage tool], and
-search for the failing test by name. Check to see if it is failing across
-multiple jobs, or just one.
-
-*Action*: Escalate to the owning SIG present in the test name (e.g. SIG-cli). They
-will coordinate a fix with the test job owner.
-
-## Triaging ownership for test flakes
-
-To triage ownership of flakes, follow the same escalation process as for failures. Flakes are considered less
-urgent than persistent failures, but still expected to have a root cause investigation within 1 week.
-
-## Broken test workflow
-
-SIGs are expected to proactively monitor and maintain their tests.
-
-- File an issue for the broken test so it can be referenced and discovered
- - Set the following labels: `priority/failing-test`, `sig/*`
- - Assign the issue to whoever is working on it
-- Root cause analysis of the test failure is performed by the owner
-- **Note**: The owning SIG for a test can reassign ownership of a resolution to another SIG only after getting
- approval from that SIG
- - This is done by the target SIG reassigning to themselves, not the test owning SIG assigning to someone else.
-- Test failures are resolved either by fixing the underlying issue or disabling the test
- - Disabling a test may be the correct thing to do in some cases - such as upgrade tests running e2e tests for alpha
- features disabled in newer releases.
-- SIG owner monitors the test grid to make sure the tests begin to pass
-- SIG owner closes the issue
-
-## SIG test escalation
-
-As a Kubernetes developer, if you observe a test failure, first search to see if an issue has been filed already,
-and if not (optionally file an issue and) escalate to the SIG escalation point.
-If the escalation point is unresponsive within a day, escalate to the SIG googlegroup and/or slack channel,
-mentioning the SIG leads. If escalation through the SIG googlegroup, slack channel and SIG leads is unsuccessful,
-escalate to SIG release through the googlegroup and slack - mentioning the SIG leads.
-
-The SIG escalation points should be bootstrapped from the [community sig list].
-
-## SIG Recommendations
-
-- Figure out which e2e test jobs are release blocking for your SIG.
-- Develop a process for making sure the SIGs test grid remains healthy and resolving test failures.
-- Consider moving the e2e tests for the SIG into their own test jobs if this would make maintaining them easier.
-- Consider developing a playbook for how to resolve test failures and how to identify whether or not another SIG owns the resolution of the issue.
-
-[community sig list]: https://github.com/kubernetes/community/blob/master/sig-list.md
-[triage tool]: https://storage.googleapis.com/k8s-gubernator/triage/index.html
-[test grid]: https://k8s-testgrid.appspot.com/
-[release-master-blocking]: https://k8s-testgrid.appspot.com/release-master-blocking#Summary
-[1.7-master-upgrade]: https://k8s-testgrid.appspot.com/1.7-master-upgrade#Summary
-[1.6-master-upgrade]: https://k8s-testgrid.appspot.com/1.6-master-upgrade#Summary
-[1.7-master-kubectl-skew]: https://k8s-testgrid.appspot.com/1.6-1.7-kubectl-skew
-[job config]: https://github.com/kubernetes/test-infra/blob/master/jobs/config.json
+This file is a placeholder to preserve links. Please remove after 3 months or the release of kubernetes 1.10, whichever comes first.
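The escalation section of the removed testing.md asks developers to search for an existing issue before filing or escalating. A minimal sketch of that first step via the GitHub search API (the `priority/failing-test` label comes from the document above; the test name and jq post-processing are illustrative):

```sh
# Look for an open issue already tracking the failing test.
TEST_NAME="Kubectl client Simple pod should support exec"
curl -s -G "https://api.github.com/search/issues" \
  --data-urlencode "q=repo:kubernetes/kubernetes is:issue is:open label:priority/failing-test \"${TEST_NAME}\"" \
  | jq -r '.items[] | "#\(.number)\t\(.title)"'
```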