author    Christoph Blecker <admin@toph.ca>  2017-12-21 17:53:39 -0800
committer Christoph Blecker <admin@toph.ca>  2017-12-21 18:34:08 -0800
commit    95a4a105cd5e626edca2f8f00eb3dff32f1f1c5c
tree      04eb87ec9b60f1d8f282d528fdb308be787af018
parent    ce3044d912391d987a9ef8315c701f3e5671fe45
Use git.k8s.io for links
Diffstat (limited to 'sig-storage'):
 sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf | Bin 419572 -> 749063 bytes
 sig-storage/1.3-retrospective/README.md                            |  10 +++++-----
 sig-storage/contributing.md                                        |   4 ++--
 3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf b/sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf
index 95522475..7c972d69 100644
--- a/sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf
+++ b/sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf
Binary files differ
diff --git a/sig-storage/1.3-retrospective/README.md b/sig-storage/1.3-retrospective/README.md
index 52e28490..08668cc6 100644
--- a/sig-storage/1.3-retrospective/README.md
+++ b/sig-storage/1.3-retrospective/README.md
@@ -6,10 +6,10 @@
**Collaborators:** Saad Ali ([@saad-ali](https://github.com/saad-ali)), Paul Morie ([@pmorie](https://github.com/pmorie)), Tim Hockin ([@thockin](https://github.com/thockin)), Steve Watt ([@wattsteve](https://github.com/wattsteve))
**Links:**
-* [1.3 Schedule Dates](https://github.com/kubernetes/features/blob/master/release-1.3/release-1.3.md)
+* [1.3 Schedule Dates](https://git.k8s.io/features/release-1.3/release-1.3.md)
## Purpose
-This document chronicles the decisions the [Storage SIG](https://github.com/kubernetes/community/blob/master/sig-storage/README.md) made about the storage stack near the end of the Kubernetes 1.3 release that were not well understood by the wider community. It explains those decisions, why the SIG made the exception, details the impact, and offers lessons learned for the future.
+This document chronicles the decisions the [Storage SIG](/sig-storage/README.md) made about the storage stack near the end of the Kubernetes 1.3 release that were not well understood by the wider community. It explains those decisions, why the SIG made the exception, details the impact, and offers lessons learned for the future.
## What Problem Were We Trying to Solve?
Kubernetes 1.2 had numerous problems with the storage framework that arose from organic growth of the architecture as it took on new features it was not initially designed for. There were race conditions, maintenance and stability issues, and architectural problems with all major components of the storage stack, including the Persistent Volume (PV) & Persistent Volume Claim (PVC) controller and the attach/detach and mount/unmount logic.
@@ -41,7 +41,7 @@ Below are the Github Issues that were filed for this area:
## How Did We Solve the Problem?
Addressing these issues was the main deliverable for storage in 1.3. This required an in-depth rewrite of several components.
-Early in the 1.3 development cycle (March 28 to April 1, 2016) several community members in the Storage SIG met at a week-long face-to-face summit at Google's office in Mountain View to address these issues. A plan was established to approach the attach/detach/mount/unmount issues as a deliberate effort with contributors already handling the design. Since that work was already in flight and a plan established, the majority of the summit was devoted to resolving the PV/PVC controller issues. Meeting notes were captured [in this document](https://github.com/kubernetes/community/blob/master/sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf).
+Early in the 1.3 development cycle (March 28 to April 1, 2016) several community members in the Storage SIG met at a week-long face-to-face summit at Google's office in Mountain View to address these issues. A plan was established to approach the attach/detach/mount/unmount issues as a deliberate effort with contributors already handling the design. Since that work was already in flight and a plan established, the majority of the summit was devoted to resolving the PV/PVC controller issues. Meeting notes were captured [in this document](/sig-storage/1.3-retrospective/2016-03-28_Storage-SIG-F2F_Notes.pdf).
Three projects were planned to fix the issues outlined above:
* PV/PVC Controller Redesign (a.k.a. Provisioner/Binder/Recycler controller)
@@ -63,7 +63,7 @@ The Kubelet Volume Redesign involved changing fundamental assumptions of data fl
## Impact:
1. **Release delay**
- * So much churn this late in the release, with little time to stabilize, delayed it by one week: the Kubernetes 1.3 release [was targeted](https://github.com/kubernetes/features/blob/master/release-1.3/release-1.3.md) for June 20 to June 24, 2016. It ended up [going out on July 1, 2016](https://github.com/kubernetes/kubernetes/releases/tag/v1.3.0), mostly due to the time needed to resolve a data corruption issue on ungracefully terminated pods caused by detaching of mounted volumes ([#27691](https://github.com/kubernetes/kubernetes/issues/27691)). Many of the bugs introduced in the release were fixed in 1.3.4, which [was cut on August 1, 2016](https://github.com/kubernetes/kubernetes/releases/tag/v1.3.4).
+ * So much churn this late in the release, with little time to stabilize, delayed it by one week: the Kubernetes 1.3 release [was targeted](https://git.k8s.io/features/release-1.3/release-1.3.md) for June 20 to June 24, 2016. It ended up [going out on July 1, 2016](https://github.com/kubernetes/kubernetes/releases/tag/v1.3.0), mostly due to the time needed to resolve a data corruption issue on ungracefully terminated pods caused by detaching of mounted volumes ([#27691](https://github.com/kubernetes/kubernetes/issues/27691)). Many of the bugs introduced in the release were fixed in 1.3.4, which [was cut on August 1, 2016](https://github.com/kubernetes/kubernetes/releases/tag/v1.3.4).
2. **Instability in 1.3's Storage stack**
* The Kubelet volume redesign shipped in 1.3.0 with several bugs, mostly due to unexpected interactions between the new functionality and other Kubernetes components. For example, secrets were handled serially rather than in parallel, namespace dependencies were not well understood, etc. Most of these issues were quickly identified and addressed, but the fixes waited for 1.3 patch releases.
* Issues related to this include:
@@ -91,6 +91,6 @@ The value of the feature freeze date is to ensure the release has time to stabil
* Status: [Planned for 1.5](https://docs.google.com/document/d/1-u1UA8mBiPZiyYUi7U7Up_e-afVegKmuhmc7fpVQ9hc/edit?ts=57bcd3d4&pli=1)
* Discussed at [Storage-SIG F2F meeting held August 10, 2016](https://docs.google.com/document/d/1qVL7UE7TtZ_D3P4F7BeRK4mDOvYskUjlULXmRJ4z-oE/edit). See [notes](https://docs.google.com/document/d/1vA5ul3Wy4GD98x3GZfRYEElfV4OE8dBblSK4rnmrE_M/edit#heading=h.amd7ks7tpscg).
2. Establish a formal exception process for merging large changes after feature-complete dates.
- * Status: [Drafted as of 1.4](https://github.com/kubernetes/features/blob/master/EXCEPTIONS.md)
+ * Status: [Drafted as of 1.4](https://git.k8s.io/features/EXCEPTIONS.md)
Kubernetes is an incredibly fast-moving project, with hundreds of active contributors creating a solution that thousands of organizations rely on. Stability, trust, and openness are paramount in both the product and the community around Kubernetes. We undertook this retrospective effort to learn from the 1.3 release's shipping delay. These action items and other work in the upcoming releases are part of our commitment to continually improve our project, our community, and our ability to deliver production-grade infrastructure platform software.
diff --git a/sig-storage/contributing.md b/sig-storage/contributing.md
index a6b5ea09..cbea6325 100644
--- a/sig-storage/contributing.md
+++ b/sig-storage/contributing.md
@@ -36,9 +36,9 @@ A great way to get involved is to pick an issue and help address it. We would lo
### Adding support for a new storage platform in Kubernetes
If you are looking to add support for a new storage platform in Kubernetes, you have several options:
- Write an in-tree volume plugin or provisioner: You can contribute a new in-tree volume plugin or provisioner that is built and shipped with Kubernetes, for use within the Persistent Volume Framework.
-[See the Ceph RBD volume plugin example](https://github.com/kubernetes/kubernetes/tree/master/pkg/volume/rbd) or [the AWS Provisioner example](https://github.com/kubernetes/kubernetes/pull/29006)
+[See the Ceph RBD volume plugin example](https://git.k8s.io/kubernetes/pkg/volume/rbd) or [the AWS Provisioner example](https://github.com/kubernetes/kubernetes/pull/29006)
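The shape of that in-tree contract can be pictured with a trimmed-down sketch. The names below (`volumePlugin`, `volumeSpec`, `rbdLikePlugin`) are hypothetical stand-ins, not the real `pkg/volume` API, which has many more methods; the point is the pattern: the kubelet probes each registered plugin for support of a volume spec and uses the first match to set up and tear down the volume.

```go
package main

// Illustrative sketch only: a simplified stand-in for the in-tree volume
// plugin contract. The real interface lives in k8s.io/kubernetes/pkg/volume.
import (
	"errors"
	"fmt"
)

// volumeSpec is a hypothetical, simplified volume description.
type volumeSpec struct {
	Driver string // e.g. "rbd"
	Source string // backend-specific source, e.g. "pool/image1"
}

// volumePlugin mirrors the shape of the plugin contract: the kubelet asks
// each registered plugin whether it can handle a spec, then uses the
// matching plugin to mount and unmount the volume.
type volumePlugin interface {
	Name() string
	CanSupport(spec volumeSpec) bool
	SetUp(spec volumeSpec, targetPath string) error
	TearDown(targetPath string) error
}

// rbdLikePlugin is a toy plugin keyed on an "rbd" driver name.
type rbdLikePlugin struct{}

func (p rbdLikePlugin) Name() string                 { return "example.io/rbd" }
func (p rbdLikePlugin) CanSupport(s volumeSpec) bool { return s.Driver == "rbd" }
func (p rbdLikePlugin) SetUp(s volumeSpec, path string) error {
	// A real plugin would map the device and mount it at path.
	fmt.Printf("mapping %s and mounting at %s\n", s.Source, path)
	return nil
}
func (p rbdLikePlugin) TearDown(path string) error { return nil }

// findPluginFor models the kubelet's plugin lookup over registered plugins.
func findPluginFor(spec volumeSpec, plugins []volumePlugin) (volumePlugin, error) {
	for _, p := range plugins {
		if p.CanSupport(spec) {
			return p, nil
		}
	}
	return nil, errors.New("no plugin supports spec")
}

func main() {
	plugins := []volumePlugin{rbdLikePlugin{}}
	p, err := findPluginFor(volumeSpec{Driver: "rbd", Source: "pool/image1"}, plugins)
	if err != nil {
		panic(err)
	}
	fmt.Println("selected plugin:", p.Name())
}
```

A real plugin additionally plugs into attach/detach and provisioning paths; this sketch only shows the support-probe and setup/teardown skeleton.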
- Write a FlexVolume plugin: This is an out-of-tree volume plugin that you develop and build outside of the Kubernetes tree.
-You then install the plugin on every Kubernetes host in your cluster and [configure the plugin in Kubernetes as a FlexVolume](https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/flexvolume)
+You then install the plugin on every Kubernetes host in your cluster and [configure the plugin in Kubernetes as a FlexVolume](https://git.k8s.io/kubernetes/examples/volumes/flexvolume)
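A FlexVolume driver is just an executable the kubelet invokes with an operation name (`init`, `mount`, `unmount`, ...) and expects a JSON status reply from on stdout. The following is an illustrative skeleton, not a working driver: the exact argument layout per operation is an assumption here, and each branch only returns the status a real driver would emit after doing the actual work.

```go
package main

// Sketch of a FlexVolume driver as a standalone binary. The kubelet runs
// the driver with an operation as the first argument and reads a JSON
// status object from stdout. Branches below are stubs; a real driver
// would perform the mount/unmount work before replying.
import (
	"fmt"
	"os"
)

// status renders the JSON reply the kubelet expects.
func status(state, message string) string {
	if message == "" {
		return fmt.Sprintf(`{"status": %q}`, state)
	}
	return fmt.Sprintf(`{"status": %q, "message": %q}`, state, message)
}

// handle dispatches one driver operation and returns the JSON reply.
func handle(op string) string {
	switch op {
	case "init":
		return status("Success", "")
	case "mount":
		// Assumed layout: mount <target-path> <json-options>.
		// A real driver would mount the backing storage at the target path.
		return status("Success", "")
	case "unmount":
		// Assumed layout: unmount <target-path>.
		// A real driver would tear the mount down here.
		return status("Success", "")
	default:
		// Unimplemented operations report "Not supported" so the kubelet
		// can fall back to its default behavior.
		return status("Not supported", "")
	}
}

func main() {
	op := "init" // default for demonstration when run with no arguments
	if len(os.Args) > 1 {
		op = os.Args[1]
	}
	fmt.Println(handle(op))
}
```

The binary would be installed under the kubelet's volume-plugin directory on each host so the kubelet can discover and execute it.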
- Write a Provisioner Controller: You can write a separate controller that watches for pending claims carrying a specific selector label.
Once an appropriate claim is discovered, the controller provisions the storage intended for it and creates a corresponding
persistent volume that includes the same label used in the original claim selector. This will ensure that the PV for the new